STUDYING THE ROBUSTNESS OF DEEP NEURAL NETWORKS TO ADVERSARIAL ATTACKS IN BIOMEDICAL IMAGE ANALYSIS

Authors

  • Boltibayev Shuxratjon Komiljanovich, Associate Professor, Namangan State University. Email: sh.boltibayev@gmail.com
  • Axmedov Doniyor Akmaljon o‘g‘li, Master's student, Kimyo International University in Tashkent, Namangan branch. Email: axmedovdoniyor23@gmail.com

Keywords:

Deep learning, adversarial attacks, biomedical images, neural network robustness, PGD, DeepFool, CW algorithm, cyber-resilience.

Abstract

This article examines the vulnerabilities of deep convolutional neural networks (CNNs), which form the foundation of modern medical diagnostic systems, to adversarial attacks. In the course of the study, models with different architectures (including InceptionV3, ResNet50, and DenseNet121) were tested on biomedical datasets. The article presents a comparative analysis of the effectiveness of the main attack methods, such as PGD, DeepFool, and CW, and demonstrates that even high-accuracy models can be driven to erroneous conclusions by very small perturbations imperceptible to the human eye. The conclusion offers practical recommendations for improving the cybersecurity resilience of medical artificial intelligence systems, such as adversarial training and the introduction of input filters.
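The core of the PGD attack discussed in the abstract can be illustrated with a minimal sketch. The toy logistic model, its weights, and all numeric values below are illustrative assumptions, not the CNNs or datasets from the study; the sketch shows only the PGD update itself: an ascent step in the direction of the sign of the loss gradient, followed by projection back into an L-infinity ball of radius ε around the clean input.

```python
import math

# Hypothetical stand-in for a classifier: p(y=1|x) = sigmoid(w·x + b).
w = [2.0, -1.5, 0.5]
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def grad_loss_wrt_x(x, y):
    # Cross-entropy loss L = -[y log p + (1 - y) log(1 - p)];
    # for the logistic model, dL/dx_i = (p - y) * w_i.
    p = predict(x)
    return [(p - y) * wi for wi in w]

def pgd_attack(x0, y, eps=0.3, alpha=0.1, steps=10):
    """Untargeted PGD in the L-infinity ball of radius eps around x0."""
    x = list(x0)
    for _ in range(steps):
        g = grad_loss_wrt_x(x, y)
        # Ascend the loss: step by alpha in the sign of the gradient ...
        x = [xi + alpha * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, g)]
        # ... then project each coordinate back into [x0_i - eps, x0_i + eps].
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

x_clean = [0.5, 0.2, -0.1]
y_true = 1
x_adv = pgd_attack(x_clean, y_true)
# The clean input is classified as class 1 (p > 0.5); after PGD the
# prediction drops below 0.5 even though every coordinate moved by at most eps.
```

On a real CNN the gradient would come from backpropagation rather than a closed-form expression, but the sign-step-and-project loop is the same.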

References

Recht B., Roelofs R., Schmidt L., Shankar V. Do CIFAR-10 classifiers generalize to CIFAR-10? // arXiv preprint. – 2018. – arXiv:1806.00451.

Akhtar N., Mian A.S. Threat of adversarial attacks on deep learning in computer vision // IEEE Access. – 2018. – Vol. 6. – P. 14410–14430.

Litjens G., Kooi T., Bejnordi B., Setio A., Ciompi F., Ghafoorian M. A survey on deep learning in medical image analysis // Medical Image Analysis. – 2017. – Vol. 42. – P. 60–88.

Ker J., Wang L., Rao J., Lim T. Deep learning applications in medical image analysis // IEEE Access. – 2018. – Vol. 6. – P. 9375–9389.

Wu Z., Lim S.-N., Davis L., Goldstein T. Making an invisibility cloak: real world adversarial attacks on object detectors // arXiv preprint. – 2019. – arXiv:1910.14667.

Szegedy C., Zaremba W., Sutskever I., Bruna J., Erhan D., Goodfellow I., Fergus R. Intriguing properties of neural networks // International Conference on Learning Representations (ICLR). – 2014. – P. 1–10.

Goodfellow I., Shlens J., Szegedy C. Explaining and harnessing adversarial examples // arXiv preprint. – 2015. – arXiv:1412.6572v3.

Madry A., Makelov A., Schmidt L., Tsipras D., Vladu A. Towards deep learning models resistant to adversarial attacks // arXiv preprint. – 2017. – arXiv:1706.06083v3.

Ozdag M. Adversarial attacks and defenses against deep neural networks: a survey // Procedia Computer Science. – 2018. – Vol. 140. – P. 152–161.

Xu W., Evans D., Qi Y. Feature squeezing: detecting adversarial examples in deep neural networks // arXiv preprint. – 2017. – arXiv:1704.01155v2.

Wang H., Yu C.-N. A direct approach to robust deep learning using adversarial networks // arXiv preprint. – 2019. – arXiv:1905.09591v1.

Carlini N., Wagner D. Towards evaluating the robustness of neural networks // IEEE Symposium on Security and Privacy (SP). – 2017. – P. 39–57.

Moosavi-Dezfooli S.-M., Fawzi A., Frossard P. DeepFool: a simple and accurate method to fool deep neural networks // arXiv preprint. – 2015. – arXiv:1511.04599v3.

Published

2026-04-01