Leveraging Artificial Intelligence to Automate and Enhance Security Operations: Balancing Efficiency and Human Oversight
As new cyberthreats emerge, traditional security measures are increasingly unable to thwart sophisticated cyberattacks. Artificial intelligence (AI), one of the most significant and influential technologies of recent years, is a game-changer for security operations because it can automate critical processes, recognize threats in real time, and respond with comparable speed and efficiency. This abstract analyzes the use of AI in security operations through the automation of threat, vulnerability, and event handling. It examines specific AI use cases, such as automated patch management, user and entity behavior analytics (UEBA), and machine learning-based anomaly detection, and explains how these technologies substantially increase the speed and accuracy with which cyberthreats are thwarted.
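As an illustration of the last of these use cases, the sketch below shows how machine learning-based anomaly detection might flag unusual account activity. It is a minimal example only: it assumes Python with scikit-learn's IsolationForest, and the session features, baseline data, and contamination setting are hypothetical values chosen for illustration, not a method prescribed by the paper.

# Minimal sketch of ML-based anomalous activity detection on session telemetry.
# Assumes scikit-learn; feature names and tuning values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins_per_hour, bytes_uploaded_mb, distinct_hosts_touched]
baseline_sessions = np.array([
    [3, 12.0, 2],
    [4, 9.5, 3],
    [2, 15.1, 2],
    [5, 11.2, 4],
    [3, 10.8, 2],
])

# Fit an unsupervised model on normal behavior; contamination is an assumed tuning value.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_sessions)

# Score new activity: predict() returns -1 for anomalous sessions, 1 for normal ones.
new_sessions = np.array([
    [4, 10.0, 3],      # routine activity
    [40, 900.0, 60],   # bulk-exfiltration-like pattern
])
for features, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"session {features.tolist()} -> {status}")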
As a tool, AI offers substantial advantages: it minimizes human effort, accelerates response, and scales easily, but it is not without risk. Responsibilities such as supplying relevant context, making ethical decisions, and interpreting what AI systems produce will always require human operators. This abstract therefore also explores how AI's processing capabilities can be integrated with experienced security personnel so that operations remain both efficient and ethically sound.
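To make this division of labor concrete, the following sketch outlines one possible human-in-the-loop triage policy, in which the AI disposes of clear-cut alerts at machine speed and escalates ambiguous ones to an analyst. The alert fields, thresholds, and routing rules are illustrative assumptions, not a design drawn from the paper.

# Minimal sketch of a human-in-the-loop alert triage policy.
# Thresholds, alert fields, and routing rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    ai_risk_score: float  # 0.0 (benign) to 1.0 (malicious), produced by an upstream model

def triage(alert: Alert, auto_block: float = 0.95, auto_close: float = 0.10) -> str:
    """Route an alert by model confidence, keeping humans in the loop
    for ambiguous or context-dependent decisions."""
    if alert.ai_risk_score >= auto_block:
        return "auto-contain and notify analyst"  # machine speed for clear-cut threats
    if alert.ai_risk_score <= auto_close:
        return "auto-close with audit log"        # reduce analyst fatigue on noise
    return "escalate to human analyst"            # context and ethics stay with people

if __name__ == "__main__":
    for a in (Alert("UEBA", 0.98), Alert("patch-scanner", 0.05), Alert("EDR", 0.55)):
        print(a.source, "->", triage(a))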
Finally, the abstract concludes that the key to the future advancement of cybersecurity lies in finding the conditions under which artificial intelligence and human judgment collaborate best, strengthening organizational security while reducing the risks associated with implementing such plans.