AI is changing how companies operate. While much of that change is positive, it also introduces unique cybersecurity concerns. Next-generation AI applications like agentic AI pose particularly notable risks to organizations' security posture.
What is agentic AI?
Agentic AI refers to AI models that can act autonomously, often automating entire roles with little to no human input. Advanced chatbots are among the most prominent examples, but AI agents can also appear in applications like business intelligence, medical diagnostics and insurance adjustments.
In all use cases, this technology combines generative models, natural language processing (NLP) and other machine learning (ML) functions to complete multi-step tasks independently. It's easy to see the value in such a solution. Understandably, Gartner predicts that a third of all generative AI interactions will use these agents by 2028.
The unique security risks of agentic AI
Agentic AI adoption will rise as businesses seek to complete a broader range of tasks without a larger workforce. As promising as that is, though, giving an AI model so much power carries serious cybersecurity implications.
AI agents typically require access to vast amounts of data. Consequently, they are prime targets for cybercriminals, as attackers could focus efforts on a single application to expose a considerable amount of information. It would have a similar effect to whaling, which led to $12.5 billion in losses in 2021 alone, but it may be easier to pull off, as AI models could be more susceptible than experienced professionals.
AI agents' autonomy is another concern. While all ML algorithms introduce some risk, conventional use cases require human authorization before doing anything with their data. Agents, by contrast, can act without clearance. As a result, any accidental privacy exposures or mistakes like AI hallucinations can slip through without anyone noticing.
This lack of supervision makes existing AI threats like data poisoning all the more dangerous. Attackers can corrupt a model by altering just 0.01% of its training dataset, and doing so is possible with minimal investment. That is damaging in any context, but a poisoned agent's faulty conclusions would reach much further than a setup where humans review the outputs first.
How to improve AI agent cybersecurity
In light of these threats, cybersecurity strategies must adapt before businesses implement agentic AI applications. Here are four critical steps toward that goal.
1. Maximize visibility
The first step is to ensure security and operations teams have full visibility into an AI agent's workflow. Every task the model completes, every device or app it connects to and all the data it can access should be evident. Revealing these factors will make it easier to spot potential vulnerabilities.
Automated network mapping tools may be necessary here. Only 23% of IT leaders say they have full visibility into their cloud environments, and 61% use multiple detection tools, leading to duplicate records. Admins must address these issues first to gain the necessary insight into what their AI agents can access.
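As a sketch of this step, an access inventory can be as simple as a single record of every tool and data store an agent has been granted, kept where security teams can review and diff it between audits. The class and names below are illustrative assumptions, not the API of any real mapping tool.

```python
from dataclasses import dataclass, field

# Hypothetical inventory of an AI agent's reach: every tool and data
# store it can touch, recorded in one reviewable place.
@dataclass
class AgentAccessMap:
    agent_name: str
    tools: set[str] = field(default_factory=set)
    data_stores: set[str] = field(default_factory=set)

    def grant_tool(self, tool: str) -> None:
        self.tools.add(tool)

    def grant_data(self, store: str) -> None:
        self.data_stores.add(store)

    def report(self) -> dict:
        # A flat, sorted report that teams can diff between audits.
        return {
            "agent": self.agent_name,
            "tools": sorted(self.tools),
            "data_stores": sorted(self.data_stores),
        }

support_bot = AgentAccessMap("support-bot")
support_bot.grant_tool("send_email")
support_bot.grant_data("crm.customers")
print(support_bot.report())
```

In practice, such an inventory would be populated automatically from the discovery tooling the article mentions; the point is that every grant ends up in one place that can be audited.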
2. Employ the principle of least privilege
Once it's clear what the agent can interact with, businesses must restrict those privileges. The principle of least privilege, which holds that any entity can only see and use what it absolutely needs, is essential.
Any database or application an AI agent can interact with is a potential risk. Consequently, organizations can minimize relevant attack surfaces and prevent lateral movement by restricting these permissions as far as possible. Anything that does not directly contribute to an AI's value-driving purpose should be off-limits.
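A minimal way to encode least privilege for an agent is a deny-by-default allowlist checked before every action it attempts. The agent and action names below are hypothetical placeholders.

```python
# Deny-by-default allowlist: an agent may only perform actions
# explicitly granted to it; everything else is rejected.
ALLOWED_ACTIONS: dict[str, set[str]] = {
    "support-bot": {"read_faq", "create_ticket"},  # nothing else
}

def authorize(agent: str, action: str) -> None:
    """Raise PermissionError unless the action is on the agent's allowlist."""
    if action not in ALLOWED_ACTIONS.get(agent, set()):
        raise PermissionError(f"{agent} may not perform {action}")

authorize("support-bot", "create_ticket")  # permitted, returns silently
try:
    authorize("support-bot", "delete_account")  # outside its role
except PermissionError as err:
    print(err)
```

The design choice that matters is the default: an action missing from the list is denied, so new capabilities must be granted deliberately rather than discovered by the agent.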
3. Limit sensitive information
Similarly, network admins can prevent privacy breaches by removing sensitive details from the datasets their agentic AI can access. Many AI agents' work naturally involves private data. More than 50% of generative AI spending will go toward chatbots, which may gather information on customers. However, not all of these details are necessary.
While an agent should learn from past customer interactions, it doesn't need to store names, addresses or payment details. Programming the system to scrub unneeded personally identifiable information from AI-accessible data will minimize the damage in the event of a breach.
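One hedged sketch of such scrubbing uses plain regex substitution before data ever reaches the agent. The patterns below are illustrative and far from exhaustive; production systems typically rely on dedicated, locale-aware PII-detection tooling.

```python
import re

# Illustrative PII patterns only; real deployments need broader rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace each matched PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub_pii("Contact jane@example.com or 555-123-4567"))
# → Contact [EMAIL] or [PHONE]
```

Scrubbing at ingestion, before storage, means a breach of the agent's data exposes placeholders rather than identities.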
4. Monitor for suspicious behavior
Businesses must take care when programming agentic AI, too. Apply it to a single, small use case first, and use a diverse team to review the model for signs of bias or hallucinations during training. When it comes time to deploy the agent, roll it out slowly and monitor it for suspicious behavior.
Real-time responsiveness is crucial in this monitoring, as agentic AI's risks mean any breaches could have dramatic consequences. Thankfully, automated detection and response solutions are highly effective, saving an average of $2.22 million in data breach costs. Organizations can slowly expand their AI agents after a successful trial, but they must continue to monitor all applications.
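As one illustration of real-time monitoring, the sliding-window check below flags an agent whose action rate spikes past a threshold, a crude but useful signal of runaway or compromised behavior. The class name and threshold are assumptions for demonstration, not part of any real detection product.

```python
from collections import deque

# Sliding-window rate monitor: flag an agent performing more actions
# within the window than its configured ceiling allows.
class AgentMonitor:
    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps: deque[float] = deque()

    def record(self, now: float) -> bool:
        """Record an action at time `now`; return True if rate is suspicious."""
        self.timestamps.append(now)
        # Drop actions that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_actions

monitor = AgentMonitor(max_actions=5, window_seconds=60)
flags = [monitor.record(now=t) for t in range(10)]  # 10 actions in 10 "seconds"
print(flags)  # → first five pass, the rest are flagged
```

A real deployment would watch many more signals (unusual data scopes, odd hours, new destinations), but the pattern is the same: evaluate each action as it happens rather than in a later batch review.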
As AI advances, cybersecurity strategies must, too
AI's rapid advancement holds significant promise for modern businesses, but its cybersecurity risks are rising just as quickly. Enterprises' cyber defenses must grow and evolve alongside generative AI use cases. Falling behind on these changes could cause harm that outweighs the technology's benefits.
Agentic AI will take ML to new heights, but so will related vulnerabilities. While that doesn't render this technology too unsafe to invest in, it does warrant extra caution. Businesses must follow these essential security steps as they roll out new AI applications.
Zac Amos is the Features Editor at ReHack.