New Security Norms in the Era of AI Agents: Risks Hidden in the "Depths" Beyond Input and Output
As the adoption of AI agents advances, new security challenges are emerging that differ from those of conventional generative AI. Conventional generative AI mainly generates responses to user input, so its principal risks could be assessed by inspecting the input it receives and the output it produces. AI agents, by contrast, make decisions on their own, invoke tools, manipulate files, and communicate with external systems, which makes their behavior far more complex. As a result, unexpected risks are increasingly likely to lurk in internal behavior that is not easily visible from the outside.
In this seminar, we will map out the risks unique to AI agents and explain them in plain terms, drawing on realistic attack scenarios. We will also introduce diagnostic and monitoring approaches that focus on the internal workings of agents to address these risks. The content will be useful not only for those considering the adoption and use of AI agents, but also for anyone who wants to understand the direction of future security measures.
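To illustrate the point above (this sketch is not part of the seminar material), the minimal Python example below shows one way an agent's internal actions can be made visible: every tool invocation is routed through an auditing wrapper that logs the tool name and its arguments, so actions that never appear in the user-facing input or output still leave a trace. The `TOOLS` registry, `audited_tool_call` helper, and `word_count` tool are hypothetical names chosen for this example; a real agent's tools would touch files, networks, and other systems.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")


# Hypothetical tool registry. In a real agent, tools would read files,
# call HTTP APIs, query databases, and so on.
def word_count(text: str) -> int:
    """Toy tool: count the words in a string."""
    return len(text.split())


TOOLS = {"word_count": word_count}


def audited_tool_call(name: str, **kwargs):
    """Run a registered tool while recording the agent's internal action.

    The final answer a user sees may reveal nothing about the tool calls
    made along the way, so monitoring has to hook into the loop itself.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": name,
        "arguments": kwargs,
    }
    audit_log.info("tool_call %s", json.dumps(record))

    result = TOOLS[name](**kwargs)

    audit_log.info("tool_result %s -> %r", name, result)
    return result


if __name__ == "__main__":
    # An agent would decide to make this call on its own; the audit log
    # captures it even though only the result reaches the user.
    audited_tool_call("word_count", text="the quick brown fox")
```

In practice, such an audit trail is what diagnostic and monitoring tooling builds on: it turns otherwise invisible internal behavior into records that can be reviewed and alerted on.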
<Target Audience>
Business leaders and staff who are considering strengthening or complementing their security through the use of AI.

| Date and time | Tuesday, February 17, 2026, 4:05 PM – 4:50 PM |
|---|---|
| Entry fee | Free |