AI agents: from chat to action

The conversation around artificial intelligence has shifted. Until recently, many organisations treated “using AI” as shorthand for generating text or automating a single task. Now the spotlight is moving to systems that plan, decide, and act with greater autonomy: AI agents.

In Europe’s innovation ecosystem—start-ups, corporates, universities, and EU-funded consortia—this leap is especially significant. On the one hand, it boosts productivity and unlocks new business models. On the other, it raises the bar for security, governance, and compliance in an increasingly demanding regulatory environment. The EU has already adopted a risk-based approach to AI and sets specific obligations for certain uses.


What are AI agents?

An AI agent isn’t just a model that answers questions. An agent is a system designed to achieve a goal and, to do so, it can break work into steps, use tools, retrieve information, make intermediate decisions, and verify outcomes.

Put simply: if a chatbot “responds”, an agent “gets things done”.

In practice, many current agents rely on large language models (LLMs) and act like a “brain” that directs actions: calling APIs, running searches, editing documents, querying databases, or interacting with interfaces. That capability—where the system steers its own process and tool use—is central to recent technical explanations of “agentic” systems.
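That loop of “decide, act, observe, repeat” can be sketched in a few lines. The policy below is a hard-coded stub standing in for the LLM, and the tool names (`search`, `write_brief`) are illustrative placeholders, not a real API:

```python
# Minimal agent-loop sketch: a "brain" (a stubbed policy standing in for an
# LLM) repeatedly picks a tool, observes the result, and stops at the goal.

def search(query: str) -> str:
    """Stub tool: pretend to run a search."""
    return f"results for '{query}'"

def write_brief(notes: str) -> str:
    """Stub tool: pretend to draft a briefing from notes."""
    return f"BRIEFING based on: {notes}"

TOOLS = {"search": search, "write_brief": write_brief}

def policy(goal: str, history: list):
    """Stand-in for the LLM: choose the next action from goal + history."""
    if not history:                      # step 1: gather information
        return ("search", goal)
    if len(history) == 1:                # step 2: act on what was found
        return ("write_brief", history[-1])
    return ("done", history[-1])         # step 3: goal reached, stop

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):           # bounded loop: agents need stop conditions
        action, arg = policy(goal, history)
        if action == "done":
            return arg
        history.append(TOOLS[action](arg))
    return history[-1]                   # fall back to the last observation

print(run_agent("EU funding landscape"))
```

Real agents replace `policy` with a model call and add error handling, but the shape — a bounded loop that alternates between deciding and acting — is the same.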

How to recognise an agent (without drowning in jargon)

A system behaves like an agent when it meets several of these criteria:

  • A clear goal: it works towards a defined outcome (e.g., “prepare a briefing”, “optimise a logistics plan”).
  • Planning: it chooses steps and adapts strategy when conditions change.
  • Tool use: it goes beyond text and takes actions (APIs, CRM, spreadsheets, browsers, repositories, etc.).
  • Memory and context: it retains useful information (with controls) so it doesn’t start from scratch every time.
  • Verification: it checks, validates, or asks for confirmation when uncertainty appears.

Single agents and multi-agent systems

Sometimes one agent is enough. However, more advanced cases coordinate multiple specialised agents: one analyses, another drafts, another verifies, another integrates with tools. This aligns with what several industry guides describe as “orchestration” and multi-agent systems.
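A toy orchestrator makes the idea concrete: one agent drafts, another verifies, and the orchestrator loops until the verifier approves. The roles and the check below are hypothetical placeholders, not any specific framework’s API:

```python
# Illustrative two-agent orchestration: drafter proposes, verifier checks,
# the orchestrator retries with feedback until the output is approved.

def drafter(task: str, feedback: str = "") -> str:
    draft = f"Draft for: {task}"
    if feedback:
        draft += f" (revised after: {feedback})"
    return draft

def verifier(draft: str):
    # Toy acceptance rule: require at least one revision pass.
    if "revised" in draft:
        return True, ""
    return False, "missing revision pass"

def orchestrate(task: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):          # bounded retries, as in the agent loop
        draft = drafter(task, feedback)
        ok, feedback = verifier(draft)
        if ok:
            return draft
    raise RuntimeError("verifier never approved the draft")

print(orchestrate("project briefing"))
```

In production systems the drafter and verifier would each be model-backed agents with their own prompts and tools; the orchestrator’s job is the same: route work, carry feedback, and enforce a stopping condition.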


Why are AI agents so important?

Their importance isn’t only about doing existing tasks faster. It comes from a combination of three shifts.

1) AI agents: from assistance to execution

An agent can move from “recommending” to “doing”. Instead of giving you a checklist, it carries out the steps (if you grant permissions and provide a controlled environment). That’s why many platforms emphasise “workflows” and agent deployment as systems that complete tasks end to end.

2) They scale knowledge work productivity

European organisations run on knowledge-heavy processes: proposals, compliance, reporting, project management, tech scouting, market analysis, due diligence… AI agents reduce friction in repetitive work and free up time for strategy.

And when an agent connects to internal sources—documentation, repositories, ERP, CRM—the impact compounds: it doesn’t “guess from the outside”, it operates on your actual operating context.

3) They change how products and services are designed

With agents, you don’t just optimise processes—you can build agentic products: a service that schedules meetings, a system that manages incidents, a copilot that executes campaigns, or a technical assistant that diagnoses and acts.

That said, this power also demands responsibility. Guidance around “computer use” and action tools highlights risks such as prompt injection, excessive permissions, or unintended actions if instructions aren’t clear.

4) They fit Europe’s agenda (innovation + regulation)

Europe wants adoption and competitiveness, but it also requires safeguards. The AI Act is built on a risk classification model and sets obligations for certain “high-risk” systems, alongside prohibitions on unacceptable practices.

As a result, AI agents are not only a technical opportunity—they are also a governance challenge that sits squarely within funded innovation projects, where audits, traceability, and ethics often form part of the deliverables.


Applications of AI agents

The practical question is inevitable: “Where do I put them to work tomorrow?” Here are real-world applications, grouped by function.

Operations and back office

  • Intelligent document management: classify, extract data, spot inconsistencies, and prepare final versions.
  • Finance and procurement: reconciliation, invoice analysis, validations, milestone tracking, and alerts.
  • Internal support: an agent that answers questions about policies, processes, and tools—and also opens tickets or updates statuses.

Sales, marketing, and customer success

  • Account research: the agent gathers signals, creates a briefing, and proposes personalised messaging.
  • Campaign automation: generate variants, segment, schedule, and measure results (with human approval).
  • Actionable customer support: it doesn’t just reply; it can also execute changes in systems (e.g., reroutes, updates, returns) within defined limits.

R&D, engineering, and product

  • Development agents: from task planning to taking actions in repositories and engineering tools (always with controls).
  • QA and testing: generate test cases, guide execution, analyse logs, and report results.
  • Roadmap management: synthesise feedback, support prioritisation, and keep documentation consistent.

Innovation and EU funding

This is where the fit is especially strong:

  • Call scanning and fit assessment: an agent can read requirements, identify criteria, and outline a submission strategy.
  • Consortium building: identify profiles, capabilities, and complementarities.
  • Writing and coherence: align objectives, impact, risks, exploitation, ethics, and work plan.
  • Reporting and justification: draft reports, trace evidence, and control deliverables.

In Kaila, these workflows matter because the value isn’t just “using AI”—it’s connecting it to structured information and actionable decisions across the innovation lifecycle.


Examples of AI agents in EU-funded projects

HIVEMIND | Human-centred collaboratIVE MultI-ageNt framework for accelerating software Development and maintenance

HIVEMIND is developing an LLM-based multi-agent framework designed to support software development and maintenance. The idea is to let multiple AI agents collaborate with human roles in a development team (e.g., analysing requirements, generating code, testing, and maintaining systems), so the workflow becomes faster and more reliable.

Funding (public data):

  • Programme: Horizon Europe
  • Total cost: (“No data”)
  • EU contribution: €4,569,056.77

MOSAICO | Management, Orchestration and Supervision of AI-agent COmmunities for reliable AI in software engineering

MOSAICO focuses on AI-agent communities for software engineering: instead of relying on a single model/assistant, it orchestrates groups of specialised agents that communicate, debate, and verify outputs. The project targets key challenges like hallucinations and bias, using governance and quality checks to improve trust in multi-agent work.

Funding (public data):

  • Programme: Horizon Europe
  • Total cost: €5,218,186.88
  • EU contribution: €5,218,186.88

AIXPERT | An agentic, multi-layer, GenAI-powered backbone to make an AI system explainable, accountable, and transparent

AIXPERT is building an agentic AI backbone to make AI systems more explainable, accountable, and transparent. In practice, it frames this as an AI-agentic platform that can support consistent explainability and governance across different models and applications.

Funding (public data):

  • Programme: Horizon Europe
  • Total cost: €7,499,753.75
  • EU contribution: €7,499,753.75

CyberAId | AI-Driven Cybersecurity for Financial Service Providers

CyberAId is presented as a project that will deploy a novel agentic AI infrastructure to coordinate and orchestrate cybersecurity tools and services, with a focus on critical infrastructures in the financial sector. It is a clear example of the agentic / orchestration framing, applied on the cybersecurity side rather than enterprise productivity.

Funding (public data):

  • Programme: Digital Europe Programme
  • Total cost: €7,440,248
  • EU contribution: €4,999,898

AIDA | Artificial Intelligence Deployable Agent

AIDA (Artificial Intelligence Deployable Agent) aims to develop prototype AI-based cyber defence agents capable of autonomous and semi-autonomous actions across the cyber incident management lifecycle (detection, analysis, response, and support to operators). It’s a very direct “AI agents” use case, but in defence / cyber rather than enterprise productivity.

Funding (public data):

  • Programme: EDF
  • Estimated total cost: €32,453,332.28
  • Maximum EU contribution: €26,000,000.00

A crucial note before deploying AI agents: control, security, and compliance

For an agent to be useful in real environments, it’s not enough that it “works”. It must operate with boundaries:

  • Least-privilege permissions: access only what’s necessary.
  • Isolated environments: especially if the agent can use “computer use” or execute sensitive actions.
  • Traceability: logs, recorded decisions, source evidence.
  • Human review: for critical decisions (finance, legal, security, health).
  • Alignment with the European framework: classify the use case and meet transparency/obligations when applicable.
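The first four controls above can be combined into a single gate through which every tool call passes. All names here (`ALLOWED`, `NEEDS_APPROVAL`, the tool names) are assumptions for illustration, not a standard interface:

```python
# Sketch of boundary controls for agent tool calls: an allow-list enforces
# least privilege, logging provides traceability, and critical actions
# require explicit human approval before they execute.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

ALLOWED = {"read_invoice", "update_ticket"}   # least-privilege allow-list
NEEDS_APPROVAL = {"update_ticket"}            # critical actions need a human

def gated_call(tool: str, arg: str, approver=None) -> str:
    if tool not in ALLOWED:
        log.warning("blocked: %s", tool)
        raise PermissionError(f"tool '{tool}' is not allow-listed")
    if tool in NEEDS_APPROVAL:
        if approver is None or not approver(tool, arg):
            log.info("rejected by reviewer: %s(%s)", tool, arg)
            raise PermissionError(f"'{tool}' requires human approval")
    log.info("executed: %s(%s)", tool, arg)   # audit trail for traceability
    return f"{tool} done on {arg}"

# Usage: a reviewer callback stands in for the human-in-the-loop step.
print(gated_call("read_invoice", "INV-42"))
print(gated_call("update_ticket", "T-7", approver=lambda t, a: True))
```

Keeping the gate outside the agent matters: even if a prompt injection manipulates the model’s choices, an action outside the allow-list, or a critical one without sign-off, never executes.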

AI agents aren’t a promise anymore—they’re a competitive advantage

AI agents mark the shift from AI as a “content tool” to AI as an “execution system”. That’s why they’re moving into the core of operations, product, and innovation.

In Europe, the opportunity is twofold: accelerate competitiveness while building in safeguards. The organisations that design agents with the right governance, security, and balance of ambition and risk will lead in both projects and markets.