Navigating the Future of AI: Grammarly’s Knar Hovakimyan on Responsible AI and Agentic Systems in 2025
In 2025, artificial intelligence is poised to undergo a transformative leap with the emergence of agentic AI. Unlike retrieval-augmented generation (RAG) systems, which retrieve information and generate responses, agentic AI autonomously plans and executes tasks, introducing advanced decision-making capabilities. However, this leap is not without risks, warns Knar Hovakimyan, Responsible AI Lead at Grammarly.
Deploying this new technology without oversight poses significant risks, as agents may misinterpret goals or take actions with insufficient information.
Hovakimyan emphasizes the need for robust monitoring and intervention protocols to mitigate these risks.
From NLP to Responsible AI: A Journey of Purpose
Hovakimyan's path to leading Responsible AI at Grammarly is as nuanced as the technology she oversees. Her transition from working on Amazon Alexa to a role focused on responsible AI reflects a commitment to understanding and addressing the complexities of machine learning. "My journey into responsible AI was gradual," she shares. Early projects focused on internal product standards, such as refining Grammarly's interaction with sensitive user texts. As AI's societal role expanded, so too did her team's mission: balancing internal goals with external regulatory and ethical frameworks.
This hands-on approach ensures AI systems not only meet internal benchmarks but also align with broader societal expectations of safety, fairness, and transparency. "We don’t just define standards and evaluate performance; we work directly with teams to build safer, user-centered features," she explains.
Painting the AI Landscape of 2025
Hovakimyan envisions a future where businesses and individuals engage with AI in profoundly new ways. Generative AI, once confined to generating text or summarizing data, will evolve into agentic AI capable of autonomous decision-making. "These systems could reshape industries by enabling complex workflows that require minimal human intervention," she explains.
A key prediction from Deloitte’s 2025 report corroborates this: 25% of enterprises using generative AI are expected to deploy AI agents by the end of the year, a figure projected to double by 2027. These agents will enhance productivity and efficiency across industries but will also necessitate entirely new standards for user transparency, control, and education.
Beyond reshaping industries, agentic AI could redefine societal norms around employment, education, and governance, necessitating early dialogue on ethical and legal frameworks.
Agentic AI: Capabilities, Challenges, and Risks
Agentic AI represents a significant shift from current technologies. While RAG systems retrieve and generate data, agentic AI autonomously executes decisions, often with minimal human oversight. This capability comes with substantial risks.
"An AI agent in a hiring context, for instance, could inadvertently make decisions that violate regulatory standards by over-relying on biased data," Hovakimyan explains. "Even well-designed systems can falter without sufficient context or user clarification." This observation echoes predictions that agentic AI will require advanced safeguards, particularly in regulated industries like healthcare and finance.
Hovakimyan also discusses the evolution of hallucinations. "In agentic AI, hallucinations are no longer confined to providing inaccurate answers—they can manifest as inappropriate actions," she warns. For example, a financial agent tasked with "maximizing savings" could erroneously suspend essential payments if it misinterprets its objectives.
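Her example is concrete enough to sketch a mitigation. The snippet below is a minimal illustration, not Grammarly's implementation: a deterministic policy layer that reviews every action an agent proposes before anything is executed. All names here (ProposedAction, ESSENTIAL_PAYMENTS, review_action) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action the agent wants to take, expressed as structured data."""
    kind: str       # e.g. "suspend_payment" or "transfer_funds"
    target: str     # e.g. a payment or account identifier
    rationale: str  # the agent's stated reason, kept for audit logs

# Payments that must never be suspended automatically (hypothetical data).
ESSENTIAL_PAYMENTS = {"rent", "health_insurance", "utilities"}

def review_action(action: ProposedAction) -> str:
    """Return 'execute', 'escalate', or 'block' for a proposed action.

    The agent never acts directly: every proposal passes through this
    deterministic policy layer before anything happens.
    """
    if action.kind == "suspend_payment" and action.target in ESSENTIAL_PAYMENTS:
        # A misread objective ("maximize savings") cannot cancel essentials;
        # the hard-coded policy overrides the model's plan.
        return "block"
    if action.kind in {"suspend_payment", "transfer_funds"}:
        # Sensitive but permitted actions are routed to a human reviewer.
        return "escalate"
    return "execute"

# The failure mode described above is caught before execution:
decision = review_action(ProposedAction("suspend_payment", "rent", "maximize savings"))
assert decision == "block"
```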
The Ethics of Autonomy
Hovakimyan stresses that the ethical challenges of agentic AI cannot be tackled in isolation. An interdisciplinary approach—blending insights from technology, law, sociology, and psychology—is crucial to anticipate and address risks. “It’s not just about improving metrics or reducing bias but setting new standards for transparency and control,” she asserts.
In Grammarly’s case, education and user empowerment are central to its strategy. The platform explains its suggestions, offering users transparency into AI decision-making. "Our goal is to ensure users understand and trust the technology, even as it becomes more complex," she says.
The broader industry is also taking note. NVIDIA predicts that agentic AI will power autonomous robots and virtual agents that assist in complex surgeries, customer service, and supply chain optimization. These advances could revolutionize sectors but will demand unprecedented levels of user education and regulatory oversight.
Preparing for a Future with AI Agents
What does effective risk mitigation look like? Hovakimyan outlines a multipronged approach:
Monitoring and Intervention Protocols: Organizations must implement real-time systems capable of halting or redirecting AI actions if deviations occur (a minimal sketch follows this list).
Human Oversight: Humans will play a critical role in ensuring AI outputs align with intended objectives. She compares this to a co-pilot’s role in aviation, ready to step in when systems fail.
Balancing Efficiency with Safety: "Efficiency gains are important, but they cannot come at the expense of safety," she cautions, advocating for deliberate and informed trade-offs.
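Taken together, the first two points suggest a simple control-loop shape. The sketch below is a hypothetical outline under assumed interfaces (agent_step, is_anomalous, and ask_human are stand-in names, not a real framework's API), showing how real-time monitoring and human intervention can wrap an agent's step-by-step execution:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-monitor")

MAX_STEPS = 10  # hard ceiling so a runaway agent cannot loop forever

def execute(action):
    """Hypothetical executor; a real system would perform and audit-log the action."""
    log.info("executing %r", action)

def run_with_oversight(agent_step, is_anomalous, ask_human):
    """Drive an agent step by step with monitoring and human intervention.

    agent_step()         -> next proposed action, or None when the task is done
    is_anomalous(action) -> True if the action deviates from expected behavior
    ask_human(action)    -> True only if a human reviewer approves the action
    All three callables are hypothetical stand-ins for real components.
    """
    for step in range(MAX_STEPS):
        action = agent_step()
        if action is None:
            log.info("agent finished after %d steps", step)
            return
        if is_anomalous(action) and not ask_human(action):
            # Real-time intervention: a flagged deviation halts the run,
            # echoing the co-pilot role described above.
            log.warning("halted at step %d: %r rejected by reviewer", step, action)
            return
        execute(action)
    log.warning("step budget exhausted; pausing for human review")
```

In a real deployment, is_anomalous could combine rule checks with statistical monitors, and ask_human would route to an approval queue rather than a blocking prompt; the point is simply that the loop, not the model, decides what actually runs.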
NVIDIA experts add that agentic AI could benefit from developments like retrieval-augmented generation (RAG), which grounds outputs in factual data, thereby reducing hallucinations and improving reliability.
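To make that grounding idea concrete, here is a toy sketch of the RAG pattern: retrieve relevant passages first, then constrain the model's answer to them. The keyword scorer and prompt wording are simplifications for illustration, not any vendor's API; production systems typically use embedding-based retrieval.

```python
def retrieve(query, documents, k=2):
    """Naive keyword-overlap retrieval; real systems use embedding search."""
    words = query.lower().split()
    scored = sorted(documents, key=lambda d: -sum(w in d.lower() for w in words))
    return scored[:k]

def grounded_prompt(query, documents):
    """Build a prompt that restricts the model to the retrieved sources."""
    sources = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the sources below. If they are insufficient, "
        "say so instead of guessing.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

# Toy corpus; a production system would retrieve from a vetted knowledge base.
docs = [
    "Policy 12: essential payments may only be changed with written approval.",
    "FAQ: savings transfers run on the first business day of each month.",
]
print(grounded_prompt("Can the agent suspend an essential payment?", docs))
```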
Transformative Impacts Across Sectors
Hovakimyan predicts that agentic AI will revolutionize critical sectors such as healthcare, finance, and cybersecurity. A few hypothetical scenarios illustrate both the potential and the risks of agents in these fields:
Healthcare: Autonomous agents could assist with diagnosis and treatment recommendations, reducing workloads for medical staff. However, hallucinations could lead to misdiagnoses with life-threatening implications.
Finance: Agents might optimize investment strategies but require stringent safeguards to ensure compliance and mitigate risks.
Cybersecurity: AI agents could proactively identify vulnerabilities but must be monitored to avoid overreach or unintended disruptions.
Responsibility as the Cornerstone of AI Development
For Hovakimyan, the journey toward 2025 is one of setting precedents. "How we build and deploy agentic AI today will determine how the industry evolves," she concludes.
Her insights underscore a critical truth: the future of AI is as much about ethics and governance as it is about innovation. Organizations must take deliberate steps today, such as implementing robust safeguards and engaging interdisciplinary teams, to ensure agentic AI serves humanity responsibly.
Looking Ahead
Hovakimyan’s vision for 2025 and beyond serves as both a roadmap and a warning. As businesses embrace the possibilities of agentic AI, they must also rise to the challenge of managing its risks. By prioritizing responsible development and deployment, organizations can unlock AI's transformative potential while safeguarding its role in society.
Learn more about Knar Hovakimyan
Knar Hovakimyan leads Grammarly’s Responsible AI team, which is committed to building and improving AI systems that reduce bias and promote fairness. Communication is incredibly personal, and Knar’s team of analytical linguists and machine learning engineers is committed to ensuring that Grammarly’s suggestions and outputs are inclusive, unbiased, and fair for the over 40 million people and 50,000 organizations using the product. Knar started at Grammarly as an Analytical Linguist with a focus on sensitivity and responsible AI. She personally developed the Responsible AI guidelines, processes, and policies that Grammarly follows, secured engineering and research resources, and built up the team. Prior to Grammarly, she worked at Amazon as a research coordinator and program manager for Alexa, where she also focused on natural language processing. Connect with her on LinkedIn.