Startup Fundraising

Tynapse Raises $3.17M Seed for AI Agent Security Infrastructure

Tynapse lands $3.17M seed round from Mirae Asset Venture Investment and others to build AI agent runtime security and trust infrastructure.

Alvaro de la Maza

Partner at Aninver


Key Takeaways

  • Tynapse raised $3.17M (Seed) from Mirae Asset Venture Investment, Mirae Asset Capital, Murex Partners, Kakao Ventures, Mashup Ventures.
  • Sector: Artificial Intelligence (AI), Technology, Software & Gaming.
  • Geography: South Korea.

Analysis

South Korean startup Tynapse has closed a $3.17 million seed round to build runtime security infrastructure for AI agents. The raise, completed just six months after the company's founding, underscores the growing demand for robust safety mechanisms as AI agents move into live enterprise operations. Mirae Asset Venture Investment led the round, with participation from Mirae Asset Capital, Murex Partners, and Kakao Ventures. It follows an earlier $141K angel investment from unicorn founders and AI researchers, as well as pre-seed funding from Mashup Ventures.

Tynapse is architecting what it terms an "AI Trust Layer," a sophisticated system designed to validate and govern the actions and outputs of AI agents in real-time. The platform employs a dual-stage detection and judgment framework to proactively identify and neutralize emergent threats. These threats include AI hallucinations, inadvertent data leakage, sophisticated jailbreaking attempts, and unauthorized privilege escalations, all of which pose substantial risks to enterprise deployments. A key feature of the system is its automatic logging of every decision, creating a legally admissible audit trail that ensures accountability and traceability.
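To make the dual-stage flow concrete, here is a minimal sketch of a "detect, then judge" pipeline with an append-only audit log. All class names, detection rules, and verdict fields are illustrative assumptions for this article; they do not represent Tynapse's actual implementation or API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Verdict:
    action: str      # "allow" or "block"
    flags: list      # issues raised by the detection stage
    logged_at: str   # UTC timestamp of the decision

class TrustLayer:
    """Hypothetical two-stage guard for AI agent outputs."""

    def __init__(self):
        self.audit_log = []  # append-only record of every decision

    def detect(self, agent_output: str) -> list:
        """Stage 1: flag potential issues with simple pattern checks."""
        flags = []
        text = agent_output.lower()
        if "password" in text:
            flags.append("possible_data_leak")
        if "ignore previous instructions" in text:
            flags.append("possible_jailbreak")
        return flags

    def judge(self, flags: list) -> str:
        """Stage 2: decide whether the flagged output may proceed."""
        return "block" if flags else "allow"

    def review(self, agent_output: str) -> Verdict:
        flags = self.detect(agent_output)
        verdict = Verdict(
            action=self.judge(flags),
            flags=flags,
            logged_at=datetime.now(timezone.utc).isoformat(),
        )
        self.audit_log.append(verdict)  # every decision is recorded
        return verdict

layer = TrustLayer()
print(layer.review("Here is the admin password: hunter2").action)  # block
print(layer.review("The quarterly report is attached.").action)    # allow
print(len(layer.audit_log))  # 2
```

In a production system the detection stage would use far richer signals than keyword matching, but the shape is the same: detection produces evidence, judgment produces an enforceable decision, and every decision lands in the audit trail.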

The urgency for such solutions is mounting as AI agents move beyond experimental phases and become integral to core business workflows. Operational risks, such as factual inaccuracies, sensitive data exposure, and misuse of system privileges, represent a primary impediment to widespread enterprise adoption of AI technologies. Tynapse aims to address this critical gap, initially targeting the financial services sector, with plans to expand its reach into healthcare, public administration, and other enterprise markets, aspiring to become a global leader in AI security infrastructure.

The founding team brings a formidable blend of expertise. Minseung Kang, CEO of Tynapse, previously served as a CTO in the financial industry and has deep experience in AI trust and safety research and implementation. The team is further bolstered by AI specialists from Google Research and seasoned engineers with backgrounds in operating large-scale production systems. The company's technology has already garnered international recognition, with Tynapse reaching the finals of competitions such as the NVIDIA GTC 2026 AWS Startup Pitch and the 2026 Snowflake Startup Challenge, notably as the sole Asian contender.

Investors expressed strong conviction in Tynapse's potential. Jinhwan Cho, Director at Mirae Asset Venture Investment, highlighted the team's unique combination of financial sector experience, security acumen, and advanced AI capabilities, stating, "We invested because we believe they can actually solve the AI reliability problem." Similarly, Jungho Shin, Senior Associate at Kakao Ventures, emphasized that trust infrastructure is becoming essential as AI moves into execution layers.

Minseung Kang, CEO of Tynapse, articulated the company's vision: "As AI moves into the execution layer, trust infrastructure has become a must-have category that determines the scalability of the entire industry. In the age of AI agents, operational reliability and regulatory compliance matter just as much as model performance. Our goal is to grow into a global standard infrastructure company—one that makes AI trustworthy, no matter where or what it executes." The company is currently engaged in proof-of-concept trials with major domestic banks, signaling an imminent commercial launch.