Policy

While the West Fights About AI, Vietnam Is Actually Building the Rules

The developing world is leapfrogging on AI governance while Brussels and Washington argue

2026-04-07 ‱ Source: LuatVietnam

Listen up, tech bros and policy wonks. While Silicon Valley is busy hosting congressional circuses where senators ask if AI can turn on us like Terminator, and the EU is still figuring out whether their AI Act should be 100 pages or 200 pages, Vietnam just quietly walked into the room, rolled up its sleeves, and started building actual AI regulations.

Yeah, you read that right. Vietnam.

The same country your grandpa probably still associates with wartime documentaries is now positioning itself as the pragmatic voice of reason in global AI governance. The Ministry of Science and Technology (MOST) isn't just talking about AI risk—they're drafting a framework that categorizes AI systems by risk levels and defines what "high-risk" actually means. They're consulting stakeholders. They're being specific. They're being practical.

And honestly? It's making the West look a little silly.

What Vietnam Is Actually Doing (Spoiler: It's Smart)

Here's the meat of it. The National Institute of Digital Technology and Digital Transformation is working with various units under MOST to create a definitive list of high-risk AI systems. But they're not just copy-pasting the EU's homework—they're adapting it for reality.

The framework divides AI into three risk buckets: high, medium, and low. Simple. Clean. No PhD required to understand it.

For high-risk systems, Vietnam is targeting the usual suspects—healthcare AI that makes treatment decisions, recruitment algorithms that can make or break careers, finance systems that control money flows, and infrastructure that keeps the lights on. But here's where it gets interesting: they're adding nuance that the EU's sweeping regulations often miss.

Take healthcare. Vietnam isn't saying "all medical AI is high-risk" like some regulatory sledgehammer. Instead, they're proposing that AI systems only hit the high-risk category when they "directly decide on or perform procedures without independent clinical oversight." That's a crucial distinction. An AI that recommends a treatment plan for a doctor to review? Medium risk. An AI that's about to cut you open without a human in the loop? High risk. Very high risk.
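That oversight distinction is crisp enough to state as a rule. Here's a minimal sketch of how it could be encoded — the function, tier names, and criteria are illustrative assumptions for this article, not the draft's actual text:

```python
# Illustrative encoding of the oversight rule described above: a medical AI
# is high-risk only when it directly decides on or performs procedures
# without independent clinical oversight. All names here are hypothetical.
def classify_medical_ai(performs_procedures: bool, has_clinical_oversight: bool) -> str:
    """Return an illustrative risk tier: 'high' or 'medium'."""
    if performs_procedures and not has_clinical_oversight:
        return "high"    # autonomous clinical action, no human in the loop
    return "medium"      # recommendations or supervised actions a doctor reviews

# A treatment recommender a doctor reviews: medium risk.
print(classify_medical_ai(False, True))   # medium
# A system performing procedures with no human in the loop: high risk.
print(classify_medical_ai(True, False))   # high
```

The point of the sketch is that the trigger is the *absence of oversight*, not the medical domain itself — exactly the nuance the draft is reaching for.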

This is what smart regulation looks like, folks. Context-aware. Sector-specific. Reality-based.

The Developing World Is Leapfrogging—Again

Remember when everyone laughed at the idea that developing nations would skip landlines and go straight to mobile? That same energy is happening with AI governance right now, and most of the West is too busy arguing to notice.

Vietnam isn't starting from scratch with legacy regulations that need retrofitting. They're building from the ground up with 2025 sensibilities. No horse-and-buggy laws trying to regulate self-driving cars. No 1990s data protection frameworks being awkwardly stretched to cover neural networks.

The proposed criteria for high-risk classification hit all the right notes.

But here's the kicker—they're also building in common-sense exemptions. Systems doing purely technical functions like data collection or classification? Not high-risk if they don't directly affect rights. Corporate internal systems with no external impact? Lower risk tier. Analysis and forecasting tools that are explicitly advisory? You get the idea.
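Those carve-outs read naturally as downgrade checks applied before any sector rule kicks in. Here's a sketch of that ordering — the field names and tier labels are invented for illustration, not the draft's wording:

```python
# Hypothetical exemption logic mirroring the carve-outs above: certain
# functions are kept out of the high-risk tier even in sensitive sectors.
EXEMPT_FUNCTIONS = {"data_collection", "data_classification"}  # purely technical tasks

def risk_tier(sector_is_sensitive: bool,
              function: str,
              affects_external_rights: bool,
              advisory_only: bool) -> str:
    """Return an illustrative tier after applying the carve-outs in order."""
    # Purely technical functions that don't directly affect rights: not high-risk.
    if function in EXEMPT_FUNCTIONS and not affects_external_rights:
        return "low"
    # Internal corporate systems with no external impact: lower tier.
    if not affects_external_rights:
        return "low"
    # Explicitly advisory analysis/forecasting tools: not high-risk.
    if advisory_only:
        return "medium"
    return "high" if sector_is_sensitive else "medium"

print(risk_tier(True, "data_collection", False, False))    # low
print(risk_tier(True, "lending_decision", True, False))    # high
```

Note the design choice the exemptions imply: sector alone never makes a system high-risk; it has to clear the carve-outs first.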

This is regulatory craftsmanship. The kind that comes from watching the West stumble and deciding there's a better way.

Learning From the EU, Without the Bureaucratic Bloat

Let's be real—the EU AI Act is groundbreaking, but it's also 272 pages of dense legal text that even AI companies struggle to parse. It's the regulatory equivalent of a Swiss Army knife with 500 tools when you just need to open a bottle.

Vietnam is taking the ideas from the EU model—the risk-based approach, the sector-specific thinking, the focus on fundamental rights—but packaging them in a way that businesses can actually implement without hiring an army of compliance lawyers.

The three-tier risk system (high, medium, low) is elegant. The self-classification approach for providers—with proper oversight for the high-risk stuff—strikes the right balance between innovation and safety. And the consultation process with sector-specific units means the regulations will actually reflect on-the-ground realities in healthcare, education, finance, and transport.
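To show how lightweight that balance can be in practice, here's a sketch of a self-classification flow where only the high-risk tier triggers extra oversight — the tier names and review steps are assumptions for illustration, not anything Vietnam has published:

```python
# Sketch of the self-classification approach described above: providers
# assess their own systems, and only the high-risk tier triggers
# independent oversight. All labels here are hypothetical.
def oversight_path(self_assessed_tier: str) -> str:
    """Map a provider's self-assessed tier to an illustrative oversight path."""
    paths = {
        "high": "independent review before deployment",
        "medium": "self-certification with documentation",
        "low": "self-certification",
    }
    return paths[self_assessed_tier]

print(oversight_path("high"))  # independent review before deployment
```

The asymmetry is the whole point: the regulator's attention concentrates on the small high-risk slice instead of spreading thin across everything.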

This isn't regulatory light-touch nonsense. It's regulatory smart touch. Touching what matters, leaving room for innovation where it doesn't.

The Economic Chess Game Nobody's Talking About

Okay, time for some real talk. This isn't just about AI safety—this is Vietnam playing 4D economic chess while the West is still reading the rulebook.

Vietnam has been on an absolute tear economically. Manufacturing hub. Tech outsourcing destination. Rising middle class. Now they're positioning themselves as the sensible middle ground for AI development—a jurisdiction with clear rules that don't strangle innovation.

Think about what this means for foreign investment. Companies looking to develop AI in Asia have three broad options:

  1. China: Massive market, but regulatory uncertainty and geopolitical tension
  2. Singapore: Great infrastructure, but expensive and crowded
  3. Vietnam: Emerging, affordable, and increasingly clear on the rules

Vietnam is building a competitive advantage through governance clarity. That's not an accident. That's strategy.

By defining high-risk AI categories now, before the technology becomes ubiquitous, Vietnam is creating a predictable environment for AI companies. Predictability attracts capital. Capital drives development. Development creates jobs. Jobs drive growth. It's the economic development playbook, but applied to the AI era.

Why High-Risk Categorization Is the Foundation That Matters

Here's a hot take that might ruffle some feathers: Most AI "regulation" discussions in the West are just vibes and fear-mongering. "AI is scary!" "AI might kill us!" "We need to do something!"

Vietnam looked at that noise and said, "Nah, we're going to define what 'dangerous' actually means first."

The categorization of high-risk AI isn't bureaucratic box-ticking—it's the foundational work that makes actual regulation possible. You can't regulate what you can't define. You can't enforce standards that don't exist. You can't hold companies accountable to vague feelings about AI safety.

By establishing clear criteria for high-risk systems, Vietnam is creating the scaffolding for enforceable standards and real accountability.

This is the unsexy but essential work of governance. Vietnam is doing it while others are still drafting position papers.

The Sectors in the Crosshairs (And Why It Makes Sense)

Let's look at where Vietnam is focusing its high-risk scrutiny:

Healthcare: AI recommending invasive procedures without human oversight. Obviously high-risk. This isn't rocket science—if a machine can cut you open, we should probably have standards around that.

Finance and Banking: AI systems making lending decisions, fraud detection, trading algorithms. When money and livelihoods are on the line, oversight matters.

Recruitment and Employment: Algorithms that can systematically exclude candidates based on patterns in training data. This is where bias becomes discrimination at scale.

Transport: Autonomous systems, traffic management, logistics. When AI controls vehicles or critical infrastructure, failures aren't bugs—they're potentially fatal.

Energy: Grid management, infrastructure control. See above about things that keep society running.

Justice and Public Administration: AI in legal proceedings, government decision-making. When the state uses AI, the stakes for citizens are existential.

This isn't "regulate everything because AI is scary." This is "regulate specific applications where failure modes matter." It's targeted. It's proportionate. It's adult.

đŸ”„ Our Hot Take: Vietnam Is Teaching the West a Lesson

Alright, buckle up, because here's where we get spicy.

The West has been absolutely embarrassing itself on AI governance. The US Congress holds hearings where they demonstrate profound technical illiteracy. The EU writes regulations so comprehensive they're incomprehensible. Everyone is either panic-banning things they don't understand or arguing about theoretical future scenarios while ignoring present-day harms.

Vietnam just showed up and said, "What if we just... made reasonable rules?"

No grandstanding. No existential panic. No 500-page documents written by committees of committees. Just practical, sector-specific frameworks that balance innovation with safety.

The lesson here isn't that developing nations are catching up—it's that they might be pulling ahead on governance sophistication. Vietnam gets to learn from the West's mistakes without inheriting its baggage. No legacy tech regulations to work around. No entrenched industry lobbying that dilutes every rule. No political theater substituting for policy.

When historians look back at AI governance, they might see Vietnam as the country that did the foundational work right while others were still arguing about whether AI deserves rights.

The Bottom Line

Vietnam's move to define high-risk AI categories isn't just another regulatory announcement to scroll past. It's a signal that the global center of gravity for AI governance might be shifting.

While the West produces hot air, Vietnam is producing frameworks. While developed nations argue about principles, a developing nation is building practical implementation. While everyone else talks about AI risk in the abstract, Vietnam is categorizing it, measuring it, and preparing to regulate it.

This is what leadership looks like in the AI era—not the loudest voice in the room, but the one actually getting things done.

For AI companies, this should be a wake-up call. The regulatory landscape isn't just about Brussels and Washington anymore. Smart jurisdictions are building competitive advantages through clear, reasonable AI governance. Vietnam is positioning itself as one of them.

For policymakers elsewhere, take notes. The three-tier risk system. The sector-specific approach. The consultation with actual experts. The balance between innovation and safety. This is the template.

And for the rest of us watching from the sidelines? Maybe it's time to stop assuming that good governance only comes from the usual suspects. Vietnam just entered the chat, and they're bringing receipts.




