I Fought 2025 AV Laws With a Fake Popup: The Results
I tested the usability of proposed 2025 self-driving car laws with a fake legal popup. The results show a dangerous gap between policy and human behavior.
Alex Keeler
UX researcher and strategist focused on the intersection of emerging tech and human-centered design.
You’re cruising down the highway at 70 mph in your new self-driving car. The sun is shining, your favorite podcast is playing, and you haven’t touched the wheel in an hour. Suddenly, a massive legal disclaimer fills your dashboard screen. It’s dense, it’s confusing, and a timer is counting down from 15 seconds. Do you A) Accept full liability for the car’s next maneuver, B) Defer to the system’s “ethically-weighted judgment,” or C) Request a manual override you’re not prepared to take?
This isn’t a scene from a dystopian sci-fi movie. It’s the user experience nightmare quietly being drafted into law right now. They’re calling it the “2025 Autonomous Vehicle Accord,” and it’s a classic case of well-intentioned policy created in a vacuum, completely divorced from human psychology. As a UX researcher, I saw it not as a safety measure, but as a future catastrophe hiding in plain sight.
So, I decided to run an experiment. I built a simulation of this future: a simple fake popup designed to be as frustratingly realistic as possible. I wanted to see what people would actually do when faced with a lawyer-approved interface in a high-stakes moment. The results were even more alarming than I’d imagined.
The Looming Threat: Why the 2025 AV Accord Scared Me
On paper, the 2025 AV Accord sounds reasonable. It’s an international framework aimed at standardizing how autonomous vehicles handle liability and critical decision-making. Governments want to ensure that as we hand over control to algorithms, there are clear lines of responsibility. The problem? Their solution is to push that responsibility onto the driver at the worst possible moments through a series of complex consent forms.
Imagine a scenario: your AV is approaching a construction zone where a lane is unexpectedly closed. The car needs to decide whether to brake hard, risking a rear-end collision, or swerve into a narrow gap in traffic. The Accord, as proposed, would require the car to prompt you for a decision. It’s a mechanism for the manufacturer to say, “Hey, we asked them!” It shifts legal liability from the multi-trillion dollar corporation that built the car to the person who was just trying to get to their dental appointment.
This approach ignores decades of research into human-computer interaction:
- Consent Fatigue: We’re already bombarded with cookie banners, privacy policies, and terms of service. We click “Agree” without reading. Applying this model to a moving vehicle is deeply irresponsible.
- Decision Paralysis: When presented with complex choices under pressure, humans don’t suddenly become rational legal experts. We freeze, make impulsive guesses, or ignore the prompt entirely.
- The Illusion of Control: Asking a disengaged driver to make a split-second, life-or-death decision isn’t empowering them. It’s setting them up for failure. True safety comes from a system you can trust, not one that constantly asks for your permission to do its job.
I knew arguing these points in a policy paper would be useless. I had to show, not just tell.
An Experiment in Frustration: Building the “Nightmare Popup”
I needed to replicate the spirit of the Accord in a simple, testable format. I spent a weekend coding a small web-based interactive demo: a basic driving animation with a single purpose, to serve up my “Nightmare Popup” at a critical moment. (A stripped-down sketch of the popup logic follows the list of key elements below.)
The popup was my masterpiece of malicious compliance. I designed it to be as unhelpful as possible while still looking like something a corporate legal team would approve.

The key elements were:
- An Ominous Title: “Action Required: Liability Transfer”
- Confusing Legalese: “In accordance with the 2025 AV Accord, please select your preferred risk mitigation protocol for the upcoming traffic anomaly.”
- Impossible Options: The choices were deliberately vague and overlapping. “Accept System Recommendation (Standard Risk Profile),” “Authorize Aggressive Maneuver (Accepts Increased Liability),” and “Decline & Initiate Manual Control.”
- A Pressure-Inducing Timer: A red bar at the bottom showed a 10-second countdown.
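To make this concrete, here’s a minimal sketch of how a popup like mine can be wired up in a browser. It is not the code I actually shipped; the class names, option handling, and fall-through behavior on timeout are illustrative assumptions, and the styling that makes the “recommended” button big and green is left to CSS.

```ts
// Minimal sketch of the "Nightmare Popup" (illustrative; class names, labels,
// and the 10-second default are assumptions, not the shipped demo code).
type Choice = "system" | "aggressive" | "manual";

interface PopupResult {
  choice: Choice | "timeout";
  msToClick: number; // time from display to click (or expiry)
}

function showNightmarePopup(timeoutMs = 10_000): Promise<PopupResult> {
  return new Promise((resolve) => {
    const shownAt = performance.now();

    // Dialog shell: ominous title, legalese body, and a countdown bar
    // (the bar's shrinking animation is pure CSS and omitted here).
    const dialog = document.createElement("div");
    dialog.className = "nightmare-popup";
    dialog.innerHTML = `
      <h2>Action Required: Liability Transfer</h2>
      <p>In accordance with the 2025 AV Accord, please select your preferred
         risk mitigation protocol for the upcoming traffic anomaly.</p>
      <div class="countdown-bar"></div>
    `;

    const finish = (choice: Choice | "timeout") => {
      clearTimeout(timer);
      dialog.remove();
      resolve({ choice, msToClick: performance.now() - shownAt });
    };

    // The three deliberately vague, overlapping options.
    const options: Array<[Choice, string]> = [
      ["system", "Accept System Recommendation (Standard Risk Profile)"],
      ["aggressive", "Authorize Aggressive Maneuver (Accepts Increased Liability)"],
      ["manual", "Decline & Initiate Manual Control"],
    ];

    for (const [choice, label] of options) {
      const btn = document.createElement("button");
      btn.textContent = label;
      if (choice === "system") btn.classList.add("primary"); // the big green button
      btn.addEventListener("click", () => finish(choice));
      dialog.appendChild(btn);
    }

    // Pressure-inducing timer: on expiry, fall through to a "timeout" result.
    const timer = setTimeout(() => finish("timeout"), timeoutMs);

    document.body.appendChild(dialog);
  });
}
```

In a harness like this, the driving animation pauses at the “traffic anomaly,” awaits the promise, and records both the choice and how long it took to click.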
The Setup: How I Deployed It
I didn’t need a huge sample size; I just needed to capture genuine human reactions. I posted a link to the demo on a few design forums and tech-focused subreddits with a simple title: “Test my prototype for a future AV interface.” I didn’t mention my opposition or the experiment’s true purpose. Across 48 hours, I logged more than 3,500 interactions, tracking mouse movements, time-to-click, and which option was chosen. I also included a tiny, optional feedback link at the end.
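For the curious, instrumentation at this level needs nothing exotic: a few event listeners and a beacon to a collection endpoint. The sketch below shows the general shape; the endpoint path, payload fields, and sampling strategy are assumptions, not my actual logging code.

```ts
// Sketch of lightweight interaction logging (endpoint and payload shape are assumptions).
interface InteractionLog {
  sessionId: string;
  choice: string;      // which option was selected, or "timeout"
  msToClick: number;   // time from popup display to click
  mouseTrail: Array<{ x: number; y: number; t: number }>;
}

const mouseTrail: InteractionLog["mouseTrail"] = [];

// Sample pointer position while the popup is on screen.
document.addEventListener("mousemove", (e) => {
  mouseTrail.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});

function logInteraction(choice: string, msToClick: number): void {
  const payload: InteractionLog = {
    sessionId: crypto.randomUUID(), // anonymous per-visit ID, no account required
    choice,
    msToClick,
    mouseTrail,
  };
  // sendBeacon survives page unloads better than fetch for fire-and-forget analytics.
  navigator.sendBeacon("/api/log", JSON.stringify(payload));
}
```

In practice you would throttle the mousemove sampling; at full rate the trail grows large quickly, and you only need enough points to see hesitation and hunting between buttons.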
The Alarming Results: What Happens at 70 MPH
The quantitative data was stark, but the qualitative feedback in my inbox was where the story really came to life. People weren’t just confused; they were angry.
Finding 1: Nobody Reads Anything
The average time spent on the popup before clicking was 2.1 seconds. The text I’d so carefully crafted to be confusing didn’t matter. People saw a dialog box in their way and their only goal was to make it disappear. They weren’t making a choice; they were swatting a fly.
Finding 2: The Big Green Button Always Wins
I intentionally made the “Accept System Recommendation” button larger and greener than the others. A staggering 88% of users clicked it, regardless of the scenario. This is classic UX manipulation, but in this context, it demonstrates that design choices, not rational thought, would dictate these “critical” decisions. Only 7% chose to take manual control, and the remaining 5% clicked the “Aggressive” option.
Finding 3: Users Felt Anxious and Powerless
The feedback I received was a goldmine of emotional data. People didn’t feel in control; they felt trapped. One user’s response perfectly encapsulated the sentiment:
“What the hell was that? I felt like I was taking a test I hadn’t studied for. I just clicked the first button to get it off the screen. If that happened in a real car, I’d panic. I’m a good driver, but I wasn’t ‘in the loop’ enough to suddenly take over. It’s the worst of both worlds.”
This response captures the core problem exactly. The system demanded they be both fully disengaged and fully prepared to engage at a moment’s notice—a psychological impossibility.
The Core Lesson: Designing for Liability is Designing for Failure
My little experiment confirmed my worst fears. A user interface designed primarily to shield a company from lawsuits does not make users safer. In fact, it does the opposite. It creates a brittle system that relies on a flawed and panicked human to be the final backstop.
This isn’t just about AVs. It’s a lesson for any complex, automated system. When we design for legal compliance first and human experience second, we create products that are, at best, annoying, and at worst, lethally dangerous. The 2025 AV Accord is on a path to codifying bad design into international law.
A Better Way Forward: Principles for Sane AV Interaction
So, what’s the alternative? It’s not to remove all user interaction, but to make it meaningful, intuitive, and rare. Instead of terrifying popups, we need a system built on trust and clarity.
- Graceful Degradation, Not Abrupt Handoffs: The car shouldn’t just scream for help. It should communicate its intentions clearly and early. “Heavy traffic ahead, preparing to slow down.” “Construction detected, merging left in 10 seconds.” This keeps the human in the loop without demanding they take command.
- Preference, Not Prescription: Let users set their driving “style” beforehand. Are you a cautious driver? Aggressive? The car uses that profile to make in-the-moment decisions, and you agree to the profile in a calm, non-driving context (like your garage). A rough sketch of how these first two principles could look follows this list.
- Plain Language, Always: No legalese. No “risk mitigation protocols.” Just simple, clear communication. “The car will now change lanes. To cancel, touch the screen.”
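Here’s that sketch, in the same browser-demo spirit as the popup code above. Everything in it is hypothetical: the profile fields, lead times, messages, and one-touch cancel are assumptions meant to contrast with the consent-dialog model, not a real vehicle API.

```ts
// Illustrative sketch: announce intent in plain language, decide using a
// pre-agreed profile, and offer a single obvious veto instead of a consent dialog.
type DrivingStyle = "cautious" | "balanced" | "assertive";

// Set once, in a calm non-driving context; no per-maneuver consent forms.
interface DriverProfile {
  style: DrivingStyle;
  maxComfortableDecel: number; // m/s^2, used when choosing between braking and merging
}

interface Intent {
  message: string;    // plain language, no "risk mitigation protocols"
  leadTimeMs: number; // announce early, not at the last second
}

// The car decides using the stored profile, then tells the driver what it will do.
function planLaneChange(profile: DriverProfile): Intent {
  const leadTimeMs = profile.style === "cautious" ? 10_000 : 5_000;
  return {
    message: `Construction ahead. Merging left in ${leadTimeMs / 1000} seconds. Touch the screen to cancel.`,
    leadTimeMs,
  };
}

function announce(intent: Intent, onCancel: () => void): void {
  display(intent.message);
  // One-touch veto: a single tap cancels the maneuver. No options to parse,
  // no countdown bar, no liability transfer.
  document.addEventListener("pointerdown", onCancel, { once: true });
}

// Stand-in for whatever dashboard and voice output the vehicle actually uses.
function display(message: string): void {
  console.log(`[AV] ${message}`);
}
```

The contrast with the Nightmare Popup is the whole point: the driver expresses intent once, the car communicates early and in plain language, and the only in-the-moment control is a single, obvious veto.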
The road to our autonomous future is being paved right now. The decisions we make today—the standards we set, the assumptions we challenge—will determine whether that future is one of seamless convenience or one of anxiety-inducing, liability-dodging popups. As designers, engineers, and future passengers, we have a responsibility to advocate for the human on the other side of the screen. Let’s build systems we can trust, not systems we have to fight.