
Why AI Can't Replace Engineers in Safety-Critical Design

November 28, 2025

If you needed brain surgery, who would you trust more?

A surgeon who's performed thousands of operations, or someone who's memorized every lecture on brain surgery but never held a scalpel?

The answer is obvious. Experience under real-world constraints beats theoretical knowledge when lives are on the line.

This is exactly how I think about AI in engineering.

The Pattern-Matching Problem

Large language models like ChatGPT and Claude are trained with one core objective: predict the next token based on patterns in text.

When you ask an LLM to suggest a circuit change, it's not calculating voltage drops using Ohm's law. It's saying "based on all the circuit discussions I've trained on, this kind of suggestion usually appears in this context."

It's pattern-matching on language about circuits, not reasoning from physics itself.
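To make that concrete, here's a deliberately toy sketch (the corpus counts and function names are invented for illustration; this is not how any real model is implemented): one function computes a voltage from Ohm's law and gives the same answer every time, while the other just returns whichever continuation showed up most often in some text.

```python
# Toy illustration: physics-based calculation vs. frequency-based prediction.

def voltage_drop(current_a: float, resistance_ohm: float) -> float:
    """Deterministic physics: V = I * R, the same answer every time."""
    return current_a * resistance_ohm

# A caricature of next-token prediction: return whatever continuation
# appeared most often after this phrase in a (made-up) training corpus.
corpus_counts = {
    "the voltage drop across the resistor is": {
        "small": 120,
        "negligible": 85,
        "about 0.7 V": 40,
    },
}

def most_likely_continuation(prompt: str) -> str:
    continuations = corpus_counts[prompt]
    return max(continuations, key=continuations.get)

print(voltage_drop(0.5, 10.0))  # 5.0 V, always
print(most_likely_continuation("the voltage drop across the resistor is"))  # "small"
```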

I've seen this firsthand. I once got one or two signals wrong in the power electronics section of an ATE board. Those mistakes cost me days of debugging, review, and rework. In another design, a single footprint mismatch rendered the entire PCB useless.

AI wouldn't catch these errors without an engineer guiding it to look at the right things.

Great Understanding, Poor Execution

Here's what I've noticed: AI can explain circuits brilliantly. It synthesizes information from textbooks, application notes, and forum discussions.

But when it comes to execution—deciding where to inject a test signal in a specific topology—it falls short.

The explanation task is fundamentally pattern-matching. The execution task requires something different: looking at your particular circuit, recognizing which node breaks the loop without disturbing the operating point, understanding parasitics that might cause measurement artifacts.

That's a chain of reasoning grounded in specific context, not general principles.

Engineers Use Logic, AI Uses Probability

Engineering decisions follow unbending logic. Code does exactly what its logic dictates, without exception (even when the output isn't what we want, it is still faithful to its own rules). The universe operates on laws; without them, everything would collapse.

AI in its current form doesn't have hard-set rules at its fundamental reasoning level.

Yes, engineers use probability in Monte Carlo analysis and reliability predictions. But we're quantifying variation around known physical laws—not guessing at the laws themselves.

We analyze likelihoods of resistor tolerances and temperature variations because we can't simulate every possible state. We need probabilities to plan for uncertainty, not to determine whether Kirchhoff's laws hold.
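As a minimal sketch of what that looks like, assume a 10 V divider built from 1% resistors (the values here are made up for illustration): the divider equation never changes; only the component values are sampled within their tolerance bands.

```python
import random

V_IN = 10.0                            # ideal source voltage
R1_NOM, R2_NOM = 10_000.0, 10_000.0    # nominal resistances in ohms
TOL = 0.01                             # 1% tolerance

def sample_resistor(nominal: float, tol: float) -> float:
    """Draw a resistor value uniformly within its tolerance band."""
    return nominal * random.uniform(1 - tol, 1 + tol)

# Monte Carlo: the physical law (the divider equation) is fixed;
# only the component values vary run to run.
outputs = []
for _ in range(10_000):
    r1 = sample_resistor(R1_NOM, TOL)
    r2 = sample_resistor(R2_NOM, TOL)
    outputs.append(V_IN * r2 / (r1 + r2))

mean = sum(outputs) / len(outputs)
spread = max(outputs) - min(outputs)
print(f"mean = {mean:.4f} V, worst-case spread = {spread * 1000:.1f} mV")
```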

LLMs operate differently. They have embedded text patterns about physics, not embedded physics itself. There's no conservation of energy constraint in token prediction. No Kirchhoff's laws enforcing consistency.

What Happens When We Replace Logic with "Maybe"

When you replace deterministic reasoning with probabilistic pattern-matching, you end up with systems that sound right but may not act right or be safe to implement.

We can't have things depending on "maybe." Inconsistent outcomes aren't acceptable for safety-critical systems or highly logic-dependent designs like circuit boards.

Even deterministic software has given us brutal lessons. The Therac-25 radiation machine massively overdosed patients when hardware safety interlocks were removed in favor of software checks. The Boeing 737 MAX crashes happened when MCAS software reacted to faulty sensor data.

Those systems weren't probabilistic LLMs—they were deterministic control software designed by experts.

Where AI Actually Helps

I'm not anti-AI. I train models and use AI heavily.

AI (modern LLMs) excels at information retrieval, documentation, and explaining what's happening in a circuit. It can draft ECO documents, suggest candidate parts, and help write Python scripts for repeated simulations.
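As an example of the kind of script I mean, here's a hedged sketch of a parameter sweep. The component values and the simulate() stub are placeholders, not a real simulator API; an LLM is genuinely good at drafting this sort of scaffolding, while the engineer supplies the actual simulation call and judges the results.

```python
import itertools

def simulate(r_load_ohm: float, c_filter_f: float) -> float:
    """Placeholder: return a ripple metric for one operating point.

    In a real script this would launch the simulator and parse its output.
    """
    return 1.0 / (r_load_ohm * c_filter_f)

# Sweep two component values and record a result for each combination.
r_values = [5.0, 10.0, 20.0]          # ohms
c_values = [10e-6, 22e-6, 47e-6]      # farads

for r, c in itertools.product(r_values, c_values):
    ripple = simulate(r, c)
    print(f"R={r:>5.1f} ohm, C={c * 1e6:>4.0f} uF -> ripple metric {ripple:.1f}")
```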

But the engineer remains responsible for the final decision.

You decide what's relevant. You check if it's correct. You decide how to act on it.

AI is a genius intern with infinite memory and almost zero judgment (there is some reasoning in there, but it still needs more training from humans).

The Non-Negotiable Human Element

Humans can step outside the framework entirely. When your simulation says one thing but your bench measurement says another, you can question whether the model itself is wrong, whether the measurement technique is flawed, whether some buried assumption doesn't hold.

You can invent a new mental model on the spot, test it, refine it.

AI can't do that reliably enough for us to fully trust it. It's optimizing for the most probable next token given everything it's seen. If the training data consistently got something wrong, the model will confidently interpolate across that gap.

AI can't notice the absence of something it was never shown.

When something goes wrong, it won't be "the AI did it." It'll be the engineer who signed off, the company that chose to trust a black box without proper oversight.

You don't fire the surgeon because you bought a better library. You give the surgeon better tools.

AI should sit beside us. It just doesn't get to hold the scalpel alone.
