Human Judgement in an AI World: Why Scrap and Industrial Businesses Still Need a Real Insurance Advisor
Tropolis is an innovative insurance broker dedicated to empowering independent insurance agencies with the tools, technology, and support for accelerated growth. This article is part of a series featuring insights and stories from our valued Tropolis partners and employees.
Every week I hear some version of the same question from owners and CFOs:
“If AI can analyze contracts and policies, why do I still need a human insurance advisor?”
It is a fair question, especially in a world where every vendor claims to “use AI” and every renewal feels more expensive and more complicated than the last.
I am biased, of course. I am a commercial insurance agent who also happens to be a licensed attorney. I use technology heavily, including AI tools, inside my own practice. But I have also seen, in real accounts and real claims, where a purely automated approach breaks down.
This is not an argument against technology. It is an argument for understanding what AI is good at, what it is bad at, and why human judgment, especially when it is informed by legal training, still matters on complex risks like scrap, recycling, and industrial operations.
What AI is Good at in Insurance
AI is genuinely useful in our world, and clients should expect their broker to use it behind the scenes.
1. Data cleanup and pattern spotting
Taking years of loss runs and quickly flagging frequency patterns, injury types, or locations that deserve more attention.
2. Drafting and summarizing
Turning long policy forms into workable summaries, comparing versions of contracts, or organizing underwriter questions and responses.
3. Checklists and reminders
Making sure routine tasks do not get missed, such as certificate renewals, location schedules, and policy audits.
These are “many documents, repetitive logic, lots of data” problems. AI is very good at those.
Where people get into trouble is assuming that because AI can read documents and generate text, it can also replace the hard parts of commercial insurance work: judgment, negotiation, and accountability when something goes wrong.
Where AI Struggles: Context, Conflict, and Real World Consequences
1. Conflicting contracts and unclear power dynamics
Your mill contract, your hauler agreement, your vendor terms, and your lease might all push risk in different directions. On paper, they can each look fine. In the real world, one counterparty has all the leverage, one contract will actually be enforced, and one “standard” indemnity provision will control the rest.
AI can read each contract. It cannot sit across from you and say, “In this relationship, if something goes wrong, here is who gets blamed first, here is who really has the leverage, and here is how your insurance responds in that specific scenario.”
2. Coverage that only works when you connect the dots
Pollution, professional, cyber, crime, umbrella, and manuscript endorsements do not live in separate universes. They interact. A pollution exclusion in your GL might reference coverage that only exists in a stand-alone policy, subject to its own retro date and reporting provisions.
AI can summarize each form. It is much harder for it to responsibly tell you, “Here is the single most likely way this structure will fail you in a claim, and here is how to fix that before a loss.”
3. Credibility with underwriters and claim handlers
When an underwriter is on the fence about your account, or when a claim is ugly and contentious, your broker’s credibility matters. Underwriters and adjusters are human. They remember which agents send sloppy submissions and unrealistic demands, and which agents send organized narratives and reasonable, well-argued positions.
AI can help draft, but it cannot build real world trust over years of negotiations and claim advocacy.
How Legal Training Changes the Conversation
Legal training does not make someone a better person. It does, however, change how you see risk and documents.
When I work with scrap and industrial clients, my attorney brain is doing a few specific things that are hard to automate responsibly.
1. Reading contracts and policies as a system, not as separate documents
Lawyers are trained to ask, “What happens if this goes wrong, and who is in the best position to pay?”
When I look at a client’s world, I am not just reading the insurance policies. I am looking at leases, customer and supplier contracts, transportation agreements, and vendor terms, then matching them against policy language.
The point is not to practice law for the client. The point is to see where risk is silently migrating into places their current structure does not cover.
2. Focusing on scenarios, not just clauses
Most people, including many seasoned professionals, read policies and contracts in a static way. They look at individual clauses and endorsements.
Legal training pushes you to build fact patterns instead. For example: a serious truck accident in another state involving a leased trailer and a subcontract hauler, with a dispute about who loaded the material and who owns it at the time of loss.
When you build out scenarios like that, you see very quickly whether the current program and contracts are aligned or not.
3. Anticipating disputes and how they will actually be resolved
In the real world, cases do not get resolved based only on what the paper “says.” They resolve based on who notices what first, who has the cleanest documentation, and how the story will play in front of a judge, jury, or arbitrator.
An advisor with legal training is more likely to ask, “How will this look if emails and texts get pulled in discovery?” or “What does your internal investigation process look like when something goes wrong?” Those questions change how you set up coverage now, long before a claim.
None of this replaces the role of your own corporate counsel. It does mean your insurance advisor is thinking about your program the way a lawyer thinks about a file, which is a very different lens than “did we shop the market and get three quotes.”
The Right Model is Human Plus Technology, Not One or the Other
The best outcomes I see for clients do not come from ignoring technology. They come from combining good tools with experienced, accountable humans.
In practice, that looks like:
- Using AI and data tools to clean up inputs, spot patterns, and save time on low value work.
- Using human judgment to prioritize what matters, negotiate with underwriters, and explain tradeoffs to leadership.
- Using legal and industry experience to pressure test the program against real world scenarios, not just spreadsheets.
If your broker tells you they “use AI,” that is fine. The better question is: who, specifically, is accountable for the advice you get, and how do they think about your risk?
Questions to Ask Your Broker in an AI World
Whether you are a scrap yard, a recycler, a manufacturer, or any other industrial business, here are a few questions you can put on the table at your next renewal meeting:
1. Who is actually responsible for the design of our program, and what is their background?
2. How do you use technology to improve our program, and where do you intentionally keep humans in the loop?
3. When you look at our contracts and our policies together, what is the single biggest scenario that keeps you up at night?
4. If we had a catastrophic claim tomorrow, what is your plan for working with the carrier and with our legal team?
5. What is one change you would make to our structure this year if budget were not an issue?
You do not need to become an insurance expert overnight. You do deserve an advisor who behaves like one, who can explain the tradeoffs in business terms, and who is willing to own their recommendations when it matters.
Technology will keep getting better. That is a good thing. The stakes for your business are not just technological, though. They are legal, financial, and human. For that, you still need a real person on your side.
This article is for general information and risk management discussion, not legal advice.