Black boxes are actually orange, and this is why we have trust issues.
Last week, I was asked to speak at the 2024 AFAC Conference in the Innovation Stream. While there was an abundance of internal screaming, I’m relieved that the focus is not solely on research but also on the preliminary understanding of AI as both a technology and a tool.
Admittedly, it was a dramatic imposter syndrome moment, but then I remember that while I might be book smart, I lack facial recognition and social skills, so there’s a bit of a balance after all. I also realise the more I talk about my excitement for guidance and regulation in innovation, the more people back away slowly.
AI’s journey began in the mid-20th century with the advent of computers capable of performing tasks that typically required human intelligence. From simple calculations to complex problem-solving, AI has come a long way (like me and my mental health, zing!). Early AI systems were rule-based, relying on predefined algorithms to make decisions. However, the rise of machine learning and neural networks in the late 20th and early 21st centuries revolutionised AI, enabling systems to learn from data and improve over time.
Today, AI is ubiquitous, from virtual assistants like Siri and Alexa to sophisticated algorithms that drive autonomous vehicles. However, misconceptions abound. Many people fear that AI will replace humans, leading to job losses (“They took err jerbs!”) and a loss of control (“I’m sorry Dave, I’m afraid I can’t do that.”). In reality, AI is designed to augment human capabilities, not replace them. It excels at processing vast amounts of data quickly and accurately, providing insights that humans might miss.
Think of it like the shopping list you swore you didn’t need to write down.
The integration of AI in wildland firefighting presents unique challenges and opportunities. While it may not be inherently apparent, I can attest that the primary concern of those on the fireline is safety.
So, the question arises: how do we design and operationalise AI to maintain the community’s trust?
Operationally, I’m hearing that AI should not be a decision-maker but a tool for options analysis. It can support decision-making by providing various response options rather than recommending a specific response. This ensures that human expertise remains central to wildland fire management. AI can analyse data from multiple sources, predict fire behaviour, and suggest strategies, but the final decision should always rest with experienced personnel.
A critical aspect of integrating AI is balancing its capabilities with human expertise. AI can process data at unprecedented speeds, but it lacks the nuanced understanding that comes from years of experience. Therefore, it is essential to define AI’s role clearly. It should serve as a support tool, enhancing the decision-making process rather than replacing it.
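To make the “options, not orders” idea a little more concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (the `ResponseOption` structure, the option names, the numbers); the point is simply that the tool surfaces several candidate strategies with their assumptions attached, and the incident controller makes the call.

```python
from dataclasses import dataclass, field

@dataclass
class ResponseOption:
    """One candidate strategy, not a recommendation (hypothetical structure)."""
    name: str                               # e.g. "Direct attack, eastern flank"
    predicted_containment_hours: float      # model's estimate, not a promise
    assumptions: list = field(default_factory=list)  # what the model assumed to get there
    confidence: float = 0.0                 # how sure the model is about its own inputs

def present_options(options):
    """Show every option side by side; the human makes the decision."""
    for opt in sorted(options, key=lambda o: o.predicted_containment_hours):
        print(f"{opt.name}: ~{opt.predicted_containment_hours:.0f} h "
              f"(confidence {opt.confidence:.0%})")
        for assumption in opt.assumptions:
            print(f"  - assumes: {assumption}")

# Usage: the tool analyses and lays out the choices; the incident controller decides.
present_options([
    ResponseOption("Direct attack, eastern flank", 6, ["wind stays below 25 km/h"], 0.7),
    ResponseOption("Backburn from control line", 10, ["crew available by 14:00"], 0.6),
])
```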
One of the significant concerns with AI is its lack of transparency. The term “black box” refers to AI systems whose internal workings are not visible to users. This lack of transparency can lead to mistrust, especially in high-stakes environments like wildland firefighting. On the other hand, a “glass box” design allows users to see and understand how AI makes decisions.
(A fun fact at this point — black boxes aren’t actually black. They’re orange, for visibility after aviation incidents, and I don’t know what annoyed me more — the fact that they’re actually orange, or that nobody really knows why it’s called that when they have never been designed to be black.)
As one person I consulted stated, “People fly all the time but they don’t know how the plane flies.” This analogy, besides being more aviation talk, highlights that while not every decision needs to be dissected, there must be inherent trust in the decision-maker. However, this trust can be broken, as seen in inquiries or royal commissions following incidents.
To maintain trust, it is crucial to know how AI decisions are made. The “glass box” approach should make those decisions available for precise dissection, similar to how human decisions are documented through logbooks, Incident Action Plans (IAPs), tasks, and radio chatter. Black boxes only become relevant when things go wrong, but the hidden workings of AI technologies (models, output designs, parameters) are the critical foundations for trust.
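Here’s a similarly hedged sketch of what a “glass box” record could look like, again with hypothetical names (`DecisionRecord`, `log_decision`, the model details): every AI suggestion gets logged with the model version, inputs, and parameters that produced it, plus the decision the human actually made, in the same spirit as a logbook or IAP entry that an inquiry could pull apart later.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry: what the model saw, how it was configured, what it suggested."""
    model_name: str
    model_version: str
    inputs: dict            # weather, fuel, terrain data fed to the model
    parameters: dict        # configuration used for this particular run
    suggested_options: list # what the tool put on the table
    human_decision: str     # what the incident controller actually chose
    timestamp: str = ""

def log_decision(record, path="decision_log.jsonl"):
    """Append the record to a plain-text log, like a digital logbook."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Usage with made-up values: the suggestion and the human call are stored together.
log_decision(DecisionRecord(
    model_name="fire-spread-model",   # hypothetical model
    model_version="2.3.1",
    inputs={"wind_kmh": 22, "fuel_load": "high", "humidity_pct": 18},
    parameters={"resolution_m": 30, "horizon_hours": 12},
    suggested_options=["Direct attack, eastern flank", "Backburn from control line"],
    human_decision="Backburn from control line",
))
```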
At this stage, I’m starting to sense that an ideal state is AI as a trusty assistant, analysing data from multiple sources, predicting fire behaviour, and suggesting strategies, with experienced firefighters calling the shots at the end of the day.
It’s like having a GPS that suggests multiple routes, but you still get to choose which one to take based on your knowledge of the area and your gut instinct. Sometimes, you disobey it, much to Siri’s chagrin, but you get there in one piece, perhaps a little earlier, perhaps a little later, with your marriage barely intact.
In conclusion, let’s embrace AI as a valuable tool in our firefighting arsenal, but let’s do it with a healthy dose of perspective. We’re not trying to create a world where robots fight bad fires while we sit back and roast marshmallows with the good fire. We’re aiming for a future where AI enhances our abilities, supports our decisions, and helps us keep our communities safe.
But like a majority of things, this is just one of the many, many, many rabbit holes I’ve been diving down.
I just watched a whole bunch of my favourite science fiction movies recently.
You can’t tell though, right?