Let’s start with a confession: I am a nerd.
There, I said it. And not just any nerd, but a policy wonk who thrives on the minutiae of governance, data science, and the ever-evolving landscape of technology. I’m sure it needed no confession anyhow, but better out than in, I suppose?
It’s a title that fits me like a glove, and I’ve come to embrace it wholeheartedly. There’s something incredibly satisfying about diving into the depths of policy frameworks, understanding the nuances of decision science, and exploring the potential of AI in a field as critical as wildland firefighting.
This week, I met with an emergency management knowledge manager who reignited my curiosity streak (thank you, John!), particularly around the need to be an optimistic skeptic when approaching new technologies. The optimistic skeptic believes things will work out, or can be better, but doubts conventional wisdom and blind acceptance of it. While madmen, admen and inventors may all wear this title, it’s not necessarily a bad thing — curiosity drives ingenuity, while tradition makes the past timeless. But I digress — the point is, we had a lot of questions about the ethical design, development and operationalisation of AI.
This was the same wormhole I explored over the grazing table at ACEFA with one of the state’s foremost emergency management leaders and a remarkably insightful responsible data governance ethicist, who challenged me to explore both decision science and data validation approaches. Therein lay a lightbulb moment for the project: understanding decision science is quickly becoming as important as understanding data science, engineering, use, and governance. Decision science is the backbone of effective AI implementation. It’s not enough to have the data; we need to know how to use it to make decisions that are ethical, efficient, and impactful.
At a glance, firefighters often rely on instinct and past experiences when responding to operational incidents. This approach, while sometimes effective, does not always align with established decision-making models and can lead to human error, which is a significant cause of firefighter injuries, property and cultural heritage loss, and irreversible ecological damage.
As a result, I’ve gone down a few relevant rabbit holes on decision science to understand how we firefighters actually make decisions. Here are a few models I’ve started following up on:
- Game Theory: To simulate different fire management strategies and their outcomes.
- Probability Theory: To calculate the expected value of different firefighting actions under uncertainty.
- Analytic Hierarchy Process (AHP): For weighing competing decision criteria through pairwise comparisons, such as the trade-offs between immediate firefighting efforts and long-term ecological impacts.
- Decision Trees: To understand risk by breaking down possible outcomes at each stage of a fire event.
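To make the probability-theory and decision-tree ideas above a little more concrete, here is a minimal sketch of comparing two firefighting actions by expected loss. The action names, probabilities, and loss figures are entirely invented for illustration; this is a toy model, not operational guidance.

```python
# Toy expected-value comparison of firefighting actions under uncertainty.
# All probabilities and loss figures below are hypothetical.

def expected_loss(outcomes):
    """Expected loss of an action: sum of probability * loss over its outcomes."""
    return sum(p * loss for p, loss in outcomes)

# Each action maps to its possible outcomes as (probability, loss) pairs,
# i.e. the branches of a one-level decision tree.
actions = {
    # Direct attack: good odds of quick containment, small chance of escalation.
    "direct_attack": [(0.7, 10), (0.3, 100)],
    # Backburn: more even odds, but a much lower worst-case loss.
    "backburn": [(0.5, 25), (0.5, 40)],
}

for name, outcomes in actions.items():
    print(f"{name}: expected loss = {expected_loss(outcomes):.1f}")

best = min(actions, key=lambda a: expected_loss(actions[a]))
print(f"lowest expected loss: {best}")
```

Even this tiny example shows why the framing matters: the action with the best single outcome (direct attack) is not the one with the lowest expected loss once the escalation branch is priced in.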
At the end of the day, the biggest insight from this wikihole has been that decision science bridges the gap between raw data and actionable insights. It helps us navigate the complexities of AI, ensuring that our decisions are not just data-driven but also contextually relevant and ethically sound. This understanding is crucial, especially in a sector like wildland firefighting, where the stakes are incredibly high and the margin for error is minimal — and where there must be significant checkpoints to manage the ever-present risks in the work we do.
On a different note, one of the most significant insights from my weekend at ACEFA, and from recent consultation sessions, is the widespread misunderstanding of AI across the sector. This misunderstanding manifests in two distinct ways: over-enthusiasm and complete aversion.
On one end of the spectrum, there’s a rush to implement AI without proper planning or understanding, driven by the allure of cutting-edge technology.
On the other end, there’s a fear of the unknown, leading to a complete aversion to AI and its potential benefits.
This dichotomy is problematic. Over-enthusiasm can lead to poorly planned implementations that fail to deliver on their promises, while aversion can result in missed opportunities to enhance our capabilities and improve outcomes. Bridging this gap requires education, communication, and a balanced approach that acknowledges both the potential and the limitations of AI.
In essence, AI should be viewed as just another decision-support system, much like PowerBI creates visualisations from spreadsheets or how weather forecasts inform a day’s fire danger rating. AI is a tool that augments human decision-making by providing data-driven insights and predictive analytics. It doesn’t replace human intuition or judgment but enhances our ability to make informed decisions under pressure.
As I navigate this complex landscape, I’ve found myself distinctly falling into the role of a technologist rather than just a technology enthusiast. This shift has been both surprising and gratifying. As a technologist, my focus is on understanding the practical applications of AI, its integration into existing systems, and its impact on decision-making processes. It’s about moving beyond the excitement of new technology to a more grounded and practical approach.
Interestingly, this shift has also led me to ‘detechnology’ my life in some ways. I’ve reverted to being a luddite: wearing a Casio watch, significantly scaling back my social media presence, and embracing a more minimalist approach to technology. This balance has been refreshing, allowing me to focus on what truly matters without the constant noise of digital distractions; not to mention, it gives me much more time to stare at the wall.
(I’ve read a lot of articles lately saying that boredom is needed for creativity and inspiration, and while this may be the case, it’s much more interesting than doing engagement economics at work…)
On a more personal note, I must acknowledge the incredible support I’ve received from colleagues, friends, and mentors throughout this journey. Each time I open the document and dive into the complexities of AI in wildland firefighting, I am challenged and inspired by the collective knowledge and insights of those around me. Their encouragement and willingness to unpack the issues with me have been invaluable.
This support has been a constant source of my motivation and curiosity, pushing me to continue refining my understanding and expanding my horizons. It’s a reminder that while the journey may be challenging, it is also incredibly rewarding, thanks to the people who walk it with me.
As I continue to work on my paper, I am reminded of the importance of a strategic and thoughtful approach to AI implementation. This paper is not just a collection of insights and recommendations; it is a reflection of my journey, my learnings, and my commitment to making a meaningful impact in the field of wildland firefighting.
The journey ahead is filled with opportunities and challenges. There is much to learn, much to explore, and much to achieve.
Anyhow, on a less nerdy note, I had a delightful time in Tamworth, resurrecting the cowgirl from the pains of concrete and watching some rootin’ tootin’ hillbilly line dancing from the older (and spiritually older) fire folks. I always forget how amazing the fire family is, but then I remember how many people have adopted me from across the state, like Kazza.
I also potentially laughed a bit too hard at the notion of Uncle Ben doing a red light run on the way to Hurry Up and Wait at Sydney Airport for the Tenterfield overnighter in this 4WD Battle Bus too.
Hope you enjoyed the update.
Wondering more about my project? Feel free to read on below or drop me an email.
Until the next one!
Elle.