How to talk to AI, so it can solve your problem
Part 3 of 3, By A M Howcroft
We’ve already decided that problem solving could be better if it were more like a science than an art, and that problems need to be categorized correctly before we jump into solving them, especially if we want AI to help us. Now we need to learn how to talk to a machine. Google has already taught us how to converse with a search algorithm, and we need to extend our skills so we can describe a problem to an AI algorithm. Luckily, generative AI offers an olive branch, making it much easier to bridge the human-to-machine communication gap. Let’s explore how we might instruct an AI system to solve a problem.
The fast-turnaround problems are still yours.
To be very clear, we’re going to exclude categories of problems which a human is still best at solving – such as those that need a very rapid decision, like what to do when a tiger walks into your office. We will also exclude the ‘solving world hunger’ or ‘curing cancer’ challenges that require international, cross-discipline, government, and private-sector collaboration. Instead, we’ll focus on problems for organizations that are complex but solvable, such as ‘How do I better manage inventory in my warehouse?’ or ‘Which products should I allocate to which customers to maximize my profit and minimize food waste?’.
Start with a simple summary.
The best place to start is with a brief description of the problem or challenge. As the King of Hearts famously says in Alice’s Adventures in Wonderland, “Begin at the beginning and go on till you come to the end: then stop." If only things were that easy! I find the summary is often the last thing I write – how do you know what to summarize until it has been written? However, when it comes to defining a challenge, the summary is a good place to start, even if you edit it later (which you will). Start by writing a short paragraph that encapsulates the essence of the challenge. Like this:
We need to improve truck utilization in our Midwest distribution centers. Presently, more than half the trucks go out with only 60-70% of their capacity filled. Since we spent more than $10M last year on transportation, the savings could be substantial if we could improve utilization. However, we still need to deliver within the required customer open windows, and without affecting the remaining shelf-life of delivered products.
I know this doesn’t look very much like instructions for an AI system, yet, but we will get to that. We need to start by thinking clearly first, as humans.
Scale: how big is the problem?
Next, we need to describe the scale of the challenge. How many trucks are leaving the warehouse every day? How many customers are we delivering to? How many orders are on each truck? Understanding the size of the operation, and working out how many moving parts are involved, is a critical part of the process. It also helps us decide whether the problem is worth solving. The challenge described above would look very different if there were two vehicles leaving the warehouse every day instead of 50 trucks. Again, this is a paragraph or two in plain business language that lists the key assets and their interactions.
Current Approach: how do we solve it today?
Once we understand the size and shape of the challenge, we look at how it is managed today – not to allocate blame for current underperformance, but to establish a much better understanding of the issues and the workarounds people use to accomplish their tasks. Talk with those most closely involved and document their roles, the software tools they use, and how long tasks take, and learn the key decisions and how they are made. Be careful not to think about solutions yet – that job will go to the AI – our task is to document the challenge. Again, a paragraph or two in plain business language is enough.
Impact: if we solved this, how much value would it bring?
Let’s look at how valuable it would be to solve the problem. This is where we capture the costs involved and calculate potential returns. In the example above, we might look at the average cost per truck, and document any differences between our internal fleet and third-party carriers. We must estimate savings, even if we don’t know how to achieve them. For example, if we could increase the utilization rate to 90%, how many truck journeys would that save? What impact would it have on our $10M transportation costs? It’s also great to extrapolate – how much would the savings be per year if we rolled this out across the entire region? Again, documenting this really helps us understand the opportunity.
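The impact arithmetic above can be sketched in a few lines of code. This is a minimal, illustrative calculation: the fleet size of 50 trucks and the 65% current utilization (the midpoint of the 60-70% range) are assumptions taken from the example, not real figures.

```python
# Back-of-envelope impact estimate for the truck-utilization example.
# All inputs are illustrative assumptions drawn from the article's scenario.

def journeys_saved(daily_trucks: int, current_util: float, target_util: float) -> float:
    """Trucks needed shrinks in proportion to the utilization gained."""
    trucks_needed = daily_trucks * current_util / target_util
    return daily_trucks - trucks_needed

daily_trucks = 50                    # trucks leaving the warehouse per day (assumed)
current_util = 0.65                  # midpoint of the 60-70% range in the summary
target_util = 0.90                   # the 90% target discussed above
annual_transport_cost = 10_000_000   # the $10M figure from the summary

saved_per_day = journeys_saved(daily_trucks, current_util, target_util)
saved_fraction = saved_per_day / daily_trucks
annual_saving = annual_transport_cost * saved_fraction

print(f"Journeys saved per day: {saved_per_day:.1f}")
print(f"Potential annual saving: ${annual_saving:,.0f}")
```

Even this rough model shows why documenting the impact matters: moving from roughly 65% to 90% utilization would cut more than a quarter of the truck journeys.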
At this point, we have a good grasp of the challenge: we understand the problem and its scale, we know how we manage it today, and we know the potential impact of finding a better solution. Let’s translate this into something an AI algorithm can understand.
Tables galore: how to think like an algorithm.
Everything we’ve done so far has been in plain business language. If you gave our simple text definition of the challenge to a mathematician, they would start to translate it into terms like objective function, constraints, and parameters.
AI algorithms are fundamentally math-based and operate the same way, following sets of clearly defined instructions. We therefore need to supplement our earlier descriptions with hard facts that can be tested mathematically. A great way to do this is by creating simple tables covering the following areas:
Goals & Metrics
Levers
Constraints
Desired Outputs
These tables are much easier to develop once we have completed our business-language description of the problem – the tables are both supplementary and complementary. For example, we can capture the goals and metrics like this:
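One way to represent such a goals & metrics table is as a small data structure. The rows below are illustrative assumptions based on the truck-utilization example, not a prescribed schema:

```python
# Illustrative Goals & Metrics table for the truck-utilization challenge.
# Baselines and targets are assumptions for the sake of the example.
goals_and_metrics = [
    {"goal": "Increase average truck utilization",
     "metric": "% of truck capacity filled",
     "baseline": "60-70%", "target": "90%"},
    {"goal": "Maintain on-time delivery",
     "metric": "% of deliveries within customer open windows",
     "baseline": "current level", "target": "no degradation"},
    {"goal": "Protect product freshness",
     "metric": "remaining shelf-life on delivery (days)",
     "baseline": "current level", "target": "no degradation"},
]

for row in goals_and_metrics:
    print(f"{row['goal']}: {row['metric']} -> {row['target']}")
```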
Levers come next, and they are critical. Levers are the mechanisms by which we control the outcome of the process in order to achieve our goals. Ultimately, they are the things an AI algorithm will adjust to find the optimal solution. For example:
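A levers table for the warehouse example might look like the sketch below. The specific levers listed are assumptions; a real project would discover its own:

```python
# Illustrative levers: the knobs an algorithm may adjust to reach the goals.
# These entries are assumptions based on the truck-utilization example.
levers = [
    {"lever": "Order-to-truck assignment",
     "description": "which orders are loaded onto which truck"},
    {"lever": "Route sequence",
     "description": "the order in which each truck visits its delivery stops"},
    {"lever": "Departure time",
     "description": "when each truck leaves the distribution center"},
    {"lever": "Carrier choice",
     "description": "internal fleet vs third-party carrier for a given load"},
]

for row in levers:
    print(f"{row['lever']}: {row['description']}")
```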
Next we list our constraints. These often emerge when we talk to the people that deal with the current problem today. It’s a good idea to start this table early and keep extending it as we discover additional constraints. In our experience, the list keeps growing even after we theoretically have the problem defined. That’s because there are many hidden constraints, often known by only a few people who may not be part of the core challenge discovery team.
You’ll notice we don’t have a priority column for our constraints. Instead, we categorize them as hard (laws of physics, legal rules, etc.), soft (can be broken to a mild degree), or guidelines (try to follow these if at all possible).
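The hard/soft/guideline categorization can be captured in the same tabular style. The constraints below are illustrative assumptions for the truck example:

```python
# Illustrative constraints, categorized as described above.
# "hard" must never be broken; "soft" may bend slightly; "guideline" is best-effort.
constraints = [
    {"constraint": "Total load must not exceed truck capacity", "category": "hard"},
    {"constraint": "Drivers must stay within legal hours-of-service limits", "category": "hard"},
    {"constraint": "Deliver within the customer's open window", "category": "soft"},
    {"constraint": "Group nearby customers onto the same route", "category": "guideline"},
]

VALID_CATEGORIES = {"hard", "soft", "guideline"}

# A simple validity check: every constraint must carry a known category,
# so an algorithm knows exactly how strictly to enforce it.
assert all(c["category"] in VALID_CATEGORIES for c in constraints)
hard_constraints = [c for c in constraints if c["category"] == "hard"]
print(f"{len(hard_constraints)} hard constraints must always hold")
```

Keeping the category machine-checkable like this also makes it easy to keep extending the table as hidden constraints surface.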
Finally, we must document the desired outputs we need from the solution. In our case, we might ask for a Daily Load Plan report that shows every truck, the orders it will contain, and the deliveries to be made. Perhaps we could also request a turn-by-turn route plan printed for each driver, or have the route sent via an API to a mobile app. These choices let the AI algorithm know what information it needs to generate as output. We can list these in a table again, or include some sample or mock-up reports.
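A mock-up of a single Daily Load Plan record might look like this. Every field name and value here is an illustrative assumption, not a real schema:

```python
# Sketch of one record in the Daily Load Plan output described above.
# Field names, IDs, and values are illustrative assumptions.
daily_load_plan_entry = {
    "date": "2024-06-01",
    "truck_id": "MW-017",
    "utilization_pct": 91.5,
    "orders": ["ORD-1041", "ORD-1052", "ORD-1077"],
    "stops": [
        {"sequence": 1, "customer": "Customer A", "window": "06:00-09:00"},
        {"sequence": 2, "customer": "Customer B", "window": "08:00-12:00"},
    ],
}

# A mock-up like this doubles as a contract: the algorithm must produce
# every field the planners and drivers will rely on.
print(f"Truck {daily_load_plan_entry['truck_id']}: "
      f"{len(daily_load_plan_entry['orders'])} orders, "
      f"{daily_load_plan_entry['utilization_pct']}% full")
```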
Extras: what else do we need?
There are two other critical items we must have:
Data. If AI is going to solve this problem, it will need relevant data. This doesn’t mean we have to spend two years building a data lake. We can use the same information the human planners use today – with the caveat that the data must be clean and in a machine-readable format. Spreadsheets and SQL databases are fine; handwritten notes are not going to work. Sometimes we do have to clean data to prepare it for machines to read.
The ‘do-er’: we can’t test, validate, or use a system without involving the person who will be its final user. Humans are still very much required! The AI’s job is to be a smart assistant, a co-pilot, an advisor. The human is the final decision-maker, but AI gives that person better options to choose from. The do-er needs to be closely involved in the project.
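The "clean and machine-readable" requirement for the data item above can be made concrete with a tiny check. The file contents and column names below are illustrative assumptions about what a planner's spreadsheet export might contain:

```python
# Minimal sketch of a data-readiness check: can the planners' spreadsheet
# export be read by a machine, and are the key fields present and usable?
# Column names and sample rows are illustrative assumptions.
import csv
import io

sample = io.StringIO(
    "order_id,customer,pallets,delivery_window\n"
    "ORD-1041,Customer A,12,06:00-09:00\n"
    "ORD-1052,Customer B,8,08:00-12:00\n"
)

REQUIRED_COLUMNS = {"order_id", "customer", "pallets", "delivery_window"}

reader = csv.DictReader(sample)
assert REQUIRED_COLUMNS <= set(reader.fieldnames or []), "missing columns"
rows = list(reader)
# Every row needs an order ID and a numeric pallet count to be usable.
assert all(row["order_id"] and row["pallets"].isdigit() for row in rows)
print(f"{len(rows)} orders ready for the algorithm")
```

Checks this simple catch most of the problems that would otherwise surface weeks later, mid-project.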
Light the fuse and stand back.
Let’s summarize what we have created:
A plain-language summary of the challenge
The scale of the operation
The current approach, and the workarounds people use today
The potential impact and value of a better solution
Tables of goals & metrics, levers, constraints, and desired outputs
The data the solution will need, and the ‘do-er’ who will use it
We call this a Challenge Definition. It’s a structured, rigorous way to define a challenge, making problem solving more of a science than an art, which is where we started this series. The approach is repeatable, applicable to any industry, and one of the key benefits is that practitioners get better over time, making rapid problem-solving a core competency of the individual and their organization. It has been intentionally created to get the best out of humans and AI working together.
Time for the acid test: if the Challenge Definition contains everything an AI-algorithm needs to solve a problem, can AI deliver a solution on its own?
Now…and soon.
At SWARM we take Challenge Definitions and turn them into fully working solutions in a matter of weeks, using a combination of our AI platform and a team of data scientists. The process is not fully automated yet, but it’s very close…
This month we released a digital avatar called AVA that can interview people across an organization, in their preferred language, and automatically generate a Challenge Definition report. I’ll include a sample report and a link to the (free) community version of AVA in the comments below, for anyone who wants to learn more. We’re currently training AVA to liaise with IT, so she can request or access the data required to solve a specific challenge. Later this year, we expect the first half of the Challenge Engineering process to be fully automated. Then we need to generate a solution – and large chunks of that process are already automated.
The SWARM Solution platform is already no-code and automatically generates Operational Dashboards to manage solutions. Right now, we still manually build data pipelines to perform extract, transform, and load operations on client data, and test algorithms for performance, but these steps are primed for automation. Only a few advancements are required before AVA will be able to generate fully working solutions for many scenarios, with no developer or data-scientist interaction. Given the pace of AI, my estimate is that full automation of the entire Challenge Engineering process will arrive in 18-24 months.
This is the true vision of Challenge Engineering: chat with a digital agent who will ascertain your problem, and within hours, offer a fully working solution that is more efficient than your current process and integrates with your data, saving millions of dollars. This end-to-end process has not previously been possible, but with advances in AI, software development, and data science, it’s moving rapidly out of science fiction and into reality. While SWARM is the first Challenge Engineering platform in the market, we will not be the last. As AI gets better at solving problems, our responsibility shifts to getting better at defining them.
We believe the near future will see AI partnering with humans to define challenges, and then automatically generating fully working solutions.
P.S. AI agents will collaborate in teams… look out for an article on that front soon!