In this article, I'm going to tell you about automating corporate strategy using artificial intelligence, and look at some recent progress in automatically generating strategies in the face of uncertainty.
Every day, progress in artificial intelligence chips away at tasks currently performed only by humans, and it's worthwhile to take a short-term view of what this all means for your company.
Games like chess have been tackled by artificial intelligence with amazing results, but there has long been a big gap between those games, where everything about the game state and the consequences of each move is known before making a decision, and real life, where, as in poker, decision-makers have only partial information, and the quality and quantity of the information used to make decisions varies wildly. We humans face this situation of high uncertainty every time we cross the street or eat a hamburger, and it doesn't seem to bother us. Until recently, though, computers have had a lot of trouble with games that give the decision-maker incomplete information about the state of the game. Developing strategies in real time is also harder than deciding whether to bet or fold: strategic plans can have multiple steps that need to be properly sequenced, and these plans can include contingency plans.
Artificial intelligence is very hyped up, and for good reasons, but many pop-culture information sources lose the "how" of it all and instead focus on the dream of what may come next in some far-off future. Vaporware is common in the industry. This is not a good state of affairs, because regular people don't see the connection between the research and the resulting products, and people fear what they don't understand.
I am a practitioner in the artificial intelligence field, and my problem with futurism fever is its endless focus on the distant future, devoid of a simple and rational view of reality in the here and now. The tyranny of the rocket equation held back the futuristic promises of spaceflight for a generation, and the progress of artificial intelligence is similarly limited by the inertia that all disruptive technology faces. The fear of change brought on by progress in the artificial intelligence field should not lead society directly to extreme solutions like universal basic income or strong regulation of artificial intelligence.
Artificial intelligence, as it is developed today, is primarily programming, data gathering, and mathematics. It isn't sexy, and it works poorly at first. Sometimes it doesn't work at all. The impressive results you see in demonstrations are the product of a lot of hard work to hide the shortcomings of a narrow machine intelligence. I like to think of the artificial intelligence field like "Charlotte's Web": Wilbur (the artificial intelligence agent) gets all of the attention at the fair, but Charlotte (the programmer) stays up all night spinning the web. When Wilbur gets all the attention, as was intended, you have to ask yourself what exactly the pig did to deserve it. Why didn't the farmer market a magic spider instead of a magic pig? The short answer is that marketers want to promote pigs, but the deeper answer is that too few information sources look behind the curtain to explain the reason for the cool demo, ruining the magic trick but enlightening the audience.
I don't mean to give "real" futurists a hard time, though. Nick Bostrom's excellent book "Superintelligence" is worth reading, and even just watching his TED talk on the potential future dangers of artificial intelligence could change the way you think about these things. Real research is ongoing on the far future of artificial intelligence, on things like the singularity and artificial general intelligence, but that is simply not relevant to your real life right now. It might be someday, but it isn't today. What I want to share with you in this article is the here and now: the gritty reality of compiling code and deploying solutions that companies use to make money, save money, and save time. Specifically, let's look at recent progress in strategic thinking developed by artificial intelligence. Yes, artificial intelligence really is being applied to model the decision making of governments and corporations. An example that comes to mind right away is the Policy Change Index, a machine learning model of China's policy decision making.
Cool Things Are Moving From The Lab Into The Real World
I am claiming that society should be getting more meaningful information about what's happening right now in the artificial intelligence field. Let's back that up with an example: artificial intelligence agents are getting better at hard problems they couldn't solve before, and these stronger capabilities will matter. I'm talking specifically about DeepMind's AlphaStar and the many similar systems that are upgrading the plumbing and the brains that run the world economy. This is going to impact the way services are delivered, because the strategy part of delivering services has until recently relied very heavily on humans at keyboards wearing headsets. It also impacts the finance sector, where by some estimates 80 percent of market activity comes from algorithmic trading agents, and automated strategy plays a major role in how these agents move markets.
DeepMind's AlphaStar has recently been playing online StarCraft 2 matches in Europe against human opponents who don't know they are playing a computer. The base technology within DeepMind's approach to this Real-Time Strategy (RTS) imperfect-information game is a deep neural network trained on past games and then improved by playing against itself. Self-play facilitates learning beyond simply copying decisions made in past matches. The core of the approach is a system that combines supervised learning and reinforcement learning, but the key concept to absorb is that a complicated RTS game only humans could play is now being solved very comprehensively by an artificial intelligence system. A predecessor of AlphaStar, built for playing the game Go, was called AlphaGo. Unlike AlphaGo, AlphaStar needs to develop and execute strategies based on an incomplete set of information. That's a big step forward, into a more businesslike and realistic domain.
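To make the self-play idea concrete, here is a toy sketch, in plain Python, of two agents learning rock-paper-scissors purely by playing against each other, using regret matching. This is emphatically not AlphaStar's actual algorithm (AlphaStar uses deep neural networks and a far more elaborate training setup); it only illustrates how self-play alone, with no human examples, can push agents toward a sound strategy in a small game:

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Payoff for playing action a against action b: 1 win, 0 tie, -1 loss."""
    diff = (a - b) % 3
    return 0 if diff == 0 else (1 if diff == 1 else -1)

def get_strategy(regret_sum):
    """Play each action in proportion to its accumulated positive regret."""
    positive = [max(r, 0.0) for r in regret_sum]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations, seed=0):
    """Two regret-matching agents improve by self-play; returns the
    first agent's average strategy over all iterations."""
    rng = random.Random(seed)
    my_regret = [0.0] * ACTIONS
    opp_regret = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = get_strategy(my_regret)
        opp_strat = get_strategy(opp_regret)
        a = rng.choices(range(ACTIONS), weights=strat)[0]
        b = rng.choices(range(ACTIONS), weights=opp_strat)[0]
        for alt in range(ACTIONS):
            # Regret: how much better each alternative action would have done.
            my_regret[alt] += payoff(alt, b) - payoff(a, b)
            opp_regret[alt] += payoff(alt, a) - payoff(b, a)
        for i in range(ACTIONS):
            strategy_sum[i] += strat[i]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]
```

After enough iterations, the average strategy approaches the uniform mix (1/3, 1/3, 1/3), which is the game's Nash equilibrium: neither copy of the agent can exploit the other, which is exactly the pressure that self-play applies.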
The first set of AlphaStar matches earlier this year was played against professional players in a private, controlled setting, where AlphaStar relied on an amazingly high (superhuman) number of actions per minute to beat the humans. Winning such a difficult incomplete-information game was super impressive, but DeepMind is not done just yet.
Developing agents is sort of like building a rocket and watching someone else fly it around. Over the past few months of following AlphaStar news, I've been entertained to see that casters watch AlphaStar play the way programmers debug their artificial intelligence agents: they look at the successes and marvel at the ideas the agent came up with, and then look at the failures and try to figure out "Why did it do that?" Super wonky low-level in-game mistakes like bad building placement and scouting strategies then tie into super wonky low-level in-agent details like analyzing the sequence of states that led the agent to learn to do what it did.
DeepMind is now trying to make AlphaStar even better by making it play under constraints more like those imposed on a human. DeepMind is looking to train agents like AlphaStar to beat humans at tasks only humans are good at, and they are also making an effort to do it in a way similar to how humans do it, without relying on click-spamming the game or other unfair superhuman "cheats" that rely on being a computer to work. I am very sure this is not the solution to all problems humans solve, but it is cool that some subset of all the stuff we humans do is slipping into the realm of automation. Right now there still seems to be a limit on the set of maps AlphaStar can play, but progress is being made, and we can see that progress marching forward in real time. The initial demonstration of the system played only one of the three StarCraft 2 races (Protoss), but the newer version works for all three races in the game. To me, this is the exciting part of the artificial intelligence field today. Every day the limit of what is possible is going up, and there does not seem to be an end in sight.
Assuming the aforementioned progress on machine learning-based strategy generation is within reach, what does an automated corporate strategy look like? In my experience, government and corporate projects limit their scope to modeling one part of corporate strategy. Be it credit risk models, recommender systems, customer segmentation, legislative risk models, or trading algorithms, these scope-limited approaches benefit from being easily abstracted into a box in a flowchart with a name on it. Rather than holistic solutions that model a whole company, the most popular kind of project today is a tool that serves management. Tools for forecasting and decision making are very popular right now, and I predict that at some point these smaller bites at the elephant will grow into holistic solutions. The pushback I have heard and seen is that ERP systems promised this sort of holistic strategy generation but didn't deliver, and that the technical debt in the current infrastructure is best addressed by incremental changes in how things work.
Automation in corporate strategy has been around for a long time, but with recent advances in artificial intelligence, it should get even better. Companies are organized a certain way, with specific Key Performance Indicators (KPIs) and plans for improving those KPIs, and the ideas above should lead you to the conclusion that corporate strategy development is going to benefit from more automation, and specifically from artificial intelligence. Expect artificial intelligence to creep into corporate strategy discussions right about now.
There Is Still So Much Work To Do
To build a system that generates strategies, a good starting point is to develop a simulator that defines the boundaries of the world where the agent can play around and make decisions. The agent also needs lots of data on past decisions and their outcomes in order to gain some experience. Building a simulator involves deciding how much of the world to simulate. The approach DeepMind followed to avoid the large gap between a dumb newborn agent and a beginner was to first teach the agent to mimic the historical data, and then to pivot into playing against copies of itself once it was roughly on the right track.
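As a sketch of what that starting point can look like, here is a deliberately tiny simulator with the usual reset/step interface, plus a "phase one" behavioral-cloning step that mimics logged (state, action) pairs by majority vote. Every name here is hypothetical and the world model is a toy; a real system would replace the majority vote with a trained model and follow it with a self-play phase:

```python
import random
from collections import defaultdict

class ToyMarketEnv:
    """A deliberately tiny simulator: each round the world is 'good' or 'bad',
    and investing (action 1) pays off only in good states."""
    def __init__(self, rounds=5, seed=0):
        self.rounds = rounds
        self.rng = random.Random(seed)

    def reset(self):
        self.t = 0
        self.state = self.rng.choice(["good", "bad"])
        return self.state

    def step(self, action):
        # Reward depends on the state the agent acted in, then the world moves on.
        reward = 1 if (action == 1 and self.state == "good") else 0
        self.t += 1
        done = self.t >= self.rounds
        self.state = self.rng.choice(["good", "bad"])
        return self.state, reward, done

def behavioral_clone(logged_pairs):
    """Phase one: mimic historical (state, action) pairs by majority vote."""
    counts = defaultdict(lambda: defaultdict(int))
    for state, action in logged_pairs:
        counts[state][action] += 1
    return {s: max(acts, key=acts.get) for s, acts in counts.items()}

def rollout(env, policy):
    """Run the cloned policy in the simulator and total up the reward."""
    state, total, done = env.reset(), 0, False
    while not done:
        state, reward, done = env.step(policy.get(state, 0))
        total += reward
    return total
```

The point of the sketch is the shape of the process: the simulator bounds the world, the historical data gets the agent on the right track, and only then does self-play take over.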
Some key challenges holding back the broader adoption of artificial intelligence for autonomous business strategy development and execution are interpretability, dataset handling tools, automated machine learning, and bias.
Let's first talk about interpretability. We humans need to understand what the machine learned from the data, how it thinks, and why it makes certain decisions. Researchers are now working to address these interpretability issues. Some interesting recent work on this topic is the paper "A Unified Approach to Interpreting Model Predictions" and its open-source library, shap.
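shap attributes a model's prediction to its input features using Shapley values. As a much simpler cousin of that idea, here is a plain-Python sketch of permutation importance: shuffle one feature column and measure how much accuracy drops. The model and data below are made up for illustration only:

```python
import random

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == label for x, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, rng):
    """Accuracy drop when one feature column is randomly shuffled."""
    base = accuracy(model, X, y)
    shuffled = [x[feature] for x in X]
    rng.shuffle(shuffled)
    X_perm = [list(x) for x in X]
    for row, value in zip(X_perm, shuffled):
        row[feature] = value
    return base - accuracy(model, X_perm, y)

# Made-up data: feature 0 fully determines the label, feature 1 is noise.
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]
model = lambda x: 1 if x[0] > 0.5 else 0  # a stand-in for a trained model

imp0 = permutation_importance(model, X, y, 0, random.Random(0))
imp1 = permutation_importance(model, X, y, 1, random.Random(0))
```

Here the model ignores feature 1 entirely, so shuffling it costs nothing, while shuffling feature 0 destroys most of the accuracy. That contrast, explained to a human, is the smallest unit of interpretability.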
Beyond interpretability, another challenge is wrangling datasets for training these artificial intelligence agents and labeling the data without too much human effort. Some major efforts in that direction should address most of the problem. One noteworthy project for labeling, transforming, and organizing datasets is snorkel.org, which has a wide corporate user base and US Government support. Pre-built datasets and models are also becoming more available as they are put into the public domain by researchers, corporations, and governments adopting open data policies.
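The core idea behind snorkel is weak supervision: instead of hand-labeling every example, you write many noisy heuristic "labeling functions" and combine their votes into training labels. The sketch below uses a simple majority vote over hypothetical spam-detection heuristics I made up for illustration; snorkel itself goes further and fits a generative model that weighs each labeling function by its estimated accuracy:

```python
ABSTAIN = -1
SPAM, NOT_SPAM = 1, 0

# Hypothetical labeling functions: each is a noisy heuristic that votes or abstains.
def lf_contains_link(text):
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_all_caps(text):
    words = text.split()
    return SPAM if words and all(w.isupper() for w in words) else ABSTAIN

def lf_polite_greeting(text):
    return NOT_SPAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

def weak_label(text, lfs):
    """Combine the non-abstaining votes by simple majority."""
    votes = [vote for vote in (lf(text) for lf in lfs) if vote != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN
```

Each heuristic is cheap to write and individually unreliable; the leverage comes from combining many of them across a large unlabeled corpus.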
A third challenge, on top of interpretability and dataset preparation, is automating the process of building machine learning models, which today often requires a data scientist, a machine learning engineer, and a DevOps engineer, as well as a project manager to keep the trains running on time. There are many active automated machine learning projects, such as TPOT, that aim to remove these humans from the process as much as possible. This automation is not yet a solved problem, and it gives you a realistic sense of the threads artificial intelligence researchers are pulling on.
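Stripped to its essence, automated machine learning is a search over a space of model configurations, keeping whichever scores best on the data. TPOT does this with genetic programming over whole scikit-learn pipelines; the toy sketch below shows the same idea for the smallest possible "configuration space," a single classification threshold:

```python
def fit_best_threshold(X, y, candidates):
    """Try each candidate threshold and keep the most accurate one.
    This is the kernel of automated machine learning; TPOT searches a far
    richer space of whole pipelines using genetic programming."""
    def score(threshold):
        predictions = [1 if x > threshold else 0 for x in X]
        return sum(p == label for p, label in zip(predictions, y)) / len(y)
    return max(candidates, key=score)
```

Everything the humans currently do, from feature engineering to hyperparameter tuning, amounts to expanding that candidate space and searching it more cleverly.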
A fourth challenge I want to bring to your attention is bias. Artificial intelligence systems are very good at learning, and perpetuating, bias that exists in observed data. This is a big and unsolved area of machine learning, and it is holding some key business processes back from easily adopting strategies generated using artificial intelligence. The process of verifying that certain biases do not exist in the generated advice adds significant cost to machine learning projects of this nature.
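One concrete first-pass verification step is checking demographic parity: whether the rate of favorable decisions differs across groups. A minimal sketch, assuming binary decisions and one group label per decision:

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rate between any two groups.
    decisions: list of 0/1 outcomes; groups: the group label per decision."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())
```

A large gap doesn't prove unfairness on its own, and parity is only one of several competing fairness criteria, but it is a cheap signal that the generated advice needs a closer look before it ships.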
Rather than waiting for all of these problems to be fully solved, the world is moving ahead with adoption of artificial intelligence at scale, and I believe the training wheels will come off slowly, as these systems become more commonplace within corporate infrastructure.
This article and those that follow should help you understand how the field of artificial intelligence is coming along, and help you get a sense of what this all means to you right now. Let me briefly summarize the five main takeaways: