
Understanding AI Complexity Through the Lens of Cybersecurity and Project Management Concepts

05.19.2020 | By Mark Speyers
 

Thanks to the movies and the media, the public understands AI as having grand potential intermixed with enormous complexity. The worst-case scenario is that AI takes over the world and either enslaves or kills off humans. It would be easy to dismiss this somewhat emotional sense of the risk of AI as being based mostly on the knowledge gap between practitioners and non-practitioners. However, in my decade of interacting with both groups, I would argue that even AI practitioners lack conceptual clarity on the actual risks of AI, because those risks are less Hollywood-esque and more actuarial in their excitement level. We shouldn't be surprised that the rise of AI didn't come with a corresponding risk framework to guide its progress, but the AI industry should be looking for one to apply.

Cybersecurity risk models apply to AI risk assessments

AI practitioners and non-practitioners alike could gain insight into the risks of AI by studying how risk management applies to the cybersecurity domain. In my experience, cyber folks have a much more deeply ingrained cultural understanding of risk that everyone in AI should consider adopting to some degree. Rather than recommend that you obtain a master's degree in cybersecurity, I'm going to give you the CliffsNotes version and my sense of how to organize the risks AI poses to organizations.

Threat, vulnerability, and impact

The cybersecurity domain has a helpful conceptual framework for risk, combining quantitative and qualitative analysis. Quantitative risk is largely the product of three factors: threat x vulnerability x impact. Threats are entities that stand to gain when you are harmed in any manner; serious threats are entities with both the intent and the means to attack you. Vulnerabilities are the weaknesses your enemy studies and exploits to attack you most effectively. In the cybersecurity realm, these vulnerabilities arise mostly out of knowledge deficits, e.g., how to code software without introducing flaws or how to avoid unwittingly cooperating with your attackers through social engineering.

The last issue is the potential impact of an attack. What if your enemies got access to your most valuable data? What if you don’t even know what data is most valuable?
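To make the multiplication concrete, here is a minimal sketch in Python. The 1-to-5 rating scales and the `risk_score` function are illustrative assumptions on my part, not a standard from the cybersecurity literature; real risk models weight and calibrate these factors far more carefully.

```python
# Illustrative sketch only: the 1-5 scales and the simple product are
# assumptions for demonstration, not a standard from any framework.
def risk_score(threat: int, vulnerability: int, impact: int) -> int:
    """Quantitative risk as the product: threat x vulnerability x impact.

    Each factor is rated 1 (negligible) to 5 (severe), so scores
    range from 1 to 125.
    """
    for name, value in (("threat", threat),
                        ("vulnerability", vulnerability),
                        ("impact", impact)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be 1-5, got {value}")
    return threat * vulnerability * impact


# A capable, motivated attacker (5) exploiting a moderate weakness (3)
# to reach your most valuable data (5) scores 75 out of a possible 125.
print(risk_score(threat=5, vulnerability=3, impact=5))  # 75
```

The multiplicative form captures something the prose above implies: drive any one factor toward its minimum, say by patching a vulnerability, and the overall score collapses even if the other two stay high.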

Unknown unknowns

Although project and program management are driven by quantitative results, a considerable portion of the work involves understanding the effort's qualitative health. Project managers will understand if I invoke the Rumsfeld doctrine; for everyone else, the point is that life in general, and cybersecurity and AI in particular, come with risks that can be categorized by degree of awareness. That is to say: there are known knowns (risks you know you know), known unknowns (to farmers, the weather is a known risk, but what it will bring this year is unknown), unknown knowns (things you thought you knew but didn't), and unknown unknowns (things you don't yet know you don't know). These can be abbreviated as KK's, KU's, UK's, and UU's.

Anyone who manages risk daily as a function of their job or business will tell you that the issues keeping them up at night are the unknown unknowns. You think you're doing all the right things, and then terrorists fly planes into your building, one of your drilling rigs in the Gulf of Mexico springs a colossal leak, or your nuclear reactor near the ocean gets flooded by a tsunami. Of course, I'm highlighting the worst possible unknown unknowns, also known as disasters. But unknown unknowns extend across a broad spectrum that includes even trivial matters.

Unknown unknowns are inherently scary, even to those without overactive paranoia. If only there were a way to study unknown unknowns systematically, maybe we could get a night or two of sleep. Fortunately, our friends in the project management discipline have done the heavy lifting, identifying 48 separate subcategories of unknown unknowns and 12 strategies for turning unknown unknowns into known unknowns.

What is the use of taking the Rumsfeld doctrine to this new level of complexity? Project managers track risks, and in order to catalog a risk, it must at least be a known unknown. Unknown unknowns, in contrast, get no treatment in the traditional project management discipline. So, if the goal is to document risk, we need strategies to convert as many unknown unknowns as possible into known unknowns, as the sketch below illustrates.
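As a sketch of why that conversion matters, consider a toy risk register in Python. The `Awareness` enum and the `RiskRegister` class are hypothetical names of my own, but they illustrate the structural point: logging a risk requires articulating it, so by construction only risks that are at least known unknowns can enter the register.

```python
from dataclasses import dataclass, field
from enum import Enum


class Awareness(Enum):
    KNOWN_KNOWN = "KK"      # risks you know you know
    KNOWN_UNKNOWN = "KU"    # known risk, unknown outcome
    UNKNOWN_KNOWN = "UK"    # things you thought you knew but didn't
    UNKNOWN_UNKNOWN = "UU"  # not yet on anyone's radar


@dataclass
class Risk:
    description: str
    awareness: Awareness


@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def log(self, risk: Risk) -> None:
        # A register can only hold risks someone has already articulated;
        # an unknown unknown is, by definition, not loggable.
        if risk.awareness is Awareness.UNKNOWN_UNKNOWN:
            raise ValueError("Convert a UU into at least a KU before logging")
        self.risks.append(risk)


register = RiskRegister()
register.log(Risk("Competitors are investing in AI; targets unclear",
                  Awareness.KNOWN_UNKNOWN))
```

In this framing, the 12 conversion strategies from the project management literature are techniques for getting a risk to the point where a `Risk` entry can be written down at all.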

Re-applying these concepts to AI

To cap this off, let's work through some of these concepts with a focus on AI. For any industry, AI represents an additional threat: competitors using it to out-innovate your organization. There is definite leap-ahead potential here; marketing spend optimized by AI, for example, is a very powerful concept. In the hands of your competitors, AI is a real risk to your organization's future. What would be a related vulnerability? The first is a lack of aptitude for data collection, engineering, and exploitation. If this is an under-invested element of your organization, you should take a step back and strategize how to correct course. A second vulnerability is a lack of data science talent and technology. Time to get HR involved.

AI in the hands of your competitors is one thing, but what about in the hands of traditional enemies of humanity like criminal gangs, organized crime, terrorists, or state-sponsored agitators? It isn't difficult to imagine these organizations using AI to better target their victims, whether through automated phishing or by training models that make that automation more effective and efficient. In this case, AI is an arms race of sorts.

Being one's own worst enemy is also possible in the realm of AI, since doing nothing is always an option. The related vulnerability is a lack of imagination. This takes us back to the Rumsfeld doctrine of understanding risk in terms of the KK's, KU's, UK's, and UU's. The UU's are AI advancements no one in your organization is tracking. A KU might be the fact that your competitors and enemies are investing in AI, but you don't know where they are placing their bets. Both of these represent surveillance issues. A good bit of contemplation has been devoted to AI bias and the risk of profiling, but that, too, is a vulnerability.

It is fair to say that the impact of ignoring the advancement of AI is an existential one. In other words, by not attending to digital transformation and applying AI to mission problems, you risk becoming obsolete as an organization. The entities that threaten your organization are surely incorporating AI into their own operations, and AI makes every threat you face more formidable.

Risk as a field of study is as old as insurance companies, long pre-dating our digitized world. AI is enjoying a lot of attention these days as a new digitization discipline, but practitioners often don’t think about the risks of AI. This is not the case in the cybersecurity and program management domains, which offer useful insights to AI practitioners.

Symphony AyasdiAI has been tackling risk using advanced AI for the past decade. Through SymphonyAI Government Solutions, we offer solutions to predict program/project health and detect fraud in both the commercial world and government (e.g., stimulus fraud, money laundering, and trafficking). Ayasdi technology is used to reduce medical and cybersecurity risk by surfacing known unknowns and unknown unknowns with novel discovery technology backed by over 200 scientific publications.

If you are looking for ways to shore up the gaps in your digital transformation strategy, SymphonyAI Government Solutions stands ready to support you. Get in touch.
