What I Learned in creating a program about AI & Business at Hyper Island (part 2/3)

HYPER ISLAND
8 min read · Mar 31, 2022

--

Author: Dano Marr

Part Two: What is AI’s problem, really?

This long read is published as a three-part series. I suggest you check out part one before continuing.

This part is about the challenges of AI.

So now let’s get on to the human-written part of this article.

Technology

What is AI?

AI (or artificial intelligence), simply put, is an umbrella term for computer programs that can do certain tasks at a human level or better. We have been playing with AI since the 1950s, and every time we get to know these programs a little better, they lose a little of that fear factor, and a little of that magic, and become a bit more of what they actually are: tools. We stop calling it AI and start calling it software. That is the really interesting thing about AI: it is always something happening in the future, but as soon as we meet it and start to understand it, we just refer to it as a tool. It becomes just that, a tool. And the purpose of tools is to help us solve problems.

So, what is the problem?

That is the question businesses are asking, and it is one of the most important things for us to figure out. What is the problem? Can we define it and put boundaries around it? It is in framing the problem that we understand the context of what needs to be done, and it isn’t until we understand the context and the problem that we can begin to think about potential solutions. A lot of the time, the problem is waste: wasted energy, wasted time, wasted resources that could be optimized. That is the promise of AI: the potential to optimize these things.

What is the value of time? Especially, what is the value of a person’s time? If a doctor at a hospital is spending half of her time filling in forms, wouldn’t it be a better use of her time to meet more patients, connect with other hospital staff, or even rest a little, so she can bring as much energy as she can to her work? This is something AI can help with by automating a lot of the form filling that happens at hospitals. And it isn’t only at the individual level. Amazon, for example, has automated its fulfillment centers using a fleet of robots and a set of algorithms that automatically figure out the best way to get from A to B, so that the delivery happens today rather than next week. The challenge is that human beings, smart as we are, take a lot more time to reach our conclusions than a computer that has been optimized to find the best route.
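To make that route-finding idea a bit more concrete, here is a minimal sketch of how a shortest-path algorithm picks the quickest way from A to B. The warehouse layout, station names, and travel times below are invented purely for illustration; this is not how Amazon’s actual systems are built.

```python
import heapq

# Hypothetical warehouse floor: travel times in seconds between stations.
# Purely illustrative data, not a real facility layout.
floor = {
    "A": {"packing": 4, "aisle_3": 2},
    "aisle_3": {"packing": 1, "B": 7},
    "packing": {"B": 3},
    "B": {},
}

def shortest_time(graph, start, goal):
    """Dijkstra's algorithm: return the minimal travel time from start to goal."""
    queue = [(0, start)]      # (accumulated time, station)
    best = {start: 0}
    while queue:
        time, station = heapq.heappop(queue)
        if station == goal:
            return time
        for neighbor, cost in graph[station].items():
            new_time = time + cost
            if new_time < best.get(neighbor, float("inf")):
                best[neighbor] = new_time
                heapq.heappush(queue, (new_time, neighbor))
    return None  # goal unreachable

print(shortest_time(floor, "A", "B"))  # -> 6, via A -> aisle_3 -> packing -> B
```

A computer evaluates every candidate route like this in milliseconds, which is exactly the kind of optimization the paragraph above is pointing at.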

So what is it that we as human beings are going to be doing? What is important for us? Well, there is a very large gap between people who understand AI and people who do not, and that is an essential, meaningful problem. When things are accelerating this fast, and there are people who do not understand what is happening or how to engage with it and be part of the change, that matters a great deal. Every area you go into has its own language. In any domain, sector, or field there is a language of expertise, and there is also a language of AI.

Language of AI

Our part in this picture, at the AI Business Consultant program, is to be people who can translate the language of AI into “another language”. That could be for a board of executives at a manufacturing company or a real estate agency, or for somebody working at a bank. Not everybody in leadership necessarily understands what AI is or how to speak the language. What is possible with it, and what is not? We need these bridge builders: people who have domain expertise in one area, but who have also come to understand a little about what artificial intelligence can do for a business.

This role of communicator is really essential: somebody who can help a member of the executive board communicate with the data scientists and engineers, and who can also talk with the project manager, the salespeople, the customer success people, the marketing people, and the developers. All of these fields are related, yet each has developed its own shorthand and its own way of communicating at an expert level inside its domain, which makes it very difficult for people to understand one another. The role of an AI business consultant is to identify what the problem is, to talk with anybody at any level about the problem from their point of view, and to bring these things together so that we can unlock new potential and optimize our time, energy, and resources.

Processes

This brings me to the “processes” part. All of these different domains are connected. All of us are connected. We are part of systems: living systems, dynamic systems. A lot of people think about an organization and picture the org chart, when in reality what we really are is a lot of different relationships that have been organized. So how do we organize, and how are we related? This is fundamental to the way we choose to work. We also have the ability to understand how systems are built and how they work together, and once we understand that, we can start to influence it.

I think the fundamental question here is, “how do we relate?” Right now, we’re relating through the medium of an article. But you and I have a relationship of some kind outside of this article. And not just how do you and I relate, but how do I relate to myself? How do I relate to my tools? How do I relate to my company? What are all the relationships happening beyond me? These relationships are the things that create the systems we live in. Being able to understand systems is essential for participating and interacting in the world today. It is also super important for engaging with questions such as: “What is AI going to do to us?” “What is it going to do for us?”

This is something I think is really important: understanding relationships, and that all of us are connected. All of our things are now connected too; in a weird way, it’s almost like we’re going back to animism. The thing is, since everything is connected, the choices we make, what we choose to focus on, and where we choose to put our energy all have an effect outside of us. The things we choose to focus on grow, whether it is an action I choose to take on a personal level, a project we choose to take on as a team, or an initiative we choose to move on as an organization. These things have ripple effects that impact not just our customers, but our customers’ customers, and their families, communities, and groups. And not just at the human level: we’re also talking about effects on the environment and on different ecosystems.

This is what’s so crazy about today, and also so exciting: the choices that we make are active participants in an emerging system. And if we’re going to influence the system, it’s really important that we get good data.

This brings us back to AI. When I talk about good data, I mean it in a couple of different ways. When we’re defining the problem, we need to cover a lot of different angles, and we need a lot of different perspectives. Otherwise we’re leaving ourselves open to blind spots and ignorance that could have effects later on in the system.

It is super important that we have a variety of perspectives: people from different backgrounds, with different points of view on the problem, employed at different levels of the organization. People who are being served, and people who are working on the front lines. All of these people have a unique vantage point that is very meaningful for building a clear understanding of what the problem really is.

When we get into training the AI, we need data that is clean and as free from bias as possible, because the quality that goes into the system affects the quality that comes out of the system. This is where feedback is super, super important. How on Earth are we going to know what our ignorances are if we’re not open to feedback? One of the things that is essential to developing as a team or as an organization (working with or without AI) is the ability to pick up, receive, and integrate feedback in order to improve the quality of the processes going on.
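As a small, concrete illustration of “quality in, quality out”, here is a sketch of the kind of check you might run on training data before fitting any model. The loan-approval records, field names, and numbers are entirely made up; the point is simply that comparing outcome rates across groups can surface skew worth questioning before the data ever reaches a model.

```python
from collections import Counter

# Hypothetical training records for a loan-approval model.
# Invented for illustration only.
records = [
    {"region": "north", "approved": 1},
    {"region": "north", "approved": 1},
    {"region": "north", "approved": 0},
    {"region": "south", "approved": 0},
    {"region": "south", "approved": 0},
]

def outcome_rate_by_group(rows, group_key, label_key):
    """Compare positive-outcome rates across groups to surface possible skew."""
    totals, positives = Counter(), Counter()
    for row in rows:
        group = row[group_key]
        totals[group] += 1
        positives[group] += row[label_key]
    return {g: positives[g] / totals[g] for g in totals}

print(outcome_rate_by_group(records, "region", "approved"))
# -> {'north': 0.67, 'south': 0.0}: a gap this large is a prompt to ask whether
#    the data, or the process that produced it, is biased before training on it.
```

A check like this is not a guarantee of fairness, but it is one of the feedback loops that helps a team notice its blind spots early.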

This is adaptability.

Our ability to adapt comes from doing things, being open to receiving feedback, and from there deciding: do we change our behavior? Do we change our attitude? Do we change the model? We could be talking about an AI model, where changing it adjusts the output, or about the mental model we carry as human beings. What is the way we look at the world? This again brings us back to the language question, “how do we relate?” The way we look at the world filters the information coming back to us and ultimately influences the kind of choices we’re going to make.

In order to listen to different perspectives, we have to become fluent in different languages, or at least sensitive enough to notice when we’ve entered a new area and receptive to the perspectives that are there. The fundamental skill of communication is about finding a way to connect and understand each other so that we can work together on problems that really matter.

This is where responsibility comes into the picture. We are responsible for what we focus on, and that responsibility extends beyond “what do I choose to do as an individual?”, “what kind of projects do I choose to work on?”, “what kind of causes do I choose to champion?” It is also a team-level responsibility: “what do we stand for?” And a company-level one: “What is our mission?” “What kind of initiatives are we going to support?” And why?

When it comes to governance and ethics in AI, these are our safeguards for why and how we should implement behavior-changing recommendation algorithms, or whether we should deploy a virtual agent that collects information on the customers it talks with, perhaps without their knowledge. Does that person even know they’re talking to a script? Our ethics and governance ought to be grounded in deep-rooted, shared values, because our context is changing faster than our laws can keep up. Everyone has good intentions, but our blindness to how our choices affect the systems we’re all part of can sometimes lead us down a path to more problems.

Our values shape our words. It is said that “words shape worlds”: the words we use are how we cooperate with one another, and the way we communicate comes from the way we see the world.

In the next part, I will talk about the world we imagine together, how to approach new and scary things, and how I wrote this article.
