The DNA of Work

Demystifying Artificial Intelligence: what you need to know now

August 01, 2023 Season 1 Episode 51

Ever wondered if we should fear a malevolent AI takeover? Reconsider what you know about artificial intelligence (AI) as we host a riveting conversation with Dr. Mike Jackson. As a renowned expert in the field, Dr. Jackson clears the air on misconceptions surrounding AI while shedding light on its potential and the importance of unbiased data and decision making. We also debunk the Hollywood-inspired myth of evil AI entities commanding the world.

We don't stop there but shift gears to discuss the practical applications of AI tools. Discover how AI can be a game-changer in performance appraisals as we delve into ChatGPT with AWA's Director of Consulting, Brad Taylor, and Founder, Andrew Mawson. Not just that, we also recognize the potential of AI in understanding organizational dynamics. The conversation doesn't end without emphasizing the role of collaboration, trust, and leadership in embracing AI and exploring the idea of a 'Chief Workplace Officer' taking the lead on strategizing AI utilization. So join us on this enlightening journey, as we navigate the intriguing world of artificial intelligence together.

AWA Host: Karen Plum

Guests: 

  • Mike Jackson, Non-Executive Chairman, Pre-Empt.life, Founder of Shaping Tomorrow, strategist, change agent and business coach, ResultsPlus.ai
  • Andrew Mawson, Founder & Managing Director, AWA
  • Brad Taylor, Director of Consulting, AWA

 AWA Guest details: https://www.advanced-workplace.com/our-team/ 

 

CONTACTS & WEBSITE details:

AWA contact: Andrew Mawson 

AWA Institute contact: Natalia Savitcaia 

Music: Licensed by Soundstripe – Lone Canyon



Want to know more about AWA?

Thanks for listening to The DNA of Work podcast

Karen Plum:

Hello there. Unless you've been living under a rock this year, you can't have escaped the arrival of ChatGPT-4 and all the furore about AI that seems to have erupted ever since. If, like many people, you're worried about what it means for your organization, then settle down for a back-to-basics romp through artificial intelligence - what it is, what it's not, and how to figure out what to do with it. Welcome to AWA's podcast, which is all about the changing world of work and trying to figure out what's right for each organization, because we know that every one is unique. We talk to people who have walked the walk, who've got the t-shirt and who've learned lessons that they're happy to share with us. I'm your host, Karen Plum, and this is the DNA of Work. We all know to some degree that we're interacting with artificial intelligence in our daily lives. It's just that maybe we hadn't realised or we hadn't thought about it that way.

Karen Plum:

I recently read an article about the arrival of ChatGPT, which suggested that organizations are responding in one of three ways: by ignoring it, by banning its use, or by centralising it - in the way that they have other types of technologies, putting in place constraints and rules around its use.

Karen Plum:

The author, Ethan Mollick, a Professor at the Wharton School of the University of Pennsylvania, felt that none of these strategies were likely to be effective. If you drive things underground, people will fear the punishment of using them, but will probably use them anyway. I wanted to explore this topic in more depth and invited an expert to come on to the show, having recently heard him speak at an AWA Institute event. He is Dr Mike Jackson, an internationally renowned futurist, strategist and change agent. Involved in several ventures, he also founded Shaping Tomorrow, which you'll hear him refer to during our discussion. Later we'll hear from two of my AWA colleagues who share their perspectives, but for now, let's dive into my chat with Mike. I wanted to go back to basics, so I asked Mike: what do we mean by AI, or artificial intelligence?

Mike Jackson:

AI stands for artificial intelligence, and it refers to the development and creation of intelligent machines and computer systems that can perform tasks that typically require human intelligence. That sort of technology will simulate - and I mean simulate - human cognitive processes, including learning, reasoning, problem solving, perception and decision making, enabling machines to understand, learn and interact with their environment.

Karen Plum:

Right. So there's quite a lot in that. Are we saying that the AI can't think for itself?

Mike Jackson:

It doesn't really think for itself in the way that humans do - it's not creative, it's not sentient, it doesn't understand emotions - but it can make reasoned decisions. I'll give you an example. Our own machine uses natural language processing. The machine has been trained to recognize certain topics, certain phrases, certain ways of thinking, and it can read a report, an article or a PowerPoint in literally a few seconds and extract from that what it wants to know. And it will put that into a database, and then we can ask it questions like: what's the future of this, or what's the future of that, without having to read the original articles or reports.
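For readers who want a feel for the shape of what Mike describes - scan documents for trained topics, store the extracts, answer questions from the store - here is a toy sketch. The topic list, function names and matching rule are all invented for illustration; Shaping Tomorrow's actual system is proprietary and far more sophisticated.

```python
# Toy sketch of a scan-extract-store-query pipeline. Everything here
# (topics, names, matching logic) is illustrative, not the real system.

TOPICS = {"artificial intelligence", "water security"}

database: list[tuple[str, str]] = []  # (topic, extracted sentence) pairs

def ingest(document: str) -> None:
    """Scan a document and store any sentence that mentions a trained topic."""
    for sentence in document.split("."):
        for topic in TOPICS:
            if topic in sentence.lower():
                database.append((topic, sentence.strip()))

def ask_future_of(topic: str) -> list[str]:
    """Answer 'what's the future of X?' from the stored extracts, not the originals."""
    return [sentence for t, sentence in database if t == topic]

ingest("Artificial intelligence will reshape knowledge work. "
       "Water security is becoming a strategic risk for many regions.")
print(ask_future_of("water security"))
```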

Mike Jackson:

It's a dumb robot. Ours is a dumb robot - it's only obeying instructions given to it by us, having been trained in a particular way. And similarly, things like ChatGPT-4 are trained on human input. The way that ChatGPT-4 started was they began by taking Reddit, which is an online participatory program, and they mined Reddit for all of the human conversations - millions of them - and they used that to train the robot to predict the next character in a word, so that the machine would know that if it saw, let's say, the start of the word "queen", and it saw QU.

Mike Jackson:

There are only limited choices to come after the QU - most likely the next letter would be an E, then another E, then an N, and then it would say "queen". So it uses probabilities to determine what the next character in the sentence is. It will not always get it right, because it's using probabilities, but it's using human language to determine what the most likely next letter is. So it's obeying instructions, whereas humans have that innate ability to do more than that - to perceive things through feeling, sensing heat, listening and so on - which robots don't have.
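The next-character idea can be made concrete with a tiny character-level model. Real systems like ChatGPT use neural networks over tokens rather than simple counts, but the principle Mike describes - pick the probable continuation learned from human text - is the same. The corpus and names below are invented for illustration.

```python
import random
from collections import defaultdict

corpus = "the queen quietly questioned the quest"

# Count how often each character follows each other character (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_char(prev: str) -> str:
    """Sample the next character in proportion to how often it followed `prev`."""
    followers = counts[prev]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# After 'q' the model always predicts 'u'; after 'u' an 'e' is most likely,
# but sampling from probabilities means it won't always get it "right".
print(next_char("q"))
print(next_char("u"))
```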

Karen Plum:

I wanted to ask you how the term artificial intelligence, or AI, is typically misunderstood and misinterpreted.

Mike Jackson:

People often overestimate the capabilities of AI because, although AI has made incredible advancements, in the last few months particularly, it still lacks the comprehensive understanding and common sense reasoning that humans possess. It excels in specific domains and specialized tasks, but it struggles with generalization and context awareness. There's also a misconception that AI systems like robots or computers have autonomous intelligence similar to human intelligence. However, AI systems are only as good as the data they are trained on. As I already said, they lack consciousness, self-awareness and independent decision-making. Those are the main reasons why it's misunderstood.

Mike Jackson:

I then think that people misunderstand when they see people writing one-sided, biased pieces that say everybody's going to lose their jobs. That creates a concern that AI will replace people on a massive scale. I also think that people don't understand that robots can be as biased and unethical as we are, and that there's still a lot of work to be done to remove the bias and flaws from the data they're using - data that can perpetuate biases and discrimination. And then, lastly, people use science fiction, particularly things like The Terminator, to worry about malevolent or super-intelligent entities that can take over the world, but I don't think that fear is universal. I think there is a small body of people who think that, because they think it's coming very, very quickly. Most people are not worrying about that day-to-day, in my opinion.

Karen Plum:

I'm very taken by the idea that there is bias in these tools, and I guess it's not too surprising, given they've been trained by us - we have a myriad of biases. But I think you were explaining on the webinar that you can ask one AI tool to check the biases of another.

Mike Jackson:

I started out building this unique research document, thinking that I could build a strategic plan for any business, any country, any competitor or any organization by bringing all of the strategic foresight processes together in one robot. And I did that, and it worked. But at the end I asked ChatGPT to critique itself and tell me what biases, missing information and misinformation it had put into the machine. It came back remarkably well and said: I haven't given you any bad information or misinformation, but there are biases in what I looked at - it's again using human input, which produces those biases - you made this bias here and that bias there, and if you did this you could remove that bias. So I went back into what I'd built in the way of the prompting system and asked it to give me what I would describe as counterarguments. So, yes, you've told me that this is great, but "what happens if it's not great?" is a way of overcoming bias, isn't it? So I did that.

And then, remarkably, in the last week, I came across Claude 2, which is fantastic, and I was able to take my 60,000-word report, bang it into Claude 2 in about 30 seconds and ask Claude 2 to do the same thing. So now you have a robot which was not trained on the same data as ChatGPT, again looking at the process. It came back and said: there are some things missing in your report that you should be adding in - not necessarily biases, but you're missing this whole section; you should be putting more in about what customers think, which I've always been heavily focused on, and here's how you might do that. It then came back and said: I looked at ChatGPT's report and here are some biases I spotted in it, here are some things you might want to cut back in certain places, and here's where you might put more graphics in, et cetera, et cetera.

So the two robots have been working very, very well at improving the report, and as I've done that, I asked them both to rank what they thought the score would be, and I saw the score rising as I fixed all of the issues they came up with. Then, of course, I've got a process where, when we put this live, I'll be asking people like you: well, you read the report - did you think it was biased? So now you've got humans validating the process, which we've always had at Shaping Tomorrow, you've got robots validating each other, and of course I'm validating it by asking: do I think this is reasonable? Is it telling me something I don't know or don't believe, or is it telling me something believable because I've seen it before? So we can produce a battery of ways to validate data.
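The cross-checking workflow Mike describes - one model critiquing another's output, with a human in the loop - might be sketched roughly as below. The `ask` helper is a hypothetical stand-in, not a real library call; in practice you would wire it to each vendor's own API.

```python
# Hypothetical sketch of a cross-model critique loop. `ask()` is a
# placeholder, not an actual SDK function.

def ask(model: str, prompt: str) -> str:
    """Stand-in for a real chat-completion API call to the named model."""
    return f"[{model}'s critique would appear here]"

report = "Our strategic plan ..."  # in practice, the full 60,000-word report

# 1. Ask the model that helped write the report to critique itself.
self_critique = ask(
    "chatgpt",
    "List any biases, missing information or misinformation in this report:\n" + report,
)

# 2. Ask a model trained on different data to review both the report
#    and the first model's critique.
cross_critique = ask(
    "claude-2",
    "Review this report and the critique below. What did the first reviewer miss?\n"
    + report + "\n" + self_critique,
)

# 3. A human reads both critiques, revises the report, and repeats,
#    watching the models' quality scores rise as issues are fixed.
print(self_critique)
print(cross_critique)
```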

Mike Jackson:

The danger is most people won't understand that, and they will just... well, I'll give you an example, without naming names. I did a presentation last week on foresight - explaining what foresight is, how you do it, how complicated it is, and how it needs a whole battery of methods to get a really good answer. And somebody sent me an email that said: I just did that. I did one prompt. I asked it what the future of water security was. It came back and gave me the answer. I asked it to build a plan and it built the plan in 30 seconds.

Mike Jackson:

What do you think of that? I said: I think it's very dangerous to use one prompt thinking that you're going to get all the answers about water security from one prompt. That's the danger of ChatGPT. So I explained it to him. He saw my presentation and he said: I understand now why I did it wrong, but I'm not going to tell my boss. I said: why not? He said: well, if I tell my boss I got it wrong, I'll get fired. So I'm going to continue with the plan that I produced. Now, that's really the danger of allowing somebody who doesn't know what they're doing to use ChatGPT for something which is very, very valid - they don't understand that what they're doing is actually biasing their answer, and then they don't have the courage to admit that it was wrong.

Karen Plum:

I guess, as with many things, these robots are tools, and how we use them is up to us. In terms of bias, we humans are spectacularly bad at spotting our own biases, which is why it's always good to work with other people who might be able to point them out to us. I talked to AWA's Director of Consulting, Brad Taylor, about using a tool like ChatGPT to do performance appraisals - something I'm sure people will be starting to wonder about. The topic of bias came up again.

Brad Taylor:

I think people wouldn't necessarily want a performance appraisal that's been put together by AI, because it would just totally lack that personal touch. Where I think AI could potentially play a part is in removing the bias element that we have as humans. Typically, when any manager starts to do an appraisal, their mind goes over what they've experienced in the last few weeks - maybe months, if they're really good. But what happened a year or six months ago doesn't tend to leap to mind, whereas AI could help people achieve a more balanced consideration of someone's performance - as prompts and guide rails, rather than actually being the deciding determinant of an appraisal grade.

Karen Plum:

I think Brad's right, and bias is a concern when people are working away from the office, as we know people tend to have better recall of things that happened most recently, as opposed to across the whole appraisal period, maybe 6 or 12 months. Coming back to the use of ChatGPT and other tools, I tend to feel that they provide another component you can use to get a better end result. It's always easier for me to edit a first draft prepared by somebody else than to start from scratch. I asked AWA's Founder and Managing Director, Andrew Mawson, if this is his experience.

Andrew Mawson:

Large language models can be quite helpful in getting you started on documents or giving you something to work with, but in the end I think it's down to human beings to read, discern and then add their unique piece. A lot depends on what you're writing. If you're writing a piece of marketing text, then some of these tools are probably better at it than some of us, personally. If, however, you're trying to write an article for Forbes, you want to write something that you believe in and that you feel comfortable with. You want to write it in your style.

Andrew Mawson:

I think certainly when I write for Forbes, I want it to be a little bit edgy, a little bit controversial, a bit more chatty. You know, I want it to be discernibly about the way I do it, but I want it to be unique in a sense. You know, I don't think I'll ever use ChatGPT to write a Forbes article, but I can say that there are lots of other places where you can get some way down the road using an application like that, and you can shape it or use it. I think the other thing probably just to say on that is, you know, some of the good uses of ChatGPT are around the articulation of scenarios, and that's helpful because you've got a third party called, you know, a computer working out a number of scenarios. Now it's your job, with your colleagues, to then consider those scenarios and determine the most appropriate actions or options to take. And these are tools. I mean, these are tools that we have the opportunity to use, and we should treat them in that way.

Karen Plum:

And Mike agrees that these are tools that should be treated as one piece of the puzzle, to be used intelligently and discerningly.

Mike Jackson:

Yes, exactly, exactly. Our machine can actually read an organization. I mean, I was quite surprised the first time I tried it. I actually used Shaping Tomorrow, our company - I plugged Shaping Tomorrow into the model and said: tell me what you think - where we've been, where we are and where we're going - and it was amazingly accurate about what it knew about Shaping Tomorrow today, just from being given a URL. It was amazingly accurate about where we are, and it was amazingly accurate about what we should do, including giving us some ideas that we'd never even thought of before.

Mike Jackson:

But, as you say, you've got to be very careful. One way I might explain that: think about the number of tools that people buy that come with a set of instructions, and you read the instructions and there are some incredible ones in there, like don't put your fingers in the lawn mower, don't put your electric drill in water. They're not there because somebody arbitrarily decided to put them in. Someone actually did that sometime in the past and probably sued the company because the drill didn't work in water.

Mike Jackson:

Or their fingers came off in the lawn mower. So we humans are not very good at using tools. We think that we can use them, and we forget that they don't work in water or that our fingers are going to get hurt in the lawn mower. It's exactly the same with AI - we need to be very careful in how we use it, and companies need to be thinking about how to train their people not to introduce their own bias into the process, and not to be silly enough to think they can get an answer from one single prompt.

Karen Plum:

Absolutely, and actually it brings up a point I wanted to discuss with you. There was a recent UK study from the Digital Futures at Work Research Centre talking about the skills people are going to need to make the most of the AI technologies that will be available, and it said there's a lack of investment in training and skills development in that arena to help people be ready and able to make use of those technologies. I wondered, from your research and your expertise, are some countries better at this than others? Is it the UK that's lagging behind, or are they all lagging behind?

Mike Jackson:

No, I think that the UK and the US particularly are leading in that process. That doesn't mean they're good at it, but they're leading and they're trying to do those things. But we have to remember that ChatGPT, and GPTs in general, have only been around for at most a year, and it does take time for training and coaching to catch up when you've got a sudden shock to the system, when something new comes out. But I suspect we'll find that the leading organizations will quickly develop training and coaching programs to help people.

Mike Jackson:

And if you go on YouTube, you can find hundreds and hundreds of videos about what's going on. I use them all the time myself to teach myself how to do things. Somebody introduced me to Claude 2 last week. That didn't require much training at all, but I did watch a couple of videos before I started using it, to get a sense of what to do and what not to do. I take a bit of healthy skepticism as I start using these things, rather than just rushing out and saying I'm going to use this without giving any thought to the downsides. I would recommend that any of your listeners start by taking baby steps and asking themselves at each step: does this look okay? Is there anything I've missed? Can I do some bias checks, like the ones we've already talked about, before I put something out that could actually change people's minds or be dangerous to them?

Karen Plum:

Yeah, it's interesting. I think, as humans, we're always interested in shortcuts, aren't we? We want to get to the answer quickly, and so I see a danger that people will rush into using things like ChatGPT, and if they are, in the best sense of the word, ignorant about the subject, they could very easily accept what the chatbot says without having the knowledge to ask: "Could that be right?" So this is all fascinating, but what are organizations to do? How can they get started and figure out how to use AI in their business? I asked Andrew and Brad for their thoughts, and Andrew was emphatic: don't give the task to the IT department. He suggests that the Chief Workplace Officer might be someone to help the board coordinate their efforts.

Andrew Mawson:

I do think that senior leaders should not be leaving the exploration of this kind of technology to their technology department. They need to be learning much more about what this technology can do and how it can work, and I think that right now they should be looking at their organizations and trying to preempt what these technologies may do in the context of their own business strategy. They should also look at what the competition may do with these technologies, in order to maintain their competitiveness. So I think I'd be doing that, and then, finally, I'd make sure I took a look at every job and started thinking about how to prepare for the evolution, because this shouldn't be a kind of jerk - it should be a smooth evolution, really. The danger is that senior leaders don't take it seriously and, as a consequence, don't prepare properly to take advantage of it and make it into a powerful tool.

Brad Taylor:

I think the richness really does come from the intersections as well. An HR function won't necessarily know what the technology is capable of, and the IT department may not be able to relate to the customer perspective, the people dimensions or the ethos of the organization. So bringing those together, with marketing and all the different specialisms - that's where it gets unleashed, and you can utilize it in a way that's truly meaningful for the organization.

Andrew Mawson:

You know, we've talked a lot about creating a new role called the Chief Workplace Officer, who sits as a kind of coordinator. Now, I could see the Chief Workplace Officer being somebody operating at board level who is coordinating activities to educate the leadership team or community, so that they can collectively work out what this means and what their strategy associated with it might be. So I think it needs to be coordinated, and it's a voyage of discovery, I think, for individuals and organizations.

Karen Plum:

I asked Mike if he agrees with Andrew that the task shouldn't be delegated to the IT department.

Mike Jackson:

Absolutely, absolutely - he's quite right. Why would you? This is not an IT problem. This is an opportunity for a whole company, or a whole country, or a whole region. So to give it to technology people, who don't necessarily know all the things that are going on in marketing and operations and sales, is really limiting your perspective. So, personally, I think this is a CEO-level, board-level question. I think it means engaging the whole top team and many of the people in the organization. And you said it in your introduction: it starts with opportunity.

Mike Jackson:

I've always used De Bono's Six Hats, starting with the green hat - I think it is - which is the opportunity hat. Start with the opportunity hat. Look for things you can do to improve the lot of everyone. Don't start with: how can I cut costs? That's not looking at opportunities. Ask: how can I improve the lot of my customers, my staff and all the other stakeholders by using AI?

Mike Jackson:

And then gradually you'll get down to the black hat, which I think is the fifth hat - the one that says: now I see all these opportunities, but what are the risks, and how could I overcome those risks using what I've already seen in the first four hats? That gives me my sixth hat, which is: what am I actually going to do about it? So I would never start by looking at the risks. Always start with the innovation and the opportunities, because then you'll see that a lot of the black-hat concerns disappear and many of the solutions are in what you've already discovered. I think De Bono was quite right - start with the opportunities and then move through the process until you get to the black hat, and then you can say: I've now judged my situation, I see the risks, I see the opportunities, and here's how we can move forward in a very positive way - a way that's good for everybody. And that is quite possible.

Karen Plum:

Yeah, and I guess that would be your advice and your plea to organizations - to do it in that way and not to look at it in a fearful way. Look at the opportunity and see how it can benefit all of the different communities and stakeholders that form part of your organization.

Mike Jackson:

One of the models I've always had in my head - somebody taught it to me years ago, though I've forgotten who - describes an organization as having three types of people. He said 20% of the organization will be adventurers: they will pick up the ball, having been given the idea of artificial intelligence, and they will run with it and score goals. 60% will be the adopters: they will watch the 20% try to score the goals. If the management kicks the 20% and berates them for not scoring goals, the 60% will never move - they will sit there and carry on doing what they're doing. So the opportunity lies in getting the adventurers to do things: pick them up when they fall over, encourage them, don't kick them, reward them for trying, etc., and the 60% will follow.

Mike Jackson:

And then there are 20% who are the abstainers. These people don't want to move at all, supposedly. But I would qualify that slightly by saying there are two types of abstainers. There are those who don't understand, have never been trained, and are sitting there waiting to see what happens because they don't know how to contribute. That's where training comes in - training and contribution.

Mike Jackson:

The other 10% don't want to do anything; they'll sit there and resist, in quiet ways and overt ways. If you do it right and start with the adventurers, the rest of the staff get on board and help the 10% who do want to learn to get into the adoption stage, and maybe become adventurous. And they also have a word with the ones who don't want to get on board at all, and quietly suggest they should go and find somewhere else to work, without the CEO having to make people redundant, and so on. It doesn't always work that way, but it's a pretty good model. So, instead of saying to people "you will do this", without giving them any training, and expecting them to do things they don't know how to do, the answer is to find the champions, reward those champions, train those champions, get them to really motor and take the rest of the organization with them. Then, as a CEO, it becomes far easier to manage the process and enjoy the success.

Karen Plum:

I love that and actually if you can generate trust in those adventurers, then the whole thing is going to feel really beneficial to the organization.

Mike Jackson:

The role really is like a parent watching somebody learn to ride a bike. The first time you get on a bike, you're going to fall off. When you first got on a bike and fell off, Karen, I doubt your parents kicked you, told you you were an idiot, and put you off bikes forever. They would have dusted you off, put you back on the bike and helped you get on and cycle - and we forget that when we're adults. You were adventuring on the bike, and if your parents had given you the wrong message, you'd have been hurt by it. If they give you the right message, you'll do wheelies one day.

Mike Jackson:

The opportunity for a leader in AI is to find the adventurers - or, better still, ask the adventurers to volunteer, because one volunteer is worth ten pressed men - and encourage them, train them, and reward them when they do even small things right. Let people see that they've been rewarded; more people will jump on board, and more people will do wheelies. Yet so many organizations start at the other end, which is to shoot the people that won't do anything. That's the wrong end of the cargo ship. You need to focus on the front end first and the back end last.

Karen Plum:

What we need is more organizational wheelies! I think that's probably going to be the title of this episode!

Mike Jackson:

Doing organizational wheelies - brilliant!

Karen Plum:

And there you have it. Striving for excellence, or wheelies, in our organizations is where we should be heading - not ignoring or banning or centralizing the use of AI, robots and apps. I hope you've enjoyed our exploration of artificial intelligence and that it's given you some things to think about. I'd like to thank my guests, Dr Mike Jackson, Andrew Mawson and Brad Taylor. It was so interesting hearing their views and talking about the future. If you'd like to talk to Andrew or Brad about what this could mean for your business, their details are in our show notes, or head to the website advanced-workplace.com. You'll also find some links in our show notes so you can explore the topic a bit more. I hope you'll find those helpful. If you'd like to hear future episodes of The DNA of Work, just follow or like the show. You can contact us on our website, advanced-workplace.com. Thank you so much for listening. See you next time. Goodbye.

Understanding and Misconceptions About Artificial Intelligence
Using AI Tools
Collaboration and Leadership in Embracing AI
De Bono's Six Hats