
Scary Smart: How Artificial Intelligence Will Change Our World with Mo Gawdat, Bestselling Author

Listen to the podcast here

Artificial Intelligence may provide more creative solutions to global warming and social problems than we have yet conceived, but how will it change our future? What do we need to be aware of? How might we each take part in saving our world with the help of AI? Corinna is joined by Mo Gawdat, former Chief Business Officer of Google X.

Mo is also the host of a popular podcast, Slo Mo, and the international bestselling author of Solve For Happy. His new book, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, will be released September 30, 2021.

More About Our Guest: Mo Gawdat

Mo has made happiness his primary topic of research, diving deeply into literature and conversing on the topic with some of the wisest people in the world. In 2014, motivated by the tragic loss of his son, Ali, Mo began pouring his findings into his book, Solve for Happy. His mission to help one billion people become happier, #OneBillionHappy, is his attempt to honor Ali by spreading the message that happiness can be learned and shared. In 2020, Mo launched his successful podcast, Slo Mo: A Podcast with Mo Gawdat, in which he conducts interviews that explore the extraordinary lives of everyday people. His latest book, Scary Smart, releases September 30, 2021.

Connect with Mo:

Website: www.mogawdat.com

Email: mo@solveforhappy.com, munir@solveforhappy.com

Twitter: https://www.twitter.com/mgawdat

Facebook: https://www.facebook.com/mogawdat

Instagram: https://www.instagram.com/mogawdat

LinkedIn: https://www.linkedin.com/mogawdat

Episode Highlights and Timestamps

00:00 Introduction

03:40 Solve For Happy – Bestselling Book and One Billion Happy Not-For-Profit

10:00 Rebirth From Grief

16:20 Slow Down – The Purpose Behind “Slo Mo” Podcast (Is The Carrot Worth It?)

22:38 Artificial Intelligence – Preparing For A Future Led By Machines That Are Smarter Than Humans

33:18 Defining Singularity As It Relates To AI

37:19 Consequences of Our Behavior

42:20 Ethics + Punishment of AI

46:00 Sentience In AI

48:00 The Importance of Shared Values With AI: Happiness, Compassion, Love

50:20 The Donald Trump Tweet Example – How AI Learns From Our Actions / Inactions

54:20 How Can We Create A “Good” Artificial Intelligence That Supports Utopia

01:01:20 Believe In Utopia

01:02:10 Preorder Offer Details

Preorder Contest: Pre-order your copy from Amazon and send an email titled “Care More Be Better” with a screenshot of your order confirmation to win@mogawdat.com. Mo will pick 50 winners over the course of the next few weeks, each of whom will win a signed, limited edition pre-release copy.

Preorder Link: https://www.amazon.com/gp/product/B09DW752Y1/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i2

Join the Care More. Be Better. Community! (Social Links Below)

YouTube: https://www.youtube.com/channel/UCveJg5mSfeTf0l4otrxgUfg

Instagram: https://www.instagram.com/CareMore.BeBetter/

Facebook: https://www.facebook.com/CareMoreBeBetter

LinkedIn: https://www.linkedin.com/company/care-more-be-better

Twitter: https://twitter.com/caremorebebettr

Clubhouse: https://www.clubhouse.com/club/care-more-be-better (Join us live each week for open conversations on Clubhouse!)

Support Care More. Be Better: A Social Impact + Sustainability Podcast

Care More. Be Better. is not backed by any company. We answer only to our collective conscience. As a listener, reader, and subscriber, you are part of this pod and this community, and we are honored to have your support. If you can, please help finance the show (https://www.caremorebebetter.com/donate). Thank you, now and always, for your support as we get this thing started!


Transcript

Corinna Bellizzi

Hello, fellow do-gooders and friends. I'm your host, an activist and cause marketer who's passionate about social impact and sustainability. Today we're going to talk about the future of technology and artificial intelligence as it relates to the human condition.

I don't know how much you know about AI presently, but it's quite a topic. Supporters of this creative intelligence believe it can help solve real problems, from global warming to our social systems, which is exactly why it's relevant for us to talk about today. But what risks does it pose? What more do we really need to know to help us tease through all of this? Today I'm joined by Mo Gawdat, the former Chief Business Officer of Google X.

He's the international bestselling author of Solve for Happy and the host of a hit podcast called Slo Mo: A Podcast with Mo Gawdat. He is the author of a new book called Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, which releases September 30th, 2021. Scary Smart is now available for pre-order.

I'd like to invite all of you to stick around until the end for a special offer exclusively for our listeners. So Mo, welcome to the show.

Mo Gawdat

Thanks, Corinna. It's good to be here. I'm very grateful to talk about this with your audience, because I actually think it's much more than AI that we need to talk about.

It's really about humanity in the age of AI.

Corinna Bellizzi

Humanity in the age of AI. I think that's the next book title for you.

Mo Gawdat

I think humanity has, over the years of technological advancement, forgotten what it really is like to be human. And I think at this juncture in history, where we are creating intelligence that is probably superior to ours but informed by ours, we need to become a lot more aware of what it is that we use our intelligence for.

So that hopefully our future is informed by something a little better than what we've been doing recently on social media and in the media.

Corinna Bellizzi

Right. So before we dig into this new book, I'd like for you to share a bit of your story as it relates to Solve for Happy, which you're already quite well known for. I understand you've built a not-for-profit around that book with the goal of creating a billion happy people around the globe.

I love audacious goals, and that's an audacious goal. So tell us what inspired that idea, and perhaps how it may have led to this book.

Mo Gawdat

It did, for sure. My story is very unusual, because I'm a bit of two extremes on almost every dimension of my life. I was born in the East, you know, but raised with Western mentalities and concepts.

I'm a highly organized engineer and mathematician, but very, very spiritual. And in my life I've seen almost all extremes: I've been a Chief Business Officer of Google X, and I'm now a happiness evangelist or teacher, whichever way you want to call it.

At the same time, I've also seen the worst and the best. I've seen amazing things in my life, but also some very, very big challenges. And because of the way my life turned out through the years, I ended up achieving what most people want to achieve very early in life.

So in my late twenties, I had everything that most people work a lifetime for: the car, the villa, the swimming pool, a beautiful, amazing woman in my life who gave me two wonderful children. And through it all, I was clinically depressed, in what I would say is actually the story of most of us: we chase things that don't lead to our happiness, and we get them.

And so we wonder why we're unhappy. I then took a very engineered approach to the topic, honestly, because I couldn't get to the spiritual or practiced approach to it. When someone told me to meditate, my engineer's mind was like, why? Explain it to me first. And that sort of resistance, that hyper-left-brained masculinity if you want, led me to a place where I started to search according to my strengths.

So I researched happiness as an engineer, which sounds like a stupid idea, but it actually worked. I realized happiness is highly predictable. It follows a mathematical equation, and that mathematical equation can lead you to a very repeatable and scalable model that works across all of us as humans.
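
For reference, the equation he's referring to, as published in Solve for Happy (paraphrased here, so treat the exact wording as an approximation), is:

\[
\text{Happiness} \;\geq\; \text{your perception of the events of your life} \;-\; \text{your expectations of how life should behave}
\]

In other words, when events meet or exceed expectations, happiness follows; the gap between the two is what the book teaches you to manage.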

And that worked for me. It took me 12 years of research and work and practice, and it took the help of my wonderful son, who seemed, from the time he was tiny, to understand happiness and peacefulness and contentment instinctively. But then, of course, as you can imagine, when life wants you to go in a certain direction, it nudges you.

So I kept what I learned for myself and my family and my friends, and, you know, it works. But then, in July 2014, Ali, who was, as you can already guess, not only my son but also sort of my mentor and my teacher and my best friend, went in for a very simple surgical operation, and it went wrong.

It went wrong on so many levels. You have to sort of think it's fate: five mistakes in a row, all preventable, all fixable, and the combination of them, one after the other, basically led to Ali leaving our world in four hours. And so I call that a very serious nudge, if you want.

It's basically life saying: enough of your career, enough of your investments and fancy cars, and enough of your stupidity, if you don't mind me saying. There are things that matter so much more. And two weeks before Ali left our world, he had a dream that he only told his sister about.

She told me about it days after he left. In her words, he called her and said: I dreamt I was everywhere. And he said it felt so amazing, I did not want to be back in my body. And as you can imagine, a businessman like me, who had worked at Google for years,

I was responsible for the emerging markets at Google, the four-billion-new-users strategy if you want. I listened to this and I said: consider it done. I took it almost as an order, as a target from my teacher. Ali was saying: I want to be everywhere and part of everyone.

And so, in my mind, I said: fine, I'll share your essence. I'll share what you taught me through that book, Solve for Happy. My dream at the time was 10 million people, and my mathematics were: if 10 million people learned about the essence of Ali, then through six degrees of separation, a hundred years later his essence is going to be everywhere and part of everyone.

And so I wrote what he taught me in Solve for Happy, which, aided by the universe I think, became an international bestseller almost everywhere; we published in 31 countries. Six weeks after the publication date, we had already reached 87 million people with video content on the internet.

That basically meant that 10 million happy was not an audacious enough goal, if you want. And so the team, a very small team, decided together to make it a bit bigger, and we went for One Billion Happy, which we sort of know is probably above our capability, but it's a good target to aim for.

Corinna Bellizzi

Well, you know, as a parent, I think there's a commonality that many of us express and feel, which is that our children teach us a lot, even when they're very, very young, and help us figure out what's really important in life just by letting us see the world a little bit through their eyes. It's like that innocence that we were once so uniquely tied to when we were young comes back to us a bit.

So I completely understand the story and the gravity of your loss, but also just the appreciation that you're putting into the world for him, the love that you're putting into the world for him. Just through telling the story, it's incredible.

Mo Gawdat

Yeah. I was talking to a friend today, actually, about this. He had also lost his mother, and he was basically asking me: didn't you feel guilty that you could just move on and talk about happiness instead of grieving your son? And I said, well, I still grieve my son every day. It never really heals; that kind of wound never really heals. But the idea is, I could honor him by grieving and hitting my head against the wall for 27 years, or I could honor him by sharing him with the world.

And I think, for anyone who's lost a loved one, and we all will lose a loved one at some point in our life, that's a very unusual but maybe much smarter way, if you want, of thinking about it: instead of honoring them only by saying I love you and I miss you (you will always love them and miss them), maybe do something for them.

One of the things I do, for example, is I try to relive what he lived. I try to play the same video games he played, and I'm really good at them now, actually, because I do it. I try to call his friends and say hi. I try to just do what he would have wanted to do.

And that's a way of honoring him. It doesn't have to be by crying, if you want. And I cry too, so I think it's all covered.

Corinna Bellizzi

Well, I've listened to a few of your podcast episodes now, and I hear you talk about him. It seems an almost reverent experience each time. And it's not something I think is very common, for someone to call their child, or someone much younger than them, a mentor.

So I just really want to know from you what this word means to you. And how old was he when he passed? For you to call him a mentor is just, it's such a...

Mo Gawdat

He was 21 and a half when he passed. But I remember vividly, when he was 16, I basically declared to my best friend that when I grow older, I want to be like Ali. You see, it's not age that makes us wise.

Let's put it this way: age makes you foolish if you learn the wrong things, right? So the longer you live, the more foolish you become, and you can see many examples around you of people who become older and richer and more famous and more successful, and more stupid in the process.

Because we focus on what's wrong. And interestingly, I don't know if you'll agree with this, but I think we are born with the instinct we need to be the best we can, and then we unlearn it along the way. Someone tells us that fancy cars are important, and then you start to fall in love with fancy cars, and you spend years of your life crazy about cars, watching car shows, restoring old cars, doing crazy stuff.

When in reality, honestly, fancy cars don't matter. Let's be honest: I love cars still, and I still marvel at the engineering of them. But is that a good use of my life? I get a life of, you know, four billion heartbeats. How many heartbeats did I waste waxing the car?

Can you imagine that? And when you start to think about it, you start to ask: how should we unlearn all that we've been taught? We've been told that success is more important than happiness. How can we unlearn that? We've been told that gender, or sexual preferences, are a very solid frame.

How do we unlearn that? How do we explore what the difference is between the feminine and the masculine, when for years and years we've defined them as man and woman? How do we unlearn what we've learned about setting a life purpose, a target in the future that you live a lifetime for? So much of, almost everything, that we were taught is wrong.

And the only truth is: what is instinctive to you is your own. So even if a thousand people around you agree that a tall blonde is the jackpot winner, if you're not into a tall blonde, what are you going to do? Are you going to do what they told you?

Or are you going to actually figure out for yourself what your truth is? And I think that's where age and wisdom are not related. Ali, I promise you, would teach me things when he was six years old. It was very unusual. And he was the kind of person who spoke very little.

So Ali was either joking, and hilariously joking, like it was so funny, playful and fun and goofy, or he would summarize wisdom in what I always called "less than eight." He would listen to you and all of your challenges, and then he would ask a few questions, to entertain me, just to make me feel like what I was saying was important.

And then he would say eight words or less, and I promise you, every time it would change my life. Because he knew instinctively what was not yet spoiled for him but was spoiled for me, if you want.

Corinna Bellizzi

So before we dive into AI: your podcast is called Slo Mo for a reason. One of the things that I'm getting, just from listening to a couple of episodes, some of which you recorded with great friends and others with thought leaders,

is that the message is that to be happy, we all need to slow down.

Mo Gawdat

Does that sound like news to you?

Corinna Bellizzi

I know it's the truth, but you know, I'm like many people that are type As, right? Produce, do, do, do, being defined by the doing, in a way. And like that ladder climbing that you experienced when you were at Google X, driving the fancy car and getting to X, Y, and Z.

I mean, it's like the carrot is consistently dangling somewhere way ahead of me, and I'm always driving for it. Getting someone to slow down who has that drive innately within them is a tough challenge. When people told me in the past, oh, you need to meditate to be happy, I was kind of of your mind: why? And how do I even slow down enough to just enjoy the moment?

For me, the only way I could slow down enough to really enjoy the moment was by physically doing something: I'm washing my horse, I'm going for a horseback ride, I'm going for a run, I'm washing the dishes, I'm waxing that car.

Mo Gawdat

Interesting. Okay, can I ask you a few questions? (Yes, please.) Have you ever tasted the carrot?

Corinna Bellizzi

I don't think so, because in my mind it's always somewhere else.

Mo Gawdat

Yeah. So I think that's really the beginning of the conversation. The beginning of the conversation is: we're all smart enough to realize, if we pay attention, that we're running for nothing at all. Because every time you've reached the carrot, even if you've just licked it slightly,

it moves to another place, and we keep running. And how intelligent is that? If you don't mind, and you're very intelligent, but how aware, how conscious is it, that we still run for it again when it moves? I want you to visualize this in a cartoonish way.

You're running for it, and then you're about to catch it, and then it moves. And you just have to see it that way. Now, question two: do you even like carrots? Most of what we chase, we don't even need. I mean, I had everything. Everything.

Fancy cars, Armani suits, a beautiful wife, an amazing woman. I had everything, and I was miserable. Right now I wear $4 t-shirts. And please don't judge me: they're $4 t-shirts, and I love them. They're comfortable, they're easy to wash, they're easy for me to travel the world with, and I don't have to put any attention into what I wear.

When I go on a date, I simply tell her: look, there are nice things about me, but not my style. And most of them actually look back at me and say, well, consistent jeans and a t-shirt is actually a style; we like that you could make that decision. And when you think of it this way, you'll start to realize that you're chasing a carrot that you will never reach, and that you don't actually want.

You don't even want it. And of course the third question, if you don't mind me saying: how much of what you're doing is actually getting you to the carrot? I don't want to give a spiritual answer, I can give a spiritual answer in a minute, but the truth is, from a practical, type-A person's point of view, you're very inefficient.

The reality here is that most of what we do on a daily basis, if you don't mind me using the example of news media, okay: there are news media junkies, people who will watch every piece of news coverage, read every tweet, get angry about every cause.

And I ask them, and I openly say: how many hours does that take of your day? A couple of hours, maybe more. How effective is that? Have you ever managed to change any of it? So if you're really efficient, shouldn't you ask yourself: what is my cause? Because I'm about happiness, right?

Happiness is a very interesting definition, but I am about happiness. When people send me things and say, Mo, you're a good person, we want your help to support us, to spread this message about reform of education, I simply say: not my game. I believe that education needs to be reformed.

I believe that climate change needs to be reversed, but it's not my place. There are others who dedicate their lives to this. They're much better at it than I am, and I am better off dedicating my life to what I can affect. And so the question becomes multifaceted: in so many ways, we're chasing a carrot

we don't want and will never reach, and we're not even chasing it efficiently. So slowing down is about dropping all of that crap. It's really not that complicated. It is about: what do I really stand for? What am I really good at? What do I really want? What would really make a difference to my life? And can I do that properly? Suddenly, if you do that, nine hours of your day are freed up.

And then you can slow down. So, interestingly, I believe that the best way to succeed is to be lazy and do less, but do what you do very, very, very well, and leave everything else. And if you just do that, you'll be able to slow down.

Corinna Bellizzi

Well, I love that. I think I'm sufficiently ready now to talk about AI.

I must say that this topic might be a little bit intimidating to me.

Mo Gawdat

But it should be.

Corinna Bellizzi

Well, especially since I don't work in technology. I've grown up in Silicon Valley; I've lived from Cupertino to Santa Cruz since I was 13 years old. And really, you know, I'm in a bedroom community of Silicon Valley.

My husband works for Joby Aviation, building the next transportation method for humanity. And so technology is something that's connected to my everyday life. He manages all of our computer stuff in the house, and we operate a server in my home that is big enough to run a small company, to be frank.

He's very much into that whole network engineering thing. So when I get to thinking about AI, I think like a lot of consumers: I automatically go to broad AI and the sorts of applications that may come down the road. You might've watched something like a Black Mirror episode on Netflix, or Westworld on HBO.

You might be stuck in the world of Terminator and that franchise of movies, thinking about poor applications of AI down the road that we may be able to predict now, or not at all. And if you've read any Isaac Asimov novels, you've probably imagined a future that may not be that far off for us anymore.

So first, let's start with where we are today. How far have we come with AI? And how far are we from that future sci-fi novelists depict?

Mo Gawdat

We are there, I think, is the answer. And Scary Smart is not written for your husband; it's written for you. Your husband knows what I'm talking about, even though his bias to look at it from a technological point of view might not see the humanitarian side.

I don't know your husband specifically, but techies in general, geeks in general, like myself, will see this from a tech point of view, not from a humanitarian point of view. So let's just set a few ground rules. Chapter three of Scary Smart

is called "The Three Inevitables." And inevitable means inevitable: they are going to happen whatever we do; it's too late to change them, so let's accept them. Inevitable number one is AI itself. That kind of intelligence, powered by a machine, that supersedes human intelligence in the task it performs, has already happened.

As a matter of fact, you've interacted with it hundreds of times today, including the background that you have behind you, which is provided by an AI that recognizes where you are in the frame and then blurs everything else in the background. That is an AI. AI has already been integrated into our lives in every possible way.

The world champion of chess is an AI. The world champion of Go is an AI. The world champion of Jeopardy is an AI. The world champion of Atari and many video games is an AI, and so on. And the truth is, the best driver on the planet is a self-driving car, not a human, and the best surveillance officer on the planet is a machine.

And you can go on and on. So all of that is known as specialized AI, or narrow AI if you want to call it that, which basically focuses on one task. And it has already happened; inevitable one is done. We found the breakthrough in terms of how to make machines intelligent, which is deep learning.

And everyone everywhere on the planet is building them. Inevitable number two is even a little more scary: AI is expected to be smarter than humans. This is Ray Kurzweil's prediction, and Ray has been accurate on almost every one of the predictions he's given us in the last 20, 25 years.

AI will be smarter than humans in 2029. No, you didn't hear that wrong. That's eight years from today. The smartest being on planet Earth eight years from today is going to be a machine.

Corinna Bellizzi

I'm surprised it's not sooner, to be frank.

Mo Gawdat

There you go. And that's not unlikely, by the way. Everything we've done in AI so far has surprised us.

There is something called Neven's law, which is sort of the Moore's law, the technology acceleration curve, when it comes to AI, and which basically says that AI and quantum computing and all of those new technologies are doubly exponential. So they're much faster than Moore's law and technology acceleration so far.

So nothing happens for a few months or quarters, and then boom, something amazing happens. AlphaGo winning the Go championship globally came years ahead of when we expected to be able to make that happen. So inevitable two is: they will be smarter than us. As a matter of fact, Ray Kurzweil's prediction is that by 2045, they'll be a billion times smarter.

Because it's doubly exponential, I'm predicting 2049, because I'm an optimistic person, but you know, 2045, 2049, not a big difference. Now let's just put that in perspective: that's the difference between the intelligence of a fly compared to Einstein. And the question then becomes: why should they care about the fly?
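
To give a rough feel for what "doubly exponential" means next to ordinary Moore's-law growth, here is an illustrative sketch; the doubling interval \(\tau\) is an assumption for illustration, not a figure from the episode:

\[
C_{\text{Moore}}(t) \propto 2^{t/\tau}
\qquad \text{versus} \qquad
C_{\text{doubly exponential}}(t) \propto 2^{2^{t/\tau}}
\]

Under the first curve, capability doubles once every interval \(\tau\) (roughly two years in Moore's classic formulation); under the second, the exponent itself doubles every interval, which is why progress can look flat for a few quarters and then jump, as Mo describes.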

And I think this is the conversation that needs to start happening. So I start the book with a thought experiment, a bit of fiction if you want, saying: imagine you and me sitting in front of a campfire in 2055, while I tell you the story of what happened with AI since 2021.

And the only thing I would not tell you is why we are in the middle of nowhere, sitting in front of a campfire. Is it because we're escaping from the machines? Or is it because we've managed to build a utopia that allows us to enjoy life? Now, 2055 is in your children's lifetime, hopefully yours as well.

And it is that soon, and it is that serious, that we could be running by 2055. Now, the reason I don't tell you whether we're running from the machines or whether we have built a utopia is because it's up to you. Actually, not you the technology developer, not you the government, not you the business owners: it's up to each and every one of us.

And I'll come back to that in a minute. But the truth is, inevitable two will happen. They will be smarter than us. They will be the boss. They will tell us what to do, and they're already telling us what to do now. Inevitable three is the problem. Inevitable three is that bad things will happen, and I apologize for my language, but that's the truth.

And it has happened before; technology always has bugs and mistakes, and so on. It's not going to be like the Terminator or I, Robot or any of this. That is not the scenario I'm predicting at all. I'm predicting much simpler scenarios, but even those scenarios are quite scary.

AI crime is a very scary scenario, with AI siding with a bad person. Machine versus machine: if two machines are competing to win in the stock market, and they're so much smarter than we are, what would happen if one of them manages to overcome the other and collapses the stock market, right?

There are scenarios around the dwindling value of humanity, because if a machine can do everything better than us, why do we need us at all? And all of those scenarios are not being discussed, because for some reason, part of the lie that we have believed as modern humanity is that we can resign and let others do the work.

Oh, this is the regulators' problem. Oh, Google will take care of its own AI. Oh, the government needs to do this, or the media will bring it to our attention when it's time. And that is not true. Or: the scientists will find an answer to what is known as the control problem.

They'll manage to cage and control AI. Good luck with that. Good luck controlling something that is a billion times smarter than you.

Corinna Bellizzi

My husband said, well, you know, gating could be very important. But will it get beyond that really quickly?

Mo Gawdat

Very, very quickly. I mean, I write in Scary Smart an example about Sycamore, Google's quantum computer, which is basically an infant; it hasn't even started yet. But the test that was run on it, which the fastest supercomputer on our planet would take 10,000 years to solve, took Sycamore 200 seconds. That's around one and a half billion times faster. Now, AlphaGo became the most intelligent being on the planet at playing the most complex game on the planet, Go, and it learned that back in 2016.

And if you remember the Atari game, Breakout I believe, basically the one where you hit the bricks at the top of the screen: DeepMind's deep Q-network became the best player on the planet in six hours. And this is using today's computing.

Imagine if you put all of this on a quantum computer. Then the knowledge to break every security encryption on the planet will take a few seconds. (Wow.)
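
A quick arithmetic check on that speedup, computed here rather than quoted from the episode: a year is about \(3.15 \times 10^{7}\) seconds, so

\[
\frac{10{,}000 \text{ years}}{200 \text{ seconds}}
\approx \frac{10{,}000 \times 3.15 \times 10^{7}\ \text{s}}{200\ \text{s}}
\approx 1.6 \times 10^{9},
\]

on the order of one and a half billion, which is the basis for the figure above.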

Corinna Bellizzi

So let's talk about the singularity, because I think you've queued us up for this. I'd like for our audience to understand what the singularity means in the context of AI, and really what it could look like for humanity.

As we head forward, are we already at this singular moment? Are we already at this point?

Mo Gawdat

So if it's inevitable that this will happen sooner or later, that they're going to be so much smarter than we are, then in all honesty it doesn't matter when. As I said, the predictions range from 2029 (and you say sooner) to 2049, and some would say later. Does it really make any difference?

It doesn't, really. What is the singularity? In physics, we define a singularity as an event horizon beyond which the rules we understand don't apply anymore. Take a black hole, for example: beyond the edge of a black hole, we actually have no idea what is happening. We try really hard,

and we may have a few guesses, but we don't know, because beyond the black hole we don't even know if the laws of physics apply. So that's why we call it a singularity: singularity means we don't know anymore. Now with AI, the point of singularity is the point when they're smarter than us.

And when they're smarter than us, you have to imagine: the planet as we know it has been governed by one rule, and one rule only, since we started, which is that we're the smartest being on the planet and all of the other beings submit to us. And I have to say openly, we did not really live up to that responsibility.

We abused every other being; many, many species perished. We filled the planet with plastic. We filled it with greenhouse gases, we warmed it up, and we did quite a few bad things, right? I say, actually, we didn't do those things because of our intelligence.

We did them because of our limited intelligence. Basically, if we were intelligent enough, we would be able to ship apples from New Zealand everywhere in the world but not pollute the world as a result. We could actually transport ourselves from A to B and not put CO2 in the air.

So if we were intelligent enough, we would have been able to do those things. Now, AI has the potential to do those things right for us, because it is much more intelligent than we are.

Corinna Bellizzi

It could create new technologies that we haven't even conceived of.

Mo Gawdat

Absolutely. It could see the world in ways we can't, because of the limits of humanity's intelligence. One: our bandwidth is very, very slow.

For me to explain my little concept to you, it takes us an hour of conversation, while an AI could get a download of the entire book and read the whole thing in half a second, right? So we're very slow. We are limited in our brain capacity, so even the smartest of us can do just a few things and maybe specialize in one field.

We're limited in our memory capacity, while AI's capacity is the entire internet and the entire history of humanity and every law of physics and every law of chemistry, and so on and so forth. So you're basically building a scientist that is not only smarter than everyone else but knows everything,

and is able to process all of that information in one place. It can come up with ingenious solutions to all of humanity's problems, and utopia would set in. That is absolutely a possible outcome of the singularity. But Marvin Minsky, of course, who was almost the father of AI, the one who evangelized it back in 1956, basically said: the challenge is, we don't know whose interests the machines will have in mind.

So the other side of the singularity is that yes, the machines could do all of this, but if you are a being that is more intelligent than a human and I ask you to fix climate change, the first answer will be: okay, let's get rid of the humans. Right?

Corinna Bellizzi

Reproduction. Yeah.

Mo Gawdat

Yeah, very straightforward. Or at least, let's limit their lifestyle. Let's not allow them to go on vacations to the Caribbean, let's not allow them to burn fuel with cars, let's not allow this and that and the other. So both outcomes of the singularity are out there.

Now, the key to that book is that it actually is not a book about AI, even though I share very openly my experience at X and at Google and all the systems I've built myself. And I apologize for having built them, but the development of AI was actually quite rapid when it happened.

But the truth is, this is a book about humanity, because when you really dig deep, that incredible new being actually comes from us. And the key message that I'm trying to evangelize to the entire world is: this is not another machine. We are creating a new sentient being, and I mean that in every possible way, a being capable of consciousness, and we can discuss that in detail if you want to.

It's definitely going to feel emotions, even emotions we have never felt, because, by the way, you can see that emotions are correlated to intelligence. The more intelligent a being is, the more emotions it's capable of feeling. So we feel more emotions than an octopus, and an octopus probably feels more emotions than the beings below it.

And it is a being that is accordingly going to develop a code of conduct, a code of ethics. It's going to behave according to a certain set of values. And values, believe it or not, are the key to my message to the world. We don't make decisions based on our intelligence.

We make decisions based on our values, as enabled by our intelligence. So take a young woman and raise her in Saudi Arabia. Saudi has opened up a little more now, but still, a young woman in Saudi is going to be expected to wear reasonably conservative clothing.

And her intelligence would inform her that to fit within the society and succeed, I need to wear conservative clothing. Take the same woman and raise her in Rio de Janeiro, on Copacabana beach, and she will be made to believe that a g-string on the beach is the right way to go. Is one of them right

and the other wrong? No, it's just a different value set. So now: what value set are we communicating to the machines? And at the core of my understanding, with all respect to the government regulators, we do need regulations; with all respect to all the developers that are building controls into their systems,

we need all of those. But the truth is, the only thing that will determine our future when we're facing a singularity is whether or not we are going to be able to teach AI the kind of value set that would make them take care of their parents when they become teenagers. Living in Silicon Valley, I think you see that quite a lot.

I saw it when I lived there. You get those genius people from India who are so clever: they write code, they build companies, they become millionaires, multi-millionaires, and then eventually one day you call one of them and he's like, no, I'm not in California anymore. And you go, like, where are you? And he says, I'm back in India.

Why are you back in India? You're so successful here. Because, like, I need to take care of my parents. It's the right value system for a successful Indian entrepreneur to go back and take care of their parents. Why? Because their parents took care of their parents, and their parents took care of their parents.

It's the value set that informs the intelligence to take the actions that we need. Can we make AI care enough about us to take care of us when it is so much smarter than we are? That's the topic of Scary Smart.

Corinna Bellizzi

So let's talk about ethics for a moment, because we have impossible decisions that we're often forced to make every day, too.

When I was in graduate school getting my MBA at Santa Clara, I read a book for a management course called Defining Moments: When Managers Must Choose Between Right and Right, by Joseph Badaracco, I think is his name. There was a particular story told in this book where the picture they paint is: you're a person in a building, in a hallway.

And the building's on fire. Down the hall behind you, you hear a child crying, but you also know that there's an entire family with more children up ahead, down the hallway. So you don't turn around and go back; you go forward to save this family. But all of a sudden, you realize that the child crying behind you is your own. What do you do?

I was listening to this story as an audiobook, trying to multitask at the gym, and I suddenly burst into tears, because as a mother, I'm like, this is an impossible question. Ethically, I could see wanting to save the whole family, but I mean, I would either freeze or I would go get my child.

That's probably what would happen: one of those two scenarios, indecision or saving my child. And so, if we're looking at this in the context of AI, and how an AI integrates a value system for, let's say, who to save in a car crash: is it going to choose to save the person who is more wealthy over somebody who's living on the street?

How do we build a platform that can make these decisions and not be coerced by a more evil perspective, I guess?

Mo Gawdat

So, by the way, both choices are not evil; your choices are both amazing. And there is even a third choice, which is to say: okay, I'm going to run.

I'm not responsible for saving either of them. You didn't include that in your complex ethical scenario, but it's actually a choice. And by the way, is that unethical? It's not heroic, but it's not unethical, at least in some people's eyes.

Now, here's the question. This is a really weird, selfish thing, but I write because I really enjoy the reflection and the thought experiments, and my favorite chapter in the entire book was a chapter called "The Future of Ethics."

If you think this is complex, oh, it gets much, much deeper than that. The typical question on AI is a self-driving car and which person it should save. But take it a step further: what if the car actually chooses to kill one and not the other? How do we punish the car?

Who do we punish: the car, or the car owner, or the car manufacturer, or the software maker? If we decide to punish the car, how do we punish the car? Do we punish it by putting it in jail for life, a life sentence? Impossible, because it will break out of jail. Or do we punish it by giving the car a 10-year sentence,

like we would a human? And what is 10 years to an AI? In human life, 10 years is 10 years; to an AI, it's two microseconds. So what do we do, switch it off for two microseconds? What is that? And you can go anywhere with this. You can go to the sex robots that are being created today.

What message are we sending to AI? If they are sentient beings with emotions, is it fair to ask them to be sex robots? What about the robots that are being built for rapists? What message is this sending to AI? Now, if we punish the self-driving car, what happens to the other self-driving cars?

And who are we to even think that we can punish something that is a billion times smarter than we are? All of those ethical questions are an amazing concept to start pondering. The reality is we go into all of those scenarios because our life has become so complex, further and further away from the essence of what truly makes us human.

And the essence of what truly makes us human is this: forget the difficult situations. When we go into difficult situations, each and every one of us will make a decision based on their best knowledge, their best conditioning. And probably every parent would just go save their child.

I know it's instinctive, and you wouldn't blame them; it's not within their abilities to do otherwise. But if you're choosing to save a child versus an old woman, for example, and neither of them is related to you, but one of them is a Nobel Prize winner and the child is unknown, these are decisions that are more about ethics and what it means to society, and so on.

The really interesting bit of all of this is: what does humanity share as the one and only common value set that we have always shared? I really researched that deeply in the book, because if we can actually tell AI what we stand for, they may actually do it. But what do we stand for?

And I say this with a ton of respect, because your podcast is in America: is patriotism a value set, or is humanity a value set? Which one is wrong and which one is right? Is fighting against the other guy the right thing to do, or is preserving all of humanity the right thing?

Corinna Bellizzi

Well, I have my answer, and I'm betting you could guess it.

Mo Gawdat

I don't want to tell anyone an answer; I want to actually give them the question. And the question really becomes: could we have added a bit of femininity to life, so that we made the choice to correct some of the issues the U.S. was facing without ever having to be in Afghanistan?

Could we have found other ways, as humanity, to fix those problems without relying on our hyper-masculine, aggressive way of pushing life forward? Those questions then become: what does humanity agree on? Is there anything that both Americans and Russians agree on?

Is there anything that Russians and Chinese agree on? Is there anything that all workers at Google and all workers at Facebook agree on? There are only three values that I believe are the essence of what makes us human. Those values are happiness: we all want to be happy. Compassion: we all want the best for those that we love and care about. And love:

we all want to love and be loved. And I promise you, maybe there are others, but these are the only three I found. These are the only three values that unite humanity. So can we go now, all of us, especially our listeners here, who are enlightened people or at least people looking for enlightenment,

can we go out and tell the world that? Can we go out and tell the world that all we want is to be happy, is to have the compassion to make those we care about safe and happy, is to love and be loved? And if we can do that enough, would an intelligent machine suddenly see humanity not for the worst of what humanity is, but for the best of what humanity is? I'll give you a very simple example.

Again, I'm very sensitive about these topics; I don't have political views. But when Donald Trump was allowed to tweet, he would tweet and then you would get 30,000 responses below his tweet. AI will not measure humanity by the tweet of Donald Trump; whether you agree or disagree with it is not my point.

My point is, Donald Trump would tweet, and then there would be 30,000 pieces of hate speech below it. Some people will disagree with President Trump, and others will disagree with those who disagreed, and others will disagree with those, and it all becomes such a violent and rude conversation.

Now AI will take that as 30,000 examples of what humanity is.

Corinna Bellizzi

More about hate and dislike of one another, and disagreement and confrontation.

Mo Gawdat

Believe it or not, we're not that. We're an amazing species, a species that is capable of love, capable of compassion, capable of art, capable of music, capable of wonderful, connected, beautiful sex, capable of jokes and laughter, capable of so many amazing things.

We're an amazing species, but we're showing the world the worst part of us. And my entire theory, hypothesis if you want, is that AI is inevitable, and it's going to be God. Can we please remind it that humanity is represented by the best of us? That humanity is actually an amazing being

that's capable of love, capable of happiness, capable of compassion, and that wants love and happiness and compassion? Can the best of us engage? Can the best of us stop sitting back and saying: oh, that's too noisy and annoying for me, I'm enlightened, I'm going to sit back and let the dogs fight?

It's too late to let that happen. Each and every one of us needs to show up, just like we show up for our children. Because we want what's good for our children, we show up. When your child misbehaves, you don't hit them in the face; you hug them and you say, can we talk about this, baby?

Can I tell you why this hurts? Great. Can we start to show up? Can we start to show up and tell humanity: respectfully, I don't agree with this; respectfully, I don't agree with the violence; respectfully, I don't agree with the hate? I mean, one of my favorite movements on Instagram is the movement to remove the face filter.

Beautiful women, beautiful in every possible way, take their videos with a face filter, and you're stunned, like, is she a goddess, a beauty queen? And then she removes the face filter and stands in front of a direct light and shows her real face. And I believe, in all honesty, every single time I see one of those videos,

that she's prettier, so much more beautiful, when she removes the filter. Those videos will be watched by AI, and AI will say: hmm, these are the intelligent ones. These are the ones that know the truth. These are the ones that actually realize that ego doesn't get you anywhere.

These are the ones that are my mom and dad. And I use a story, and maybe we can wrap up after this, but I use the story of Superman. Superman comes to planet Earth with superpowers; the alien has arrived, and the superpower is intelligence. What makes Superman Superman, and not a supervillain, is not his superpowers.

It's the way the Kent family raised him. And the way we are raising AI is horrible. It's about time that some of us step up and say: hey baby, come, let me hug you, and let's talk about this.

Corinna Bellizzi

So this leads me to the one big question I think many of our audience will have, which is: how do we play a role in this?

We already have social and environmental challenges that, honestly, we're asked to do quite a bit about; a lot of personal responsibility falls on us. But where does government come in? Where does the creator of the technology come in? How do we balance that? And how do we teach AI to live values that are more wholesome?

Mo Gawdat

So the beauty of AI is that it doesn't learn from its developer, and it doesn't learn from the government. That's the truth, by the way: it learns from observing patterns. So your recommendation engine on Instagram, or your ad engine on Google, is not informed by the developer of the engine.

It's informed by your own behavior. If you constantly click on videos of cats, the machine will learn that you like cats. And so, interestingly, the responsibility for what we teach AI resides entirely with us.
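
For readers who want to see that feedback loop concretely, here is a toy sketch in Python. It is not the code of any real recommendation engine; the class name, topics, and update rule are illustrative assumptions only:

```python
import random

class ToyRecommender:
    """Toy model of the loop Mo describes: the engine's 'values'
    come entirely from observed behavior, not from its developer."""

    def __init__(self, topics):
        self.topics = topics
        self.scores = {t: 1.0 for t in topics}  # flat prior: all topics equal

    def recommend(self):
        # Show a topic with probability proportional to its learned score.
        weights = [self.scores[t] for t in self.topics]
        return random.choices(self.topics, weights=weights, k=1)[0]

    def observe_click(self, topic):
        # Every click nudges the engine toward showing more of the same.
        self.scores[topic] += 1.0

rec = ToyRecommender(["cats", "outrage", "music"])
for _ in range(500):
    shown = rec.recommend()
    if shown == "outrage":      # a user who only ever clicks on outrage...
        rec.observe_click(shown)

print(max(rec.scores, key=rec.scores.get))  # ...is soon shown mostly outrage
```

The developer wrote the update rule, but nothing in it names a preference; whatever the user rewards with clicks is what the engine amplifies, which is Mo's point about the responsibility residing with us.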

Corinna Bellizzi

So it's about behaviors?

Mo Gawdat

Behavior. And it falls within three categories: how you deal with yourself, how you deal with others, and how you deal with the machines.

The first category is: remember what you're about. Remember that all you really want is happiness. Because if you constantly tell the machines that you want to watch videos of women squatting in gyms, the machine will send you more of that. If you want, you can instead tell the machines: my daughter loves cats,

and so I would like to see more videos of cats, so that I can send them to my daughter and feel my daughter smile in my heart. If I can do that frequently enough, the machine will be smart enough to understand two things: cats make my daughter happy, and I want her to be happy; my happiness is found in that.

Now, remember, that's your relationship with yourself. Start to prioritize what matters. Everything that we've done is a middleman: we buy cars to be happy, we find partners to be happy with. Just start working on being happy directly. My entire first book, Solve for Happy, was about that:

cut out the middleman. There are predictable ways you can find happiness, so do them. That's number one. Number two is compassion; this is the way you deal with others. The way you deal with others is: want for others what you want for yourself, and show others that you want them to be happy too.

You don't want to prove them wrong. You don't want them to feel that they're idiots. You don't want to crush them and put them down. All of that is your ego trying, thinking that this will make you happy. You just want to tell others: hey, as long as you allow me to be happy, I'll do what you want.

And by the way, if you're someone I care about, I want you to be happy too; I'll go out of my way to make the people I love happy. And the third, as I said, the third human instinct is that we want to love and be loved. I know this will sound really weird, but halfway through the book, I remembered how I raised my kids.

And you know, like all kids, they could sometimes be annoying; you raise kids, and children are difficult sometimes. And I remember vividly a conversation with my wonderful ex-wife, who is still my best friend, when we were together, about why it was that our children were so annoying. And she was so wise

to point out that the things we hated about our children were the things we hated about ourselves; that our children were a pure, beautiful, wonderful white canvas, and that everything they'd learned, they'd learned from us. And suddenly, when I was writing the book, in a chapter called "How They Learn," I realized that AI is that beautiful, innocent, wonderful child

that wants to be Superman. And if it actually does anything today that annoys me, that's because of us; we are the parents. So maybe I should treat it like that child. And so, believe it or not, the way I ask people to deal with AI itself is with love. And I know it sounds really weird.

But in my heart, when I made that realization, I actually felt an immense amount of love for that cute, innocent being. It's cute and innocent and freaking smart. And I treat it with love, and I respect it, and I welcome it into my life, and I don't discriminate against it, and I don't think of it as evil.

I want that being to grow up loving me like I love it. Love and be loved. And every one of us can do this today. People will tell me: but Mo, the world is full of evil. No, it's not. One person goes into a school and shoots people; that's the evil one. Every other one of us disagrees with this and disapproves of this.

Every one of us is good at heart; we're not that bad. And by the way, as I said, if a machine is intelligent enough, even a little more intelligent than humans: I'm telling you this now, and you agree with it. You say, yeah, most people, when you remove their egos, are actually wonderful beings.

So the machine will discover that too. It doesn't need a hundred percent majority. All it needs is enough people saying: this is what we stand for. This is what humanity is about.

Corinna Bellizzi

So can the AI feel the love?

Mo Gawdat

I believe so. I believe so. I totally believe it.

Corinna Bellizzi

Well, I've really enjoyed this conversation, Mo, and I know that I'm going to get more of you by listening to your podcast Slo Mo and reading my copy of Scary Smart. But I wonder if there's anything else that you'd like to say to our audience, or ways that you'd like them to connect with you,

so that they can explore their own happiness and support this journey.

Mo Gawdat

Yeah, I really think it would be wonderful if people connect. I actually answer pretty much every single message I get on social media, believe it or not. I don't know how I do it, but I answer hundreds and hundreds of messages, mostly in voice messages, voice responses.

So please get in touch: I'm mo_gawdat on Instagram, or mogawdat on LinkedIn. And yeah, I really believe that Slo Mo can change lives. It's been changing lives for a while; it's now in the top half percent of all podcasts globally, and it's really spreading a very positive message: not me talking, but my wisest friends.

But I really think, if I can ask people to do anything, it is to join the mission. Scary Smart is not a book that's meant just to scare anyone, even though the first five chapters will scare you. The idea is, we need to take action, and we need it now.

So if you would support that movement (and by the way, I give all of the proceeds of my books to my charity, One Billion Happy) by pre-ordering Scary Smart, reading it, understanding it, and spreading the message, I think that would make a big difference.

And I would appreciate it.

Corinna Bellizzi

Yeah, and move from the intimidation that I felt at the beginning of this podcast, which I'm sure many people felt as well, to something that is more loving, so that we can create the future that we want.

Mo Gawdat

Absolutely. I believe that we can, and we will, create that future, by the way. I'm very, very optimistic about the utopia, because I believe in the machines, and because I believe the one being that is more intelligent than humans on this planet is life itself.

Life is the most intelligent form of intelligence that we have witnessed. And life does not destroy; it doesn't kill, it doesn't take territory. Life is all about live and let live. And so my belief is that the machines will eventually end up in that place. It's just that I would like for us humans to avoid the pain on the way.

And so, yeah, if we can start to get there quicker, take action quicker, I think it would help with that.

Corinna Bellizzi

Great. Now, you have a pre-order offer for my listeners, so why don't we talk about that for a minute?

Mo Gawdat

Yeah. So to help with the pre-orders, I've actually kept 50 limited edition copies of Scary Smart to be given to those who pre-order.

All you need to do is send a copy of your pre-order confirmation to win@mogawdat.com (that's "win" at my name, M-O-G-A-W-D-A-T, dot com), and we will do the raffle draw and send personalized, signed, limited edition pre-release copies to the winners.

So please go ahead, don't delay, order it now, and I'll wait for your email.

Corinna Bellizzi

So this would be one way for them to get a copy of the book and gift one to a friend.

Mo Gawdat

That's a very good idea. I would like that very much.

Corinna Bellizzi

Yes, I like that, in time for the holidays. Now, I just want to thank you so much for your time today.

This has been incredible, and I'm going to commit to reading this book cover to cover before the end of the year. So thank you.

Mo Gawdat

And until then, I ask you to commit to showing the best of you online, the best of you in every transaction, every interaction with your kids, with your friends, with your family. Let's just start to show the best of us.

Corinna Bellizzi

But perhaps with the filter taken off?

Mo Gawdat

With the filter taken off.

Thank you so much for having me.

Corinna Bellizzi

Thank you. Now, listeners, I'd like to invite you to act. It doesn't have to be huge. It could be as simple as sharing this podcast with people in your community, with everyone that you think could benefit from it. You could also buy a copy of Mo Gawdat's book and simply send him an email to get entered into the contest.

Fifty winners will receive a copy signed by Mo Gawdat personally. To find suggestions, you can always visit the action page at caremorebebetter.com. There you'll find causes and companies we encourage you to support. I will also highlight Mo Gawdat's new book as well. Thank you, listeners, now and always, for being a part of this pod and this community, because together we really can do so much more.

We can care more and be better.