AI Innovation and Impact: A Conversation with Noah Kenney


Introduction

Brad Banyas:

All right, everybody. Welcome back. You're listening to Play the King & Win the Day, and today's guest is a powerhouse in the world of AI and tech innovation. He's the president of Disruptive AI Lab, the creator of the Global Artificial Intelligence Framework, and the former founder of a successful digital agency acquired in 2023. Whether he's building AI to diagnose pneumonia or shaping ethical standards for global tech, Noah Kenney is at the forefront of future-thinking technology. So let's dive into a conversation that spans innovation, impact, and integrity. So, Noah Kenney, welcome to Play the King.

Noah Kenney:

Thanks so much for having me, Brad.

Brad Banyas:

Yeah, so I was really excited, Noah, just because of your background—obviously, from delivering technology in a commercial space to being a scientist and academic and doing things in healthcare. I'm really excited about your perspective on what's going on in the AI space and what's possible, not only from research or healthcare but just in the general public. So, tell us about yourself, man.

From Marketing to Ethical AI Leadership: The Journey of Noah Kenney

Noah Kenney:

Sure. Yeah. So I appreciate the intro. As you mentioned, I've kind of worked in a few different fields or sectors here. About half my work is in research and development on the AI side, mostly in what I refer to as high-risk and regulated spaces. So I do work in health care, I do work in the financial sector, and in education.

And these sectors present really interesting problems, because mistakes in health care, for example, have a cost in terms of compliance, but they also have a cost in terms of human life. And so these are some really challenging problems in AI, and I work on the research and development of some of those tools. A lot of the other work I do, kind of on the AI side, is in privacy engineering—so the kind of emerging field of AI privacy engineering. How do we make sure that these models are safe and secure for us to use?

And then, on the opposite side, I do work in consulting for marketing and IT, getting to work with a lot of business owners and executives, where a lot of that work has now shifted to AI and automation, right? With clients wanting to figure out how to utilize these technologies. So there's kind of been a shift in the work I've done, from formerly doing a lot on the marketing side and building websites and things like that, to a lot more today on the research and development side of AI and solutions engineering for clients. So it's been an interesting shift.

The Interface That Took AI Mainstream: Why ChatGPT Worked

Brad Banyas:

Yeah, well. It's like you and I were talking about before. I mean, until ChatGPT kind of got launched into the world, the general public didn't really know what AI could do for them, right? I mean, now it can write their term papers or their podcast notes or do some of those things, but there are really some deeper things going on that are going to impact the world with AI, and I know you're involved in those.

Noah Kenney:

Yeah, it’s fascinating, right? Because there’s this new technology that comes out and everybody goes, “This is revolutionary.” And it’s like, well, really the technology has existed for a lot of years; what has changed is the user interface. And I think, that’s the most brilliant thing ChatGPT did (and doesn’t get nearly enough credit for): taking a very technical system and putting this really user-friendly front end on it, right?

That’s what really allowed the mainstream public to utilize this technology. If ChatGPT had a complex user interface, if you had to run it in your terminal, we probably wouldn’t have seen anywhere near the user growth that we did see. So I think that was something truly revolutionary: just taking AI and putting it in something that didn’t look scary to people.

Addressing AI Fears: Balancing Risks and Opportunities

Brad Banyas:

Right. Well, everybody thinks there's a lot of fear around AI. I get home to my wife or whatever, and it's like, oh, AI is going to take over the world, or the cyborgs are coming. So, I mean, I think there's a lot of confusion around the technology, what it can do, what it will do.

Do you have any thoughts on that, just for the general public that's worried that the cyborgs are going to oversee us? Is there a lot of truth to that? Is it just kind of fear because people don't understand kind of how the technology is being used?

Noah Kenney:

Yeah, so I kind of have a middle-of-the-road type of perspective, but I'll explain just a little bit of how I got there. I really hear extremes in my job. I've heard some people say AI is going to benefit us in all these ways; it's going to be an incredible technology, and it's only going to do good things. Then I hear the opposite side, which is that AI is going to kill us all and destroy humanity.

I've landed somewhere in between these two perspectives. People talk about dangers of AI, and there are a lot of different types of danger, right? I kind of want to break them up a little bit. We have physical danger, which is AI actually causing us some form of real-world harm. We have the danger of misinformation, meaning information being conveyed to us in a manner that's not fully accurate. We have data-privacy risks, and we have economic risks and harms. So there are a lot of different types of danger that we can see. I would put danger in quotes when referring to AI models, and some of these are more valid than others. I think, in terms of AI actually destroying humanity or causing us immense harm, the risk is very low.

Regarding some of the other forms of danger, such as the risk of misinformation, that is a more valid concern as we see AI being incorporated into social-media algorithms, news outlets, and similar places where we get information from AI models that inherently carry some form of bias. There is danger that comes with that.

That's a danger we've lived with, right? Every person has their own bias. What's different with AI is that many of the biases are the same because the models are developed using similar data sets. By the time the mainstream user ends up seeing the output of that model, if that is all that is known as truth, we do have an issue with bias there.

We have economic dangers, right? And people worry about losing their jobs. I'm honest about it: there are going to be jobs that are replaced by AI, absolutely. But people talk about that happening and everybody losing their jobs immediately, and I think the vast majority of people are going to be able to live the rest of their careers working in the job they're currently in.

If you look at trucking as an example, this is an industry where there's a lot of work to replace truckers with AI-powered, level-five autonomous vehicles. There are people who are pushing back against that, saying we are going to lose these career options, but the counterpoint is that trucking is a very hard industry. It's hard on your body. The life expectancy of a trucker is significantly lower than the general population's. They work, on average, 60 hours a week, they spend very little time at home, they sleep in their cabs, and they eat junk food because it's all that's available on the road. It's a tough job.

What I really think is going to happen is that anybody who's currently a trucker who wants to be a trucker for the rest of their life has the ability to do so as we slowly replace fleets of vehicles with autonomous trucks. But we probably won't raise up another generation of truckers; we'll probably have the next generation be autonomous. So anybody currently in that field has the ability to stay in that field, and for the next generation, we raise up a team of people to work on building, testing, and deploying self-driving and autonomous vehicles.

And so, when we do that, those are better jobs, right? So, is there a danger of losing your job? Potentially. But I think if you want to stay in your job, you're not going to be forced out by AI in the foreseeable future. Eventually, we'll have additional career options that are probably actually better, easier-on-your-body options. So, a lot of the dangers I think do exist in some capacity, or fears that people have, are valid.

But maybe overstated, right? And that's where I say I kind of have that middle-of-the-road approach.

The Roadblocks to Autonomous Vehicles: Trust, Liability, and Adoption

Brad Banyas:

Yeah, which I think is smart. I mean, stay to the center and look at kind of the extremes on both sides. But I mean, you could argue that. I mean, autopilots have been around for quite a while for commercial airlines. I mean, you can land and take off in an airplane without the pilots assisting.

So, I mean, some of that's been going on for a while. It's just whether the 300 passengers will feel comfortable that there's no one up front, right? And so some of that's been going on for a while.

Noah Kenney:

Absolutely. Yeah. You know, what’s really the challenge? People have asked me several times why do we not see self-driving vehicles more at scale, right? Effectively, why are they not out there? We could go into the weeds of that, but at a high level, this is a challenge with regulation and a challenge with user adoption. That’s been the case with AI for a lot of years. I kind of alluded to it when talking about ChatGPT earlier.

Obviously, there was a lot of technology behind it, too, but we see the same thing across the board. Self-driving vehicles, statistically per million miles, are now safer than a human driver, and yet every time one crashes, there’s a news article written about it. We don’t write a news article about every accident between human drivers; we write news articles about the really serious ones.

Brad Banyas:

Right, right.

Okay. Well, it’s 100%. If you drive a vehicle, it’s a 100% chance, statistically, that sometime in your life you will be in an accident. So that’s 100%.

Noah Kenney:

Correct. Yeah. So we have to get to the point, if we're going to have wide-scale adoption of this and trust in it, that we don't write a news article every time one crashes, right? That every time one crashes, there isn't a call to completely change the regulation. And that's the same thing we see now with human drivers. When there's an accident, there's almost never a call to change regulation, right? People aren't saying, "If we had 20 more airbags in the vehicle, then maybe ...."

We’re kind of okay with the way vehicles are, right? We’ve gotten comfortable with them. We trust them enough to recognize these things happen, and it doesn’t mean we don’t try to create safer vehicles, add blind-spot monitors, and do other things to improve vehicle safety. I’m not suggesting otherwise, but there is a very different perception in the way people view AI, whether consciously or subconsciously.

Where they don't trust it yet. And you have to trust the technology before you can have wide-scale adoption of it, right? So that's a difference, yeah.

Brad Banyas:

Absolutely. Yeah, and it impacts the whole supply chain just around the current vehicles. I mean, let's take drunk driving, for example, right? I mean, it'd be amazing if you're out drinking and your autonomous vehicle just comes and picks you up and takes you home. Well, if we're all doing that, the DUIs would drop to zero, right? Well, behind that, there are bail bondsmen, there are attorneys, there's insurance.

There's this whole set of supply chains behind each individual industry, and there's a lot of money being made back there. Some of these things, if you move to these autonomous cars, will wipe out industries that make quite a bit of money off DUIs. So it's interesting. I like to drive.

I just like the feeling of driving. I like it. So when I see people who can do that, I'm like, don't you want to drive? Don't want to, you know, so.

Noah Kenney:

Right. Yeah, not in the same way, right? I think that for a long time, at least there's going to be a choice, right? In the marketplace. And I think that's a good thing. But there's a whole bunch of other things too that, if you look at, say the legal system, and you go, well, who is responsible if an autonomous vehicle crashes, right? Who carries the liability for that?

These are the kinds of questions that are not yet answered in a legal context. And we see similar questions in terms of copyright, right? With AI models. And yeah, there are a lot of questions that are not yet answered that will take years to really get answers to in a concrete way, and a lot of them will probably end up going through the Supreme Court in the United States to determine the answers.

And so, until we have some of those answers, it becomes really hard for certain AI technologies to scale in a widespread, mainstream manner, because if they did, there just wouldn't be the systems and the infrastructure to support it. And even the road system, right? If you look at the road system, it was designed for human drivers, not autonomous vehicles.

Intellectual Property and AI: Navigating a New Landscape

Noah Kenney:

There are some countries, for example, where they're looking at actually changing road systems to not use road signs, to not use colors, and to use different forms of signals that are easier for autonomous vehicles to interpret. That requires a large portion of the population to kind of get on board with that idea, right? And so buy-in is critical here. Yeah.

Brad Banyas:

Absolutely, and I'm really interested—even from an intellectual-property perspective. We're in the software business, consulting business; we create things that are IP, and that's value, right? When you get into it, I've seen some of the things across different industries, like the music industry and publishing industry—people are using AI to help write music, write an article, or write a book, right?

So the real question is, okay, is that just because I typed in “Write me a book on Brad Banyas”? Is that mine? Is that truly my creation? Because, in some ways—unless I go in and change it and do a lot of things—it's really not. And I know the music industry is getting in an uproar because you can have some of these AI tools write a song, lay the music to it. And so it's really interesting to see how that's gonna impact culture from an artist's perspective. I know we're talking tech typically; it's scary to some of the artists who spend their time doing that.

Noah Kenney:

It really is. Yeah, I think there's the legal question, right, which we could talk about—the legal side of who owns the intellectual property and who should own the intellectual property. But there's the second side of it, which is: do we want an AI to be able to own intellectual property? And I think what really comes behind that is that most of these AI models are actually trained on user-generated content.

And so ChatGPT is trained on Wikipedia. You have Grok, you have Claude, you have other models that are trained on Twitter—now called X; you have them trained on Reddit; you have them trained on Quora. And so this is all different forms of user-generated content that are training these large-language models. Is that problematic? No, except that people need to be incentivized to create user-generated content.

And so if you look at content that's created, say The New York Times is publishing content—they're publishing with the expectation that they can monetize it through advertisement. If they can't monetize it through advertisement, then they will no longer create content, right? And if they no longer create content, then what do we feed into large-language models? The same is true with artistic works, right? We have very creative people who have spent a lot of years honing their craft.

And they're creating these artistic works. If ultimately those works are not going to be monetized and they end up in the output of some large-language model, then people's incentives to create this form of art are going to be severely diminished. At that point, we may just start to lose a portion of it, right? So it's similar to what we've seen with handmade goods, where there is a marketplace for it—you can see it on Etsy, and you can see it in small farmers' shops—but it's not nearly as mainstream as it was before, right?

It used to be that there was only a market for handmade goods, but because the incentives were too low (the purchase prices were not high enough), people switched to mainstream manufacturing, and we kind of lost some of that craft: carpentry, pottery making, and other things like that. It's slowly come back in a very niche market through marketplaces like Etsy. So I think we're kind of gearing up for the same sort of thing with AI models.

The Human Element: Storytelling in an AI-Driven World

Brad Banyas:

Yeah, I think from a cultural perspective, or just from a humanistic side of it, I've kind of come to the point where I think it will make us go back to a lot more human interaction, and that interaction will be valued a little bit more because it's real, right? It's real. Your conversation's real. Like this is real. You and I, this is not AI-generated. You and I are talking. You and I are exchanging ideas.

And I feel organizations are moving more to where they're trying to bring more people into a group, more people who have similar ideas or whatever. I see what's been devalued actually becoming valued again. The art of storytelling, in my mind, has been diminished and pushed down and down, but the art of storytelling and interaction, with all of this going on with AI, is going to be even more valued.

So to your point, if you can crochet me a blanket, one day I'm going to want that, because to me it really does mean something more on the human side.

Noah Kenney:

Absolutely. Yeah, we see this even on social media right now, right? If you look at LinkedIn, you can see such a high percentage of content that's generated by AI, and it's factual content, right? There's nothing wrong with the content. But the content that's getting engagement is content where people are telling stories. It's content where people are giving pictures of their life and funny anecdotes.

And I've done work with a lot of nonprofits, and what we see is that every nonprofit has some form of mission at the core of it, but content talking about the problem they're trying to solve is not nearly as effective as stories from the people they've impacted, right? Saying, "Here's how my life was changed." So, ultimately, as people, we desire that human connection, and AI is not going to replace that, right? Hard as it may try. It can help us, and, in some ways, it can actually free us up to focus more on that human side of things, right? So, definitely, benefits there, but it has its place, and we, as humans, have the ability to do something that AI can never do, which is to connect and to tell stories in a human way. So, yeah.

The Bias You Don’t See: Why AI Isn’t as Neutral as You Think

Brad Banyas:

Absolutely—and I’m not that fearful of it. I think it will actually force you to communicate more clearly and be more truthful, to be honest with you. You were alluding earlier to the biased feeds in the learning models. I mean, that happens today with the media—depending on what side you’re on.

What side are you on—the left, the right, the middle, or whatever they want to call it? I mean, you’re already being fed things that are biased anyway. You have to discern. I don’t see much difference in that, except the volume and the scale are a lot worse from an AI perspective. But common sense, hopefully—and sometimes, as my dad used to say, “Brad, common sense is not common,” you know.

Noah Kenney:

Right. Yeah, it’s largely the same, right? I think the big difference I see is that we’ve existed in a culture where we know what biases we’re being told, and we know—based on the media outlet—that they tend to swing this way or that way. With AI models, what’s fascinating is that we don’t necessarily know the bias of the model, and we kind of assume, when using it, that it’s neutral—and it’s not neutral.

So there is a form of hidden bias that comes in here. Part of that comes from the data we have; part of it comes from the developers. There’s a general lack of diversity among the developers of AI models, and that leads to bias at the foundational level of the model. If you’re using a model that has foundational-level bias and you’re unaware of it, then what ends up happening is the information you’re seeing leans one direction or another, and you view it as the middle.

That's happened with social media too, right? It's kind of an echo chamber of your own ideas being bounced back to you—you think everybody agrees with you or everybody disagrees with you, depending on which content you follow. It's very hard to get neutral. So, if you ask, "How do we find something neutral?"—can you name one completely neutral news source? Most people say no. AI models are the same way: you can't find a model that's completely neutral, and on top of that, you don't know its bias. That's challenging because, if you know the bias, you can counteract it to some extent. Whether you care enough to counteract it and find alternative viewpoints is another question.

Brad Banyas:

Yeah, absolutely. I always tell everybody, “Don’t argue with a bot—it doesn’t care.” That may not even be a person behind it. You felt good because you expressed yourself and said you didn’t believe that opinion, but the reality is that it may not have been a human. So save your arguments for someone in the same room—it might be more effective. Then you can get your point across.

Noah Kenney:

Exactly—choose your battles, right? Some people want to argue for the sake of arguing, some to express their ideas, and some to change other people’s ideas. If you argue with a bot, you really can’t do any of those three—except share your own ideas.

Brad Banyas:

Yeah, absolutely. And maybe make some other bots madder.

What Matters Most: Noah Kenney on Using AI Where It Counts

Brad Banyas:

So what excites you right now? You’re doing so much, and I really want to get what you’re excited about right now, because I know you’re still, like, 50%—you said academia, research; 50% commercial product—helping people out there in the commercial world figure out what to do with this, which learning models to use. What excites you right now? I mean, obviously, it’s not new, but it’s new from an adoption and kind of an explosion perspective. So there’s gonna be unlimited opportunities around AI, applications for AI. So what excites you?

Noah Kenney:

Yeah, I think the most exciting thing to me is that we’re seeing AI being used for higher-impact tasks than historically it’s been used for. I think it’s cool that we can generate content with AI—that’s what a lot of people spend time talking about—but there are a lot of really impactful tasks that can be done outside of just content generation.

A lot of my work has been in some of those spaces, and as I mentioned earlier, these high-risk and regulated spaces. So, if you take healthcare, for example, being able to do medical diagnosis using AI—and some people say, “Why do we want to replace doctors?”—and it’s like, we don’t want to replace doctors, right? But here are the statistics: one is that you have people in third-world countries who don’t have any access to modern medicine.

A lot of them are taking pictures of some form of medical imaging, sending them to doctors in the U.S., and waiting three days for a response. That's not a good patient-care experience. So we have that issue. We have the issue of the cost of medicine currently, and when I say medicine, I'm not necessarily referring specifically to medications, but to the cost of medical care, which is extremely high. Then we have accuracy, right? If you have, say, three conditions at the same time, that specific set of three conditions may be something your doctor has never seen before, and it makes it very challenging to diagnose. We have the ability to see patterns, which AI can do. If you look at something like COVID, where we have a global pandemic and we're saying, "Okay, this is a unique set of symptoms that is manifesting in people, and we have these markers to track it"—well, you can start to see the pattern of how that is evolving across the entire United States from a centralized source in an AI model versus individual doctors seeing one-off things and operating in a silo. So the advancement of medical research, when you have these AI-assistive technologies, is much faster.

Patient care is much faster, accuracy is much higher, costs are reduced, right? And then you get doctors away from doing all the admin tasks. They’re spending time doing higher-impact things that they desire to do—focusing more on patient care. We’ve all had the experience of going to a doctor and having horrible patient care because the doctor is an hour late to the appointment because they’re booked back-to-back, right?

And wouldn’t it be great if the doctor could diagnose you quicker and actually spend a little longer in the room answering your questions? So I use medicine as an example there because I think it’s one of the highest-impact areas, but across the board, just seeing AI used in really high-impact tasks is super exciting.

Brad Banyas:

Yeah, on the diagnosis side, people overlook that. I mean, you could have something that mimics symptoms of COVID, but it was very interesting how flu cases went to about zero during COVID. I'm not a scientist, so don't take my word for it—I'm not "Dr. Brad"—but that's almost impossible: you have 40 million flu cases, and the next year you have less than a million or whatever it was.

That's a misdiagnosis in some cases, right? Unless the flu just decided it's gonna take a back seat to COVID this year. Things like that—having AI to really say, "No, this guy has the flu," right?

Noah Kenney:

Right, absolutely. But anyway, sorry, I had to put that in. And just to improve confidence, right? When a doctor is diagnosing something, maybe something they haven't seen before. A lot of my work has been in the medical field. I've done work on pneumonia diagnosis, moved into COVID diagnosis, and preliminary cancer screening using medical imaging. In the work we did on pneumonia (and this is very common; it's common for people to have pneumonia—we're not looking at some crazy, complex diagnosis), even so, in the United States—very medically advanced—at the general-practitioner level, the diagnosis accuracy for pneumonia is about 55%. At the radiologist level, it's 86%. And this is in the United States. If you look in third-world countries, pneumonia and misdiagnosis of pneumonia are the leading causes of death for children under the age of five. But this is a very treatable condition, and it should not be taking nearly as many lives as it is.

Unfortunately, misdiagnosis is very prevalent, and that includes misdiagnosis from highly qualified doctors in a medically advanced country like the United States.

So, to have an application where you can scan a picture and get an instant diagnosis from a chest X-ray does provide a lot of value, both to the patient and to the doctor for confirmation. It helps in terms of liability: if a patient is misdiagnosed, doctors get sued; on average, doctors are sued more than twice in their career for malpractice or negligence. That's not something doctors sign up for, and the vast majority of them are well-meaning.

They're here to save lives, and they're getting sued for malpractice. If you have some sort of assistive tool that can confirm that your diagnosis was the best based on the available information, that gives accountability to doctors on the patient side and also helps the doctors on the liability side. So I think these tools really do help both sides, and hopefully there's adoption. Yeah.

Brad Banyas:

Lowers insurance costs for average people, right? Lowers healthcare costs.

Noah Kenney:

Absolutely, right. Yeah—why is it so expensive, right? My background: I trained as an economist, and if you look at the economy as a whole, insurance has one of the largest impacts in the global economy and in the domestic economy. So it is a big deal. And if you can start with medicine, that is a very impactful task.

It impacts longevity of life; it impacts your overall happiness, but it also impacts the economy, and that circles back to everybody’s job. Whether you have one of the conditions that AI is assisting in or not, these are important technologies—and there are other industries too: finance, education, other high-impact industries, housing, where we’re seeing AI utilized. So I think it’s super exciting.

Brad Banyas:

Yeah, that's great. I mean, it's really interesting to see you across all these different verticals and areas. That's why your perspective, to me, is valuable: you can find someone you can trust with a perspective that spans all of these different fields, right? Not just pushing one angle or trying to find the best widget to write my PowerPoint better. I mean, there's actually a deeper thought process to it.

The Truth About Agentic AI: How Autonomous Are These Systems?

Brad Banyas:

So, from a business perspective—since a lot of our audience are business people, entrepreneurs, executives, IT-type people—what are some of the things? I’ve been trying to get up to speed on agentic AI, right? When to pass off a task to an agent—an AI agent—and when to use a customer-service rep, or a support person, or a technician. Are you involved in any of the agentic-AI work in those areas?

Noah Kenney:

Yeah, I am, to some extent. It’s funny you ask—I did an interview yesterday at a live event, and they were asking, you know, “Do AI agents actually exist today? How do we define that?” Maybe I should start there and talk about what these are and whether they exist.

A lot of people define the term differently. I use agentic AI to refer to a primarily autonomous system, with very little human involvement, that’s able to make decisions and do so in a multistep manner. What do I mean by that? Well, imagine you have five steps to execute a decision: agentic AI is capable of doing all five steps without any human involvement and getting from the start to the destination by itself.
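
To make that five-step idea concrete, here is a minimal sketch of what a fully autonomous, multistep loop would have to look like. It is illustrative only: the plan() and execute_step() helpers are hypothetical placeholders, not any real product's API.

```python
# A minimal sketch of a multistep "agentic" loop: the system plans its own steps
# and executes them end to end with no human decision in between. The plan() and
# execute_step() helpers are hypothetical placeholders, for illustration only.

def plan(goal: str) -> list[str]:
    # A real agent would use a model to decompose the goal into concrete steps.
    return [f"step {i + 1} toward: {goal}" for i in range(5)]

def execute_step(step: str, context: dict) -> dict:
    # A real agent would call tools, APIs, or a model here to act on the step.
    context[step] = "done"
    return context

def run_agent(goal: str) -> dict:
    context: dict = {}
    for step in plan(goal):  # all five steps, start to destination, autonomously
        context = execute_step(step, context)
    return context

if __name__ == "__main__":
    print(run_agent("resolve a customer refund request"))
```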

Today, I think we’re still a ways from truly seeing that. Many tools claim to be AI agents, but they break down because, first, they still operate within specific “context windows.” By that, I mean they can function only within the limited confines of tasks. For customer service, for example, they handle tasks for a specific type of business; as long as you stay within that narrow scope, they do okay, but it’s still narrow.

Real-world autonomy requires situational adaptability and self-assessment. By situational adaptability, I mean the ability to be thrown into an environment that didn’t exist before and respond appropriately. Let me give you an example, because that’s abstract.

In the financial sector, AI models—and so-called agents—are currently trained on the historical economy, on its ebbs and flows. If something happens today that completely disrupts our entire economic system—so that today’s economy looks nothing like yesterday’s—every financial advisor in the United States will look at that and, if you call them, might say, “We don’t know how to respond.” They certainly won’t rely only on historical data or the context window of their training.

Brad Banyas:

Right.

Noah Kenney:

AI models aren’t there yet. They don’t have that adaptability or self-assessment. They rely on what they’ve been trained on. Good experts rely on education, experience, and intuition—intuition that comes from both. They can say, “I just know this isn’t going to work,” even if they can’t fully articulate why.

When we look at agentic AI, I use this example: Say you’re posting content on social media and want to post on LinkedIn. You might ask a model, “Come up with 10 topics related to software development for a LinkedIn post.” It gives you 10; you pick one and say, “Write about number seven.” Then you add, “Write it in a friendly tone—here’s my background, here’s the word choice I like, here’s what’s trending.” It writes the post. Then you revise: “Remove some emojis; make it more concise.” Finally, you choose what time of day to post.

Time of day affects performance; edits affect performance; hashtags affect performance.

When you tell the model, “Use hashtags,” you decide to use hashtags—you made that decision. People think the model does all the work, but you decided to post, picked the topic, chose the hashtags, picked the time, and so on. This isn’t an autonomous system; it’s not truly agentic. That doesn’t mean it’s bad or useless—just that, for clarity, many tools claiming autonomy aren’t actually autonomous.

The only areas where we’re getting close to true autonomy are specific sectors like self-driving vehicles. But even there, we ask: Do we want full agency? We also talk about wanting control over AI models. Do we really want to be locked in a self-driving vehicle with no override button? Do we want it posting content without us reading it first, with our name attached? So the question becomes: How much agency do we actually want these models to have, and is full autonomy really the goal?

How to Use AI Wisely: A Practical Framework for Business Leaders

Brad Banyas:

No, yeah, I love it. And your background too, it's funny because we're in the tech business like yourself, but you don't know, like, there's a lot of money that was put into autonomous cars too, by the way—I don't know how much, but a lot, right?

And so, you know, everyone who comes out with the next agentic AI is going to take over all your customer support, whatever.

You don't know how much of that, just like you were saying, is really kind of hype, because we all want to believe it; we want our time back. Sometimes we don't want to put the effort into it, so that would just be great: you can get rid of these 20 call-center people, and that's awesome. Nobody's complaining about the guy going on vacation, or being sick, or whatever it is. But the reality of that is like what I'm hearing you say; I always have doubts about it too. Maybe it'll get there one day, I don't know.

Noah Kenney:

Right, yeah. It's crazy, too, how easy it is to break really sophisticated systems. And so, you look at autonomous vehicles: if you pour salt around an autonomous vehicle in a circle, it will not drive over the salt. The reason for that is that it views it as a white line, right? It views it as a line it's not supposed to cross. And so, you can make an autonomous vehicle stand still by pouring a circle of salt around it.

Brad Banyas:

Everyone out there right now who's against Tesla is gonna go get a bunch of salt.

Noah Kenney:

Yeah, well, it's crazy though, right? Like how sophisticated the system is, and yet there's no way to just tell it, "No, this is salt and you can drive over it." Right? So there are things like that: as an engineer, you can't possibly think of everything, and you'll still miss something, right? And it's always something like that. And so, what do you do? When they test autonomous vehicles, they'll put the vehicle there and they'll simulate a car moving into your lane by putting a wall—a movable wall—and pushing it closer and closer to the vehicle and seeing how the vehicle adapts, right?

And I was sitting in a meeting where I was kind of consulting on this, and I asked the guy, “What happens if you take a wall and you put it on both sides of the vehicle and you move it towards the vehicle at the same time? So, does the vehicle stop? Does it speed up? Does it stay still and just get smushed? What does it do?” And he said, “We've never tested that.” And I said, “Well, what should it do?” And he goes, “I have no idea what it should do.”

That’s a really critical point, I think, to make, which is that before the thing can do what it's supposed to do, somebody has to make a decision about what it is that it's actually supposed to do. And this is true in businesses, too, right? And a lot of your audience can maybe relate to that. It's like, well, we want to use AI, but then they kind of evaluate the model and they're like, “Well, I'm not sure what we think of the output.”

And it's like, well, before you use AI, I kind of have a framework: figure out why you're trying to use AI, right? And the framework there is effectively like this: does this improve my customer's user experience? Right, that's first and foremost, because a lot of people get AI because, to the point you made earlier, it saves time, it's going to save money, it's going to improve efficiency.

Well, none of your users or customers care about the business benefits you experience from automating a process. So, first and foremost, the question is, does this create a better or worse user experience? And if the answer is that it creates a worse user experience, then it doesn't really matter what the business benefits are—you probably shouldn't implement the system.

Now, a second thing: let's say that it does have some sort of benefit to the user. Now you look, and you go, well, does it have business benefits?

If the answer is yes, and assuming it's not cost-prohibitive, it's a clear choice that you want to implement the system. If it doesn't have business benefits, you may still want to implement the system; it may just be that you're paying for a more premium user experience, right? Now users can contact your business 24/7 instead of only from nine to five—that's worth paying extra for. So, figuring out first: should we implement AI?

And for a lot of people, that means not taking the approach of “We want to use AI and we're not sure how.” It means, “We have this problem, and is AI the best solution to solve the problem?” And if the answer is yes, then it's worth implementing it and using it, right? But before you implement and use it, you have to predetermine what that goal is, what the objective is, and what the success measure is.
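
As a rough illustration of that ordering (user experience first, then cost, then a predefined success measure, then business benefit), the checklist can be written down in a few lines of code. The criteria names below are paraphrased from the conversation, not a formal methodology.

```python
# A rough, illustrative encoding of the adoption checklist described above.
from dataclasses import dataclass

@dataclass
class AIProposal:
    improves_user_experience: bool
    has_business_benefit: bool
    cost_prohibitive: bool
    success_measure_defined: bool

def should_implement(p: AIProposal) -> str:
    if not p.improves_user_experience:
        return "No: a worse user experience outweighs any business benefit."
    if p.cost_prohibitive:
        return "No: the cost outweighs the benefits."
    if not p.success_measure_defined:
        return "Not yet: define the goal, objective, and success measure first."
    if p.has_business_benefit:
        return "Yes: clear user and business case."
    return "Maybe: you are paying for a more premium user experience; quantify that."

print(should_implement(AIProposal(True, True, False, True)))
```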

And look at all metrics, right? People look at the metrics, and they go, “Well, it promises this for accuracy.” Yeah, it promises that for accuracy, sure—but what else comes with that? Maybe the accuracy is really high, but the reliability is really low, right? Maybe the time to respond is much faster, and it's like, yes, it does save your team time, but actually customers are not as satisfied dealing with an AI bot as they are with a human, right? It's not effectively solving problems.

So, in that case, you have to look at all the metrics, and what every AI tool tries to do is pick and choose a metric to sell you on—and they go, it's accurate, it's reliable, it saves you ten hours a week, you know, 95% of questions can be answered by it. That doesn't mean the questions are answered well, right? Who determines whether it was a good input or a good output?

And so my recommendation is, if you're trying to create a ChatGPT bot, let's say, or use some form of automated help-desk tool, customer support is a big use case of AI—track every question that you get coming into your business for 30 days. Take those questions, document them in an Excel spreadsheet, the question and how you answered it, and now feed those inputs into various AI models and see what the outputs are.

And determine if you, as a customer, not as a business owner, not as an executive, but as a customer, would be equally satisfied with the outputs. A good way to do this is to take the AI output, take your output, put them in separate columns in an Excel sheet, and have an independent team member go through and rate them without knowing which one was human and which one was AI, and really determine, is this a good tool? Does this adequately answer our customers' questions?
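
Here is a minimal sketch of that blind comparison, assuming the 30 days of questions, your team's answers, and the AI tool's answers have already been collected; the file name and column names are illustrative, not a real dataset.

```python
# A minimal sketch of the blind side-by-side review described above.
# Assumes a CSV with columns: question, human_answer, ai_answer
# (the file name and column names are illustrative placeholders).
import csv
import random

rows = []
with open("support_questions_30_days.csv", newline="", encoding="utf-8") as f:
    for rec in csv.DictReader(f):
        pair = [("human", rec["human_answer"]), ("ai", rec["ai_answer"])]
        random.shuffle(pair)  # hide which answer came from which source
        rows.append({
            "question": rec["question"],
            "answer_a": pair[0][1],
            "answer_b": pair[1][1],
            "key_a": pair[0][0],  # keep the answer key out of what reviewers see
            "key_b": pair[1][0],
        })

with open("blind_review_sheet.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "answer_a", "answer_b", "key_a", "key_b"])
    writer.writeheader()
    writer.writerows(rows)
# Give reviewers only question, answer_a, and answer_b; collect ratings,
# then unblind with the key columns to see whether the AI answers held up.
```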

And so I think a lot of people are implementing AI prematurely. And that sounds probably funny because there are a lot of people who feel like they're behind, like, “We haven't implemented AI at all.”

And it's like, yeah, but you run a hot-dog stand; you probably don't need AI. But, I mean, you'd be shocked, right? I've had companies call me and they're like, “We want to use AI,” and literally they're in industries where I'm like, you don't need AI.

Brad Banyas:

Well, it's an awesome point, and it's just good, sound business advice—what you said. But it's always like, if you know, in the tech space, you always feel like it's quick, it's moving fast, and you don't want to miss out, right? And so everybody jumps on board, and that's all anybody's talking about—the news, the media—AI, AI, AI, AI, whatever: this learning model, this learning model, this one's best, whatever. I just saw where someone was about to buy somebody for $3 billion.

But it's a big money exchange right now. Everyone knows that it's going to be very important in the future, and it's like that feeling of missing out. But you and I talked a little bit before about adoption; we can use Zoom and Webex as an example. I hate talking about COVID, but during that time, Zoom stock exploded, right? And everyone had to be on a Zoom call.

Well, there was a thing called Webex—it's been around for 20 years; it does the same thing as Zoom, right? But nothing had forced the adoption like being locked up and having to be remote.

Noah Kenney:

Yeah, absolutely. Yeah, it becomes kind of a user-adoption question in a lot of cases. And you don't have to use AI just because everybody else is, right? It's important to recognize that there are cases where people go, "Yes, this saves our team time." And I say, "Absolutely, it did save your team time."

But if you're still keeping the team members around, which is what I see happening most of the time, you actually increased your expenses, right? And maybe there are still benefits to that—maybe the team is happier, maybe retention is longer—but you have to really look at these things, and it ultimately should be a quantifiable decision where you can go, "Here are the reasons why we implemented this AI model, and here are the reasons why it makes sense for our business." And if you can't answer those questions, then you probably shouldn't be implementing the AI model, at least not yet.

Brad Banyas:

Yeah, that's great. That's great advice from an AI guru, not only from a scientific perspective but also from an academic and a business side. I think it comes down to common sense: don't get caught up in all the hoopla or hype, but understand where it can help you in certain areas and where to invest your dollars, which is smarter. It's so funny—we always feel like we're so busy.

Right? And we don't have any time, and we don't have—we're trying to peel back more time, and the reality is all you're gaining is more leisure time. So you weren't really that busy in the first place, then, and it's funny how we all think we just don't have time for anything.

Noah Kenney:

Yeah, absolutely. AI does not fix human problems and challenges that have existed, right? And so it's like, what are you going to do with the extra time that AI gives you back? Well, that's a question that has existed even before AI models, right? And people always say, “I want more time in my day.” And it's like, what do you want to do with it? And for a lot of people, they don't really have an answer, right? Or the answer is “Do more of the same.”

Can AI get us down to 20 hours a week? Sure. But what are you going to do with the extra 20 hours? And for a lot of people, it's like, well, if you are on LinkedIn and content output has tripled on LinkedIn because there's so much AI-generated content, well, you're going to spend more time reading through AI-generated content, right? So, you have to kind of determine, “I'm okay not to have all the information.” I'm okay to be out of the loop in this area, and those things exist whether you use AI or you don't use AI. Those challenges exist.

Brad Banyas:

Yeah, absolutely. If you're a big book reader or you listen to books, whatever it may be, your brain, really, I mean, the human brain, can only take so much information as well. So it's really having the ability to filter out what's important, what's not important, back to what my goal is—what am I trying to achieve here? I'm not gonna be a neurosurgeon, but maybe I wanna know a little bit about this for whatever reason, one of my family members.

It's okay to do that, but if you've got enough time to read what a neurosurgeon went through—the education—you've got too much time, but that's awesome. Good for you. Now everybody wants to be an expert. That's what Google and Google Search did—and the internet did—is, all of a sudden, everyone who has no idea of something can search something and read a blurb, and now they know it: “I know everything about AI; I read it on Google.”

Noah Kenney:

Effectively, what all these tools do is get us to more effective information retrieval. It's like, Google was replacing and putting in a digital form content that already existed in the form of books and newspapers, right? And now we're seeing it with AI, where it's taking Google Search, and now we have too much information in Google Search, and AI is condensing it down and making it easier for us to actually digest that information.

But really, what we're seeing is more of the same, right? All of these media have replaced each other with the same content. The information is not really changing, but the way that we as users are consuming it is changing—it’s getting more efficient over the course of time, right? So that's really the biggest thing that AI is actually changing: just the way that we retrieve, disseminate, and share information.

Brad Banyas:

Yeah, that's an amazing point, and that is so true. I mean, that really is so true. So, somebody had to create the information at one time. That information is held in disparate databases, as we all do in the tech world. I mean, you were doing this stuff 40, 50 years ago; it's just the speed of it.

Using ChatGPT Like a Pro: Practical Tips for Better Results

Brad Banyas:

I wanted to ask you: personally, do you use any kind of productivity hack?

For people who are out there: anything you use just as a productivity hack, any AI that you use every day. You don't have to answer that if you're deep in the research side of it.

Noah Kenney:

Yeah, yeah, I think ChatGPT is very helpful, right? It's kind of a classic answer there, but I think what's really helpful is knowing how to use it, right? And so maybe I can just give a couple of quick tips on that, because I think there are a lot of people who are using it and going, “Well, it didn't give me a great output,” right? And I say, treat it like an employee—give it the amount of information that an employee would need to have to do the task well.

And so, a lot of times you give a prompt—it's like two sentences. Well, it's much more helpful if you give it a ton of information, right? And so, how do you get it to write a good social-media post for you? Well, I would go through, scrape all of your past social-media posts that you have written, copy them into a Google Doc—at least the ones that have performed well—and input them into ChatGPT and say, “Write it in the same tone, same sentence structure, and using the same vocabulary as the words in the attached document.”

Look at the length. So if you have an average length of 200 words for your posts, well, AI models struggle to hit an exact length, but give it a range. So tell it, "I want between 175 and 225 words." You're going to get much closer to 200 words. Tell it to start with an outline.

It will create an outline because of the way these models are built—they can't really think all the way through completion; they're thinking about the next word. So if you tell it, “Come up with an outline; now write the social post based on it,” right? That helps a lot. You can tell it, “Okay, we want you to write the post at a seventh-grade reading level,” right? Why? Because it knows exactly which words to use for a seventh-grade reading level.

So, to an employee we'd go, "Make it easier to read; don't use such big words." Well, that's hard to tell an AI model, right? Because it didn't have to learn reading in the same way we did. So when we say, "Make it easier to read," it's like, well, what does that mean? It doesn't necessarily mean use four-letter words for everything; it might mean use shorter sentences, use more emojis, or use special formatting.

But if you tell it these things—like, "Write at a seventh-grade reading level," that's something very tangible; it now knows which words it can and can't use. So, giving it guidance. And then here's the other thing: the models are designed to learn your behavior. So give it feedback. This is the most critical component that I think we miss—we give team members feedback, both good and bad. If the AI model does something bad, a lot of times people just say, "Forget it," but they won't actually tell the model, "You didn't finish my task." And it's okay to tell it, "Here's where your output was lacking, but that's okay—we're gonna move on," because it now knows not to do the same thing in the future.

And the opposite is true: if it does a really good job, say, “This was really good—here’s why,” and give it five reasons why it's really good. Now it's gonna save it in the memory; it's gonna try to do those things again in the future.

And so if you really go through and invest the time in saying how much information should I give a person to do this task? If you want somebody to write your LinkedIn post for you—another person—you're going to give them your past posts, you're going to give them feedback on the posts, you're going to revise and edit it and give it back to them with comments, and eventually they're going to learn. But with AI models, we just tell it, “Write the post,” and then it misses the mark the first time, and it's like, “Well, that was disappointing.”

I mean, truly, ChatGPT for $20 a month on the premium version is the greatest productivity hack that I think is available right now. But take two days and really invest in learning how to use it. It is a skill—it's a skill to learn how to give these things feedback, how to structure your prompts: start with the goal at the top—"Here's what I'm trying to achieve." Great. "Here's my proposed method of achieving it." Great. "Here are the constraints that I have," right—what are the limitations?

And then, "Here's how I'm going to judge the quality—I'm going to test it on X, Y, or Z." And it's going to go, "Okay, well, now we're going to build something that meets those constraints." And so, if you are having it write code, tell it, "I would like you to comment the code. I would like you to name the variables in this way. I would like you to do that." It will learn. And that puts you ahead of the vast majority of people.
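
Putting those pieces together, a structured request might be assembled like the sketch below. It assumes the OpenAI Python client as one example; the model name, file name, word range, and reading level are placeholders rather than recommendations.

```python
# A sketch of the structured prompt described above: goal, method, constraints,
# and success criteria up front, stated in concrete terms (ranges, reading level).
# Assumes the OpenAI Python client; the model name and file are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("past_linkedin_posts.txt", encoding="utf-8") as f:  # illustrative file
    past_posts = f.read()

prompt = f"""
Goal: write one LinkedIn post about software development for my audience.
Method: first produce a short outline, then write the post from that outline.
Constraints:
- Match the tone, sentence structure, and vocabulary of the examples below.
- Length between 175 and 225 words, seventh-grade reading level, at most two emojis.
How I will judge it: clarity, how closely it matches my past voice, and length.

Examples of my past posts:
{past_posts}
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```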

Brad Banyas:

Absolutely, absolutely. When chatbots came on—it's been ten years probably now, it's probably been a decade—that was the thing: training them was a pain in the ass, right? It took a long time to do it, and now it doesn't take a long time, so it's amazing.

A Global Blueprint for Ethical AI: The Vision Behind the Framework

Brad Banyas:

Well, do you have anything else that you personally—I know you've talked about what you were doing in the medical, healthcare, and finance worlds, so you kind of touched on that. Is there anything that you personally are really, like, excited about? Like, what you might do next with AI? You've talked about some things about pneumonia and the medical side, but is there anything else you want to share? And if not, you can say you don't want to share it.

Noah Kenney:

Yeah, yeah. I think the Global Artificial Intelligence Framework I've worked on is something that has been a really exciting project that comes under a think tank that I founded a couple of years ago called the Ethical AI Forum. And the goal there was really, let's have good conversations about AI, let's educate people, and let's also help these companies that are really trying to do ethical development, deployment, and utilization of AI models figure out how to do that.

And what we found is—it's really hard for governments to regulate this technology for various reasons. It's really hard for businesses to self-regulate. And ultimately, what we've seen in several industries is that frameworks are really, really helpful. So a good example in cybersecurity is the NIST Cybersecurity Framework, which has now really become an industry standard, right?

And because it's become an industry standard, there are actually cases that have been enforced—legally enforced—saying this is good trade practice, right? And that lets the FTC actually enforce it. And so, I very much hold the belief that we need good frameworks to regulate AI. And so the Global Artificial Intelligence Framework, or GAIF for short, pairs up industry experts in virtually every industry with computer scientists.

We have them effectively sit in a room—whether virtually or physically—and say, "Okay, I'm working in biomedicine; here's how AI could help me, but here are the constraints, here are the concerns we have, here are the compliance challenges we face, here are the data-privacy concerns," and the computer scientists and AI engineers, together with the industry experts, sit there and collaborate on what a framework looks like that can actually be implemented.

And we're writing it down, and the result is a framework that's about 600 pages long that goes through general guidelines for AI development, deployment, and utilization, but also industry-specific ones. And it's meant to really help provide guidance for people who are trying to figure out: should we be using AI within this industry? How do we anonymize data before we upload it? How do we handle private customer information?

And so all the people who are really trying to use this technology in safe, ethical, and responsible ways—I think this framework gives them a really good starting point to do that. We've released a couple of sample sections of it, but we have the full framework coming out later this year.

And so that's been a lot of work to compile and standardize across industries that I know nothing about, but I'm really excited about that framework and just what I think it will be able to help us do, in light of a lack of regulation: to make sure that we have some standards and, at a minimum, a shared language for how we talk about risks and harms coming from AI models, and bias, and things like that.

Brad Banyas:

That's amazing. If people are out there and they want to get involved, how do they get involved in that?

Noah Kenney:

Yeah, so if they go to—if you just Google the Ethical AI Forum, we have a contact form right on our website. And we also have an entire page on there—it's highlighted at the top—for GAIF. You can actually read the sample section and sign up for a waitlist where you'll get the full framework as soon as it comes out. And anybody who's interested in getting involved as a researcher can also find me on LinkedIn; feel free to shoot me a DM—always happy to collaborate.

We have over 150 researchers working on this project, and so it really is a collective effort of experts from around the globe, not just in the United States. We're working across multiple countries' legislation; we're working with the European Union's AI Act, GDPR, with state-level regulation in the United States. So we designed this to be as comprehensive as humanly possible. And I should say—humanly with AI assistance—possible. So, yeah.

Wrapping Up: Appreciating Noah Kenney’s Insights

Brad Banyas:

Well, you've been amazing, man. And I knew this would be an interesting conversation. Obviously, you bring a plethora of value. I laughed before when I told you that you've lived a lifetime in the last five years, right? I mean, all the things that you've done and where you're at. I mean, we're proud of you. We appreciate you taking the time to be on Play the King & Win the Day.

And I think it was excellent. And our fans and listeners will get a lot of value.

So, ladies and gentlemen, Noah Kenney. I don't need to go through what he just said, but Noah, we really appreciate you being on the show and would love to have you back when you guys are ready to talk about some other great things that you've done across all the industries.

Noah Kenney:

Absolutely. Thanks so much for having me, Brad. Appreciate it.

Brad Banyas:

You are amazing. I appreciate it. Take care.

Noah Kenney:

Thanks.

