Stop Treating AI Like an ERP Implementation - with Chris Gee
Companies keep approaching AI the way they approached every other tech rollout: install it, train on it, expect immediate returns. But AI isn't software. It's imperfect by design, doesn't follow a predictable implementation curve, and the gap between what leadership promised the board and what's actually happening is becoming a serious problem.
In this episode of The Trending Communicator, host Dan Nestle sits down with Chris Gee, founder of Chris Gee Consulting and strategic advisor to Ragan's Center for AI Strategy. Chris has survived four career reinventions driven by technological disruption—from watching his graphic design degree become obsolete the day he graduated to now helping organizations navigate the shift to agentic AI. His motto, "copilot, not autopilot," frames the entire conversation.
Chris and Dan dig into why AI adoption is stalling—because companies are treating transformation like a switch to flip rather than a capability to build. They explore the parallel to 1993's Internet boom and why the adoption curve is right on schedule despite executive frustration. The conversation gets practical: Chris shares how he built an AI agent named "Alexa Irving" for client onboarding, and they tackle whether doom-and-gloom predictions from AI CEOs are helping or hurting the people who actually need to use these tools.
Listen in and hear about...
- Why the adoption curve for AI mirrors the early Internet
- The $17 trillion argument against AI replacing all jobs (hint: someone has to buy things)
- How prompting skills aren't going away
- Building agentic AI with guardrails: Chris's "Alexa Irving" experiment
- Why "copilot, not autopilot" is more than a slogan—it's a survival strategy
- The skills gap nobody's addressing and why we need more brains who understand AI, not fewer
Notable Quotes
"My motto is copilot, not autopilot. I wholeheartedly believe that we are going to make the most progress using AI in tandem—where humans focus on the things that we do well and we use AI for the things it does better than we do." — Chris Gee [04:19]
"17 is $17 trillion—that's what the American consumer spends per year. 70 is the percentage of US GDP that represents. And zero is the amount of money that AI chatbots, LLMs, and agents have to spend." — Chris Gee [23:57]
"Your ability was never simply in your ability to string together words and phrases, but to translate experiences or emotions and create connection with other humans." — Chris Gee [36:44]
"It's not thinking and it never will be thinking. So if we understand that, then we understand it won't be thinking like a human." — Chris Gee [1:07:00]
Resources and Links
Dan Nestle
- Inquisitive Communications | Website
- The Trending Communicator | Website
- Communications Trends from Trending Communicators | Dan Nestle's Substack
- Dan Nestle | LinkedIn
Chris Gee
- Chris Gee Consulting | chrisgee.me
- Chris Gee | LinkedIn
- The Intelligent Communicator Newsletter | chrisgee.me (sign up on website)
Timestamps
0:00:00 AI Transformation: Hype vs. Reality in Communications
0:06:00 Human Touch vs. Automation in Service Jobs
0:12:40 Early Career Transformation & Adapting to Technology
0:18:00 AI Adoption Curve: Early Adopters and Laggards
0:23:30 Tech Disruption, Job Fears, and Economic Impact
0:29:10 Prompting and Obstacles to AI Adoption
0:34:45 Redefining Skill Sets & Human Value with AI
0:40:45 Efficiency, Productivity, and Creativity with AI Tools
0:46:20 Rethinking Work: Flexible Schedules & Four-Day Weeks
0:51:39 Practical AI Use Cases: Experiment and Upgrade
0:55:11 Agentic AI: Autonomous Agents and Guardrails
1:01:29 Autonomous Agents: Oversight, Guardrails, and Risks
1:08:15 AI Is Imperfect: Why Human Judgment Remains Essential
1:14:16 AI Quirks, Prompting Challenges, and Adoption Friction
1:19:41 Wrap-Up: Finding Chris Gee & Newsletter/Prompt Suggestions
1:21:18 Final Thoughts & Episode Closing
(Notes co-created by Human Dan, Claude, and Castmagic)
Learn more about your ad choices. Visit megaphone.fm/adchoices
Dan Nestle [00:00:00]:
Welcome or welcome back to the Trending Communicator. I'm your host, Dan Nestle. Has the phrase AI transformation become our industry's new synergy? I mean, everybody says it, nobody defines it, and most of us are just hoping we're using it correctly. We've watched every promised revolution fizzle into evolution. Social media was going to kill PR. Content marketing would make agencies obsolete. Digital would democratize everything. But what happened? The tools changed. The fundamentals didn't. We still need to earn attention, build trust, shape perception. It's the same game, just fancier equipment. You know, I'd argue that AI is different because it's not just changing how we distribute messages, it's creating them, it's evaluating them. It's deciding which ones deserve coverage. When algorithms are making editorial decisions and AI agents are conducting their own outreach, we're not talking efficiency anymore. We're talking about ceding control of the narrative, effectively, to machines. Companies are already deploying autonomous AI systems. They call them agents. They're not always agents, but they don't just respond to prompts. They actively pursue communication objectives without human oversight. The question isn't whether this is happening, it's whether communicators will shape this transformation or become casualties of it. Today's guest has survived four complete career reinventions driven by technological disruption. He started as a print designer who watched his entire career evaporate overnight when digital hit. But instead of mourning, he taught himself web design, then digital strategy, then social engagement. And by the time he reached Edelman, he wasn't just running digital, he was weaving it into corporate affairs at the exact moment reputation battles moved from boardrooms to Twitter feeds. He helped Dick's Sporting Goods navigate post-Parkland communications. He built strategies for Adidas, Citibank, Toyota. Then he walked away from big-agency safety to launch his own consultancy just as generative AI emerged. Talk about timing. Now strategic advisor to Ragan's Center for AI Strategy and board member of the Solutions Journalism Network. While everyone else teaches prompt engineering, he's already preparing organizations for agentic AI, for transformation, shall we say. What matters is he's actually implementing AI at scale in corporate communications. He's not theorizing from the sidelines. We're going to dig into what's really happening with AI transformation, what happens when AI becomes your colleague rather than your tool, and why human judgment might matter more, not less, in an automated world. Making his debut on the Trending Communicator, the man who is everything I want to be when I grow up, the CEO of Chris Gee Consulting, Chris Gee. Chris.
Chris Gee [00:02:52]:
Wow. Thank you. Thank you for having me. And thank you for that amazing intro. I write one piece of that and it's better than what I could have ever gotten. I'm glad this is being recorded so I could use that rather than what I've been using.
Dan Nestle [00:03:03]:
You're welcome to use it. You'll get transcripts, and actually everybody will, because it'll be available to the public. But I'll tell you, I'll make a confession that I've sort of made before, but I'll make it a little more explicit. That intro, maybe 10% of it, was my original writing. I mean, I used AI just like you do. And I've just been able to build and tweak and tune and optimize my own production so that it legitimately sounds like me. You know, the only complaint I have right now is that it comes out a little too long. And, you know, believe me, I shorten it before I put it out there. But, you know, we make progress as we go. I just use my own content engine to do it. It's just like, why not? These are the things we have in front of us. This is the way that we can enhance and augment what we do. And I see that you agree with me, do you not, Mr. Gee?
Chris Gee [00:04:07]:
Wholeheartedly. I mean, I'd be a little disappointed if you didn't use AI, you know, given what I do for a living. But no, I think. Look, you know, my motto, as you know, is copilot, not autopilot.
Dan Nestle [00:04:18]:
Oh, I love that.
Chris Gee [00:04:19]:
I wholeheartedly believe that we are going to make the most progress using AI in tandem, where humans focus on the things that we do well and we use AI for the things it does better than we do. And I disagree with some of our more esteemed AI CEOs that AI can do everything better than a human, because it can't, nor will it ever. I mean, I think there are a lot of people, when I get in these conversations, who say, well, it's getting better and pretty soon it'll be better. No, it won't be better at everything that a human does. It'll never be human. I don't know that I'll ever sign up for a robotic massage. Right. I don't think I'll ever let a robot be the nanny or babysitter to my four-year-old. Right.
Dan Nestle [00:05:07]:
Me neither.
Chris Gee [00:05:09]:
Right. And I don't want to go. I've been to bars and restaurants where they have those robotic waiters and bartenders. No interest. I can't argue about sports with a robotic bartender, nor do I care what it has to say about who it thinks is going to win the World Series or the Super Bowl. Right.
Dan Nestle [00:05:28]:
You know, you're talking about these kinds of service occupations and the things that I think are good because they're human. You know, you go get your haircut, or maybe I go get my haircut, I'm not so sure about you, Chris. But you go get yourself taken care of, and you want to be interacting with the person who is, you know, intimately involved with your scalp at any given time. Do you remember, I don't know if you remember because we're of an age, but remember the Flowbee? Like the vacuum cleaner attachment you put on to cut your hair? The youngins out there might not remember that. It was this thing you stick on your vacuum with blades in it, and you just kind of vacuum your head and get a great haircut. Completely, like. You don't. You didn't get a great haircut from it. We used to make fun of the kids coming to school who had literally just had the crappiest haircuts, and we called them Flowbees, you know. But you don't get a great haircut, and there's zero humanity involved with that, except for your mom kind of damaging your scalp. I think AI is no different. I mean, it's certainly better than a Flowbee, but, you know, you're not going to have those human interactions replaced, for sure. But in our world, PR, communications, marketing, the knowledge workers, the creators, there's legit fear out there. And you and I have spoken about this before. I think we're on the same page that a lot of that is being kind of inflamed or drummed up by the tech bros in a lot of ways. Right. But, you know, I definitely want to hear your thoughts on, like, do we have something to worry about? But before we get there, let me just back up a little bit, because some folks out there don't know who Chris Gee is. And, you know, I gave a good intro, but can you fill in some of the gaps? Like, you've now been out on your own since 2023, am I right? Yeah, like, almost right after ChatGPT hit the world, about a year longer than me. And I've been saying to Chris ever since I met him that, you know, I want to be him when I grow up. And he's tired of hearing me say that. I know it's just become a stock phrase. But I want everybody out there to know, like, in that year before I went out on my own, you know, there certainly weren't a lot of people doing this kind of stuff, but Chris really was a trailblazer in more ways than one, but certainly in our field. And the way that you've kind of landed in your business, and being the thinker that you are, and of course having the cred and the authority in transformation, digital transformation, all kind of seemed to really reach the perfect moment in time when you started your biz. Right. So tell us a little bit about that transformation and how you got to where you are now.
Chris Gee [00:08:41]:
Yeah. Thank you. Well, first of all, I think that you touched on it. Transformation has been a theme throughout my career. And just like you mentioned, the day I graduated from college with a degree in graphic design, ready to take the print advertising world in New York by storm. I grew up in Philadelphia, went to college in Philadelphia, but, like, I was gonna take the print advertising world by storm. And the day I graduated, my degree was essentially obsolete. I learned how to design, and I'm a little older than I look. I learned how to do graphic design the old-fashioned way, with T-squares. Yeah, exactly. You know, the little triangles, flexible curves, ruling pens, all of those things. They didn't even allow computers in our program in college. And so there are a lot of parallels to colleges saying they won't let students use AI today. You know, the professors at my university and many other universities who taught graphic design felt that's not the way you do it. You have to do it the tried and true way, and all those things. I understood what they were saying, and there probably was some merit to their argument, but the market had spoken, and we were in the beginning of the desktop publishing revolution. And I would say that when I graduated, everyone was looking for people coming out of college who knew how to use the computer, which I kind of did a little bit, but not really. So my first job, I lied. They said, do you know how to use the Mac for layout? And I said, sure, sure. And I figured I probably know more than they do. So I took what little bit I knew and I learned. I taught myself, and I practiced at night when everybody went home and everything. And I got to be an expert. And so, just from an early age, that taught me that you either adapt or you die. And unfortunately, there was this whole ecosystem of people who did marker comps, who would kind of make a brochure look like what it would look like before it was printed. There were production artists and all these people who helped make a thing a thing. And within two years, I would say from the beginning to the end of that transformation, a lot of those people were out on the street. Now, and this always happens with tech revolutions, there were exponentially more jobs and opportunities created on the back end of that transformation. But not all of those people who were disrupted initially had the opportunity or the aptitude or the training to be able to take advantage of the new world order. And we've seen it happen numerous times with other tech transformations. And I'm wary that that will happen this time. So that's a big part of what my firm does, is address the knowledge and skills and training gap. Most of what I do, despite the name of my firm being Chris Gee Consulting, probably 80% of what I do is more knowledge work. It's training workshops, virtual or in person. Increasingly, later this year and next year, I'm gonna be branching into digital training, digital self-paced courses that people can take at their leisure, et cetera. And then I do do some consulting with companies, et cetera, to help them, starting with audits and things like that, figure out how to reasonably and responsibly integrate AI into their own organization. But part of the impetus for this was me being passionate that this transformation was going to be huge.
And we are going to need as many brains as we can get who really understand how to use AI, not fewer. You know, I think that our society is going to benefit from us having more people who know how to use it. And I guess my company's doing our part to help make that happen.
Dan Nestle [00:12:37]:
There's zero in what you said that I can't relate to. It's because we came up around the same time, if you think about it, right? I mean, I remember, and I'm going to say two words here that people under a certain age are not going to recognize. And again, I'm sorry I keep making all these Gen X references for my listeners out there, but how about this one: computer lab. Does anybody remember what that is? From college and from, you know, from school. Like, if I wanted to use a Mac, I literally had to walk across campus to a room in the basement of the library somewhere, or, even worse, have to go over to where the engineering building was. Oh my goodness, that was like seven and a half miles away, you know, just to use a Mac. And I think that the game kind of changed a little bit when one of the guys in my house, we had a couple of guys who were computer science people, and, you know, they were all gaming machines at that point, right? It's like, oh, you have a Mac. What's up for Bard's Tale tonight? What about Civilization? But I took a course in, I think it was, like, anthropology for idiots or something. I don't remember exactly what it was called. It was like Anthropology 101: Evolution, or, like, 701: Evolution. It was a senior year throwaway course. And in that course they asked us to model the evolution of a fruit fly using a Mac. And, you know, we had the software and stuff, but, you know, I had experience with computers here and there, but nothing like this. And I remember getting on there and just kind of playing around and figuring out which buttons to hit and which, you know, little snippets of code I had to cut and paste into different things, and, well, shoot, I had a fruit fly. With fruit flies, you always start with a pair, and you watch how they evolve and morph, and, you know, it was the first kind of exposure to how you can extrapolate really kind of interesting insights from a small computer exercise. And look, I mean, a very, very big diversion from, I think, our conversation here. But the point is that, you know, you see these changes in technology, and, well, you have no choice. You have to cope with the changes in technology. You said adapt or die. I agree, but I think that there's expectations that we put on people. And let's think about the comms world in particular, right? Comms and marketing. We have expectations that these folks who are out there working, and have been for their whole life, are going to suddenly welcome this new amazing technology into their lives despite being told that it's a threat. But once they get it, once the light switch goes on, well, then you have a team of innovators. Then you have a team of, you know, people with PhDs in their pockets. You do have that. You do have the PhDs in your pockets. I've been thinking, though, that lately maybe we're putting too much pressure on people to be innovators, and maybe that's part of the reason why there's a little resistance to adopting. Like, I think there's dreamers in the world and there's visionaries and there's innovators, but most people are followers and doers, you know, and that's where I think we're falling a little bit flat. I don't know. What's the experience been?
What's your experience been with that in your training and as you start to bring teams up to speed, how are you seeing it play out?
Chris Gee [00:16:24]:
Well, exactly that way. I think that a lot of the headlines around AI are about how companies are frustrated with the relatively slow rate of adoption. I really think that you're right. I think that a big problem is expectations. I think the expectations were out of whack in terms of how quickly adoption would happen. We all have seen the adoption curve, and the first people are the early adopters. The people who had a Segway before anybody knew anything about Segways or before.
Dan Nestle [00:16:57]:
They went on fire.
Chris Gee [00:16:58]:
Exactly. The people who went out and got Google Glass and all of those things, and they're in the metaverse and all that stuff, right? The people who, as soon as something comes out, they're the first ones to try it. And then there's everyone else, then there's the next group of people, and then you have the laggards who, I don't care what the technology is, they're always gonna be last. Right. They're the ones who were the last ones, like my mom, who they had to tell her, we're not gonna give you another rotary phone. We're not gonna support your current phone. You're gonna have to get a touch-tone phone, and things like that. So I think that those personality types and those cohorts are always going to exist. So you're always gonna have some people like you, like me, who, as soon as these generative AI tools came out, we were right on it. We figured out how to use it. But that's not most people. Most people are going to wait and see, and most people are going to wait until sort of the dust settles and tools that have been tested and are right for the mass market and make it easy for them to use AI in the most relevant use cases present themselves. Right now, if you think about it, it's tough. There's a whole slew of tools out there. Everyone has different opinions about which ones are better. You're going to have to put a prompt into the chatbot. How well you prompt will dictate how good your output is. And most people don't know how to prompt, even people who are using AI every day. So it's complicated. It's almost kind of like, I liken it to where we were with the Internet in 1993, right? Oh, good point. You're gonna be spending a weekend at some relatives' or something like that, but you really want to get online late at night when everybody goes to sleep. So what, you gotta go down to Barnes and Noble, you gotta go get a PC mag, you rip open the plastic, you get that CD-ROM for AOL, America Online, right? You know, plug it into your computer, you've got mail, right? And it was complicated to get online. It was hard to find anything. There were a million search engines before it all settled, right? Remember AltaVista and Excite and all those things? So it took a while before Google kind of emerged from the pack. It took a while before getting onto the Internet just became as easy as 1, 2, 3. Right now, if I come over to your house and I say, hey, can I get your Wi-Fi? Dude, we can practically bump iPhones, right? Yeah, you can share your password with me. I don't even have to ever see what the password is. So it took a while. I've read anywhere from five to six or seven years for the Internet to really reach a certain level of adoption. And I think that the expectations were that AI, because it's so much more powerful, would reach a certain level of adoption sooner. And I don't think that took into account that those personality types we talked about that make up the adoption curve are still there, and they still represent most people. Most people are not early adopters. So I think that's one thing. And then you touched on the other thing I think is really hindering adoption, which is the fear. You know, there's been a lot of talk and a lot of predictions that AI will, anywhere from by this time next year to in the next five years, just decimate entire sectors, or all of the white-collar jobs, or, depending on some of these AI CEOs, all jobs within the next five to ten years. So some people are like, then what's the point?
What's the point? I'm gonna be starving, I'm gonna be living in a refrigerator box, so why should I even bother? This is all hopeless. Right. So I think that that hasn't helped. And, you know, I think the challenge is that most of the time, and we can go back in history and find similar predictions that have gone wrong, those predictions are typically wrong. Right?
Dan Nestle [00:21:17]:
Yeah.
Chris Gee [00:21:17]:
Again, going back to the Internet boom, I remember, and I'm sure you do too, that the predictions back then were, this is a new economy and brick and mortar is dead. Everyone's going to buy everything online. How'd that work out? You know, I just went to a brick and mortar this morning.
Dan Nestle [00:21:35]:
You know, people like to touch stuff.
Chris Gee [00:21:37]:
Yeah. Sometimes you need to. Sometimes I didn't get my act together and I really need this particular product today. You know, I like to touch things, or I want to see a variety of things, or whatever it might be, or I just feel like going into a store, I want to be around people, I'm tired of sitting in my apartment, whatever. Any and all of those things. But I think the reality is it's always going to be somewhat more nuanced. Right. It's never all or nothing. You and I are probably a five or ten minute drive away from being able to go send a telegram to someone.
Dan Nestle [00:22:10]:
That's right.
Chris Gee [00:22:11]:
More facts. So nothing ever completely goes away. I think what we're realistically going to see is there will be an initial period of disruption. There always is, just like I mentioned about the desktop publishing boom. But I do think that there are going to be exponentially more opportunities on the other end. And then one other thing I'll say, and I brought this up when I was doing a talk recently about this, and there was conversation about, will AI eventually take all jobs? And I threw out three numbers, and those numbers are 17, 70, and 0. 17 is $17 trillion. Yeah. That is what the American consumer spends per year. 70 is the percentage of US GDP that that represents. And 0 is the amount of money that AI chatbots, LLMs, and agents have to spend. So with all due respect to Dario Amodei and anyone else who's making these predictions that AI is going to wipe out all jobs, the last I checked, we lived in a capitalist society, and I don't know anywhere else on the planet you can go for $17 trillion. Yeah, maybe there's some place, I don't know, but I don't think that that place exists on this planet. And unless and until agents and LLMs get their own money and really save up their allowance, I don't see that happening. Well, for sure, because corporations have nowhere else to go to make their profits.
Dan Nestle [00:23:57]:
Yeah. And think about it. You still need to figure out at every step of the process where people play and where AI plays. You've got to get that right. There's so much you have to get right before any of this really becomes something worth worrying about. And we are not there. Most, the vast majority of organizations, even people, don't have that right. Like, we're operating on patchworks of stuff that have been cobbled together and held together with laundry pens. The CRM systems out there are just massive bulks of data, and half of it's bad. And you can't expect to just layer AI on top of that and expect it to do well before you fix the original issues. You know, you think AI is going to come in and solve all of your accounting and, you know, cost management issues when maybe your accounting flow or your cost management software sucks to begin with. There's so many things, process-wise, that have to be fixed first. And I think that's one of the other reasons why we're not seeing adoption, is because AI enhances and it augments. It can enhance and augment those cracks and fissures as well. So you can make things worse, or expose things, or kind of figure out, why is this thing not working right? Well, then you have to dig in. And fundamentally, AI is an imperfect solution to a lot of things. The capabilities and the things that you can do are out of control. I mean, it's almost unlimited. And we still don't know where it's going to go. We still don't know. Like, I don't know what my limits are. And I refuse to believe I have limits with the stuff I'm doing. And maybe I don't, because there's going to be new categories defined, new processes, new workflows, new software. Who the hell knows? The point is that you can't make a prediction based on what we know right now, because what's going to change in the next five minutes might skew that entire prediction. And, where do I want to get to, as long as AI is imperfect, then adoption for the enterprise, I think, is doomed on a wholesale level. You know, I think CTOs and CIOs have treated AI like a big piece of software, a big kind of ERP implementation. But when you go through a software implementation, let's even go back to implementing, you know, Office the first time it was created. There's little bugs and stuff, but, you know, with some reasonable effort, you're able to print out that press release and save copies of it. You're able to put a spreadsheet together that's going to do some math for you. Like, it happens. It works. It works perfectly. AI is not like that, in no way, shape, or form. And that is hard, I think, for people to cope with at an enterprise level. And I'm talking too much, but there's one other thing that you said that actually really kind of sparked a light bulb. And this is the whole idea of prompting, right? So it wasn't too long ago, I would say a year, 18 months ago, where we were hearing, oh, don't worry about prompting. It's going away. Everybody's gonna have a button. You know, you don't have to learn how to prompt. And besides, AI is natural language. You can just talk to it and it'll do whatever you want it to do.
Chris Gee [00:27:55]:
That's right.
Dan Nestle [00:27:55]:
I mean, if you want to find the best burgers, maybe. But, you know, really, the importance of getting the prompting right. I'm glad nobody told me that that would be a requirement at the beginning, because it might have scared me away. Because I'm looking at the prompts that I'm putting together now and I'm thinking, who the hell is this guy? I mean, the logical flows and, you know, the reasoning structures and the ifs and thens. And, like, I nearly failed logic in college. I mean, nearly. If P, then Q. But that's what this is. And I love it. I mean, I absolutely love it. But I don't know if I would have if somebody had said to me from day one, okay, what you really need to understand is the wiser framework, and move forward with this chain of reasoning, and blah, blah, blah. I think all these things are obstacles to adoption. But you're trying to fight that fight. You know, I am, too, but you're trying to make it realistic and transfer that knowledge to people. You know, I don't know. I think it's just been this big failure to launch, honestly, and CEOs have egg on their face and they gotta figure out what to do.
Chris Gee [00:29:10]:
Yeah, I agree. You know, I think you're 100% right. There was a lot of overpromising, a lot of overpromising predictions about when we'd have AGI, artificial general intelligence, which was supposed to be last year, this year, next year. But for those who don't know what AGI is, artificial general intelligence is supposed to be the point at which AI surpasses human intelligence. And I think that, listen, the promise of that was that once it gets to that point, it will be able to do everything that a human could do, better than a human can do it. Now, there are a lot of problems with that statement, not the least of which is, well, first of all, none of the tech CEOs or AI CEOs agrees on exactly what AGI is. That's the first thing. The definitions vary wildly. The second thing is, I think that the assumption, and maybe even the outright sell, was that once we get to AGI, you can dramatically reduce your staff and also, in a corresponding way, significantly pump up your profits. So as a result, a lot of corporate leaders invested heavily in AI technology. I've seen anywhere from like 2 to 3 trillion dollars since the beginning of 2023. And quite frankly, a lot of them are getting impatient and they want to see a return, right? And US investors are not known as being the most patient, forward-looking people. They don't see past three months. So I think that's a big part of it. This is something that I think realistically will result in amazing innovations, tremendous productivity gains, lots of things that we can't even imagine today. But it's just going to take time. And I think there's just a cohort of people who simply don't want to hear that. And then, coming back to, you're right, AI is not perfect. Nothing's perfect. No technology has ever been perfect, nor will it ever be perfect. So there are always going to be things that AI doesn't do well. AI has never lived in a physical space. And so if anybody's ever used tools like Runway, you know, Sora, right, you see that the video looks great to a certain point, until, like, the guy's hand goes through their head, you know, or, like, that's not how you break dance, or, you know, the person walking behind him, their feet are moving twice as fast, but they're not walking at the same pace. So yes, AI will get better, but it's never going to have lived in the physical world like we have for, in our current form, 300,000 years or whatever. Similarly with, like, its inability to get hands right. Okay.
Dan Nestle [00:32:24]:
Yeah.
Chris Gee [00:32:24]:
And that's a telltale sign. I remember those videos that were trending a few months ago. There were people who were doing those interviews over video, and they were using an AI avatar instead of doing it themselves. And I remember at one point the interviewer said, show me your hands. You know, because they knew that was a telltale sign. AI struggles to get hands right. Which makes sense. I have an art school degree. Anyone who's studied art knows that you spend more of your time focusing on drawing and sketching hands and feet. Go back and look at da Vinci's drawings. He really focused on hands and feet. And everybody we know, everyone we know, has hands and feet. Maybe some exceptions or what have you, but everyone has hands and feet. So we've been looking at our hands and feet for our entire lives. We've been looking at other people's hands and feet for our entire lives and for our entire existence in humanoid form. And yet we still have to study that as artists more than almost any other body parts, because they're so intricate. And we may not be able to accurately depict them, but we know when they're not right instantly. You can spot when. I mean, it's easy to spot a hand with six fingers on it, right?
Dan Nestle [00:33:37]:
Oh, yeah.
Chris Gee [00:33:38]:
Or four knuckles. So, you know, listen. The counterargument to that, whenever I say it, is, well, it's going to keep getting better. It's going to keep getting better. Yeah. But it's never going to be perfect. Right. So that's always going to be something that a human does better than an AI. It just is. So I think that, like, you know, the minute we start to orient our expectations to be more realistic and say, let's use AI, it does a lot of things that we don't do well, let's use it for the things we don't do well. And you know what? It's probably always going to suck at some things that we're going to be better at. And that's okay, you know, unless you happen to be one of these CEOs who promised their board that they were going to get rid of 50% of their employees and maximize profits by the end of Q4 2025.
Dan Nestle [00:34:23]:
Yeah. And that's what we're seeing a lot in the employment sphere especially. You know, just thinking about this whole idea, let AI do the things that humans aren't great at. The other part of it is, let it do things that you're not great at, which makes you great at them in some ways. Like, you'll find connections and you'll see things that you were never able to do before and, well, for lack of a better word, create new shit. I mean, you'll be innovating. You know, if somebody had told me a year ago that I would put together, you know, these really interesting web scraping tools that grab sources and pull them into another thing so that I can create incredible content, I'd be like, what are you smoking? Like, somebody design the app, please. And then I'll use it. But nobody's designed the app. I'm like, all right, well, maybe AI can help me. And it sure as hell can. And I was talking to a real bona fide programmer, coder, you know, somebody who really cut their teeth on building financial applications and then going into gaming and all this kind of really cool stuff. And boy, this person knows everything there is about coding. And they did say that vibe coding is a joke, right? Like, you shouldn't be relying on this sort of anybody-can-code thing to use AI to just build scalable apps or scalable solutions. But for somebody who doesn't know anything and just starts to put some stuff together at the small scale, it's revolutionary. You know, I mean, you and I have talked about this before. Like, you did a fitness tracker. I mean, who's that going to hurt? That's not going to hurt anybody. Do your fitness trackers. By all means, go code, do a fitness tracker. I mean, you know, I want to work with a real developer when it comes time to building an infrastructure for my service that is going to make sure people's payments get to the right place they need to go and are tracked properly and all that. Granted. But the thing is, right, with all these extra capabilities, what constitutes a job and what constitutes a skill set can't be put in a box anymore.
Chris Gee [00:36:44]:
No. Well, and I think that's really. Those are the conversations that we should be having, right? What is a skill set, and what's the skill set that matters? And what is it that you really do beyond your job title? In a recent workshop that I did, one of the participants came up to me and said, what I'm struggling with, Chris, is that I've always considered myself to be a really good writer, and these tools can pump out content in 30 seconds or less. And maybe it's not as good as what I can do, but it's good enough. And so he was really having an existential crisis. Well, what does this mean for me? You know, this thing that made me special doesn't make me special anymore. And I told him, I said, respectfully, I think you're looking at it the wrong way. You know, your ability was never simply in your ability to string together words and phrases and things like that, but to translate, you know, experiences or emotions and create connection with other humans. Your ability to collaborate with your colleagues, right, to build trust and connection, really mattered, right? And then that made you a better writer. So I think once we kind of really unpacked what it was that he actually brought to the table, right, and obviously the conduit of that is and was his writing, but there were things that are special about him that AI can't and will never be able to do. So if you think about it from that perspective, don't be intimidated by AI. Don't feel like it's doing what you do. It's not doing what you do. If anything, maybe there are aspects of AI that can help you do what you do faster, better, more efficiently. You know, one other thing on this topic. You know, I mentioned that I started out my career as a graphic designer, and, you know, the business of being a creative is inherently inefficient. So if you're a graphic designer, you know, you worked at agencies. If I'm going to meet with you, if you're my client and I'm going to do a design for your website or something, I'm going to give you anywhere between three to five, maybe even ten options, right? I usually try to do between three to five options. I might generate ten options just to whittle it down to the five or three. And I know that I probably want to sell you option A. Based on everything you told me, based on what I know about your business, if I'm good at what I'm doing, you probably should buy option A. But hopefully you're going to buy one of those options, and then you're going to have revisions, and we'll go back and forth and all those things, right? Yeah. So, but I've generated ten options and I'm only going to sell one, so the other nine, it's just wasted time for me. So if I charge you for all that time, you're going to say, whoa, Chris, this is way too much money for a website. The ROI isn't there. I'm not going to make millions of dollars off of this website. So you'd balk at it. So to me, I think that there's potential for the creative community to be able to accelerate the creative process. Right. Maybe I can get some variations, show you that maybe you don't want the hero photo on your website. You've said, hey, we've got the New York skyline behind you and you want to have the London skyline instead. And I think that's a crazy, stupid idea, but I've got to burn four or five hours in Photoshop just to mock that up, to show you that you were wrong and you don't want that. Maybe I can do that in a fraction of the time using Adobe Generative Fill. You know, Generative Fill in Photoshop.
So I think that there are ways that we can use AI to help us do what we do, and maybe do it more efficiently, without taking out the human touch. You know, we still have human designers. We still. Because, you know, look, if I'm redesigning my website tomorrow and hiring a designer, I know that I can go on Lovable, which I love, it's a great tool, and vibe code a pretty decent-looking website.
Dan Nestle [00:41:01]:
Yeah.
Chris Gee [00:41:01]:
But if I have something very specific in mind, or I don't have something specific in mind and I want them to translate the problem into a visual solution for me.
Dan Nestle [00:41:10]:
Yep.
Chris Gee [00:41:12]:
That's not, that just isn't something that.
Dan Nestle [00:41:16]:
AI does well. Certainly not when it matters, right? I mean, you said earlier, you talked about your friend with the, you know, AI can write copy, it's good enough. I mean, it's good. Sure. Like, why should writers be wasting their time coming up with three lines for a Google Ad or for, you know, the latest pharmaceutical disclaimer slop? Right. What a disaster that would be as a writer. I mean, I'm sure you make good money, but is that how you want to contribute and be creative? That's not creative. That is just compliance. And if AI can do all that and then you can swoop in at the end and check off all the boxes and make sure everything's good, well, then you have all that other time. And there is this myth that AI frees you up to do other things. I will guarantee you, the nature of work is to expand. You will not have time. It just won't happen. I have AI doing all kinds of stuff. It is never a problem to fill the time with whatever else I have going on. Granted, I am all over the place and I have a lot of stuff going on. But even if you're in corporate comms or you're a marketer, just the nature of your job. Okay, you've been copywriting. Well, AI does some of that stuff better, or does it well enough so that you don't have to bother with that anymore. Well, what's the better thing that you can do now that is going to actually create new value for your company, for your team? Bring it. Right. You have a great chance. So this whole idea that, oh yeah, AI is not gonna replace you, someone who knows how to use AI is gonna replace you. I don't buy that either. Like, it's not about knowing how to use AI. I mean, all of us are going to kind of know how to use AI at some point. I think it's really about, you can be replaced at any given time for anything if you can't create value.
Chris Gee [00:43:26]:
Yeah.
Dan Nestle [00:43:27]:
So stop thinking in terms of somebody's going to come for you. It's a victim mentality. Instead, right, like, all right, what do you do well? And upskill, or learn. I mean, learn. Just get into AI and let it take what's great about your skill and make it amazing. I don't know. I'm very optimistic about the whole thing. I really am, you know, because clearly I'm bought in, as are you. But we have a lot of people out there who are just like, I do internal comms all day and I just write messages to such and such. Well, you know what I'm saying. I hate to say it. I'm sure you do great work. But if the value in that is just kind of mediocre messaging that just translates something for somebody, like, I'm sorry, you have to figure out how to up the game. You said earlier, like, people get displaced.
Chris Gee [00:44:23]:
Yeah, that's right. And you're right. Listen, I think that, even before AI, if you were in a field or a profession that is perceived as low value, you were in danger of being disrupted. How many jobs have we seen over the past 40 years that have been shipped overseas to lower-cost labor markets, et cetera? Right. And it's not even just the past 40 years. The pursuit of more efficient, cheaper labor for tasks that are perceived as lower value or lower impact has been going on for quite some time. And then I want to come back to something else you said, about maybe AI not giving us back more time. I think that is really up to us collectively right now. The conversation about AI in corporate America has almost exclusively been around efficiency gains, right? And it's interesting: if you went and talked to a group of fifth graders and you said, hey guys, we've got this superhuman intelligence that can figure out things that we never could as humans and increase our intelligence, our IQ, et cetera, what do you think we should use it for? I don't think any of those kids would say efficiency gains, efficiency gains, higher profit margins, put mommy and daddy out of work. You know, they would say help people, right? What about homelessness? What about hunger?
Dan Nestle [00:46:04]:
Yeah.
Chris Gee [00:46:05]:
What about housing? How about mommy and daddy don't have to work as long, right? They would say things like that. How about people don't have to get sick? So I think that most of the conversation has just been around the wrong things. Rather than the conversation in corporate America being simply about efficiency gains and maximizing profits, and I have no problem with profits, neither do you, we could also be having a conversation about a four-day work week, which most people think is crazy. But they thought it was crazy when Henry Ford talked about moving to a five-day work week. And they said all the same things. Profits are gonna tank, blah, blah, blah, all this kind of stuff. And the opposite happened, right? All of a sudden, when people didn't just have Sunday off but had two contiguous days off, Saturday and Sunday, now all of a sudden you had time to have hobbies, you had time to go out and spend that money that you made, right? All of those things. And everything took off, right? So what happens if we have a four-day work week? What if you had three-day weekends every single week, every year, right? What might that do?
Dan Nestle [00:47:19]:
It's fascinating, the whole argument around RTO, return to office, and flexible work. You know, full transparency, one of my clients is one of the leading experts on this whole phenomenon, right? And I've been learning a lot from her, stuff that we already kind of believed in a lot of ways. I just couldn't be nearly as data-centric about it or kind of figure out where it fits in strategically to company operations as she does. She's amazing. But she's been kind of opening my eyes a little bit more to this whole idea that it's not about a flexible work policy, or it's not about the number of days a week you're working or anything. It's just, you have to look holistically at the how, when, and where work is done. And if you're a CEO or a COO or whoever's making these policies, if you've been in a mostly remote situation or a flexible situation for a long time with your company, you can't honestly believe, and this is me now, not her, these are my words, you can't honestly believe that people who are working remotely and hybrid were consistently working five-day work weeks. I mean, sometimes we were working seven-day work weeks, but sometimes there were weeks. And I fully, 100% admit it now that I'm no longer working for any of these companies. But sometimes there's weeks where you're like, you know what? Maybe if I add it all up, three full days. You know, sometimes there's weeks like that.
Chris Gee [00:49:13]:
100%.
Dan Nestle [00:49:15]:
Yeah. So, I mean, we need to be very clear about what you need to do to get the work done and whether the work you're doing is the right work to be doing. And, you know, AI fits into all this how? These questions have to be answered before companies start to decide, hey, guess what, everybody? You now have Copilot. Go nuts. Like, you know, you and I could talk about Copilot till the cows come home, I'm sure, because, hey, not a fan, but if that's all you got in a company, there's great things you could do with it, right? I mean, there's certainly things. So anyway, there's no answers yet. But I think you and I are both firmly on the get-fully-loaded-with-AI, get-your-skills-up, expand-your-world, enhance-and-augment-what-you're-doing team. Now, I thought that having the legendary Chris Gee on the call here on the podcast, we would geek out a little bit on AI. And we've certainly been talking about the big issues, which I think are critically important, especially for somebody like you who's inside, you know, advising and transferring knowledge where it needs to be transferred to, and just understanding where the gaps are and where the literacy problems are. But now let's kind of turn a little bit. Say, how are you using AI, Chris? I mean, you're one of the people I look to when I want to think, I wonder if I could automate this, or I wonder if there's kind of an interesting way to come at this, or what tools should I be looking at. You know, you're definitely one of the go-tos in my book. And, you know, full transparency to our listeners here, I've had a few conversations with Chris over time where we just sit and talk about, you know, hey, check out Replit, do this, go on n8n, you know, these things. But for the benefit of us all, you know, what should people be doing right now? And what are you doing with AI? And, you know, without painting such a wide chasm between the kind of early adopters like you and like me and the, you know, the rank and file of people, what should people be doing right now?
Chris Gee [00:51:40]:
Yeah, so two good questions. I'll answer the last one first. What should people be doing right now? Just start experimenting. I suggest that people, if they can, upgrade to ChatGPT Plus or one of the other paid versions, because then you can toggle off the setting that trains the model, and that frees you up to put things in that you wouldn't want to put into the free version that's going to train the model.
Dan Nestle [00:52:17]:
Wait, hang on, Chris, can we just kind of nail that point a little bit? Because one of the biggest fears is AI is going to steal all my stuff, right? Simply not the case anymore.
Chris Gee [00:52:30]:
That's right.
Dan Nestle [00:52:31]:
If you know how to, if you know what settings to change. Right?
Chris Gee [00:52:33]:
That's right. So, you know, you've got to. And some of the models, they all work a little differently. So some of them, the default is that that setting is on even if you have a paid version, and you have to go find that setting and turn it off. They're a little cheeky about that. And others, it just comes, when you have a Plus version or a subscription version that you're paying for, that it turns off. So you really have to check with your provider or whatever. But there's always a way for you to check and see if the setting for your AI tool to train the model is on, and you can turn it off if you're paying. Though if you're not paying, I think there's a tacit agreement that, hey, look, we're giving you this for free, it's costing us a ton of money to run it, and as payment for that, we're going to take your data, and it's going to go where it's going to go. So I would say, excuse me, upgrade to the $20 a month if you can swing it and start experimenting. Move beyond. I think most people who are using AI are still just using it for massaging and softening their passive-aggressive emails, which I think is a very good use case, by the way. But I would say, you brought up me creating a fitness tracker. I used a vibe coding tool called Replit to basically put in: here's the type of exercises I do on a weekly basis, here are my goals. And then I also put in, I do a meal prep delivery service called Territory Foods, so I took a snapshot of my menu and said, why don't you create a workout for me for X number of days, and create the logic so I could go in and toggle, this is what I did and this is how many reps and all that kind of stuff, and then also pair it up with my menu and say what I should eat for lunch and dinner each day. And it did that. And it also has my macros and all of that kind of stuff. I didn't have to write one line of code. Now, some people hearing this might say, oh my God, well, programmers are screwed. Well, not quite so much. Yes, I was able to create it, but it wasn't instantaneous. It's not like I did one little prompt and all of a sudden this really kind of cool tool came out. I had to do a lot of troubleshooting. So yes, you can do vibe coding without code. No, you will not easily be able to create the application you want if you don't know anything about code, I will tell you that much. So I think that these vibe coding tools are most powerful in the hands of who? A programmer.
Dan Nestle [00:55:12]:
Exactly.
Chris Gee [00:55:14]:
Because when you run into trouble, you're going to know where to look. And when you're trying to do things that are more advanced, you're going to know how to prompt it. So that's one thing. Try to start doing more things with some of these agents that are out there for people to use. I would say most of these agentic marketplaces, like Relevance AI and others, have some prefab agents that already exist. There are a bunch for the most common use cases like travel, dining, and entertainment. I know that Copilot has a marketplace, and I think there are some in there too. There are some others. Next time you're looking to plan a trip, that doesn't mean you have to use AI to do it, but see what AI comes up with in addition to what you come up with on your own. Or if you're getting together with a group of people. And I've done this with, it was then called Operator, now called the OpenAI agent, but the OpenAI agentic tool. I remember I was getting together with a handful of friends in Manhattan. And, you know, I don't have to tell you how difficult it can be sometimes to find a restaurant that fits everyone's culinary needs and also is geographically close enough and convenient to everyone from where they're coming from. So I just basically gave the agent, you know, here's where we all live in New York City, our respective boroughs or the parts of Midtown in Manhattan that we're coming from. So find me a central area. Here are the cuisines that we kind of like. And go. And it came back with a bunch of recommendations, some good ones. And I picked one and said, okay, go ahead and make a reservation there for my group of people. And it did. I had to log in. It gave me back control, and I logged in, put my credentials in for OpenTable, then I gave it back control, and it made the reservation. So I would say try some of those things. Try to go beyond whatever it is that you're doing now and do something a little bit above and beyond, and just push the boundaries. Don't necessarily think right away, like, okay, how can I do this for my job, or create these budgets, or whatever it might be. Think small and then build from there. And I think that's the best way to build your confidence, your knowledge. We talked about this before. You'll learn way more about prompting, right? Oh yeah. Just through trial and error and maybe a little bit of reference or courses or things like that. But I think that those are a couple of things. Just push the boundaries a little bit and do it in a low-cost, low-stakes manner. That's the first question. And then your first question, how am I using AI? Well, I already named a couple of use cases, but for my business, I use AI a couple of ways. One that's not too advanced: it's really helped, and you referenced your tool, it's really helped me with content creation. It doesn't write for me, but it helps accelerate my ability to get past that blank piece of paper. Let's say, for instance, a lot of times I'll think about content for a blog or for a LinkedIn post or something else. Of course, same with you. Inspiration probably hits me when I'm in an Uber, or a shower, or at a party, or with some friends, or something like that. Oh, that'd be a good idea for a piece. So I'll tap out just a few notes in my Evernote on my phone, and then when I get to my computer or whatever, or sometimes I'll do it in my ChatGPT on my phone.
I'll go into a chat that I've created, now a project, and I have a custom GPT too, but just to keep it simple: I go into the chat I've created for thought leadership, where I've given it all the examples of my best LinkedIn posts or my best blogs or whatever, and told it what I want to sound like, right? I've trained it on my voice. And then I say, all right, here are the notes I tapped out when I was at this party; create a draft of a blog post or LinkedIn post in my writing style. And it will come up with a reasonable first draft. That first draft will never see the light of day. It'll be massaged by me. But now I'm not looking at a blank piece of paper, right?
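A minimal sketch of the notes-to-first-draft workflow Chris describes, assuming the OpenAI Python SDK; the model name, file path, voice examples, and notes are placeholders, not his actual project or custom GPT setup:

```python
# Sketch of the "rough notes in, voice-matched first draft out" workflow.
# Assumes the OpenAI Python SDK; model name and file path are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Voice examples that a project or custom GPT would normally hold.
voice_examples = open("best_linkedin_posts.txt").read()

rough_notes = """
- AI adoption stalls when leaders treat it like a one-time rollout
- copilot, not autopilot
- small, low-stakes experiments build confidence
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[
        {"role": "system", "content": (
            "You draft LinkedIn posts in the author's voice. "
            "Match the tone and structure of these examples:\n" + voice_examples
        )},
        {"role": "user", "content": (
            "Turn these rough notes into a first draft of a LinkedIn post. "
            "A human will heavily edit it before publishing:\n" + rough_notes
        )},
    ],
)

# The draft that "never sees the light of day" as-is; it just kills the blank page.
print(response.choices[0].message.content)
```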
Dan Nestle [00:59:30]:
Yeah.
Chris Gee [00:59:31]:
And then, a little bit more advanced, I created an AI agent called Alexa Irving. Okay, A.I., I get it. She is my client onboarding agent. Whenever I start a new engagement with a new client, I trigger her by sending an email. She has her own inbox. I send her an email with the subject line: new client, first name, last name, company name, email. That springs her into action. She wakes up, then delegates to a research agent she has, which is basically a cloned enrichment agent, and that agent goes onto the Internet, researches, Googles, looks through LinkedIn, and finds everything about the client, about Dan, then brings that information back to Alexa. She then composes an email. I programmed her to be humorous and whimsical. So she'll say, hey Dan, great job at that keynote you did, really enjoyed a couple of things from your most recent podcast with so-and-so. Here's a link to a Google Doc containing some of the things that you and Chris are going to talk about as you start your engagement. Let me know if you have any questions, that kind of thing. And I found that, first of all, it's very on brand. Second of all, it frees me up. It's just one extra touch point, because as agency guys, as retired agency guys, we know there's a certain cadence to the outreach when you start a new engagement. So it frees me up, and it also introduces the client to an agent. But I've also found that some people will ask Alexa questions, follow-up questions or whatever. She'll come back to me and ask how she should answer them. And every time I explain it to her and give her the answers, she adds them to her knowledge base. So the next time someone asks her the same question, she drafts the answer herself.
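For listeners who want to picture the moving parts, here is a hypothetical sketch of the pattern Chris describes: an email trigger, a research sub-agent, a drafted welcome note, and a human review gate. The helper names and parsing logic are illustrative only, not his actual build (which likely lives on an agent platform rather than in hand-rolled Python):

```python
# Hypothetical sketch of an email-triggered onboarding agent with a research
# sub-agent and a human-in-the-loop review step. All names are illustrative.
from dataclasses import dataclass

@dataclass
class NewClient:
    first_name: str
    last_name: str
    company: str
    email: str

def parse_trigger_email(subject: str) -> NewClient:
    """Assumed subject format: 'New client, First, Last, Company, email'."""
    _, first, last, company, email = [p.strip() for p in subject.split(",")]
    return NewClient(first, last, company, email)

def research_agent(client: NewClient) -> str:
    """Stand-in for the enrichment sub-agent that searches the web and LinkedIn."""
    return f"Recent talks and podcast appearances for {client.first_name} {client.last_name}..."

def draft_welcome(client: NewClient, research: str) -> str:
    """Compose the whimsical onboarding note from the research summary."""
    return (
        f"Hey {client.first_name}, loved your recent work at {client.company}.\n"
        f"{research}\n"
        "Here's a link to the doc covering what you and Chris will tackle first."
    )

def human_review(draft: str) -> bool:
    """Guardrail: nothing goes to the client until a person approves it."""
    print(draft)
    return input("Send this? (y/n) ").lower() == "y"

if __name__ == "__main__":
    client = parse_trigger_email(
        "New client, Dan, Nestle, Inquisitive Communications, dan@example.com"
    )
    note = draft_welcome(client, research_agent(client))
    if human_review(note):
        print(f"Sending to {client.email}")  # a real version would call an email API
```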
Dan Nestle [01:01:30]:
Exactly. Like an intern. Exactly.
Chris Gee [01:01:32]:
Yep.
Dan Nestle [01:01:33]:
One that can handle a thousand different things at once, but still an intern.
Chris Gee [01:01:37]:
Exactly. Now, I have a lot of guardrails on Alexa, and I think this is also an important thing to point out. Let me take a step back: agents are different from LLMs. ChatGPT or Claude or Gemini are wonderful, wonderful tools, but when you wake up in the morning, they haven't done any of your work. They haven't done anything for you. They don't do anything until you prompt them. Agents are different. They have autonomy. You give them a set of instructions and a knowledge base, and whenever the conditions in their instructions are met, they spring into action. In the case of Alexa, it's when I send her an email. There are others that trigger every morning at 8 o'clock or something like that. But they're also autonomous in the sense that they make decisions in a lot of cases. There are times Alexa will answer a question based on something in her knowledge base, but I didn't tell her to do that. They're programmed to complete their goal, complete their mission, so she's going to try. That's why I have guardrails: I have the settings set up so that I have to review everything she writes before it goes to the client. That's just to protect me. So I spot some of these anomalies sometimes and go, no, no, you're not supposed to say that. Then at times I have to go and update her instructions: don't do this unless asked, don't do that unless I tell you to, you are not to share X, Y and Z information under any circumstances, whatever it might be. So it comes back to prompting. You're always going to have to know how to prompt, even with agents, because they're going to learn, they're going to continue to grow, and you're going to have to keep putting guardrails in there. So companies who think they can just get rid of humans, use all these agents, and save all this money are in for a world of hurt. I think you're going to have to have some sort of human oversight, or you risk significant reputation damage.
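The "she asks me, then remembers" behavior Chris mentions is its own small pattern: answer from the knowledge base when possible, escalate to a human when not, and store the human's answer for next time. Here is a deliberately simplified sketch of that loop; the file name, exact-match lookup, and console escalation are assumptions for illustration, not how his agent platform actually stores or retrieves answers:

```python
# Sketch of an "escalate to the human, then remember the answer" loop.
# Storage and matching are simplified; a real agent would use semantic search.
import json
from pathlib import Path

KB_PATH = Path("alexa_knowledge_base.json")  # placeholder knowledge-base file

def load_kb() -> dict:
    return json.loads(KB_PATH.read_text()) if KB_PATH.exists() else {}

def save_kb(kb: dict) -> None:
    KB_PATH.write_text(json.dumps(kb, indent=2))

def answer(question: str) -> str:
    kb = load_kb()
    key = question.strip().lower()
    if key in kb:
        return kb[key]  # she already learned this one; draft it herself
    # Guardrail: don't guess outside the knowledge base; ask the human instead.
    human_answer = input(f"Agent doesn't know: '{question}'. How should it answer? ")
    kb[key] = human_answer
    save_kb(kb)  # next time, the same question gets drafted without escalation
    return human_answer

print(answer("What file formats do you need for the brand audit?"))
```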
Dan Nestle [01:03:51]:
Yeah. And it's one thing to build agents for essentially customer contact and customer service, and another thing entirely to build an agent for, let's say, financial transactions. Those are very, very different things. And I think there's a long way to go before we even get close to that. So for those of us in comms and PR and marketing, if anybody's worried about an agent suddenly coming in... I always pick on press releases. Who writes them anymore? I don't know who actually writes them anymore. But let's say, for the sake of argument, we've got an agent that writes press releases. Great use of an agent, like 100% great use of an agent. However, you're not going to enable that agent to link up with the distribution systems and distribute that press release without, A, you looking at it and, B, your compliance team looking at it. That has to happen. So you're saving yourself a little time, but that agent is not going to be independent or autonomous to the degree I think the tech bros are saying it will be. Right? It's still gonna be.
Chris Gee [01:05:06]:
Well, yeah, look, I think the key is that they'll never be infallible. They'll get better, but they'll never be infallible, and they'll never be human. It's artificial intelligence, not human intelligence. Apple did that report, I think it was called "The Illusion of Thinking," where they basically showed that while it looks like these reasoning models are thinking like a human, that's not really what's happening. For folks who don't know what a reasoning model is: all of the major LLMs now have reasoning capability, and in some you can toggle to their reasoning model. When you ask it to do something, you can actually see it going through the different decisions in real time. Hey, I'm going to check this, I'm going to try that. Well, this might be... no, I'm going to have to figure this out, or whatever it might be. And it seems like, wow, it's actually thinking like a human. Well, what the Apple research figured out was that it's really not; it's kind of sort of guessing, or relying on what it already knows. They gave it some tests, ones that quite frankly a toddler could pass, at different stages of complexity. At the lowest stage of complexity, the reasoning models did about as well as the regular LLMs. At the second level of complexity, they did better. And at the third level of complexity, they crapped out. What they found is that when you gave it questions like, say, the old McKinsey interview questions, these brain-teaser questions, if it had the answer in its data set, it nailed it, no problem. Kind of like a human, right? If I already had that question before when I interviewed at Google, I get it right. But if it didn't, it wasn't up to the task. Now, again, folks are going to say they're going to get better, they're going to get better. But I think the key is that it's not thinking, and it never will be thinking. So if we understand that, then we understand it won't be thinking like a human. So to your example about press releases: automating press releases, or automating outreach to journalists. Sometimes it's going to do its Google search or whatever, and yes, Chris Gee. But guess what? There's more than one Chris Gee. So in the outreach to that journalist, it will have facts in there about the other Chris Gee that this Chris Gee is going to notice are incorrect. And guess what's going to happen, right? So I think having those human guardrails there is going to be critically important. The reason I say copilot, not autopilot, is because you don't want it to just take over everything. You want it to take the things that you don't get out of bed to do. I didn't get out of bed and come to work to do this monotonous task; it's going to take that off my hands. That's great. If it can save you an hour or two, you are winning, right?
Dan Nestle [01:08:15]:
For sure. Yeah.
Chris Gee [01:08:16]:
And if all you have to do is check it, that is a good thing. I think what we have to do in terms of expectations is understand that that is a win. It doesn't have to be all or nothing, where this thing either does all of the work or it's a failure. Yeah.
Dan Nestle [01:08:31]:
And then you leave it to crazy people like us to go, not good enough. It's not good enough. I gotta get back in that prompt. I'll tell you, and we're going to wrap up soon, I promise, but something you said reminded me of why AI keeps crapping out: because it's just not thinking. And we forget that it's not thinking, even though it says it's thinking. Like in Claude, you put in a question, you start your chat, and you have your big instruction and your big prompt up front. Then it does something you don't like and you go, no, try again. Just something simple like that. And the thinking goes, "trying to understand the context of the user query," or "trying to understand user query without context," because it's looking at it kind of afresh, and then it has to go all the way back to the beginning of your prompt to sort of understand everything. So here's what I learned, and you've probably known this, but it was news to me. You know I love Claude Projects. I build them all the time. That's where I do the bulk of my work, inside Claude Projects. And if I build a Claude project really well, then I know that's something I can automate and do some interesting things with. It's a really great place to work, and I love the output that Claude offers as a writer's LLM, essentially. Anyway, in a Claude project, for those of you out there who don't know, you basically give it system instructions that serve as your table-stakes prompt for everything you're going to do, the rules of behavior, and you give it a description that tells it what its role is. So you have this really expansive role, what its job is and how it's supposed to behave. And then you upload all sorts of context, additional documents and things like that, which it's supposed to be able to reference really fast. Now, I was under the impression that when you put in your instructions something like, hey, refer to these documents in your resources before you answer a question, it would do that, because that's what I told it to do. And then when it didn't, I'd be like, you must answer from this. And it still doesn't do it all the time. So I was like, all right, I'm a little frustrated by this. I mean, I can't get too upset, because it really doesn't understand what I'm asking. It doesn't know anything. It's making a pattern match. So what's the deal here? And finally I got it in my head to ask Claude itself: Claude, how do you work? Tell me why you're not answering what I'm asking in the Claude project specifically. "Well, that's a very good question."
Chris Gee [01:11:37]:
Set.
Dan Nestle [01:11:39]:
Claude doesn't always answer the prompt with the resources, because there's a hierarchy to the way it accesses information. And the hierarchy starts with what it already knows, what's already built into the LLM. So it's going to determine from your prompt what the context is, what the semantic relationships are, and then it's going to choose its best possible way to satisfy your request based on the relevance between what's in your prompt and what it knows about everything. It's not even going to look at your resources first, because if it can answer from what it knows, that's where it's going to answer from. And I thought, well, shit, that changes things. Then if that doesn't work, if it can't answer that way, it searches the web or something; it gets to the resources sort of last. So it told me that what I really need to do is make sure the instructions tell it: here's where you go first. And keep in mind that as you chat with Claude, and the same goes for other LLMs, and the chat gets longer and longer, it will always still prioritize what it knows. Sometimes it forgets. If you start off the chat and it's all ready to go, it has understood all of your documents, it knows your protocols, your frameworks, your templates, and it's going to do exactly what you want it to do. Then six or seven iterations in: where's the template? What happened? So you've got to stop and remind it. It's like dealing with a six-year-old with serious short-term memory issues and ADHD who knows everything but can't stay on track. I just learned that the other day, and I was like, okay, that opens up a lot of things. Now that creates a whole other layer of stuff for me to tinker with. But it is never perfect, and the prompting is critical. So I'm doing more prompting training these days, showing people how to prompt. And meta-prompting is like the savior of everything, where you ask the AI to build a prompt based on the prompt that you want. It's just been a great education. I'm amazing myself with where I'm going with this, but also thinking, no wonder people aren't adopting, you know?
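Both habits Dan lands on here, writing instructions that say where to look first and meta-prompting, can be shown in a few lines. This is a minimal sketch assuming the Anthropic Python SDK; the model name, instruction text, and request wording are placeholders rather than the actual project setup discussed in the episode:

```python
# Sketch of (1) system instructions that direct the model to project resources
# first, and (2) meta-prompting: asking the model to write the prompt you need.
# Assumes the Anthropic Python SDK; model name and text are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_INSTRUCTIONS = """
You are a communications drafting assistant.
Before answering, ALWAYS check the attached project resources
(templates, frameworks, protocols) and name which one you used.
Only fall back to general knowledge if no resource applies,
and say explicitly that you did so.
"""

# Meta-prompting: ask the model to build the reusable prompt itself.
meta_request = (
    "Write a reusable prompt I can paste into this project that forces "
    "every draft to follow the messaging framework in my uploaded documents."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=800,
    system=SYSTEM_INSTRUCTIONS,
    messages=[{"role": "user", "content": meta_request}],
)

print(response.content[0].text)  # the generated prompt, to be reviewed before reuse
```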
Chris Gee [01:14:16]:
Yeah, well, like we said before, it's complicated. I think for those who are early adopters, or those who are comfortable with technology and comfortable with the messiness, it doesn't seem quite as complicated. But for most people, it just is. It's almost like sitting in an airplane cockpit.
Dan Nestle [01:14:39]:
Yeah.
Chris Gee [01:14:40]:
And you're right, there are all these little quirks about how AI works. And to your point, when people encounter some of them, they just throw their hands up and say, okay, this thing's a bunch of garbage.
Dan Nestle [01:14:52]:
Yeah, they don't know what to do.
Chris Gee [01:14:53]:
Right, right. Because the expectation is that it's supposed to be infallible, and it's not. There was an article I read the other day about how judges are having to grapple with an influx of attorneys coming to court with briefs and motions citing cases that were hallucinated by AI. It's insane. And, full disclosure, I have a project in ChatGPT that I use for legal work, for agreements and documents, and it's wonderful for that. Why? Because it has millions of those exact same types of documents in its data set. So if you ask it for something and it has millions of examples to refer to, it's going to do a really good job. Now, if I were in litigation, if a client wanted to sue me or someone else wanted to sue me, am I going to just stroll up? Hey, Mr. Gee, do you have representation? No, I've got ChatGPT right here, I'm good. No. Not on your life. And I don't care what the AI CEOs say, it will never be that good. I'm sorry.
Dan Nestle [01:16:07]:
I agree with you. And the nuances and the insights and the connections you can draw, especially for lawyers, between case law, jurisprudence, things like that. I use AI for legal the same way you do. I have my Inquisitive business buddy set up as a Claude project. It's loaded up with all my corporate information, it does proposals, it checks contracts for me. I trust it for that. It's perfectly fine. But if I'm ever sued or there's litigation, same as you, man, I'm going straight to a lawyer. Nope, can't do it.
Chris Gee [01:16:43]:
And I think that's really the key, right? If we look at past tech innovations, that's where it really netted out. Let's go back 10 years, when robo-advisors really took off. Oh, this is going to be the death knell for financial advisors. Guess what? They're still kicking, right?
Dan Nestle [01:17:03]:
Absolutely.
Chris Gee [01:17:03]:
I think what's happened, though, is that there are certain people, whether it's generation, personality type, whatever, who probably would never have gotten a financial advisor, who might have just decided to day trade or play with penny stocks or do things on their own, and they will use a robo-advisor. Then there are some people who want a hybrid: hey, I've got this part of my portfolio that I pay a financial advisor for, but I want to tinker, I want to play around with some other things using a robo-advisor. Maybe it's about more control, or maybe it's just trying things and seeing what they can do on their own. But it was never realistic that a critical mass of people were going to rely on a robo-advisor or a robot or an AI to control all their wealth. There are people who will do it, but not a critical mass.
Dan Nestle [01:17:57]:
Yeah, a lot of this stuff, and I know we've got to wrap up in a minute, but a lot of this stuff is also almost Darwinian in a way, where, okay, we can replace the bottom-level skills that people may not have paid for before and make them pay for it, right? Like with robo-advisors. So all the FAs who were out there operating at that level were justifiably scared, and they had to up their game, they had to lean into their relationships if they wanted to keep their jobs, justifiably so. I think the same thing's going to happen across AI. Just like we said before, the mediocre copywriter who doesn't know anything more than the company script and isn't willing to be creative, or color outside the lines, or question what's going on: AI can do all that, no problem. But Chris, I am astounded that it's already been an hour and 18 minutes, which is a relatively long show, because we could keep going. I so wanted to talk to you about GEO and things like that, but we'll hold that for another time. There's a lot going on here. Before you go, everybody out there, if you want to find Chris, go to chrisgee.me, and his name will be spelled properly in the episode title and the graphic. But it's not hard: Chris, G-E-E. Go for it. Find him on LinkedIn and on his website, like I said, chrisgee.me. Anything else, Chris? Anyplace else where you are actively findable, searchable, discoverable?
Chris Gee [01:19:42]:
That's pretty much where they can find me. I'm branching out: for 2026, I want to take my YouTube channel a little more seriously, so I'll soon be adding links to that on my social channels, on LinkedIn, and on my website. But right now, my main playgrounds are my website and my LinkedIn, and also, I would say, my newsletter, which you can find on my website. Every Saturday I do a prompt of the week. So if folks want to get better at prompting, it's free. All you have to do is sign up on my website, and every Saturday morning a new, hopefully useful prompt is delivered to your inbox.
Dan Nestle [01:20:22]:
Yeah. And you always note the expert mode or whatever; you give people the level of difficulty. Chris, it has been a pleasure. Thank you, everybody, for being around for an hour and 20 minutes, first of all. But second of all, anybody who's listening to this really, I think, gained a lot of useful information and an understanding of where we're at right now. And I was really happy, just as the host of the show, to recenter The Trending Communicator on a real trend in communications. Not to say that my previous guests haven't been talking about trends, but I wanted to come back to AI in a bigger way, and I'm glad you made that possible for me. So thank you. As always, it is a privilege and an honor to call you a peer, a colleague, a friend, someone I admire, and now a guest.
Chris Gee [01:21:18]:
I've gotta come back. Anytime, my friend, anytime. Thanks for having me.
Dan Nestle [01:21:24]:
You got it. Thanks for taking the time to listen in on today's conversation. If you enjoyed it, please be sure to subscribe through the podcast player of your choice, share with your friends and colleagues, and leave me a review. Five stars would be preferred, but it's up to you. Do you have ideas for future guests, or do you want to be on the show? Let me know at dan@trendingcommunicator.com. Thanks again for listening to The Trending Communicator.