AI Overload - February 2025 Episode Transcription


Audio links for original podcast:

· Here, on the dedicated Watch This Space website

· Here, on the Podcast section of my J Arnold & Associates website


Transcription: AI Overload

Jon Arnold

Hey, welcome to Watch This Space, the podcast about the future of work. Every month, we bring you insider perspectives on how digital transformation, emerging tech, and generational change are shaping the future of work. We are two analog guys finding the groove for all of this in today's digital world. I'm Jon Arnold, and these trends are my focus as an independent technology analyst at my company, J Arnold and Associates.

Chris Fine

And I'm Chris Fine. I'm an independent consultant and strategist specializing in workplace technology, IoT, and security. My company is Integrative Technologies. Hi, everyone. Hi, Jon. Welcome to another episode of Watch This Space. How are things going with you?

Jon Arnold

Well, they're okay. I've been away, or at least out of commission, for most of January. And as you can probably tell from my voice, yes, I'm at the tail end of a pretty crappy head cold and chest cold, and I will go today as long as my voice will let me. So, Chris, for those in the know of technology and all the wonderful things that AI can do, is there any app in your toolkit that can scrub this out and make my voice sound normal?

Chris Fine

I haven't found one yet. It may be out there. I usually hear about the latest things from the kids or whatever, yeah. I'll have to ask them. I wouldn't be surprised if there's an app out there to try to remind you to take your medicine or whatever, right? But I don't think there's one that could reach in and actually make you better. I'm sorry you've been suffering, but hopefully it's getting better.

Jon Arnold

Well, it could be good use of deep fake type of technology, I suppose, to, you know, synthesize my voice somehow or make it sound like somebody else's. But that's not where we want to go. How are you doing?

Chris Fine

Yeah. Yeah. I've been busy, about to move. My whole abode is just a mess and I can't find anything. And it's just startling when you realize how much stuff you have, how much you've accumulated. Other than that, things are moving along. Everything's more or less on schedule, and I'm doing the work out there in the industry with the smart building, smart workplace stuff. So I can't complain.

Jon Arnold

Yeah, good. Well, the joys of podcasting, folks: yes, you cannot see how disruptive Chris's world must look right now. And you can't see what it looks like for me at the tail end of a cold. I'm not at my best, but I will try to sound my best, and we will get through this episode. We're in our eighth season, if you don't know that, folks, and we continue along here, never short of storylines to talk about. I think we have a few themes to get through today, Chris. One, I think, very topical, and one more about the general flow of what happens in our work lives. So I will push this back to you, and then we will get these conversations going.

Chris Fine

Okay. Well, I think we were going to talk a little bit about DeepSeek, but I had a comment on your remark about deepfakes. I've been watching a series on Netflix called Churchill at War, which is actually very good. One of the things they've done is add excerpts from his excellent six-volume series, The Second World War, which is a lot to read, but if you're into that period of history, it's an amazing resource.

They have him reading it in his voice, but it's not really him, so they're effectively adding Churchill recordings that never existed, and it does a pretty good job. That was just another instance where you're watching what should be fairly well-known production-type work. If somebody was imitating him, it would be common to have a human being, an actor, imitating Churchill's voice, which of course many do. But this is actually a machine, so I thought that was interesting.

Jon Arnold

Yeah. What a great can of worms, folks. You know, what comes to mind right away is the Anthony Bourdain film, right? The narration for the film was synthetically created, you know, using samples from his real voice, but it was all AI-based stuff. Even when you're being transparent about it, like when they do these posthumous voiceovers with artists, what was it, Nat King Cole and Natalie Cole, right, doing these quote-unquote duets? And there have been others like that as well, right, where they can take the voice, either the real voice or an artificially created one, and meld it in with current things. And before you know it, you really can't tell the difference. And frankly, for the fans, do they really care?

Chris Fine

Well, I think that Nat and Natalie were a little different, because those were tracks that he made that she could duet with.

Jon Arnold

Yeah.

Chris Fine

But that was him.

Jon Arnold

Yeah, that's an example of the real thing. But then you get the fake stuff that's all AI-generated, and you make it sound like, oh, the person's right there in the room with you. But as I say, did the fans really care?

Chris Fine

I doubt it. What do you think?

Jon Arnold

I agree. Like, I agree that you give the people what they want, so to speak. But you heard Paul McCartney rail against this stuff recently too, right? Even the songwriters these days have no control over their creation. Right? The streaming services take over everything, and you're just one small cog in that whole ecosystem. Whereas, you know, the traditional music publishing model ensured that the creators got their fair cut.

Chris Fine

Yeah, there was another interesting music-related nugget. There's an interesting article, if anybody's able to get the New Yorker online or by Apple News or however you do it, or if you subscribe, a little article in there about Spotify, because there's a new book about the history of Spotify. And, you know, I'm a Spotify subscriber, so no knock on Spotify, but I will say it's kind of interesting to learn how the Spotify playlists fill themselves out. Apparently, the way they do is that there is a subgroup of artists whom nobody really knows, but who produce all of this kind of innocuous, listen-in-the-background kind of music for Spotify, and they make a fortune, some of them.

For example, if you sign up for a playlist called, like, Soothing Piano, there's going to be a portion that's actual, you know, traditional music that was recorded for whatever reason but happens to fall into the category of soothing piano, right? Like, let's say there's a Chopin Nocturne that was recorded because it was an album of Chopin, but it happens to be soothing. So they put that in. But then, since you can run these things all day, and people do, it starts to go into this kind of quasi-synthetic content that was manufactured just for this.

What's going on right now is there's a whole new bunch of people who want to get into this field who are essentially generating all of this with AI. It's not really music in the traditional sense, but it's sort of modern-day music for Spotify. They are real people. But now there's another generation coming where it's all AI-generated. I thought that was interesting, too.

Jon Arnold

Oh, for sure. This is a whole other episode, folks. But, you know, this is a generational thing, right? As we say in the intro, we're analog guys; our radar is very good about stuff like this, and we have lots of defense mechanisms to push back against it. But for the digital generation, especially five, ten years from now, when this is all they will have ever known, who's the wiser? And so for us, we just see the bar getting lower and lower for the kinds of things that we thought highly of in our day. The world ecosystem for those things is different now, and you really wonder if any of it could ever be recreated at all.

There was another one of those flashbacks in the paper today; it was about Band Aid in ‘85. They talked about, you know, doing these mega productions, We Are the World, to raise money for famine in Africa and things like this. Getting all of those pop stars together in one room to create something new, something that the fans would love, was a pretty good, honest effort. I would just really wonder, could that even happen today?

Chris Fine

Oh, I think it could. I just don't think they'd actually get together.

Jon Arnold

Yeah.

Chris Fine

If you look, there's a YouTube channel, for example, called Playing for Change, which is actually very good. It's all work done in the spirit of, you know, fighting hunger and other good causes in the world. And they have many famous musicians who participate in that, but they also have a more or less consistent pool of musicians around the world that they can pull in to become part of whatever the song is. They just take the money they make off YouTube and give it to the charities. You're not going to get everybody in the studio with Quincy Jones, you know, RIP. But you might very well see an even easier way to produce these, if people do get behind it, would be my point of view.

Jon Arnold

Yeah, I would love to see something like that. It takes you back to, you know, the Concert for Bangladesh. I think that's the ground zero for all of this stuff, and I think we've talked about this too, having watched that movie a couple of times recently. Those were more innocent times, but the genuine effort that went into that for a true cause, and the way people rallied to support it and put on a show like nothing that had ever happened before, right? That was a really groundbreaking thing, and it took just one guy to do it, just like with Bob Geldof, to make it happen. But, you know, that was probably the best example of the power of music for social good that I can ever think of.

Chris Fine

Certainly one of the ground breakers. But you know what? If we wax nostalgic here, no pun intended, we're going to lose this entire podcast.

Jon Arnold

All right, let's get right into the present.

Chris Fine

All right, since we know we have to cover this, why don't we get an update on how Future of Work is going? Your conference, it's coming up.

Jon Arnold

Okay.

Chris Fine

Yes.

Future of Work Expo Preview

Jon Arnold

All right. That's very current. By the time folks listen to this podcast, you'll be making your final travel plans for Fort Lauderdale. So, Future of Work Expo is in Fort Lauderdale, February 12th to 13th; Tuesday and Wednesday are the days for our sessions. We've been talking about this off and on here, of course, for quite some time. It's an annual event, so it just happens once on the calendar. And if you don't know, you can see from the thumbnail banner we use for this podcast that there's a logo on there for TMC.

TMCnet is a media sponsor for our podcast, and they are kind of the big tent under which the Future of Work Expo takes place. Their mothership show is called IT Expo. It's been running for 20-plus years. It's probably one of the most established tech shows around, and they have a very loyal base. It’s in South Florida, and they run a series of sub-events connected to the IT Expo, and my Future of Work Expo is one of those. So, I've been kind of embedded in that ecosystem for quite some time.

As you know, Chris, we've been doing this for a while, and we've got a pretty loyal following, both of attendees and of the people who compose our speaker roster. Several of our regulars will gladly come back and be part of the experience, which is largely defined by a series of panel sessions running those two days, where we have moderated discussions. So, no presentations, no slideware, and we talk about the various aspects that the future of work concept encompasses.

I realize it's a fairly broad, generic term, for better or worse, but we've been tracking the space, you and I, for years, Chris, both at that event and here on our podcast, of course. It's one of the pillars of what we try to talk about. And yeah, so the state of the future of work, 2025 edition: a lot of similar themes to what we've done in the past, but you can't get away from the predominance of AI. I wouldn't say it's like an anvil falling on top of everything, but it really has that feeling; if you're not talking about AI, then you're not talking about the future of work.

Chris Fine

Certainly, in any aspect of the future of work, you have to factor that in, right?

Jon Arnold

Oh, totally, yeah. Again, this is one of those for-better-or-worse ideas, because people can't seem to embrace AI fast enough, again, for better or worse. But the idea here is that the promise it brings, not just for automation and efficiency but for this bigger term of knowledge management, is becoming a big deal in the workplace now. And of course, by workplace, we're largely talking about an office-based environment, a knowledge worker kind of thing. But really, it does apply in a lot of ways to remote settings, the mobile workforce, people who work on campuses and in retail and hospitality who aren't in an office or at a desk.

But the common thread for all of those environments is that AI is finally able to not just generate, but capture and process, vast amounts of data that we never really could do anything with before. And what I mean by that, Chris, as you know, is that text has long been a digital medium. Voice in some ways has been a digital medium, but it has never really been captured. And video is kind of catching up. But when you have all the different ways that we communicate, and they can all be in a digital framework where all the inputs can be captured, now you create a tsunami of information. It's really just raw data, but the content of our conversations, like the one we're having right now, Chris, is just words, just sentences and ideas. AI now has the ability to attack that, so to speak: process it, treat what we would think of as unstructured information, unstructured data, and pull out relevant things that apply to whatever the inquiry is about, right?

In other words, let's say you're a frontline worker, maybe a clinician, and you're about to see a patient. You don't know who this patient is, and you want to get a quick primer on their whole healthcare history, their family history. Well, AI can help pinpoint nuggets of data from across these spectrums that it's able to capture and give you a pretty good snapshot that makes it a much more informed conversation. I think anybody in any line of work can see the benefit of that. Now, there are risks and downsides, but I'll stop at that point now, Chris, because to me, that's the essence of where AI really comes into reshaping the future of work.

Taking a Closer Look at AI

Chris Fine

I agree with you, actually. Let's just say for a moment that you looked at AI as a technology, as a tool, right? And you asked, what would be the next-generation impact of AI versus, like, the last generation? Well, one of the greatest breakthroughs of the last generation of work-related technology was the ability to organize and manage and derive insights from vast amounts of structured information, when you started to get relational databases in the 1970s and 80s. Companies like Oracle, you know, gigantic to this day. Going back to IBM, you know, Sybase, and then Microsoft SQL Server and all of that. Plus tools like Excel which, as long as you had a structure to the data, you know, is like a front-end programming language that could take input from all these repositories of data.

That went further into statistical processing and everything. But the big thing that hung out there as an unsolved problem was unstructured data. And this was a huge issue even in my days as an IT person, right? What do you do with unstructured information? If you can apply a tool like AI to really manage and harvest from unstructured information, that really is a big leap. If you set aside the social and economic implications of AI and just look at it as an office tool, it's a gigantic impact: unstructured information, and being able to draw conclusions from vast wells of information that nobody with traditional tools could ever plumb. That's kind of how I see it, anyway.

Jon Arnold

Exactly. So we say, okay, I trust what AI is doing here, and I'm not even in a position to question the provenance of how it did what it did or how it drew its conclusions. If you accept it, it's like with humans: when we communicate, there's an implicit level of trust that I know something about you and I can believe what you're saying, or I take it with a grain of salt. That's our human judgment at play.

Well, we don't really have that with AI yet. If you implicitly trust what's there, you're kind of assuming that AI, as you say, as a technology tool, is objective, colorblind, all that kind of stuff. But as we know, at the heart of it, it's garbage in, garbage out: AI only produces outputs based on inputs that humans create for it. At least for now. It's going to change down the road, but when we start building in our own implicit biases, maybe covert or overt, whatever, they work their way into the data. That's when you start getting into all kinds of issues. You just name it: any cultural vector you could think of creates problems. Also, of course, determining what's truthful or not, and what your sources are to determine that.

There's a lot of gray there, and if you take AI at face value, you're not going to get past it, or you won't even realize that it's actually being more selective than you think. The other side of the coin that we have to keep in mind is that, as the humans who are still supposedly in charge, we haven't yielded everything to AI yet. But you can see how this can become a very comfortable relationship: geez, look how easy it is to get things done now, where you might ultimately have to compromise a few things in terms of how truthful or accurate or representative those inputs really are.

Chris Fine

Yeah, and it makes it more difficult because it's hard to know all the assumptions. I was thinking of an analogy in the structured world. Let's say you're trying to do a valuation of something on a spreadsheet, and you want to come up with what the thing is worth right now. Like, what are you going to pay for it? What's the range? And there are a lot of assumptions that go into that. If you're a disciplined analyst, you would go back and study the model really carefully. And hopefully it would have been created in a way, let's say in Excel, where you would be able to see what those assumptions were fairly easily.

And if it wasn't obvious, then whoever made the model for you, if you're the decision maker, you're going to send them back and say, you have to lay out all the assumptions in this model for me, right? What's the discount rate? What's the cap rate? You know, I could go on and on with the gibberish. My point about AI is that it's really hard to understand, because there's so much unstructured information and so much inference going on. You know, how do you flag where the assumptions are, right? Are you just going to rely on the output and say, hey, that's biased? How do you do an audit on it, I guess is what I'm saying. Does that make sense?

Jon Arnold

Yeah. Gatekeepers, guardrails, all of this stuff. There's so much still to be determined, things that haven't been factored into the future of work equation yet. These are all threads that we're going to try to talk about in the sessions. An important aspect of that too, when we ask what the future of work looks like with AI: certain things go to a logical conclusion. If you can automate tasks, and if you can use agentic AI to have one or possibly many virtual assistants under your control that you send out there as bots to do your bidding, then there may be less and less work for you to do.

So, there is a risk of automating yourself out of a job, which is always a possibility with this stuff. And of course, management is always going to favor whatever reduces the costs of their overall operation. For them, the charm of AI is very much about cost reduction and automation, as opposed to enriching the workforce's skill set. We can go down that path a little bit later. I think the main idea here is that we have to look at the AI piece more holistically, at where it will have a positive impact on the future of work.

Of course, if AI is going to be the centerpiece of so much of what we're going to be doing, what are the skill sets, right, that workers are missing, that they're going to need? They're going to have to become more AI savvy, because ultimately, a big part of their jobs is going to shift toward becoming orchestrators of AI tools to manage their workflows, as opposed to doing the things that we've normally done in our jobs: thinking and decision making and sharing and mentoring and, you know, working in teams.

The very essence of work is going to become kind of filtered through a lot of these AI capabilities, but the workers themselves are going to need different skill sets, as will their managers. And frankly, as will the C-suite decision makers who are backing AI big time; they have to understand the ripple effect that this is going to have on the workforce itself. Unless, of course, their plan is to ultimately replace everybody. But I don't think that's a very realistic scenario.

The Adaptive Nature of AI

Chris Fine

Don't you think that's going to happen, though? I mean, again, I'm kind of looking at precedent, and of course, precedent can always be broken. But you know, a lot of people ended up having to learn the last generation of tools. That wasn't all just automation of manual processes. There were whole breakthroughs in what people were able to do just with whatever the capabilities of those systems were. I think that does come. The question is, where's the displacement? It happens with any new technology, but where does it happen? You just don't know in advance. You don't know what the impact will be on the day-to-day work content and the future of the workforce. But I guess the only thing that we could argue right now is that it is going to be significant, don't you think?

Jon Arnold

I do, yeah. I just think it's the scale and the speed at which it's happening. I'm glad you brought up earlier iterations of technology. When the PC came around in the early 80s and we started to learn about things like, you know, WordPerfect and the PowerPoint-type programs and the Excel-type programs, Lotus, right, these were all new capabilities that we never had before. Well, we didn't have the skills. We didn't have any background in these things. It was all new. So I think everybody kind of learned on the fly. I think to this day, it's fair to say that most people who use these tools are only familiar with, or use, a small fraction of the capabilities, because we don't get tech literacy taught as a subject in school.

When we enter the workforce, we're missing the skills that we were ultimately going to need to be successful, so we kind of have to learn them on the fly. But with AI, I don't know if that's going to be good enough. I don't think people are going to have the luxury of 5, 10, 15 years to figure out AI as they go. I think they were able to do that with the PC, and they learned enough of a skill set to be effective in their jobs. But the technology has always remained far ahead of the limited use that we have for these tools.

Chris Fine

One difference about AI, though, is that you can easily see how it would very readily adapt to the user. If you have a PC or Excel or Word or whatever, they get progressively easier to use over successive generations, subject to the limits of the paradigms around their design and the capabilities of the systems, but they don't treat one user differently from another, if you get what I'm saying, right? PowerPoint's not going to care who you are, and I know, you know, Microsoft might argue that they're really trying to make this different and have it be more aware of the user and the user's capabilities. But fundamentally, it's a program that does a certain thing.

But you could argue that as AI advances, it will do just as you or I would if we were facing off with a colleague at work. Our colleagues are all different. They have different perspectives, different levels of skill, sets of skills, backgrounds, training, objectives, agendas, job responsibilities. I think the AI gets better and better at determining who it's dealing with. I'm not sure that's entirely a great thing, but when you talk about learning and training, I think that more than past generations of technology, AI will just adapt to who it's dealing with. You get what I'm saying there?

Jon Arnold

For sure. And we will have to adapt to it, because I think one of the big things that makes AI different is this ability to learn and improve continuously, which we did not have in the workforce before. As I've always said, the thing with AI is that when you train it, it never makes the same mistake a second time. It will keep getting better and better and, as you say, become more user-friendly as we go. That also means that as its knowledge base expands, it's going to be harder and harder for workers to actually contribute something new that AI doesn't already do. This kind of touches a bit on what we talked about last episode, Chris, with The Maniac.

Chris Fine

Yes, The Maniac. Yeah.

Jon Arnold

Right. It's the same thing. Like, what's the point if all I'm going to be doing is training my successor? And that raises the issue, of course, of feeding it a lot of false positives just to derail it, because it's going to take your job away in six months. That's all getting kind of bleak. The main point, I think, for our listeners is that with AI, there are a lot of sides to this. It's not even a coin; it's almost like a hexagon or something. It has multiple sides, and they're all different, and we don't really understand it very well yet. Nobody really does, but it's becoming more and more the center point of all the future of work initiatives.

When we talk through our program, whether it's about cybersecurity, the job skills we're going to be needing, the impact of AI on the role of IT, right? These kinds of things: how we're going to be collaborating in the future, how workspaces are evolving, right? These are all tangents, all those multiple sides of that AI hexagon or whatever you want to call it, but they're all different and they're all going to flow through AI. So that's kind of the gist of what we're going to be doing at the event. So sign up, folks. Come and join us. We have a few speaking spots left, not many, but yeah, it'll be a continuation of what we've been doing for quite a while.

Chris Fine

Well, it sounds like a wonderful program, and I hate to miss it. It's the one time that I will; it's all related to the move. But if I were able to be there, I would certainly be there, and I would urge anyone else to try to attend as well. Now, do you want to talk briefly about DeepSeek? Because that was kind of interesting, what happened this week with the suddenly available, seemingly very cheaply produced Chinese AI. What did you think?

Jon Arnold

Well, that was the newsworthy item I thought we were going to start with earlier when we talked about the future of work. But it's all connected, because it's a very good example of what we were just talking about with this whole new universe of AI: how things come quickly, and their impact is at a much greater scale than we've ever seen before. It sure took all the big tech stocks down a few pegs, really just out of fear. I think that all of a sudden, if their cost models become obsolete overnight, well, they've got a big problem on their hands.

You really have to, I think, look at this on a couple of levels. I don't think either of us is data scientist enough to deconstruct what they're actually doing in terms of how valid it really is. But the point is that at face value, and we talked about this ahead of the podcast today, what we're really talking about here is that, like with any other technology innovation, if you can find not just a better way to do things, but a way to do it at orders of magnitude less cost, you're going to win.

That's the scare that this brought into the whole thing. It's not just the way that the OpenAIs of the world talk about the scale that they feel is necessary for AI to bring value, where everything they talk about is in the billions or tens or hundreds of billions of dollars. You can also see where the current administration is going with betting heavily on AI at the scale they're talking about. It's clearly a big, invested belief system that this model is going to be the best way to do things. And if you undermine that in any way with something that's credible, then that is extremely disruptive, as we saw in the past few days, right?

Chris Fine

I think people will sort out how legit and how capable all of these things are. But I feel like, from a market disruption point of view, it sends a signal that you're entering the next generation of the technology, which essentially exploits or leverages the work done by the very expensive first generation. So if you think about it, and we were talking about this before, you know, the internet and PCs and everything else we have today didn't spring out of nothing just from the investment made to create the companies that most capitalized on it, like the second or third generation of companies.

This was all built on work that was done at government taxpayer expense and by some big companies like IBM, which, again, had a huge involvement with the government, really from the end of World War II up through, say, the 80s. There was an enormous investment in creating all the foundations of what we now use. And that first generation of companies, a lot of them didn't keep innovating and ultimately faced the innovator's dilemma. They didn't transition into a second generation of success. But then there was that second generation that was able, effectively for free, to leverage all that vast amount of knowledge and expertise and innovation that had come out of that big investment. And so I think when people look at this, maybe they say, maybe it's not these guys, maybe it is, but there's been this huge investment in the sorting and filtering and adding of AI and tagging and everything involved in these vast repositories of data.

Now, who comes along and can just grab and leverage that investment? And then how long do the original high-level investments pay off? You see what I mean? If you take the example of communications networks, the Bell System, all of the work that was done on that: when did it stop making sense to support that from an investment point of view? And when did the payoff start to decrease, because more and more of the value was being poached by the successor companies that were leveraging all that work? It's the natural flow of things. And so I feel like that was one of the signals here.

Is DeepSeek Innovation or Disruption?

Jon Arnold

For sure. It's a bit like the difference between innovation and invention, because DeepSeek is not creating something new, but they're finding arguably a better way and, more importantly, a much more cost-effective way of doing these things. And that's the big one, because when I look at AI as an analyst, one of the things I constantly talk about is that the cost is borne by the enterprises who are investing in these tools. I would say they're spending huge amounts of money on these LLMs almost blindly.

We've talked about this before, but they're arguably not getting a great return on this yet, because the cost of supporting these models and feeding them and fine-tuning them is ongoing, and it gets more expensive all the time. The hyperscalers do very well with this because they have the kind of underlying raw data capability to do all this. But the enterprises are paying for these services, and it's up to them to generate the value from it. And if they are not getting what they feel is an acceptable ROI of some kind, this experiment with AI could lose traction, could lose financial support, and then you have a different problem, because now it's not this open-ended thing where the more you spend, the more you get.

That high-cost, first-generation model has a lifespan. It's not sustainable that way unless the benefits are fantastically great, and we're not seeing that yet. So, if you can do it at a fraction of the cost, you're going to get the attention of enterprise decision makers very quickly, because it's not their mission to subsidize these first-generation players in the hope of an eventual payoff. They have no reason to continue. As you say, if the decreasing returns start to become apparent, why would you keep investing in that model when you can do it for a fraction of the cost? Now, there will be some compromises along the way, and those are decisions that have to be part of it. And that's the way we look at it with our analyst mindsets, right?

Chris Fine

Yeah, one of the things that's interestingly different about watching this play out, as opposed to earlier comparables, is that even the high-cost AIs are so widely available, because of the subscription model and the use of the internet and cloud-based operations, right? If you think about the earlier analogy of computers, one of the reasons PCs were developed originally was because people wanted a computer in their home, or to be able to get their hands on one. You could invest a finite amount of money and start using a computer. There were the enthusiasts at the beginning, but then businesses. Whereas it was really expensive to get your hands on the prior generation of technology, even just as a user.

Now what I'm wondering is, if there is a step function down in cost, what further innovation does that enable? How similar is it to what happened with PCs? Because you can get your hands on generative AI now pretty cheaply as an end user. You see what I mean? The fact that I could run this new thing on a Raspberry Pi, which there's a very interesting YouTube video about, raises the question: is it going to unleash a whole new flood of innovation because it's so much cheaper? I mean, usually that's what happens, and it'll be interesting to see.

Jon Arnold

Yeah. So one of the cautions, of course, is China. The timing of it seems very suspect, because it's coming at a time when the US is making clear its AI ambitions and commitment to it all. There's certainly a geopolitical angle to this, which you shouldn't underestimate. And the subsidies they might have gotten at home to do this might be far, far greater than we were led to believe. So maybe it's not as easy as they make it look. An interesting example I heard on the radio the other day: someone queried OpenAI's model about getting chips out of Taiwan, and whether this is going to be a problem for the US. The Gen AI results came back fairly even-handed and fairly accurate.

When the same query was put to DeepSeek, the answer came back as a political monologue about how China does not recognize Taiwan as a sovereign state; it's always been part of the Chinese mainland, so we don't know what you're talking about. That's the kind of bias that can be built into a model like this, which can maybe undermine a whole lot of the excitement that we've just been hearing. And maybe, you know, you kind of get what you pay for.

Chris Fine

We will see. The somewhat ironic part to note: I mentioned the experiment of running this on a little tiny Raspberry Pi computer, right? DeepSeek is pretty slow when it just runs on a Pi. But the guy who did the video, who runs one of my favorite YouTube channels, said, well, I can speed it up; I can attach a graphics processing unit to it. So he basically took a little adapter and hooked up a board of the type that you put in a PC, a GPU, a graphics accelerator. And of course, who makes all the chips in that? Nvidia, the company that came under all the attack with DeepSeek, right? So maybe it's not quite as obvious as it may have seemed that day on the market, you know?

Jon Arnold

So at worst, it's a wake-up call. You noted earlier as well, and it's a good note to leave on, that as with any other technology, the Moore's law effect takes hold at some point. Invariably, the cost curves in this LLM world are only going to get lower and lower, right? It's not going to be a constant level of high investment. The potential competitive cost advantage that DeepSeek maybe represents might be a very short-lived thing, because you've got to go where the customers are. And right now, the customers are all with the hyperscalers. And eventually, it's in their interest for those cost curves to come down to keep it sustainable over time.

I don't think there's any industry where you can have a sustainable competitive advantage based on price. I don't think that's really possible. But there's always room for things to be cheaper as long as the quality isn't sacrificed. And I think that's the big change, the big dividing line here: with the OpenAI type of models, there's probably a higher confidence in what you're getting than with something like this, a company no one's ever heard of from, you know, not a friendly partner to the US. So, it's complicated, but the wake-up call is there for sure.

Chris Fine

Absolutely. And on that note, having, as usual, run longer than we thought we were going to, are we wrapping up?

Jon Arnold

Let me run that by AI. Let's see what DeepSeek says about that. Should we end now? And the answer is, you should have ended 10 minutes ago.

Chris Fine

You guys are rambling on. I could have said that in like half the words.

Jon Arnold

That's right.

Chris Fine

Time to get lunch.

Jon Arnold

Exactly. Let AI re-edit this down for you and you'll get the 200-word version, folks.

Chris Fine

You know what I think we need? I think we need something called Gong.ai, which is like the Gong Show. It listens to everything we're saying, and when we get repetitive and take too long to say everything, it brings out the gong, don't you think?

Jon Arnold

Yeah, that could work.

Chris Fine

I think that's coming, by the way. It may not actually be a gong, but I'm waiting for more tools. There are already features starting to appear in tools like Zoom and Teams and Webex, etcetera, that let you rehearse a presentation and they gong you, or suggest better ways to say whatever you're saying. I'm waiting for that.

Jon Arnold

Yeah, all right. Well, Chuck Barris was not on the wrong track, folks.

Chris Fine

Gong.ai, right? Exactly.

Jon Arnold

Different from Gong.io. That's a different company. On that note, folks, we're going to round it out now. And I mean now, because my voice is going too. So, we're at the end of our time. Thanks all for listening today, and we hope you enjoyed our podcast and that you'll continue with us as we explore the future of work here on Watch This Space. You can access our episodes at www.watchthespace.tech or wherever you subscribe to your podcasts. And, if you like what you're hearing, we'd love a review, a rating, suggestions for upcoming topics, etcetera. With that, I am done. Jon Arnold here, handing off to my partner.

Chris Fine

Chris Fine here. Thanks again for bearing with us and listening to us for this episode. Jon, thank you very much, as always, for the co-hosting. We will be back next month with another edition of Watch This Space.

Companies named:

DeepSeek, Microsoft, Nvidia, OpenAI