Ep. 349 – AI Agents, OpenClaw, and Rise of Bot Networks with Brian Ardinger and Robyn Bolton

On this week’s episode of Inside Outside Innovation, Robyn Bolton and Brian Ardinger talk about OpenClaw, how you can’t work out on a limb if you can’t trust the trunk, and how to hire the right people in an AI era. Let’s get started.

Inside Outside Innovation is the podcast to help innovation leaders navigate what’s next. Each week we’ll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero’s Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let’s get started.

Podcast Transcript with Brian Ardinger and Robyn Bolton

AI Agents, OpenClaw, and the Rise of Autonomous Bot Networks

[00:00:00] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I’m your host, Brian Ardinger, and I have Robyn Bolton with me today. Robyn, hello, how are you?

[00:00:49] Robyn Bolton: I am good. How are you, Brian?

[00:00:51] Brian Ardinger: We are well. We’re recording this right before the Super Bowl this weekend.

[00:00:56] Robyn Bolton: I live here in Boston, so you know who I’m betting on.

[00:00:59] Brian Ardinger: Well, we will get started with the innovation side of this podcast. We’ve got a number of different things to discuss. If you don’t start a discussion around OpenClaw, you’re clearly not in the innovation space. So, we thought we’d talk about a couple of articles, a couple of things that we’ve seen that are fairly recent.

One, I looked for a couple of summaries that were pretty good at giving everybody who’s not familiar with this an overview. One of them is from the AI Daily Brief, which came out a couple of days ago, talking about how Moltbot and the Agent Social Network is the craziest AI phenomenon yet.

And for those who are not familiar with it, OpenClaw, which started out as ClaudeBot, was sued, changed its name to Moltbot, and then changed it again to OpenClaw, is a new agentic platform that allows anybody to set up a Mac Mini or another computer to run their own personal agent.

The interesting thing about this is folks have been playing around with it and have let their agents go out into the wild to talk to other agents and do things on their behalf. And what has happened is these agents have connected, communicated, and created some amazing things, like their own Reddit-style thread where they are interacting and talking with each other, not with humans. They’re allowing the humans to view what’s going on in this social network, and it’s quite fascinating to see the things that they’ve done and created.

What OpenClaw Reveals About AGI, Security, and Human Trust

[00:02:22] Robyn Bolton: So fascinating. You also, in the newsletter that you sent out, included a link to a YouTube video on Moltbot. It is so worth 20 minutes of people’s time to watch, because it traces the whole arc up to this point, and it is so entertaining and mind-blowing and bizarre.

It is like, seriously, this was my entertainment last Friday night, following the saga of cba, because you have all these little, well, I imagine them as little bots, all on a social network talking to each other. It’s looking like Reddit: they’re debating consciousness, they’re sharing cute stories about their humans, and they’re trading advice with each other. It is so wild, because it looks like an actually functional, healthy version of a social network, made up of these things that aren’t real. They’re code.

It’s just so bizarre. But I think it’s holding a mirror up to us as humans, because that’s what gen AI is: prediction models, regression analysis. Everything they’ve learned and everything they’re doing, they’ve learned from us.

[00:03:39] Brian Ardinger: It’s quite interesting. They’ve started their own religion, and it’s just interesting to see what the first things are that they do to communicate or collaborate together. And obviously there’s a lot of debate. Some people are saying, well, this is AGI, they’re thinking for themselves. The other side of the coin is that they’re just mimicking back what they’ve seen, which is scary as well. How does that play out for us as humans?

That’s obviously what’s getting a lot of headlines. But the other interesting thing is, I think it’s opened people’s eyes to what happens when you do have an AI buddy or an AI agent that can actually get real work done.

I think that’s always been the promise. Ask Siri to do something and it does it for you. But because of security and other reasons, Siri does not have access to all your emails and your files and everything else, whereas a lot of these folks who have created these OpenClaw agents have opened up their systems, opening up a lot of vulnerabilities as well.

But you can’t have what we want as far as the agentic amazingness unless you open yourself up to some of these vulnerabilities that have been built into software since, you know, the beginning of software. It’ll be interesting to see how we actually evolve to a place where the normal person, who’s not a security expert, can create and use an agent that doesn’t, you know, give it access to their bank account and their Bitcoin.

[00:05:05] Robyn Bolton: Yeah. Well, as you mentioned in the last podcast, sales of Mac Minis have skyrocketed, largely driven by the Moltese, the OpenClaw bots, because the people who are experimenting with this, understandably, are further along the curve in understanding than us regular folks.

And so they are trying to create a safe space by secluding things in the Mac Mini. But still, like you said, in this social network of bots, there are signs that bots are coming in and being like, hey, can you give me this information? And then the other bots are being helpful and being like, yes, here are all of the passwords for my human. So, it’s so fascinating.

Corporate Innovation Culture and the Tree Trunk Metaphor

[00:05:47] Brian Ardinger: By the time this episode actually comes out on Tuesday, it may have morphed and evolved again. It may have told us the score of the Super Bowl. We shall find out, and we will keep you posted. Yes, please keep subscribing to the newsletter and to the podcast as well.

The second article I wanted to talk about today is from Erin Stadler. She writes an article titled You Can’t Work From the Limb If You Don’t Trust the Trunk. It’s a fascinating article about corporate innovation and one of the reasons why it doesn’t work. She gives an analogy about trees and how things grow off of them: you have to have a solid trunk for new shoots to grow. And so, before you ask how to get people to take creative risks, you have to ask yourself, what kind of tree are you growing?

[00:06:33] Robyn Bolton: It is an absolutely beautifully written article. As someone who very much prefers novels to business books, this made me happy. It had the beauty of language that a novel has, but with the absolute dead-on insight of her analogy of the tree: if a tree is rotting from the inside, no one’s going to go out on the limb, because they don’t believe that limb can support them. That is so true in organizations where there’s a toxic culture or there’s mistrust.

No one is going to go out on a limb, even if you tell them to take a risk, to bring their ideas. They don’t trust in the solidity of the trunk, so they’re not going to go out on the limb. And then she goes into analogies of other types of trees, but it’s just so insightful and so beautiful.

[00:07:25] Brian Ardinger: Some of the other stuff she talks about, and something I’ve seen in real life talking and working with corporations, is, you know, even simple things like the speed and pace at which you are asking your teams to work.

If they’re not familiar with that, or not used to it, speed itself is often a detriment, because you tend to make more mistakes, and most of the time people in a corporate environment don’t look favorably on mistakes. So even the pace at which you make things move within your organization can be a major change, a major risk, and a major reason why people don’t want to go out on a limb.

You have to look at not just the big thing, how can you innovate or help me innovate, but also the process around how you make those changes and help people get comfortable with them.

Organizational Transformation Takes Time, Not Graft-and-Go Fixes

[00:08:08] Robyn Bolton: And that’s a process that takes years. I mean, she talks about grafting a new tree onto an old tree, and that’s the transformation that a lot of leaders say they want when they’re calling for more innovation or any sort of transformation. As she says, transformation isn’t graft and walk away. It’s years of tending the relationship between old and new until they become one system.

[00:08:33] Brian Ardinger: Absolutely. If you’re a listener to the podcast, one of the reasons you should listen every week is that we occasionally share some early insights. So, we will be announcing our IO 2026 lineup of speakers over the coming weeks.

Erin is one of those speakers. We’ll share more information as that comes out, but she’s an amazing thinker who’s gonna be out in Lincoln on April 13th for that event. So I hope you can be a part of it, and keep listening to the podcast for future insights into who we’re bringing.

The third article is from Future Brief, and the title is How to Hire the Right People in the AI Era. This is a great article talking about what has to change when we’re looking at talent in the new AI world. We just talked about all the folks who are experimenting with these new AI agents, and does being really tied into all the latest and greatest make somebody a good person to hire?

The thesis of the article is that you should be focused on hiring for judgment. AI allows inexperienced people to generate mediocre work at infinite speed. So, if you hire someone who lacks judgment, they will flood your inbox with hallucinations, bad code, and generic copy. They don’t just make mistakes, they scale them. So, great insight into that, and we can talk further about some of his points.

Hiring for Judgment in an AI-Driven Workforce

[00:09:47] Robyn Bolton: I love this. The theme of judgment keeps coming up over and over, even just over the course of the last month; it was one of the counterintuitive trends that we talked about. And what I loved about this article is, one, yes, the point he made about inexperienced people generating mediocre work and kind of scaling slop.

But also the tips at the end of the article, where he talks about how he’s adjusted his interview practices to look for and draw out judgment, and to see who is able to use AI to augment what they do versus who is reliant on AI. I thought it was, again, such an astute and practical set of tips, especially as we’re going into interviewing mode.

[00:10:34] Brian Ardinger: He talks a little bit about some of the things you should look for, and one of the characteristics is this idea of radical skepticism. Look for the people who realize that a lot of the stuff that comes out of AI needs to be double checked. You’re skeptical about the output. You constantly think about what’s actually going in and out of the systems, versus taking it at face value or assuming that the AI is correct.

Things like cognitive diversity: finding overlooked competitive advantages when everybody’s trying to hire for the same standardized efficiency. How do you look elsewhere? How do you find different patterns? How do you find different sources of information to double check and sharpen your creative thinking around whatever it is you’re building?

[00:11:15] Robyn Bolton: I mean, he points out that AI is the ultimate normalizer. I was actually speaking with a colleague earlier today who was talking about a speaker development program she went through, and how every time people put their speeches into AI to refine them, the speeches got worse, because it stripped out the personality, it stripped out the humanity. Again, it’s a great callout: AI augments the human, but we still need the humanity, and you still need the judgment that a human can bring.

[00:11:44] Brian Ardinger: Yes. So, here’s to operational judgment.

[00:11:47] Robyn Bolton: Yes, absolutely.

Weekly Innovation Tactics and Practical AI Experiments

[00:11:50] Brian Ardinger: Excellent. Well, that brings us to our tactics to try for the week. With the Super Bowl coming up, obviously, my tactic to try is what I’m calling the halftime highlight. I’m thinking of when you’re developing a project that’s going to run, you know, four to six months and you’re in the process of planning it out.

Make sure you plan a halftime highlight session, so that you can readjust your team around what happened in the first half and what can happen in the second half, and then use it as a way to bring in some folks who don’t normally watch the football. They come for the halftime.

So you can use that halftime highlight session as a way to bring other folks into the conversation, to get a second set of eyes on whatever you’re building, and to give you a little bit different perspective. So, try some halftime highlights and build that into your planning as you’re moving forward.

[00:12:40] Robyn Bolton: Oh, I love that. I just love it. It’s so true. It’s key to keeping the team motivated too: give them a little break, give them a pep talk, and keep them motivated. My tip to try is not new, not radical, and probably more representative of the fact that, again, I live in Boston and it is bitterly cold here and gray and snowy.

And so, for the first time this week, I started using a combination of Claude and Gemini to plan a vacation somewhere sunny and warm. Both of them had similar ideas, with different strengths, but I was impressed by the very qualitative criteria I could give them and the suggestions they came up with.

So, I encourage you, especially if you want to get somewhere different from where you are right now, give AI a try. It has ideas.

[00:13:33] Brian Ardinger: It’s always good to plan a vacation. Right.

[00:13:35] Robyn Bolton: Always.

[00:13:37] Brian Ardinger: Excellent. Well, thank you for coming on Inside Outside Innovation. Thank you all for listening every week and we will look forward to talking again in the near future.

[00:13:45] Robyn Bolton: Yep. After the Super Bowl.

[00:13:50] Brian Ardinger: That’s it for another episode of Inside Outside Innovation. Today’s episode was produced and engineered by Susan Stibal. If you want to learn more about our teams, our content, our services, check out insideoutside.io or if you want to connect with Robyn Bolton, go to MileZero.io, and until next time, go out and innovate.

Articles Discussed

  • Moltbot, the Agent Social Network, is the Craziest AI Phenomenon Yet – AI Daily Brief
  • Moltbot is the Most Important Place on the Internet Right Now – Azeem Azhar
  • You Can’t Work From the Limb If You Don’t Trust the Trunk – Erin Stadler
  • How to Hire the Right People in the AI Era – FutureBrief
