Podcast Awesome

Say No More: What AI Actually Changes [Part 2]

Matt Johnson | Season 4, Episode 8


AI makes it easier to build almost anything. So why does that make the job harder?

In this episode, Matt sits down with Font Awesome founder Dave Gandy and engineer Travis Chase to get past the hype and into the real day-to-day of building with AI. The conversation covers what's actually changing on the team, where AI falls short, and what human skills matter more now than they did before.

If you're a designer, developer, or anyone trying to figure out where you fit in a world where your output can suddenly go 10x — this one's worth your time. Dave and Travis don't pretend the answers are simple. They also don't pretend the concerns aren't real.

Fair warning: Dave also makes a case for revisiting waterfall development. It's more convincing than it has any right to be.


What We Cover

  • Why producing more means your quality bar has to get sharper, not looser
  • The discernment problem — when you can build anything, how do you decide what's worth building?
  • Why saying no is now a more important skill than ever
  • The strongest AI concerns Dave and Travis actually take seriously (energy, training data ethics, governance)
  • Why AI seems to help people become more of who they already are
  • How to stay curious and useful during a major technology transition without chasing every squirrel


Timestamps

  • 0:00 Cold open — from low-level to strategy
  • 0:38 Intro
  • 1:30 Where AI falls short right now
  • 2:00 Quality control when output explodes
  • 2:30 Taste, responsibility, and Jory's point at the snuggle
  • 3:00 The discernment problem and snacktivities
  • 4:20 Simplicity means saying no more than yes
  • 5:30 Chasing waterfalls — does waterfall development make a comeback?
  • 6:00 The strongest anti-AI arguments worth taking seriously
  • 6:45 Energy, ethics, and training data consent
  • 8:00 Technology's evolution and the genie that's out of the bottle
  • 9:00 The Industrial Revolution farmer analogy
  • 9:45 Superheroes, supervillains, and hiring for character first
  • 10:20 Two ditches: navigating between idealism and cynicism
  • 11:10 Rev share and what the world should look like
  • 11:45 Governance, compromise, and garbage design
  • 13:30 Washing machines and making more clean water
  • 14:00 Guiding principles for using AI internally
  • 15:00 Company behaviors as an AI framework: curious, humble, adventurous
  • 16:00 AI helps people become more of who they are
  • 16:45 AI will ask people to operate at a higher level
  • 18:00 Refusing to engage is the riskiest move of all
  • 19:00 We need critical voices — and we need them in the room
  • 20:00 Echo chambers, bad data, and the water story
  • 21:30 The world is hopeful — spend your life in wonder
  • 22:30 Home prices, colonizing planets, and the leap to the real world
  • 23:00 Outro


Credits

  • Hosted by Matt Johnson
  • Featuring Dave Gandy and Travis Chase
  • Produced and edited by Matt Johnson
  • Theme song by Ronnie Martin
  • Music interstitials by Zach Malm
  • Video editing by Isaac Chase

🔗 Font Awesome: https://fontawesome.com 

🔗 Check out episode one of the conversation! https://www.podcastawesome.com/2092855/episodes/19065993-build-week-what-we-made-part-1-with-dave-travis

🔗 Podcast Awesome: https://podcastawesome.com

#PodcastAwesome #FontAwesome #AI #DesignAndDevelopment #TechEthics #SoftwareDevelopment


[00:00:00]

From Low Level to Strategy

Dave Gandy: We are spending less time thinking and worrying about the low level and more time thinking about the high level. That's an outstanding problem to have now. We get to stop worrying so much about the low-level stuff, and we get to stay focused on, "Yeah, but what's the strategy?

Yeah, but which one of these should we choose?" Our conversations are changing in a way that, as somebody who's looking over the whole organization, that ex- that really excites me, I gotta be honest.

Welcome to Podcast Awesome, where we chat about icons, design, tech, business, and nerdery with members of the Font Awesome team. I am your host, Matt Johnson, and today I am joined by Travis Chase and Dave Gandy for a number two. I said number two.

Our second conversation about AI.

We're going beyond the conversation of, is AI actually useful? [00:01:00] Because we covered that in the first conversation.

This time we're gonna get into the conversation of, what do we lose when things get so much easier with AI?

And what does it mean to say no more often, when just saying yes to whatever AI can plop out is getting easier and easier?

We're getting a little philosophical in here, but I think you're ready for it. So let's jump into it with Dave and Travis.

Matt: We didn't want this to be like a, a negative conversation, but there all-- there are limits as well. And

Dave Gandy: Yeah.

Where AI Falls Short

Matt: do-- what thoughts do you guys have on what, I mean, we're saying in a broad sense, like there are human things that AI can't replicate. Like you're just saying, Dave, like you, you needed to have a conversation with Dave or with Travis.

Dave Gandy: Sure.

Matt: so where, where in these kinds of projects, and maybe it's specific to a project folks, one of you worked on or folks worked on, [00:02:00] where is it maybe falling short, like right at the moment?

Quality Control Challenge

Travis Chase: One, one interesting path, and the community is talking about this already, is a couple different things. One, you can produce so much that now the review, making sure it's good, it can go to production, you can rely on it, you can ship it to customers and not break their day... like, there's a lot. And so how we solve that problem, where, where do we fit in and curate and, and really keep that quality bar high? It's an interesting problem that's continuing to be talked about, and it needs tooling and different things around that.

Taste And Responsibility

Travis Chase: Also, because you can kinda get in and just do it, you still need taste. You still need to-- And Jory brought up this really great point at the snuggle where he was talking about, you know, we-- we're getting some extra powers [00:03:00] here, but with that comes great responsibility: making sure we're working on

Matt: Yeah.

Travis Chase: things, making sure we're still talking to our customers and building the stuff that they want, not just whatever we can dream up because, uh, that takes us in just so many different we're just gonna be divided.

Dave Gandy: Well, the funny thing is, what's, what's happening naturally now with this whole discussion is that we are spending less time thinking and worrying about the low level and more time thinking about the high level. That's an outstanding problem to have now. We get to stop worrying so much about the low-level stuff, and we get to stay focused on, "Yeah, but what's the strategy?

Yeah, but which one of these should we choose?" Our conversations are changing in a way that, as, um, you know, somebody who's looking over the whole organization, that ex- that really excites me, I gotta be honest.

Matt: Yeah, that's cool. It makes me think, um... Yeah. 

Discernment Over Ideas

Matt: It, it, it makes me think that it's the-- [00:04:00] really the discernment piece, right? Because it's sort of like,

Dave Gandy: Yep.

Matt: It's almost like these tools are creating... say, specific to, to the Awesomeverse, we get to have lots of opportunity for snacktivities, which are what?

Like, kind of the nice-to-haves, and

Dave Gandy: Yeah.

Matt: "Hey, I got this idea. I, I'd love to do th- I'd love to do this." Um, it does create a problem in a sense, like, um, which is a good problem to have, and if you have somebody in the room or somebody that's a, a part of, um, helping decide or part of this conversation that is really strong on the discernment piece, that seems like that would be a really key thing.

It's sort of going along with what Jory was saying, is, like, we can do a lot more cool stuff, and we can have our hobby horses and stuff, which is really fun. It's really great. But also discerning, with conversations with the customers, what's gonna be most helpful? What's gonna help make this a better [00:05:00] product?

Uh, what, what features do we need to build? Um, we don't wanna go chasing after every single squirrel, and I think that's where the discernment piece-- and that's a very human thing, right?

Travis Chase: Yeah. 

Simplicity Means Saying No

Travis Chase: Yeah, I also think, um, because you can do almost anything, uh, one of the things we love to really focus on is simplicity and making sure

Dave Gandy: Mm-hmm.

Travis Chase: we relentlessly focus on whatever we do, we keep it as simple and easy to use as possible.

Dave Gandy: And what that means in translation is it means you still have to say no more than yes. Now that it's so easy to build a feature, it is so much more important now to know: no. No, if that's there, it makes it worse. You think it makes it better 'cause there's more stuff, but more stuff is not better. Anyone who owns a house and has to move can tell you more stuff is not better, right?

The right stuff, that's what you're looking for. You know, and, you [00:06:00] know, it's, it's funny, you know, a little, little line you just mentioned there. 

Chasing Waterfalls Debate

Dave Gandy: Um, we actually are talking about chasing waterfalls right now, uh, i-in the sense of, uh, we just had a discussion. Well, now with like spec-driven development and really wanting to plan things out and have a, yeah, a sense of the spec ahead of time, does waterfall development actually work?

Was-- 'cause the problem with waterfall

Matt: Right.

Dave Gandy: was that the distance in time between what you thought you wanted and what you actually wanted was so large, you might as well not even do it. But now, if you can get it out in a couple of weeks with what you know to get feedback right away and then let it hit reality faster,

Matt: why

Dave Gandy: can waterfall work?

Maybe we should go chasing waterfalls, and maybe it's the rivers and the lakes that we're used to, we gotta leave behind. I, I don't know. I don't know.


Strongest Anti AI Concerns

Matt: Is there like a strong... What, what do you think the strongest AI, [00:07:00] um, anti-AI argument there is out there? And... Or, or which one do you take seriously, where you're like, "Okay, I'm curious about this.

I think we need to kinda keep an eye on this and keep thinking it through." Um, there are a lot of doomsdayers out there, um, but we also don't wanna shut down the concerns that folks have. But are

Dave Gandy: Yeah.

Matt: arguments that folks have that you're, you really are taking to heart right now?

Travis Chase: I think for

Dave Gandy: Yeah.

Travis Chase: I mean, there's

Matt: Sure.

Ethics Energy And Governance

Travis Chase: that, that the whole world is gonna have to try to figure out. Like, you know, it uses compute, it uses power. How do we go about, you know, doing those things responsibly? Um, and there, there definitely is the, uh, the uncomfortableness of, of information it was trained on, and was that gathered correctly in proper permissions and that kind of stuff.

You, you can't deny some of that stuff. Um,

Dave Gandy: Right. Absolutely.

Travis Chase: you know, [00:08:00] some of that stuff, there's no putting the genie back in the bottle. So, like, maybe we, we get better moving forward. I think this is where, you know, you have to have, uh, a lot of folks, um, you know, who care, bring up the conversations, constantly push that.

You know, this is where, you know, you need really good governments to come in and help, you know, the citizenry and that kind of thing. You need those kind of things. And we love... You know, we, we can have opinions, and we can talk about those things and that kind of stuff, uh, absolutely, and, and they should be, um, thought about and have really good resolutions.

But also we should, you know, not be afraid to use the tools in what we can-- what we

Dave Gandy: Yeah.

Travis Chase: and, and using those, uh, for ourselves in the most ethical manner that we feel, uh, that we can do. It's... And that's, to me, that's always been the story of technology. You know, you can do amazing things with technology.

You can do horrible things with technology. And so it's, it's just another evolution, maybe one of the most [00:09:00] powerful we've had in a long time, but it is along that evolution, and those same conversations, uh, you know, need to be had. And how hopeful am I? You know, it's like we're just now having really, really, really important conversations about, uh, social media and the effect on, uh, kids and adults and all that kind of stuff, and this is, in my opinion, way more powerful than that.

So yeah, that... It-- There's definitely concerns there, uh, for sure that you gotta keep in mind as well. 

History Lessons And Supervillains

Matt: I always try and break this down to like the simplest, uh, way of, of understanding things, you know. Like, you think about the time, um, when, when industry was like ramping up and there was a huge change in the Industrial Revolution, you know.

And I, I think like, okay, think about the life of a farmer. They were working hands-on with animals. Their feet and hands were in the dirt more. Along comes John Deere tractors. Well, that was a, a-- that would seem like a threat, right? Like, "Oh, you're gonna change my jo--" [00:10:00] You know, like, what's the point of, uh, of a farmer if you just have machines doing all this stuff, right?

Um, and so the plus

Dave Gandy: Humanity, humanity's never been different. Humanity never changes on these same questions that have been asked before, and so it's really important in our wisdom that we look backward at how these kinds of transitions have happened before. 

Supervillains And Ditches

Dave Gandy: It makes me think of two things.

Superheroes and ditches. On the superhero side, we talk a lot at this company about hiring for character first and capability second, uh, because we already have a word for someone who is all capability and no character. That's, that's a supervillain, right? That's Zod, right? That's somebody who is all-powerful and does whatever they want with it.

That's evil, right? All capability, no character is pure evil when there's no constraints on it. And so we've, we've got to be careful about understanding where this goes, right? You know, talking about farming of old, right?

Ditches Compromise And Rev Share

Dave Gandy: We, we talk a lot at this company also [00:11:00] about how every road has at least two ditches.

Um, and it's, it's humans' desire to go find a ditch and live in it. "I'm gonna live in this ditch." It's easier than being out on the road and trying to drive down the middle of it, so I am going to stick with one ditch, right? And the two ditches in this one are, it shouldn't matter the way the world is.

We have to work towards the way the world should be. That's a ditch. The other ditch is, well, this is the way the world is, there's nothing you can do about it, right? Those are both the ditches, right? And if you wanna live in either one of those, you can. And for whatever reason, those tend to be left- and right-coded in our world right now.

It's kind of stupid. But what if, what if both are ditches to avoid? What if, at the same time that we acknowledge the reality of the way the world is, right, and that we're gonna work within it, we also wanna move towards the way the world should be? How should the world be in this?

Well, if you stole from other people without a license, you need to pay for it, right? [00:12:00] And I'm not just thinking a one-time payment. There needs to be rev share. There needs to be permanent rev share for everyone you illegally scraped, and they get it forever because you built your stuff off of it, right?

This is not a challenge. This is not... This has happened before. We can solve it again. Now, is there more to it that we need than that, right? Is this, is this problem actually more complicated than that? Sure. Right? But it's at least a starting point for the discussion. 

Governance And Compromise

Dave Gandy: But it also means that we, uh... maybe we shouldn't have people who are geriatric, like fully geriatric, running a country.

Maybe, like, both options being eighty years old is really stupid, right? Maybe that's just... Maybe we should throw someone in there, I don't know, really, really young, like in their fifties. Like super young, right? Like just insanely young, in their fifties. Um, right? Like let's... Come on. Come on here. G-g-give me some options.

Give me something real here, uh, to be able to work with, and, you know, give me a Congress where people understand that, that governance, the, the governance is [00:13:00] compromise, right? It's not standing on principles, and it's not no compromises. You know what? We talk about this all the time in design. You know the worst design in the world? As soon as somebody says, "No compromises design," you know what that is?

Garbage design, right? It's the, it's the same way in governance, right? You want compromises because people are naturally gonna notice one side of the road and tell you to steer away from it, and the other people are gonna notice the other side of the road, and the only way you drive down the middle of the road and make any progress is by listening to both sides.

And it's always been this way. And there are people who just wanna live in a ditch because I tell you what, it's easier to live in a ditch. Rather than recognizing that these things are hard and working through them, it is much easier to say, "Nope, it's simple. Live in this ditch. That's where I'm gonna be.

I'm gonna be over here in the ditch."

Matt: Yeah.

Dave Gandy: And that doesn't really help either.

Matt: Yeah.

Dave Gandy: Um, so it'd be great if, you know, if we could

Matt: Clear,

Dave Gandy: move on.

Matt: and character. Yeah.

Tech Optimism And Wrap Up

Dave Gandy: Yeah, and, and a reasonable, a reasonable spirit behind when new things happen. Being reasonable about it, recognizing that, you know, there's one way to make sure, um, uh... You know, we, we had this problem with washing machines where we decided they were using, you know, a lot of water, and there are two ways to solve it, right?

You can go and you can say that they're not allowed to use as much water anymore. That's one way to solve it. Or we could just, you know, make more clean water. We, we, we're humans. We're actually really good at making new technology.


Guiding Principles For AI

Matt: Folks are looking for answers. They're looking for clear ways to think through this stuff. They're thinking about ethics. But what is it that's guiding how we make decisions about how we're using AI internally? And do you kinda have some guiding principles that might help folks, you know, think through how they might wanna use it?

Travis Chase: Yeah. Yeah, I think for me, it's... right now this stuff is changing rapidly, and it's gonna continue to change rapidly. So I think really having the spirit of, um, exploration, right? Having a spirit of adventure. Uh, being

Dave Gandy: [00:15:00] Yeah.

Travis Chase: uh, we can and use it the most effective way we can.

Dave Gandy: Yeah, I th- I think that sums it up. Um, for me, those... The, the company behaviors really are how we try to think through things: curious and reasonable, humble and helpful, adventurous and dependable. And we're gonna be more of who we are. That's a big thing we look for in this company.

How do we help everyone who works here be more of who they are? Uh, and the really strange thing is AI seems to [00:16:00] do that, right? If you're willing to try it out... uh, my experience with it over the last week is that's exactly what it does. So I'm, I'm really excited to see our people become more of who they can be.

Uh, that's the m- that's the thing that I'm, I had not expected to walk away from this with at all. Um, you know, uh, starting out, you know, skeptical about, uh, A, is this gonna actually work, and B, how is this gonna feel, and are we gonna like the other side of this? Uh, and I've been pretty surprised and pretty shocked by it.

Um, I think another thing is, uh, this is going to ask a lot of people who may like being in the lower levels. It's gonna ask you to come up. It's gonna ask you to come up to a higher level. It's gonna ask you to think more critically from a higher perspective. And that may not be your f- your favorite thing.

But it's also gonna be what the job probably becomes. And sometimes that's the nature of a job.

Sometimes not everything is exactly what we individually want, but figuring out how to... You know what I always have found [00:17:00] really fun in a job? What do others find valuable, right?

What's valuable for me to be working on? That's, yeah... I mean, and I guess that's why we're always tuned towards user experience first, before anything, um, 'cause that's what we always care about, right? How are, how are we gonna serve them, be more of, of who we are?

How are we gonna find our place in the world? And I think this fits along that story. It's gonna be a lot of change, but there's real hope left over. You know, you, you can imagine somebody at the turn of the Industrial Revolution, when these new machines start coming out, and you see all the ways that they're just not as good as a person, and that's all you see.

Uh, you've gotta look towards the future that could be, uh, and then make sure it happens. Make sure the good version is what happens. That's what being in a world where good things happen is. We, we can make sure nothing good ever happens by not letting any progress happen, right? But I, I kinda like that food isn't forty percent of my paycheck anymore.

I, I like penicillin, right? I like vaccines. Those are good, right? We wouldn't have [00:18:00] those if at the beginning everybody just told us how this was all bad and worse than before. And so there's, there's a reality of that too. And you can also imagine how somebody at the beginning of that Industrial Revolution, seeing these machines and how it may not be as good, refusing to be a part of that... Those are the people that are gonna get hit the hardest.

If you are fully unwilling to consider that possibility, play around with it, it's gonna go for you like it's always gone before when these, when these transitions happen. And you don't have to be that person, right? You get to be someone who is curious and reasonable, adventurous and dependable, and humble and helpful to those around you.

This can be fun, right? This can be a new legitimate adventure, and that's the part that excites me the most.

Matt: Yeah. I think in recent years, especially just how crazy the world seems right now, I, I've... And it kind of goes along with what you're-- what I was picking up from what you were saying [00:19:00] earlier on, Travis, of, like, this might not be the ideal of what I would prefer, but it is the reality of the way the, the world is, and let's stay positive and curious about it and see where it goes.

And folks are not-- if they're deciding not to engage and to be doomsday about these kinds of tools and the way the world is changing or whatever... we also need the critical voices. We need to be

Dave Gandy: Well, [00:20:00] and

Matt: can,

Dave Gandy: yeah.

Matt: try and figure out how to make this better together. And yes, raise the red flags when it's necessary, but

Dave Gandy: Yeah.

Matt: reality that we're dealing with, you know?"

Dave Gandy: ahead.

Travis Chase: H-h-and the history of the, albeit in the great scheme of things, young industry is constant

Dave Gandy: Yeah.

Travis Chase: That's just the thing.

Matt: Right.

Travis Chase: to

Matt: Mm-hmm. Mm-hmm. Yeah.

Hopeful Closing And Thanks

Dave Gandy: Yeah, there's a, uh, there's a kind of closed-mindedness where you only see facts that fit your narrative. There's a story going around that these data centers that AI needs are using these really, really horrible amounts of water, and the person who first wrote this article, and has been quoted ad infinitum, did a conversion wrong.

And they had it be a thousand times worse than it was. [00:21:00] They just misunderstood one of the metrics, they quoted it a thousand times higher than it is. But that's somehow the story that's carried, because there's also, there's a narrative out there that everything is awful. That's the thing that's happening with social media. They try to convince you that everything in the world is awful. No matter which corridor of the isolated chamber you live in, we have created these isolated chambers for ourselves just with the sheer fact of what information we respond to, who the people we follow are.

We've created these silos, right? Be willing, be brave to consider things that are outside of it. Be skeptical of data that fits a specific narrative over and over again. And if new information comes about, be willing to hear what it is. Be curious, be reasonable. That's the hopeful part for the rest of our lives.

As we grow older, the world is not gonna get simpler, but our belief in whether it can be good is up to us. And then when we decide what that is, we make sure to [00:22:00] follow up and make it happen by our expectations. What we hope for is what we make sure happens. And if we find everything to be awful, and we're so smart, we see past it, and everyone else is an idiot but me? That's not g- That's just another echo chamber to live in. And the world is truly hopeful, right? The world is amazing. The world is constantly changing, and there are things to be constantly fascinated by and in wonder of, right? To spend our lives in constant wonder is a joy. We get to do that, and we get to be alive while this is happening too, right?

Every one of these major advancements that's happened before in the history of humanity, we get to be here for it. This is cool. It's like we're suddenly going to colonize other planets in some way, right? Because you just wait. You just wait until this makes the leap from software to the real world.

The speed at which we can imagine a new thing and then go and make it with that speed, oh, that's gonna be new, right? Maybe we can finally get to the last remaining thing on that cost of living that hasn't been [00:23:00] touched in a hundred years: home prices. Let's bring them down. Who's gonna do it?

Who's with me? Let's do it. Let's get that down to seven percent of your paycheck rather than forty. Come on, let's do it, guys.

Matt: Thanks, Dave and Travis, for, uh, the spirited conversation. This was really fun, and, uh, a, a real rarity and a treat that w- I can get both of you guys together at one time. It's amazing. This is an ongoing conversation. The conversation will be different a month from now, a year from now.

But in the meantime, thanks for carving out some time. And, uh, I guess,

Dave Gandy: Yeah, man.

Matt: something.

Travis Chase: Font Awesome?

Dave Gandy: Go make something awesome.

Matt: right, thanks guys.

I'd say that's about a wrap for our second episode on the conversation about AI, and I'm sure we're going to continue to have the conversation in the future.

Thanks to Dave and Travis for carving out some time in a very rare appearance with the both of 'em at the same time,

and helping us to kind of get our heads around the fact that maybe [00:24:00] these deeper conversations about AI aren't actually about AI at all.

Maybe the conversations are more about taste, responsibility, learning what to say no to and when.

And figuring out how to be more of who you already are.

I'd say that's about it for now. And, uh, per usual, Podcast Awesome is produced and edited by this guy right here, Matt Johnson. The Podcast Awesome theme song was composed by Ronnie Martin, the music interstitials were composed by Zach Malm, and we get some extra video editing help from Isaac Chase. And you know the rest: go make something awesome.
