Today we are talking about The Good and the Bad of AI, how our panel feels about AI, and, you guessed it, more AI with guest Scott Falconer. We'll also cover Field Widget Actions as our module of the week.
Topics
- AI and Social Isolation
- How We Use AI
- Friction and Independence
- Stack Overflow Debate
- Collaboration and Team Culture
- Is AI Inevitable
- AI Hype Meets Costs
- Adoption Cooling Signals
- Pricing Inequality Risks
- Open Source and PRs
- Requirements and LLMs
- Easy Tools Not Always Right
- Juniors Learning and Patterns
- Human Value and Ambiguity
- Losing Cognitive Endurance
- AI vs Social Media
- Uniquely Human Skills
Resources
Module of the Week
- Brief description:
- Have you ever wanted to enhance the Drupal content editing experience by allowing site builders to attach actionable buttons directly to field widgets on entity forms? There’s a module for that.
- Module name/project name: Field Widget Actions
- Brief history
- How old: created in Oct 2025 by Artem Dmitriiev (a.dmitriiev) of 1x Internet, a founding member of the AI Initiative
- Versions available: 1.0.0-alpha1 and 1.3.0, both of which work with Drupal 10.3 and 11.1 or newer
- Maintainership
- Actively maintained
- Security coverage
- Test coverage
- Documentation - includes Markdown files that explain how to set up and extend its capabilities
- Number of open issues: 12 open issues, 4 of which are bugs
- Usage stats:
- 24 sites
- Module features and usage
- With this module installed, a site builder can attach action buttons to form fields in Drupal entity forms, for example for creating nodes or taxonomy terms
- What happens when you click a button depends on what processor you associate with it, and the settings you configure for the processor. Processors can be provided by other modules, like AI or ECA.
- For example, you could attach a button to a tags field that, when clicked, will send the content of the body field to an AI agent that will return a set of suggested tags. Or, you could have it trigger an ECA model for a more deterministic flow.
- This is all done using a plugin framework implemented by Field Widget Actions, so you can also create your own custom processors to be used with action buttons.
- One of the things that got me excited about working with the team behind Augmentor AI was the approach that module used to make AI something a user would manually trigger, and could then curate before the suggestions are saved. Field Widget Actions allows that same approach to be implemented with the AI ecosystem that is growing by leaps and bounds thanks to the team involved with Drupal's AI Initiative.
- It’s worth noting that Field Widget Actions used to be a submodule of the AI project, so if you’re using a version of that older than 2.0, you may already have Field Widget Actions available in your codebase
John: This is Talking Drupal, a weekly chat about web design and development from a group of people with one thing in common: we love Drupal. This is episode 542, another AI show. On today's show we're talking about the good and the bad of AI, how our panel feels about AI, and, you guessed it, more AI with our guest, Scott Falconer. We'll also cover Field Widget Actions as our module of the week.
Welcome to Talking Drupal. Our guest today is Scott Falconer. Scott helps humans and machines think together. He leads applied AI at Acquia, serves on the Drupal AI Initiative leadership team, and authored Managing AI, drawing on a background in cognitive science and enterprise-scale architecture. Scott focuses on the messy, practical mechanics of integrating AI directly into everyday workflows.
Scott, welcome to the show and thanks for joining us. Glad to be here. I'm John Picozzi, solutions architect at EPAM, and my co-hosts today are joining us for the fourth and final week. Catherine Buku, backend developer and architect at Mindcraft. She's just a normal backend developer and architect, trying to build nice things and not delay projects too much.
Catherine, welcome to the show. Thanks for joining us for the last four weeks. Hopefully it was enjoyable.
Catherine: It was lovely,
John: Fabulous. And last, but certainly not least, Nic Laflin, founder of nLightened Development.
Nic: nLightened. Happy to be here. Looking forward to this show.
John: Nic, it's an AI show. You don't have to lie to us.
You're cautiously optimistic about being here. We know. It's all right. And now to talk about our module of the week, let's turn it over to Martin Anderson-Clutz, a principal solutions engineer at Acquia and a maintainer of a number of Drupal modules and recipes of his own. Martin, what do you have for us this week?
Martin: Thanks John. Have you ever wanted to enhance the Drupal content editing experience by allowing site builders to attach actionable buttons directly to field widgets on entity forms? There's a module for that. It's called Field Widget Actions. It was created in October of 2025 by Artem Dmitriiev of 1x Internet, a founding member of the AI Initiative.
It has 1.0.0-alpha1 and 1.3.0 versions available, both of which work with Drupal 10.3 and 11.1 or newer. It is actively maintained, has security and test coverage, and for documentation includes Markdown files that explain how to set up and extend its capabilities. It has 12 open issues, four of which are bugs, which is not bad considering it is officially in use by 24 sites according to drupal.org.
Now, with this module installed, a site builder can attach action buttons to form fields in Drupal entity forms, for example for creating nodes or taxonomy terms. What happens when you click a button depends on what processor you associate with it, and the settings you configure for the processor.
Processors can be provided by other modules, like AI or ECA. For example, you could attach a button to a tags field that, when clicked, will send the content of the body field to an AI agent that will return a set of suggested tags. Or you could have it trigger an ECA model for a more deterministic flow.
This is all done using a plugin framework implemented by Field Widget Actions, so you can also create your own custom processors to be used with action buttons. Now, one of the things that originally got me excited about working with the team behind the Augmentor AI project was the approach that module used to make AI something a user would manually trigger and then could curate the result before suggestions are saved. Field Widget Actions allows that same approach to be implemented with the AI ecosystem that is growing by leaps and bounds, thanks to the team involved with Drupal's AI Initiative.
It's worth noting that Field Widget Actions used to be a submodule of the AI project. So if you're using a version of that older than 2.0, you may already have Field Widget Actions available in your code base. But let's talk about Field Widget Actions.
John: So Martin, I think, you know, the first question I'm going to ask here is actually not related to this, but related to its integration with Tagify.
Have they worked that out yet, so that you can use this with that kind of tag interface? Or is that still being worked on in the issue queue?
Martin: That's a great question. I, I don't actually know the answer to that, unfortunately.
John: All right. That's okay. I mean, I think this excites me on a real high level, just to have the ability to kind of drop those buttons in and allow for better AI interaction.
Another module that uses similar functionality actually integrates with the image library. So when you add an image, you can actually select AI from a dropdown, put a prompt in, and it'll generate an image for you. But things like this, for me, feel super important, and feel like they make the AI modules and backend in Drupal a lot more accessible to folks.
Martin, any any interesting use cases that you're, you're seeing with this or that you haven't built yourself?
Martin: So I think for things like summarizing text, this is great. I will say, I think this is a much better approach to me than some approaches I've seen that kind of assume that you can, on node save as an example, send the body off to AI, get a summary back, and then automatically save that without that human in the loop.
So I think by keeping this a manual step in the process, you basically bake in that best practice of keeping a human in the loop. And that to me is why I'm really passionate about this as something that is a much more, you know, what's the word? Disciplined, I guess, approach to implementing AI and integrating it into content workflows.
John: It makes it optional, right? Because I use an automator on my personal website that basically goes out and writes the meta description for a post once I save it. So I save, it goes out, does it, puts it in, and then going forward it doesn't update it unless I remove that or tell it to do it.
So, but I agree with you that this feels a lot more like, you know, human in the loop or, or gives you that ability to say, no, I'm gonna put the tags on this thing myself, as opposed to pressing the button that's gonna AI generate the tags. Right. Well, so yeah, I, I can see what you're saying there.
Were you gonna say something, Nic?
Nic: I like this because, the other side of it obviously, it feels like a really pluggable system for building almost like mini previews. So for example, imagine you have an e-commerce store, you're building a new product, and you have a kind of semi-curated list of related products.
Like I could see this being a button where you click and it uses some logic to pull in, like, oh, based on price and tag and whatever. We think these three products are related products. You click the button, it pulls them in, shows them to you, and then the editor can be like, oh, that's great. Save it if, or like, no, I don't want this middle one, I want a different one.
And they can remove the middle one and choose a different one, or they wanna change the order for some reason. So it gives them the ability to kind of automate a piece of it when they're ready. It may or may not use AI, but it's an editorial workflow that's easier than having to manually use something like the entity browser to find things. You can kind of semi-automate something.
I can also see a bunch of other situations, like if you have something that's coming from a data lake or another endpoint that you're kind of aggressively caching. You don't really wanna clear cache necessarily for something, but the editor is like, you know what? I just added this thing up there.
It's not showing up here. Click this button and it will do some sort of like targeted request of the API to pull that data in. I can see that being really useful at, you know, for editors, you don't have to give them access to just clear the whole site cache. It's just like, Hey, whatever this particular instance is, call the API get that data refresh where you need to refresh and then let, let Drupal do its thing.
This seems like it would be really powerful for situations like that. Of course, the AI version we've already talked about, and that's where it came from, so I think people have already decided to use it there.
Catherine: It seems powerful, and that's good. My initial instinct with anything like this is we're gonna end up with buttons all over the UI, giving editors too much stuff to do that we shouldn't give them to do.
I'm definitely on the path of, I want my editors to have less functionality. Like, I'm always fighting the battle of removing things. It sounds like, you should not be able to do that. Why did we let you do that? Who wrote this three years ago that gave you the option to do that? We should take that away.
So, I mean, it sounds super cool, and I agree with John and Nic, as you were saying. I do feel like there's the potential here for this to just be a runaway of a million things that editors can do that they probably shouldn't be doing.
Scott: Yeah, it solves a real problem there, though. You know, when it comes to AI and agents, everyone generally jumps to a chat interface, and chat interfaces are great, but they're also the worst of the worst of the open-ended, do-whatever, no-guardrails type things.
Where I really like to see AI for users is a, a very specific place where it's kind of known what's gonna happen and you have some like guardrails around it. You know, there's already these kind of fundamental questions about when you're in a chat bot and you're telling an agent to do something, is that agent acting on behalf of you?
Does it have your permissions? How is that working, right? And it's just a ton of open-ended questions. It's also an area for Drupal where it's really hard for us to compete with the bigger companies that deal with chat interfaces and have all the tooling and everything there. But having very specific guided actions where the user doesn't even know or think that they're using AI...
We use this all the time. When you're using autocomplete on your phone or autocomplete in your email, that is an agent, that is AI working and doing something with you. But it's just so inherently built in, and it's very targeted in action. So I agree, like I don't wanna see a proliferation of buttons, but also there is the trade-off: once you move to a chat interface, there really are no buttons anymore.
And then there's a real lack of guidance in what the users do. Yeah.
John: So I mean, I think we're all in agreement here that, you know, the interface should be considered, and the usability of not having 50 million buttons. I don't necessarily know if I agree with giving content editors less functionality.
I just think common-sense functionality is probably the way to go. All right, Martin, well, thank you for bringing us, again, another on-topic module of the week. If folks wanted to connect and suggest a module of the week, how could they go about doing that?
Martin: Always happy to discuss candidates for module of the week in the Talking Drupal channel of Drupal Slack.
Or folks can reach out to me directly as mandclu on all of the Drupal and social platforms.
John: Awesome. See you next week.
Martin: See you then.
John: Okay. So on today's show, we're gonna take a slightly different approach to how we usually do things and have more of a panel discussion about AI. I'm gonna play the moderator, and our hosts and guest hosts will be our panel. So we're gonna jump right in. And first of all, let's talk about AI making people kind of more independent.
You know, it feels like AI gives folks a certain amount of independence. And I'm wondering, does that feel empowering to our panel? Does it feel like too much power for folks that maybe don't know how to use it? I think we've all seen, you know, folks going to ChatGPT and putting personal information in there, right?
Or developers maybe asking AI for code and then just pushing that right into a repo, into production, right?
Nic: Well,
John: Like, do we think the independence is too dangerous, I guess, is where I'm going with that one.
Nic: Let me, I'll take this first, because I think it's kind of interesting, 'cause this is something I know.
Catherine, I'm very interested in hearing your opinion on this, because it's something you've mentioned a couple times. But as I was preparing for today, I was thinking about this, and this particular piece hasn't actually affected me yet, really, all that much. Most of my clients are very much taking a cautious approach to AI.
And there's a couple of juniors, and, you know, there's regular restrictions and things around what they can use and when they can use it, but they have access to it. Some juniors are asking fewer questions because they're getting some answers from AI. But this particular piece hasn't affected me directly. I just realized as John was reading this that there is one place where I've seen this, and that is, as everybody knows, I'm a Lego fan.
Maybe Scott is just learning that now, but I'm a fan of Lego. There's a website and a community that I'm part of called Rebrickable, where, I mean, the primary thing is people build their own models and some people sell them there. But there's a subforum of that site where you can basically take a picture of a Lego piece that you found that you don't know what it is, or where it came from, or if it's even Lego, and other community members will help you identify it.
And I was sorting through some bulk that I had, and had, I don't know, half a dozen pieces that I had no idea where they were from, couldn't find them. So I took pictures and posted them on the forum, and sure enough, people in the community identified them, I mean, within an hour. I mean, it's kind of insane how quickly these people can identify a random piece from one set in 1992.
And after about a day somebody replies like, why are you commenting here? Just post on, I don't know, some app that uses AI recognition, and it will tell you the part, and you would've had it quicker. And my response was like, well, the whole point of this is because it's fun to talk about and share the pieces that I found in this bulk lot.
And people can talk, and then I can look up... like, yeah, if I only cared about identifying the piece, I could take a picture and post it to some bot. But then I couldn't talk to, I dunno, it was four or five people that ended up helping identify it. And yeah, I think that type of social and professional isolation is something that we should consider.
John: I have a question about the first thing that you said, and the fact that this hasn't really impacted you. Are you using AI in your development work?
Nic: So, yeah, maybe we should start the show there. Maybe we should go around the panel and say how we're using AI. So I'll start again.
I guess this intro is gonna be very Nic-heavy, but I'll be brief. I use AI. I've been evaluating AI kind of since it hit the scene. I mostly used it through a chat interface. I spent fair amounts of effort trying to see if it would improve my workflow, and ultimately came to the conclusion that, you know, ChatGPT, that type of style workflow, wasn't generally very helpful beyond, like, identifying some documentation or something that I could read.
I found most of the time that it was either net zero or net negative time. I've recently been hearing so much about agents. I have a client that uses them fairly heavily. They offered me a Claude Code seat and gave me permission to use it on their code base.
And I've been evaluating that, but that's been less than a week. I already have some pretty significant thoughts about it. I think I've identified some places it's useful. One,
John: one word answer, positive or negative.
Nic: Oh, I mean, you can't say that. I mean, there are some use cases, but I don't think it's... I'm in the middle.
John: You're in the middle. Okay.
Nic: But we, we'll get to, I think, agents in, in a bit, but yeah, so that's how I'm using it.
John: So I'm gonna shift to Scott really quick here. Scott, I'm assuming, because you're the AI guy at Acquia (do they call you that, by the way? Because they should if they don't), I'm assuming you're using AI. And getting back to the original question, I'm wondering, is the independence that AI gives people scary to you?
Scott: Yeah, no. I use it all day, every day, both at work and for personal things, for art and music. Like, I actually got into it with art and music years ago, before it even became applicable to workplace type stuff.
John: Do you feel like it stifles the creativity there?
Scott: No, absolutely not. I think it allows me to produce and make things that I would never be able to do, especially as an amateur in that area, right?
Music albums, video, film. You know, the fact that I, basically laying on my couch, can now compete with a Hollywood studio in what I can produce now. I would say for the independence, I think the independence is a fantastic thing. You know, one of the best areas that we see for AI is filling in the individual gaps of people, right?
So you see this pattern where somebody's an expert, and they come in and they say, I tried to do my expert thing that I'm an expert in, and it didn't do it as good as me. That is fine, right? It really is there to fill the gaps where you aren't as good. And one of the things that I like people to keep in mind in this is that it's very easy to look at AI as this kind of all-powerful thing and really devalue the things that feel easy to us as humans.
But there's a lot that feels easy to us as humans that's next to impossible for AI to do, and I don't feel that's coming anytime soon. So really look at where your personal strengths are, and your general strengths as a human, and figure out where AI, which is just another tool, another collaborator that we have in the system, can help build out what you're doing and do things that are now possible.
I also, you know, when we talk about independence and individual things, I also think we need to look at what tools like this can do for people with disabilities, right? Anything we've done over the years, you know, transcription, reading, being able to have assistive devices: it is huge there. So really looking at the areas where it can help individuals and independence, I think, is an amazing thing.
John: Catherine, what about you? What are your experiences with AI, and do you feel like it's elevating your ability to be independent?
Catherine: Yeah, I mean, I use it every day, and I use the agentic stuff. You know, I've got multiple agents in a loop doing all sorts of fun stuff and doing great things.
So I totally get where you're coming from, Scott. Like, there have been moments where it's done something for me that I've sort of had on the back burner for, like, years, and I just, like, send it a text and go, okay, whatever, do this, and then, like, oh, it's done, right? So that independence does feel empowering.
There was definitely a moment in December when I realized, and you guys might remember this, this was the genesis of me wanting to talk about this on this show, right? Where I realized that I could have gone a week without talking to anybody. And that scares me. So it is empowering, and I totally get that it feels very liberating.
But at the same time, it is also scary, because if I'm at a place now where I could have gone a week without talking to somebody, what is that gonna look like a year from now? And I think that we have to take deliberate actions, particularly if we're in a leadership role. We need to take deliberate actions to ensure that the things that make us human, the things that fulfill us around interactions with other humans, that those things are not lost.
And we are not historically great at doing that sort of touchy feely, esoteric stuff.
John: Hmm.
Catherine: So it's a bit scary.
John: So, a question, and I feel like I know this, but I wanna make sure that I'm accurate here. You're kind of a solopreneur, right? You're kind of working solo, on your own, right?
Catherine: Yeah. Yeah. I know. I mean, I have traditional, like, agency clients, and, Nic, I'm in the same position you are. I do have a big client that absolutely won't let me touch it, right? Yeah. And I cannot feed their code base to AI. I do not use AI. So there is a significant amount of my professional life where I do not use AI.
What I was just discussing is more in my personal projects, some side projects, not the bulk of my professional work. The bulk of my professional work is still very traditional coding, because my main client is cautious, cautious, cautious.
John: Well, I guess my question was more around whether that exacerbated the isolation.
Catherine: No... well, I don't think so, because... well, I mean, yeah, I think it would have to, definitionally, because I'm not going to work in an office, right? I'm not in a physical space
Martin: Yeah.
Catherine: With people. But that has always been isolating for all of us.
If we work from home, that's not new. The reason I don't think that this particular thing with AI hurt that, or made that more apparent, is because I do work with a fairly large team. And so I could have, had I chosen to, you know, collaborated with other people. So I realized, right, that I could have gone a week without talking to anybody.
I didn't go a week without talking to anybody, but I could have, and that was a bit of a scary revelation.
John: Right. Okay. I see what you're saying. So you identified it and were like, oh, I'm gonna go talk to somebody, because if I don't... yeah, it's not gonna be something that I'm gonna be forced to do.
So let's, let's kind of dig in on,
Scott: Sorry.
John: Go ahead, Scott.
Scott: It's a really important concept there, and I completely agree, because one of the things AI can do, and this isn't AI-specific, this is technology in general, is it can reduce friction, and we just inherently reduce friction in our lives. But friction is really important.
It leads to things, it builds strength, right? So, you know, again, it's not AI-specific, because we now have this with... you can order DoorDash and have the food come to your door, and you don't have to go out, or, you know, I don't have to walk to the store. You have to make yourself go outside and go for a walk or a run.
And I do think that is a very real concern that we have. And again, the only reason AI is implicated in it is because of the scale of what it can do. But friction and effort is what leads to learning, building, all that stuff. And so there is a path here where, you know, we end up in WALL-E, right?
Like, that's not... well, not impossible.
John: So something comes to mind here, and unfortunately it's a stereotype: that developers don't really like to talk to people and don't really wanna interact with others, and they like to be in their dark rooms, like, coding, coding whatever they're coding, right?
Is there a risk there? Is there a risk that people that kind of gravitate to that isolationist way of being, that they're gonna say, oh, well, now I don't need anybody to help me debug. I don't need anybody to go back and forth on a code review with me.
I don't need anybody to debate code structure and things like that. I can just ask the AI, and the AI will debate it with me, and then, you know, now I can just be my own person doing my own thing. Like, is that necessarily a bad thing? Well, I mean, it feels kind of like a bad thing to me. That's why I'm asking.
Scott: Well, and that's where, sorry, I get to put my psychology background to work here. That's the reason why, like, it may not be that somebody's in a dark room, quiet or not participating, because that's what they want, though that may very well be the case. But if we go back to the Lego example earlier, right? You're an expert in Legos; I presume you kind of know the pieces you're asking this question about. But there's a whole world of people that wanna get involved in the community, may have questions, and it can be scary, right?
Especially if you do have an engineering mindset and you overthink things, you critique things. So to be able to ask something first in a safe way, get an answer, and say, hey, I wanna build upon this conversation, like, this piece I have is actually rare, right? Or nobody can answer this question that I have on it.
So it is a good place. So there is a way that people who want to participate in more social interactions, but have a difficult time with it, can use this to increase that kind of collaboration.
Nic: I think one of the concerns, though, is that most humans inherently, you know, psychologically take the easy way out, right?
Most people, they'll be like, oh, what's this piece? Oh, there's this app that I can use. I take a picture, I have the answer. Great, I'm done, right? They don't use that as a stepping stone to increase the collaboration. And I've seen that with this particular forum, right? I check it once a week, once every couple weeks, to see what people are talking about.
It has been dead for months, because I think people are just using that app, 'cause it's easier. They're not using the app and being, hey, look at this cool piece I found, how can I use it? They're taking a picture of it, finding what it is, going, oh, it's worth 50 cents, oh, great, and moving on, and not bringing it to the discussion.
So I think that's one of the insidious pieces of AI, you know, whether it's useful or not: it prevents that back and forth. Like Stack Overflow. Stack Overflow is dead. There were cultural problems with Stack Overflow long before AI came on the scene.
But Stack Overflow was still, if you could, or would, fight the bureaucracy and the culture and all that stuff, one of the places where you could get deeply esoteric and understood answers to difficult technical questions. It was one of the only places you truly could do that. It's dead. And it's because you can get those answers from AI now. AI got those answers from Stack Overflow.
AI is never going to get another answer from Stack Overflow, because it's dead. So any new things that come out, you can't further expand on.
Catherine: What are you saying? Stack Overflow is dead?
Nic: It is dead. It's dead. If you look at the statistics on it, yeah.
Scott: Yeah. But take this a step...
Catherine: Sorry, sorry.
Scott: Take this a step back, though. It's not like Stack Overflow or this Lego forum was the be-all, end-all of human interaction. What did they replace? They probably replaced going to some local group in person, or going down to a shop, or asking your friend, right?
So it may be that those are dying because they were a little niche of this overall interaction that people could have. And, you know, it depends where you're located and what you have available to you. But, like, the forum and AI can scratch the same itch, but that's not the same as going into a store or a local group or meeting with people in person to talk through these things.
Catherine: But Stack Overflow replacing those other things, right, that was just people replacing people. It's people behind a screen, but there are still people; the people commenting, the discussion, it's still happening with people. So when we have sort of AI replacing that, that's the different thing, and that's sort of the heart of the issue.
And I think the Lego forum was a great example. I'm glad you brought it up, especially when we talk about a new person coming on. It ties in quite nicely with my trying to look at this from the perspective of a junior engineer. Because if I think about my role as more senior, the stuff I do, I am doing a lot of collaborative things. I'm doing the things, Scott, that you were maybe pointing to when you said there are things that AI cannot do. A lot of my work is judgment calls. And Nic, I imagine yours is very much the same, and Scott, yours too. It's deciding who I need in a meeting, who I should take a meeting with.
It's talking to which architect about which particular thing we need to do. It's making complex decisions about the path forward of what's going to be implemented. And no, I don't think that AI is particularly great at that. LLMs, I'm saying LLMs right now, and I hope we're all on the same page that we're talking about LLMs. I don't think it's great at that. And so for me personally, and I think for everybody on this call, it's less immediately shocking to us. I think the person who gets hit with the Lego forum example and the Stack Overflow example is the junior engineer. It's the person who is not in those sorts of meetings, who's not making strategic decisions. It's just your junior engineer, fresh out of university, who's been working on something for six months or a year, doing all that stuff that we did. Because the friction that AI removes from me is the boilerplating, the syntax recall, the test scaffolding. That's what it removes. That's the stuff junior engineers do, and that's the vehicle through which junior engineers bonded with and collaborated with senior engineers and learned to be senior engineers.
Nic: Yeah.
John: I have a question about the Stack Overflow thing, and we don't need to dwell on it all that much, but I'm just curious: did AI really kill Stack Overflow? Well, hold on, let me finish the thought before you jump down my throat.
Nope, we're done. Yes, it did. AI's the culprit here.
Scott: Yes. Go ahead.
John: Did AI kill Stack Overflow, or was it our culture of open issue queues and the fact that you can find answers to questions in those issue queues on GitLab and GitHub, and moving conversations to Slack, and building other communities? drupal.org is a great example. To me it feels like, yes, AI probably was one of the factors, but as a development community we moved from "hey, let's use Stack Overflow" to "hey, let's use our open issue queue, let's talk about this in Slack in real time." Right?
Nic: So I don't wanna derail the show too much with this, but I'll link a blog post, which I'm sure a couple of us have read, that talks about it.
It's one of those things where, like I said, Stack Overflow itself was in decline starting around 2020-ish. It had a big bump during COVID that didn't last too long, but it was slowly declining. Most of that's attributed to the culture. It was very difficult to get a question answered on Stack Overflow. Most of the time you'd open a question and it was closed as a duplicate in seconds, even if it wasn't, because of the culture. But if you look at the graph of the number of questions asked, as soon as ChatGPT launched, the drop is precipitous. Within a year it went below the number of questions it had in its first month.
It was definitively dying before AI. It was killed by AI.
Scott: Yeah. But if you look at their data, it stopped growing in 2014, and that's right when Slack hit the market. So I think these walled-garden, real-time communication communities fundamentally changed a lot of that.
So it was already on the way, but yes.
John: I think, yeah, our culture of instant gratification was like, oh, Slack will get it to me right away. I don't have to wait for somebody to read it and reply.
Scott: There was also a human element to that, and I think that's what comes out of this. A lot of times you ask in Slack because you know those people, right?
You have some sort of connection to them. Whereas with Stack Overflow, I just wanted my answer. I didn't care.
Nic: Well, I think also, if we're looking at the number of questions asked, it was steady in 2014. We hit saturation. It wasn't declining in 2014.
John: Okay, I don't think we need to overanalyze Stack Overflow.
Nic: We can move on.
John: I'm gonna let Scott make that nice segue into my next topic here. So I'm somebody who loves to collaborate with others, and it feels to me like there's a lot of AI intervention that's maybe curtailing that collaboration, or maybe not curtailing it, but going to change the way that teams collaborate and the culture around collaboration.
I'm wondering how you guys see collaboration on development teams changing, especially when it could be optional. If I'm using an AI bot and my AI does everything that collaborating with a teammate would do, I don't need to collaborate with that teammate.
How is collaboration gonna change on teams, and do you think organizations are going to have to implement forced collaboration rules, like, hey, you need to have a person review this code, or talk about this thing with you, before you ship it? Catherine, I'll let you go first.
Catherine: Yeah. Oh God, this is a hard one, right? Because this is about organizational psychology, this is about team leadership.
My honest answer is I have no idea. What I know, or what I think I know, or what I feel, since we're going to talk about feelings, is that there could be this shift from the feeling of "we built this," we built this as a team, to something more like "I shipped this." And if we start seeing movement towards "I shipped this" from individual engineers on a team, as opposed to "we built this" as a team...
We built this together, we struggled together, we had the friction together, we banged our heads against the deadline together, we complained about the product manager and the product owner together. If it goes from "we built this" to "I shipped this," then I don't know what collaboration looks like, and I don't know how organizations can enforce collaboration. I really don't know.
John: I do wanna say that I think forced collaboration rules are a bad idea. But also, to your point, depending on the size of the project, I don't think you would ever get to a point of "I shipped this," because your part is part of the whole, which is the end product. Maybe on smaller projects or solo projects that might be the case. But I think that's an interesting point. Scott, I'm gonna go to you next on the collaboration front, 'cause I feel like you've probably got some thoughts.
Scott: Yeah, it's interesting. I do think it's something we wanna focus on, and I wanna go back to what Catherine said earlier about junior developers. Early in this call, Catherine, we were talking about Field Widget Actions, and you said, just because we can add more buttons doesn't mean we should. Right?
Current AI is never gonna tell you that. It's never gonna say we shouldn't do that. Now, I assume you said that because on your path to where you are, you implemented some buttons, you built some things, and then you were like, this is a mess; we did a lot of work, but it really didn't make sense.
So there is a risk here if people aren't learning those types of things. But at the same time, I think we're gonna start doing bigger things and more ambitious things, and the collaboration and the human element become really important. It's not about spending 80% of our time heads down, writing code, debugging, doing QA in isolation.
It's discussing what we should build in the first place. What problems are we trying to solve? Who are we trying to solve them for? We already see this with AI. A lot of where I see agencies fitting in the future, and product development and all this stuff, is helping a business owner or stakeholder who has some fundamental need, and having a discussion about what we're trying to solve and how we're trying to get there. Then the actual building of it might get a little bit easier, which is great, because software takes a long time.
It's expensive, there are a lot of problems, and that part didn't necessarily add value to anything we're doing; it's solving the problems that we had. You see this already. It was interesting: OpenAI, on their open-source Codex CLI, released a statement in their Git repo that they do not take unsolicited code anymore.
If you want to contribute, what's important is helping define requirements, well-thought-through bug reports, strategic conversations, because writing the code is now relatively simple if your requirements, your goals, your verification, and the reason you're there are well documented. So there's still a future of collaboration there for the team.
John: Interesting.
So Nic, talking about collaboration, I also wanna talk a little bit about culture, because in the pre-show we were talking about this feeling of AI inevitability. I'm wondering how you think it's gonna shape culture on teams. If AI is coming and there's nothing we can do about it, what does that look like?
Nic: Yeah, so it's a phrase that I think I've heard Catherine say on the show before, and I've heard many, many people say: the train's left the station, we have to figure out how to cope with it, good, bad, or indifferent. And there's just something in the back of my mind that's hard to articulate. I read an article a while back that I'll try to find for the show notes. AI is here to stay, that's true. There's no world where we wake up tomorrow and the technology is gone. The technology's here. In fact, LLMs have been around for longer than the two or three years since ChatGPT came out.
I will say, though, that I don't think it's inevitable that AI sticks around as pervasively or as long-term as it does right now, even if we just look at the economics of it. And the cracks are already starting to show.
John: Oh, hold on a second. Can you clarify that for me? You don't think AI is going to stick around? Like, you're not looking at AI like the internet here, something that's going to keep going forward forever?
Nic: Yeah, so like I said, LLMs, AI, some sort of company, some sort of service will be here. But I don't think it's going to be as pervasive, or at least I don't think it's inevitable that it's gonna be this pervasive.
Right now it is everywhere. It's in every single thing. People are changing their legacy products to say AI, even if they're not integrating AI, because AI has to be in everything. But I don't think that piece is fully settled. We talked a little bit about Claude Code last week or the week before.
Currently you can pay 200 bucks a month for an account there, and if you look at heavy users, higher than median users, they're spending more than $200 in electricity just on prompts, never mind the amount of energy it takes for inference and training.
And yes, new chips will bring that cost down, and maybe new techniques will make it cheaper to build the training dataset and that kind of thing, but all signs are that's not happening right now. Or look at the amount of market capital that Microsoft lost: $500 billion, what, two weeks ago?
You look at OpenAI: they wrote contracts to purchase $1.4 trillion of data center capacity by 2030, and this week or last week they brought those numbers down to $600 billion, less than 50% of the original commitment. You look at Nvidia: Nvidia agreed to invest a hundred billion dollars in OpenAI,
and three weeks ago they changed that to, oh no, no, that's the upper limit; right now we're starting with 10 billion, maybe. So there are signs that a lot of the investment and excitement that was going into AI has started to cool off. If you look at consumer usage, yeah, there are some people who are very heavy users,
some people who found it great in their workflow, but in general, consumers aren't adopting AI at the rate people thought. Even commercially: look at Copilot. Now, Copilot might be a bit of an edge case, it's not the best tool out there, but Microsoft has kind of a built-in customer base for something like that, and Copilot has like a three to five percent adoption rate.
So I'm not saying that AI is gonna go away and we're never gonna hear from it again. I'm not saying that AI doesn't have uses. LLMs have massive uses: they're good at transcription, good at searching, good at indexing. There are things they're good at. But I don't think it's necessarily settled that this industry-wide, worldwide change is inevitable. We don't know that these problems will be solved. We don't know that the energy crisis will be solved. If Claude Code becomes a thousand dollars a month, how many people are actually gonna use it? Yes, there are some people for whom that will still be worth it, but will there be enough for it to be a viable business?
John: I mean, I gotta feel like if the business value outweighs the cost, if development hours go down and we pay a thousand dollars per user per month, whatever it is, then somebody's gonna do that math and go: yeah, give this company a thousand dollars, and we're gonna make three or four thousand back.
Right.
Scott: Yeah, but it brings up a huge risk too, right? I have the fancy expensive accounts right now, and if it went to a thousand, I'd probably begrudgingly still pay it. But there's a potential inequality gap coming with who can afford these tools.
Absolutely. It's gonna hit the juniors. The fact that I wouldn't want to spend $2,000 a month on something like this, but I could make the case and show the value there, builds a huge gap for anybody who can't do that.
John: Scott, can I clarify something really quick? You can choose not to answer this if you'd prefer, but I'm assuming that your employer is paying for most of your AI accounts.
Scott: No, not the ones I use for personal art and stuff like that.
John: Mm-hmm, right. That's why I said most.
Nic: Oh, well, that's a question too, Scott. Would you pay that for your personal use cases? Because music generation, video generation, all that stuff is orders of magnitude more expensive than code generation. Would you pay a thousand dollars a month for that, the way you use it? Would you pay $2,000 a month?
Scott: Yes, it would depend. Especially if I was looking for employment or trying to compete in the marketplace, a hundred percent. There very likely is a future where you can't even compete at contributing to an open source project, things like that, without these tools. It gives you a step up.
John: Yeah.
Nic: Well, that's the other point. Sorry, John. That's the other place where I think the cracks are showing. I mean, yes, GitHub was built as a business, and it was built to host private code, but one of the leading forces of adoption for GitHub is the fact that you could have pull requests. That is the quintessential feature of GitHub.
And what has AI done with it? GitHub released a feature two weeks ago where you can shut off PRs. It's mind-boggling that a feature request is "I wanna be able to shut off PRs in general because of AI." So I don't know that using AI to contribute to open source, on its face, is...
I am struggling to find the words.
Scott: Yeah, I see what you're saying, but why not? What value did pull requests provide us? Yes, they solved a problem we had with multiple people working on code, but it's not like we woke up one day and thought, I need pull requests in my life,
that's what will make me happy. That was just a stepping stone for collaboration.
John: It was validation, right? Like, hey, junior developer, put your code into this project, and a senior developer is gonna review it to make sure you didn't do something dumb. That's essentially it, in my opinion.
Or the maintainer, or whoever. There was some sort of oversight, right?
Scott: It doesn't get rid of the oversight, but to me it's the same as putting gas in my car. I need it for my car to work, but I don't really care, and I'd prefer not to have to. If you could move the function of the pull request, the collaboration and the human review, to the discussion ahead of time... We've all been through this:
somebody develops, develops, develops, they file a pull request, and then the conversation is, you didn't even solve the right problem. This code looks great, it just doesn't actually solve what we set out to solve.
Catherine: No, I agree. Scott, in our previous question you said something that really rang true for me,
which is this idea that a lot of what LLMs can do well is the grunt work, the coding work, the stuff that has value, but had value because there was no other way to do it. And now, if there's a robot that can do it for us, and we're focusing more on the collaborative pieces, the discussions, the refinement meetings, creating tickets correctly, writing your user stories correctly...
If we're focusing on that... actually, oh my God, this is like frigging waterfall planning, isn't it? It's like the perfect idealized waterfall planning, where we completely get all of our scope correct, and we get everything defined correctly, and we get exactly what we want defined correctly, before anybody writes any code at all.
Except we replace "before anybody writes any code" with "before we pass it to the LLM." And if we're in a world where the humans are doing the front work and the LLM is doing the back work, the coding work, after we've done all of this, then do we necessarily need pull requests? Did pull requests solve something? And again, my response is, good lord, yes, I need pull requests, because I don't wanna just dump stuff into my code base.
But this is what I mean: the possibilities of what can happen now are so extremely different from the world that we as engineers within the tech community have created that it's very difficult to wrap your head around what we do. What do we do in four years, in five years?
What does our job look like? How are we working with other people?
Nic: Okay, I need to pivot for a second, 'cause we have a psychologist and a philosopher on the show. I need to know: what is it that makes people so willing to spend five hours writing up requirements for an LLM, when two years ago you couldn't get them to spend five minutes writing those same requirements for their engineers?
Why? I don't understand. Well, I guess I do.
Catherine: Is it because it's a new toy, Scott? I mean, that's a psychology question, right? And you're onto something, because I will say that the user stories I see coming out of the product owners I work with, who are now working with LLMs, are orders of magnitude better than any user story I had seen previously.
I'm just now realizing they might be using an LLM to create them.
Scott: Yeah.
Catherine: But yeah.
Scott: I think that's exactly it. The reason we were unwilling to do it is not that we didn't see value in it. It's that it was hard. You had to actually sit down and type it out.
But now you can get on a call with people, have a conversation, take the transcript, ask an LLM to put it into a user story and acceptance criteria, then spend your personal effort reviewing it, critiquing it, and saying, hey, that doesn't actually solve the problem we have; revise this, revise this, revise this.
You know, in the past, even if we wrote a story, you might get to the end of it and be like, oh, the architecture here is wrong, but I'm not gonna rewrite this whole thing, because I'd just spin out.
John: It also, to me, feels like the input format. I'll give you an example, and Nic's gonna cringe at this, I feel, but whatever, more of it's probably coming.
Last week's show notes for the show. I'm so behind the eight ball, I'm so late to the game, but I was like, oh, you know what, I'm gonna talk to ChatGPT. And Maple, I think, was my GPT's name. And I was like, all right, I'm gonna have a conversation.
We're gonna make the show notes, basically just come up with the questions for the show. So I was like, hey Maple, I need 10 questions for a Drupal podcast; we're talking about marketing automation with Mautic, blah, blah, blah. Come up with 10 questions. And it was like, okay, here are 10 questions.
And I'm like, okay, now we need to ask a question about this, and reword question number two to focus on this. And by the end of it, I had 10 questions where I was like, all right, these are pretty good questions. That form factor, to me, feels a lot easier for people than having to write or type a user story. If somebody could go to ChatGPT, or even go to Jira and use Jira AI...
Scott: mm-hmm.
John: To write their user story. I'd much rather talk to an AI or whatever and say, hey Jira, I need a user story for a web form. It needs to have these six fields, they should be required, this, this, this. Put it into a user story.
Catherine: But then we get back to the point of the show, which is collaboration. If they're doing all of that in Jira... In the past, what that might've looked like is, you said, oh, I need to come up with these 10 questions, I don't know exactly what I wanna ask, let me ping some other people and see what they think about it.
Our product owner here, our product manager, putting it in Jira: in the past they would've maybe gone to their UX person, they would've maybe gone to an architect, they would've maybe talked to one of the senior devs on the team, they would've gone to stakeholders to write that user story. And now they don't. So do we still have this collaboration?
John: No, I don't think they're not going to stakeholders, because after I get it in there, my first step is gonna be: tech lead, does this look right? Client, does this look right? Okay, it does? Great. My point is essentially that reading and writing are hard, and if you can just talk to a thing, and it will do the writing for you and then read it back to you, it makes everything a lot easier.
Nic: I just wanna chime in that that's one of the really big takeaways that I realized.
Like I said, I've been using Claude Code on a particular project for a week, and one of the things I realized even after the first day is that one of its strengths is identifying things. I asked it to review a particular service and identify possible performance improvements,
and the things it identified, it ranked them, you know, critical and so on, and those were generally correct. A couple of them were things that, if we had looked at the code in the last five years (it's a legacy thing), we probably would've realized, oh hey, yeah, we should do this differently.
But what I noticed is that these tools are less engineered to be correct or right than they are engineered to seem easy, to seem low-effort. Here's the way I told this story when I was talking to somebody about my experience:
I was like, yeah, it took me 15 minutes. I pointed it at the service, it identified these things, and I decided to fix all of them. I did them one by one, did some code review, modified a couple of things it did, committed them one by one, and it was quick. Took 15 minutes. Whereas, you know, the file is probably too big,
800 or 900 lines, it could be broken up a little bit. And then I sat back and went to write up my report, because I'm trying to be pretty methodical about how I'm using this, and realized that it didn't take me 15 minutes. It took me an hour and a half. And I was like, how? I've only been doing this for 10, 15 minutes.
How is that possible? Now, it did identify the issues faster, and it fixed them much the same way I would have, and it saved me maybe seconds of typing, because some of them were just about adding something to a particular cache that could be expired. That's five lines of code.
But architecturally, it did this thing where... and this is one of those things that juniors will learn to do as "the way." I have so many points I wanna make right now, and I'm struggling to make them in a way that's easy to parse, but I promise this rant is going somewhere.
How do juniors learn if you're just taking the output that's coming? How are you gonna collaborate?
Scott: Mm-hmm.
Nic: I was listening to a linguistics podcast last night, and they said something interesting to me.
John: It was a linguistics podcast?
Nic: Yes. So, "analyse": you spell it one way in America and one way in Britain, right? That's not true. In Britain, the Oxford dictionary says you can spell it both ways, with an S or with a Z, and they recommend the Z. But when Microsoft was writing their spell check, they couldn't deal with the ambiguity,
and since they wanted consistency per document, they decided that British English would spell it with an S. And now people my age, who grew up with spell check, think that in Britain you just use an S, not a Z, because that's how it is. And that's not true. The tool defined the pattern. And LLMs, for better or for worse, are starting to do the same thing.
So, for example, I noticed that in this file it added a static variable cache in four of the nine things it identified. Static variable caches are sometimes an acceptable solution; I would say very, very rarely are they an acceptable solution, because you end up with things you can't readily replicate or test, because it's a static variable.
And the fact that in one file it analyzed, it came to that solution four times, is problematic. But that will become the way people solve these things by default, because the tool is recommending it.
Catherine: Yeah. So what is the collaboration bit here? Let's pretend you were a junior, and you used Claude to do this.
Well, I guess the collaboration right now is hopefully coming in the code review process, because even if you use it to write that code, you put it up as a pull request, you have a senior dev look at it, and the senior dev comes back to you and says, hey, it used a static cache here,
this is what the AI did, and let's talk about why it might do this and why it shouldn't. So I guess in that sense there's collaboration. But I wonder, going back to talking about pull requests and whether we need them: what does that look like in three or four years?
Does that happen?
Nic: But how do you internalize...
John: Why don't we let Scott get in there? 'Cause he's been trying to.
Nic: I was gonna ask this question. I was gonna ask this question, Scott.
Scott: So ask the question.
Nic: It goes back to something you said earlier: friction is what breeds the learning, right? So if a junior solves something using an LLM, and I review it and go back to them and explain, this is why we don't do it this way, this is how you should do it, their workflow is probably just gonna be to go back to the LLM and find a way to prompt around it. They're not internalizing it. It's like reading a language book and saying, I'm learning Italian, when it's not sinking in.
John: I think you're assuming they're not internalizing it.
Scott: Noize something else. Right. So I mean the, we can tie this back to dribble and, and, and here's a way of how I work with these tools and how you get around these gaps and how it can help both with collaboration and yourself. So I noticed using tools like cloud code, you give the task and it will solve it.
And a lot of times it's writing custom patches, it's doing all this stuff that we said you shouldn't do. Right? And we know this 'cause we've been through this for years. So what I instructed, and this is important, you have to instruct the tools to do this. I said if you run into a problem or a bug. First want you to look at the Drupal issue queue, right?
And I want you to find if anybody else has reported this, if they have, I want you to test what they've said and then report back to me if this is valid. If not, I want you to prepare a new issue that I can file, right? And explain to me what you did. So at the end, now, once it solves it, some of the times it will have solved it and has teed up for me the ability to mark a issue as tested or to provide some feedback, or it will tee up an issue for me at that point.
That's where, again, I can bring in the human value to say, okay, here's what it's recommending, but I can see where it's recommending this only in this narrow use case, and there's a bigger implication. Or I read through it, I post to the issue queue, and somebody else comes in with a different perspective on it, potentially driven by their agent, and we have that conversation.
So it really is, like you mentioned before, about that. They will just solve this in a very easy way, right? The agents will. But you have to ask it and say, there's a bigger purpose here, a bigger task here than just solving this instance on my local environment. Like, what is the bigger problem set?
And where it really comes down to is that agents are always gonna outperform us at things that are verifiable for them, right? Like, they're gonna work all night and they're gonna be able to verify it. But the things that are truly verifiable and have a real concrete answer are very finite in the world, right?
There's a lot more ambiguity and questions there. So it tees off to you those parts where the ambiguity and the human parts, again, where humans are really good, are important. And that will lead to that collaboration and conversation, and we spend our time doing that instead. I mean, honestly, half my career was probably just setting up local dev environments, and I don't wanna spend my time doing that.
I wanna spend my time solving problems.
John: So we got like 13 minutes left in the show here, and I wanna get to these last two questions 'cause I think they're super interesting. So I'm gonna ask you guys to limit your answers to like two or three minutes. Nick, I'm looking at you. And we're gonna jump into these.
So we now live in this AI world. We've talked ad nauseam about it over the last, you know, hour plus, right? I'm wondering what your opinions are: are we losing something meaningful when we're working with AI, or are we just romanticizing the struggle of doing things the hard way?
Like, are we being the old get-off-my-lawn guy where it's like, no, I'm gonna code this thing myself, and I'm gonna do it because I'm gonna do it? As opposed to just being like, hey, AI, build me this thing, I'll go.
Nic: I, I have a strong, I have strong thoughts on this. Bob,
John: remember you got three minutes.
So, Catherine, I'm gonna let you, I'm gonna let you go first. What do, what do you think?
Catherine: I think that we might actually be on the cusp of losing cognitive endurance. And I say actually because we've all heard this a million times before, right? The internet is making people unable to think well and deeply for long periods of time. TikTok is making people lose the ability to think well and deeply for long periods of time.
And there has been some research, I don't know, Scott, you probably have more insight into this than I do, that suggests that there are some cognitive consequences to some of these technologies. I think that in our specific scenario, right, and in particular in the case where I'm thinking of training up a junior developer, I think that we could be losing something real.
Which is cognitive endurance, right? It's training the mind to work through coding problems. Their context window...
Nic: Their
Catherine: context window is getting, getting smaller. Is getting smaller. I do also think that we're romanticizing some of it. I do think there is some get off my lawn. I, I do enjoy the fact that, that my main workload, my, my, what I'm paid to do does not allow AI yet.
And, and so I, I like the fact that I can go back to that and, and can bang my head against the wall. I like the fact that I still have to open up my snippet library, right? Like there'll be a day when I don't have to open up my snippet library and that'll be sad. I like my library. You say,
John: Did you put the snippet in AI? Put the snippet in here.
Right?
Catherine: Like I, I've spent a lot of energy building up that snippet library and now it's worth nothing. Right? I've got a decade or more in there and it's, so, I, so yeah, I think there's some get off my lawn. I think that cognitive endurance. Is possibly an issue that we could be losing. Yes. I think we could be losing some real things.
I think we are romanticizing some of the struggle.
John: Scott, what about you?
Scott: Yeah, I mean, I would agree that the cognitive endurance is a risk, but it really depends on how you look at and use AI. Like, yes, theoretically we may get to a point where it is just inherently better than us and outperforms us, and we just ask it like it's some sort of oracle.
I don't see that coming anytime soon. But within that, it comes down to how do you look at it? Like, do you ask a question to AI and just take that as the truth? Which I see some people doing. We've all been, you know, in a conversation where somebody has just pasted in, here's what ChatGPT said. It's like, I don't care.
I could have asked it myself. But if you're using it to, like, debate, disagree, go back and forth, research, take different perspectives, you are still using those tools, right? So I don't think that's gonna go away. And I also think that humans will inherently do this. Like, if we think about playing board games and doing puzzles and crossword puzzles, we like that type of activity. We're going to naturally do that.
Now it is easy to get pulled into TikTok and Facebook and all that, and that's just a general technology thing, right? And you can say the same thing about cars with athletic endurance. Like you have to structure your life in a way that it's gonna become easy to do these things that are important to you.
Like, go camping, go outside, do that stuff. Like, it's important. You can very easily just sit inside and not have that stuff. You can just not think and let AI pick the food that gets delivered to you. That is a possibility. So it does take some effort. And I worry about that question in regards to, like, my kids. What are we training them?
You know, my wife always makes fun of me that I can't get anywhere without Google Maps. Like, I've lost my sense of direction. I never had a good sense of direction. But what do we lose? Like, what skills don't you learn or need 'cause you don't have that, and then you miss out on something fun and interesting?
John: Nick, I'll I'll give you, I'll give you the last word on this one.
Nic: Yeah. So I agree with some of the things I'm hearing, I don't agree with everything, but I think the fundamental thing is that the brain tries to conserve energy, and so you only learn things that you use, right? And you only actually learn the things when you spend the time going through the hard part of it.
So I'll give two examples. And I think in some cases it's fine, and in some cases it's not, right. So your example right now, Google Maps versus being able to get across town: like, yes, that is true. I used to be able to, if I was somewhere new in town and had to get somewhere else in town, be like, oh, I gotta go here, here, here, and here.
And now sometimes if I try to do that, it takes me five minutes where it used to take me five seconds. But we have phones, we have Google Maps. It's not a fundamental problem, I think, to lose a skill like that. And it's also something where I, personally, grew up not having Google Maps.
And so that's a skill that I could build back up. When it comes to the fundamentals of something, though, you have to go through that work, like speaking a language. People sometimes confuse understanding with learning, right? Take where my wife and I are learning Italian right now: you can open a book that says, you know, the word for cheese is formaggio, right?
And I understand. And you think, okay, great, and maybe you remember that. But if you're not sitting there doing exercises or trying to make sentences in the new language, your brain will not internalize it, because it takes a lot of energy to maintain a language. And it's the same thing.
Like, if you're born speaking Spanish and you don't speak Spanish for 15 years at all, you'll still remember some concepts, but will you still be fluent? Probably not. And programming is language. Programming is language. And if you don't build up those fundamentals, yes, you can learn tools around prompting, but whether self-taught or not,
you can't learn the structure of a language. You might be able to look at something, but you can't internalize it. You can't teach that concept to somebody else, because your brain hasn't put in the energy to go, okay, I've looked at this pattern four times now, it's obviously important, I need to remember it.
And I think AI is something that's good for those gaps, right? Like, if there's something that I wanna do that I'm not an expert in. Let's say, for example, I wanted to build a machine learning tool to play Super Mario Brothers, right? I just wanna get to the point where I have something running against my game that can beat it.
Like, I don't care about the internals, I don't care about the maintainability, I don't care about the principles. I really just wanna see a computer beat that game. AI is great for that kind of thing. But if I have something I have to put my name to, that I'm releasing, that I'm liable for... I mean, if Claude comes in and says, hey, we'll pay for your liability insurance and cover that and take responsibility for the code that our bot is writing, maybe that's something we could do.
But if I'm responsible for it, I have to understand it. If I need to understand it, I need to remember how to code.
John: All right, so with our last couple minutes here, we're gonna go lightning round. Keep your answers short, maybe like 30 seconds to a minute if possible. And this one could just be one word.
What do you think is more dangerous right now? AI or social media? Scott, I'll let you go first.
Scott: I don't think they're different. I think they both just reflect human nature to some extent, and they can amplify things and that can be a problem or that could be hyper beneficial.
John: All right. Good answer.
Catherine, what about you?
Catherine: If you're gonna make me choose one to lose, I would lose social media.
John: Yeah, I could agree with that. Nick, what about you?
Nic: I mean, they both have significant benefits and significant problems. I can't pick one. I mean, we need both to be fixed.
John: Fair. Fair. Okay. Let's talk about human skills really quick here. And, you know, AI's taken a lot of the heavy lifting off of our plates. What do you think the most important uniquely human skill becomes for an individual? Like, what's the most valuable skill somebody could have?
Say you are hiring, hiring a developer right now. Tell me Nick. Catherine, go ahead.
Catherine: I'm sorry. For me, I mean, I think communication, right? That's what I'm looking for, really, in teams now: the ability to communicate well with the various stakeholders. I think we are very quickly losing the days where the engineer who sits in the box and codes and is a little bit grumpy is a viable team member.
John: Interesting. Scott, what about you?
Scott: I'd say the team perspective of different types of intelligences, and I would include AI in that with humans. So if you think about the spectrum of human cognition and how people can think differently and have different perspectives, different wants and needs, I think that together is the important friction point.
Right. That leads to something good.
John: Nick, any final thoughts?
Nic: I think, when it comes to coding, the most important human aspect is liability.
John: Liability.
Nic: Who's liable? Who's answerable? Who's responsible for this thing that's being put out into the world? Who's putting their name to it? I think that becomes the most important thing.
Like, you can't detect if somebody is using AI, so you have to trust their reputation more than ever.
John: Stay...
Scott: That's the same in legal, healthcare, engineering. Same. That will be one of the saving graces for humans being involved in the loop, is who you can... Yeah.
John: Stay tuned next week, folks, when we talk about AI and the law.
No, I'm kidding. This conversation has been great. I wanna thank all of you for joining us. Scott, thank you specifically for, for joining us to, to to talk today and hopefully we'll see you again soon.
Nic: And thank you for joining us for the last four weeks, Catherine. It's been a pleasure.
Catherine: Thank you everybody.
It's been great. A good conversation.
Nic: Do you have questions or feedback? You can reach out to Talking Drupal on socials with the handle talkingdrupal, or by email at [email protected]. You can connect with our hosts and other listeners on the Drupal Slack in the Talking Drupal channel.
John: Do you wanna be a guest on talking Drupal or our new show TD Cafe?
Click the guest request button at talkingdrupal.com.
Nic: And you can promote your Drupal community event on Talking Drupal. Learn more at talkingdrupal.com/tdpromo.
John: Get the Talking Drupal newsletter to learn more about our guest hosts, show news, upcoming shows, and much more. Sign up for the newsletter at talkingdrupal.com/newsletter.
Nic: And thank you, patrons, for supporting Talking Drupal. Your support is greatly appreciated. You can learn more about becoming a patron at talkingdrupal.com by choosing Become a Patron.
John: Alright, Scott, if folks wanted to get ahold of you, talk about AI, talk about all the things that you're doing, where and how could they go about doing that?
Scott: I'm in the Drupal Slack, the Drupal issue queues. As I've become more involved in the Drupal AI Initiative, I would love to hear the things people are running into. I am a firm believer that things we do to benefit AI will benefit humans in general. So even if you are very anti-AI or reluctant to use AI, still bring them there.
I'll be at DrupalCon, would love to talk to people, and then LinkedIn as well, where I post all sorts of AI-related nonsense.
John: Cool. Catherine, what about you?
Catherine: Drupal Slack, that's the easiest place.
John: Awesome. And thanks again for joining us for the last four weeks. Feel free to come back anytime.
Catherine: Thank you for having me.
John: Nick, what about you?
Nic: You can find me pretty much everywhere at nicxvan N-I-C-X-V-A-N
John: And I'm John Picozzi. You can find me on the social media and on drupal.org at johnpicozzi, and you can find out about EPAM at epam.com.
Catherine: If you've enjoyed listening, we've enjoyed talking.
John: Great. See y'all later.