August 14, 2023
- Episode 361 Credit
- Most recent changes
- How should project maintainers respond
- How should the community respond
- Who developed the policy
- Who is responsible for enforcement
- How do these policies help maintainers
- Anything missing
- Future updates
Module of the Week:
- Brief description:
- Have you ever wanted a simple way to view and store information about the overall health of your Drupal website? There’s a module for that!
- How old:
- Project originally created in Sep 2007, current project looking like it took over the namespace in June 2023
- Versions available:
- 1.10.0-alpha11 works with D8 and above
- Currently very actively maintained, last release was in the past couple of weeks
- Does not have issues enabled, project page says to open issues against ox project, which currently has no open issues
- Usage stats:
- 3 sites
- Jon Pugh, a founding member of the Aegir project, among many others
- Module features and usage
- The Site module stores information about the health of your site in a fieldable, revisionable entity
- Provides a detailed history of the state of your site, including changes to configuration with a log of who changed what, where
- Will include data on Drupal and PHP version, Git information, and more
- Health can be based on the core Status report, the Site Audit module report, or a custom SiteState plugin
- Can display an overall status indicator in the toolbar, so as a site owner or maintainer you don’t have to go to the Site Status page to see it
- That page will display more detailed information, including the “reason” for the current status, the site’s history, and more
- Integrates with the Site Manager module (also by Jon Pugh) which provides a UI for monitoring and managing a portfolio of Drupal sites
- You can try out Site and Site Manager as part of the Operations project (machine name ox) as a Lando-based local setup of four sites, of which one provides a dashboard for the other three
This is Talking Drupal, a weekly chat about web design and development from a group of people with one thing in common,
we love Drupal.
This is episode 411, D.O. Issue Etiquette.
Welcome to Talking Drupal. Today we're talking about D.O. Issue Etiquette with Tim Lehnen. Tim is the CTO of the Drupal Association and leads a team of engineering staff that are focused on empowering you out there to better be able to contribute to Drupal. Tim, welcome back to the show and thank you for joining us. Thanks for having me back. It's good to be here.
I'm John Picozzi Solutions Architect at EPAM, and today my co-hosts are joining us for the fourth and final time, Tim Plunkett, Engineering Manager at Acquia.
Tim, hopefully you've enjoyed your time with us here. I really have. It's been great. Thanks for having me. Absolutely. We'll say this again at the end of the show, but you can come back anytime you want now. You have an open invite.
Also joining us, as usual, Nic Laflin founder at Enlightened Development.
Happy to be here. Love to hear it. I mean, we teased a little bit of this stuff last week with our off-the-cuff, so looking forward to diving into a little bit more detail.
It's very rare that we're able to tease a show before we know what the show is going to be, but like magic, magic.
All right. And now to talk about our module of the week, let's turn it over to Martin Anderson-Clutz, a senior solutions engineer at Acquia and maintainer of a number, a great number, of his own Drupal modules. Martin, what do you have for us this week?
Thanks, John. Have you ever wanted a simple way to view and store information about the overall health of your Drupal website? There's a module for that. It's called the Site Module, and it was originally created in September of 2007, but it looks like the namespace was taken over in June of 2023.
It has a 1.10.0 alpha 11 version that works with Drupal 8 and above, and it's currently very actively maintained. In fact, the most current release was released in just the past couple of weeks.
It interestingly does not actually have issues enabled. The project page says to open issues against the operations or OX project, which currently has no open issues. Now, the Site module shows as being in use right now by three sites, and it is maintained by Jon Pugh, who is a very respected Drupal contributor and a founding member of the Aegir project, among many others. The Site module works by storing information about the health of your site in a fieldable, revisionable entity, and that allows it to provide a detailed history of the state of your site, including changes to configuration with a log of who changed what and where. The information will include data on the Drupal and PHP versions, Git information, and a variety of other factors, and the health can be based on the core status report, the Site Audit module report, or a custom SiteState plugin that you can sort of create for your individual use case. Now, it can display an overall status indicator in the toolbar, so as a site owner or maintainer, you don't have to go to the Site Status page to see it. That being said, that page will display more detailed information, including the reason for the current status, the site's history,
and a variety of other sort of more detailed information.
Now, it will also integrate with the Site Manager module, also by Jon Pugh, which provides a UI for monitoring and managing a portfolio of Drupal sites. You can try out Site and Site Manager as part of that operations project that we mentioned earlier as a Lando-based local setup of four different sites, of which one is sort of the dashboard for the other three.
So, let's talk about the Site Module.
Yeah, it feels wildly helpful for folks that maybe are in an agency atmosphere or run some sort of hosting setup where they need to monitor a bunch of different Drupal sites.
I think that first feature shout-out you mentioned in terms of being able to audit the history of configuration management changes in particular stood out to me as something I could see a lot of people wanting to have access to.
Yeah, no, that seems pretty neat.
And it sounds like operations is sort of the start of a suite of modules related to site management tasks in general, which is going to be cool. That's worth following along. I think you said there's only three sites using it at the moment, or at least three reporting back to drupal.org. So it's like the beginning of the bandwagon, the future of hopefully something great.
Right, yeah, particularly being this new, it's probably not surprising that that number initially is on the low side, but yeah, I would definitely expect to see that grow pretty quickly.
Yeah, one of the things that I'm really interested in, because I actually have a few clients with a number of sites that they're looking to monitor, but not all of them are Drupal. And it sounds like you can write a custom plugin to kind of ingest whatever information you want.
This might be really helpful for also just reporting statuses of other sites in the system on the dashboard side. So, I'm definitely going to be digging into that. I've been following Jon's work for a long time. You know, I think one of my first ever public talks about Drupal was related to Aegir. So, it'll be good to dive into something he's been working on again.
Yeah, I see that Jon is also one of the maintainers of the Site Audit module, and that's definitely one that I've used quite a bit in my own Drupal career.
Yeah, it's one we're even promoting as part of the suite of tools we're recommending to folks who are still needing to do their Drupal 7 migrations and looking at everything they might need to get ready, all that kind of stuff. Tim, there are no people still on Drupal 7. That's a misnomer.
Oh, boy. If only.
For those of you not aware, that was a joke.
Don't tell people that.
Cool. I look forward to hearing more about the Site module and kind of Jon's suite of tools that he's building out.
So, Martin, as always, thank you for another great module of the week. And now on to our primary topic. Tim, in Talking Drupal episode 361, we talked with Dr. Matthew Tift about the Drupal credit system.
There we talked a little bit about the gaming and misuse of the credit system in issue queues. I feel like that was about a year ago. Has it been even that long? Oh, my gosh. I know. I looked at the date, and I was like, "Oh, wow. That was almost a year ago now. Okay. Time gets weird when you podcast." Because you're always like, "Oh, that was only a couple of shows ago." Nope. Surprised.
But I was wondering if you can tell us a little bit of the backstory
that got us to the current updates from the past year, say, of what's been going on with issues in the credit system and kind of how we got to what we're talking about today. Sure. Yeah. So talking about contribution in Drupal, of course, is always one of my favorite things. It's kind of the primary goal of what I do at the Drupal Association is giving people tools for contribution.
And to go way back to the beginning, in roughly 2016, we first created the contribution credit system, which most folks out there who do any Drupal contribution are hopefully familiar with, but maybe some newer listeners or folks who have not yet been able to contribute might not be so familiar with. And the idea is it lets you attribute any of the contributions you make in the issue queue to say, "Hey, I did this as a volunteer. I did this sponsored by my employer. I did this because a client said I could contribute it back." Any of those little pieces of data. And we aggregate that data, and we use it for a number of different things, mostly for recognizing the different ways that individuals contribute or that organizations contribute to our ecosystem.
And we've also used that for managing the marketplace, the drupal.org marketplace of Drupal service providers.
And that's where we potentially get into, maybe not get into trouble, but have to be careful.
The marketplace using contribution credit information is a really deliberate choice we make. We're trying to provide incentives to say, "Hey, if you're a business that provides Drupal services, you should let your employees contribute to Drupal. You should ask your clients if you can give your work back, because that's going to build your reputation, build your marketplace position, and most importantly, make the ecosystem stronger."
But that's a financial incentive. And whenever there's a financial incentive to do something, you have the risk of encouraging folks to figure out maybe how to game that system or how to do sort of the minimum effort for the maximum output.
And so that's been something that we have managed on an ongoing basis since we first created the system in 2016.
But most recently, we've put out some updates to Drupal.org policies towards the official sort of issue etiquette guidelines
about laying out more specifically the kinds of things we can consider abuse of the credit system and potential consequences for that. -Do you feel like there's been kind of an uptick in gamification or gamification is not the right word, but in gaming the credit system or issue misconduct that led to these, or was it more just planning for the future, maybe a little bit of both?
-I would say it's a little bit of both. One of the things that I want to sort of caution or preface this conversation with is that when we talk about sort of abuse, and abuse is even in some cases too strong a word, because sometimes it's misunderstanding or lack of education, but when we talk about these sorts of things,
at least to date, we're talking about a very small fraction of the organizations or individuals involved in the community. And there's a lot of really healthy and positive contribution behavior going on in the background. However, what I have seen is an uptick in frustration by maintainers who kind of bear the brunt of having to see the incoming contributions, judge whether or not they are legitimate, well-intentioned, and actually helpful.
And so really what motivated these changes was groups of folks in the contribution recognition feedback channel in Slack, or the maintainer channel, saying, "Hey, we're seeing more of this. It's frustrating me." And we said, "Well, can we do something about that?" Because our maintainers are sort of our most precious resource, and we want them spending their time on doing great things with their modules, not on sort of trying to police their own issue queues.
-Yeah, and I think one of the things too that some people don't realize is it takes a really small minority to be very noisy. And the bigger your module is, the more likely it will be a target for that kind of thing. So I have a couple modules that I've put up that are really small and only one or two clients use them. So honestly, if somebody spammed that issue queue, I probably wouldn't notice, because nobody's using it. But if you're maintaining a large module that gets lots of legitimate issues and you kind of have to parse through the unhelpful ones, that can be difficult.
So now that we kind of understand some of the backstory, can you give us a little bit more insight into some of the recent changes? Like, what do they center around?
-Yeah, so there's a few specific repeated patterns of behaviors that we're sort of concerned with and that we're trying to resolve.
And so the policies are around that. So to step back for a second, we've made a few categories of changes. One is we changed how some of the, like, pre-filling of the credit table works. That's the table at the bottom of an issue where a maintainer confirms which users actually will get credit for that issue.
I'll talk about those changes in a second.
We've specified specific types of contributions that are either truly unhelpful or they're sort of low-effort or automated in a way that it's not worth having individual users do them and they can kind of turn into a spammy situation.
And we've instituted some consequences for the organizations that might be encouraging or requiring that from their employees.
So starting from the top of that list,
that attribution table at the bottom of issues that maintainers use to grant credit no longer pre-fills check boxes for any kind of behavior.
It used to be that if you opened a merge request or uploaded a patch or other file, for example,
it would assume, "Okay, anybody who's gone so far as to do an MR or patch is probably doing a creditable contribution." And so it would check that box and that wouldn't automatically grant that credit, but it would assume that the maintainer isn't going to want to change that so that once they save the issue, it would be granted.
We're not doing that anymore. It's now up to the maintainer to deliberately choose which users in that issue should get that credit kind of without any pre-filling at all.
And so that's going to hopefully reduce one of the major categories that we were seeing recently, which is we saw some new users to Drupal.org
with accounts a couple, three weeks old
opening an empty merge request on, you know, issues across 10 or more different projects. And obviously, an empty merge request doesn't really do anything, but what I think perhaps they had realized or someone had realized is that was pre-checking that box. And so if the maintainer wasn't paying attention and just saved, that would result in credits.
Similarly, we call out explicitly as sort of abuse of the credit system patch re-rolls that either don't change the patch at all and are just reposting the same patch, or, you know, things that don't apply, anything that can be considered a truly unnecessary patch re-roll, re-rolling against a version when it doesn't need a re-roll,
different kinds of things like that.
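As an editorial aside (not a tool mentioned in the episode): one cheap way a maintainer might triage a suspect re-roll is to byte-compare the two uploaded patch files. The file names and contents below are made up for the demo; in practice you'd download the original upload and the claimed re-roll from the issue, and the Drupal community's `interdiff` utility (from patchutils) gives a richer comparison.

```python
import filecmp
from pathlib import Path

# Stand-in patch files for this demo; in practice these would be the
# original upload and the claimed "re-roll" downloaded from the issue.
Path("fix-1.patch").write_text("diff --git a/foo.php b/foo.php\n+$fixed = TRUE;\n")
Path("fix-2-reroll.patch").write_text("diff --git a/foo.php b/foo.php\n+$fixed = TRUE;\n")

# shallow=False forces a byte-by-byte comparison rather than a stat() check.
if filecmp.cmp("fix-1.patch", "fix-2-reroll.patch", shallow=False):
    print("identical: this re-roll changes nothing")
else:
    print("patches differ; worth a closer look")
```

Here the two files are identical, so the script reports that the re-roll adds nothing, which is exactly the kind of no-effort repost the policy calls out.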
A few more here. I won't necessarily go through all of them, but some major ones: you know, claiming that you've reviewed a patch but just posting a screenshot of git apply, without any real review of, oh, I've checked that the functionality of that patch is fixed,
not responding to feedback on your code review or not responding most importantly to the maintainer saying, hey, please don't do this or please do this instead, right? If you sort of drive by, do something, the maintainer asks for changes and you're just ghosting them,
we consider that really poor behavior and poor etiquette, since, after all, contributors working hand-in-hand with maintainers is really how impactful contributions happen. So then, in addition to that, it's things like bulk posting of contributions, whether it's through automation tools or assisted by automation tools. So that might be bulk conversion of README files, or just recently the beginnings of seeing AI-generated content. And that's been not just AI-generated patches or merge requests, but even what looks like AI-generated comments.
And we'll talk about that a little bit more. And we're not currently blanket banning AI entirely, but we're saying there's a right way and a wrong way to go about using it. I want to dig in on that more. I thought that the policy as written around AI and automation, but mostly AI, was very well thought out and well written.
And so not to read it or anything, but do you want to talk more about exactly how you came to that and sort of how that might allow for innovative uses without impacting it?
Yeah. So, well, first of all, I'm glad that you think it hits the mark, because there's a little bit of nuance here. Let me step back and say a couple things about AI in general. One is that
nothing is going to stop that train from moving at this point, throughout the technical world entirely. Now, there are significant potential ethical concerns with the use of AI.
But there's no way to have a solution that's just we won't use it or won't allow it. I mean, we could try and say that, but from a practical point of view, it's going to take over the world, hopefully not in a Skynet sort of way, but there's so much momentum and so much potential behind it, for good or ill, that we have to, I think, be pretty pragmatic about our approach to it. And that's kind of what we tried to do here. So the main things that we said are: okay, whether you're using AI for code, content, helping you with your comments, or whatever, the things you have to do are disclose that you used AI to do so, and demonstrate that you reviewed it and made appropriate edits to ensure that it applies to the issue and, you know, tests successfully, things like that.
Those are really the most basic parts of that policy just because, you know, the kinds of things we were seeing were,
as some of you know, who've played with these AI tools before, they'll sort of lie to you if they don't know the answer or if the question wasn't quite right. And if you don't kind of look over the answer you get from a prompt, you just post it without thinking.
You get things like citations that don't exist. You get maybe function names that don't exist, various kinds of API calls, and things like that. So it can give you an awesome jumpstart. But, you know, if you didn't say, hey, I used AI for this, and if you didn't check that it's working, it's a lot of noise. Yeah. You know, our hope is that if we have clear disclosure of the use of AI and people committing to doing the reviews, at least we'll kind of avoid that noise. And it may also help us as we sort of evolve further our understanding of how to use AI.
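To make the "function names that don't exist" point concrete (this is an editorial illustration, not something from the episode): a reviewer could run a crude smell test for hallucinated API calls by checking whether each name called in a patch's added lines appears anywhere in the codebase. Everything below, the toy "codebase" string and the patch lines, is a made-up stand-in.

```python
import re

# Toy stand-in for a project's source; a real check would scan the repo files.
codebase = "function site_status_report() {}\nfunction site_save() {}\n"

# Added lines from a hypothetical AI-generated patch.
patch_added_lines = [
    "+  site_save($entity);",
    "+  site_magic_heal($entity);",  # hallucinated: defined nowhere
]

def suspicious_calls(added_lines, codebase):
    """Return called function names that never appear in the codebase."""
    calls = set()
    for line in added_lines:
        # Grab identifiers immediately followed by an opening parenthesis.
        calls.update(re.findall(r"\b(\w+)\s*\(", line))
    return sorted(name for name in calls if name not in codebase)

print(suspicious_calls(patch_added_lines, codebase))  # -> ['site_magic_heal']
```

It's deliberately naive (a substring match would miss dynamic calls and flag false positives across files), but it captures the review step the policy asks for: verify that the generated code refers to things that actually exist before posting it.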
You know, Tim, the other thing is there's a reasonable argument and a potential concern that AI-generated code could be considered plagiarism, or could be considered code where the copyright is potentially not held by anyone, given the way, at least in the US, some court cases have gone.
So it's not a hundred percent clear how to license AI-generated stuff. But within the context of open source, and especially Drupal code, we're sort of relying on the fact that any Drupal-related examples used in these large language models to generate Drupal code came from GPL code. They couldn't have come from anywhere else.
So hopefully that's going to be fine, but we're also keeping an eye on the legal side of the AI conversation.
And I want to point out too something about AI that's important. You know, one of the big things that at the Drupal Association, I think you're going to start dealing with is the signal to noise ratio. And you mentioned noise, but right, if it's an unhelpful thing,
AI has the potential to just drown everything else out. There's just a lot of noise there. But the answer isn't, I don't think, and I don't think it ever will be, and you kind of alluded to this, but I want to point it out explicitly, the answer isn't to just get rid of AI entirely. That ship has sailed, and the strength of that signal can really accelerate things forward. I mean, we already do have kind of pseudo-AI contributions in Drupal, right? Matt Glaman and the Drupal Rector team, with contributions for automatically, quote unquote, updating Drupal modules from eight to nine and then nine to 10. You know, I don't know if you'd say outright that they're AI; yeah, they're not AI. But if you can build an AI around improving that for the next version, or learning what kinds of things change, or reading those docs, and just make that a little bit more intelligent, that's just an evolution of it. And I can see a place where AI does start to take on some of those types of tasks, right? It's not going to get to completion, at least not yet. But those are the types of contributions that can be done in an automated way that are helpful, and AI does open some possibilities there. I just don't think anybody has quite found where that line exists yet. If you're going to use AI to help you, you know, code faster, okay, fine. But make sure you read the code that it's putting out. Make sure it makes sense. Make sure it's good code, right? Before you just go ahead and blanket merge it into a project, right?
I think that was the key thing, that the two-pronged part of the policy was not just that you have to declare it, but that you have to review it. Because, you know, we were testing using ChatGPT to summarize the long issues that had just been dragging on and were hard to follow and had changed direction multiple times. And someone used it to try to summarize one and declared that. And it was complete garbage. It described an unrelated problem and declared with great certainty that it could be solved in one way that had been tried, you know, four or five years ago and proven to not work. And, you know, it's like, yeah, that review step is the key.
And that's the thing is that, yeah, I don't care if you typed it by hand, or you automated it, or you got something to generate it for you, legality aside, as long as you review it. And so that takes me back to the whole point, which is that, you know, if it's an abuse,
the self-review is probably the missing component, you know, just in general, whether it's automated or not. So that was my question: you know, in my slice of the Drupal sphere, I've seen a lot of issues where there's a lot of automation happening that is clear and annoying. And I've seen a couple of very, you know, loftily worded comments that are clearly not written by a human, but by ChatGPT. But that's just my little slice. From your perspective, having seen a lot more of it, how much do you think this is actually happening?
So this is a good question, because it's a little bit, you know, we haven't yet investigated using some of the tools that exist for recognizing AI-generated language and using them to audit the issue queue, for example. I would say it's most visible in places like core, but even more so some of the top modules, right? Because the sort of sweet spot for contribution is often a high-usage contributed module. We weight credit based on how highly used the module is, and it's easier to get something accepted into contrib than it is into core. So I think that's where it tends to appear, and that's where I think you see it.
But judging how common it is, certainly there's been a surge in the last several weeks, I think, for heavily involved maintainers or reviewers.
Yeah, let's put it this way. Some of the folks involved in the needs review queue initiative report seeing, you know, three or four AI-generated things, or automated generated things, every morning when they wake up and double check the queue. I get reports like that from a variety of different people.
And it's like, OK, if that takes people 45 minutes every day dealing with this stuff, that's 45 minutes taken away from whatever else they're doing.
And if it keeps growing across more people or taking more time for those people, that's not a good thing for us. So, yeah, again, I think you're totally right, Tim, that the primary solution is almost people realizing kind of what the rules of engagement are for AI in general, not even specific to us, in terms of reviewing that work, testing it, works on my machine, all these kinds of things.
It's a little bit like when social media first came on the scene, where people didn't know what they could post. People started posting uncomfortable things about their bosses before realizing it could get retweeted 10 million times and suddenly you could lose your job. Right. We don't have five years of experience knowing how to deal with this and where it's really going to work out. But in the meantime, we have to make sure it doesn't go so far off the rails that it sort of poisons the well. So anyway, lots of important things that we're working on. And like I said, a legal component that we should not sweep under the rug, because as we and a lot of other folks look into this, there may come a moment where we have to address that legal standing again. And volume, I think, truly is going to be one of the biggest problems we're going to have to deal with, because already one of the biggest issues for Drupal's health long term is burnout. Right. And larger modules already get a significant number of just human-generated, I would say less than helpful issues, even though most of them are probably well intentioned. Right. They're not necessarily helpful, or they're a known issue.
And, you know, I think it's great. I think people should open an issue if they see an issue. It's better if they search the issue queue first, but especially on a really large module like Webform, you know, there's hundreds and hundreds of issues. It can be really difficult to know. Well, you know, sorry, just to interrupt, but that brings up another really good point about the way that folks contribute and the etiquette of contribution, actually, which is that, right, your first time contributing to an existing module should probably be to look at an existing issue, right? Not to open some random new thing that you're opening across 10 different modules where you're never going to engage with those maintainers. That's not really contribution behavior. That's creating more work for a maintainer without helping them out. Right. So, you know, the other part of it is contribution is not something you do in a silo. Contribution is something you do together with the maintainer, together with other contributors. And especially when I've reached out to individual folks to try and get them to change behavior, that's one of the things I've been emphasizing: yeah, it's almost not a matter of what the specific action is, it's the way that you undertake to do it. Are you engaged with the maintainer? Are you looking for feedback? Are you reviewing things before you post them, and are you seeing them through to the finish line? Or are you posting 10 of the same issue in 30 minutes, where you clearly can't be engaging? So, yeah, absolutely. And, you know, just to continue what I was saying: just the amount of issues, as a maintainer of a large project, can easily become overwhelming, dealing with just human comments. But there's still some empathy there, because you know that people are experiencing real issues and generally they're trying to be helpful. Right.
Setting aside, you know, tone, which sometimes can get out of hand, all that kind of stuff, generally you can have some empathy because they're trying to be helpful. But when those unhelpful comments start coming in at one hundred, two hundred, three hundred, a thousand a day, and it's just a bot doing it, I mean, it just drowns out everything that's helpful. And you said people are only seeing three or four a day right now.
I mean, it's only going to go up from there. Yeah. Oh, yes. The question is how big it gets, and how quickly, I mean.
Yeah. And then there's Nic. Well, why don't you share with us the rosy, bright sunshine view that you have? Well, not to make this an entirely AI-driven episode, but there actually are some rosy, sunshiny potential views. Like, I have also worked with folks like some Google Summer of Code students recently, and things like that, and some folks for whom English is not their first language, but English is the language of our issue queues, where they were able to, with the help of things like ChatGPT or other services, figure out how to express what they wanted to say. Right.
There's been a little bit of that. But, you know, the problem is, again, signal to noise, right? The positive uses can often be drowned out by, well, that's cool and all, but I was dealing with 30 of these other things, so I don't have time to think about that. Yeah, I mean, I think we can all agree that AI has some positive uses, and there can be a pleasant, useful, helpful coupling within the issue queues. We just have to find that middle ground. I want to move off of the topic because, as you said, Tim, we could literally spend all of our time talking about that, and move towards.
Project maintainers and what they should do, or how they should report or respond to, you know, an issue that is viewed as, you know, abuse or misuse of the credit system. Yeah, so that's a great question. And right now, there's a few possibilities for how you can kind of alert myself or ultimately a member of Drupal Association staff about this. But strictly speaking, the most official way to report an issue at the moment is to open an issue in the site moderator queue, which is just a project issue queue like anyone else's: site underscore moderator.
And we have a couple in there for sort of our prototypes of how we were going to interact with and enforce this stuff. And those issues, generally speaking, are aimed, they're usually titled, about an organization rather than an individual. We find it pretty unlikely that individuals would be highly motivated for this behavior unless they were, you know, intentionally or by accident being encouraged to do that by an employer or organization, because there's no ranking of individuals. You don't get anything by gaming credit, except perhaps, you know, resume building on your profile, I suppose. But anyway, the maintainers can open an issue in the site moderator queue. They can also drop into the contribution recognition feedback channel, which is in Drupal Slack.
They can also email help at drupal dot org if they're struggling with this kind of stuff, and one of us at the DA will take a look and see what's going on and what we can do there. So is that the site moderators queue, the plural, I think? Yeah.
We'll have that in the show notes.
So I just want to pivot, I assume the answer is pretty similar, but if I'm looking at an issue queue and I notice there's an issue that is suspect and I'm just a community member. I'm not really a maintainer of the module. Is it kind of the same thing? Yeah, pretty much exactly those same tools can be used at the moment.
I would say if you're a community member looking on, maybe look to see whether that maintainer has responded to any of those issues, and obviously respect what they've said. If they happen to find those contributions helpful, then maybe that one in particular is not worth a report. But otherwise, yeah, a community member is certainly welcome to make a report as well. Something that's been suggested, but we haven't implemented yet, and I'm not 100 percent sure whether or not we want to do it, is there's a sort of report as spammer option, which we've had for years, literally for just marking accounts that are just posting spam.
We're considering whether we would add another flag, a secondary flag, which is like report credit abuse or something like that.
And that would help us, and site moderators with elevated privileges, be able to see those, review their content, and decide whether or not they need to do something about it. I mean, do you all think that's maybe a simpler way to do it, or have feelings either way? The other thing we're always worried about is that people can get into sort of a vindictive mindset sometimes, when they lose sight of, hey, our main goal here is to take the real human beings who might be new to our community and give them the opportunity to do it the right way and build them up into really good contributors. Right. We don't want to be focused on the punitive actions, but we may need to, just so they don't overwhelm everything else.

Yeah, I think report as spammer is sufficient as long as you can put a reason there, or maybe it's just report user and report comments, and then you just provide some context for it. But I don't know that it's reached the threshold that's necessary yet. I think it kind of dovetails with another issue, which is even just having the ability to have those comment nudges kind of inline. The Community Working Group has some comment nudges that you can apply if there's a situation, to just be able to say, hey, this comment doesn't follow the guidelines of this module. Here are the guidelines. Here's what you can do. Just having an easier way, or even just giving people more information about that so they know about it, can reduce that effort, too. Like, if there's a way to just bulk edit a couple of comments that you know are problematic, to just add a comment that says, hey, please review the contribution guidelines, that would reduce that workload, too. And that is one of the things, really quickly, that I want to add, which is part of this process.
Right now we've talked a lot about the problem statement and not too much about sort of solutions. But part of this was drafting some responses, together with some other folks in the community who had an interest in figuring this out, as well as the CWG. Some of these are issue comment nudge templates, which are in the CWG queue. Some of them are email templates that I keep for staff to use. And the very first ones are focused on educational materials. They're focused on a video that I produced that is an introduction to organizational contribution, that talks about etiquette, where to find documentation on all of these things, and has some do's and don'ts that I'm hoping will be used by various organizations as part of their training program. So we send them that. We send them the issue etiquette.
I offer to review an organization's internal contribution plan if they would like me to as part of that message. And at the end, I do link to the abuse policy and potential consequences if we don't see changes in behavior. And there's a similar message for individuals as well, which has sort of the same information, but it also says, please circulate this with the leadership at your organization or your peers. And, you know, together, hopefully we can make your contributions a boon to the community. Have organizations taken you up on the offer to review their contribution plan?
Yes. A couple of the ones that I would say folks actually initially reported as giving some trouble along these lines have then sort of reached out to me.
And I've got some ongoing dialogue with some of those. I've done some sessions. I've got some more scheduled.
If you're really close to this issue, you may have noticed, oh gosh, for like two weeks there was a bunch from one organization or another, and then it stopped. That's usually because one of us intervened, and we're now talking to them about it and trying to get it fixed. So, yeah, some of this is working. And I think what helps us is that because we can usually tie it to individual organizations, that lowers the burden of how many people we need to talk to. We can hopefully rely on them to understand and then disseminate across new hires, the rest of their team, et cetera.

Sorry, I wanted to jump in just because we were talking about the reporting spam idea and allowing the community to kind of help facilitate that, right? I think it's a really interesting idea. I would just basically want that to go into a queue for admins to be able to use as kind of a collection of data, as opposed to showing it on the screen or indicating, hey, this has been tagged. Because I feel like, to the next point, that could get kind of out of control; we don't want overreacting. I agree, again, that our primary goal is to build up new contributors who, you know, some of these folks at these organizations might one day be strategic initiative leads, right? If they're able to build their skills up enough and learn the community and grow to know maintainers. And that won't happen if they are so intimidated or scared away in the beginning. And for many of them, right, their engineering manager may have literally said, hey, you need to post 30 contribution patches to different contributed modules every day, when they started that job.
And if they were told that, it's not really their fault. We don't need to name and shame those individuals. We need to get the organization to change its behavior and then make sure the individual is given the right education for what the actual behavior should be. So, yeah, I will say, you know, back before we had the testbot with PHPCS and CSpell and all the automated checkers and linters, there was a time where one of my first,
you know, forays anywhere near the core issue queue, I went and fixed a bunch of coding standards on a contrib project I was helping with. And it was the least helpful thing I could have done, because I invalidated like 30 patches in the queue by making these fixes to whitespace, indentation, whatever. It was pretty disruptive and terrible, and I didn't know any better. But instead of getting blasted or banned or anything else, the module maintainer, or theme maintainer in this case, reached out to me at the time and kind of explained to me why it wasn't helpful and how it could be helpful instead.
And then, you know, I ended up becoming a core developer out of it. And it's been 13 years, you know, as of this week that I started working on core.
And so, whether it's the maintainer themselves, if they have the bandwidth, or the nudges, as you said, from the CWG or from you directly and the DA staff, those things can go a long way and are very powerful. And as you said, that person could be a future initiative lead.
And it worked for me. So I'm just glad that we're continuing that sort of calling people in instead of calling people out. Yes. But the other side of that is, as you mentioned, after a few conversations, if things aren't really improving, there are some consequences.
And, you know, reading through the doc, I think that's the most unfinished section of the plan, in terms of how these consequences are going to be meted out. I don't know if you want to talk about that more. Yeah, I will, I think. So what we've done right now is we've got a draft policy published, which means we will enforce it as written, but we will continue to revise it. We reserve the right to do all of those things to help, you know, maintain that community, make it a good place to be. But essentially, there are standards of conduct for being listed in the marketplace that existed before this, and that we've now updated with specific clauses related to credit abuse. So there's now basically a series of warnings that go to organizations, and internally on my side, on the team side, each is associated with a different
conversation template that we have for this conversation. So like I said, there's a first warning which says, hey, you may not be aware; here are all of the materials to understand what issue etiquette should be, what good contribution behavior looks like, the do's and don'ts, those resources that I talked about before. It links to the abuse policy, and it says, you know, if behavior doesn't change, you may see some sorts of consequences. And those escalate if a conversation isn't started or we don't see improvement, and they escalate in a couple of different ways. One is we may actually temporarily suspend or ban individual user accounts. That's not the primary focus, but we might do it as a sort of stop-the-bleeding measure if one of them is really hitting a ton of issues, until we have enough time to actually reach out and figure out what's going on.
If I do that, I then wind up sending another educational message, not exactly like a test, but it includes two or three questions at the bottom that say, you know, did you understand X? How will you do Y differently? And I don't unblock that user until I actually get a genuine response to those questions. At the same time, at the organization level, what we say is, if we see this from your organization at large, you may get suspended from the marketplace listing. So you may be basically unpublished from the marketplace for a week, a month, perhaps longer. You may lose things like your certification status as a Drupal Certified Partner, which is one of the main reasons why people are trying to get contribution credits, so they can participate in that program.
Yeah, we're still early. So like you said, Tim, those policies aren't 100 percent fleshed out, but we do have, I think, an escalation ladder of like four, maybe five interaction points that lead up to an indefinite marketplace suspension as the highest level. We haven't reached that yet, but I will say, after we published that and the blog post announcing it, I definitely had a couple engineering managers at some organizations reach out to me and say, hey, we're working on something. So hopefully that means message received.
And just a couple of mechanical questions. If you ban a user, does that remove issue credits? Does that remove the comments? Or do you still have to clean up historical comments? That's a good point. No, right now it does not. There's no automation to manage that, especially because, right, we have also seen this from accounts that are like four years old, and so it's not like we can assume every comment they ever posted was one of these things. So no, there isn't currently an automated cleanup process. That's something that moderators or staff or someone can do.
Yeah, I feel like there was another point in terms of how we're moderating the actual individual content.
Yeah, I'm curious if there are any tools, planned or in existence, that will help project maintainers, or is that just kind of a manual process on a case-by-case basis? The DA, you know, with the site moderators team, will kind of help out; at least for now, that's pretty much going to be the case. Again, adding a flag to the comment, to Tim's point, which I think is probably the best idea, specifically a flag for, hey, this appears to be gaming or whatever, would make it easier for us to build a view of everything flagged there, mass-unpublish those, potentially suspend those users. Those kinds of things might be the way to do that.
Those probably won't be tools that go directly into the maintainers' hands, but something that, again, site moderators or staff would use, though maintainers and community members would hopefully have access to those reports. And the other element to this, of course, is the credit received on the issues, right? How do you rescind that? Maintainers can already do that themselves. They can go back and uncheck anybody on any issue and save that change, and the credit's gone.
The problem is, if people don't realize that a maintainer has rejected their credits, they may keep making these spam posts. And the maintainer actually doesn't even have to manually uncheck anything anymore; they just don't have to check it in the first place. But again, if the users don't realize it's not working, that kind of spamming behavior might continue. It doesn't get resolved for the maintainers, right, if they keep posting the things. Yeah, right. Exactly. It doesn't help the maintainer if they keep posting, just the fact that it's not giving them credit anymore.
Although we are hoping that the fact that it's no longer automatically checked will make people realize that doing a bunch of drive-by empty merge requests, for example, is almost certainly not going to get you very far anymore.
So as soon as they become aware that that's no longer working, we're hoping it'll affect that behavior. The very last thing is, the Drupal Association actually has a meta credit field, basically, on organization profiles. So we can assign a negative credit value to organizations to say, hey, based on the fact that over this last month you've gotten X number of issue credits that seemed based on this poor behavior, and hey, we know maybe you're working on a fix, but we're putting minus two hundred credits on that profile to sort of wipe those out until they fall off of the 90-day window for marketplace credits, or various sorts of things like that. So we have that option as well to exercise, so that they can't just do it and then apologize to get away with it and still hold on to the credits.
Let that be a lesson to you, folks. Empty merge requests are nobody's friend. Yeah. So, Tim, I want to shift back to something we talked about a little bit ago with kind of the development of these policies and the development of the consequences for violating these policies, right?
I had the pleasure of watching this process unfold in the earlier iterations, when these issues first came up. And I wanted to just talk a little bit about who developed these policies and how, over what seems to be the last year or so.
Yeah. I mean, there's a lot of people who've contributed to this. I would say, ultimately, the DA is responsible for setting the policies, and, you know, we will move forward with kind of the best decision there. One of the few sort of privileges of the DA, and working there, that we use over time is that I can be the arbiter of good enough and say, hey, we have policy proposals and suggestions and maybe some bikeshedding, and I can say, all right, we've got something good enough and we're going to run with it for now. We may change it in the future. That's really important. But in terms of who's involved, there is a board subcommittee that is part of the contribution credit system, and it's made up of the community-elected members of the board.
So right now, in particular, Mike Herchel from the board has been involved in following along in some of these conversations and consulting with me privately on what we've come up with. And in community spaces, a lot of community members are participating in that contribution recognition feedback channel, including members of the CWG, members of core, and maintainers of other modules.
And basically, I've shared these draft policies around to get rounds of feedback and then make the best decision I can, at least to start with, with kind of eyes open to evolving them as we see how they work.
Yeah, it's worth noting that it has been, seemingly, a very democratic process. It wasn't like you just woke up one day and decided, here are the policies that we're going to enact to combat this problem, right? You've gotten a lot of feedback from a lot of different community members, a lot of different groups. So, you know, I personally appreciate your efforts there, and I appreciate everybody that worked to kind of get these in place. Yeah. And it's important, and there's always a balance, right? One of the most crucial things was the conversation, especially with CWG members and some folks like yourself in the background, where we really did want to make sure we weren't becoming overly punitive. We're not becoming the contribution cops, as it were, and we still were, again, as Tim said earlier, trying to call people in rather than call them out, as much as we can.
Which we should get a T-shirt for. I'm just saying, anybody out there, that seems like a T-shirt-worthy quote. But yeah.
But, you know, from there, with those voices to help us keep that in mind, coming up with these proposals for definitions of abuse and consequences is not always the easiest thing, because you do have to factor in some of those other aspects, like folks that are new to the community, for whom, you know, it may be the first time. And I think overall the tone that we've set here is nice but firm: like, hey, we want you to contribute, but we don't want you to adversely affect our maintainers and our other community members. And there's something important in what you said, which is, I've realized more and more in other community interactions, completely unrelated to this, that being firm, or more important, just being clear, is really crucial, right? If you can make it very simple and very clear what the rules are, that actually helps everyone. People understand a little bit better what it is. They're not trying to walk the line around fuzzy edges, and it just sends a clear signal. So you mentioned the CWG before. In terms of enforcing all this, is it currently just relying on staff? And do you think that's scalable?
That's a good question, especially the second question.
Right now, enforcement of this policy is primarily relying on staff.
I will say that, you know, the CWG's role is to foster community health and to mediate conflicts, and this doesn't really fall into those, unless, I suppose, a maintainer chose to file a mediation request against a specific individual or specific company; perhaps they would then get involved. But right now we're treating it closer to the spam situation, wherein, you know, we don't use the CWG to mediate with spammers. We sort of handle it and enforce it based on policy. So, at the moment, site moderators are helping to identify the issues, open them in queues, and do temporary account suspensions if they need to. But DA staff are really responsible for actually enforcing consequences, such as the marketplace consequences, for example.
And partly that's because we literally have business relationships with some of these folks, and someone is going to take some blowback from someone who's upset that they're losing something important to generating business for their company. And that should not be a community volunteer, right? That's not the kind of heat that needs to be put on a community member. It's our policy and our responsibility, so we're taking responsibility for that.
However, like you said, is that ultimately scalable is an interesting question.
Right now, it's already been a little bit tricky because especially as we first worked out the policies, it's literally been primarily just me.
And that's definitely not scalable in the long term, but it did help me get, I guess, a sense for what I thought the policy should be.
So we will have to think about scaling. We will have to think about whether, you know, can we empower more site moderators or is there a class of moderators who could be marketplace moderators?
Yeah, we will probably have to come up with a solution, but I don't think we have one plan just yet.
Yeah, I think some of the tooling things you mentioned will help, especially with empowering site moderators, because right now it's just kind of like, well, Tim's got it handled. The templates aren't as readily available in the same way, and that button and the associated view that might come from it aren't there yet, and everything. So yeah, I think it sounds good. Yeah, it feels to me like it should stay as a DA.
Well, let me rephrase that. I think the DA should stay as the enforcing body. You raise great points there, Tim: a single community member, or a group of community members, should not have the authority and should not be responsible for enforcing these rules. It makes a lot more sense for that to come from the DA. And to be fair, I think it has a lot more authority and credence to say, hey, organizations, you're not abiding by the policy that we're trying to provide to everybody. And it's our partner program that we're saying is being abused and affected by this, right? So it's up to us, for the integrity of that program, to also enforce that. So for the next two questions, I want to actually turn to Tim Plunkett, as a maintainer of various things, working in core and so on and so forth. I'm wondering how you see these policies helping module maintainers specifically.
Yeah, well, I think, first of all, just having that contribution recognition feedback channel has been sort of cathartic for many maintainers, where they finally have a place where they can complain and have it not just go into the void.
Because, you know, whenever you're in the issue queues long enough, you see things that make you roll your eyes at best and, at worst, make you want to close your tab and walk into the sea.
And so having a place where you can kind of commiserate and see action come through, that alone has been very powerful. And that's not just me; that's many of the people in this channel.
You feel like that channel has probably acted as a little bit of a sounding board too, right? For you to be like, hey, is this another one? Oh, no, I just misunderstood that, sorry. One hundred percent. And sometimes, especially with the chat, it's like, is this person actually writing this way? Like, did they become Nathaniel Hawthorne, or are they just running it through a program?
There's been some fun guessing games with that, for sure. But yeah, as I said, it's been a nice place. It reminds me of more of the community that existed in the IRC days, where it was just a bunch of people hanging out and sharing grievances and talking through things.
Whereas a lot of the other channels are more action focused, this is sort of like, am I the only one thinking this? Am I the only one over here dealing with this? And finding out that's not the case. So I think, without even going near the policy, that alone has been very, very beneficial.
I feel like we just found a subject for that channel: the airing of grievances. Yeah, certainly part of it. What was the name of the channel again? Oh, yeah. Contribution-recognition-feedback. All hyphens. Yeah. OK, I'd like to kind of spread that around, because it sounds like even just creating that channel has been helpful. But Tim Plunkett, where do you see this going? Is there anything missing from the policy, anything negative about the policy to rectify? I think, again, it comes back to these do's and don'ts lists. I mean, that's the thing. The title of this show has been about issue etiquette, and I feel like we've talked about such a small fraction of that concept.
You know, even when you're well intentioned, how are you supposed to do things? Even if it is backed by an MR and real code, there's a lot of different ways to go about doing things. I think one of my biggest concerns, and as you know, I was guilty of this in my early career:
Waving the rules at people and saying, these are the rules and you're not following them, so you're wrong, when there are things in the gray area.
And I think that the more rules we have, the more loopholes are generated, and the more people will then cite those, or get offended, or try to police others. And that's where, in this case, I feel like it's been better.
You know, it's been clearly a DA-driven thing right now, with the staff; it's not just up to the maintainers to enforce their own rules. And a lot of what I'm talking about comes from the core queues, where there's been a lot of work done by the Bug Smash and needs-review-queue initiatives to kind of keep the core queue clean. But at the same time, there have been issues that have been miscategorized, removed, or closed prematurely, or just misinterpreted, because they've been open for 17 years and whatever, and if you weren't there at the in-person conference in 2000-whatever, then you don't have all the context.
And so I think taking the rules as a blunt-force object and beating the issue queue with them can lead to well-intentioned reverse abuse, sort of. So, yeah, that's my only concern, but that's true with any set of rules. The fewer rules you have, the less clear it is, and the more rules you have, the more loopholes there are.
Yeah. And I think that's one of the key things, too, as far as the scalability that you mentioned, Tim Lehnen, earlier: you can't just blanket ban these people, or make a rule that says if somebody opens an empty merge request, bam, we should review that person. Because I'm sure, especially in the early days when I was first figuring out the new merge request workflow, clicking on the buttons, I opened so many invalid merge requests. If somebody new coming into the project does that, it's very easy to do. So do you want to flag every single one of those? No.
Maybe some UI changes are in order to help prevent that, or some sort of way to say, hey, you're opening an empty merge request, did you mean to do that? I want to jump in on that real quick, though, because honestly, I think that the creating-a-new-empty-merge-request thing has kind of been solved in one sense, because creating a merge request takes a while; you have to go sit there and wait for it to populate, especially for core, because there are so many branches it clones, everything. And so, you know, I'm no longer frustrated when I see empty merge requests, because they just saved me that time of sitting there and waiting for the system to do the thing. Now that I know they're not getting any credit for it, and no one's going to accidentally credit them for it, it's no longer a huge frustration. So I know that was just one example you were giving, but there are some things that the hard switch from crediting certain things by default to crediting nothing by default has just solved. And it's not true for all the things, like the README.txt to README.md conversions, which are very annoying.

Just to balance it a little bit, the automatic removal of credit has meant that actual contributions get less credit, too. So there is that balance. I think we do need to err on the side that we have, right? Like, you can't just automatically give it, because that causes more problems. But I'm convinced that a lot of the credit that I got for issues was because my name was automatically checked. And I'm happy that the issues get merged, because that's one less patch that I have to add to a project. But a lot of maintainers don't go through the effort of checking and seeing who should get credit, which is their prerogative. But they have to now, though; otherwise, literally no one gets credit. Yeah, I think a lot of project maintainers don't care about credit.
You think so? Do you think the commits are going through with zero credit given to anybody? That's a separate problem, I guess. You're right. Yeah. But like I said, I think that's OK. Ideally, if a project maintainer doesn't want to deal with credit, that's fine; that's their prerogative. I just wanted to temper that no-credit-is-automatically-given change with the other side of it: sometimes knowing that there is credit and that you didn't get credit, you know, that can be discouraging. The one nice thing, the thing that is so nice about having moved away from using commit messages and parsing strings to do all the things, is that credit can be retroactively granted or removed. And so, sometimes, especially in core, there's like hundreds of people coming out of a thing, and someone will say, hey, I didn't get credit for that, and I'll say, oh yeah, sorry, and then go fix it. It's very straightforward now. So that is a nice thing, but I do understand your point. Yeah. And that's something I thought about a little bit as well. It's that balance, and like you said, the whole reason we did it in the first place was for that case, Nick: we were originally thinking, OK, people who uploaded patches or merge requests are going to want to be credited at the end of the issue. But yeah, I think having maintainers be in a position where they either have to choose to engage entirely or not engage is at least more straightforward than, oh, some of it will be done for you, but some of it won't. So, well, that'll be another sort of educational component and something to watch over time.
I mean, maybe, again, it could just be a way to say, hey, you're closing this issue and it was merged, but you're not granting anybody credit. Was that intentional? Just a prompt that says that, because there are modules that have been on Drupal.org for 20 years at this point, and maybe their maintainers only come in and edit them once every two years and don't even know that credit is a thing anymore. But yeah, there are other solutions to that issue. Like I said, I'm not advocating in any way that we add automatic credit back; that time has passed, unfortunately.
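A prompt like the one described here could be driven by a simple check at issue-close time. A minimal sketch, with the caveat that the status names, fields, and function are all invented for illustration and do not reflect Drupal.org's actual data model:

```python
# Hypothetical sketch of the "was that intentional?" prompt discussed
# above. Status names and fields are illustrative only; they do not
# reflect Drupal.org's actual implementation.

FIXED_STATUSES = {"Fixed", "Closed (fixed)"}

def should_prompt_for_credit(status: str, credited_users: list[str],
                             has_merged_changes: bool) -> bool:
    """True if the maintainer should confirm that closing this issue
    with zero credited contributors was intentional."""
    return (status in FIXED_STATUSES
            and has_merged_changes
            and not credited_users)

# A merged, fixed issue with nobody credited triggers the prompt:
print(should_prompt_for_credit("Fixed", [], True))          # True
# Crediting at least one person, or a non-fixed status, does not:
print(should_prompt_for_credit("Fixed", ["alice"], True))   # False
print(should_prompt_for_credit("Active", [], True))         # False
```

The check deliberately fires only when changes were actually merged, so simply closing a stale issue without credit would not nag the maintainer.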
But you're right, that is an important separate problem that we should not lose sight of while focusing on being contribution cops. If we get too focused on just the negative things, we'll forget that, hey, we should also be encouraging our community to credit good contributors, to inspire them to do more, to make it worthwhile that their company sponsored them and they gave back. That's really the ultimate goal. In some ways I want to spend as little time as possible on the enforcement stuff if I can. So, as you mentioned, the enforcement piece is still being hashed out in terms of negative credit and how you escalate through the plan's steps.
But in terms of the rest of the policy as a whole, is there anything planned or coming that you have your eye on?
Yeah, so there are a few things. Part of it is a bit of wait and see. Like I said, I had a couple of organizations reach out asking me to review their internal contribution policies, or even to participate in giving a training, so I want to see how those go and whether that inspires any thoughts about policies. But the other one calls back to something you mentioned earlier, which is, hey, aren't we here to talk about issue etiquette in general as well? Right now, there's an effort to revise the main section, the non-credit-related portions, of the issue etiquette page.
Among other things, you called out earlier, and I think it was a really smart, important insight, that just having a mechanical list of do's and don'ts turns this into a very binary conversation. So a lot of the drafted updates actually remove a fair amount of the do-and-don't language, especially the don't language.
The idea is to keep it where it's needed, but to focus more on emphasizing some of those earlier points I made. The fundamental point is that you're there to collaborate, to problem-solve together with maintainers and other collaborators. If there were just one rule, it would be: be a good collaborator in those issues. We have to define all of that, of course, but I do think there are some updates there. And, while it's not strictly a policy update, something we'd like to do is start some kind of contributor journey automation. Maybe when you post your first issue comment, we send you a preemptive email with contributor information, issue etiquette, all those things. Maybe we have a little drip campaign of contribution education that at some point says, hey, if you ever have the chance to go to DrupalCon, try the first-time contributors workshop. A number of those things could be pretty cool. So part of what I would like to see is a bit more automation and deliberate thought put into the contributor journey as a whole, and into this issue etiquette conversation as a whole, as we go forward.
I feel like you almost created a word there: automated education. Yes, I feel like that's perfect, right? It's my new startup. There you go. We'll cut that part out so nobody steals it from you before you have a chance.
When you were talking previously, I was thinking that when somebody signs up for an account on Drupal.org, that would be a great point to say, hey, if you plan on contributing, here are some tips and tricks that would be good for you. And then the same thing when they post their first issue. Identifying those touch points and building up that education path makes a lot of sense.

Yeah, I think it would be really cool. The other thing I will say is, as all of you know, we've been continually making progress over the last couple of years on the GitLab migration and accelerating our use of GitLab tools. Most recently, we've got CI enabled for every project, if they choose to use it, using GitLab CI, and they can deprecate or stop using DrupalCI. Eventually that will mean issues as well.
And that means a lot of the things we're talking about, flagging, moderation, even the issue credit UI, are going to change a little when we move to GitLab issues. Now, we've already built what the new UI will be for the attribution interface and all of that.
It'll be a little interstitial area the maintainer can go to to do all the attribution they're used to, and we have that working on a development site, so we don't have to worry too much there. But when we think about something like the report-comment link, we can't necessarily inject that directly into the GitLab UI. For some of these concepts, we're going to be looking at what we can do, and at how we might use GitLab bots to help detect problems and provide some educational information. We'll have to see; there are some open questions for sure about how we implement it moving forward.
Awesome. Well, Tim, we appreciate your time. Before we close out the show, I'm wondering, is there anything else you'd like to add? Anything we didn't cover that you feel is important here?
So, going back to what I think we talked about a bit here at the end, and sprinkled throughout: the fundamental project we are undertaking with all of this, with contribution credit, with these concepts behind how we moderate things, with the various tools or the drip campaigns we're talking about building, our whole goal is to foster a new generation of contributors to the project. I just want to emphasize again that we should not lose sight of our goal in terms of mentorship, education, inspiring people, encouraging them. So for those out there listening, especially those who've been frustrated by seeing some of this stuff come by and having to deal with it: join the recognition feedback channel, vent a little bit, have some community with other folks in the same situation. But then also think about how we can use this as an opportunity to turn it around into something that makes the community stronger.
That's ultimately our goal and what we want to do. Empathy from both ends, I think, is what we're striving for.
Awesome. Well, Tim, thank you again for joining us. As always, it's been super informative and super insightful.
Glad to be here. Every time is a joy. So thanks, y'all.
Do you have questions or feedback? You can reach out to Talking Drupal on Twitter with the handle @TalkingDrupal, or by email at show@talkingdrupal.com. You can connect with our hosts and other listeners on Drupal Slack in the Talking Drupal channel.
You can promote your Drupal community event on Talking Drupal. Learn more at talkingdrupal.com/tdpromo.
And you can get the Talking Drupal newsletter to learn more about our guests, hosts, show news, upcoming Drupal camps, local meetups, and much more. You can sign up at talkingdrupal.com/newsletter.
And thank you, patrons, for supporting Talking Drupal. Your support is greatly appreciated. You can learn more about becoming a patron at talkingdrupal.com by choosing the big Become a Patron button in the sidebar.
All right, we have reached the end of our show. This is the point where everybody gets a little time to shamelessly self-promote. So, Tim L., where can folks find you?
So I'm hestenet on Drupal.org and most other places. I'm actually Tim Lehnen on Twitter.
You can find me at tim@association.drupal.org, or on the Drupal Association staff page if you need to reach out, or of course on Drupal Slack. I'm pretty easy to find and happy to talk with almost any community member about subjects like this or others, so please don't hesitate.
And a few plugs, of course: the Drupal Association is a nonprofit, a 501(c)(3) in the United States serving a global community, with staff all over the world. We can't do any of these things without your support, so we'd appreciate it if you would become a Drupal Association member. If you're already a member, thank you very, very much; maybe talk to your company about whether they could become an organization member or upgrade their membership status. And if you're interested in coming to a major Drupal event, certainly look for the local camps in your area at drupal.org/community/events, or consider joining us in France at DrupalCon Lille in October. Should be a lot of fun. Sounds lovely.
Tim Plunkett, are you going to France?
I can neither confirm nor deny at this point. There you go. Still hoping, but we'll see. There you go. I'd like to take a second here to thank you again for joining us for the last four weeks. And like I said, any time you have a topic or you want to come on to chat about something, feel free to reach out and we'll make it happen.
If our listeners wanted to get hold of you, how best could they do that? So I'm usually tim.plunkett anywhere that full stops are permitted in usernames; otherwise I'm timplunkett. Drupal Slack is probably the best way to just hit me up and talk about field UI stuff or layouts or the good old days, whatever you want. I'm around. And you can also find Tim in the contribution recognition feedback channel on Slack, right? Absolutely.
Nic Laflin, where can folks find you? I am nicxvan, N-I-C-X-V-A-N, pretty much everywhere.
And for myself, I am John Picozzi, Solutions Architect at EPAM. You can find me on all the major social networks and on Drupal.org at johnpicozzi, and you can find out about EPAM at epam.com.
This has been Talking Drupal. If you've enjoyed listening, we've enjoyed talking.