About this Episode – What is Content Moderation?

Keywords Trust and Safety
Modulate

In this insightful episode of the Player: Engage Podcast, hosted by Greg Posner, we dive into the often overlooked yet critical issue of mental health in the realm of content moderation. Our esteemed guests, Sharon Fisher from Keywords Studios and Chris James from Modulate, bring their wealth of experience to the table, illuminating the complex challenges faced by moderators. They delve into the comprehensive mental health framework developed in collaboration with mental health professionals, a crucial tool in the moderator’s arsenal for combating the emotional toll of their work. The discussion highlights the natural yet potentially harmful reaction of dissociation when confronted with disturbing content. Our guests emphasize the importance of maintaining a connection with one’s emotions to avoid the dangers of prolonged dissociation, such as burnout and a loss of self-awareness.

The episode further explores the application of therapeutic techniques like cognitive-behavioral therapy (CBT) and dialectical behavior therapy (DBT), providing moderators with practical tools to stay grounded and emotionally aware. The speakers’ personal experiences add depth to the discussion, offering a relatable and humane perspective on handling traumatizing content. The conversation also sheds light on the cumulative impact of moderating toxic content, underlining the psychological and neurological consequences that can insidiously influence moderators’ mental health. This leads to a broader discussion about the necessity for robust support systems, including technological interventions for content filtering and structured escalation pathways, to safeguard the well-being of those on the front lines of digital content moderation.

Overall, this episode is a compelling exploration of the mental health challenges in content moderation, offering valuable insights and strategies to support moderators in this demanding yet vital role. It’s a must-listen for anyone interested in the intersection of technology, mental health, and the human element in digital content curation.

Transcript

Intro: 00:00: 00:15: Welcome to the Player Engage podcast, where we dive into the biggest challenges, technologies, trends, and best practices for creating unforgettable player experiences. Player Engage is brought to you as a collaboration between Keywords Studios and Helpshift. Here is your host, Greg Posner.
Greg Posner: 00:16: 02:20: Hey, everybody. Welcome to the Player Engage Podcast. I’m Greg. It’s my pleasure to bring you a very special episode of the podcast today about a very significant topic in the gaming world, the role of trust and safety, with a special focus on the challenges faced by moderators, particularly during the holiday season. Today, we’re exploring the crucial role of trust and safety in gaming. We’ll start by understanding what safe gaming environments entail, examining the roles of moderators and the types of games that depend on their vital contributions. The journey will take us into the core gaming safeguard mechanisms. We’ll then venture behind the scenes with a day in the life of a moderator to understand their daily challenges and the balance between proactive and reactive moderation. Following this, our discussion will pivot to the role of AI in trust and safety, delving into how AI is revolutionizing moderation by managing routine issues and prioritizing critical cases. Today, we’re joined by Sharon Fisher, a trust and safety veteran with over 15 years of experience in the gaming and social platform industries. She got her start as a moderator for the popular kids’ MMO, Club Penguin, which was acquired by Disney in 2007. She held several roles at Disney until 2016, when she joined Two Hat (later acquired by Microsoft), a moderation software company. She founded Real Gaming Consulting in 2020 to help platforms create and maintain positive community interactions. Today, Sharon leads the trust and safety team at Keywords Studios, where she oversees a team of over 15 strategists and over 300 superhero moderators. The unit has established itself as an industry leader by prioritizing superhero well-being and optimizing workflows through the AI and HI approach. We’re also joined by Chris James of Modulate. Chris has spent many years as an audio data specialist, and Modulate is using machine learning and AI technologies to create a safer and more inclusive voice chat experience with their tool called ToxMod. So that was a lot. I usually don’t say that much. But thank you very much for joining me today, everyone. Let’s jump into this. And before we do, Sharon, do you want to give a quick introduction of yourself?

Sharon Fisher: 02:20: 02:44: I mean, I don’t think I can top that one, Greg. You’ve said it pretty much all. Hello, everyone. It is a pleasure to be here. I’m super excited to talk a little bit more about how we do it and why we do it. The more that I go around the globe, the more I learn that not everybody knows what a moderator is and what we do behind the scenes. So I’m looking forward to connecting that with AI and Chris’s expertise.

Chris James: 02:46: 02:58: Sure. Hi, everyone. My name is Chris James. I’m the Audio Data Specialist at Modulate. Thank you, Greg, again, for that amazing intro. I work on the ground floor, studying the voice chat data that we use to develop ToxMod.

Greg Posner: 02:58: 03:15: Cool. And both of these are really interesting, right? We have Modulate, which is an actual tool that’s being used, that’s analyzing voice in online games, which is nuts. I guess maybe, Chris, you want to give us a high-level picture of what Modulate’s doing?

Chris James: 03:15: 03:55: Sure, yeah. So ToxMod is a product, an AI solution that helps go through multiple instances of voice chat in a game and pick out only the most toxic material. A lot of the stuff that we moderate for is things like hate speech, harassment, and bullying, and we escalate those to human moderators so that they can take the proper action needed. It solves the old problem that Two Hat dealt with as well in text, where you’d have to go through hundreds and hundreds of instances of voice chat to find anything toxic. But with our solution, it escalates that stuff for you, and you don’t have to go through everything.

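What Chris describes is essentially a triage pipeline: score every clip of voice chat against a harm taxonomy, then surface only the highest-severity items to the human moderation queue. The sketch below is a minimal illustration of that idea, not Modulate’s actual implementation; the category names, thresholds, and the pre-computed scores are hypothetical placeholders standing in for a real model.

```python
from dataclasses import dataclass, field

# Hypothetical harm categories and per-category escalation thresholds.
THRESHOLDS = {"hate_speech": 0.80, "harassment": 0.85, "bullying": 0.85}

@dataclass
class Clip:
    clip_id: str
    transcript: str
    scores: dict = field(default_factory=dict)  # category -> model confidence in [0, 1]

def needs_human_review(clip: Clip) -> bool:
    """Escalate only clips whose score crosses a category threshold."""
    return any(clip.scores.get(cat, 0.0) >= t for cat, t in THRESHOLDS.items())

def triage(clips: list[Clip]) -> list[Clip]:
    """Return the small, high-severity slice for the moderator queue, worst first,
    instead of asking humans to listen to every instance of voice chat."""
    flagged = [c for c in clips if needs_human_review(c)]
    return sorted(flagged, key=lambda c: max(c.scores.values(), default=0.0), reverse=True)

# Toy example: only the second clip would ever reach a human moderator.
clips = [
    Clip("a1", "gg, well played", {"harassment": 0.05}),
    Clip("a2", "get out of this lobby or else", {"harassment": 0.91, "bullying": 0.88}),
]
print([c.clip_id for c in triage(clips)])  # ['a2']
```

The point of the threshold step is the workload shift Chris mentions: moderators review the escalated slice rather than every conversation.
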
Greg Posner: 03:55: 04:26: That’s crazy to think, I mean, that these tools even exist out there that are monitoring both voice and text. It’s easy to understand and read text, but once you start analyzing voice, with the timing and latency involved and how quickly it’s all happening, it’s kind of this mind-blowing thought that, hey, there are people behind the scenes that are protecting the people who are playing online. Before we get too deep into the subject, let’s start high level and understand what trust and safety actually is. And maybe we’ll start with Sharon. Are you able to give us a high-level understanding of what trust and safety is in gaming?

Sharon Fisher: 04:27: 06:13: Let’s go very, very high level, because I think trust and safety itself is something that continues to take shape and find its own place within organizations. So trust and safety, the way that we define it at Keywords Studios, is the area that takes care of our gamers, but also our superheroes, making sure that content that is not aligned with the community standards of our clients is not present on their platforms. But also, if content that is illegal happens to be on the platforms, we’re able to escalate it for real-life and real-time action with the authorities. So that’s one tiny piece. There’s the other side, obviously: partners and technology that help us a lot to make sure that we get there at the right time. It is not about censorship or anything like that. Technology, at Keywords Studios, is our best ally because it helps us focus on that. Now, what else is trust and safety? Policy. New policies are coming, and everything that has to do with privacy. So we also have to be aware of everything that is happening right now and that is boiling currently. Again, working with our partners on technology really helps us here: rather than going policy by policy and trying to apply it in every single one of our projects manually, having technology behind us really helps us get there. So what is trust and safety? Caring for our gamers, pretty much real-world protection through the internet, and pushing the envelope to also think about the superheroes, slash moderators, while they are doing it.

Greg Posner: 06:13: 06:15: Great. Chris, is there anything you want to add to that?

Chris James: 06:15: 06:30: I don’t know, I think Sharon covered that beautifully. The other thing I would say about it is just looking at community health in online spaces. I think Sharon mostly covered it, but I definitely want to make sure that we have healthy and happy communities.

Greg Posner: 06:31: 06:52: It’s an interesting thing that you both brought up. Sharon, you were talking about not just policing the gamers themselves, but also protecting the moderators themselves. So let’s start with maybe, what is the role of a moderator? And Chris, I don’t know if I want to call out each question, you can just raise your hand. But Chris, do you want to talk about the role of a moderator and maybe what your role is?

Chris James: 06:52: 08:10: Sure, I can. So a bit about my role first. I am, or at least I started as, one of the data labelers who help train our AI solution to know what is and isn’t bad. So for example, I would have an audio clip from voice chat and I’d have to tell, you know, inside a tool, what is and isn’t bad about that clip. In terms of being a moderator, which my role has evolved into, being a sit-in moderator for us behind the scenes, what I mostly do is look at actions, see if our tool has taken the correct action to escalate that thing, and also tell the people at the studio that we’re working with how bad this is, what they should do about it, and also help inform them on community standards and things like that. In terms of my particular experience being a moderator, I would say that a moderator is kind of a combination of a first responder for a situation and also a bit of a social worker, because what they have to do is figure out the right thing to do about that thing. But I’m curious also to hear what Sharon has to say about that.

Greg Posner: 08:10: 08:22: Yeah, Sharon, I want to hear your perspective, because you are managing a team of these superheroes, right? So how do you look at the role of a moderator, and how do you make sure that they’re getting the right well-being support?

Sharon Fisher: 08:22: 10:38: I think that’s why we are partners with Modulate, because we align a lot in our visions. But I will add to Chris’s description of the role. Those two are really key, but they also have to be experts on pop culture and the project-specific lingo. And we need to look into bias and multiculturalism. So as you can see, the role becomes richer and richer. Why superheroes, and why we’re upgrading them to that title, is because I think for the longest time all this work has been done, but deep in the background. Right now, what we’re trying to do is bring to people’s attention that, first of all, superheroes exist, but also the challenges the work represents for them. I will say the role of a superhero at Keywords Studios is utilizing all of these different skills, but also looking at different signals. And that’s very important for us, especially nowadays. The way that I pitch the idea of a superhero is: imagine that you are in a room surrounded by a thousand people, sometimes way more than that, and everybody’s looking at you to make a decision. Nowadays, thankfully, everybody has a voice, and it doesn’t matter what decision you’re going to make, somebody’s going to push back. It might be the majority, it might be the minority. But with that, we also have the responsibility of trying to understand as much as we can the context of the situation. So that’s what I mean by signals. And that’s why, again, technology can bring us those signals at the right time, to make sure that when superheroes are making a decision, they have looked into the different pieces and are as informed as possible, so that when somebody pushes back, we have the receipts, to call it somehow. So it is not just saying yes or no to content or just passing it along. There are so many pieces that we are trying to balance to make sure that when you come to our partners, or to Keywords Studios as the HI side of things, we are making a decision based on all of these different pieces.

Greg Posner: 10:38: 11:33: Something that kind of overlapped in what you both said, and I think is fascinating: Chris started off by talking about training the models, which I think is fascinating, that you’re listening to clips and deciding what’s good and bad. But then Sharon, you mentioned pop culture, and that’s a fascinating thought, because I hear my kids talk these days and I don’t understand all the words that they’re saying. And when you start talking online, you have the benefit, for lack of a better word, of being anonymous, so people tend to be mean-spirited when they’re anonymous. But also, how do you train on all these new lingos that get born? It seems like every few weeks a new term is out there. How is the model continuing to learn? How do your agents keep up? I guess these are two questions. The question to you, Chris, would be, how do you continue training it? And Sharon, how do you keep your team up to date on what’s happening out there? Maybe we start with Chris.

Chris James: 11:33: 12:53: Sure. So in terms of iterating on what the model can catch, new language is something where we rely partly on data labelers, but also on just research: going in, finding language, finding the new stuff, and training on that specific new thing. I think a lot of times when we talk about this space, with voice, it gets a little gray, right? Because especially in text, there are a lot of ways that people have figured out how to circumvent the measures that technology has helped us make. And in voice, we haven’t necessarily gotten that far yet, although there are some cases. But regardless, in the meantime, we do have sort of a more complete sense of what’s going to be said. I’ll also call out that each platform and each format, to me, seems to have its own vernacular. So, for example, the way that you say something in voice isn’t going to be the same way that you type it out in text. And that’s partly just because of the medium, but also because of the culture of each one. So being able to iterate on that just means we have to be as in touch as possible with what’s going on in the data. And a lot of times, that means research.

Sharon Fisher: 12:53: 15:01: And I think that’s, again, where we close the loop, right? Our side of things is the HI, or human intelligence. What we’re betting on here, and how we’re leveraging the global reach that we have as a team, because, again, it’s 350 moderators around the globe, is that these people are the ones learning all of it every day and sometimes seeing it for the first time, right? So the way that we have built this engine is, again, not just saying yes or no, but learning from what we are coming across, and then not just learning, but sharing those learnings within the community of moderators to make sure that, number one, they’re aware of what the trends are, but number two, we are also able to pass that important information, even if it’s just something that is going to catch on for two days and then the internet will forget about it, to our partners in technology, to make sure that that power just multiplies. Rather than us going into each of the projects and being like, okay, “covfefe,” or whatever Trump tried to say at one point, has been added to project 123, no, we rely on our partners to help us push that further. In this case, we’re talking about lingo and internet stuff that didn’t really have an impact. But what happens when a new term or new trend is actually impacting, or inciting, people to hurt themselves or hurt others? That’s where time is of the essence. And that’s why, again, and I know that I continue to say this, we love technology: because we find it, we inform it, and then technology just extrapolates and vaccinates everyone, to call it somehow. Maybe vaccination is not a good word, especially in these times, but it spreads it over all of the different networks. And then, for the next instance that it happens, our networks are already aware and they can act accordingly.

Greg Posner: 15:01: 15:28: Something you said that I think is interesting is the fact that you’re managing 350 moderators. And from the outside looking in, that sounds like a really high number. What types of games typically utilize moderator-type roles? If I’m going to go out and buy the latest Call of Duty, right, what’s it protecting me from in that game? What’s it not protecting me from? What should listeners who are maybe parents of younger kids understand about where the protection starts and where the protection ends?

Sharon Fisher: 15:28: 17:48: That’s a great question. And I want to say, 15 years ago, the focus of protection was kids, right? The internet was really new. There was not a lot of understanding of what would happen when a kid goes onto the internet. But today, I can say that every single networked game that has anything to do with user-generated content, in the form of text, video, images, or voice, needs to have moderation. It is just the standard now. Thank you. We are finally at that point where people think about these things. And to your point, Greg, the degree of moderation, or even how many moderators you have on each of the projects, totally varies. There is no perfect formula. In Call of Duty, to use your example, the moderation might be integrated first on the technology side of things, and then we’re going to be looking maybe at illegal content. That will be the main concern, right? In a game that is for underage players, those concerns are multiplied by 100 or 1,000, right? So we have to focus on, number one, the audience of the platform or game. That’s going to dictate a lot of the strategy on what kind of moderation and even what kind of tooling we’re doing it with, right? We want to make sure that these kids are not entering their parents’ birthdate just to pass the first filter, right? Because believe me, kids are very, very resilient and they will find ways of pretending that they are older in order to get onto these platforms. So to your point, for parents wondering whether there’s moderation: yes, there’s moderation. But I will say, do not assume that there’s moderation in every single app that is marked as kid-friendly. That is a misconception. There is moderation from those companies that are actually buying it, or that are really aware of these challenges that they want to protect their users from.

Chris James: 17:48: 19:02: I love that point. I love the point that you made that any online platform should have moderation. I think that’s a really, really important thing. And I think a lot of times the usefulness of moderation is questioned just because it’s not something that we’ve really known about as a problem statement for a very long time. For a very long time, the internet was very unregulated in that way, and there’s a lot of danger out there on the internet. I know there’s a huge concern right now, especially in government, with violent radicalization, and things like sex trafficking and child grooming, and being able to tackle those problems means, to me, having moderation. And it’s really important, I think, for people to understand that. I would also say, too, that the problem statement of how do we protect people on specific platforms with specific audiences is a really big problem statement as well. And I think keeping that in tandem with the ingrained culture on those platforms is really important.

Greg Posner: 19:02: 19:28: You know, we heard a few months ago, when Elon took over Twitter, or I guess we’ll still refer to it as Twitter, that he got rid of the whole trust and safety team. And I think some people just said, okay, I don’t know what that means. But for either of you, and I’m not sure if one of you feels more passionate about it, what does it mean when a huge platform like Twitter removes that trust and safety team? Is it going to affect its users? Sharon, do you want to chime in there?

Sharon Fisher: 19:28: 21:47: It definitely will. I felt like we were literally going back 20 years in time. And I think the piece with moderation is that there are misconceptions about it, right? Misconception number one is that moderation means you’re going to be taking away expressivity, that you are telling people what they’re allowed to say and what they’re not allowed to say. So that’s challenge number one. Challenge number two: I believe that on the development side, people are not even thinking about moderation or tooling or anything like that. The challenges that Chris just mentioned, radicalization, child abuse, any of them: if we were to design the platforms with those in mind from the beginning, it would be another story that we’re facing today, right? Developers think that moderation might just be catching the F word everywhere, and there’s so much more to it. The way that we are creating these different platforms and this engagement, it is so key that trust and safety steps up and reviews, and it is not for trust and safety to dictate what the process and the project should look like. That’s a message we really want to send. It is more for me to tell you, hey, by the way, that’s a really cool feature, you’re going to be uploading UGC pictures. Have you thought that that could actually be a child pornography picture? And you would be amazed at how many people, because that’s not their role, that’s not what they do, they just create amazing features, amazing games, and that’s what’s on their mind, how many people are like, holy shoot, no, I have never thought about it. Can you please tell me more? Again, we’re not trying to cage creativity or innovation or anything like that, but just giving them a little bit more background: if you were to do this, it might decrease the number of child abuse pictures that we see. And it’s crazy to think that you can create that impact. So when trust and safety people are ushered out the door, those are the challenges that you are opening your platform up to.

Greg Posner: 21:47: 22:03: I remember hearing somewhere, I forget where it was, but typically when you create a game with a map editor, the first thing that’s done is someone will design a map in the shape of male genitalia. And it’s just like, come on, guys, is that really…

Sharon Fisher: 22:03: 22:12: We call it TTP: time to penis. As soon as you go live, how long is it going to take them to either try to draw one or to get the word past the filter?

Greg Posner: 22:14: 22:43: As far as we’ve come as a human race, it’s like we take ten steps backwards at certain times, and it’s mind-boggling. But going where you were going with that, Sharon, I’d like to dig more into the day in the life of a moderator. You know, when I first started at Keywords, I never really understood what a moderator was. It’s kind of like, oh, well, it’s nice that they’re handling all that, but what are the real hardships and challenges? You don’t understand that. So maybe we can start with Chris here, because you are in that role: what is the day in the life of a moderator?

Chris James: 22:44: 24:31: That’s a very good question. I’m very happy we’re talking about it. The day in the life of a moderator runs anywhere from very dull and innocuous to completely terrifying and scary, and the reason is that a lot of times, when you pull up a moderation queue, you don’t exactly know what you’re going to see. Obviously we have technology to help filter those results before a moderator sees them. That’s a great thing; it helps us keep them safe. And also making sure that we have escalation pathways for different kinds of content, that’s also something that’s really important. But moderators could see anything from just innocuous swearing to some of the stuff we mentioned earlier that’s a lot more heinous. And it’s really important to understand that. My hope for the future of moderation, at least, is that, like Sharon was saying, they are treated like superheroes, and that we understand how damaging it can be to go through that much content. I have a talk that I do at my work about the neuroscience and the psychology of how seeing toxic content on a regular basis can affect your brain, how it can make you act, how it can make you think things that you haven’t thought before that are bad. And through researching and putting that together, I’ve come to understand how taxing it can be not only to be a moderator, but also what kinds of expectations are good or bad for people to have of moderators. It’s very much a “whatever you get when you walk in is what you get” kind of job. And it can be really, really tough for that reason.

Greg Posner: 24:31: 24:55: I love… I mean, I don’t love, right? But Sharon brought it up earlier, and you’re kind of speaking to it as well: people think it’s just people saying the F word or terrible words online, but it’s grooming, it’s trafficking, it’s all this scary stuff that’s happening in these games. And from your perspective, Sharon, you’re overseeing a group of moderators. So what’s a day in Sharon Fisher’s shoes like?

Sharon Fisher: 24:55: 27:19: Well, I think Chris really described the challenges, but the other side of it is that we can’t just give moderators the title of superheroes, like, yeah, you are the core of it, and thank you so much. There’s the other layer, which at Keywords we think of as the armor for it, right? How are we responsible for making sure that these superheroes, who are getting up every day and might see the worst of the internet and then save lives, and that’s why they are superheroes, have what they need? What tools are we giving them? It’s not just a title and a cape and good luck. We have to armor them, right? So for those challenges, you have to make sure that you don’t just give your superheroes a psychologist at the end of the line. That’s nice to have, but you should almost never get to that point if you are really giving your superheroes the right tools. And what that looks like, and why Keywords is pushing the industry towards it, is everyday care, really normalizing the question: how are you feeling today? We call it “my fire” here, and what it means is, how’s your fire today? It’s just a quick way to tell me how you are doing today, and it gives visibility for your team to know how you are doing. Somebody might reach out on Slack or Teams and be like, hey, what’s up? How can I help you? Sometimes we’re just coals in the morning, right? You’re rolling out of bed and you don’t want to do anything, and that’s human. Sometimes we’re a fire in a building and we’re just flaming and super excited. So I think there’s a lot to understand about a moderator doing this job, and, to Chris’s point, all the impact that it can have. If they are not well taken care of, I call it really damaging people’s souls, because that is what it is, doing this kind of job without having support, a line of defense, like technology. It’s something that in 2023, almost 2024, should not be happening anymore around the globe. We have technology that has come so far already that protecting superheroes too should be something that we do, not only because it is good business, but because it’s the right thing to do.

Greg Posner: 27:19: 27:42: Are you able to dig into that a little more? Because I know you have a very, I’m going to say, hard stance at Keywords that you want to protect your superheroes, you want to make sure they have specific benefits that will give them mental health breaks, stuff like that. How can other companies that are taking a look at this understand what Keywords and other companies should be doing to help their superheroes?

Sharon Fisher: 27:42: 31:13: So first off, while I’m working for Keywords, and obviously for our superheroes, something that I continue to say is: please take the term superheroes. It is not unique to us, it’s obviously not trademarked. Take it, because everybody, every moderator, needs to be shown in that light and be aware that they need this protection. To your point, the depth of it is, number one, who are you resourcing? Do not resource only based on language skills, right? That’s not enough. So the way that we’re resourcing is literally homemade, and it took us probably eight months to get to it: a test that is not only looking into language capacity and capabilities, but also, are you aware of your bias? Are you willing to receive this help? Because it doesn’t matter how strong a program we have, and we have seen it everywhere and anywhere: we all have all these benefits, and when do we utilize them? Never, unless we are in a very deep, deep hole, and we are all guilty of it. So making sure that, number one, you are willing to take this help that we’re going to be providing is really important to us. There are many other points when it comes to recruitment. The way that we onboard people, it is not just, these are the tools that you’re going to be utilizing. We stop, we take a day and a half to just sit down and be like, okay, now that you’re not in distress, tell me a little bit about what it is that makes your soul happy. And this is just kind of a back-pocket first aid kit that we’re going to have there forever and ever. This is very personal. We obviously don’t publish it, but we want to make sure that people are prepared for when and if these kinds of cases come and they are in distress, right? So that’s just a little piece of it. There’s a lot of training and onboarding that we do, but then again, it is not just that, then go to work, and there’s a psychologist waiting for you at the end of the tunnel. There is the everyday care, and for the everyday care we had to go through every single leader within the organization to make sure that they are able to support our superheroes, that they understand what to ask and how to support them. Very, very important to say: we are not giving these people the responsibility of a psychologist or a psychiatrist. We just ask them to ask the questions, to prompt, and to make sure that they are, again, asking “what’s your fire?” every day. That’s the daily. Then we have the weekly, which is the one-on-ones; there is a lot of information coming to the superheroes during the week. But there’s also the monthly, where we have modules, and these modules can cover, again, multiculturalism, internet lingo, bias awareness. We are developing these 100% internally because we want to make sure that it’s our superheroes who are guiding what is needed. So, Greg, I can probably go for 700 hours on this, because that’s how detailed the program is, but I’m open to anyone and everyone reaching out, and the more that we can protect superheroes around the gaming industry or social media industry, the happier we are to support.

Greg Posner: 31:14: 31:42: I love the fact that you made this one statement about finding out what really makes them happy inside. And I feel like it’s just this little thing. I imagine a bucket of, like, “I like Sour Patch Kids” right here, so when I’m angry it’s, all right, let me just take one, take a breath. I think that’s super important. And the same question for you, Chris, but I’m more curious because you are in this role: when you start to feel overwhelmed, what do you do? What helps you? And if you don’t want to get that personal, you don’t have to.

Chris James: 31:43: 34:10: Oh, no, I’m happy to. I actually developed a whole framework with my care team, my mental health care team, to deal with this for that reason. A lot of what Sharon was saying is kind of my own practice as well: checking in with what’s going to motivate me today, what’s going to make me feel fulfilled, and then also being able to check in periodically about how I am feeling, where my emotional state is at, what kinds of things are coming up. I think a lot of times when I walk into this role and I start to look at content, the thing I’m the most afraid of doing is dissociating. And I think a lot of times, when people start to look at content like this, that’s the first thing they do. And that’s good, because they need to separate themselves from the content; it’s very important that they don’t feel like it’s actively happening in front of them and that it’s their responsibility. But at the same time, if we do that for a very long period of time, we can end up just walking away from ourselves. We can end up losing that motivation, losing that internal barometer of how am I feeling today, what things are affecting me. And then all of a sudden we’re three or four hours into a shift and we’re feeling completely burnt out, and we don’t know why, and we don’t even have a place to put it. Because when you’re looking at that stuff for so long, and you’re dissociating for so long, it’s really hard to just come back into your body. So there’s a lot of stuff that I’ve borrowed from some of the trauma therapists that I’ve had, and some that I’ve borrowed from generalized therapists, having to do with the CBT work and DBT work that we’ve done, and then also specific trauma-focused therapy techniques that I use to not just calm myself down, but keep myself in touch with myself and what I’m doing. Because that, for me at least, is the biggest challenge. I’ve had a lot of experience with this and a lot of experience with trauma in my life, so I have a lot of tools to deal with it if I see something that is traumatizing to me or that affects me in a really hard way. But I think the thing that’s really hard to realize is that the small stuff adds up, and it adds up pretty quickly if you’re not careful.

Greg Posner: 34:10: 34:37: Yeah, that was well said. Thank you. And I can imagine, first of all, understanding what makes you tick, understanding how you can manage yourself and dissociation, I think, is important, knowing that it’s not actively happening in front of you. I think that’s really well said. Something I know we kind of spoke about is that a lot of the moderation being done is reactive, right? It happens, we take care of it. But what can be done proactively to try and get in front of it before this even begins? Maybe, Chris, you want to start with that one?

Chris James: 34:38: 35:37: Sure can, yeah. So a lot of the space that Modulate works in, at least, is the gaming space. And generally, for voice chat, the path of escalation that we’ve seen is: a player reports a thing, the studio takes action on that thing. That is what we consider reactive moderation, right? Something that Two Hat kind of pioneered with their text chat moderation technology, and that Modulate is trying to do in the voice space a little more, is proactive moderation, where we have technological tools to go through those voice chat instances and proactively escalate something that’s happening, even if a player doesn’t report it, or while it’s happening in real time. So if a moderator can get to that voice chat instance before something really, really harmful happens, we can prevent even more harm from going on.

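The reactive/proactive split Chris lays out can be pictured as two entry points feeding the same review queue: player reports handled after the fact, and automatic flags raised while a session is still live. Here is a rough sketch under that assumption; the function names, the 0.9 cutoff, and the upstream toxicity score are all invented for illustration.

```python
import queue
import time

# Lower priority number = reviewed sooner.
review_queue: queue.PriorityQueue = queue.PriorityQueue()

def on_player_report(session_id: str, reporter: str, reason: str) -> None:
    """Reactive path: a player files a report and a moderator reviews it afterwards."""
    review_queue.put((1.0, time.time(), {"session": session_id, "source": "player_report",
                                         "reporter": reporter, "reason": reason}))

def on_live_audio_segment(session_id: str, excerpt: str, toxicity: float) -> None:
    """Proactive path: an upstream model scores live voice chat and escalates
    without waiting for a report, so a moderator can step in mid-session."""
    if toxicity >= 0.9:
        review_queue.put((1.0 - toxicity, time.time(), {"session": session_id,
                                                        "source": "proactive_flag",
                                                        "excerpt": excerpt,
                                                        "score": toxicity}))

# Both paths land in the same queue; very high-scoring proactive flags
# sort ahead of routine reports.
on_player_report("s1", "player42", "spamming slurs in voice")
on_live_audio_segment("s2", "[flagged excerpt]", 0.97)
print(review_queue.get()[2]["source"])  # proactive_flag
```
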
Greg Posner: 35:37: 35:56: Just as a piece of technology, I think it’s amazing that you’re doing this almost in real time via voice. I think it’s just mind-blowing. I don’t know if there’s a question to be asked there; it’s just awesome technology. I don’t know, Sharon, how does your team handle this proactive side? In a similar way?

Sharon Fisher: 35:57: 39:06: Proactive moderation is really all that we call moderation. Anything after the fact is more like damage control and cleanup, to be honest. So what we are always aiming for is to make sure that proactiveness happens, and it starts, again, from the design phase, even from the ideation phase. That’s why we’re now calling it responsible moderation, right? Where it’s like, okay, let’s ask these questions before you go to town and add all of these features. Let’s make sure that when you add them, you have these pieces in the back of your mind. Then let’s make sure that we establish, through the terms of use, which nobody reads, breaking news, an understanding as a company of what we want this community to look like. Where are we drawing the line? And that’s where, to your earlier question, Greg, what does it look like? Who knows? It depends, obviously, on the title, on the audience, all of those pieces. We want to make sure that people are able to say “this is F-ing awesome” or “I F-ing love this game,” that’s totally fine. You should never be able to say “go and F all the Mexicans” and things like that, right? So we work to make sure that those kinds of cases are not even present. Why are we going to be worrying about banning and muting and all of that stuff, slap hand, slap hand, if we can prevent those ones from happening, right? I think the other part of proactive moderation, and what we’re trying to do moving forward with moderation, is to not focus so much on the negativity of it. Because from the studies that are out there, we can see that only about four to six percent of users are actually toxic users, right? So in focusing just on them rather than the other 96 percent, we are losing an opportunity to actually create communities that are more engaged. So it almost feels like, when it comes to moderation, those things, the “F the Mexicans” and stuff, should come out of the box already. There’s no question that that’s wrong, so those should come out of the box. And let’s focus now on this guy or this woman who is actually saying something along the lines of “hey, nice meeting you, welcome to the game,” and give them a badge. Very simple and very top-of-mind ideas, but when it comes to preventive moderation, there are so many tools that we can really utilize even prior to going live. Because you’re going to spend, what, if you’re lucky, a year to five years focusing on your game, developing all this cool art, with all of the love and care that goes into putting it together, and then at the end it’s, oh, I’m not going to play it because it’s a toxic game. Let’s take care of this from the beginning. And that’s why pre-moderation, for us, is literally what can dictate the success of the project or not.

Greg Posner: 39:08: 39:53: I like how you were talking about things that should just come out of the box, things that should be standard across the board. And I think, along those lines, AI is going to be able to help with some of this, right? Maybe it can help with prevention in certain cases, but it can also provide some relief to the moderators themselves based on what type of content is coming in. I’d like to talk about how AI is affecting the day-to-day, if you’re seeing it yet, or what things might be on the horizon that excite you. So Chris, from your perspective, you’re doing such cool stuff with voice and technology, so I’m sure AI is already involved, but how is AI evolving, in your mind, to help provide a better future for moderators?

Chris James: 39:53: 41:58: That’s a great question, Greg. There are definitely a lot of strides being made in AI right now, and I think there are equally as many strides being made in the view of AI that we have on the outside. Just being an internal person who does this, I think the coolest thing that I see AI doing right now is giving us confidence in large-scale systems and communities that we could not have if we did not have it. Something that we’re focusing on a lot at Modulate is just how to do this at scale. How do we leverage people as much as we can, and how do we take the burden off of people, so that we can make sure that AI is in front of the more obviously bad things, like hate speech and rampant racism and other things like that? But also, how do we make sure that we are escalating things like violent radicalization and child grooming? Because the most exciting thing about doing this at scale is that the 0.05% of data that contains that content is something we can find now, instead of just having to ignore it or having to look so long for it. And the fact that we can start to protect communities that way, once we start to develop different models for that sort of thing, that’s probably the greatest role that AI has: just making the process of sorting through online spaces extremely simple, and fast too.

Greg Posner: 41:58: 42:09: Yeah. Speed, right? Efficiency, being able to get in front of it before it spreads. Sharon, obviously you work at Keywords where we have tons of technology, but how are you looking at utilizing technology to help as well?

Sharon Fisher: 42:09: 42:12: Sorry, throw that at me again. How am I looking at…?

Greg Posner: 42:12: 42:19: AI, to help assist the moderators in making…

Sharon Fisher: 42:19: 45:54: It’s the core of it. Seriously, I think we have gone through this, and I like to paint the picture of how we came to be with AI and HI. We started moderation back in the day with just humans doing it, right? And it was not artificial intelligence back then; it was just looking into patterns and words: let’s block the words. For me, for example, at Club Penguin, it was snowballs, right? The penguins would throw snowballs. But if the chat filter was too tight, then you couldn’t say “snowballs,” because the second part of the word was a sexual word, right? So we had to find a balance and an understanding of that. Then, once we got really good at it, the decision was like, oh yeah, now automation is going to take care of all of it, and forget humans, because now we want to reduce costs. Really quickly we realized that that’s not scalable, obviously, but also languages and nuances and all of those pieces: oh, wait, we might just need some kind of automation and humans. And then we came into the era of, yeah, but the technology didn’t tell me, well, the humans didn’t tell me, just pointing fingers, because we were all fighting for our place within moderation. I think today we’re finally at a point where we understand that we all bring value and that we both need each other, right? So again, when we are talking about gaming, which is our main focus at Keywords, the sheer volume of content that we’re seeing is like nothing we have ever seen. I will say that in the last five years it has tripled, just because of the circumstances and everybody being at home and such. So there is still a lack of understanding, from my end, of why you would not utilize technology to help you sort through all of these pieces, right? So AI becomes even more interesting to us because of how quickly we can make the switch, how quickly we can protect people, how quickly we can act if we start looking into a trend, rather than having my superheroes thinking, how else could they say this? Well, guess what, there are a billion ways that you can misspell or subvert the F word, if not more, right? But when you give it to AI, it’s just right there. We’re not going to spend, because I did spend that time back in the day, like three months trying to figure out which first and last names could be utilized as something bad, right? Let’s not put our effort there, because there is technology now. And never mind the fact that it is protecting people around the globe, right? So the benefits that we see with AI are greater than anything else. And I think the hurdle that we continue to see is people thinking that AI is making that decision and making the call on what goes and what doesn’t. That’s another misconception that we’re trying to push back on. When I say this has to come out of the box, I’m not talking about me deciding what goes and what doesn’t go in the community. I’m talking about the overall understanding that somebody saying “I’m looking for a five-year-old girl virgin” is not okay to be in chat, right? Those kinds of pieces are what I’m talking about, what AI should already be able to protect against to begin with.

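Sharon’s two examples, catching the F word no matter how it is misspelled and the old Club Penguin problem where “snowballs” tripped the filter, correspond to a classic text-moderation step: normalize character substitutions first, then match whole words rather than substrings. The sketch below is a toy version of that idea with an invented substitution map and placeholder word list, not anyone’s production filter.

```python
import re

# Illustrative leetspeak substitutions; real filters use far larger maps plus
# repeated-letter collapsing, locale handling, and allow-lists.
SUBS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"})
BLOCKLIST = {"badword", "balls"}  # tiny placeholder list, purely for the example

def normalize(text: str) -> str:
    """Lowercase, undo common character swaps, and strip punctuation used to hide words."""
    text = text.lower().translate(SUBS)
    return re.sub(r"[^a-z\s]", "", text)

def is_blocked(message: str) -> bool:
    # Whole-word matching avoids the "snowballs" substring problem Sharon mentions,
    # while normalization still catches obfuscated spellings.
    return any(word in BLOCKLIST for word in normalize(message).split())

print(is_blocked("b4dw0rd incoming"))   # True  - leetspeak is normalized away
print(is_blocked("b.a.d.w.o.r.d"))      # True  - punctuation tricks are stripped
print(is_blocked("throw snowballs!"))   # False - innocent word containing "balls" passes
```

This is the pattern-and-word-list era Sharon describes; the AI layer she talks about sits on top of it to handle context and nuance.
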
Greg Posner: 45:55: 46:31: So I have kind of a double question here. You keep mentioning AI and HI, right? Which is human intelligence. And my question is, as AI has been making more of a presence in this space, have you seen the human involvement go down because it’s less necessary? And then the other question, which I wanted to ask earlier and which kind of goes with this, is: where does the limit of what a moderator can do end? What happens when you find that case that has to be escalated? Who does it go to? Where does your authority end? Sharon, I don’t know if you want to start that one.

Sharon Fisher: 46:31: 47:33: Okay, so I think that’s another fear, right? Is AI going to take over my HI job? And I think, again, I just see AI as one of the pieces that makes our superheroes stronger. I don’t believe that that’s the case. Now, to be totally transparent, of course having technology will decrease the number of cases that a human is going to have to view. That is the point. But what we’re trying to do with this is: yes, the cases are lower, but how are you going to utilize that time now, moderator? Instead of slapping the hands of everyone, what about making the community stronger, being more engaged with them, bringing all of that intelligence back to you and to the company and the team, right? Because we now have more time to focus on the positive. So that’s the first question. The second question, I have the answer, but I don’t remember the question. So please.

Greg Posner: 47:34: 47:41: Yeah, no, the second question is, you know, when something does get escalated to a human, right? What can the human do? What’s the escalation process from there?

Sharon Fisher: 47:42: 51:23: Yeah, I think this is also very unique to Keywords. Again, total transparency: when I joined the organization, one of the things that I saw is that there was a very big gap between the number of projects and the number of real-life threat escalations I thought there should be. Once I did a little bit of investigation, the challenge there was education: understanding what a real-life threat case is, but also what the process for handling it is. What we do today at Keywords is, once you find these cases, and again, most of them are brought up by technology, because we don’t have people just reviewing horrible pictures. Technology, in the case of pictures, for example, will take care of the ponies and rainbows, and it will take care of pornography, but then there’s the middle, the gray area: is this a really small bathing suit, for example? Those sorts of pieces. If the system, or the technology, is telling you this is a real-life threat case, or within chat or voice you are starting to pick up on all these signals, what we do is the superhero then wraps it up, as I call it: give us all the information, all the data that you have available, to make sure that we have something put together. Then it goes to a specialized real-life threat team that we have at Keywords, which has been trained on the different topics to make sure they are able to add more to this wrap-up, and they usually get back to the superheroes and their leadership to ask for more information. And I could talk a little bit more about the challenge of sharing information and all of that; you can imagine it already. But once we have this, well, we have become so good at it at Keywords Studios that it was actually the FBI, about five months ago, that reached out to us and said, hey, what are you doing? Because these cases are really well put together. That really earned us their trust, and I say it really humbly: now we have a direct line with the FBI. In the past, what we used to do is, okay, I don’t know, Oakland, California, I have this case, can you please help us? No, we can’t help you because you are not in Oakland, California. What’s an IP? No, this is just happening on the internet, we cannot support that. So the gap in action was really big. And as you can imagine, it was very frustrating, because gaming companies are paying this money, we are putting our superheroes in the middle to try to find it, we find it, we have all the information… Nowadays, thankfully, again because of the way that we have been doing things, we have that direct line, which doesn’t guarantee that a case will be actioned on, but it does guarantee that the cases will be looked into, which is mind-blowing for me. And we are very, very proud; we can actually sleep at night knowing that the case has at least been seen. Because when you go through this process, even for the superheroes going through it, knowing that somebody is at risk, it is really unsettling to not know if somebody is going to do something about it. So that’s something that really helps our superheroes close the circle and be like, okay, done. Obviously, we don’t get a lot of feedback on what they do, or whether they act on all of it, but at least we have the peace of mind that a professional in the law enforcement area has looked into those cases.

Greg Posner: 51:24: 51:53: That’s insane and awesome at the same time, knowing that that’s your escalation path, and just knowing that that’s a possibility when you’re protecting people online is a relief. Chris, same two questions, and I’ll say them again if I need to: has AI helped reduce the number of cases, or reduce the workload for a moderator, so that they’re only looking at the urgent stuff? And then, what happens when that stuff comes through, and how is it escalated?

Chris James: 51:53: 56:40: Great questions again. So, similar to what Sharon said, AI has definitely helped in that area a lot. Since the basis of our product is basically making sure we can escalate the right things so that moderators can act on them, it’s pretty evident to me that that’s happening. One thing I will say is that the thing studios are leveraging a lot is automation. And when I say automation, it mostly means things that are so bad, and so obviously bad, that they can just be actioned on in a certain way. For example, if someone has a code of conduct in their game that says we do not allow hate speech, a player can be actioned on in real time if they use a hate speech term. And, like Sharon was saying, we do have the nuance to know, to some degree, whether this is reclaimed language or hate speech, what is allowed in a given community, and how certain words can sometimes be hate speech but sometimes be a totally normal thing. Those are things that we can look into. With that tooling, though, and since we’ve seen studios leverage automation a lot, a lot of times we find that trust and safety budgets in companies like this are not very large. And for that reason, it’s about maximizing the amount of good that a moderator can do in a short amount of time. And that, to me, is the biggest difference: all of a sudden we go from a moderator not really having the tools and having to look through a whole bunch of stuff, to having the tools and seeing just that stuff. For your other question about how that stuff gets escalated: with the small, and I don’t want to say smaller because it still hurts, but with the less severe cases, things like swears and curses, we tend to escalate to moderators normally. Or if someone has a blanket ban on specific language, we can automate that out as well, obviously. But really, the value prop for a lot of that stuff is in that gray area Sharon was talking about, where we’re not sure if this person is just swearing among friends or if they’re trying to perpetrate bullying. Having a human look at that stuff is, a lot of times, the only way that you can tell. And since models obviously have error modes, it can be easy to put full trust in that stuff, when really we should always be refining and retraining and making this technology better so that it can be more accurate. That being said, for escalation of the more severe stuff, where law enforcement has to get involved, Modulate certainly has a system for doing that. When we were building our user scoring categories, that was one of the things that we had to sort of make on the fly. We try our best to partner with people like NCMEC and the ADL and other resources that help us not only know where to put it, but also help us escalate those cases to the right authorities. A lot of times when you’re dealing with law enforcement, like Sharon was saying, there’s sort of a barrier to action that you have to pass, where you need just enough information to be able to say, okay, this is definitely this amount of bad, or we are this confident that this sort of thing is happening. And passing that barrier can sometimes be tough. But with some of the nonprofits out there that they’re working with, it can be easier.
And also, with the fact that we have AI, and we have that backlog of data to look through, we’re able to investigate and say: this is more than likely a perpetration of something really bad, versus this is more likely innocuous. The fact that we have that confidence also really helps with the escalation path. For us, it’s mostly that if we find something, we have to go through the studio, just because they’re the ones who are going to be able to escalate that stuff properly. They’re also the ones who have the PII that we do not have, because we anonymize everything with our tool to protect people’s privacy. So they’re mostly the ones who are going to be able to take care of something like that.

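Put together, Chris is describing a routing policy: auto-action only the unambiguous code-of-conduct violations, send the gray area to human review, and push suspected real-life threats down a separate escalation path through the studio. A minimal sketch of that kind of policy follows; the categories, thresholds, and route names are hypothetical, not Modulate’s or Keywords’ actual configuration.

```python
from enum import Enum

class Route(Enum):
    AUTO_ACTION = "auto_action"        # unambiguous code-of-conduct violation, actioned in real time
    HUMAN_REVIEW = "human_review"      # gray area: banter vs. bullying, reclaimed language
    RLT_ESCALATION = "rlt_escalation"  # suspected real-life threat, handed to the studio / specialist team
    IGNORE = "ignore"

# Hypothetical thresholds; a real deployment tunes these per title, per audience,
# and per the studio's code of conduct.
AUTO_THRESHOLDS = {"hate_speech": 0.97}
REVIEW_THRESHOLD = 0.60
RLT_CATEGORIES = {"self_harm", "violent_threat", "child_grooming"}

def route(category: str, confidence: float) -> Route:
    """Decide what happens to a single flagged item."""
    if category in RLT_CATEGORIES and confidence >= REVIEW_THRESHOLD:
        return Route.RLT_ESCALATION          # time-critical: skips the normal queue
    if confidence >= AUTO_THRESHOLDS.get(category, 1.01):
        return Route.AUTO_ACTION             # "so obviously bad" it can be auto-actioned
    if confidence >= REVIEW_THRESHOLD:
        return Route.HUMAN_REVIEW            # a human decides the gray area
    return Route.IGNORE

print(route("hate_speech", 0.99))      # Route.AUTO_ACTION
print(route("hate_speech", 0.70))      # Route.HUMAN_REVIEW
print(route("child_grooming", 0.65))   # Route.RLT_ESCALATION
```

The separate real-life-threat branch reflects the point Sharon makes next: those cases cannot sit in a queue with a 12-hour SLA.
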
Greg Posner: 56:40: 56:49: Yeah, which makes sense, right? You’re providing the service for the studio, and the studio is the one that has to act. You can lead them to water, as they say, but it’s up to them to decide what to do from there.

Chris James: 56:49: 56:50: Exactly.

Sharon Fisher: 56:50: 57:52: And I also wanted to highlight the importance, in these cases specifically, of time being of the essence, right? When you think about somebody threatening to kill themselves or hurt others or hurt themselves, that’s something that cannot go into a queue and wait until somebody gets to it when the SLA is 12 hours, right? That’s something where we do not have the luxury of time. So that’s, again, why technology helps us so much on those matters. It is very different, and I’m not saying they are not as important, but somebody throwing racist slurs is very different from, really, we need to save lives here. That’s what we’re talking about at that level. So time, with technology, is something that people really need to think about. It’s life or death, as deep as that sounds.

Greg Posner: 57:54: 59:49: It’s a great point to bring up. I’m going to take a real time-out here and kind of wrap things up, because it’s almost been an hour. I’m happy to keep going; is there anything you guys want to keep talking about or anything you want to bring up? All right. Yeah, I think we got everything that I wanted to say. So we’ve been here for about an hour, and this has been a really educational podcast for me, and I appreciate the two of you for coming on. Let’s talk about what we looked at, right? We got an understanding of what the trust and safety role is. Why does it exist? Who is it protecting? How is it protecting us? And from what? I think the “from what” is an amazing thing to understand, right? It’s not just the F word or derogatory terms, it’s trafficking, it’s grooming, it’s all this other scary stuff. The day in the life of a moderator, right? It’s not an easy job. From the outside looking in, you might think they’re a customer support rep that’s just taking a look at bad pictures, but there’s a lot that goes into it. And I appreciate both of you for sharing your stories about that. I appreciate what each of your companies is doing and the message you’re trying to spread about mental health, about protecting the individuals who are in these roles, about how AI and safety tooling can help, how the escalation path works, and how quickly you can react. As you just said, Sharon, time makes a big difference. I think that’s all important stuff. I think this is fantastic, and this was, again, just a really great learning experience for me. The most important thing I learned, you said it in the beginning, Sharon, and Chris, you alluded to it as well, is that we are all a team, whether it be Keywords or Modulate or 5CA or other competitors, right? Everyone in this role wants to make a difference. It’s for the greater good of all of mankind together. So it’s important to create these partnerships to make sure that you can understand and share the technology, and how everyone can be a superhero. So I appreciate both of you again, and I’ll give you some final words before we sign off. Chris, thank you for jumping on today. Is there anything you’d like to share, and can you let us know how we can find you?

Chris James: 59:49: 01:00:21: Oh, thank you. Yeah. So I’m just Chris James on LinkedIn and on Instagram. And I just want to say thank you for having us on. It’s been great to talk with both of you. I think this is a really, really important issue. And I’m happy that we have this platform to do so. And I’m happy we have Sharon as well here because she has just so much experience. But I also just thank you, Greg, for hosting and keeping us here. Really happy to talk about this stuff.

Greg Posner: 01:00:21: 01:00:22: Thank you, Sharon.

Sharon Fisher: 01:00:24: 01:02:17: Sharon Fisher on LinkedIn. You don’t want to see my social media, that’s personal, but for trust and safety, I’m here. I think the message that I want to leave with is that moderation is not the evil of this game, literally, but it is also not the savior of all of it. So when you are looking into what your kids are going to play, what you’re going to allow in your house and not, you really have to look into it yourself. Personally, I have a nine- and a 13-year-old. The 13-year-old is not too happy with me right now, because he’s not on every single social media platform that everybody else is. But a rule of thumb for me is: I need to play the game that he wants to download. I need to look into the advertising. I need to look into the mechanics of the game. Can they talk to each other, and so on? Do that first as a parent, and I’m talking obviously on the personal side of things. But even if you are doing it for yourself, looking into that first, to protect yourself, will be the first layer. And then hopefully you can do a little bit more digging into: is this app or game actually moderated? Is there actual technology that will protect my kid, or myself, from this toxic content? That’s the holiday message that I give, while you are trying to figure out what you’re buying for your kids. Other than that, superheroes around the globe should be seen for who they are, but also supported, so they can continue to protect us and the real-life world from all of the challenges that we just talked about. And I’m also very grateful to have Chris with me, because I think now you can tell why Modulate and Keywords Studios are such good partners.

Greg Posner: 01:02:17: 01:02:50: Yeah, I think it was really informational, and I think that’s a great point, Sharon. For anyone out there buying a system or getting new games for the holiday season: have fun, and know that there are superheroes in the background looking out to make sure you’re safe. But at the end of the day, it also comes down to you, or your parents, making sure they are looking at the content you’re playing. Because no matter what’s happening in Call of Duty, if you’re too young, you probably shouldn’t be seeing what’s happening in Call of Duty, period. So, great stuff by everyone. Thank you everyone for listening today. I hope everyone has a great holiday season, and I hope you have a great rest of your day.

Greg Posner

Avid gamer with a passion for storytelling. My goal is to unpack the narratives of customers, partners and others to better understand how industry-leaders tackle today's challenges.
