If you’ve been on the Internet talking to strangers for any length of time, you’ve probably gotten into a flame war or seen one inexplicably happen in the middle of an otherwise normal group discussion. While sometimes entertaining, they’re incredibly disruptive.
When online discussions devolve into heated arguments, typically sparked by misunderstanding and miscommunication, the platform's value to both its users and its owners plummets.
About 15 years ago, I spent an inordinate amount of time volunteering as a moderator on a large web forum. We ran into the same issues day after day, with no automation to stop them. Recently, someone on Twitter started an unrelated flame war in the middle of a technical conversation I was having with about 50 other people. The same predictable, recurring problems that made it hard to run a web community 15 years ago remain common today, not just on old-school discussion forums but on every modern social media platform.
I recently got a resolution notification from Twitter about a post I had flagged in another flame war more than a month prior. Why did that take a month?! There's no good mechanism to immediately "call for an adult" when people start throwing tantrums online. So, the idea for this tool, the Harmony chatbot, is rooted in a persistent issue that plagues millions of users and pulls significant resources out of massive companies every hour of every day. Let's help them solve it.
The solution is a chatbot with moderation rights that uses a decision tree, built on professional mediation and de-escalation techniques, to intercede in escalating disputes. When a conversation heats up, users flag the conflict, and our "cooler bot" takes the temperature back down: it separates the users in conflict and either helps them talk out their problems or, as a last resort, suspends their posting privileges for a set period of time.
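The escalation flow described above could be sketched as a simple decision tree. This is a minimal illustration, not a finished design; the `Flag` structure, severity levels, and intervention names are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """A user-submitted report of an escalating conversation (hypothetical schema)."""
    thread_id: str
    flagged_user: str
    severity: int  # 1 = heated, 2 = personal attacks, 3 = abuse

def choose_intervention(flag: Flag, prior_offenses: int) -> str:
    """Walk a simple decision tree from the lightest response to the heaviest."""
    if flag.severity == 1 and prior_offenses == 0:
        return "cool_down_prompt"      # invite both parties to restate calmly
    if flag.severity <= 2 and prior_offenses < 3:
        return "split_thread"          # separate the users into private mediation
    return "temporary_suspension"      # last resort: pause posting privileges

# A first-time heated exchange gets the gentlest response.
print(choose_intervention(Flag("t1", "user_a", severity=1), prior_offenses=0))
```

In a real deployment the branch conditions would come from the mediation playbook rather than hard-coded thresholds, but the ordering principle stands: always try the least intrusive intervention first.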
Are the users arguing over a news story? Harmony can pull up a link to a Snopes article or identify material from satirical websites. Does the instigating user seem like they just want attention? Harmony will have strategies to help that user feel better, rather than simply silencing them to cover up the problem. It also logs user behavior and alerts administrators so they can directly address the users causing the most recurring problems, perhaps even connecting them with low-cost professional counseling services if a serious behavior problem is detected. We'll identify other issues that Harmony might be able to resolve as we proceed through the development process. This is going to be a very helpful bot connected to a lot of great tools and resources, allowing it to help millions of people.
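The "log behavior and alert administrators" idea can be shown with a small sketch: count incidents per user and escalate to a human once a threshold is crossed. The class name and threshold are illustrative assumptions, not part of any existing design.

```python
from collections import Counter

class OffenseLog:
    """Tracks moderation incidents per user and surfaces repeat offenders."""

    def __init__(self, alert_threshold: int = 3):
        self.alert_threshold = alert_threshold  # hypothetical cutoff for human review
        self.incidents = Counter()

    def record(self, user: str) -> bool:
        """Log one incident; return True when the user should be
        escalated to a human administrator."""
        self.incidents[user] += 1
        return self.incidents[user] >= self.alert_threshold

log = OffenseLog()
log.record("user_a")
log.record("user_a")
print(log.record("user_a"))  # third incident crosses the threshold: True
```

A production version would persist these counts and decay them over time so one bad week doesn't follow a user forever, but the alerting logic reduces to this kind of thresholded counter.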
Collectively, social media companies spend hundreds of millions of dollars and millions of person-hours per year on what is essentially ineffective moderation of user-generated content. Even when content is flagged under current approaches, the worst-offending users don't learn to behave better; they just learn to avoid getting caught. There is a huge opportunity here to build a tool that changes behavior instead of just hiding it. The endgame business model is selling API access or code licenses to enterprise-class clients, but we will also make low-cost access available to non-profits and companies with smaller user bases, possibly via a branded WordPress plugin and other similar pre-built tools.