For the purposes of this post, “AI” generally refers to LLMs such as ChatGPT.
- Moderating form submissions – WSForms apparently integrates with an OpenAI endpoint for content moderation and can reject submissions that contain harmful content (imagine if social networks employed this!)
- Knowledge base chatbots – Seth Godin and Laracasts now have chatbots trained on their own content that can respond to questions. The Seth Godin one surfaces relevant articles from the site while it writes a summarised response, and keeps the reference links afterwards
- Training an LLM on your digital journal so you can ask it questions – imagine if your children or grandchildren could ask your journal a question to know you or the events of your life better… what would be the optimal way of writing journal entries to paint a true picture of yourself?
- Transcribing fictional conversations between historical figures or other people of note – suggested by Matt Mullenweg in an interview I watched. He suggested you could take AIs trained on several individuals’ bodies of work, their blogs for example, and then prompt an AI to script out the conversation between those people if they were asked a question on a specific topic. It could then answer almost as an avatar for each individual, based on their own writing
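The form-moderation idea above can be sketched in a few lines. This is a minimal sketch assuming OpenAI’s `/v1/moderations` endpoint; the `handle_submission` gate and the injectable `post` callable are my own illustrative assumptions, not how WSForms actually wires it up.

```python
import json
import urllib.request

OPENAI_MODERATION_URL = "https://api.openai.com/v1/moderations"


def moderate(text, api_key, post=None):
    """Return True if the moderation endpoint flags `text` as harmful.

    `post` is injectable so the gate logic can be exercised without a
    network; by default it calls OpenAI's /v1/moderations endpoint.
    """
    if post is None:
        def post(payload):
            req = urllib.request.Request(
                OPENAI_MODERATION_URL,
                data=json.dumps(payload).encode(),
                headers={
                    "Content-Type": "application/json",
                    "Authorization": f"Bearer {api_key}",
                },
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)

    response = post({"input": text})
    # The endpoint returns a list of results, each with a `flagged` boolean.
    return any(result["flagged"] for result in response["results"])


def handle_submission(text, api_key, post=None):
    # Reject flagged submissions before they ever reach the inbox.
    if moderate(text, api_key, post=post):
        return "rejected"
    return "accepted"
```

The same pattern would work for comments or social posts: screen first, store only what passes.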
What I’d really like to see is the major social networks making use of AI to seriously tackle moderation and disinformation.
AI tools will be able to filter hate speech, harmful imagery and propaganda at a scale human moderators can’t match. Ideally you’d use the AI to ring-fence anything it flags as potentially harmful immediately, so that it can’t be seen by anyone else. Your options as the poster would be to either delete the content or appeal the AI’s decision, which would then flag it for human moderation. This kind of thing is never simple, so I imagine I’m missing a few things here, but it could be a start.
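The flag → quarantine → appeal flow described above could be sketched as a small state machine. The state names, function signatures and the stand-in `classifier` are illustrative assumptions, not any network’s real moderation API.

```python
from dataclasses import dataclass

# Hypothetical post states for the flag -> quarantine -> appeal flow.
VISIBLE = "visible"
QUARANTINED = "quarantined"
DELETED = "deleted"
HUMAN_REVIEW = "human_review"


@dataclass
class Post:
    text: str
    state: str = VISIBLE


def ai_screen(post, classifier):
    # `classifier` stands in for an AI moderation model; it returns True
    # when content looks harmful. Flagged posts are hidden immediately.
    if classifier(post.text):
        post.state = QUARANTINED
    return post.state


def poster_response(post, action):
    # Once quarantined, the poster can delete the post or appeal,
    # which escalates it to a human moderator.
    assert post.state == QUARANTINED
    post.state = DELETED if action == "delete" else HUMAN_REVIEW
    return post.state


def human_decision(post, allow):
    # A human moderator either restores the post or removes it for good.
    assert post.state == HUMAN_REVIEW
    post.state = VISIBLE if allow else DELETED
    return post.state
```

The key property is that a flagged post is invisible to everyone from the instant the AI flags it until a human (or the poster) resolves it.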