The background: UK could ban social media over suicide images, minister warns
I'm wondering how this differs from saying the UK will ban trains if the train operators don't stop people throwing themselves under their Intercity engines. Are Facebook, Instagram and Pinterest, to name those mentioned in the article, meant to employ millions of mental health experts to preview every post for its potential health implications?
The long-term answer is that these companies will deploy deep neural nets to recognise potentially harmful content and intervene automatically at the moment of posting, but those nets have to learn what's harmful and what isn't. How does anyone know what's harmful? Either they're experts in mental health or they learn from experience. If you let a neural net follow the online trail of people who take their own lives, it can eventually learn to predict the potential harm of any new image. Give the net the power to censor such material on the basis of its prediction, and the imagery and text visible through that portal might be reduced. Does that apply to ForumGarden too? Are we meant to develop and deploy that sort of tool here?
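For what it's worth, the sort of automatic gate I'm describing could be sketched like this. This is purely illustrative: harm_score here is a toy keyword check standing in for a trained neural net, and nothing in it reflects what any of these platforms actually run.

```python
def harm_score(post_text: str) -> float:
    """Stand-in for a neural net's predicted probability of harm.

    A deployed system would run the post (image and text) through a
    trained model; here we fake the score with a keyword check purely
    to show where such a prediction would slot in.
    """
    flagged_terms = {"suicide", "selfharm"}
    words = set(post_text.lower().split())
    return 1.0 if words & flagged_terms else 0.0


def moderate(post_text: str, threshold: float = 0.5) -> str:
    """Decide at the moment of posting whether to publish or hold."""
    if harm_score(post_text) >= threshold:
        return "held for review"
    return "published"


print(moderate("nice holiday photos"))   # published
print(moderate("suicide imagery here"))  # held for review
```

The hard part, of course, isn't the gate itself but the model behind harm_score, and the threshold is a policy decision, not a technical one.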
I think the minister is talking out of his arse, if anyone would like my opinion.