Not sure how long this has been around, but I just learned this was a thing.
Call me paranoid, but it’s only a matter of time before this gets horribly abused. Sure, for now it’s (arguably) for a good cause, though I wonder how effective it even is for suicide prevention; it sure didn’t help the girl in the article much, she just got smarter about it.
But who gets to decide what gets flagged, why it gets flagged, and what the repercussions are? For now it’s about something everyone can get behind, mental health, but did anyone ask for this? Did anyone sign up for it? It sounds like they just arbitrarily decided to enact this. So what’s to stop them from arbitrarily adding something else to monitor: drugs, porn, and then more? Alerting abusive parents to their teen’s social life as the teen tries to build a life away from them, under the guise of “protecting them from grooming”? It could even undermine attempts to move away from an unsafe environment, because the same keywords meant to prevent some sort of abduction could just as easily be triggered by a teen trying to leave.
Not to mention how icky it is to me that some corporation is storing all this mental health info about a large number of children. Sounds like somehow they’re skirting around HIPAA.
LSeww on
Haha, classic: spy on 100% of teenagers under the pretense of helping 0.05% of them.
PuddlesRH on
There is a thin line between free will and behavior programming and/or censorship. I hope this stays on the right side.
evilspyboy on
Honestly, using a large language model as a black-box device is probably one of the few preferable ways of handling and monitoring social media. As long as it stays a black box and its escalation path is to hand off to, say, a linked parent or guardian account… BUT if it does that, it also needs the ability to override and escalate to something like a protective services contact, in case the harm the LLM flags is actually coming from the parent or guardian.
I’m in Australia, and the social media ban forced me to think through a couple of specific approaches that would be less harmful than a full ban. This was at the top of the list in terms of using technology; there are a lot of ways to go about this, and not all of them involve tech or are purely tech answers.
I should probably add that the above black box would also need to come with legislation preventing advertising to children/underage users, to stop the LLM from being used for… anything else.
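To make the routing concrete, a rough sketch of that escalation path might look like the code below. Every name and field in it is hypothetical; it only illustrates the “override the guardian hand-off” idea, not an actual implementation.

# Hypothetical sketch of the escalation routing described above -- not a real
# API, just illustrating the "black box with an override path" idea.
from dataclasses import dataclass
from enum import Enum, auto

class Escalation(Enum):
    NONE = auto()
    GUARDIAN = auto()             # hand off to the linked parent/guardian account
    PROTECTIVE_SERVICES = auto()  # override when the guardian is the suspected source of harm

@dataclass
class Flag:
    """Output of the black-box LLM: only a routing decision, no underlying content."""
    harm_detected: bool
    suspected_source_is_guardian: bool

def route(flag: Flag) -> Escalation:
    if not flag.harm_detected:
        return Escalation.NONE
    # Key point: the guardian hand-off must be overridable, because the
    # flagged harm may be coming from the guardian themselves.
    if flag.suspected_source_is_guardian:
        return Escalation.PROTECTIVE_SERVICES
    return Escalation.GUARDIAN

# Example: a flag whose suspected source is the guardian skips the guardian entirely.
assert route(Flag(harm_detected=True, suspected_source_is_guardian=True)) is Escalation.PROTECTIVE_SERVICES
assert route(Flag(harm_detected=True, suspected_source_is_guardian=False)) is Escalation.GUARDIAN

The point of keeping the decision logic that small is that the black box only ever emits a routing decision, never the content it was looking at.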
Lord_Stabbington on
Whatever the reason for its existence, however it is applied, someone will find a way to exploit it.