Meta – 社区黑料 America's Education News Source

Gen Z Increasingly Skeptical of — And Angry About — Artificial Intelligence /article/gen-z-increasingly-skeptical-of-and-angry-about-artificial-intelligence/ Thu, 09 Apr 2026 04:01:00 +0000

While some might envision Gen Z welcoming artificial intelligence into their lives, a new Gallup survey finds people between the ages of 14 and 29 are becoming increasingly skeptical of — and downright mad at — AI.

Compared with a year earlier, they're less excited and hopeful about the change it could bring and more angry at its existence, citing concerns about AI's impact on their cognitive abilities and professional opportunities.

Respondents said they used AI at nearly the same rate they did before — they reported only a slight increase in daily and weekly exposure — but when asked how it makes them feel, the answers revealed growing misgivings.

Thirty-one percent said it made them angry, up 9 percentage points from 2025. And just 22% said it made them feel excited, down 14 percentage points from last year. Only 18% of respondents said it made them feel hopeful, marking a nine-point drop. Forty-two percent said it made them feel anxious, roughly the same as last year. 

Zach Hrynowski, senior education researcher at Gallup, said the switch was swift. 

"One of my working theories is that (it's) the high schoolers, who are in their senior year, or especially those college students, who are maybe thinking, 'AI is taking my job. I just went to college for four years: I spent all this money and now it's turning my industry upside down,'" he said.

Only 46% of respondents believed AI would help them learn faster, down from 53% the prior year, Gallup found. Fifty-six percent said it would help them expedite their work, compared with 66% last year.

Hrynowski notes, too, that users' unease wasn't entirely tied to the amount of time they spend engaging with AI.

"Year over year, among that super user group, they're much less excited, they are much less hopeful — and they are more angry," he said. "So this is not a case of some people who are adopting it and loving it and some people who are just avoiding it and feel negatively about it."

Nearly half of respondents said the risk of the technology outweighs the benefits in the workforce. Just 37% believed it would help them find accurate information, down from 43% the prior year, and only 31% believed it would help them come up with new ideas, compared with 42% in 2025.

The survey also notes some disparities by age and race. For example, older Gen Zers are more likely than younger ones to voice concerns about AI鈥檚 impact on learning in general. 

Asked how likely it is that AI designed mainly to complete tasks faster will make learning more difficult in the future, 74% of K-12 respondents said it was "very likely" or "somewhat likely," compared with 83% of Gen Z adults who said the same. Men and Black respondents were also less concerned about learning impact than their peers overall.

Results are based on a survey of 1,572 people spread throughout every state and Washington, D.C., conducted between Feb. 24 and March 4, 2026. It was commissioned by the Walton Family Foundation and Global Silicon Valley. Together, the Walton Family Foundation and Gallup are conducting ongoing research into Gen Z's attitudes toward AI.

Hrynowski believes there might be a link between recent revelations about the harmful nature of social media and AI-related distrust: Many of the respondents came of age, he notes, just as former surgeon general Vivek H. Murthy called for a warning label about its use.

Design shapes the user experience in social media. Just last month, a California jury found that social media company Meta — owner of Facebook, Instagram, WhatsApp, Messenger and Threads — and YouTube injured a young woman's mental health by design, in a decision that could encourage untold others.

This was the second of two critical decisions: Just a day earlier, a New Mexico jury found Meta misled the public about how safe its platforms are for children — and hid what it knew about child sexual exploitation on its platforms.

I’ve always been very impressed from the start of this work with Gen Z that across the board, not just with AI, they are keenly aware of the risks of technology, whether it’s social media, whether it’s AI or screen time,鈥 Hrynowski said. 

They are not the only generation to harbor these worries. A growing number of parents of K-12 students are pushing back on their screen time as well.

Despite respondents' skepticism about AI, they're also readily aware that the technology won't be walked back: 52% acknowledge that they will need to know how to use AI if they go to college or take classes after high school, while 48% think they will need to know how to use AI in the workplace.

An earlier Gallup study, released just last week, shows 42% of bachelor’s degree students have reconsidered their major because of AI.

Gen Z, in its reluctant acceptance of the technology, wants help in how to navigate it, both in an academic setting and in the workplace. Schools are stepping up, the survey revealed: The share of K-12 students who say their school has AI rules moved from 51% in 2025 to 74% this year.

Disclosure: Walton Family Foundation provides financial support to 社区黑料.

Meta and YouTube Ordered to Pay $3M to Young Woman in Social Media Addiction Trial /article/meta-and-youtube-ordered-to-pay-3m-to-young-woman-in-social-media-addiction-trial/ Fri, 27 Mar 2026 16:30:00 +0000

This article was originally published in The 19th.

After nine days of deliberation, a Los Angeles jury found Google and Meta liable for harms stemming from the design of their social media products on Wednesday and ordered them to pay $3 million in compensatory damages to a plaintiff who said that Instagram and YouTube caused depression, body dysmorphia and suicidal thoughts.

Meta was found responsible for 70 percent of damages and YouTube the rest. The amount owed the plaintiff may rise, as the jury will deliberate over potential punitive damages for egregious conduct, per The New York Times.

This is the first trial tackling the legal question of whether features of social media, like autoplay, infinite scroll and beauty filters, can cause harm to users.

"This momentous verdict shows that tech companies will be held accountable for the harm they cause. These companies have spent years choosing profit over people's well-being, and now a jury has decided they must pay the price for their actions," said Maddy Batt, a legal fellow at Tech Justice Project, a law firm specializing in suits against AI chatbots.

The plaintiff, KGM, filed her lawsuit using a pseudonym in 2023. KGM, now 20, says she has been addicted to social media since she was a child. It was one of three cases selected out of thousands as "bellwether trials" to test out a new theory of liability.

Batt cautioned that the outcome of this trial doesn't mean "an automatic legal win" for the thousands of pending cases, as determining causation varies greatly given the circumstances. "Each individual plaintiff still does have to show, if they go to trial, that any negative mental health outcomes they personally experienced were linked to social media," she said.

It is a huge boon to tech accountability advocates to see this success, though, Batt said, and could lead to tech companies changing their products because of the amount of money in play to settle cases or pay damages. This jury decision, coupled with a $375 million verdict against Meta announced yesterday, is the first step to achieving that goal.

New Mexico Attorney General Raúl Torrez sued Meta in 2023, alleging the company misled constituents over how safe its platforms are for children. State prosecutors focused specifically on Instagram's potential to facilitate the sexual exploitation of kids.

On Tuesday, a jury sided with New Mexico, saying the company also engaged in deceptive trade practices. Meta was ordered to pay $5,000 per violation — $375 million total. Torrez plans to pursue further remedies at a future bench trial, and hopes to compel changes to the platform. Meta said it plans to appeal.

Batt pointed out that this trial is the first time tech leaders like Mark Zuckerberg have had to make a case and submit to questioning in front of a jury of their peers. (The CEO did not take the stand in the New Mexico case.) Large tech companies have faced a public backlash over the past decade, and much of it has revolved around their products鈥 impact on the mental health of young people.

Frances Haugen, a whistleblower, leaked internal research documents from the company previously known as Facebook showing girls reported their eating disorders worsening after using Instagram. Social media use can prompt girls to compare and criticize their own bodies, and many companies struggle to moderate such content on their platforms.

Over two-thirds of teenage girls reported using Instagram, more than boys. A quarter each of Black and Latinx teens said they use Instagram and YouTube "constantly," according to a survey by Pew Research Center.

Google argued that YouTube was not social media, while Meta pointed to other causes of KGM's anxiety, depression and body dysmorphia. Meta's lawyers deconstructed KGM's home environment, alleging her parents' divorce and treatment by her mother were the root cause of her emotional pain. The companies also argued that it wasn't the way their products were designed that caused problems, but rather the specific content seen.

KGM originally named the companies behind Snapchat and TikTok in the lawsuit, but those parties settled for an undisclosed sum before the trial started. The trial focused on Instagram and Facebook, both Meta products, and YouTube, which is owned by Google.

The burden was on KGM's lawyers to prove that Meta and Google were negligent in their design of social media products and show that those same products caused the plaintiff's mental health issues. The jury agreed with those arguments.

KGM testified that features like notifications kept drawing her back, and she was unable to stop whenever she tried to limit her usage. She said she started her first Instagram account at age 9 and joined YouTube at age 10, even though legally kids aren't supposed to have online accounts before they're 13. Almost all of her Instagram posts had image filters on them, and KGM said she didn't feel bad about her body until she began using the platform.

The tech accountability watchdogs who rallied behind KGM are ecstatic over this win. "The era of Big Tech invincibility is over," said Sacha Haworth, executive director of The Tech Oversight Project, in a statement.

For parents who have lost their kids to what many describe as social media-related harms, this is a moment of vindication.

"For years, families have been told this was a parenting issue, but the jury saw the truth: these companies made deliberate decisions to prioritize growth and profit over kids' safety," said Shelby Knox, director of online safety campaigns at nonprofit ParentsTogether.

Social media companies have been battling allegations of harm, particularly to kids, for years. Most of the claims are easily dismissed under Section 230, the law that says a platform isn't held liable for third-party content it hosts. But these bellwether cases are testing whether the design of products like YouTube, Facebook and Instagram is inherently harmful. Plaintiffs have pointed to the impacts of features such as infinite scroll and face filters as harmful regardless of the content being shared.

The case concludes as Congress works to pass a package of internet bills aimed at protecting children online, but that critics say may lead to the removal of digital content — a particular concern given the Trump administration's policy positions.

In her statement, Haworth at The Tech Oversight Project called on lawmakers to pass the Kids Online Safety Act, one of the most hotly debated pieces of tech legislation in recent years. It has failed to pass the House since it was first introduced in 2022, but now is being considered as part of the aforementioned package.

鈥淚t’s good that people are suing these companies and winning in court to reduce their power and force them to change their policies,鈥 said Evan Greer, director of digital rights nonprofit Fight For The Future, to The 19th. But she鈥檚 concerned how the verdict in KGM鈥檚 case will be used to advocate for laws that she says could threaten free speech online.

Greer pointed to the way activists are using social platforms to monitor abuses by Immigration and Customs Enforcement, advocate for human rights and discuss accusations of sexual abuse against people like Jeffrey Epstein. "We need policies that address corporate abuse without kneecapping the ability of front-line activists to use social media to change the world," she said.

Jess Miers, associate professor of law at the University of Akron School of Law, is concerned about the long-term consequences of the verdict. While these cases focus on the way platforms are designed, she said, in practice there isn't a strong delineation between content and feature design.

"Autoplay is only engaging because of what it plays," she told The 19th. "Infinite scroll only retains users because of what it surfaces." She pointed out many apps use these kinds of features, but those aren't the ones being sued.

Thus, liability tied to design will inevitably trickle down to judgments about content. "The only practical way to reduce the risks alleged in these suits is to restrict or suppress categories of content that might later be characterized as harmful or 'addictive,'" she noted.

And what's the content most likely to be labeled as harmful? "History shows they expand to cover disfavored speech — whether that's reproductive health information, gender-affirming care, or speech about policing and immigration enforcement," she said.

"The people most likely to be affected are those who already rely on the Internet as a primary space for connection and support," Miers said — like disabled people, LGBTQ+ youth or people looking for accurate information on contraception.

This story was originally reported by Jasmine Mithani of The 19th.

Bernstein: 鈥楾here’s a Window of Opportunity to Create Change鈥 in AI Chatbots /article/bernstein-theres-a-window-of-opportunity-to-create-change-in-ai-chatbots/ Tue, 18 Nov 2025 13:30:00 +0000 /?post_type=article&p=1023580 The chatbot developer has said it will ban users under 18 years old from using its virtual companions, an unprecedented move that comes after the mother of a 14-year-old user sued the company in last year, saying the boy talked to a Character.AI chatbot almost constantly in the months before he killed himself in February 2024. 

The "dangerous and untested" chatbot, the mother said, "abused and preyed on my son, manipulating him into taking his own life." It essentially assisted his suicide, the mother alleges, prompting him to isolate from friends and family and at one point even asking if he had a suicide plan, according to the lawsuit.




In its Oct. 29 announcement, the company said the change will go into effect no later than Nov. 25. Character.AI will limit teen users to two hours per day with chatbots before then, ramping it down in the coming weeks.

It also said it will establish its own AI Safety Lab, an independent non-profit "dedicated to innovating safety alignment for next-generation AI entertainment features."

To offer perspective on the move and on issues surrounding AI safety, privacy and digital addiction, 社区黑料's Greg Toppo spoke with Gaia Bernstein, a Seton Hall University law professor. Bernstein has also created a school outreach program for students and parents, introducing many for the first time to the idea of "technology overuse."

An intellectual property lawyer, Bernstein noticed around 2015 or 2016 that "things were changing around me" when it came to technology. "I had three small kids, and I realized that I would go to birthday parties — the kids are not talking to each other. They're looking at their phones! I'd go to see school plays, and I couldn't see my kids on the stage because everybody was holding their phones in front of them."

Likewise, she felt less productive "because I was constantly texting and emailing instead of focusing."

But it wasn鈥檛 until whistleblowers began revealing the hidden designs behind so many social media tools that Bernstein considered how she could help herself and others limit their use.

In 2021, the whistleblower Frances Haugen, the primary source for The Wall Street Journal's Facebook Files series, told congressional lawmakers that her employer's products "harm children, stoke division, and weaken our democracy." Creating better, safer social media was possible, Haugen said, but Facebook "is clearly not going to do so on its own."

In her testimony, Haugen zeroed in on the social media giant's algorithm and designs. In her writing and speaking, Bernstein maintains that tech companies like Facebook — rebranded as Meta — manipulate us to keep us online as long as possible, with invisible designs that "target our deepest human vulnerabilities." For instance, they use a tool called infinite scroll, prominently on display on Facebook and Instagram, in which the page never ends. "We just keep scrolling," she wrote recently. "They took away our stopping cues."

Similarly, video apps such as YouTube and TikTok rely on autoplay, in which one video automatically follows another indefinitely.

In 2023, Bernstein put her findings into a book, Unwired. Since then, dozens of state attorneys general and school districts have sued to force social media companies to reform — and Bernstein says this approach may also help parents and schools battle the growing threat of AI companion bots.

Late last month, a bipartisan group of U.S. senators introduced a bill to make AI companions off-limits to minors. Sen. Josh Hawley, R-Mo., a co-sponsor, said more than 70% of kids now use them. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide," he wrote. "We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."

The move comes weeks after the Federal Trade Commission said it was investigating seven chatbot developers, saying it was looking into "how these firms measure, test and monitor potentially negative impacts of this technology on children and teens."

In her conversation with 社区黑料, Bernstein said the FTC probe amounts to "another pressure point" that may help change how tech companies operate. "But it's not just the FTC. It's the lawsuits, and it's bad PR that comes from the lawsuits, and hopefully there'll be regulation. Litigation is expensive. Investors might not want to invest in these new products because there's risk."

This conversation has been edited for clarity and length.

The obvious interest we have in this is that we're seeing Character.AI's new policy, which limits access to its chatbot companions to users 18 or older. I imagine folks like you would say it's only the first step.

Just the fact that they are taking some precautions means hopefully some kids will not be exposed to what's been happening — convincing them to kill themselves, convincing them to not talk to their parents, to stay away from their friends. That's a good thing.

On the other hand?

I’ve researched how tech companies, especially Meta and other companies, have been behaving for years. So I’m a bit suspicious, because we tend to see these kinds of moves when they’re threatened legally. So it’s not so surprising that it’s happening. They’re under pressure.

In my mind, there are two questions: First of all, what will this look like exactly? In the past, for example, you would see Meta, every time there’s a big privacy breach, they would apologize and say, “We’re fixing it,” and they’ll fix something small and not fix the big thing. So what are they really doing? What kind of age verification mechanisms are they going to use? Secondly, they said they’re creating some space for teens. What is this going to look like? We don’t know. And I believe that until there’s real regulation at stake, we can’t be sure that they will take real precautions. 

I read a piece earlier this year in which you used the phrase "collective legal action," saying that this is what's needed to exert pressure on tech companies to change their designs, which trap users into "overuse." That's a fairly recent development, correct?

At the beginning, the people who were writing on this were mostly psychologists. Parents thought it was their own fault. The idea was, "Let me just fix my habits." It's self-help. The books that came before me were mostly talking about self-help methods. And when I was thinking about collective action, I realized: Parents can't really change things by themselves, because you can't isolate your kid and not give them a cell phone, not give them social media. It becomes an endless fight. And so I thought this has to be changed through collective action, through pressure — through governmental pressure, litigation.

Jonathan Haidt’s book talks about collective action through parents doing things together in order to not have your kid be the only one who does not have social media or a phone. The idea is that it’s not our fault. It has to be done differently.

And to your point, a lot of this is by design, whether it’s social media or games or AI companions. By design, they’re meant to keep you there, keep you in place, keep you engaged. That’s something that, until recently, was not on a lot of people’s radar.

It took whistleblowers coming out and explaining how it works to understand it as a business model. There's no accident. We're getting these products for free: Gmail for free, Facebook for free. We are paying with our time and our data. They collect data on us in order to target advertising — that's how they make money. And they need us online for as long as possible so they can collect the data — and also so we will see the ads. So they need to find ways to keep us online. And there are different mechanisms like the infinite scroll. And they come up with new ones. AI companions have new addictive mechanisms: they always flatter you. For kids it's even more addictive, but even for adults it's, "You're always doing a great job."

It’s meant to keep you talking, meant to keep you engaged. You focus a lot on games and social media, but it strikes me that AI companions make those things seem quaint in terms of their addictive qualities, or the potential for real peril.

I agree with you. If you have a spectrum where social media is addictive — people spend many hours online, and they're not interacting face-to-face — that's an issue. And you see this with AI companions too. But what's concerning about AI companions is that it's much worse for kids. If you think about it, if you're a kid and you go to middle school, kids are not nice. It's much nicer to chat with somebody who's always nice to you. Falling in love and getting your heart broken is not fun. There are many websites that just offer girlfriends that cater to you. So for me, the scariest thing is that kids will just never really develop the skills to have these relationships. And some adults may also come to prefer them.

About a year ago, I wrote a piece in which I talked to a college student, maybe 19 or 20 years old, who admitted that essentially he had outsourced advice about his romantic life to ChatGPT 鈥 he had a girlfriend, and whenever they had a fight or disagreement, he would excuse himself, go into the bathroom and ask ChatGPT what he should be doing. I can see that both ways: On the one hand, it just seems incredible. On the other hand, I can see where he’s basically looking for good advice. He’s looking for guidance. What do you make of that?

People say you can get advice, and you can practice your dating skills. I’ll give you something that happened to me, which is on a different scale: I was traveling abroad, and I was in this restaurant, and the menu was in a different language. So what did I do? I took a picture of the menu and uploaded it to ChatGPT and got it translated to English. While I was doing it, a young man came up to my partner and asked to translate. So what happened? I was already busy looking at my phone because I had a translation. My partner was speaking to this young man who was very happy to speak, and they were having a great conversation. 

That’s an example of the kind of things we’re giving up. This guy you wrote about, instead of going to the bathroom, maybe could have asked a friend, developed a deeper relationship with a friend. Maybe they would share experiences. But he gets used to getting the immediate answer from somebody else, and you didn’t develop these relationships. 

We miss out on the possibility of having a human interaction. 

Yes.

In its announcement, Character.AI actually apologized to its younger users, saying that many of them had told the company how important these characters had become to them. And I’ve heard that before. I wonder: How do we as adults start to think about the flip side of this, that it’s difficult for young people to tear themselves away from these things they’ve created? Do you have any sympathy for that?

I have concern, actually, because these kids sometimes kill themselves over these bots. So I am concerned about what will happen to kids who are very attached when these bots are suddenly gone. And you hear news stories even of adults who suddenly lost characters they were attached to. It's a bit like: How do you get people who are addicted off the addiction when you suddenly cut them off? These are things we've never even thought of.

Is there anything I haven’t asked you that you think is an important piece of this?

An important piece of this is that you don't yet have every teen, every kid, attached to an AI companion. So there's a window of opportunity to create change. Social media is much more difficult, because by the time we realized how bad it was, everybody was on social media. The money interests were so big that they would fight every law in court. So it's really important to move fast and also understand that Character.AI is a small part of the problem. Because it's not just these specialized websites like Character.AI. It's ChatGPT — one of the most recent lawsuits involved it. The AI bots in ChatGPT are becoming more human, so it's important that any action is against these bots, against the type of characteristics they have and to regulate how they behave. Just getting rid of Character.AI is not going to solve the problem.

Safety or Censorship: Congress Rushes to Pass Broad Child Online Protection Laws /article/safety-or-censorship-congress-rushes-to-pass-broad-child-online-protection-laws/ Wed, 08 May 2024 18:23:57 +0000

As Washington lawmakers scramble this week to finalize their last significant legislation before the fall presidential election — a must-pass bill to reauthorize the Federal Aviation Administration — they've tacked on more than a dozen unrelated amendments, including three online safety bills affecting students.

Taken together, the trio would create sweeping restrictions on children's access to social media, impose new requirements on social media companies to ensure their products aren't harmful to youth mental health and bolster educators' digital surveillance obligations to ensure kids aren't swiping through their favorite feeds in class.

The three separate digital safety bills have bipartisan support and lawmakers could greenlight them as part of the FAA reauthorization legislation, which faces a Friday deadline. If passed, the legislative package could potentially end years of debate on these thorny questions and would mark the most consequential effort to regulate tech companies and children鈥檚 online safety in decades.




"Parents know there's no good reason for a child to be doom-scrolling or binge-watching reels that glorify unhealthy lifestyles," Sen. Ted Cruz, a Texas Republican who is co-sponsoring The Kids Off Social Media Act, said in a statement. "Young students should have their eyes on the board, not their phones."

The move comes as lawmakers across the political spectrum sound an alarm over concerns that teens' addiction to their social media feeds — complete with algorithms designed to keep them hooked and coming back for more — has exacerbated mental health issues in young people. It follows congressional testimony accusing Meta of knowing that apps like Instagram inflamed body image issues and other negative triggers among youth but failing to act to mitigate the harm while upholding a "see no evil, hear no evil" culture.

The controversial and heavily debated bills saw new life in January after social media executives were grilled during a contentious congressional hearing and Meta CEO Mark Zuckerberg apologized to parents who said their children were damaged, and in some cases died, after the company's algorithms fed them a barrage of pernicious content.

But critics contend the provisions amount to heavy-handed and unconstitutional censorship that fails to confront the root cause of young people's anguish — and in some cases could hurt them by limiting their access to educational materials, blocking information designed to help them deal with mental health issues or by subjecting them to greater online surveillance.

Meta CEO Mark Zuckerberg apologizes during a January Senate committee hearing to families who say their children suffered emotional anguish, and in some cases died, as a result of their social media use. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

The three amendments are:

  • The Kids Online Safety Act would require tech companies to "exercise reasonable care" to ensure their services don't surface in children's feeds material deemed harmful, including posts that promote suicide, eating disorders and sexual exploitation.

    First introduced in 2022, the legislation would also require tools that would give parents greater ability to monitor their children's online activities and mandate tech companies enable their most restrictive privacy settings for their youngest users by default.
  • The Children and Teens' Online Privacy Protection Act, also known as COPPA 2.0, amends a 1998 law that requires tech companies to receive parental consent before collecting data about children under 13 years old. COPPA 2.0 would extend existing requirements to children under 16, ban targeted advertising for children and require tech companies to delete data collected about children upon parental request.
  • The Kids Off Social Media Act, introduced last week by Cruz and Hawaii Democratic Sen. Brian Schatz, would prohibit children under 13 years old from creating social media accounts and restrict tech companies from using algorithms to serve content to children under 17. It would also require schools that receive federal internet connectivity funding to block students' access to social media sites on campus networks.

The bill's provisions have faced widespread pushback from digital rights and privacy advocates, including the nonprofit Electronic Frontier Foundation, which called it an unconstitutional infringement that "replaces parents' choices about what their children can do online with a government-mandated prohibition."


On Tuesday, TikTok and its Chinese parent company ByteDance sued to block a law that bans the popular social media app in the U.S. unless it sells the platform to an approved buyer, accusing the government of stifling free speech and unfairly singling it out based on unfounded accusations it poses a national security threat.

In March, Georgia joined several states — including Louisiana, Arkansas, Texas and Utah — in moving to impose new parental consent requirements for children to create social media accounts. The Georgia law also bans social media use on school devices and creates age verification requirements for porn websites.

Aliya Bhatia (Center for Democracy & Technology)

Aliya Bhatia, a policy analyst at the nonprofit Center for Democracy and Technology, said that each bill now included in the FAA reauthorization act has been the subject of debate and opposition. Including them in unrelated, must-pass legislation with a short deadline, she said, "undermines the active conversations that are happening" about the bills, which she said are "just not ready for prime time."

The Kids Online Safety Act, which has bipartisan support in the Senate, is endorsed by a host of organizations, including the American Psychological Association, Common Sense Media and the American Academy of Pediatrics, which argue the rules could protect youth from the corrosive effects of social media. 

At the same time, the legislation, which has differing House and Senate versions, has also drawn opposition from civil rights groups and those representing LGBTQ+ students. The groups argue the bill amounts to government censorship with a likely disparate impact on LGBTQ+ youth and students of color. The Heritage Foundation, a conservative think tank, has endorsed the legislation as a way to restrict youth access to LGBTQ+ content, arguing that "keeping trans content away from children is protecting kids." 

Privacy advocates have warned the legislation could result in age-verification requirements across the internet that could require online users of all ages to provide identifying information to web platforms. 

Meanwhile, social media's effects on youth mental well-being remain the subject of research and debate. In a report last year, the American Psychological Association noted that while social media use "is not inherently beneficial or harmful to young people," the platforms should not surface to their young users content that encourages them to engage in risky behaviors or is discriminatory. 

In a 2023 advisory, Surgeon General Vivek Murthy noted that social media use is nearly universal among young people, with more than a third of teens saying they use the apps "almost constantly." While its impact on youth mental health isn't fully understood, Murthy said, emerging research suggests that its use can be harmful — perpetuating a national youth mental health crisis "that we must urgently address." 

The Kids Off Social Media Act, which would prohibit youth access to sites like Instagram, builds on the Children's Internet Protection Act, a law that requires schools and libraries to monitor and filter youth internet use as a condition of receiving federal E-Rate internet connectivity funding. In response, schools nationwide have adopted digital surveillance tools that use algorithms to sift through billions of student communications to identify problematic online behaviors.

Meanwhile, a recent report found that web filters regularly used in schools do more than keep kids from goofing off in class. They also routinely limit students' access to homework materials, educationally appropriate information about sexual and reproductive health and resources designed to prevent youth suicides. 

For years, privacy advocates have called on the Federal Communications Commission to clarify how the rules apply to the modern internet and have argued that schools' tech-driven monitoring efforts go far beyond their original intent. 

When the law went into effect in 2001, monitoring "quite literally meant looking over a kid's shoulder as they used the computer," said Kristin Woelfel, a policy counsel at the Center for Democracy and Technology, but in 2024 student monitoring has become "a very specific term that now means really pervasive and technical surveillance." 

In a survey of students, parents and teachers last year, the nonprofit found a majority supported digital activity monitoring in schools. Yet nearly three-quarters of youth said that filtering and blocking technology made it more difficult to complete some homework — a challenge reported more often among LGBTQ+ students — and that the tools routinely led to disciplinary actions and police involvement. 

"They don't work as people think they do," she said. "That, coupled with data that shows it's actually detrimental to students, indicates even more that this is not the right path forward." 

In a letter to lawmakers last week, a coalition of education nonprofits including the American Library Association and the Consortium for School Networking expressed concern about attaching social media limitations to E-Rate funding, which schools rely on to facilitate learning. 

"Schools and libraries will face delays or denials of E-rate funding due to allegations of non-compliance," the groups wrote, arguing that it would give federal authorities control over social media policies that should be left to local officials. "The bill's provisions seem to suggest that technology-driven learning models are always harmful, even when carefully crafted to promote educational purposes. In fact, there are several social media uses that can be beneficial for education and learning."

Sen. Ted Cruz, a Republican of Texas, questions Meta CEO Mark Zuckerberg during a January Senate committee hearing about child sexual exploitation on the internet. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

In a statement announcing the legislation, Schatz offered the opposite perspective.

"There is no good reason for a nine-year-old to be on Instagram or TikTok," he said. "There just isn't. The growing evidence is clear: social media is making kids more depressed, more anxious, and more suicidal."

In justifying the legislation, Schatz cites the work of psychologist and author Jonathan Haidt, who argues in his new book that young people — and girls, in particular — face a "tidal wave" of anguish that can be traced back to the rise of smartphones. 

Haidt's characterization of tech's role in youth well-being has drawn criticism, including from developmental psychologist Candice Odgers, who argued that claims "that digital technologies are rewiring our children's brains and causing an epidemic of mental illness is not supported by science." 

Among the evidence is a study that tracked the well-being of nearly 1 million people ages 13 to 34 and 35 and over as social media was being adopted in 72 countries and found "no evidence suggesting that the global penetration of social media is associated with widespread psychological harm."

Lawmakers Duel With Tech Execs on Social Media Harms to Youth Mental Health /article/senate-grills-tech-ceos-on-social-media-harms/ Wed, 31 Jan 2024 23:20:00 +0000 /?post_type=article&p=721450

During a hostile Senate hearing Wednesday that sometimes devolved into bickering, lawmakers from across the political spectrum accused social media companies of failing to protect young people online and pushed rules that would hold Big Tech accountable for youth suicides and child sexual exploitation. 

The Senate Judiciary Committee hearing in Washington, D.C., was the latest act in a bipartisan effort to bolster federal regulations on social media platforms like Instagram and TikTok amid a growing chorus of parents and adolescent mental health experts warning the services have harmed youth well-being and, in some cases, pushed them to suicide. 

In an unprecedented moment, Meta founder and CEO Mark Zuckerberg, at the urging of Missouri Republican Sen. Josh Hawley, stood up and turned around to face the audience, apologizing to the parents in attendance who said their children were damaged — and in some cases, died — because of his company's algorithms. 




"I'm sorry for everything you've all gone through," said Zuckerberg, whose company owns Facebook and Instagram. "It's terrible. No one should have to go through the things that your families have suffered."

Senators argued the companies — and tech executives themselves — should be held legally responsible for instances of abuse and exploitation under tougher regulations that would limit children's access to social media platforms and restrict their exposure to harmful content.

"Your platforms really suck at policing themselves," Sen. Sheldon Whitehouse, a Rhode Island Democrat, told Zuckerberg and the CEOs of X, TikTok, Discord and Snap, who were summoned to testify. Section 230 of the Communications Decency Act, which allows social media platforms to moderate content as they see fit and generally provides immunity from liability for user-generated posts, has routinely shielded tech companies from accountability. As youth harms persist, he said those legal protections are "a very significant part of that problem." 

Whitehouse pointed to a lawsuit against X, formerly Twitter, that was filed by two men who claimed a sex trafficker manipulated them into sharing sexually explicit videos of themselves over Snapchat when they were just 13 years old. Links to the videos appeared on Twitter years later, but the company allegedly refused to take action until after it was contacted by a Department of Homeland Security agent and the posts had generated more than 160,000 views. The suit was rejected by the Ninth Circuit, which cited Section 230.

鈥淭hat’s a pretty foul set of facts,鈥 Whitehouse said. 鈥淭here is nothing about that set of facts that tells me Section 230 performed any public service in that regard.鈥

In an opening statement, the committee's Democratic chair, Sen. Dick Durbin of Illinois, offered a chilling description of the harms inflicted on young people by each of the social media platforms represented at the hearing. In addition to Zuckerberg, executives who testified were X CEO Linda Yaccarino, TikTok CEO Shou Chew, Snap co-founder and CEO Evan Spiegel and Discord CEO Jason Citron.

"Discord has been used to groom, abduct and abuse children," Durbin said. "Meta's Instagram helped connect and promote a network of pedophiles. Snapchat's disappearing messages have been co-opted by criminals who financially extort young victims. TikTok has become a, quote, 'platform of choice' for predators to access, engage and groom children for abuse. And the prevalence of [child sexual abuse material] on X has grown as the company has gutted its trust and safety workforce." 

Citron testified that Discord has "a zero tolerance policy" for content that features sexual exploitation and that it uses filters to scan and block such materials from its service. 

"Just like all technology and tools, there are people who exploit and abuse our platforms for immoral and illegal purposes," Citron said. "All of us here on the panel today, and throughout the tech industry, have a solemn and urgent responsibility to ensure that everyone who uses our platforms is protected from these criminals both online and off." 

Lawmakers have introduced a slate of regulatory bills that have gained bipartisan traction but have failed to become law. Among them is the Kids Online Safety Act, which would require social media companies and other online services to take "reasonable measures" to protect children from cyberbullying, sexual exploitation and materials that promote self-harm. It would also mandate strict privacy settings when teens use the online services. Other proposals include a bill that would require the companies to report suspected drug activity to the police — some parents said their children overdosed and died after buying drugs on the platforms — and a bill that would hold them accountable for hosting child sexual abuse materials. 

In their testimonies, each of the tech executives said they have taken steps to protect children who use their services, including features that restrict certain types of content, limit screen time and curtail the people they're allowed to communicate with. But they also sought to distance their services from harms in a bid to stave off regulations. 

"With so much of our lives spent on mobile devices and social media, it's important to look into the effects on teen mental health and well-being," Zuckerberg said. "I take this very seriously. Mental health is a complex issue, and the existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes." 

Zuckerberg cited a recent report by the National Academies of Sciences, Engineering and Medicine, which concluded there is a lack of evidence to confirm that social media causes changes in adolescent well-being at the population level and that the services could carry both benefits and harms for young people. While social media websites can expose children to online harassment and fringe ideas, researchers noted, the services can be used by young people to foster community. 

In October, 42 state attorneys general sued Meta, alleging that the social media giant knowingly and purposely designed tools to addict children to its services. U.S. Surgeon General Vivek Murthy has issued an advisory warning that social media sites pose a "profound risk of harm" to youth mental health, stating that the tools should come with warning labels. Among evidence of the harms is Meta's own leaked internal research, which found that Instagram led to body-image issues among teenage girls and that many of its young users blamed the platform for increases in anxiety and depression. 

Republican lawmakers devoted a significant amount of time during the hearing to criticizing TikTok for its ties to the Chinese government, calling out the app for collecting data about U.S. citizens, including in an effort to surveil American journalists. The Justice Department is reportedly investigating allegations that ByteDance, the Chinese company that owns TikTok, used the app to surveil several American journalists who report on the tech industry. 

In response, Chew said the company launched an initiative — dubbed "Project Texas" — to prevent its Chinese employees from accessing personal data about U.S. citizens. But employees claim the company has fallen short of that promise. 

YouTube and TikTok are by far the platforms where teens spend the most hours per day, according to a 2023 Gallup survey, although Neal Mohan, the CEO of Google-owned YouTube, was not called in to testify.

Mainstream social media platforms have also been exploited for domestic online extremism. Earlier this month, for example, a teenager accused of carrying out a mass shooting at his Iowa high school reportedly maintained an active presence on Discord and, shortly before the rampage, commented in a channel dedicated to such attacks that he was "gearing up" for the mayhem. Just minutes before the shooting, the suspect appeared to capture a video inside a school bathroom and uploaded it to TikTok. 

Josh Golin, the executive director of Fairplay, a nonprofit devoted to bolstering online child protections, blasted the tech executives' testimony for being little more than "evasions and deflections." 

"If Congress really cares about the families who packed the hearing today holding pictures of their children lost to social media harms, they will move the Kids Online Safety Act," Golin said in a statement. "Pointed questions and sound bites won't save lives, but KOSA will." 

The safety act, known as KOSA, has faced pushback from civil rights advocates on First Amendment grounds, who argue the proposal could be used to censor certain content. Sen. Marsha Blackburn, a Republican from Tennessee and KOSA co-author, said last fall the rules are important to protect "minor children from the transgender in this culture" and cited the legislation as a way to shield children from "being indoctrinated" online. The Heritage Foundation, a conservative think tank, endorsed the legislation, arguing that "keeping trans content away from children is protecting kids." 

Snap鈥檚 Evan Spiegel and X鈥檚 Linda Yaccarino both agreed to support the Kids Online Safety Act.

Aliya Bhatia, a policy analyst with the nonprofit Center for Democracy and Technology, said that although lawmakers made clear their intention to act, their directives could end up doing more harm than good. She said the platforms serve as "peer-to-peer learning and community networks" where young people can access information about reproductive health and other important topics that they might not feel comfortable receiving from adults in their lives. 

"It's clear that this is a really tricky issue, it's really difficult for the government and companies to decide what is harmful for young people," Bhatia said. "What one young person finds helpful online, another might find harmful."

South Carolina’s Sen. Lindsey Graham, the committee’s ranking Republican, said that social media companies can鈥檛 be trusted to keep kids safe online and that lawmakers have run out of patience.

"If you're waiting on these guys to solve the problem," he said, "we're going to die waiting." 

Teen Mental Health Crisis Pushes More School Districts to Sue Social Media Giants /article/teen-mental-health-crisis-pushes-more-school-districts-to-sue-social-media-giants/ Fri, 31 Mar 2023 12:30:00 +0000 /?post_type=article&p=706803

The teen mental health crisis has so taxed and alarmed school districts across the country that many are entering legal battles against the social media giants they say have helped cause it, including TikTok, Snap, Meta, YouTube and Google.

At least 11 school districts, one county and one California county system that oversees 23 smaller districts have filed suits this year, representing roughly 469,000 students. 

Two others in Arizona are considering their own complaints, one superintendent told 社区黑料. Eleven districts elsewhere voted to pursue similar litigation. Many others across the country are on the verge of doing the same, according to a lawyer representing a New Jersey district.




"Schools, states, and Americans across the country are rightly pushing back against Big Tech putting profits over kids' safety online," Sen. Richard Blumenthal, co-sponsor of the bipartisan Kids Online Safety Act, told 社区黑料. "These efforts, proliferated by harrowing stories from families amid a worsening youth mental health crisis, underscore the urgency for Congress to act." 

Algorithms and platform design have "exploited the vulnerable brains of youth, hooking tens of millions of students across the country into positive feedback loops of excessive use and abuse of Defendants' social media platforms," Seattle Public Schools claimed in the first suit filed this January.

Districts in Washington, Oregon, Arizona, New Jersey and elsewhere say tech companies intentionally designed their platforms to hook young users, exacerbating depression, anxiety, tech addiction and self-harm and straining learning and district finances. 

But the legal fight, whether tried or settled, will not be easy, outside counsel and at least one district leader said. 

鈥淲e don’t think that this is a slam dunk case. We think it’s going to be an uphill battle. But our board and I believe that this is in the best interest of our students to do this,鈥 said Andi Fourlis, superintendent of Arizona鈥檚 largest district, Mesa Public Schools. 鈥淚t鈥檚 about making the case that we need to do better for our kids.鈥 

Just how badly Mesa's teens are hurting is laid out in detail in court filings: More than a third are chronically absent, 3,500 more were involved in disciplinary incidents in 2021-22 than in 2019-20 and the district has seen a "surge" in suicidal ideation and anxiety. 

Buried in the 111-page lawsuit, a high school senior's video essay illustrates the painful impacts of social media addiction: risky or self-destructive behavior, disconnection from friends.

Simultaneously, lawmakers are proposing bills to make platforms safer. Senate hearings are underway, featuring parents whose children died by suicide. TikTok's CEO testified before Congress this month to address concerns about exposure to harmful content. President Joe Biden flagged the issue in his last State of the Union Address.

Both legislative and legal efforts are after similar goals: changing the algorithms and product design believed to be hurting kids. Through lawsuits, districts also seek financial compensation for the increased mental health services and training they've been forced to establish. 

"The harms caused by social media companies have impacted the districts' ability to carry out their core mission of providing education. The expenditures are not sustainable and divert resources from classroom instruction and other programs," said Michael Innes, partner with Carella Byrne, Cecchi, Olstein, Brody & Agnello, a firm representing New Jersey schools.

Previous complaints against opioid and e-cigarette companies, which levied public nuisance and negligence claims as districts鈥 social media filings do, resulted in multimillion dollar settlements. 

But some legal experts say there's a key distinction in this case: Big Tech companies aren't the ones producing content on these platforms, individuals are. Companies have some hefty legal protections. 

"School districts are not in the business of suing people — the threshold for initiating litigation is very high," said Dean Kawamoto, a lawyer for Keller Rohrback, the Seattle-based firm representing four districts, and thousands of others in Juul litigation. 

"I do think it says something that you've got a group of schools that have filed now, and I think more are going to join them," Kawamoto added. 

Some outside counsel are skeptical. 

"I think there are questions about whether the litigation system is even a coherent way to go about this," First Amendment scholar and Harvard Law professor Rebecca Tushnet told 社区黑料. "It's very hard to use individual litigation to get systemic change, excepting in particular circumstances." 

The exceptions, she added, have clear visions and specific outcomes, like requiring a doctor on-call for safer prison conditions. Those kinds of metrics are difficult to name when it comes to algorithms and mental health. 

What precedent (or lack thereof) tells us

Social media companies鈥 lawyers are likely to assert free speech protections early and often, including in initial motions to dismiss.

"The conventional wisdom is that if motions to dismiss are denied in cases like this, [companies] are much more likely to settle — reality is actually a little more mixed," Tushnet said, adding if the claims come after business models, companies fight harder. 

An added challenge is proving causal harm — that social media companies have caused student depression, anxiety, eating disorders or self-harm. The link is one that neuroscientists and researchers are still investigating, though experts say there's an urgent need for answers. 

"This is a watershed moment where schools can really roll up their sleeves and do something because — not that they haven't been in the past — but because it's so obvious. It's right in front of them. It's impacting students' education," said Jerry Barone, chief clinical officer at Effective School Solutions, which brings mental health care to schools. 

About 13.5% of teen girls say Instagram makes thoughts of suicide worse; 17% of teen girls say it makes eating disorders worse, according to Meta's leaked internal research, first revealed in a 2021 Wall Street Journal investigation.

Even if districts are able to provide proof, they may not ever see a judgment made. 

Public nuisance claims in tobacco and opioid mass torts were more successful in "inducing settlements, rather than in courthouse outcomes," according to Robert Rabin, tort expert and professor at Stanford University. 

While he's not "dismissive" of districts' efforts, "the precedents don't supply clear-cut support for the claims here."

The interim

As lawyers work out the details, students are left in the balance. Some are skeptical the suits will amount to anything at all, at least in their adolescence. 

"Why do you guys waste so much time on these useless things that you know get nowhere, when you can do it with things that you know will get somewhere?" said Angela Ituarte, a sophomore at a Seattle high school. 

Many young people interviewed by 社区黑料 described their social media use like a double-edged sword: affirming, a place where they learned about mental health or found community, particularly for queer students of color; and simultaneously dangerous, a place where they connected with adults when they were 14 and saw dangerous diets promoted.

Social media, Ituarte said, makes it seem like self-harm and disordered eating "are the solution to everything. And it's hard to get that out of those algorithms — even if you block the accounts or say you're not interested it still keeps popping up. Usually it's when things are bad, too."

In a late February letter to senators, Meta touted a promising initiative to nudge teens away from topics they have been dwelling on for extended periods. Only 1 in 5 teens actually moved to a new topic during a weeklong trial. 

To curb cyberbullying, users now get warnings for potentially offensive comments. People only edit or delete their message 50% of the time, according to the company's responses to Senate inquiries. 

Meta, YouTube and Google did not respond to requests for comment. TikTok told 社区黑料 they cannot comment on ongoing litigation. The company has just started requiring users who say they are under 18 to enter a password after scrolling for an hour.

In a statement to 社区黑料, Snap said they "are constantly evaluating how we continue to make our platform safer." Snap has partnered with mental health organizations to launch an in-app support system for users who may be experiencing a crisis, and acknowledged that the work may never be done. 

The process has only just begun. If the suits move to trial, some districts will be chosen as bellwethers to represent the many plaintiffs, tasked with regularly contributing to a lengthy trial. 

Still, there's no doubt in Fourlis's mind. 

"Sometimes you have to be the first to step forward to take a bold leap so that others can follow," she said. "Being the superintendent of the largest school district in Arizona, what we do often sets precedents, and I have to be very strategic about that responsibility."

Disclosure: Campbell Brown, Meta's vice president of media partnerships, is a co-founder and member of the board of directors of 社区黑料. She played no role in the editing of this article.

Opinion: 5 Challenges of Doing College in the Metaverse /article/5-challenges-of-doing-college-in-the-metaverse/ Thu, 15 Sep 2022 17:00:00 +0000 /?post_type=article&p=696529

This article was originally published in The Conversation.

More and more colleges are becoming "metaversities," taking their physical campuses into a virtual online world, often called the "metaverse." One initiative has colleges working with Meta, the parent company of Facebook, and virtual reality company VictoryXR to create 3D online replicas — sometimes called "digital twins" — of their campuses that are updated live as people and items move through the real-world spaces.

Some classes are already being taught this way. And VictoryXR says that by 2023, it plans to offer synchronous classes, which allow for a group setting with live instructors and real-time class interactions.

One metaversity builder, New Mexico State University, says it wants to offer degrees in which students can take all their classes in virtual reality.




There are many potential benefits, such as 3D visual learning, more realistic interactivity and easier access for faraway students. But there are also potential problems. My recent research has focused on aspects of the metaverse and its risks. I see five challenges:

1. Significant costs and time

The metaverse can cut some costs. For instance, building a physical cadaver laboratory carries steep construction and maintenance costs, while a virtual cadaver lab has made such scientific study far more affordable.

However, licenses for virtual reality content, construction of digital twin campuses, virtual reality headsets and other investment expenses do add up.

A metaverse course license can cost universities a substantial sum, and VictoryXR also charges a fee per student to access its metaverse.

Additional costs are incurred for virtual reality headsets. While Meta is providing an initial batch of headsets for metaversities launched by Meta and VictoryXR, that's only a fraction of what may be needed. The low-end 128GB version of the Meta Quest 2 costs hundreds of dollars, and managing and maintaining a large number of headsets involves additional operational costs and time.

Colleges also need to spend significant time and resources to develop metaverse course content. Even more time will be required to deliver metaverse courses, many of which will need to be redesigned for virtual reality.

Most educators don't have the skills to create virtual reality course material, which can involve merging videos, still images and audio with text and interactivity elements into an immersive experience.

2. Data privacy, security and safety concerns

Business models of companies developing metaverse technologies often depend on collecting user data. For instance, people who want to use Meta's Oculus Quest 2 virtual reality headsets must have Facebook accounts.

The headsets can collect highly personal and sensitive data about their users. Meta has indicated that advertisers might have access to it.

Meta is also working on a high-end virtual reality headset with more advanced capabilities. Sensors in the device will allow a virtual avatar to maintain eye contact and make facial expressions that mirror the user's eye movements and face. That data could be used to profile users and target them with personalized advertising.

Professors and students may not freely participate in class discussions if they know that all their moves, their speech and even their facial expressions are being monitored.

The virtual environment and its equipment can also collect a wide range of user data, including even signals of emotions.

Cyberattacks in the metaverse could even cause physical harm. Metaverse interfaces engage multiple senses, so they effectively trick the user's brain into believing the user is in a different environment. Attackers can influence the activities of immersed users, even inducing them to move to physically dangerous locations, such as to the top of a staircase.

The metaverse can also expose children to harm. For instance, Roblox has launched an initiative to bring 3D, interactive, virtual environments into physical and online classrooms. Roblox says it has safeguards in place, but no protections are perfect, and its metaverse involves user-generated content and a chat feature, which could be misused by predators or other bad actors.

3. Lack of rural access to advanced infrastructure

Many metaverse applications are highly data-intensive. They require high-speed data networks to handle all of the information flowing across the virtual and physical space.

Many users, especially in rural areas, lack access to such networks. For instance, while 97% of the population living in urban areas in the U.S. has access to high-speed broadband, coverage is much lower in rural areas and in tribal lands.

4. Adapting teaching to a new environment

Building and launching a metaversity requires drastic changes in a school's approach to teaching and learning. For instance, metaverse students are not passive listeners but active participants in virtual reality games and other activities.

The combination of advanced technologies can create personalized learning experiences that are not in real time but still experienced through the metaverse. Automatic systems that tailor the content and pace of learning to the ability and interest of the student can make learning in the metaverse more flexible, with fewer set rules.

Those differences require significant changes in how learning is assessed. Traditional measures such as quizzes and tests may not fit the individualized and unstructured learning experiences offered by the metaverse.

5. Amplifying biases

Gender, racial and ideological biases are common in textbooks, which influence how students understand certain events and topics. In some cases, those biases prevent the achievement of justice and other goals.

Biases' effects can be even more powerful in rich media environments, which are better at conveying views than textbooks, and immersive virtual reality has the potential to be more persuasive still.

To maximize the benefits of the metaverse for teaching and learning, universities — and their students — will have to wrestle with protecting users' privacy, training teachers and the level of national investment in broadband networks.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Meet the Gatekeepers of Students' Private Lives /article/meet-the-gatekeepers-of-students-private-lives/ Mon, 02 May 2022 11:15:00 +0000 /?post_type=article&p=588567

If you are in crisis, please call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), or contact the Crisis Text Line by texting TALK to 741741.

Megan Waskiewicz used to sit at the top of the bleachers, rest her back against the wall and hide her face behind the glow of a laptop monitor. While watching one of her five children play basketball on the court below, she knew she had to be careful. 

The mother from Pittsburgh didn't want other parents in the crowd to know she was also looking at child porn.

Waskiewicz worked as a content moderator for Gaggle, a surveillance company that monitors the online behaviors of some 5 million students across the U.S. on their school-issued Google and Microsoft accounts. Through an algorithm designed to flag references to sex, drugs, and violence and a team of content moderators like Waskiewicz, the company sifts through billions of students' emails, chat messages and homework assignments each year. Their work is supposed to ferret out evidence of potential self-harm, threats or bullying, incidents that would prompt Gaggle to notify school leaders.

As a result, kids' deepest secrets – like nude selfies and suicide notes – regularly flashed onto Waskiewicz's screen. Though she felt "a little bit like a voyeur," she believed Gaggle helped protect kids. But mostly, the low pay, the fight for decent hours, inconsistent instructions and stiff performance quotas left her feeling burned out. Gaggle's moderators face pressure to review 300 incidents per hour and Waskiewicz knew she could get fired on a moment's notice if she failed to distinguish mundane chatter from potential safety threats in a matter of seconds. She lasted about a year.

"In all honesty I was sort of half-assing it," Waskiewicz admitted in an interview with The 74. "It wasn't enough money and you're really stuck there staring at the computer reading and just click, click, click, click."

Content moderators like Waskiewicz, hundreds of whom are paid just $10 an hour on month-to-month contracts, are on the front lines of a company that claims it saved the lives of 1,400 students last school year and argues that the growing mental health crisis makes its presence in students' private affairs essential. Gaggle founder and CEO Jeff Patterson has warned about "a tsunami of youth suicide headed our way" and said that schools have "a moral obligation to protect the kids on their digital playground."

Eight former content moderators at Gaggle shared their experiences for this story. While several believed their efforts in some cases did shield kids from serious harm, they also surfaced significant questions about the company's efficacy, its employment practices and its effect on students' civil rights.

Among the moderators who worked on a contractual basis, none had prior experience in school safety, security or mental health. Instead, their employment histories included retail work and customer service, but they were drawn to Gaggle while searching for remote jobs that promised flexible hours.

They described an impersonal and cursory hiring process that appeared automated. Former moderators reported submitting applications online and never having interviews with Gaggle managers – either in-person, on the phone or over Zoom – before landing jobs.

Once hired, moderators reported insufficient safeguards to protect students' sensitive data, a work culture that prioritized speed over quality, scheduling issues that sent them scrambling to get hours and frequent exposure to explicit content that left some traumatized. Contractors lacked benefits including mental health care, and one former moderator said he quit after repeated exposure to explicit material that so disturbed him he couldn't sleep and without "any money to show for what I was putting up with."

Gaggle content moderators encompass as many as 600 contractors at any given time and just two dozen work as employees who have access to benefits and on-the-job training that lasts several weeks. Gaggle executives have sought to downplay contractors' role with the company, arguing they use "common sense" to distinguish false flags generated by the algorithm from potential threats and do "not require substantial training."

While the experiences reported by Gaggle's moderator team echo those at platforms like Meta-owned Facebook, Patterson said his company relies on "U.S.-based, U.S.-cultured reviewers as opposed to outsourcing that work to India or Mexico or the Philippines." He rebuffed former moderators who said they lacked sufficient time to consider the severity of a particular item.

"Some people are not fast decision-makers. They need to take more time to process things and maybe they're not right for that job," he told The 74. "For some people, it's no problem at all. For others, their brains don't process that quickly."

Executives also sought to minimize the contractors' access to students' personal information; a spokeswoman said they only see "small snippets of text" and lacked access to what's known as students' "personally identifiable information." Yet former contractors described reading lengthy chat logs, seeing nude photographs and, in some cases, coming upon students' names. Several former moderators said they struggled to determine whether something should be escalated as harmful due to "gray areas," such as whether a Victoria's Secret lingerie ad would be considered acceptable or not.

"Those people are really just the very, very first pass," Gaggle spokeswoman Paget Hetherington said. "It doesn't really need training, it's just like if there's any possible doubt with that particular word or phrase it gets passed on."

Molly McElligott, a former content moderator and customer service representative, said management was laser-focused on performance metrics, appearing more interested in business growth and profit than protecting kids.

"I went into the experience extremely excited to help children in need," McElligott wrote in an email. Unlike the contractors, McElligott was an employee at Gaggle, where she worked for five months in 2021 before taking a position at the Manhattan District Attorney's Office in New York. "I realized that was not the primary focus of the company."

Gaggle is part of a burgeoning campus security industry that's seen significant business growth in the wake of mass school shootings as leaders scramble to prevent future attacks. Patterson, who founded the company in 1999 as a monitored student email service for schools, said its focus now is mitigating the youth mental health crisis.

Patterson said the team talks about "lives saved" and child safety incidents at every meeting, and they are open about sharing the company's financial outlook so that employees "can have confidence in the security of their jobs."

Content moderators work at a Facebook office in Austin, Texas. Unlike the social media giant, Gaggle's content moderators work remotely. (Ilana Panich-Linsman / Getty Images)

'We are just expendable'

Facing new federal scrutiny along with three other companies that monitor students online, Gaggle has said it relies on a "highly trained content review team" to analyze student materials and flag safety threats. Yet former contractors, who make up the bulk of Gaggle's content review team, described their training – a slideshow and an online quiz – as "a joke" that left them ill-equipped to complete a job with such serious consequences for students and schools.

As an employee on the company's safety team, McElligott said she underwent two weeks of training, but the disorganized instruction meant she and other moderators were "more confused than when we started."

Former content moderators have also flocked to employment websites like Indeed.com to warn job seekers about their experiences with the company, often sharing reviews that resembled the former moderators' feedback to The 74.

"If you want to be not cared about, not valued and be completely stressed/traumatized on a daily basis this is totally the job for you," one reviewer wrote on Indeed. "Warning, you will see awful awful things. No they don't provide therapy or any kind of support either.

"That isn't even the worst part," the reviewer continued. "The worst part is that the company does not care that you hold them on your backs. Without safety reps they wouldn't be able to function, but we are just expendable."

As the first layer of Gaggle's human review team, contractors analyze materials flagged by the algorithm and decide whether to escalate students' communications for additional consideration. Designated employees on Gaggle's Safety Team are in charge of calling or emailing school officials to notify them of troubling material identified in students' files, Patterson said.

Gaggle's staunchest critics have questioned the tool's efficacy and describe it as a student privacy nightmare. In March, Democratic Sens. Elizabeth Warren and Ed Markey pressed Gaggle and similar companies to protect students' civil rights and privacy. In a report, the senators said the tools could surveil students inappropriately, compound racial disparities in school discipline and waste tax dollars.

The information shared by the former Gaggle moderators with The 74 "struck me as the worst-case scenario," said attorney Amelia Vance, the co-founder and president of Public Interest Privacy Consulting. Content moderators' limited training and vetting, as well as their lack of backgrounds in youth mental health, she said, "is not acceptable."

In its letter to lawmakers, Gaggle described a two-tiered review procedure but didn't disclose that low-wage contractors were the first line of defense. CEO Patterson told The 74 they "didn't have nearly enough time" to respond to lawmakers' questions about their business practices and didn't want to divulge proprietary information. Gaggle uses a third party to conduct criminal background checks on contractors, Patterson said, but he acknowledged they aren't interviewed before getting placed on the job.

"There's a lot of contractors. We can't do a physical interview of everyone and I don't know if that's appropriate," he said. "It might actually introduce another set of biases in terms of who we hire or who we don't hire."

'Other eyes were seeing it'

In a previous investigation, The 74 analyzed a cache of public records to expose how Gaggle's algorithm and content moderators subject students to relentless digital surveillance long after classes end for the day, extending schools' authority far beyond their traditional powers to regulate speech and behavior, including at home. Gaggle's algorithm relies largely on keyword matching and gives content moderators a broad snapshot of students' online activities, including diary entries, classroom assignments and casual conversations between students and their friends.
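The keyword-matching approach described above can be sketched in a few lines of Python. The word list, tokenization and examples below are illustrative assumptions for demonstration only, not Gaggle's actual dictionary or logic:

```python
# Toy sketch of keyword-based flagging, as described in the article.
# KEYWORDS is an invented, illustrative list -- not Gaggle's real dictionary.
KEYWORDS = {"suicide", "kill", "gun", "drugs"}

def flag(text: str) -> list[str]:
    """Return any watchlist keywords found in a piece of student text."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return sorted(KEYWORDS & words)

# Pure keyword matching has no sense of context, which is why an essay
# about "To Kill a Mockingbird" gets flagged while an oblique but genuine
# cry for help can slip through untouched.
print(flag("My essay on To Kill a Mockingbird"))  # ['kill']
print(flag("I want to hurt myself"))              # []
```

A hit like this is then queued for a human reviewer – the "very first pass" contractors describe – which helps explain why benign literature essays can dominate the moderation queue.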

After the pandemic shuttered schools and shuffled students into remote learning, Gaggle oversaw a surge in students' online materials and in interest from school districts. The company grew as educators scrambled to keep a watchful eye on students whose chatter with peers moved from school hallways to instant messaging platforms like Google Hangouts. One year into the pandemic, Gaggle reported a spike in references to suicide and self-harm, accounting for more than 40% of all flagged incidents.

Waskiewicz, who began working for Gaggle in January 2020, said that remote learning spurred an immediate shift in students' online behaviors. Under lockdown, students without computers at home began using school devices for personal conversations. Sifting through the everyday exchanges between students and their friends, Waskiewicz said, became a time suck and left her questioning her own principles.

"I felt kind of bad because the kids didn't have the ability to have stuff of their own and I wondered if they realized that it was public," she said. "I just wonder if they realized that other eyes were seeing it other than them and their little friends."

Student activity monitoring software like Gaggle has become ubiquitous in U.S. schools, and 81% of teachers work in schools that use tools to track students' computer activity, according to a recent survey by the nonprofit Center for Democracy and Technology. A majority of teachers said the benefits of using such tools, which can block obscene material and monitor students' screens in real time, outweigh potential risks.

Likewise, students generally recognize that their online activities on school-issued devices are being observed, the survey found, and alter their behaviors as a result. More than half of student respondents said they don't share their true thoughts or ideas online as a result of school surveillance and 80% said they were more careful about what they search online.

A majority of parents reported that the benefits of keeping tabs on their children's activity exceeded the risks. Yet they may not have a full grasp on how programs like Gaggle work, including the heavy reliance on untrained contractors and weak privacy controls revealed by The 74's reporting, said Elizabeth Laird, the group's director of equity in civic technology.

"I don't know that the way this information is being handled actually would meet parents' expectations," Laird said.

Another former contractor, who reached out to The 74 to share his experiences with the company anonymously, became a Gaggle moderator at the height of the pandemic. As COVID-19 cases grew, he said he felt unsafe continuing his previous job as a caregiver for people with disabilities, so he applied to Gaggle because it offered remote work.

About a week after he submitted an application, Gaggle gave him a key to kids' private lives – including, most alarming to him, their nude selfies. Exposure to such content was traumatizing, the former moderator said, and while the job took a toll on his mental well-being, it didn't come with health insurance.

"I went to a mental hospital in high school due to some hereditary mental health issues and seeing some of these kids going through similar things really broke my heart," said the former contractor, who shared his experiences on the condition of anonymity, saying he feared possible retaliation by the company. "It broke my heart that they had to go through these revelations about themselves in a context where they can't even go to school and get out of the house a little bit. They have to do everything from home – and they're being constantly monitored."

In this screenshot, Gaggle explains its terms and conditions for contract content moderators. The screenshot, which was provided to The 74 by a former contractor who asked to remain anonymous, has been redacted.

Gaggle employees are offered benefits, including health insurance, and can attend group therapy sessions twice per month, Hetherington said. Patterson acknowledged the job can take a toll on staff moderators, but sought to downplay its effects on contractors and said they're warned about exposure to disturbing content during the application process. He said using contractors allows Gaggle to offer the service at a price school districts can afford.

"Quite honestly, we're dealing with school districts with very limited budgets," Patterson said. "There have to be some tradeoffs."

The anonymous contractor said he wasn't as concerned about his own well-being as he was about the welfare of the students under the company's watch. He said the company lacked adequate safeguards to protect students' sensitive information from leaking outside the digital environment that Gaggle built for moderators to review such materials. Contract moderators work remotely with limited supervision or oversight, and he became especially concerned about how the company handled students' nude images, which are reported to school districts. Nudity and sexual content accounted for about 17% of emergency phone calls and email alerts to school officials last school year.

Contractors, he said, could easily save the images for themselves or share them on the dark web. 

Patterson acknowledged the possibility but said he wasn't aware of any data breaches.

"We do things in the interface to try to disable the ability to save those things," Patterson said, but "you know, human beings who want to get around things can."

'Made me feel like the day was worth it'

Vara Heyman was looking for a career change. After working jobs in retail and customer service, she made the pivot to content moderation and a contract position with Gaggle was her first foot in the door. She was left feeling baffled by the impersonal hiring process, especially given the high stakes for students. 

Waskiewicz had a similar experience. In fact, she said the only time she ever interacted with a Gaggle supervisor was when she was instructed to provide her bank account information for direct deposit. The interaction left her questioning whether the company that contracts with more than 1,500 school districts was legitimate or a scam. 

"It was a little weird when they were asking for the banking information, like 'Wait a minute is this real or what?'" Waskiewicz said. "I Googled them and I think they're pretty big."

Heyman said that sense of disconnect continued after being hired, with communications between contractors and their supervisors limited to a Slack channel. 

Despite the challenges, several former moderators believe their efforts kept kids safe from harm. McElligott, the former Gaggle safety team employee, recalled an occasion when she found a student's suicide note.

"Knowing I was able to help with that made me feel like the day was worth it," she said. "Hearing from the school employees that we were able to alert about self-harm or suicidal tendencies from a student they would never expect to be suffering was also very rewarding. It meant that extra attention should or could be given to the student in a time of need."

Susan Enfield, the superintendent of Highline Public Schools in suburban Seattle, said her district's contract with Gaggle has saved lives. Earlier this year, for example, the company detected a student's suicide note early in the morning, allowing school officials to spring into action. The district uses Gaggle to keep kids safe, she said, but acknowledged it can be a disciplinary tool if students violate the district's code of conduct.

"No tool is perfect, every organization has room to improve, I'm sure you could find plenty of my former employees here in Highline that would give you an earful about working here as well," said Enfield, one of 23 current or former superintendents from across the country who Gaggle cited as references in its letter to Congress.

"There's always going to be pros and cons to any organization, any service," Enfield told The 74, "but our experience has been overwhelmingly positive."

True safety threats were infrequent, former moderators said, and most of the content was mundane, in part because the company's artificial intelligence lacked sophistication. They said the algorithm routinely flagged students' papers on the novels To Kill a Mockingbird and The Catcher in the Rye. They also reported being inundated with spam emailed to students, acting as human spam filters for a task that's long been automated in other contexts.

Conor Scott, who worked as a contract moderator while in college, said that "99% of the time" Gaggle's algorithm flagged pedestrian materials, including pictures of sunsets and students' essays about World War II. Valid safety concerns, including references to violence and self-harm, were rare, Scott said. But he still believed the service had value and felt he was doing "the right thing."

McElligott said that managers' personal opinions added another layer of complexity. Though moderators were "held to strict rules of right and wrong decisions," she said they were ultimately "being judged against our managers' opinions of what is concerning and what is not."

"I was told once that I was being overdramatic when it came to a potential inappropriate relationship between a child and adult," she said. "There was also an item that made me think of potential trafficking or child sexual abuse, as there were clear sexual plans to meet up – and when I alerted it, I was told it was not as serious as I thought."

Patterson acknowledged that gray areas exist and that human discretion is a factor in deciding what materials are ultimately elevated to school leaders. But such materials, he said, are not the most urgent safety issues. He said their algorithm errs on the side of caution and flags harmless content because district leaders are "so concerned about students."

The former moderator who spoke anonymously said he grew alarmed by the sheer volume of mundane student materials that were captured by Gaggle's surveillance dragnet, and pressure to work quickly didn't offer enough time to evaluate long chat logs between students having "heartfelt and sensitive" conversations. On the other hand, run-of-the-mill chatter offered him a little wiggle room.

"When I would see stuff like that I was like 'Oh, thank God, I can just get this out of the way and heighten how many items per hour I'm getting,'" he said. "It's like 'I hope I get more of those because then I can maybe spend a little more time actually paying attention to the ones that need it.'"

Ultimately, he said he was unprepared for such extensive access to students' private lives. Because Gaggle's algorithm flags keywords like "gay" and "lesbian," for example, it alerted him to students exploring their sexuality online. Hetherington, the Gaggle spokeswoman, said such keywords are included in its dictionary to "ensure that these vulnerable students are not being harassed or suffering additional hardships," but critics have accused the company of subjecting LGBTQ students to disproportionate surveillance.

"I thought it would just be stopping school shootings or reducing cyberbullying but no, I read the chat logs of kids coming out to their friends," the former moderator said. "I felt tremendous power was being put in my hands" to distinguish students' benign conversations from real danger, "and I was given that power immediately for $10 an hour."

Minneapolis student Teeth Logsdon-Wallace, who posed for this photo with his dog Gilly, used a classroom assignment to discuss a previous suicide attempt and explained how his mental health had since improved. He became upset after Gaggle flagged his assignment. (Photo courtesy Alexis Logsdon)

A privacy issue

For years, student privacy advocates and civil rights groups have warned about the potential harms of Gaggle and similar surveillance companies. Fourteen-year-old Teeth Logsdon-Wallace, a Minneapolis high school student, fell under Gaggle's watchful eye during the pandemic. Last September, he used a class assignment to write about a previous suicide attempt and explained how music helped him cope after being hospitalized. Gaggle flagged the assignment to a school counselor, a move the teen called a privacy violation.

He said it's "just really freaky" that moderators can review students' sensitive materials in public places like at basketball games, but ultimately felt bad for the contractors on Gaggle's content review team.

"Not only is it violating the privacy rights of students, which is bad for our mental health, it's traumatizing these moderators, which is bad for their mental health," he said. Relying on low-wage workers with high turnover, limited training and without backgrounds in mental health, he said, can have consequences for students.

"Bad labor conditions don't just affect the workers," he said. "It affects the people they say they are helping."

Gaggle cannot prohibit contractors from reviewing students' private communications in public settings, Heather Durkac, the senior vice president of operations, said in a statement.

"However, the contractors know the nature of the content they will be reviewing," Durkac said. "It is their responsibility and part of their presumed good and reasonable work ethic to not be conducting these content reviews in a public place."

Gaggle's former contractors also weighed students' privacy rights. Heyman said she "went back and forth" on those implications for several days before applying to the job. She ultimately decided that Gaggle was acceptable since it is limited to school-issued technology.

"If you don't want your stuff looked at, you can use Hotmail, you can use Gmail, you can use Yahoo, you can use whatever else is out there," she said. "As long as they're being told and their parents are being told that their stuff is going to be monitored, I feel like that is OK."

Logsdon-Wallace and his mother said they didn't know Gaggle existed until his classroom assignment got flagged to a school counselor.

Meanwhile, the anonymous contractor said that chat conversations between students that got picked up by Gaggle's algorithm helped him understand the effects that surveillance can have on young people.

"Sometimes a kid would use a curse word and another kid would be like, 'Dude, shut up, you know they're watching these things,'" he said. "These kids know that they're being looked in on," even if they don't realize their observer is a contractor working from the couch in his living room. "And to be the one that is doing that – that is basically fulfilling what these kids are paranoid about – it just felt awful."

If you are in crisis, please call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), or contact the Crisis Text Line by texting TALK to 741741.

Disclosure: Campbell Brown is the head of news partnerships at Facebook. Brown co-founded The 74 and sits on its board of directors.
