The internet is about to get a lot safer


This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

If you use Google, Instagram, Wikipedia, or YouTube, you’re going to notice changes to content moderation, transparency, and safety features on those sites over the next six months.

Why? It’s the result of some major tech legislation passed in the EU last year that hasn’t gotten enough attention (IMO), especially in the US. I’m referring to a pair of bills called the Digital Services Act (DSA) and the Digital Markets Act (DMA), and this is your sign, as they say, to get familiar with them.

The acts are actually quite revolutionary, setting a global gold standard for regulating technology that hosts user-generated content. The DSA deals with digital safety and transparency from technology companies, while the DMA addresses antitrust and competition in the industry. Let me explain.

A few weeks ago, the DSA reached a major milestone: by February 17, 2023, all major technology platforms in Europe were required to report their size, which was used to group the companies into different tiers. The largest, called “Very Large Online Platforms” (VLOPs) or “Very Large Online Search Engines” (VLOSEs), have over 45 million monthly active users in the EU (about 10% of the EU population) and will be held to the highest standards of transparency and regulation. Smaller online platforms have far fewer obligations, part of a policy design intended to encourage competition and innovation while still holding Big Tech accountable.

“If you ask [small companies], for example, to hire 30,000 moderators, you will kill the small companies,” Henri Verdier, the French ambassador for digital affairs, told me last year.

So what will the DSA actually do? To date, at least 18 companies have declared that they qualify as VLOPs or VLOSEs, including most of the well-known players such as YouTube, TikTok, Instagram, Pinterest, Google, and Snapchat. (If you want a full list, London School of Economics law professor Martin Husovec keeps a Google doc tracking where all the big players shake out, and he has written an accompanying explainer.)

The DSA will require those companies to assess risks on their platforms, such as the likelihood of illegal content or electoral manipulation, and to make plans for mitigating those risks, with independent audits to verify safety. Smaller companies (those with fewer than 45 million users) will also have to meet new content moderation standards, which include removing illegal content “rapidly” once it is flagged, notifying users of the removal, and enforcing their existing company policies.

Supporters of the legislation say the bill will help end the era of self-regulation by technology companies. “I don’t want the companies to decide what is and isn’t prohibited with no separation of powers, no accountability, no reporting, no opportunity to compete,” says Verdier. “It’s very dangerous.”

That said, the bill makes it clear that platforms are not liable for illegal user-generated content unless they are aware of the content and fail to remove it.

Perhaps most important, the DSA requires companies to significantly increase transparency, through reporting obligations for “terms of service” notices and regular, audited reports about content moderation. Regulators hope this will have a far-reaching impact on public conversations about the societal risks posed by major technology platforms, such as hate speech, misinformation, and violence.

What will you notice? You will be able to participate in, and formally dispute, content moderation decisions made by companies. The DSA will effectively ban shadow-banning (the practice of de-prioritizing content without notice), curb cyber-violence against women, and ban targeted advertising to users under the age of 18. There will also be significantly more public data about how content moderation and account management work on the platforms, shedding new light on how the biggest tech companies operate. Historically, technology companies have been very reluctant to share platform data with the public or even with academic researchers.

What lies ahead? Now the European Commission (EC) will review the reported user numbers, and it will have time to contest them or request more information from the technology companies. One notable issue is the omission of porn sites from the “very large” category, which Husovec called “shocking.” He told me he thinks the EC should challenge their reported user numbers.

Once the size groupings are confirmed, the largest companies will have until September 1, 2023, to comply with the regulations, while smaller companies will have until February 17, 2024. Many experts expect companies to roll out some of the changes to all users, not just those living in the EU. With Section 230 reform out of sight in the US, many US users will benefit from a safer internet mandated overseas.

What else I’m reading

More chaos, and layoffs, at Twitter.

  • Elon Musk had another big news week after firing another 200 people, or 10% of Twitter’s remaining staff, over the weekend. These employees were likely part of the “hard core” cohort who had agreed to abide by Musk’s aggressive working conditions.
  • NetBlocks has reported four major outages of the site since the beginning of February.

Everyone is trying to make sense of the generative-AI hoopla.

  • The FTC issued a statement warning companies not to lie about the capabilities of their AIs. I also recommend reading this helpful piece from my colleague Melissa Heikkilä about how to use generative AI responsibly and this explainer about 10 legal and business risks of generative AI by Matthew Ferraro of Tech Policy Press.
  • The dangers of the technology are already making the news: this reporter hacked into his own bank account using an AI-generated voice.

2022 saw more internet shutdowns than ever before, continuing the trend of authoritarian censorship.

  • This week, Access Now published its annual report tracking blackouts around the world. India, once again, topped the list with the most shutdowns.
  • Last year, I spoke with Dan Keyserling, who worked on the 2021 report, to learn more about how shutdowns are weaponized. During our interview, he told me, “Internet shutdowns are becoming more frequent. More governments are experimenting with curtailing internet access as a tool to influence citizen behavior. Arguably the costs of internet shutdowns are increasing as governments are becoming more sophisticated in how they approach this, but also, we’re living more of our lives online.”

What I learned this week

Data brokers are selling mental health data online, according to a new report from the Duke Cyber Policy Program. The researcher asked 37 data brokers for mental health information, and 11 responded willingly. The report details how these select data brokers offered to sell information on depression, ADHD, and insomnia with few restrictions. Some of the data was linked to people’s names and addresses.

In an interview with PBS, Justin Sherman, the project’s leader, explained, “There are a range of companies that are not covered by our narrow health privacy regulations. And so they’re legally free to collect and even share and sell this kind of health data, which enables a range of companies that can’t usually get at it (advertising companies, Big Pharma, even health insurance companies) to buy this data up and do things like run ads, profile consumers, and make potential health plan pricing decisions. And the data brokers enable these companies to circumvent health regulations.”

On March 3, the FTC announced that it was banning the online mental health company BetterHelp from sharing people’s data with other companies.
