The big issue with the SAFE TECH Act
And examples of how word choices can have long-lasting impacts
Section 230 has come up more and more in recent years, and for good reason. It has been central in shielding online companies, particularly big tech, from liability for what their users post to their respective sites. This part of the Communications Decency Act has been key in contributing to the proliferation of user-generated content we have seen over the last two decades.
So if big tech and online companies have thrived, why are lawmakers focused on reforming Section 230? While user-generated content has produced enormous benefits for users and the economy, many have argued that exempting big tech from accountability has enabled the spread of fraudulent information and harassment.
Homing in on the spread of disinformation via online ads, Senators Warner, Hirono, and Klobuchar introduced the SAFE TECH Act last week in hopes of creating better safeguards. However, the word choice around platforms accepting payment leaves the door open to abuse of the well-intended carve-out.
The marked-up version of the amendment is easier to parse, so we’ll use it as our reference point. Under (c)(1)(A), the text is to be rewritten as follows, with “speech” replacing “information” and the exception clause newly added:
(A) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any speech provided by another information content provider, except to the extent the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.
By referencing “payment” so loosely, the bill’s reach extends far beyond the online ads you see on Facebook and Twitter. Platforms such as this one inadvertently become subject to this provision as well, not to mention your fav folks to support on Patreon, Bandcamp, and Etsy. Without defining the source of the payments or the function they serve, even absurd scenarios become a real possibility.
Only after many pointed out the flaw in deploying such a vague term did Sen. Warner’s office respond with the acknowledgement that “[g]iven the potential implications that this would have on subscription services as observers have noted, Sen. Warner is currently reviewing and working to refine the language.”
Bills go through iterations, especially after debate, but that is no license for lawmakers to be reckless. After all, we’ve already seen the unintended consequences of amending Section 230 through FOSTA-SESTA, which made sex work considerably less safe. Hopefully, the next draft will take heed of what lawyers and legal scholars have had to say and be more precise in its scope and language.
Why Definitions Matter: Example #1
Take a look at the “Stop Internet Sexual Exploitation Act” (“SISEA”), drafted in late December. Its terms and definitions are so loosely bounded that the bill’s consequences are overreaching.
For example, one of the issues I raised in this thread was how the term “covered platform”, the scope in question, was defined. The bill defines it as “an online platform that hosts and makes available to the general public pornographic images”, and the range of what qualifies as such a platform is far-reaching. Sure, Pornhub may be the target, but by this term’s parameters, Patreon, OnlyFans, Reddit, and Twitter are also fair game at the very least.
The lack of a definition of “general public” also raises questions about the limits of the bill’s reach, given that a subscription on OnlyFans, a semi-closed network on Snapchat, iMessages stored on iCloud, and a readily available image on Reddit could all be construed as falling under covered platforms.
Why Definitions Matter: Example #2
When CCP 1001 was enacted in 2019, it focused on providing an avenue for California workers to speak out about sex-based discrimination, superseding any non-disparagement and non-disclosure agreements they had signed. What the bill did not provide, though, was a pathway for those wronged by other forms of discrimination to speak out without fear of retribution.
Last summer, Pinterest employee Ifeoma Ozoma publicly alleged that she faced discrimination due to her race and sex. Because CCP 1001 would cover only her sex-discrimination claims, she was left vulnerable to a lawsuit for violating her NDA, since racial discrimination was not covered in Section 1001’s original provisions.
In turn, she worked with the office of California State Senator Connie Leyva to develop the Silence No More Act (SB 331), introduced earlier this week. The new bill expands CCP 1001’s scope so that it is no longer limited to “discrimination based on sex.” Instead, the scope is simplified to read just “discrimination,” covering all types.
Another important change appends “or restricts” to the “prevents the disclosure of factual information” clause. Prohibiting restrictions on disclosure, not just outright prevention, is key to ensuring that claimants’ voices are not muted in any capacity and that they can openly discuss the discrimination they have faced.
Even with a foundation as solid as CCP 1001, there are opportunities to build upon preexisting law to better protect claimants. But to improve current and future laws, the bills proposed now must put their best foot forward in precision and clarity.