Discussion:
Hawley, Blumenthal Team Up To Push Nonsensical AI/230 Bill
Michael Zimmerman
2023-06-17 02:19:32 UTC
Hawley, Blumenthal Team Up To Push Nonsensical AI/230 Bill

Policy

from the did-an-ai-write-this? dept
Thu, Jun 15th 2023 12:05pm - Mike Masnick
There are some questions about whether Section 230 protects AI
companies from liability for the output of their generative AI
tools. Matt Perault published a thought-provoking piece arguing that
230 probably does not protect generative AI companies. Jess Miers,
writing here at Techdirt, argued the opposite point of view (which I
found convincing). Somewhat surprisingly, Senator Ron Wyden and former
Rep. Chris Cox, the authors of 230, have agreed with Perault’s argument.

The Wyden/Cox (Perault) argument is summed up in this quote from Cox:

“To be entitled to immunity, a provider of an interactive computer
service must not have contributed to the creation or development of the
content at issue,” he told me. “So when ChatGPT creates content that is
later challenged as illegal, Section 230 will not be a defense.”

At first pass, that may sound compelling. But, as Miers noted in her
piece, the details get a lot trickier once you start looking at them. As
she points out, it’s already well established that 230 protects
algorithmic curation and promotion (this was sorta, partly, at issue in
the Gonzalez case, though by the time the Supreme Court heard the case,
it was mostly dropped, in part because the lawyers backing Gonzalez
realized that their initial argument would probably have made search
engines illegal).

Further, Miers notes that courts have already found 230 to protect
algorithmically generated snippets that summarize content found
elsewhere, even though those snippets are “created” by Google, based on
(1) the search input “prompt” from the user, and (2) the giant database
of content that Google has scanned.

And that’s where the issue really gets tricky, and where those
insisting that generative AI companies are clearly outside the scope of
230 seem like they haven’t quite thought all of this through: where is
the line you can draw between these two things? At what point do we
go from one tool, Google, that scrapes a bunch of content and creates a
summary in response to input, to another tool, AI, that scrapes a bunch
of content and creates “whatever” in response to input?

Well, the two Senators who hate the internet more than anyone else, the
bipartisan “destroy the internet, and who cares what damage it does”
buddies, Senator Richard Blumenthal and insurrectionist-supporting
Senator Josh Hawley, have teamed up to introduce a bill that explicitly
says AI companies get no 230 protection. Leaving aside the question of
why any Democrat would be willing to team up with Hawley on literally
anything at this moment, this bill is… well… weird.

First, just the fact that they had to write this bill suggests (perhaps
surprisingly?) that Hawley and Blumenthal agree with Miers more than
they agree with Wyden, Cox, or Perault. If 230 didn’t apply to AI
companies, why would they need to write this bill?

But, if you look at the text of the bill, you quickly realize that
Hawley and Blumenthal (this part is not surprising) have no clue how to
draft a bill that wouldn’t suck in a ton of other services, and strip
them of 230 protections (perhaps that’s their real goal, as both have
tried to destroy Section 230 going back many years).

The definition of “Generative Artificial Intelligence” is, well, a problem:

GENERATIVE ARTIFICIAL INTELLIGENCE.—The term ‘generative artificial
intelligence’ means an artificial intelligence system that is capable of
generating novel text, video, images, audio, and other media based on
prompts or other forms of data provided by a person.

First off, AI is quickly getting built into basically everything these
days, so this definition is going to capture much of the internet within
a few years. But go back to the search example discussed above, where
the courts had said that 230 protected Google’s algorithmically
generated summaries.

With this bill in place, that’s likely no longer true.

Or… as social media tools build in AI (which is absolutely coming) to
help you craft better content, do all of those services then lose 230
protection? Just for helping users create better content?

And, of course, all of this confuses the point of Section 230, which, as
we keep explaining, is just a procedural fast pass to get frivolous
cases tossed out.

Just to make this point clear, let’s look at what happens should this
bill become law. Say someone does a Google search on something, and
finds that the automatically generated summary is written in a way that
they feel is defamatory, even though it’s just a computerized attempt to
summarize what others have written, in response to a prompt. The person
sues Google, which is no longer protected by 230.

With Section 230, Google would be able to get the case kicked out with
minimal hassle: they’d file a relatively straightforward motion to
dismiss pointing to 230, and the case would be dismissed. Without that,
they can still argue that the case is bad because an algorithm could
not have had the requisite knowledge to say anything defamatory. But
this is a more complicated (and more expensive) legal argument to make,
and one that might not win on a motion to dismiss, meaning the case
would have to go through discovery and the more involved summary
judgment stage, if not all the way to trial.

In the end, it’s likely that Google still wins the case, because it had
no knowledge at all as to whether the content was false, but now the
process is expensive and wasteful. And, maybe it doesn’t matter for
Google, which has buildings full of lawyers.

But it does matter for basically every AI startup out there, or any
other company making use of AI to make their products better and more
useful. If those products spew out some nonsense, even if no one
believes it, must they fight a court battle over it?

Think back to the case we recently covered of OpenAI being sued for
defamation. Yes, ChatGPT appeared to make up some nonsense, but there
remains no indication that anyone believed the nonsense. Only the one
reporter saw it, and he seemed to recognize it was fake. If he had then
published the content, perhaps he would be liable for spreading
something he knew was fake. But if it’s just ChatGPT writing it in
response to that guy’s prompts, where is the harm?

In other words, even in the world of generative AI, there are still
humans in the loop, and thus liability can still be placed on the party
responsible for (1) creating the violative content, via their prompts,
and (2) spreading it, if they publish it more widely.

It still makes sense, then, for 230 to protect the AI tools.

Without that, what would AI developers do? How do you train an AI tool
to never get anything wrong in producing content? And, even if you had
some way to do that, wouldn’t that ruin many uses of AI? Lots of people
use AI to deliberately generate fiction. I keep hearing about writers
using it as a brainstorming tool. But if 230 doesn’t protect AI, then it
would be way too risky for any AI tool to even offer to create “fiction.”

Yes, generative AI feels new and scary. But again, this all feels like
an overreaction. The legal system today, including Section 230, seems
pretty well equipped to handle specific scenarios that people seem most
concerned about.

Filed Under: ai, algorithms, josh hawley, liability, richard blumenthal,
section 230