Topic: Legal

Legal side of Reputation Management

Repeated Unwanted Emails to Politician’s Personal Email Address Can be Harassment–Hagedorn v. Cattani

This is a First Amendment retaliation case. Plaintiff alleges that local officials retaliated against her for exercising her First Amendment rights. One of the defendants is the mayor of the small village where plaintiff resided (Timberlake, “a small village of six- to seven-hundred residents”). Plaintiff believed that Timberlake was engaging in an “aggressive traffic ticketing campaign” and started digging into how the village was using traffic-ticket revenue.

In the course of her investigation, she engaged in surveillance of the village’s service garage to monitor the police chief’s activity. The two had an encounter that ended in plaintiff allegedly laughing, and yelling . . . and calling the police chief an idiot. She was charged with disorderly conduct but was acquitted by the magistrate, who found that “calling an officer an asshole is protected speech”.

Plaintiff also had email exchanges with the mayor for which she was charged with “telecommunications harassment”. The mayor initially forwarded emails sent to his official email address to his personal account and responded to those messages from his personal account. In 2013, the mayor began forwarding his emails to a different personal account, apparently because he could not respond using his official account. Plaintiff and the mayor corresponded via the mayor’s personal address until October 30, 2014, when the mayor said:

Effective immediately, [he would] no longer be viewing or responding to any mail sent to [his personal account].

He also instructed plaintiff to direct all communications to his official account.

Plaintiff did not comply and sent numerous emails to the mayor’s personal account. The bulk of these related to official business (public records requests, her thoughts on the police department). Two of the emails contained, as the court characterizes them, “personal attacks on [the mayor] and his family.” The mayor filed a complaint, saying he felt personally harassed, and the police found probable cause for “telecommunications harassment”. That charge was dismissed (it’s unclear why). After additional emails were sent, the mayor reported them, resulting in an additional charge. Between May and July of 2015, plaintiff sent numerous additional emails to the mayor’s personal account, ranging from public records requests (or emails relating to them), to a request to connect on LinkedIn, to emails purporting to compliment the mayor on his conduct as mayor.

Plaintiff went to trial on charges of telecommunications harassment and was acquitted. She then filed a Section 1983 lawsuit against the mayor and police chief, among others. The district court granted summary judgment, finding the charges were supported by probable cause and there was no retaliatory motive.

On appeal, the Sixth Circuit says it’s undisputed that plaintiff sent emails to the mayor’s personal email address after learning the mayor objected. The key question is whether plaintiff could be prosecuted under the statute. The court says yes.

The statute is not content-based and has been found by Ohio courts not to be overbroad. The Sixth Circuit agrees with defendants, rejecting plaintiff’s position that “the First Amendment allows her an uninhibited right to communicate with [the mayor] through channels he does not use in his official capacity . . . simply because he is a public official.” The court cites Rowan, a case that allowed residents to opt out of mailings based on the privacy interest residents have in their mailboxes, and finds it analogous:

A personal email account is the functional equivalent of a home mailbox. The state’s interest in protecting an individual’s privacy carries equal weight in both situations. For us to hold otherwise—and thus to endorse [plaintiff’s] conduct—“would tend to license a form of trespass.” . . . In the same way that [the mayor] could stop [plaintiff] from entering onto his property to share her views about his performance, he should also be able to keep her from sending unwanted messages to a personal email address.

The court also notes plaintiff had other channels available to express her criticism of the mayor, so she’s not being silenced. The court continues:

the implications of holding that [plaintiff] could not be prosecuted for telecommunications harassment are troubling. There would be no recourse for public officials harassed at home, on a personal phone line, or at a personal email account.

The court concludes that the statute effectively balances important speech and privacy interests.

__

We’ve seen a few unsolicited email advocacy cases. Pulte Homes, Inc. v. LiUNA and FTC v. Trudeau are two, and of course, there’s Intel v. Hamidi. This case is interesting because it involves a politician who had himself used his personal email address for the correspondence before the emails became unwanted.

The court’s mailbox analogy is rough at best. It does not take into account whose server the email actually resides on, or the fact that the mayor likely had an easy technical fix: blocking the particular sender. The analogy of a server to property—even if the server were the mayor’s rather than a service provider’s—is imperfect. It also fails to account for the fact that people often route their communications to multiple inboxes or services. A communication may be more or less intrusive depending on where it is received, and that is something the recipient (not the sender) often ultimately controls.
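
On the “easy technical fix” point: here is a minimal sketch of a sender-level block, assuming a plain local mbox mailbox and a hypothetical blocked address; real mail clients and servers (Gmail filters, Sieve rules, and the like) offer the same thing with a few clicks.

import mailbox

BLOCKED_SENDER = "unwanted.sender@example.com"  # hypothetical address

def drop_blocked_sender(path="Inbox.mbox"):
    """Delete any message from the blocked sender in a local mbox file."""
    inbox = mailbox.mbox(path)
    inbox.lock()
    try:
        for key, msg in list(inbox.items()):
            if BLOCKED_SENDER in msg.get("From", ""):
                inbox.remove(key)  # discard the unwanted message
        inbox.flush()
    finally:
        inbox.unlock()

if __name__ == "__main__":
    drop_blocked_sender()

The point is simply that the recipient already controls this channel; blocking a particular sender does not require a criminal statute.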

That a person running for public office complains about repeated, mostly work-related communications strikes me as bogus. It brings to mind US v. Cassidy, where the court said that repeated communications (albeit via Twitter) criticizing a public figure could not be prosecuted.

Finally, to the extent politicians can keep us out of their personal inboxes, I propose a reciprocal rule: no more political robocalls, emails, or texts!

Case citation: Hagedorn v. Cattani, No. 16-4254 (6th Cir. Nov. 7, 2017)

Statute: Ohio Rev. Code § 2917.21(A)(5)

Related posts:

Web-based Email Bombardment Campaign Does Not Amount to a Violation of the Computer Fraud and Abuse Act

New Jersey Appeals Court Reverses Anti-Harassment Order Based on Emails – E.L. v. R.L.M

 


Source: Eric Goldman Legal

YouTube Defeats Defamation Claim in ‘Remove-and-Relocate’ Case–Bartholomew v. YouTube

YouTube has been sued numerous times for “removing-and-relocating” videos it thinks were promoted by spam. When it does a remove-and-relocate, YouTube takes down the video, discloses at the original URL that “This video has been removed because its content violated YouTube’s Terms of Service” with a link to YouTube’s “Community Guidelines Tips” page, and then allows the reuploading of the video at a new URL. The relocation of the video kills the existing comments, resets the view counter, and breaks any inbound marketing links, so it can vex uploaders–enough to occasionally make them litigious.
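
To make the mechanics concrete, here is a purely illustrative sketch (not YouTube’s actual code or API; the field names and URL are hypothetical) of why a remove-and-relocate wipes out a video’s accumulated history: the notice stays at the old URL, while the re-uploaded copy starts over at a new one.

from dataclasses import dataclass, field
from typing import List
import uuid

REMOVAL_NOTICE = ("This video has been removed because its content "
                  "violated YouTube's Terms of Service")

@dataclass
class VideoRecord:
    url: str
    views: int = 0
    comments: List[str] = field(default_factory=list)
    notice: str = ""  # text shown at the URL once the video is removed

def remove_and_relocate(original: VideoRecord) -> VideoRecord:
    # The old URL keeps only the removal notice; the video, its comments,
    # and its view count no longer appear there.
    original.notice = REMOVAL_NOTICE
    # The re-uploaded copy lives at a brand-new URL and starts from zero,
    # so inbound links, comments, and view counts don't carry over.
    return VideoRecord(url=f"https://video.example/{uuid.uuid4().hex}")

That loss of history, rather than the loss of the video itself, is usually what makes uploaders litigious.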

Some of the legal friction comes from YouTube’s imprecise disclosure about the removal. In the cases where YouTube suspected spamming to promote the video, YouTube didn’t technically remove the video because of “its content.” I still don’t understand why YouTube didn’t immediately fix this language to make it more general. Despite the language’s imprecision, the litigant’s real beef typically is with YouTube’s decision to remove the video, not the disclosure about the removal, and I think YouTube should have the right to police its premises as it sees fit.

Bartholomew experienced a remove-and-relocate. She claimed that the link to the Community Guidelines Tips, which list various nefarious activities, defamed her. The court disagrees: “the central problem with Bartholomew’s analysis is her failure to allege how the hyperlink would be viewed as defaming her personally.”

Bartholomew referenced the Adelson v. Harris ruling as evidence that hyperlinks provide meaning by acting as citation/support for the associated claims. In other words, arguably the linker implicitly incorporates the linked content by reference. This prompts the court to explain what hyperlinks mean:

hyperlinks are used in so many different ways that it is practically impossible to describe their impact in any general way and the effect of each should be judged according to its own context. They can be used, for example, to refer the viewer to more specific information about the subject being discussed on the originating page. But, as here, a hyperlink can also be used to provide access to a webpage with information more general and less specific than was available on the originating page.

We believe an Internet user with a reasonable working knowledge of how internet hyperlinks work would have understood that the list on the Community Guideline Tips page is in fact general—that no one particular offense could be reasonably read to apply to Bartholomew’s video and that the categories applied to the many thousands of videos that YouTube might have had to remove for any number of reasons.

The court concludes “Bartholomew has provided no theory as to how the generalized statements on the Community Guideline Tips page were ever ascribed in any particular way to her….Nor do we believe that by linking Bartholomew’s URL to the Community Guideline Tips that YouTube has made the statements of and concerning her.”

(If you are a hardcore defamation geek, I encourage you to check out the court’s discussion of how “ejusdem generis” doesn’t help broaden the nefarious acts listed in the Community Guidelines to make any of them defamatory of Bartholomew).

YouTube’s statement “This video has been removed because its content violated YouTube’s Terms of Service” isn’t itself defamatory because YouTube’s terms of service are so long and expansive that a reader couldn’t assume she did anything condemnable.

YouTube will eventually win all of its remove-and-relocate cases, but they have proven harder and more expensive than I would have expected.

Case citation: Bartholomew v. YouTube, LLC, 2017 WL 4988177 (Cal. App. Ct. Nov. 2, 2017). Superior court ruling.

Related Posts:

YouTube Defeats Another Remove-and-Relocate Case–Darnaa v. Google
Google Loses Two Section 230(c)(2) Rulings–Spy Phone v. Google and Darnaa v. Google
Section 230 Protects YouTube’s Removal of User’s Videos–Lancaster v. Alphabet
YouTube Wins Another Case Over Removing And Relocating User Videos (re Lewis v. Google)
Can YouTube ‘Remove And Relocate’ User Videos Capriciously?–Darnaa v. Google
Section 230(c)(2) Gets No Luv From the Courts–Song Fi v. Google
Venue Clause in YouTube Terms of Service Upheld–Song Fi v. Google


Source: Eric Goldman Legal

Section 230(c)(2) Protects Anti-Malware Vendor–Enigma v. Malwarebytes

[It’s impossible to blog about Section 230 without reminding you that it remains highly imperiled.]

In 2009, the 9th Circuit ruled that Section 230(c)(2) protected Kaspersky from liability for blocking Zango’s software as adware. Since that ruling, we have seen relatively few lawsuits against anti-spam/anti-spyware/anti-virus software vendors. In effect, the Ninth Circuit’s ruling was broad and powerful enough to take the wind out of the plaintiffs’ sails.

Despite the adverse precedent, Enigma Software sued Malwarebytes because Malwarebytes had blocked Enigma as a threat. Enigma further alleged that Malwarebytes made this classification for improper purposes, including anti-competitive motivations. Malwarebytes defended on Section 230(c)(2) grounds. (For more background on the case, see a related ruling in Enigma v. Bleeping Computer).

The court concludes that the case is essentially identical to the Zango v. Kaspersky ruling from nearly a decade ago. Enigma’s efforts to plead around the safe harbor failed.

First, Enigma argued that Section 230(c)(2) applies only to material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” and that this language doesn’t cover malware classifications. The court disagrees, following Zango’s reading that the “otherwise objectionable” catch-all leaves providers room to decide for themselves what to block.

Second, Enigma argued that Malwarebytes did not make its classification in good faith. Section 230(c)(2)(A) requires filtering decisions be made in good faith, one of the primary reasons that Section 230(c)(2)(A) has faded as a useful safe harbor. However, Malwarebytes relied on Section 230(c)(2)(B), which protects the distribution of filtering decisions. The court says that subsection does not require good faith by the defendant.

Third, Enigma brought a Lanham Act false advertising claim against Malwarebytes, and it argued that the Lanham Act claim fits into the IP exception to Section 230(c). Courts have split on whether Lanham Act false advertising claims qualify for Section 230’s IP exception. Some courts have treated all Lanham Act claims as IP; others have (correctly IMO) recognized that false advertising claims aren’t IP claims, even if they are housed in an IP statute. This court sides with the camp that says Lanham Act false advertising claims aren’t IP.

[Note for SESTA geeks: I previously explained how the IP exclusion applies to both Section 230(c)(1) and 230(c)(2). The court doesn’t expressly say that, but it proceeds on that assumption. This means the other exclusions, including SESTA’s new exclusion for sex trafficking promotions, would apply to all of Section 230(c), not just (c)(1). The Manager’s Amendment to SESTA partially fixed this by excluding Section 230(c)(2)(A) from the new provisions. However, it inexplicably does not exclude Section 230(c)(2)(B), which–as this case highlights–remains an important defense for anti-spam/anti-spyware/anti-virus vendors. I cannot think of any good reason to exclude Section 230(c)(2)(A) but not Section 230(c)(2)(B) from SESTA. It’s a reminder that the Manager’s Amendment still has many frayed edges, in addition to its overall structural defects.]

I’ve read that Enigma will appeal this decision to the Ninth Circuit. Given how easy this case appears to be, I don’t think the odds are in their favor.

Case citation: Enigma Software Group USA LLC v. Malwarebytes Inc., 2017 WL 5153698 (N.D. Cal. Nov. 7, 2017)


Source: Eric Goldman Legal

Court Rejects Gossip Site’s Fair Use Defense–Barcroft v. Coed Media

This is a copyright lawsuit by owners of celebrity photos against a gossip and entertainment website. It’s noteworthy because it went to trial and the plaintiffs prevailed, but the damage award is modest.

The court finds that plaintiffs own or validly acquired various celebrity and public interest photos—in this case, photos of Salma Hayek, Amanda Bynes, Selena Gomez, Zooey Deschanel, and others. The defendant runs a range of pop culture websites that receive approximately four million unique users per month. Apparently, the defendant has lost money every month but one.

After receiving a cease-and-desist letter, plaintiffs and defendant engaged in licensing negotiations. At some point, a representative for plaintiffs apparently told defendant to “not worry about” the images referenced in the cease-and-desist letter. The licensing agreement never materialized. A year later, plaintiffs sued. Following a bench trial, the court finds in favor of plaintiffs.

Waiver: The court says plaintiffs did not waive their claims by saying “don’t worry about it”. A waiver has to be unequivocal, and the person who made the statement was not even aware of the cease-and-desist letter. Further, there was not a “meeting of the minds” about the essential terms of the waiver.

Fair use: The first factor cuts against defendant because it used the images for the same purpose for which they were originally intended (to promote a celebrity news site, or as clickbait). While the use may arguably be editorial, the court says this isn’t enough to constitute commentary on the work.

For instance, a news report about a video that has gone viral . . . might fairly display a screenshot or clip from that video to illustrate what all the fuss is about.

[cite to Konangataa, the ill-advised Facebook birthing video copyright case]. Here, the images were used as “illustrative aids” because they depicted the subjects described in the articles. The court similarly finds that use in a banner is not transformative.

The court says the second factor weighs slightly in favor of plaintiff. The third factor also weighs in favor of plaintiff: the images were mostly used in their entirety, and those that were cropped were cropped only to exclude the less relevant portions of the image.

Finally, the fourth factor also weighs in favor of plaintiff:

If [defendant’s] practice of using celebrity and human interest photographs without licensing were to become widespread, it is intuitive that the market for such images would diminish correspondingly . . .

The court finds in favor of plaintiff on its claims of infringement and rejects the fair use defense.

Damages: Plaintiffs sought actual damages for four of the images. They sought $2,680, pointing at the high end to the $8,000 People Magazine and the $3,400 TMZ paid for their images, and at the low end to the $255 paid by popsugar. The court finds popsugar more comparable but says it’s unclear whether popsugar’s $255 covered one image or several. Accordingly, the court awards a total of $255 for the four images.

As for statutory damages, the court finds that it has to consider the expenses saved by defendant, the revenue lost by plaintiffs, and the conduct of the parties. The court says defendant had no real processes to prevent infringement, and one of its employees testified as to the propriety of posting screenshots or blurred images that linked through to the source. On the other hand, plaintiffs were also found to be lax about enforcing their copyrights. The court says the appropriate damages figure for each image is $750 or five times the licensing fee, whichever is greater.

The court fixes damages as follows: the Hayek image ($875); the Gomez image ($1,500); the Michele image ($6,000); the Deschanel and Loughrey images ($750 each); and the Horrocks images ($750).
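
For readers who want to check the arithmetic, here is a small sketch applying the court’s stated formula (the $750 floor or five times the licensing fee, whichever is greater). The per-image licensing fees below are back-calculated from the awards and are illustrative assumptions, not figures from the opinion.

# Court's stated rule: statutory damages = max($750, 5 x licensing fee).
# The licensing fees below are back-calculated from the awards, so they are
# illustrative assumptions rather than figures taken from the opinion.
STATUTORY_FLOOR = 750
MULTIPLIER = 5

assumed_fees = {
    "Hayek image": 175,       # 5 x 175 = 875
    "Gomez image": 300,       # 5 x 300 = 1,500
    "Michele image": 1200,    # 5 x 1,200 = 6,000
    "Deschanel image": 100,   # below the floor, so the $750 minimum applies
    "Loughrey image": 100,
    "Horrocks images": 100,
}

def statutory_award(fee):
    return max(STATUTORY_FLOOR, MULTIPLIER * fee)

statutory_total = sum(statutory_award(fee) for fee in assumed_fees.values())
total_recovery = statutory_total + 255  # plus the $255 actual-damages award
print(statutory_total, total_recovery)  # 10625 10880, matching the total noted below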

The court finally says it’s skeptical of plaintiffs’ request for fees, but invites them to submit a fee request.

__

This is a notable ruling in a few respects.

First off, it deals with viral content. I most recently blogged about the “Bold Guy Versus Parkour Girl” video dispute, and before that the Jukin Media dispute, both linked below. Eric blogged about the Facebook birthing video among others. This dispute offers an interesting look at that ecosystem.

It’s also relevant because it deals with fair use, an issue that courts have increasingly resolved on the merits. While this one was resolved at trial, it’s one to add to the list. The court’s cite to Konangataa is worth flagging: it shows that courts will entertain a fair use argument for using portions of a viral video to illustrate “what the fuss is about”, but merely using the content for its viral value is unlikely to move the needle in favor of fair use.

Finally, and perhaps most importantly, it looks closely at a claim for damages and offers a definitive take on both statutory and actual damages in the context of this type of media. Perhaps it’s owing to a failure of proof on plaintiffs’ part, but the overall recovery ($10,880) is not a substantial amount. In fact, in some jurisdictions it’s at or near the small claims threshold. Given that plaintiffs are not likely to get a fee award, you wonder about the cost-benefit of this litigation from the plaintiffs’ side. Then again, it took a trial on the merits to get to this stage, and often the threat of litigation alone is sufficient to cause a defendant in this position to settle. A copyright small claims court is an idea that’s often floated; this litigation is a good data point in its favor.

Case citation: Barcroft Media v. Coed Media Group, 2017 U.S. Dist. LEXIS 182024 (S.D.N.Y. Nov. 2, 2017)

Related posts:

Ill-Advised Copyright Lawsuit Over Facebook Live Video Becomes Costly For Plaintiff–Konangataa v. ABC

‘Reaction’ Video Protected By Fair Use–Hosseinzadeh v. Klein

Appropriation Artist Can’t Win Fair Use Defense on Motion to Dismiss–Graham v. Prince

Commenting on Viral Video Is Fair Use–Equals Three v. Jukin Media

9th Circuit Sides With Fair Use in Dancing Baby Takedown Case

Use of Iconic 9-11 Photo in TV Show’s Facebook Stream Not Fair Use

Top 10 Fair Use Cases of 2014 (Guest Blog Post)

Fair Use Protects Sending Expert Witness’ Resume to Opposing Counsel–Devil’s Advocate v. Zurich Insurance

Copying Blogger’s Posts In Disciplinary Proceeding Is Fair Use–Denison v. Larkin

Fair Use Likely Protects Discussion of Blog Post and Comments

Another Blogger Wins a Fair Use Defense For a Photo–Leveyfilm v. Fox Sports


Source: Eric Goldman Legal

How SESTA Undermines Section 230’s Good Samaritan Provisions

The following is my response to Questions for the Record submitted by Sen. Cortez Masto. Given that she has already co-sponsored SESTA following the Manager’s Amendment and IA’s flip, my response may be too late to matter (not that it would have necessarily mattered anyway). Still, it’s the first time I’ve proposed language to “fix” SESTA, so it’s worth the quick read. You can also read it in PDF.

* * *

Answers to Questions for the Record
Regarding S. 1693, the Stop Enabling Sex Traffickers Act of 2017

Submitted by Prof. Eric Goldman
Santa Clara University School of Law
November 6, 2017

I am responding to the following questions from Sen. Cortez Masto:

Do you interpret the current provisions in SESTA as wiping out Good Samaritan protections? If so, how can we amend the legislation to ensure the proposed changes to the CDA do not override Section 230(c)(2)(A) protections?

I appreciate the opportunity to explain Section 230’s Good Samaritan mechanisms and how SESTA undermines them. The Manager’s Amendment dated November 3, 2017 attempted to address this issue, but I don’t think it accomplished its goal.

How Section 230 Currently Protects Good Samaritan Efforts

I believe Congress wants online services to voluntarily undertake efforts to block or remove third party promotions for sex trafficking and other illegal or objectionable third party content. I’ll call these efforts “content moderation.”

Content moderation takes a nearly infinite variety of forms. Content moderation includes initial decisions to publish or not, as well as post-publication decisions to remove or not remove the content. Content moderation can be manual or automated, and post-publication decisions may be prompted by third party notifications (such as takedown requests) or the online service’s own diligence or monitoring efforts.

47 U.S.C. § 230(c) is captioned “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” Both parts of Section 230(c) support this goal. Section 230(c)(1) provides an immunity for publishing third party content, including both its initial decision to publish and any subsequent decision not to remove content. I’ll call these “Publication” decisions. Section 230(c)(2) provides a safe harbor for refusing to publish third party content or subsequently removing third party content. I’ll call these “Removal” decisions. Between the two subsections, Section 230(c) currently protects the full range of content moderation efforts.

How SESTA Undermines Section 230’s Good Samaritan Protection

SESTA enables online services to be sued or prosecuted for sex trafficking promotions that third parties publish through their service. Online services will be reluctant to undertake content moderation efforts if they face liability for any sex trafficking promotions that slip through, i.e., if they miss a promotion, review a promotion but make a mistake, or take too long to find or remove a promotion.

The Manager’s Amendment preserves Section 230(c)(2)’s protection for Removal decisions. However, this won’t encourage Good Samaritan efforts because: (1) online services don’t fear being sued or prosecuted for what they remove (and such risks usually can be ameliorated by the online service’s contract with the third party users-publishers); (2) Section 230(c)(2)’s “good faith” requirement undercuts the safe harbor’s availability, and it substantially increases defense costs because judges may enable wide-ranging discovery into defendants’ “good faith”; and (3) online services may abandon their content moderation efforts entirely rather than risk being charged with knowledge of content they didn’t catch.

Instead, SESTA effectively exposes online services to liability only for third party content that they publish online or don’t remove quickly enough. This means online services principally need immunity for their Publication decisions, not their Removal decisions. Section 230(c)(1)—not (c)(2)—provides the applicable immunity for content Publication. Thus, by curtailing Section 230(c)(1), SESTA removes the primary protection that online services rely upon when doing Good Samaritan content moderation against sex trafficking promotions (and all other objectionable content).

Proposed Language to Incorporate Good Samaritan Protections into SESTA

If Congress wants to ensure that online services continue to combat sex trafficking promotions, I recommend saying so explicitly. To do this, I propose SESTA add a new Section 230(g) to make it clear that Good Samaritan efforts should not be punished:

The fact that a provider or user of an interactive computer service has undertaken any efforts (including monitoring and filtering) to identify, restrict access to, or remove, material it considers objectionable shall not be considered in determining its liability for any material that it has not removed or restricted access to.

Alternatively, with some wording changes, this language could be incorporated into Section 230(c)(2)(A).

More SESTA-Related Posts:

Manager’s Amendment for SESTA Slightly Improves a Still-Terrible Bill
Another Human Trafficking Expert Raises Concerns About SESTA (Guest Blog Post)
Another SESTA Linkwrap (Week of October 30)
Recent SESTA Developments (A Linkwrap)
Section 230’s Applicability to ‘Inconsistent’ State Laws (Guest Blog Post)
An Overview of Congress’ Pending Legislation on Sex Trafficking (Guest Blog Post)
The DOJ’s Busts of MyRedbook & Rentboy Show How Backpage Might Be Prosecuted (Guest Blog Post)
Problems With SESTA’s Retroactivity Provision (Guest Blog Post)
My Senate Testimony on SESTA + SESTA Hearing Linkwrap
Debunking Some Myths About Section 230 and Sex Trafficking (Guest Blog Post)
Congress Is About To Ruin Its Online Free Speech Masterpiece (Cross-Post)
Backpage Executives Must Face Money Laundering Charges Despite Section 230–People v. Ferrer
How Section 230 Helps Sex Trafficking Victims (and SESTA Would Hurt Them) (guest blog post)
Sen. Portman Says SESTA Doesn’t Affect the Good Samaritan Defense. He’s Wrong
Senate’s “Stop Enabling Sex Traffickers Act of 2017”–and Section 230’s Imminent Evisceration
The “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” Bill Would Be Bad News for Section 230
WARNING: Draft “No Immunity for Sex Traffickers Online Act” Bill Poses Major Threat to Section 230
The Implications of Excluding State Crimes from 47 U.S.C. § 230’s Immunity


Source: Eric Goldman Legal