Topic: Legal

Legal side of Reputation Management

Lawsuit Against Ripoff Report Dropped After Discovery – Vision Security v. Xcentric (Guest Blog Post)

By guest bloggers Jeffrey J. Hunt and Rachel Lassig Wertheimer

[Eric’s introduction: this post is written by lawyers who represented Ripoff Report in one of their multitudinous lawsuits. Because the authors were also advocates in this case, you might assume this writeup is just another piece of advocacy. Even so, I’ve decided this post is worthwhile for two reasons. First, it helpfully recaps the complicated denouement to one of the more troubling recent rulings against Ripoff Report. Second, and more importantly to me, the denouement offers valuable insights into the wisdom of suing Ripoff Report. The way I interpret things, Vision Security got a favorable Section 230 ruling that enabled discovery. However, after discovery, Vision Security effectively abandoned its case. So what kind of message might this send to other plaintiffs thinking about suing Ripoff Report? You’re likely to lose on Section 230 grounds; but if you’re lucky enough to overcome that, you’ll spend a fair amount of money on discovery to possibly realize that the case still isn’t worth it. So although the denouement in this case isn’t a citable opinion, I think it’s useful “precedent” nonetheless.]

Last year, a Utah federal court dismissed a defamation case brought against the popular consumer-review website www.ripoffreport.com (“Ripoff Report”) after the plaintiff conceded that Ripoff Report was entitled to immunity from suit under the federal Communications Decency Act.

The impetus for the case was a post on Ripoff Report authored by a former sales representative (“sales rep”) of Vision Security, LLC, which marketed home security systems to consumers.  The sales rep claimed that Vision Security did not treat its employees fairly and engaged in deceptive sales practices.  Rather than sue the sales rep, Vision Security sued Xcentric Ventures, LLC (“Xcentric”), the company that operates Ripoff Report, for defamation and related claims, based upon the sales rep’s post.

Xcentric subsequently moved to dismiss the suit, citing Section 230 of the federal Communications Decency Act (“Section 230”), 47 U.S.C. § 230, which provides website providers like Xcentric immunity from suit for material posted on their websites by third-parties.  The United States District Court for the District of Utah, however, denied Xcentric’s motion.  The court noted that, under Tenth Circuit law, website providers may be held liable for content posted by a third-party if the provider “in some way specifically encourages development of what is offensive about the content.”  F.T.C. v. Accusearch Inc., 570 F.3d 1199 (10th Cir. 2009).  (The court’s ruling on the motion to dismiss was the subject of a previous article published on this blog: Eric Goldman, Another Tough Section 230 Ruling for Ripoff Report – Vision Security v. Xcentric, Tech. and Marketing L. Blog (Sep. 20, 2015).)

The court then determined that, assuming the truth of Vision Security’s allegations — as required at the motion to dismiss stage — there was a reasonable inference that Ripoff Report encouraged negative content, and therefore, Xcentric may not be entitled to Section 230 immunity.

First, the court pointed to some of the taglines on Ripoff Report, including “By Consumers, for consumers,” “Don’t let them get away with it.  Let the truth be known,” and “Complaints Reviews Scams Lawsuits Frauds Reported, File your review.  Consumers educating consumers.”  Second, Vision Security had alleged that the sales rep told Xcentric that the statements in his post were false and asked that the post be removed.  Third, Vision Security had alleged that Xcentric’s webmaster told Vision Security that positive posts about a company are not allowed and that under no circumstances will postings be removed from Ripoff Report.  Finally, the court pointed to allegations regarding Xcentric’s Corporate Advocacy Program where, allegedly for “a large fee,” a company with negative postings like Vision Security could “find a satisfactory solution” to offensive content posted about them on Ripoff Report.  In the court’s view, these allegations supported a reasonable inference that Xcentric “had an interest in, and encouraged, negative content” in order to promote its Corporate Advocacy Program.

After the court’s ruling, however, the parties engaged in discovery and Xcentric filed a Motion for Summary Judgment, asserting that there were no genuine issues of material fact that prevented the application of Section 230 immunity.  The crux of Xcentric’s argument was threefold: under Accusearch, a website provider only loses its Section 230 immunity for content posted by third parties if the provider “in some way specifically encourages development of what is offensive about the content,” 570 F.3d at 1199 (emphasis added); what Vision Security alleged was offensive about the content of the sales rep’s post was that it was false and defamatory; and there was no evidence that Xcentric in any way encouraged the sales rep (or other third-party users) to post false and defamatory statements on Ripoff Report.

In Accusearch, the offending content at issue was the confidential personal information of several individuals, including their telephone records, obtained from third parties and posted by Accusearch on its website.  See id. at 1199.  The court held that Accusearch was liable —stripped of its Section 230 immunity — because it “solicited requests for confidential information protected by law, paid researchers to find it, knew that the researchers were likely to use improper methods [to obtain the confidential information], and charged customers who wished the information to be disclosed.”  Id. at 1199, 1201.

In contrast, in its previous decision in Ben Ezra, Weinstein, & Co., Inc. v. Am. Online Inc., 206 F.3d 980, 983 (10th Cir. 2000), the court upheld America Online’s Section 230 immunity after it was sued for posting incorrect information regarding the plaintiff’s stock price and share volume purchased from a third-party vendor.  The “offending content” at issue in Ben Ezra was the inaccuracies in the stock price and share volume quotations, and, unlike in Accusearch, “America Online did not solicit the errors; indeed it sent the vendor emails requesting that it ‘correct the allegedly inaccurate information.’”  Accusearch, 570 F.3d at 1199 (quoting Ben Ezra, 206 F.3d at 985).  In other words, “America Online had done nothing to encourage what made the content offensive—its alleged inaccuracy…[and] was therefore not responsible for the offensive content.”  Accusearch, 570 F.3d at 1199-1200.

Xcentric’s summary judgment motion argued that, as in Ben Ezra and unlike in Accusearch, there was no evidence that Xcentric in any way encouraged the sales rep to post false or defamatory statements on Ripoff Report and, therefore, Xcentric was not responsible for the “offending content” at issue.  Instead, the uncontroverted evidence showed the opposite—that Xcentric makes every effort to ensure that third-party content posted on Ripoff Report is not false and defamatory.  Among other things, Xcentric requires users to agree — two separate times before posting a report — to post only information that is truthful and accurate.

Further, Vision Security’s allegations that Xcentric’s webmaster told Vision Security that positive posts about a company are not allowed and that Vision Security’s only option for addressing a negative report like the sales rep’s post was to pay a “large fee” to join the Corporate Advocacy Program were not true.  Email exchanges between Xcentric and Vision Security, uncovered during discovery, showed that Xcentric made it clear to Vision Security on multiple occasions, and prior to the filing of the complaint in this case, that positive posts about or from a business are not only allowed, but encouraged and free of charge.  And Xcentric’s owner specifically encouraged Vision Security to post a free rebuttal to the sales rep’s post, but Vision Security chose not to do so.

Likewise, Vision Security’s allegation that its only option for addressing a negative report like the sales rep’s post is to pay a “large fee” to join Xcentric’s Corporate Advocacy Program was also untrue.  The uncontroverted evidence submitted on summary judgment demonstrated that businesses have several options for addressing negative reports, including posting a positive (and free) rebuttal, suing the author of the report, and using Xcentric’s VIP Arbitration program.  If a company sues the author directly, it may attach any court findings or judgment as a rebuttal to a negative report (again, at no charge).  And where a court makes considered findings based on evidence, Xcentric may redact false statements from the report.

Under Xcentric’s VIP Arbitration program, for a minimal fee used largely to pay for expenses, an arbitrator reviews evidence provided by both parties and, if the arbitrator determines the report contains false statements of fact, Xcentric will redact those portions of the report.  Finally, Xcentric’s Corporate Advocacy Program (“CAP”) is also available for businesses who need and wish to fully rehabilitate their online reputation.  With Xcentric’s help, CAP members commit to resolve each and every complaint on Ripoff Report to the customer’s satisfaction, undergo an investigative review of their business operations to help them detect the potential source of complaints, and provide an explanation to Xcentric of the changes the business has made to its operations to avoid future complaints.

As for Ripoff Report’s taglines, Xcentric argued that, contrary to Vision Security’s allegations, they do not encourage third-party users to post false and defamatory content on Ripoff Report.  Instead, they discourage it. One of Ripoff Report’s taglines states “Don’t let them get away with it. Let the truth be known” — an obvious invitation to post truthful content — and another states “Consumers educating consumers,” indicating that only statements that will serve to educate other consumers, i.e., true and accurate statements, are welcome.

Finally, Xcentric’s summary judgment motion argued that its refusal to remove negative reports is not evidence it encouraged the sales rep (or any other third-party user) to post false and defamatory content on Ripoff Report.  As an initial matter, Vision Security failed to adduce any evidence supporting its claim that the sales rep contacted Xcentric and requested that his post be removed from Ripoff Report.  More importantly, however, Xcentric’s policy of refusing to remove negative reports — which also applies to CAP members — ensures that consumers are not bullied, threatened, coerced, or bribed into recanting and also ensures that consumers reviewing reports on Ripoff Report are able to review both negative reports and positive rebuttals and make their own determinations as to a business’s conduct and commitment to customer service.  (Additionally, the clear weight of authority, including Tenth Circuit law, holds that a website provider’s failure or refusal to remove content, even at the request of the author and even if the content is potentially defamatory, is the exercise of a traditional editorial function and does not strip the provider of its immunity under Section 230.  See, e.g., Shrader v. Beann, 503 F. Appx. 650 (10th Cir. 2012).)

In conclusion, Xcentric argued that not only were several of Vision Security’s allegations demonstrably false — as demonstrated by the uncontroverted evidence submitted on summary judgment — there was no evidence that Xcentric in any way encouraged the sales rep to post false or defamatory statements on Ripoff Report and, therefore, Xcentric was entitled to Section 230 immunity.

After Xcentric filed its summary judgment motion, Vision Security conceded in a stipulation filed by the parties that it “is not aware of any genuine issues of material fact that would prevent the application of Section 230 Immunity to Vision Security’s claims, thus requiring dismissal of such claims” and that “Xcentric is entitled to judgment as a matter of law.”

Subsequently, the Utah District Court entered an Order Granting Stipulated Motion for Summary Judgment, in which it stated:

Defendant filed a separate motion for summary judgment . . . which argued for dismissal of all Plaintiffs’ claims because they were barred by the immunity granted to providers of interactive computer services under 47 U.S.C. § 230.  Upon review of the motion, Plaintiffs concluded that there existed no genuine disputes of material fact pertaining to the application of this immunity, and therefore determined to stipulate to the entry of summary judgment.  (Docket No. 89 at 2).  Thus, Defendant is entitled to summary judgment under Fed. R. Civ. P. 56(a).

Accordingly, the court granted summary judgment for Xcentric on all of Vision Security’s claims and dismissed the case with prejudice.

At the end of the day, Ripoff Report and websites like it fulfill an important societal function by allowing individual consumers with few resources to have a voice and cast light on questionable business practices.  As previous courts have observed, that is precisely why Congress enacted Section 230 to provide such website operators immunity from suit.

Jeffrey J. Hunt and Rachel Lassig Wertheimer are shareholders at Parr Brown Gee & Loveless, Salt Lake City, Utah, and, along with Maria Crimi Speth at Jaburg Wilk, Phoenix, Arizona, represented Xcentric Ventures, LLC.


Source: Eric Goldman Legal

How Is Texting a Dick Pic Like Masturbating in a Person’s Presence? – State v. Decker

My apologies for the indelicate headline. If you’re reading this because you’re hoping for some salacious insights regarding sexting, dick pics or masturbation, this post will disappoint you. An obvious protip: taking advice from a law professor on such topics is meshuggeneh.

The Facts

MJ was 14. She babysat for a couple. Decker was a housemate of that couple and a Facebook friend of MJ. One night around 1am, MJ and Decker exchanged Facebook messages. To me, this is the key exchange:

Decker @ 12:55 a.m.: Ok we’ll imam [sic] finished what I just started before I said hey
MJ @ 12:58 a.m.: what do you mean? [smiley face emoji] [FN]
Decker @ 12:59 a.m.: Just kinda [sic] a nightly ritual to stress before sleep
MJ @ 1:00 a.m.: what is?
Decker @ 1:00 a.m.: What I do before I sleep every night
MJ @ 1:01 a.m.: well what do you do?
Decker @ 1:02 a.m.: It’s embarrassing kinda [sic]

[FN] This is another example of how a court textually characterizes an emoji and, in doing so, creates additional ambiguity and leaves out important information. I discuss this phenomenon in my Emojis and the Law paper.

MJ thought Decker meant smoking weed. It is implied, but not stated, that Decker was instead referring to masturbation before bedtime, although it’s also possible Decker meant that he sexted or engaged in online virtual sex with a partner before bedtime.

At 1:03 am, Decker sent MJ a dick pic. At 1:04, Decker sent this follow-up:

F–k nooopopooool sh-t
My bad damn dn
How do I delete damn
Sorry pops that was the phones fault
This g-d d-mn phone I’m so sorry was
chatting with an old friend sorry!!!!!.

A jury convicted Decker of fifth-degree criminal sexual conduct and indecent exposure. On the sexual conduct charge, the judge sentenced Decker to a year in prison with 10 months suspended (so a net prison sentence of 2 months assuming Decker met all of the conditions). Apparently the judge did not sentence Decker on the indecent exposure charge. The appeals court affirms.

The Appellate Court’s Analysis

Let’s look more closely at the sexual conduct conviction. The crime applies to defendants who “engage in masturbation or lewd exhibition of the genitals in the presence of a minor under the age of 16, knowing or having reason to know the minor is present.” As you can see, there are two references to “presence.” This raises the venerable Internet Law question about when virtual/online presence is equivalent to physical presence. The appellate court essentially ignores the decades of voluminous literature on Internet exceptionalism and the possible differences between physical and virtual presence.

Instead, the court conducts a typical, uninspired appellate analysis. The court references the legislative history:

Because the legislature has amended the law to include a subsection that does not require touching for a conviction of fifth-degree criminal sexual conduct, we may presume that it intended to expand the definition of conduct that may support a conviction of that offense. And that expansion supports a broader definition of “present.”

The court cites an analogous Supreme Court precedent:

This interpretation is consistent with the Minnesota Supreme Court’s decision in State v. Stevenson, 656 N.W.2d 235, 239 (Minn. 2003). In Stevenson, the supreme court interpreted the phrase in subsection 2, “in the presence of a minor,” to require “only . . . that the accused’s conduct be reasonably capable of being viewed by a minor.” Thus, the supreme court upheld the defendant’s conviction of attempted fifth-degree criminal sexual conduct based on his act of masturbating in a truck parked near a playground where children were playing, even though the children did not actually view the defendant’s conduct.

The court then briefly turns to policy:

Noncontact sex offenses with a child may act as a precursor to actual sexual contact or change a child’s views of sex and sexual relationships….Thus, the legislative policy that supports protecting children from an actor’s explicit sexual behaviors in their physical presence supports shielding them from such conduct in the virtual world as well.

The court seems uninterested in the possibility that Decker mistakenly sent MJ the message:

it is undisputed that Decker was aware of M.J.’s age based on his previous acquaintance with her and that he specifically directed his communication toward her, knowing that he was sending an explicit photo to a 14-year-old. His reaction immediately after sending the photo also shows that he was aware of what he had done.

The court adds that the technology’s operation seems to support its conclusion:

the photo was sent in the context of a continuing conversation when Decker and M.J. were both viewing their phones. And only one minute elapsed between when Decker took the photo and when it reached M.J.’s phone.

Though the lower court didn’t impose a separate sentence for the indecent exposure conviction, the court says these technology facts are sufficient to uphold that conviction too.

Implications

I’ll start with the obvious: sending dick pics is almost never a good idea. The opinion doesn’t say who was the alternate intended recipient of Decker’s dick pic and whether that person would have welcomed it. Even if so, as we’ve seen so many times, the recipient of voluntarily shared pornography can easily turn on the sender; or the recipient (or sender) can be hacked and the images can leak online anyway. And if the alternate intended recipient hadn’t explicitly requested the dick pic, it’s likely the recipient would have been nonplussed by its receipt *at best*.

The case is another reminder that you should ALWAYS check, double-check, and triple-check who you are messaging–ESPECIALLY when the message contains anything remotely sensitive. Most of us have had that awful pit-in-the-stomach feeling when we’ve replied to all with a snarky or confidential comment instead of sending a direct reply to the sender. We go through the five stages of grief very quickly, just as Decker’s 1:04 message signaled. As the modern maxim goes: “Dance Like No One is Watching; Email Like It May One Day Be Read Aloud in a Deposition.” Giving Decker the benefit of the doubt, he is guilty of being sloppy about juggling simultaneous Facebook conversations while circulating a highly sensitive image very late at night. The law professor says: this is “not recommended.”

We don’t have enough information to judge whether Decker really did make a mistake in the intended recipient of the dick pic. His 1:04 message, sent so quickly after sending the dick pic, provides some evidence that Decker did not mean to send MJ the dick pic. However, that could have been a pretext. As the court suggests, sending a dick pic to a minor can be part of grooming the minor for sex, especially in the context of the Decker-MJ Facebook chatter that was vaguely flirty. The jury presumably heard much more evidence about Decker’s relationship with MJ and possibly the intended recipient, and the jury’s conclusion suggests they didn’t believe Decker. That deserves some deference.

Still, I’m troubled by this ruling. First, if Decker really did accidentally misdirect the dick pic, he will be sitting in jail for 2 months for that mistake. That seems like a harsh outcome–if it was an accident. Second, he wasn’t prosecuted for disseminating pornography to minors, he was prosecuted for showing his penis in MJ’s “presence.” For me, equating the two crimes makes no sense. The exposure to a person in physical space is far more graphic than a dick pic online–at minimum, there is much more information (such as facial expressions) communicated in person than in a tightly cropped dick pic. Further, an in-person encounter contains the implied threat (even if unlikely) of potentially imminent violence and sexual abuse that are wholly lacking online. (This made me think of the Drahota and R.D. cases involving the physicality of “fighting words”). There are also the possibilities of awkward eye contact and uncertainty about how to physically retreat from the situation. These attributes would all be true for the person masturbating near a school, even if the kids don’t see him, so the court could have distinguished the Supreme Court precedent if it wanted to. I might understand if the court made a technological distinction between a Facebook live video (sent to a minor) and a static photo, but the court’s analysis would make no such distinction.

I’m not sure if Decker will appeal this case to the Minnesota Supreme Court, but it seems like a good case for such an appeal. At minimum, I would hope the Supreme Court would look more carefully at the policy implications of physical vs. virtual presence. We might have thought that issue had been worked out by 2017, but apparently more work needs to be done.

Case citation: State v. Decker, 2017 WL 1833239 (Minn. Ct. App. May 8, 2017).


Source: Eric Goldman Legal

Reminder: The FTC Punishes Influencers That Don’t Disclose

Hi there, social media influencer: noteworthy things are happening in the e-commerce world! Make note: 1) the FTC put celebrity endorsers on notice, and 2) Amazon is rolling out a new social media “influencer” program. In this post, we’ll summarize the events and then review a few social media marketing legal “don’ts.”

FTC to Social Media Celebrities: We’re Watching You

After a consumer watchdog group applied some pressure, the Federal Trade Commission sent letters to 90 celebrity social media influencers. To paraphrase the message: Stop being tricky with disclosures. Truth-in-advertising rules apply! It’s against regulations to disguise that you’re getting cash-money to hawk products.

According to FTC regulations, any person with a “material connection” to a given product must “clearly and conspicuously disclose relationships to brands” when promoting.

Don’t Try To Bury or Hide Disclosures

Hiding disclosures is also a no-no. Compliance requires that all declarations be made before the “more” button, to accommodate diminished screen real estate on cell phones.

The FTC’s action marks the first time the commission has directly reached out, with unsolicited guidance, to celebrity endorsers. So far, no enforcement measures have been taken. However, if any of the letter recipients continue to flout the guidelines, they’ll most likely be slapped with a gigantic fine.

When asked why it chose to focus on this issue, a spokesperson from the advocacy group explained:

“Instagram has become a Wild West of disguised advertising, targeting young people and especially young women. That’s not going to change unless the FTC makes clear that it aims to enforce the core principles of fair advertising law.”

Amazon’s Influencer Program: Do You Know The Rules?

In its manifest-destiny quest to claim all things retail, Amazon has lately been concentrating on fashion. And, like most style brands, the e-commerce behemoth is enlisting social media influencers to market and promote.

Still in its beta phase, the program is “invitation only” — and according to Amazon, participants don’t have a say in product selection.

So, who is Amazon asking to join this Amazon influencer promotional hive? According to reports, the company considered “various factors, including but not limited to number of followers on various social media platforms, engagement on posts, quality of content and level of relevancy for Amazon.com.”  Amazon was also sure to clarify that “[t]here is no set cut-off and influencers across all tiers and categories are represented in the program.”

Social Media Marketing Crib Sheet

So, what legal issues must Amazon influencers consider when promoting products? What disclosure tactics don’t pass FTC muster? Here’s a quick list.

  • Don’t bury disclosures in a long string of hashtags. The Federal Trade Commission considers it deceptive.
  • Don’t use #sp (for sponsored) or #partner as the only disclosures. They’re not clear enough.
  • Don’t use #Thanks [Brand] as a disclosure. The phrase does not meet FTC truth-in-advertising standards.

Click here for a more in-depth list of social media marketing dos-and-don’ts.

Contact An E-Commerce Business Consultant

If you’re an Amazon influencer or social media promoter with questions for an attorney who handles online marketing issues, get in touch. Our team has helped hundreds of online business entrepreneurs with everything from affiliate marketing contracts to FTC investigations. Our rates? Exceptionally reasonable. Our knowledge-bank? Invaluable. Let’s chat; we have the answers and know-how you need.

The post Reminder: The FTC Punishes Influencers That Don’t Disclose appeared first on Kelly / Warner Law | Defamation Law, Internet Law, Business Law.


Source: Kelly Warner Law

What the Bleep Do We Know? How the Use of Volunteer Moderators, LiveJournal, and the Saga of Bleeping Computer Continues to Shape the Internet

Social networking sites and online message boards are an integral part of the Internet. Those websites are able to provide unfettered forums for the exchange of ideas because they enjoy certain legal immunities. But, those immunities are eroding.

The Digital Millennium Copyright Act (DMCA) provides immunity from copyright infringement claims, and 47 U.S.C. § 230, commonly known as Section 230, applies to all other content. These laws essentially say that a service provider is not a “publisher” of third-party content on its website, and thus not liable for what its users do.

But, some recent cases have chipped away at this immunity. The Ninth Circuit Court of Appeals recently ruled that LiveJournal may not be protected by the DMCA safe harbor because LiveJournal uses volunteer moderators. The logic? The Ninth Circuit found that the district court gave short shrift to common law agency theory, and now the issue may be in the hands of a jury.

Similarly, the Southern District of New York denied a motion to dismiss in Enigma v. Bleeping last year, finding that the complaint over a “defamatory review” plausibly alleged that a volunteer moderator could be an implied agent of the website. The court entertained the idea that a volunteer “superuser” (who was not an employee) could be an agent of Bleeping Computer, which would have made Section 230’s protections inapplicable.

LiveJournal and Online Communities

One of LiveJournal’s more popular communities (with 52 million views per month) is a celebrity gossip group called “Oh No They Didn’t” (“ONTD”), and naturally, pictures of celebs abound on ONTD. This led LiveJournal to hire a paid moderator to work with the volunteer moderators for the group, to maximize advertising revenue.

Mavrix Photographs accused ONTD of posting their celebrity photos and sued LiveJournal directly for copyright infringement, under the theory that the volunteer moderators were agents of LiveJournal. Although the district court rejected this argument, the Ninth Circuit’s reversal is harsh.

Who Uses Moderators?

Here’s where it gets tricky. Many websites (Facebook included) use moderators to maintain cohesion within an internet community. Moderators are generally people in charge of managing comments for a website, and they are usually volunteers. Under the common law doctrine of agency, an “agent” is someone who is authorized to act on behalf of another, called a principal. Usually, an agent is a paid employee of a company, but a paycheck is not always the deciding factor.

In the Bleeping Computer case, the court seemed open to the notion that moderators who volunteered for Bleeping Computer could be implied agents.

These decisions mean that courts are now dissecting the distinctions between publishers, third parties, and agents, specifically focusing on websites that use moderators and volunteers to curate some of the content.

What Website Owners and Service Providers Need to Know

New businesses usually weigh risks and liability when developing a model. If a website allows a third party to publish content, there are two main areas of law service providers should be aware of: 1) intellectual property infringement and 2) defamation law.

Intellectual property infringement is when someone uses a copyright, patent, or trademark without permission. One example would be posting a photo of a model without permission from the photographer. Defamation is a statement or comment that damages someone’s reputation.

Section 230 protections versus DMCA “safe harbors”

Section 230 says that a service provider shall not be treated as the publisher or speaker of any information posted by a third party. This distinction between a “publisher” and a “service provider” took root in early defamation case law, which compared defamatory statements in newspapers with defamatory statements spoken on the telephone.

The courts surmised that since newspapers had editors, they exercised more control over the publication, thus newspapers could be held liable for a defamatory statement. Telephone companies, on the other hand, were immune from defamation liability because they were merely passive conduits for third-party expression. Section 230 codifies the spirit of this premise: website owners who allow users to generate content are usually mere conduits of expression, so they tend to be protected from defamation suits (and, via the DMCA, copyright infringement suits) related to third-party content.

If you have ever read a comments section on a website or used your thumb to scan tweets, you have probably come across lots of pictures of celebrities that likely infringe copyright and harsh comments that could damage someone’s reputation. Websites are generally immune from liability for these posts or comments because of § 230 and the DMCA § 512 safe harbors.

DMCA § 512 extended protections to service providers against copyright infringement liability by establishing “safe harbors,” as long as there are effective notice-and-takedown procedures and the service provider has no knowledge of the infringing materials.

Using a Moderator Throws a Wrench Into the Works

This line between who is a publisher and who is a “mere conduit of expression” seems neat and tidy on paper, but the internet by nature is not neat and tidy. If your website uses a moderator to curate content, that throws a wrench into this tidy classification.

In Mavrix Photographs, LLC v. LiveJournal, Inc., the Ninth Circuit held that the acts of LiveJournal’s moderators could be attributed to LiveJournal itself, because it was not clear whether the photos at issue were truly “posted at the direction of the user” or whether LiveJournal posted them. What is noteworthy is that the court did not specifically differentiate between the acts of the paid moderator and the volunteer moderators, so the decision could be read to mean that even websites with 100% volunteer moderators may be liable if the volunteers are deemed to be agents.

Takeaways

So what is the takeaway? Courts may be more willing to chip away at §230 and DMCA §512 protections when a website uses moderators, so website owners need to rethink their models. Because “moderator jurisprudence” is still developing, there are no black-and-white answers yet.

Here are a few points to consider:

  • The difference between a “publisher” and a “mere conduit of expression” should be the guidepost a service provider uses: is the website like a telephone company that lets people speak freely, or is it more like a newspaper, where editors (perhaps even a volunteer moderator) exercise some control?
  • If the service provider allows a volunteer moderator to curate content, the company should carefully analyze its relationship with the moderator. Does a company representative communicate with the moderator, or does the moderator have free rein to “moderate” as they see fit?
  • What do the website’s terms of use say about its moderators? Remember, courts will scrutinize the terms during litigation. Staying silent about moderators could be as damaging as language implying the moderators are agents or employees.
  • Is the website’s moderator a paid employee? Service providers and website owners may want to review their contracts with paid moderators and decide whether the additional ad revenue is worth losing §230 and DMCA protections.
  • If a service provider decides that having a moderator is worth the risk of losing §230 and DMCA protections, it is probably wise to invest in additional training for moderators, even if that means simply implementing new policies.

Regardless, anyone who runs a website with moderators and relies on §230 and DMCA protections, even if the moderators are volunteers, should review their current moderator policies. Courts have not looked kindly on websites that use moderators lately, so a little investment now beats a lawsuit later. Agency theory may be a new arrow in the plaintiff’s attorney’s quiver, and website operators must be careful about how they interact with moderators.

The post What the Bleep Do We Know? How the Use of Volunteer Moderators, LiveJournal, and the Saga of Bleeping Computer Continues to Shape the Internet appeared first on Randazza Legal Group.


Source: Marc Randazza

The “Pop-Up Keyboard” Defense: How Apps, Mobile Phones, and Terms of Use Work Together

Unlike the BlackBerry keyboards of yesteryear, your smartphone probably has a pop-up touch-screen keyboard. Recently, the Northern District of California took issue with this modern design and denied Uber’s motion to compel arbitration because a pop-up keyboard blocked Uber’s terms and conditions from view while a user registered for the app. This case is unique because it focused on how online terms of use actually appear on a touch-screen mobile phone.

Metter sued Uber over a $5 cancellation fee in a putative class action, and Uber filed a motion to compel arbitration pursuant to its terms of use. Metter presented four arguments for why he could not have agreed to the arbitration clause in the terms. The first three were classic contract-law arguments: 1) the alert on the screen was not clearly visible, 2) the alert was confusing and did not put him on notice of the arbitration clause, and 3) the alert was unenforceable because he did not assent to it (in other words, he claimed it was browsewrap).

The court set these arguments aside because Uber requires users to “click” to assent to the contract, making Uber’s terms clickwrap rather than browsewrap. Courts consider terms of service “browsewrap” when the terms are tucked away in a link at the bottom of a webpage; browsewrap is generally unenforceable because the consumer never “clicked” to agree. Clickwrap is the opposite: a box pops up and the consumer must click to agree. Courts usually find clickwrap enforceable because the consumer took a physical step, clicking the box, to agree to the terms.
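The clickwrap/browsewrap distinction reduces to a rough rule of thumb: affirmative assent to terms that were actually presented. A minimal TypeScript sketch (the interface and names are illustrative only, not a legal test):

```typescript
// Rough sketch of the clickwrap vs. browsewrap distinction.
// All names are illustrative; this is a heuristic, not a legal standard.
interface RegistrationAttempt {
  clickedAgree: boolean; // did the user take a physical step to assent?
  termsPresented: "footer-link" | "explicit-prompt";
}

// Courts are far more likely to enforce terms when assent was affirmative
// and the terms were actually put in front of the user (clickwrap).
// A passive link at the bottom of the page (browsewrap) usually fails.
function assentLikelyEnforceable(a: RegistrationAttempt): boolean {
  return a.clickedAgree && a.termsPresented === "explicit-prompt";
}
```

As the Metter opinion shows, even a design that satisfies this rule of thumb can fail if something on the screen keeps the prompt from ever being seen.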

Here, however, even though the court determined that Uber’s terms of service are clickwrap, it observed a different problem. Metter’s fourth argument, arguably a rather novel one, caught the court’s attention: when the keypad popped up for Metter to enter his credit card information, it blocked the terms of service alert, preventing him from seeing the alert and thus from agreeing to the terms. Because Metter never saw the terms, he could not have agreed to them, and the court denied Uber’s motion to compel arbitration.

Design Matters

The court is careful to note that this case does not mean that someone who fails to see or read the terms of use is off the hook. Here, the way the phone screen displayed the alert made the difference. Upon landing on the screen, the first box asks for credit card information, and the moment one touches that box, the keypad immediately appears. The terms of service alert sat at the bottom of the screen, so a consumer would have no reason to scroll down to see it before entering credit card information.

From the opinion:

“For one thing, Uber never explains why Metter would have scrolled down to find a terms of service alert he was not otherwise aware of, especially when the registration and payment screen neither instructed him to scroll down nor presented any reason for him to do so. Moreover, although it is true that the terms of service alert would have been visible to Metter when he first reached the payment and registration screen, it would have been obscured immediately when Metter pressed any field asking for his credit card information.”

The Court also notes that the screen seems designed to indicate that a consumer should enter their credit card information right away without scrolling down, because the credit card field is at the top and the terms of service are at the bottom.

“As these fields are at the top of the screen, and entry of payment information is one of the primary purposes of this page, the Uber app essentially prompts a user to enter his credit card information as soon as he reaches the payment and registration screen. As a result, an ordinary registrant will often be compelled to activate the pop-up keyboard and obscure the terms of service alert before having the time or wherewithal to identify other features of the screen, including the alert.”

Finally, the Court ends its analysis with the foundations of contract law, that no contract can exist if there is no acceptance of the contract.

“When such a registrant presses “REGISTER” without having seen the alert, he does so without inquiry notice of Uber’s terms of service and without understanding that registering is a manifestation of assent to those terms. Although the terms of service alert seems designed to put a registrant on inquiry notice of Uber’s terms of service and to alert the registrant that registration will amount to affirmative assent to those terms, the keypad obstruction is a fatal defect to the alert’s functioning.” (emphasis added.)

It’s interesting that the court relied on the keypad obstruction, since the pop-up keyboard is arguably a feature of the phone’s operating system rather than something within Uber’s direct control. Should lawyers start thinking like designers and advise clients on how to design apps that alert users to terms and conditions? This decision suggests they should.

How will the display of terms and conditions continue to change as our devices change – how will terms and conditions be displayed on smart watches or VR headsets? It depends on how the consumer interacts with the device to assent to the terms.

Native Apps versus Web-based Apps

An important distinction in this case, although the court does not mention it specifically, is that the functionality question likely turned on the fact that the Uber app is a native app rather than a web-based app. A native app is one a user downloads onto their phone. It is easy to incorporate clickwrap into a native app because a programmer can include an opening screen that requires the user to click to agree.

A web-based app is the mobile view of a website accessed through a phone’s browser. It is a little more difficult to incorporate clickwrap into a web-based app, since pop-up screens are not well suited to mobile internet browsers.

Here, the case involved the native Uber app, but the fatal flaw was that the terms of use alert sat at the bottom of the mobile screen. Uber should have expected users to trigger a pop-up keyboard while using the app, and could not have expected users to scroll down to view the alert before entering their credit card information.

Takeaways

Focusing on the phones of today for a moment, let’s pull out some key pointers on developing terms and conditions for apps, applying some of the wisdom from this opinion.

  • First, the terms and conditions alert should always be visible before a user clicks “register” or “agree.” Web designers should put the terms and conditions alert at the top of the screen – before the user enters their credit card information.
  • A marketer might push back on this idea, arguing that people will leave if forced to look at a terms of use agreement. But remember: when someone uses an app, the terms form a contract, so website and app owners will be better off in the long run if that contract is enforceable.
  • When developing web apps or native apps, the app owner should test the web app on an actual mobile phone (rather than a website that mimics the “look” of a mobile phone) because the owner must understand how a pop-up keyboard will interact with the web app.
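The keypad-obstruction problem above can even be checked mechanically during design review: given the on-screen rectangles of the terms alert and the pop-up keyboard, verify the alert is still visible before the user can tap “REGISTER.” A minimal TypeScript sketch (the geometry, coordinates, and names are assumptions for illustration, not anything from the opinion):

```typescript
// Screen rectangles in pixels; y grows downward, as on most displays.
interface Rect {
  x: number;      // left edge
  y: number;      // top edge
  width: number;
  height: number;
}

// True if the two rectangles overlap at all.
function overlaps(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.width &&
         b.x < a.x + a.width &&
         a.y < b.y + b.height &&
         b.y < a.y + a.height;
}

// Design-review check: with the keyboard up, is the terms-of-service
// alert still fully on screen and not covered by the keyboard?
function termsAlertVisible(alert: Rect, keyboard: Rect, screenHeight: number): boolean {
  const onScreen = alert.y >= 0 && alert.y + alert.height <= screenHeight;
  return onScreen && !overlaps(alert, keyboard);
}
```

On a layout like the one the Metter court described, an alert at the bottom of the screen and a keyboard occupying the lower half would fail this check, while an alert above the credit card fields would pass it.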

As modern technology evolves, lawyers, website owners, programmers, and designers must evolve with it. What has worked in the past as far as web design and modern day “contractual agreements” may not work on tomorrow’s hardware.

The post The “Pop-Up Keyboard” Defense: How Apps, Mobile Phones, and Terms of Use Work Together appeared first on Randazza Legal Group.


Source: Marc Randazza