Esports Visas: An Introduction to P-1 Visa Legal Issues For Professional Gamers

In the not too distant past, Asia was all about esports; but in the U.S., professional gaming was the purview of Pepper Brooks and the gang over at ESPN 8, “The Ocho.”

Lately, however, esports is surging in the United States. Companies are sponsoring tournaments with sizable purses; leagues like the NBA and NFL are forming corresponding esports clubs. Proof: Prizes at the 2016 International Dota 2 Championship weighed in at $20 million — almost double the total payout of The Masters golf tournament.  In short, the professional gaming economy is cruising upwards, at warp speed.

But something is vexing international esports athletes: Work visas to compete in U.S. esports tournaments.

Esports Visas in the United States: P-1

Esports continues to exist in a legal gray zone. For example: Are esports professionals considered “athletes,” and can they qualify for P-1 visas (which are needed to legally participate in stateside esports tournaments)? The process of acquiring a P-1 visa is not altogether difficult, but it is inconsistent. For example, there have been stories of esports athletes who were green-lit for 2016 tournaments and then denied in 2017. There was also chatter about someone who was accepted in April for a tournament, and then rejected in August for another.

The P-1 visa system is unpredictable and inconsistent, which makes competing in the US difficult for many players.  Visa applications can take months to be approved. Moreover, since there is no hard-and-fast rule on whether or not a professional gamer qualifies as an athlete, the decision sometimes falls to the opinion of one government worker.

Proving “non-immigrant intent” is the first hurdle to securing a P-1 visa. Applicants must demonstrate that they have permanent employment, relevant business or financial connections, or familial ties in their country. Why? Because officials want to make sure that P-1 participants go home after their visas expire.

Visas For Professional Game Players: Esports v. Chess Community

The situation faced by many esports athletes got us thinking: How does it work for chess tournaments?  After all, the US hosts many chess events, with participants from every corner of the globe.

We discovered that the US Chess Federation offers invitations to foreign players, which streamlines the process of acquiring a P-1 visa.

Video game players, on the other hand, have to get a US employer to obtain an approved P-1 petition from United States Citizenship and Immigration Services (USCIS). Once officials approve the request, the player can then apply for a P-1 visa. Going through this paperwork process, multiple times a year, to compete in a handful of tournaments can be exceptionally frustrating.

With the esports industry skyrocketing, both at home and abroad — not to mention rising viewership — the industry must figure out a way to make it easier for the world’s best video game players to compete on the US stage. If not, U.S. esports programs may never become contenders.

***

Kelly / Warner works with esports athletes and teams on various business and legal issues — including esports visas, contract negotiations, and other business logistics. Questions? Please get in touch.

Article Sources

New, C. (2017, May 18). Immigration In Esports: Do Gamers Count As Athletes? Retrieved June 20, 2017, from https://www.forbes.com/sites/allabouttherupees/2017/05/18/immigration-in-esports-do-gamers-count-as-athletes/#5fae6a03468e



Source: Kelly Warner Law

Digital Insights Expert


Digital Insights Expert

Saskatoon, SK, Canada

Our growing Digital Advertising team is looking for a Digital Insights Expert. This role will manage analytics by tracking, measuring, analyzing, and reporting insights on campaigns. The Digital Insights Expert will be required to understand the goals and objectives of the client and implement a campaign strategy for the client’s success. This position will be in direct communication with clients, partners, and the internal teams. Customer service is at the core of our business, so this candidate will need to have amazing communication skills, experience in Digital Advertising, and the ability to converse with the client to understand, and sometimes help develop, their goals.

Skills and Qualifications:

  • 2 years of Google Analytics experience
  • Certification in Google Analytics and/or Google AdWords
  • Understanding of Google Tag Manager
  • Understand website optimization, testing and targeting
  • Desire to work in a small team and achieve results
  • Very strong computer and internet skills
  • Ability to work independently and quickly, but precisely and meet all deadlines
  • Creativity is a plus; an analytical mindset is a must
  • Experience working in a fast-paced environment and the ability to manage multiple accounts simultaneously

Responsibilities:

  • Audit, implement and optimize Google Analytics installations on both internal landing pages and external advertiser websites.
  • Develop optimization strategies used in campaigns.
  • Determine, based on data, which advertising tactics (Facebook, AdWords, etc.) are working and which are not.
  • Monitor campaign effectiveness and make recommendations to customers that result in effective advertiser solutions
  • Interpret digital campaign reports and account performance data.
  • Develop and provide customer focused communications on campaign performance including, effectiveness, targeting and strategy.
  • Liaise with web production, sales, marketing and analytics regarding planning and prioritization to assist in execution of the campaigns
  • Work collaboratively with the sales team to recommend campaign modifications for optimal results
  • Cultivate positive professional relationships with clients, vendors and internal teams

Why Vendasta

Vendasta’s platform empowers agencies and media companies to grow their sales of marketing solutions for small and medium-sized businesses. Our system identifies hot leads who are interested in the products you offer and allows you to provide scalable tools at the right price and service model when businesses are ready to buy.

Learn more about Vendasta



Source: Vendasta

Ban on Sex Offenders Using Social Media Violates First Amendment–Packingham v. North Carolina

Yesterday, the Supreme Court struck down a North Carolina law that banned registered sex offenders from using social media sites. It’s a rare treat to get a Supreme Court opinion delving into Internet content regulations, and as a bonus, this case enthusiastically embraces Internet exceptionalism. As Justice Alito’s concurrence says plainly, “Cyberspace is different from the physical world.” Whoa!

The Court’s Ruling

North Carolina G.S. §14–202.5 makes it a felony for registered sex offenders to “access a commercial social networking Web site where the sex offender knows that the site permits minor children to become members or to create or maintain personal Web pages.” The definition of “commercial social networking” site has four attributes:

* the site operator derives revenue (including ad revenue)
* the site “facilitates the social introduction” of people
* site users can create web pages or personal profiles
* the site provides users a mechanism to communicate with each other

The law excludes sites that (a) provide “only one of the following discrete services: photo-sharing, electronic mail, instant messenger, or chat room or message board platform,” or (b) have as their “primary purpose the facilitation of commercial transactions involving goods or services between [their] members or visitors.”

The law applies to 20,000 North Carolinians, and the state has prosecuted over 1,000 violators.

The Supreme Court unanimously declares the NC statute unconstitutional. Justice Kennedy wrote a five-judge majority opinion. Justice Alito wrote a three-judge concurrence.

Justice Kennedy says the law couldn’t survive intermediate scrutiny (if that’s even applicable instead of strict scrutiny) because it’s “unprecedented in the scope of First Amendment speech it burdens.” The state argued that the law was necessary to protect against sex offender recidivism, but the court says the law is much more restrictive than an analogous buffer zone in physical space.

Justice Alito criticizes Justice Kennedy’s opinion because of its “undisciplined dicta” and “unnecessary rhetoric.” He prefers analyzing the law as a content-neutral “time/place/manner” restriction. The government has a compelling interest in protecting children from sex predators, but the restrictions are overbroad:

[the law’s] wide sweep precludes access to a large number of websites that are most unlikely to facilitate the commission of a sex crime against a child….the North Carolina law has a very broad reach and covers websites that are ill suited for use in stalking or abusing children.

Implications

* Social media lovefest. Raise your hand if you love social media. The Supreme Court agrees with you!

The majority opinion celebrates the ascendancy of social media sites in our society. Justice Kennedy asks what are “the most important places (in a spatial sense) for the exchange of views”? He answers: “today the answer is clear. It is cyberspace—the ‘vast democratic forums of the Internet’—in general and social media in particular.” This implies that social media sites have joined, or even supplanted, such traditional public fora as streets and parks.

Justice Kennedy’s opinion also endorses the role of social media sites as vital information resources. Echoing one of the remarkable statements from Reno v. ACLU, Justice Kennedy says: “social media users employ these websites to engage in a wide array of protected First Amendment activity on topics ‘as diverse as human thought.’” He continues:

Social media allows users to gain access to information and communicate with one another about it on any subject that might come to mind. By prohibiting sex offenders from using those websites, North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.

This is plainly true, and it’s great to see the Supreme Court recognize how important social media has become. However, does this passage signal that access to social media sites has become a fundamental right? In other words, if social media sites are (superior?) substitutes to streets and parks, should citizens be equally guaranteed the right to access them? In general, privately operated websites aren’t considered public fora for First Amendment purposes (though government-operated social media accounts may be–more on that in a moment). This passage might encourage courts to think about these questions more holistically.

* Internet Exceptionalism. Just like the Supreme Court did in 1997 in Reno v. ACLU, the judges embrace Internet exceptionalism (see, e.g., Justice Alito’s “Cyberspace is different from the physical world”). However, the majority and concurrence disagree about whether the Internet’s differences are good or bad.

Like Justice Stevens did in 1997, Justice Kennedy emphasizes the Internet’s positive aspects:

While we now may be coming to the realization that the Cyber Age is a revolution of historic proportions, we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be. The forces and directions of the Internet are so new, so protean, and so far reaching that courts must be conscious that what they say today might be obsolete tomorrow.

In contrast, Justice Alito emphasizes how the Internet can uniquely facilitate sexual predation:

Several factors make the internet a powerful tool for the would-be child abuser. First, children often use the internet in a way that gives offenders easy access to their personal information—by, for example, communicating with strangers and allowing sites to disclose their location. Second, the internet provides previously unavailable ways of communicating with, stalking, and ultimately abusing children. An abuser can create a false profile that misrepresents the abuser’s age and gender. The abuser can lure the minor into engaging in sexual conversations, sending explicit photos, or even meeting in person. And an abuser can use a child’s location posts on the internet to determine the pattern of the child’s day-to-day activities—and even the child’s location at a given moment….

First, it is easier for parents to monitor the physical locations that their children visit and the individuals with whom they speak in person than it is to monitor their internet use. Second, if a sex offender is seen approaching children or loitering in a place frequented by children, this conduct may be observed by parents, teachers, or others. Third, the internet offers an unprecedented degree of anonymity and easily permits a would-be molester to assume a false identity.

(BTW, Justice Alito makes these factual assertions without any empirical support. Indeed, I think Justice Alito overstates these factors, in some cases by a lot).

Justice Kennedy partially retorts to Justice Alito by noting that the Internet, like all new technologies, can be used for both good and evil: “For centuries now, inventions heralded as advances in human progress have been exploited by the criminal mind. New technologies, all too soon, can become instruments used to commit serious crimes. The railroad is one example. So it will be with the Internet and social media.”

* What is “social media”? I’ve repeatedly complained that the term “social media,” as a subcomponent of the Internet generally, cannot be defined rigorously. As Justice Alito admits: “it is not easy to provide a precise definition of a ‘social media’ site.” Statutes that have attempted to define “social media” are fatally ambiguous or over-inclusive (or both). For example, California’s employee social media privacy law defined “social media” as all electronic content, both online and off, i.e., including files on a user’s non-networked hard drive.

This opinion highlights the illogic of trying to segregate social media from the rest of the Internet. Justice Alito shows how the NC statute unintentionally (?) swept in Amazon.com’s review functionality, Washington Post’s comment section, and WebMD (I believe these examples came from amicus briefs).

In my Internet Law casebook, I include People v. Lopez, where the court found that the definition of “social media sites” (in the context of probation conditions) was “reasonably certain.” After this opinion, will courts be as confident about the precision of any definition of “social media”?

* What will happen to other states’ sex offender restriction laws? Many other states have restricted registered sex offenders from using social media sites, though the specific laws differ. How does this ruling affect those laws?

Justice Kennedy says that states should be able to “enact specific, narrowly tailored laws that prohibit a sex offender from engaging in conduct that often presages a sexual crime, like contacting a minor or using a website to gather information about a minor.”

Justice Alito expresses skepticism about this possibility. He says: “if the entirety of the internet or even just ‘social media’ sites are the 21st century equivalent of public streets and parks, then States may have little ability to restrict the sites that may be visited by even the most dangerous sex offenders.” I think Justice Alito is overly pessimistic. States will keep trying to ban sex offenders from social media, and the Supreme Court will surely endorse some of those efforts.

The bigger problem is enforceability of any laws that pass Justice Kennedy’s standard. Imagine a state bans a registered sex offender from attempting to communicate with a person known to be a minor, either in a private message or in a message directed to the minor (like a tagged message on Facebook or a Twitter @reply). I think this law would survive constitutional scrutiny, but how will the state proactively monitor compliance? If it’s a private message or the user’s account is private, state enforcers won’t see it at all (unless they log into the offender’s account); and even if the state enforcer sees the tweet or tagged Facebook post, it will take additional work to determine whether the message was sent to a minor instead of an adult (assuming the age information is even determinable). So the state’s job of monitoring possible sex offender recidivism will become more difficult.

[Historical note: the Reno v. ACLU case had a concurrence/dissent by Justice O’Connor that talked about the possible permissibility of “Internet zoning” and the differences between one-to-one online communications and online messages that are broadcast to the world. The discourse in this opinion brought to mind both of these issues, but sadly neither opinion discussed Justice O’Connor’s opinion.]

* Will this opinion affect probation conditions? As illustrated by the Lopez case, judges routinely restrict Internet usage in probation conditions. (I recently blogged about this issue in the In re Mike H. case). Does this opinion signal possible limits to such probation conditions? Justice Kennedy says it’s “troubling” that NC’s law “imposes severe restrictions on persons who already have served their sentence and are no longer subject to the supervision of the criminal justice system” (emphasis added). The italicized language suggests that Justice Kennedy did not want to reach the probation conditions issue. However, he also says: “It is unsettling to suggest that only a limited set of websites can be used even by persons who have completed their sentences. Even convicted criminals—and in some instances especially convicted criminals—might receive legitimate benefits from these means for access to the world of ideas, in particular if they seek to reform and to pursue lawful and rewarding lives.” Thus, I think this opinion provides substantial grounds to carve back or eliminate probation conditions restricting social media and Internet usage, and I expect we’ll see a groundswell of cases in the area.

* Can Trump block Twitter followers? Our president has blocked numerous followers from his @RealDonaldTrump Twitter account (which I suggest he rename to @ReallyThinSkinnedDonaldTrump). See a selection of the blockees. Is it legal for him, as a government official, to block his constituents from following him? If the account were his personal account, the answer might be yes; but even the White House has admitted that @RealDonaldTrump is an official account. And blocking a follower can suppress their speech; at minimum, it blocks an @reply from showing up in the message thread. Thus, if a Twitter account is a limited public forum, blocking accounts will likely violate the users’ free speech rights.

This opinion doesn’t directly address the issue, but it does say that “on Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose.” As part of discussing social media’s ascendancy as the modern quintessential public forum, the court is signaling that suppressing constituent speech on social media would be impermissible censorship. I imagine the lawsuit challenging Trump’s Twitter blocking will highlight this passage.

Venkat’s comments: I love the Court’s rhetoric around the vast possibilities made available by sites such as Facebook, Twitter, and LinkedIn. Justice Kennedy talks about the “nature of revolution in thought” unfolding on Facebook, LinkedIn, and Twitter. *Record Scratch* *Freeze Frame* As always, judges are a bit behind the times, and this is no exception.

The majority opinion is imbued with forum language. I have not re-read Reno v. ACLU recently, but to the extent this is new, it’s noteworthy. Perhaps the Court’s language has embraced forum terminology as social networks have evolved? Appeals courts have not definitively addressed the status of public Facebook pages as fora, and this language suggests that they could be limited public forums at the very least. The same goes for the President’s Twitter feed! (As Eric notes, the majority opinion has some language that’s relevant to the question of whether the President may block Twitter users.)

A fun exercise when reading a ruling like this is to think about ways the legislature could accomplish the result that the judges thought was appropriate. The law got derailed by its overly broad definition of social networking site, and both sets of judges were troubled that the definition could include sites such as Amazon, the Washington Post, and WebMD.

Case citation: Packingham v. North Carolina, No. 15–1194 (U.S. Sup. Ct. June 19, 2017).


Source: Eric Goldman Legal

5 Kinds of Facebook Content for Your Bowling Alley

Social media marketing tips to grow your business

If you aren’t using Facebook for your bowling alley, now is the time to start! Every day, consumers are looking online for a local business just like yours. At Main Street Hub, we’re here to help you stand out among the competition and discover your online community.

By keeping your business’ Facebook page full of exciting content, you will be able to get new customers in the door, build relationships with your existing customer base, and stay top-of-mind with the bowling enthusiasts in your area.

Don’t have time to spare for social media marketing? Let Main Street Hub do it for you!

Here are 5 kinds of Facebook content to strike up a conversation with your online community:

1. Behind the Scenes Photos

Your Facebook content has the power to get your followers more familiar with and excited about what your business offers to the community — like good times, a friendly team, and awesome customers.

By showcasing your business in a creative way, you’ll remind your fans and followers why they love spending time at your alley.


Streamwood Bowl showcasing one of their newest customers!

2. Testimonials


Matador Bowl showed off some of their great feedback, and let their Facebook followers know that they’re on Yelp!

Showing off your best online reviews and testimonials will serve your bowling alley in a couple different ways.

When a new potential customer looks up your business on Facebook, they’ll be able to see some of your awesome customer feedback. This puts your business’ best foot forward by showing everyone who checks you out online just how loved your business is in the community and how happy your customers are.

Testimonial posts also act as cross-promotion for your review platforms. By showing gratitude to your previous reviewers, your followers will know where to leave their reviews and how much they mean to you and your business — making them more likely to leave a great review!

3. Educational Content


Remember, most people following your business on Facebook probably love to bowl. Think about the kinds of questions your customers might have before visiting your business and let that inform your Facebook content.

Keeping your Facebook feed full of educational content will provide value to your followers and build trust with new and potential customers by showing that you are an expert in your industry and enthusiastic about all things bowling.

4. Entertaining Content

What does every alley cat love to see? Fun bowling content, of course! Give the people what they want by sharing simple, fun content like bowling puns and jokes.

Entertaining Facebook content is shareable. Every time one of your fans shares a post from your business, that content gets in front of new people — people who might be looking for lanes just like yours!


Millennium Bowl gave their followers a silly reminder to stay safe this summer.

5. Questions


Check out all those recommendations for Maple Lanes!

Posting questions relevant to your industry is an excellent way to get your customers engaged in conversations online. Try polling your customers about their high scores or quizzing them on bowling trivia!

Not only will you be able to create engagement by getting people to comment their answers to your questions, but you will stay top-of-mind with your new and existing customers by getting them to check back for the answer.

Get your social media pages out of the gutter! Let Main Street Hub do it for you. Click here to get started.

Follow us on Twitter, Facebook, LinkedIn, and Instagram!





Source: Main Street Hub

JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience

Posted by alexis-sanders

Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.

The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?


Fundamentals

What is JavaScript?

When creating a modern web page, there are three major components:

  1. HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It provides the structure of the website (e.g. headings, paragraphs, list elements, etc.) and defines static content.
  2. CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
  3. JavaScript – JavaScript is the interactivity and a core component of the dynamic web.

Learn more about webpage development and how to code basic JavaScript.


JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.


What is AJAX?

AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess “AJAJ” doesn’t sound as clean as “AJAX” [pun intended]).

A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.

One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
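To make the idea concrete, here is a minimal sketch of an AJAX-style update using the browser’s fetch API; the /api/scores endpoint and the #scoreboard element are hypothetical stand-ins, not part of any real site.

```javascript
// Request fresh data in the background and update one fragment of the page,
// without a full page reload (the endpoint and element ID are made up).
async function refreshScores() {
  const response = await fetch('/api/scores');   // background "mini server call"
  const scores = await response.json();          // JSON has largely replaced XML here
  document.querySelector('#scoreboard').textContent = scores
    .map(s => `${s.team}: ${s.points}`)
    .join(' | ');
}

refreshScores();
```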


What is the Document Object Model (DOM)?

As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.

The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.

The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.

The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.

Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.

Simple example with a <title> tag that is populated through JavaScript:

[Images: the HTML source and the resulting DOM]
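As a small illustration (the page name below is made up), the raw HTML can ship with a placeholder <title> that JavaScript later overwrites; a crawler reading only the HTML source sees the placeholder, while one rendering the DOM sees the final value:

```javascript
// The raw HTML source might contain: <title>Loading...</title>
// After this script runs, the DOM's title holds the real page name.
document.addEventListener('DOMContentLoaded', function () {
  document.title = 'Red Widgets - Acme Store';   // hypothetical page name
});
```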

What is headless browsing?

Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.

PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).
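As a rough sketch of the idea (not a production setup), a PhantomJS script can load a URL, let its JavaScript run, and print the rendered DOM, which is essentially the snapshot view a headless crawler works from:

```javascript
// snapshot.js: run with "phantomjs snapshot.js https://example.com"
var page = require('webpage').create();
var system = require('system');
var url = system.args[1];

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('Failed to load ' + url);
    phantom.exit(1);
  } else {
    // Give on-page JavaScript a moment to finish before serializing the DOM.
    setTimeout(function () {
      console.log(page.content);   // the rendered HTML, not the raw source
      phantom.exit();
    }, 1000);
  }
});
```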


Why can JavaScript be challenging for SEO? (and how to fix issues)

There are three (3) primary reasons to be concerned about JavaScript on your site:

  1. Crawlability: Bots’ ability to crawl your site.
  2. Obtainability: Bots’ ability to access information and parse your content.
  3. Perceived site latency: AKA the Critical Rendering Path.

Crawlability

Are bots able to find URLs and understand your site’s architecture? There are two important elements here:

  1. Blocking search engines from your JavaScript (even accidentally).
  2. Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.

Why is blocking JavaScript such a big deal?

If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).

Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help identify resources that Googlebot is blocked from accessing.

The easiest way to solve this problem is through providing search engines access to the resources they need to understand your user experience.

!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.

Internal linking

Internal linking should be implemented with regular anchor tags within the HTML or the DOM (using an HTML tag) versus leveraging JavaScript functions to allow the user to traverse the site.

Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.

Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.
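A quick contrast of what this means in practice, using made-up selectors and URLs: the first pattern hides the destination inside an event handler, while the second puts a real anchor element into the DOM for crawlers to follow.

```javascript
// Not recommended: navigation that only exists inside a JavaScript event handler.
// A crawler that doesn't trigger the click never discovers /category/shoes.
document.querySelector('#shoes-tab').addEventListener('click', function () {
  window.location.href = '/category/shoes';
});

// Better: render a real <a href> element so the link is part of the HTML/DOM.
var link = document.createElement('a');
link.href = '/category/shoes';
link.textContent = 'Shoes';
document.querySelector('nav').appendChild(link);
```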

URL structure

Historically, JavaScript-based websites (aka “AJAX sites”) were using fragment identifiers (#) within URLs.

  • Not recommended:

    • The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links). These are the links that allow one to jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server and will cause the page to automatically scroll to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of “#” in URLs.
    • Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers (Google now wants to avoid them, and only Bing still supports them). Many a moon ago, Google and Bing developed a complicated AJAX solution, whereby a pretty (#!) URL with the UX co-existed with an equivalent escaped_fragment HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences:
      • Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment or a meta element indicating that an escaped fragment exists (<meta name=”fragment” content=”!”>).
      • Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with “_escaped_fragment_” and serves the HTML snapshot. It is called the ugly URL because it’s long and looks like (and for all intents and purposes is) a hack.

  • Recommended:

    • pushState History API – PushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar, and only what needs to change on the page is updated. It allows JS sites to leverage “clean” URLs, and Google currently supports pushState when it is used for browser navigation in client-side or hybrid rendering. A minimal sketch follows this list.

      • A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page the URL will update). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down, while the URL is updated in the address bar.
      • Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages replaceState(), which doesn’t include the same back-button functionality as pushState.
      • Read more: Mozilla PushState History API Documents
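Here is a minimal sketch of the infinite-scroll pattern described above; the #results container and the /page/N URL scheme are assumptions for illustration, and a real implementation would also handle errors and the popstate event.

```javascript
let currentPage = 1;
let loading = false;

window.addEventListener('scroll', async function () {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
  if (!nearBottom || loading) return;

  loading = true;
  currentPage += 1;

  // Fetch the next chunk of content and append it to the page.
  const response = await fetch('/page/' + currentPage);
  const html = await response.text();
  document.querySelector('#results').insertAdjacentHTML('beforeend', html);

  // Update the address bar to a clean, crawlable URL without reloading.
  history.pushState({ page: currentPage }, '', '/page/' + currentPage);
  loading = false;
});
```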

Obtainability

Search engines have been shown to employ headless browsing to render the DOM and gain a better understanding of the user’s experience and the content on the page. That is to say, Google can process some JavaScript and uses the DOM (instead of the raw HTML document).

At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.

Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:

  • If the user must interact for something to fire, search engines probably aren’t seeing it.

    • Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.
  • If the JavaScript occurs after the JavaScript load event fires plus roughly five seconds*, search engines may not be seeing it (see the sketch after this list).
    • *John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
    • *Screaming Frog tests show a correlation to five seconds to render content.
    • *The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.
  • If there are errors within the JavaScript, both browsers and search engines may stop executing the code partway through and miss sections of the page.
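For illustration only (the element and the six-second delay are contrived), this is the kind of late-injected content that falls outside the window described above and may never be indexed:

```javascript
// Content injected ~6 seconds after the load event: browsers show it eventually,
// but a rendering crawler that stops around load + 5 seconds never sees it.
window.addEventListener('load', function () {
  setTimeout(function () {
    var late = document.createElement('p');
    late.id = 'late-content';
    late.textContent = 'This paragraph appears six seconds after the load event.';
    document.body.appendChild(late);
  }, 6000);
});
```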

How to make sure Google and other search engines can get your content

1. TEST

The most popular solution to resolving JavaScript is probably not resolving anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.

Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript well before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s 2015 piece, Google can crawl JavaScript and leverages the DOM, confirmed it. Therefore, if you can see your content in the DOM, chances are your content is being parsed by Google.

[Image: Adam Audette meme: "I don't always JavaScript, but when I do, I know Google can crawl the DOM and dynamically generated HTML"]

Recently, Bartosz Goralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., is it indexing URLs/content? How does GSC interact? Etc.). It ultimately showed that Google is able to interact with many forms of JavaScript and highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).

All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you determine that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with small sections. Think: Jim Collins’s “bullets, then cannonballs” philosophy from his book Great by Choice:

“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”

Consider testing and reviewing through the following:

  1. Confirm that your content is appearing within the DOM (a quick console check follows this list).
  2. Test a subset of pages to see if Google can index the content.
    • Manually check quotes from your content.
    • Fetch with Google and see if content appears.
    • Fetch with Google supposedly occurs around the load event or before timeout. It’s a great way to check whether Google will be able to see your content and whether or not you’re blocking JavaScript in your robots.txt. Although Fetch with Google is not foolproof, it’s a good starting point.
    • Note: If you aren’t verified in GSC, try Technicalseo.com’s Fetch and Render As Any Bot Tool.
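A quick, low-tech way to do the first check is from the browser’s developer console; the quote string below is a placeholder for a distinctive sentence from your own copy.

```javascript
// Paste into the DevTools console on the live page.
var quote = 'a distinctive sentence from your page copy';   // replace with real text
console.log(
  document.body.innerText.includes(quote)
    ? 'Found in the rendered DOM'
    : 'Not found in the rendered DOM - investigate'
);
// Then compare against view-source: (the raw HTML) to see whether the text is
// delivered in the initial document or injected later by JavaScript.
```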

After you’ve tested all this, what if something’s not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only decision.

2. HTML SNAPSHOTS
What are HTML snapshots?

HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).

Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they’re important to understand, because in certain situations they’re necessary.

If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.
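As a rough sketch of that server-side detection (not anyone’s official method), an Express middleware might route known bot user agents to pre-rendered snapshot files; the bot pattern, snapshots directory, and port are all assumptions.

```javascript
const express = require('express');
const path = require('path');

const app = express();
const BOT_PATTERN = /googlebot|bingbot|facebookexternalhit|twitterbot/i;

app.use(function (req, res, next) {
  if (BOT_PATTERN.test(req.headers['user-agent'] || '')) {
    // Serve the pre-rendered HTML snapshot for this route. It must mirror what
    // real users see, or it risks being treated as cloaking.
    const name = req.path === '/' ? 'index' : req.path.slice(1).replace(/\//g, '_');
    return res.sendFile(path.join(__dirname, 'snapshots', name + '.html'));
  }
  next(); // regular users get the normal JavaScript-driven experience
});

app.use(express.static('public'));
app.listen(3000);
```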

At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).

Considerations

When considering HTML snapshots, you must consider that Google has deprecated this AJAX recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience more true to the user experience.

A second consideration factor relates to the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it’s considered a cloaking risk. Straight from the source:

“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.”
Google Developer AJAX Crawling FAQs

Benefits

Despite the considerations, HTML snapshots have powerful advantages:

  1. Knowledge that search engines and crawlers will be able to understand the experience.

    • Certain types of JavaScript may be harder for Google to grasp (cough… Angular (also colloquially referred to as AngularJS 2) …cough).
  2. Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.
    • Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.
"It's not just Google understanding your JavaScript. It's also about the speed." -DOM - "It's not just about Google understanding your Javascript. it's also about your perceived latency." -DOM

Site latency

When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.

The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → “get everything above-the-fold in front of the user, ASAP.”

Critical Rendering Path – Optimized Rendering Loads Progressively ASAP:


However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript.” Meaning: your JavaScript is blocking the page’s potential to appear as if it’s loading faster (also called: perceived latency).

Render-blocking JavaScript – Solutions

If you analyze your page speed results (through tools like Page Speed Insights Tool, WebPageTest.org, CatchPoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions:

  1. Inline: Add the JavaScript in the HTML document.
  2. Async: Make JavaScript asynchronous (i.e., add the async attribute to the script tag).
  3. Defer: Defer JavaScript by placing it lower within the HTML. (A small sketch of the async and defer options follows the note below.)

!!! Important note: It’s important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.
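A small sketch of options 2 and 3 (the analytics path is a placeholder): a non-critical script can be injected with the async flag so it doesn’t block rendering, and the equivalent declarative attributes are shown in the comments.

```javascript
// Load a non-critical script without blocking the parser or rendering.
var script = document.createElement('script');
script.src = '/js/analytics.js';   // placeholder path
script.async = true;               // download in parallel; execute when ready
document.head.appendChild(script);

// Declarative equivalents in the HTML itself:
//   <script async src="/js/analytics.js"></script>  (run as soon as it downloads)
//   <script defer src="/js/analytics.js"></script>  (run after HTML parsing completes)
```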

Read more: Google Developer’s Speed Documentation


TL;DR – Moral of the story

Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable, obtainable, and isn’t developing site latency obstructions. The key = every situation demands testing. Based on the results, evaluate potential solutions.

Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Source: Moz