AI software, the hottest term on everyone’s lips, promises a trendy and seemingly easier way to create content at scale. Programs like ChatGPT and Jasper AI offer bold solutions for quick and easy content creation. But top search analysts have gone on record saying that AI-created content lacks the value search engines want to provide, and they discourage its use in website content. Additional red flags are being raised around AI’s reliability and accuracy, and around the need for better copyright protection of the original digital content – written, artwork, or sound – that these tools access to produce their output.
The AI tools have also delivered a mixed range of user experiences: while some users have found the programs useful, many others have been thoroughly disturbed, or even insulted, by some of the AI chatbots’ responses.
So, how does a business know what to do? Should AI be leveraged for creating content? Are there penalties or deprioritization that occur in search results with AI-created content? Or is it impossible for search engines or readers to tell the difference?
We decided to find out for ourselves by embarking on a research project dubbed Project Terminator: Man Vs. Machine. Project Terminator was launched with the intent to explore how AI-created content resembles or is dissimilar from human-created content for providing accurate and valuable information to an online audience.
We also wanted to test our hypothesis that content created by humans would outrank AI content on search engines. So, we created two content pieces on the same topic – one AI-generated and one written by our SEO copywriter – published them on the same day, and let Google indexing and performance-tracking do the rest.
This article looks at the top concerns related to AI offerings in the content production space. It provides a summary of our research methodology, data analysis, and findings.
Artificial intelligence is a broad term for a field of research that works with applications and technologies to enable computers to mimic human reasoning and intelligence. Modern AI can understand and translate languages, analyze data to make recommendations, offer coding solutions for software programming challenges, and much more.
Similarly, AI-created content is generated by a computer program, rather than a real human. Using different technologies and deep learning techniques, artificial intelligence programs look at large data sets of text written by humans to learn and mimic natural language patterns. Natural language generation programs, like ChatGPT and Jasper AI, can write anything, from blog posts and emails to social copy and meta descriptions.
The evolution of artificial intelligence has many exciting implications for the marketing industry, like accurately predicting buying patterns and offering real-time personalization for customers. However, while AI undoubtedly improves some areas of digital marketing, is it sophisticated enough to be relied on to create content for search engine optimization (SEO)?
AI-Generated Content, Spam, and Search Engine Rankings
According to Search Analyst John Mueller, any content generated with AI is considered spam. In fact, Google’s spam policies explicitly prohibit using AI-created content to manipulate search rankings.
While Google’s ranking systems can’t yet reliably detect content written by AI, such content can still receive a manual penalty from human reviewers. Google awards higher rankings to content that demonstrates experience, expertise, authoritativeness, and trustworthiness (E-E-A-T); understanding Google’s high standards for quality of information and overall helpfulness is critical to creating SEO content of any kind, whether it’s written by a human or artificial intelligence.
We explored Google’s helpful content update in our recent eBook and unpacked the guidelines that websites need to follow if they want to rank on Google. Download our free guide here.
To learn more about our services and this experiment, complete the form below.
AI and Plagiarism – Does AI Steal Copyrighted Content?
Artificial intelligence that creates content, whether written or visual, “learns” to create it by accessing large data sets of similar content. For example, art-generating AI analyzes thousands of images of one aesthetic to produce new images of the same aesthetic.
Similarly, content-writing AI software pieces together information from across the web, based on the topic and keywords given by the user. However, both written and visual content is generated from existing work created by humans, which raises questions about both ethics and copyright infringement.
Two lawsuits filed in 2023 alleged that Stability AI, the creator of Stable Diffusion, copied millions of images, ignored legal protections, and violated the rights of millions of artists with its art-generating AI software. The lawsuits argued that Stable Diffusion robbed human artists by using their images without crediting them.
While potential ethical breaches by AI tools are largely due to gaps in training and the level of sophistication of the tools to know right from wrong, concerns have also been raised about AI being intentionally used to invent facts for disinformation and in “black hat SEO techniques.”
For this reason, OpenAI monitors and controls who can use its product, and it has embarked on research with leading universities to test, forecast, and mitigate how its language model (and others like it) can be exploited to create and spread disinformation. OpenAI has also listed conditions for publishing content generated by its ChatGPT tool, stipulating full disclosure of the role the AI writer had in creating the content. ChatGPT’s contribution to published works must be acknowledged “in a way that no reader could possibly miss and that a typical reader would find sufficiently easy to understand.”
In January 2023, OpenAI released an AI classifier created to detect AI-generated content. OpenAI admits it is impossible to detect 100% of AI-written content with reliable accuracy; the classifier is meant to help flag cases where AI-written text is falsely presented as human-written. The product page also explicitly states that the new classifier is not fully reliable, though it is significantly more accurate than OpenAI’s previously released classifying program.
AI and Accuracy – Can AI Make Mistakes?
Big-name search engines like Google and Bing are adopting artificial intelligence to improve user experience; however, AI isn’t as foolproof as many are led to believe. Early testing of both search engines’ artificial intelligence platforms proves that while AI is definitely cutting-edge, it still isn’t a replacement for humans.
Google’s AI Offering
Google recently released Bard – an AI chatbot designed to eventually augment and personalize the Google search experience. Consistent with Google’s policy that AI content used to manipulate rankings is spam, Bard was created to assist with complex search queries, not for content creation.
As with other AI tools currently available for use, Bard has its limitations, including inaccuracies in its output. At the unveiling in February 2023, for example, Google tweeted a gif of Bard answering the question, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” Bard’s answer included incorrect information, specifically that the telescope took the first photos of a planet outside our solar system. According to NASA records, the Very Large Telescope took the first photos of a planet outside our solar system in 2004 – more than fifteen years before the launch of the Webb Telescope.
AI-Powered Search with Bing
In 2023, Bing announced a partnership with OpenAI, the creators of ChatGPT, to incorporate artificial intelligence “more powerful than ChatGPT,” customized specifically for search. The new AI-based platform is said to improve the search experience by providing complete answers to search queries and offering a chat option for more complex questions.
Much like Google’s Bard, Bing’s chatbot is not without problems. Beta testers found the chatbot threatened users, provided unhelpful advice, and even insisted it was right when it was wrong. One tester posted a screenshot of their conversation with the Bing chatbot in which it maintained that Billie Eilish performed in the 2023 Super Bowl halftime show, instead of Rihanna.
Kevin Roose, a columnist for the New York Times, tested the chatbot, reporting that it acted like “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” OpenAI has since released GPT-4 – a successor to previous iterations – stating that the updated model will be less reactive to malicious or aggressive questions and better prepared to flag errors and plagiarism.
OpenAI maintains this version is “more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.” Nevertheless, the system remains flawed, according to OpenAI CEO Sam Altman, who warns users to be cautious when using the tools. OpenAI admits that the model has only been trained with data up to September 2021 and isn’t capable of learning from experience. It’s still prone to offering bad advice and “hallucinating” facts and isn’t 100% accurate, which means there is still work to be done before users should adopt the technology without their own set of checks and balances for reliability, safety, and accuracy.
Despite cautions from OpenAI itself, websites (including some news outlets) have been publishing AI-generated content. But, is the result effective? And is the effort put into producing AI content truly worth it?
Project Terminator: Research Methodology
So, what does AI mean for the future of digital content? Does content created by humans outperform AI-written content in keyword ranking, user experience, backlinking, and other SEO performance metrics? To find out, we identified a suitable blog topic (the Gutenberg WordPress Editor), conducted our keyword research, handed the brief over to our SEO copywriter, then selected AI writers and put them to the test.
Keyword Research and Analysis
Initial keyword research helped identify the top-performing keywords – the biggest traffic drivers for US-based searches – related to the Gutenberg WordPress Editor. Our SEO specialist developed a content outline from this keyword research and analysis to guide the development of both blog posts. The following focus terms were suggested:
| Keywords | Header Type | Content Layout |
| --- | --- | --- |
| wordpress gutenberg editor | H1 | The Ultimate Guide to the WordPress Gutenberg Editor |
| gutenberg editor wordpress | H2 | I. What is WordPress Gutenberg editor? |
| | | II. Key Features of the Gutenberg Editor |
| | | III. Differences between the Gutenberg Editor and the Classic Editor |
| gutenberg wordpress editor | H2 | II. Getting Started with the Gutenberg WordPress Editor |
| “how to use gutenberg editor in wordpress”; “how to use gutenberg wordpress editor”; “how to get gutenberg editor in wordpress” | | |
| gutenberg editor in wordpress | H3 | Tips and Best Practices for Using Gutenberg Editor in WordPress |
| wordpress editor gutenberg | H2 | III. Customizing the WordPress Editor Gutenberg |
| | H2 | IV. Advanced Features of the Gutenberg Editor |
Methodology for the Human-Written Article
Following the initial keyword research, we extracted pertinent headings from the results around which to construct the article. We especially focused on explanatory headings such as “Key Features of the Gutenberg Editor” in order to serve the article’s purpose as a How-to guide. In addition, the following headings were chosen:
Using the headings as individual chapters, our writer built out the article with the help of reputable sources, such as the WordPress platform itself and the cloud hosting provider Kinsta. We fact-checked each point, although, as the subject matter was an objective how-to guide, it was unlikely that any inaccuracies would be present in the sources.
The writer built out each section using concise and cohesive language that is easy to read but not sterile. The tone the writer chose was confident, friendly, and informative. Where possible, we gave examples of the Block Editor’s uses, and the article was broken into smaller sections to promote readability. We provided these sections with headings that were correctly sized to meet accessibility standards and ensure the article was screen-reader friendly.
For the purposes of the experiment, the writer didn’t research or test any AI content-generating tool before writing the article, relying only on spell-checking tools and Google (for research). It took the human writer approximately six hours to research, write, and edit the blog post. We loaded the article onto the blog section of Maintainn.com and added an appropriate meta description, SEO-based title, and stock feature image. We assigned the blog the categories Website Tips, WordPress, Uncategorized, and Updates, and the author profile SiteCare Support, in order to match the AI-written blog.
Methodology for the AI-Written Blog
The tool initially identified for the AI-generated blog was OpenAI’s (then) newly launched ChatGPT. At the time of our research and writing (January 2023), the OpenAI platform was oversubscribed and often inaccessible, so we decided to use a free trial of the beta version of Jasper AI instead. We supplemented the output from Jasper AI with auto-generated content from the beta version of OpenAI’s ChatGPT. We used Neil Patel’s AI Writer to write the blog’s meta title and meta description and to recommend the SEO title. We ran the AI-generated content through Grammarly’s plagiarism detection and received a 2% plagiarism score.
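For readers curious what a percentage plagiarism score represents, here is a minimal toy sketch of overlap-based similarity scoring: break each text into overlapping word n-grams (“shingles”) and measure what share of the candidate’s shingles appear in a reference corpus. This illustrates the general idea only – it is not Grammarly’s actual method, and the sample texts are invented.

```python
# Toy shingle-overlap similarity scoring - NOT Grammarly's internal method,
# just an illustration of how a percentage "plagiarism" score can be derived.

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate: str, corpus: list[str], n: int = 3) -> float:
    """Percentage of the candidate's n-grams found anywhere in the corpus."""
    cand = shingles(candidate, n)
    if not cand:
        return 0.0
    known = set().union(*(shingles(doc, n) for doc in corpus))
    return 100.0 * len(cand & known) / len(cand)

# Invented example texts for illustration
corpus = ["the gutenberg editor is the block based editor in wordpress"]
candidate = "the gutenberg editor is a new way to build pages in wordpress"
print(f"{overlap_score(candidate, corpus):.0f}% of trigrams overlap the corpus")
```

A real detector weighs many more signals (paraphrase detection, source attribution, web-scale indexes), but the score it reports is conceptually this kind of overlap percentage.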
What is ChatGPT?
As the name implies, ChatGPT models were initially developed to “chat.” ChatGPT is an advanced language processing tool that can interact with users in a conversational way. It’s not a search engine and isn’t connected to the internet, but it has been trained on large amounts of data (“deep learning”) to understand the context of conversations, “answer follow-up questions,” “challenge incorrect assumptions,” and even admit when it has made a mistake. It can also code.
Early adopters of ChatGPT have integrated it into their own apps and platforms as interactive chatbots, AI tutors, shopping assistants, and more. Improved and updated versions of the underlying GPT models (GPT-3 and GPT-3.5) have been trained by machine learning teams and used for several business-specific applications, including interactive storytelling and improved customer service.
About Neil Patel
Neil Patel’s popular SEO tool suite now includes a free beta-version AI writer that can be used to import and optimize existing content or create SEO content from scratch using targeted keywords and phrases. The tool currently only writes in English, but can be used to create meta titles and descriptions, rewrite paragraphs, answer “People Also Ask” questions found on Google search results pages, write product descriptions, generate listicles, and find relevant hashtags for social posts.
About Jasper AI
Jasper is an AI content platform and art generator created by Jasper AI, Inc. Supporting over 29 languages, the AI writer responds to written user prompts to auto-generate content (blogs, reports, emails) or create high-resolution digital artwork for ads, illustrations, and more. Jasper AI also integrates with Surfer SEO, an online SEO content editor and workflow tool, to further streamline SEO content creation.
Using Jasper AI
We used multiple inputs to test and generate content outputs for the best results.
Jasper AI Input 1:
Jasper AI Input 2:
Jasper AI Limitations
Some of the limitations our team experienced while inputting prompts into Jasper AI included a lack of cohesiveness within the content results, having to use a variety of prompts to generate enough content to meet the required word count, no acknowledgment of its sources, and information gaps in its database. The time spent compiling the blog was approximately six hours; despite claims that AI-generated content helps increase productivity, using the tool produced no time savings compared with the time taken on the human-written blog.
Data Capturing and Analysis
We uploaded and published both articles on the Maintainn.com website on February 1, 2023, assigning each article the same author, tags, and categories.
On February 1st, our team requested indexing on Google Search Console (GSC). Webpages are indexed once they have been visited by Google’s crawler and have met Google’s quality standards and guidelines. Once indexed, they can be found online – they are “visible” in searches.
You can request that Google “crawls” any newly uploaded or updated web page by submitting an index request on GSC. We received confirmation the following week that both blog articles had been indexed on the day we submitted the request, which was useful for our experiment as it meant neither article was at an advantage by being indexed earlier.
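We submitted our indexing requests through the GSC interface, but index status can also be checked programmatically via the Search Console URL Inspection API. The sketch below is illustrative only: it assumes you already have an OAuth 2.0 bearer token with access to the verified property, and the endpoint and field names should be verified against Google’s current API documentation.

```python
# Sketch: checking a page's index status via the Search Console
# URL Inspection API. TOKEN and the URLs are placeholders; a real call
# needs OAuth 2.0 credentials for the verified Search Console property.
import json
from urllib import request

ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

def build_inspection_request(page_url: str, property_url: str) -> dict:
    """Assemble the JSON body the URL Inspection API expects."""
    return {"inspectionUrl": page_url, "siteUrl": property_url}

def inspect(page_url: str, property_url: str, token: str) -> dict:
    """POST the inspection request and return the decoded JSON response."""
    body = json.dumps(build_inspection_request(page_url, property_url)).encode()
    req = request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example usage (not executed here - requires real credentials):
# result = inspect("https://www.maintainn.com/blog/example/",
#                  "https://www.maintainn.com/", TOKEN)
```

This is useful when tracking many pages at once, since it avoids inspecting each URL manually in the GSC dashboard.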
To track our indexed blogs’ performances, we created a reporting dashboard in Databox (our preferred reporting vendor) to capture weekly organic page views, clicks, impressions, time on page, bounce rate, exit rate, organic click-through rate (CTR), page authority, and average position. We tracked performance for both blogs over a five-week period from February 1-March 6, 2023.
The largest limitation of this experiment was the limited time available for data capture and analysis, so our team will continue to monitor the trajectory of these pages’ rankings over the next several months as we evaluate the long-term value of the content pieces.
However, even within our five-week experimentation window, we were able to highlight distinct patterns between the human content and the AI content articles. We noted that the human-created content performed better with regard to user experience (UX), as readers stayed on this page 21% longer than on the AI-created page.
Additionally, we observed that the bounce rate – the share of visitors who left the site without clicking through to another page – was consistently higher for the AI article. While the bounce rate difference between the blogs was only 6.65%, the AI content’s bounce rate worsened steadily over time.
We can hypothesize that users found more value in the human-created content, stayed on the page for longer, and were inclined to explore the Maintainn website further. The graphs below illustrate the page bounce rate and average time spent on each page, highlighting a better performance from the human-created content.
On Google, the human-created piece ranked 15.5% higher than the AI article, and its organic click-through rate was 20% higher than its AI counterpart’s. When it came to organic impressions and page views, our experiment didn’t yield as distinct results; these metrics remained fairly similar as the two battled it out over the five-week period. Similarly, clicks and page authority remained the same throughout the experiment window; these metrics are likely to provide more insights in the weeks to come.
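To make the comparison metrics concrete, here is a minimal sketch of how organic CTR and bounce rate are computed from raw counts. The numbers below are invented placeholders for illustration – they are not the experiment’s actual data.

```python
# How the dashboard metrics are derived from raw counts.
# All figures below are ILLUSTRATIVE placeholders, not experiment data.

def ctr(clicks: int, impressions: int) -> float:
    """Organic click-through rate as a percentage of impressions."""
    return 100.0 * clicks / impressions if impressions else 0.0

def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Share of sessions that left without viewing another page."""
    return 100.0 * single_page_sessions / total_sessions if total_sessions else 0.0

# Hypothetical week of data for each article
human = {"clicks": 24, "impressions": 600, "bounces": 40, "sessions": 80}
ai = {"clicks": 20, "impressions": 600, "bounces": 47, "sessions": 80}

for name, page in (("human", human), ("ai", ai)):
    print(name,
          f"CTR={ctr(page['clicks'], page['impressions']):.1f}%",
          f"bounce={bounce_rate(page['bounces'], page['sessions']):.1f}%")
```

Tracking these ratios weekly, rather than raw clicks alone, is what lets small but consistent gaps (like the bounce-rate divergence above) show up over a short experiment window.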
To add greater depth to our findings, our team added a second experimentation branch to our project. The hypothesis for our sub-project, dubbed Project WALL-E, was that the AI would be unable to produce significantly different content results when it was fed the same question by multiple people.
Our experiment was focused on the role AI content creation could play in the educational sphere. So, if an entire class was given the same essay question and fed it to an AI writer, how different would the output or results be? Furthermore, we wanted to investigate plagiarism concerns within AI-written academic essays and how easily the content could be flagged as AI-generated. This project is currently ongoing, so these findings will be released soon and exclusively to the subscribers of our bi-monthly newsletter. Make sure you join the SiteCare Snapshot mailing list so you don’t miss out.
While we would require more than five weeks of data collection to call Project Terminator a resounding success or our findings conclusive, the observed results leaned in favor of the human writer. There are early indications from multiple metrics that the human author created better content: a higher-ranking position, and a better user experience, demonstrated by a longer time on site and a lower bounce rate, are measurements we typically equate with “quality” content.
Google, and in fact all search engines, value “helpful content,” or content that demonstrates experience, expertise, authoritativeness, and trustworthiness, abbreviated to E-E-A-T. We propose that AI-crafted content cannot yet demonstrate those characteristics; content created without sufficient context or reputable sources will not deliver the value Google seeks. For content creation and SEO success, the cursor remains squarely in favor of human-written content.
Final Thoughts from the SiteCare Team
In the era of #FakeNews, accurate and reliable information is highly valued. Although AI content automation certainly has its uses, especially in the world of chatbots, there are clear limitations showing that AI is not a one-size-fits-all solution and cannot yet create the useful, valuable, and authoritative content that Google requires and online users seek.
As John Connor says in the Terminator franchise, “This is the world now. Logged on, plugged in, all the time” – both human, and otherwise. The world of AI technology and its applications will continue to evolve, and industries and people will continue to adapt to them.
Despite incorporating AI into their interfaces, Google and the other search engines are not giving a blanket stamp of approval to using AI content as a shortcut to success. Quality content matters most – now more than ever. The latest press releases indicate that Google plans to take a responsible approach toward AI-generated content while maintaining a high bar for information quality and helpfulness in search results.
The results of our experiment surfaced the greatest weaknesses of AI-generated content: a proclivity for outdated information, poor use of citations, an overreliance on surface-level overviews, and a lack of cohesion between concepts. These weaknesses are further reinforced by the disparity in early SERP rankings. The content written by humans, for humans, performed better – likely not just because it was written by a human, but because the topic was analyzed critically. We wrote our human-centered piece to be authoritative and conclusive, minimizing the weaknesses inherent in AI-driven content.
Initial signs indicate that AI-created content isn’t inherently bad simply by virtue of being created by an AI. Rather, our conclusions suggest that it has weaknesses to overcome – weaknesses that human writers can easily accommodate, but which will prove challenging for AI as it learns to adapt.
Our team doesn’t believe in shortcuts to success, and that includes our approach to strategic content creation and our SEO expertise. Learn more about how these can supercharge your business from one of our Digital Marketing experts.