April 19, 2024

Integrity in The AI Era – The Need For Content Verification with Jon Gillham

Podcast Episode 196 of the Make Each Click Count Podcast features Jon Gillham, the innovative mind behind Originality AI.

In this timely conversation, they explore the inception of Originality AI, the challenges it addresses within the content marketing sphere, and the pervasive influence of generative AI across the web.

Jon unveils the technological wizardry that powers their AI detection system, the ethical concerns of AI-generated content, and how Originality AI is staying ahead in the cat-and-mouse game of AI content identification.

Stay tuned to learn why content authenticity is crucial for maintaining your audience's trust and how Originality AI is leading the charge in safeguarding content integrity online.

Learn more:

Website

LinkedIn

 

ABOUT THE HOST:

Andy Splichal is the World's Foremost Expert on Ecommerce Growth Strategies. He is the acclaimed author of the Make Each Click Count Book Series, the Founder & Managing Partner of True Online Presence and the Founder of Make Each Click Count University. Andy was named to The Best of Los Angeles Award's Most Fascinating 100 List in both 2020 and 2021.

New episodes of the Make Each Click Count Podcast are released each Friday and can be found on Apple Podcasts, iHeartRadio, iTunes, Spotify, Stitcher, Amazon Music, Google Podcasts, and www.makeeachclickcount.com.

Transcript

Andy Splichal:

 

Welcome to the Make Each Click Count podcast. I am your host, Andy Splichal, and today we are venturing into the realm of AI content verification. We have the pleasure of speaking with Jon Gillham, the visionary behind Originality AI. His innovative approach to tackling the complexities of generative AI content has garnered recognition from major publications worldwide. A big welcome to Jon Gillham. Hi, Jon.

 

 

 

Jon Gillham:

 

Hi. Yeah, thanks for having me. Excited to talk all things AI and content marketing.

 

 

 

Andy Splichal:

 

Yeah, you know, it's definitely at the forefront of what everybody's doing. I mean, it's getting into every industry, so it's a really timely subject, and I'm glad you could join us today. Let's start. Can you share the inception story of Originality AI and what specific gaps you guys have aimed to fill in the market?

 

 

 

Jon Gillham:

 

Yeah, sure. So my background was mostly in content marketing related businesses, where we were building websites ranking in Google, and from that I had built a content marketing agency where we were selling written-word content, from subject matter experts down to some lower-cost content, to different buyers. One of the challenges that we had through 2021 and 2022 was being able to communicate to purchasers of the content what our controls were around the use of AI. And this predated ChatGPT. This was tools like Jasper, some of those first OpenAI GPT-3 wrapper writing tools. From those challenges, we saw the need for a more robust quality control tool for content publishing, specifically publishing content on the web. That was where Originality was born. And we launched the weekend before ChatGPT launched.

 

 

 

Andy Splichal:

 

So, I mean, I guess that's good timing for sure. But how big is the problem with plagiarism and AI-generated content? I mean, I don't know if people even realize, at least some people don't realize, that there's a problem out there.

 

 

 

Jon Gillham:

 

Yeah. It's still shocking how deep into this we are. In the first few months, I understood why some people weren't aware. It's shocking when people say, oh, we know our writers don't use AI, and then we run an analysis and say, you know, given the amount of AI that's on your site, it is statistically impossible that we could get these results if your writers haven't used AI. If there's a place that is publishing content and they don't have mitigating steps in place for AI content, maybe they're okay with that, maybe they're not okay with that, and there are reasonable decisions on both ends. But there's simply, I think, no place where the written word is showing up and AI is not infiltrating it.

 

 

 

Andy Splichal:

 

So what's the technology? I mean, how are you able to detect if AI is being used? How does your system work?

 

 

 

Jon Gillham:

 

Yeah, so the simple explanation is, we're the good Terminator. Like the Terminator in movie two that's able to identify other Terminators. It's our own AI that's been trained with supervised learning on, at this point, millions of records of known human content, meaning human text that predated 2019, and then millions of copies of AI-generated content. And it learns to tell the difference between the two. That's what AI is really exceptional at: connecting the dots. Humans are great at thinking they can see patterns; AI is great at actually seeing patterns.
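The supervised-learning setup Jon describes, a classifier trained on labeled human and AI text, can be sketched in miniature. The corpora, words, and smoothing below are invented stand-ins for illustration only, not Originality AI's actual model or data:

```python
from collections import Counter
import math

# Tiny labeled corpora standing in for the millions of records Jon
# mentions: known-human text (pre-2019) and known AI-generated text.
human_docs = [
    "honestly the plot dragged but the ending hit me hard",
    "we drove all night and the diner coffee was terrible",
]
ai_docs = [
    "in conclusion this product offers numerous key benefits",
    "it is important to note the numerous advantages overall",
]

def word_counts(docs):
    return Counter(w for d in docs for w in d.lower().split())

human_counts, ai_counts = word_counts(human_docs), word_counts(ai_docs)
vocab = set(human_counts) | set(ai_counts)

def log_likelihood(text, counts):
    # Laplace-smoothed unigram log-likelihood of the text under one class.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in text.lower().split())

def p_ai(text):
    # Turn the two class log-likelihoods into P(AI | text),
    # assuming equal priors over the two classes.
    lh = log_likelihood(text, human_counts)
    la = log_likelihood(text, ai_counts)
    return math.exp(la) / (math.exp(la) + math.exp(lh))

print(p_ai("it is important to note the key benefits"))   # close to 1
print(p_ai("we drove all night and the coffee was terrible"))  # close to 0
```

A production detector would use far larger corpora and a neural model rather than unigram counts, but the core idea is the same: learn the statistical fingerprints that separate the two labeled sets.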

 

 

 

Andy Splichal:

 

And what's the problem with it, I guess? If you're hiring a writer who's using some AI-generated content, why should somebody be concerned?

 

 

 

Jon Gillham:

 

Yeah, two reasons. First is just general fairness. If you're happy to pay a writer $100 an article, $1,000 an article, whatever it might be, you're not super happy to find out that they copied and pasted it out of ChatGPT in 5 seconds. If you're okay with publishing AI content, then you also want to be the one that receives the efficiency value of that. So that's the first reason. The second reason, and probably the one that gets most people excited and open to debate, is Google's view on AI-generated content. In my view, and I think in a lot of publishers' views, publishing AI-generated content is a greater risk in the eyes of Google. And that is a risk that you want to be the one accepting if you're publishing AI-generated content; it shouldn't be the writer's decision.

 

 

 

Jon Gillham:

 

I mean, the logic around the risk from Google, and we've seen some studies, and done our own studies, that validate this, is that it's an existential threat to Google. If Google search results are nothing but AI-generated content, then people will just go to the AI. That might be Google with its SGE, but if the search results are nothing but AI, people will just go to the AI. So Google is now needing to fight this AI spam onslaught that it's receiving, and it's fighting back against that AI spam. And I think innocent sites that didn't think they were using AI, that didn't think their writers were using AI, actually were, and got caught up in the March 5 manual action update.

 

 

 

Andy Splichal:

 

So, I'm sorry, what update were you referring to?

 

 

 

Jon Gillham:

 

So Google's in the process of rolling out an update. There are a couple of parts to it, but basically there's an algorithmic update, and then there was a manual action update. Based on the 75,000 websites that we had looked at, they had de-indexed 2% of those websites. When we looked at the websites that had been de-indexed, the majority of them had made very heavy use of mass-produced AI-generated content. So mass-produced, AI-generated content was the most significant factor for the sites that had been de-indexed.

 

 

 

Jon Gillham:

 

Some of those sites had reached out and said, we didn't think our writers were using AI, when in fact they had been using AI, and their business had evaporated out of Google.

 

 

 

Andy Splichal:

 

Now, 2%, I mean, that's not a very big percentage, especially with how many people are using AI. Is that going to change in the future, you think? Is Google going to look at more of that?

 

 

 

Jon Gillham:

 

So, our takeaway from that: we did the study looking at the sites that had had a manual action applied to them, and we saw 2% of the sites we had looked at being de-indexed. Then we also looked at what the percentage of AI content is within Google search results. What we saw was that even with, to your point, only 2% of sites being de-indexed, which felt like a lot of sites but was still only 2%, the amount of AI in Google search results increased to a record high in March, at 12%. I think what that tells me is that Google won the battle on March 5 but is losing the war in terms of the amount of AI content still in its search results. So to your question, I do think Google can identify it but is struggling to algorithmically deal with it, and it is going to continue to make significant updates attacking AI spam.

 

 

 

Andy Splichal:

 

So, I mean, AI is adapting, and the rate at which it's adapting is mind-blowing. I was talking to somebody who has an SEO program, and they're using AI to write it all, then bringing it in and using AI to reread it to see what sounds too much like AI, and then telling people to redo those pieces themselves. I guess with Originality AI, how are you preparing to meet the ever-changing challenge of keeping up with that?

 

 

 

Jon Gillham:

 

Yeah, so one of the most fun aspects of the job for me is that we have what we call a red team and a blue team. Our red team is always trying to bypass detection, and our blue team is our live model that is always trying to get better and better. So how do we stay on top of it? We try to be the ones beating our own model, and by doing that we continue to improve our main models. We have a red team that's basically running around trying every strategy, building our own custom AI models to try and bypass our own detector, and then we train on that data set to continue to get better and better.
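The red-team/blue-team loop Jon describes can be illustrated with a deliberately tiny sketch. The phrase lists and rewrite rules here are invented for illustration and bear no relation to Originality AI's real models:

```python
# Blue team: a toy "detector" that flags text containing phrases
# previously labeled as AI-generated. (Illustrative only.)
ai_signals = {"in conclusion", "it is important to note"}

def detect(text: str) -> bool:
    return any(phrase in text for phrase in ai_signals)

# Red team: rewrite flagged text to slip past the current detector.
rewrites = {"in conclusion": "to wrap up",
            "it is important to note": "notably"}

def red_team(text: str) -> str:
    for old, new in rewrites.items():
        text = text.replace(old, new)
    return text

sample = "in conclusion the results were mixed"
assert detect(sample)        # blue team catches the raw AI text

evasion = red_team(sample)   # red team finds a successful bypass...
assert not detect(evasion)

# ...and the blue team retrains on that evasion, so the same
# trick stops working in the next round.
ai_signals.update(rewrites.values())
assert detect(evasion)
```

The real process would involve model training on the red team's successful bypasses rather than phrase lists, but the cycle is the same: attack, collect evasions, retrain, repeat.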

 

 

 

Andy Splichal:

 

So can you explain the user experience of using Originality AI?

 

 

 

Jon Gillham:

 

Yeah. So the majority of our users are people who have hired a writer and are receiving a piece of content from that writer. They take that piece of content, put it into Originality, and it helps ensure the content meets whatever specifications and requirements you, as the person who hired the writer, are after. If you've hired a writer and told them not to use AI, Originality tells you whether they did. We also have a plagiarism checker, which I think has been table stakes for the last 15 or 20 years, to make sure the content hasn't been copied. We have a fact checker, which is still in beta and not working great yet. And we have a readability score. In the end, it all helps make sure that piece of content meets the specifications you're after, so you can hit publish with confidence and with integrity, knowing that it hasn't been copied, hasn't been AI-generated, is factually accurate (we're still working on that), and meets all the other requirements you're after.

 

 

 

Andy Splichal:

 

So if you're hiring a writer out, I absolutely can see you don't want them to use AI; you want it to be original content. What if you are a smaller ecommerce mom-and-pop store just trying to improve your SEO, and you are using AI? Does Originality AI help in any way with that? I guess, is there a score on how detectable the AI is, or anything like that?

 

 

 

Jon Gillham:

 

Yes, it gives you a probability of the chance that the document was created by AI. We do have a lot of users who use our platform thinking that if they can trick Originality, then they can trick Google. We don't love that. We don't think that use case makes sense, although it drives up usage on our platform; we don't recommend our platform be used for that reason. If you've used AI and you're happy with that decision to use AI-generated content, that's great. That's your decision.

 

 

 

Jon Gillham:

 

I think there's risk associated with that, but if the benefits outweigh the costs, go for it. I have websites where I've used AI; they were impacted this past month. Our platform can still be helpful there, again, in making sure things aren't plagiarized, fact checking, and ensuring the readability score is where you want it to be. And we have some other features in the works as well. But yeah, if you're using AI, great. Working to trick an AI detector so that by extension you're potentially tricking Google, I don't love that use case. Some people know; I've had this conversation with people, and they've still chosen to use it in that capacity.

 

 

 

Andy Splichal:

 

So through your journey with Originality AI, what have been some of your most surprising findings regarding AI-generated content?

 

 

 

Jon Gillham:

 

So, and we talked about this a bit, my background was in this world of web publishing, SEO, content marketing. I've been surprised at the pervasiveness of AI. "Polluted" is too strong a word, but it has gotten everywhere. We look at reviews, it's there. We look at books being published, it's there. We look at patents, we look at journal articles, it is everywhere. And the societal view on where it is and is not acceptable to have generative AI is not yet well defined. I think we can all agree that we probably don't want to read Amazon reviews and then find out that they were all AI-generated. That's a pretty clear case.

 

 

 

Jon Gillham:

 

We're not happy as a society if all the reviews we see online are AI-generated and we can't trust them. So that's been the surprise: just the pervasiveness of it and how it is spreading everywhere.

 

 

 

Andy Splichal:

 

Yeah, no, I can see that. Is there anything in the works? I mean, Amazon, I'm surprised they don't have an AI detector. Or, I guess, what laws or guidelines do you see coming out where AI won't be allowed?

 

 

 

Jon Gillham:

 

Yeah, I think it's not safe to say that some of those companies don't have detectors; I think they may. And a lot of companies are taking positions. We saw Medium, two days ago, saying that they're for human stories, not AI-generated stories. Google Merchant Center has said that you have to declare if your text was AI-generated. And Amazon has said that if you've written a book, you need to self-declare if it was AI-generated. So I think companies are all making their own decisions right now on what they require from a user disclosure standpoint.

 

 

 

Jon Gillham:

 

And then from a regulation standpoint, I think it's very, very hard to enforce on text generation. I think there's societal harm to text generation, but as a society we have about 30 years of not believing information we read on the Internet at face value. We have this history of, okay, you've got to validate some information before you just believe it. So it's hard for text to be as consequential. I think there are some significant societal costs, and political costs, to AI-generated images and video. And so I think those that are responsible for making the laws, aka the politicians, are more fearful of AI-generated image and video deepfakes than of AI-generated text. And so I think that's where we're going to see some regulation.

 

 

 

Andy Splichal:

 

Now, are there any business books out there or thought leaders that have significantly influenced your perspective on AI, or ethics and technology?

 

 

 

Jon Gillham:

 

Yeah, I think Geoffrey Hinton, the godfather of AI; the content he puts out has been really, really helpful. Sam Altman's views on both alignment and pushing product to market have been interesting and influential. It's an interesting space where a lot of the leaders of many of the most important companies in AI have been more public than at almost any other type of company, whether it be Elon or Nvidia or Sam Altman. The amount of content that is out there from them is pretty interesting to follow along with.

 

 

 

Andy Splichal:

 

Now, back to Originality AI. Is there a success story of a client where it has dramatically affected their business after using your service?

 

 

 

Jon Gillham:

 

Yes. We had a writer marketplace, WriterAccess, come out fairly early on and take a fairly aggressive stance against AI-generated content, and I think that has been really influential. In a world where those businesses got significantly disrupted, they were able to hold a really strong position. Leveraging our technology to provide some transparency between the writer and the client has, I think, really helped their business continue to grab market share in a volatile time for them.

 

 

 

Andy Splichal:

 

Now, for a listener who is interested in learning more about Originality AI, what's the best way to get started? And are there any resources or support systems in place to assist new users?

 

 

 

Jon Gillham:

 

Yeah, so the best place to start is Originality AI; it's a low-cost subscription to get started, and you can use the tool for free to test it out. In terms of resources, we have, which is unique for the price point of product that we're at, live support agents, not AI, real human support agents, available almost 24/7 now. Not quite there, but they're available. And we put out a ton of studies related to the efficacy of AI detectors. If you go to our site and look at the bottom, there are a lot of studies related to AI detector accuracy. Detectors aren't perfect.

 

 

 

Jon Gillham:

 

They have some false positives, they have some false negatives, but we put out a ton of studies on a bunch of different data sets to try and help people understand the limitations.

 

 

 

Andy Splichal:

 

Well, Jon, your insights today have been invaluable in understanding the critical importance of content integrity. Thank you for coming on the show. Is there anything else you'd like to add before we wrap up today?

 

 

 

Jon Gillham:

 

No. I appreciate the opportunity to talk about this topic. I think it's significantly important for all of us that are playing on the web right now.

 

 

 

Andy Splichal:

 

All right, well, listeners, we've delved into the heart of AI content verification today with Jon Gillham, uncovering the power of Originality AI in maintaining the integrity of digital content. If you want to learn more about their innovative solutions, head on over to Originality AI. Remember, the authenticity of your content shapes the trust of your audience. Thank you for tuning in to the Make Each Click Count podcast. I hope you enjoyed this episode. If you did, please go over to Apple Podcasts and leave us an honest review. That's it for today. Remember to stay safe, keep healthy, and happy marketing. I'll talk to you in the next episode.