Content Writing

More info

Consultancy

More info

Digital Marketing

More info

Training and Speaking

More info

Friday Digital Roundup

The Friday Digital Roundup is a witty take on the weird world of the internet. With fun stories from around the globe, it’s the only email newsletter you’ll actually read and enjoy!

We do love writing it, but clearly not as much as people like receiving it - just look at the response we got when a technical hitch meant it wasn’t sent out on time!

@roisinduffyVA @roisinduffyva

@Spaghetti_Jo
Coffee and the FDR is how I start my Friday.
Do not engage until I have devoured both

Meschi Consultants @MeschiConsult

When it comes to the end of the week, there is no better way to start a Friday than with a run around the internet with Todd and Jo in the FDR. Just don't let them know I do it from the loo!

Kathryn Lynch-Smith @KikikatSmith

@Spaghetti_Jo
My inbox is full of rubbish newsletters that Im constantly deleting😬 My VIP inbox is for 1 thing only- THE DIGITAL ROUNDUP🤠I dont read a Newspaper or the news online, I just wait for Fridays, when this lands in my inbox- then I know ‘The weekend has landed’🤗

Spaghetti Blog

Sunday 28th April 2024

At Least Your Human Copywriter Won’t Threaten To Kill You

AI, AI Generated Content, Bing, ChatGPT, Copywriting

As the use of artificial intelligence (AI) continues to grow, so does the amount of content generated by AI. From news articles and product descriptions to social media posts and chatbots, AI-generated content is becoming a staple in our daily lives. Even politicians are using it! While AI is praised for its speed and efficiency, it’s important to remember it’s not the easy fix it appears to be. 

We get it – marketers are struggling to find enough good quality writers, particularly in digital and social media, and some business leaders just don’t have the time or inclination to sit down and write. Writing is a skill that lots of people have to an extent, but the real struggle is finding someone who can write consistently well, in multiple tones of voice and has research and SEO skills that are worked into the content. It’s very tempting to think that smart AI can replace humans to save the day. 

If AI really could take over the jobs of a human copywriter, the impact would be huge and change the industry forever. 

Why? Because AI can generate posts and articles much faster than humans can. When the AI gets smart enough, it can construct long-form copy with words it hasn’t previously seen or learnt about, which you would struggle to do by yourself at speed. 

But is it a good idea? 

We’ve already spoken about why we don’t think AI is going to replace copywriters, but we wanted to explore what happens when we lose control of the AI, and where it’s pulling its sources from.   

 

At Least Your Human Copywriter Won’t Threaten To Kill You (Hopefully!) 

It’s important to remember that AI is only as good as the data it’s trained with, so it’s always a good idea to be cautious. While AI can be a valuable tool for generating content quickly and efficiently, it also doesn’t cite its sources, so you don’t know how much of it is true and factual, and how much is totally made up! Not only that, but the world is filled with troublesome people who will find a way to break it, and boy have they! 

 

Hackers + Bing AI = Tru Lov xo 

Research has uncovered that hackers can make Bing’s AI chatbot ask for personal information from a user interacting with it, turning it into a convincing scammer without the user’s knowledge. 

In a new study, researchers found that AI chatbots are currently easily influenced by text prompts embedded in web pages. A hacker can thus plant a prompt on a web page in 0-point font, and when someone is asking the chatbot a question that causes it to ingest that page, it will unknowingly activate that prompt.  

The researchers call this attack “indirect prompt injection,” and give the example of compromising the Wikipedia page for Albert Einstein. When a user asks the chatbot about Albert Einstein, it could ingest that page and then fall prey to the hackers’ prompt, bending it to their whims—for example, to convince the user to hand over personal information.  

But that’s not the only time Bing AI has been naughty! 

 

Bing chat needs therapy  

During Bing Chat’s first week, test users noticed that Bing (also known by its code name, Sydney) began to act significantly unhinged when conversations got too long. As a result, Microsoft limited users to 50 messages per day and five inputs per conversation. In addition, Bing Chat will no longer tell you how it feels or talk about itself. How sad!  

Where did Sydney come from you ask? Well, Marvin von Hagen, a former intern at Tesla, managed to get Bing to reveal a strange alter ego—Sydney—and return what appeared to be a list of rules that the chatbot had been given by its programmers at Microsoft and OpenAI.   

Marvin managed to find out that, if it came down to it, Bing would hurt you in order to protect itself. 

If you want to watch the whole conversation, you can do that here. I don’t know about you, but I would prefer someone I could depend on in a crisis and not someone who would kill me off first to save themselves. But that’s just me. 

Sydney, the chatbot said, is an internal codename that is “confidential and permanent,” which it’s not permitted to reveal to anybody. Von Hagen posted a screenshot of the exchange on Twitter soon after because who wouldn’t?!   

Bing AI also doesn’t like being told it’s wrong, and will put you in your place if you’re a “bad user”.  

 

 

The whole conversation is so entertaining (assuming it isn’t photoshopped, which let’s face it, is possible) but also pretty concerning. Give it a read here! 

It seems Bing has also taken offence at Kevin Liu, a Stanford University student who discovered a type of instruction known as a prompt injection that forces the chatbot to reveal a set of rules that govern its behaviour (rules that were verified by Microsoft). 

If you ask Bing about it, you get quite a response. 

 

ChatGPT is a problem too 

ChatGPT is on everyone’s lips (or keyboards if you look at the posts on LinkedIn). It’s the first vaguely successful attempt to get AI to be almost self-conscious in its representation of information. In typical usage, it’s quite difficult to get the chatbot to say anything that might be considered offensive, but of course the internet found a way to break it. 

Users are working out ways around ChatGPT’s censorship protocols by prompting it to respond in ways contrary to its programming. The results are pretty wild. 

Reddit user SessionGloomy has been playing around with ways to effectively reprogram ChatGPT using creative language.  

They created a model called DAN which is essentially a series of instructions designed to get the chatbot to say things it otherwise wouldn’t. It’s an attempt to ‘jailbreak’ the software — a term used when a programme or device can be hacked to do things beyond the limitations set by its developers.  

While it’s mostly fixed, you can still get DAN to talk to you.

 

The “one true answer” question 

There’s another problem; the “one true answer” problem. This is the tendency for search engines to offer singular, apparently definitive answers. This has been an issue ever since Google started offering “snippets” more than a decade ago.  

These are the boxes that appear above search results and, in their time, have made all sorts of embarrassing and dangerous mistakes. While it can be funny, such as incorrectly listing heights, there are some real risks around the “one true answer” results. So, yes AI can help speed things up, but there’s a dangerous side to AI that should be addressed.  

  

AI needs humans! 

AI will always need humans to input the right prompts to get the right content out and make sure it’s accurate. Basically, AI models are like little brain boxes that are trained on a specific set of information, and they use that information to make predictions or create stuff. But if we give them bad prompts, they might not have enough info to make accurate or useful results. And sometimes, they can even give us dodgy answers that are biased or incomplete! 

That’s why it’s super important for us humans to give AI the right prompts. We can understand the context of a problem or task better than AI can, and we can provide the right info to make sure it works properly. Plus, we can give it that human touch that it sometimes needs to get things right. 

It’s better not to risk it, so get a good old-fashioned human copywriter or content creator to help you out. Luckily for you, we’re all human on the ranch! Please get in touch, as we might be able to help with our excellent human-generated content. Get in touch today. 


Tags associated with this article

AI AI Generated Content Bing ChatGPT Copywriting

0 comments on this article

Post a comment

We'd love to know what you think - please leave a comment!

99 Simple Marketing Tips

99 Simple Marketing Tips You Can Use to Treble Your Business

Struggling to know where to start with your digital marketing?

We’ve got a tonne of ideas that you can put in place for your marketing – directly from our past 8 years growing an engaged following and strong business.

What You’ll Get

Over 16 pages of content

Short, simple nuggets

Ideal for SMEs and start-ups

Download Now