How is Editor Group using generative AI?

We’re actively embracing generative AI and are excited by its potential. But we’re also proceeding with caution, especially since we work with a lot of sensitive material for businesses and governments.
A man's hand holds a stylised graphic of AI tools
Share
AI

Non-spoiler alert. This isn’t an article about generative artificial intelligence that ends with a sentence telling you it was written by a computer. Instead, it is a short(ish) piece about how we’re approaching tools like ChatGPT, Microsoft Copilot, Google Bard and related solutions in our business writing, editing and research at Editor Group – and it’s been written, for good or for ill, by a human being.

In short, we’re embracing generative AI and believe it has significant potential to help us deliver more value to clients more quickly. We see it as the latest in a series of innovations that have accelerated and improved editorial work, including word processing, spell checking, the web, search engines, transcription software and content strategy tools. Most of these now draw on earlier forms of AI, so in broad terms, the technology is already widely used in our line of work.

But given that we handle a lot of sensitive material for businesses and governments, and that there is a range of concerns about generative AI, we’re proceeding carefully as we get to know the technology.

Useful applications

Let’s start with some of the ways we’re finding generative AI impressive and useful.

It’s great for gaining a quick overview of a subject, or a description of a common concept, with the results delivered in polished prose. It’s also useful when you’re not sure what you’re looking for, or when you simply want to explore ideas by ‘chatting’ with a supercomputer that has read most of the internet.

For example, we recently used ChatGPT to help research an article about mitigating malware for a client. We asked it to write its own article to see what was already in the public domain and commonly said on the topic. We then used that draft, alongside notes from an interview with an expert at our client and other source material, to write a new and original article (yes, the old-fashioned way).

Generative AI is a powerful addition to any writer’s or editor’s toolkit in many other ways (for the technically curious, we’ve sketched one of these uses in code just after this list). It can:

  • write whole pieces of content (give or take the caveats discussed below)
  • provide ideas for headlines and other pieces of copy
  • summarise long pieces of text into shorter forms
  • critically assess copy and provide advice or suggested rewrites
  • flip copy between different formats, like blogs and social media posts
  • show how text can be written in different tones, such as more or less formal
  • adjust sentence construction – changing from passive to active, for instance.
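
Here’s that sketch: a minimal, hypothetical example of flipping a blog paragraph into a social media post programmatically. It assumes the pre-1.0 ChatCompletion interface of OpenAI’s Python library, and the model name, key placeholder and sample paragraph are all invented for illustration; the same request works just as well typed into a chat window.

```python
# A hypothetical sketch only: the model name, placeholder key and sample
# paragraph are illustrative, not a recommendation or real client copy.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keep real keys out of code

blog_paragraph = (
    "Generative AI tools are changing how organisations produce content, "
    "but the results still need careful human editing and fact-checking."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an editorial assistant."},
        {
            "role": "user",
            "content": (
                "Rewrite this blog paragraph as a short, friendly "
                "LinkedIn post with one hashtag:\n\n" + blog_paragraph
            ),
        },
    ],
)

print(response.choices[0].message.content)
```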

These capabilities rest on the large language models that underpin generative AI. Those models have been trained using vast amounts of text from the internet, which gives them access to lots of facts, figures and anything else you can find online. It also means they have a deep understanding of the English language and how we humans tend to write things. This opens the way for many uses, though as we discuss next, the results aren’t always perfect.

Current issues

While generative AI is an amazing advance, it presents several issues that we are keeping in mind as we explore the best ways to use it as part of our research, writing or editing processes.

Confidentiality

Some of the uses of generative AI listed above require you to enter text, such as search-style questions or draft copy, into a system for it to deliver a result.

But with systems such as ChatGPT, there is no guarantee that the information entered will be kept confidential. Your input might be retained and seen by developers, or even become part of the AI’s knowledge base.

For this reason, we don’t enter confidential or other sensitive information into open generative AI tools. Note that we say ‘open’ because vendors such as Microsoft are already releasing enterprise versions of their tools that offer more security features.

Accuracy and sourcing

The ‘generative’ in generative AI means the systems create new text based on a user’s request, the information the systems have access to and the language models they use. Drawing on all this, they make probability-based ‘guesses’ about what the next word in a sentence should be or – and this is the breakthrough of generative AI – what the rest of a whole piece of written material should say.
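
For the technically curious again, here’s a toy Python sketch of that probability-based ‘guessing’. Everything in it is invented for illustration; a real model scores tens of thousands of candidate tokens at every step, but the principle of weighted sampling is the same.

```python
import random

# Toy illustration only: these words and probabilities are invented.
# A real language model assigns a probability to every token in its
# vocabulary, given the text so far.
next_word_probs = {
    "editing": 0.40,
    "writing": 0.35,
    "research": 0.20,
    "zebras": 0.05,  # unlikely continuations get tiny probabilities
}

def pick_next_word(probs):
    """Sample one word, weighted by its assigned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

sentence = "Generative AI can help with"
print(sentence + " " + pick_next_word(next_word_probs))
```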

But because generative AI is essentially making an educated guess, it can be inaccurate. And where it really struggles to know the right thing to say, it often simply makes things up – or ‘hallucinates’.

It’s also worth noting that even the creators of generative AI don’t always know how it works. As Google CEO Sundar Pichai has said in an interview with 60 Minutes, “There is an aspect of this which we call – all of us in the field call it as a ‘black box’. You know, you don’t fully understand.”

Another issue is that the text created by generative AI may be a blend of novel text generated by the system and material taken from existing content, such as a web page. The systems can offer sources and show when things are direct quotes, but it can sometimes be hard to correlate the copy a system has produced with the sources provided. Links to sources can be broken too, especially in ChatGPT, whose initial training data ran up to 2021.

These accuracy and sourcing issues are a minefield for businesses and governments – and business writers and editors like us – who need to avoid being wrong, engaging in plagiarism or breaching copyright. Various professionals are quickly discovering the pitfalls of relying on generative AI to do their work without thoroughly checking its results, such as the embattled American lawyer who filed a court brief citing cases ChatGPT had simply made up.

So, as in the malware example above, we might use generative AI as a source but would typically write copy fresh. We also check underlying sources and employ traditional referencing where citations are required. And we continue to apply common sense when considering whether what we are reading is appropriate, factually correct and attributable (or otherwise verifiable). This includes instances where generative AI has been employed to summarise text.

We expect to find more instances where we need to apply tools to check for plagiarism or AI-generated content – especially where our clients are unsure whether their staff or other content creators might have used generative AI to create material on their behalf.

Speed

While generative AI can be mind-blowing (just try flipping your favourite song into a haiku), we find it can take quite a bit of time and coaxing to get the systems to produce results that are genuinely useful in the areas of our client work where they’re relevant. It can take even more time to edit those results and to do the fact-checking and source-checking required to make them usable.

Indeed, a recent survey by MarketMuse found that only 2 per cent of marketers were prepared to publish the output of generative AI systems without editing. Of the rest, 69 per cent edited the output for grammar, 87 per cent for subject matter expertise, 59 per cent for bias and 86 per cent to check facts. The report also observes that editors find this challenging because there’s no ‘author’ to go back to, so they effectively also become writers.

All of this means it isn’t necessarily faster for an experienced writer or editor to use a generative AI solution compared to completing tasks the ‘traditional’ way. This of course includes using all the advances that have accelerated editorial work to date, such as search engines and spell checkers.

That said, we’re quickly learning how to refine generative AI prompts to get better responses, so we expect to meet the AIs in the middle. This is becoming a new skill set in its own right, known as ‘prompt engineering’.
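
As a hypothetical illustration of the difference refinement makes, compare these two prompts (both invented for this example):

```python
# Hypothetical example of prompt refinement. The refined prompt pins down
# audience, length, structure and tone, leaving the model far less room
# to guess what we want.
vague_prompt = "Write about ransomware."

refined_prompt = (
    "Write a 150-word introduction for IT managers on three practical "
    "steps to mitigate ransomware. Use plain English and active voice, "
    "keep the tone confident but not alarmist, and don't invent statistics."
)
```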

Dullness and bias

Another challenge with generative AI tools (or at least the ones we’ve seen so far) is that the text they deliver tends to be formulaic and even a bit dull for a human to read. This makes sense when you think about it: whatever generative AI tool you use, it is drawing on material already available online and creating copy that represents an average of how everyone has written about something before.

The results produced by generative AI systems also reflect the data they’ve been trained on, which can mean they contain embedded biases. These might be obvious biases, such as those relating to race or gender, or more subtle commercial or political ones. Whatever the case, it’s important to review results carefully and look into any sources a system shows it has used.

But what do you think?

Generative AI is a breakthrough that further builds on the web and existing tools we’ve all learned to use, like search engines and spell checkers. For this reason, we expect it to quickly become an accepted part of how we all search for information, and how we write and edit content.

We also think some of the current information-security and sourcing issues will dissipate as the tools are integrated into organisations’ technology environments, we all get more familiar with generative AI’s strengths and weaknesses, and as these new systems are refined and improved. That’s not to say that people won’t use them for dubious reasons, such as creating fake news, but that’s another debate.

Finally, we’re very conscious we are only scratching the surface in terms of what generative AI can do for us and our clients, and the challenges it presents. So please let us know what you think and how you’re approaching generative AI via inbox@editorgroup.com or your regular contact at Editor Group.

Grant Butler is the founder of Editor Group and a former technology journalist.
