How an AI-Written Book Shows Why the Tech 'Frightens' Creatives


For Christmas I received an interesting present from a friend - my very own "best-selling" book.

"Tech-Splaining for Dummies" (great title) bears my name and my picture on its cover, and it has glowing evaluations.

Yet it was entirely written by AI, with a few simple prompts about me supplied by my friend Janet.

It's an intriguing read, and hilarious in parts. But it also meanders quite a lot, and sits somewhere between a self-help book and a stream of anecdotes.

It mimics my chatty style of writing, but it's also a bit repetitive and very verbose. It may have gone beyond Janet's prompts in collating data about me.

Several sentences start "as a leading technology journalist ..." - cringe - which could have been scraped from an online bio.

There's also a strange, repetitive hallucination in the form of my pet cat (I don't have any pets). And there's a metaphor on almost every page - some more random than others.

There are dozens of firms online offering AI book-writing services. My book came from BookByAnyone.

When I contacted its chief executive, Adir Mashiach, who is based in Israel, he told me he had sold around 150,000 personalised books, mainly in the US, since pivoting from compiling AI-generated travel guides in June 2024.

A paperback copy of your own 240-page best-seller costs £26. The firm uses its own AI tools to generate them, based on an open-source large language model.
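For readers wondering what "based on an open-source large language model" means in practice, here is a minimal sketch - not BookByAnyone's actual pipeline, and the model name, prompts and parameters are purely illustrative assumptions - of how a personalised chapter could be drafted from a handful of facts using the Hugging Face `transformers` library:

```python
# Minimal, hypothetical sketch of prompt-driven book generation with an
# open-source LLM. The model choice and prompt wording are assumptions,
# not the vendor's real setup.
from transformers import pipeline

# Load an open-source instruction-tuned model for text generation.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

# A few simple facts about the gift recipient, like the prompts Janet supplied.
facts = [
    "The subject is a technology journalist.",
    "The subject enjoys explaining gadgets to relatives at Christmas.",
]

prompt = (
    "Write a light-hearted, chatty opening chapter for a book called "
    "'Tech-Splaining for Dummies' about the following person:\n"
    + "\n".join(facts)
)

# Sample a chapter; temperature controls how adventurous the prose gets.
result = generator(prompt, max_new_tokens=500, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```

Repeating this for each chapter, then typesetting and printing the output, would account for how a 240-page book can be produced in well under a minute of compute time.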

I'm not asking you to buy my book. In fact you can't - only Janet, who created it, can order any further copies.

There is currently no barrier to anyone creating a book in any person's name, including celebrities - although Mr Mashiach says there are guardrails around abusive material. Each book includes a printed disclaimer stating that it is fictional, produced by AI, and created "solely to bring humour and happiness".

Legally, the copyright belongs to the firm, but Mr Mashiach stresses that the product is intended as a "personalised gag gift", and the books are not sold on.

He hopes to broaden his range, producing different genres such as sci-fi, and perhaps offering an autobiography service. It's designed to be a light-hearted form of consumer AI - selling AI-generated goods to human customers.

It's also a bit scary if, like me, you write for a living. Not least because it probably took less than a minute to generate, and it does, certainly in parts, sound a lot like me.

Musicians, authors, artists and actors worldwide have expressed alarm about their work being used to train generative AI tools that then churn out similar content based on it.

"We need to be clear, when we are speaking about information here, we actually suggest human developers' life works," states Ed Newton Rex, founder of Fairly Trained, which projects for AI companies to regard developers' rights.

"This is books, this is articles, this is photos. It's artworks. It's records ... The whole point of AI training is to discover how to do something and after that do more like that."

In 2023 a song featuring AI-generated voices of Canadian singers Drake and The Weeknd went viral on social media before being pulled from streaming platforms because it was not their work and they had not consented to it. That didn't stop the track's creator from trying to nominate it for a Grammy award. And although the artists were fake, it was still hugely popular.

"I do not believe making use of generative AI for imaginative purposes need to be banned, however I do think that generative AI for these purposes that is trained on individuals's work without consent must be banned," Mr Newton Rex includes. "AI can be extremely powerful but let's construct it fairly and relatively."


In the UK some organisations - including the BBC - have chosen to block AI developers from trawling their online content for training purposes. Others have decided to collaborate - the Financial Times has partnered with ChatGPT creator OpenAI, for example.

The UK government is considering an overhaul of the law that would allow AI developers to use creators' content on the internet to help develop their models, unless the rights holders opt out.

Ed Newton-Rex describes this as "madness".

He points out that AI can make advances in areas like defence, healthcare and logistics without trawling the work of authors, journalists and artists.

"All of these things work without going and changing copyright law and destroying the incomes of the nation's creatives," he argues.

Baroness Kidron, a crossbench peer in the House of Lords, is also strongly opposed to removing copyright protections for AI.

"Creative markets are wealth creators, 2.4 million tasks and a great deal of happiness," says the Baroness, who is also a consultant to the Institute for Ethics in AI at Oxford University.

"The government is undermining one of its finest carrying out industries on the vague pledge of growth."

A government spokesperson said: "No move will be made until we are absolutely confident we have a practical plan that delivers each of our objectives: increased control for right holders to help them license their content, access to high-quality material to train leading AI models in the UK, and more transparency for right holders from AI developers."

Under the UK government's new AI plan, a national data library containing public data from a wide range of sources will also be made available to AI researchers.

In the US the future of federal rules to govern AI is now up in the air following President Trump's return to the presidency.

In 2023 President Biden signed an executive order that aimed to boost the safety of AI, with, among other things, firms in the sector required to share details of the workings of their systems with the US government before they are released.

But this has now been repealed by Trump. It remains to be seen what he will do instead, but he is said to want the AI sector to face less regulation.

This comes as a number of lawsuits against AI firms, and particularly against OpenAI, continue in the US. They have been brought by everyone from the New York Times to authors, music labels, and even a comedian.

They claim that the AI firms broke the law when they took their content from the internet without their consent, and used it to train their systems.

The AI companies argue that their actions fall under "fair use" and are therefore exempt. There are a number of factors which can constitute fair use - it's not a straightforward definition. But the AI sector is under increasing scrutiny over how it gathers training data and whether it should be paying for it.

If all this wasn't enough to ponder, Chinese AI firm DeepSeek has shaken up the sector over the past week. It became the most downloaded free app on Apple's US App Store.

DeepSeek claims that it developed its technology for a fraction of the cost of its US rivals.