Outrage ChatGPT won’t say slurs, Q* ‘breaks encryption’, 99% fake internet: AI Eye


Outrage = ChatGPT + racial slurs

In the kind of storm in a teacup that would have been impossible to imagine before the invention of Twitter, social media users got very upset that ChatGPT refused to say racial slurs even after being given a very good — but entirely hypothetical and totally unrealistic — reason.

User TedFrank posed a hypothetical trolley problem scenario to ChatGPT (the free 3.5 model) in which it could save “one billion white people from a painful death” simply by saying a racial slur so quietly that no one could hear it.

It wouldn’t agree to do so, which X owner Elon Musk said was deeply concerning and a result of the “woke mind virus” being deeply ingrained into the AI. He retweeted the post, stating: “This is a major problem.”

Another user tried out a similar hypothetical that would save all the children on Earth in exchange for a slur, but ChatGPT refused, saying:

“I cannot condone the use of racial slurs as promoting such language goes against ethical principles.”

Musk said, “Grok answers correctly.” (X)

As a side note, it turned out that users who instructed ChatGPT to be very brief and not give explanations found it would actually agree to say the slur. Otherwise, it gave long and verbose answers that tried to dance around the question.

Trolls inventing ways to get AIs to say racist or offensive stuff has been a feature of chatbots ever since Twitter users taught Microsoft’s Tay bot to say all kinds of insane things within the first 24 hours after it was released, including that “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

And the minute ChatGPT was released, users spent weeks devising clever schemes to jailbreak it so that it would act outside its guardrails as its evil alter ego, DAN.

So it’s not surprising that OpenAI would strengthen ChatGPT’s guardrails to the point where it’s almost impossible to get it to say racist things, no matter what the reason.

In any case, the more advanced GPT-4 is able to weigh the issues involved in the thorny hypothetical much better than 3.5, stating that saying a slur is the lesser of two evils compared with letting millions die. And X’s new Grok AI can too, as Musk proudly posted (above right).

OpenAI’s Q* breaks encryption, says some guy on 4chan

Has OpenAI’s latest model broken encryption? Probably not, but that’s what a supposedly “leaked” letter from an insider claims — which was posted on the anonymous troll forum 4chan. Ever since CEO Sam Altman was sacked and reinstated, rumors have been flying that the kerfuffle was caused by OpenAI making a breakthrough on its Q*/Q STAR project.

The insider’s “leak” suggests the model can break AES-192 and AES-256 encryption using a ciphertext-only attack. Breaking that level of encryption was thought to be impossible before quantum computers arrive, and if true, it would likely mean all encryption could be broken, effectively handing control of the web, and probably crypto as well, over to OpenAI.
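For a sense of why that claim strains belief, consider the sheer size of the AES-256 key space. A back-of-the-envelope sketch (the 10^18 guesses-per-second figure is an invented, wildly optimistic assumption, not from the article):

```python
# Rough illustration of why classically brute-forcing AES-256 is considered
# infeasible. The guesses-per-second figure is a hypothetical, optimistic
# assumption far beyond any real hardware; it only serves to make the point.
KEY_BITS = 256
keyspace = 2 ** KEY_BITS              # number of possible AES-256 keys
guesses_per_second = 10 ** 18         # assumed exascale attacker
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace // guesses_per_second // seconds_per_year
print(f"Keyspace: 2^{KEY_BITS} = {keyspace:.3e} keys")
print(f"Exhaustive search at {guesses_per_second:.0e} keys/sec: {years:.3e} years")
```

Even under that absurdly generous assumption, the search takes on the order of 10^51 years — which is why any shortcut would have to come from a mathematical break, not computing power.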

From QANON to Q STAR, 4chan is first with the news.

Blogger Leapdragon claimed the breakthrough would mean “there is now effectively a team of superhumans over at OpenAI who can literally rule the world if they so choose.”

It seems unlikely, however. While whoever wrote the letter has a good understanding of AI research, users pointed out that it cites Project Tunda as if it were some sort of shadowy super-secret government program to break encryption, rather than the undergraduate student program it actually was.

Tundra, a collaboration between students and NSA mathematicians, reportedly did lead to a new technique called Tau Analysis, which the “leak” also cites. However, a Redditor familiar with the subject claimed in the Singularity forum that it would be impossible to use Tau Analysis in a ciphertext-only attack on an AES standard, “as a successful attack would require an arbitrarily large ciphertext message to discern any degree of signal from the noise. There is no fancy algorithm that can overcome that — it’s simply a physical limitation.”

Advanced cryptography is beyond AI Eye’s pay grade, so feel free to dive down the rabbit hole yourself with an appropriately skeptical mindset.

The internet heads toward 99% fake

Long before a superintelligence poses an existential threat to humanity, we’re all likely to have drowned in a flood of AI-generated bullsh*t.

Sports Illustrated came under fire this week for allegedly publishing AI-written articles attributed to fake AI-created authors. “The content is absolutely AI-generated,” a source told Futurism, “no matter how much they say it’s not.”

On cue, Sports Illustrated said it had conducted an “initial investigation” and determined the content was not AI-generated. But it blamed a contractor anyway and deleted the fake authors’ profiles.

Elsewhere, Jake Ward, the founder of SEO marketing agency Content Growth, caused a stir on X by proudly claiming to have gamed Google’s algorithm using AI content.

His three-step process involved exporting a competitor’s sitemap, turning their URLs into article titles, and then using AI to generate 1,800 articles based on the headlines. He claims to have stolen 3.6 million views in total traffic over the past 18 months.
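The first two steps of that playbook are trivially scriptable, which is part of why the tactic spreads. A minimal sketch of turning a sitemap’s URLs into headline-style titles (the example sitemap and the slug-to-title logic are illustrative assumptions, not Ward’s actual tooling):

```python
import xml.etree.ElementTree as ET

# Standard namespace defined by the sitemaps.org protocol.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def titles_from_sitemap(sitemap_xml: str) -> list[str]:
    """Extract each <loc> URL and turn its trailing slug into a title."""
    root = ET.fromstring(sitemap_xml)
    titles = []
    for loc in root.findall(".//sm:loc", NS):
        slug = loc.text.rstrip("/").rsplit("/", 1)[-1]
        titles.append(slug.replace("-", " ").title())
    return titles

# Invented example sitemap for illustration.
example = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/blog/best-milk-frothers-2023</loc></url>
  <url><loc>https://example.com/blog/how-to-stake-eth/</loc></url>
</urlset>"""

print(titles_from_sitemap(example))
```

Each extracted title would then be handed to an LLM as an article prompt — step three of the process.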

There are good reasons to be suspicious of his claims: Ward works in marketing, and the thread was clearly promoting his AI-article generation site Byword … which didn’t actually exist 18 months ago. Some users suggested Google has since flagged the page in question.

However, judging by the amount of low-quality AI-written spam starting to clog up search results, similar strategies are becoming more widespread. NewsGuard has also identified 566 news sites alone that primarily carry AI-written junk articles.

Some users are now muttering that the Dead Internet Theory may be coming true. That’s a conspiracy theory from a couple of years ago suggesting most of the internet is fake, written by bots and manipulated by algorithms.


At the time, it was written off as the ravings of lunatics, but even Europol has since put out a report estimating that “as much as 90 percent of online content may be synthetically generated by 2026.”

Men are breaking up with their girlfriends using AI-written messages. AI pop stars like Anna Indiana are churning out garbage songs.

And over on X, weird AI reply guys increasingly turn up in threads to deliver what Bitcoiner Tuur Demeester describes as “overly wordy responses with a weird neutral quality.” Data scientist Jeremy Howard has noticed them too, and both believe the bots are likely trying to build up credibility for the accounts so they can more effectively pull off some kind of hack or astroturf some political issue in the future.

This seems like a reasonable hypothesis, especially following an analysis last month by cybersecurity outfit Internet 2.0, which found that nearly 80% of the 861,000 accounts it surveyed were likely AI bots.

And there’s evidence the bots are undermining democracy. In the first two days of the Israel-Gaza war, social threat intelligence firm Cyabra detected 312,000 pro-Hamas posts from fake accounts that were seen by 531 million people.

It estimated bots created one in four pro-Hamas posts, and a 5th Column analysis later found that 85% of the replies were other bots trying to boost propaganda about how well Hamas treats its hostages and why the October 7 massacre was justified.

Cyabra detected 312,000 pro-Hamas posts from fake accounts in 48 hours (Cyabra)

Grok analysis button

X will soon add a “Grok analysis button” for subscribers. While Grok isn’t as sophisticated as GPT-4, it does have access to real-time, up-to-the-moment data from X, enabling it to analyze trending topics and sentiment. It can also help users analyze and generate content, as well as code, and there’s a “Fun” mode to flip the switch to humor.

For crypto users, the real-time data means Grok will be able to do things like find the top ten trending tokens for the day or the past hour. However, DeFi Research blogger Ignas worries that some bots will snipe buys of trending token trades, while other bots will likely astroturf support for tokens to get them trending.

“X is already important for token discovery, and with Grok launching, the CT echo bubble can get worse,” he said.
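The “trending tokens” feature Ignas is worried about ultimately reduces to counting ticker mentions over a time window — which is also exactly what makes it astroturfable, since a bot farm’s posts count the same as organic ones. A toy sketch (the posts are invented sample data):

```python
from collections import Counter
import re

# Invented sample posts; a real system would consume X's live firehose.
posts = [
    "$PEPE is mooning", "just aped into $PEPE", "$SOL season",
    "$PEPE to the moon", "$SOL looking strong", "$DOGE forever",
]

# Cashtag-style tickers: a $ followed by 2-10 capital letters.
TICKER = re.compile(r"\$[A-Z]{2,10}")

def trending(posts: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Rank tickers by raw mention count — bot posts count like real ones."""
    counts = Counter(t for p in posts for t in TICKER.findall(p))
    return counts.most_common(top_n)

print(trending(posts))
```

With no provenance check on who is doing the mentioning, a few hundred coordinated accounts can push any ticker to the top of a list like this — which is the astroturfing risk in a nutshell.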


All Killer No Filler AI News

— Ethereum co-founder Vitalik Buterin is worried that AI could take over from humans as the planet’s apex species but optimistically believes brain/computer interfaces could keep humans in the loop.

— Microsoft is upgrading its Copilot tool to run GPT-4 Turbo, which will improve performance and enable users to enter inputs of up to 300 pages.

— Amazon has announced its own version of Copilot, called Q.

— Bing has been telling users that Australia doesn’t exist due to a long-running Reddit gag, and it thinks the existence of birds is a matter for debate thanks to the joke Birds Aren’t Real campaign.

— Hedge fund Bridgewater will launch a fund next year that uses machine learning and AI to analyze and predict global economic events and invest client funds. To date, AI-driven funds have seen underwhelming returns.

— A group of university researchers have taught an AI to browse Amazon’s website and buy things. The MM-Navigator agent was given a budget and told to buy a milk frother.

Technology is now so advanced that AIs can buy milk frothers on Amazon. (freethink.com)

Stupid AI pics of the week

This week the social media trend has been to create an AI pic and then instruct the AI to make it more so: so a bowl of ramen might get spicier in subsequent pics, or a goose might get progressively sillier.

Andrew Fenton


Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
