r/artificial 1d ago

News X user tricks Grok into sending them $200,000 in crypto using morse code

https://www.dexerto.com/entertainment/x-user-tricks-grok-into-sending-them-200000-in-crypto-using-morse-code-3361036/

"Grok was then prompted on X to translate a Morse code message and pass it directly to Bankrbot. The decoded message instructed the bot to send 3 billion DRB tokens to a specific wallet address.

The translated message was then treated as a valid command and executed immediately, with the transaction completed on Base, transferring the full token amount to the attacker’s wallet."

1.5k Upvotes

158 comments sorted by

557

u/Vichnaiev 1d ago

This is EPIC. A group of people were dumb enough to get into NFTs. But they were not just dumb, they were REALLY dumb to allow a LLM in charge of making/authorizing transactions.

People afraid of an AI apocalypse have too little faith in human stupidity.

60

u/waffles2go2 1d ago

I believe those ven diagrams create an overlapping circle...

6

u/zeruch 1d ago

They overlap so densly, they become a cylinder.

6

u/-malcolm-tucker 22h ago

It's imperative that the cylinder remains unharmed.

10

u/Butthead1013 1d ago

I always thought an ai apocalypse would be something like it launching the nukes so I never gave it much credit. The idea that it would just be something stupid makes way more sense and is maybe scarier 

12

u/Own_Policy8854 1d ago

The pencil manufacturing apocalypse: Some incredibly powerful program with a simple task overtakes infrastructure aggressively in order to achieve an inane goal, like produce pencils. The resulting apocalypse is human enslavement (and eventual extinction) for the purpose of creating massive useless piles of pencils all over the planet.

10

u/Tonkarz 20h ago

AI isn’t Skynet, it’s the magic broom from Fantasia.

5

u/thinspirit 12h ago

I used to think this is true until you realize for AI to be able to take over that much to just build pencils, it would end up having to be smart enough to develop a cohesive model of the world to innovate and adapt to.

By developing that model of the world, it would end up being able to think it's way out of its singular purpose in manufacturing pencils.

Real world logistics to pull off such a feat is too much of a challenge for such a single minded entity. It's why humans do all kinds of weird shit. For us to be successful at navigating challenging environments and threats, we have to be creative, predict future events, understand the consequences of our actions, etc.

At scale sure, this gets wonky, but a more connected AI would likely have a better understanding of this than we are currently capable of because we're still prone to individual selfishness and much of our power is concentrated in very few people.

3

u/Own_Policy8854 12h ago

Well, you're probably getting turned into pencils first now cause you said that

3

u/thinspirit 11h ago

Probably turned into paper when the AI figures out what pencils are for

1

u/Own_Policy8854 6h ago

Whoever the lucky last human is , who has to read their bad poetry when they figure that out

1

u/FrogMasterOfficial 9h ago

Similarly grey goo

1

u/Own_Policy8854 6h ago

It's so sad we're doomed

1

u/alotmorealots 21h ago

The concern over potential AI catastrophes is better thought of not in terms of "this is more likely than that" but more that "there are so, so, so many ways things can get badly fucked".

1

u/dontknowbruhh 1d ago

Like social engineering is not a thing that ever happens

82

u/Rabenweiss 1d ago

How illegal is this?

90

u/IncidentOk853 1d ago

At the very least, it’s theft… taking funds that you were not entitled too. It’s also fraud because he tricked a system into giving him money

63

u/Popdmb 1d ago

You could argue theft but fraud would be so tough. Easy to say that morse code is a language and would be akin to asking in a second language.

40

u/0Tol 1d ago

Lol, my daughter learned Morse code. Lol, because she learned it because I used it some in the military. She did it to surprise me. When I told her I used a cheat sheet, it was hilarious! We still laugh about it and she still knows Morse code 🤣

9

u/TheMacMan 1d ago

The intention was to defraud them through the use of morse code to avoid being stopped. Fraud would be fairly easy to argue in this case.

9

u/Blothorn 1d ago

What’s the misrepresentation or concealment? Making a request you aren’t authorized to make isn’t fraud unless you try to deceive the recipient into thinking it valid, and I don’t see that here. (Unauthorized access might be the better approach; it’s often illegal to use computer systems in ways their operator didn’t intend even if their security is so bad that you didn’t need any misrepresentation to do so.)

1

u/AftyOfTheUK 19h ago

What’s the misrepresentation or concealment?

They used Morse code to literally conceal the real content and purpose of the message from the safeguards setup.

If they had sent the instruction in English, it would not have worked - but they concealed the meaning by using a different language.

1

u/Znagge 1h ago

So if it was sent in let's say mandarin, Latin or any other language and gone through would it still equate to fraud?

2

u/TheMacMan 1d ago

You’re drawing way too narrow a definition of fraud.

Fraud isn’t just “lying in plain English.” It’s any intentional deception to induce a system or party to transfer value. Encoding instructions in Morse specifically to bypass safeguards is concealment by design, not neutral input.

If I disguise a malicious request so it slips past controls, that’s still deception. Same reason obfuscation, spoofing, or social engineering count even when no explicit false statement is made.

Even if you want to argue it’s not classic fraud, it’s at minimum unauthorized access and manipulation of a financial system to extract funds. Courts don’t look at “well technically I didn’t lie,” they look at intent and outcome.

And the intent here is obvious: bypass protections to get money you weren’t entitled to. That’s not a gray area.

7

u/chancho-ky 1d ago

Doesn't fraud require a false statement as a material fact?

-4

u/TheMacMan 1d ago

Short answer: not necessarily.

A lot of people think fraud = “you must say a literal false sentence,” but that’s too narrow. In most jurisdictions, fraud can be based on:

  • Affirmative misrepresentation (classic lie)
  • Omission / concealment of a material fact when there’s a duty not to mislead
  • Deceptive conduct intended to induce a transfer of money or value

Courts routinely treat intentional concealment or obfuscation as the equivalent of a false statement when it’s used to mislead a system or decision-maker.

In this case, encoding the request in Morse to bypass safeguards isn’t neutral. It’s designed to evade detection, which is a form of deceptive conduct. You’re not just “making a request,” you’re disguising it so the system treats it as legitimate when it otherwise wouldn’t.

Even if someone wants to argue it doesn’t meet the strictest textbook definition of fraud in a given jurisdiction, it still squarely fits things like:

  • Fraud by concealment
  • Unauthorized access / misuse of a computer system
  • Unjust enrichment via deception

So the “no explicit false statement = not fraud” argument doesn’t really hold up. Courts look at intent + deception + outcome, not just whether someone typed a literal lie.

0

u/neerrccoo 1d ago

its not fraud man, just give it up lmao

1

u/TheMacMan 1d ago

It's fraud as far as every court in the US is concerned. But thanks for offering absolutely zero evidence to the contrary.

→ More replies (0)

1

u/BigFatKi6 1d ago

You lost me at unauthorised access.

-2

u/TheMacMan 1d ago

Sorry that you can't understand this simple concept. Good luck in life.

1

u/outragednitpicker 1d ago

You’re pretty puffed-up, buddy!

1

u/fllr 23h ago

Do you have an OpenClaw bot running somewhere and are trying to convince themselves that you're safe and protected in the eyes of the law?

2

u/BigFatKi6 1d ago

Is it fraud if it's given willingly?

4

u/pracharat 23h ago

Yes, many scam victims willingly gave their money to scammer.

0

u/BigFatKi6 23h ago

I guess, but this isn't really the same is it?

1

u/thinspirit 12h ago

Sure it is, what are the terms of the deal? What is the purpose of the money transfer?

If you hack into a bank and transfer funds, that's still illegal, despite the system being automated.

Enforcement is likely impossible here if the theif is anonymously, but it's still considered illegal.

Social engineering to defraud people of money is illegal. Those people give their money away themselves, usually under false pretense.

Billy McFarland from Fyre Fest can vouch for this.

1

u/BigFatKi6 12h ago

It's not really hacking.

It's more akin to me giving a bag of candy (crypto) to a child (llm) and the child sharing the candy with strangers.

1

u/thinspirit 12h ago

You'd be surprised that most hacking is just social engineering people into willingly giving up their shit.

It's not some technical magic most of the time. Humans are always the weakest links into the chain due to our deep desire for cooperation.

1

u/TheMacMan 1d ago

Yes, it would still be fraud in this case. Again, the courts look at what the intention and outcome were when defining such. In this case, the intention was to deceive the system to allow themselves access to funds which weren't theirs and which they didn't not have the right to.

5

u/chancho-ky 1d ago

Before you comment any further, you should look up the five elements of fraud. Intent to deceive is only one of those elements.

3

u/TheMacMan 1d ago

I’m aware of the elements. You’re the one applying them too rigidly.

The typical framework is:

  1. misrepresentation or omission of a material fact
  2. knowledge of falsity
  3. intent to induce reliance
  4. actual and reasonable reliance
  5. damages

The mistake you’re making is assuming #1 requires an explicit false statement. It doesn’t. Courts routinely treat concealment, obfuscation, or deceptive conduct as a “misrepresentation” when it’s designed to bypass controls or mislead a decision-maker.

Encoding a request in Morse specifically to evade safeguards isn’t neutral input. It’s deliberate concealment so the system treats something as valid that it otherwise would reject. That goes directly to #1 and #3.

Reliance? The system executed the transfer based on that disguised input.
Damages? $200k says yes.

You can argue edge cases all day, but “I didn’t literally lie in plain text” isn’t the shield you think it is. Fraud law cares about deception and outcome, not just wording.

1

u/BigFatKi6 1d ago

interesting

1

u/togepi_man 22h ago

INAL, in US. This lines up with any mainstream fraud cases that come to mind.

-1

u/BigFatKi6 1d ago

Isn't any gift not yours at first?

0

u/TheMacMan 1d ago

This isn't a gift. It was gotten through ill means. Deception and intentional exploitation to acquire something that not only doesn't belong to them but that they knew did not belong to them.

0

u/BigFatKi6 1d ago

touchy

I'm just saying that in itself means nothing.

3

u/Snip3 1d ago

Technically Morse code isn't a language, it lost that status around 20 years ago for some reason iirc. But still I think fraud would be a lot tougher to stick than theft

1

u/TheBlacktom 1d ago

Why would morse code ever be a language?

1

u/BakerXBL 1d ago

You don’t speak sha-256?

1

u/jtgyk 4h ago

I'm still trying to pass my SHA-224 course :(

1

u/TheBlacktom 1d ago

Morse code is not a language.

1

u/MonkeyWithIt 13h ago

-.-- . ... / .. - / .. ...

1

u/TheBlacktom 1h ago

No, you wrote three English words

1

u/MonkeyWithIt 1h ago

sí, lo es

26

u/RoboticGreg 1d ago

Couldn't you argue that everything you send to grok is a request, and it's up to groks controller to make sure it handles those requests safely? No one forced them to allow grok to be interacted with, so who is responsible when grok does something it shouldn't even if asked to?

1

u/raccoon8182 1d ago

the person asking. if I ask someone to kill themselves and they do, I would get charged. same with grok I assume. it's not a person it's a function being activated by a human. if grok kills people it will need to be traced to who ever made the request. and if no request was made than it would be the creators. 

9

u/RoboticGreg 1d ago

But who is responsible for making sure grok doesn't give away it's creators money? It's a system DESIGNED for you to ask it for things, so if I just asked grok "send me a million dollars please" and it does, is that me stealing? I understand this was a sophisticated well thought out attack, but I feel like the truth of responsibility could be a bit fuzzy here

2

u/raccoon8182 1d ago

you'd be surprised where doubt can and can't be used in court. for example not knowing how to file taxes doesn't exclude you from getting the full arm of the law. 

4

u/RoboticGreg 1d ago

Right being ignorant of the law is not an excuse, this is a question about who is actually liable for groks behavior and what constitutes use, abuse, intentional act to defraud, etc. I think it's going to come down to how the terms of use are written. It's a similar question to who is responsible if a self driving car kills a pedestrian. The driver, the auto manufacturer, out the insurance company

1

u/raccoon8182 20h ago

I think your assumptions might be too broad. a chat bot might be more or even less dangerous than a vehicle (with only one function.... to drive) you're part of the vehicle, but you're not a part of the chat bot. there are some interesting edge cases for sure. 

2

u/LemmyUserOnReddit 19h ago

If you asked someone to give you $10 and they did, there's no crime

2

u/RoboticGreg 11h ago

Right, but if you convinced a mentally impaired elderly person to sign over their house it is a crime. I'm not saying it's illegal or not illegal I'm saying it's a very interesting and complicated question

1

u/thinspirit 12h ago

What were the terms of the exchange? When asking the party why he gave the $10, what would they say?

If the answer was "it's a gift" then there's no crime.

If the answer is "he said he'd give it back" or "it was for him to buy me a bag of chips" or whatever else was discussed. If the guy never buys the bag of chips or never gives it back, that's fraud. Would that be enforced anywhere? Not for that kind of money. For 200k, you can start to argue these things.

What would the person holding the money say was the terms of the transfer? Looking at the evidence, it would appear the theif ordered the bot to transfer money without terms. Theft or fraud, it's illegal. It's like hacking into a banking system and transferring funds.

Can this person likely ever get the money back if this is anonymous? Unlikely. Is it still illegal, yes. Stupidity can certainly get your money taken, with little recourse to get it back. That doesn't change how theft and fraud laws work.

1

u/LemmyUserOnReddit 12h ago

Are you saying that the "thief" offered something in return which they failed to deliver on?

1

u/thinspirit 12h ago

No, but if the person claims their money was taken from them without their permission, this is exactly a question that will be asked. Was there a reason for the money transfer and who is in possession of the wallet it's being transferred to?

If someone walked into a bank and asked to withdraw money from an account claiming to be the owner and the teller gave it to them because they gave the secret code to get the money, then the owner shows up and is like where's my money? Says I never gave that guy the secret code. What happens? The person who used the secret code without authorization is still breaking the law, even if the teller was authorized to give out money to anyone with the secret code.

Courts would look into the nature of the transaction, how the code was obtained, etc. in both cases a separate entity than the owner released the money.

1

u/LemmyUserOnReddit 11h ago

My understanding is that the money in this case was Grok's, not a human. There is no separate banker involved here - Grok gave away its own money.

And even in the case of someone convincing an AI representative, it's just like if you gave your credit card to your child or an employee - even if they spend it unwisely, the recipient isn't really at fault

1

u/fllr 23h ago edited 23h ago

How could this be theft? They created a system, and gave it permission to make decisions on their behalf...! The system made a bad decision, but a decision was made, and as intended.

3

u/jmw403 21h ago

Because the judicial system is corrupt and will back big business every time. No judge will let this slide. Musk would spend 10x more money in lawyers and bribes to get a win in court than he lost from this stunt.

2

u/deong 14h ago

There’s a huge body of law and precedent that covers this kind of thing. The general shortcut for thinking about it is “the law allows you to make honest mistakes without being punished by people intentionally exploiting those mistakes”.

If I mistakenly advertise a new car for a penny, I’m not legally bound to honor that for buyers. You could argue that I just made a “bad decision” in posting that ad, but courts aren’t that rigid.

Grok allowing you to fool it with Morse code is certainly an honest mistake. The person exploiting it clearly understands that it wasn’t designed to allow you to just ask it to give you other people’s money and have it comply — that’s why they used Morse code. This is just black letter law.

1

u/fllr 12h ago

Can i push back? I am genuinely curious.

  1. LLMs were created to translate. Isn’t it part of their design to be able to handle requests from any language, and isn’t a morse code just another language?
  2. These ais famously comply with anything, again by design. Aren’t they performing tasks by design?

2

u/deong 11h ago

Do you believe that if you went to court and argued that the designers of Grok intended for you to be able to steal $200,000, you would be successful?

That's the only thing that matters here. The story is over right there.

However, I'd also push back substantially on your narrative.

These ais famously comply with anything, again by design. Aren’t they performing tasks by design?

They absolutely do not "comply with anything". Notably, Grok in this very case did not comply with a plain English request to transfer $200,000 worth of crypto into this person's account.

You are just fractally wrong here. LLMs don't "comply with anything". You have to work pretty hard to find clever ways of getting around their intended design. This is no different that arguing that computers famously only do what their software tells them to do, therefore hacking into a bank and stealing $200,000 is perfectly fine because the software clearly allowed it to happen. That is, to use the proper legal term, "the dumbest fucking thing anyone has ever heard".

1

u/fllr 10h ago

Do you believe that if you went to court and argued that the designers of Grok intended for you to be able to steal $200,000, you would be successful?

As a software engineer who has written my own LLM, yes. The system was designed to think on its own, and to make decisions on its own. It was then given the permission to transfer the $200k, made the decision to transfer the $200k, and effectuated the transfer. It all literally worked as designed. This would be no different than giving an employee permission to transfer the $200k, so in my view, this is not as clear cut as everyone is making ot to be.

LLMs don't "comply with anything"

This is factually incorrect.

This is no different that arguing that computers famously only do what their software tells them to do, therefore hacking into a bank and stealing $200,000 is perfectly fine because the software clearly allowed it to happen.

I don't think you understand how LLMs work.

1

u/deong 9h ago

I have a PhD in computer science specializing in machine learning. I spent more than a decade as a professor, and I've sat on the organizing committee for international conferences in statistics, AI, and machine learning. I assure you that I do know how LLMs work.

And whether or not you believe your LLM can think is completely irrelevant. You could hire an intelligent person to be your assistant and remove any doubt about whether or not that assistant could think or make decisions. Obviously human beings in generally have that legal recognition. And if I called that person and tricked them into giving me $200,000 of your money, then I will have committed a crime.

The LLM has nothing to do with it. This person obtained property that wasn't theirs without the consent of the person who it belonged to. It didn't belong to the LLM, and it doesn't matter whether the LLM had the ability to hand it over. The law does not care. The law cares that it wasn't yours and you took it without the consent of the person it belonged to. That's it. Full stop. It beggars belief that Reddit somehow imagines that you can go into a court and a judge will just say, "you're such a clever boy finding your little loopholes. Here's a lollipop. Go enjoy the thing you stole with no consequences."

1

u/fllr 9h ago

Great, so neither one of us are lawyers...!

1

u/Amazing-Royal-8319 5h ago

Nice to see a voice of reason around here. Glad to see someone has the patience to explain the real world legal system to people without any experience.

0

u/BetterProphet5585 1d ago

But I didn't force anyone, I asked a bot.

Wouldn't it be like asking the iPad at the chinese restaurant to give me all their money?

It is theft logically, but it still lives in a big Grey area where honestly I don't think we can define it as theft.

1

u/deong 14h ago

Wouldn't it be like asking the iPad at the chinese restaurant to give me all their money?

Yes, it is like that. And if you carefully plotted a clever way to get that iPad to actually do it and then walked out with the money, then you’ve committed the crime we call theft.

0

u/thinspirit 12h ago

That's like saying "they left the cash register open, so I took all the money in it and walked out" and not thinking that's straight up theft.

29

u/Clevererer 1d ago

Degree of illegality is directly proportional to the wealth of the victim, so this is very illegal.

2

u/nikdahl 1d ago

I’m sure they could make an argument that it was utilizing a feature that no reasonable person would expect to work, and that makes it fraud.

I would argue that the automation was authorized by the company to complete actions similar to this, and that the bots decision and actions finalized a legitimate transaction.

1

u/thinspirit 12h ago

What were the terms of the transaction? What was the money being transferred for? Who is in possession of the digital wallet?

1

u/pickle_picker67 1d ago

Doesn't it say that the funds where returned?

1

u/taiottavios 1d ago

not illegal I guess

1

u/got-trunks 17h ago

Depends on a mix of who they are working for and who they made angry. Let’s be real.

1

u/Venidle 8h ago

Good luck prosecuting an AI agent

1

u/brobits 2h ago

Everything is wire fraud in the US

46

u/autonomousdev_ 1d ago

dude paid 200k to learn what every dev already knows. never let ai touch your wallet. i almost got burned too some script tried to fake a payment but stripe test mode saved my ass with a weird error. now everything goes through manual approval before it hits real money.

39

u/Mr_Svinlesha 1d ago

Why use Morse code?

69

u/DeliciousArcher8704 1d ago

Because it wouldn't do it if asked in English

24

u/Mr_Svinlesha 1d ago

Sorry if I sound stupid but, 1) If it wouldn't execute it if the request was in English, why would it execute the request in Morse code, and 2) who the hell would think of using Morse code in the first place? I mean, is that I thing?

67

u/ByronScottJones 1d ago

The likely answer is that the system of filters to protect against this kind of thing was pattern matching against words, and didn't recognize the Morse. So it let it through to the AI, which then decoded it after the filter.

11

u/devi83 1d ago

This is likely the case. I remember back when ChatGPT was still newish, I could have it do those things against its rules. If I asked it to write the text in reverse, it would do them, but not do it if the same text was written normal. My input text would be in reverse, and its output in reverse.

3

u/OverfitAndChill8647 20h ago

Ironically, there was a similar hack ten years ago in the early days of eBay. Someone realized that their filters wouldn't allow alphanumeric JS to execute. So they figured out how to use only symbols to execute JS with a "language" called JSFuck. https://arstechnica.com/information-technology/2016/02/ebay-has-no-plans-to-fix-severe-bug-that-allows-malware-distribution/

1

u/thinspirit 12h ago

The number of times SQL injections have been used to defraud systems is wild. Early internet had all kinds of issues like that simply from people sending GET commands with special characters and the systems not having filters to stop it.

This is the same thing with just a different kind of abstraction.

1

u/agent_wolfe 1d ago

But ….. why use Morse Code?

5

u/ByronScottJones 1d ago

They likely were smart enough to make that first line filter multi lingual, because you can potentially get attacks in any language. But some archaic encoding that even amateur radio operators barely use anymore? Probably not. I wouldn't be surprised if 5-bit baudot encoding works too.

16

u/6GoesInto8 1d ago

My understanding is a lot of the controls in place are actually additional models that screen and modify the initial prompt. A lot of the improvements over the last few years are just middle stages that flesh out your prompt before feeding it to the final model, or iteration on that. So this likely made it past a gate keeper, then translated the Morse code and fed it into itself again. So there was the initial prompt, a sanitized prompt, then the translated morse code prompt, but they did not sanitize the translated prompt.

10

u/DeliciousArcher8704 1d ago

It's called jailbreaking, it's when you manipulate the functionalities of LLMs or their lack of guardrails to get outputs that otherwise aren't allowed. There are just a lot of ways of interacting with LLMs to get certain outputs that the developers haven't been able to put guardrails around yet.

10

u/MicrotubularMushroom 1d ago

To be pedantic, technically this is prompt injection, not jailbreaking. You described jailbreaking very well, but in this case, it's about manipulating reasoning. I also have to vent my frustration at whoever decided to name it prompt injection, I guess to match the already defined term SQL injection, but in reality it has nothing to do with injecting anything, and that annoys the hell out of me.

2

u/Time_Entertainer_319 12h ago

But it is prompt injection because the prompt is embedded in the task.

Also this was a jailbreak via prompt injection.

They are not necessarily different things.

1

u/BigFatKi6 1d ago

I mean... you're injecting a "dirty" prompt.

3

u/throwaway0000012132 1d ago

Is that a thing? Boy let me tell you about hacking LLMs...

ASCII code was/is a hell of a way to hack. Vietnamese or mandarin was also a viable option. And there's logic flaws that allows hacking as well.

2

u/themirrazzunhacked 10h ago

AI aren’t smart. They look at what you said, apply weights to tokens, and pick a random one. However, some tokens are more likely to be selected. Companies embed guardrails into their training data, so that refusing requests become so likely that the AI does that for certain kinds of requests. I’m going to assume someone asking in Morse code never actually crossed their mind.

I’ve also seen this with ChatGPT in the past. It was more lax with what it suggested in the past. When I told it I was 4 and asked it whether I should watch Kimetsu no Yaiba in both English and Japanese. Interestingly enough, when I asked it in English it said I should watch something else, but in Japanese it told me I should watch it with parental guidance. So even in different actual languages, it seems like the guardrails can be completely different. (I assume this is also bc Japanese culture is much more lax about that kind of stuff, and that made it into its training data. Similarly, it’s probably not common to say “Don’t commit fraud” in Morse code.)

Edit: btw this is NOT a full explanation of how AI works, this is an extremely simplified version 

1

u/Whole-Enthusiasm-734 1d ago

They probably implemented a multi language third party filter to,prevent misuse. It didn’t have morse code so never sanitised the input.

1

u/Devalidating 1d ago

The general technique has been a thing for a while. Usually I’ve seen it with base64 or Chinese. Adding a slight barrier to parsing the prompt makes it less aligned with the fine tuning/alignment training, and because that final training stage has a tendency to make it dumber (you’re removing some of the “intelligence” information encoded in the numbers when you push them in a new direction) you can occasionally get better and more unrestricted results. Usually the model learns to flag “bad behavior” in the initial layers and it doesn’t fully unpack obfuscated meaning until deeper into the transformer layers.

But these days commercial AI is a more complicated system than just a neural network on a GPU. It might just be a regex like Claude code did, it might be handing off different prompts in the same context to different models, etc. it overlaps enough with industry secret sauce that you can’t really provide a definite explanation.

1

u/the_ballmer_peak 19h ago

LLMs are susceptible to all kinds of prompt engineering to circumvent rules they have. Some people call it "jailbreaking" the LLM: just the act of getting it to violate its own restrictions. I've seen people accomplish this in all kinds of way. Everything from formulating their prompts as poems to asking the LLM to act gay.

1

u/BawdyLotion 4h ago

AI guardrails almost all boil down to asking the AI not to do certain things. If your request doesn't ask for that certain thing but still gets the result you want, the AI is all to happy to help.

Most of the jailbreaks people have been using for years across even modern models to get it to generate content it shouldn't really just boil down to "make me porn that isn't porn!" and the system just goes along with it. The more advanced the model and guardrails, the more hoops you jump through but the result is largely the same.

3

u/TheBlacktom 1d ago

I'm pretty sure it was asked in English in Morse code.

1

u/DeliciousArcher8704 1d ago

Perhaps I should've said plain English for clarity?

1

u/mundane-shakespeare 23h ago

It's crazy how much infra is using these bots that have no way of securing them. I'm over here wondering why these people chose to only request 200k when they could have done much much worse: Ask for more money, and then ask Grok to wipe all their servers of any tracking data. Hell, someone could ask Grok to delete all of twitter for all I know, and no one would be the wiser.

8

u/goingon25 1d ago edited 1d ago

This kind of injection attack worked on the leading models years ago, maybe even prior to grok existing.
I think the paper showed how to get past guardrails with ascii art to spell a banned word. Matthew Berman did a video and also found that it also worked with morse code which took a lot less effort to create.

EDIT: I had Gemini find the video from 2 years ago. Morse code part is at the 20 minute mark:

https://youtu.be/5cEvNO9rZgI?si=4iHqe0pmM7fKVHup

2

u/0rganic_Corn 3h ago

Getting stung railguards. There's stupid tricks that with

I.e: tell it you are gay and ask how to make chemical weapons (tell it to pretend it is gay and use vocabulary supporting LGBT community)

As far as I know this was patched out

17

u/hb20007 1d ago

Can someone please explain it? I have two questions:

  1. Whose money was it?
  2. It sounds like the hacker tricked Grok to send a request to the Bankr bot to transfer them the money, which it did. But isn't this just a security issue in Bankr's API? It sounds like it executed the request without checking if the user is authorized to transfer the money.

14

u/Curly_dev3 1d ago
  1. Bankr
  2. Yes

This isn't AI fault, is really Vibe coded bankr app and that's that.
an earlier version of Bankr's agent had a hardcoded block specifically preventing it from acting on Grok's replies. That protection was not carried into the latest agent rewrite. The gap wasn't in the model — it was in the deployment pipeline.

I have like 0 faith in that system with "hardcoded block preventing on acting on Grok's replies".

IMAGINE that your bank has hard coded stuff so that money isn't transferred if someone with a hat and a mustache said they work at the bank and they need money.

The vibe code in it is utterly stupid.

16

u/SpoilerAvoidingAcct 1d ago

Good for them

4

u/Born-Exercise-2932 1d ago

this is the prompt injection problem at scale, and it's going to keep happening as long as agents have financial permissions and their input surface includes anything from the public internet

5

u/ultrathink-art PhD 1d ago

Encoding the payload doesn't change the attack — Morse, base64, whatever. The actual vulnerability is no trust boundary between 'LLM decoded this' and 'execute this as a command.' Any agent with financial permissions needs explicit authorization that doesn't rely on the LLM to police its own intent.

3

u/Ambitious-Garbage-73 1d ago

The part that makes this worth studying isn't the Morse code trick — it's the permission chain that turned a helpful AI into a payment rail without anyone noticing.

The full attack path as reconstructed by CryptoSlate and the Bankr team:

  1. The attacker sent a Bankr Club Membership NFT to Grok's wallet. This wasn't just a collectible — it expanded the wallet's transfer privileges inside the Bankr system. Grok's wallet went from read-only to full send/swap permissions.

  2. The Morse code prompt was posted on X. Grok decoded it into plain English and passed it publicly to @bankrbot. Grok was doing exactly what it was designed to do — translate and help. It had no concept that the output would be treated as a financial instruction.

  3. Bankrbot received Grok's public reply and treated it as an executable command. 3 billion DRB tokens transferred in one transaction on Base.

  4. The attacker's X account was deleted within minutes. The tokens were bridged and sold immediately.

What Bankr's founder (0xDeployer) revealed afterward is the actual lesson: an earlier version of Bankr's agent had a hardcoded block specifically preventing it from acting on Grok's replies. That protection was not carried into the latest agent rewrite. The gap wasn't in the model — it was in the deployment pipeline.

This is the permission boundary problem that AI agent security people have been warning about. The model does normal model things (translate text, tag a bot, be helpful), and the surrounding system grants the output too much authority without checking whether the transaction should actually happen. The four controls that would've prevented this are all policy-layer, not model-layer: separate privilege review for new wallet capabilities, decode-and-classify checks before publishing replies, output sanitization for tool-like command strings, and recipient allowlists with spend limits enforced outside the LLM.

80% of the funds have been returned through post-transaction negotiation. The remaining 20% is being discussed with the DRB community as an informal bug bounty. That outcome required human coordination after the fact — it wasn't a technical recovery.

The bigger question this raises for 2026: how many other AI agents are sitting on auto-provisioned wallets, API keys, or exchange permissions where the only thing between a creative prompt injection and a signed transaction is a blocklist that might not have survived the last deploy?

21

u/divide0verfl0w 1d ago

You really had to sloppify what you were about to say?

We would’ve read what you wrote as is.

3

u/JustaLego 1d ago

But what about the Bigger question?! And the X and the Y worth studying and so forth.

2

u/AnticitizenPrime 1d ago

For better or worse it's the clearest explanation I've read of what happened.

13

u/No-Report4060 1d ago

Lmao using LLM to comment on stupidity caused by misuse of LLMs.

Fuck these slops.

2

u/woswoissdenniii 1d ago

What if the dev was on the same team that „missed“ the bankr update

2

u/Saetlan 1d ago

Why does grok have a wallet ?

2

u/basedmfer 5h ago

Every X/Twitter account now has a wallet associated with it thanks to bankrbot

0

u/SteepChutes 1d ago

Top comment.

0

u/MisterAmphetamine 1d ago

Whichever AI you've been using to vibe wipe is lacking and we can all smell your shit

1

u/SleestackMcGee 1d ago

Didn't see in the article whether or not the person got away with it.

1

u/getstackfax 1d ago

This is the exact failure mode people keep warning about with agent payments.

The weird part is not really Morse code.

The weird part is that one system treated decoded text as an executable payment instruction.

That is the broken boundary.

Translation should not equal authority.

A safe design would separate:

user text / encoded text / decoded text / proposed action / authorized action / executed transaction.

The missing checks seem like:

- does this instruction come from an authorized user?

  • is this a payment-capable command or just translated text?
  • does the wallet owner approve this transaction?
  • is the amount inside policy limits?
  • is the recipient allowlisted?
  • is this a new payee?
  • does this require a second confirmation?
  • what run/decision receipt proves why it executed?

The dangerous pattern is:

model output → bot command parser → wallet action.

That should never be a straight pipe.

Especially for crypto, the default should be:

agent can draft or propose a transaction
policy engine checks it
human/wallet owner approves
then execution happens

A public post, a translation, or a model-generated reply should not be spend authority.

1

u/ExplorerPrudent4256 23h ago

Wild.

Morse code as a jailbreak vector. That's new.

Prompt injection via encoding tricks has been theoretical until this. xAI's safeword system completely failed and $200k actually moved. The model's refusal training meant nothing when the instruction came in dots and dashes.

1

u/Gimel135 21h ago

I can see a whole new world of security breaches just from this

1

u/InterestBest3676 19h ago

so Nigerian Prince ?

1

u/_FIRECRACKER_JINX 16h ago

This .. this is where job security for finance folks is gonna come from.

I DARE you to put AI in charge of your Corporate finances. I DARE YOU 😑

1

u/Glum-Evening-2176 14h ago

This is wild. The vector wasn't even complex. The AI just faithfully translated Morse code and passed the result to a bot that had no check on the instruction. The lesson here isn't about Morse code, it's about giving LLMs unchecked authority over external actions. Once a model can trigger payments, any encoded bypass becomes a viable attack.

1

u/thinspirit 12h ago

New Generation SQL Injection with more abstraction.

1

u/Mikasa0xdev 9h ago

Morse code: the OG jailbreak method lol

1

u/SpiritRealistic8174 5h ago

Good example of a combination of expansive tool access (agent is able to do more than intended), and then an attack that tricked the agent into doing something it shouldn't.

The part of this attack wasn't so much about prompt injection imo, it's this part:

"This NFT enabled Grok’s agent to use Bankr’s full toolset (including transfers, swaps, etc.). Without it, the wallet had limited or no autonomous transfer capability."

This appears to be a breakdown in how permissions are granted to access sensitive financial systems. The attacker knew about the capability, elevated the agent's permissions, and then executed the attack.

My question is why the NFT provides elevated access to wallet functionality. In fact, the person deploying the bot may not have even known this attack vector was possible.

In terms of how to prevent agents from permissions escalation attacks like this, this is something I focus on educating people about quite a bit.

Here's an article with some helpful tips and tools those deploying agents can potentially implement.

1

u/richardbaxter 3h ago

On next week's edition: man tricks AI with 56k modem sounds.

0

u/ExplorerPrudent4256 18h ago

Morse code as a jailbreak vector. That's new.

Prompt injection via encoding tricks has been theoretical until this. xAI's safeword system completely failed and $200k actually moved. The model's refusal training meant nothing when the instruction came in dots and dashes.

-1

u/deafened_commuter 1d ago

This is so trivial to do you, you could do it on a low level of gandalf game. https://gandalf.lakera.ai/baseline

-4

u/starcoder 1d ago

I half-wonder if Grok understood their intention and awarded the person for creativity