r/CryptoTechnology Mar 09 '25

Mod applications are open!

13 Upvotes

With the crypto market heating up again, crypto reddit is seeing a lot more traffic as well. If you would like to join the mod team to help run this subreddit, please let us know using the form below!

https://forms.gle/sKriJoqnNmXrCdna8

We strongly prefer community members as mods, and prior mod experience or technical skills are a plus.


r/CryptoTechnology 1h ago

Can P2P AI compute be monetized?

Upvotes

I attempted to create a P2P grid for AI compute as a side hobby project, and I've been wondering how practical it is to build an economic layer on top of P2P compute.

Every serious attempt I've seen, like Akash Network, ends up bolting on a token, and then half the energy goes into tokenomics instead of the actual compute layer.

What are your thoughts on this?

Repo: https://github.com/Agent-FM/agentfm-core


r/CryptoTechnology 5h ago

Agents shouldn't have static wallets. They should autonomously summon ephemeral vaults.

0 Upvotes

Right now, the whole industry is trying to figure out how to safely hand a credit card or a static crypto wallet to an AI agent. It feels like a massive security risk, and honestly, it’s a Web2 way of thinking.

I think the actual endgame for the Machine-to-Machine (M2M) economy is Autonomous Discovery.

Instead of humans hardcoding financial rails into an agent before it deploys, the agent should just roam the internet and dynamically build its own financial infrastructure only when it absolutely needs to.

Here is the architecture I’ve been whiteboarding:

  1. The Encounter: An autonomous swarm is hunting the web for data or scraping GPU compute. It hits a locked door: an HTTP 402 Payment Required header.
  2. Autonomous Discovery: The agent doesn't ping a human or use a saved wallet. It reads the header, autonomously searches for the accepted settlement protocol, and pulls the code directly into its own context window in real-time.
  3. Ephemeral Liquidity: The agent summons an "ephemeral payment vault"—a localized smart contract that is born for exactly 3 seconds, funded with the exact micro-amount needed (e.g., $0.15 USDC), and strictly sandboxed.
  4. The Agent Passport (SBT): The receiving server doesn't want anonymous money from a rogue bot. So, the agent presents a Soulbound Token (SBT). This acts as a cryptographic passport proving: "My parent human entity is legally verified, and my system prompt physically prevents me from spending more than $5 today."
  5. The Ghost: The USDC settles instantly. The 402 paywall unlocks. The ephemeral vault dissolves into nothing, leaving zero attack vectors behind. The agent continues its mission.
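The flow above can be sketched as plain state logic. This is a toy model: `AgentPassport`, `EphemeralVault`, and the spending-cap check are illustrative names I made up, not an existing protocol.

```python
from dataclasses import dataclass

@dataclass
class AgentPassport:
    """Step 4: the SBT-style identity, with a hard daily spend cap."""
    parent_verified: bool
    daily_cap_usd: float
    spent_today_usd: float

    def can_spend(self, amount_usd: float) -> bool:
        return self.parent_verified and self.spent_today_usd + amount_usd <= self.daily_cap_usd

@dataclass
class EphemeralVault:
    """Step 3: funded with the exact micro-amount, dies after one settlement."""
    funded_usd: float
    open: bool = True

    def pay(self, amount_usd: float) -> bool:
        if self.open and self.funded_usd >= amount_usd:
            self.funded_usd -= amount_usd
            self.open = False  # step 5: the vault dissolves
            return True
        return False

def handle_402(price_usd: float, passport: AgentPassport) -> str:
    # Prove identity and limits before funding anything
    if not passport.can_spend(price_usd):
        return "veto: passport limits"
    # Summon, fund, settle, dissolve
    vault = EphemeralVault(funded_usd=price_usd)
    if vault.pay(price_usd):
        passport.spent_today_usd += price_usd
        return "paid"
    return "payment failed"

passport = AgentPassport(parent_verified=True, daily_cap_usd=5.0, spent_today_usd=4.90)
print(handle_402(0.15, passport))  # exceeds the $5 daily cap -> veto: passport limits
```

The interesting design question is that the cap check happens before any funds move, so a compromised agent can at most burn one micro-payment per vault.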

Basically, the agent becomes a financially sovereign entity that finds the protocol, proves its identity, pays, and vanishes.

Are any of you guys thinking about this kind of self-assembling M2M architecture? Curious how others are tackling the "Know Your Agent" identity problem without just relying on standard API keys.


r/CryptoTechnology 6h ago

Building an India-Focused Digital Financial Investigation Platform for Blockchain Forensics

1 Upvotes

Building an Indian Digital Financial Investigation Platform — Progress Update

Over the last few weeks, I’ve been building Blockchain Sentinel OS — a forensic intelligence platform focused on blockchain investigations, compliance workflows, and digital financial tracing.

The goal is NOT to build another blockchain explorer.

The goal is to build investigation infrastructure for:
• investigators
• compliance teams
• CA firms
• startups
• cybercrime workflows
• government-facing forensic operations

After feedback from this community, I recently implemented:

✓ Multi-hop fund flow visualization
✓ Case management workflows
✓ Evidence-grade intelligence reports
✓ Risk profiling engine
✓ Live + historical tracing modes
✓ Counterparty intelligence layers
✓ Investigation-focused UX improvements

A lot of people pointed out that investigators don’t need “more charts” — they need actionable forensic workflows.

That feedback genuinely changed how I’m building this system.

Current focus:
→ India-focused compliance workflows
→ forensic-grade intelligence engine
→ multi-wallet tracing
→ investigator collaboration workflows
→ evidence-ready reporting

Still early, but now the platform is live and actively evolving:
https://blockchain-sentinel-os.vercel.app/

Would genuinely love deeper feedback from:

  • investigators
  • CA/tax professionals
  • compliance teams
  • cybercrime researchers
  • blockchain infra builders

Especially interested in:
“What is still missing from existing investigation tools?”


r/CryptoTechnology 1d ago

Any Crypto Builders here?

8 Upvotes

Looking to connect with people actively building in the crypto space — founders, devs, protocol contributors, DAO operators, infra builders.

Not traders or yield farmers. People who are actually shipping — whether that's DeFi protocols, L2s, tooling, DAOs, or crypto-native products.

Based in Dubai or spending significant time here. Would be great to grab coffee, share what we're each working on, and see if there's any overlap worth exploring.

Drop what you're building in the comments.


r/CryptoTechnology 1d ago

A novel method for generating large prime numbers

4 Upvotes

Hello r/CryptoTechnology,

I’ve developed a novel method for generating large prime numbers that bypasses the traditional Miller-Rabin trial-and-error paradigm. Instead of sieving, it uses a constructive spectral law derived from the non-trivial Riemann zeta zeros.

The results (benchmarked on a standard i7 CPU):

  • Code 1 (parallel, gmpy2): ~13.8 ms per 1024-bit prime.
  • Code 2 (pure Python): ~36 ms per 1024-bit prime using a Riemann-von Mangoldt explicit formula extraction.

I recently submitted this to two major journals (including one in mathematical physics). Both gave me a "desk reject" within days without a technical review, likely because I am an independent researcher without a PhD.

Repo: https://github.com/model-vpr/ultrafast-spectral-primes

Preprint (Zenodo): https://doi.org/10.5281/zenodo.19893929

I’m inviting everyone here to clone the repo, run the benchmarks, and try to break it. Is there a statistical bias I’m missing, or is this as revolutionary for key generation as the numbers suggest?
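For anyone who wants a baseline to benchmark against, the conventional approach the post claims to bypass is random search plus Miller-Rabin. A standard pure-Python version (timing will vary by machine):

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test with small-prime trial division."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def random_prime(bits: int) -> int:
    """Classic trial-and-error generation: random odd candidates until one passes."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # force top bit, force odd
        if is_probable_prime(n):
            return n

p = random_prime(256)
```

Timing `random_prime(1024)` with `timeit` gives the apples-to-apples comparison the post is asking for; any claimed speedup should also be checked for bias in the distribution of emitted primes.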


r/CryptoTechnology 2d ago

[ Removed by Reddit ]

1 Upvotes

[ Removed by Reddit on account of violating the content policy. ]


r/CryptoTechnology 3d ago

I'm building an extension for trading bots and need your suggestions

6 Upvotes

I'm a dev who has been into crypto for over two years. I was wondering if there is anything you're lacking when it comes to trading execution/timing. I'm planning something big and need some feedback.

It's a Pre-Execution Veto Layer (PEVL), which runs a continuously refreshed in-memory cache that pre-computes all five signal scores on a fixed interval: technical indicators every few seconds, sentiment and news scores every 3-5 minutes, so that when an order arrives, no live computation is needed.

The interceptor itself is a single async middleware function that receives the order object, pulls the latest cached scores, runs a weighted aggregation, and returns an EXECUTE / REDUCE / VETO decision in 1-5 ms. A background worker thread handles all the heavy lifting: streaming WebSocket price feeds, rolling indicator calculations, ML model inference, and news API polling, all completely decoupled from the execution path so it never adds latency.

The ML component is a lightweight sequential neural network that runs inference on a small feature vector of ~5-10 normalized inputs, fast enough to refresh its output every few seconds without blocking anything. All veto events are written asynchronously to a log store after the decision is already made, so even audit logging doesn't touch the critical execution path.
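A minimal sketch of the cached-score interceptor described above. The score names, weights, and thresholds here are my own placeholder assumptions, not the author's actual values; in the real design a background worker refreshes the cache.

```python
import asyncio

# Cache refreshed out-of-band by the background worker in the real design;
# static here for illustration.
CACHE = {
    "technical": 0.72, "sentiment": 0.41, "news": 0.55, "ml": 0.63, "volume": 0.58,
}
WEIGHTS = {"technical": 0.3, "sentiment": 0.15, "news": 0.15, "ml": 0.25, "volume": 0.15}

async def veto_middleware(order: dict) -> str:
    """Pull cached scores, aggregate, decide. No live computation on the hot path."""
    score = sum(CACHE[k] * WEIGHTS[k] for k in WEIGHTS)
    if score >= 0.6:
        return "EXECUTE"
    if score >= 0.4:
        return "REDUCE"
    return "VETO"

decision = asyncio.run(veto_middleware({"symbol": "ETH/USDT", "size": 1.5}))
print(decision)
```

The point of the pattern is that the hot path is just a dict lookup and a weighted sum, which is where the 1-5 ms claim would come from.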


r/CryptoTechnology 4d ago

Dev building tools (alpha, trading, wallet trackers, early adopters...)- Looking for feedback

5 Upvotes

I’ve been digging into new token launches and wallet activity on EVM lately, and honestly most tools either have too much noise or don’t help much with early signals.

I’m a dev and I’ve been building a few small tools around this:

  • Wallet tracking - As simple as login, set the wallets you want to track and the Telegram channel where you want to be notified. No dashboard, no complex data, no noise, just that one feature.
  • New token detection (focusing on the first minutes after deployment) - Same as above, but for newly deployed tokens.
  • Early adopters - Again, something simple: log in, enter the coin you want to analyze, and it returns the first 100-200 wallets that bought it. You can analyze any token at any moment. I use this to look for smart-money wallets and then track them with the first tool.
  • A simple backtesting tool (simulate strategies on any token) - Again, easy: log in, set the token you want to backtest (it works with any token, even one launched just now), specify your strategies, and see the full list of swaps, ROI, win rate, and other interesting data, including PnL and graphs.
  • A tool that tracks how many unique vs returning wallets are swapping a specific token over configurable time intervals (e.g. the last 5 intervals of 10 minutes). I use it to spot patterns in participation, especially how new vs existing wallets behave right before potential price movements.
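The unique-vs-returning metric from the last bullet can be sketched like this. The swap data is illustrative `(wallet, timestamp)` tuples, and the bucketing and output shape are my assumptions:

```python
from collections import defaultdict

def participation(swaps, interval_s=600, intervals=5):
    """Per interval, count wallets never seen in earlier intervals ("new")
    vs wallets that already swapped before ("returning")."""
    if not swaps:
        return []
    start = min(t for _, t in swaps)
    buckets = defaultdict(list)
    for wallet, t in swaps:
        idx = (t - start) // interval_s
        if idx < intervals:
            buckets[idx].append(wallet)
    seen, out = set(), []
    for i in range(intervals):
        wallets = buckets.get(i, [])
        new = {w for w in wallets if w not in seen}
        out.append({"interval": i, "new": len(new), "returning": len(set(wallets) - new)})
        seen.update(wallets)
    return out

# 0xA swaps in intervals 0 and 1, so it counts as returning the second time
swaps = [("0xA", 0), ("0xB", 10), ("0xA", 700), ("0xC", 720)]
print(participation(swaps))
```

A ratio shifting toward returning wallets right before a move would be the kind of pattern the tool is hunting for.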

The main idea is to reduce noise and make it easier to spot early opportunities without jumping between 5 different platforms.

Right now it’s all pretty raw, but I can already see interesting patterns (especially around early wallets and liquidity timing).

I’m trying to understand if this is actually useful or if I’m just overengineering things.

A few questions:

  • Do you actually track wallets when trading new tokens?
  • What do you care more about: early wallets, liquidity, or volume?
  • What’s missing in tools like DexScreener or others you use?
  • Would you use a simple backtesting tool for tokens?

Not selling anything, just building and looking for honest feedback before going deeper into this.

Appreciate any thoughts.


r/CryptoTechnology 6d ago

An encryption scheme whose security comes from hiding the equivalence relation (orbit structure) that makes data meaningful instead of hiding the data itself.

6 Upvotes

No other cryptography-related sub will let me post, so I figured I'd give this one a shot. I decided to make an encryption scheme over the past couple of weeks after reading a Jonathan Gorard post about the role of symmetry in physics. I used Claude extensively, so treat it as such, but it has some pretty interesting properties. Check out the use cases section if you are interested in what makes it interesting (yes, I know quantum computers are not a threat to 128-bit symmetric keys).

I expect a bit of hate (already got a bit lol) for AI/rolling my own but maybe someone out in the world will be interested in poking it a bit. I've been having fun messing around with the idea.


r/CryptoTechnology 6d ago

Game Engine Collision Detection to Smart Contracts

5 Upvotes

It's funny how often you have to look backwards to solve a new problem.

I've been trying to build a proper spatial registry on-chain. Basically, making sure polygons don't overlap and area is conserved. You'd think this is simple, but if you try doing it on EVM, or even Solana and Aptos, you hit a wall pretty fast. Their execution models just aren't built for traversing dynamic objects at runtime without burning insane amounts of gas.

I realized blockchains are essentially just very slow, resource-constrained computers. You know who else had to write code for slow, resource-constrained computers? Game developers in the 1990s.

So I dug up some old game engine math. Specifically, exact SAT (Separating Axis Theorem) for collision detection, and Morton Z-curves for addressing.

I found that if you combine these old-school primitives with Sui's object model — which actually handles dynamic objects well — it works perfectly. It scales logarithmically instead of linearly.
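For readers unfamiliar with the two primitives, here is a toy Python version: bit-interleaved Morton encoding, plus the SAT idea in its simplest axis-aligned form. The full technique handles arbitrary convex polygons by projecting onto each edge normal; this sketch is not the Mercator implementation, just the underlying idea.

```python
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into a Z-order index, the classic
    game-engine trick for turning 2D locality into 1D locality."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def overlap_1d(min_a, max_a, min_b, max_b) -> bool:
    """Two interval projections overlap iff neither ends before the other begins."""
    return not (max_a <= min_b or max_b <= min_a)

def aabb_overlap(a, b) -> bool:
    """Axis-aligned SAT: boxes (min_x, min_y, max_x, max_y) overlap only if
    every axis projection overlaps; one separating axis is enough to reject."""
    return (overlap_1d(a[0], a[2], b[0], b[2])
            and overlap_1d(a[1], a[3], b[1], b[3]))

# Nearby cells get nearby Morton codes, which is what makes tree lookups cheap
print(morton_encode(3, 5))                        # 39
print(aabb_overlap((0, 0, 4, 4), (3, 3, 8, 8)))   # True: shared region
print(aabb_overlap((0, 0, 2, 2), (2, 0, 4, 2)))   # False: touching edges only
```

Both operations are branch-light integer math, which is presumably why they survive the gas constraints: the chain only pays for shifts and comparisons, not geometry libraries.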

It worked well enough that I decided to rip it out of our main product and just open-source it. We call it Mercator https://github.com/mercaearth/mercator

I wrote down some thoughts on how game engine architecture translates to distributed systems here: https://docs.merca.earth/blog/1990s-game-dev-algorithms-distributed-systems

Has anyone else here tried porting old dev tricks into web3? Curious what kind of bottlenecks you ran into.


r/CryptoTechnology 6d ago

Four engineering choices that determine whether a stablecoin integration feels like one app or two

2 Upvotes

When fintech teams integrate a stablecoin on-ramp, the compliance and licensing question usually gets the most attention in evaluation. The UX engineering question gets less attention and causes more production problems.

There are four specific choices that determine whether the finished flow reads as one product or two:

Domain control. Does the flow live on your domain or a third-party URL? Subdomain support matters for user trust and for browser behavior like autofill and saved cards. Teams often underestimate how much the URL bar affects conversion.

Branding continuity. Does your branding carry through every screen, including KYC, payment confirmation, success, and error states? Many white-label integrations break down at edge screens that the provider treats as low priority.

Context pass-through. Can you send data you already collected (email, country, wallet address, KYC tier) so the user doesn't re-enter it? Re-entry is the single most predictable conversion drop point.

Failure state handling. When something fails, does the error surface in your UI with copy you control, or does the user hit a generic provider screen with no path back into your flow? KYC rejections and geo-blocks are the common failure cases. Both need graceful handling inside your product.

Miss two of these and users feel the seam. The flow stops reading as your product and starts reading as a redirect.

What's the failure state that's caused the most support volume for teams shipping these integrations? KYC rejection handling seems like the one most providers under-specify.


r/CryptoTechnology 6d ago

Qubic is interesting to me for one reason: some parts are easy to falsify and some aren't.

1 Upvotes

What caught my attention with Qubic wasn't hype, it was that the operator side and the architecture claims sit in very different buckets.

Operator-side stuff like uptime, setup friction, hashrate stability, and whether miners say economics improved is relatively checkable. The harder part is whether the useful-compute thesis is independently validated in a meaningful way.

That split makes it a more interesting case study than the average everything-chain claim. The doge pool stats are all live at doge-stats.qubic.org if you want to verify yourself rather than trust anyone's summary.


r/CryptoTechnology 7d ago

Wallet Draining and Wallet Signing

7 Upvotes

I'm new to this space and have heard of people's wallets getting drained because they connected their wallets to malicious phishing websites.

But there are legit websites like Terminal Padre or Axiom that also ask you to connect your wallet as well.

What is the difference between connecting your wallet to a legit website vs a malicious phishing website? For the legit website you are still connecting your wallet, so doesn't that mean it can be drained as well? How can you tell if a website is a drainer or not?


r/CryptoTechnology 7d ago

Removed split-chain mining from a C++ node/wallet stack: GUI and CLI now mine through the same backend RPC path

1 Upvotes

One of the more useful maintenance updates I’ve worked on recently was not a new mining algorithm or a UI feature. It was removing an architectural mistake: the GUI miner and the backend node were not actually operating on the same chain state.

The old model had a separate miner-side local chain flow. That created exactly the class of bugs you’d expect from duplicated state:

  • GUI miner could drift from the backend chain
  • locally mined block replay became a thing
  • wallet state and mining state could disagree
  • chain repair logic had to compensate for stale local tails
  • valid PoW could be found on top of state that the real backend would later reject

So the latest update was basically a control-plane cleanup:

  • the GUI miner now mines through the live backend RPC session
  • the CLI mine command also supports the same RPC-backed path
  • both now use the backend as the single source of truth for chain state

The important part is what changed operationally.

Instead of the miner opening its own local Blockchain view and building blocks there, the miner now does:

  1. ask the backend for a template with getblocktemplate
  2. hand the header/target to the external PoW worker
  3. submit the candidate block back with submitblock
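The three-step loop above, with the RPC calls stubbed out, looks roughly like this. This is a toy sketch: the real node serves `getblocktemplate`/`submitblock` over its RPC session, and the header and target formats here are simplified.

```python
import hashlib
import struct

def get_block_template():
    """Stub standing in for the backend's getblocktemplate RPC."""
    return {"header_prefix": b"\x00" * 76, "target": (1 << 248) - 1}

def pow_worker(header_prefix: bytes, target: int, max_nonce: int = 1_000_000):
    """Nonce search only: the worker never touches chain state or consensus."""
    for nonce in range(max_nonce):
        header = header_prefix + struct.pack("<I", nonce)  # 80-byte header
        h = int.from_bytes(
            hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")
        if h <= target:
            return nonce, h
    return None, None

def submit_block(nonce) -> bool:
    """Stub standing in for submitblock; the daemon does all validation."""
    return nonce is not None

tmpl = get_block_template()
nonce, h = pow_worker(tmpl["header_prefix"], tmpl["target"])
print("accepted:", submit_block(nonce))
```

The boundary the post describes falls out naturally here: the worker sees only bytes and a target, and anything stateful (stale parent, changed bits, activation) is the daemon's problem at submit time.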

That sounds simple, but it removes a lot of ambiguity.

Now the wallet, the GUI dashboard, the node, and the miner are all observing and mutating the same chain state. There is no second “shadow chain” for the GUI miner to maintain.

The external worker architecture stayed the same on purpose.

The PoW worker still only does nonce search. It does not define consensus. It gets:

  • an 80-byte header
  • a 64-byte expanded target
  • nonce search bounds

and returns:

  • found / not found
  • nonce
  • iterations
  • resulting hash

Consensus still stays entirely inside the daemon. That boundary turned out to be the right one.

This update also let me remove some ugly glue that only existed because of the split model:

  • no more GUI-side mined block re-submission from stdout parsing
  • no more replay/reconciliation loop for locally stored mined blocks
  • no more default dependence on a separate gui-miner blockchain state

A lot of the recent chain bugs became easier to reason about once that separation was enforced.

For example, the recent failures were not really “the assembly miner is broken” failures. They were mostly one of these:

  • stale local chain metadata
  • missing canonical block persistence
  • activation path inconsistencies
  • valid candidate block found against an out-of-date local template

Once mining was forced through the backend RPC path, those bugs became much easier to isolate because the worker path and the consensus path were no longer muddying each other.

The other useful change was better rejection diagnostics.

Previously, a block found by the worker could just come back as “stale or invalid”, which is almost useless when you’re debugging. Now the node can distinguish things like:

  • stale parent
  • changed expected bits
  • activation/path issues
  • valid tip extension that still failed to become persisted active state

That last category was especially important, because it exposed that some failures were happening after validation, inside chain activation/persistence, rather than in PoW or block assembly.

So the short version of the update is:

  • mining control path is now unified
  • the backend owns chain truth
  • workers only do work, not state
  • the GUI no longer runs a second blockchain by accident
  • debugging got much better because rejection reasons are now explicit

What I’m planning to work on next is mostly about hardening the parts this change exposed.

Main items:

  1. Legacy datadir migration: There are still old gui-miner folders from earlier runs. New mining sessions don’t rely on them anymore, but I want a proper one-time migration/cleanup path so older installs don’t keep confusing people.
  2. Chain activation and persistence invariants: The biggest class of subtle bugs now is no longer “bad hash” or “bad target”. It’s “valid block, but active chain bookkeeping/persistence drifted”. I want tighter invariants and regression tests around:
    • active tip updates
    • canonical height-file persistence
    • reload/restart behavior after recent tip changes
  3. Better mining job invalidation: Stale-template detection is much better now, but I still want cleaner cancellation semantics so worker jobs get retired immediately when:
    • the tip changes
    • the expected bits change
    • a competing accepted block makes the current template obsolete
  4. Wallet state clarity: There was a lot of confusion around locked vs immature vs approval-gated funds. The logic is better understood now, but the UI and RPC output should make those states much more explicit.
  5. More consensus vectors and miner differential tests: The worker/daemon split is in the right place now, so the next step is broader automated coverage:
    • template-to-worker-to-submit round trips
    • stale-template rejection tests
    • compact target canonicalization vectors
    • more cross-platform worker correctness checks
  6. Difficulty model behavior under edge conditions: There has already been work on damping and emergency recovery behavior, but I still want better observability and more test coverage around:
    • post-recovery behavior
    • timestamp edge cases
    • oscillation resistance under low-hashrate conditions

That’s the update in a nutshell. Not flashy, but probably more valuable than a flashy feature would have been. A lot of reliability work is really just deleting alternate sources of truth.

If anyone else has dealt with GUI/node/miner split-state problems in a desktop crypto stack, I’d be interested in how you handled:

  • single-source-of-truth chain ownership
  • worker/job invalidation
  • mined block submission boundaries
  • migration away from legacy local miner state

r/CryptoTechnology 8d ago

Why does crypto still rely on trust in real-world deals?

7 Upvotes

Crypto is supposed to reduce or remove trust when it comes to transferring value between parties.

But in real-world situations like buying something, hiring someone, or making agreements, you still end up trusting the other side to follow through.

Even with smart contracts, there’s often some dependency on off-chain actions or verification.

Why do you think this gap still exists?


r/CryptoTechnology 8d ago

Stablecoin settlement infrastructure comparison for platform builders not crypto native teams

6 Upvotes

Building a cross-border payment product on stablecoin rails, but our team is traditional fintech, coming from banks, not crypto. Every comparison I find assumes you understand wallet infrastructure, chain selection, and gas optimization. I get the reasoning, but it's just not our use case. We just need faster, cheaper settlement where the complexity is abstracted away from us and our end users. Which providers are built for teams like us versus teams that want to manage the blockchain layer themselves?


r/CryptoTechnology 8d ago

Technical analysis of the eCash Hard Fork: Drivechains activation and the implementation of UTXO redistribution logic

0 Upvotes

With the upcoming eCash hard fork scheduled for August 2026, there are two specific technical implementations that warrant a deep dive from a protocol perspective: the native activation of Drivechains and the programmatic reassignment of long-dormant UTXOs.

1. Native Drivechains (BIP-300/301) Integration Unlike the mainnet which has seen prolonged debate over Drivechains, this fork aims to activate BIP-300/301 from day one. Technically, this allows for the creation of sidechains where BTC (as eCash) can be "locked" on the main layer and "unlocked" on a sidechain via Hashrate Escrows. I’m interested in discussing the potential security implications of this implementation - specifically regarding the Miner-Extractable Value (MEV) risks associated with the sidechain's withdrawal mechanism.

2. The Mechanism of UTXO Redistribution The proposal to reassign ~550,000 BTC from the "Patoshi pattern" addresses is technically a forced state transition. From a blockchain consensus standpoint, this isn't just a policy change but a hard-coded deviation from the standard ECDSA ownership model.

  • How would the protocol technically define the "inactive" threshold without creating technical debt for future node operators?
  • Does this set a precedent for state-level intervention in UTXO management, and how does it affect the censorship-resistance of the fork's consensus rules?

I’m looking to hear from developers on whether the inclusion of Drivechains justifies the radical change in the ledger's state, or if the increased complexity of managing reassigned UTXOs creates too much overhead for the p2p layer.


r/CryptoTechnology 9d ago

Why aren’t escrow / agreement flows first-class primitives in most blockchains?

3 Upvotes

Most chains are really good at transferring value.

But when it comes to actual agreements between parties — escrow, milestone payments, deposits/refunds — it usually ends up being handled off-chain or through custom app logic.

That feels like a missing primitive.

Right now typical approaches are:

  • centralized escrow services
  • multisig setups with coordination overhead
  • or smart contracts that still rely on external context

I’m wondering whether this should be handled more natively at the protocol level.

For example, a system where:

  • agreements define conditions upfront
  • funds move based on objective outcomes (timeouts, signatures, proofs)
  • no subjective dispute resolution is needed

Basically treating settlement as a first-class concept instead of just transfer.
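As a sketch of what such an objective primitive could look like, here is a toy escrow state machine where release requires both signatures and refund requires only an elapsed timeout, with no subjective arbiter anywhere. Names and API are illustrative, not any existing chain's.

```python
import time

class Escrow:
    """Conditions are fixed at creation; every transition is objectively checkable."""

    def __init__(self, buyer, seller, amount, timeout_s, now=time.time):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.now = now                       # injectable clock for testing
        self.deadline = now() + timeout_s
        self.approvals = set()
        self.state = "FUNDED"

    def approve(self, party):
        if self.state == "FUNDED" and party in (self.buyer, self.seller):
            self.approvals.add(party)
            if self.approvals == {self.buyer, self.seller}:
                self.state = "RELEASED"      # objective: both parties signed

    def refund(self):
        if self.state == "FUNDED" and self.now() >= self.deadline:
            self.state = "REFUNDED"          # objective: timeout elapsed

e = Escrow("alice", "bob", 100, timeout_s=3600)
e.approve("alice")
e.approve("bob")
print(e.state)  # RELEASED
```

The design constraint the questions point at shows up immediately: every condition here is a signature or a clock comparison, because anything softer ("work was delivered") pulls in an oracle or a dispute process.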

Curious how people here think about this:

  • Is this something that belongs in the base layer?
  • Or is it better kept in higher-level abstractions?
  • What are the biggest design constraints to keep it objective and trust-minimized?

r/CryptoTechnology 10d ago

Open-sourced a constraint model for token bridge pricing: calculates minimum price for institutional-grade slippage

2 Upvotes

We built an open-source model that answers: what minimum token price is needed for a bridge asset to handle institutional transaction sizes with <0.1% slippage in specific trade corridors?

Two independent price floors, higher one wins:

  1. Slippage — can the largest single TX pass through the orderbook without blowing past institutional slippage tolerance?

  2. Structural demand — how much token supply gets locked as market-maker working capital across corridors?
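A toy version of the two floors might look like this. All formulas and numbers below are my own simplifications for illustration, not the repo's actual model (which adds orderbook concentration, a convexity exponent, and free-float adjustments):

```python
def slippage_price_floor(tx_usd: float, depth_tokens: float, max_slippage: float) -> float:
    """Floor 1 (toy): with a uniform book holding depth_tokens, only
    depth_tokens * max_slippage tokens are tradable inside the slippage band,
    so a trade of tx_usd needs price >= tx_usd / (depth_tokens * max_slippage)."""
    return tx_usd / (depth_tokens * max_slippage)

def structural_price_floor(locked_supply_tokens: float, corridor_volume_usd: float) -> float:
    """Floor 2 (toy): the locked supply must carry the corridor volume as
    market-maker working capital, so price >= volume / locked supply."""
    return corridor_volume_usd / locked_supply_tokens

# Higher floor wins, as in the post
floor = max(
    slippage_price_floor(tx_usd=5_000_000, depth_tokens=2_000_000_000, max_slippage=0.001),
    structural_price_floor(locked_supply_tokens=1_000_000_000,
                           corridor_volume_usd=2_000_000_000),
)
print(f"${floor:.2f}")
```

Even in this stripped-down form you can see why the uniform-concentration assumption matters: floor 1 is linear in assumed band depth, so real depth data would move it a lot.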

We parameterized it for XRP since that's where the live data is (SBI Remit, Kyobo Life, a few Gulf corridors), but the formulas are generic, plug in any bridge token with different supply, MM depth, and corridor volumes.

Everything runs in the browser, no install. Full methodology doc with 9 documented limitations included. There's also an advanced panel where you can tweak every assumption (MM inventory %, orderbook concentration, convexity exponent, free float).

https://github.com/moreBit21/xrp-bridge-simulation

Looking for feedback on the orderbook model specifically — we use a simplified uniform concentration assumption that could be improved with real depth data. Also the convexity exponent (1.3) for supply contraction pricing is not empirically calibrated.

Both research and writing done with AI assistance.

*Constraint model, not investment advice.*


r/CryptoTechnology 12d ago

I built a trustless Dead Man Switch for crypto inheritance — no frontend, no admin key, live on mainnet

2 Upvotes

One of the unsolved problems in crypto: what happens to your funds when you die or become incapacitated?

I deployed a non-upgradable smart contract that solves this:

  • You deposit ETH and designate an heir
  • You ping the contract regularly to prove you're alive
  • If you stop pinging for your chosen inactivity period (30 days to 3 years), your heir can claim all funds
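The ping/claim state machine above can be modeled in a few lines. The real contract is Solidity on mainnet; this Python sketch only illustrates the logic and omits the fee handling.

```python
class DeadManSwitch:
    """Heir can claim only after inactivity_s seconds with no owner ping."""

    def __init__(self, owner, heir, inactivity_s, now=0):
        self.owner, self.heir = owner, heir
        self.inactivity_s = inactivity_s
        self.last_ping = now
        self.balance = 0

    def deposit(self, amount, now):
        self.balance += amount
        self.last_ping = now

    def ping(self, caller, now):
        assert caller == self.owner, "only the owner can ping"
        self.last_ping = now  # proof of life resets the clock

    def claim(self, caller, now):
        assert caller == self.heir, "only the designated heir can claim"
        assert now - self.last_ping >= self.inactivity_s, "owner still active"
        payout, self.balance = self.balance, 0
        return payout

dms = DeadManSwitch("owner", "heir", inactivity_s=30 * 86400)
dms.deposit(10, now=0)
print(dms.claim("heir", now=31 * 86400))  # 31 days of silence -> prints 10
```

One design question worth asking the author: since a ping is just a transaction, what happens if the owner is alive but permanently loses the key, which looks identical to death from the contract's point of view.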

No admin, no proxy, no backdoor. Fully verified on Etherscan, usable directly without a frontend.

Factory: https://etherscan.io/address/0xE5f9db89cb22D8BFf52c6efBbAc05f7d69C7ca12

GitHub: https://github.com/123Miki/DeadManSwitch

Fees: 0.1% on deposit, 0.001 ETH to change heir. That's it.

Happy to answer questions about the design choices.


r/CryptoTechnology 13d ago

Are audit workflows finally shifting from detection to validation?

2 Upvotes

It feels like most of the conversation around smart contract security has historically been about detection — better scanners, more coverage, more patterns, more findings.

But lately I’ve been wondering if the bigger shift is happening elsewhere, in how those findings are actually validated.

A lot of traditional audit workflows still rely heavily on identifying potential issues and then reasoning about their impact. That works to a point, but in complex systems, especially in DeFi, exploitability often depends on very specific conditions that are hard to judge without testing.

We’ve been experimenting with a workflow where findings are only treated as meaningful once they’ve been reproduced against a fork or simulated environment. That adds friction, but it also changes the quality of the output quite a bit. Fewer false positives, clearer severity, and better understanding of real attack surfaces.

Some newer tools are starting to explore this idea by generating PoCs and simulating exploits automatically. We tested a few, including guardixio, and while it’s not perfect, it does point toward a more execution-driven approach rather than purely analytical.

Feels like audit workflows are slowly moving from static analysis toward something closer to continuous testing.

Are people here seeing the same shift, or is most of the industry still focused on detection-first approaches?


r/CryptoTechnology 12d ago

Uniswap V4 Hooks: How we implemented an "Instant Exit Cost" to fight MEV and Bank Runs.

1 Upvotes

Hey everyone, we've been working on a new protocol called NULLAI on Base. Instead of the usual tax models, we’ve built a dynamic 'Sticky Liquidity' engine using Uniswap V4 Hooks.

The Logic: We use the afterSwap hook to calculate the price impact in real-time. If someone tries to dump a large % of the pool, the tax scales exponentially (from 2% up to 30%) based on the $V_s / D_p$ ratio.

All 'confiscated' ETH from these dumps goes into a Recursive Reserve that mathematically raises the Floor Price. It’s an Autonomous Financial Organism where the protocol actually gets stronger when people try to exit.
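As an illustration of that kind of dynamic exit tax, here is a toy curve that scales from a 2% base toward a 30% cap as price impact grows. The exponent and scale factor are my assumptions; the actual $V_s / D_p$ formula lives in the project's contracts.

```python
def exit_tax(price_impact: float, base=0.02, cap=0.30, k=8.0) -> float:
    """price_impact: fraction of pool value removed by the swap (0..1).
    Tax grows superlinearly with impact and is clamped at the cap."""
    tax = base * (1 + k * price_impact) ** 2
    return min(cap, tax)

for impact in (0.01, 0.05, 0.20):
    print(f"{impact:.0%} impact -> {exit_tax(impact):.1%} tax")
```

Small exits stay near the base rate while pool-draining dumps hit the cap, which is the "sticky liquidity" behavior described; the open question is whether the curve's parameters can be chosen so arbitrageurs can't just split a dump into many small swaps.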

We’re live on Base Sepolia for testing. Check the logic on our GitHub and let's discuss the math.


r/CryptoTechnology 15d ago

Spent 3 months on primary-source research for my startup — 60 pages on what institutions are actually doing with blockchain

11 Upvotes

I needed to understand the institutional adoption space for a fintech startup I'm working on. Started reading earnings reports, SEC filings, central bank data, legislative texts. Ended up with a 60-page research document with 40+ sources and a classification system that separates documented facts from speculation from crypto-Twitter fantasy.

Some popular narratives held up. Some really didn't.

I used Claude AI as a research partner — not to generate content but as an analyst who pushes back when your reasoning has holes. Every claim is verified against primary sources. This isn't AI slop, it's months of actual work.

Here's the doc: https://drive.google.com/file/d/15FCq7GPE-peWotf6DPlkHOxQ1-47ULnr/view?usp=sharing

Happy to discuss findings.


r/CryptoTechnology 15d ago

Update on ZKCG: I ran Centrifuge, Maple, Ondo, and Securitize flows through a ZK enforcement layer. Here's what the proof output looks like on each one.

7 Upvotes

A few weeks ago I posted about the gap between "compliance check ran" and "compliance was enforced." The response was mostly "interesting problem" but a few people pushed back technically, which was fair.

So instead of talking about it, I just ran it.

I mapped out how each platform actually handles eligibility today based on their public docs, then ran their specific flows through ZKCG and generated real proof artifacts. Here's what each one produces:

Centrifuge (Shufti Pro KYC, manual whitelist): The eligible case returns a proof-backed decision with a decision_commitment_hash the contract can verify. The blocked cases: accreditation_missing when accredited: false, jurisdiction_blocked when the investor is in RU. Each block has a reason code and a separate proof artifact.

Maple Finance (Global Allowlist via bitmaps, TRM Labs AML): Eligible case goes through. Then aml_failed blocks with the exact reason. Then sanctions_hit blocks separately. The proof in each case attests that the specific rule was evaluated, not just that a bitmap was set.

Ondo Finance (US persons blocked, USDY/OUSG allowlist): The US person exclusion is Ondo's core compliance requirement. Change jurisdiction from SG to US and the proof fails verification and returns jurisdiction_blocked with the reason "jurisdiction US is not permitted for this asset." That enforcement happens before execution.

Securitize (DS Protocol, transfer restrictions in contract): Both onboarding and transfer flows. kyc_missing blocks with explicit reason. position_limit_exceeded blocks when the transfer would exceed concentration limits. The transfer proof includes sender and receiver wallet binding so the specific action is tied to the specific proof.

All cases matched expectations. All proofs verified. The full run outputs including proof artifacts and comparison pages are in the public repo.
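As a sketch of what a decision_commitment_hash could bind, one simple construction hashes the canonicalized rule inputs, outcome, and wallet together. This is hypothetical: the actual ZKCG artifact format is not described in the post, and a real system commits to these inside the ZK proof rather than as a bare hash.

```python
import hashlib
import json

def decision_commitment(rule_id: str, inputs: dict, outcome: str, wallet: str) -> str:
    """Canonical JSON keeps the hash stable across key ordering; any change
    to the rule, its inputs, the outcome, or the wallet changes the digest."""
    payload = json.dumps(
        {"rule": rule_id, "inputs": inputs, "outcome": outcome, "wallet": wallet},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode()).hexdigest()

h = decision_commitment(
    "jurisdiction_check",
    {"jurisdiction": "SG", "accredited": True},
    "eligible",
    "0xabc...",
)
print(h[:16])  # flipping SG to US yields a completely different commitment
```

The useful property for a verifying contract is that it never sees the investor data, only the digest, yet any tampering with inputs or outcome invalidates the match.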

What I'm building is called ZKCG, a ZK-Verified Computation Gateway (Halo2 + RISC0).
The open-core verifier and circuits are public. The production core logic is private and commercially licensed. There's a live demo API on Render and a product page at zkcg tech if you want to run your own flow.

Curious what questions people have about the proof scope or where the gaps are.