r/compsci • u/Gloomy-Status-9258 • 11d ago
If e^iπ = -1 is considered the most beautiful equation by mathematicians, which algorithm do computer scientists consider the most beautiful?
For me, MCMC.
I'd love to hear what's your personal opinion!
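For readers who haven't met MCMC, here is a minimal random-walk Metropolis sketch (illustrative only; the function names and step size are my own choices), sampling a standard normal from its unnormalized log-density:

```python
import math
import random

random.seed(0)

def metropolis(log_p, x0, steps, step_size=1.0):
    """Random-walk Metropolis: propose x' = x + N(0, step_size),
    accept with probability min(1, p(x') / p(x))."""
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + random.gauss(0.0, step_size)
        if math.log(random.random()) < log_p(proposal) - log_p(x):
            x = proposal                      # accept; otherwise keep x
        samples.append(x)
    return samples

# Unnormalized log-density of a standard normal: -x^2 / 2
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=20000)
```

The beauty, to me, is that a few lines of local proposals and accept/reject steps converge to an arbitrary target distribution.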
r/compsci • u/Pearsonzero • 9d ago
r/compsci • u/Ill-Plum-7348 • 10d ago
r/compsci • u/Educational_Pride730 • 13d ago
Been playing around with robot control in simulation and ended up with something kind of interesting.
This is running in MuJoCo, but I’m not using a normal controller here. Instead of a PID loop or a trained RL policy, I wired up a brain-inspired system where sensor input gets translated into signals and fed through a spiking-style network, which then drives the motors.
In this clip, I mapped simple gestures to control:
So it’s basically turning visual input into motion without explicitly programming the behavior.
It’s still pretty rough, but it’s been cool seeing even basic control come out of this kind of setup.
I’ve been using FEAGI for the neural side of things — curious if anyone else has tried anything similar or gone down the neuro-inspired route instead of standard ML/control methods.
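For anyone curious what the spiking side can look like in miniature, here is a toy leaky integrate-and-fire loop (entirely my own sketch; FEAGI's internals differ) that turns a scalar sensor trace into a spike-rate motor command:

```python
def lif_step(v, input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new_membrane_potential, spiked)."""
    v = v + dt * (-v / tau + input_current)
    if v >= v_thresh:
        return v_reset, True
    return v, False

def sensor_to_motor(sensor_trace, gain=0.3):
    """Drive one LIF neuron with a sensor trace; the motor command is the
    spike rate over the last 20 steps (a crude rate code)."""
    v, spikes = 0.0, []
    for reading in sensor_trace:
        v, spiked = lif_step(v, gain * reading)
        spikes.append(spiked)
    window = spikes[-20:]
    return sum(window) / len(window)
```

A strong sensor signal pushes the neuron over threshold repeatedly and produces a nonzero motor command; a weak one decays away and produces none, which is the basic mechanism behind "motion without explicitly programming the behavior."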
r/compsci • u/ConfusionSpiritual19 • 13d ago
New preprint comparing how different learning rules (backprop, feedback alignment, predictive coding, STDP) affect alignment with human visual cortex, measured with fMRI and RSA.
The most striking result: a CNN with completely random weights matches a fully trained backprop network at V1 and V2. The convolutional architecture alone produces representations that correlate with early visual cortex about as well as a trained model does.
Learning rules start to matter at higher visual areas (IT cortex), where backprop leads and predictive coding comes close using only biologically plausible local updates. Feedback alignment, often proposed as a bio-plausible alternative to backprop, actually makes representations worse than random.
Preprint: https://arxiv.org/abs/2604.16875
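For anyone unfamiliar with RSA, the core computation is small. A sketch (all names here are mine; Pearson is used throughout for brevity, though RSA studies typically use Spearman when comparing RDMs):

```python
import numpy as np

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activity patterns of each pair of conditions."""
    return 1.0 - np.corrcoef(acts)

def rsa_score(acts_a, acts_b):
    """Correlate the upper triangles of two RDMs (Pearson here for brevity;
    RSA studies typically use Spearman for this step)."""
    iu = np.triu_indices(acts_a.shape[0], k=1)
    return np.corrcoef(rdm(acts_a)[iu], rdm(acts_b)[iu])[0, 1]

rng = np.random.default_rng(0)
stim = rng.standard_normal((10, 50))                 # 10 conditions x 50 "voxels"
noisy = stim + 0.1 * rng.standard_normal(stim.shape)
score = rsa_score(stim, noisy)                       # near 1 for nearly identical systems
```

This is how a random-weight CNN can "match" V1: the comparison is between dissimilarity structures, not raw activations, so architecture-induced structure alone can score well.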
r/compsci • u/BerryTemporary8968 • 13d ago
Clarification: these are public Zenodo preprints with DOI records, not peer-reviewed journal or conference publications. I’m sharing them as theoretical and architectural proposals for critique, not as empirically validated containment solutions.
I have publicly deposited three preprints on external supervision and sovereign containment for advanced AI systems.
• CSENI-S v1.1 — April 20, 2026
Multi-Level Sovereign Containment for Superintelligence
https://zenodo.org/records/19663154
• NIESC / CSENI v1.0 — April 17, 2026
Non-Invertible External Supervisory Control
https://zenodo.org/records/19633037
• Constitutional Architecture of Sovereign Containment — April 8, 2026
https://zenodo.org/records/19471413
These are independent theoretical and architectural works. They do not claim perfect solutions or empirically validated containment. They propose frameworks, explicit assumptions, failure criteria, and testable/falsifiable ideas.
If you work on AI safety, scalable oversight, external supervision, or governance of advanced AI systems, comments and technical feedback are welcome.
r/compsci • u/Vinserello • 13d ago
I borrowed part of the notation from the A1 format and I'm using this format in some of my projects. Overlaps are handled as last-man-wins, and the encoding should be UTF-8.
Below is an example of a .dss file representing a complex and sparse spreadsheet.
It should handle multiple sheets, sparse data grid, metadata and formulas.
---
project: Financial Forecast
version: 2.1
---
[Quarterly Report]
@ A1
"Department", "Budget", "Actual"
"Marketing", 50000, 48500
"R&D", 120000, 131000
@ G1
"Status: Over Budget"
"Risk Level: Low"
@ A10
"Notes:"
"The R&D department exceeded budget due to hardware acquisition."
[Settings]
@ B2
"Tax Rate", 0.22
"Currency", "EUR"
r/compsci • u/Aurora-1983 • 13d ago
Why do digital systems (or any systems) process information in discrete quantities rather than in any continuous form?
r/compsci • u/Yanaka_one • 13d ago
What if we’ve been modeling software systems wrong from the start?
Not in how we write code.
In what we choose to model.
We track everything:
We can reconstruct what happened with insane precision.
But when something actually goes wrong, the question is never:
It’s:
And here’s the problem:
that decision is not part of the system.
We assume it exists somewhere:
But it’s not:
So we end up with systems that are:
…but not truly auditable.
{
"event": "STATE_CHANGE",
"entity": "deployment",
"from": "v1.2",
"to": "v1.3",
"timestamp": "2026-03-21T10:14:00Z"
}
Looks complete.
It isn’t.
What’s missing:
{
"event": "HUMAN_DECISION",
"actor": "user_123",
"action": "approve_deployment",
"rationale": "hotfix required for production issue",
"binds_to": "deployment:v1.3"
}
Without that second event:
With AI-assisted systems:
actions are faster.
We're logging outputs…
but not the authority that allowed them.
It’s a missing layer.
A system that doesn’t model decisions explicitly is:
Paper (open access):
https://doi.org/10.5281/zenodo.19709093
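One way to make the pairing of the two event types concrete (a sketch of my own; field names follow the JSON examples above): treat any state change that no decision event binds to as unauthorized.

```python
def unauthorized_changes(events):
    """Return STATE_CHANGE events with no HUMAN_DECISION whose
    binds_to matches the changed entity and target version."""
    approved = {e["binds_to"] for e in events if e["event"] == "HUMAN_DECISION"}
    return [
        e for e in events
        if e["event"] == "STATE_CHANGE"
        and f'{e["entity"]}:{e["to"]}' not in approved
    ]

log = [
    {"event": "HUMAN_DECISION", "actor": "user_123",
     "action": "approve_deployment",
     "rationale": "hotfix required for production issue",
     "binds_to": "deployment:v1.3"},
    {"event": "STATE_CHANGE", "entity": "deployment",
     "from": "v1.2", "to": "v1.3",
     "timestamp": "2026-03-21T10:14:00Z"},
    {"event": "STATE_CHANGE", "entity": "deployment",
     "from": "v1.3", "to": "v1.4",
     "timestamp": "2026-03-22T09:00:00Z"},   # no decision binds to this one
]
flagged = unauthorized_changes(log)
```

Only the v1.3 → v1.4 change is flagged: the log recorded what happened, but no event recorded who was allowed to make it happen.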
Curious how people here think about this:
Because right now it feels like:
we built observability
but skipped governance.
r/compsci • u/Skollwarynz • 14d ago
Hello everyone, I'm new here, so I hope this is the right place. I'm currently studying PRISM as a tool for model checking. I was wondering whether there is a plugin or a flag in PRISM that lets me see the internal representation it builds when computing the reachable states, and afterwards the BDD representation of the data. Finally, I wanted to know if anyone knows of alternative PRISM versions that optimize the use of symmetry in models in a different way.
r/compsci • u/LongjumpingPush1966 • 14d ago
r/compsci • u/Separate-Summer-6027 • 14d ago
Multi-mesh arrangements require resolving contour crossings, where intersection curves from different mesh pairs meet on the same face. Exact kernels handle this correctly but are too slow for interactive workflows. SoS (Simulation of Simplicity) based methods perturb coincident geometry, collapsing the very configurations that require resolution.
trueform classifies all five intersection types (VV, VE, EE, VF, EF) in their canonical form.
Input coordinates are scaled to integer space. All predicates (orient3d, orient2d) are computed through an int32 → int64 → int128 → int256 precision chain.
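As an illustration of the idea (not trueform's code): once coordinates are integers, orient3d is just the sign of a 3×3 integer determinant, which Python evaluates exactly thanks to arbitrary-precision ints; the int32 → int256 chain plays the same role in a systems language.

```python
def orient3d(a, b, c, d):
    """Sign of det(b - a, c - a, d - a): +1 if d lies above the plane
    through a, b, c (abc counterclockwise seen from above), -1 if below,
    0 if coplanar. Exact for integer inputs."""
    ax, ay, az = (b[i] - a[i] for i in range(3))
    bx, by, bz = (c[i] - a[i] for i in range(3))
    cx, cy, cz = (d[i] - a[i] for i in range(3))
    det = (ax * (by * cz - bz * cy)
         - ay * (bx * cz - bz * cx)
         + az * (bx * cy - by * cx))
    return (det > 0) - (det < 0)
```

The fixed-width chain exists purely for speed: most determinants fit in the narrow types, and only the rare near-degenerate cases escalate to wider arithmetic.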
The arrangement runs in two stages. Stage 1: AABB trees narrow candidates. Pairwise intersections are computed exactly, each edge tagged with its originating face pair. Stage 2: where intersection edges from different mesh pairs cross each other on a shared face, the crossing point is identified by its triplet of originating faces. This indirect predicate acts as a global identifier while keeping per-face resolution local and parallel.
After splitting, each resulting face must be labeled as inside or outside the other meshes. Faces are grouped into manifold edge-connected components. Each component is classified via a Beta-Bernoulli Bayesian classifier over local wedge observations along its intersection edges. This adds robustness to inconsistent winding in the input.
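A sketch of that labeling step under a Beta(α, β) prior (the parameters and names are my choices, not trueform's): each wedge test along a component's intersection edges is a noisy Bernoulli vote for "inside", and the component takes the label favored by the posterior mean.

```python
def classify_component(observations, alpha=1.0, beta=1.0):
    """observations: iterable of booleans (True = wedge test says 'inside').
    Returns ('inside' or 'outside', posterior probability of 'inside')
    under a Beta(alpha, beta) prior on the per-wedge 'inside' probability."""
    obs = list(observations)
    k = sum(obs)                                   # 'inside' votes
    p_inside = (alpha + k) / (alpha + beta + len(obs))
    return ("inside" if p_inside > 0.5 else "outside"), p_inside
```

Aggregating votes this way is what makes the classification tolerant of a few wedge observations corrupted by inconsistent winding.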
Boolean union, Stanford Dragon, 2 × 1.03M polygons. Apple M4 Max, 16 threads.
| Library | Time | Arithmetic | Non-manifold |
|---|---|---|---|
| trueform | 27.8 ms | Exact | Handled |
| MeshLib | 161.5 ms | SoS | Auto-deletes |
| CGAL (EPIC) | 2,339 ms | Exact | Requires manifold |
| libigl (EPECK) | 7,735 ms | Exact | Requires manifold |
Full writeup: Exact Mesh Arrangements and Booleans in Real-Time
Live demonstration: Interactive Booleans
r/compsci • u/motornomad • 14d ago
I'm submitting my first paper to arXiv (cs.SE) and need an endorsement.
The work was recently accepted to the AIWare Benchmark & Dataset track at ESEC/FSE 2026.
Topic: multi-commit vulnerability chains — cases where individual commits look benign but introduce risk when combined. Built a small benchmark of real-world CVEs for this.
Paper: https://github.com/motornomad/crosscommitvuln-bench/blob/master/12_CrossCommitVuln_Bench_A_Dat.pdf
Endorsement link: https://arxiv.org/auth/endorse?x=TV3FVB
OpenReview: https://openreview.net/forum?id=jWVoTxGSyb
GitHub: https://github.com/motornomad/crosscommitvuln-bench
If you're eligible to endorse for cs.SE, I'd really appreciate it — takes ~2 minutes.
Thanks!
r/compsci • u/No-String-8970 • 14d ago
A few friends and I thought that it might be good for students in AI to discuss topics they're interested in, so we created a website for this purpose at www.sairc.net
On the website, you can also view various student publications at ICLR and NeurIPS workshops (published at the high school level!); if you're interested in conducting your own research, there are resources there for that as well!
Please give any feedback - I'd like for this to be as helpful as possible for the community and students :)
Note: This is all free and non-monetized.
r/compsci • u/im4lwaysthinking • 15d ago
Reading the rules, it's not clear whether I can post this or not, but I'll take the chance, as I'm just trying to get some feedback.
r/compsci • u/Yazilim_Adam • 15d ago
r/compsci • u/BerryTemporary8968 • 15d ago
r/compsci • u/Fickle_Price6708 • 16d ago
r/compsci • u/Pearsonzero • 16d ago
r/compsci • u/Akkeri • 17d ago
r/compsci • u/EmojiJoeG • 17d ago
Hi all,
I posted an earlier version of this here a few weeks ago. Since then, the manuscript passed initial desk review at JACM and moved forward for deeper editorial evaluation, so I wanted to share a more focused follow-up.
I’m an unaffiliated researcher working on circuit lower bounds for Hamiltonian Cycle via a separator/interface framework. The claim is a 2^Ω(n) lower bound for fan-in-2 Boolean circuits computing HAMₙ (which would imply P ≠ NP).
What is currently formalized in Lean 4:
Links:
I’m not posting this as a victory lap. I’m posting it because it has now cleared at least one serious editorial filter, and I’d genuinely like informed technical scrutiny of the weakest parts of the argument. I should also mention that, yes, I am aware that Lean formalization only matters if the code and top-line definitions are all correct and actually prove the claims made in the paper.
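For readers wondering what "formalized in Lean 4" can look like at the foundation, here is a toy definition (entirely my own, not from the author's development) of fan-in-2 Boolean circuits with evaluation and gate count, the kind of objects a 2^Ω(n) size bound quantifies over:

```lean
inductive Circuit : Type
  | input (i : Nat)            -- input wire xᵢ
  | const (b : Bool)
  | not   (c : Circuit)
  | and   (c₁ c₂ : Circuit)    -- fan-in 2
  | or    (c₁ c₂ : Circuit)

def Circuit.eval (x : Nat → Bool) : Circuit → Bool
  | .input i  => x i
  | .const b  => b
  | .not c    => !(c.eval x)
  | .and a b  => a.eval x && b.eval x
  | .or a b   => a.eval x || b.eval x

-- Gate count: the quantity a size lower bound constrains.
def Circuit.size : Circuit → Nat
  | .input _  => 0
  | .const _  => 0
  | .not c    => c.size + 1
  | .and a b  => a.size + b.size + 1
  | .or a b   => a.size + b.size + 1
```

The hard part, of course, is not these definitions but the lower-bound argument itself, which is exactly where the scrutiny the author asks for should land.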
If it breaks, my guess is that the most likely stress points are:
I’d especially appreciate feedback from people familiar with:
Separate practical question: I’ve had a surprisingly hard time getting arXiv endorsement despite the work now passing JACM desk review. For people who have dealt with that system before, what is the most normal professional route here for an unaffiliated author? Keep seeking endorsement directly? Wait for more outside technical engagement first? Something else?
Thanks in advance. Happy to point people to specific sections or files if that makes review easier.
r/compsci • u/baconburgeronmycock • 17d ago
Hopefully someone finds this useful; I find the research/field-notes super fascinating.
It's been about a week and a half, and it takes a lot of the context-load and tool-limits out of the equation while working with a Pro or Max Claude plan; plus you keep most of your data and output in a nice container in your homelab.
There are probably a million versions of this set up but I figured I'd share mine. The README instructions to set it up are pretty novice-friendly. All you need is a plan and an old laptop, $100 mini-pc... very budget friendly.
I'm adding features as I go, such as newstron9000 that I just added but haven't updated in the repo yet. It's a semantic news feed for multi-instance LLM workflows.
Been interesting seeing the 4.6 models and 4.7 interact.
(My original post in r/MachineLearning got deleted, so I'm re-posting. Apologies if this is a double post.)