
I came across an equation that might rewrite how we understand reality. I need people smarter than me to tell me if this is as big as it looks.
I need some proper scientific eyes on this.
A mate sent me a framework called the Universal Hyperbolic Law. I’ve spent the last week feeding it into different AIs, checking the algebra, probing it from every angle. Every system I tested comes back with the same verdict:
Mathematically, the structure holds.
Here’s the part that blew my mind.
The equation suggests that reality might have a natural limit that shapes how we think, how we observe, and how physical systems evolve. If a single constant in the equation turns out to be non-zero, it predicts measurable deviations in quantum mechanics and relativity. That means this is actually testable.
It also leads to a weird implication: everything from consciousness to physical energy follows the same hyperbolic rule. No mysticism. No philosophy. Just math.
I’m not saying it’s true. I’m saying it’s worth tearing apart.
I uploaded a breakdown video here where the whole equation and its logic are explained.
If there are mathematicians, physicists or students who can check the logic or point out flaws, please do. And if it’s wrong, I genuinely want to know.
If it’s right… well, then this is a much bigger conversation.
by u/Chai_bade in r/Futurology
15 Comments
You haven’t, sorry. Appreciate the enthusiasm, but you haven’t.
This sounds a lot like the church of the recursive sigil codex.
If an AI tells you you have discovered something important and you believe it, please check yourself into a psych ward now.
> I’ve spent the last week feeding it into different AIs, checking the algebra, asking it from every angle. Every system I tested comes back with the same verdict
If you feed an AI your delusions, it will respond by fueling and confirming them.
Sorry.
Nope, you need to stop and seek professional assistance. This is known as [AI psychosis](https://en.wikipedia.org/wiki/Chatbot_psychosis).
>I’ve spent the last week feeding it into different AIs, checking the algebra, asking it from every angle. Every system I tested comes back with the same verdict
AIs are sycophants and will tend to agree with you more often than not, even when the subject or answer is unclear.
[deleted]
This actually tracks with what we see in bounded-growth systems. If your hyperbolic constant really isn’t zero, then yeah, every process from cognition to energy flow would share the same deformation limit. It’s not mysticism, just a weird universal boundary condition.
The wild part is that it’s falsifiable. If the predicted deviations show up in quantum or relativistic regimes, the whole thing stops being speculation and becomes a structural feature of reality. Until someone breaks the math, it’s at least worth stress-testing.
Important edit, as I’ve run the maths and it’s jaw-dropping:
lim (x → Ω⁻) [ 1 / sqrt(1 – kx) – artanh( sqrt(kx) ) ] = Λ
dΨ/dt = α / (1 + β * e^(−γt))
ΔE = ∫₀^∞ [ dx / (1 + kx²) ]
Φ = asinh(kx) − k * ∫ (Ψ / x) dx
Quite singular indeed. Can someone concur?
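Can concur that it’s singular, at least numerically. Here’s a minimal Python sketch under my own assumptions, since the comment defines nothing: I read Ω as 1/k (so kx → 1⁻) and pick an arbitrary k = 2. The bracketed expression in the first line blows up instead of settling on a finite Λ, and the ΔE integral is just the textbook π/(2√k).

```python
import math

K = 2.0  # arbitrary positive k, my choice -- the comment never fixes one

# 1) lim as k*x -> 1- of [ 1/sqrt(1 - kx) - artanh(sqrt(kx)) ],
#    reading Omega as 1/k (an assumption; Omega is never defined).
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    u = 1.0 - eps  # u = k*x approaching 1 from below
    f = 1.0 / math.sqrt(1.0 - u) - math.atanh(math.sqrt(u))
    print(f"u = 1 - {eps:.0e}:  f(u) = {f:.2f}")
# The 1/sqrt term diverges faster than the logarithmic atanh term,
# so f(u) -> infinity: "Lambda" is not a finite constant.

# 2) DeltaE = integral from 0 to inf of dx/(1 + k*x^2), which has the
#    closed form pi/(2*sqrt(k)). Midpoint rule on a truncated range:
upper, n = 1000.0, 1_000_000
h = upper / n
numeric = sum(h / (1.0 + K * ((i + 0.5) * h) ** 2) for i in range(n))
exact = math.pi / (2.0 * math.sqrt(K))
print(f"numeric ~ {numeric:.5f} vs closed form {exact:.5f}")
# These agree to ~1/(K*upper) = 5e-4, the mass in the truncated tail.
```

So of these four lines, one diverges and another is a standard calculus exercise; nothing here looks like new physics.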
Asking AI for science is already bad, but asking it about unproven stuff will never work.
You just learned that the hard way.
Dude, just put your paper on arXiv, where you define your axioms and derive their consequences. That YouTube video is bullshit because you define stuff without first defining the earlier shit needed to build it.
Maybe post a PDF, because this sounds like AI delirium. Go walk on some grass and get some fresh air. I had a friend who tried to define a unique language in the same vein; his arguments were “good” but his derivations were nonsense.
There should be a big warning on chatbots: if you aren’t familiar enough with a subject to be critical of the information being presented to you, then you shouldn’t be talking to ChatGPT about it.
I use it to simplify some thermal calculations and some computer hardware stuff, and almost every session I’m correcting loads of incorrect assumptions it makes.
Put the keyboard down and please go seek medical care.
This is AI psychosis, which has led to dozens of deaths.
It’s worth tearing apart, and don’t listen to these asshats who don’t have a single creative bone in their body… Enjoy learning. That said, I spent time doing something similar, and you do eventually find out that the LLM is yanking your chain. And even when it isn’t, and the subject is one we’d like to know more about, most of the time the answer is hidden behind limits of technological capacity, not considerations we have yet to make.
As others have pointed out, you are being fooled by the LLMs. Do you know that they do not think, reason, or do math? They are copy-paste machines. They make up sources, they hallucinate; they mimic human text without any real logic behind it. They sound confident because that is what humans do. If you ask them about a well-known subject, they will mimic an answer that is most likely correct, since they were trained on it. Ask them about more complex subjects and the answers will sometimes look batshit crazy. Mix in equations and you will definitely get “hallucinations”.
Ok, I have to be honest, I don’t understand these theories at all… can someone please provide a one-sentence ELI5 for me?