You’re not old enough to play with spheres

You have to admit pulling a sphere out of your ass is a pretty neat trick when you don’t even have real numbers to play with, so how did we get to spheres? OK, the taxicab universe only goes so far, but this octahedral thing is based on spheres and close packing. Isn’t that just a bit of a leap … even for you (or me)?

This is the gap we have to cross. The whole point is that 3(3 – 2√2) is such a cute thing to play with. I mean, it’s not even a counting number, it’s a ratio. So let’s start with the validity of the ratio in the first place. The ratio occurs in our minds; it doesn’t have to be real in the context of the counting numbers themselves. This is a meta-analysis, and I’m allowed to use tools that are too complex to express in the context of the thing being studied. So, what I’m saying is, I can derive a ratio between any two things, even counting numbers, as long as the ratio stays an expression of analysis and does not play a role in the mechanics of the theory.

Here we have an analytical ratio, the “end game” ratio which would only be true at the end time that never comes. As the ratio of densities approaches this value, what does it mean for universes derived from it? As an aside, let’s think of the ratio of densities as the first differential of the signal-to-noise ratio between generations. That is, it’s the rate of signal diminution over generations.
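To pin the number down: writing ρₙ for the density of generation n (my notation, nothing canonical), the end game ratio is

    ρₙ₊₁ / ρₙ → 3(3 – 2√2) = 9 – 6√2 ≈ 0.514719

so each generation carries a little over half the density of its parent.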

Now, those bloody spheres. If the unit sphere represents the notional parent generation signal strength, and the small sphere the child generation signal strength, then is this a model that contains any analytical meaning that could lead to the bringing forth of native spatiality?

When is a Sphere not a Sphere

I know some of you must be thinking, as I have, that the spherical radius ratio theory is flawed. First of all, why the radius? Why not surface area or volume, which sound more intuitive?

Here’s why: Taxicab Geometry

The only metric in taxicab geometry is the radius; the rest is meaningless, as it is in the quantum world. There are no real spheres, as these are just abstractions and constructions. Stuck here in MST we see spheres in space all the time. The planet is a sphere, so is the sun. Even the event horizon of a black hole is spherical, and yet its entropy is proportional to its surface area by virtue of holographic projection from a non-(3D + time) underlying quantum reality. See Leonard Susskind on The World as Hologram.
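To see why the radius is the only honest measure, here’s a minimal sketch in Python (nothing assumed beyond the L1 metric itself): the taxicab “sphere” of radius r is just the set of points at L1 distance r from a centre, and in 3D that surface is an octahedron.

    # A taxicab "sphere": integer lattice points at L1 distance exactly r
    # from the origin. In 3D this surface is an octahedron, not a round ball.
    def taxicab_sphere(r):
        return [(x, y, z)
                for x in range(-r, r + 1)
                for y in range(-r, r + 1)
                for z in range(-r, r + 1)
                if abs(x) + abs(y) + abs(z) == r]

    # The octahedron's six vertices sit at (+-r, 0, 0), (0, +-r, 0), (0, 0, +-r).
    print(sorted(p for p in taxicab_sphere(2) if p.count(0) == 2))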

In our translation from the TPPT we only really claimed a relationship of ratio with the idea of octahedral close packing based on radius, not that the things had to relate to spheres specifically. Like I said, spheres and surface area are constructions that require real numbers, and we are a long way from these abstractions and illusions.

Thinking about octahedral close packing, 3D fractals and limits in the context of taxicab geometry is not easy, but we will get there. Maybe there is some room for us to consider this mess in the context of yet another projection. There may be a role for Geometric (Clifford) Algebra here.

Finite Convergence and Entropy

Finite convergence is an interesting idea because it implies a contradiction in terms. Convergence is defined over an infinite process, but there is no infinity, and after any finite number of iterations the convergence is not complete.

Take the convergence of the dimensional scaling factor of the Triples to 3(3 – 2√2). If there is no infinity it will never get there, right? And yet this convergence defines the nature of three dimensional spatiality. Take this conundrum also in the context of quantum cosmology, where the idea of infinite granularity in an infinite universe is nonsense. Clearly the universe is not infinite, just big, and the quantum granularity implies a sense of non-smoothness at the bottom of the abstraction stack.

The bottom of the abstraction stack for me is the counting numbers. If you don’t know why, go back and re-read the blog. It’s not smooth. The first abstraction is the Triples, and after that we get the first idea of spatiality. There is lots of noise and a thin thread of signal. So now consider the non-infinite sequence that is the evolving convergence to 3(3 – 2√2) – I will coin this Russell’s number (why not, it’s my number).

What does it mean for three dimensional spatiality as this sequence converges? Remember, the number it’s converging to is the dimensional scaling factor of the octahedral graph. This is represented by the abstraction of the exploding unit sphere. As the sequence progresses, what ideas arise that map to the progression of the exploding unit sphere? The scaling factor (Russell’s number, 3(3 – 2√2)) is the diameter of each of the six resultant spheres, so as the sequence progresses the unit sphere can be thought of as exploding into six spheres slightly smaller in diameter.

What does this mean, exploding into less than the perfectly fitting six spheres of diameter 3(3 – 2√2)? What positions do the spheres have if they do not fit perfectly? Do they rattle around? Are the positions given by an equation that expresses “uncertainty”? Is this the root of all uncertainty in the quantum universe, and because the sequence is still running, do we still experience uncertainty? Does this imply that uncertainty will decrease over time, and is this a parallel for entropy?
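To put a number on the rattle room, here’s a minimal sketch under my reading of the construction: six child spheres of diameter 3(3 – 2√2), centred on the ±x, ±y, ±z axes and tangent internally to the unit sphere. The placement and tangency are my assumptions, not derivations.

    import math

    # Six child spheres of diameter 3*(3 - 2*sqrt(2)) on the axes inside the
    # unit sphere, each assumed tangent internally to it. How much slack is
    # left between adjacent children?
    d = 3 * (3 - 2 * math.sqrt(2))   # Russell's number, ~0.514719
    r = d / 2                        # child sphere radius
    c = 1 - r                        # centre distance under internal tangency
    gap = c * math.sqrt(2) - 2 * r   # adjacent centres are c*sqrt(2) apart

    print(f"child diameter ~ {d:.6f}")
    print(f"slack between neighbours ~ {gap:.6f}")   # positive: they rattle

A positive gap (about 0.54 here) says the six spheres are nowhere near wedged tight, which is exactly the rattle room the questions above are pointing at.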

How do the dynamics of six trapped and expanding balls predict the structure of the early universe? Is this the source of the big bang, the point at which, after a time, there is a fundamental change in the mode and nature of the positional uncertainty such that MST becomes a possible abstraction?

Rethinking the Map

The mapping I have been using to date has a problem. My theory suggests that a mapping exists between the open and closed octahedral graph because their dimensional scaling factors are identical. The open graph was projected by taking the limit of the ratio of densities between generations. The closed graph was projected into 3D space and the dimensional scaling factor calculated using Euclidean geometry. Is there a dimensional difference in these target projections?

One may consider the open graph a one-dimensional projection, as it relates to taking a limit between adjacent generations. Or is it? Does it introduce a degree of freedom or remove one? That is, it has taken knowledge of a solution and generalised it into a comparative metric. In the closed case we go from zero dimensions to three, which introduces three degrees of freedom. How should this be represented in the mapping?

There is one more thing to consider: this is a mapping of solutions, and the TPPT does not represent all the solutions … only the primitive solutions. Typically the ternary tree has a scaling factor of 1/3 and a dimensionality of 3. From any one solution you can move to any one of three child solutions (or freedoms), or along a continuum to another parallel solution set. That’s four degrees of freedom.

There are also four paths from any one solution to any one of four connected solutions in the closed graph.
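The three-children structure is easy to see directly. Here’s a minimal sketch using the classical Berggren/Barning matrices that generate the TPPT from (3, 4, 5); this is the standard construction, not something of mine.

    # The three classical matrices that send a primitive Pythagorean triple
    # to its three children in the TPPT.
    A = ((1, -2, 2), (2, -1, 2), (2, -2, 3))
    B = ((1, 2, 2), (2, 1, 2), (2, 2, 3))
    C = ((-1, 2, 2), (-2, 1, 2), (-2, 2, 3))

    def children(t):
        return [tuple(sum(m * x for m, x in zip(row, t)) for row in M)
                for M in (A, B, C)]

    gen = [(3, 4, 5)]
    for n in range(3):
        print(f"generation {n}: {len(gen)} triples")   # 1, 3, 9, ...
        gen = [c for t in gen for c in children(t)]

Each solution really does have exactly three child solutions, which is the ternary tree with its 3ⁿ-wide generations.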


My Share of Rabbit Holes

To anyone who has read my maths adventure this far, you will realise that maths is full of twists and turns and its fair share of rabbit holes. After some soul searching I have decided to leave this blog intact for posterity, with all the rabbit holes in place.

The one conclusion I have come to after all this time is that my original result (the paper on Graph Scaling) still stands out for its core meaning. The implication is that the triples inherently define a three dimensional substructure of the counting numbers. That is, three dimensions are directly implied, the rest being noise. These dimensions are pure, they are NOT Minkowski space time, but the underlying spatiality of MST is core and doesn’t need complex and artificial construction (such as with the bubbles in quantum gravity) … it’s just there.

I have still to make any headway in terms of perturbation (nature of randomness) and instancing in this model. My older model seemed to have these features, but it was a rabbit hole!

I feel that the nature of the dimensional mapping is still important, and that modular forms may be in there somewhere, giving rise to interference and perturbation. At any rate, the paper needs to be rewritten, it’s a first cut, but the significance of the result is the same.

It’s a Square World After All

An analysis of exclusion space and octahedral geometry shows us that probabilistically different versions of octahedrons that do not join the cluster are isolated and hence out of scope. That is, they play no further role in the universe. They are the noise. Only octahedrons that join are signal, and so we concentrate only on these.

In order to join, an octahedron suffers a constraint: it is no longer able to take on all 54 possible configurations. The second to join is constrained to only 18. As we add to the chain, each new one has 18 possibilities, but the inner ones are constrained to only 6. Finally, as we move from a chain to a three-way joint, the possibilities fall to 2. The two-ness just won’t go away; we are stuck with it, as this represents each of two mirror images or isomers. We can break this pattern by forming a cross-chain between two out-reached arms. Such a cross-chain sets in place one of the isomers on each side.

It is interesting to note that in octahedral geometry there are three planes that intersect orthogonally. This means that at a three-way joint each new chain path is orthogonal to the others. This gives the overall impression of a set of right-angled square pathways, like a crazy jungle-gym in a kid’s park.

Think ahead now to the event surface at around K. The network is littered with loops and cross connections; probability constraints and exclusion have played their part in the network as it stands at n = K. Where now is the signal?

Let’s take a step back to better understand.

We are here because a² + b² = c² has more than one solution.

There are multiple (a, b) pairs, and where there is coincidence in the TPPT we get octahedral definitions with matching edges. I would predict that as n gets larger, and as the width of a TPPT generation gets wider, the possibilities for edge matching also increase. Hidden deep in the structure are parcels of certainty, and built on top of that, uncertainty again outwards to the horizon.
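As a first sanity check on that prediction, here’s a minimal sketch: enumerate primitive triples via Euclid’s formula and count how often a leg value recurs. Treating a recurring leg as a “matching edge” is my crude proxy, not the full TPPT coincidence structure.

    from collections import Counter
    from math import gcd, isqrt

    def primitive_triples(c_max):
        # Euclid's formula: m > n > 0, coprime, opposite parity.
        for m in range(2, isqrt(c_max) + 1):
            for n in range(1, m):
                if (m - n) % 2 == 1 and gcd(m, n) == 1:
                    a, b, c = m*m - n*n, 2*m*n, m*m + n*n
                    if c <= c_max:
                        yield a, b, c

    legs = Counter()
    for a, b, c in primitive_triples(10_000):
        legs[a] += 1
        legs[b] += 1

    print(sum(1 for v in legs.values() if v > 1), "leg values recur")

For example, 20 is a leg of both (20, 21, 29) and (20, 99, 101), and the count of such coincidences climbs as the bound grows.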

According to some papers on spin foam, they are looking for both a notion of spin and an evolution of loop-defined space over n to produce cells. The problem I have at this level is: what the hell is spin, and why would loop-space evolution over n be a source of signal? At some stage I would like to formulate a projection into 3-space without n, but I can’t see spin foam ideas at work here.

So, how do loops evolve into spaces? Unlike the spin foam and loop quantum gravity diagrams, exclusion space is square, not loopy; there are no four-way vertices, and all three-way vertices (nodes) have their arms set orthogonally. This means that a chain cannot be turned into a three-way at any node, because there are no free edges and uncertainty has already reduced, leaving no option to rearrange the entire chain. Only where an elbow exists is there a free edge. In general there is no preference for straight chains, and elbows are just as likely, but this has yet to be proven. Maybe only a large scale simulation will shed some light on this. Nevertheless, the spin foam evolution idea, which seems rooted in the simple line diagrams that appear in the papers, doesn’t seem to map easily to a world more like a plumbing game on your phone.

At least we can assign some metric to the square loops. For example, they have a defined perimeter in terms of a node count and there is a notion of loop area and even loop volume. But it’s not signal.


Exclusion Space and Uncertainty

More on Uncertainty

Exclusion space is directly implied by the counting numbers. I didn’t have to invent it, I just discovered it by following the signal. This is a natural quantum space, both discrete (the octahedrons) and uncertain. I’m going to talk more here about uncertainty.

The first octahedron is a pick from 54 isomers. They are all equally uncertain in terms of which takes up the space and thereby excludes the others. That’s why it’s exclusion space!

As you introduce more octahedrons, if they don’t connect to a preexisting octahedron then they are isolated and no communication is possible. When an octahedron appears with the possibility of a shared node or pair (edge), then we get our first real look at uncertainty at work.

There are two possibilities here. 1) As in high school probability, the uncertainty of the system increases: roll one die and you have 6 states, roll two and you have 36. OR 2) uncertainty decreases, because we already have the pre-instantiation of the first die. The first and second events are not independent. The second octahedron only comes into view WHEN it shares a node or edge, so history is important.

Take the example of a shared edge. The uncertainty of the first octahedron drops to 1 in 18 when an edge is known. Similarly, the second octahedron now has only 18 isomers to be uncertain about, not 54. So did the uncertainty of the system increase or decrease? I put it to you that the uncertainty of a system, unlike probability, is measured at its boundary. When there was one, it was 1 in 54; now with two, the boundary has increased in size, but at each new interface the uncertainty has fallen to 1 in 18. If we add a third to the chain, the lowering of uncertainty will ripple back into the interior. Once the interior has its connection possibilities exhausted, it is out of scope and unobservable.

A bit like a singularity, you only know of it through its event horizon.
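To make the boundary idea concrete, here’s a toy sketch using the counts from above (54 isolated, 18 at a fresh interface, 6 once a link is interior); the chain model itself is just my reading, not a derivation.

    # Per-site option counts along a simple chain, using the counts from the
    # text: 54 for an isolated octahedron, 18 at each open end, 6 interior.
    def boundary_uncertainty(chain_length):
        if chain_length == 1:
            return [54]                       # one free octahedron
        if chain_length == 2:
            return [18, 18]                   # a single shared edge
        return [18] + [6] * (chain_length - 2) + [18]

    for k in range(1, 5):
        print(k, boundary_uncertainty(k))

The boundary (the ends) stays at 1 in 18 while the interior collapses towards certainty, which is the ripple described above.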

Where from here

There is only one way and that’s up. The problem with exclusion space is finding new signal. This is virgin territory, unlike the counting numbers, which have been studied for hundreds of years. The goal is clearly to approach concepts like the quantisation of space, gravity, charge, type and instance, and eventually to confirm the dimensionality of spacetime, derive the Standard Model and explain mass and energy as we perceive them.

One of my thoughts was that at this level uncertainty behaves like temperature or entropy, and so in the long run a sum over history is essentially thermodynamic. Simply put, the more you have, the less uncertain it all becomes, and so entropy decreases. Which is why time appears to run forward: it’s essentially just thermodynamic memory.

K

Looking at exclusion space in the low numbers is, I suggest, very misleading. We will only really get to know its properties once we use some supercomputing to simulate upwards of billions of octahedrons. This means pushing n into unheard-of regions. The universe as we see it today behaves in the region of K, where K is a counting number defined to be larger than any other counting number you can nominate, express or communicate. What happens at K is a mystery. Does the number of solutions to a² + b² = c² get more dense, or less? Are there many isolated clusters of connected octahedrons, or just one big cluster? Do they loop, and is this common enough at scale to suit spin foam theory?

If you have an opinion, please leave a comment.