New Blocks To Play With

We have a set of new blocks to play with. Instead of octahedrons with the a’s and b’s of three solutions at the vertices, we now have each of the six points representing the six pathways to the three solutions.

At each end of a solution pair we have (a, b) and (b, a). These labels are unique, and as such we cannot join these blocks up into solution chains. To do that we need labels that can match across blocks. If we simply label each vertex as c instead of (a, b), we have a c at each end, but that destroys the information about there being two pathways. Instead, label the vertices as +c and -c, with a rule such that c is negative if a - b is negative. The exact rule doesn’t really matter as long as we are consistent.
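Here is a minimal sketch of that ±c rule in Python (the function name and the signed-integer representation are mine, purely for illustration):

```python
# A tiny sketch of the +c / -c labelling rule: the sign records which of
# the two orderings, (a, b) or (b, a), the vertex came from.

def signed_c(a, b, c):
    """Collapse the ordered pair (a, b) into a signed hypotenuse label."""
    # a == b never happens for a Pythagorean triple, so the rule is total.
    return c if a - b > 0 else -c

print(signed_c(3, 4, 5))   # -5  (a - b is negative)
print(signed_c(4, 3, 5))   #  5  (a - b is positive)
```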

Now we can start to chain our blocks again, and we have re-derived a spin-foam-like structure. The correction to the mapping has led to a different way of recognizing the joining of octahedral solution clusters into chains and loops, but they are still there.

It also introduces new limitations on chain formation. For example, matching c’s will only be found across generations, as they are unique within a generation.

The next step is to be able to project these dimensionally and achieve the dual outcome of projective instancing at a useful 2D density to support MST.


Commutative Law of Addition

I’ve been having a problem lately with the mapping from the open 4-vertex graph to the closed (Octahedral) graph. To put it bluntly, the open graph appears to split the fractal 3 ways, that is, it has a fractal scaling factor of 1/3, whereas the octahedral closed graph appears to have a scaling factor of 1/6.

This is clearly mismatched and it took me a while to understand why. The root cause is the definition of the TPPT in the first place. It’s primitive in two separate ways. Firstly we consider only the primitive (a, b, c) in terms of scale, so that (2a, 2b, 2c) is considered the same, and we also consider that a² + b² = b² + a² by the commutative law of addition.

This is true when you are considering only where the sum lands, that is, c² is in N. What is not so obvious is that these two resolve to two different solutions and two different but similar triangles. We need Euclid to understand that triangles can be similar, but we are pre-Euclid so it’s enough to know that they are different.

I propose another version of the TPPT: in this one it is a tree with not three (3) children but six (6). We recognize that (a, b, c) and (b, a, c) ARE NOT THE SAME.
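As a concrete sketch (one possible reading of the proposal, not a canonical definition), the usual ternary TPPT can be generated with the three Berggren/Barning-Hall matrices, and the six-child version simply keeps (a, b, c) and (b, a, c) as distinct children:

```python
# One reading of the six-child tree: take the standard three matrix children
# of a primitive triple, then also keep their (b, a, c) swaps as separate
# solutions. Purely illustrative.

BERGGREN = [
    (( 1, -2, 2), ( 2, -1, 2), ( 2, -2, 3)),
    (( 1,  2, 2), ( 2,  1, 2), ( 2,  2, 3)),
    ((-1,  2, 2), (-2,  1, 2), (-2,  2, 3)),
]

def children(a, b, c):
    """Six children: the three matrix children plus their a/b swaps."""
    kids = [tuple(row[0] * a + row[1] * b + row[2] * c for row in m)
            for m in BERGGREN]
    return kids + [(kb, ka, kc) for (ka, kb, kc) in kids]

print(children(3, 4, 5))
# [(5, 12, 13), (21, 20, 29), (15, 8, 17),
#  (12, 5, 13), (20, 21, 29), (8, 15, 17)]
```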

Painting a Picture of a Quantum Surface

Intro

In my last post I talked about spheres and taxi-cab geometry. Let me now put it all together and paint a multi-step picture that takes us from here to non-instancing multi-dimensional strings.

Step 1 – Exclusion Space

This is where I’m choosing to start. Exclusion space is based on the TPPT forming a space of octahedrons where they join at points (and, maybe much more rarely, at edges) to form chains and clusters. This space is bounded dimensionally, so where two TPPT solutions share a point you get a join, but when there are three or more they exclude each other in such a way that the solutions all co-exist but not at the same time. If we introduce the idea of historical precedence, then earlier octahedrons maintain their probability and later ones exhibit 1/n lower probabilities, i.e. 1/2, 1/3, 1/4, etc.
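A tiny sketch of how that precedence rule could be book-kept, assuming it simply means the k-th octahedron to claim a shared point gets probability 1/k (the names and data shapes here are my own, not part of the theory):

```python
# Historical precedence read as: the first octahedron at a shared point keeps
# probability 1, later arrivals get 1/2, 1/3, 1/4, ...

from collections import defaultdict

claims = defaultdict(int)   # shared point -> number of octahedrons so far

def claim(point):
    """Register one more octahedron at this point and return its probability."""
    claims[point] += 1
    return 1 / claims[point]

shared = (5, 12, 13)        # an illustrative shared solution point
print([claim(shared) for _ in range(4)])   # [1.0, 0.5, 0.333..., 0.25]
```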

Step 2 – Active Clusters

Imagine now that there grows a primary cluster; maybe there is just one, maybe there are many (a multiverse). Let’s take one and ask the questions: how big is it, are there any natural limits on its size and rate of growth, and does it reach some limit and become stable?

For a start, we have already shown that the density of the TPPT decreases with each generation. At some point, solutions become rarer per unit of n (the counting number iterator), and as a consequence octahedrons capable of joining the cluster are also less frequent. These ideas tend to increase the stability of the cluster. At some n = K, which is an inconceivably large number, the cluster is at equilibrium with itself.
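One rough way to put a number on that thinning, assuming “density of a generation” can be read as the count of triples in the generation divided by the largest c it reaches (my reading, not a definition from these posts):

```python
# Generate the ternary TPPT generation by generation: the count grows like
# 3^g but the largest hypotenuse grows faster, so the ratio keeps falling.

BERGGREN = [
    (( 1, -2, 2), ( 2, -1, 2), ( 2, -2, 3)),
    (( 1,  2, 2), ( 2,  1, 2), ( 2,  2, 3)),
    ((-1,  2, 2), (-2,  1, 2), (-2,  2, 3)),
]

def kids(a, b, c):
    return [tuple(row[0] * a + row[1] * b + row[2] * c for row in m)
            for m in BERGGREN]

level = [(3, 4, 5)]
for g in range(6):
    density = len(level) / max(c for _, _, c in level)
    print(g, len(level), round(density, 4))
    level = [k for t in level for k in kids(*t)]
```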

If we go with the historical precedence model (or another) we see probabilities 1/1, 1/2, 1/3, etc. close to the center. As we go towards the outside, the threads get seeded with combinatorially lower probabilities. At the surface, a sparse matrix contains parts that are blinking in and out of existence, maybe more off than on.

Step 3 – Patterns on the Surface

At the surface we can interpret the mountainous crags and caverns blinking on and off. But interpret them as what? Let’s try and see if there are any patterns. Considering the surface as a whole (as there is no reason or limitation that forces us to consider only a subset), imagine the surface as a sparse 2D matrix of bits. All the complexity of the inside structure and the combinatorial probabilities are now reflected on this matrix as the bits blink on and off.

Consider a set of bits that share the same pattern. Consider the entire surface analysed down into p distinct patterns. I suggest we ignore the temptation to order them; one pattern is really as good as any other, and as long as a pattern is unique, the number of members is irrelevant.
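As a sketch of what “analysing the surface down into distinct patterns, without ordering them” might look like, assuming a pattern is simply the 3×3 neighbourhood of an “on” bit in a sparse bit matrix (my assumption, these posts do not pin that down):

```python
# Collect the distinct local patterns on a sparse 2D bit surface into a set,
# so the patterns stay unordered and the member counts are irrelevant.

surface = {(0, 0), (0, 1), (5, 5), (5, 6), (9, 2)}   # made-up "on" bits

def neighbourhood(x, y):
    """The 3x3 window around (x, y), flattened into a hashable pattern."""
    return tuple(int((x + dx, y + dy) in surface)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1))

patterns = {neighbourhood(x, y) for (x, y) in surface}
print(len(patterns), "distinct patterns on the surface")
```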

Step 4 – Loops

Let’s start imposing our previously discussed constraints on solutions. That is, it’s time to look again for the signal in the noise. Frankly the above paragraphs describe a whole bunch of noise, so where is the signal? I propose that the signal is in the loops. Let’s do two chops in one go. First, ignore all the historical generations and focus on the generation boundaries. Second, within each generation focus on the loops and ignore all the tendrils. Loops have length, a critically important metric. I have suggested that loop length is akin to the spin in spin foam theory. This then leads on to theories of quantum gravity and the mechanism for the apparent creation of new space in the projection to MST.
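A hedged sketch of that “keep the loops, drop the tendrils” step, treating one generation’s octahedron joins as an ordinary undirected graph (the edges below are invented for illustration):

```python
# Extract independent loops from a join graph and record their lengths;
# anything not on a loop (a tendril) never appears in the cycle basis.

import networkx as nx

G = nx.Graph()
G.add_edges_from([
    (1, 2), (2, 3), (3, 1),            # a 3-loop
    (3, 4), (4, 5), (5, 6), (6, 3),    # a 4-loop sharing node 3
    (6, 7), (7, 8),                    # a tendril, contributes no loop
])

loops = nx.cycle_basis(G)
print(sorted(len(loop) for loop in loops))   # loop lengths: [3, 4]
```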

You’re not old enough to play with spheres

You have to admit pulling a sphere out of your ass is a pretty neat trick when you don’t even have real numbers to play with, so how did you get to spheres? OK, the taxi-cab universe only goes so far, but this octahedral thing is based on spheres and close packing; isn’t that just a bit of a leap … even for you (or me)?

This is the gap we have to cross. The whole point is that 3(3 – 2√2) is such a cute thing to play with; I mean it’s not even a counting number, it’s a ratio. So let’s start with the idea of the validity of the ratio in the first place. The ratio occurs in our minds; it doesn’t have to be real in the context of the counting numbers themselves. This is a meta analysis and I’m allowed to use tools that are too complex to express in the context of the thing being studied. So, what I’m saying is, I can derive a ratio between any two things, even counting numbers, as long as the ratio stays as an expression of analysis and does not play a role in the mechanics of the theory.

Here we have an analytical ratio, the “end game” ratio which would only be true at the end time that never comes. As the ratio of densities approaches this value, what does it mean for universes derived from it? As an aside, let’s think of the ratio of densities as the 1st differential of the signal-to-noise ratio between generations. That is, it’s the rate of signal diminution over generations.

Now, those bloody spheres: if the unit sphere represents the notional parent generation signal strength, and the small sphere the child generation signal strength, is this a model that contains any analytical meaning that could lead to the bringing forth of native spatiality?

When is a Sphere not a Sphere

I know some of you must be thinking, as I have, that the spherical radius ratio theory is flawed. First of all, why the radius and not surface area or volume, as they sound more intuitive?

Here’s why: Taxicab Geometry

The only metric in Taxicab geometry is the radius; the rest is meaningless, as it is in the quantum world. There are no real spheres, as these are just abstractions and constructions. Stuck here in MST we see spheres in space all the time. The planet is a sphere, so is the sun. Even the event horizon of a black hole is spherical, and yet its entropy is proportional to its surface area by virtue of holographic projection from a non-(3D + time) underlying quantum reality. See Leonard Susskind on The World As Hologram.
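A small, purely illustrative check (not from the posts) of why the radius is the whole story here: the set of integer points at a fixed taxicab distance from a centre is exactly the surface of an octahedron, and it grows like a surface (4r² + 2 points), no Euclidean sphere required.

```python
# Enumerate the "sphere" of taxicab radius r around the origin: the points
# with |x| + |y| + |z| == r form the surface of an octahedron.

def taxicab_sphere(r):
    return [(x, y, z)
            for x in range(-r, r + 1)
            for y in range(-r, r + 1)
            for z in range(-r, r + 1)
            if abs(x) + abs(y) + abs(z) == r]

for r in (1, 2, 3):
    print(r, len(taxicab_sphere(r)))   # 6, 18, 38  -- i.e. 4*r*r + 2
```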

In our translation from the TPPT we only really claimed a relationship of ratio with the idea of octahedral close packing based on radius, not that the things had to relate to spheres specifically. Like I said, spheres and surface area are constructions that require real numbers and we are a long way from these abstractions and illusions.

Thinking about octahedral close packing, 3D fractals and limits in the context of taxi cab geometry is not easy, but we will get there. Maybe there is some room for us to consider this mess in the context of yet another projection. There may be a role for Geometric (Clifford) Algebra here.

Finite Convergence and Entropy

Finite convergence is an interesting idea because it implies a contradiction in terms. Convergence is normally defined over an infinite process, but there is no infinity, and after a finite number of iterations convergence is not complete.

Take the convergence of the dimensional scaling factor of the Triples to 3(3 – 2√2). If there is no infinity it will never get there, right? And yet this convergence defines the nature of three-dimensional spatiality. Take this conundrum also in the context of quantum cosmology, where the idea of infinite granularity in an infinite universe is a nonsense. Clearly the universe is not infinite, just big, and the quantum granularity implies a sense of non-smoothness at the bottom of the abstraction stack.
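As a purely numeric stand-in (this is not the ratio-of-densities sequence from the posts, just a standard iteration toward √2 used to show what finite convergence to 3(3 – 2√2) looks like):

```python
# After any finite number of Babylonian steps the guess for sqrt(2) is not
# sqrt(2), so the derived value is close to, but never exactly,
# 3 * (3 - 2 * sqrt(2)) ~= 0.514719.

x = 1.0                        # crude starting guess for sqrt(2)
for n in range(6):
    value = 3 * (3 - 2 * x)    # the expression built from the current guess
    print(n, x, value)
    x = x / 2 + 1 / x          # Babylonian / Newton step toward sqrt(2)
```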

The bottom of the abstraction stack for me is the counting numbers. If you don’t know why, go back and re-read the blog. It’s not smooth. The first abstraction is the Triples, and after that we get the first idea of spatiality. There is lots of noise and a thin thread of signal. So now consider the non-infinite sequence that is the evolving convergence to 3(3 – 2√2) – I will coin this Russell’s number (why not, it’s my number).

What does it mean for three-dimensional spatiality as this sequence converges? Remember, the number it’s converging to is the dimensional scaling factor of the octahedral graph. This is represented by the abstraction of the exploding unit sphere. As the sequence progresses, what idea arises that maps to the progression of the exploding unit sphere? The scaling factor (Russell’s number, 3(3 – 2√2)) is the diameter of each of the six resultant spheres, so as the sequence progresses the unit sphere can be thought of as exploding into six slightly smaller spheres in terms of their diameter.

What does this mean, exploding into less than the perfectly fitting six spheres of diameter 3(3 – 2√2)? What positions do the spheres have if they do not fit perfectly? Do they rattle around? Are the positions given by an equation that expresses “uncertainty”? Is this the root of all uncertainty in the quantum universe, and do we still experience uncertainty because the sequence is still running? Does this imply that uncertainty will decrease over time, and is this a parallel for entropy?

How do the dynamics of six trapped and expanding balls predict the structure of the early universe? Is this the source of the big bang, when after a time there is a fundamental change in the mode and nature of the positional uncertainty such that MST becomes a possible abstraction?

Rethinking the Map

The mapping I have been using to date has a problem. My theory suggests that a mapping exists between the open and closed octahedral graph because their dimensional scaling factors are identical. The open graph was projected by taking the limit of the ratio of densities between generations. The closed graph was projected into 3D space and the dimensional scaling factor calculated using Euclidean geometry. Is there a dimensional difference in these target projections?

The open graph, one may consider, is a single-dimensional projection as it relates to taking a limit between adjacent generations. Or is it? Does it introduce a degree of freedom or remove one? That is, it has taken knowledge of a solution and generalised it into a comparative metric. In the closed case we go from zero to three dimensions, which introduces three degrees of freedom. How should this be represented in the mapping?

There is one more thing to consider: this is a mapping of solutions, and the TPPT does not represent all the solutions … only the primitive solutions. Typically the ternary tree has a scaling factor of 1/3 and a dimensionality of 3. From any one solution you can move to any one of three child solutions (or freedoms), or along a continuum to another parallel solution set. That’s four degrees of freedom.

There are also four paths from any one solution to any one of four connected solutions in the closed graph.