sci fi reality…..force becomes relative

 

data on any function in the variance of force relativity

FORCE MAKES IT INTO RELATIVITY

By Henryk Szubinski

as an approach to UNIVERSES as a reversal of an objective towards continuums, one level is basically relative or it would have no future.

a 1-bit definition of the multiples related to as (x, y) = 1 is basically non-finite: the values of the following data define the interactions of computations that can use a value in similarity, meaning that a process of a z value can be included, or an extension of the parameters of any value, decimal or whole number, as an interaction with any force computation in total universal limitations of an x = multiple whole number as x approaches a decimal value. In this regard an incline can be gauged to the value of an object on the incline and connected to the limits of a non-singularity of the sections of the zero polarity:

[Figure: File:MA2PoleZero C.png]

a pole–zero diagram, as seen in:

The difference equation that defines the output of an FIR filter in terms of its input is:

y[n] = b_0 x[n] + b_1 x[n-1] + \cdots + b_N x[n-N]

where:

  • x[n] is the input signal,
  • y[n] is the output signal,
  • b_i are the filter coefficients, and
  • N is the filter order – an Nth-order filter has (N + 1) terms on the right-hand side; these are commonly referred to as taps.

This equation can also be expressed as a convolution of the coefficient sequence b_i with the input signal:

y[n] = \sum_{i=0}^{N} b_i x[n-i].

That is, the filter output is a weighted sum of the current and a finite number of previous values of the input.

response computation difference

A finite impulse response (FIR ) filter is a type of a digital filter. The impulse response, the filter’s response to a Kronecker delta input, is finite because it settles to zero in a finite number of sample intervals. This is in contrast to infinite impulse response (IIR) filters, which have internal feedback and may continue to respond indefinitely. The impulse response of an Nth-order FIR filter lasts for N+1 samples, and then dies to zero.
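
A minimal Python sketch of the difference equation above; the function name and the example 3-tap moving-average coefficients are made up for illustration:

# FIR filter sketch: y[n] = sum_i b[i] * x[n-i]
# b: filter coefficients (N+1 taps), x: input samples.
def fir_filter(b, x):
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i, coeff in enumerate(b):
            if n - i >= 0:          # samples before the start are treated as zero
                acc += coeff * x[n - i]
        y.append(acc)
    return y

# Example: a 3-tap moving-average filter applied to a short signal.
print(fir_filter([1/3, 1/3, 1/3], [0, 3, 6, 3, 0]))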

In computer science, real-time computing (RTC), or “reactive computing”, is the study of hardware and software systems that are subject to a “real-time constraint”—i.e., operational deadlines from event to system response. By contrast, a non-real-time system is one for which there is no deadline, even if fast response or high performance is desired or preferred. The needs of real-time software are often addressed in the context of real-time operating systems, and synchronous programming languages, which provide frameworks on which to build real-time application software.

A real-time system may be one where its application can be considered (within context) to be mission critical. The anti-lock brakes on a car are a simple example of a real-time computing system — the real-time constraint in this system is the short time in which the brakes must be released to prevent the wheel from locking. Real-time computations can be said to have failed if they are not completed before their deadline, where their deadline is relative to an event. A real-time deadline must be met, regardless of system load.
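
A rough sketch of the "completed before its deadline" idea; the 10 ms deadline and the dummy workload are invented for the example:

import time

# A computation "fails" in the real-time sense if it misses its deadline.
DEADLINE_SECONDS = 0.010  # hypothetical 10 ms deadline, measured from the triggering event

def handle_event(work):
    start = time.perf_counter()
    result = work()                      # the actual computation
    elapsed = time.perf_counter() - start
    met = elapsed <= DEADLINE_SECONDS
    return result, met

result, met_deadline = handle_event(lambda: sum(range(1000)))
print(result, "deadline met" if met_deadline else "deadline missed")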

Multi-universe bit - mission-critical level of failure of system load in presence:

freedom of knowledge basis, 5th framework, Chordis6

the only format of an O2 complexity entered into the hemi value of the force in control as = the positron value of a subliminal -1 level in the process of exchanged formats of artificial intelligence:

1)

data volume input = to the quantal reversal of the process in absorption spectra

2) the level 1 definition as a bit value = x

defines the usage of the output of the specific rate of decrease by simply letting the volume be used on a level measurement, as in stages of 1 to 10

1……………………………………………………………..10

the basics of a process in reversal of how much volume to release at each of the subsequent stages would be 1/10 x Volume
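
A minimal sketch of that 1-to-10 staging, assuming a single starting Volume released in equal tenths; the numbers are only illustrative:

# Release 1/10 of the volume at each of 10 stages.
volume = 100.0                     # hypothetical starting volume
per_stage = volume / 10            # 1/10 x Volume

remaining = volume
for stage in range(1, 11):
    remaining -= per_stage
    print(f"stage {stage}: released {per_stage}, remaining {remaining:.1f}")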

3)

as defined to be a basis of a continuum of a spacetime type, the values must have:

input, volume, capacity and limited value:

as a usage of these requirements, the values of a continuum must continually alter its format: as an alterability value of the set resultant subtracted from its own value so that the result = 1

so that dividing the process the value = 1

(in . Vol 1) / (in . Vol 2) = 1 + 1 (as a string complex of a waveform division of its similar counterpart and also the subtractions as being equal in result: the 3rd value must be relative

= 2 as a basic relative format in their comparatives; with the 1/3 remaining in the relative relations there must be 2/3 of a relative value, as on a level where an objective number of B bit values on a vector connected by a similar process of division and subtraction would be both combined into a compression of tor or its separation, which gives the x1-y/y-x1

type of formulation:

(in . volume 1 – in . volume x) / (in . volume 1 + in . volume x) = Force of vector relations throughout the universe
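
Read literally, the formulation above is a normalised difference of two volume terms; a minimal sketch with invented values for in.volume 1 and in.volume x:

# Sketch of (in.volume1 - in.volumex) / (in.volume1 + in.volumex)
def force_of_vector_relations(vol_1, vol_x):
    return (vol_1 - vol_x) / (vol_1 + vol_x)

print(force_of_vector_relations(3.0, 1.0))   # 0.5 for these made-up volumes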

3 a) is of the ability of this type of computation as the stability of a relation with spacetime as having a future; the usage of the formulation defined here as a 10 D application where there are 8 involvements of the formats, with the remaining 2 being the audial frequency of future design computers in their design of sci fi real capacity for space explorations, as a type wavelength with its response to be reused in the formulations of the division into 2 by the same procedure used here with a frequency and a height or amplitude.

sci fi reality….super sized space vehicles

 

s u p e r   s i z e d   s p a c e    v e h i c l e s

or just a speeder bike motor

By Henryk Szubinski 

Simulation software is based on the process of imitating a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is used widely to design equipment so that the final product will be as close to design specs as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, such as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mock-up of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome.

Advanced computer programs can simulate weather conditions, electronic circuits, chemical reactions, mechatronics, heat pumps, feedback control systems, atomic reactions, even biological processes. In theory, any phenomena that can be reduced to mathematical data and equations can be simulated on a computer. Simulation can be difficult because most natural phenomena are subject to an almost infinite number of influences. One of the tricks to developing useful simulations is to determine which are the most important factors that affect the goals of the simulation.

In addition to imitating processes to see how they behave under different conditions, simulations are also used to test new theories. After creating a theory of causal relationships, the theorist can codify the relationships in the form of a computer program. If the program then behaves in the same way as the real process, there is a good chance that the proposed relationships are correct.
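
As a toy example of reducing a phenomenon to "mathematical data and equations" and stepping it forward in time, a minimal sketch of Newton's law of cooling with a forward-Euler step; all coefficients are invented:

# Toy simulation: Newton's law of cooling, dT/dt = -k (T - T_ambient),
# integrated with a simple forward-Euler step.
k = 0.1            # hypothetical cooling coefficient per second
t_ambient = 20.0   # ambient temperature, deg C
temp = 90.0        # initial temperature, deg C
dt = 1.0           # time step, seconds

for step in range(10):
    temp += dt * (-k * (temp - t_ambient))
    print(f"t={step + 1:2d}s  T={temp:.2f} C")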

star destroyer type vehicularity derived from a very expanded format of a temporal law in the usage of B Bit. x values as the response to a value in usage of gravity as the

t + g = 1/2 B.x

the data is generally considered as a hold on the force as a value in the drift of a super vehicle, but it is also the general format for simulated force in the developments of a value = continuums of the process by which a string value can be simulated with a very large surface area, of which the basic data on drift = logarithmic data

has basic data values which imply simulations with the process as the active definition of an acquired process of data in the 5th zone of the data posterior of audial posterior hemis value of a larger than large vessel in its force of distribution on data fields of the general deprogrammed values and their 1/2 STRING = a basic level off (as the data in response to an isolation effect on the terminals used to define the whole system break-up on its separate constituent parts as fully capable of doing a gee main mast maneuver into warp speed and doing the maneuver through its nose section by a full 45 degree turn from the position of its warp to a warp effective rotation of the generator that compiled the data into a force of freeware flow into a string spacetime tug.

logarithmic simulation software developments are:

In mathematics, the logarithm of a number to a given base is the power or exponent to which the base must be raised in order to produce the number.

For example, the base-10 logarithm of 1000 is 3, because 3 is the number of 10s that must be multiplied together to get 1000: 10 × 10 × 10 = 1000; the base-2 logarithm of 32 is 5 because 5 is how many 2s must be multiplied together to get 32: 2 × 2 × 2 × 2 × 2 = 32. In the language of exponents: 10^3 = 1000, so log_10(1000) = 3, and 2^5 = 32, so log_2(32) = 5.

The logarithm of x to the base b is written log_b(x) or, if the base is implicit, as log(x). So, for a number x, a base b and an exponent y,

\text{ if }x = b^y,\text{ then }y = \log_b (x)\,.

An important feature of logarithms is that they reduce multiplication to addition, by the formula:

 \log (xy) = \log x + \log y \,.

That is, the logarithm of the product of two numbers is the sum of the logarithms of those numbers. The use of logarithms to facilitate complicated calculations was a significant motivation in their original development.
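
A minimal sketch of these definitions and of the product rule, using Python's math module:

import math

# log_b(x) is the exponent y with b**y == x.
print(math.log10(1000))    # 3.0, since 10**3 == 1000
print(math.log2(32))       # 5.0, since 2**5 == 32

# "Logarithms reduce multiplication to addition": log(x*y) == log(x) + log(y).
x, y = 7.0, 13.0
print(math.isclose(math.log(x * y), math.log(x) + math.log(y)))   # True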

[Figure: File:Camposcargas.PNG]

magnetic lines of force have an obvious relation to a logarithm in its rotation of the axial reference point, in relation also to the illusion of the bundle zones as a type of coagulation of lines which, by an obvious comparison with magnetic lines of force, have similarities in the same relations of the logarithm in its multiple states of x = 1

Usage of an x value = 1 implies that the magnetic interactions of force vectors have altered the positionality of the x, y field into a similarity with algorithmic measurement system rotations into a field of a connective section where the lim x approaches the y values.

Logarithms have applications in fields as diverse as statistics, chemistry, physics, astronomy, computer science, economics, music, and engineering.

[Figure: File:Logarithms.svg]

a type 3 D side view of a process star destroyer vehicle warp drive:

as a type format for control of a 3D basis of logarithmic data developments, with a fully functional 3D design as the simulative software in applications by the waterfall model, which can use a separated format or a separation of the method used into a segmentation:

The waterfall model shows a process, where developers are to follow these steps in order:

  1. Requirements specification (AKA Verification or Analysis)
  2. Design
  3. Construction (AKA implementation or coding)
  4. Integration
  5. Testing and debugging (AKA validation)
  6. Installation (AKA deployment)
  7. Maintenance

After each step is finished, the process proceeds to the next step, just as builders don’t revise the foundation of a house after the framing has been erected.

research data and images courtesy of Wikipedia

sci fi reality….the vehicle that separates lines of force

 

the drop magnet flying car

By Henryk Szubinski

freedom of knowledge, 5th law, 5th framework

specified override of opposition: CONNECTING 3 SPHERES ON AN INCLINE BY A CIRCUIT FORCE type 1 voltage

Under general relativity, gravity is the result of following a geometry caused by local mass-energy. Although the equations cannot produce a “negative geometry” normally, it is possible to do so using a “negative mass”. The same equations do not, of themselves, rule out the existence of negative mass.

Both general relativity and Newtonian gravity appear to predict that negative mass would produce a repulsive gravitational field. In particular, Sir Hermann Bondi proposed in 1957 a form of negative gravitational mass that could comply with the strong equivalence principle of general relativity theory and the Newtonian laws of conservation of linear momentum and energy. Bondi’s proof yielded singularity free solutions for the relativity equations.[8] In July 1988, Robert L. Forward presented a paper at the AIAA/ASME/SAE/ASEE 24TH Joint Propulsion Conference that proposed a Bondi negative gravitational mass propulsion system.[9]

Every point mass attracts every other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between the point masses:

\mathbf{F_{12}} = G \frac{(-m_1) m_2}{r^2}\mathbf{r_{12}} = G \frac{m_1 m_2}{r^2}\mathbf{r_{21}} = \mathbf{-F_{21}},

where:

  • F_12 is the magnitude of the gravitational force between the two point masses,
  • G is the gravitational constant,
  • m_1 < 0 is the (negative) mass of the first point mass,
  • m_2 > 0 is the mass of the second point mass,
  • r is the distance between the two point masses.
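
A minimal numerical sketch of that force law, showing how a negative m_1 flips the sign of the result; the masses and distance are invented, and a negative value is read as repulsion:

# Newton's law of gravitation, F = G * m1 * m2 / r**2 (magnitude along the line
# joining the two point masses). A negative m1 makes the product negative,
# which is read here as a repulsive rather than attractive force.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    return G * m1 * m2 / r**2

print(gravitational_force( 5.0e3, 5.0e3, 10.0))   # positive: attraction
print(gravitational_force(-5.0e3, 5.0e3, 10.0))   # negative: repulsion (negative mass)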

 

3 OBJECTS ON 3 INCLINES WILL FALL FASTER THAN 1 DOES ON 1 INCLINE, even by a connection of the 3 by a circuit resistor to exclude influences of gravity flux

as flux / 3 = g.R

definitions of vector vergence = formatted data on a non-turn in the divisions of fold-back

process by resistance = data on the usage of interval height for the spatial inclusions on minimal values of a drop into a spacetime relative to the position of the user as not exceeding the height of the string in non-separations of qualificative string values

of the multiple resistance of a resistance in process = as the data on the usage of a parameter that excludes the formats of data as intrusive on the parameters of a value to calculate 2 branes in their relative mean distance of open / close formats as the basics of the process on the non-velocity of engagement by any means but by the usage of an interface interaction on the value of the parameter of string usage as a force barrier

acquired data on the usage of high values of process reliance on data previous…

the 3-point sphere force connection:

(.)———————-(.)———————-(.)

it would not matter what is exchanged for what

any of the values that are responsive in Force along an incline can be exchanged for any of the accelerations or its mass values:

the resultant force of separated formats in their stabilised states of a similarity with the basics of Newtonian motion as the basis of a value in accretion of lift: so that the system load used for iron filings can be a specified relational equaliser of the dimensions of mass that is in 3 x = F

conserved by the vector link = 1/2 S

and also by the concept usage of the accelerative values of a volume in specific quant values of an interaction of the magnetic volume and the Fe mass / S = Volume / radials

same as in the values of force , the data on the levels of quantality by a vacated volume in transition by pressure, the formats for the vertical vector of the fallen force = to the volume of a pressure altered force in opposition as the main radial delivery tube specifics of parameters for the interactions of force by lines of force in the interactive state of the 3 volumes

research data courtesy of Wikipedia

sci fi reality….the perspective user’s view of the universe is the best one

 

the accretion sequence: of universal spectrality

a view dependent on the perspective of the user…

By Henryk Szubinski

when the Markov decision has to define the star classifications as a problem of similar galaxy classifications, there has to be an involvement with non-quark meson models that can describe the usage of an interactive A.I cognition computation in lattice QCD, as the process by which most if not all galaxies are a store for gallium in its 99.999 % purity, as a format for the spectrality in cooling of a universal heat death …of a big CRUNCH in the numbers of universal values of FORCE := a POMDP time process in the user of the interface interactions with a cognitive a.i data store.

summarised as :

The data show five isoscalar resonances:

f0(600), f0(980), f0(1370), f0(1500), and f0(1710).

Of these the f0(600) is usually identified with the σ of chiral models. The decays and production of f0(1710) give strong evidence that it is also a meson.

A Partially Observable Markov Decision Process (POMDP) is a generalization of a Markov Decision Process. A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must infer a distribution over the state based on a model of the world and some local observations.

[Figure: File:Morgan-Keenan spectral classification.png]

Most stars are currently classified using the letters O, B, A, F, G, K and M, where O stars are the hottest and the letter sequence indicates successively cooler stars up to the coolest M class. According to an informal tradition, O stars are “blue”, B “blue-white”, A stars “white”, F stars “yellow-white”, G stars “yellow”, K stars “orange”, and M stars “red”, even though the actual star colors perceived by an observer may deviate from these colors depending on visual conditions and individual stars observed. This non-alphabetical scheme has been developed from an earlier scheme using all letters from A to O, but the star classes were reordered to the current one when the connection to the star’s temperature became clarified,

when temperature became clarified, the temperature range of the objectives in place of a minimally differentiated view of the universe as a symbiotic model became a process by which data was fast becoming the process to use up the relations, and the resultant dark matter problem increased:

and a few star classes were omitted as duplicates of others. (The mnemonic “Oh, be a fine girl/guy, kiss me” is sometimes used.)

type of classifications used prior to an altered format on the way we saw the stars as:

white——-blue——green—–yellow——red——dark

in a galactic view that uses up the non dark matter clarifications by dark matter standards, it would have been maintained that the galaxies were highly spectral in their types by a degree of differentiation based on the divisions of space time by clear regions of their similar formats:

the sequenced order became the:

M.K.G.F.A.B.O

in place of the:

O.B.A.F.G.K.M
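
A minimal sketch of the two orderings, with the informal colour labels from the classification text above:

# O-B-A-F-G-K-M ordering (hottest to coolest) with the informal colour labels,
# and the reversed M-K-G-F-A-B-O ordering discussed above.
spectral_colours = {
    "O": "blue", "B": "blue-white", "A": "white", "F": "yellow-white",
    "G": "yellow", "K": "orange", "M": "red",
}

obafgkm = list("OBAFGKM")
reversed_order = list(reversed(obafgkm))          # ['M', 'K', 'G', 'F', 'A', 'B', 'O']

for cls in obafgkm:
    print(cls, spectral_colours[cls])
print("reversed:", ".".join(reversed_order))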

obviously a reversed sequence of events that has led to some theories based on time as a temperature increase or its decrease using up the displacement of a general COBE = 2 S ( T /10)

where there are 10 levels moving forwards and then returning:

a 10-dimensional hyperspace in the universe’s generalisation of the S = T/10 in the 10 range of no LINKAGES of a 1/10 T

1/10 T =( 2S F /10)

where the T is exchanged for a FORCE

the process of expansion of this view into the minimal luminosity you can see on the more advanced formats of representation is basically more luminous than the standard models used and the direction of a general spectral data sequence where temperature was concerned:

as such the data on the formats of usage of a: “Oh, be a fine girl, kiss me”

can also be used in the new model universe where spectrality is evenly distributed and the main force of burn-overs are cored by galaxies:

the type OBAFGKM sequence as for galaxies would be:

overdecision is the best force because it gets kinder for myone:

AS A TYPE 1 REFERENCE REVERSAL THE DATA ON THE SEQUENCE ABOVE WOULD BE:

MY FORCE GETS FORCE BECAUSE OF OVERDECISION

and this then is the force freeze

=  in basic dimensions of data

 on the process by which entropy cooling is = defined as a shock effect

 on the data of the process in minimalised meso particle values=

 of a retracted coolness in the process

by which the spatial dynamics =

can be audially simulated into a greater universe volume

than first thought with dark matter:

[Figure: File:Blue LED and Reflection.JPG]

Gallium arsenide (GaAs) and gallium nitride (GaN) used in electronic components represented about 98% of the gallium consumption in the United States.[11] Worldwide, gallium arsenide makes up 95% of the annual global gallium consumption.[17] The semiconductor applications are the main reason for the low-cost commercial availability of the extremely high-purity (99.9999+%) metal: As a component of the semiconductor gallium arsenide, the most common application for gallium is optoelectronic devices (mostly laser diodes and light-emitting diodes). Smaller amounts of gallium arsenide are used for the manufacture of ultra-high-speed logic chips and MESFETs for low-noise microwave preamplifiers.

[Figure: File:Exotics.png]

Non-quark model mesons include

  1. exotic mesons, which have quantum numbers not possible for mesons in the quark model;
  2. glueballs or gluonium, which have no valence quarks at all;
  3. tetraquarks, which have two valence quark-antiquark pairs; and
  4. hybrid mesons, which contain a valence quark-antiquark pair and one or more gluons.

 

Lattice QCD predictions for glueballs are now fairly stable, at least when virtual quarks are neglected. The two lowest states are

0++ with mass of 1611±163 MeV and
2++ with mass of 2232±310 MeV.

The 0−+ and exotic glueballs such as 0−− are all expected to lie above 2 GeV. Glueballs are necessarily isoscalar, with isospin I=0.

The ground state hybrid mesons 0−+, 1−+, 1−−, and 2−+ all lie a little below 2 GeV. The hybrid with exotic quantum numbers 1−+ is at 1.9±0.2 GeV. The best lattice computations to date are made in the quenched approximation, which neglects virtual quark loops. As a result, these computations miss mixing with meson states.

as a type reference to the types of value ahead of spectrality, the projective estimates should be for a galactic view where colouration is the general motivation. As such, the data on the force of the minimalised process activity in isolated regions of space time, as an approach function of Lorentz type attractors and their influence on the buffers to a personal relation of STRINGS, making the situation of the view more friendly and basic to the usage of appreciation levels, defined as the formats where any disruptive value of a data displacement ahead of the ext = 3 level of force influence in the connections of a.i type cognitive nodes applied to the cerebral cortex in a 1/2 AS ADVANCIVE OF FORCE = a response to retract the astrophysical data and to use the altered value of the 3 D imagery in a 10 D simulation of the user’s cognition = minimally disruptive formats of the basics in what the objective user of the a.i is in connection with, by using the waveforms of the audial landscape that gets the best responses, so that dark matter formats of an indecisive non-clarification can be used as a volume for accretion of the positive values in the observer of astrophysics and the observable universe.

A discrete-time POMDP models the relationship between an agent and its environment. Formally, a POMDP is a tuple (S,A,O,T,Ω,R), where

  • S is a set of states,
  • A is a set of actions,
  • O is a set of observations,
  • T is a set of conditional transition probabilities,
  • Ω is a set of conditional observation probabilities, and
  • R: A,S \to \mathbb{R} is the reward function.

Each time period, the environment is in some state s \in S, and the agent takes an action a \in A. Taking action a makes the environment transition to state s' with probability T(s'\mid s,a) and the agent receives a reward for it, with expected value r(a,s).

Belief update

In s', the agent observes o \in O with probability \Omega(o\mid s',a). Let b be a probability distribution over the state space S: b(s) denotes the probability that the environment is in state s. Given b(s), then after taking action a and observing o,

 b'(s') = \eta \Omega(o\mid s',a) \sum_{s\in S} T(s'\mid s,a)b(s)

where \eta=1/P(o\mid b,a) is a normalizing constant with P(o\mid b,a) = \sum_{s'\in S}\Omega(o\mid s',a)\sum_{s\in S}T(s'\mid s,a)b(s).

Since the state is Markovian, maintaining a belief over the states solely requires knowledge of the previous belief state, the action taken, and the current observation. The operation is denoted b' = τ(b,a,o).
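
A minimal sketch of the belief update τ(b, a, o), with a tiny invented two-state model purely to exercise the formula; the T and Ω tables here are not from any real system:

# Belief update b'(s') = eta * Omega(o | s', a) * sum_s T(s' | s, a) * b(s)
# for a tiny two-state POMDP. T[a][s][s2] and Omega[a][s2][o] are plain dicts.
states = ["s0", "s1"]

T = {"go": {"s0": {"s0": 0.7, "s1": 0.3},
            "s1": {"s0": 0.2, "s1": 0.8}}}
Omega = {"go": {"s0": {"ping": 0.9, "silence": 0.1},
                "s1": {"ping": 0.3, "silence": 0.7}}}

def belief_update(b, a, o):
    unnormalised = {}
    for s2 in states:
        pred = sum(T[a][s][s2] * b[s] for s in states)   # sum_s T(s'|s,a) b(s)
        unnormalised[s2] = Omega[a][s2][o] * pred        # times Omega(o|s',a)
    eta = 1.0 / sum(unnormalised.values())               # 1 / P(o | b, a)
    return {s2: eta * v for s2, v in unnormalised.items()}

b = {"s0": 0.5, "s1": 0.5}
print(belief_update(b, "go", "ping"))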

Belief MDP

The policy maps a belief state space into the action space. The optimal policy can be understood as the solution of a continuous-space so-called belief Markov Decision Process [1] (MDP). It is defined as a tuple (B,A,τ,r) where

  • B is the set of belief states over the POMDP states,
  • A is the same finite set of action as for the original POMDP,
  • τ is the belief state transition function,
  • r:B,A \to \mathbb{R} is the reward function on belief states, written as:

r(b,a) = \sum_{s\in S} b(s) R(s,a).

Note that this MDP is defined over a continuous state space.

Policy and Value Function

The agent’s policy π specifies an action a = π(b) for any belief b. Here it is assumed the objective is to maximize the expected total discounted reward over an infinite horizon. When R defines a cost, the objective becomes the minimization of the expected cost.

The expected reward for policy π starting from belief b is defined as

 J^\pi(b) = E\Bigl[ \sum_{t=0}^\infty \gamma^t r(s_t,a_t) \mid b, \pi \Bigr]

where γ < 1 is the discount factor. The optimal policy π * is obtained by optimizing the long-term reward.

 \pi^* = \underset{\pi}{\mbox{argmax}} J^\pi(b_0)

where b0 is the initial belief.

The optimal policy, denoted π*, yields the highest expected reward value for each belief state, compactly represented by the optimal value function, denoted V*. This value function is the solution to the Bellman optimality equation:

 V^*(b) = \max_{a\in A}\Bigl[ r(b,a) + \gamma\sum_{o\in O} \Omega(o\mid b,a) V^*(\tau(b,a,o)) \Bigr]

For finite-horizon POMDPs, the optimal value function is piecewise-linear and convex [2]. It can be represented as a finite set of vectors. In the infinite-horizon formulation, a finite vector set can approximate V * arbitrarily closely, whose shape remains convex. Value iteration applies a dynamic programming update to gradually improve on the value until convergence to an ε-optimal value function, and preserves its piecewise linearity and convexity [3]. By improving the value, the policy is implicitly improved. Another dynamic programming technique called policy iteration explicitly represents and improves the policy instead [4].
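
A minimal sketch of the belief reward r(b, a) = Σ_s b(s) R(s, a) together with a one-step greedy action choice; the reward table is invented, and the single-step lookahead only illustrates the quantities, it is not the optimal policy or full value iteration:

# Reward on a belief state, r(b, a) = sum_s b(s) * R(s, a), plus a one-step
# greedy action choice. R is an invented reward table for a toy two-state model.
R = {"go": {"s0": 1.0, "s1": -1.0}}
actions = ["go"]

def belief_reward(b, a):
    return sum(b[s] * R[a][s] for s in b)

def greedy_action(b):
    # One-step lookahead only: ignores the discounted future term of the
    # Bellman equation, so it is not the optimal POMDP policy.
    return max(actions, key=lambda a: belief_reward(b, a))

b = {"s0": 0.7, "s1": 0.3}
print(belief_reward(b, "go"))   # 0.4 for this belief
print(greedy_action(b))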

data research courtesy of Wikipedia

sci fi reality….string dip in &/or dip out

 

dip in &/or dip out

By Henryk Szubinski

what is occurring is the dip of a string that cannot be sustained by symmetry of its reinforced format as a string by the multiple strength of its vector multiple as it exits the parameter of a high-gravity event value in a galactic core, where there are usually black holes, but where a non black hole format would comprise the galactic format as a normal type gravity core where the strings that pass through it, by a level defined as the galactic plane, have to be applied to with multiples in halfway points of a string’s history:

this however is not computable to get a real string theory even by usage of a sinusoidal or wave format wavelength:

so to define this problem, the usage of a separator defined as branes must be used to define the pull of the string’s halfway point multiple in separation and also the subsequent separations of the brane values by a force spherical separation on subsequent

1/2 x branes = y / Force 2

data is then usable on the 3rd alteration to measure the convergence of the type trigonometric approach to the process where the data on an interactive value is calculated as the string extension in hyperspace, meaning that the string dip can be calculated as dipping out of the galactic core by a measurable formulation.

GALAXY DIPPING IN THE UNIVERSE:

<————-<—-<————–<—–<–<–<-<-<—<<<<——–

A TYPE EXAMPLE:

PLANARITY OF SPACE TIME IN DIPPING: an example:

///<———–///——<<<<<//////<<<——<<<<//

the environment of the planar spacetime in rectangular format is shown as the galactic zone in its most common format of dimensionality:

the values by which the strings are computed are basically resultant in the nose sections of a type of interaction of a trigonometric computation where the audial universe scape is shown to be a type 1 tug that can lift a vehicle upwards:

basically this model defines the whole universe and the galaxies that go into a dip, so that what is visible on the Sloan survey or the WMAP is the invisible spectra of the dip-in procedure which the gravity core goes through, meaning that the exit point is dark matter, which should be visible by using a triangulation of dark matter points in the universe starting from the left to the right as a type programme:

which would explain the force of exit formats of friction, but due to friction having no specific velocity, the usage of it in a non-triangulated primary calculation of the dark matter opened from right to left, the remaining data would only contribute to red shifts and the problems of dark matter would increase as a non-visibility problem of the adaptations of visual devices gone wrong into a divisive of red shift / dark matter:

It would then function as a type indicator for superconductivity at room temperature on the complexity of oppositions of:

friction+ velocity =oppositions of a lim x =velocity primarily<—–Dvel /friction secondary—->Df

[Figure: File:BigBangNoise.jpg]

but primarily making the connections of:

a type 1 example:

——————————->—–>>>>——————–>>>>

photon . volume (input) = triangulations of 3 x vector . S of the universe horizon, in a photon point build-up by the programme of coagulated regions of the dimensional hyperspace:

= a build up of real dimensionality 10 D

this then should define the basics of the visible universe as untriangulated.

[Figure: File:M31 Lanoue.png]

[Figure: File:M33HunterWilson09.jpg]

B .bit x as a format for a definition of a force as used in the exchange of a new value definition are, in compiled data, a similarity of new value definitions = F / 3

reversed UNIVERSAL force + string reversal = involvement vector (data response)

the usage of the F3 as new data in usage as functional to a new resultance:

universally, if the Triangulum was to be configured into a failure of its spacetime planarity, it would fall into a tan value relation that would have been retracted from the Milky Way;

so that basically the vector of Triangulum and the vector of the Andromeda galaxy would accelerate into a position where the dark matter computations of a dark matter format = the absorbance or emissions of drag coefficiency by a destabilised opposition / adjacency.

The values then in a dimensional space time have difficulty in calculating the dark matter formats when the formulation given previously to define the triangulations of the universe from left to right would have a basic indicative point positionality as the opening of the gravity core by the value of

p3 = gS

as such the data on the effects is indicative of a limited vector to the right by the Triangulum position and no indications to the process of rotations of the mosco angle in the universe populated by galaxies.