I continued working on the BLE stuff today, and while thinking about one of the protocol issues I had an idea for making our platform-level abstraction more powerful. Perhaps the key to making this more generally useful is to provide a genuine latitude and longitude layer on top of the basic mesh.
I’d summarize the task as figuring out the mesh location and orientation in world coordinates, i.e. <Lat, Long, Heading> (LLH).
Of course it will also accumulate range information, and all the other useful TS05b stuff, but this extra information would be available to applications that already have databases expressed in terms of LLH.
There are two challenging problems to be solved to be able to do that, but I can imagine that any platform that can do this satisfactorily will have an advantage in the market. Fortunately I made a lot of progress on this during the TS05 work.
Mesh Orientation Problem
The underlying ranging technology can estimate the shape and size of a mesh, and track the position of the mobile subject relative to the mesh – and quite accurately. But the orientation of the mesh on the surface of the planet is unknown. Notice in the example below: on the left, the subject is facing anchor #2 (e.g. the TV, or an exhibit in a museum); on the right, the subject is clearly not facing anchor #2.
*Two valid estimates of the same wireless mesh.*
For a company like Nielsen, or a museum, this could lead to a lot of misleading conclusions.
TS always knows which way is North. It can also estimate the magnitude and direction of any acceleration forces in the TS frame of reference. A little matrix algebra produces a vector in the world frame of reference – i.e. relative to North. Adding these vectors together produces a trajectory. This technique is called dead reckoning (historically, “deduced reckoning”), or DR.
*A trajectory created by TS05’s IMU.*
So in the above example, the subject starts at point A and arrives at point B via a winding path. Notice that the North vector’s heading changes relative to the TS05’s frame of reference, and so does the acceleration vector; the difference between the two headings represents the orientation of the device, not the subject, relative to North.
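The DR step can be sketched as a simple integration loop. This is a hypothetical, minimal version: it assumes the samples arrive as gravity-compensated (ax, ay, heading) tuples at a fixed interval, which glosses over the real TS05 sensor pipeline, and the sign convention for heading is an assumption:

```python
import math

def dead_reckon(samples, dt):
    """Integrate body-frame acceleration samples into a world-frame
    trajectory. Each sample is (ax, ay, heading_rad); heading is the
    device's rotation relative to North (hypothetical input format)."""
    vx = vy = x = y = 0.0
    path = [(0.0, 0.0)]
    for ax, ay, heading in samples:
        # Rotate the body-frame acceleration into the world
        # (North/East) frame - the "little matrix algebra" above.
        wx = ax * math.cos(heading) - ay * math.sin(heading)
        wy = ax * math.sin(heading) + ay * math.cos(heading)
        # Naive double integration: acceleration -> velocity -> position.
        # Real DR drifts quickly, which is why the ranging mesh matters.
        vx += wx * dt
        vy += wy * dt
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path
```

In practice a pedestrian DR implementation would detect paces and step in pace-length units rather than double-integrating raw acceleration, but the frame rotation is the same.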
Although it is not germane to this discussion, note that if we assume a subject is always upright when walking, and always walks forwards, we can work out the relative orientation of the TS05 on their body – the so-called “body-mount transformation”.
So back to the example. The subject leaves ‘A’, somewhere in the vicinity of anchor #1, and arrives at ‘B’, somewhere in the vicinity of anchor #2. TS05 calculates the ranges to each of the anchors, and uses trilateration to estimate A and B.
*Position estimates using ranging and trilateration.*
For simplicity I have shown just the start and end points of the subject’s trajectory.
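The trilateration step can be illustrated with the ideal three-anchor, 2-D case. This is a sketch under simplifying assumptions – noise-free ranges and exactly three anchors – whereas a real implementation would fold more anchors into a least-squares fit:

```python
def trilaterate(a0, a1, a2, r0, r1, r2):
    """Estimate a 2-D position from ranges (r0, r1, r2) to three known
    anchor positions. Subtracting the squared-range equation for a0
    from the others cancels the quadratic terms, leaving a 2x2 linear
    system, solved here by Cramer's rule."""
    ax0, ay0 = a0
    ax1, ay1 = a1
    ax2, ay2 = a2
    # 2*(ai - a0) . p  =  r0^2 - ri^2 + |ai|^2 - |a0|^2
    A11, A12 = 2 * (ax1 - ax0), 2 * (ay1 - ay0)
    A21, A22 = 2 * (ax2 - ax0), 2 * (ay2 - ay0)
    b1 = r0**2 - r1**2 + ax1**2 + ay1**2 - ax0**2 - ay0**2
    b2 = r0**2 - r2**2 + ax2**2 + ay2**2 - ax0**2 - ay0**2
    det = A11 * A22 - A12 * A21  # zero if the anchors are collinear
    return ((b1 * A22 - b2 * A12) / det,
            (A11 * b2 - A21 * b1) / det)
```

Note the anchor coordinates here are in the mesh’s own frame – which is exactly the problem: the result is a position in an un-oriented mesh.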
The start and end points are now known in both the ranging model and the DR model. The ranger has no orientation estimate – just an un-oriented, but accurate, mesh. The DR model has an estimate of the overall heading, the path length, and probably the number of paces. By applying a scale and rotation to the mesh, the points A and B can be superimposed.
*Transforming the mesh.*
So now we have a good mesh orientation, and a good path scale value.
In this example we used only the start and end points of a single trajectory. In reality, the more trajectories used to estimate the mesh orientation and the trajectory scale, the better. A minimization algorithm would find the best-fit estimates for both.
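For the single-trajectory case the fit is closed-form: given A and B in both frames, the rotation and scale follow directly from the two displacement vectors. A hypothetical sketch (with many trajectories this would generalize to the least-squares minimization just described, i.e. a similarity/Procrustes fit):

```python
import math

def fit_rotation_scale(mesh_a, mesh_b, dr_a, dr_b):
    """Given one trajectory's endpoints in the un-oriented mesh frame
    (mesh_a, mesh_b) and in the North-referenced DR frame (dr_a, dr_b),
    return the rotation (radians, relative to North) and the scale that
    map the mesh segment A->B onto the DR segment A->B."""
    mvx, mvy = mesh_b[0] - mesh_a[0], mesh_b[1] - mesh_a[1]
    dvx, dvy = dr_b[0] - dr_a[0], dr_b[1] - dr_a[1]
    scale = math.hypot(dvx, dvy) / math.hypot(mvx, mvy)
    rotation = math.atan2(dvy, dvx) - math.atan2(mvy, mvx)
    return rotation, scale
```

The scale factor doubles as the DR path-length correction, which is where the pace-length estimate below comes from.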
Armed with these estimates we can always work out the subject’s heading in the context of the mesh. We can also use the scale factor to estimate pace length – or, with a good estimate of pace length, spot ranging inaccuracies.
The World Position Problem
DC:
That was a lot of work, and good thinking.