I continued working on the BLE stuff today, and in thinking about one of the protocol issues I had an idea for making our platform-level abstraction more powerful.  I’m thinking that perhaps the key to making this more generally useful is to provide a genuine latitude and longitude layer on top of the basic mesh?

I’d summarize the task as figuring out the mesh location and orientation in world coordinates, i.e. <Lat, Long, Heading> (LLH)

Of course it will also accumulate range information, and all the other useful TS05b stuff, but this extra information would be available to applications that already have databases expressed in terms of LLH.

There are two challenging problems to solve in order to do that, but I can imagine that any platform that can do this satisfactorily will have an advantage in the market.  Fortunately I made a lot of progress on this during the TS05 work.

Mesh Orientation Problem

The underlying ranging technology is capable of estimating the shape and size of a mesh, and of tracking the position of the mobile subject relative to the mesh, quite accurately.  But the orientation of the mesh on the surface of the planet is unknown.  Notice in the example below: on the left, the subject is facing anchor #2 (e.g. the TV, or an exhibit in a museum); on the right, the subject is clearly not facing anchor #2.

Two valid estimates of the same wireless mesh.

For a company like Nielsen, or a museum, this could lead to a lot of misleading conclusions.

TS always knows which way is North.  It can also estimate the magnitude and direction of any acceleration forces in the TS frame of reference.  A little matrix algebra produces a vector in the world frame of reference – i.e. relative to North.  Adding these vectors together can produce a trajectory.  This technique is called deduced reckoning, or DR.

A trajectory created by TS05’s IMU

So in the above example, the subject starts at point A and arrives at point B via a winding path.  Notice that the heading of the north vector changes relative to the TS05’s frame of reference, and so does the acceleration vector; the difference between the two headings represents the orientation of the device, not the subject, relative to North.
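
As a minimal sketch of that idea (the function and array names here are illustrative, not the TS05 firmware API), the frame rotation and integration might look like this:

import numpy as np

def dr_trajectory(accels_device, headings_rad, dt):
    """Deduced-reckoning sketch: rotate each device-frame acceleration
    into the world (north-referenced) frame and integrate twice to get
    a position track.  accels_device is an (N, 2) array of horizontal
    accelerations in m/s^2; headings_rad is the device heading relative
    to North for each sample."""
    positions = [np.zeros(2)]
    velocity = np.zeros(2)
    for a_dev, heading in zip(accels_device, headings_rad):
        c, s = np.cos(heading), np.sin(heading)
        R = np.array([[c, -s],
                      [s,  c]])        # device frame -> world frame
        a_world = R @ a_dev
        velocity = velocity + a_world * dt
        positions.append(positions[-1] + velocity * dt)
    return np.array(positions)

In practice pedestrian DR usually detects paces and applies an assumed stride length rather than double-integrating raw accelerations, but the rotation into the North-referenced frame is the same idea.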

Although this is not germane to this discussion, notice that if we assume that a subject is always upright when walking, and always walks forwards, we can figure out the relative orientation of the TS05 on their body: the so-called “body-mount transformation”.

So back to the example.  The subject leaves ‘A’, somewhere in the vicinity of anchor #1, and arrives at ‘B’, somewhere in the vicinity of anchor #2.  TS05 calculates the range to each of the anchors, and uses trilateration to estimate A and B.

Position estimates using ranging and trilateration

For simplicity I have just shown the start and end points of the subject’s trajectory.
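
A minimal sketch of the trilateration step, assuming the anchor positions are known in the mesh frame; the iterative least-squares solver here is illustrative, not the TS05 implementation:

import numpy as np

def trilaterate(anchors, ranges, iters=20):
    """Estimate a 2-D position from ranges to known anchors by
    iterative (Gauss-Newton) least squares.  anchors is (N, 2) in
    mesh coordinates, ranges is (N,) in the same units."""
    p = anchors.mean(axis=0)                   # start at the centroid
    for _ in range(iters):
        diffs = p - anchors                    # (N, 2)
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = dists - ranges
        J = diffs / dists[:, None]             # d(range)/d(position)
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        p = p - step
    return p

# e.g. three anchors and the measured ranges from point B
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [4.0, 6.0]])
ranges = np.array([5.0, 5.0, 2.2])
print(trilaterate(anchors, ranges))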

The start and end points are now known in both the ranging model and the DR model.  The ranger has no orientation estimate, just an un-oriented, but accurate, mesh.  The DR model has an estimate of the overall heading, the path length, and probably the number of paces.  By applying a scale and rotation to the mesh, the points A and B can be superimposed.

Transforming the mesh

So now we have a good mesh orientation, and a good path scale value.
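
With a single trajectory the transform can be solved directly: the rotation is the difference between the A→B headings in the two models, and the scale is the ratio of the A→B distances.  A sketch, where a_rng/b_rng are the trilaterated points in the mesh frame and a_dr/b_dr the DR points in the world frame (illustrative names, not an existing API):

import numpy as np

def mesh_orientation_and_scale(a_rng, b_rng, a_dr, b_dr):
    """Rotation (radians) that takes the mesh frame into the world
    (North-referenced) frame, and the scale that superimposes the
    mesh A->B segment onto the DR A->B segment.  The reciprocal of
    the scale converts DR path lengths into mesh metres."""
    v_rng = np.asarray(b_rng, float) - np.asarray(a_rng, float)
    v_dr = np.asarray(b_dr, float) - np.asarray(a_dr, float)
    theta = np.arctan2(v_dr[1], v_dr[0]) - np.arctan2(v_rng[1], v_rng[0])
    scale = np.linalg.norm(v_dr) / np.linalg.norm(v_rng)
    return theta, scale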

In this example we only used the start and end points of a single trajectory.  In reality, the more trajectories that are used to estimate the mesh orientation and the trajectory scale, the better.  A minimization algorithm would be used to find the best-fit estimates for both.
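
A sketch of that best-fit step, assuming many corresponding points have been collected (trilaterated mesh-frame positions paired with DR world-frame positions) and simply minimizing the residuals over rotation, scale, and offset; a real implementation would weight points by their confidence:

import numpy as np
from scipy.optimize import least_squares

def fit_mesh_transform(mesh_pts, dr_pts):
    """Find rotation theta, scale s, and offset (tx, ty) such that
    s * R(theta) @ mesh_point + t best matches the DR points."""
    mesh_pts = np.asarray(mesh_pts, float)
    dr_pts = np.asarray(dr_pts, float)

    def residuals(x):
        theta, s, tx, ty = x
        c, si = np.cos(theta), np.sin(theta)
        R = np.array([[c, -si], [si, c]])
        predicted = s * mesh_pts @ R.T + np.array([tx, ty])
        return (predicted - dr_pts).ravel()

    result = least_squares(residuals, x0=[0.0, 1.0, 0.0, 0.0])
    return result.x   # theta, scale, tx, ty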

Armed with these estimates we can always figure out the subject’s heading in the context of the mesh.  We can also use the scale factor to estimate pace length.  Or with a good estimate of pace length we can spot ranging inaccuracies.
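
As a purely illustrative bit of arithmetic, using the scale as defined in the sketches above (the numbers are made up):

nominal_stride_m = 0.70   # stride assumed when building the DR path
fitted_scale = 1.10       # scale found when superimposing A and B
true_stride_m = nominal_stride_m / fitted_scale   # roughly 0.64 m per pace
# Conversely, a well-trusted stride estimate that disagrees badly with
# the ranging-derived scale is a hint that some ranges are suspect.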

The World Position Problem

One bit of information that is still missing from our model is the position of the mesh in the world coordinate system.  Is it in Green Street, San Francisco, or Trafalgar Road, Cambridge, UK?  For most of the applications we have considered so far this is not an issue, but if these systems are spread over a campus, where each mesh is out of range of the others, then it will be helpful if each mesh can automatically identify its location.
I have two solutions to this problem.  
Solution 1 is to ensure that at least one node contains a GPS and has LOS to the satellites.  Perhaps the answer is to build a TS06 version that can stick to a window.  It could perhaps be powered by sunlight?
Solution 2 is to use the subject’s phone and some DR software to build a trajectory between the last good GPS coordinate and the first mesh node.  This seems cheaper, but technically harder and less reliable than putting a GPS in each anchor.  Some anchors won’t get any signal, but maybe one will and that might be enough if fixes can be averaged before the battery runs down.
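
Either way, once one point of the oriented mesh has a fix, the rest of the mesh can be expressed in LLH with a simple local flat-earth approximation.  A sketch, assuming the mesh has already been rotated into an East/North frame and fix_lat/fix_lon is the (averaged) GPS fix at one anchor; names and numbers are illustrative:

import math

EARTH_R = 6371000.0  # metres, spherical approximation

def mesh_to_latlon(fix_lat, fix_lon, east_m, north_m):
    """Offset a known lat/lon by local east/north metres.  Good enough
    over a campus; not valid over long distances."""
    lat = fix_lat + math.degrees(north_m / EARTH_R)
    lon = fix_lon + math.degrees(east_m / (EARTH_R * math.cos(math.radians(fix_lat))))
    return lat, lon

# e.g. an anchor 25 m east and 12 m north of the GPS-equipped anchor
# (illustrative fix somewhere in San Francisco)
print(mesh_to_latlon(37.7989, -122.4100, 25.0, 12.0))
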
————–
DC:
That was a lot of work, and good thinking.

I have a basic understanding. But I wouldn’t be able to explain it back to you.
GPS chips seem to be cheap and ubiquitous. However, I remember the experience we had on Sylvian Way with the GPS board you got from Sparkfun. It seems to me it never did lock on to enough satellites to get a fix. Nevertheless, GPS seems to be the way to go for something like you described. Getting consistent results from a phone would be a nightmare. A window-mounted, solar-powered anchor would be perfect. The GPS radio would power up only when there is bright light.
I have a big pile of wires on my workbench cut to length and stripped to start soldering onto modules. I need to order my Segger.
——————-
MGL:
Good progress. How tedious to solder all those wires.  I’m hoping that the Segger arrives shortly after you are done, and that the flashing instructions I sent you are correct.
