How to sort my paws?

In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws:

[Image: plate measurement with the paw impacts manually annotated]

I manually annotated the paws (RF = right front, RH = right hind, LF = left front, LH = left hind).

As you can see, there's clearly a repeating pattern, and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated.

My initial thought was to use heuristics to do the sorting (a rough sketch follows the list), like:

  • There's a ~60/40 ratio in weight bearing between the front and hind paws;
  • The hind paws are generally smaller in surface area;
  • The paws are (often) spatially divided into left and right.
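
Just to make the idea concrete, here's a minimal sketch of these heuristics, assuming each impact comes as a dict with its pressure array and plate slices (the names 'data' and 'slice' are placeholders, not the actual structure of my data):

    import numpy as np

    def label_paws(impacts):
        """Very rough sketch of the heuristics above."""
        # Total force per impact: front paws carry ~60% of the weight.
        forces = np.array([imp['data'].sum() for imp in impacts])
        # Lateral position on the plate: paws are (often) split left/right.
        lateral = np.array([imp['slice'][1].start for imp in impacts])

        labels = {}
        for i in range(len(impacts)):
            front = forces[i] > np.median(forces)
            left = lateral[i] < np.median(lateral)
            labels[i] = ('L' if left else 'R') + ('F' if front else 'H')
        return labels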

However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own.

Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like.

Based on the answers I received to my question about peak detection within the paw, I'm hoping there are more advanced solutions to sort the paws, especially because the pressure distribution and its progression over time differ for each paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence.
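
For example, a crude clustering sketch based on each impact's time-summed pressure distribution might look like this (the list of 3D arrays, the grid size and the use of k-means are my assumptions, not an existing solution):

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_paws(pressure_arrays, n_paws=4, grid=(20, 20)):
        """Cluster impacts by their time-summed pressure distribution."""
        features = []
        for arr in pressure_arrays:                # each arr: rows x cols x frames
            footprint = arr.sum(axis=2)            # total pressure per sensor
            # Crude nearest-neighbour resampling to a common grid.
            rows = np.linspace(0, footprint.shape[0] - 1, grid[0]).astype(int)
            cols = np.linspace(0, footprint.shape[1] - 1, grid[1]).astype(int)
            resized = footprint[np.ix_(rows, cols)]
            # Normalise so dogs of different weight remain comparable.
            features.append(resized.ravel() / resized.max())
        return KMeans(n_clusters=n_paws, n_init=10).fit_predict(np.array(features))

The resampling and normalisation are only there so k-means compares the shape of the pressure distribution rather than its absolute size.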

[Image: pressure distribution over time for each paw]

So I'm looking for a better way to match the results to their corresponding paw.

For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement), together with the slices that describe their location on the plate and in time.

To clarify: walk_sliced_data is a dictionary with the keys ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement maps to another dictionary whose keys, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] in the case of 'sel_1', represent the impacts that were extracted.
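
Loading and walking that dictionary should be as simple as this (the filename is just a placeholder for wherever you saved the pickle):

    import pickle

    # Placeholder filename -- point this at the downloaded pickle.
    with open('walk_sliced_data.pkl', 'rb') as f:
        walk_sliced_data = pickle.load(f)

    # Outer keys are the measurement names ('sel_1', 'ser_3', ...),
    # inner keys number the impacts that were extracted (0, 1, 2, ...).
    for measurement, impacts in walk_sliced_data.items():
        print(measurement, sorted(impacts.keys()))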

Also note that 'false' impacts, such as those where the paw is only partially measured (in space or time), can be ignored. They are only useful because they can help with recognizing a pattern, but they won't be analyzed.
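
A simple way to flag those partial impacts, assuming the stored slice is a (row, column, time) tuple and the full plate shape and frame count are known (both assumptions on my part):

    def is_partial(impact_slice, plate_shape, n_frames):
        """Flag impacts that touch the edge of the plate or the start/end
        of the recording, i.e. paws that were only partially measured."""
        rows, cols, frames = impact_slice
        return (rows.start == 0 or rows.stop >= plate_shape[0] or
                cols.start == 0 or cols.stop >= plate_shape[1] or
                frames.start == 0 or frames.stop >= n_frames)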

And for anyone interested, I’m keeping a blog with all the updates regarding the project!
