Let's say you have an input file with many entries like these:
date, ticker, open, high, low, close, <and some other values>
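To make that concrete, here is a minimal sketch of how such a file could be loaded, assuming plain comma-separated values with no header row; the Ticker struct, the symbol field name, and load() are my own illustrative names, not an established API:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Ticker {
    std::string date;
    std::string symbol;   // the "ticker" column
    double open = 0, high = 0, low = 0, close = 0;
    // ...other columns as needed
};

// Assumes no header row and well-formed lines; error handling omitted.
std::vector<Ticker> load(const std::string& path) {
    std::vector<Ticker> rows;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        Ticker t;
        std::string field;
        std::getline(ss, t.date, ',');
        std::getline(ss, t.symbol, ',');
        std::getline(ss, field, ','); t.open  = std::stod(field);
        std::getline(ss, field, ','); t.high  = std::stod(field);
        std::getline(ss, field, ','); t.low   = std::stod(field);
        std::getline(ss, field, ','); t.close = std::stod(field);
        rows.push_back(std::move(t));
    }
    return rows;
}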
And you want to run a pattern-matching routine on the entries (rows) in that file, using a candlestick pattern, for example (see Doji). That pattern can appear on any uniform time interval (t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on).
Say a pattern-matching routine can take an arbitrary number of rows to perform an analysis and can contain an arbitrary number of subpatterns; for example, one pattern may require 4 consecutive entries to operate on.
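Here is a hedged sketch of what "a pattern takes an arbitrary number of rows" could look like: a pattern is any callable over a window of consecutive rows. The is_doji predicate and its 0.1 threshold are illustrative choices of mine, not a standard definition:

#include <cmath>
#include <cstddef>
#include <vector>

struct Ticker { double open, high, low, close; };  // trimmed to the fields used here

// Width-1 pattern: open and close nearly equal relative to the day's range.
bool is_doji(const Ticker& t) {
    double range = t.high - t.low;
    return range > 0.0 && std::abs(t.close - t.open) / range < 0.1;
}

// Slide a window of `width` rows over the series; report each start index
// where the pattern matches. A pattern needing 4 rows just passes width = 4.
template <typename Pattern>
std::vector<std::size_t> scan(const std::vector<Ticker>& rows,
                              std::size_t width, Pattern pattern) {
    std::vector<std::size_t> hits;
    for (std::size_t i = 0; width > 0 && i + width <= rows.size(); ++i)
        if (pattern(&rows[i], width))
            hits.push_back(i);
    return hits;
}

// Usage:
//   auto dojis = scan(rows, 1,
//       [](const Ticker* w, std::size_t) { return is_doji(w[0]); });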
Say also that the routine may later have to find and classify extrema (local and global maxima and minima, as well as inflection points) for the ticker over a closed interval. For example, on the closed interval [-1, 1], the cubic f(x) = x^3 attains its global minimum and maximum at the endpoints and has an inflection point at x = 0. (See link)
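For sampled data that requirement could be met by a single pass over discrete differences; the following sketch (my own, not from any library) classifies interior extrema by sign changes of the first differences and inflection points by sign changes of the second differences:

#include <algorithm>
#include <cstddef>
#include <vector>

enum class Kind { LocalMax, LocalMin, Inflection };

struct Marked { std::size_t index; Kind kind; };

std::vector<Marked> classify(const std::vector<double>& y) {
    std::vector<Marked> out;
    for (std::size_t i = 1; i + 1 < y.size(); ++i) {
        double left  = y[i] - y[i - 1];   // slope arriving at i
        double right = y[i + 1] - y[i];   // slope leaving i
        if (left > 0 && right < 0) out.push_back({i, Kind::LocalMax});
        if (left < 0 && right > 0) out.push_back({i, Kind::LocalMin});
    }
    for (std::size_t i = 1; i + 2 < y.size(); ++i) {
        // Second differences approximate curvature; a sign flip marks
        // an inflection point near sample i + 1.
        double c1 = y[i + 1] - 2 * y[i] + y[i - 1];
        double c2 = y[i + 2] - 2 * y[i + 1] + y[i];
        if ((c1 < 0) != (c2 < 0)) out.push_back({i + 1, Kind::Inflection});
    }
    return out;
}

// Global extrema over the closed interval take one extra pass:
//   auto [lo, hi] = std::minmax_element(y.begin(), y.end());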
What would be the most natural choice of data structure? And what about an interface that adapts a Ticker object holding one row of data to a collection of Ticker objects, so that an arbitrary pattern can be applied to the data?
What's the first thing that comes to mind?
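To make the interface question concrete, here is a sketch of what I have in mind (all names are mine): the collection owns the rows, and any pattern object is applied through a uniform, read-only interface.

#include <cstddef>
#include <vector>

struct Ticker { double open, high, low, close; };  // one row of data

class Pattern {
public:
    virtual ~Pattern() = default;
    virtual std::size_t width() const = 0;               // rows it consumes
    virtual bool matches(const Ticker* rows) const = 0;  // sees width() rows
};

class TickerSeries {
    std::vector<Ticker> rows_;
public:
    void append(Ticker t) { rows_.push_back(t); }

    // Apply any pattern without mutating or copying the collection.
    std::vector<std::size_t> apply(const Pattern& p) const {
        std::vector<std::size_t> hits;
        const std::size_t w = p.width();
        for (std::size_t i = 0; w > 0 && i + w <= rows_.size(); ++i)
            if (p.matches(&rows_[i])) hits.push_back(i);
        return hits;
    }
};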
I chose a doubly-linked circular linked list that has the following methods:
push_front()
push_back()
pop_front()
pop_back()
operator[] // overloaded; accepts negative indices
But that data structure seems very clumsy: with so much pushing and popping going on, I have to make a deep copy of the whole structure before running an analysis on it.
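One alternative I am considering, sketched below, removes the copying entirely: keep a single std::vector<Ticker> and hand each analysis a read-only window via std::span (C++20), or a pair of const iterators before C++20. Nothing is pushed, popped, or deep-copied; average_close is a hypothetical analysis of mine:

#include <span>
#include <vector>

struct Ticker { double open, high, low, close; };

double average_close(std::span<const Ticker> window) {
    double sum = 0.0;
    for (const Ticker& t : window) sum += t.close;
    return window.empty() ? 0.0 : sum / static_cast<double>(window.size());
}

// Usage: analyses slice the vector instead of mutating it.
//   std::vector<Ticker> rows = /* loaded earlier */;
//   double a = average_close(std::span(rows).subspan(10, 4)); // rows 10..13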
So, I don't know if I've made my question very clear, but the main points are:
What kinds of data structures should be considered when matching sequential data points against a pattern that does NOT require random access?
What kinds of data structures should be considered when classifying extrema of a set of data points?