Threading models when talking to hardware devices
- by Fuzz
When writing an interface to hardware over a communication bus, communications timing can sometimes be critical to the operation of a device.
As such, it is common for developers to spin up new threads to handle communications.
It can also be a terrible idea to have a large number of threads in your system, and in the case that you have multiple hardware devices you may end up with many threads that are outside the control of the main application.
Certainly it is common to have two threads per device: one for reading and one for writing.
I am trying to determine the pros and cons of the two different models I can think of, and would love the help of the Programmers community.
Model 1: Each device instance handles its own threads (or shares the threads of its underlying communication device). A thread may exist for writing, and one for reading. Writes requested through the device's API are buffered and worked through by the writer thread. The read thread exists for the case of blocking communications, and uses callbacks to pass read data up to the application. Timing of communications can be handled by the communications threads.
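To make that concrete, here's a rough C++ sketch of what I mean. The bus_write/bus_read functions are placeholders for whatever blocking calls the bus actually exposes, and names like ThreadedDevice are made up for illustration:

```cpp
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Packet = std::vector<std::uint8_t>;

// Placeholders for the real blocking bus calls (assumptions, not a real API):
void bus_write(const Packet&) { /* push the bytes onto the bus */ }
bool bus_read(Packet& out)    { out.clear(); return false; /* block for data */ }

class ThreadedDevice {
public:
    explicit ThreadedDevice(std::function<void(const Packet&)> on_read)
        : on_read_(std::move(on_read)),
          writer_([this] { writeLoop(); }),
          reader_([this] { readLoop(); }) {}

    ~ThreadedDevice() {
        running_ = false;
        cv_.notify_all();
        writer_.join();
        reader_.join();  // assumes bus_read can be unblocked somehow
    }

    // Application-facing API: just buffer the request; the writer
    // thread owns the bus and its timing.
    void write(Packet p) {
        {
            std::lock_guard<std::mutex> lock(m_);
            outbox_.push(std::move(p));
        }
        cv_.notify_one();
    }

private:
    void writeLoop() {
        while (running_) {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !outbox_.empty() || !running_; });
            if (!running_) return;
            Packet p = std::move(outbox_.front());
            outbox_.pop();
            lock.unlock();
            bus_write(p);  // inter-message timing would be enforced here
        }
    }

    void readLoop() {
        Packet p;
        while (running_ && bus_read(p))  // blocks until data arrives
            on_read_(p);                 // callback into the application
    }

    std::function<void(const Packet&)> on_read_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Packet> outbox_;
    std::atomic<bool> running_{true};
    std::thread writer_;
    std::thread reader_;
};
```

One wrinkle worth noting: if bus_read really blocks, the destructor needs some way to unblock it (closing the handle, a read timeout, etc.) before the reader thread can be joined.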
Model 2: Devices aren't given their own threads. Instead, read and write requests are queued/buffered. The application then calls a "DoWork" function on the interface, allowing all pending reads and writes to take place and fire their callbacks. Timing is handled by the application, and a driver can request to be called at a specific frequency.
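And a comparable sketch of the second model, again with made-up names; bus_try_write/bus_try_read stand in for non-blocking bus calls:

```cpp
#include <chrono>
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

using Packet = std::vector<std::uint8_t>;

// Placeholders for non-blocking bus calls (assumptions, not a real API):
bool bus_try_write(const Packet&) { return true;  }  // true if accepted
bool bus_try_read(Packet&)        { return false; }  // true if data was ready

class PolledDevice {
public:
    explicit PolledDevice(std::function<void(const Packet&)> on_read)
        : on_read_(std::move(on_read)) {}

    // Application-facing API: only queues; nothing touches the bus here.
    void write(Packet p) { outbox_.push(std::move(p)); }

    // The driver's requested pump rate (example value, picked arbitrarily).
    std::chrono::milliseconds desiredPeriod() const {
        return std::chrono::milliseconds(10);
    }

    // Called by the application's loop at (roughly) desiredPeriod().
    void doWork() {
        // Drain as many queued writes as the bus will accept right now.
        while (!outbox_.empty() && bus_try_write(outbox_.front()))
            outbox_.pop();

        // Pull whatever the bus has ready; callbacks fire inline, on the
        // application's own thread, which is the whole point of this model.
        Packet p;
        while (bus_try_read(p))
            on_read_(p);
    }

private:
    std::function<void(const Packet&)> on_read_;
    std::queue<Packet> outbox_;
};
```

The application's main loop would then pump every device at its own cadence (call doWork() on each, then sleep until the next tick), so all timing decisions stay in one place.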
Pros for Model 1 include finer-grained control of timing at the communication level, at the expense of control over what's going on at the higher, application level (which, for a real-time system, can be terrible).
Pros for Model 2 include better application control over the timing of the entire system, at the expense of letting each driver handle its own business.
If anyone has experience with these scenarios, I'd love to hear some ideas on the approaches used.