Video/Speech-Based Development of Applications
- by idea_
Why do we continue to type and click away in IDEs when we could theoretically use hand gestures and speech to develop applications?
Think about it - developing a class by standing in front of your computer, making a gesture, and yelling "CAR!". Nor would this strictly have to apply to OOP.
We have sufficient tools for speech and image acquisition, processing, and analysis available to us, don't we?
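To make the idea concrete, here is a toy sketch of the back half of such a pipeline: once any off-the-shelf speech recognizer has produced a transcript, a simple command grammar could map it to a code skeleton. The grammar here ("class <name>", "method <name>") is entirely hypothetical, just to show the shape of the mapping:

```python
# Toy sketch: translate a transcribed voice command into a code skeleton.
# The command grammar below is a made-up example; a real system would sit
# behind an actual speech recognizer and a much richer grammar.

def command_to_code(transcript: str) -> str:
    """Map a recognized utterance to a Python source fragment."""
    words = transcript.strip().lower().split()
    if len(words) == 2 and words[0] == "class":
        # "CLASS car" -> a class skeleton named Car
        return f"class {words[1].capitalize()}:\n    pass\n"
    if len(words) == 2 and words[0] == "method":
        # "method drive" -> an empty method stub
        return f"def {words[1]}(self):\n    pass\n"
    raise ValueError(f"unrecognized command: {transcript!r}")

print(command_to_code("CLASS car"))
```

The interesting engineering is everything this sketch leaves out: recovering from misrecognized words, disambiguating homophones, and deciding where in the existing source the generated fragment belongs.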
This seems plausible to me, but I may be overly ambitious.
From a conceptual point of view, do you see any problems with implementing this?