How do people prove the correctness of Computer Vision methods?
Posted by solvingPuzzles on Stack Overflow, 2012-04-06
I'd like to pose a few abstract questions about computer vision research. I haven't been able to answer them by searching the web or reading papers.
- How does someone know whether a computer vision algorithm is correct?
- How do we define "correct" in the context of computer vision?
- Do formal proofs play a role in understanding the correctness of computer vision algorithms?
A bit of background: I'm about to start my PhD in Computer Science. I enjoy designing fast parallel algorithms and proving the correctness of these algorithms. I've also used OpenCV in some class projects, though I don't have much formal training in computer vision.
I've been approached by a potential thesis advisor who works on designing faster and more scalable algorithms for computer vision (e.g., fast image segmentation). I'm trying to understand the common practices for solving computer vision problems.
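To make the question concrete, here is the kind of empirical check I've seen used in place of a formal proof (a toy sketch of my own, not taken from any particular paper): scoring a predicted segmentation mask against a hand-labeled ground-truth mask with intersection-over-union. Is this benchmark-style comparison essentially what "correct" means in practice?

```python
import numpy as np

# Toy illustration (my own, hypothetical): empirical "correctness" check
# for a binary segmentation result, as opposed to a formal proof.

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection / union) if union > 0 else 1.0

# Stand-ins for a hand-labeled ground-truth mask and an algorithm's output.
truth = np.zeros((100, 100), dtype=bool)
truth[20:80, 20:80] = True
pred = np.zeros((100, 100), dtype=bool)
pred[25:85, 25:85] = True

print(f"IoU = {iou(pred, truth):.3f}")  # ~0.72 on this toy example
```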