I know this question might sound easy to answer, but it's driving me crazy. There are too many situations that a good alpha blending mechanism has to handle, and every algorithm I can think of is missing something.
These are the methods I've thought about so far:
First I thought about sorting objects by depth. This simply fails because objects are not simple shapes: they may have curves and may loop inside each other, so I can't always tell which one is closer to the camera.
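Concretely, this is the kind of per-object sort I mean (a rough sketch; `Vec3` and `Object` are made-up minimal types, and the sort key is just the distance from the object's center to the camera):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Object {
    Vec3 center; // world-space center of the object's bounding volume
    // ... mesh, material, etc.
};

// Squared distance is enough for ordering and avoids a sqrt.
float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Back-to-front: draw the farthest object first.
void sortByDepth(std::vector<Object>& objects, const Vec3& cameraPos) {
    std::sort(objects.begin(), objects.end(),
              [&](const Object& a, const Object& b) {
                  return distSq(a.center, cameraPos) > distSq(b.center, cameraPos);
              });
}
```

The flaw is that a single sort key per object can't be right once objects interpenetrate: parts of A are in front of B and parts are behind it at the same time.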
Then I thought about sorting triangles instead. Though I'm not sure how I would implement it, this can also fail: there is a rare case where two triangles pass through each other, and again no one can tell which one is nearer.
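The per-triangle version is the same idea with the triangle centroid as the key (reusing `Vec3` and `distSq` from the sketch above), and intersecting triangles are exactly where it breaks:

```cpp
struct Triangle { Vec3 v0, v1, v2; };

Vec3 centroid(const Triangle& t) {
    return { (t.v0.x + t.v1.x + t.v2.x) / 3.0f,
             (t.v0.y + t.v1.y + t.v2.y) / 3.0f,
             (t.v0.z + t.v1.z + t.v2.z) / 3.0f };
}

// Back-to-front by centroid depth. If two triangles pass through each
// other, each one is nearer over part of its area, so no single sort
// key orders them correctly -- you'd have to split them along the
// intersection line first.
void sortTriangles(std::vector<Triangle>& tris, const Vec3& cameraPos) {
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle& a, const Triangle& b) {
                  return distSq(centroid(a), cameraPos) > distSq(centroid(b), cameraPos);
              });
}
```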
The next idea was to use the depth buffer; after all, the main reason we have a depth buffer is the sorting problems I just mentioned. But that raises another problem: since objects can be transparent, more than one object may be visible at a single pixel. So for which object should I store the pixel depth?
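The closest I've gotten with the depth buffer is the usual two-pass pattern (an OpenGL sketch; `drawOpaqueObjects` and `drawTransparentObjects` are hypothetical helpers): draw opaque geometry normally, then draw transparent geometry with depth testing on but depth writing off. That way transparent surfaces are hidden behind solid ones, but they still don't occlude each other correctly among themselves:

```cpp
#include <GL/gl.h>

void drawOpaqueObjects();      // hypothetical
void drawTransparentObjects(); // hypothetical

void drawScene() {
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    drawOpaqueObjects();

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);     // test against opaque depth, but don't write
    drawTransparentObjects();  // still order-dependent among themselves!
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```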
I then thought maybe I could store only the depth of the front-most object and use that to decide how to blend subsequent draw calls at that pixel. But again there was a problem. Think of two semi-transparent planes with a solid plane between them: if the solid plane happens to be rendered last, the most distant plane is still visible through it, because its color has already been merged in. (Note that I was going to merge every two fragments until only one color was left for the pixel.) And obviously I can't fall back on sorting, for the same reasons I explained above.
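The root of the whole problem is that the standard "over" blend is not commutative, so the per-pixel merge order genuinely matters. A tiny standalone demo of that (my own illustration, nothing engine-specific):

```cpp
#include <cstdio>

struct Color { float r, g, b; };

// Standard "over" blend: src with alpha srcAlpha composited onto dst.
Color over(Color src, float srcAlpha, Color dst) {
    return { srcAlpha * src.r + (1 - srcAlpha) * dst.r,
             srcAlpha * src.g + (1 - srcAlpha) * dst.g,
             srcAlpha * src.b + (1 - srcAlpha) * dst.b };
}

int main() {
    Color red   = {1, 0, 0}; // near semi-transparent plane, alpha 0.5
    Color green = {0, 1, 0}; // far semi-transparent plane,  alpha 0.5
    Color white = {1, 1, 1}; // opaque background

    // Correct back-to-front order: far plane first, then near plane.
    Color good = over(red, 0.5f, over(green, 0.5f, white));
    // Swapped order gives a different final pixel.
    Color bad  = over(green, 0.5f, over(red, 0.5f, white));

    printf("back-to-front: %.2f %.2f %.2f\n", good.r, good.g, good.b); // 0.75 0.50 0.25
    printf("wrong order:   %.2f %.2f %.2f\n", bad.r,  bad.g,  bad.b);  // 0.50 0.75 0.25
}
```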
Finally, the only thing I can imagine working is to render all objects into separate render targets, then sort those layers and composite them into the final output. But this time I don't know how to implement that algorithm.
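From what I've read, the closest published technique to this layered idea seems to be depth peeling: render the transparent geometry several times, where each pass keeps only the nearest fragments that lie behind the depths recorded by the previous pass, so each pass "peels" one layer per pixel; the layers are then composited back-to-front. My rough understanding of the pass structure, with hypothetical `FrameBuffer`/`Texture` helpers standing in for real GPU objects:

```cpp
#include <vector>

struct Texture {};                             // hypothetical GPU texture handle
struct FrameBuffer { Texture color, depth; };  // hypothetical render target

Texture makeDepthTexture();
FrameBuffer makeLayerTarget();
void bindFrameBuffer(const FrameBuffer&);
void setShaderTexture(const char* name, const Texture&);
void drawTransparentObjects();
void blendLayerToScreen(const Texture&);

const int MAX_LAYERS = 4;

void renderWithDepthPeeling() {
    Texture prevDepth = makeDepthTexture(); // cleared so the first pass keeps everything
    std::vector<Texture> layers;

    for (int i = 0; i < MAX_LAYERS; ++i) {
        FrameBuffer fb = makeLayerTarget();
        bindFrameBuffer(fb);
        // The fragment shader discards fragments at or in front of
        // prevDepth, so this pass captures the next-nearest surface.
        setShaderTexture("prevDepth", prevDepth);
        drawTransparentObjects();
        layers.push_back(fb.color);
        prevDepth = fb.depth; // peel one layer deeper on the next pass
    }

    // Composite the peeled layers back-to-front with ordinary alpha blending.
    for (int i = (int)layers.size() - 1; i >= 0; --i)
        blendLayerToScreen(layers[i]);
}
```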