Below we have distances from the origin calculated in three different ways, giving the Euclidean distance, the Manhattan distance and the Chebyshev distance.
Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me:
Let's say we have a vector that gives us the range a spaceship with limited fuel can travel. If we calculated this with the Manhattan metric, our ship could travel a distance of X when travelling horizontally or vertically, but the second it attempted to travel diagonally it could only cover a straight-line distance of about X/√2 (roughly 0.7X), because the metric charges for the horizontal and vertical legs separately!
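To make that concrete, here's a little Python sketch comparing the three metrics (the helper functions and the numbers are just my own illustration):

```python
import math

def euclidean(dx, dy):
    # straight-line ("as the crow flies") distance
    return math.sqrt(dx * dx + dy * dy)

def manhattan(dx, dy):
    # grid distance: sum of the horizontal and vertical legs
    return abs(dx) + abs(dy)

def chebyshev(dx, dy):
    # "king's move" distance: the longer of the two legs
    return max(abs(dx), abs(dy))

# A purely horizontal move of 10 units: all three metrics agree.
print(euclidean(10, 0), manhattan(10, 0), chebyshev(10, 0))    # 10.0 10 10

# A diagonal move of 10 units along each axis: they disagree.
print(euclidean(10, 10), manhattan(10, 10), chebyshev(10, 10)) # ~14.14 20 10
```

So a ship whose fuel budget is measured with the Manhattan metric pays for both legs of a diagonal trip, which is exactly why it covers less straight-line ground when it flies diagonally.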
So like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from the vector's magnitude.
Here are two points, purple at (2,2) and green at (3,3).
We can subtract one point from another to derive a vector.
Let's create a vector d that describes the magnitude and direction of purple from green (I'll write the vector as d and its length, the magnitude, as |d|):
d = purple - green
d = (purple.x, purple.y) - (green.x, green.y)
d = (2, 2) - (3, 3)
d = <-1, -1>
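In code the subtraction is just done component by component (a quick Python sketch; representing the points as tuples is my own choice):

```python
purple = (2, 2)
green = (3, 3)

# Subtract green from purple, component by component, to get the
# displacement vector pointing from green towards purple.
d = (purple[0] - green[0], purple[1] - green[1])
print(d)  # (-1, -1)
```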
Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement:
euc_magnitude = sqrt((x*x)+(y*y))
euc_magnitude = sqrt((-1*-1)+(-1*-1))
euc_magnitude = sqrt((1)+(1))
euc_magnitude = sqrt(2)
euc_magnitude = 1.41
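The same calculation as a quick Python sketch (math.hypot is just a built-in shortcut for sqrt(x*x + y*y)):

```python
import math

d = (-1, -1)

# Pythagoras: magnitude = sqrt(x^2 + y^2)
euc_magnitude = math.sqrt(d[0] * d[0] + d[1] * d[1])
print(euc_magnitude)           # 1.4142135623730951

# Equivalent built-in shortcut
print(math.hypot(d[0], d[1]))  # 1.4142135623730951
```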
Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple.
But it's not. It's 1.41, and 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit:
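Numerically, here's a rough sketch of my (possibly wrong) reasoning, with Python standing in for the working: start at green and step 1.41 times along the raw vector <-1, -1>.

```python
green = (3, 3)
d = (-1, -1)
magnitude = 1.4142135623730951

# My interpretation: move `magnitude` units along the raw vector d.
end = (green[0] + magnitude * d[0], green[1] + magnitude * d[1])
print(end)  # (1.585..., 1.585...) -- past purple at (2, 2)
```

That lands at roughly (1.59, 1.59), which is past the purple point at (2, 2).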
So what do we do to the magnitude to allow us to calculate real distances on our point graph?
Worth noting I'm a beginner just working my way through theory. Haven't programmed a game in my life!