Search Results

Search found 29201 results on 1169 pages for 'game development'.


  • Circle vs Edge collision detection / resolution

    - by topheman
    I made a JavaScript class, Ball.js, that handles physics interactions between balls as well as painting. In v1.0, ball vs ball collision detection and resolution is handled well. In the next version (v2), I'm trying to add edge collision handling, and I'm having some problems; maybe you will be able to help me. All the v2 branch source code is in the github repository: https://github.com/topheman/Ball.js/tree/v2 The v2 demos (where you can see the bugs I'm talking about): http://labs.topheman.com/Ball-v2/#help As you will see in the demo, I have two major problems that I'm having a really hard time solving in Ball.js:

      - method resolveEdgeCollision: the bounce angle is inconsistent
      - method checkEdgeCollision: if the ball's velocity (the distance it covers each frame) is greater than its diameter, it will eventually pass through an edge without triggering any collision

    Any ideas?
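    The second problem is classic tunneling, and a common first fix is to substep the motion so that no single step moves a ball farther than its own radius. A minimal sketch of the idea, written here in C++ (the struct and the per-substep check are illustrative; the approach ports directly to JavaScript):

      #include <algorithm>
      #include <cmath>

      struct Ball { double x, y, vx, vy, r; };

      void checkEdgeCollision(Ball& b);  // assumed: the existing per-ball edge test

      // Advance the ball over dt in steps no longer than its radius,
      // so a fast ball cannot skip over an edge between two frames.
      void integrateWithSubsteps(Ball& b, double dt) {
          double speed = std::sqrt(b.vx * b.vx + b.vy * b.vy);
          int steps = std::max(1, (int)std::ceil(speed * dt / b.r));
          double h = dt / steps;
          for (int i = 0; i < steps; ++i) {
              b.x += b.vx * h;
              b.y += b.vy * h;
              checkEdgeCollision(b);
          }
      }

    A swept (continuous) circle-vs-segment test is the more exact alternative, but substepping is usually enough and far simpler to retrofit.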

    Read the article

  • How can I make a 32 bit render target with a 16 bit alpha channel in DirectX?

    - by J Junker
    I want to create a render target that is 32-bit, with 16 bits each for alpha and luminance. The closest surface formats I can find in the DirectX SDK are:

      D3DFMT_A8L8    // 16-bit: 8 bits each for alpha and luminance
      D3DFMT_G16R16F // 32-bit float: 16 bits each for the red and green channels

    But I don't think either of these will work, since D3DFMT_A8L8 doesn't have the precision and D3DFMT_G16R16F doesn't have an alpha channel (I need a separate blend state for alpha). How can I create a render target that allows a separate blend state for luminance and alpha, with 16-bit precision on each channel, that doesn't exceed 32 bits per pixel?
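    Whichever format you try, it is worth asking the driver up front whether it can be used as a render target at all. A hedged D3D9 sketch (assumes an existing IDirect3D9* d3d and an X8R8G8B8 display mode):

      // Can D3DFMT_G16R16F be created as a render-target texture?
      HRESULT hr = d3d->CheckDeviceFormat(
          D3DADAPTER_DEFAULT,
          D3DDEVTYPE_HAL,
          D3DFMT_X8R8G8B8,        // current display format (assumed)
          D3DUSAGE_RENDERTARGET,
          D3DRTYPE_TEXTURE,
          D3DFMT_G16R16F);
      bool usableAsTarget = SUCCEEDED(hr);

    Repeating the query with D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING tells you whether the hardware can alpha-blend into that format at all, which matters for float targets.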

    Read the article

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
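    One common answer is a small mediating table owned by neither Resource nor TradingPost: a single registry mapping each resource type to the tech level required to trade it, queried by the TradingPost. A minimal C++ sketch of the idea (all names are illustrative):

      #include <map>
      #include <string>

      // The single authoritative place for "what tech level unlocks what".
      class TradeRules {
      public:
          void setRequiredTechLevel(const std::string& resource, int level) {
              required_[resource] = level;
          }
          // A post can offer the resource if its base meets the level.
          bool canOffer(const std::string& resource, int baseTechLevel) const {
              auto it = required_.find(resource);
              return it != required_.end() && baseTechLevel >= it->second;
          }
      private:
          std::map<std::string, int> required_;
      };

    A TradingPost then asks rules.canOffer(resource, base.techLevel()) rather than owning the numbers itself, so each requirement is set in exactly one place; whether the table is filled from code or from a data file is an implementation detail.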

    Read the article

  • C# XNA: Make the rendered screen a Texture2D

    - by redcodefinal
    I am working on a little city generator that builds cities in an isometric perspective. However, a problem arose: if the grid size goes over a certain limit, there is awful lag. I found the main problem to be in the draw method, so I took the precautionary step of rendering only items that are on screen. This helped with the lag, but not by much. The idea I have is to render the frame once, take a snapshot, and then display that as a Texture2D on screen. This way I don't have to render 1,000,000 objects every frame, since they don't change anyway. TL;DR - I want to:

      - take a snapshot of an already rendered frame
      - turn it into a Texture2D
      - render that to the screen instead of all the objects

    Any help appreciated.
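    In XNA this is exactly what RenderTarget2D is for: set it as the active target, draw the city once, reset the target, and from then on draw the render target itself (a RenderTarget2D is a Texture2D). A hedged sketch of the same render-to-texture mechanism at the Direct3D 9 level that XNA wraps; variable names are illustrative:

      // Create a texture the GPU can render into (once, at startup).
      IDirect3DTexture9* cityTex = nullptr;
      device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                            D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &cityTex, NULL);
      IDirect3DSurface9* citySurf = nullptr;
      cityTex->GetSurfaceLevel(0, &citySurf);

      // Render the expensive scene into the texture, one time only.
      IDirect3DSurface9* backBuffer = nullptr;
      device->GetRenderTarget(0, &backBuffer);
      device->SetRenderTarget(0, citySurf);
      drawAllCityObjects();                    // assumed: the slow many-object pass
      device->SetRenderTarget(0, backBuffer);
      backBuffer->Release();                   // GetRenderTarget added a reference

      // Every frame afterwards: draw a single textured quad using cityTex.

    Just remember to redraw into the target whenever the city actually changes.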

    Read the article

  • Improving the efficiency of frustum culling

    - by DeadMG
    I've got some code which performs frustum culling. However, this defines the "frustum" way too broadly: when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to attempt to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely at the edge. Here's my code (where size is the size in screen space, the resolution of the client area of the window I'm rendering into). Any suggestions?

      auto&& size = GetDimensions();
      D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 };
      D3DCALL(device->SetViewport(&vp));
      static const int slices = 10;
      std::vector<Object*> result;
      for (int i = 0; i < slices; i++) {
          D3DXVECTOR3 WorldSpaceFrustrumPoints[8] = {
              D3DXVECTOR3(0,      size.y, static_cast<float>(i) / slices),
              D3DXVECTOR3(size.x, 0,      static_cast<float>(i) / slices),
              D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices),
              D3DXVECTOR3(0,      0,      static_cast<float>(i) / slices),
              D3DXVECTOR3(0,      0,      static_cast<float>(i + 1) / slices),
              D3DXVECTOR3(size.x, 0,      static_cast<float>(i + 1) / slices),
              D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices),
              D3DXVECTOR3(0,      size.y, static_cast<float>(i + 1) / slices)
          };
          D3DXMATRIXA16 Identity;
          D3DXMatrixIdentity(&Identity);
          D3DXVec3UnprojectArray(
              WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
              WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
              &vp, &Projection, &View, &Identity, 8
          );
          Math::AABB Frustrum;
          auto world_begin = std::begin(WorldSpaceFrustrumPoints);
          auto world_end = std::end(WorldSpaceFrustrumPoints);
          auto world_initial = WorldSpaceFrustrumPoints[0];
          Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x;
          Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y;
          Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z;
          Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x;
          Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y;
          Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial,
              [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z;
          auto slices_result = ObjectTree.collision(Frustrum);
          result.insert(result.end(), slices_result.begin(), slices_result.end());
      }
      return result;

    Read the article

  • How to do pixel-per-pixel modeling in Unity3D?

    - by Kabumbus
    So generally I want to have an API like pixels.addPixel3D(new Pixel3D(0xFF0000, 100, 100, 100)); (color, position), where pixels is some abstraction over a 3D scene object, a point cloud, so to say. It would be of great use in deep space / star modeling... I want to set each pixel by hand (with no source image or anything automatic). The point is to model something like the examples I linked (there is also a live Flash analogue). How do I do such a thing in Unity?

    Read the article

  • How to make my sprite jump properly?

    - by Matthew Morgan
    I'm currently working on a 2D platformer in XNA. I have, however, been having some trouble creating a fully functional jumping algorithm. This is what I have so far:

      if (keystate.IsKeyDown(Keys.W))
      {
          // onGround is true when a collision between the
          // main sprite and the ground is detected
          if (onGround)
          {
              velocity.Y = -5;
          }
      }

    So, the problem I am now having is that as soon as the jump starts, onGround becomes false and the sprite is brought back to the ground by the simple gravity I have implemented. The other problem I have is creating a limit to the height, after which the sprite should automatically return to the ground. Any advice or suggestions would be greatly appreciated.
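    For reference, the usual pattern is to let gravity run every frame and have the jump apply only an initial upward impulse; the maximum height then falls out of the physics instead of needing an explicit limit. A minimal C++-style sketch of that loop (all names and constants are illustrative):

      struct Player {
          float y = 0.0f, vy = 0.0f;
          bool onGround = true;
      };

      const float GRAVITY    = 0.3f;   // added to vy every frame
      const float JUMP_SPEED = -9.0f;  // negative = up in screen coordinates

      bool  touchingGround(float y);   // assumed: collision test against the level
      float groundHeight();            // assumed: surface height under the player

      void update(Player& p, bool jumpPressed) {
          if (jumpPressed && p.onGround) {
              p.vy = JUMP_SPEED;       // one impulse; gravity takes it from here
              p.onGround = false;
          }
          p.vy += GRAVITY;             // gravity always applies
          p.y  += p.vy;
          if (touchingGround(p.y)) {
              p.y  = groundHeight();   // snap to the surface
              p.vy = 0.0f;
              p.onGround = true;
          }
      }

    Tuning JUMP_SPEED and GRAVITY sets the jump height; holding the key should not keep adding velocity once the sprite has left the ground.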

    Read the article

  • How can I solve this SAT direct corner intersection edge case?

    - by ssb
    I have a working SAT implementation, but I am running into a problem where direct collisions at a corner do not work for tiled surfaces. That is, it clips on the surface when going in a certain direction because it gets hung up on one of the tiles, and so, for example, if I walk across a floor while holding both down and left, the player will stop when meeting the next shape because the player will be colliding with the right side rather than with the top of the floor tile. This illustration shows what I mean: The top block will translate right first and then up. I have checked here and here which are helpful, but this does not address what I should do in a situation where I don't have a tile-based world. My usage of the term "tile" before isn't really accurate since what I'm doing here is manually placing square obstacles next to each other, not assigning them spots on a grid. What can I do to fix this?
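    The standard cure is to resolve along the axis of least penetration, so a near-corner hit on a floor tile pushes the player up rather than sideways. A hedged C++ sketch of that minimum-translation step, assuming axis-aligned boxes (the struct is illustrative):

      #include <cmath>

      struct Box { float x, y, w, h; };  // top-left corner plus size

      // Push 'player' out of 'wall' along whichever axis overlaps least.
      void resolve(Box& player, const Box& wall) {
          float dx = (player.x + player.w / 2) - (wall.x + wall.w / 2);
          float dy = (player.y + player.h / 2) - (wall.y + wall.h / 2);
          float overlapX = (player.w + wall.w) / 2 - std::fabs(dx);
          float overlapY = (player.h + wall.h) / 2 - std::fabs(dy);
          if (overlapX <= 0 || overlapY <= 0) return;  // not intersecting
          if (overlapX < overlapY)
              player.x += (dx < 0) ? -overlapX : overlapX;
          else
              player.y += (dy < 0) ? -overlapY : overlapY;
      }

    For the seam hang-ups specifically, resolving against the deepest-penetrating box first (or skipping pushes along edges shared by two flush obstacles) is the usual companion fix.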

    Read the article

  • SDL 2.0: is there a library to create 2D particle effects rapidly?

    - by mm24
    I would like to create light/explosion particle effects using some built-in library. I am used to Cocos2D, where there are specific classes that you can simply initialize at a certain position to produce a certain particle effect. Is there a way to do this in SDL 2.0 with C++? I have found this tutorial, but it goes for a "build it yourself" solution, which is OK, but I do not want to re-invent the wheel if someone else has already built it.

    Read the article

  • How can I create a sprite sheet from a 3D model (3D Studio Max)?

    - by OopsUser
    I built a simple 3D model of a car, with a simple animation in which its wheels turn. Now I want to create a sprite sheet. The only way I know how is to render 20 frames from the front manually, combine them into a strip manually, then rotate the model by 10 degrees, render 20 frames of the animation again, combine them into a strip... Is there a way to do it automatically, without rotating the scene by hand and rendering and combining? It's a lot of work, and takes more time than the modelling itself... Thanks

    Read the article

  • How can data-oriented programming be applied to a GUI system?

    - by Miro
    I've just learned the basics of data-oriented design, but I'm not very familiar with it yet. I've also read the Pitfalls of Object Oriented Programming slides from GCAP 09. It seems that data-oriented design is a much better idea for games than OOP. I'm currently creating my own GUI system, and it's completely OOP, so I'm wondering whether data-oriented design is applicable to structured things like a GUI. The main problem I see is that every widget type has different data, so I can hardly group them into arrays. Also, every widget type renders differently, so I would still need to call virtual functions.
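    One common compromise is to group widgets by type rather than by object: one contiguous array per widget type, each processed by a single non-virtual loop. A small C++ sketch of that layout (the widget fields and draw helpers are illustrative):

      #include <vector>

      // One plain-data record per widget type; no virtual dispatch inside.
      struct Button { float x, y, w, h; bool pressed; };
      struct Label  { float x, y; const char* text; };

      void drawButton(const Button& b);  // assumed renderer entry points
      void drawLabel(const Label& l);

      struct Gui {
          std::vector<Button> buttons;   // all buttons, contiguous in memory
          std::vector<Label>  labels;    // all labels, contiguous in memory
      };

      // The per-type branch happens once per array, not once per widget,
      // and each loop streams through hot, tightly packed data.
      void render(const Gui& gui) {
          for (const Button& b : gui.buttons) drawButton(b);
          for (const Label&  l : gui.labels)  drawLabel(l);
      }

    Virtual dispatch only reappears at the boundaries (one call per widget type per frame), which is usually cheap enough.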

    Read the article

  • Java enum pairs / "subenum" or what exactly?

    - by vemalsar
    I have an RPG-style Item class and I store the type of the item in an enum (itemType.sword). I want to store a subtype too (itemSubtype.long), but I want to express the relation between the two data types: a sword can be long, short, etc., but a shield can't be long or short, only round, tower, etc. I know this is not valid source code, but it is similar to what I want:

      enum type { sword; } // not valid code!
      enum swordSubtype extends type.sword { short, long }

    Question: how can I define this connection between the two data types (or more exactly, between values of the two data types), and what is the most simple and standard way? Options I can see:

      - Array-like data with all valid (itemType, itemSubtype) enum pairs, or (itemType, itemSubtype[]) so there can be more than one subtype per type; this would be the best, but how can I construct it in the simplest way?
      - A special enum with a "subenum" set, a second-level enum, or anything else like that, if such a thing exists
      - A 2-dimensional "canBePairs" boolean array with itemType and itemSubtype dimensions, where "true" means the itemType (first dimension) and itemSubtype (second dimension) go together and "false" means they don't
      - Other, better ideas

    Thank you very much!

    Read the article

  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I had been debugging a problem I was having in a single shader file with 2 functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I have stripped it down to its basic components to understand what was going wrong with the shaders, because the differently named components of the pixel and vertex shaders were swapping the data being input. The vertex shader:

      void QuadVertex(inout float4 position : SV_Position,
                      inout float4 color    : COLOR0,
                      inout float2 tex      : TEXCOORD0)
      {
          // ViewProjection is a 4x4 matrix, just included
          // here to show the simple passthrough of the data
          position = mul(position, ViewProjection);
      }

    And a pixel shader:

      float4 QuadPixel(float4 color : COLOR0,
                       float2 tex   : TEXCOORD0) : SV_Target0
      {
          // color is filled with position data and tex is
          // filled with color values from the vertex shader
          return color;
      }

    The ID3D11InputLayout and associated C++ code correctly compiles the shaders and sets them up with some simple primitive data:

      data[0].Position.x = 0.0f * 210;
      data[0].Position.y = 1.0f * 160;
      data[0].Position.z = 0.0f;
      data[1].Position.x = 0.0f * 210;
      data[1].Position.y = 0.0f * 160;
      data[1].Position.z = 0.0f;
      data[2].Position.x = 1.0f * 210;
      data[2].Position.y = 1.0f * 160;
      data[2].Position.z = 0.0f;
      data[0].Colour = Colors::Red;
      data[1].Colour = Colors::Red;
      data[2].Colour = Colors::Red;
      data[0].Texture = Vector2::Zero;
      data[1].Texture = Vector2::Zero;
      data[2].Texture = Vector2::Zero;

    When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the shader's input and output signatures needed to be in the correct order and the correct format, and be laid out in the exact order of the output from the vertex shader, regardless of the semantics:

      float4 QuadPixel(float4 pos   : SV_Position,
                       float4 color : COLOR0,
                       float2 tex   : TEXCOORD0) : SV_Target0
      {
          return color;
      }

    After finding this out, my question is: why don't the semantics map the appropriate components when going from vertex shader to pixel shader? Is there any way I can make certain semantics always map to other semantics, or do I always have to follow the rigid shader signature (in this case Position, Color, and Texture)? As a side note for why I'm asking: I know that when using XNA, my shader signatures for functions could differ in position and even drop items from the vertex shader to the pixel shader function parameters, with only the COLOR0 and TEXCOORD0 components being used (and it would still match up correctly). However, I also know that XNA relied on a DX9 (and maybe a little DX10) implementation, so maybe that kind of flexibility no longer exists in DX11?

    Read the article

  • How does OpenGL ES 2 assemble primitives?

    - by stephelton
    Two things I'm quite confused about:

      1) OpenGL ES 2.0 creates primitives before the vertex shader is invoked. Why, then, does it not automatically provide the vertex shader with the position of the vertex?
      2) OpenGL ES 2.0 supports glDrawElements(), but it does not support glEnableClientState() or GL_VERTEX_ARRAY, so how can this call possibly be used to construct primitives?

    NOTE: this is OpenGL ES 2.0, NOT normal OpenGL! Thanks!
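    On point 2: in ES 2.0 the fixed-function vertex array (GL_VERTEX_ARRAY) was replaced by generic vertex attributes, so positions reach glDrawElements() through glVertexAttribPointer instead of glEnableClientState. A minimal sketch, assuming a linked program and client-side vertex/index arrays (the attribute name is illustrative):

      // Bind the shader's position attribute to the vertex data.
      GLint posLoc = glGetAttribLocation(program, "a_position");
      glEnableVertexAttribArray(posLoc);
      glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE,
                            3 * sizeof(GLfloat), vertices);

      // Now the indices can assemble primitives from that attribute.
      glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);

    The vertex shader reads the position from its own declared input (e.g. attribute vec4 a_position), which is also the answer to point 1: nothing is provided automatically because the shader declares whatever inputs it wants.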

    Read the article

  • Draw a contour around an object in OpenGL

    - by Maciekp
    I need to draw a contour around 2D objects in 3D space. I tried drawing lines around the object (plus points to fill the gaps), but due to the line width some part of it (~50%) was covering the object. I tried to use the stencil buffer to eliminate this problem, but I got something like this (the contour is green): http://goo.gl/OI5uc (sorry, I can't post images due to my reputation). You can see (where the arrow points) that some parts of the line are behind the object and some are above. This changes when I move the camera, but there is always some part covering the object. Here is the code I use for drawing the objects:

      glColorMask(1, 1, 1, 1);
      std::list<CObjectOnScene*>::iterator objIter = ptr->objects.begin(),
                                           objEnd  = ptr->objects.end();
      int countStencilBit = 1;
      while (objIter != objEnd) {
          glColorMask(1, 1, 1, 1);
          glStencilFunc(GL_ALWAYS, countStencilBit, countStencilBit);
          glStencilOp(GL_REPLACE, GL_KEEP, GL_REPLACE);
          (*objIter)->DrawYourVertices();

          glStencilFunc(GL_NOTEQUAL, countStencilBit, countStencilBit);
          glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
          (*objIter)->DrawYourBorder();

          ++objIter;
          ++countStencilBit;
      }

    I've tried different settings of the stencil buffer, but I always get something like that. My questions: 1. Am I setting the stencil buffer up wrong? 2. Are there any other simple ways to create a contour on such objects? Thanks in advance.

    Read the article

  • How to reference or connect a variable to another class without a stack overflow?

    - by SystemNetworks
    I really need to re-arrange all my functions. I created a class with all my vars, booleans, ints, doubles and other things, and I created every variable anew in it so the other classes can reference them without an error. If you're asking why: when I reference my main class's vars from my subclass directly, it gives me a stack overflow! In my main class I create my subclass:

      subClass s = new subClass();

    Then I assign my fake variable from my real variable. For example, this is my subclass variable (I call it fake):

      public int x = 0;

    In my main class, I set it like this:

      s.x = x;

    The problem is, it does not work! Maybe this is not the right place, but I can't ask any questions on Stack Overflow because they banned me. If my main class references my subclass and my subclass references my main class, it gives me a stack overflow. How do I stop it?
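    A stack overflow from wiring two classes together almost always means the constructors are creating each other in a loop: the main class's constructor does new subClass(), whose constructor does new mainClass(), and so on forever. The usual fix is to create each object once and pass a reference across instead of constructing a fresh instance. A hedged sketch of that pattern in C++ (class names are illustrative):

      class Sub;  // forward declaration

      class Main {
      public:
          int x = 0;
          void attach(Sub* s) { sub = s; }  // receive a reference; never 'new Sub' here
      private:
          Sub* sub = nullptr;
      };

      class Sub {
      public:
          // Takes the existing Main instead of constructing one,
          // which is what would recurse forever.
          explicit Sub(Main* m) : main(m) {}
          int readX() const { return main->x; }  // always current; no copying needed
      private:
          Main* main;
      };

      int main() {
          Main game;
          Sub  sub(&game);    // one instance of each, wired together once
          game.attach(&sub);
          game.x = 42;
          return sub.readX(); // sees 42 without any 's.x = x' re-assignment
      }

    Sharing one instance this way also removes the need to copy values back and forth, which is the part that "does not work" above: a copied int never updates itself.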

    Read the article

  • How to render retro-style pixel graphics from 3D models?

    - by momijigari
    I was wondering if there's a possibility to render retro-pixel-like graphics from a 3D model in real time. I'm talking about Starfarer-like graphics; I know that game is hand-drawn 2D, but what if I need 3D objects with the same aesthetics? I'm currently working with Flash, but I don't need any ready-made solutions; I just want to understand the principle, from any other platform if there is one. So if anybody has met anything like this I would appreciate your help. (If it's not possible to do in real time, I could at least pre-render a sequence of sprites. It would be much better than creating hundreds of hand-drawn ones.)
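    The usual principle is to render the model into a very small offscreen target and scale it up with nearest-neighbour filtering, so each low-resolution texel becomes one crisp, chunky "pixel". A hedged sketch in desktop OpenGL terms (the FBO, the scene draw, and the screen size variables are assumed to exist):

      // Render the model into a tiny framebuffer, e.g. 64x64...
      glBindFramebuffer(GL_FRAMEBUFFER, lowResFbo);
      glViewport(0, 0, 64, 64);
      drawModel();

      // ...then blow it up onto the screen with no smoothing.
      glBindFramebuffer(GL_READ_FRAMEBUFFER, lowResFbo);
      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
      glBlitFramebuffer(0, 0, 64, 64,                    // source: the tiny image
                        0, 0, screenW, screenH,          // destination: the window
                        GL_COLOR_BUFFER_BIT, GL_NEAREST); // nearest = hard pixel edges

    Quantizing the colors to a small palette in a fragment shader pushes the result further toward the hand-drawn look, and the same low-res-then-upscale idea applies on any platform, Flash included.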

    Read the article

  • Changing coordinate system from Z-up to Y-up

    - by Jari Komppa
    Blender's coordinate system is different from what I'm used to, in that Z points upwards instead of Y. What would be the simplest way of converting all the world data (so that all animations, texture coordinates, etc still work) so that Y points upwards? Clarification: Object positions are defined as matrices, so just switching translation/rotation/scale information in matrices is not a trivial task.
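    For the record, one standard way to state the conversion (a sketch; match the row/column and handedness conventions to your engine): rotating everything by -90 degrees about X takes Z-up to Y-up while preserving handedness, and each object matrix is conjugated by that rotation so transforms keep composing correctly:

      R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix},
      \qquad (x,\, y,\, z) \mapsto (x,\, z,\, -y),
      \qquad M' = R\, M\, R^{-1}

    Applying M' = R M R^{-1} to every node matrix (and R alone to loose points and normals) converts positions, rotations, scales and animation channels in one pass; texture coordinates are untouched.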

    Read the article

  • Can i change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object oriented and I don't like the way some of the calls are structured. The example has the following pseudo structure:

      Register window class
      Change display settings with a DEVMODE  *
      Adjust window rect
      Create window
      Get DC
      Find closest matching pixel format      *
      Set the pixel format to closest match   *
      Create rendering context                *
      Make that context current               *
      Show the window
      Set it to foreground
      Set it to having focus
      Resize the GL scene                     *
      Init GL                                 *

    The starred points are what I want to move into a rendering class (the rest are what I see as pure Win32 calls), but I'm not sure if I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to: "Would OpenGL allow these methods to be called in this order?"

      Register window class
      Adjust window rect
      Create window
      Get DC
      Show the window
      Set it to foreground
      Set it to having focus
      Change display settings with a DEVMODE
      Find closest matching pixel format
      Set the pixel format to closest match
      Create rendering context
      Make that context current
      Resize the GL scene
      Init GL

    (obviously passing through the appropriate window handles and device contexts.) Thanks in advance.

    Read the article

  • How to keep the display tick rate steady when using continuous collision detection?

    - by nas Ns
    (I've just found out about this forum.) I hope it is OK to repost my question here; I posted it at Stack Overflow, but it looks like I might get better help here. Here is the question:

    I've implemented a basic particle motion simulation with continuous collision detection, but there is a small issue in the display. Assume the simple case of circles moving inside a square. All collisions are elastic, there is no friction, no forces are involved and no gravity, so when a particle is moving, it always moves at constant speed between collisions.

    What I do now is this: let the simulation time step be 1 second (for example). This is the time step by which the simulation is advanced before displaying the new state, unless there is a collision sooner than that. At the start of each time step, the time of the next collision between any particles, or between a particle and a wall, is determined; call this the TOC time. Let's say the TOC was .5 seconds in this case. Since the TOC is smaller than the standard time step, the system is moved forward by the TOC and the new system is displayed, so that the display shows any collisions as just taking place (say 2 circles just touched each other, or a circle just touched a wall). Next, the collision(s) are resolved (i.e. speeds updated, directions changed, etc.) and a new step is started.

    Now assume there is no collision detected within the next 1 second (those 2 circles above will not be in collision any more, even though they are still touching, since their speeds show they are now moving apart). Hence, the simulation time is advanced by the full standard time step of one second, the particles are moved on the screen using 1 second of simulation time, and the new display is shown.

    You see what has just happened: one frame ran for .5 seconds, but the next frame runs for 1 second; maybe the 3rd frame is displayed after 2 seconds, maybe the 4th frame is displayed after 2.8 seconds (because the TOC was .8 seconds then), and so on. What happens is that the motion of a particle on the screen appears to speed up or slow down, even though it is moving at constant speed and was not even involved in a collision. Looking at one particle on its own, I see it suddenly speeding up or slowing down because another particle hit a wall. This is because the display tick is not uniform: the changing frame rate gives the false illusion that a particle is moving at non-constant speed while in fact it is moving at constant speed. The motion on the screen is not smooth, since the screen is not updated at a constant rate.

    I am not able to figure out how to fix this. If I want to show 2 particles at the moment of the collision, I must draw the screen at different times. Drawing the screen always at the same tick interval results in seeing the 2 particles before the collision and then after the collision, never just as they collide, which looked bad when I tried it. So, how do real games handle this issue? How do you display things in order to show collisions when they happen, yet keep the display tick constant? These 2 requirements seem to contradict each other.
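    The usual answer is to decouple the physics clock from the display clock: within each fixed display interval, advance the simulation collision-by-collision (using the TOC exactly as described) until the interval is used up, and only then draw. Collisions are still resolved at their exact times; they just happen inside the frame instead of forcing a draw of their own. A compact C++ sketch of that loop, with the simulation functions assumed:

      #include <algorithm>

      double timeOfNextCollision();       // assumed: TOC query for the whole system
      void   advanceParticles(double dt); // assumed: constant-velocity move
      void   resolveCollisions();         // assumed: update speeds/directions
      void   render();                    // assumed: draw the current state

      const double FRAME_DT = 1.0 / 60.0; // constant display tick

      // Advance physics by exactly FRAME_DT, resolving every collision
      // that occurs inside the interval, then render once.
      void stepFrame() {
          double remaining = FRAME_DT;
          while (remaining > 0.0) {
              double toc = timeOfNextCollision();
              double dt  = std::min(toc, remaining);
              advanceParticles(dt);
              if (toc <= remaining)
                  resolveCollisions();
              remaining -= dt;
          }
          render();  // always called at a uniform interval
      }

    A frame drawn this way may show two circles already separating a hair after contact rather than exactly touching; at 60 frames per second that is imperceptible, which is why real games accept it.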

    Read the article

  • Simple 2 player server

    - by Sourabh Lal
    I have recently started learning JavaScript and HTML, and I have developed simple 2-player games such as tic-tac-toe, battleship, and dots-and-boxes. However, these 2-player games can only be played on one computer (i.e. the 2 players must sit together). I want to modify this so that one can play with a friend on a different computer. Any suggestions on how this is possible? Also, since I am a beginner, please do not assume that I know all the jargon.

    Read the article

  • Bump mapping with 2 normal maps

    - by DorkMonstuh
    I was wondering if it's actually possible to do bump mapping with 2 normal maps... I have tried doing it this way, however I get a function overload error on max and dot:

      uniform sampler2D n_mapTex;
      uniform sampler2D n_mapTex2;
      uniform sampler2D refTex;
      varying mediump vec2 TexCoord;
      varying mediump float vTime;

      void main()
      {
          mediump vec4 wave  = texture2D(n_mapTex,  TexCoord - vTime);
          mediump vec4 wave2 = texture2D(n_mapTex2, TexCoord + vTime);
          mediump vec4 bump  = mix(wave2, wave, 0.5);

          // this extracts the normals from the combined normal maps
          mediump vec4 normal = normalize(bump.xyzw * 2.0 - 1.0);

          // determines light position
          mediump vec3 lightPos = normalize(vec3(0.0, 1.0, 3.0));
          mediump float diffuse = max(dot(normal, lightPos), 0.0);

          gl_FragColor = mix(texture2D(refTex, TexCoord), bump, 0.5);
      }

    Read the article

  • Building dynamic bounding box hierarchies

    - by adivasile
    I've been reading about collision detection and I saw that the first part is a coarse detection phase which generates possible contacts using bounding box hierarchies. I understand the concept of splitting your objects up into groups to speed up the detection phase, but I'm a little confused about how you actually build the hierarchy, and more so about what criteria are used to group objects together. Do I iterate through all the objects in the scene and check the distance between them to see where they should be inserted in the tree? Do you know of some resources that may shed some light on this topic for me?
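    A common construction is top-down median splitting: wrap all the objects in one box, pick that box's longest axis, sort the objects by their center along that axis, split the list in half, and recurse. A hedged C++ sketch of the idea (the AABB layout is illustrative):

      #include <algorithm>
      #include <vector>

      struct AABB { float min[3], max[3]; };

      AABB merge(const AABB& a, const AABB& b) {
          AABB r;
          for (int i = 0; i < 3; ++i) {
              r.min[i] = std::min(a.min[i], b.min[i]);
              r.max[i] = std::max(a.max[i], b.max[i]);
          }
          return r;
      }

      struct Node {
          AABB bounds;
          std::vector<int> objects;  // object indices; filled at leaves only
          Node* left = nullptr;
          Node* right = nullptr;
      };

      // Build by recursively splitting at the median of the longest axis.
      // 'objs' must be non-empty; 'boxes[i]' is the AABB of object i.
      Node* build(std::vector<int> objs, const std::vector<AABB>& boxes) {
          Node* node = new Node;
          node->bounds = boxes[objs[0]];
          for (int i : objs) node->bounds = merge(node->bounds, boxes[i]);
          if (objs.size() <= 2) { node->objects = objs; return node; }

          // pick the axis along which this node's box is widest
          int axis = 0;
          float best = 0.0f;
          for (int a = 0; a < 3; ++a) {
              float extent = node->bounds.max[a] - node->bounds.min[a];
              if (extent > best) { best = extent; axis = a; }
          }
          // median split: order by center on that axis, then halve the list
          std::sort(objs.begin(), objs.end(), [&](int l, int r) {
              return boxes[l].min[axis] + boxes[l].max[axis]
                   < boxes[r].min[axis] + boxes[r].max[axis];
          });
          std::vector<int> lo(objs.begin(), objs.begin() + objs.size() / 2);
          std::vector<int> hi(objs.begin() + objs.size() / 2, objs.end());
          node->left  = build(lo, boxes);
          node->right = build(hi, boxes);
          return node;
      }

    So the grouping criterion is simply spatial proximity along one axis at a time; for moving objects the tree is either rebuilt each frame (it is cheap) or the leaf boxes are refitted in place.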

    Read the article

  • Correcting Lighting in Stencil Reflections

    - by Reanimation
    I'm just playing around with OpenGL, seeing how different methods of making shadows and reflections work. I've been following this tutorial, which describes using GLUT_STENCIL and masks to create a reasonable interpretation of a reflection. Following that, and after a bit of tweaking to get things to work, I've come up with the code below. Unfortunately, the lighting isn't correct when the reflection is created.

      glPushMatrix();
      plane();  // draw the plane that the reflection appears on

      glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
      glDepthMask(GL_FALSE);
      glEnable(GL_STENCIL_TEST);
      glStencilFunc(GL_ALWAYS, 1, 0xFFFFFFFF);
      glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
      plane();  // draw the plane that acts as the clipping area for the reflection

      glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
      glDepthMask(GL_TRUE);
      glStencilFunc(GL_EQUAL, 1, 0xFFFFFFFF);
      glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
      glDisable(GL_DEPTH_TEST);

      glPushMatrix();
      glScalef(1.0f, -1.0f, 1.0f);
      glTranslatef(0, 2, 0);
      glRotatef(180, 0, 1, 0);
      sphere(radius, spherePos);  // draw the object that you want to have a reflection
      glPopMatrix();

      glEnable(GL_DEPTH_TEST);
      glDisable(GL_STENCIL_TEST);
      sphere(radius, spherePos);  // draw the object that creates the reflection
      glPopMatrix();

    It looked really cool to start with; then I noticed that the light in the reflection isn't correct. I'm not sure it's even a simple fix, because effectively the reflection is also a sphere, but I thought I'd ask here nonetheless. I've tried various rotations (seen above where the sphere is first drawn) but it doesn't seem to work. I figure it needs to rotate around the Y and Z axes, but that's not correct. Have I implemented the rotation wrong, or is there a way to correct the lighting?
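    One likely cause: in the fixed-function pipeline, a light's position is transformed by the modelview matrix at the moment glLightfv(GL_LIGHT0, GL_POSITION, ...) is called, so the mirrored sphere is being lit by an un-mirrored light. A hedged fix is to re-specify the light inside the reflected transform (lightPos here is assumed to be the same float[4] used for the normal pass):

      glPushMatrix();
      glScalef(1.0f, -1.0f, 1.0f);                  // same mirror transform as the sphere
      glLightfv(GL_LIGHT0, GL_POSITION, lightPos);  // the light is now mirrored too
      sphere(radius, spherePos);                    // reflected sphere, correctly lit
      glPopMatrix();
      glLightfv(GL_LIGHT0, GL_POSITION, lightPos);  // restore the light for normal drawing

    A mirror transform also flips triangle winding, so wrapping the reflected draw in glFrontFace(GL_CW) / glFrontFace(GL_CCW) is often needed if face culling is enabled.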

    Read the article

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation:

      a = [2,6]
      b = [1,4]
      c = [0,8]

      a . b . c = (2*6) + (1*4) + (0*8) = 12 + 4 + 0 = 16

    What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we times a unit vector by to get a vector that has a scaled-up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows:

      sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy )
      sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8 )
      sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64 )
      sqrt( 40 ) + sqrt( 17 ) + sqrt( 64 )
      6.3 + 4.1 + 8
      10.4 + 8
      18.4

    So I don't really get this diagram. Attempting with sensible numbers:

      a = [1,0]
      b = [4,3]

      a . b = (1*0) + (4*3) = 0 + 12 = 12

    So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0]

      sqrt( x*x + y*y )
      sqrt( 4*4 + 0*0 )
      sqrt( 16 + 0 )
      4

    So what is 12 describing?
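    For reference, the standard definition being asked about: the dot product multiplies corresponding components across the two vectors and sums them, and it relates to the angle between them,

      \mathbf{a} \cdot \mathbf{b} = a_x b_x + a_y b_y
                                  = \lVert \mathbf{a} \rVert \, \lVert \mathbf{b} \rVert \cos\theta

    so with a = [1,0] and b = [4,3], a . b = 1*4 + 0*3 = 4. Since a is a unit vector, that 4 is the length of b's shadow (projection) onto a: b has magnitude 5 and cos(theta) = 4/5. The 16 and 12 worked out above instead multiply the components within each single vector, which is why the results don't correspond to anything geometric.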

    Read the article
