Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.


  • How to shift a vector based on the rotation of another vector?

    - by bpierre
    I'm learning 2D programming, so excuse my approximations, and please, don't hesitate to correct me. I am just trying to fire a bullet from a player. I'm using HTML canvas (top-left origin). Here is a representation of my problem: the black vector represents the position of the player (the grey square), the green vector represents its direction, and the red disc represents the target. The red vector represents the direction of a bullet, which will move towards the target (red dotted line). The blue cross represents the point from which I really want to fire the bullet (and the blue dotted line represents its movement). This is how I draw the player (this is the player object; position, direction and dimensions are 2D vectors):

        ctx.save();
        ctx.translate(this.position.x, this.position.y);
        ctx.rotate(this.direction.getAngle());
        ctx.drawImage(this.image, Math.round(-this.dimensions.x/2), Math.round(-this.dimensions.y/2), this.dimensions.x, this.dimensions.y);
        ctx.restore();

    This is how I instantiate a new bullet:

        var bulletPosition = playerPosition.clone(); // Copy of the player position
        var bulletDirection = Vector2D.substract(targetPosition, playerPosition).normalize(); // Difference between the player and the target, normalized
        new Bullet(bulletPosition, bulletDirection);

    This is how I move the bullet (in the bullet object):

        var speed = 5;
        this.position.add(Vector2D.multiply(this.direction, speed));

    And this is how I draw the bullet (in the bullet object):

        ctx.save();
        ctx.translate(this.position.x, this.position.y);
        ctx.rotate(this.direction.getAngle());
        ctx.fillRect(0, 0, 3, 3);
        ctx.restore();

    How can I change the direction and position vectors of the bullet to ensure it stays on the blue dotted line? I think I should represent the shift with a vector, but I can't see how to use it.
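
    A minimal sketch of the usual fix, written in C++ for concreteness (Vec2, rotate, and the offset value are illustrative stand-ins, not the question's Vector2D API): express the blue cross as an offset in the player's local space, rotate that offset by the player's angle, and add it to the player position before computing the bullet direction.

        #include <cmath>

        struct Vec2 { float x, y; };

        // Rotate v by angle (radians) around the origin.
        Vec2 rotate(Vec2 v, float angle) {
            float c = std::cos(angle), s = std::sin(angle);
            return { v.x * c - v.y * s, v.x * s + v.y * c };
        }

        // localOffset is the blue cross measured relative to the sprite centre,
        // e.g. {10, -6} for "10 ahead, 6 to one side" in the player's local frame.
        Vec2 muzzlePosition(Vec2 playerPos, float playerAngle, Vec2 localOffset) {
            Vec2 w = rotate(localOffset, playerAngle);
            return { playerPos.x + w.x, playerPos.y + w.y };
        }

    The bullet then spawns at the muzzle position and is aimed with normalize(target - muzzle), which keeps it on the blue dotted line.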

    Read the article

  • How to pass one float as four unsigned chars to shader by glVertexAttribPointer?

    - by Kog
    For each vertex I use two floats as position and four unsigned bytes as color. I want to store all of them in one array, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came down to this:

        GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 };
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices);

        // VER1 - draws red triangle
        // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff };
        // glEnableVertexAttribArray(1);
        // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors);

        // VER2 - draws greenish triangle (not "pure" green)
        // float f = 255 << 24 | 255; // Hex: 0xff0000ff
        // float colors2[] = { f, f, f };
        // glEnableVertexAttribArray(1);
        // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors2);

        // VER3 - draws red triangle
        int i = 255 << 24 | 255; // Hex: 0xff0000ff
        int colors3[] = { i, i, i };
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3);

        glDrawArrays(GL_TRIANGLES, 0, 3);

    The above code is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are ones I read by inspecting the variable during debugging. They are equal for versions 2 and 3, so what causes the difference?
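
    A hedged note on what goes wrong in VER2: float f = 255 << 24 | 255; converts the integer value (about 4.28e9) to the nearest representable float, rounding it; it does not reuse the bit pattern, so the bytes the GPU reads are not 0xff0000ff. A bit reinterpretation, which VER3 gets implicitly by storing an int, is sketched below with memcpy (little-endian byte order assumed, matching how GL_UNSIGNED_BYTE components are read):

        #include <cstdint>
        #include <cstring>

        float packRGBA(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
            uint32_t bits = (uint32_t)a << 24 | (uint32_t)b << 16 |
                            (uint32_t)g << 8  | (uint32_t)r;
            float f;
            std::memcpy(&f, &bits, sizeof f); // reinterpret the bits, don't convert the value
            return f;
        }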

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats. My questions:

    1. What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    2. How should bitmaps be stored for high-performance algorithms, so that read/write times are fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)
    3. How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)

    Some possible methods (a layout sketch for the first follows below):

    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (which could be 1-10 MB).
    - Use a form of "sparse" data storage where each line of the bitmap is allocated separately, allowing memory to be reused and requiring smaller contiguous memory segments.
    - Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
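
    An illustrative sketch of that first method, not from the post: 32-bpp pixels in one contiguous allocation with an explicit stride, so rows can be padded or not independently of the width.

        #include <cstdint>
        #include <vector>

        struct Bitmap32 {
            int width, height;
            int stride;                    // bytes per row, >= width * 4
            std::vector<uint8_t> data;     // height * stride bytes, one chunk

            Bitmap32(int w, int h)
                : width(w), height(h), stride(w * 4),
                  data(static_cast<size_t>(h) * static_cast<size_t>(w) * 4) {}

            uint32_t* row(int y) {         // pointer access to a scanline
                return reinterpret_cast<uint32_t*>(
                    data.data() + static_cast<size_t>(y) * stride);
            }
        };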

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3ds Max 2013? I have two vertices (actually several) which I wish to connect with a line to create an edge. I have tried everything I can think of and done several Google searches, but they only turn up the method for older versions, which says to use the "Connect" button... and I can't find the Connect button in my version (see below). This is what my menu looks like: These are the vertices I'm trying to connect: Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadow-mapping shaders. The shadow map rendering shader is fine and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position:

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_pos : TEXCOORD2;
            float3 normal : NORMAL;
            float2 texcoord : TEXCOORD;
        };

    The reason I used the semantic 'TEXCOORD2' on the light's pixel position is that I believe the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolation. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than, the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code:

        struct Vertex
        {
            float3 position : POSITION;
            float3 normal : NORMAL;
            float2 texcoord : TEXCOORD;
        };

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_pos : TEXCOORD2;
            float3 normal : NORMAL;
            float2 texcoord : TEXCOORD;
        };

        cbuffer Camera : register (b0)
        {
            matrix world;
            matrix view;
            matrix projection;
        };

        cbuffer Light : register (b1)
        {
            matrix light_world;
            matrix light_view;
            matrix light_projection;
        };

        Pixel RenderVertexShader(Vertex input)
        {
            Pixel output;
            output.position = mul(float4(input.position, 1.0f), world);
            output.position = mul(output.position, view);
            output.position = mul(output.position, projection);
            output.light_pos = mul(float4(input.position, 1.0f), world);
            output.light_pos = mul(output.light_pos, light_view);
            output.light_pos = mul(output.light_pos, light_projection);
            output.texcoord = input.texcoord;
            output.normal = input.normal;
            return output;
        }

    I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader and had the same problem. The problem is evident, as both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the validity of the matrices, the cbuffers, and the multiplications. I'm very stumped and have been trying to solve this for quite some time. Misc info: the light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost of my question on Stack Overflow.)
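
    A hedged observation, not part of the original post: with n = 0.1 and f = 1000, post-projection depth is extremely nonlinear, so z/w values like 0.999 may be expected rather than wrong. For a D3DXMatrixPerspectiveFovLH-style projection, the normalized depth of a point at view-space distance z is

        z_ndc = \frac{f}{f - n}\left(1 - \frac{n}{z}\right)

    which already gives about 0.990 at z = 10 and 0.999 at z = 100. A near-unity depth for most of a large scene is therefore normal; what matters is that the shadow-map comparison uses the same convention (and the same divide by w) on both sides.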

    Read the article

  • Rendering transparent textures in DirectX

    - by Vibhore Tanwer
    I am working on a DirectX application with WPF, and I am facing a problem with videos and images that contain transparent pixels: I have to draw a color in the background and then a video/image over it. What I expect is that the background color should be visible while the video plays, with only non-transparent pixels drawn over it, but what I get is a black background behind the video. I am using the following settings on the device to achieve alpha blending:

        device.RenderState.SourceBlend = Blend.SourceAlpha;
        device.RenderState.DestinationBlend = Blend.InvSourceAlpha;
        device.RenderState.AlphaBlendEnable = true;

    What am I missing here? What is the best approach to handling transparent videos? Any help will be of great value to me.
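
    A sketch of the equivalent fixed-function setup in C++/D3D9, plus the texture-stage state that is often the missing piece when a frame's alpha channel is ignored. Whether decoded video frames actually carry alpha depends on the codec; many pipelines hand back opaque RGB, in which case a colour key or a separate mask is needed instead.

        #include <d3d9.h>

        void enableTextureAlphaBlend(IDirect3DDevice9* device) {
            device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
            device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
            device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
            // take alpha from the texture rather than the vertex colour
            device->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
            device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
        }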

    Read the article

  • Box2D Collision Question

    - by Farooq Arshed
    I am very new to the Box2D physics world. I want to know how to collide two bodies when one is dynamic and the other is kinematic. The whole scenario is explained below: I have three balls in total. I want two balls to remain in their places and the third ball to be able to move. When the third ball hits the other two, they should move according to the speed and direction with which they were hit. The gravity of my world is 0 because I only want z-axis gravity. I would also like someone to point me towards some good tutorials on Box2D basics that are language independent. I hope I have explained my scenario well. Thanks for the help in advance.
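
    A hedged sketch (C++, Box2D 2.4-style headers; positions and tuning values are illustrative): kinematic bodies never respond to contacts, so the usual approach is to make all three balls dynamic in a zero-gravity world. The two "resting" balls then stay put until the moving ball transfers momentum to them.

        #include <box2d/box2d.h>

        b2Body* makeBall(b2World& world, float x, float y) {
            b2BodyDef def;
            def.type = b2_dynamicBody;            // dynamic, so it reacts when hit
            def.position.Set(x, y);
            b2Body* body = world.CreateBody(&def);

            b2CircleShape circle;
            circle.m_radius = 0.5f;

            b2FixtureDef fixture;
            fixture.shape = &circle;
            fixture.density = 1.0f;
            fixture.restitution = 0.9f;           // bouncy, billiard-style response
            body->CreateFixture(&fixture);
            return body;
        }

        int main() {
            b2World world(b2Vec2(0.0f, 0.0f));    // no world gravity, as in the question
            makeBall(world, 0.0f, 0.0f);          // the two resting balls
            makeBall(world, 1.5f, 0.0f);
            b2Body* cue = makeBall(world, -3.0f, 0.0f);
            cue->SetLinearVelocity(b2Vec2(10.0f, 0.0f));
            for (int i = 0; i < 60; ++i) world.Step(1.0f / 60.0f, 8, 3);
        }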

    Read the article

  • Find Nearest Object

    - by ultifinitus
    I have a fairly sizable game engine created, and I'm adding some needed features, such as this one: how do I find the nearest object from a list of points? In this case, I could simply use the Pythagorean theorem to find the distance and check the results. I know I can't simply add x and y, because that only gives the distance to the object if you took right-angle turns. However, I'm wondering if there's something else I could do. I also have a collision system, where essentially I turn objects into smaller objects on a smaller grid, kind of like a minimap, and only if objects exist in the same grid space do I check for collisions. I could do the same thing, only making the grid space larger, to check for closeness (rather than checking every. single. object). However, that would take additional setup in my base class and clutter up the already cluttered object. TL;DR question: is there something efficient and accurate I can use to detect which object is closest, based on a list of points and sizes?
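
    A minimal sketch of the standard answer: for "which is closest" the square root is never needed, because comparing squared distances gives the same ordering. Obj and the brute-force scan are illustrative; the same comparison works per grid cell.

        #include <vector>

        struct Obj { float x, y; };

        const Obj* nearest(const std::vector<Obj>& objs, float px, float py) {
            const Obj* best = nullptr;
            float bestD2 = 0.0f;
            for (const Obj& o : objs) {
                float dx = o.x - px, dy = o.y - py;
                float d2 = dx * dx + dy * dy;          // squared distance, no sqrt
                if (!best || d2 < bestD2) { best = &o; bestD2 = d2; }
            }
            return best;
        }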

    Read the article

  • How to handle character/item state operations with binary flags in C++?

    - by Piperoman
    I have the following problem. An item can have a lot of states:

        NORMAL   = 0000000
        DRY      = 0000001
        HOT      = 0000010
        BURNING  = 0000100
        WET      = 0001000
        COLD     = 0010000
        FROZEN   = 0100000
        POISONED = 1000000

    An item can have several states at the same time, but not all combinations are possible:

    - It is impossible to be DRY and WET at the same time.
    - If you apply COLD to a WET item, it turns FROZEN.
    - If you apply HOT to a WET item, it turns NORMAL.
    - An item can be BURNING and POISONED.
    - Etc.

    I have tried setting binary flags for the states and using AND to combine different states, checking beforehand whether a combination is possible or whether it should change to another status. Does there exist a concrete approach to solving this problem efficiently, without an interminable switch that checks every state against every new state? It is relatively easy to check 2 different states, but once a third state exists it is not trivial to do.
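
    One hedged way to avoid the interminable switch, sketched in C++ (the rule set mirrors the examples above and is illustrative): express each interaction as "required bits, cleared bits, set bits" and apply the rules in a loop, so adding a state means adding rows, not branches.

        #include <cstdint>

        enum State : uint8_t {
            NORMAL = 0,      DRY  = 1 << 0, HOT    = 1 << 1, BURNING  = 1 << 2,
            WET    = 1 << 3, COLD = 1 << 4, FROZEN = 1 << 5, POISONED = 1 << 6,
        };

        struct Rule { uint8_t required, clear, set; };

        constexpr Rule rules[] = {
            { WET | COLD, WET | COLD, FROZEN },  // COLD on a WET item -> FROZEN
            { WET | HOT,  WET | HOT,  NORMAL },  // HOT on a WET item  -> NORMAL
            { DRY | WET,  DRY,        0      },  // DRY and WET cannot coexist
        };

        uint8_t applyRules(uint8_t s) {
            for (const Rule& r : rules)
                if ((s & r.required) == r.required)
                    s = (s & ~r.clear) | r.set;  // clear the old bits, set the new
            return s;
        }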

    Read the article

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation:

        a = [2,6]
        b = [1,4]
        c = [0,8]

        a . b . c = (2*6) + (1*4) + (0*8) = 12 + 4 + 0 = 16

    What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we multiply a unit vector by to get a vector that has a scaled-up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows:

        sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy )
        sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8 )
        sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64 )
        sqrt( 40 ) + sqrt( 17 ) + sqrt( 64 )
        6.3 + 4.1 + 8
        10.4 + 8
        18.4

    So I don't really get this diagram. Attempting with sensible numbers:

        a = [1,0]
        b = [4,3]

        a . b = (1*0) + (4*3) = 0 + 12 = 12

    So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0]

        sqrt( x*x + y*y )
        sqrt( 4*4 + 0*0 )
        sqrt( 16 + 0 )
        4

    So what is 12 describing?
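
    A short math note, not part of the original question: the dot product pairs components across the two vectors, never within one vector, so in 2D

        a \cdot b = a_x b_x + a_y b_y = \|a\| \, \|b\| \cos\theta

    With the "sensible numbers" above, [1,0] . [4,3] = 1*4 + 0*3 = 4, which is exactly the projected length computed at the end of the question; the 12 comes from pairing the components the wrong way round.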

    Read the article

  • Creating a 2D Line Branch (Part 2)

    - by Danran
    Yesterday I asked this question on how to create a 2D line branch: Creating a 2D Line Branch. Thanks to the answers provided, I now have a nice-looking main branch (coloured to show the different segments in the final item). Now it is time to branch things off, as discussed in the article: http://drilian.com/2009/02/25/lightning-bolts/ Again, however, I am confused as to the meaning of the following pseudo code:

        splitEnd = Rotate(direction, randomSmallAngle)*lengthScale + midPoint;

    I'm unsure how to actually perform this rotation correctly. In all honesty I'm a bit unsure what to do completely at this part: "splitEnd" will be a Vector3, so whatever happens in the Rotate function must return some form of rotated direction, which is then multiplied by a scale to create length and then added to the midpoint. I'm not sure. If someone could explain what I'm meant to be doing in this part, I would be really grateful.
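
    A hedged 2D sketch of what that pseudo code does (splitEnd is a Vector3 in the article, but the operation is the same per component; the angle range is illustrative): rotate the segment's direction by a small random angle, shrink it with lengthScale, and offset it from the midpoint.

        #include <cmath>
        #include <cstdlib>

        struct Vec2 { float x, y; };

        Vec2 rotate(Vec2 v, float angle) {               // standard 2D rotation
            float c = std::cos(angle), s = std::sin(angle);
            return { v.x * c - v.y * s, v.x * s + v.y * c };
        }

        Vec2 splitEnd(Vec2 direction, Vec2 midPoint, float lengthScale) {
            // random small angle, roughly +/- 23 degrees
            float angle = ((std::rand() / (float)RAND_MAX) - 0.5f) * 0.8f;
            Vec2 r = rotate(direction, angle);           // Rotate(direction, randomSmallAngle)
            return { r.x * lengthScale + midPoint.x,     // ... * lengthScale + midPoint
                     r.y * lengthScale + midPoint.y };
        }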

    Read the article

  • Combining pathfinding with global AI objectives

    - by V_Programmer
    I'm making a turn-based strategy game using Java and LibGDX. Now I want to code the AI. I haven't written the AI code yet; I've simply designed it. The AI will have two components: one focused on tactics and resource management (create troops, determine who has the strategic advantage, detect important objectives, etc.) and an individual component, focused on assigning work to each unit, examining its possibilities, and moving the unit. Now I'm facing an important problem. The map where the action takes place is a grid-based map, and each terrain type has a different movement cost. I read about pathfinding, and I think A* is a very good option for determining a good route between two points. However, imagine I have a unit with movement = 5 (i.e., it can move 5 tiles of movement cost = 1). My tactical AI has found an objective at a distance of d = 20 tiles (Manhattan distance) from my unit. My problem is the following: the unit won't be able to reach the objective in one turn, so the AI will have to store a list of positions and execute them over several turns. I don't know how to solve this. PS. In my unit code, I have a list called "selectionMarks" which stores all the possible places where the unit can go this turn. These places are calculated recursively using a "getSelectionMarks" function. Any help is appreciated :D
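
    A hedged sketch of the usual multi-turn approach (Tile, findPath, and movementCost are illustrative stand-ins): run A* once for the whole route, then slice the path into per-turn chunks by spending the unit's movement budget along it.

        #include <vector>

        struct Tile { int x, y; };

        std::vector<Tile> findPath(Tile start, Tile goal); // assumed A*, includes start
        int movementCost(Tile t);                          // terrain cost of entering t

        std::vector<std::vector<Tile>> planTurns(Tile start, Tile goal, int movePerTurn) {
            std::vector<Tile> path = findPath(start, goal);
            std::vector<std::vector<Tile>> turns;
            std::vector<Tile> current;
            int budget = movePerTurn;
            for (size_t i = 1; i < path.size(); ++i) {     // skip the start tile
                int cost = movementCost(path[i]);
                if (cost > budget) {                       // movement spent: next turn
                    turns.push_back(current);
                    current.clear();
                    budget = movePerTurn;
                }
                current.push_back(path[i]);
                budget -= cost;
            }
            if (!current.empty()) turns.push_back(current);
            return turns;
        }

    Only the first chunk needs to be trusted (it can be validated against "selectionMarks"); the rest can simply be re-planned each turn as the situation changes.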

    Read the article

  • Need to make animation whereby the character shatters into a bunch of pieces

    - by theprojectabot
    I would like to take a 3D character model, cut out a bunch of shapes (or a bunch of triangles in the shape of the pieces I want), and then have the pieces separate from each other at the beginning of the animation and fall apart under gravity, so it looks like the model is falling apart in shattered pieces. Is there a way to run a script on a mesh, cut out these pieces, instantiate all of them as separate models, and then run gravity on them during the simulation?

    Read the article

  • How do Rectangle bounds (x, y, width, height) work in libgdx?

    - by JG22
    I can't work out how to use the Rectangle bounds in libgdx. I am currently using the SuperJumper example, which has two or three examples, such as:

        pauseBounds = new Rectangle(320 - 64, 480 - 64, 64, 64);

    This is the pause button in the top right corner.

        resumeBounds = new Rectangle(160 - 96, 240, 192, 36);

    This is the rectangular resume button in the middle of the menu that comes up when the pause button is pressed. Basically my question is aimed at the 320 - 64 and 160 - 96, because I don't know why they are used. I need to create a rectangle that covers the left side of the screen and the same on the right, because I want to create on-screen buttons. I have already created the code for these buttons and managed to get them to work, but I can't move the rectangles to where I want. Thank you if you can help.
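
    A sketch of the arithmetic the question asks about, assuming a 320x480 screen with the origin at the bottom left (SuperJumper's convention) and Rectangle(x, y, width, height) anchored at its bottom-left corner. The struct is a C++ stand-in for libgdx's Rectangle.

        struct Rect { float x, y, w, h; };

        const float SCREEN_W = 320.0f, SCREEN_H = 480.0f;

        // 320 - 64: a 64-wide button whose right edge touches the right screen edge;
        // 480 - 64: its top edge touches the top of the screen.
        Rect pauseBounds  = { SCREEN_W - 64, SCREEN_H - 64, 64, 64 };

        // 160 - 96: a 192-wide button centred horizontally
        // (160 is half the screen width, 96 is half the button width).
        Rect resumeBounds = { SCREEN_W / 2 - 96, 240, 192, 36 };

        // Two buttons covering the left and right halves of the screen:
        Rect leftHalf  = { 0,            0, SCREEN_W / 2, SCREEN_H };
        Rect rightHalf = { SCREEN_W / 2, 0, SCREEN_W / 2, SCREEN_H };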

    Read the article

  • How to get this wavefront .obj data onto the frustum?

    - by NoobScratcher
    I've finally figured out how to get the data from a .obj file and store the vertex positions x, y, z in a structure called Point, with members x, y, z of type float. I want to know how to get this data onto the screen. Here is my attempt at doing so:

        //make a file object and store list and the index of that list in a C string
        ifstream file( list[index].c_str() );

        std::vector<int> faces;
        std::vector<Point> points;
        points.push_back(Point());
        Point p;
        int face[4];

        while ( !file.eof() )
        {
            char modelbuffer[10000];
            //Get lines and store it in line string
            file.getline(modelbuffer, 10000);

            switch(modelbuffer[0])
            {
                case 'v' :
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    cout << "Getting Vertex Positions" << endl;
                    cout << "v" << p.x << endl;
                    cout << "v" << p.y << endl;
                    cout << "v" << p.z << endl;
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 );
                    cout << face[0] << endl;
                    cout << face[1] << endl;
                    cout << face[2] << endl;
                    cout << face[3] << endl;
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }

            GLuint vertexbuffer;
            glGenBuffers(1, &vertexbuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW);
            //glBufferData(GL_ARRAY_BUFFER, sizeof(points), &(points[0]), GL_STATIC_DRAW);
            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexPointer(3, GL_FLOAT, sizeof(points), points.data());
            glIndexPointer(GL_DOUBLE, 0, faces.data());
            glDrawArrays(GL_QUADS, 0, points.size());
            glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
        }

    As you can see, I've clearly failed the end part, but I really don't know why it's not rendering the data onto the frustum. Does anyone have a solution for this?
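
    A hedged sketch of the usual fixes rather than a drop-in replacement: parse the whole file first, convert OBJ's 1-based face indices to 0-based (faces.push_back(face[0] - 1) and so on), size the GL buffer in bytes rather than elements, and draw once after parsing with a single indexed call. GLEW is assumed here to expose the buffer-object entry points.

        #include <vector>
        #include <GL/glew.h>

        struct Point { float x, y, z; };

        void uploadAndDraw(const std::vector<Point>& points,
                           const std::vector<unsigned>& faces) {  // 0-based indices
            GLuint vbo;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // points.size() alone is an element count; GL wants a byte count
            glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(Point),
                         points.data(), GL_STATIC_DRAW);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, sizeof(Point), nullptr); // offset into the VBO
            glDrawElements(GL_QUADS, (GLsizei)faces.size(),
                           GL_UNSIGNED_INT, faces.data());        // client-side indices
        }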

    Read the article

  • How do I find the angle required to point to another object?

    - by Ginamin
    I am making an air combat game where you can fly a ship in 3D space. There is an opponent that flies around as well. When the opponent is not on screen, I want to display an arrow pointing in the direction the user should turn, as such. So I took the camera location and the opponent location and did this:

        double newDirection = atan2(activeCamera.location.y - ship_wrap.location.y, activeCamera.location.x - ship_wrap.location.x);

    After which, I get the position on the circumference of a circle which surrounds my crosshairs, like such:

        trackingArrow.position = point((60*sin(angle)+240), 60*cos(angle)+160);

    It all works fine, except it's the wrong angle! I assume my calculation of the new direction is incorrect. Can anyone help?
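
    A hedged sketch of the two usual culprits (conventions assumed; screen-space y may need a sign flip): the delta should be target minus camera, not the reverse, and the placement should pair cos with x and sin with y so that angle 0 points along +x.

        #include <cmath>

        struct Vec2 { double x, y; };

        Vec2 arrowOnCircle(Vec2 cameraPos, Vec2 targetPos,
                           Vec2 crosshair,   // screen centre, e.g. {240, 160}
                           double radius) {  // e.g. 60
            double angle = std::atan2(targetPos.y - cameraPos.y,
                                      targetPos.x - cameraPos.x);
            return { crosshair.x + radius * std::cos(angle),
                     crosshair.y + radius * std::sin(angle) };
        }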

    Read the article

  • Tools for game script / storyboard

    - by Pietro Polsinelli
    I am searching for a tool that will help with writing a game script. By "script" I mean the text core of a storyboard - without the drawing drafts, which may or may not be there (yet). What I'm thinking of will let me write a piece of text of the script, define a simplified workflow from that step, and then define the text of the next steps, and so on. Searching online, I found Inform, http://inform7.com/ ("A Design System for Interactive Fiction Based on Natural Language"), which in theory is exactly what I am searching for, but when trying to use it, it has this model of a space (a dungeon, a library) where you are picking up objects and exploring them. In my case I am designing more of a Sims-like game, and the flow is entirely different. As for non-specific software, mind-mapping tools miss the linearity of the process. What I am writing is a directed graph - simply a workflow, but I want to design it in a way that is more text-based than workflow-based. So what I'm doing now is using a text editor, which I'll transform directly into code. Any suggestions?

    Read the article

  • Fast determination of whether objects are onscreen in 2D

    - by Ben Ezard
    So currently, I have this in each object's renderer's update method:

        float a = transform.position.x * Main.scale;
        float b = transform.position.y * Main.scale;
        float c = Camera.main.transform.position.x * Main.scale;
        float d = Camera.main.transform.position.y * Main.scale;
        onscreen = a + width - c > 0 && a - c < GameView.width && b + height - d > 0 && b - d < GameView.height;

    transform.position is a 2D vector containing the game engine's definition of where the object is; this is then multiplied by Main.scale to translate that coordinate into actual screen space. Similarly, Camera.main.transform.position is the in-engine representation of where the main camera is, and this is also multiplied by Main.scale. The problem is, as my game is tile-based, thousands of these updates get called every frame just to determine whether or not each object should be drawn. How can I improve this, please?
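
    A hedged micro-optimisation sketch (C++; names are illustrative): hoist the camera terms out of the per-object test so they are computed once per frame, then cull whole grid cells with the same comparison before testing individual objects.

        struct CamBounds { float left, right, bottom, top; };

        // computed once per frame
        CamBounds makeCamBounds(float camX, float camY, float scale,
                                float viewW, float viewH) {
            float c = camX * scale, d = camY * scale;
            return { c, c + viewW, d, d + viewH };
        }

        // per object: four comparisons, no repeated camera multiplies
        bool onscreen(const CamBounds& cam, float x, float y,
                      float w, float h, float scale) {
            float a = x * scale, b = y * scale;
            return a + w > cam.left && a < cam.right &&
                   b + h > cam.bottom && b < cam.top;
        }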

    Read the article

  • Finding the endpoint of a named bone in Irrlicht

    - by Boreal
    I'm making a tank game that will have multiple tanks. I want to be able to define the weapon placements using bones that I can add right inside the modelling program (Blender, to be exact). All tanks will have a bone called Body and a bone called Turret, and then bones with names like Cannon0 and PickupGun, attached to the Turret bone, marking where the shots will be fired from. Is there some way to find the absolute end position of a bone that I choose by name?
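
    A hedged sketch using Irrlicht's joint API (setup assumed; C++): fetch the bone scene node by name and read its absolute transform. Note that getAbsolutePosition() returns the joint's origin; the far end of a bone is typically the position of its child joint, or the origin plus an offset along the bone.

        #include <irrlicht.h>
        using namespace irr;

        core::vector3df jointPosition(scene::IAnimatedMeshSceneNode* tank,
                                      const c8* jointName) {
            tank->setJointMode(scene::EJUOR_READ);  // let the animation drive the joints
            scene::IBoneSceneNode* bone = tank->getJointNode(jointName);
            bone->updateAbsolutePosition();
            return bone->getAbsolutePosition();     // world-space joint origin
        }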

    Read the article

  • How do I draw a scrolling background?

    - by droidmachine
    How can I draw a background tile in my 2D side-scrolling game? Is this loop logical for OpenGL ES? My tile is 2400x480. I also want to use parallax scrolling in my game.

        batcher.beginBatch(Assets.background);
        for(int i = 0; i < 100; i++)
            batcher.drawSprite(0 + 2400*i, 240, 2400, 480, Assets.backgroundRegion);
        batcher.endBatch();

    UPDATE: And that's my onDrawFrame. I'm passing deltaTime for FPS control.

        public void onDrawFrame(GL10 gl) {
            GLGameState state = null;
            synchronized(stateChanged) {
                state = this.state;
            }
            if(state == GLGameState.Running) {
                float deltaTime = (System.nanoTime() - startTime) / 1000000000.0f;
                startTime = System.nanoTime();
                screen.update(deltaTime);
                screen.present(deltaTime);
            }
            if(state == GLGameState.Paused) {
                screen.pause();
                synchronized(stateChanged) {
                    this.state = GLGameState.Idle;
                    stateChanged.notifyAll();
                }
            }
            if(state == GLGameState.Finished) {
                screen.pause();
                screen.dispose();
                synchronized(stateChanged) {
                    this.state = GLGameState.Idle;
                    stateChanged.notifyAll();
                }
            }
        }
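
    A hedged sketch of the usual parallax trick (C++; the framework's batcher is assumed): rather than batching 100 copies, shift each layer by the camera position times a factor below 1 and wrap with fmod, so only two instances of the tile ever need to be drawn (plus one more per side if the viewport is wider than the tile).

        #include <cmath>

        // Returns the x position of the first of two tile instances to draw;
        // the second goes at the returned x + tileWidth. factor < 1 makes the
        // layer scroll slower than the foreground, giving the parallax effect.
        float parallaxTileX(float cameraX, float factor, float tileWidth) {
            float offset = std::fmod(cameraX * factor, tileWidth);
            return cameraX - offset;
        }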

    Read the article

  • Why do my pyramids fade to black and then back to colour again?

    - by geminiCoder
    I have the following vertices and normals:

        GLfloat verts[36] = { -0.5, 0, 0.5,   0, 0, -0.5,   0.5, 0, 0.5,
                               0, 0, -0.5,    0.5, 0, 0.5,  0, 1, 0,
                              -0.5, 0, 0.5,   0, 0, -0.5,   0, 1, 0,
                               0.5, 0, 0.5,  -0.5, 0, 0.5,  0, 1, 0 };

        GLfloat norms[36] = { 0, -1, 0,        0, -1, 0,        0, -1, 0,
                             -1, 0.25, 0.5,   -1, 0.25, 0.5,   -1, 0.25, 0.5,
                              1, 0.25, -0.5,   1, 0.25, -0.5,   1, 0.25, -0.5,
                              0, -0.5, -1,     0, -0.5, -1,     0, -0.5, -1 };

    I am writing my first OpenGL game, but I need to know for sure whether my normals are correct, as the colours aren't rendering correctly: my pyramids are coloured, then fade to black every half rotation, and then back again. My app so far is based on the boilerplate code provided by Apple. Here's my modified setup method:

        [EAGLContext setCurrentContext:self.context];
        [self loadShaders];

        self.effect = [[GLKBaseEffect alloc] init];
        self.effect.light0.enabled = GL_TRUE;
        self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f);

        glEnable(GL_DEPTH_TEST);

        glGenVertexArraysOES(1, &_vertexArray); //create vertex array
        glBindVertexArrayOES(_vertexArray);

        glGenBuffers(1, &_vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
        //create a vertex buffer big enough for both verts and norms and pass NULL as data..
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(norms), NULL, GL_STATIC_DRAW);
        //map the buffer to pass data to it
        uint8_t *ptr = (uint8_t *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
        memcpy(ptr, verts, sizeof(verts)); //copy verts
        memcpy(ptr + sizeof(verts), norms, sizeof(norms)); //copy norms to position after verts
        glUnmapBufferOES(GL_ARRAY_BUFFER);

        glEnableVertexAttribArray(GLKVertexAttribPosition);
        //tell GL where verts are in the buffer
        glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
        glEnableVertexAttribArray(GLKVertexAttribNormal);
        //tell GL where norms are in the buffer
        glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(verts)));

        glBindVertexArrayOES(0);

    And the update method:

        - (void)update
        {
            float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
            GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
            self.effect.transform.projectionMatrix = projectionMatrix;

            GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
            baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);

            // Compute the model view matrix for the object rendered with GLKit
            GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f);
            modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
            modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
            self.effect.transform.modelviewMatrix = modelViewMatrix;

            // Compute the model view matrix for the object rendered with ES2
            modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f);
            modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
            modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);

            _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
            _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

            _rotation += self.timeSinceLastUpdate * 0.5f;
        }

    But provided I understand this correctly, one pyramid is using the GLKit base effect shaders and the other the shaders included in the project. So for both of them to have the same error, I thought it would be the norms?
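
    A hedged note, not from the original post: several of the normals above (for example {-1, 0.25, 0.5}) are not unit length, and lighting with unnormalised normals can over- and under-shoot as the model rotates, which is consistent with this kind of brightness swing. A face normal can be rebuilt from the triangle's vertices and normalised; the helper below is an illustrative C++ sketch.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c) {
            Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
            Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
            Vec3 n = { u.y * v.z - u.z * v.y,     // cross product u x v
                       u.z * v.x - u.x * v.z,
                       u.x * v.y - u.y * v.x };
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            return { n.x / len, n.y / len, n.z / len };
        }

    Winding order determines which way the normal faces; swap u and v if it points into the pyramid.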

    Read the article

  • Why would you use textures that are not a power of 2?

    - by Will
    In the early days of OpenGL and DirectX, it was required that texture dimensions were powers of two. This meant that interpolation of float values could be done very quickly using shifting and such. Since OpenGL 2.0, and preceding that via an extension, non-power-of-two texture dimensions have been supported. Are there performance advantages to sticking to power-of-two textures on modern integrated and discrete GPUs? What advantages do non-power-of-two textures have, if any? Are there large populations of desktop users who don't have cards that support non-power-of-two textures?

    Read the article

  • Infinite terrain shadows

    - by user35399
    I'm creating an infinite terrain engine which generates the terrain with either fractals or noise. How can I make dynamic shadows for the sun on this terrain if I don't know in advance what will be rendered in front of the sun? My terrain: the sun is the only light and it is directional; my terrain is generated on a plane which is positioned in front of the camera, frustum culled, and fits the size of the viewing frustum. It is height-mapped with a generated noise texture and uses tessellation shaders. Video: http://www.youtube.com/watch?v=tk6yFwYusOs (dynamic shadows with the infinite terrain).

    Read the article

  • Previewing a Demo Level in Mobile for UDK?

    - by Reno Yeo
    I've already clicked on "Emulate Mobile Features" and everything has been compiled. I've also set the mobile previewer settings to the iPhone 4's dimensions and features. However, when I click on the mobile previewer, a new window pops up but goes into a "Not Responding" state after a while. Is there anything I'm doing wrong? To be honest, I'm wary of the difficulty curve involved in learning UDK, but I am interested in developing a game for it.

    Read the article

  • Estimate angle to launch missile, maths question

    - by Jonathan
    I've been working on this for an hour or two now, and maths really isn't my strong suit, which is definitely not a good thing for a game programmer, but that shouldn't stop me enjoying a hobby, surely? After a few failed attempts, I was hoping someone else out there could help, so here's the situation. I'm trying to implement a bit of faked intelligence when the AI fires its missiles at a target in a 2D game world, by predicting the likely position the target will be in, given its current velocity and the time it will take the missile to reach it. I created an image to demonstrate my thinking: http://i.imgur.com/SFmU3.png which also contains the logic I use for accelerating the missile after launch. The ship that fires the missile can fire within a total 40-degree angle, 20 either side of itself, but this could likely become variable. My current attempt was to break the space between the two lines into segments that match the target's width, then calculate the time it would take the missile to reach each segment using the formula

        startVelocity * ((startVelocity * acceleration)^(currentFrame - 1))

    For each iteration we total up the values, which tells us the distance travelled, and that would then just need comparing to the distance to the segment. So for example, if we start at a velocity of 1f/frame with an acceleration of 0.1f, the formula at frame 4 would be 1 * (1.1^3) = 1.331. But I quickly realized I was getting lost when trying to put this into practice. Does this seem like a correct starting point, or am I going completely the wrong way about it? Any pointers would help me greatly, since I get easily lost in these matters and don't even really know a good phrase to search for. So I guess in summary my question is more about the correct way to approach this problem; any additional code samples on top of that would be great, but I'm not averse to working out the complete code from helpful pointers.
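
    A hedged sketch of a standard two-pass intercept estimate (C++; all names and the 600-frame cap are illustrative): guess the flight time to the target's current position, advance the target by its velocity for that long, re-aim, and repeat; two or three iterations are usually plenty for a game.

        #include <cmath>

        struct Vec2 { float x, y; };

        // Frames for the missile to cover `dist`, starting at v0 units/frame and
        // multiplying its speed by (1 + accel) each frame, found by stepping.
        float flightFrames(float dist, float v0, float accel) {
            float travelled = 0.0f, v = v0;
            int frames = 0;
            while (travelled < dist && frames < 600) {
                travelled += v;
                v *= 1.0f + accel;
                ++frames;
            }
            return (float)frames;
        }

        Vec2 predictAimPoint(Vec2 shooter, Vec2 target, Vec2 targetVel,
                             float v0, float accel) {
            Vec2 aim = target;
            for (int i = 0; i < 3; ++i) {              // refine the estimate
                float dx = aim.x - shooter.x, dy = aim.y - shooter.y;
                float t = flightFrames(std::sqrt(dx * dx + dy * dy), v0, accel);
                aim = { target.x + targetVel.x * t, target.y + targetVel.y * t };
            }
            return aim;   // finally, clamp the firing angle to the 40-degree arc
        }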

    Read the article
