Search Results

Search found 32277 results on 1292 pages for 'module development'.


  • determine collision angle on a rotating body

    - by jorb
    Update: new diagram and updated description. I have a contact listener set up to try and determine the side that a collision happened at relative to the body's rotation. One way to solve this is to find the value of the yellow angle between the red and blue vectors drawn above. The angle can be found by taking the arc cosine of the dot product of the two vectors (Evan pointed this out). One of my points of confusion is the difference in domain between the atan2 function, HTML canvas coordinates, and the Box2D rotation information. I know I have to account for this somehow... Screenshot below. Questions: Does Box2D provide these angles more directly in the collision information? Am I even on the right track? If so, any hints? I have the following JavaScript so far: Ship.prototype.onCollide = function (other_ent,cx,cy) { var pos = this.body.GetPosition(); //collision position relative to body var d_cx = pos.x - cx; var d_cy = pos.y - cy; //length of initial vector var len = Math.sqrt(Math.pow(pos.x -cx,2) + Math.pow(pos.y-cy,2)); //body angle - can over rotate hence mod 2*Pi var ang = this.body.GetAngle() % (Math.PI * 2); //vector representing body's angle - same magnitude as the first var b_vx = len * Math.cos(ang); var b_vy = len * Math.sin(ang); //dot product of the two vectors var dot_prod = d_cx * b_vx + d_cy * b_vy; //new calculation of difference in angle - NOT WORKING! var d_ang = Math.acos(dot_prod); var side; if (Math.abs(d_ang) < Math.PI/2 ) side = "front"; else side = "back"; console.log("length",len); console.log("pos:",pos.x,pos.y); console.log("offs:",d_cx,d_cy); console.log("body vec",b_vx,b_vy); console.log("body angle:",ang); console.log("dot product",dot_prod); console.log("result:",d_ang); console.log("side",side); console.log("------------------------"); }
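
    A likely culprit in the snippet above: Math.acos expects a value in [-1, 1], i.e. the dot product of unit vectors, but d_cx/d_cy and b_vx/b_vy were both built with magnitude len, so dot_prod falls far outside that range. A hedged alternative sketch (illustrative names, not the asker's code): atan2 of the 2D cross product and the dot product gives a signed angle and needs no normalisation at all.

      // Signed angle from the body's facing direction to the body->contact vector.
      function signedCollisionAngle(body, cx, cy) {
          var pos = body.GetPosition();
          var dx = cx - pos.x;               // body centre -> contact point
          var dy = cy - pos.y;
          var ang = body.GetAngle();
          var fx = Math.cos(ang);            // unit facing vector of the body
          var fy = Math.sin(ang);
          var dot = fx * dx + fy * dy;
          var cross = fx * dy - fy * dx;     // 2D "cross product" (a scalar)
          return Math.atan2(cross, dot);     // in (-PI, PI]; the sign gives the side
      }

      // Usage, mirroring the original front/back test:
      // var d_ang = signedCollisionAngle(this.body, cx, cy);
      // var side = Math.abs(d_ang) < Math.PI / 2 ? "front" : "back";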

    Read the article

  • Linking one uniform variable to many shaders

    - by Winged
    Let's say, that I have 3 programs, and in each of those programs there is a view matrix uniform, which should be the same in all those programs. Right now, when my camera moves, I need to re-upload the modified matrix to every program separately. Is it possible to create some kind of global uniforms which are constant for all programs linked to it, so I could just upload the matrix once? I tried creating a globalUniforms object which looked kinda like this: var globalUniforms = { program: {}, // (...) vMatrixUniform: null, // (...) initialize: function() { vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix'); } }; So I could just link it to proper programs like this: program.vMatrixUniform = globalUniforms.vMatrixUniform;, and then pass the matrix like this: if (camera.isDirty.viewMatrix !== false) { camera.isDirty.viewMatrix = false; gl.uniformMatrix4fv(globalUniforms.vMatrixUniform, false, camera.viewMatrix.element); } but unfortunately it throws an error: Uncaught exception: gl.INVALID_VALUE was caused by call to: getUniformLocation called from line 272, column 2 in () in mysite/js/mesh.js: vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix'); Summing up: is there a more efficient way of managing shaders which follows my logic?
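
    One plausible cause of the gl.INVALID_VALUE: getUniformLocation is being called while this.program is still the placeholder {} rather than a linked WebGLProgram. WebGL 1 has no cross-program uniforms (a location is only valid for the program it came from; WebGL 2's uniform buffer objects are the real "global uniform" mechanism), so a common workaround is a small helper that caches one location per program and re-uploads the matrix to all of them when the camera is dirty. A minimal sketch with assumed names, not part of any library:

      function SharedViewMatrix(gl, programs, uniformName) {
          this.gl = gl;
          // One location per program; locations are never shared across programs.
          this.entries = programs.map(function (program) {
              return { program: program, location: gl.getUniformLocation(program, uniformName) };
          });
      }

      SharedViewMatrix.prototype.upload = function (matrix) {
          var gl = this.gl;
          this.entries.forEach(function (entry) {
              gl.useProgram(entry.program);                      // uniforms are set on the bound program
              gl.uniformMatrix4fv(entry.location, false, matrix);
          });
      };

      // Usage (assumed names):
      // var sharedVMatrix = new SharedViewMatrix(gl, [progA, progB, progC], 'uVMatrix');
      // if (camera.isDirty.viewMatrix) {
      //     camera.isDirty.viewMatrix = false;
      //     sharedVMatrix.upload(camera.viewMatrix.element);
      // }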

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light-pass to a texture which I will later apply on the scene. But I seem to calculate the light position wrong. I am working on view-space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at 0,10,0 position, and I transform it to view-space first: Vector4 pos; Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos); shader.SendUniform ("LightViewPosition", ref pos); Now to me that does not look as it should. What I think it should look like is that the white area should be on the center of the scene. The camera is at the corner of the scene, and it seems as if the light would move along with the camera. Here's the fragment shader code: void main(){ // default black color vec3 color = vec3(0); // Pixel coordinates on screen without depth vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize; // Get pixel position using depth from texture vec4 depthtexel = texture( DepthTexture, PixelCoordinates ); float depthSample = unpack_depth(depthtexel); // Get pixel coordinates on camera-space by multiplying the // coordinate on screen-space by inverse projection matrix vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0)); // Undo the perspective calculations vec3 pixelPosition = (world.xyz / world.w) * 3; // How far the light should reach from it's point of origin float lightReach = LightColor.a / 2; // Vector in between light and pixel vec3 lightDir = (LightViewPosition.xyz - pixelPosition); float lightDistance = length(lightDir); vec3 lightDirN = normalize(lightDir); // Discard pixels too far from light source //if(lightReach < lightDistance) discard; // Get normal from texture vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1); // Half vector between the light direction and eye, used for specular component vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition)); // Dot product of normal and light direction float NdotL = dot(normal, lightDirN); float attenuation = pow(lightReach / lightDistance, LightFalloff); // If pixel is lit by the light if(NdotL > 0) { // I have moved stuff from here to above so I can debug them. // Diffuse light color color += LightColor.rgb * NdotL * attenuation; // Specular light color color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation; } RT0 = vec4(color, 1); //RT0 = vec4(pixelPosition, 1); //RT0 = vec4(depthSample, depthSample, depthSample, 1); //RT0 = vec4(NdotL, NdotL, NdotL, 1); RT0 = vec4(attenuation, attenuation, attenuation, 1); //RT0 = vec4(lightReach, lightReach, lightReach, 1); //RT0 = depthtexel; //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1); //RT0 = vec4(lightDirN, 1); //RT0 = vec4(halfVector, 1); //RT0 = vec4(LightColor.xyz,1); //RT0 = vec4(LightViewPosition.xyz/100, 1); //RT0 = vec4(LightPosition.xyz, 1); //RT0 = vec4(normal,1); } What am I doing wrong here?
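
    For reference, a hedged sketch of the view-space reconstruction this shader appears to be attempting, assuming RemapMatrix maps the [0, 1] screen/depth values to [-1, 1] and ImP is the inverse of the same projection matrix that was used when the depth was written:

    \[
    \mathbf{p}_{\text{ndc}} = (2u - 1,\; 2v - 1,\; 2d - 1,\; 1), \qquad
    \mathbf{q} = P^{-1}\,\mathbf{p}_{\text{ndc}}, \qquad
    \mathbf{p}_{\text{view}} = \mathbf{q}_{xyz} / q_w .
    \]

    Under those assumptions no extra scale factor (the * 3 above) should be needed, and the light position must be transformed by the same per-frame view matrix as the geometry; if either side uses a stale or mismatched matrix, the light appears to ride along with the camera exactly as described.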

    Read the article

  • Drawing order in XNA

    - by marc wellman
    When manually setting the drawing order of game components by setting int DrawableGameComponent.DrawOrder, can one use any integer values as long as an order is defined, like component1 = drawing order: 2 component2 = drawing order: 5 component3 = drawing order: 10 component4 = drawing order: 323 or do these integers have to be consecutive and start at zero, like component1 = drawing order: 0 component2 = drawing order: 1 component3 = drawing order: 2 component4 = drawing order: 3 ?

    Read the article

  • Where to start learning OpenGL with C++?

    - by NERDcustard
    I'm 16 years old and my name is Norbert. I have learnt C++ and made some cool text-based games and such, but I would love to start graphics programming. I'm a decent artist (I have some of my work below). I know the basics of C++ but I really would like to get into OpenGL. I need someone to show me some good tutorials for OpenGL with C++ so I can really get into game dev. My goal is to be able to program a simple 2D game by the end of the year and I have lots of time to do so. I'm enrolled in a game dev course next year and really need some help with starting off. http://imgur.com/QZjKX http://imgur.com/3CZy7

    Read the article

  • Are there existing FOSS component-based frameworks?

    - by Tesserex
    The component based game programming paradigm is becoming much more popular. I was wondering, are there any projects out there that offer a reusable component framework? In any language, I guess I don't care about that. It's not for my own project, I'm just curious. Specifically I mean are there projects that include a base Entity class, a base Component class, and maybe some standard components? It would then be much easier starting a game if you didn't want to reinvent the wheel, or maybe you want a GraphicsComponent that does sprites with Direct3D, but you figure it's already been done a dozen times. A quick Googling turns up Rusher. Has anyone heard of this / does anyone use it? If there are no popular ones, then why not? Is it too difficult to make something like this reusable, and they need heavy customization? In my own implementation I found a lot of boilerplate that could be shoved into a framework.

    Read the article

  • What's a viable way to get public properties from child objects?

    - by Raven Dreamer
    I have a GameObject (RoomOrganizer in the picture below) with a "RoomManager" script, and one or more child objects, each with a 'HasParallelogram' component attached, like so: I've also got the following in the aforementioned "RoomManager" void Awake () { Rect tempRect; HasParallelogram tempsc; foreach (Transform child in transform) { try { tempsc = child.GetComponent<HasParallelogram>(); tempRect = tempsc.myRect; blockedZoneList.Add(new Parallelogram(tempRect)); Debug.Log(tempRect.ToString()); } catch( System.NullReferenceException) { Debug.Log("Null Reference Caught"); } } } Unfortunately, attempting to assign tempRect = tempsc.myRect causes a null pointer at runtime. Am I missing some crucial step? HasParallelogram is an empty script with a public Rect set in the editor and nothing else. What's the proper way to get a child's component?

    Read the article

  • What sort of leaderboard for my game?

    - by Martin
    I recently published a word game for Windows Phone and I am really happy to have some players. The game is entirely offline and at the end of a game, the player's score is published to a server. I'm collecting the scores to build a leaderboard. Right now, I don't believe that the leaderboard I offer to my users is appropriate. I essentially accumulate the score of all the games of a user for a given day and that becomes their score. So if Player 1 plays 3 games and gets 100, 150 and 200 points, its score for the day is 450 points. I would like to get your ideas and opinion. How do I keep my game challenging and engaging with a good leaderboard? Should I continue accumulating the score for a day? Should I just keep the best score? Thanks!

    Read the article

  • Issue with a point's coordinates, which creates an unwanted triangle

    - by Paul
    I would like to connect the points from the red path, to the y-axis in blue. I figured out that the problem with my triangles came from the first point (V0) : it is not located where it should be. In the console, it says its location is at 0,0, but in the emulator, it is not. The code : for(int i = 1; i < 2; i++) { CCLOG(@"_polyVertices[i-1].x : %f, _polyVertices[i-1].y : %f", _polyVertices[i-1].x, _polyVertices[i-1].y); CCLOG(@"_polyVertices[i].x : %f, _polyVertices[i].y : %f", _polyVertices[i].x, _polyVertices[i].y); ccDrawLine(_polyVertices[i-1], _polyVertices[i]); } The output : _polyVertices[i-1].x : 0.000000, _polyVertices[i-1].y : 0.000000 _polyVertices[i].x : 50.000000, _polyVertices[i].y : 0.000000 And the result : (the layer goes up, i could not take the screenshot before the layer started to go up, but the first red point starts at y=0) : Then it creates an unwanted triangle when the code continues : Would you have any idea about this? (So to force the first blue point to start at 0,0, and not at 50,0 as it seems to be now) Here is the code : - (void)generatePath{ float x = 50; //first red point float y = 0; for(int i = 0; i < kMaxKeyPoints+1; i++) { if (i<3){ _hillKeyPoints[i] = CGPointMake(x, y); x = 150 + (random() % (int) 30); y += -40; } else if(i<20){ //going right _hillKeyPoints[i] = CGPointMake(x, y); x += (random() % (int) 30); y += -40; } else if(i<25){ //stabilize _hillKeyPoints[i] = CGPointMake(x, y); x = 150 + (random() % (int) 30); y += -40; } else if(i<30){ //going left _hillKeyPoints[i] = CGPointMake(x, y); //x -= (random() % (int) 10); x = 150 + (random() % (int) 30); y += -40; } else { //back to normal _hillKeyPoints[i] = CGPointMake(x, y); x = 150 + (random() % (int) 30); y += -40; } } } -(void)generatePolygons{ static int prevFromKeyPointI = -1; static int prevToKeyPointI = -1; // key points interval for drawing while (_hillKeyPoints[_fromKeyPointI].y > -_offsetY+winSizeTop) { _fromKeyPointI++; } while (_hillKeyPoints[_toKeyPointI].y > -_offsetY-winSizeBottom) { _toKeyPointI++; } if (prevFromKeyPointI != _fromKeyPointI || prevToKeyPointI != _toKeyPointI) { _nPolyVertices = 0; float x1 = 0; int keyPoints = _fromKeyPointI; for (int i=_fromKeyPointI; i<_toKeyPointI; i++){ //V0: at (0,0) _polyVertices[_nPolyVertices] = CGPointMake(x1, y1); //first blue point _polyTexCoords[_nPolyVertices++] = CGPointMake(x1, y1); //V1: to the first "point" _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y); _polyTexCoords[_nPolyVertices++] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y); keyPoints++; //from point at index 0 to 1 //V2, same y as point n°2: _polyVertices[_nPolyVertices] = CGPointMake(0, _hillKeyPoints[keyPoints].y); _polyTexCoords[_nPolyVertices++] = CGPointMake(0, _hillKeyPoints[keyPoints].y); //V1 again _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices-2]; _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices-2]; //V2 again _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices-2]; _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices-2]; //CCLOG(@"_nPolyVertices V2 again : %i", _nPolyVertices); //V3 = same x,y as point at index 1 _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y); _polyTexCoords[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y); y1 = _polyVertices[_nPolyVertices].y; _nPolyVertices++; } prevFromKeyPointI = _fromKeyPointI; prevToKeyPointI = 
_toKeyPointI; } } - (void) draw { //RED glColor4f(1, 1, 1, 1); for(int i = MAX(_fromKeyPointI, 1); i <= _toKeyPointI; ++i) { glColor4f(1.0, 0, 0, 1.0); ccDrawLine(_hillKeyPoints[i-1], _hillKeyPoints[i]); } //BLUE glColor4f(0, 0, 1, 1); for(int i = 1; i < 2; i++) { CCLOG(@"_polyVertices[i-1].x : %f, _polyVertices[i-1].y : %f", _polyVertices[i-1].x, _polyVertices[i-1].y); CCLOG(@"_polyVertices[i].x : %f, _polyVertices[i].y : %f", _polyVertices[i].x, _polyVertices[i].y); ccDrawLine(_polyVertices[i-1], _polyVertices[i]); } } Thanks

    Read the article

  • Asked to make a 2d platformer [on hold]

    - by Fendorio
    I've been tasked with creating a simple 2D platformer to be put on a webpage. The game is pretty much a simple Super Mario-type game. I've been playing around with C# and C++ now for a couple of years, so I'm aware that Unity offers a route to making a web game, but for such a simplistic project I'm afraid that using Unity would be overkill... i.e. slow, and nobody wants to install the web player for a game with < 5 mins playtime. HTML5 canvas/JS seems to jump out at me over Flash, as Flash seems to be on its way out. Any suggestions on a route to take would be greatly appreciated.

    Read the article

  • Implementing a wheeled character controller

    - by Lazlo
    I'm trying to implement Boxycraft's character controller in XNA (with Farseer), as Bryan Dysmas did (minus the jumping part, yet). My current implementation seems to sometimes glitch in between two parallel planes, and fails to climb 45 degree slopes. (YouTube videos in links, plane glitch is subtle). How can I fix it? From the textual description, I seem to be doing it right. Here is my implementation (it seems like a huge wall of text, but it's easy to read. I wish I could simplify and isolate the problem more, but I can't): public Body TorsoBody { get; private set; } public PolygonShape TorsoShape { get; private set; } public Body LegsBody { get; private set; } public Shape LegsShape { get; private set; } public RevoluteJoint Hips { get; private set; } public FixedAngleJoint FixedAngleJoint { get; private set; } public AngleJoint AngleJoint { get; private set; } ... this.TorsoBody = BodyFactory.CreateRectangle(this.World, 1, 1.5f, 1); this.TorsoShape = new PolygonShape(1); this.TorsoShape.SetAsBox(0.5f, 0.75f); this.TorsoBody.CreateFixture(this.TorsoShape); this.TorsoBody.IsStatic = false; this.LegsBody = BodyFactory.CreateCircle(this.World, 0.5f, 1); this.LegsShape = new CircleShape(0.5f, 1); this.LegsBody.CreateFixture(this.LegsShape); this.LegsBody.Position -= 0.75f * Vector2.UnitY; this.LegsBody.IsStatic = false; this.Hips = JointFactory.CreateRevoluteJoint(this.TorsoBody, this.LegsBody, Vector2.Zero); this.Hips.MotorEnabled = true; this.AngleJoint = new AngleJoint(this.TorsoBody, this.LegsBody); this.FixedAngleJoint = new FixedAngleJoint(this.TorsoBody); this.Hips.MaxMotorTorque = float.PositiveInfinity; this.World.AddJoint(this.Hips); this.World.AddJoint(this.AngleJoint); this.World.AddJoint(this.FixedAngleJoint); ... public void Move(float m) // -1, 0, +1 { this.Hips.MotorSpeed = 0.5f * m; }

    Read the article

  • Basic game architecture best practices in Cocos2D on iOS

    - by MrDatabase
    Consider the following simple game: 20 squares floating around an iPhone's screen. Tapping a square causes that square to disappear. What's the "best practices" way to set this up in Cocos2D? Here's my plan so far: One Objective-c GameState singleton class (maintains list of active squares) One CCScene (since there's no menus etc) One CCLayer (child node of the scene) Many CCSprite nodes (one for each square, all child nodes of the layer) Each sprite listens for a tap on itself. Receive tap = remove from GameState Since I'm relatively new to Cocos2D I'd like some feedback on this design. For example I'm unsure of the GameState singleton. Perhaps it's unnecessary.

    Read the article

  • Find meeting point of 2 objects in 2D, knowing (constant) speed and slope

    - by Axonn
    I have a gun which fires a projectile which has to hit an enemy. The problem is that the gun has to be automatic, i.e. - choose the angle in which it has to shoot so that the projectile hits the enemy dead in the center. It's been a looooong time since school, and my physics skills are a bit rusty, but they're there. I've been thinking to somehow apply the v = d/t formula to find the time needed for the projectile or enemy to reach a certain point. But the problem is that I can't find the common point for both the projectile and enemy. Yes, I can find a certain point for the projectile, and another for the enemy, but I would need lots of tries to find where the point coincides, which is stupid. There has to be a way to link them together but I can't figure it out. I prepared some drawings and samples: A simple version of my Flash game, dumbed down to the basics, just some shapes: http://axonnsd.org/W/P001/MathSandBox.swf - click the mouse anywhere to fire a projectile. Or, here is an image which describes my problem: So... who has any ideas about how to find x3/y3 - thus leading me to find the angle in which the weapon has to tilt in order to fire a projectile to meet the enemy? EDIT I think it would be clearer if I also mention that I know: the speed of both Enemy and Projectile and the Enemy travels on a straight vertical line.
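
    One standard closed form for this, hedged because the exact variable names differ from the game: write the enemy as a start point plus constant velocity and equate distances. With the enemy at \(\mathbf{E}(t) = \mathbf{E}_0 + \mathbf{v}_e t\) (here \(\mathbf{v}_e\) points along its vertical line), the gun at \(\mathbf{G}\), and projectile speed \(s\), the meeting time satisfies

    \[
    \lVert \mathbf{E}_0 + \mathbf{v}_e t - \mathbf{G} \rVert = s\,t .
    \]

    Squaring both sides with \(\mathbf{d} = \mathbf{E}_0 - \mathbf{G}\) gives the quadratic

    \[
    \bigl(\lVert\mathbf{v}_e\rVert^2 - s^2\bigr)\,t^2 + 2\,(\mathbf{d}\cdot\mathbf{v}_e)\,t + \lVert\mathbf{d}\rVert^2 = 0 ,
    \]

    whose smallest positive root is the time of impact. The aim point is then \((x_3, y_3) = \mathbf{E}_0 + \mathbf{v}_e t\), and the firing angle is \(\operatorname{atan2}(y_3 - G_y,\; x_3 - G_x)\). If no positive root exists, the projectile is too slow to ever intercept.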

    Read the article

  • CW/CCW Rotation of a Vector

    - by user23132
    Consider that I have a vector A, and after an arbitrary rotation I get vector B. I want to apply this rotation operation to other vectors as well, but I'm having problems doing that. My idea is to calculate the vector C perpendicular to the plane of A and B (by calculating AxB). This vector C is the axis that I'll need to rotate around. To discover the angle I used the dot product between A and B; the acos of the dot product will return the lowest angle between A and B, the angle ang. The rotation I need to do is then: rotate ang degrees around the C axis. The problem is that I don't know if this rotation is a CW or CCW rotation, since the acos of the dot product does not give me the sign of the angle. There's a trick in 2D (A.x * B.y - A.y * B.x) that you can use to discover whether vector A is to the left/right of vector B, but I don't know how to do this in 3D space. Can anyone help me?
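
    A sketch of the usual answer, under the setup described: in 3D an angle only has a sign relative to a chosen axis direction. Rotating by the (non-negative) acos angle about C = AxB, counter-clockwise in the right-hand sense, always carries A onto B; flipping the axis to -C flips the sense. The 3D analogue of the 2D test A.x * B.y - A.y * B.x is a triple product against whichever unit axis \(\hat{\mathbf{n}}\) perpendicular to the plane of A and B you treat as "up":

    \[
    \theta = \operatorname{atan2}\!\bigl(\hat{\mathbf{n}}\cdot(\mathbf{A}\times\mathbf{B}),\ \mathbf{A}\cdot\mathbf{B}\bigr),
    \]

    which is positive when the rotation from A to B is counter-clockwise about \(\hat{\mathbf{n}}\) and negative otherwise; choosing \(\hat{\mathbf{n}} = \mathbf{C}/\lVert\mathbf{C}\rVert\) always yields the non-negative angle, and neither A nor B needs to be normalised for the atan2 form.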

    Read the article

  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the samplebrowser, and I already checked that one. However, some questions arise: In the sample: D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = g_World; mView = g_View; mProj = g_Projection; mWorldViewProj = mWorld * mView * mProj; VS_CONSTANT_BUFFER* pConstData; g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData ); pConstData->mWorldViewProj = mWorldViewProj; pConstData->fTime = fBoundedTime; g_pConstantBuffer10->Unmap(); They are copying their D3DXMATRIX'es to D3DXMATRIXA16. Checked on msdn, these new matrices are 16 byte aligned and optimised for intel pentium 4. So as my first question: 1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if no, why don't we just use D3DXMATRIXA16 all the time? I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times: cbuffer cbNeverChanges { matrix View; }; cbuffer cbChangeOnResize { matrix Projection; }; cbuffer cbChangesEveryFrame { matrix World; float4 vMeshColor; }; Then how would I set these buffers all at different times? g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 ); gives me the possibility to set multiple buffers, but that is within one call. 2) Is that okay even if my constant buffers are updated at different times? And do I suppose I have to make sure the constantbuffers are in the same position in the array as the order they appear in the shader?

    Read the article

  • Rope Colliding with a Rectangle

    - by Colton
    I have my rope, and I have my rectangles. The rope is similar to the implementation found here: http://nehe.gamedev.net/tutorial/rope_physics/17006/ Now, I want to make the rope properly collide with the rectangle such that the rope will not pass through a rectangle, and wrap around the rectangle and all that good stuff. Currently, I have it set so no rope node can pass through a rect (successfully), however, this means a rope segment can still pass through a block. Ex: So the question is, what can I do to fix this? What I have tried: I create a rectangle between two nodes of a rope, calculate rotation between the nodes, and get myself a transformed rectangle. I can successfully detect a collision between rope segments and a (non-transformed) rectangle. Create a new node or pivot point around the corner of the block, and rearrange nodes to point to the corner node. Trouble is determining what corner the rope segment is passing through. And then the current rope setup goes wonky (based on verlet integration, so a sudden change in position causes the rope to wiggle like a seismograph during a magnitude 8 earth quake.) Among other issues that might be solvable, but its turning into a case by case thing, which doesn't seem right. I think the best answer here would just be a link to a tutorial (I simply can't find any, most lead to box2D or farseer, but I want to at least learn how it works before I hide behind an engine).

    Read the article

  • Set a drawing viewport while using camera

    - by Mariano
    I'm working with XNA. I already have a basic world made of tiles and a camera using a transform matrix. I have a character moving around and the camera follows. What I want to do now is draw the map only on a certain part of the screen as shown on the figure below. This way I can move the map to the left of the screen and have the other fixed parts shift to the right. Do I need to modify the camera matrix? Make a new viewport?

    Read the article

  • (Unity) Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when i was creating my rivers. I would set the River's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here's 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used: This code here creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and it's 4 vertices public DTileMap (int size_x, int size_y) { this.size_x = size_x; this.size_y = size_y; //Initialize Map_Data Array of Tile Objects map_data = new Tile[size_x, size_y]; for (int j = 0; j < size_y; j++) { for (int i = 0; i < size_x; i++) { map_data [i, j] = new Tile (); map_data[i,j].coordinate.x = (int)i; map_data[i,j].coordinate.y = (int)j; map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize); map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); } } This code sets the river tiles to height 0 foreach (Tile t in map_data) { if (t.realType == "Water") { t.vertices[0].y = 0f; t.vertices[1].y = 0f; t.vertices[2].y = 0f; t.vertices[3].y = 0f; } } And below is the code to generate the actual graphics from the data: public void BuildMesh () { DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z); int numTiles = size_x * size_z; int numTris = numTiles * 2; int vsize_x = size_x + 1; int vsize_z = size_z + 1; int numVerts = vsize_x * vsize_z; // Generate the mesh data Vector3[] vertices = new Vector3[ numVerts ]; Vector3[] normals = new Vector3[numVerts]; Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[ numTris * 3 ]; int x, z; for (z=0; z < vsize_z; z++) { for (x=0; x < vsize_x; x++) { normals [z * vsize_x + x] = Vector3.up; uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z); } } for (z=0; z < vsize_z; z+=1) { for (x=0; x < vsize_x; x+=1) { if (x == vsize_x - 1 && z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3]; } else if (z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2]; } else if (x == vsize_x - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1]; } else { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0]; vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1]; vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2]; vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3]; } } } } for (z=0; 
z < size_z; z++) { for (x=0; x < size_x; x++) { int squareIndex = z * size_x + x; int triOffset = squareIndex * 6; triangles [triOffset + 0] = z * vsize_x + x + 0; triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0; triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 3] = z * vsize_x + x + 0; triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 4] = z * vsize_x + x + 1; } } // Create a new Mesh and populate with the data Mesh mesh = new Mesh (); mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; // Assign our mesh to our filter/renderer/collider MeshFilter mesh_filter = GetComponent<MeshFilter> (); MeshCollider mesh_collider = GetComponent<MeshCollider> (); mesh_filter.mesh = mesh; mesh_collider.sharedMesh = mesh; calculateMeshTangents (mesh); BuildTexture (map); } If this looks familiar to you, its because i got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.

    Read the article

  • How can I deal with actor translations and other "noise" in third-party motion capture data?

    - by Charles
    I'm working on a game, and I've run into a problem with motion capture data. My team is using 3DS Max 2011 and trying to put free motion capture files on our models. The problem we're having is it has become extremely hard to find motion capture data that stays in place. We've found some great motion captures of things like walking and jumping but the actors themselves move within the data, so when we attach these animations to our models and bring them into XNA, the models walk forward even when they should technically be standing still (and then there's also the problem of them resetting at the end of the animation). How can we clean up, at runtime or asset-processing time, the animation in these motion capture files?

    Read the article

  • Using a permutation table for simplex noise without storing it

    - by J. C. Leitão
    Generating Simplex noise requires a permutation table for randomisation (e.g. see this question or this example). In some applications, we need to persist the state of the permutation table. This can be done by creating the table, e.g. using def permutation_table(seed): table_size = 2**10 # arbitrary for this question l = range(1, table_size + 1) random.seed(seed) # ensures the same shuffle for a given seed random.shuffle(l) return l + l # see shared link why l + l; is a detail and storing it. Can we avoid storing the full table by generating the required elements every time they are required? Specifically, currently I store the table and call it using table[i] (table is a list). Can I avoid storing it by having a function that computes the element i, e.g. get_table_element(seed, i)? I'm aware that cryptography already solved this problem using block ciphers; however, I found it too complex to go deep and implement a block cipher. Does anyone know of a simple block cipher implementation for this problem?
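
    One lightweight stand-in for a full block cipher is a few rounds of a Feistel network over the index bits: split the index into two halves, mix with any cheap integer hash keyed by the seed, and swap. The construction is a bijection regardless of the round function, so it behaves like a seeded permutation without ever building the table. The idea is language-independent; a hedged sketch (shown in JavaScript rather than the question's Python, the hash constants are arbitrary):

      // Keyed permutation of 10-bit indices (table_size = 2^10), no stored table.
      function permutedIndex(seed, i) {
          var HALF_BITS = 5;
          var MASK = (1 << HALF_BITS) - 1;
          var left = (i >> HALF_BITS) & MASK;
          var right = i & MASK;
          for (var round = 0; round < 4; round++) {
              // Cheap integer hash of (right, round, seed), reduced to 5 bits.
              var f = (right * 2654435761 ^ (seed + round) * 40503) >>> 0;
              f = (f ^ (f >> 13)) & MASK;
              var newLeft = right;              // Feistel swap: (L, R) -> (R, L xor F(R))
              right = left ^ f;
              left = newLeft;
          }
          return (left << HALF_BITS) | right;   // a bijection on 0..1023
      }

      // Usage: a get_table_element(seed, i) equivalent without storing anything.
      // var element = permutedIndex(seed, i % 1024) + 1;   // % 1024 mimics l + l, +1 matches range(1, ...)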

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadowmap. This is my vertex shader: in vec3 position; uniform mat4 lightWVP; void main() { gl_Position = lightWVP * vec4(position, 1.0); } Now, do I even need a fragment shader in this shader pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cubemap texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand there is no other work to do in the fragment shader other than writing this value. Is this correct?

    Read the article

  • Issues with shooting in an HTML5 platformer game

    - by fnx
    I'm coding a 2D sidescroller using only JavaScript and HTML5 canvas, and in my game I have two problems with shooting: 1) Player shoots continous stream of bullets. I want that player can shoot only a single bullet even though the shoot-button is being held down. 2) Also, I get an error "Uncaught TypeError: Cannot call method 'draw' of undefined" when all the bullets are removed. My shooting code goes like this: When player shoots, I do game.bullets.push(new Bullet(this, this.scale)); and after that: function Bullet(source, dir) { this.id = "bullet"; this.width = 10; this.height = 3; this.dir = dir; if (this.dir == 1) { this.x = source.x + source.width - 5; this.y = source.y + 16; } if (this.dir == -1) { this.x = source.x; this.y = source.y + 16; } } Bullet.prototype.update = function() { if (this.dir == 1) this.x += 8; if (this.dir == -1) this.x -= 8; for (var i in game.enemies) { checkCollisions(this, game.enemies[i]); } // Check if bullet leaves the viewport if (this.x < game.viewX * 32 || this.x > (game.viewX + game.tilesX) * 32) { removeFromList(game.bullets, this); } } Bullet.prototype.draw = function() { // bullet flipping uses orientation of the player var posX = game.player.scale == 1 ? this.x : (this.x + this.width) * -1; game.ctx.scale(game.player.scale, 1); game.ctx.drawImage(gameData.getGfx("bullet"), posX, this.y); } I handle removing with this function: function removeFromList(list, object) { for (i in list) { if (object == list[i]) { list.splice(i, 1); break; } } } And finally, in the main game loop I have this: for (var i in game.bullets) { game.bullets[i].update(); game.bullets[i].draw(); } I have tried adding if (game.bullets.length > 0) to the main game loop before the above draw&update calls, but I still get the same error.
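
    Two hedged suggestions (a sketch only; the field and key names below are assumptions, not the original code). First, latch the fire key so a fresh press spawns exactly one bullet. Second, the "Cannot call method 'draw' of undefined" comes from splicing game.bullets while the main loop is iterating it with for...in; walking the array backwards with a plain index (and letting update() mark bullets as dead instead of removing them from inside itself) avoids both the skipped elements and the undefined access.

      // 1) Only fire on a fresh key press: remember the previous key state.
      if (input.shootPressed && !player.shootWasPressed) {
          game.bullets.push(new Bullet(player, player.scale));
      }
      player.shootWasPressed = input.shootPressed;

      // 2) Update/draw with a plain index, removing in place while walking backwards
      //    so splicing never shifts an element we have yet to visit.
      for (var i = game.bullets.length - 1; i >= 0; i--) {
          var bullet = game.bullets[i];
          bullet.update();                      // assumed to set bullet.dead when off-screen
          if (bullet.dead) {
              game.bullets.splice(i, 1);
              continue;
          }
          bullet.draw();
      }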

    Read the article

  • XNA 4.0, Combining model draw calls

    - by MayContainNuts
    I have the following problem: the levels in my game are made up of a large quantity of small models, and because of that I am experiencing frame rate problems. I already did some research and came to the conclusion that the number of draw calls I am making must be the root of my problems. I've looked around for a while now and couldn't quite find a satisfying solution. I can't cull any of those models; in a worst-case scenario there could be 1000 of them visible at the same time. I also looked at hardware geometry instancing, but I don't think that's quite what I'm looking for, because the level consists of a lot of different parts. So, what I'd like to do is combine 100 or 200 of these models into a single large one and draw it as a whole 'chunk'. The whole geometry is static so it wouldn't have to be changed after combining, but different parts of it would have to use different textures (I think I can accomplish that with a texture atlas). But I have no idea how to do that, so does anybody have any suggestions?

    Read the article

  • How do I create a curved line, a filled circle, or a circle in general using C++/SDL?

    - by NoobScratcher
    Hello, I've been trying for ages to make a pixel circle using the putpixel function provided by the SDL main website. Here is that function: void putpixel(int x,int y , int color , SDL_Surface* surface) { unsigned int *ptr = static_cast <unsigned int *> (surface->pixels); int offset = y * (surface->pitch/sizeof(unsigned int)); ptr[offset + x] = color; } My question is how do I curve a line, or create a circle arc of pixels, or any other curved shape other than a rectangle, a single pixel, or a line. For example, here are some pictures of a filled pixel circle below: enter link description here. My idea was to change the x and y values of the pixel position using + and - to create the curves, but in practice that didn't produce the correct results. What I'm after is to be able to create a circle that is made out of pixels only, nothing else. Thank you to anyone who takes the time to read this question, thanks! :D
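
    The usual answer is the midpoint (Bresenham) circle algorithm: step through one octant with an integer error term and mirror each computed point eight ways for an outline, or draw horizontal spans between the mirrored points for a filled disc. A hedged sketch, written here in JavaScript against an assumed putpixel(x, y, color) (the SDL_Surface argument of the question's C++ helper is omitted); the loop translates to C++ line for line:

      function drawCircle(cx, cy, radius, color, filled) {
          var x = radius;
          var y = 0;
          var err = 1 - radius;                  // midpoint decision variable
          while (x >= y) {
              if (filled) {
                  // Horizontal spans between mirrored points fill the disc.
                  for (var i = cx - x; i <= cx + x; i++) {
                      putpixel(i, cy + y, color);
                      putpixel(i, cy - y, color);
                  }
                  for (var j = cx - y; j <= cx + y; j++) {
                      putpixel(j, cy + x, color);
                      putpixel(j, cy - x, color);
                  }
              } else {
                  // Eight-way symmetry: one computed point gives eight pixels.
                  putpixel(cx + x, cy + y, color); putpixel(cx - x, cy + y, color);
                  putpixel(cx + x, cy - y, color); putpixel(cx - x, cy - y, color);
                  putpixel(cx + y, cy + x, color); putpixel(cx - y, cy + x, color);
                  putpixel(cx + y, cy - x, color); putpixel(cx - y, cy - x, color);
              }
              y++;
              if (err <= 0) {
                  err += 2 * y + 1;
              } else {
                  x--;
                  err += 2 * (y - x) + 1;
              }
          }
      }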

    Read the article

  • JavaScript 3D space ship rotation

    - by user36202
    I am working with a fairly low-level JavaScript 3D API (not Three.js) which uses Euler angles for rotation. In most cases, Euler angles work quite well for doing things like aligning buildings, operating a hovercraft, or looking around in the first person. However, in space there is no up or down. I want to control the ship's roll, pitch, and yaw. To do that, some people would use a local coordinate system or a permanent matrix or quaternion or whatever to represent the ship's angle. However, since I am not most people and am using a library that deals exclusively in Euler angles, I will be using relative angles to represent how to rotate the ship in space and getting the resulting non-relative Euler angles. For you math nerds, that means I need to convert 3 Euler angles (with Y being the vertical axis, X representing the pitch, and Z representing a roll which is unaffected by the other angles, left-handed system) into a 3x3 orientation matrix, do something fancy with the matrix, and convert it back into the 3 Euler angles. Euler to matrix to Euler. Somebody has posted something similar to this on SO (http://stackoverflow.com/questions/1217775/rotating-a-spaceship-model-for-a-space-simulator-game) but he ended up just working with a matrix. This will not do for me. Also, I am using JavaScript, not C++. What I want essentially are the functions do_roll, do_pitch, and do_yaw which only take in and put out Euler angles. Many thanks.
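
    A sketch of the usual recipe, with a loudly assumed convention (right-handed elementary rotations acting on column vectors, composed as R = Ry(yaw) * Rx(pitch) * Rz(roll); the asker's left-handed system changes some signs but not the structure). Build the current orientation matrix from the Euler angles, right-multiply by the relative rotation so it is applied about the ship's local axes, then read the angles back:

    \[
    R' \;=\; R_y(\text{yaw})\,R_x(\text{pitch})\,R_z(\text{roll}) \;\cdot\; R_y(\Delta\text{yaw})\,R_x(\Delta\text{pitch})\,R_z(\Delta\text{roll}) ,
    \]

    \[
    \text{pitch}' = \arcsin\!\bigl(-R'_{12}\bigr), \qquad
    \text{yaw}' = \operatorname{atan2}\!\bigl(R'_{02},\, R'_{22}\bigr), \qquad
    \text{roll}' = \operatorname{atan2}\!\bigl(R'_{10},\, R'_{11}\bigr),
    \]

    with zero-based row/column indices and the usual gimbal-lock caveat when \(\cos(\text{pitch}') \to 0\) (all four atan2 arguments vanish and yaw'/roll' must be fixed by convention). do_roll, do_pitch, and do_yaw are then this pipeline called with (0, 0, delta_roll), (0, delta_pitch, 0), and (delta_yaw, 0, 0) respectively.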

    Read the article
