Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.

Page 456/1027

  • Automatically triggering standard spaceship controls to stop its motion

    - by Garan
    I have been working on a 2D top-down space strategy/shooting game. Right now it is only in the prototyping stage (I have gotten basic movement), but now I am trying to write a function that will stop the ship based on its velocity. This is being written in Lua, using the Love2D engine. My code is as follows (note: object.dx is the x-velocity, object.dy is the y-velocity, object.acc is the acceleration, and object.r is the rotation in radians):

        function stopMoving(object, dt)
            local targetr = math.atan2(object.dy, object.dx)
            if targetr == object.r + math.pi then
                local currentspeed = math.sqrt(object.dx*object.dx+object.dy*object.dy)
                if currentspeed ~= 0 then
                    object.dx = object.dx + object.acc*dt*math.cos(object.r)
                    object.dy = object.dy + object.acc*dt*math.sin(object.r)
                end
            else
                if (targetr - object.r) >= math.pi then
                    object.r = object.r - object.turnspeed*dt
                else
                    object.r = object.r + object.turnspeed*dt
                end
            end
        end

    It is implemented in the update function as:

        if love.keyboard.isDown("backspace") then stopMoving(player, dt) end

    The problem is that when I am holding down backspace, it spins the player clockwise (though I am trying to have it go in the direction that would be most efficient at reaching the angle it needs to be at), and then it never starts to accelerate the player in the direction opposite to its velocity. What should I change in this code to get that to work? EDIT: I'm not trying to just stop the player in place, I'm trying to get it to use its normal commands to neutralize its existing velocity. I also changed math.atan to math.atan2, apparently it's better. I noticed no difference when running it, though.
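
    One likely culprit is the exact floating-point comparison targetr == object.r + math.pi, which will essentially never be true. A common alternative, sketched here as a general formula rather than a drop-in fix, is to work with the shortest signed angular difference and compare it against a small tolerance:

        \Delta\theta = \operatorname{atan2}\!\big(\sin(\theta_{target} - \theta_{ship}),\ \cos(\theta_{target} - \theta_{ship})\big) \in [-\pi, \pi]

    Steering by the sign of \Delta\theta, and thrusting only once |\Delta\theta| is within the tolerance (with \theta_{target} taken as the angle opposite the velocity, i.e. math.atan2(-object.dy, -object.dx)), avoids both the equality test and the always-clockwise turning.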

    Read the article

  • Finding shapes in 2D Array, then optimising

    - by assemblism
    I'm new so I can't post an image, but below is a diagram for a game I am working on: moving bricks into patterns. I currently have my code checking for rotated instances of a "T" shape of any colour. The X and O blocks would be the same colour, and my last batch of code would find the "T" shape where the X's are, but what I wanted was more like the second diagram, with two "T"s:

        Current result      Desired result
        [X][O][O]           [1][1][1]
        [X][X][_]           [2][1][_]
        [X][O][_]           [2][2][_]
        [O][_][_]           [2][_][_]

    My code loops through x/y, marks blocks as used, rotates the shape, repeats, changes colour, repeats. I have started trying to fix this checking, with great trepidation. The current idea is to:
    1. loop through the grid and note all pattern occurrences (NOT marking blocks as used), putting these into an array;
    2. loop through the grid again, this time noting which blocks are occupied by which patterns, and therefore which are occupied by multiple patterns;
    3. loop through the grid again, this time noting which patterns obstruct which patterns.
    That much feels right... What do I do now? I think I would have to try various combinations of conflicting shapes, starting with those that obstruct the most other patterns first. How do I approach this one? Use the rationale that says: I have 3 conflicting shapes occupying 8 blocks, and the shapes are 4 blocks each, therefore I can only have a maximum of two shapes. (I also intend to incorporate other shapes, and there will probably be score weighting which will need to be considered when going through the conflicting shapes, but that can be another day.) I don't think it's a bin packing problem, but I'm not sure what to look for. Hope that makes sense, thanks for your help.
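
    For the "what do I do now?" step, one simple (not necessarily optimal) heuristic is to enumerate all occurrences first, as described above, and then greedily keep occurrences while discarding whatever they overlap. A rough C++ sketch; the Occurrence type, its cell encoding, and the tie-breaking are assumptions for illustration, not code from the project:

        #include <algorithm>
        #include <vector>

        struct Occurrence {
            std::vector<int> cells;  // flattened grid indices this shape occupies
            int score = 1;           // placeholder for later colour/score weighting
        };

        // True if the two occurrences share at least one cell.
        static bool overlaps(const Occurrence& a, const Occurrence& b) {
            for (int c : a.cells)
                if (std::find(b.cells.begin(), b.cells.end(), c) != b.cells.end())
                    return true;
            return false;
        }

        // Greedy pass: repeatedly keep the occurrence that conflicts with the fewest
        // remaining ones, then discard everything it overlaps.
        std::vector<Occurrence> pickNonConflicting(std::vector<Occurrence> all) {
            std::vector<Occurrence> chosen;
            while (!all.empty()) {
                auto conflicts = [&](const Occurrence& a) {
                    int n = 0;
                    for (const Occurrence& b : all)
                        if (&a != &b && overlaps(a, b)) ++n;
                    return n;
                };
                auto best = std::min_element(all.begin(), all.end(),
                    [&](const Occurrence& a, const Occurrence& b) { return conflicts(a) < conflicts(b); });
                Occurrence picked = *best;
                chosen.push_back(picked);
                all.erase(std::remove_if(all.begin(), all.end(),
                    [&](const Occurrence& o) { return overlaps(o, picked); }), all.end());
            }
            return chosen;
        }

    A greedy pass can miss the best packing, so for small conflict groups an exhaustive search over just the conflicting occurrences (the question's own suggestion) is still worth keeping as a fallback.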

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader:

        in vec3 position;
        uniform mat4 lightWVP;

        void main()
        {
            gl_Position = lightWVP * vec4(position, 1.0);
        }

    Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cube map texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand there is no other work to do in the fragment shader other than writing this value. Is this correct?
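
    That reading is correct: for a depth-only pass the rasterizer writes gl_FragCoord.z on its own, so the fragment shader can be omitted (or left empty). A sketch of the matching per-face framebuffer setup such a pass usually relies on; the function and variable names are placeholders, only the GL calls are standard:

        #include <GL/glew.h>  // or any other loader exposing the GL 3.3 entry points

        // Bind one cube-map face as the depth attachment of a depth-only FBO.
        // fbo and depthCubeTex are assumed to have been created elsewhere.
        void bindShadowFace(GLuint fbo, GLuint depthCubeTex, int face)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                   GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, depthCubeTex, 0);
            glDrawBuffer(GL_NONE);  // no colour attachment, so disable colour writes
            glReadBuffer(GL_NONE);
            glClear(GL_DEPTH_BUFFER_BIT);
            // render the scene here with the light's view/projection for this face
        }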

    Read the article

  • Getting FEATURE_LEVEL_9_3 to work in DX11

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the the vertex/pixel shader files themselves being incompatible for the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below: Vertex shader: cbuffer MatrixBuffer { matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; PixelInputType LightVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Calculate the normal vector against the world matrix only. output.normal = mul(input.normal, (float3x3)worldMatrix); // Normalize the normal vector. output.normal = normalize(output.normal); return output; } Pixel Shader: Texture2D shaderTexture; SamplerState SampleType; cbuffer LightBuffer { float4 ambientColor; float4 diffuseColor; float3 lightDirection; float padding; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; float4 LightPixelShader(PixelInputType input) : SV_TARGET { float4 textureColor; float3 lightDir; float lightIntensity; float4 color; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor = shaderTexture.Sample(SampleType, input.tex); // Set the default output color to the ambient light value for all pixels. color = ambientColor; // Invert the light direction for calculations. lightDir = -lightDirection; // Calculate the amount of light on this pixel. lightIntensity = saturate(dot(input.normal, lightDir)); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. color += (diffuseColor * lightIntensity); } // Saturate the final light color. color = saturate(color); // Multiply the texture pixel and the final diffuse color to get the final pixel color result. color = color * textureColor; return color; }
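
    Under Direct3D 11, feature levels 9_x use the level-9 variants of the shader model 4.0 compile targets rather than vs_3_0/ps_2_0, which is a likely reason the compile calls fail once D3D_FEATURE_LEVEL_9_3 is selected. A minimal sketch (only the profile strings change; the rest of the compile call stays as in the question):

        // Compile targets for D3D_FEATURE_LEVEL_9_3 under Direct3D 11
        const char* vsProfile = "vs_4_0_level_9_3";   // instead of "vs_3_0"
        const char* psProfile = "ps_4_0_level_9_3";   // instead of "ps_2_0"
        // (feature levels 9_1 and 9_2 use "vs_4_0_level_9_1" / "ps_4_0_level_9_1")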

    Read the article

  • Basics of drawing in 2d with OpenGL 3 shaders

    - by davidism
    I am new to OpenGL 3 and graphics programming, and want to create some basic 2D graphics. I have the following scenario of how I might go about drawing a basic (but general) 2D rectangle. I'm not sure if this is the correct way to think about it, or, if it is, how to implement it. In my head, here's how I imagine doing it:
    1. t = make_rectangle(width, height): build general VBO, centered at 0, 0
    2. optionally: t.set_scale(2)
    3. optionally: t.set_angle(30)
    4. t.draw_at(x, y): calculates some sort of scale/rotate/translate matrix (or matrices), passes the VBO and the matrix to a shader program
    5. Something happens to clip the world to the view visible on screen.
    I'm really unclear on how 4 and 5 will work. The main problem is that all the tutorials I find either use the fixed-function pipeline, are for 3D, or are unclear about how to do something this "simple". Can someone provide me with either a better way to think of / do this, or some concrete code detailing how to perform the transformations in a shader and how to construct and pass the data required for this shader transformation?
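
    For steps 4 and 5 of the list above, the usual approach is to build a model matrix from translate/rotate/scale, multiply it by an orthographic projection that matches the screen, and upload the product to the shader; the clipping in step 5 then falls out of the projection. A sketch assuming GLM and a shader program with a single "mvp" uniform (all names are illustrative):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Matrix for a rectangle whose shared VBO is centred at the origin.
        glm::mat4 makeMVP(float x, float y, float angleDeg, float scale,
                          float screenW, float screenH)
        {
            // Pixel coordinates with the origin at the top-left, y pointing down.
            glm::mat4 projection = glm::ortho(0.0f, screenW, screenH, 0.0f);
            glm::mat4 model(1.0f);
            model = glm::translate(model, glm::vec3(x, y, 0.0f));
            model = glm::rotate(model, glm::radians(angleDeg), glm::vec3(0.0f, 0.0f, 1.0f));
            model = glm::scale(model, glm::vec3(scale, scale, 1.0f));
            return projection * model;
        }

        // Per draw: upload the matrix and reuse the same quad VBO, e.g.
        //   glUniformMatrix4fv(glGetUniformLocation(program, "mvp"), 1, GL_FALSE, &mvp[0][0]);
        //   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    The vertex shader side is then just gl_Position = mvp * vec4(position, 0.0, 1.0), with the same centred quad VBO reused for every rectangle.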

    Read the article

  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly using a 4x4 matrix. This is working for the vertex positions, but it is not working for the normals (and probably not the tangents either). Here is what I have:

        // Transform vertex positions - Works like a charm!
        vertices = mesh.vertices;
        for (int i = 0; i < vertices.Length; ++i)
            vertices[i] = transform.MultiplyPoint(vertices[i]);

        // Does not work, lighting is messed up on mesh
        normals = mesh.normals;
        for (int i = 0; i < normals.Length; ++i)
            normals[i] = transform.MultiplyVector(normals[i]);

    Note: The input matrix converts from local to world space and is needed to combine multiple meshes together.
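
    A general note on why positions and normals behave differently here: normals do not transform like points or plain direction vectors. If the matrix contains non-uniform scale or shear, the standard rule is to use the inverse transpose of its upper-left 3x3 block and renormalise:

        \mathbf{n}' = \operatorname{normalize}\big((M_{3\times 3}^{-1})^{\mathsf T}\,\mathbf{n}\big)

    For a matrix that is only rotation, translation, and uniform scale, multiplying by the 3x3 part and renormalising gives the same result, so the inverse-transpose form is the safe default when meshes are combined with arbitrary transforms.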

    Read the article

  • Bitmap Font Always Displays in Center Without Coding it Manually (Fix Coordinate Problem on Text)

    - by David Dimalanta
    Is there a way to keep the text centered without coding it manually, especially when it updates? I'm making a display for the highest score. Let's say the score is 9. However, if the score is 9,999,999, the text is still drawn at the same fixed X and Y coordinates. Is there really a way to keep the text centered when it changes, e.g. when a player beats the world record? Here's my code inside my SpriteBatch block:

        font.setScale(1.5f);
        font.draw(batch, "HIGHEST SCORE:", (900/10)*1 + 60, (1280/16)*10);
        font.draw(batch, "" + 9999999 + "", (900/10)*4, (1280/16)*8);
        batch.draw(grid_guide, 0, 0, 900, 1280); // --> For testing purposes only.
        // Where 9999999 is a new record score, for example.

    Here's the image shown as an example. I added a red grid to it so I could check whether the score, when updated, always displays in the center no matter how many digits it has. However, it is fixed, so I have to figure out how to center it automatically regardless of the number of digits when updating the new high score. I have used the LibGDX preferences successfully to save and load the high-score records.

    Read the article

  • Setting Krypton Light to Screen Pixels

    - by Adam Jerrett
    So a few days back, I started playing around with Krypton XNA for 2D lighting in my game. I noticed in general, that spawning a light at (0,0) with Krypton causes the light to appear in, pretty much, the centre of the game screen. Is there any way to change this so a Krypton light's "starting point" at [0,0] would spawn at the top left of the screen, and thus follow the standard screen co-ordinates for position? I ask because currently I'm busy working on my game where my spawn point is [512,512]. With hard code, the closest I've got to the light being "central" to this point is the vector position [12,-20], which makes no sense and is impossible to craft, mathematically, if I want the light to move with the camera (the position [480,512] maps roughly to [10,-20]). So, is there any way to "normalise" the krypton lights to use standard screen co-ordinates? If you guys can, play around with the demo from the site and please see if you can find anything out about it. Documentation on the engine is rather scarce, so it's difficult to find anything relevant to my "pixel-perfect" need. It might just also be something in the code with regards to the matrices that I'm not fully understanding. Any help would be useful. Thanks.

    Read the article

  • Keeping rotation between two objects

    - by user99
    In my XNA game I have two objects that collide. When the first object collides with the other, it is able to latch on to it and move it about the world. I am having a problem with the math here (math isn't my strong point). I currently have the second object latch on to the first and move around with it, but I cannot get it to keep its original direction. So, if the object is facing up, it should keep that direction relative to how it is being rotated with the original item. Any tips on how best to achieve this?
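
    One way to set the math up is to capture the latched object's offsets once, at the moment it attaches, and re-derive its transform from the carrier every frame; as a sketch of the relationship (2D angles, with R(θ) the usual rotation matrix):

        \theta_{child} = \theta_{parent} + \Delta\theta, \qquad \mathbf{p}_{child} = \mathbf{p}_{parent} + R(\theta_{parent})\,\Delta\mathbf{p}

    Here Δθ and Δp are the rotation and position differences measured when the latch first happens; because they stay constant, the attached object keeps its original facing relative to the carrier no matter how the pair is rotated afterwards.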

    Read the article

  • Ray Intersecting Plane Formula in C++/DirectX

    - by user4585
    I'm developing a picking system that will use rays that intersect volumes and I'm having trouble with ray intersection versus a plane. I was able to figure out spheres fairly easily, but planes are giving me trouble. I've tried to understand various sources and get hung up on some of the variables used within their explanations. Here is a snippet of my code: bool Picking() { D3DXVECTOR3 vec; D3DXVECTOR3 vRayDir; D3DXVECTOR3 vRayOrig; D3DXVECTOR3 vROO, vROD; // vect ray obj orig, vec ray obj dir D3DXMATRIX m; D3DXMATRIX mInverse; D3DXMATRIX worldMat; // Obtain project matrix D3DXMATRIX pMatProj = CDirectXRenderer::GetInstance()->Director()->Proj(); // Obtain mouse position D3DXVECTOR3 pos = CGUIManager::GetInstance()->GUIObjectList.front().pos; // Get window width & height float w = CDirectXRenderer::GetInstance()->GetWidth(); float h = CDirectXRenderer::GetInstance()->GetHeight(); // Transform vector from screen to 3D space vec.x = (((2.0f * pos.x) / w) - 1.0f) / pMatProj._11; vec.y = -(((2.0f * pos.y) / h) - 1.0f) / pMatProj._22; vec.z = 1.0f; // Create a view inverse matrix D3DXMatrixInverse(&m, NULL, &CDirectXRenderer::GetInstance()->Director()->View()); // Determine our ray's direction vRayDir.x = vec.x * m._11 + vec.y * m._21 + vec.z * m._31; vRayDir.y = vec.x * m._12 + vec.y * m._22 + vec.z * m._32; vRayDir.z = vec.x * m._13 + vec.y * m._23 + vec.z * m._33; // Determine our ray's origin vRayOrig.x = m._41; vRayOrig.y = m._42; vRayOrig.z = m._43; D3DXMatrixIdentity(&worldMat); //worldMat = aliveActors[0]->GetTrans(); D3DXMatrixInverse(&mInverse, NULL, &worldMat); D3DXVec3TransformCoord(&vROO, &vRayOrig, &mInverse); D3DXVec3TransformNormal(&vROD, &vRayDir, &mInverse); D3DXVec3Normalize(&vROD, &vROD); When using this code I'm able to detect a ray intersection via a sphere, but I have questions when determining an intersection via a plane. First off should I be using my vRayOrig & vRayDir variables for the plane intersection tests or should I be using the new vectors that are created for use in object space? When looking at a site like this for example: http://www.tar.hu/gamealgorithms/ch22lev1sec2.html I'm curious as to what D is in the equation AX + BY + CZ + D = 0 and how does it factor in to determining a plane intersection? Any help will be appreciated, thanks.
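
    On the D term: for a plane with normal N passing through a point P0, the equation AX + BY + CZ + D = 0 has (A, B, C) = N and D = -N·P0, i.e. D is the negated signed distance of the plane from the origin (for a unit normal). The intersection test itself is short; a sketch using the same D3DX types as above (the plane parameters come from whatever your volume defines, and fabsf needs <cmath>):

        // Plane: dot(N, X) + D = 0, with D = -dot(N, P0) for any point P0 on the plane.
        // Assumes the D3DX math headers already pulled in by the surrounding code.
        bool RayPlane(const D3DXVECTOR3& rayOrig, const D3DXVECTOR3& rayDir,
                      const D3DXVECTOR3& planeNormal, float planeD, D3DXVECTOR3& hitOut)
        {
            float denom = D3DXVec3Dot(&planeNormal, &rayDir);
            if (fabsf(denom) < 1e-6f)       // ray runs parallel to the plane
                return false;
            float t = -(D3DXVec3Dot(&planeNormal, &rayOrig) + planeD) / denom;
            if (t < 0.0f)                   // plane lies behind the ray origin
                return false;
            hitOut = rayOrig + rayDir * t;  // point of intersection
            return true;
        }

    As for which vectors to feed it: the world-space pair (vRayOrig/vRayDir) matches a plane defined in world space, while the transformed pair (vROO/vROD) matches a plane defined in the picked object's local space; the ray and the plane just have to live in the same space.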

    Read the article

  • Beginner question about vertex arrays in OpenGL

    - by MrDatabase
    Is there a special order in which vertices are entered into a vertex array? Currently I'm drawing single textures like this: glBindTexture(GL_TEXTURE_2D, texName); glVertexPointer(2, GL_FLOAT, 0, vertices); glTexCoordPointer(2, GL_FLOAT, 0, coordinates); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); where vertices has four "xy pairs". This is working fine. As a test I doubled the sizes of the vertices and coordinates arrays and changed the last line above to: glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); since vertices now contains eight "xy pairs". I do see two textures (the second is intentionally offset from the first). However the textures are now distorted. I've tried passing GL_TRIANGLES to glDrawArrays instead of GL_TRIANGLE_STRIP but this doesn't work either. I'm so new to OpenGL that I thought it's best to just ask here :-) Cheers!
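
    What goes wrong with a single 8-vertex GL_TRIANGLE_STRIP is that the strip keeps stitching each new vertex to the previous two, so vertices 5-8 get connected to the first quad; that bridge is the distortion. Two common fixes are one glDrawArrays call per quad, or joining the quads with repeated ("degenerate") vertices so the bridging triangles have zero area. A sketch of the latter, with made-up coordinates that slot into the existing setup:

        // Two textured quads in one strip: repeat the last vertex of quad 1 (D) and the
        // first vertex of quad 2 (E) so the two bridging triangles collapse to zero area.
        // Per-quad order: bottom-left, bottom-right, top-left, top-right.
        GLfloat vertices[] = {
            0.0f, 0.0f,   1.0f, 0.0f,   0.0f, 1.0f,   1.0f, 1.0f,   // quad 1: A B C D
            1.0f, 1.0f,   2.0f, 0.0f,                               // degenerate bridge: D, E
            2.0f, 0.0f,   3.0f, 0.0f,   2.0f, 1.0f,   3.0f, 1.0f,   // quad 2: E F G H
        };
        // glDrawArrays(GL_TRIANGLE_STRIP, 0, 10);  // 10 vertices including the bridge

    The texture-coordinate array needs the same two duplicated entries. Plain GL_TRIANGLES also works, but then each quad needs six vertices (two explicit triangles), which is why only switching the mode without re-ordering the data still looked wrong.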

    Read the article

  • Postgres: clear entire database before re-creating / re-populating from bash script

    - by Hoff
    hi folks, I'm writing a shell script (will become a cronjob) that will: 1: dump my production database, 2: import the dump into my development database. Between step 1 and 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this:

        #!/bin/bash
        time=`date '+%Y'-'%m'-'%d'`

        # 1. export(dump) the current production database
        pg_dump -U production_db_name > /backup/dir/backup-${time}.sql

        # missing step: drop all tables from development database so it can be re-populated

        # 2. load the backup into the development database
        psql -U development_db_name < backup/dir/backup-${time}.sql

    Many thanks in advance! Martin

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I have already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
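
    One way to sidestep the per-edit cost is to keep the live, editable map as a plain array (which already uses far less memory than a 2D array of HashMaps) and apply RLE per row only when a map is saved, sent, or swapped out, so the compression cost is a one-off rather than per tile change. A sketch of such a row encoder in C++ (the game is Java, but the idea translates directly; the types are illustrative):

        #include <cstdint>
        #include <utility>
        #include <vector>

        // Encode one row of tile ids as (run length, tile id) pairs.
        std::vector<std::pair<uint16_t, int32_t>> encodeRow(const std::vector<int32_t>& row) {
            std::vector<std::pair<uint16_t, int32_t>> runs;
            for (size_t i = 0; i < row.size(); ) {
                size_t j = i;
                while (j < row.size() && row[j] == row[i]) ++j;   // extend the current run
                runs.push_back({ static_cast<uint16_t>(j - i), row[i] });
                i = j;
            }
            return runs;
        }

        // Decoding simply expands each run back out.
        std::vector<int32_t> decodeRow(const std::vector<std::pair<uint16_t, int32_t>>& runs) {
            std::vector<int32_t> row;
            for (const auto& r : runs)
                row.insert(row.end(), r.first, r.second);
            return row;
        }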

    Read the article

  • How much server bandwidth does an average RTS game require per month?

    - by Nat Weiss
    My friend and I are going to write a multiplayer, multiplatform RTS game and are currently analyzing the costs of going with a client-server architecture. The game will have a small map with mostly characters, not buildings (think of DotA or League of Legends). The authoritative game logic will run on the server and message packet sizes will be highly optimized. We'd like to know approximately how much server bandwidth our proposed RTS game would use on a monthly basis, considering these theoretical constants: 100 concurrent users maximum 8 players maximum per game 10 ticks per second Bonus: If you can tell us approximately how much server RAM this kind of game would use that would also help a great deal. Thanks in advance.
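
    As a back-of-the-envelope bound, assuming roughly 100 bytes of payload per player per tick (the real figure depends entirely on the optimised packet format) and full occupancy around the clock:

        100\ \mathrm{players} \times 10\ \mathrm{ticks/s} \times 100\ \mathrm{B} = 100\ \mathrm{KB/s} \approx 0.8\ \mathrm{Mbit/s}
        100\ \mathrm{KB/s} \times 2.6 \times 10^{6}\ \mathrm{s/month} \approx 260\ \mathrm{GB/month}

    Real usage will usually sit well below that, since 100 concurrent users around the clock is the ceiling rather than the average. For the bonus question, game state for a dozen or so 8-player matches on small maps typically fits in tens to a few hundred MB of RAM, though that depends entirely on the engine and per-entity data.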

    Read the article

  • Apply bone transforms when importing FBX in XNA

    - by hichaeretaqua
    Preconditions: I have some models that only contain some meshes and one texture. There is no animation within the model. An example: a model of a table. I want to draw the model with a custom effect, so I have to swap the effect after loading the model. In order to draw them correctly, I have to apply the bone transformation manually on each draw, for each mesh and effect, as can be seen here. So there are two questions: Is there an option during import that allows me to apply the bone transformation to all vertices, so that during the draw call I don't have to do this? Is there an option during import that merges all vertices into one vertex and index buffer, allowing me to draw the whole model with just one call? I'm pretty sure that the built-in "Autodesk FBX - XNA Framework" importer does not support these features, but maybe there is another importer available, or another possibility I missed. The aim is to speed up rendering a little bit, especially by using instancing. So having one VertexBuffer to draw at one time would be pretty nice.

    Read the article

  • 2D vector graphics HTML5 framework

    - by Yury
    I am trying to find an HTML5 game framework with the following criteria: 1) Really good performance. 2) Good support for vector graphics (objects that contain canvas elements: line, rect, bezierCurve, etc.). 3) Easy to port to mobile. Optional: a physics engine. So far: 1) Pixi.js looks really good, but I didn't find any info about "vector objects" support. 2) I found "vector objects" support in Paper.js. I need something like these: http://paperjs.org/examples/chain/ and http://paperjs.org/examples/path-intersections/. But it looks like Paper.js doesn't perform as well as Pixi.js, and it is not a game engine. Is there any good framework that meets these requirements? P.S. I found a similar question here: Which free HTML5-based game engine meets these requirements? But it was asked a long time ago, and a lot of new things have been created since 2011.

    Read the article

  • determine collision angle on a rotating body

    - by jorb
    update: new diagram and updated description I have a contact listener set up to try and determine the side that a collision happened at relative to the a bodies rotation. One way to solve this is to find the value of the yellow angle between the red and blue vectors drawn above. The angle can be found by taking the arc cosine of the dot product of the two vectors (Evan pointed this out). One of my points of confusion is the difference in domain of the atan2 function html canvas coordinates and the Box2d rotation information. I know I have to account for this somehow... SS below questions: Does Box2D provide these angles more directly in the collision information? Am I even on the right track? If so, any hints? I have the following javascript so far: Ship.prototype.onCollide = function (other_ent,cx,cy) { var pos = this.body.GetPosition(); //collision position relative to body var d_cx = pos.x - cx; var d_cy = pos.y - cy; //length of initial vector var len = Math.sqrt(Math.pow(pos.x -cx,2) + Math.pow(pos.y-cy,2)); //body angle - can over rotate hence mod 2*Pi var ang = this.body.GetAngle() % (Math.PI * 2); //vector representing body's angle - same magnitude as the first var b_vx = len * Math.cos(ang); var b_vy = len * Math.sin(ang); //dot product of the two vectors var dot_prod = d_cx * b_vx + d_cy * b_vy; //new calculation of difference in angle - NOT WORKING! var d_ang = Math.acos(dot_prod); var side; if (Math.abs(d_ang) < Math.PI/2 ) side = "front"; else side = "back"; console.log("length",len); console.log("pos:",pos.x,pos.y); console.log("offs:",d_cx,d_cy); console.log("body vec",b_vx,b_vy); console.log("body angle:",ang); console.log("dot product",dot_prod); console.log("result:",d_ang); console.log("side",side); console.log("------------------------"); }
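
    One concrete bug in the snippet: acos expects a cosine, i.e. the dot product divided by the product of the two vectors' lengths; without that normalisation the argument can fall outside [-1, 1] and Math.acos returns NaN. The unsigned angle, and a signed variant that avoids the atan2-domain worry altogether, are:

        \cos\alpha = \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert}, \qquad \alpha_{signed} = \operatorname{atan2}\!\left(a_x b_y - a_y b_x,\ \mathbf{a}\cdot\mathbf{b}\right)

    With a as the body-facing vector and b as the contact offset, |α| < π/2 still maps to "front", and the sign of the 2D cross term tells left from right if that ever becomes useful. Box2D's contact manifold provides contact points and a normal, but not this body-relative angle directly.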

    Read the article

  • FBX Importer - Texture Name

    - by CmasterG
    I have a problem with the FBX SDK. I read in the data for the vertex position and the uv coordinates. It works fine, but now I want to read for each polygon to which texture it belongs, so that I can have models with multiple textures. Can anyone tell me how I can get the texture name (file name) for my polygon. My code to read in vertex position and uv coordinates is the following: int i, j, lPolygonCount = pMesh->GetPolygonCount(); FbxVector4* lControlPoints = pMesh->GetControlPoints(); int vertexId = 0; for (i = 0; i < lPolygonCount; i++) { int lPolygonSize = pMesh->GetPolygonSize(i); for (j = 0; j < lPolygonSize; j++) { int lControlPointIndex = pMesh->GetPolygonVertex(i, j); FbxVector4 pos = lControlPoints[lControlPointIndex]; current_model[vertex_index].x = pos.mData[0] - pivot_offset[0]; current_model[vertex_index].y = pos.mData[1] - pivot_offset[1]; current_model[vertex_index].z = pos.mData[2]- pivot_offset[2]; FbxVector4 vertex_normal; pMesh->GetPolygonVertexNormal(i,j, vertex_normal); current_model[vertex_index].nx = vertex_normal.mData[0]; current_model[vertex_index].ny = vertex_normal.mData[1]; current_model[vertex_index].nz = vertex_normal.mData[2]; //read in UV data FbxStringList lUVSetNameList; pMesh->GetUVSetNames(lUVSetNameList); //get lUVSetIndex-th uv set const char* lUVSetName = lUVSetNameList.GetStringAt(0); const FbxGeometryElementUV* lUVElement = pMesh->GetElementUV(lUVSetName); if(!lUVElement) continue; // only support mapping mode eByPolygonVertex and eByControlPoint if( lUVElement->GetMappingMode() != FbxGeometryElement::eByPolygonVertex && lUVElement->GetMappingMode() != FbxGeometryElement::eByControlPoint ) return; //index array, where holds the index referenced to the uv data const bool lUseIndex = lUVElement->GetReferenceMode() != FbxGeometryElement::eDirect; const int lIndexCount= (lUseIndex) ? lUVElement->GetIndexArray().GetCount() : 0; FbxVector2 lUVValue; //get the index of the current vertex in control points array int lPolyVertIndex = pMesh->GetPolygonVertex(i,j); //the UV index depends on the reference mode //int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex; int lUVIndex = pMesh->GetTextureUVIndex(i, j); lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex); current_model[vertex_index].tu = (float)lUVValue.mData[0]; current_model[vertex_index].tv = (float)lUVValue.mData[1]; vertex_index ++; } } float v1[3], v2[3], v3[3]; v1[0] = current_model[vertex_index - 3].x; v1[1] = current_model[vertex_index - 3].y; v1[2] = current_model[vertex_index - 3].z; v2[0] = current_model[vertex_index - 2].x; v2[1] = current_model[vertex_index - 2].y; v2[2] = current_model[vertex_index - 2].z; v3[0] = current_model[vertex_index - 1].x; v3[1] = current_model[vertex_index - 1].y; v3[2] = current_model[vertex_index - 1].z; collision_model->addTriangle(v1,v2,v3);

    Read the article

  • Simpler alternative to AngelScript

    - by Vee
    I want to give players the ability to create and share bullet patterns for a shoot 'em up. The pattern scripts should have all the common programming stuff like loops, if/else, variables, and so on. But in the end, I just want them to call a "spawn bullet at X, Y with angle Z and speed A" function in the C++ game. To spawn a circle of bullets, the user should only have to write a script with a for loop that goes from 0 to 360 and calls the spawn-bullet function on every iteration. I tried integrating AngelScript, but I am getting nowhere - it looks way too complex for a simple task like this one. Is there an easy-to-integrate library that can solve my problem? Thanks.
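
    Lua is a common fit for exactly this kind of user scripting: the C API is small, and a single bound function is enough for the pattern scripts. A minimal embedding sketch, assuming a hypothetical game-side SpawnBullet function (none of these names come from the project):

        #include <cstdio>
        #include <lua.hpp>   // bundles lua.h, lualib.h, lauxlib.h with extern "C"

        // Game-side function the scripts will drive; the body is a placeholder here.
        static void SpawnBullet(float x, float y, float angle, float speed) { /* ... */ }

        // Lua binding: reads the four numeric arguments from the Lua stack.
        static int l_spawn_bullet(lua_State* L) {
            float x     = (float)luaL_checknumber(L, 1);
            float y     = (float)luaL_checknumber(L, 2);
            float angle = (float)luaL_checknumber(L, 3);
            float speed = (float)luaL_checknumber(L, 4);
            SpawnBullet(x, y, angle, speed);
            return 0;  // nothing returned to the script
        }

        int main() {
            lua_State* L = luaL_newstate();
            luaL_openlibs(L);                                // loops, math, etc. for pattern authors
            lua_register(L, "spawn_bullet", l_spawn_bullet);
            // A player pattern can then be as small as:
            //   for a = 0, 359, 10 do spawn_bullet(400, 300, a, 5) end
            if (luaL_dofile(L, "pattern.lua") != 0)
                std::printf("script error: %s\n", lua_tostring(L, -1));
            lua_close(L);
            return 0;
        }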

    Read the article

  • Target tracking with a small delay (actionscript 3.0)

    - by John Dodson
    I'm having trouble thinking of a good method to track my character with an enemy attack. Of course, I don't want the attack to track my character's current position; I want it to track where the character was about 1 second before (so you can move around and make the attack miss and loop around you sort of a thing). The general structure of my game uses a timer to update all of my events. I have a timer going off every 25 milliseconds that updates everything, including my player's position and the enemies position. Right now I just have the enemy attack directly targeting my character....which works fine except that it's impossible to escape =p. Let me know if I didn't supply enough details. My approach was going to basically be get my character's position from about 1 second ago, then have the enemy target that position, the only problem is I can't think of a good way to get my character's position from previous times. Thanks for the help!
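
    One common way to get the one-second lag is to record the character's recent positions in a small ring buffer each tick and have the attack steer toward the oldest sample. A sketch in C++ (the game is ActionScript 3, but the structure carries over; all names are made up):

        #include <array>

        struct Vec2 { float x, y; };

        // With an update every 25 ms, 40 samples cover roughly one second of history.
        class PositionHistory {
        public:
            static const int kSamples = 40;

            void push(Vec2 p) {
                buf[head] = p;
                head = (head + 1) % kSamples;
                if (count < kSamples) ++count;
            }

            // Oldest stored sample; once the buffer is full this is about 1 s old.
            // Before it fills up, it simply returns the earliest position recorded.
            Vec2 delayed() const { return count < kSamples ? buf[0] : buf[head]; }

        private:
            std::array<Vec2, kSamples> buf{};
            int head = 0, count = 0;
        };

    Each timer tick, push the player's current position and aim the enemy attack at delayed(); the attack then chases where the player was, not where they are.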

    Read the article

  • libgdx actors and instant actions

    - by vaati
    I'm having trouble with actors and actions. I have a list of actors, they all have either no action, or 1 sequence action This sequence action has either : a couple of actions (some are instant, some have duration 0) a couple of actions followed by a parallel action. My problem is the following: some of the instant actions are used to set the position and the alpha of the actor. So when one of the action is "move to x,y and set alpha to 0" the actor is visible for one frame at position 0,0 , move instantly to x,y for the next frame, and then disappears. Though this behaviours is to be expected, I want to avoid it. How can I achieve that? I tried to intercept the actions before I put actors in the stage but I need the stage width/height for some actions. So something like : Action actionSequence = actor.getActions().get(0); Array<Action> actions = ((SequenceAction) actionSequence).getActions(); for(Action act : actions){ if(act.act(0)) System.out.println("action " + act.toString() + " successfully run"); else System.out.println("action " + act.toString() + " wasn't instant"); } won't work. It gets even more complicated when an actor can also have a repeat action in stead of the sequence action (because you have to only run the actions that have duration 0 once without repeat, and then start the repeat). Any help is appreciated.

    Read the article

  • Unity3d web player fails to load textures

    - by José Franco
    I'm having a problem with the Unity3d Web Player. I have developed a project and successfully deployed it in a web app. It works with absolutely no problem on my PC. This app is to be installed on two identical machines. I have installed it on both, and it only works properly on one. The issue is that on one computer it fails to properly load the models and textures, so the game runs but instead of the models I can only see black rectangles on a blue background. It has the same problem in all browsers, and I get no errors from either the player or JavaScript. The only difference between these computers is that the one that has the problem is running Windows 8.1 and the other one Windows 8. Could this be the cause of the issue? It works fine on my computer with Windows 8.1; however, both of the other computers have specs that are significantly lower than mine. I have already searched everywhere and it seems that it has to do with the individual games; however, I think it may have to do with the computer itself, because it runs properly on the other two. The specs of the computers I'm installing the app on are as follows: Intel Celeron 1.40 GHz, 2 GB RAM, Intel HD Graphics. If anybody could point me in the right direction I would be very grateful. I forgot to mention, I'm running Unity Web Player 4.3.5 and the version on the other two computers is 4.5.0.

    Read the article

  • XNA clip plane effect makes models black

    - by user1990950
    When using this effect file:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;
        float4 ClipPlane0;

        void vs(inout float4 position : POSITION0, out float4 clipDistances : TEXCOORD0)
        {
            clipDistances.x = dot(position, ClipPlane0);
            clipDistances.y = 0;
            clipDistances.z = 0;
            clipDistances.w = 0;
            position = mul(mul(mul(position, World), View), Projection);
        }

        float4 ps(float4 clipDistances : TEXCOORD0) : COLOR0
        {
            clip(clipDistances);
            return float4(0, 0, 0, 0);
        }

        technique
        {
            pass
            {
                VertexShader = compile vs_2_0 vs();
                PixelShader = compile ps_2_0 ps();
            }
        }

    all models using this are rendered black. Is it possible to render them correctly?

    Read the article

  • How to move from the home page screen to the next menu screen when clicking a particular image in XNA 4.0?

    - by Raj
    I'm new to XNA game programming (and to C#). I want to create a main page with some buttons; on clicking a particular button, it should go to another screen where there are some buttons to select from, which should in turn go to the game screen on clicking. Can I put all the code in "game1.cs", or should I create a new class for every page? Please help... I've just been through some pages of "Learning XNA 4.0" by O'Reilly. If there are any other good tutorials, please suggest them.

    Read the article

  • What are the cons of using only non-member functions and POD?

    - by Miro
    I'm creating my own game engine. I've read these articles and this question about DOD, which said not to use member functions and classes. I have also heard some criticism of this idea. I could write the engine using either member functions or non-member functions; it would be similar either way. So what are the benefits/cons of that approach, and when the project grows, does either of these approaches give clearer and more manageable code? With POD and non-member functions I don't have to make struct members public; I can still use an object id outside of the engine, like OpenGL does with all of its stuff, so it's not about encapsulation. POD - plain old data. DOD - data-oriented design.

    Read the article
