Search Results

Search found 33291 results on 1332 pages for 'development environment'.

Page 496/1332 | < Previous Page | 492 493 494 495 496 497 498 499 500 501 502 503  | Next Page >

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision detection worked perfectly. However, after applying extrapolation to the sprite's movement (to smooth things out), the collision no longer works. I am assuming this is because it is acting on the new coordinate produced by the extrapolation routine (which is situated in my render call).

    How do I correct this behaviour? I've tried putting an extra collision check just after the extrapolation - this does seem to clear up a lot of the problems, but I've ruled it out because putting logic into my rendering is out of the question. I've also tried making a copy of the sprite's X position, extrapolating that and drawing with it rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with it. I've found a couple of similar questions on here, but the answers haven't helped me.

    This is my extrapolation code:

        public void onDrawFrame(GL10 gl) {
            // Set/reset loop counter back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            extrapolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;
            render(extrapolation);
        }

    Applying extrapolation:

        render(float extrapolation) {
            // This example shows extrapolation for the X axis only.
            // The Y position (spriteScreenY) is assumed to be valid.
            extrapolatedPosX = spriteGridX + (SpriteXVelocity * dt) * extrapolation;
            spriteScreenPosX = extrapolatedPosX * screenWidth;
            drawSprite(spriteScreenPosX, spriteScreenY);
        }

    Edit: As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with... this has its own problems. Firstly, regardless of the copying, when the sprite is moving it's super-smooth, but when it stops it wobbles slightly left/right - as it's still extrapolating its position based on the time. Is this normal behaviour, and can we 'turn it off' when the sprite stops? I've tried having flags for left/right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help.

    If the user is pressing, say, the right button and the sprite is moving right, when it hits a wall, if the user continues to hold the right button down, the sprite will keep animating to the right while being stopped by the wall (therefore not actually moving). But because the right flag is still set, and also because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (though not the player) that the sprite is moving, and therefore extrapolation continues. So what the player sees is a 'static' sprite (yes, it's animating, but it's not actually moving across the screen) that every now and then shakes violently as the extrapolation attempts to do its thing. Hope this helps.
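    A minimal sketch (an assumption, not the poster's code) of the fix implied by the edit above: extrapolate only a render-side copy of the position, and have the collision response zero the logical velocity while the sprite is pressed against the wall, so the renderer has a zero delta to extrapolate and the wobble stops. The names are illustrative, with the wall assumed to be on the right:

        class Sprite {
            float gridX;      // authoritative position, updated by game logic only
            float velocityX;  // units per logic tick, zeroed by collision response

            // Called from the logic update: when the wall stops us, kill the
            // velocity instead of leaving it set while the position is pushed
            // back out of the wall on every tick.
            void resolveWallCollision(float wallX) {
                if (gridX > wallX) {
                    gridX = wallX;
                    velocityX = 0f; // renderer now extrapolates nothing: no shake
                }
            }

            // Called from the render path; the result is never written back,
            // so the logic always sees the unmodified gridX.
            float renderX(float extrapolation) {
                return gridX + velocityX * extrapolation;
            }
        }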

    Read the article

  • DOT implementation

    - by Denis Ermolin
    I have some DoT (damage over time) implementation problems. My game runs at 30 FPS. The current implementation is: let's say the hero casts a spell which does 1 damage per second. So on every frame I do (pseudocode): damage_done = getRandomDamage() * delta_time; I accumulate the damage, and when it becomes greater than zero I subtract the rounded damage from the current health, and so on. With 30 FPS and 1 DPS that is 1/30 = 0.033... per frame. We know that floats are not precise enough to sum 30 repeating decimals and end up with exactly 1. But HP is a discrete value, and that's why 1 DPS will not have dealt 1 damage after 1 second - the accumulated value will be 0.9999... It's not a big deal when you have 100000 DPS - being off by 1 damage will not be noticeable. But what if I have 1.5 DPS? How do modern RPGs implement DoTs?
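    A minimal sketch (an assumption, not from the post) of the usual fix: accumulate fractional damage and apply only the integer part each frame, carrying the remainder forward. The float error then never grows beyond a single frame's fraction - at worst a point of damage lands one frame late, and nothing is lost over the DoT's duration:

        class DamageOverTime {
            private float accumulator = 0f; // fractional damage carried between frames

            // Returns the whole damage to subtract from HP this frame.
            int tick(float dps, float deltaSeconds) {
                accumulator += dps * deltaSeconds;
                int whole = (int) accumulator; // integer part is applied...
                accumulator -= whole;          // ...the remainder is kept
                return whole;
            }
        }

    Many RPGs also sidestep the problem entirely by ticking DoTs on a fixed timer (say, once per second) with integer damage per tick, rather than applying damage per frame.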

    Read the article

  • Convex Hull for Concave Objects

    - by Lighthink
    I want to implement GJK, and I want it to handle concave shapes too (almost all my shapes are concave). I've thought of decomposing the concave shape into convex shapes and then building a hierarchical tree out of the convex pieces, but I do not know how to do it. Nothing I could find on the Internet satisfied my needs, so maybe someone can point me in the right direction or give a full explanation.
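    A hedged sketch of the approach described above: decompose the concave mesh into convex pieces offline (e.g. with a convex-decomposition library such as V-HACD), then store the pieces in a small bounding-volume tree so GJK only runs against leaves whose boxes overlap the other object's box. Aabb and ConvexShape are placeholder stand-ins for whatever the engine already has:

        import java.util.List;

        interface ConvexShape { }                           // stand-in convex piece
        interface Aabb { boolean overlaps(Aabb other); }    // stand-in bounding box

        class BvhNode {
            Aabb bounds;          // box enclosing everything under this node
            BvhNode left, right;  // null at leaves
            ConvexShape piece;    // set only at leaves

            // Collect the convex pieces whose bounds overlap the query region;
            // GJK is then run only on this (usually short) candidate list.
            void collect(Aabb query, List<ConvexShape> out) {
                if (!bounds.overlaps(query)) return;        // prune whole subtree
                if (piece != null) { out.add(piece); return; }
                left.collect(query, out);
                right.collect(query, out);
            }
        }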

    Read the article

  • Blender - creating bones from transform matrices

    - by user975135
    Notice: this is for the Blender 2.5/2.6 API.

    Back in the old days of the Blender 2.4 API, you could easily create a bone from a transform matrix in your 3D file, as EditBones had an attribute named "matrix" - an armature-space matrix you could access and modify. The new 2.5+ API still has the "matrix" attribute on EditBones, but for some unknown reason it is now read-only.

    So how do you create EditBones from transform matrices? I could only find one thing: a new transform() function, which also takes a matrix: "Transform the bones head, tail, roll and envelope (when the matrix has a scale component)." Perfect - but you already need to have some values (loc/rot/scale) for your bone, otherwise transforming with a matrix like this will give you nothing; your bone will be a zero-sized bone, which will be deleted by Blender.

    If you create default bone values first, like this:

        bone.tail = mathutils.Vector([0,1,0])

    then transform() will work on your bone, and it might seem to create correct bones. But setting a tail position actually generates a matrix itself, so with transform() you don't get the matrix from your model file on your EditBone, but the product of your matrix with the bone's existing one. This can easily be proven by comparing the matrices read from the file with EditBone.matrix. Again, it might seem correct in Blender, but now export your model and you'll see your animations are messed up, as the bind-pose rotations of the bones are wrong.

    I've tried to find an alternative way to assign the transformation matrix from my file to my EditBone, with no luck.

    Read the article

  • Masking OpenGL texture by a pattern

    - by user1304844
    Tiled terrain. The user wants to build a structure. He presses build, and for each tile an "allow" or "disallow" tile sprite is added to the scene. FPS drops right away, since 600+ tiles are added to the screen. Since the map equals the screen, there is no scrolling. I came up with the idea of making an allow grid covering the whole map and masking the disallow fields.

    Approach 1: Create allow and disallow grid textures. Draw a polygon on screen. Pass both textures to the fragment shader. Determine the position inside the polygon and use the color from the allow texture if the fragment belongs to an allow field, the disallow texture otherwise.

    Problem: How do I know I'm on a field where building isn't allowed if I cannot pass the matrix representing the map (enum FieldStatus[][] (Allow / Disallow)) to the shader? Without it, inside the shader I don't know which fragments should be masked.

    Approach 2: Create an allow texture. Create an empty texture buffer the same size as the allow texture. Memset the pixels of the empty texture to the desired color for each pixel that doesn't allow building. Draw a polygon on screen. Pass both textures to the fragment shader. Use the second texture's color if its alpha is greater than 0, the first texture's color otherwise.

    Problem: I'm not sure what the right way to manipulate pixels on a texture is. Do I just make a buffer of width*height*4 bytes and memcpy the color[] to the desired coordinates, or is there more to it? Would I have to call glTexImage2D after every change to the texture? Another problem with this approach is that it takes a lot more work to get a prettier effect, since I'm manipulating the color pixels instead of just masking two textures.

        varying vec2 TexCoordOut;
        uniform sampler2D Texture1;
        uniform sampler2D Texture2;

        void main(void) {
            vec4 allowColor = texture2D(Texture1, TexCoordOut);
            vec4 disallowColor = texture2D(Texture2, TexCoordOut);
            if (disallowColor.a > 0.0) {
                gl_FragColor = disallowColor;
            } else {
                gl_FragColor = allowColor;
            }
        }

    I'm working with OpenGL on Windows. Any other suggestion is welcome.
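    One hedged sketch of the usual answer to approach 1's problem: encode FieldStatus[][] as a one-texel-per-tile texture and sample it in the shader with NEAREST filtering. The GL calls are the same in any binding; Android-style Java bindings are used below purely for illustration, and mapW/mapH (map size in tiles) plus the byte[][] status grid are assumed inputs:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import static android.opengl.GLES20.*;

        class BuildMask {
            static ByteBuffer pack(byte[][] status, int mapW, int mapH) {
                ByteBuffer mask = ByteBuffer.allocateDirect(mapW * mapH)
                                            .order(ByteOrder.nativeOrder());
                for (int y = 0; y < mapH; y++)
                    for (int x = 0; x < mapW; x++)
                        mask.put(status[y][x] == 0 ? (byte) 0 : (byte) 0xFF);
                mask.flip();
                return mask;
            }

            static void upload(int texId, ByteBuffer mask, int mapW, int mapH) {
                glBindTexture(GL_TEXTURE_2D, texId);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are not 4-byte padded
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, mapW, mapH, 0,
                             GL_LUMINANCE, GL_UNSIGNED_BYTE, mask);
                // After a single tile changes, glTexSubImage2D on that 1x1
                // region is enough; the full upload is only needed once.
            }
        }

    The fragment shader would then sample this mask with map-space UVs and branch on the texel value, exactly like the Texture2 alpha test in the snippet above.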

    Read the article

  • Relative cam movement and momentum on arbitrary surface

    - by user29244
    I have been working on a game for quite a long time - think Sonic classic physics in 3D, or Tony Hawk on the PSX - using Unity3D. However, I'm stuck on the most fundamental aspect of movement.

    The requirement is that I need to move the character in Mario 64 fashion (or Sonic Adventure), i.e. camera-relative input: the camera's forward direction always points input "forward" into the screen, and left or right input points toward the left or right of the screen. When input is at rest, the camera direction is independent of the character direction, and the camera can orbit the character. When input is pressed, the character rotates until its direction aligns with the direction the input is pointing at.

    It's super easy to do as long as your movement is parallel to the global horizontal (or any world axis). However, when you try to do this on an arbitrary surface (think moving along a complex curved surface) with the character sticking to the surface normal (basically moving on walls and ceilings freely), it gets harder. What I want is to achieve the same finesse of movement as in Mario, but on arbitrarily angled surfaces. There are more problems after that (jumping, and transitioning back to world alignment and then back onto a surface while keeping momentum), but so far I haven't even got the basics off the ground.

    So far I have accomplished moving along the curved surface and the camera-relative input, but for some reason the direction part fails every time (point number 3 above: the character slowly aligning itself with the input direction). Do you have an idea how to achieve that? Here is the code and a demo of what I have so far.

    The demo: https://dl.dropbox.com/u/24530447/flash%20build/litesonicengine/LiteSonicEngine5.html

    Camera code:

        using UnityEngine;
        using System.Collections;

        public class CameraDrive : MonoBehaviour
        {
            public GameObject targetObject;
            public Transform camPivot, camTarget, camRoot, relcamdirDebug;
            float rot = 0;

            void Start()
            {
                this.transform.position = targetObject.transform.position;
                this.transform.rotation = targetObject.transform.rotation;
            }

            void FixedUpdate()
            {
                // The pivot system
                camRoot.position = targetObject.transform.position;

                // Input on pivot orientation
                rot = 0;
                float mouse_x = Input.GetAxisRaw("camera_analog_X");
                rot = rot + (0.1f * Time.deltaTime * mouse_x);
                wrapAngle(rot);

                // When the target object rotates, this rotates too - this should not happen
                UpdateOrientation(this.transform.forward, targetObject.transform.up);
                camRoot.transform.RotateAround(camRoot.transform.up, rot);

                // Debug the relative cam direction
                RelativeCamDirection();

                // This camera
                this.transform.position = camPivot.position; // set the camera to the pivot
                this.transform.LookAt(camTarget.position);
            }

            public float wrapAngle(float Degree)
            {
                while (Degree < 0.0f) { Degree = Degree + 360.0f; }
                while (Degree >= 360.0f) { Degree = Degree - 360.0f; }
                return Degree;
            }

            private void UpdateOrientation(Vector3 forward_vector, Vector3 ground_normal)
            {
                Vector3 projected_forward_to_normal_surface =
                    forward_vector - (Vector3.Dot(forward_vector, ground_normal)) * ground_normal;
                camRoot.transform.rotation =
                    Quaternion.LookRotation(projected_forward_to_normal_surface, ground_normal);
            }

            float GetOffsetAngle(float targetAngle, float DestAngle)
            {
                return ((targetAngle - DestAngle + 180) % 360) - 180;
            }

            void OnDrawGizmos()
            {
                Gizmos.DrawCube(camPivot.transform.position, new Vector3(1, 1, 1));
                Gizmos.DrawCube(camTarget.transform.position, new Vector3(1, 5, 1));
                Gizmos.DrawCube(camRoot.transform.position, new Vector3(1, 1, 1));
            }

            void OnGUI()
            {
                GUI.Label(new Rect(0, 80, 1000, 20 * 10), "targetObject.transform.up : " + targetObject.transform.up.ToString());
                GUI.Label(new Rect(0, 100, 1000, 20 * 10), "target euler : " + targetObject.transform.eulerAngles.y.ToString());
                GUI.Label(new Rect(0, 100, 1000, 20 * 10), "rot : " + rot.ToString());
            }

            void RelativeCamDirection()
            {
                float input_vertical_movement = Input.GetAxisRaw("Vertical"),
                      input_horizontal_movement = Input.GetAxisRaw("Horizontal");

                Vector3 relative_forward = Vector3.forward,
                        relative_right = Vector3.right,
                        relative_direction = (relative_forward * input_vertical_movement)
                                           + (relative_right * input_horizontal_movement);

                MovementController MC = targetObject.GetComponent<MovementController>();
                MC.motion = relative_direction.normalized * MC.acceleration * Time.fixedDeltaTime;
                MC.motion = this.transform.TransformDirection(MC.motion);
                //MC.transform.Rotate(Vector3.up, input_horizontal_movement * 10f * Time.fixedDeltaTime);
            }
        }

    Movement code:

        using UnityEngine;
        using System.Collections;

        public class MovementController : MonoBehaviour
        {
            public float deadZoneValue = 0.1f, angle, acceleration = 50.0f;
            public Vector3 motion;

            void OnGUI()
            {
                GUILayout.Label("transform.rotation : " + transform.rotation);
                GUILayout.Label("transform.position : " + transform.position);
                GUILayout.Label("angle : " + angle);
            }

            void FixedUpdate()
            {
                Ray ground_check_ray = new Ray(gameObject.transform.position, -gameObject.transform.up);
                RaycastHit raycast_result;
                Rigidbody rigid_body = gameObject.rigidbody;

                if (Physics.Raycast(ground_check_ray, out raycast_result))
                {
                    Vector3 next_position;
                    UpdateOrientation(gameObject.transform.forward, raycast_result.normal);
                    next_position = GetNextPosition(raycast_result.point);
                    rigid_body.MovePosition(next_position);
                }
            }

            private void UpdateOrientation(Vector3 forward_vector, Vector3 ground_normal)
            {
                Vector3 projected_forward_to_normal_surface =
                    forward_vector - (Vector3.Dot(forward_vector, ground_normal)) * ground_normal;
                transform.rotation = Quaternion.LookRotation(projected_forward_to_normal_surface, ground_normal);
            }

            private Vector3 GetNextPosition(Vector3 current_ground_position)
            {
                Vector3 next_position;
                //--------------------------------------------------------------------
                // angle = 0;
                // Vector3 dir = this.transform.InverseTransformDirection(motion);
                // angle = Vector3.Angle(Vector3.forward, dir); // * 1f * Time.fixedDeltaTime;
                // if (angle > 0) this.transform.Rotate(0, angle, 0);
                //--------------------------------------------------------------------
                next_position = current_ground_position + gameObject.transform.up * 0.5f + motion;
                return next_position;
            }
        }

    Some observations: I have the correct input, and I have the correct translation in the camera direction, but whenever I attempt to slowly lerp the direction of the character toward the input direction, all I get is wild spinning! I also discovered that strafing to the right (immediately at the beginning, without moving forward) hits a major singularity at the equator! I'm totally lost and crushed (I have already written a much more featured version which fails on the same aspect).

    Read the article

  • Annoying flickering of vertices and edges (possible z-fighting)

    - by Belgin
    I'm trying to make a software z-buffer implementation. However, after I generate the z-buffer and proceed with the vertex culling, I get pretty severe discrepancies between the vertex depth and the depth of the buffer at the vertex's projected screen coordinates (i.e. zbuffer[v.xp][v.yp] != v.z, where xp and yp are the projected x and y coordinates of the vertex v), sometimes by a small fraction of a unit and sometimes by 2 or 3 units.

    Here's what I think is happening: each triangle's data structure holds the coefficients (a, b, c, d) of the plane defined by the triangle, computed from its three vertices via their normal:

        void computeNormal(Vertex *v1, Vertex *v2, Vertex *v3, double *a, double *b, double *c)
        {
            double a1 = v1->x - v2->x;
            double a2 = v1->y - v2->y;
            double a3 = v1->z - v2->z;
            double b1 = v3->x - v2->x;
            double b2 = v3->y - v2->y;
            double b3 = v3->z - v2->z;
            *a = a2*b3 - a3*b2;
            *b = -(a1*b3 - a3*b1);
            *c = a1*b2 - a2*b1;
        }

        void computePlane(Poly *p)
        {
            double x = p->verts[0]->x;
            double y = p->verts[0]->y;
            double z = p->verts[0]->z;
            computeNormal(p->verts[0], p->verts[1], p->verts[2], &p->a, &p->b, &p->c);
            p->d = p->a * x + p->b * y + p->c * z;
        }

    The z-buffer just holds the smallest depth at each xy coordinate, found by somewhat casting rays at the polygon (I haven't quite got interpolation right yet, so I'm using this slower method until I do) and determining the z coordinate from the reversed perspective projection formulas (which I got from here):

        double z = -(b*Ez*y + a*Ez*x - d*Ez) / (b*y + a*x + c*Ez - b*Ey - a*Ex);

    where x and y are the pixel's coordinates on the screen; a, b, c, and d are the plane's coefficients; and Ex, Ey, and Ez are the eye's (camera's) coordinates. This last formula does not give the vertices' exact z coordinates at their projected x and y screen coordinates, probably because of floating-point inaccuracy (i.e. I've seen it return something like 3.001 when the vertex's z coordinate was actually 2.998).

    Here is the portion of code that hides the vertices that shouldn't be visible:

        for (i = 0; i < shape.nverts; ++i) {
            double dist = shape.verts[i].z;
            if (z_buffer[shape.verts[i].yp][shape.verts[i].xp].z < dist)
                shape.verts[i].visible = 0;
            else
                shape.verts[i].visible = 1;
        }

    How do I solve this issue?

    EDIT: I've implemented the near and far planes of the frustum, with 24-bit accuracy, and now I have some questions: Is this what I have to do in order to resolve the flickering? When I compare the z value of the vertex with the z value in the buffer, do I have to convert the vertex's z value to z' using the formula, or do I convert the value in the buffer back to the original z - and how do I do that? What are some decent values for near and far? Thanks in advance.
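    A small sketch (an assumption, not from the post) of the usual mitigation: since the buffer depth and the vertex depth come from two different computations, compare with a tolerance that scales with depth instead of a bare less-than:

        class DepthTest {
            // Hide the vertex only when it is clearly behind the buffered depth.
            static boolean hidden(double bufferZ, double vertexZ) {
                double eps = 1e-3 * Math.max(1.0, Math.abs(vertexZ)); // scales with depth
                return bufferZ < vertexZ - eps;
            }
        }

    The 1e-3 factor is illustrative; in practice it is tuned to the scene scale and the precision of the depth computation.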

    Read the article

  • Isometric Camera trouble - can't rotate or move correctly

    - by Deukalion
    I'm trying to create a 3D editor, but I've been having some trouble with the camera and understanding each component. I've created two cameras that work OK, but now I'm trying to implement an isometric camera in XNA, without success on the rotation and movement of the camera. All I get working is zoom. (Cube with x=3f, y=3f, z=1f in center.)

    This is the constructor for my IsometricCamera (it inherits from ICamera, with methods for rotation, movement and zoom, and properties for the World/View/Projection matrices):

        public IsometricCamera3D(GraphicsDevice device, float startClip = -1000f, float endClip = 1000f)
        {
            matrix_projection = Matrix.CreateOrthographic(device.Viewport.Width, device.Viewport.Height, startClip, endClip);
            rotation = Vector3.Zero;
            matrix_view = Matrix.CreateScale(zoom) *
                          Matrix.CreateRotationY(MathHelper.ToRadians(45 + 180)) *
                          Matrix.CreateRotationX(MathHelper.ToRadians(30)) *
                          Matrix.CreateRotationZ(MathHelper.ToRadians(120)) *
                          Matrix.CreateTranslation(rotation.X, rotation.Y, rotation.Z);
        }

    The problem is that when I rotate it, all that happens is that the cube gets more or less shiny, and nothing moves. What is wrong, and how should I create my view matrix to move and rotate it correctly?

    Rotate, Move and Zoom look like MethodName(Vector3 rotation/movement) and Zoom(float value); they just adjust the stored value, then call an update that recreates the view matrix according to the code in the constructor. Currently, in my editor, I use middle mouse button + mouse movement to rotate the camera, but it's not working like the other cameras. In my default camera I use the World matrix to move, but I guess that's not the best way to go, which is why I'm trying this.

    Read the article

  • How to perform efficient 2D picking in HTML5?

    - by jSepia
    I'm currently using an R-Tree for both picking and collision testing. Each entity on screen has a bounding box for collisions and a separate one for picking. Since entities may change position very frequently, both trees must be updated/reordered once per frame. While this is very efficient for collisions, because the tree is used in hundreds of collision queries every frame, I'm finding it too costly for picking, because it only gets queried when the user clicks, thus leading to a lot of wasted tree updates. What would be a more efficient way to implement picking without as much overhead?
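    A hedged sketch of one way to drop the per-frame picking updates: make the picking tree lazy. Entity movement only sets a flag, and the rebuild cost is paid when a click actually arrives. RTree here is a placeholder interface for whatever R-Tree implementation is already in use, shown in Java for illustration:

        class LazyPicker<E> {
            interface RTree<T> { void rebuild(); T queryPoint(float x, float y); }

            private final RTree<E> tree;
            private boolean dirty = true;

            LazyPicker(RTree<E> tree) { this.tree = tree; }

            void onEntitiesMoved() { dirty = true; }  // once per frame: trivial cost

            E pick(float x, float y) {
                if (dirty) { tree.rebuild(); dirty = false; }  // rebuild on demand
                return tree.queryPoint(x, y);
            }
        }

    Since clicks are far rarer than frames, the tree is rebuilt orders of magnitude less often, at the cost of one rebuild's latency on the click itself.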

    Read the article

  • How do I morph between meshes that have different vertex counts?

    - by elijaheac
    I am using MeshMorpher from the Unify wiki in my Unity project, and I want to be able to transform between arbitrary meshes. This utility works best when there are an equal number of vertices between the two meshes. Is there some way to equalize the vertex count between a set of meshes? I don't mean that this would reduce the vertex count of a mesh, but would rather add redundant vertices to any meshes with smaller counts. However, if there is an alternate method of handling this (other than increasing vertices), I would like to know.

    Read the article

  • Surface normal to screen angle

    - by Tannz0rz
    I've been struggling to get this working. I simply wish to take a surface normal and convert it to a screen angle. As an example, assuming we're working with a highlighted surface on a sphere, where the arrow is the normal, the 2D angle would obviously be PI/4 radians. Here's one of the many things I've tried, to no avail:

        float4 A = v.vertex;
        float4 B = v.vertex + float4(v.normal, 0.0);
        A = mul(VP, A);
        B = mul(VP, B);
        A.xy = (0.5 * (A.xy / A.w)) + 0.5;
        B.xy = (0.5 * (B.xy / B.w)) + 0.5;
        o.theta = atan2(B.y - A.y, B.x - A.x);

    I'm finally at my wit's end. Thanks for any and all help.

    Read the article

  • Switching between levels, re-initialize existing structure or create new one?

    - by Martino Wullems
    This is something I've been wondering for quite a while. When building games that consist of multiple levels (platformers, shmups, etc.), what is the preferred method of switching between levels? Let's say we have a level class that does the following: load the data for the level design (tiles), enemies, graphics, etc.; set up all these elements in their appropriate locations and display them; start physics and game logic.

    I'm stuck between the following two methods:

    1. Throw away everything in the level class and make a new one - we have to load an entirely new level anyway!
    2. Pause the game logic and physics, unload all current assets, then re-initialize those components with the level data for the new level.

    They both have their pros and cons. Method 1 is a lot easier and seems to make sense, since we have to redo everything anyway. But method 2 allows you to re-use existing elements, which might save resources and allow for a smoother transition to the new level.
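    A hedged sketch of a middle ground between the two methods: the level object itself is throwaway (method 1), but it loads through a shared asset cache that survives the switch, which recovers most of method 2's savings. AssetCache, EntityDef and Entity are illustrative stand-ins:

        import java.util.ArrayList;
        import java.util.List;

        class Level {
            interface Entity { }
            interface EntityDef { Entity instantiate(AssetCache assets); }
            interface AssetCache { void releaseUnused(); }

            private final List<Entity> entities = new ArrayList<>();

            void load(List<EntityDef> defs, AssetCache assets) {
                for (EntityDef def : defs)
                    entities.add(def.instantiate(assets)); // shared tiles decode once
            }

            void unload(AssetCache assets) {
                entities.clear();
                assets.releaseUnused(); // drop only assets no longer referenced
            }
        }

    With this split, physics and rendering never need to be paused and resumed in place; they are simply handed the new level's freshly built entity list.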

    Read the article

  • Interpolation gives the appearance of collisions

    - by Akroy
    I'm implementing a simple 2D platformer with a constant speed update of the game logic, but with the rendering done as fast as the machine can handle. I interpolate positions between actual game updates by just using the position and velocity of objects at the last update. This makes things look really smooth in general, but when something hits a wall/floor, it appears to go through the wall for a moment before being positioned correctly. This is because the interpolator is not taking walls into account, so it guesses the position into walls until the actual game update fixes it. Are there any particularly elegant solutions for this? Simply increasing the update rate seems like a band-aid solution, and I'm trying to avoid increasing the system reqs. I could also check for collisions in the actual interpolator, but that seems like heavy overhead, and then I'm no longer dividing the drawing and the game updating.
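    A hedged sketch of the standard alternative: render an interpolation between the last two completed logic states instead of projecting past the newest one. Both endpoints have already been collision-resolved, so the drawn position can never be inside a wall. The names are illustrative:

        class RenderState {
            float prevX, prevY;  // state at the update before last
            float currX, currY;  // state at the most recent update

            void onLogicUpdate(float newX, float newY) {
                prevX = currX; prevY = currY;
                currX = newX;  currY = newY;
            }

            // alpha in [0,1]: fraction of the way between the last two updates
            float drawX(float alpha) { return prevX + (currX - prevX) * alpha; }
            float drawY(float alpha) { return prevY + (currY - prevY) * alpha; }
        }

    The trade-off is that the image lags the simulation by at most one logic tick, which at typical 30-60 Hz update rates is rarely perceptible.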

    Read the article

  • In a browser, is it best to use one huge spritesheet or many (10000) different PNG's?

    - by Nick
    I'm creating a game in jQuery, where I use about 10000 32x32 tiles. Until now, I have been using them all separately (no sprite sheet). An average map uses about 2000 tiles (sometimes re-used PNG's but all separate divs) and the performance ranges from stable (Chrome) to a bit laggy (Firefox). Each of these divs are positioned absolutely using CSS. They do not need to be updated every tick, just when a new map is loaded. Would it be better for performance to use spritesheet methods for the divs using CSS background-positioning, like gameQuery does? Thank you in advance!

    Read the article

  • Error X3650 when compiling shader in XNA

    - by Saikai
    I'm attempting to convert the XBDEV.NET mosaic shader for use in my XNA project and having trouble. The compiler errors out because of the half globals. At first I tried removing the globals and writing the values explicitly in the code, but that garbles the output. Next I tried replacing all the half types with float vars, but that still garbles the resulting image. I call the effect file from SpriteBatch.Begin(). Is there a way to convert this shader to the new pixel shader conventions? Are there any good tutorials on this topic? Here is the shader file for reference:

        /*****************************************************************************/
        /*
        File: tiles.fx
        Details: Modified version of the NVIDIA Composer FX Demo Program 2004.
        Produces a tiled mosaic effect on the output.
        Requires: Vertex Shader 1.1, Pixel Shader 2.0
        Modified by: [email protected] (www.xbdev.net)
        */
        /*****************************************************************************/

        float4 ClearColor : DIFFUSE = { 0.0f, 0.0f, 0.0f, 1.0f };
        float ClearDepth = 1.0f;

        /******************************** TWEAKABLES ********************************/
        half NumTiles = 40.0;
        half Threshhold = 0.15;
        half3 EdgeColor = { 0.7f, 0.7f, 0.7f };
        /*****************************************************************************/

        texture SceneMap : RENDERCOLORTARGET
        <
            float2 ViewportRatio = { 1.0f, 1.0f };
            int MIPLEVELS = 1;
            string format = "X8R8G8B8";
            string UIWidget = "None";
        >;

        sampler SceneSampler = sampler_state
        {
            texture = <SceneMap>;
            AddressU = CLAMP;
            AddressV = CLAMP;
            MIPFILTER = NONE;
            MINFILTER = LINEAR;
            MAGFILTER = LINEAR;
        };

        /***************************** DATA STRUCTS *********************************/
        struct vertexInput
        {
            half3 Position : POSITION;
            half3 TexCoord : TEXCOORD0;
        };

        /* data passed from vertex shader to pixel shader */
        struct vertexOutput
        {
            half4 HPosition : POSITION;
            half2 UV : TEXCOORD0;
        };

        /******************************* Vertex shader ******************************/
        vertexOutput VS_Quad(vertexInput IN)
        {
            vertexOutput OUT = (vertexOutput)0;
            OUT.HPosition = half4(IN.Position, 1);
            OUT.UV = IN.TexCoord.xy;
            return OUT;
        }

        /******************************** Pixel shader ******************************/
        half4 tilesPS(vertexOutput IN) : COLOR
        {
            half size = 1.0/NumTiles;
            half2 Pbase = IN.UV - fmod(IN.UV, size.xx);
            half2 PCenter = Pbase + (size/2.0).xx;
            half2 st = (IN.UV - Pbase)/size;
            half4 c1 = (half4)0;
            half4 c2 = (half4)0;
            half4 invOff = half4((1-EdgeColor), 1);
            if (st.x > st.y) { c1 = invOff; }
            half threshholdB = 1.0 - Threshhold;
            if (st.x > threshholdB) { c2 = c1; }
            if (st.y > threshholdB) { c2 = c1; }
            half4 cBottom = c2;
            c1 = (half4)0;
            c2 = (half4)0;
            if (st.x > st.y) { c1 = invOff; }
            if (st.x < Threshhold) { c2 = c1; }
            if (st.y < Threshhold) { c2 = c1; }
            half4 cTop = c2;
            half4 tileColor = tex2D(SceneSampler, PCenter);
            half4 result = tileColor + cTop - cBottom;
            return result;
        }

        /*****************************************************************************/
        technique tiles
        {
            pass p0
            {
                VertexShader = compile vs_1_1 VS_Quad();
                ZEnable = false;
                ZWriteEnable = false;
                CullMode = None;
                PixelShader = compile ps_2_0 tilesPS();
            }
        }

    Read the article

  • Load Texture From Image Content In Runtime

    - by Austin Brunkhorst
    Basically, I wrote a world editor for a game I'm working on. Looking ahead, I was brainstorming ways to save the created world, including the tile-sets (this game will rely on a tile engine). I was hoping to save the image data of each tile-set in the same file that contains the tile positions, etc., and load the image data into a Texture with XNA. Is it possible? Something like this is what I'm going for:

        Texture2D tileset = Content.LoadFromString<Texture2D>("png tileset data");

    Read the article

  • Love2D engine for Lua; What about 3D?

    - by shadowprotocol
    Lua has been really awesome to learn, it's so simple. I really enjoy scripting languages, and I had an equally enjoyable time learning Python. The Love engine, http://love2d.org/, is really awesome, but I'm looking for something that can handle 3D as well. Is there anything that accommodates 3D in Lua? I'm still intrigued by the particle system of LOVE anyway and may just turn my idea into a 2D project with Particle lighting :) EDIT: I removed comments about Python - I want this to be a Lua topic. Thanks

    Read the article

  • Remove gravity from single body

    - by Siddharth
    I have multiple bodies in my game world in AndEngine. All the bodies are affected by gravity, but I want one specific body to not be affected by gravity. After research I found that I should use the body.setGravityScale(0) method for this. But in my AndEngine extension I can't find that method, so please provide guidance on how to access it. Any other guidance on the problem is also welcome. Thank you! I currently apply the following code for reverse gravity:

        final Vector2 vec = new Vector2(0, -SensorManager.GRAVITY_EARTH * bulletBody.getMass());
        bulletBody.applyForce(vec, bulletBody.getWorldCenter());
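    A hedged sketch of the usual workaround on Box2D builds that predate setGravityScale: the counter-force from the snippet above has to be applied on every physics step, not once, because Box2D re-applies gravity each step. The force and body calls mirror the post's own code; the scene update hook is illustrative:

        scene.registerUpdateHandler(new IUpdateHandler() {
            @Override
            public void onUpdate(final float pSecondsElapsed) {
                // Recompute from the current mass each step so the
                // cancellation stays exact even if mass changes.
                final Vector2 antiGravity = new Vector2(0,
                        -SensorManager.GRAVITY_EARTH * bulletBody.getMass());
                bulletBody.applyForce(antiGravity, bulletBody.getWorldCenter());
            }

            @Override
            public void reset() { }
        });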

    Read the article

  • Dynamic libraries are not allowed on iOS but what about this?

    - by tapirath
    I'm currently using LuaJIT and its FFI interface to call C functions from Lua scripts. What FFI does is look at a dynamic library's exported symbols and let the developer use them directly from Lua. Kind of like Python ctypes. Obviously, using dynamic libraries is not permitted on iOS for security reasons. So in order to come up with a solution, I found the following snippet:

        /*
        (c) 2012 +++ Filip Stoklas, aka FipS, http://www.4FipS.com +++
        THIS CODE IS FREE - LICENSED UNDER THE MIT LICENSE
        ARTICLE URL: http://forums.4fips.com/viewtopic.php?f=3&t=589
        */

        extern "C" {
        #include <lua.h>
        #include <lualib.h>
        #include <lauxlib.h>
        } // extern "C"

        #include <cassert>
        #include <cstdio>

        // Please note that despite the fact that we build this code as a regular
        // executable (exe), we still use __declspec(dllexport) to export
        // symbols. Without doing that FFI wouldn't be able to locate them!
        extern "C" __declspec(dllexport) void __cdecl hello_from_lua(const char *msg)
        {
            printf("A message from LUA: %s\n", msg);
        }

        const char *lua_code =
            "local ffi = require('ffi')                    \n"
            "ffi.cdef[[                                    \n"
            "const char * hello_from_lua(const char *);    \n" // matches the C prototype
            "]]                                            \n"
            "ffi.C.hello_from_lua('Hello from LUA!')       \n" // do actual C call
            ;

        int main()
        {
            lua_State *lua = luaL_newstate();
            assert(lua);
            luaL_openlibs(lua);
            const int status = luaL_dostring(lua, lua_code);
            if (status)
                printf("Couldn't execute LUA code: %s\n", lua_tostring(lua, -1));
            lua_close(lua);
            return 0;
        }

        // output:
        // A message from LUA: Hello from LUA!

    Basically, instead of using a dynamic library, the symbols are exported directly inside the executable file. The question is: is this permitted by Apple? Thanks.

    Read the article

  • Ogre3D : seeking advices about game files management

    - by Tibor
    I'm working on a new game, and its related level editor, based on Ogre3D. I was thinking about how I could manage the game files, knowing that Ogre uses .mesh files for models, .material files for material/texture information, etc. At first I thought about a common .zip folder decompressed at runtime (the same way Torchlight and the Ogre samples do). But this way the game assets become a monolithic archive: loading takes time, and it could be difficult to patch them later. So, let's say I have a game object named "Cube" that I want to load in my program. Going for modularity, what if I create a compressed file (using zlib compression routines) named Cube.extname, containing its sub-files Cube.mesh, Cube.material and so on? Are there any alternatives, or should I stick with compressed objects? PS: Just to clear things up, this is unrelated to my program code; at the moment I'm using "resources.cfg" pointing to the OgreSDK media directory.

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because:

    - Each particle is a point sprite in a single mesh, and I can't use the scene graph's ability to sort transparent nodes. The system's node should be properly sorted, though.
    - Particle positions are computed on the shader from the initial velocity, acceleration and time. In order to sort the system I would have to perform all these computations on the CPU, which is something I want to avoid.
    - Sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation.

    Alpha testing seems to be fast enough on GLES 2.0, and works fine for non-transparent but "masked" textures. Still, it's not enough for semi-transparent particles. How would you handle this?
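    A hedged sketch of the workaround many GLES 2.0 particle systems use: pick a commutative blend mode so draw order stops mattering. Additive blending suits glows, fire and sparks; conventional alpha "over" is the one mode that genuinely needs sorting. Android-style Java bindings are shown purely for illustration:

        import static android.opengl.GLES20.*;

        class ParticleBlend {
            // Call before drawing the unsorted particle mesh.
            static void begin() {
                glDepthMask(false);                 // test depth, but don't write it
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE);  // additive: a + b == b + a
            }

            // Call afterwards to restore a conventional blend state.
            static void end() {
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                glDepthMask(true);
            }
        }

    Disabling depth writes while keeping the depth test lets particles overlap each other freely yet still be hidden by solid geometry drawn earlier.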

    Read the article

  • How would you code an AI engine to allow communication in any programming language?

    - by Tokyo Dan
    I developed a two-player iPhone board game. Computer players (AI) can either be local (in the game code) or remote, running on a server. In the second case, both the client and server code are written in Lua. On the server, the actual AI code is separate from the TCP socket code and the coroutine code (which spawns a separate AI instance for each connecting client). I want to further isolate the AI code so that that part can be a module coded by anyone in their language of choice. How can I do this? What techniques/technology would enable communication between the Lua TCP socket/coroutine code and the AI module?
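    A hedged sketch of one language-neutral option: define a line-delimited text protocol (JSON here) over TCP, so an AI module can be written in anything that can open a socket. The port and message fields below are invented for illustration; a real protocol would mirror the game's actual move structure. Java is used just as an example of a third-party module:

        import java.io.*;
        import java.net.Socket;

        public class AiModule {
            public static void main(String[] args) throws IOException {
                try (Socket s = new Socket("localhost", 4000);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) { // one JSON message per line
                        // e.g. {"type":"state","board":"..."} arrives; reply with a move
                        out.println("{\"type\":\"move\",\"cell\":7}");
                    }
                }
            }
        }

    The Lua side keeps its existing socket/coroutine structure and simply speaks this protocol to the module instead of calling the AI functions directly.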

    Read the article

  • How to update off screen bitmap in a surfaceview thread

    - by DKDiveDude
    I have a SurfaceView thread and an off-screen texture bitmap whose first row (line) is regenerated every frame and then copied one position (line) down onto the regular SurfaceView bitmap to make a scrolling effect, after which I draw other things on top. Well, that is what I really want; however, I can't get it to work even though I am creating a separate canvas for the off-screen bitmap. It is just not scrolling at all.

    In other words, I have a memory bitmap, the same size as the SurfaceView canvas, which I need to scroll (shift) down one line every frame, then replace the top line with a new random texture, and then draw onto the regular SurfaceView canvas.

    Here is what I thought would work. My surfaceCreated, where I specify the bitmap and canvases and start the thread:

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            intSurfaceWidth = mSurfaceView.getWidth();
            intSurfaceHeight = mSurfaceView.getHeight();
            memBitmap = Bitmap.createBitmap(intSurfaceWidth, intSurfaceHeight,
                    Bitmap.Config.ARGB_8888);
            memCanvas = new Canvas(memBitmap);
            myThread = new MyThread(holder, this);
            myThread.setRunning(true);
            blnPause = false;
            myThread.start();
        }

    My thread, only showing the essential middle running part:

        @Override
        public void run() {
            while (running) {
                c = null;
                try {
                    // Lock canvas for drawing
                    c = myHolder.lockCanvas(null);
                    synchronized (mSurfaceHolder) {
                        // First draw the off-screen bitmap to the off-screen canvas,
                        // one line down
                        memCanvas.drawBitmap(memBitmap, 0, 1, null);
                        // Create a random one-line (row) texture bitmap
                        memTexture = Bitmap.createBitmap(imgTexture, 0,
                                rnd.nextInt(intTextureImageHeight), intSurfaceWidth, 1);
                        // Now add this texture bitmap to the top of the off-screen
                        // canvas, and hopefully to the bitmap
                        memCanvas.drawBitmap(memTexture, intSurfaceWidth, 0, null);
                        // Draw the updated off-screen bitmap to the regular canvas.
                        // At least I thought it would update (save the changes),
                        // shifting down and adding the texture line to the off-screen
                        // bitmap the off-screen canvas was pointing to.
                        c.drawBitmap(memBitmap, 0, 0, null);
                        // Other drawing to the canvas comes here
                    }
                } finally {
                    // Do this in a finally so that if an exception is thrown during
                    // the above, we don't leave the Surface in an inconsistent state
                    if (c != null) {
                        myHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    This is for my game Tunnel Run. Right now I have a working solution where I instead have an array of bitmaps, the size of the surface height, that I populate with my random texture and then shift down in a loop each frame. I get 50 frames per second, but I think I can do better by scrolling a bitmap instead.
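    A hedged sketch (an assumption, not the poster's code) of a ping-pong scheme that avoids what the code above effectively does - drawing a bitmap into a canvas backed by that same bitmap, which is unreliable. Two off-screen bitmaps alternate between source and destination roles each frame:

        import android.graphics.Bitmap;
        import android.graphics.Canvas;

        class ScrollBuffer {
            Bitmap front, back;              // both sized like the surface
            Canvas frontCanvas, backCanvas;  // each wraps the bitmap of the same name

            void scrollOneLine(Bitmap oneLineTexture, Canvas surfaceCanvas) {
                backCanvas.drawBitmap(front, 0, 1, null);          // last frame, shifted down
                backCanvas.drawBitmap(oneLineTexture, 0, 0, null); // fresh top row
                surfaceCanvas.drawBitmap(back, 0, 0, null);        // present this frame
                // Swap roles for the next frame.
                Bitmap tb = front;       front = back;             back = tb;
                Canvas tc = frontCanvas; frontCanvas = backCanvas; backCanvas = tc;
            }
        }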

    Read the article

  • Physics not synchronizing correctly over the network when using Bullet

    - by Lucas
    I'm trying to implement a client/server physics system using Bullet, but I'm having problems getting things to sync up. I've implemented a custom motion state which reads and writes the transform from my game objects, and it works locally. I've tried two different approaches for networked games:

    1. Dynamic objects on the client that also exist on the server (i.e. not random debris and other unimportant stuff) are made kinematic. This works correctly, but the objects don't move very smoothly.
    2. Objects are dynamic on both, but after each message from the server that the object has moved, I set the linear and angular velocity to the values from the server and call btRigidBody::proceedToTransform with the transform from the server. I also call btCollisionObject::activate(true); to force the object to update.

    My intent with method 2 was basically to do method 1, but hijacking Bullet into doing a poor man's prediction instead of writing my own to smooth out method 1. But this doesn't seem to work (for reasons that are not 100% clear to me, even stepping through Bullet), and the objects sometimes end up in different places.

    Am I heading in the right direction? Bullet seems to have its own interpolation code built in. Can that help me make method 1 work better? Or is my method 2 code not working because I am accidentally stomping on that?

    Read the article

  • Unity3D Android : Game Over/Retry

    - by user3666251
    I'm making a simple 2D game for Android using the Unity3D game engine. I created all the levels and everything, but I'm stuck on making the game over/retry menu. So far I've been using new scenes as a game over menu. I used this simple script:

        #pragma strict

        var level = Application.LoadLevel;

        function OnCollisionEnter(Collision : Collision) {
            if (Collision.collider.tag == "Player") {
                Application.LoadLevel("GameOver");
            }
        }

    And this as a 'menu':

        #pragma strict

        var myGUISkin : GUISkin;
        var btnTexture : Texture;

        function OnGUI() {
            GUI.skin = myGUISkin;
            if (GUI.Button(Rect(Screen.width/2-60, Screen.height/2+30, 100, 40), "Retry"))
                Application.LoadLevel("Easy1");
            if (GUI.Button(Rect(Screen.width/2-90, Screen.height/2+100, 170, 40), "Main Menu"))
                Application.LoadLevel("MainMenu");
        }

    The problem is the part where I would have to create over 200 game over scenes and obstacles (the objects that kill the player), and recreate the same script over 200 times, once for each level. Is there any other way to make this faster and less painful? I've been searching the web but didn't find anything useful for my issue. Thank you.

    Read the article
