Search Results

  • Relative cam movement and momentum on arbitrary surface

    - by user29244
    I have been working on a game for quite a long time (think classic Sonic physics in 3D, or Tony Hawk on the PSX) with Unity3D. However, I'm stuck on the most fundamental aspect of movement. The requirement is that I need to move the character in Mario 64 fashion (or Sonic Adventure), i.e. with camera-relative input:

    1. The camera's forward direction maps the forward input to "into the screen"; left or right input points toward the left or right of the screen.
    2. When input is at rest, the camera direction is independent of the character direction, and the camera can orbit the character.
    3. When input is pressed, the character rotates itself until its direction aligns with the direction the input is pointing at.

    It's super easy to do as long as your movement is parallel to the global horizontal (or any world axis). However, when you try to do this on an arbitrary surface (think moving along a complex curved surface) with the character sticking to the surface normal (basically moving on walls and ceilings freely), it seems much harder. What I want is to achieve the same finesse of movement as in Mario, but on arbitrarily angled surfaces. There are more problems after that (jumping, transitioning back to world alignment and then back onto a surface while keeping momentum), but so far I haven't even gotten the basics off the ground. So far I have accomplished moving along the curved surface and the camera-relative input, but for some reason the direction handling fails all the time (point number 3, the character slowly aligning to the input direction). Do you have an idea how to achieve that? Here is the code and a demo so far.

    The demo: https://dl.dropbox.com/u/24530447/flash%20build/litesonicengine/LiteSonicEngine5.html

    Camera code:

        using UnityEngine;
        using System.Collections;

        public class CameraDrive : MonoBehaviour
        {
            public GameObject targetObject;
            public Transform camPivot, camTarget, camRoot, relcamdirDebug;
            float rot = 0;

            void Start()
            {
                this.transform.position = targetObject.transform.position;
                this.transform.rotation = targetObject.transform.rotation;
            }

            void FixedUpdate()
            {
                // the pivot system
                camRoot.position = targetObject.transform.position;

                // input on pivot orientation
                rot = 0;
                float mouse_x = Input.GetAxisRaw( "camera_analog_X" );
                rot = rot + ( 0.1f * Time.deltaTime * mouse_x );
                wrapAngle( rot );

                // when the target object rotates, this rotates too; that should not happen
                UpdateOrientation( this.transform.forward, targetObject.transform.up );
                camRoot.transform.RotateAround( camRoot.transform.up, rot );

                // debug the relative-camera direction
                RelativeCamDirection();

                // this camera
                this.transform.position = camPivot.position; // set the camera to the pivot
                this.transform.LookAt( camTarget.position );
            }

            public float wrapAngle( float Degree )
            {
                while ( Degree < 0.0f ) { Degree = Degree + 360.0f; }
                while ( Degree >= 360.0f ) { Degree = Degree - 360.0f; }
                return Degree;
            }

            private void UpdateOrientation( Vector3 forward_vector, Vector3 ground_normal )
            {
                Vector3 projected_forward_to_normal_surface =
                    forward_vector - ( Vector3.Dot( forward_vector, ground_normal ) ) * ground_normal;
                camRoot.transform.rotation =
                    Quaternion.LookRotation( projected_forward_to_normal_surface, ground_normal );
            }

            float GetOffsetAngle( float targetAngle, float DestAngle )
            {
                return ( ( targetAngle - DestAngle + 180 ) % 360 ) - 180;
            }

            void OnDrawGizmos()
            {
                Gizmos.DrawCube( camPivot.transform.position, new Vector3( 1, 1, 1 ) );
                Gizmos.DrawCube( camTarget.transform.position, new Vector3( 1, 5, 1 ) );
                Gizmos.DrawCube( camRoot.transform.position, new Vector3( 1, 1, 1 ) );
            }

            void OnGUI()
            {
                GUI.Label( new Rect( 0, 80, 1000, 20 * 10 ), "targetObject.transform.up : " + targetObject.transform.up.ToString() );
                GUI.Label( new Rect( 0, 100, 1000, 20 * 10 ), "target euler : " + targetObject.transform.eulerAngles.y.ToString() );
                GUI.Label( new Rect( 0, 100, 1000, 20 * 10 ), "rot : " + rot.ToString() );
            }

            void RelativeCamDirection()
            {
                float input_vertical_movement = Input.GetAxisRaw( "Vertical" ),
                      input_horizontal_movement = Input.GetAxisRaw( "Horizontal" );
                Vector3 relative_forward = Vector3.forward,
                        relative_right = Vector3.right,
                        relative_direction = ( relative_forward * input_vertical_movement )
                                           + ( relative_right * input_horizontal_movement );
                MovementController MC = targetObject.GetComponent<MovementController>();
                MC.motion = relative_direction.normalized * MC.acceleration * Time.fixedDeltaTime;
                MC.motion = this.transform.TransformDirection( MC.motion );
                //MC.transform.Rotate( Vector3.up, input_horizontal_movement * 10f * Time.fixedDeltaTime );
            }
        }

    Movement code:

        using UnityEngine;
        using System.Collections;

        public class MovementController : MonoBehaviour
        {
            public float deadZoneValue = 0.1f, angle, acceleration = 50.0f;
            public Vector3 motion;

            void OnGUI()
            {
                GUILayout.Label( "transform.rotation : " + transform.rotation );
                GUILayout.Label( "transform.position : " + transform.position );
                GUILayout.Label( "angle : " + angle );
            }

            void FixedUpdate()
            {
                Ray ground_check_ray = new Ray( gameObject.transform.position, -gameObject.transform.up );
                RaycastHit raycast_result;
                Rigidbody rigid_body = gameObject.rigidbody;
                if ( Physics.Raycast( ground_check_ray, out raycast_result ) )
                {
                    Vector3 next_position;
                    UpdateOrientation( gameObject.transform.forward, raycast_result.normal );
                    next_position = GetNextPosition( raycast_result.point );
                    rigid_body.MovePosition( next_position );
                }
            }

            private void UpdateOrientation( Vector3 forward_vector, Vector3 ground_normal )
            {
                Vector3 projected_forward_to_normal_surface =
                    forward_vector - ( Vector3.Dot( forward_vector, ground_normal ) ) * ground_normal;
                transform.rotation = Quaternion.LookRotation( projected_forward_to_normal_surface, ground_normal );
            }

            private Vector3 GetNextPosition( Vector3 current_ground_position )
            {
                // angle = 0;
                // Vector3 dir = this.transform.InverseTransformDirection( motion );
                // angle = Vector3.Angle( Vector3.forward, dir ); // * 1f * Time.fixedDeltaTime;
                // if ( angle > 0 ) this.transform.Rotate( 0, angle, 0 );
                Vector3 next_position = current_ground_position + gameObject.transform.up * 0.5f + motion;
                return next_position;
            }
        }

    Some observations: I have the correct input, and I have the correct translation in the camera direction... but whenever I attempt to slowly lerp the character's direction toward the input direction, all I get is wild spinning! Sad. I also discovered that strafing to the right (immediately at the beginning, without moving forward) hits a major singularity trap on the equator! I'm totally lost and crushed (I have already built a much more fully featured version which fails on the same aspect).
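
    For reference, here is the alignment step from point 3 written out as quaternion math (a C++/GLM sketch purely for concreteness; in Unity the equivalents would be Quaternion.LookRotation plus Quaternion.RotateTowards, and the function name is mine):

        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Rotate the character a little per physics step toward the camera-relative
        // input direction, with "up" locked to the ground normal. Staying in
        // quaternions end-to-end, instead of lerping Euler angles, is what avoids
        // the equator/pole singularities and the wild spins.
        glm::quat AlignToInput(glm::quat current, glm::vec3 inputDir,
                               glm::vec3 groundNormal, float maxRadiansPerStep)
        {
            // Project the desired direction into the surface plane first.
            glm::vec3 flat = inputDir - glm::dot(inputDir, groundNormal) * groundNormal;
            if (glm::dot(flat, flat) < 1e-6f)
                return current;                                   // no usable input
            glm::vec3 fwd = glm::normalize(flat);
            glm::vec3 right = glm::normalize(glm::cross(groundNormal, fwd));
            // Columns: right, up, forward -- the same convention as Unity's
            // Quaternion.LookRotation(forward, up).
            glm::quat target = glm::quat_cast(glm::mat3(right, groundNormal, fwd));
            // Advance by at most maxRadiansPerStep of the remaining angle.
            float remaining = glm::angle(target * glm::inverse(current));
            float t = remaining > 1e-5f ? glm::min(1.0f, maxRadiansPerStep / remaining) : 1.0f;
            return glm::slerp(current, target, t);
        }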


  • Register Game Object Components in Game Subsystems? (Component-based Game Object design)

    - by topright
    I'm creating a component-based game object system. Some notes:

    - A GameObject is simply a list of Components.
    - There are GameSubsystems, for example rendering, physics, etc. Each GameSubsystem contains pointers to some of the Components. A GameSubsystem is a very powerful and flexible abstraction: it represents any slice (or aspect) of the game world.

    There is a need for a mechanism to register Components in GameSubsystems (when a GameObject is created and composed). There are four approaches:

    1. Chain of responsibility pattern. Every Component is offered to every GameSubsystem. Each GameSubsystem decides which Components to register (and how to organize them). For example, GameSubsystemRender can register Renderable Components.
       Pro: Components know nothing about how they are used. Low coupling. (A) We can add a new GameSubsystem. For example, let's add a GameSubsystemTitles that registers every ComponentTitle, guarantees that every title is unique, and provides an interface for querying objects by title. Of course, ComponentTitle should not be rewritten or inherited in this case. (B) We can reorganize existing GameSubsystems. For example, GameSubsystemAudio, GameSubsystemRender, and GameSubsystemParticleEmitter can be merged into GameSubsystemSpatial (to place all audio, emitter, and render Components in the same hierarchy and use parent-relative transforms).
       Con: An every-to-every check. Very inefficient.
       Con: Subsystems know about Components.
    2. Each Subsystem searches for Components of specific types.
       Pro: Better performance than in Approach 1.
       Con: Subsystems still know about Components.
    3. Each Component registers itself in the GameSubsystem(s). We know at compile time that there is a GameSubsystemRenderer, so ComponentImageRender can call something like GameSubsystemRenderer::register(ComponentRenderBase*).
       Pro: Performance. No unnecessary checks as in Approach 1.
       Con: Components are badly coupled with GameSubsystems.
    4. Mediator pattern. The GameState (which contains the GameSubsystems) can implement registerComponent(Component*).
       Pro: Components and GameSubsystems know nothing about each other.
       Con: In C++ it would look like an ugly and slow typeid switch.

    Questions: Which approach is better and most used in component-based design? What does practice say? Any suggestions about implementing Approach 4? Thank you.
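
    For concreteness, Approach 1 might look like this rough C++ sketch (all names are illustrative, not from an existing engine):

        #include <vector>

        struct Component { virtual ~Component() = default; };
        struct ComponentRender : Component { /* ... */ };

        struct GameSubsystem {
            virtual ~GameSubsystem() = default;
            // Returns true if this subsystem claimed the component.
            virtual bool tryRegister(Component* c) = 0;
        };

        struct GameSubsystemRender : GameSubsystem {
            std::vector<ComponentRender*> renderables;
            bool tryRegister(Component* c) override {
                if (auto* r = dynamic_cast<ComponentRender*>(c)) {
                    renderables.push_back(r);
                    return true;
                }
                return false;
            }
        };

        // On GameObject composition: offer every component to every subsystem.
        void registerComponents(const std::vector<Component*>& components,
                                const std::vector<GameSubsystem*>& subsystems) {
            for (Component* c : components)
                for (GameSubsystem* s : subsystems)
                    s->tryRegister(c);  // the every-to-every check the "con" refers to
        }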


  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games: something the player downloads and runs on their local computer. There are many memory editors that allow players to detect and freeze values, like your player's health. How do you prevent cheating? What strategies are effective in combating this kind of cheating? I'm looking for some good ones. Two I use that are mediocre are: displaying values as a percentage instead of the raw number (e.g. 46/50 = 92% health), and a low-level class that holds values in an array and moves them around with each change.
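
    In the same spirit as the array-shuffling class, a third cheap trick is XOR obfuscation; a generic C++ sketch (not from any particular engine) looks like this:

        #include <cstdint>
        #include <random>

        // Store the value XOR-ed with a per-instance random key, so the plain
        // number never sits in memory for a scanner to find, and two instances
        // holding the same value have different bit patterns.
        class ObfuscatedInt {
            uint32_t key_;
            uint32_t data_;  // value ^ key_
        public:
            explicit ObfuscatedInt(uint32_t value) {
                std::random_device rd;
                key_ = rd();
                set(value);
            }
            void set(uint32_t value) { data_ = value ^ key_; }
            uint32_t get() const { return data_ ^ key_; }
        };

        // Usage: ObfuscatedInt health(50); health.set(health.get() - 4);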


  • How should I interpret these DirectX Caps Viewer values?

    - by tobi
    Briefly asking: what do the nodes mean, and what is the difference between them, in DirectX Caps Viewer?

    1. DXGI Devices
    2. Direct3D9 Devices
    3. DirectDraw Devices

    The most interesting for me is 1 vs. 2. In the Direct3D9 Devices, under the HAL node, I can see that my GeForce 8800 GT supports PixelShaderVersion 3.0. However, under DXGI Devices I have DX 10, DX 10.1, and DX 11 showing Shader Model 4.0 (actually, why DX 11? My card is not compatible with DX 11). I am implementing a DX 11 application (including d3d11.h) with shaders compiled against Shader Model 4.0, so I can clearly see that 4.0 is supported. What is the difference between 1 and 2? Could you give me some theory behind the nodes?
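
    A sketch of why a DX10-class card can appear under a "DX 11" node (my explanation, not anything Caps Viewer itself states): the Direct3D 11 API runs on older hardware through feature levels, so you create a D3D11 device and ask which level the hardware actually granted:

        #include <d3d11.h>
        #pragma comment(lib, "d3d11.lib")

        D3D_FEATURE_LEVEL QueryFeatureLevel()
        {
            const D3D_FEATURE_LEVEL levels[] = {
                D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0
            };
            ID3D11Device* device = nullptr;
            ID3D11DeviceContext* context = nullptr;
            D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_10_0;
            D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                              levels, 3, D3D11_SDK_VERSION, &device, &got, &context);
            if (context) context->Release();
            if (device) device->Release();
            // On a GeForce 8800 GT this should come back as D3D_FEATURE_LEVEL_10_0:
            // the DX11 API on Shader Model 4.0 hardware.
            return got;
        }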


  • How to get the Exact Collision Point and ignore the collision (from 2 "ghost bodies")

    - by Moritz
    I have a very basic problem with Box2D. For an arena-type game where you can throw scriptable "missiles" at other players, I decided to use Box2D for the collision detection between the players and the missiles. Players and missiles each have their own circular shape with a specific (varying) size. But I don't want to use dynamic bodies, because the missiles need to move themselves in any way they want to (as defined in the script) and shouldn't be resolved unless the script wants it. The behavior I'm looking for is as follows, for each time step:

    1. The velocity of each missile is set by that missile's script.
    2. Each missile is moved according to that velocity.
    3. If a collision occurs now, I want to get the exact position of impact, and then I need a mechanism to decide whether the missile should just ignore the collision (for example, a collision between two fireballs that shouldn't interact) or take it (so they are resolved and don't overlap anymore).

    So is there a way in Box2D to create ghost bodies and listen for collisions from them, then decide whether they should ignore the collision or take it and resolve their position? I hope I was clear enough, and I would be happy about any help!
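
    One Box2D mechanism that seems close to this (a sketch against the 2.3-era API; whether it fits the scripted movement is an open question): keep the fixtures solid, but veto the resolution per contact pair in PreSolve. The engine still computes contact points, so the impact position is available, yet a disabled contact is not resolved that step:

        #include <Box2D/Box2D.h>

        class MissileContactListener : public b2ContactListener
        {
            void PreSolve(b2Contact* contact, const b2Manifold* /*oldManifold*/) override
            {
                b2WorldManifold wm;
                contact->GetWorldManifold(&wm);   // wm.points[0] ~ the point of impact
                if (ShouldIgnore(contact->GetFixtureA(), contact->GetFixtureB()))
                    contact->SetEnabled(false);   // e.g. fireball vs. fireball: no response
            }
            bool ShouldIgnore(b2Fixture* a, b2Fixture* b);  // game-specific rule
        };

        // world.SetContactListener(&listener); note SetEnabled(false) only lasts
        // for the current time step, so the decision is re-made on every solve.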


  • How to transform a child element's position into a world position

    - by MrGreg
    So I'm making a 2D space game, and I have a bunch of spaceships that have turrets. Objects have a position and orientation; the ships are in world coordinates, while the turrets are children whose coordinates are relative to their parents. How do I efficiently calculate the position of a turret in world coordinates (i.e. when it fires and I need to know where to place a bullet in the world)? Calculating the turret's orientation is trivial: I just add the turret's relative angle to its parent's. For position, though, I guess I could do a bunch of trigonometry, but this MUST be a common problem with a good/fast general solution? Should I be relearning how to do matrix math again? :) By the way, I'm creating the game in JavaScript + canvas, but it's the math/algorithm I'm interested in here. Cheers, Greg
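
    For reference, the 2D rotate-then-translate this boils down to, sketched in C++ for concreteness (it is the same few lines in JavaScript, and no full matrix is needed in 2D):

        #include <cmath>

        struct Vec2 { double x, y; };

        // Rotate the turret's local offset by the ship's angle, then translate
        // by the ship's position. This is exactly "multiply by the parent's
        // transform", written out by hand.
        Vec2 localToWorld(Vec2 shipPos, double shipAngle, Vec2 turretOffset)
        {
            double c = std::cos(shipAngle), s = std::sin(shipAngle);
            return { shipPos.x + turretOffset.x * c - turretOffset.y * s,
                     shipPos.y + turretOffset.x * s + turretOffset.y * c };
        }

        // The turret's world angle is just shipAngle + turretLocalAngle, as noted.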


  • Basics of drawing in 2d with OpenGL 3 shaders

    - by davidism
    I am new to OpenGL 3 and graphics programming, and want to create some basic 2D graphics. I have the following scenario for how I might go about drawing a basic (but general) 2D rectangle. I'm not sure if this is the correct way to think about it, or, if it is, how to implement it. In my head, here's how I imagine doing it:

    1. t = make_rectangle(width, height): build a general VBO, centered at (0, 0)
    2. optionally: t.set_scale(2)
    3. optionally: t.set_angle(30)
    4. t.draw_at(x, y): calculates some sort of scale/rotate/translate matrix (or matrices), passes the VBO and the matrix to a shader program
    5. Something happens to clip the world to the view visible on screen.

    I'm really unclear on how 4 and 5 will work. The main problem is that all the tutorials I find either use the fixed-function pipeline, are for 3D, or are unclear about how to do something this "simple". Can someone provide me with either a better way to think of / do this, or some concrete code detailing how to perform the transformations in a shader and how to construct and pass the data this shader transformation requires?
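
    A sketch of how steps 4 and 5 commonly fit together (C++ with GLM and GLEW for concreteness; the uniform names and the vertex shader line `gl_Position = projection * model * vec4(pos, 0.0, 1.0);` are assumptions, not the only way to structure it):

        #include <GL/glew.h>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/type_ptr.hpp>

        void drawAt(GLuint program, GLuint vao, float x, float y,
                    float angleDeg, float scale, float screenW, float screenH)
        {
            // Step 5: an orthographic projection maps pixel coordinates to clip
            // space; everything outside the screen rectangle is clipped for free.
            glm::mat4 projection = glm::ortho(0.0f, screenW, screenH, 0.0f);

            // Step 4: translate * rotate * scale, applied to the unit rectangle VBO.
            glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(x, y, 0.0f));
            model = glm::rotate(model, glm::radians(angleDeg), glm::vec3(0, 0, 1));
            model = glm::scale(model, glm::vec3(scale, scale, 1.0f));

            glUseProgram(program);
            // (In real code, cache these uniform locations instead of querying per draw.)
            glUniformMatrix4fv(glGetUniformLocation(program, "projection"),
                               1, GL_FALSE, glm::value_ptr(projection));
            glUniformMatrix4fv(glGetUniformLocation(program, "model"),
                               1, GL_FALSE, glm::value_ptr(model));
            glBindVertexArray(vao);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }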


  • Getting isometric grid coordinates from standard X,Y coordinates

    - by RoryHarvey
    I'm currently trying to add sprites to an isometric Tiled TMX map, using objects, in cocos2d. The problem is that the X and Y metadata from the TMX object are in standard 2D format (pixels x, pixels y), instead of isometric grid X and Y format. Usually you would just divide them by the tile size, but isometric needs some sort of transform. For example, on a 64x32 isometric tile map of size 40 tiles by 40 tiles, an object at (20, 21) has coordinates that come out as (640, 584). So the question really is: what formula gets (20, 21) from (640, 584)?
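
    For reference, the standard screen-to-isometric inversion is the sketch below (in C++). A caveat: Tiled and cocos2d disagree about the y origin and where tile (0,0) sits, and the numbers above suggest an extra framework-specific offset, so the offsets here are assumptions to adapt, not exact constants for this map:

        #include <cmath>

        struct GridPos { float i, j; };

        // Forward mapping for a standard iso layout is
        //   px = (i - j) * tileW/2,  py = (i + j) * tileH/2,
        // so the inverse is:
        GridPos screenToIso(float px, float py, float tileW, float tileH)
        {
            float i = (px / (tileW * 0.5f) + py / (tileH * 0.5f)) * 0.5f;
            float j = (py / (tileH * 0.5f) - px / (tileW * 0.5f)) * 0.5f;
            return { i, j };  // add/subtract the map's origin offset as needed
        }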


  • Any significant performance cost to using BlendState.NonPremultiplied?

    - by Donutz
    Normally, I guess, you'd use BlendState.AlphaBlend, because normally when you load your textures through the content pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.NonPremultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup costs. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?
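
    For scale, the startup pass being avoided is roughly this per-pixel loop (a generic C++ sketch over RGBA8 data, not XNA code):

        #include <cstdint>
        #include <cstddef>

        // Fold alpha into the color channels once at load time, which is what
        // BlendState.AlphaBlend (src*1 + dst*(1-srcAlpha)) expects the texture
        // to contain. NonPremultiplied instead blends src*srcAlpha +
        // dst*(1-srcAlpha) at draw time, so the texture can stay untouched.
        void premultiply(uint8_t* rgba, size_t pixelCount)
        {
            for (size_t i = 0; i < pixelCount; ++i, rgba += 4) {
                unsigned a = rgba[3];
                rgba[0] = static_cast<uint8_t>(rgba[0] * a / 255);
                rgba[1] = static_cast<uint8_t>(rgba[1] * a / 255);
                rgba[2] = static_cast<uint8_t>(rgba[2] * a / 255);
            }
        }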


  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the sample browser, and I have already checked that one. However, some questions arise. In the sample:

        D3DXMATRIXA16 mWorldViewProj;
        D3DXMATRIXA16 mWorld;
        D3DXMATRIXA16 mView;
        D3DXMATRIXA16 mProj;

        mWorld = g_World;
        mView = g_View;
        mProj = g_Projection;
        mWorldViewProj = mWorld * mView * mProj;

        VS_CONSTANT_BUFFER* pConstData;
        g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData );
        pConstData->mWorldViewProj = mWorldViewProj;
        pConstData->fTime = fBoundedTime;
        g_pConstantBuffer10->Unmap();

    They are copying their D3DXMATRIXes to D3DXMATRIXA16. Checked on MSDN: these matrices are 16-byte aligned and optimized for the Intel Pentium 4. So, as my first question: 1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if not, why don't we just use D3DXMATRIXA16 all the time?

    I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times:

        cbuffer cbNeverChanges
        {
            matrix View;
        };
        cbuffer cbChangeOnResize
        {
            matrix Projection;
        };
        cbuffer cbChangesEveryFrame
        {
            matrix World;
            float4 vMeshColor;
        };

    Then how would I set these buffers at different times? g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 ) gives me the possibility to set multiple buffers, but that is within one call. 2) Is that okay even if my constant buffers are updated at different times? And do I suppose correctly that I have to make sure the constant buffers are in the same positions in the array as the order in which they appear in the shader?
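
    A sketch of per-slot binding (D3D10 to match the sample; the register assignments below are my assumption, not from the SDK): give each cbuffer its own slot once, then Map/Unmap only the buffer whose data actually changed.

        // HLSL:
        //   cbuffer cbNeverChanges      : register(b0) { matrix View; };
        //   cbuffer cbChangeOnResize    : register(b1) { matrix Projection; };
        //   cbuffer cbChangesEveryFrame : register(b2) { matrix World; float4 vMeshColor; };

        #include <d3d10.h>

        void BindAll(ID3D10Device* dev, ID3D10Buffer* never,
                     ID3D10Buffer* onResize, ID3D10Buffer* everyFrame)
        {
            ID3D10Buffer* bufs[3] = { never, onResize, everyFrame };
            dev->VSSetConstantBuffers(0, 3, bufs);  // slots b0..b2 in one call
        }

        // Binding and updating are independent: Map/Unmap on everyFrame later does
        // not touch what is bound at b0/b1, so "bind once, update at different
        // times" works -- as long as each array position matches its register(bN).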


  • How to convert from wav or mp3 to raw PCM [on hold]

    - by Komyg
    I am developing a game using Cocos2d-X and the Marmalade SDK, and I am looking for recommendations for programs that can convert audio files in mp3 or wav format to raw PCM 16 format. The problem is that I am using the SimpleAudioEngine class to play sounds in my game, and in Marmalade it only supports files that are encoded as raw PCM 16. Unfortunately, I've been having a very hard time finding a program that can do this type of conversion, so I am looking for a recommendation.


  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started trying to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it yet, or on how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is that using new OpenGL, I can't get the same triangle to render. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code:

        static const GLuint g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };

        int main(void)
        {
            GLFWwindow* window;

            if (!glfwInit()) {
                fprintf(stderr, "Failed to initialize GLFW.");
                return -1;
            }

            glfwWindowHint(GLFW_SAMPLES, 4);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

            window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL);
            if (!window) {
                glfwTerminate();
                fprintf(stderr, "Failed to create a GLFW window");
                return -1;
            }

            glfwMakeContextCurrent(window);

            glewExperimental = GL_TRUE;
            GLenum err = glewInit();
            if (err != GLEW_OK) {
                glfwTerminate();
                fprintf(stderr, "Failed to initialize GLEW");
                fprintf(stderr, (char*)glewGetErrorString(err));
                return -1;
            }

            GLuint VertexArrayID;
            glGenVertexArrays(1, &VertexArrayID);
            glBindVertexArray(VertexArrayID);

            GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl");

            GLuint vertexBuffer;
            glGenBuffers(1, &vertexBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

            while (!glfwWindowShouldClose(window)) {
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glUseProgram(programID);
                glEnableVertexAttribArray(0);
                glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
                glDrawArrays(GL_TRIANGLES, 0, 3);
                glDisableVertexAttribArray(0);

                glfwSwapBuffers(window);
                glfwPollEvents();
            }

            glDeleteBuffers(1, &vertexBuffer);
            glDeleteProgram(programID);
            glfwDestroyWindow(window);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }

    I know it is not my shaders; they are super simple, and I've checked them against GLFW 2.7, so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.
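
    One detail worth isolating (an observation about the code above, not anything from the GLFW docs): the vertex array is declared with an integer element type (GLuint) but initialized with float literals, so the buffer contents are not the 32-bit floats that glVertexAttribPointer(..., GL_FLOAT, ...) expects. A floating-point declaration keeps the data as written:

        // The positions as the attribute setup expects them: 32-bit floats.
        static const GLfloat g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };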


  • Ogre3D : seeking advices about game files management

    - by Tibor
    I'm working on a new game, and its related level editor, based on Ogre3D. I was thinking about how I could manage the game files, knowing that Ogre uses .mesh files for models, .material files for material/texture information, etc. At first I thought about a common .zip folder decompressed at runtime (the same way Torchlight and the Ogre samples do it). But this way the game assets become one monolithic archive: loading takes time, and patching it could eventually become difficult. So, let's say I have a game object named "Cube" that I want to load in my program. Going for modularity, what if I create a compressed file (using zlib compression routines) named Cube.extname, containing its sub-files Cube.mesh, Cube.material, and so on? Are there any alternatives, or should I stick with compressed objects? PS: Just to clarify, the answer is unrelated to my program code; at the moment I'm using "resources.cfg", pointing to the OgreSDK media directory.
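
    A sketch of the per-object archive idea hooked into Ogre's resource system (the addResourceLocation API is real; the file layout and group name are the ones proposed above):

        #include <OgreResourceGroupManager.h>

        // Ogre's "Zip" archive type reads ordinary zlib/deflate zips, so
        // Cube.extname can simply be a renamed zip containing Cube.mesh,
        // Cube.material, its textures, and so on.
        void loadObjectArchive(const Ogre::String& path)
        {
            Ogre::ResourceGroupManager::getSingleton()
                .addResourceLocation(path, "Zip", "Objects");
        }

        // Usage: loadObjectArchive("media/objects/Cube.extname");
        // then initialiseResourceGroup("Objects") once all archives are registered.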


  • Finding shapes in 2D Array, then optimising

    - by assemblism
    I'm new so I can't post an image, but below is a diagram for a game I am working on: moving bricks into patterns. I currently have my code checking for rotated instances of a "T" shape of any colour. The X and O blocks would be the same colour, and my last batch of code would find the "T" shape where the X's are, but what I wanted was more like the second diagram, with two "T"s:

        Current result      Desired Result
        [X][O][O]           [1][1][1]
        [X][X][_]           [2][1][_]
        [X][O][_]           [2][2][_]
        [O][_][_]           [2][_][_]

    My code loops through x/y, marks blocks as used, rotates the shape, repeats, changes colour, repeats. I have started trying to fix this checking with great trepidation. The current idea is to:

    1. Loop through the grid and make note of all pattern occurrences (NOT marking blocks as used), putting these into an array.
    2. Loop through the grid again, this time noting which blocks are occupied by which patterns, and therefore which are occupied by multiple patterns.
    3. Loop through the grid again, this time noting which patterns obstruct which patterns.

    That much feels right... What do I do now? I think I would have to try various combinations of conflicting shapes, starting with those that obstruct the most other patterns first. How do I approach this one? Use the rationale that says I have 3 conflicting shapes occupying 8 blocks, and the shapes are 4 blocks each, therefore I can only have a maximum of two shapes? (I also intend to incorporate other shapes, and there will probably be score weighting which will need to be considered when going through the conflicting shapes, but that can wait for another day.) I don't think it's a bin packing problem, but I'm not sure what to look for. Hope that makes sense; thanks for your help.
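
    The "try combinations of conflicting shapes" step is essentially maximum set packing over the placements found in step 1. A rough C++ sketch (my names, and assuming the grid is small enough that occupancy fits in a 64-bit mask):

        #include <vector>
        #include <cstdint>

        struct Placement { uint64_t cells; int score; };  // cells: one bit per grid block

        // Backtracking: keep the best-scoring set of placements whose cells
        // don't overlap. Exponential in the worst case, but fine for small
        // grids; sorting placements by conflict count first helps pruning.
        void best(const std::vector<Placement>& all, size_t i, uint64_t used,
                  int score, int& bestScore)
        {
            if (score > bestScore) bestScore = score;
            for (size_t j = i; j < all.size(); ++j)
                if ((all[j].cells & used) == 0)  // no conflict with chosen shapes
                    best(all, j + 1, used | all[j].cells, score + all[j].score, bestScore);
        }

        // Usage: int s = 0; best(allPlacements, 0, 0, 0, s);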


  • Sprite Animation in Android with OpenGL ES

    - by lijo john
    How do I do a sprite animation in Android using OpenGL ES?

    What I have done: I am now able to draw a rectangle and apply my texture (sprite sheet) to it.
    What I need to know: the rectangle currently shows the whole sprite sheet at once. How do I show a single frame from the sprite sheet at a time and make the animation?

    It would be very helpful if anyone could share ideas, links to tutorials, or suggestions. Thanks in advance, all.
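
    The usual trick, sketched in C++ for concreteness (the same arithmetic applies from Java): keep the rectangle, but change its texture coordinates each frame so it samples only one cell of the sheet:

        struct UVRect { float u0, v0, u1, v1; };

        // Map frame N of a cols x rows sheet to normalized 0..1 texture coords.
        UVRect frameUV(int frame, int cols, int rows)
        {
            int cx = frame % cols, cy = frame / cols;
            float w = 1.0f / cols, h = 1.0f / rows;
            // Depending on how the image was uploaded, v may need flipping.
            return { cx * w, cy * h, cx * w + w, cy * h + h };
        }

        // e.g. int frame = (int)(elapsedSeconds * framesPerSecond) % frameCount;
        // then upload the four {u,v} corners as the quad's texcoord attribute.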


  • Best way to do large XNA animations?

    - by Harold
    What's the best way to have large animations in XNA 4.0? I have created a sprite sheet with each sprite being 250x400 (more an image than a sprite, but hey ho), and there are approximately 45 frames in the animation. This causes problems for XNA, as it says that the maximum texture size for the Reach profile is 2048. I'd rather not change to HiDef, as I've heard that makes your game less compatible with some computers and systems, so does anyone have any idea what the best thing to do is? The only thing I could come up with is to have a list of textures to flick through, but that's not ideal.
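
    For what it's worth, the "list of textures" idea can be made systematic: split the 45 frames across sheets that each stay within 2048 px, and resolve a frame number to a (sheet, source rectangle) pair at draw time. A sketch of the lookup, in C++ since the engine types are beside the point (Rect and FrameRef are my placeholders):

        struct Rect { int x, y, w, h; };
        struct FrameRef { int sheet; Rect src; };

        FrameRef locate(int frame, int frameW, int frameH, int sheetW, int sheetH)
        {
            int cols = sheetW / frameW;   // 2048 / 250 = 8 frames per row
            int rows = sheetH / frameH;   // 2048 / 400 = 5 rows
            int perSheet = cols * rows;   // 40 frames per 2048x2048 sheet
            int local = frame % perSheet; // so 45 frames fit in two sheets
            return { frame / perSheet,
                     { (local % cols) * frameW, (local / cols) * frameH, frameW, frameH } };
        }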


  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (the flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images: the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender Toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I should mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updating, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendering. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient, which is why I am trying to do it through a blendShader, so Flash determines when it needs rendering automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendering? The limitation is mentioned, with no explanation, in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, use add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."


  • 2D wave-like sprite movement XNA

    - by TheBroodian
    I'm trying to create a particle that will 'circle' my character. When the particle is created, it's given a random position in relation to my character, and a box to provide boundaries for how far left or right the particle should circle. When I say 'circle', I'm referring to simulated circling: when moving to the right, the particle appears in front of my character; when passing back to the left, the particle appears behind my character. That may have been too much context, so let me cut to the chase: in essence, the path I would like my particle to follow is akin to a sine wave, with the left and right sides of the provided rectangle being the apexes of the wave. The trouble I'm having is that the position of the particle is random, so it will never be produced at the same place within the wave twice, and I have no idea how to create this sort of behavior procedurally.
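
    A sketch of the missing step (C++ for concreteness, names are mine): treat the random spawn point as a phase along the wave, recover that phase once with asin, then just advance it over time:

        #include <cmath>

        struct Wave { float centerX, amplitude, phase, speed; };

        // spawnX must lie within [left, right]; asin picks the matching phase,
        // so a particle spawned anywhere joins the oscillation seamlessly.
        Wave makeWave(float spawnX, float left, float right, float speed)
        {
            float center = 0.5f * (left + right);
            float amp = 0.5f * (right - left);       // apexes at the box edges
            float phase = std::asin((spawnX - center) / amp);
            return { center, amp, phase, speed };
        }

        float waveX(Wave& w, float dt)
        {
            w.phase += w.speed * dt;
            return w.centerX + w.amplitude * std::sin(w.phase);
        }

        // std::cos(w.phase) > 0 can then mean "draw in front of the character",
        // and < 0 "draw behind", which fakes the circling depth.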


  • Collision and Graphics integration

    - by Shlomi Atia
    I'm a little confused about the integration between collision and graphics. They both need to share the same position in the world. The most obvious choice is the center of the entity, which is fine for bounding volumes and fixed-size sprites. However, for characters with variable-height sprites like this: http://gamemedia.wcgame.ru/data/2011-07-17/game-sprite-sheet.jpg this is no longer good. The character won't align with the ground if I draw it from the center. I could just make all the sprites the same height, but that would be a waste of memory (the largest sprite is 4 times larger than the smallest one). And even then, it's not an option at all with skeletal sprites like this one: http://user-generated-content.java-gaming.org/img-vault/212a171fc1ebb27ab77608fb9b2dd9bd9205361ce6300b21a7f8d06d025fbbd8.png It seems that the graphics need to be drawn from the ground for characters, but not for other images such as scenery and obstacles. The only solution I could think of was having another position, called the draw-position, which is the entity center for images and the bottom of the collision volume for characters. Then, when I draw relative to that position, it should work properly. I haven't found any references for something like that, so I'm kind of insecure about it. Does anyone know of a better approach to this problem? Thanks
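
    For what it's worth, here is the draw-position idea written down (a C++ sketch with my own names), generalized into a per-sprite anchor so it covers scenery with the same code path:

        struct Box { float cx, cy, halfW, halfH; };   // collision volume, center-based
        enum class Anchor { Center, BottomCenter };
        struct DrawPos { float x, y; };               // top-left corner for the blit

        DrawPos drawPosition(const Box& b, Anchor a, float spriteW, float spriteH)
        {
            switch (a) {
            case Anchor::BottomCenter:                // characters: feet stay on the
                return { b.cx - spriteW * 0.5f,       // ground no matter how tall the
                         (b.cy + b.halfH) - spriteH };// current frame is
            case Anchor::Center:
            default:                                  // scenery, obstacles, effects
                return { b.cx - spriteW * 0.5f, b.cy - spriteH * 0.5f };
            }
        }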


  • Exporting .jar files with Jarsplice

    - by SystemNetworks
    Help! I'm using Mac OS X 10.8 Mountain Lion and Eclipse, with the libraries called Slick and LWJGL. When I first exported it, it produced a .jar file. I followed some YouTube tutorials (different ones; they don't use Slick), and it worked for them. I don't know why it doesn't work for me. Should I put slick-util in too? I didn't even use LWJGL directly, by the way. Please help!

    Jars I used (libraries): Slick, LWJGL (I didn't use it directly)
    Tutorials I followed: TheCodingUniverse (exporting), TheNewBoston (the code and setup)
    Programs I used: Eclipse IDE, Java, Jarsplice

    No warnings or errors were found; it looks perfect. But nothing shows up on the screen whenever I run the jar (after Jarsplice). Help!!!


  • Assets.getBytes returns null in test environment

    - by ashes999
    I'm using the latest Haxe (2.10), NME (3.4.3), and MUnit. I've written some unit tests that need to fetch bitmap data from SWF symbols. The first step is to actually load the SWF data. To do this, I use NME's getBytes along with the swf library, like so:

        var blah:SWF = new SWF(Assets.getBytes("assets/swf/test.swf"));

    The call to Assets.getBytes returns null when I'm running this under MUnit. When running my actual game code, I'm able to get the byte array (and consequently, instantiate the SWF class). Am I doing something wrong? What am I missing?

    Edit: My directory structure is:

        .                    (root)
        .\assets
        .\assets\*.png       (other images)
        .\assets\swf\*.swf   (SWFs)
        .\Source\*.hx        (source code)
        .\Test\*.hx          (tests)


  • Creating smooth lighting transitions using tiles in HTML5/JavaScript game

    - by user12098
    I am trying to implement a lighting effect in an HTML5/JavaScript game using tile replacement. What I have now kind of works, but the transitions do not look smooth/natural enough as the light source moves around. Here's where I am now:

    - I have a background map with a light/shadow spectrum PNG tilesheet applied to it, going from the darkest tile to completely transparent. By default the darkest tile is drawn across the entire level on launch, covering all other layers, etc.
    - I am using my predetermined tile size (40 x 40 px) to calculate the position of each tile and store its x and y coordinates in an array.
    - I am then spawning a transparent 40 x 40 px "grid block" entity at each position in the array.
    - The engine I'm using (ImpactJS) then allows me to calculate the distance from my light source entity to every instance of this grid block entity. I can then replace the tile underneath each grid block with a tile of the appropriate transparency.

    Currently I'm doing the calculation like this in each instance of the grid block entity that is spawned on the map:

        var dist = this.distanceTo( ig.game.player );
        var percentage = 100 * dist / 960;
        if (percentage < 2) {
            // Spawns tile 64 of the shadow spectrum tilesheet at the specified position
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 64 );
        }
        else if (percentage < 4) {
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 63 );
        }
        else if (percentage < 6) {
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 62 );
        }
        // etc...

    The problem is that, as I said, this type of calculation does not make the light source look very natural. Tile switching looks too sharp, whereas ideally the tiles would fade in and out smoothly using the spectrum tilesheet. (I copied the tilesheet from another game that manages to do this, so I know it's not a problem with the tile shades; I'm just not sure how the other game does it.) I'm thinking that perhaps my method of using percentages to switch tiles could be replaced with a better/more dynamic proximity formula of some sort that would allow for smoother transitions? Might anyone have ideas for what I can do to improve the visuals here, or a better way of calculating proximity with the information I'm collecting about each tile? (PS: I'm reposting this from Stack Overflow at someone's suggestion; sorry about the duplicate!)
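
    One continuous replacement for the percentage ladder (the math sketched in C++, trivially portable to JavaScript; the constants are illustrative): map distance through a smoothstep falloff and scale by the number of shades, so every intermediate tile gets used and rings fade instead of stepping in wide bands:

        #include <algorithm>

        int shadowTileIndex(float dist, float lightRadius, int shadeCount /* e.g. 64 */)
        {
            float t = std::max(0.0f, std::min(1.0f, dist / lightRadius));
            t = t * t * (3.0f - 2.0f * t);   // smoothstep: soft edges at both ends
            // t = 0 at the light, t = 1 in full darkness; flip or offset the
            // result to match the tilesheet's ordering.
            return static_cast<int>(t * (shadeCount - 1) + 0.5f);
        }

        // Measuring to each tile's center (or averaging its corners) instead of
        // its top-left corner also softens diagonal banding.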


  • In a state machine, is it a good idea to separate states and transitions?

    - by codablank1
    I have implemented a small state machine in this way (in pseudocode):

        class Input {}
        class KeyInput inherits Input {
        public:
            enum { Key_A, Key_B, ..., }
        }
        class GUIInput inherits Input {
        public:
            enum { Button_A, Button_B, ..., }
        }

        enum Event { NewGame, Quit, OpenOptions, OpenMenu }

        class BaseState {
            String name;
            Event get_event(Input input);
            void handle(Event e);   // event handling function
        }
        class Menu inherits BaseState {...}
        class InGame inherits BaseState {...}
        class Options inherits BaseState {...}

        class StateMachine {
        public:
            BaseState get_current_state() { return current_state; }
            void add_state(String name, BaseState state) { statesMap.insert(name, state); }

            // raises an exception if the state is not found
            BaseState get_state(String name) { return statesMap.find(name); }

            // raises an exception if state or next_state is not found
            void add_transition(Event event, String state_name, String next_state_name) {
                BaseState state = get_state(state_name);
                BaseState next_state = get_state(next_state_name);
                transitionsMap.insert(pair<event, state>, next_state);
            }

            // raises an exception if the couple is not found
            BaseState get_next_state(Event event, BaseState state) {
                return transitionsMap.find(pair<event, state>);
            }

            void handle(Input input) {
                Event event = current_state.get_event(input);
                current_state.handle(event);
                current_state = get_next_state(event, current_state);
            }

        private:
            BaseState current_state;
            map<String, BaseState> statesMap;   // map of all states in the machine
            // for each couple event/state, this map stores the next state
            map<pair<Event, BaseState>, BaseState> transitionsMap;
        }

    So, before getting the transition, I need to convert the key input or GUI input into the proper event, given the current state; thus the same key 'W' can launch a new game in the Menu state or move the character forward in the InGame state. Then I get the next state from the transitionsMap, and I update the current state. Does this configuration seem valid to you? Is it a good idea to separate states and transitions? Also, I have some trouble representing a 'null state' or a 'null event': what initial value can I give to the current state, and which one should get_state return if it fails?
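
    On the null-state question, two common outs, sketched in actual C++ (the names are mine): a do-nothing null-object state to start in, and lookups that return std::optional so a missed find is neither an exception nor a magic value:

        #include <map>
        #include <optional>
        #include <utility>

        enum class Event { None, NewGame, Quit, OpenOptions, OpenMenu };

        struct Input;

        struct BaseState {
            virtual ~BaseState() = default;
            virtual Event get_event(const Input&) { return Event::None; }
            virtual void handle(Event) {}
        };

        // Valid to run from frame zero; leaves via its first registered transition.
        struct NullState : BaseState {};

        struct StateMachine {
            BaseState* current = &null_state;   // never a dangling/NULL current state
            std::map<std::pair<Event, BaseState*>, BaseState*> transitions;

            std::optional<BaseState*> get_next_state(Event e, BaseState* s) {
                auto it = transitions.find({e, s});
                if (it == transitions.end()) return std::nullopt;
                return it->second;
            }

            void handle(const Input& in) {
                Event e = current->get_event(in);
                current->handle(e);
                if (auto next = get_next_state(e, current))
                    current = *next;            // unknown (event, state): stay put
            }

            NullState null_state;
        };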


  • Physics not synchronizing correctly over the network when using Bullet

    - by Lucas
    I'm trying to implement a client/server physics system using Bullet; however, I'm having problems getting things to sync up. I've implemented a custom motion state which reads and writes the transform from my game objects, and it works locally, but I've tried two different approaches for networked games:

    1. Dynamic objects on the client that also exist on the server (i.e. not random debris and other unimportant stuff) are made kinematic. This works correctly, but the objects don't move very smoothly.
    2. Objects are dynamic on both, but after each message from the server saying that an object has moved, I set the linear and angular velocity to the values from the server and call btRigidBody::proceedToTransform with the transform from the server. I also call btCollisionObject::activate(true); to force the object to update.

    My intent with method 2 was basically to do method 1, but hijacking Bullet into doing a poor man's prediction instead of writing my own to smooth out method 1. However, this doesn't seem to work (for reasons that are not 100% clear to me, even stepping through Bullet), and the objects sometimes end up in different places. Am I heading in the right direction? Bullet seems to have its own interpolation code built in. Can that help me make method 1 work better? Or is my method 2 code not working because I am accidentally stomping on that?
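
    For reference, the correction step in method 2 could be softened like this (a sketch using real btRigidBody calls; the 0.2 blend factor is an assumption to tune, not anything from Bullet):

        #include <btBulletDynamicsCommon.h>

        // Blend toward the server transform instead of teleporting outright, so
        // small disagreements get absorbed over several updates.
        void applyServerState(btRigidBody* body, const btTransform& serverXform,
                              const btVector3& serverLinVel, const btVector3& serverAngVel)
        {
            const btScalar k(0.2);  // how hard to pull toward the server per message
            btTransform current = body->getWorldTransform();
            btTransform blended;
            blended.setOrigin(current.getOrigin().lerp(serverXform.getOrigin(), k));
            blended.setRotation(current.getRotation().slerp(serverXform.getRotation(), k));
            body->proceedToTransform(blended);   // as in the question's method 2
            body->setLinearVelocity(serverLinVel);
            body->setAngularVelocity(serverAngVel);
            body->activate(true);                // don't let the island sleep through it
        }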


  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geometry clipmaps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph, and more precisely the bolded part:

    "The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering."

    Let's take an example image from the article. My "understanding" of the way the clipmaps were updated was that you floor the position of the viewpoint to an int, and thus get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this is obviously not the case. But what I am failing to understand is this: looking at the image above, if the viewpoint were to move one unit to the right, then the inner ring (the one just around the viewpoint plus the white center square) would end up with a 1-unit space on both its left and right sides. But there is nothing in the paper that deals with this. What I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image): this is obviously not a valid state for the clipmap. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004, and I just can't see what I am not getting.
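
    The snapping rule as I currently understand it, sketched in C++ (my formulation, not code from the paper): level L's vertices sit on a grid with spacing 2^L, and the level's origin snaps to multiples of 2^(L+1), i.e. to even level-L units. A level can then shift by one of its own units while its coarser parent stays fixed, which is exactly the one-unit off-center case that n = 2^k - 1 is chosen to allow:

        #include <cmath>

        // Applied per axis, per level, each frame before deciding which rows
        // and columns of the toroidal buffer to update.
        double snapLevelOrigin(double viewpoint, int level)
        {
            const double twoSpacing = std::ldexp(1.0, level + 1);   // 2^(L+1)
            return std::floor(viewpoint / twoSpacing) * twoSpacing;
        }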

