Search Results

Search found 21194 results on 848 pages for 'game state'.


  • Precision loss when transforming from cartesian to isometric

    - by Justin Skiles
    My goal is to display a tile map in isometric projection. The map is 25 tiles across and 25 tiles down, and each tile is 32x32. See below for how I'm accomplishing this.

    World space to screen space rotation (45 degrees): using a 2D rotation matrix, I do the following:

        double rotation = Math.PI / 4;
        double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation));
        double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation));

    World space to screen space scale (Y axis reduced by 50%): here I simply scale down the Y value by a factor of 0.5.

    Problem: it works, kind of, but there are tiny 1-2px gaps between some of the tiles when rendering. I think there's precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering when I'm rotating and scaling at higher precision. Any advice? Do I need to supply more information?
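
    A common way to avoid such seams, offered as a sketch rather than a fix verified against this code: skip the per-tile floating-point rotate-and-scale entirely and derive screen coordinates from integer tile indices, so neighboring tiles land on exactly shared pixel edges. The tile sizes below are assumptions:

        // Integer 2:1 isometric mapping; equivalent to rotating 45 degrees and
        // halving Y, but exact, so no 1-2px gaps can accumulate.
        static class Iso
        {
            // A 32x32 source tile is typically drawn as a 64x32 diamond.
            const int TileW = 64, TileH = 32;

            public static void TileToScreen(int col, int row, out int x, out int y)
            {
                x = (col - row) * (TileW / 2); // (col - row) stays an integer
                y = (col + row) * (TileH / 2); // (col + row) stays an integer
            }
        }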

    Read the article

  • libgdx arrays onTouch() method and delays for objects

    - by johnny-b
    I am trying to create random bullets, but it is not working for some reason. Also, how can I add a delay so the bullets come every 30 seconds or 1 minute? The onTouch() method does not work either, and it is not taking the bullet away. Should I put the array in the GameRender class? Thanks.

        public class GameWorld {
            public static Ball ball;
            private Bullet bullet1;
            private ScrollHandler scroller;
            private Array<Bullet> bullets = new Array<Bullet>();

            public GameWorld() {
                ball = new Ball(280, 273, 32, 32);
                bullet = new Bullet(-300, 200);
                scroller = new ScrollHandler(0);
                bullets.add(new Bullet(bullet.getX(), bullet.getY()));
                bullets = new Array<Bullet>();

                Bullet bullet = null;
                float bulletX = 0.0f;
                float bulletY = 0.0f;
                for (int i = 0; i < 10; i++) {
                    bulletX = MathUtils.random(-10, 10);
                    bulletY = MathUtils.random(-10, 10);
                    bullet = new Bullet(bulletX, bulletY);
                    bullets.add(bullet);
                }
            }

            public void update(float delta) {
                ball.update(delta);
                bullet.update(delta);
                scroller.update(delta);
            }

            public static Ball getBall() { return ball; }
            public ScrollHandler getScroller() { return scroller; }
            public Bullet getBullet1() { return bullet1; }
        }

    I also tried the following (used in the GameRender class), and it is not working:

        Array<Bullet> enemies = new Array<Bullet>();

        // in the constructor of the class
        enemies.add(new Bullet(bullet.getX(), bullet.getY())); // this throws an exception for some reason

        // this is in the render method
        for (int i = 0; i < bullet.size; i++)
            bullet.get(i).draw(batcher);

        // this I am using in any method that will allow me, from the constructor to update to render
        for (int i = 0; i < bullet.size; i++)
            bullet.get(i).update(delta);

    This is not taking the bullet out:

        @Override
        public boolean touchDown(int screenX, int screenY, int pointer, int button) {
            for (int i = 0; i < bullet.size; i++)
                if (bullet.get(i).getBounds().contains(screenX, screenY))
                    bullet.removeIndex(i--);
            return false;
        }

    Thanks for the help, anyone.
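
    The delay part of this question is engine-agnostic; the usual pattern is a time accumulator in the update loop. Below is a minimal sketch of that pattern in C# (libgdx itself is Java, but the logic transfers line for line; BulletSpawner and spawnBullet are illustrative names, not libgdx API):

        using System;

        class BulletSpawner
        {
            readonly float interval; // seconds between spawns, e.g. 30f or 60f
            float elapsed;

            public BulletSpawner(float intervalSeconds) { interval = intervalSeconds; }

            // Call once per frame with the frame's delta time; fires the
            // callback each time a whole interval has accumulated.
            public void Update(float delta, Action spawnBullet)
            {
                elapsed += delta;
                while (elapsed >= interval)
                {
                    elapsed -= interval;
                    spawnBullet();
                }
            }
        }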

    Read the article

  • Finding closest object to a location within a specific perpendicular distance to direction vector

    - by Sniper
    I have a location and a direction vector indicating facing, and I want to find the closest object to that location that is within some tolerance distance (perpendicular distance) of the ray formed by the location and direction vector. Basically I want to get the object that is being aimed at. I have thought about finding all objects within a box and then finding the closest object to my vector from those results, but I am sure there is a more efficient way. The Z axis is optional; the objects are most likely within a few meters of the search vector.
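
    For reference, the point-to-ray test at the heart of this is cheap; a sketch in C# (using System.Numerics.Vector3; names are illustrative, and dir is assumed normalized):

        using System.Numerics;

        static class Aim
        {
            // Squared perpendicular distance from point to the ray (origin, dir),
            // or null if the point lies behind the ray origin.
            public static float? PerpDistanceSq(Vector3 origin, Vector3 dir, Vector3 point)
            {
                Vector3 toPoint = point - origin;
                float along = Vector3.Dot(toPoint, dir);
                if (along < 0) return null;              // behind the shooter
                Vector3 closest = origin + dir * along;  // projection onto the ray
                return (point - closest).LengthSquared();
            }
        }

    Among the objects whose perpendicular distance is within the tolerance, the aimed-at one is then the candidate with the smallest distance along the ray; a spatial partition (grid or BVH) only becomes worthwhile to avoid testing every object.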

    Read the article

  • how do I set quad buffering with jogl 2.0

    - by tony danza
    I'm trying to create a 3D renderer for stereo vision with quad buffering in Processing/Java. The hardware I'm using is ready for this, so that's not the problem. I had a stereo.jar library for JOGL 1.0 working with Processing 1.5, but now I have to use Processing 2.0 and JOGL 2.0, so I have to adapt the library. Some things have changed in the source code of JOGL and Processing, and I'm having a hard time figuring out how to tell Processing I want to use quad buffering. Here's the previous code:

        public class Theatre extends PGraphicsOpenGL {
            protected void allocate() {
                if (context == null) {
                    // If OpenGL 2X or 4X smoothing is enabled, set up the caps object for them
                    GLCapabilities capabilities = new GLCapabilities();

                    // Starting in release 0158, OpenGL smoothing is always enabled
                    if (!hints[DISABLE_OPENGL_2X_SMOOTH]) {
                        capabilities.setSampleBuffers(true);
                        capabilities.setNumSamples(2);
                    } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) {
                        capabilities.setSampleBuffers(true);
                        capabilities.setNumSamples(4);
                    }
                    capabilities.setStereo(true);

                    // get a rendering surface and a context for this canvas
                    GLDrawableFactory factory = GLDrawableFactory.getFactory();
                    drawable = factory.getGLDrawable(parent, capabilities, null);
                    context = drawable.createContext(null);

                    // need to get proper opengl context since it will be needed below
                    gl = context.getGL();

                    // Flag defaults to be reset on the next trip into beginDraw().
                    settingsInited = false;
                } else {
                    // The following three lines are a fix for Bug #1176
                    // http://dev.processing.org/bugs/show_bug.cgi?id=1176
                    context.destroy();
                    context = drawable.createContext(null);
                    gl = context.getGL();
                    reapplySettings();
                }
            }
        }

    This was the renderer of the old library. In order to use it, I needed to call size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying:

        PGraphicsOpenGL pg = ((PGraphicsOpenGL) g);
        pgl = pg.beginPGL();
        gl = pgl.gl;
        glu = pg.pgl.glu;
        gl2 = pgl.gl.getGL2();

        GLProfile profile = GLProfile.get(GLProfile.GL2);
        GLCapabilities capabilities = new GLCapabilities(profile);
        capabilities.setSampleBuffers(true);
        capabilities.setNumSamples(4);
        capabilities.setStereo(true);
        GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);

    If I go on, I should do something like:

        drawable = factory.getGLDrawable(parent, capabilities, null);

    but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try:

        gl2.glDrawBuffer(GL.GL_BACK_RIGHT);

    it obviously doesn't work. Thanks.

    Read the article

  • Simple project - make a 3D box tumble and fall to the ground [closed]

    - by Dominic Bou-Samra
    Possible Duplicate: Resources to learn programming rigid body simulation

    I want to try learning rigid-body dynamics simulation. I have done fluid and cloth simulation before, but never anything rigid. My maths knowledge is limited in that I don't know the notation very well. Are there any good cliff notes, tutorials, or guides on how I would accomplish a simple task like making a 3D box tumble and fall to the ground? I don't want a super complex PDF that's only a little relevant. Thanks.

    Read the article

  • Need a bounding box for CCSprite that includes all children/subchildren

    - by prototypical
    I have a CCSprite that has CCSprite children, and those children have CCSprite children of their own. The contentSize property doesn't seem to include all children/subchildren and seems to only work for the base node. I could write a recursive method that traverses a CCSprite's children/subchildren and calculates a proper bounding box, but I'm curious whether I'm missing something and it's possible to get that information without doing so. I'll be a little surprised if such a method doesn't exist, but I can't seem to find it.
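
    For what it's worth, the recursive union is only a few lines. Here is a sketch of the pattern in C# (cocos2d itself is Objective-C; Node, Bounds, and Children are stand-ins for the real CCNode API, and all boxes are assumed to be in a common coordinate space):

        using System.Collections.Generic;
        using System.Drawing;

        class Node
        {
            public RectangleF Bounds;                      // node's own box
            public List<Node> Children = new List<Node>(); // direct children
        }

        static class BoundsUtil
        {
            // Union of a node's own bounds with those of all descendants.
            public static RectangleF TreeBounds(Node node)
            {
                RectangleF box = node.Bounds;
                foreach (Node child in node.Children)
                    box = RectangleF.Union(box, TreeBounds(child));
                return box;
            }
        }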

    Read the article

  • backface culling error (in world space)

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation; is that correct? (I can't insert the picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;

        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();

            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }

        for (Triangle &t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason? For a better explanation I uploaded a picture with cube rotation screenshots: http://imageshack.us/photo/my-images/842/bcmove.png/

    UPDATED: The error occurs only when the triangle has a non-zero offset from the origin.

    UPDATED 2: If I do backface culling in clip space (after transforming all vertices with the view and projection matrices) and just check the z coordinate of the triangle normal, it works perfectly... Can I perform culling RIGHT BEFORE the view/projection transforms? In that case it looks like culling will not depend on the projection, and that's not right, is it?

    UPDATED 3: I found the answer and will post it in two hours; again because of lack of reputation.
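
    A hedged note while waiting on the promised self-answer: with a perspective camera, "facing away" depends on the vector from the eye to the triangle, not on one constant view direction, so a fixed view_dir only agrees with the true test for triangles near the view axis, which matches the observation that offset triangles fail. A sketch of the eye-relative test in C# (System.Numerics; the sign convention depends on winding order):

        using System.Numerics;

        static class Cull
        {
            public static bool IsBackFacing(Vector3 eye, Vector3 v1, Vector3 v2, Vector3 v3)
            {
                Vector3 normal = Vector3.Cross(v2 - v1, v3 - v1); // winding-dependent
                Vector3 toTriangle = v1 - eye;                    // eye -> triangle
                return Vector3.Dot(normal, toTriangle) >= 0;      // flip if needed
            }
        }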

    Read the article

  • Frame timing for GLFW versus GLUT

    - by linello
    I need a library which ensures that the timing between frames is as constant as possible during a visual psychophysics experiment. This is usually done by synchronizing the main loop with the refresh rate of the screen. For example, if my monitor runs at 60Hz, I would like to specify that frequency to my framework. For example, if my game loop is the following:

        void gameloop() {
            // do some computation
            printDeltaT();
            // flip buffers
        }

    I would like printDeltaT() to print a constant time interval. Is this possible with GLFW?
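
    Two hedged notes, since the question spans vsync and timing: GLFW requests vsync through its swap interval (glfwSwapInterval(1) in the C API, called after making the context current), which locks buffer flips to the refresh rate but does not remove measurement jitter; a fixed-timestep loop on top of that keeps the simulation interval exactly constant. A sketch of the accumulator pattern in C# (the pattern only, not GLFW's API; Step assumes a 60 Hz display):

        using System;
        using System.Diagnostics;

        static class FixedStepLoop
        {
            const double Step = 1.0 / 60.0; // assumed refresh rate

            public static void Run(Action update, Action render, Func<bool> running)
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalSeconds;
                double accumulator = 0;

                while (running())
                {
                    double now = clock.Elapsed.TotalSeconds;
                    accumulator += now - previous;
                    previous = now;

                    while (accumulator >= Step)
                    {
                        update();           // always advances by exactly Step
                        accumulator -= Step;
                    }
                    render();               // then flip buffers
                }
            }
        }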

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is: unproject the mouse into the scene to get the ray:

        Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 dir = p2 - p1;
        dir.Normalize();
        Ray ray = new Ray(p1, dir);

    Then get the Y-intercept by using algebra:

        float t = -ray.Position.Y / ray.Direction.Y;
        Vector3 p = ray.Position + t * ray.Direction;

    The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the projected position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements by doing some computations at double precision, and possibly by abusing the fact that floating point calculations are done at 80-bit precision on x86; however, before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can make to improve this?

    EDIT: A little snooping around in .NET Reflector on SlimDX.dll:

        public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection)
        {
            Vector3 coordinate = new Vector3();
            Matrix result = new Matrix();
            Matrix.Invert(ref worldViewProjection, out result);
            coordinate.X = (float)((((vector.X - x) / ((double)width)) * 2.0) - 1.0);
            coordinate.Y = (float)-((((vector.Y - y) / ((double)height)) * 2.0) - 1.0);
            coordinate.Z = (vector.Z - minZ) / (maxZ - minZ);
            TransformCoordinate(ref coordinate, ref result, out coordinate);
            return coordinate;
        }

        // ...

        public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result)
        {
            Vector3 vector;
            Vector4 vector2 = new Vector4 {
                X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41,
                Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42,
                Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43
            };
            float num = (float)(1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44));
            vector2.W = num;
            vector.X = vector2.X * num;
            vector.Y = vector2.Y * num;
            vector.Z = vector2.Z * num;
            result = vector;
        }

    ...which seems to be a pretty standard method of unprojecting a point through a projection matrix; however, it introduces another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.
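
    One algorithmic change worth trying, as a sketch rather than a verified fix: avoid differencing two unprojected points, because the far-plane point has a large magnitude when farPlane is big and the subtraction cancels most of the significant bits. Anchoring the ray at the camera position (available from the inverted view matrix) and intersecting in double precision keeps all operands small; cameraPos below is an assumption:

        using System.Numerics;

        static class MousePick
        {
            // Build the ray from the camera through the near-plane point only.
            public static void MouseRay(Vector3 nearPoint, Vector3 cameraPos,
                                        out Vector3 origin, out Vector3 dir)
            {
                origin = cameraPos;
                dir = Vector3.Normalize(nearPoint - cameraPos);
            }

            // Y = 0 plane intersection carried out in double precision.
            public static Vector3 InterceptY0(Vector3 origin, Vector3 dir)
            {
                double t = -(double)origin.Y / dir.Y;
                return new Vector3(
                    (float)(origin.X + t * dir.X),
                    0f,
                    (float)(origin.Z + t * dir.Z));
            }
        }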

    Read the article

  • Best way to detect if vec3 is between vec3(x) and vec3(y) in glsl

    - by elect
    As titled: I am sampling from a texture, and if the color is somehow gray (in the range [vec3(.8), vec3(.9)]) and a uniform is 1, I need to substitute that color with another one. I am not a GLSL veteran, but I am pretty sure there is a more elegant and compact (not to mention faster) way than this:

        vec3 textureColor = texture(texture0, oUV);
        if (settings.w == 1
            && textureColor.r > .8 && textureColor.r < .9
            && textureColor.g > .8 && textureColor.g < .9
            && textureColor.b > .8 && textureColor.b < .9)

    Read the article

  • Tips and Tools for creating Spritesheet animations

    - by Spooks
    I am looking for a tool that I can use to create sprite sheets easily. Right now I am using Illustrator, but I can never get the center of the character in the exact same position, so the character looks like it is moving around (even though it is always in one place) while the sprite sheet is looped. Are there better tools I could be using? Also, what tips would you give for working with a sprite sheet? Should I create each part of the character on individual layers (left arm, right arm, body, etc.) or everything at once? Any other tips would also be helpful. Thank you!

    Read the article

  • How can I use the dualforward parameter in my unity shader to use lightmaps and normal maps together?

    - by Raphaeltm
    I'm using the free version of Unity and I would like to combine lightmaps with specularity and normal maps. After doing a -bunch- of research, I've figured out that there doesn't seem to be any easy way to do this in the free version of Unity, which doesn't support deferred rendering or easy use of dual lightmaps. However, it looks like it's possible by writing a custom shader that uses the "dualforward" parameter, switching the lightmapping mode to "dual lightmaps", and turning on "Use in forward ren." (basically, writing a shader that specifies the use of dual lightmaps, which should allow for a combination of lightmaps and normal maps). So I downloaded the source code for the default shaders (because all I need is a normal specular bumped shader) and added "dualforward" to the parameters:

        Shader "Bumped Specular Dual Lightmaps" {
            Properties {
                _Color ("Main Color", Color) = (1,1,1,1)
                _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
                _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
                _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
                _BumpMap ("Normalmap", 2D) = "bump" {}
            }
            SubShader {
                Tags { "RenderType"="Opaque" }
                LOD 400

                CGPROGRAM
                #pragma surface surf BlinnPhong dualforward

                sampler2D _MainTex;
                sampler2D _BumpMap;
                fixed4 _Color;
                half _Shininess;

                struct Input {
                    float2 uv_MainTex;
                    float2 uv_BumpMap;
                };

                void surf (Input IN, inout SurfaceOutput o) {
                    fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
                    o.Albedo = tex.rgb * _Color.rgb;
                    o.Gloss = tex.a;
                    o.Alpha = tex.a * _Color.a;
                    o.Specular = _Shininess;
                    o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
                }
                ENDCG
            }
            FallBack "Specular"
        }

    This, however, doesn't seem to work. When I keep the "dualforward" parameter, every object that uses it seems to be lit by the one directional light in the scene. When I remove the "dualforward" parameter, the objects look like normal lightmapped objects with no normal maps or specularity. I noticed that support for "dualforward" seems to be new in v3.4.2, so I made sure to download it (I was running 3.4.1), but it still doesn't work. Does anybody have any advice for me?

    Read the article

  • How to categorize textures into atlases

    - by Esa
    I am going to use texture atlasing for the first time in my games, and at first it seemed like a great idea to split textures into atlases by terrain theme, e.g. ForestTextures, WinterTextures, etc. But that could cause a problem when, for example, a flower has to use a transparency shader while the other models use a diffuse shader: those cannot be atlased into the same texture. So, would atlasing textures into themes as described and then splitting them by shader, like ForestDiffuse and ForestTransparent, be good? Or is there a better way to categorize and build them?

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally I guess you'd use BlendState.AlphaBlend, because when you load your textures through the content pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup cost. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?
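
    For context, the startup cost being weighed here is the per-pixel loop; a sketch of runtime premultiplication in XNA 4.0-style C# (Texture2D.FromStream and Color.FromNonPremultiplied are the stock APIs; error handling omitted):

        using System.IO;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class TextureLoader
        {
            // Load a straight-alpha PNG at runtime and premultiply on the CPU.
            public static Texture2D LoadPremultiplied(GraphicsDevice device, Stream png)
            {
                Texture2D tex = Texture2D.FromStream(device, png); // straight alpha
                Color[] pixels = new Color[tex.Width * tex.Height];
                tex.GetData(pixels);
                for (int i = 0; i < pixels.Length; i++)
                    pixels[i] = Color.FromNonPremultiplied(
                        pixels[i].R, pixels[i].G, pixels[i].B, pixels[i].A);
                tex.SetData(pixels);
                return tex;
            }
        }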

    Read the article

  • Camera movement and threshold not working

    - by irish guy mcconagheh
    I have a platformer in progress. Part of it is a camera which I only want to move when the character moves outside a certain threshold. To try to accomplish this I have the following if statement in Unity/C#:

        if (((Mathf.Abs(target.transform.position.x)) - (Mathf.Abs(transform.position.x))) > thres) {
            x = moveTo(transform.position.x, target.position.x, trackSpeed);
        }

    In pseudocode, that means: if ((absolute value of player x) - (absolute value of camera x) is greater than the threshold) { move }. However, this does not seem to work correctly. It appears to work the first couple of times the threshold is reached, but after that the distance between the camera and the player has to increase every time for the camera to move. I do not believe the movement of the camera is the problem; its code is as follows:

        private float moveTo(float n, float target, float accel) {
            if (n == target) {
                return n;
            } else {
                float dir = Mathf.Sign(target - n);
                n += accel * Time.deltaTime * dir;
                return (dir == Mathf.Sign(target - n)) ? n : target;
            }
        }
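
    A likely culprit, offered as an observation rather than the accepted answer: Mathf.Abs(a) - Mathf.Abs(b) is not the distance between a and b; it misbehaves whenever the two x values have different signs or the camera sits ahead of the player. Taking the absolute value of the difference tests the actual offset; a minimal sketch:

        // Move only when the player strays more than `thres` from the camera.
        float offset = target.transform.position.x - transform.position.x;
        if (Mathf.Abs(offset) > thres) {
            x = moveTo(transform.position.x, target.position.x, trackSpeed);
        }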

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I've had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = -(zFar + zNear) / (zFar - zNear);
            Result[2][3] = -valType(1);
            Result[3][2] = -(valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There don't seem to be any errors in the code, so I tried to invert the projection. With this matrix, the z and w coordinates after projection are

        z' = -((zFar + zNear) / (zFar - zNear)) * z - (2 * zFar * zNear) / (zFar - zNear)
        w' = -z

    and dividing z' by w' gives the post-projective depth d (which lies in the depth buffer), so I need to solve for z, which finally gives

        z = (2 * zFar * zNear) / (d * (zFar - zNear) - (zFar + zNear))

    Now, the problem is that I don't get the correct position (I have compared the reconstructed position with a rendered one). I then tried using the respective formula I get by doing the same for another matrix (that matrix and its formula were images in the original post and are missing here). For some reason, using that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me, please?
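
    For completeness, a sketch of the reconstruction step the question is circling, in C# (assuming a depth-buffer value in [0, 1] and the GLM-style matrix above; derived from the formulas, not taken from the asker's code):

        static class DepthReconstruct
        {
            // depth01: value sampled from the depth buffer, in [0, 1].
            // Returns eye-space z, which is negative in front of the camera.
            public static float EyeZFromDepth(float depth01, float zNear, float zFar)
            {
                float d = depth01 * 2f - 1f; // window depth -> NDC z in [-1, 1]
                return 2f * zFar * zNear /
                       (d * (zFar - zNear) - (zFar + zNear));
            }
        }

    Sanity check: depth01 = 0 gives -zNear and depth01 = 1 gives -zFar, as expected.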

    Read the article

  • How do I stop a model's animation in XNA?

    - by Mehdi Bugnard
    I've run into a difficulty stopping an animation. Everything works great for starting the animation, but I cannot see how to stop it and later resume the animation where it paused. AnimationPlayer.StartClip(clip) is used to start the animation, but I cannot find a way to stop it. Thanks a lot. Here is the code I use:

        protected override void LoadContent()
        {
            // Model - Player
            model_player = Content.Load<Model>("Models\\Player\\models");

            // Look up our custom skinning information.
            SkinningData skinningData = model_player.Tag as SkinningData;
            if (skinningData == null)
                throw new InvalidOperationException(
                    "This model does not contain a SkinningData tag.");

            // Create an animation player, and start decoding an animation clip.
            animationPlayer = new AnimationPlayer(skinningData);
            AnimationClip clip = skinningData.AnimationClips["ArmLowAction_006"];
            animationPlayer.StartClip(clip);
        }

        protected override void Update(GameTime gameTime)
        {
            KeyboardState key = Keyboard.GetState();

            // If the player doesn't move -> stop the animation
            if (!key.IsKeyDown(Keys.W) && !keyStateOld.IsKeyUp(Keys.S) &&
                !keyStateOld.IsKeyUp(Keys.A) && !keyStateOld.IsKeyUp(Keys.D))
            {
                // animation stop? does not seem to exist
                animationPlayer.Stop();
                isPlayerStop = true;
            }
            else
            {
                if (isPlayerStop == true)
                {
                    isPlayerStop = false;
                    animationPlayer.StartClip(clip);
                }
            }
        }
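
    The AnimationPlayer from the XNA Skinned Model sample has no Stop method; a common workaround, sketched here under the assumption that this is that sample's API, is to pause by simply not advancing the player, so resuming continues where it left off:

        bool paused;

        protected override void Update(GameTime gameTime)
        {
            KeyboardState key = Keyboard.GetState();
            paused = !key.IsKeyDown(Keys.W) && !key.IsKeyDown(Keys.S)
                  && !key.IsKeyDown(Keys.A) && !key.IsKeyDown(Keys.D);

            // Only advance the animation while the player is moving.
            if (!paused)
                animationPlayer.Update(gameTime.ElapsedGameTime, true, Matrix.Identity);

            base.Update(gameTime);
        }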

    Read the article

  • Drawing of a huge model - How to regain performance?

    - by marc wellman
    I have a huge model I want to draw in my XNA application, but because of its size I am experiencing a tremendous loss of performance. The model has roughly 50,000,000 edges and is 205 MB on disk in DirectX format. Please don't ask whether this model has to be that big; yes, it has! Is there a way to transfer the model directly to the GPU and let the GPU do the drawing, the way a vertex buffer is handed over like this:

        graphicsDevice.Vertices[1].SetSource(_instanceBuffers[i], 0, _sizeofMatrix);

    because when I try to fill a vertex buffer with all the vertices I get an OutOfMemoryException.

    Read the article

  • How can I gain access to a player instance in a Minecraft mod?

    - by Andrew Graber
    I'm creating a Minecraft mod with a pickaxe that takes away experience when you break a block. The method for taking away experience from a player is addExperience on EntityPlayer, so I need to get an instance of EntityPlayer for the player using my pickaxe when the pickaxe breaks a block, so that I can remove the appropriate amount of experience. My pickaxe class currently looks like this:

        public class ExperiencePickaxe extends ItemPickaxe {
            public ExperiencePickaxe(int ItemID, EnumToolMaterial material) {
                super(ItemID, material);
            }

            public boolean onBlockDestroyed(ItemStack par1ItemStack, World par2World,
                    int par3, int par4, int par5, int par6, EntityLiving par7EntityLiving) {
                if ((double) Block.blocksList[par3].getBlockHardness(par2World, par4, par5, par6) != 0.0D) {
                    EntityPlayer e = new EntityPlayer(); // create an instance
                    e.addExperience(-1);
                }
                return true;
            }
        }

    Obviously, I cannot actually create a new EntityPlayer since it is an abstract class. How can I get access to the player using my pickaxe?

    Read the article

  • XNA Skinned Model - Keyframe.Bone out of range exception

    - by idlackage
    I'm getting an IndexOutOfRangeException on this line of AnimationPlayer.cs:

        boneTransforms[keyframe.Bone] = keyframe.Transform;

    I don't get what it's really referring to. The error happens when keyframe.Bone is 14, but I have no idea what that's supposed to mean. The 14th bone of my model? What would that even be? I read this thread, but nothing there seemed to work. I don't have many bones, stray edges/verts, unassigned verts, unparented or non-root bones, or bones with dots in their names. What else could I be missing? Thank you for any help!

    Read the article

  • Multiple Vertex Buffers per Mesh

    - by Daniel
    I've run into a situation where the size of my mesh, with all its vertices and indices, is larger than the (optimal) vertex buffer object upper limit (~8 MB). I was wondering if I can sub-divide the mesh across multiple vertex buffers and somehow retain the validity of the indices, i.e. a triangle with one index referencing the first vertex and another referencing the last (i.e. in separate VBOs), all while keeping this within vertex array objects. My thought is to save myself the hassle and, for meshes (messes :P) such as this, just use the necessary size (>8 MB), which is what I do at the moment. But ideally my buffer manager (WIP) uses optimal sizes; I may just have to make a special case then... Any ideas? If necessary, a simple C++ code example is appreciated. Note: I have also cross-posted this on Stack Overflow, as I was not sure which site would be more suitable (it's partly a design question).
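
    Since indices address whichever buffer is currently bound, a single triangle cannot span two VBOs; the usual approach is to split the mesh into chunks and rebase each chunk's indices to local positions. A sketch of the splitting logic in C# (the asker works in C++, but the logic is identical; Vertex is a stand-in layout):

        using System.Collections.Generic;

        struct Vertex { public float X, Y, Z; } // stand-in vertex layout

        static class MeshSplitter
        {
            // Split triangles into chunks; each chunk gets its own vertex list
            // with indices rebased so they stay valid within that chunk's VBO.
            public static List<(List<Vertex> Verts, List<int> Indices)> Split(
                Vertex[] vertices, int[] indices, int maxVertsPerChunk)
            {
                var chunks = new List<(List<Vertex>, List<int>)>();
                var verts = new List<Vertex>();
                var local = new List<int>();
                var remap = new Dictionary<int, int>(); // global -> local index

                for (int i = 0; i < indices.Length; i += 3)
                {
                    if (verts.Count + 3 > maxVertsPerChunk) // start a new chunk
                    {
                        chunks.Add((verts, local));
                        verts = new List<Vertex>();
                        local = new List<int>();
                        remap.Clear();
                    }
                    for (int k = 0; k < 3; k++)
                    {
                        int g = indices[i + k];
                        if (!remap.TryGetValue(g, out int l))
                        {
                            l = verts.Count;
                            verts.Add(vertices[g]);
                            remap[g] = l;
                        }
                        local.Add(l);
                    }
                }
                if (local.Count > 0) chunks.Add((verts, local));
                return chunks;
            }
        }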

    Read the article

  • DirectX and open libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries, more to get an idea of what exists than how they compare. For example, we have DirectX (graphics) and its open counterpart OpenGL, and DirectX (sound) and OpenAL. But there are other DirectX libraries that I can't find open alternatives to, such as:

    DirectInput
    DXGI
    Direct2D
    DirectWrite

    Does anyone have any lists or comparisons between the DirectX libraries and their open counterparts?

    Read the article

  • Slick 2d scrolling off screen

    - by Peter
    I have something scrolling in and out of the screen. When it goes off screen, I want it to scroll into the screen at another location. What I do is grab the last pixels at the screen's edge using g.copyArea and then g.drawImage them at the edge of the screen. Then I do a g.translate to create room for the next row, which comes next render cycle. My problem is that I get a single pixel row which is not copied onto the canvas, whereas I want each row to be added and then translated, so that the image that scrolled off screen is recreated on the other side of the screen. Here is my code; maybe there is a better way of doing this, and I'm open to any suggestions, because I'm totally stuck.

        @Override
        public void render(GameContainer gc, Graphics g) throws SlickException {
            //g.setClip(0, 0, 300, gc.getHeight());
            g.translate(0, y);
            g.drawImage(image, 0, 200);
            g.resetTransform();
            //g.clearClip();

            g.copyArea(rightImage, 0, gc.getHeight() - 1);
            g.drawImage(rightImage, 300, 0);
            g.translate(0, y);
            y = y + 3;
        }

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation into Unity. I set up a simple cylinder with two bones and an IK handle, and made a simple animation where the cylinder bends and returns to straight over 24 frames. Following that, I selected everything and baked all the bones and the IK (selecting all the animation curves in the graph editor), and even the cylinder. I saved the scene and then selected all and exported as FBX with animation and bake checked. In Unity, I can see the animation in the import preview, and when I load the model into the scene and press play (after assigning the controller), I can see the animation too. But when I try to script it and control the animation, nothing happens. As a test, I tried the following in the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");

    It doesn't even log true or false. My animation is called "bend", so just to try, I did the following, and nothing happens:

        animation.Play("bend");

    Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller, or is that an unnecessary step? Did I go wrong on the Maya part or the Unity part? Thanks for the help.
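
    One hedged observation: the legacy animation property only works when the rig is imported as Legacy, and since a controller was assigned, the clip is probably being driven by Mecanim instead; in that case the Animator API is the one to script against. A minimal sketch, assuming the controller contains a state named "bend":

        using UnityEngine;

        public class BendTest : MonoBehaviour
        {
            Animator animator;

            void Start()
            {
                animator = GetComponent<Animator>();
                animator.Play("bend"); // state name in the controller
            }

            void Update()
            {
                // True while "bend" is the current state on layer 0.
                bool bending = animator.GetCurrentAnimatorStateInfo(0).IsName("bend");
                Debug.Log(bending ? "Animation works" : "Animation not working");
            }
        }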

    Read the article

  • How to make room reflection using Cubemap

    - by MaT
    I am trying to use a cube map of the inside of a room to create reflections on the walls, ceiling and floor. But when I use the cube map, the reflected image is not correct: the point of view seems to be wrong. To get correct results I currently have to use a different cube map for each wall, the floor, and the ceiling, with each cube map rendered from the center of its plane looking into the room. Are there specialized techniques to achieve this effect? Thanks a lot!
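
    The standard name for the technique being reached for here is a parallax-corrected (box-projected) cube map: intersect the reflection ray with the room's bounding box and sample the cube map with the vector from the capture point to that intersection, so a single cube map reads correctly from every surface. A sketch of the correction math in C# (in practice this lives in a shader; boxMin, boxMax and probePos are assumptions about the scene):

        using System;
        using System.Numerics;

        static class Reflection
        {
            // worldPos: shaded point; reflDir: normalized reflection direction.
            public static Vector3 ParallaxCorrect(Vector3 worldPos, Vector3 reflDir,
                Vector3 boxMin, Vector3 boxMax, Vector3 probePos)
            {
                // Slab test: per-axis distance along reflDir to the room's walls.
                Vector3 t1 = (boxMax - worldPos) / reflDir;
                Vector3 t2 = (boxMin - worldPos) / reflDir;
                Vector3 tFar = Vector3.Max(t1, t2); // exit distance per axis
                float t = MathF.Min(tFar.X, MathF.Min(tFar.Y, tFar.Z));

                Vector3 hit = worldPos + reflDir * t; // where the ray meets the room
                return hit - probePos;                // cube map lookup direction
            }
        }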

    Read the article
