Search Results

Search found 21194 results on 848 pages for 'game state'.

Page 320/848 | < Previous Page | 316 317 318 319 320 321 322 323 324 325 326 327  | Next Page >

  • Why am I getting the same result when deleting target?

    - by XNA
    In the following code we use target in the function:

        moon.mouseEnabled = false;
        sky0.addChild(moon);
        addEventListener(MouseEvent.MOUSE_DOWN, onDrag, false, 0, true);
        addEventListener(MouseEvent.MOUSE_UP, onDrop, false, 0, true);

        function onDrag(evt:MouseEvent):void {
            evt.target.addChild(moon);
            evt.target.startDrag();
        }

        function onDrop(evt:MouseEvent):void {
            stopDrag();
        }

    But if I rewrite this code without evt.target, it still works. So what is the difference? Am I going to get errors later at run time because I didn't use target? If not, then why do some people use target a lot when it works without it?

        function onDrag(evt:MouseEvent):void {
            addChild(moon);
            startDrag();
        }

    Read the article

  • Limit the amount a camera can pitch

    - by ChocoMan
    I'm having problems trying to limit the range my camera can pitch. Currently my camera can pitch around a model without restriction, but I'm having a hard time finding the value of the degree/radian the camera is currently at after pitching. Here is what I have so far:

        // Moves camera with thumbstick
        Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX);

        // Pitch camera around model
        public void cameraPitch(float pitch) {
            pitchAngle = ModelLoad.camTarget - ModelLoad.CameraPos;
            axisPitch = Vector3.Cross(Vector3.Up, pitchAngle); // pitch constrained to model's orientation
            axisPitch.Normalize();

            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget;
        }

    I've tried restraining the Y-value of ModelLoad.CameraPos.Y, but doing so gave me some unwanted results.
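
    A common fix is not to read the angle back from the camera position at all, but to track the accumulated pitch yourself and clamp each increment before applying the axis-angle rotation. A minimal sketch (in Java purely for illustration; the class and its names are hypothetical, not from the question):

        // Keep a running total of applied pitch and clamp each new delta.
        class PitchLimiter {
            private float totalPitch = 0f;                           // radians applied so far
            private static final float MAX_PITCH = (float) Math.toRadians(60); // assumed limit

            // Returns the part of the requested delta that keeps the total in
            // range; feed the returned value to the axis-angle rotation.
            float clampPitchDelta(float requested) {
                float target = Math.max(-MAX_PITCH, Math.min(MAX_PITCH, totalPitch + requested));
                float allowed = target - totalPitch;
                totalPitch = target;
                return allowed;
            }
        }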

    Read the article

  • Sending an android.content.Context parameter to a function with JNI

    - by Ef Es
    I am trying to create a method that checks for an internet connection and needs a Context parameter. The JNIHelper allows me to call static functions with parameters, but I don't know how to "retrieve" the Cocos2d-x Activity class to use it as a parameter.

        public static boolean isNetworkAvailable(Context context) {
            boolean haveConnectedWifi = false;
            boolean haveConnectedMobile = false;
            ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo[] netInfo = cm.getAllNetworkInfo();
            for (NetworkInfo ni : netInfo) {
                if (ni.getTypeName().equalsIgnoreCase("WIFI"))
                    if (ni.isConnected()) haveConnectedWifi = true;
                if (ni.getTypeName().equalsIgnoreCase("MOBILE"))
                    if (ni.isConnected()) haveConnectedMobile = true;
            }
            return haveConnectedWifi || haveConnectedMobile;
        }

    and the C++ code is

        JniMethodInfo methodInfo;
        if (!JniHelper::getStaticMethodInfo(methodInfo, "my/app/TestApp", "isNetworkAvailable", "(android/content/Context;)V")) {
            // error
            return;
        }
        CCLog("Method found and loaded!");
        methodInfo.env->CallStaticVoidMethod(methodInfo.classID, methodInfo.methodID);
        methodInfo.env->DeleteLocalRef(methodInfo.classID);
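
    Two details in the C++ lookup look suspect (hedged guesses, not verified against the thread): a JNI object type needs an L prefix and the method returns boolean, so the descriptor should be "(Landroid/content/Context;)Z" rather than "(android/content/Context;)V", and the call passes no Context argument at all. A common way to sidestep both problems is to let the Java side fetch the context itself, so C++ can call a no-argument method. A sketch; Cocos2dxActivity.getContext() exists in recent cocos2d-x versions, but treat its availability as an assumption for yours:

        import android.content.Context;
        import org.cocos2dx.lib.Cocos2dxActivity;

        public class TestApp {
            // No-argument wrapper, callable from C++ with descriptor "()Z".
            public static boolean isNetworkAvailable() {
                Context context = Cocos2dxActivity.getContext();
                return isNetworkAvailable(context); // reuse the method above
            }
        }

    On the C++ side the lookup then becomes getStaticMethodInfo(methodInfo, "my/app/TestApp", "isNetworkAvailable", "()Z"), and the result is read with CallStaticBooleanMethod instead of CallStaticVoidMethod.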

    Read the article

  • OpenGL-ES: clearing the alpha of the FrameBufferObject

    - by MrDatabase
    This question is a follow-up to Texture artifacts on iPad. How does one "clear the alpha of the render texture frameBufferObject"? I've searched around here, StackOverflow and various search engines, but no luck. I've tried a few things, for example calling glClear(GL_COLOR_BUFFER_BIT) at the beginning of my render loop, but it doesn't seem to make a difference. Any help is appreciated since I'm still new to OpenGL. Cheers! P.S. I read on SO and in Apple's documentation that glClear should always be called at the beginning of the render loop. Agree? Disagree? Here's where I read this: http://stackoverflow.com/questions/2538662/how-does-glclear-improve-performance
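
    glClear(GL_COLOR_BUFFER_BIT) writes all four channels, which is why it makes no visible difference here. One common way to clear only alpha (a sketch, not from the thread) is to mask off the RGB channels around the clear. Shown with Android's GLES20 Java bindings purely for illustration; the identical calls exist in the iOS C API:

        import android.opengl.GLES20;

        // Clear only the alpha channel of the currently bound framebuffer.
        void clearAlpha() {
            GLES20.glColorMask(false, false, false, true); // only alpha is writable
            GLES20.glClearColor(0f, 0f, 0f, 0f);           // desired alpha value: 0
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            GLES20.glColorMask(true, true, true, true);    // restore normal writes
        }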

    Read the article

  • Why does multiplying texture coordinates scale the texture?

    - by manning18
    I'm having trouble visualizing this geometrically: why does multiplying the U,V coordinates of a texture coordinate have the effect of scaling the texture by that factor? E.g. if you scaled the texture coordinates by a factor of 3, doesn't this mean that if you had texture coordinates 0,1 and 0,2, you'd be sampling 0,3 and 0,6 in the U,V texture space of 0..1? How does that make it bigger? E.g. in HLSL: tex2D(textureSampler, TexCoords*3). Integers make it smaller, decimals make it bigger. I mean, I understand intuitively if you added to the U,V coordinates, as that is simply an offset into the sampling range, but what's the case with multiplication? I have a feeling when someone explains this to me I'm going to be feeling mighty stupid.
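
    A worked example may make it concrete (an illustration, not from the article): with the sampler's wrap mode set to repeat, multiplying UV by k makes the quad sweep through the 0..1 image k times, so the image tiles k times and each copy is 1/k the size; a fractional k sweeps through less than one copy, so the visible portion is magnified.

        // Illustrative sketch of the texel fetched for a scaled UV under repeat
        // wrapping: u is the quad's own 0..1 coordinate, k the multiplier.
        static double sampledU(double u, double k) {
            double scaled = u * k;              // e.g. u = 0.5, k = 3 -> 1.5
            return scaled - Math.floor(scaled); // wraps back into [0,1) -> 0.5
        }
        // Across u = 0..1 with k = 3, sampledU runs through 0..1 three full
        // times: the texture repeats 3 times, so each copy looks 3x smaller.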

    Read the article

  • Keeping Screen Aspect Ratio While Staying in Center

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* on how to keep a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and played it on screen. Here's the blueberry image example, and it's perfectly rounded: When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit. Please observe the snapshot below; you can see the blueberry is almost, but not quite, perfectly rounded: Now, when I tried the suggested code for aspect ratio, the perfect circle was retained, but another problem occurred: I was expecting a centered view, but instead it was moved to the right, leaving half the screen black. It looks like this: Here is my code using the suggested screen aspect ratio code:

        // Class fields
        // Ingredients needed for screen aspect ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH) / ((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;

        // render()
        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);
        // Resetting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height);
        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        // show()
        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

    Is this code useful for fixing the screen aspect ratio, or is it statically dependent on the actual device's width and height?

    * see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
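
    The half-black screen suggests the Viewport rectangle never receives a centering offset when it is recomputed. A sketch of the resize-time computation the linked post is built around (reconstructed from memory, so treat the details as assumptions rather than the post's exact code):

        // Letterbox/pillarbox: scale the virtual resolution to fit the screen,
        // then center it by splitting the leftover space evenly on both sides.
        public void resize(int width, int height) {
            float aspectRatio = (float) width / (float) height;
            float scale;
            float cropX = 0f, cropY = 0f;
            if (aspectRatio > ASPECT_RATIO) {            // screen wider: pillarbox
                scale = (float) height / VIRTUAL_HEIGHT;
                cropX = (width - VIRTUAL_WIDTH * scale) / 2f;
            } else {                                     // screen taller: letterbox
                scale = (float) width / VIRTUAL_WIDTH;
                cropY = (height - VIRTUAL_HEIGHT * scale) / 2f;
            }
            Viewport = new Rectangle(cropX, cropY, VIRTUAL_WIDTH * scale, VIRTUAL_HEIGHT * scale);
        }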

    Read the article

  • Rotate sphere in Javascript / three.js while moving on x/z axes

    - by kaipr
    I have a sphere/ball in three.js which I want to "roll" around on the x/z axes. For the z axis I could simply do this, no matter what the current x and y rotation is:

        sphere.roll_z = function(distance) {
            sphere.position.z += distance;
            sphere.rotation.x += distance > 0 ? 0.05 : -0.05;
        }

    But how can I roll it along the x axis? And how could I do roll_z properly? I've found a lot about quaternions and matrices, but I can't figure out how to use them properly to achieve my (rather simple) goal. I'm aware that I have to update multiple rotations and that I have to calculate how far to rotate the sphere to match the distance, but the "how" is the question. It's probably just a lack of mathematical skills which I should train, but a working example or short explanation would help a lot to start with.
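
    The general recipe for rolling in any ground direction (sketched in Java only because the math is language-independent): the rotation axis is the world up vector crossed with the movement direction, and the angle comes from arc length, distance = radius * angle. In three.js this would be applied with quaternion.setFromAxisAngle and premultiplied onto the sphere's quaternion, rather than added to Euler angles:

        // Hedged sketch: axis and angle for a sphere of radius r rolled by
        // `distance` along the unit ground direction (dx, dz).
        static double[] rollAxisAngle(double dx, double dz, double distance, double r) {
            // axis = up x direction = (dz, 0, -dx); pure rolling: angle = distance / r
            return new double[] { dz, 0.0, -dx, distance / r };
        }

    Sanity check against the question's roll_z: moving along +z gives the axis (1, 0, 0), a rotation about x, which matches the working case.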

    Read the article

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I've been trying to make sense of implementing a chunked level of detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.
    1: I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Does this happen during the initial mesh generation, or is there a separate algorithm which does it?
    2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?
    3a: How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this? (A raycast, maybe?)
    3b: Do the chunks calculate the camera distance themselves?
    4: Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32? Example below:
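
    On 3a and 3b, a sketch of the usual selection test may help (based on Thatcher Ulrich's chunked LOD paper, reconstructed here as an assumption rather than quoted): each chunk stores its own AABB plus a precomputed maximum geometric error, and the chunk itself compares its projected screen-space error against a pixel threshold, so no collision boxes or raycasts are involved:

        import java.util.List;

        // Minimal chunked-LOD selection sketch; all names are hypothetical.
        class Chunk {
            float[] aabbMin = new float[3], aabbMax = new float[3]; // chunk bounds
            float geometricError;  // max simplification error of this level, world units
            Chunk[] children;      // four children, or null at the finest level

            // Distance from the camera to this chunk's AABB (0 when inside it).
            float distanceTo(float[] cam) {
                float d2 = 0;
                for (int i = 0; i < 3; i++) {
                    float v = Math.max(aabbMin[i] - cam[i], Math.max(0, cam[i] - aabbMax[i]));
                    d2 += v * v;
                }
                return (float) Math.sqrt(d2);
            }

            // kFactor folds in screen resolution and field of view.
            void select(float[] cam, float kFactor, float maxPixelError, List<Chunk> out) {
                float screenError = geometricError * kFactor / Math.max(distanceTo(cam), 0.001f);
                if (screenError <= maxPixelError || children == null) {
                    out.add(this);   // this resolution suffices: draw this chunk
                } else {
                    for (Chunk c : children) c.select(cam, kFactor, maxPixelError, out);
                }
            }
        }

    On question 4: each chunk typically does keep the same grid resolution (say 32x32), which is exactly what makes a child cover a quarter of the area at twice the effective detail.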

    Read the article

  • Stack Overflow Error

    - by dylanisawesome1
    I recently created a recursive cave algorithm, and would like to have more extensive caves, but I get a stack overflow after recursing a couple of times. Any advice? Here's my code:

        for(int i = 0; i < 100; i++) {
            int rand = new Random().nextInt(100);
            if(rand <= 20) {
                if(curtile.bounds.y - 40 > 500 + new Random().nextInt(20))
                    digDirection(Direction.UP);
            }
            if(rand <= 40 && rand > 20) {
                if(curtile.bounds.y + 40 < m.height)
                    digDirection(Direction.DOWN);
            }
            if(rand <= 60 && rand > 40) {
                if(curtile.bounds.x - 40 > 0)
                    digDirection(Direction.LEFT);
            }
            if(rand <= 80 && rand > 60) {
                if(curtile.bounds.x + 40 < m.width)
                    digDirection(Direction.RIGHT);
            }
        }

        public void digDirection(Direction d) {
            if(new Random().nextInt(100) <= 10) {
                new Miner(curtile, map);
                // try {
                //     Thread.sleep(2);
                // } catch (InterruptedException e) {
                //     e.printStackTrace();
                // }
                // Tried this to avoid stack overflow. Didn't work.
            }
        }
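
    If each Miner spawns further Miners from inside its constructor, the call depth grows with the cave size, and sleeping cannot help because the stack frames never unwind. A standard remedy (a sketch; the Miner.dig() method standing in for whatever spawns new miners is hypothetical) is to replace the recursion with an explicit work queue, moving the growth from the call stack to the heap:

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        // Iterate instead of recursing: cave size is then limited by memory,
        // not by stack depth.
        void runMiners(Miner first) {
            Deque<Miner> pending = new ArrayDeque<>();
            pending.push(first);
            while (!pending.isEmpty()) {
                Miner m = pending.pop();
                List<Miner> spawned = m.dig(); // hypothetical: digs, returns children
                for (Miner child : spawned) pending.push(child);
            }
        }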

    Read the article

  • Doing an SNES Mode 7 (affine transform) effect in pygame

    - by 2D_Guy
    Is there such a thing as a short answer on how to do a Mode 7 / Mario Kart type effect in pygame? I have googled extensively; all the docs I can come up with are dozens of pages in other languages (asm, C) with lots of strange-looking equations and such. Ideally, I would like to find something explained more in English than in mathematical terms. I can use PIL or pygame to manipulate the image/texture, or whatever else is necessary. I would really like to achieve a Mode 7 effect in pygame, but I seem close to my wit's end. Help would be greatly appreciated. Any and all resources or explanations you can provide would be fantastic, even if they're not as simple as I'd like them to be. If I can figure it out, I'll write a definitive "how to do Mode 7 for newbies" page. Edit: Mode 7 doc: http://www.coranac.com/tonc/text/mode7.htm
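
    The short English version: for every screen row below the horizon, compute how far away the ground is (rows near the bottom are close, rows near the horizon are far), then walk across the row sampling the map at positions scaled by that distance and rotated by the camera's heading. A minimal sketch of that per-scanline loop, written in Java only for illustration (every parameter here is an assumed camera value; in pygame the same loop is usually driven through surfarray or PixelArray for speed):

        // Hedged Mode 7 sketch: project each screen pixel onto the ground plane.
        void renderMode7(int[][] screen, int screenW, int screenH, int horizonY,
                         double camX, double camZ, double camHeight, double yaw, double focal) {
            for (int y = horizonY + 1; y < screenH; y++) {
                double depth = camHeight * focal / (y - horizonY); // lower rows = nearer ground
                for (int x = 0; x < screenW; x++) {
                    double lateral = (x - screenW / 2.0) * depth / focal;
                    // Rotate the (lateral, depth) offset by the camera yaw, then translate.
                    double worldX = camX + lateral * Math.cos(yaw) - depth * Math.sin(yaw);
                    double worldZ = camZ + lateral * Math.sin(yaw) + depth * Math.cos(yaw);
                    screen[y][x] = sampleMap(worldX, worldZ);
                }
            }
        }

        // Placeholder: nearest-texel lookup into the map image (with wrapping).
        int sampleMap(double worldX, double worldZ) { return 0; }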

    Read the article

  • libgdx rotation (animation, arrays) issues and help needed

    - by johnny-b
    Well, I am a noob at Java and libGDX. I got the homing bullet working with the help of someone. Now I am smashing my head over how I can make it rotate so it faces the ball (which is the main character) when it goes around it or comes towards it. The bullet faces <-- and the code below is what I have done so far. I used sprites for the bullet and the animation method. Also, how do I make it an array/ArrayList (whichever is best) so I can have multiple bullets at random or placed positions? I tried many things and nothing worked. :( Thank you for the help.

        // Below is the bullet, or enemy if you want to call it that.
        public class Bullet extends Sprite {
            public static final float BULLET_HOMING = 6000;
            public static final float BULLET_SPEED = 300;
            private Vector2 velocity;
            private float lifetime;

            public Bullet(float x, float y) {
                velocity = new Vector2(0, 0);
                setPosition(x, y);
            }

            public void update(float delta) {
                float targetX = GameWorld.getBall().getX();
                float targetY = GameWorld.getBall().getY();
                float dx = targetX - getX();
                float dy = targetY - getY();
                float distToTarget = (float) Math.sqrt(dx * dx + dy * dy);
                dx /= distToTarget;
                dy /= distToTarget;
                dx *= BULLET_HOMING;
                dy *= BULLET_HOMING;
                velocity.x += dx * delta;
                velocity.y += dy * delta;
                float vMag = (float) Math.sqrt(velocity.x * velocity.x + velocity.y * velocity.y);
                velocity.x /= vMag;
                velocity.y /= vMag;
                velocity.x *= BULLET_SPEED;
                velocity.y *= BULLET_SPEED;
                Vector2 v = velocity.cpy().scl(delta);
                setPosition(getX() + v.x, getY() + v.y);
                setOriginCenter();
                setRotation(velocity.angle());
                lifetime += delta;
                setRegion(AssetLoader.bulletAnimation.getKeyFrame(lifetime));
            }
        }

        // This is where I load the images.
        public class AssetLoader {
            public static Animation bulletAnimation;
            public static Sprite bullet1, bullet2;

            public static void load() {
                texture = new Texture(Gdx.files.internal("SpriteN1.png"));
                texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
                bullet1 = new Sprite(texture, 380, 350, 45, 20);
                bullet1.flip(false, true);
                bullet2 = new Sprite(texture, 425, 350, 45, 20);
                bullet2.flip(false, true);
                Sprite[] bullets = { bullet1, bullet2 };
                bulletAnimation = new Animation(0.06f, bullets);
                bulletAnimation.setPlayMode(Animation.PlayMode.LOOP);
            }

            public static void dispose() {
                // We must dispose of the texture when we are finished.
                texture.dispose();
            }
        }

        // This is for the rendering of the images, etc.
        public class GameRenderer {
            private Bullet bullet;
            private Ball ball;

            public GameRenderer(GameWorld world) {
                myWorld = world;
                cam = new OrthographicCamera();
                cam.setToOrtho(true, 480, 320);
                batcher = new SpriteBatch();
                // Attach batcher to camera
                batcher.setProjectionMatrix(cam.combined);
                shapeRenderer = new ShapeRenderer();
                shapeRenderer.setProjectionMatrix(cam.combined);
                // Call helper methods to initialize instance variables
                initGameObjects();
                initAssets();
            }

            private void initGameObjects() {
                ball = GameWorld.getBall();
                bullet = myWorld.getBullet();
                scroller = myWorld.getScroller();
            }

            private void initAssets() {
                ballAnimation = AssetLoader.ballAnimation;
                bulletAnimation = AssetLoader.bulletAnimation;
            }

            public void render(float runTime) {
                Gdx.gl.glClearColor(0, 0, 0, 1);
                Gdx.gl.glClear(GL30.GL_COLOR_BUFFER_BIT);
                batcher.begin();
                // Disable transparency; this is good for performance when
                // drawing images that do not require transparency.
                batcher.disableBlending();
                // The ball needs transparency, so we enable that again.
                batcher.enableBlending();
                batcher.draw(AssetLoader.ballAnimation.getKeyFrame(runTime),
                    ball.getX(), ball.getY(), ball.getWidth(), ball.getHeight());
                batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime),
                    bullet.getX(), bullet.getY());
                // End SpriteBatch
                batcher.end();
            }
        }

        // This is to load the objects on the screen, I guess.
        public class GameWorld {
            public static Ball ball;
            private Bullet bullet;
            private ScrollHandler scroller;

            public GameWorld() {
                ball = new Ball(480, 273, 32, 32);
                bullet = new Bullet(10, 10);
                scroller = new ScrollHandler(0);
            }

            public void update(float delta) {
                ball.update(delta);
                bullet.update(delta);
                scroller.update(delta);
            }

            public static Ball getBall() { return ball; }
            public ScrollHandler getScroller() { return scroller; }
            public Bullet getBullet() { return bullet; }
        }

    So there is the whole thing. The images are loaded via the AssetLoader, then used by the GameRenderer and GameWorld via the Bullet class. I am guessing that is how it is; sorry, I'm a newbie, so still learning. Thank you in advance for the help or any advice.
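
    Two hedged suggestions, not from the thread. First, setRotation(velocity.angle()) assumes art that faces right, while this art faces left, so drawing with a fixed 180-degree offset should line the nose up with the velocity. Second, libGDX's own Array class handles multiple bullets cleanly (import com.badlogic.gdx.utils.Array):

        // In Bullet.update(), compensate for the left-facing artwork:
        setRotation(velocity.angle() + 180f);

        // In GameWorld, replace the single field with a collection:
        Array<Bullet> bullets = new Array<Bullet>();
        bullets.add(new Bullet(10, 10));   // place as many as needed,
        bullets.add(new Bullet(200, 50));  // at random or fixed positions

        public void update(float delta) {
            ball.update(delta);
            for (Bullet b : bullets) b.update(delta);
            scroller.update(delta);
        }

    The renderer then loops over the same Array instead of drawing one bullet.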

    Read the article

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake 3 is handled using a keybinding relay, whereby each keypress is matched against a 'binding' which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing an input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command. This seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed at a later date is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!

    Read the article

  • Collision detection of convex shapes on voxel terrain

    - by Dave
    I have some standard convex shapes (cubes, capsules) on voxel terrain. It is very easy to detect single-vertex collisions, but it becomes computationally expensive when many vertices are involved. To clarify: currently my algorithm represents a cube as multiple vertices covering every face of the cube, not just the corners. This is because the cubes can be much bigger than the voxels, so multiple sample points (vertices) are required (the distance between sample points must be at least the width of a voxel). This very rapidly becomes intractable. It would be great if there were some standard algorithm(s) for collision detection between convex shapes and arbitrary voxel-based terrain (as there is with OBBs and the separating axis theorem, etc.). Any help much appreciated.
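
    One standard trick (offered as a sketch, not a named published algorithm): flip the iteration around. Instead of sampling points on the shape, enumerate only the voxels overlapped by the shape's AABB and run a cheap convex-versus-box test (separating axis works, since a voxel is just a small AABB) against each solid one. The cost then scales with the shape's volume in voxels rather than with sampling density. The helpers isSolid and testConvexVsBox below are assumed, not defined:

        // Gather candidate solid voxels under a shape's world-space AABB.
        void collectContacts(float[] min, float[] max, float voxelSize) {
            int x0 = (int) Math.floor(min[0] / voxelSize), x1 = (int) Math.floor(max[0] / voxelSize);
            int y0 = (int) Math.floor(min[1] / voxelSize), y1 = (int) Math.floor(max[1] / voxelSize);
            int z0 = (int) Math.floor(min[2] / voxelSize), z1 = (int) Math.floor(max[2] / voxelSize);
            for (int x = x0; x <= x1; x++)
                for (int y = y0; y <= y1; y++)
                    for (int z = z0; z <= z1; z++)
                        if (isSolid(x, y, z))
                            testConvexVsBox(x, y, z); // SAT: shape vs this voxel's AABB
        }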

    Read the article

  • Unity3d: calculate the result of a transform without modifying the Transform object itself

    - by Heisenbug
    I'm in the following situation: I need to move an object in some way, basically rotating it around its parent's local position, or translating it in its parent's local space (I know how to do this). The amount of rotation and translation is known at runtime (it depends on several factors: the speed of the object, environment factors, etc.). The problem is the following: I can perform this transformation only if the resulting position of the transformed object fits some criteria. An example could be this: the distance between the position before and after the transformation must be less than a given threshold. (Actually the conditions could be several and more complex.) The problem is that if I use the Transform.Rotate and Transform.Translate methods of my GameObject, I will lose the original Transform values. I think I can't copy the original Transform using Instantiate, for performance reasons. How can I perform such a task? I think I have more or less two possibilities:
    First: don't modify the GameObject's position through Transform. Calculate what the position would be after the transform. If the position is legal, modify the transform through Translate and Rotate.
    Second: store the original transform somewhere. Transform the object using Translate and Rotate. If the transformed position is illegal, restore the original one.
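
    The first option usually needs no copying at all, because the would-be position can be computed with plain vector math before anything is mutated; in Unity that is Quaternion.AngleAxis(angle, axis) * (pos - pivot) + pivot on plain Vector3 values, which touches no Transform. A language-neutral sketch of the same computation (Rodrigues' rotation, written in Java only for illustration):

        // Compute the candidate position purely functionally, validate it, and
        // only then commit it to the real transform.
        static double[] rotateAround(double[] p, double[] pivot, double[] axisUnit, double angleRad) {
            double[] v = { p[0] - pivot[0], p[1] - pivot[1], p[2] - pivot[2] };
            double c = Math.cos(angleRad), s = Math.sin(angleRad);
            double dot = axisUnit[0]*v[0] + axisUnit[1]*v[1] + axisUnit[2]*v[2];
            double[] cross = {
                axisUnit[1]*v[2] - axisUnit[2]*v[1],
                axisUnit[2]*v[0] - axisUnit[0]*v[2],
                axisUnit[0]*v[1] - axisUnit[1]*v[0]
            };
            double[] r = new double[3];
            for (int i = 0; i < 3; i++)
                r[i] = v[i]*c + cross[i]*s + axisUnit[i]*dot*(1 - c) + pivot[i];
            return r;
        }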

    Read the article

  • OpenGL font rendering

    - by DEElekgolo
    I am trying to make an OpenGL text rendering class using FreeType. I was originally following this code, but it doesn't seem to work out for me. I get nothing regardless of what parameters I pass to Draw().

        class Font {
        public:
            Font() {
                if (FT_Init_FreeType(&ftLibrary)) {
                    printf("Could not initialize FreeType library\n");
                    return;
                }
                glGenBuffers(1, &iVerts);
            }

            bool Load(std::string sFont, unsigned int Size = 12.0f) {
                if (FT_New_Face(ftLibrary, sFont.c_str(), 0, &ftFace)) {
                    printf("Could not open font: %s\n", sFont.c_str());
                    return true;
                }
                iSize = Size;
                FT_Set_Pixel_Sizes(ftFace, 0, (int)iSize);
                FT_GlyphSlot gGlyph = ftFace->glyph;

                // Generating the texture atlas.
                // Rather than some amazing rectangular packing method, I'm just going
                // to have one long strip of letters with the height being that of the font size.
                int width = 0;
                int height = 0;
                for (int i = 32; i < 128; i++) {
                    if (FT_Load_Char(ftFace, i, FT_LOAD_RENDER)) {
                        printf("Error rendering letter %c for font %s.\n", i, sFont.c_str());
                    }
                    width += gGlyph->bitmap.width;
                    height += std::max(height, gGlyph->bitmap.rows);
                }

                // Generate the openGL texture
                glActiveTexture(GL_TEXTURE0);
                // If a texture exists then delete it.
                iTexture ? glDeleteBuffers(1, &iTexture) : 0;
                glGenTextures(1, &iTexture);
                glBindTexture(GL_TEXTURE_2D, iTexture);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, 0);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

                // Load the glyphs and set the glyph data
                int x = 0;
                for (int i = 32; i < 128; i++) {
                    if (FT_Load_Char(ftFace, i, FT_LOAD_RENDER)) {
                        // If it can't load the character
                        continue;
                    }
                    // Load the glyph map into the texture
                    glTexSubImage2D(GL_TEXTURE_2D, 0, x, 0,
                        gGlyph->bitmap.width, gGlyph->bitmap.rows,
                        GL_ALPHA, GL_UNSIGNED_BYTE, gGlyph->bitmap.buffer);
                    // Move the "pen" down the strip
                    x += gGlyph->bitmap.width;
                    chars[i].ax = (float)(gGlyph->advance.x >> 6);
                    chars[i].ay = (float)(gGlyph->advance.y >> 6);
                    chars[i].bw = (float)gGlyph->bitmap.width;
                    chars[i].bh = (float)gGlyph->bitmap.rows;
                    chars[i].bl = (float)gGlyph->bitmap_left;
                    chars[i].bt = (float)gGlyph->bitmap_top;
                    chars[i].tx = (float)x / width;
                }
                printf("Loaded font: %s\n", sFont.c_str());
                return true;
            }

            void Draw(std::string sString, Vector2f vPos = Vector2f(0,0), Vector2f vScale = Vector2f(1,1)) {
                struct pPoint {
                    pPoint() { x = y = s = t = 0; }
                    pPoint(float a, float b, float c, float d) { x = a; y = b; s = c; t = d; }
                    float x, y;
                    float s, t;
                };
                pPoint* cCoordinates = new pPoint[6 * sString.length()];
                int n = 0;
                for (const char *p = sString.c_str(); *p; p++) {
                    float x2 = vPos.x() + chars[*p].bl * vScale.x();
                    float y2 = -vPos.y() - chars[*p].bt * vScale.y();
                    float w = chars[*p].bw * vScale.x();
                    float h = chars[*p].bh * vScale.y();
                    float x = vPos.x() + chars[*p].ax * vScale.x();
                    float y = vPos.y() + chars[*p].ay * vScale.y();
                    // Skip characters with no pixels (still advances, though)
                    if (!w || !h) {
                        continue;
                    }
                    // Triangle one
                    cCoordinates[n++] = pPoint(x2,     -y2,     chars[*p].tx,                    0);
                    cCoordinates[n++] = pPoint(x2 + w, -y2,     chars[*p].tx + chars[*p].bw / w, 0);
                    cCoordinates[n++] = pPoint(x2,     -y2 - h, chars[*p].tx,                    chars[*p].bh / h);
                    // Triangle two
                    cCoordinates[n++] = pPoint(x2 + w, -y2,     chars[*p].tx + chars[*p].bw / w, 0);
                    cCoordinates[n++] = pPoint(x2,     -y2 - h, chars[*p].tx,                    chars[*p].bh / h);
                    cCoordinates[n++] = pPoint(x2 + w, -y2 - h, chars[*p].tx + chars[*p].bw / w, chars[*p].bh / h);
                }
                glBindBuffer(GL_ARRAY_BUFFER, iVerts);
                glBindBuffer(GL_TEXTURE_2D, iTexture);
                // Vertices
                glEnableClientState(GL_VERTEX_ARRAY);
                glVertexPointer(2, GL_FLOAT, sizeof(pPoint), &cCoordinates[0].x);
                // TexCoord 0
                glClientActiveTexture(GL_TEXTURE0);
                glEnableClientState(GL_TEXTURE_COORD_ARRAY);
                glTexCoordPointer(2, GL_FLOAT, sizeof(pPoint), &cCoordinates[0].s);
                glCullFace(GL_NONE);
                glBufferData(GL_ARRAY_BUFFER, 6 * sString.length(), cCoordinates, GL_DYNAMIC_DRAW);
                glDrawArrays(GL_TRIANGLES, 0, n);
                glCullFace(GL_BACK);
                glBindBuffer(GL_ARRAY_BUFFER, 0);
                glBindBuffer(GL_TEXTURE_2D, 0);
                glDisableClientState(GL_VERTEX_ARRAY);
                glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            }

            ~Font() {
                glDeleteBuffers(1, &iVerts);
                glDeleteBuffers(1, &iTexture);
            }

        private:
            unsigned int iSize;
            // openGL texture atlas
            unsigned int iTexture;
            // openGL geometry buffer
            unsigned int iVerts;
            FT_Library ftLibrary;
            FT_Face ftFace;
            struct Character {
                float ax, ay;   // Advance
                float bw, bh;   // Bitmap size
                float bl, bt;   // Bitmap left and top
                float tx;
            } chars[128];
        };
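
    A few hedged guesses at likely culprits in the code above, found by inspection rather than verified against the thread (kept in the snippet's own C++):

        // 1. The atlas height accumulates 96 times over; '=' was probably intended:
        height = std::max(height, (int)gGlyph->bitmap.rows);

        // 2. GL_TEXTURE_2D is not a buffer target, so that bind does nothing;
        //    textures are bound with:
        glBindTexture(GL_TEXTURE_2D, iTexture);

        // 3. glBufferData takes a size in bytes, so most of the vertex data is
        //    never uploaded:
        glBufferData(GL_ARRAY_BUFFER, 6 * sString.length() * sizeof(pPoint),
                     cCoordinates, GL_DYNAMIC_DRAW);

        // 4. GL_NONE is not a valid glCullFace argument; to disable culling use:
        glDisable(GL_CULL_FACE);

        // 5. chars[i].tx is computed after x has already advanced past the glyph,
        //    so every entry points at the next glyph's spot; record tx before
        //    the "x += gGlyph->bitmap.width;" line.

    There may be more (for instance, the gl*Pointer calls pass client-memory pointers while a buffer object is bound, where byte offsets are expected), but the five above are enough to explain a blank screen.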

    Read the article

  • OpenGL ES Basic Fragment Shader help with transparency

    - by Chris
    I have just spent my first half hour playing with the shader language. I have modified the basic program I have which renders the texture, to allow me to colour the texture.

        varying vec2 texCoord;
        uniform sampler2D texSampler;

        /* Given the texture coordinates, our pixel shader grabs the corresponding
         * color from the texture. */
        void main() {
            //gl_FragColor = texture2D(texSampler, texCoord);
            gl_FragColor = vec4(0,1,0,1) * vec4(texture2D(texSampler, texCoord).xyz, 1);
        }

    I have noticed how this affects my transparent textures, and I believe I am losing the alpha channel, which would explain why previously transparent areas appear totally black. If I use the following line instead, I am shown the transparent areas:

        gl_FragColor = vec4(0,1,0,1) * vec4(texture2D(texSampler, texCoord).aaa, 1);

    How can I retain the transparency after this modification to the colour? I have seen various things about a .w property, and also luminance, but my tweaks with those and the .aaa property are not working XD
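
    The diagnosis in the question looks right: building the color as vec4(rgb, 1) forces alpha to 1 everywhere, so transparent texels become opaque black. A hedged fix, staying in the snippet's GLSL: multiply the whole RGBA texel by the tint, so the sampled alpha passes through (the tint's alpha of 1 leaves it unchanged):

        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0) * texture2D(texSampler, texCoord);

    On the .w question: .w, .a and .q are three names for the same fourth component (the xyzw, rgba and stpq swizzle sets), so .w is identical to .a here.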

    Read the article

  • Problem using glm::lookat

    - by omikun
    I am trying to rotate a sprite so it is always facing a 3D camera.

        // Object
        GLfloat vertexData[] = {
            //  X      Y     Z     U     V
             0.0f,  0.8f, 0.0f, 0.5f, 1.0f,
            -0.8f, -0.8f, 0.0f, 0.0f, 0.0f,
             0.8f, -0.8f, 0.0f, 1.0f, 0.0f,
        };

        // Per-frame transform
        glm::mat4 newTransform = glm::lookAt(glm::vec3(0), gCamera.position(), gCamera.up());
        shaders->setUniform("camera", gCamera.matrix());
        shaders->setUniform("model", newTransform);

        // In the vertex shader:
        gl_Position = camera * model * vec4(vert, 1);

    The object will track the camera if I move the camera up or down, but if I move the camera left/right (spin the camera around the object's y axis), it will rotate in the other direction, so I end up seeing its front twice and its back twice as I rotate around it 360 degrees. If I use -gCamera.up() instead, it will track the camera side to side, but spin in the opposite direction when I move the camera up/down. What am I doing wrong?
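
    A hedged explanation: glm::lookAt builds a view matrix, i.e. the inverse of the orientation a model should receive, which is why exactly one axis appears mirrored no matter how the up vector is flipped. One fix (a sketch, not verified against the thread) is to build the camera-to-sprite view rotation and invert its rotation block, staying with the snippet's GLM types:

        // View matrix looking from the camera at the sprite (here the origin);
        // inverting its rotation part yields the facing rotation for the model.
        glm::mat4 view  = glm::lookAt(gCamera.position(), glm::vec3(0), gCamera.up());
        glm::mat4 model = glm::inverse(glm::mat4(glm::mat3(view))); // rotation only
        shaders->setUniform("model", model);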

    Read the article

  • How to determine character's foot contact point on a uniform triangle mesh terrain?

    - by xenon
    For a terrain that is modelled by a heightmap with a uniform triangle mesh, what are some techniques I could use to determine the contact point of the foot of a character standing on the terrain? Since the terrain's Y values are altered by the heightmap, the ground is no longer flat. As the character moves on the terrain, it has to know which Y value its foot should be at. Conceptually, what are some methods and techniques to determine the contact point of the character's foot standing on the terrain?
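
    The standard technique is barycentric interpolation inside whichever triangle of the grid cell the foot is over. A sketch under stated assumptions (square cells of spacing cellSize, heights in a 2D array, every cell split along the same diagonal):

        // Exact surface height of a uniform-grid triangle mesh at world (x, z).
        float terrainHeight(float[][] h, float cellSize, float x, float z) {
            int gx = (int) Math.floor(x / cellSize);
            int gz = (int) Math.floor(z / cellSize);
            float u = x / cellSize - gx;   // 0..1 position within the cell
            float v = z / cellSize - gz;
            if (u + v <= 1f) {             // triangle touching corner (gx, gz)
                return h[gx][gz]
                     + u * (h[gx + 1][gz] - h[gx][gz])
                     + v * (h[gx][gz + 1] - h[gx][gz]);
            } else {                       // triangle touching corner (gx+1, gz+1)
                return h[gx + 1][gz + 1]
                     + (1 - u) * (h[gx][gz + 1] - h[gx + 1][gz + 1])
                     + (1 - v) * (h[gx + 1][gz] - h[gx + 1][gz + 1]);
            }
        }

    The contact point is then (x, terrainHeight(x, z), z), and a character controller typically snaps or springs the foot to that Y.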

    Read the article

  • multipass shadow mapping renderer in XNA

    - by Nick
    I want to implement a multipass renderer in XNA (additive blending combines the contributions from each light). I have the renderer working without any shadows, but when I try to add shadow mapping support I run into an issue with switching render targets to draw the shadow maps: when I switch render targets, I lose the contents of the backbuffer, which ruins the whole additive blending idea. For example:

        Draw() {
            DrawAmbientLighting()
            foreach (DirectionalLight) {
                DrawDirectionalShadowMap() // <-- I lose all previous lighting contributions
                                           //     when I switch to the shadow map render target here
                DrawDirectionalLighting()
            }
        }

    Is there any way around my issue? (I could render all the shadow maps first, but then I have to make and hold onto a render target for each light that casts a shadow. Is this the only way?)
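
    One hedged way around it, not from the thread: by default XNA 4 discards a surface's contents whenever you switch away from it, but that behavior is governed by RenderTargetUsage. Accumulating the lighting into a single render target created with RenderTargetUsage.PreserveContents lets one reusable shadow-map target be interleaved with the lighting passes; in the question's own pseudocode style:

        // lightAccumRT created once with RenderTargetUsage.PreserveContents
        Draw() {
            SetRenderTarget(lightAccumRT)
            DrawAmbientLighting()
            foreach (DirectionalLight) {
                DrawDirectionalShadowMap()    // one reusable shadow-map target
                SetRenderTarget(lightAccumRT) // contents survive: PreserveContents
                DrawDirectionalLighting()     // additive blend on top
            }
            SetRenderTarget(null)             // back to the backbuffer
            DrawFullscreenQuad(lightAccumRT)  // composite the accumulated light
        }

    PreserveContents costs some performance on some platforms, but it trades one extra render target for the per-light targets the question wants to avoid.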

    Read the article

  • Slick2D: Animation not being parsed from spritesheet correctly

    - by user2066880
    I have a 960x960 spritesheet with each tile being 192x192. I initialized my spritesheet and animation like so:

        spritesheet = new SpriteSheet("resources/spritesheets/player.png", 192, 192);
        walkingLeft = new Animation(spritesheet, 3, 0, 0, 1, true, 20, true);

    When I attempt to render the animation, I get a java.lang.ArrayIndexOutOfBoundsException: -1 error. This error doesn't occur when I'm creating an animation from images in the same row. Therefore, I'm assuming the error is caused by the way Slick handles horizontal scanning (going to the next row after reaching the end).
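
    A hedged reading of the crash, from memory of the Slick source rather than verified: that constructor iterates a rectangle of frames with nested loops from (x1, y1) to (x2, y2), so with x1 = 3 and x2 = 0 the loop body never runs, the animation ends up with zero frames, and rendering an empty animation is what throws the index -1 error. Building the sequence explicitly avoids the issue and allows row-wrapping orders:

        // Frames listed by (column, row) on the 5x5 sheet of 192px tiles.
        Animation walkingLeft = new Animation();
        walkingLeft.setAutoUpdate(true);
        walkingLeft.addFrame(spritesheet.getSprite(3, 0), 20); // column 3, row 0
        walkingLeft.addFrame(spritesheet.getSprite(4, 0), 20);
        walkingLeft.addFrame(spritesheet.getSprite(0, 1), 20); // wraps to row 1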

    Read the article

  • Multiple audio sources on a single gameObject in unity

    - by angryInsomniac
    So, I have an audio system set up wherein I have loaded all my audio clips centrally and play them on demand by passing the requesting AudioSource into the sound manager. However, there is a complication: if I want to overlay multiple looping sounds, I need to have multiple audio sources on an object, which is fine, so I created two in my script, instantiated them, played my clips on them, and then the world went crazy. For some reason, when I create two audio sources on an object, only the latest one is ever used. Even if I explicitly keep the objects separated, playing a clip on one or the other plays it on the last one that was created. Furthermore, either this last one is not created in the right place or it somehow messes with the rolloff rules, because I can hear it all across my level. Having just one source works fine, but putting a second one on the object causes things to go batshit insane. Does anyone know the reason for, or a solution to, this? Some pseudocode:

        guardSoundsSource = (AudioSource)gameObject.AddComponent("AudioSource");
        guardSoundsSource.name = "Guard_Sounds_source";
        // Set up this source

        guardThrusterSource = (AudioSource)gameObject.AddComponent("AudioSource");
        guardThrusterSource.name = "Guard_Thruster_Source";
        // Set up this source

        // Play using the custom sound manager
        soundMan.soundMgr.playOnSource(guardSoundsSource, "Guard_Idle_loop", true, GameManager.Manager.PlayerType);

    This method prints out the name of the source the sound was to be played on, and it always shows "Guard_Thruster_Source", even for "Guard_Idle_loop", even though I clearly told it to use "Guard_Sounds_source".
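
    A hedged diagnosis based on Unity's API semantics rather than the original thread: a Component's name property is simply its GameObject's name, so setting .name on the second AudioSource renames the shared object, and any name-based logging or lookup will then report "Guard_Thruster_Source" for both sources even when the correct one is playing. Keeping distinct component references, or giving each source its own child object (which also gives each one an independent 3D position and rolloff), removes the ambiguity; a sketch at the same step in the setup code:

        // One child object per looping source: independent identity and rolloff.
        GameObject soundsObj = new GameObject("Guard_Sounds_source");
        soundsObj.transform.parent = transform;
        soundsObj.transform.localPosition = Vector3.zero;
        AudioSource guardSoundsSource = soundsObj.AddComponent<AudioSource>();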

    Read the article

  • Is there an easy and automatic way of converting a Windows XNA project into a Monotouch Monogame project?

    - by Krumelur
    I have just started with XNA development on Windows. But as I'm a fan of iOS, I had to try porting my test code over to MonoTouch on the Mac. I used these instructions: http://www.facepuncher.com/blogs/10parameters/?p=42 But this is so much (manual) work! And it really doesn't answer open topics, like: why would I copy all the XNB files and, in addition, all the resources, like PNGs? Is there maybe a tool that automatically converts a Windows XNA project into a MonoTouch iOS project, or at least creates the correct folder structure?

    Read the article

  • How do I get a rotated sprite to move left or right?

    - by rphello101
    Using Java/Slick2D, I'm using the mouse to rotate a sprite on the screen and the directional keys (in this case, WASD) to move the sprite. Forwards and backwards is easy: just position += cos(ang) * speed or position -= cos(ang) * speed. But how do I get the sprite to move left or right? I'm thinking it has something to do with adding 90 degrees to the angle or something. Any ideas? Rotation code:

        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x + sprite.image.getWidth() / 2;
        int pY = sprite.y + sprite.image.getHeight() / 2;
        double mAng;
        if (mX != pX) {
            mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX));
            if (mAng == 0 && mX <= pX)
                mAng = 180;
        } else {
            if (mY > pY) mAng = 90;
            else mAng = 270;
        }
        sprite.angle = mAng;
        sprite.image.setRotation((float) mAng);

    And the movement code (delta is the change in time):

        Input input = gc.getInput();
        Vector2f direction = new Vector2f();
        Vector2f velocity = new Vector2f();
        direction.x = (float) Math.cos(Math.toRadians(sprite.angle));
        direction.y = (float) Math.sin(Math.toRadians(sprite.angle));
        if (direction.length() > 0)
            direction = direction.normalise(); // On a separate note, what does this line of code do?
        velocity.x = (float) (direction.x * sprite.moveSpeed);
        velocity.y = (float) (direction.y * sprite.moveSpeed);
        if (input.isKeyDown(sprite.up)) {
            sprite.x += velocity.x * delta;
            sprite.y += velocity.y * delta;
        }
        if (input.isKeyDown(sprite.down)) {
            sprite.x -= velocity.x * delta;
            sprite.y -= velocity.y * delta;
        }
        if (input.isKeyDown(sprite.left)) {
            // ???
        }
        if (input.isKeyDown(sprite.right)) {
            // ???
        }
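
    The 90-degree hunch is right: the strafe direction is the facing direction rotated a quarter turn, which in 2D is just a component swap and a sign flip. A sketch filling in the two gaps (whether left is + or - depends on whether the y axis points up or down on screen, so the signs may need swapping):

        // Perpendicular of (cos a, sin a) is (-sin a, cos a): facing rotated 90 degrees.
        float perpX = -direction.y;
        float perpY =  direction.x;
        if (input.isKeyDown(sprite.left)) {
            sprite.x += perpX * sprite.moveSpeed * delta;
            sprite.y += perpY * sprite.moveSpeed * delta;
        }
        if (input.isKeyDown(sprite.right)) {
            sprite.x -= perpX * sprite.moveSpeed * delta;
            sprite.y -= perpY * sprite.moveSpeed * delta;
        }

    As for the side note: normalise() scales a vector to length 1 so speed is the same in every direction; since direction is already (cos a, sin a), which has length 1, the call here is harmless but redundant.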

    Read the article

  • Matrix multiplication - Scene Graphs

    - by bgarate
    I wrote a MatrixStack class in C# to use in a scene graph. To get the world matrix for an object, I am supposed to use:

        WorldMatrix = ParentWorld * LocalTransform

    But, in fact, it only works as expected when I do it the other way around:

        WorldMatrix = LocalTransform * ParentWorld

    My code is:

        public class MatrixStack {
            Stack<Matrix> stack = new Stack<Matrix>();
            Matrix result = Matrix.Identity;

            public void PushMatrix(Matrix matrix) {
                stack.Push(matrix);
                result = matrix * result;
            }

            public Matrix PopMatrix() {
                result = Matrix.Invert(stack.Peek()) * result;
                return stack.Pop();
            }

            public Matrix Result {
                get { return result; }
            }

            public void Clear() {
                stack.Clear();
                result = Matrix.Identity;
            }
        }

    Why does it work this way and not the other? Thanks!
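
    A hedged explanation from XNA's documented conventions rather than the thread: XNA multiplies row vectors on the left of the matrix, v' = v * M, so the transform applied first must be written first, and WorldMatrix = LocalTransform * ParentWorld is the correct order there. The ParentWorld * LocalTransform form comes from the column-vector convention used in most textbooks and in OpenGL, where v' = M * v and composition reads right to left:

        v' = v * Local * Parent    // row vectors (XNA): local first, then parent
        v' = Parent * Local * v    // column vectors (GL-style): same pipeline, reversed spelling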

    Read the article

  • Rotate a particle system

    - by Blueski
    Languages / Libraries in use: C++, OpenGL, GLUT Okay, here's the deal. I've got a particle system which shoots out alpha blended textures to produce a flame. The system only keeps track of very basic things such as, time alive, life, xyz and spread. The direction in which the flames are currently moving in is purely based on other things which are going on in my code ( I assume ). My goal however, is to attach the flame to the camera (DONE) and have the flame pointing in the direction my camera is facing (NOT WORKING). I've tried glRotate for both x,y,z and I can't get it to work properly. I'm currently using gluLookAt to move the camera, and get the flame to follow the XYZ of the camera by calling glTranslatef(camX, camY - offset, camZ); Any suggestions on how I can rotate the direction of the flame with the camera would be greatly appreciated. Heres an image of what I've got: http://i.imgur.com/YhV4w.png Notes: Crosshair depicts where camera is facing if I turn the camera, flame doesn't follow the crosshair Also asked here: http://stackoverflow.com/questions/9560396/rotate-a-particle-system but was referred here

    Read the article
