Search Results

Search found 26774 results on 1071 pages for 'distributed development'.

  • Realistic Jumping

    - by Seth Taddiken
    I want to make my character's jumping more realistic. This is what I've tried so far, but it doesn't feel very realistic when the player jumps. I want the character to jump up at a certain speed, slow down as it nears the top, stop for a moment (about one frame), then fall back down, moving faster and faster as it falls. I've been trying to decrease the upward speed by one each frame until it becomes negative so the character falls faster and faster, but it doesn't work very well:

        public bool isPlayerDown = true;
        public bool maxJumpLimit = false;
        public bool gravityReality = false;
        public bool leftWall = false;
        public bool rightWall = false;
        public float x = 76f;
        public float y = 405f;

        if (Keyboard.GetState().IsKeyDown(up) && this.isPlayerDown == true && this.y <= 405f)
        {
            this.isPlayerDown = false;
        }
        if (this.isPlayerDown == false && this.maxJumpLimit == false)
        {
            this.y = this.y - 6;
        }
        if (this.y <= 200)
        {
            this.maxJumpLimit = true;
        }
        if (this.isPlayerDown == true)
        {
            this.y = 405f;
            this.isPlayerDown = true;
            this.maxJumpLimit = false;
        }
        if (this.gravityReality == true)
        {
            this.y = this.y + 2f;
            this.gravityReality = false;
        }
        if (this.maxJumpLimit == true)
        {
            this.y = this.y + 2f;
            this.gravityReality = true;
        }
        if (this.y > 405f)
        {
            this.isPlayerDown = true;
        }
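
    A standard way to get this arc is to track a vertical velocity and add a constant gravity to it every frame, instead of flipping between fixed step sizes. A minimal sketch of that idea (the constants and names here are illustrative, not from the question; screen Y grows downward, and 405 is the ground line from the question):

        float y = 405f;           // vertical position
        float velocityY = 0f;     // positive = moving down
        bool onGround = true;

        const float JumpSpeed = -12f;   // initial upward velocity
        const float Gravity = 0.5f;     // added to velocity each frame
        const float GroundY = 405f;

        void UpdateJump()
        {
            if (Keyboard.GetState().IsKeyDown(Keys.Up) && onGround)
            {
                velocityY = JumpSpeed;   // launch upward
                onGround = false;
            }

            if (!onGround)
            {
                velocityY += Gravity;    // decelerates the ascent, then accelerates the fall
                y += velocityY;

                if (y >= GroundY)        // landed
                {
                    y = GroundY;
                    velocityY = 0f;
                    onGround = true;
                }
            }
        }

    Near the apex the velocity passes through zero on its own, which gives the brief hang at the top without any special casing.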

  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials, and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position at which the shapes are drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1, 0, 0, 1) it makes my quad spin around. If I just do glRotatef(1, 0, 0, 0) it makes the quad smaller (further away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.
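
    For reference: glRotatef(angle, x, y, z) multiplies the current matrix by a rotation of angle degrees about the axis (x, y, z), and the axis is expected to be a direction vector, so (0, 0, 0) is degenerate, which is why that call produces odd results. With a normalized axis, c = cos(angle) and s = sin(angle), the matrix (as given on the glRotate man page) is

        R = \begin{pmatrix}
        x^2(1-c)+c  & xy(1-c)-zs & xz(1-c)+ys & 0 \\
        yx(1-c)+zs  & y^2(1-c)+c & yz(1-c)-xs & 0 \\
        xz(1-c)-ys  & yz(1-c)+xs & z^2(1-c)+c & 0 \\
        0 & 0 & 0 & 1
        \end{pmatrix}

    Both glRotatef and glTranslatef post-multiply the current matrix, so they transform everything drawn after the call; fixed-function OpenGL has no separate camera, and "moving the camera" is just applying the inverse transform to the whole scene.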

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflection up and down depending on the wave's height. The code in question:

        // calculating correction that shifts reflection up/down according to water wave Y position
        float4 projected_waveheight = mul(float4(input.positionWS.x, input.positionWS.y, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;
        projected_waveheight = mul(float4(input.positionWS.x, -0.8, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;
        reflection_disturbance.y = max(-0.15, waveheight_correction + reflection_disturbance.y);

    My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, so the water is rendered as if only the sky were there: that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem is that I cannot really pinpoint the logic behind it. It projects the actual world-space position of a point on the wave geometry, multiplies by -0.5, then projects the same point again, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code suggest it was derived by trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide):

        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;

    and then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them:

        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;

    Removing the divide by two, I see no difference in quality (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y; why is that? This redundancy and the seemingly arbitrary choices of -0.8 and -0.15 lead me to suspect this is a combination of heuristics and guesswork. Is there a logical underpinning to it, or is it just a desperate hack? Here is an exaggeration of the initial problem which the code fragment fixes, observed at the lowest tessellation level. Hopefully it might spark an idea I'm missing. The -0.8 might be a reference height from which to deduce how much to disturb the texture coordinates sampling the planarly reflected render, and -0.15 might be the lower bound, a safety measure.
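
    Reading the two projections together: writing y'(v) for the post-divide (NDC) y of world point v, the code computes (my reading, not stated in the source)

        \text{correction} = \tfrac{1}{2}\bigl( y'(x,\,-0.8,\,z) - y'(x,\,y_{\text{wave}},\,z) \bigr)

    i.e. half the screen-space height of the wave vertex above a fixed reference plane at y = -0.8, which is then used to slide the reflection sample downward, clamped from below at -0.15.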

  • How do I optimize searching for the nearest point?

    - by Rootosaurus
    For a little project of mine, I'm trying to implement a space colonization algorithm in order to grow trees. The current implementation of the algorithm works fine, but I have to optimize it to make generation faster. I work with 1 to 300K random attraction points per tree, and it takes a lot of time to compute and compare distances between attraction points and tree nodes in order to keep only the closest tree node for each attraction point. So I was wondering if solutions exist (I know they must) that avoid looping over every tree node for every attraction point to find the closest, and so on until the tree is finished.
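
    One standard fix is a spatial partition, so each attraction point only tests tree nodes in nearby cells rather than every node. A k-d tree works too, but a uniform grid is simpler and fits well here: space colonization only cares about nodes within the influence radius, so picking the cell size equal to that radius means a 3x3 block of cells covers every candidate. A rough sketch of the idea (types and names are illustrative, not from the question; Vector2 is XNA's or System.Numerics', both of which provide DistanceSquared):

        Dictionary<(int, int), List<Vector2>> grid = new Dictionary<(int, int), List<Vector2>>();
        float cellSize = 32f;   // illustrative; use the influence radius

        (int, int) CellOf(Vector2 p) =>
            ((int)Math.Floor(p.X / cellSize), (int)Math.Floor(p.Y / cellSize));

        void Insert(Vector2 node)
        {
            var key = CellOf(node);
            if (!grid.TryGetValue(key, out var list))
                grid[key] = list = new List<Vector2>();
            list.Add(node);
        }

        // Nearest tree node to an attraction point: scan only the 3x3 block
        // of cells around it instead of every node in the tree.
        Vector2? Nearest(Vector2 point)
        {
            var (cx, cy) = CellOf(point);
            Vector2? best = null;
            float bestDistSq = float.MaxValue;
            for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                    if (grid.TryGetValue((cx + dx, cy + dy), out var nodes))
                        foreach (var node in nodes)
                        {
                            float d = Vector2.DistanceSquared(point, node);
                            if (d < bestDistSq) { bestDistSq = d; best = node; }
                        }
            return best;
        }

    Growing the tree only calls Insert once per new node, so the structure never needs a full rebuild.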

  • Specifying an object type at runtime

    - by lapin
    I've written a Vbo template class to work with OpenGL. I'd like to set the type from a config file at runtime, e.g.:

        <vbo type="bump_vt" ... />

        Vbo* pVbo = new Vbo(bump_vt, ...);

    Is there some way I can do this without a large if/else block, e.g.:

        if (sType.compareTo("bump_vt") == 0)
            Vbo* pVbo = new Vbo(bump_vt, ...);
        else if ...

    I'm writing for multiple platforms in C++. Thanks.
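
    A common alternative is a registry that maps the type string to a small factory function, so the dispatch is data instead of branching. The shape of it is sketched below in C# to match the other examples in this listing; in C++ the same idea is a std::map<std::string, std::function<Vbo*()>> populated once at startup. The subclass names here are made up:

        // One registration line per type instead of another else-if branch.
        Dictionary<string, Func<Vbo>> vboFactories = new Dictionary<string, Func<Vbo>>
        {
            ["bump_vt"] = () => new BumpVtVbo(),   // hypothetical subclasses
            ["flat_vt"] = () => new FlatVtVbo(),
        };

        Vbo CreateVbo(string type)
        {
            if (vboFactories.TryGetValue(type, out Func<Vbo> make))
                return make();
            throw new ArgumentException("Unknown vbo type: " + type);
        }

    Either way, adding a new type becomes one registration line rather than a new branch.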

  • Information about rendering, batches, the graphics card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background, I'm clueless. I'm using XNA, so it would be nice if the theory could be related to that. What I actually want to know is: what happens when and where (CPU/GPU) when you perform specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over every step? When there's talk about bandwidth, does that mean the graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process itself happens; that's the GPU's business. I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, so I can adapt my architecture and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)
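
    On the "what is a batch" point: in XNA terms, a batch is roughly one draw call issued under a given set of device states (texture, blend mode, effect), and the fixed CPU cost per call is what batching amortizes. As one small, hedged illustration, SpriteBatch can be asked to sort sprites by texture so that sprites sharing a texture are submitted together instead of forcing a state change per sprite (the texture and list names here are made up):

        // One Begin/End pair; SpriteSortMode.Texture groups draws that share
        // a texture so state changes (and thus draw calls) are minimized.
        spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend);
        foreach (var e in enemies)
            spriteBatch.Draw(enemyTexture, e.Position, Color.White);
        foreach (var b in bullets)
            spriteBatch.Draw(bulletTexture, b.Position, Color.White);
        spriteBatch.End();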

  • Where can I find resources for RPG Character Sprites? [closed]

    - by IcySnow
    I'm developing a turn-based 2D RPG. Everything's going fine except for the lack of character sprites, such as moving and attacking animations. By characters I mean both human-like and monster-like creatures. Is there a website providing sprites for free? Or a program (free or paid, whichever is fine) which will let me create sprites from scratch and automatically generate the images? I tried my best to search for one, but the best I've managed to find so far is http://spriters-resource.com/. However, is there something else similar and better out there?

  • Prevent collisions between mobs/npcs/units piloted by computer AI: How to avoid mobile obstacles?

    - by Arthur Wulf White
    Let's say we have character a starting at point A and character b starting at point B. Character a is headed to point B, and character b is headed to point A. There are several simple ways to find the path (I will be using Dijkstra). The question is, how do I take preventative action in the code to stop the two from colliding with one another?

    Case 2: Characters a and b start from the same point at different times. Character b starts later and is the faster of the two. How do I make character b walk around character a without going through it?

    Case 3: Let's say we have m such characters on each side, and there is sufficient room to pass without the characters overlapping with one another. How do I stop the two groups of characters from walking on top of one another, and allow them to pass around one another in a natural, organic way?

    A correct answer would be any algorithm that, given the path to the destination and a list of mobile objects that block the path, finds an alternative path or stops, without stopping all units when there is sufficient room to traverse.
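
    One lightweight family of solutions keeps the planned path and adds a local separation force each frame: every unit steers away from neighbours closer than some radius, which lets two opposing streams slide past each other without replanning and handles the group case gracefully. For dense crowds, a reciprocal scheme such as RVO/ORCA is the more rigorous answer. A sketch of plain separation (names are illustrative; Unit is assumed to expose a Vector2 Position):

        // Push a unit away from nearby units, weighted by inverse distance.
        Vector2 Separation(Unit self, IEnumerable<Unit> others, float radius)
        {
            Vector2 push = Vector2.Zero;
            foreach (var other in others)
            {
                if (other == self) continue;
                Vector2 away = self.Position - other.Position;
                float dist = away.Length();
                if (dist > 0f && dist < radius)
                    push += away / (dist * dist);   // closer neighbours push harder
            }
            return push;
        }

        // Each frame, blend path-following with avoidance, then clamp speed:
        // unit.Velocity = Limit(FollowPath(unit) + separationWeight * Separation(unit, units, radius), maxSpeed);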

  • How do I create a big multiplayer world in UDK?

    - by Dorpe
    I want to create a big multiplayer world in UDK, and I'm having a few difficulties. I created the biggest terrain possible, but now any terrain-related action I do takes forever. However, I've seen videos of people working with same-size terrain without a problem. My PC is strong enough, so maybe someone can tell me what I'm doing wrong. I also want to make the world even bigger than the maximum terrain size, so I was thinking of using level streaming, but then I read that streaming works server-side, which means that if I have a player on every terrain, all terrains will still be loaded; I want to save as much memory as possible so the game works well online. Thanks for any help you can give.

  • libGDX using Stage and Actor produces different camera angles on desktop and Android Phone

    - by Brandon
    libGDX using Stage and Actor produces different camera angles on the desktop and on an Android phone. Here are pictures demonstrating the problem: http://brandonyuh.minus.com/mFpdTSgN17VUq. On the desktop version, the image takes up almost all of the screen; on the Android phone it only takes up a small part of it. Here's the code (not my actual project, but it isolates the problem):

        package com.me.mygdxgame2;

        import com.badlogic.gdx.*;
        import com.badlogic.gdx.graphics.*;
        import com.badlogic.gdx.graphics.Texture.TextureFilter;
        import com.badlogic.gdx.graphics.g2d.*;
        import com.badlogic.gdx.scenes.scene2d.*;

        public class MyGdxGame2 implements ApplicationListener {
            private Stage stage;

            public void create() {
                stage = new Stage();
                stage.addActor(new ActorHi());
            }

            public void render() {
                Gdx.gl.glClearColor(0, 1, 0, 1);
                Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
                stage.draw();
            }

            public void dispose() {}
            public void resize(int width, int height) {}
            public void pause() {}
            public void resume() {}

            public class ActorHi extends Actor {
                private Sprite sprite;

                public ActorHi() {
                    Texture texture = new Texture(Gdx.files.internal("data/hi.png"));
                    texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
                    sprite = new Sprite(new TextureRegion(texture, 0, 0, 128, 128));
                    sprite.setBounds(0, 0, 300.0f, 300.0f);
                }

                public void draw(SpriteBatch batch, float parentAlpha) {
                    sprite.draw(batch);
                }
            }
        }

    hi.png is included in the above link. Thank you very much for answering my question; I've spent 3 days trying to figure it out.

  • Inputting cheat codes - hidden keyboard input

    - by Fibericon
    Okay, here's what I want to do: when the player is at the main menu, I want them to be able to type in cheat codes. That's the only place I want it to work. I don't want to give them a text box to type into; rather, I want them to simply type in a word (let's say "cheat", for simplicity's sake) that activates the cheat code. I only need to capture keyboard input when the window is in focus. What can I do to accomplish this?
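
    One way to sketch this in XNA: each frame while the menu is active, detect newly pressed letter keys by comparing against the previous KeyboardState, append them to a rolling buffer, and check whether the buffer ends with a known code. Checking Game.IsActive covers the window-focus requirement. ActivateCheat is a hypothetical hook:

        KeyboardState previous;
        string typed = "";
        const string Code = "cheat";

        void UpdateMenu()
        {
            if (!IsActive)          // only read input while the window has focus
                return;

            KeyboardState current = Keyboard.GetState();
            foreach (Keys key in current.GetPressedKeys())
            {
                // Act only on the frame a key goes down, and only on letter keys.
                if (previous.IsKeyUp(key) && key >= Keys.A && key <= Keys.Z)
                    typed += key.ToString().ToLowerInvariant();
            }
            previous = current;

            if (typed.EndsWith(Code))
            {
                ActivateCheat();    // hypothetical hook for whatever the code unlocks
                typed = "";
            }
            else if (typed.Length > 32)
            {
                typed = typed.Substring(typed.Length - Code.Length);  // keep buffer bounded
            }
        }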

  • LWJGL GL_QUADS texture artifact

    - by Dajgoro Labinac
    I managed to get LWJGL working in Java, and I loaded a test image (a TV test card), but I keep getting weird artifacts outside the image. Image link: http://tinypic.com/r/vhv9g/6. Code:

        glBegin(GL_QUADS);
        glTexCoord2f(0, 0);
        glVertex2i(10, 10);
        glTexCoord2f(1, 0);
        glVertex2i(500, 10);
        glTexCoord2f(1, 1);
        glVertex2i(500, 500);
        glTexCoord2f(0, 1);
        glVertex2i(10, 500);
        glEnd();

    What could be the cause?

  • GLSL per pixel lighting with custom light type

    - by Justin
    Ok, I am having a big problem here. I just got into GLSL yesterday, so the code will be terrible, I'm sure. Basically, I am attempting to make a light that can be passed into the fragment shader (for learning purposes). I have four input values: one for the position of the light, one for the color, one for the distance it can travel, and one for the intensity. I want to find the distance between the light and the fragment, then calculate the color from there. The code I have gives me a simply gorgeous ring of light that gets twisted and widened as the matrix is modified. I love the results, but they are not even close to what I am after. I want the light to move with all of the vertices, so it is always in the same place in relation to the objects. I can easily take it from there, but getting that to work seems to be impossible with my current structure. Can somebody give me a few pointers (pun not intended)?

    Vertex shader:

        attribute vec4 position;
        attribute vec4 color;
        attribute vec2 textureCoordinates;

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        void main()
        {
            vec4 ECposition = gl_ModelViewMatrix * gl_Vertex;
            vec3 tnorm = normalize(vec3(gl_NormalMatrix * gl_Normal));

            fposition = ftransform();
            gl_Position = fposition;
            gl_TexCoord[0] = gl_MultiTexCoord0;

            fposition = ECposition;

            lightPosition = vec4(0.0, 0.0, 5.0, 0.0) * gl_ModelViewMatrix * gl_Vertex;
            lightDistance = 5.0;
            lightIntensity = 1.0;
            lightColor = vec4(0.2, 0.2, 0.2, 1.0);
        }

    Fragment shader:

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        uniform sampler2D texture;

        void main()
        {
            float l_distance = sqrt((gl_FragCoord.x * lightPosition.x) +
                                    (gl_FragCoord.y * lightPosition.y) +
                                    (gl_FragCoord.z * lightPosition.z));
            float l_value = lightIntensity / (l_distance / lightDistance);
            vec4 l_color = vec4(l_value * lightColor.r, l_value * lightColor.g,
                                l_value * lightColor.b, l_value * lightColor.a);

            vec4 color = texture2D(texture, gl_TexCoord[0].st);
            gl_FragColor = l_color * color;
            //gl_FragColor = fposition;
        }
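
    For what it's worth, the l_distance expression isn't a Euclidean distance; it sums products of coordinates rather than squared differences. The distance between a fragment position p and a light position l (both in the same space, e.g. eye space) is

        d = \lVert p - l \rVert = \sqrt{(p_x - l_x)^2 + (p_y - l_y)^2 + (p_z - l_z)^2}

    and for the light to stay fixed relative to the objects, l should be the light's position transformed by the model-view matrix once, as a point (w = 1.0), ideally passed in as a uniform, rather than multiplied by both the matrix and the vertex.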

  • Problem with Update(GameTime) Methods and Pause implementation

    - by Adam
    I have the pause function implemented, and it works correctly in that it dims the player screen and stops updating the gameplay. The problem is that GameTime continues to increase while the game is paused, so my method that checks gameTime against previousSpawnTime before spawning another enemy gets messed up; if the game is paused long enough, it is noticeable that the next enemy spawns far too early. Here is my code for the enemy update:

        private void UpdateEnemies(GameTime gameTime)
        {
            // Spawn a new enemy every 1.5 seconds
            if (gameTime.TotalGameTime - previousSpawnTime > enemySpawnTime)
            {
                previousSpawnTime = gameTime.TotalGameTime;

                // Add an Enemy
                AddEnemy();
            }
            ...

    I also have other methods that depend on gameTime. I've tried getting the total pause time and subtracting that from the total game time, but I can't seem to get it to work correctly, if that is even the way I should go about solving this. If you need to see any other code, let me know. Thank you.
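
    A robust way around this is to stop comparing against GameTime.TotalGameTime altogether and accumulate your own gameplay clock that only advances while unpaused; every timer based on it then pauses for free. A sketch, assuming a paused flag like the one described:

        TimeSpan gameplayTime = TimeSpan.Zero;   // advances only while unpaused
        TimeSpan previousSpawnTime = TimeSpan.Zero;
        readonly TimeSpan enemySpawnTime = TimeSpan.FromSeconds(1.5);

        protected override void Update(GameTime gameTime)
        {
            if (!paused)
            {
                gameplayTime += gameTime.ElapsedGameTime;

                if (gameplayTime - previousSpawnTime > enemySpawnTime)
                {
                    previousSpawnTime = gameplayTime;
                    AddEnemy();
                }
            }
            base.Update(gameTime);
        }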

  • XNA 4: RenderTarget2D textures getting transparent on fullscreen

    - by Shashwat
    I'm generating a Texture2D object using RenderTarget2D, as in the following code:

        public static Texture2D GetTextTexture(string text, Vector2 position, SpriteFont font,
            Color foreColor, Color backColor, Texture2D background = null)
        {
            int width = (int)font.MeasureString(text).X;
            int height = (int)font.MeasureString(text).Y;

            GraphicsDevice device = Settings.game.GraphicsDevice;
            SpriteBatch spriteBatch = Settings.game.spriteBatch;

            RenderTarget2D renderTarget = new RenderTarget2D(device, width, height, false,
                SurfaceFormat.Color, DepthFormat.Depth24Stencil8,
                device.PresentationParameters.MultiSampleCount, RenderTargetUsage.DiscardContents);

            device.SetRenderTarget(renderTarget);
            device.Clear(backColor);

            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
            if (background != null)
                spriteBatch.Draw(background, new Rectangle(0, 0, 70, 70), Color.White);
            spriteBatch.End();

            spriteBatch.Begin();
            spriteBatch.DrawString(font, text, position, foreColor, 0, new Vector2(0), 0.8f, SpriteEffects.None, 0);
            spriteBatch.End();

            device.SetRenderTarget(null);
            ResetGraphicsDeviceSettings();

            return (Texture2D)renderTarget;
        }

    It's all working fine, but when I ToggleFullScreen() (and vice versa), the previously generated textures become transparent, while new textures generated after the toggle come out correctly. What can be the reason for this?
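
    This looks like the usual device-reset behaviour: toggling fullscreen resets the graphics device, and in XNA 4 the contents of a RenderTarget2D are volatile, so every cached "texture" that is secretly still a render target loses its contents, while textures generated after the toggle are fine. Two hedged ways out, sketched below: copy the pixels into a plain Texture2D before returning, or regenerate the cached textures on reset (RegenerateTextTextures is a hypothetical hook):

        // Option 1: detach the pixels so a device reset cannot wipe them.
        Color[] pixels = new Color[renderTarget.Width * renderTarget.Height];
        renderTarget.GetData(pixels);
        Texture2D copy = new Texture2D(device, renderTarget.Width, renderTarget.Height);
        copy.SetData(pixels);
        return copy;   // return this instead of the render target itself

        // Option 2: re-run the generation for cached textures when the device resets:
        // device.DeviceReset += (sender, e) => RegenerateTextTextures();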

  • HTML5 games: what is the standard dimension to use?

    - by aoi
    I am trying to make HTML5 games to be played in the browser (not offline apps), and I want to support the maximum number of platforms, so I need to know what dimensions I should use for the game canvas so that it works in the most places. Also, is there any way to "scale" a large game to fit the small screen of an iPhone (around 320x356 px, I think)? By "scale" I don't mean simply resizing the canvas, as that can mess up coordinate-based calculations, and for a large number of objects, repositioning based on canvas size can be a real hassle.

  • Generating and rendering non-point-like particles on the GPU

    - by TravisG
    Specifically, I'm talking about particles as seen (for example) in the UE4 dev video here. They're not just points; they seem to have a nice shape to them that follows their movement. Is it possible to create these kinds of particles (efficiently) completely on the GPU (perhaps through something like motion?), or is the only (or most efficient) way to create a small particle texture and render small quads for each particle?

  • What library should I use for 2D Geometry? [closed]

    - by Luka
    I've been working on a 2D game in Java, but found that Java just didn't cut it for me and had forced me into a lot of bad design choices, so I've decided to port all my work to C++. The main reason I decided to change to C++ is that I had reached a point where I had three geometry libraries (the native one, one from the game engine, and one to handle "complex" polygons), none of which worked very well together, and I couldn't keep track of them. I'm new to C++, but I know all the basics. My question is: what would be a good geometry library to use? Ideally it should handle integer and decimal data types, and have point, line, and polygon classes which are able to check for intersection and containment. Thanks in advance, Luka

  • Character equipment combinations

    - by JimFing
    I'm developing a 2D isometric game (a typical Tolkien-style RPG) and wondering how to handle character/equipment combinations. So, for example, the player wears leather boots with chain-mail, a wooden shield, and a sword, but then picks up plate-armour instead of the chain-mail. I'm using Blender3D to create objects, environments, and characters in 3D, then a script renders all 3D meshes into 2D orthographic tile maps. I could use this script to create all the combinations of character equipment for me, but the number of required combinations would explode.
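
    The usual way to dodge the combinatorial explosion is paper-doll compositing: render each equipment piece as its own sprite layer, aligned to the same character frames, and stack the layers at draw time, so N pieces cost N renders instead of one render per combination. A sketch of the draw side (XNA-style; names are illustrative):

        // Layers sorted from innermost to outermost: body, boots, armour, shield, sword.
        void DrawCharacter(SpriteBatch batch, Vector2 pos, IEnumerable<Texture2D> layers)
        {
            foreach (Texture2D layer in layers)      // all layers share frame size and
                batch.Draw(layer, pos, Color.White); // origin, so they line up when stacked
        }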

  • HTML5 - check if font has loaded

    - by espais
    At present I load the font for my game with @font-face. For instance:

        @font-face {
            font-family: 'Orbitron';
            src: url('res/orbitron-medium.ttf');
        }

    and then reference it throughout my JS implementation like this:

        ctx.font = "12pt Orbitron";

    where ctx is my 2D context from the canvas. However, I notice a certain lag while the font is downloaded to the user. Is there a way I can use a default font until it has loaded?

    Edit: I'll expand the question, because I hadn't taken the first comment into account. What would be the proper way to handle the case where a user has disabled custom fonts?

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, like this:

        ---------. P2
                 |
                 |
        P1 .---------

    My agents have a velocity, a position, etc. Could you tell me how to implement avoidance for my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                //TODO
            }
            return force;
        }

    I then take all the forces returned by my boid functions and apply them to my agent. I just need to know how to do the same with my walls. Thanks for your help.
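
    A common way to fill in that loop: find the closest point on each wall segment to the agent, and if the agent is within some avoidance radius, add a force pointing away from that point that grows as the distance shrinks. Sketched here in C# for consistency with the other examples in this listing; the question's C++ maps over directly, and the names are illustrative:

        // Closest point to p on the segment [a, b].
        Vector2 ClosestOnSegment(Vector2 p, Vector2 a, Vector2 b)
        {
            Vector2 ab = b - a;
            float lenSq = Vector2.Dot(ab, ab);
            if (lenSq == 0f) return a;               // degenerate wall
            float t = Vector2.Dot(p - a, ab) / lenSq;
            if (t < 0f) t = 0f; else if (t > 1f) t = 1f;
            return a + t * ab;
        }

        // Sum of pushes away from every wall within the avoidance radius,
        // growing linearly as the agent gets closer.
        Vector2 RepulsionFromWalls(Vector2 pos, IEnumerable<(Vector2 P1, Vector2 P2)> walls, float radius)
        {
            Vector2 force = Vector2.Zero;
            foreach (var wall in walls)
            {
                Vector2 closest = ClosestOnSegment(pos, wall.P1, wall.P2);
                Vector2 away = pos - closest;
                float dist = away.Length();
                if (dist > 0f && dist < radius)
                    force += (away / dist) * ((radius - dist) / radius);
            }
            return force;
        }

    The result slots in alongside the other steering forces exactly like the existing boid rules.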

  • Problem with sprite direction and rotation

    - by user2236165
    I have a sprite called Tool that moves at a speed represented as a float, in a direction represented as a Vector2. When I click the mouse on the screen, the sprite changes direction and starts to move towards the click. In addition, I rotate the sprite so that it faces the direction it is heading. However, when I add a camera that is supposed to follow the sprite so that the sprite is always centered on the screen, the sprite no longer moves in the given direction, and the rotation isn't accurate anymore. This only happens when I pass Camera.View to spriteBatch.Begin(). I was hoping someone could shed some light on what I am missing in my code; that would be highly appreciated. Here is the camera class I use:

        public class Camera
        {
            private const float zoomUpperLimit = 1.5f;
            private const float zoomLowerLimit = 0.1f;
            private float _zoom;
            private Vector2 _pos;
            private int ViewportWidth, ViewportHeight;

            #region Properties

            public float Zoom
            {
                get { return _zoom; }
                set
                {
                    _zoom = value;
                    if (_zoom < zoomLowerLimit)
                        _zoom = zoomLowerLimit;
                    if (_zoom > zoomUpperLimit)
                        _zoom = zoomUpperLimit;
                }
            }

            public Rectangle Viewport
            {
                get
                {
                    int width = (int)(ViewportWidth / _zoom);
                    int height = (int)(ViewportHeight / _zoom);
                    return new Rectangle((int)(_pos.X - width / 2), (int)(_pos.Y - height / 2), width, height);
                }
            }

            public void Move(Vector2 amount)
            {
                _pos += amount;
            }

            public Vector2 Position
            {
                get { return _pos; }
                set { _pos = value; }
            }

            public Matrix View
            {
                get
                {
                    return Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) *
                           Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
                           Matrix.CreateTranslation(new Vector3(ViewportWidth * 0.5f, ViewportHeight * 0.5f, 0));
                }
            }

            #endregion

            public Camera(Viewport viewport, float initialZoom)
            {
                _zoom = initialZoom;
                _pos = Vector2.Zero;
                ViewportWidth = viewport.Width;
                ViewportHeight = viewport.Height;
            }
        }

    And here are my Update and Draw methods:

        protected override void Update(GameTime gameTime)
        {
            float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

            TouchCollection touchCollection = TouchPanel.GetState();
            foreach (TouchLocation tl in touchCollection)
            {
                if (tl.State == TouchLocationState.Pressed || tl.State == TouchLocationState.Moved)
                {
                    // direction the tool shall move towards
                    direction = touchCollection[0].Position - toolPos;
                    if (direction != Vector2.Zero)
                    {
                        direction.Normalize();
                    }

                    // change the direction the tool is moving in, and find the angle
                    // the texture must rotate by to point in the given direction
                    toolPos += (direction * speed * elapsed);
                    RotationAngle = (float)Math.Atan2(direction.Y, direction.X);
                }
            }

            if (direction != Vector2.Zero)
            {
                direction.Normalize();
            }

            // move tool in given direction
            toolPos += (direction * speed * elapsed);

            // change camera centre to the tool's position
            Camera.Position = toolPos;

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.Blue);

            spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, null, null, null, Camera.View);
            spriteBatch.Draw(tool, new Vector2(toolPos.X, toolPos.Y), null, Color.White, RotationAngle,
                originOfToolTexture, 1, SpriteEffects.None, 1);
            spriteBatch.End();

            base.Draw(gameTime);
        }
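
    One likely culprit (hedged, since only part of the project is shown): once the camera transform is passed to spriteBatch.Begin, toolPos lives in world space, but TouchPanel still reports screen coordinates, so the computed direction is wrong whenever the camera has moved. Transforming the touch point through the inverse of the camera matrix puts both points in the same space:

        // Convert a screen-space touch position into world space before steering.
        Vector2 ScreenToWorld(Vector2 screenPos, Matrix cameraView)
        {
            return Vector2.Transform(screenPos, Matrix.Invert(cameraView));
        }

        // In Update:
        // Vector2 worldTouch = ScreenToWorld(tl.Position, Camera.View);
        // direction = worldTouch - toolPos;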

  • Artifacts when drawing particles with some alpha

    - by piotrek
    I want to draw some particles in my game, but when I draw one particle above another, the alpha channel of the one on top "clears" the previously drawn particle. I set up blending in OpenGL this way:

        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    My fragment shader for particles is very simple:

        precision highp float;
        precision highp int;

        uniform sampler2D u_maps[2];
        varying vec2 v_texture;
        uniform float opaque;
        uniform vec3 colorize;

        void main()
        {
            vec4 texColor = texture2D(u_maps[0], v_texture);
            gl_FragColor.rgb = texColor.rgb * colorize.rgb;
            gl_FragColor.a = texColor.a * opaque;
        }

    I've attached a screenshot of this. Do you know what I did wrong? I use OpenGL ES 2.0.

  • Lag compensation with networked 2D games

    - by Milo
    I want to make a 2D game that is basically a physics-driven sandbox/activity game. There is something I really do not understand, though. From research, it seems like updates from the server should arrive only about every 100 ms. I can see how this works for the local player, since the client can simply simulate physics concurrently and do lag compensation through interpolation. What I do not understand is how this works for updates from other players. If clients are only notified of player positions every 100 ms, I do not see how that works, because a lot can happen in 100 ms; a player could have changed direction twice or so in that time. I was wondering if anyone has some insight on this issue. Basically, how does this work for shooting and things like that? Thanks
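
    The standard technique is entity interpolation: the client buffers incoming snapshots and renders remote players roughly one update interval in the past, interpolating between the two snapshots that bracket the render time, so the motion between direction changes is reconstructed from real data rather than guessed. A minimal sketch (names illustrative; assumes at least one snapshot has arrived):

        struct Snapshot { public double Time; public Vector2 Position; }

        readonly List<Snapshot> buffer = new List<Snapshot>();
        const double InterpDelay = 0.1;   // render 100 ms behind the newest data

        Vector2 RemotePositionAt(double now)
        {
            double renderTime = now - InterpDelay;

            // Find the pair of snapshots that straddles renderTime.
            for (int i = buffer.Count - 1; i > 0; i--)
            {
                Snapshot a = buffer[i - 1], b = buffer[i];
                if (a.Time <= renderTime && renderTime <= b.Time)
                {
                    float t = (float)((renderTime - a.Time) / (b.Time - a.Time));
                    return Vector2.Lerp(a.Position, b.Position, t);
                }
            }
            return buffer[buffer.Count - 1].Position;   // fall back to newest
        }

    Shooting against interpolated targets is then typically handled server-side: the server rewinds its hit detection by the shooter's latency plus the interpolation delay, which is the Valve-style lag compensation described in their networking articles.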

  • Most efficient way to generate 2D portraits

    - by user1221
    Hey, I am not sure if this is a fitting question for gamedev or if it is too art-related. I am currently trying to create 2D character portraits for my game. At first I tried to draw them, and even though it helped polish my drawing skills, the end result either required way too much time or simply looked like it was created by a grade-school kid. So I am currently looking into tools from which people like me, who are not from the art world, might benefit, especially tools which can create a 3D head and hair so that I can render them. I have tried several 3D generation tools such as makehead and makehuman to create the basic head shape, but I have to admit I am not well versed in what other options are available, what has the best quality, etc.
