Search Results

  • How to detect GLSL warnings?

    - by msell
    After compiling a shader with glCompileShader, I can call glGetShaderiv with GL_COMPILE_STATUS to check whether the shader compiled successfully. I can also call glGetShaderInfoLog to get information about errors, warnings, or other issues. The content of the info log returned by this function is unspecified. In a tool where users can write their own shaders, I would like to print all errors and warnings from the compilation, but print nothing if no warnings or errors were found. The problem is that GL_COMPILE_STATUS is false only if compilation failed, and true otherwise. If no problems were found, some drivers return an empty info log from glGetShaderInfoLog, but other drivers can return something else, such as "No errors.", which I do not want to print to the user. How is this problem generally solved?
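
    A minimal C++ sketch of one common heuristic: always fetch the info log, then suppress it when it is empty or consists only of a known harmless "all clear" message. The phrase list below is an assumption and would need maintaining per driver; OpenGL does not standardize the log text.

        #include <cctype>
        #include <string>
        #include <vector>
        #include <GL/glew.h>   // assumed loader; adjust for glad, etc.

        // Returns the compile log, or "" when there is nothing worth showing.
        std::string filteredCompileLog(GLuint shader)
        {
            GLint length = 0;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length);
            if (length <= 1)
                return "";                           // empty log: nothing to report

            std::string log(length, '\0');
            glGetShaderInfoLog(shader, length, nullptr, &log[0]);

            // Trim trailing NULs and whitespace before comparing.
            while (!log.empty() &&
                   (log.back() == '\0' || std::isspace((unsigned char)log.back())))
                log.pop_back();

            static const std::vector<std::string> harmless = {
                "No errors.", "Success."             // hypothetical examples
            };
            for (const std::string& phrase : harmless)
                if (log == phrase)
                    return "";                       // pure "all clear" noise: hide it
            return log;
        }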

  • Transform 3d viewport vector to 2d vector

    - by learning_sam
    I am playing around with 3D transformations and ran into an issue. I have a 3D vector already within the viewport and need to transform it to a 2D vector (let's say my screen is 10x10). Does that work just like a regular transformation, or is something different here? For example: I have the vector a = (2, 1, 0) within the viewport and want the 2D vector. Does that work like this, and if yes, how do I handle the "0" in the 3rd component?
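
    A minimal C++ sketch of the projection's final steps, assuming the point has first been multiplied by a projection (and view) matrix into clip space. The third component does not appear in the final 2D position; after the divide it only feeds the depth buffer:

        // Vec4 is the point after multiplication by the projection matrix,
        // i.e. in clip space; w carries the perspective divide factor.
        struct Vec4 { float x, y, z, w; };

        // clip space -> normalized device coords -> pixel coords on a
        // screenW x screenH screen. The y flip is a common convention, not a rule.
        void toScreen(const Vec4& clip, float screenW, float screenH,
                      float& sx, float& sy)
        {
            float ndcX = clip.x / clip.w;            // perspective divide
            float ndcY = clip.y / clip.w;
            sx = (ndcX * 0.5f + 0.5f) * screenW;     // [-1,1] -> [0,screenW]
            sy = (1.0f - (ndcY * 0.5f + 0.5f)) * screenH;
        }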

  • Cocos2d: Adding a CCSequence to a CCArray

    - by Axort
    I have a problem with an action performed by a sprite. I have one CCSequence in a CCArray, and a scheduled method (called every 5 seconds) that makes the sprite run the action. The action is performed correctly only the first time (the first 5 seconds); after that, the action does whatever it wants lol. Here is the code.

    In .h:

        @interface PowerUpLayer : CCLayer {
            PowerUp *powerUp;
            CCArray *trajectories;
        }
        @property (nonatomic, retain) CCArray *trajectories;

    In .mm:

        @implementation PowerUpLayer
        @synthesize trajectories;

        -(id)init {
            if ((self = [super init])) {
                [self createTrajectories];
                self.isTouchEnabled = YES;
                [self schedule:@selector(spawn:) interval:5];
            }
            return self;
        }

        -(void)createTrajectories {
            self.trajectories = [CCArray arrayWithCapacity:1];

            // Wave trajectory
            ccBezierConfig firstWave, secondWave;
            firstWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width + 30,
                                                   [[CCDirector sharedDirector] winSize].height / 2);
            firstWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width - ([[CCDirector sharedDirector] winSize].width / 4), 0);
            firstWave.endPosition = CGPointMake([[CCDirector sharedDirector] winSize].width / 2,
                                                [[CCDirector sharedDirector] winSize].height / 2);

            secondWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width / 2,
                                                    [[CCDirector sharedDirector] winSize].height / 2);
            secondWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width / 4,
                                                    [[CCDirector sharedDirector] winSize].height);
            secondWave.endPosition = CGPointMake(-30, [[CCDirector sharedDirector] winSize].height / 2);

            id bezierWave1 = [CCBezierTo actionWithDuration:1 bezier:firstWave];
            id bezierWave2 = [CCBezierTo actionWithDuration:1 bezier:secondWave];
            id waveTrajectory = [CCSequence actions:bezierWave1, bezierWave2,
                                 [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)],
                                 nil];
            [self.trajectories addObject:waveTrajectory];
        }

        -(void)setInvisible:(id)sender {
            if (powerUp != nil) {
                [self removeChild:sender cleanup:YES];
                powerUp = nil;
            }
        }

    This is the scheduled method:

        -(void)spawn:(ccTime)dt {
            if (powerUp == nil) {
                powerUp = [[PowerUp alloc] initWithType:0];
                powerUp.sprite.position = CGPointMake([[CCDirector sharedDirector] winSize].width + powerUp.sprite.contentSize.width,
                                                      [[CCDirector sharedDirector] winSize].height / 2);
                [self addChild:powerUp.sprite z:-1];
                [powerUp.sprite runAction:((CCSequence *)[self.trajectories objectAtIndex:0])];
            }
        }

    I don't know what is happening; I never modify the content of the CCSequence after the first time. Thanks!

  • HTML5 clicking objects in canvas

    - by Dave
    I have a function in my JS that gets the user's mouse click on the canvas. Now let's say I have a random shape on my canvas (really it's a PNG image, which is rectangular, but I don't want to include any of its transparent space). My issue: say I click somewhere and the click lands on a pixel of one of those images. The first question is how you work out that the clicked pixel belongs to an object on the map (and not the grass tiles behind it). Secondly, if I did click an image, and each image carries its own unique information, how do you process the click to load the correct data? Note: I don't use libraries; I personally prefer the raw method. Relying on libraries doesn't teach me much, I find.
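
    A minimal sketch of the usual two-step hit test (bounding box first, then the pixel's alpha), written in C++ for brevity; the Sprite fields and the draw-order list are assumptions, and on a canvas the same alpha lookup would come from getImageData:

        #include <cstdint>
        #include <vector>

        // Each object stores its screen rect and a pointer to its RGBA pixels
        // (field names are assumptions for illustration).
        struct Sprite {
            int x, y, w, h;               // screen-space bounding box
            const std::uint8_t* rgba;     // w*h*4 bytes, row-major
            int id;                       // key into the object's unique data
        };

        // Returns the topmost sprite whose opaque pixel is under (mx, my), else null.
        const Sprite* hitTest(const std::vector<Sprite>& drawOrder, int mx, int my)
        {
            // Walk the draw order from last-drawn (topmost) to first-drawn.
            for (auto it = drawOrder.rbegin(); it != drawOrder.rend(); ++it) {
                const Sprite& s = *it;
                if (mx < s.x || my < s.y || mx >= s.x + s.w || my >= s.y + s.h)
                    continue;                           // misses the bounding box
                int lx = mx - s.x, ly = my - s.y;       // image-local pixel coords
                if (s.rgba[(ly * s.w + lx) * 4 + 3] > 0)
                    return &s;                          // hit an opaque pixel
            }
            return nullptr;                             // only background was clicked
        }

    The returned sprite's id (or any richer payload stored on it) then selects which data to load for the click.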

  • Channelling an explosion along a narrow passage

    - by finnw
    I am simulating explosions in a 2D maze game. If an explosion occurs in an open area, it covers a circular region (this is the easy bit). However, if an explosion occurs in a narrow passage (i.e. narrower than the blast range), then it should be "compressed", so that it travels further and also goes around corners. Ultimately, unless it is completely boxed in, it should cover a constant number of pixels, spreading in whatever direction is necessary to reach this total area. I have tried using a shortest-path algorithm to pick the nearest N pixels to the origin while avoiding walls, but the effect is exaggerated - the blast travels around corners too easily, making U-turns even when there is a clear path in another direction. I don't know whether this is realistic or not, but it is counter-intuitive to players who assume they can hide around a corner and that the blast will take the path of least resistance in a different direction. Is there a well-known (and fast) algorithm for this?
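
    For reference, a minimal C++ sketch of the nearest-N spread described above (4-way BFS over a boolean wall grid; both choices are assumptions). One possible fix for the U-turn complaint is to switch the queue to a priority queue and charge extra cost per direction change, so doubling back becomes more expensive than spreading straight:

        #include <cstddef>
        #include <queue>
        #include <vector>

        // Breadth-first blast spread: claims up to maxCells open cells nearest
        // the origin, flowing around walls. Grid is row-major; walls[y*w+x]
        // blocks spread.
        std::vector<int> blastCells(const std::vector<bool>& walls, int w, int h,
                                    int ox, int oy, std::size_t maxCells)
        {
            std::vector<bool> seen(walls.size(), false);
            std::vector<int> result;
            std::queue<int> frontier;

            const int start = oy * w + ox;
            if (walls[start]) return result;
            frontier.push(start);
            seen[start] = true;

            const int dx[4] = { 1, -1, 0, 0 };
            const int dy[4] = { 0, 0, 1, -1 };
            while (!frontier.empty() && result.size() < maxCells) {
                int cur = frontier.front();
                frontier.pop();
                result.push_back(cur);
                int cx = cur % w, cy = cur / w;
                for (int d = 0; d < 4; ++d) {
                    int nx = cx + dx[d], ny = cy + dy[d];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    int n = ny * w + nx;
                    if (!seen[n] && !walls[n]) {
                        seen[n] = true;
                        frontier.push(n);
                    }
                }
            }
            return result;
        }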

  • 3d Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem more roomy to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same width. I've tried applying the scale to the model in the content pipeline (setting the Scale property of the model in the Properties window in VS). I've also tried to apply the scale using Matrix.CreateScale(float) with the Scale-Rotate-Translate order, with the same result. If I leave the camera speed the same, the camera moves slower, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure which part of the code to include, since I don't know if it is an issue with my model, camera, or something else. Any hints at what I'm doing wrong?

    Camera:

        Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
            _device.Viewport.AspectRatio, 1.0f, 1000.0f);

        Matrix camRotMatrix = Matrix.CreateRotationX(_cameraPitch)
                            * Matrix.CreateRotationY(_cameraYaw);
        Vector3 transCamRef = Vector3.Transform(_cameraForward, camRotMatrix);
        _cameraTarget = transCamRef + CameraPosition;
        Vector3 camRotUpVector = Vector3.Transform(_cameraUpVector, camRotMatrix);
        View = Matrix.CreateLookAt(CameraPosition, _cameraTarget, camRotUpVector);

    Model:

        World = Matrix.CreateTranslation(Position);

  • TGA loader: reverse height

    - by aVoX
    I wrote a TGA image loader in Java which works perfectly for files created with GIMP, as long as they are saved with the "origin" option set to "Top Left". (Note: TGA files are normally stored upside down - "Bottom Left" in GIMP.) My problem is that I want my image loader to be capable of reading all the different kinds of TGAs, so my question is: how do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D, to be specific) requires it that way. Thanks in advance.
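
    A minimal sketch of the row swap on a flat array, in C++ for brevity (tightly packed rows assumed). In a TGA, whether the flip is needed is indicated by the vertical-origin bit of the image-descriptor byte in the header (bit 5, if memory serves):

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        // Flips a tightly packed image vertically, in place. bytesPerPixel is
        // 3 or 4 for typical TGA data; rows are assumed to have no padding.
        void flipVertically(std::vector<std::uint8_t>& pixels,
                            int width, int height, int bytesPerPixel)
        {
            const int rowSize = width * bytesPerPixel;
            for (int y = 0; y < height / 2; ++y) {
                std::uint8_t* top    = &pixels[y * rowSize];
                std::uint8_t* bottom = &pixels[(height - 1 - y) * rowSize];
                std::swap_ranges(top, top + rowSize, bottom);   // swap the two rows
            }
        }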

  • Uniform not being applied to proper mesh

    - by HaMMeReD
    OK, I have some code where you select blocks on a grid. The selection works: I can modify the blocks to be raised when selected, and the correct one shows. I set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry (in the sequence) is rendered light. To debug the logic I decided to move the selected block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic and it knows the correct one is selected; it is showing in the correct place and rendering correctly. When there is only one entity it works properly. In the video of the bug in action, note how the highlighted and the elevated blocks are not the same block. The color code and my renderer (for the items being drawn) are here:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }

  • Best practices on separating Update and Draw on game loop

    - by Galvanize
    I've been working on my first HTML5 prototype and I found a good model that uses the regular Update and Draw loop we see in game dev. My question is: where does one end and the other begin? The question popped up when I wanted to rotate and draw an image, and I kept wondering whether the work of changing the transformation matrix (which I presume is a bit expensive, since it works on the whole pixel array of an image) and calculating the right position to draw it counts as drawing work. Or maybe not, since after that I may need to check for collision or something similar. Thinking about it, it seems like a silly question, but I would like some advice from more experienced developers. Where does update end and draw start? Thanks in advance.
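
    A minimal C++ sketch of the common convention: update mutates game state (and may run several fixed steps per frame), draw only reads state and issues rendering work. Under that split, computing a transform so you can render belongs in draw; computing one because physics needs it belongs in update. The fixed-step structure is one popular choice, not the only valid one:

        #include <chrono>

        void update(double dt) { /* advance simulation: input, physics, collision */ }
        void draw()            { /* read state only: transforms, blending, blitting */ }

        int main()
        {
            using clock = std::chrono::steady_clock;
            const std::chrono::duration<double> step(1.0 / 60.0);
            auto previous = clock::now();
            std::chrono::duration<double> lag(0.0);

            for (bool running = true; running; ) {
                auto now = clock::now();
                lag += now - previous;
                previous = now;

                while (lag >= step) {          // zero or more fixed update steps
                    update(step.count());
                    lag -= step;
                }
                draw();                        // render once per frame
            }
        }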

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning, I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or a completely white texture. Here are my questions: I assume the position that's automatically sent to the vertex shader from Ogre is in object space? The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader? Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly. Here's my shader code:

        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float3 eyePosition,
                                out float4 outPosition : POSITION,
                                out float Depth )
        {
            // position is in object space
            // outPosition is in clip space
            outPosition = mul( worldViewProjMatrix, position );

            // calculate distance from camera to vertex
            Depth = length( eyePosition - position );
        }

        void DepthFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fNear,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // clamp output using clip planes
            float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
            outColor = float4( fColor, fColor, fColor, 1.0 );
        }

    fNear is the near clip plane for the scene, and fFar is the far clip plane.

  • keyPressed is not working after adding ActionListener to JButton

    - by Yehonatan
    I have a serious problem while trying to build a menu for my game. I've added two JButtons to a main JPanel and added an ActionListener for each of them. The main JPanel also contains the game JPanel, which has the keyPressed method inside KeyController. The hierarchy looks like this:

        Main
          JPanel
            JButton, JButton
            JPanel which contains the game, with the keyPressed function inside
            the KeyController class (this worked fine before I added the
            ActionListeners for the JButtons)

    For some reason, after I added an ActionListener for each of the buttons, the game JPanel is not getting any keyPressed events, nor keyReleased. Does anyone know the solution for my situation? Thank you very much!

    Main window:

        Scanner in = new Scanner(System.in);
        JFrame f = new JFrame("Square V.S Circles");
        f.setUndecorated(true);
        f.setResizable(false);
        f.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
        f.add(new JPanelHandler());
        f.pack();
        f.setVisible(true);
        f.setLocationRelativeTo(null);

    JPanelHandler (main JPanel):

        super.setFocusable(true);
        JButton mybutton = new JButton("Quit");
        JButton sayhi = new JButton("Say hi");
        sayhi.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                System.out.println("Hi");
            }
        });
        mybutton.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                System.exit(0);
            }
        });
        add(mybutton);
        add(sayhi);
        add(new Board(2));

    Board KeyController (the code inside works, so it's unnecessary to include here):

        private class KeyController extends KeyAdapter {
            public KeyController() {
                // ...code
            }
            @Override
            public void keyPressed(KeyEvent e) {
                // ...code
            }
            @Override
            public void keyReleased(KeyEvent e) {
                // ...code
            }
        }

  • How can I do fast Triangle/Square vs Triangle collision detection?

    - by Ólafur Waage
    I have a game world where the objects live in a grid-based environment with the following restrictions: all of the triangles are 45-90-45 triangles of unit length, and they can only rotate by 90°. The squares are of unit length and cannot rotate (not that it matters). I have the Square vs Square detection down, and it is very solid and very fast (max vs min on x and y values). I'm wondering if there are any tricks I can employ, since I have these restrictions on the triangles.
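
    Under those restrictions, a triangle is just its cell's AABB minus a half-plane cut off by the diagonal, so a cheap test is possible: do the usual AABB overlap, then compare one corner of the square against the diagonal. A C++ sketch (the orientation enum naming which corner holds the right angle is an assumption):

        // Orientation names the corner holding the right angle.
        enum class TriCorner { BottomLeft, BottomRight, TopLeft, TopRight };

        struct AABB { float minX, minY, maxX, maxY; };

        bool overlaps(const AABB& a, const AABB& b)
        {
            return a.minX < b.maxX && b.minX < a.maxX &&
                   a.minY < b.maxY && b.minY < a.maxY;
        }

        // Square vs unit-cell triangle: AABB test first, then test the square
        // corner nearest the diagonal against the diagonal's half-plane. With
        // the AABB overlap established, the corner test is then sufficient.
        bool squareVsTriangle(const AABB& sq, const AABB& cell, TriCorner c)
        {
            if (!overlaps(sq, cell))
                return false;
            switch (c) {
            case TriCorner::BottomLeft:    // solid where x + y <= 1 (cell-local)
                return (sq.minX - cell.minX) + (sq.minY - cell.minY) < 1.0f;
            case TriCorner::TopRight:      // solid where x + y >= 1
                return (cell.maxX - sq.maxX) + (cell.maxY - sq.maxY) < 1.0f;
            case TriCorner::BottomRight:   // solid where y <= x
                return (cell.maxX - sq.maxX) + (sq.minY - cell.minY) < 1.0f;
            case TriCorner::TopLeft:       // solid where y >= x
                return (sq.minX - cell.minX) + (cell.maxY - sq.maxY) < 1.0f;
            }
            return false;
        }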

  • Collision detection of player larger than clipping tile

    - by user1306322
    I want to know how to check for collisions efficiently in the case where the player's box is larger than a map tile. On the left of the picture below is my usual case, where I make 8 checks against every surrounding tile, but with the right one that would be much more inefficient. (Picture of the two cases: on the left the simple case, on the right the one I need help with.) http://i.stack.imgur.com/k7q0l.png How should I handle the right case?
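
    One standard approach, sketched below in C++, is to stop thinking in terms of a fixed neighbour count and instead derive the range of tiles the player's box overlaps from its corners; the tile size and the isSolid callback are placeholders:

        #include <cmath>

        const int TILE = 32;   // assumed tile size in pixels

        struct Rect { float x, y, w, h; };

        // Tests every tile the player's box overlaps, however large the box is.
        bool collidesWithMap(const Rect& player, bool (*isSolid)(int tx, int ty))
        {
            int x0 = (int)std::floor(player.x / TILE);
            int y0 = (int)std::floor(player.y / TILE);
            int x1 = (int)std::floor((player.x + player.w) / TILE);
            int y1 = (int)std::floor((player.y + player.h) / TILE);

            for (int ty = y0; ty <= y1; ++ty)
                for (int tx = x0; tx <= x1; ++tx)
                    if (isSolid(tx, ty))
                        return true;
            return false;
        }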

  • SpriteBatch.Begin() making my model not render correctly

    - by manning18
    I was trying to output some debug information using DrawString when I noticed my model was suddenly being rendered as if it were inside-out (like culling had been disabled or something) and the texture maps weren't applied. I commented out the DrawString call until I had only SpriteBatch.Begin() and .End(), and that was enough to cause the model rendering corruption - when I commented those calls out, the model rendered correctly. What could this be a symptom of? I've stripped it down to the barest code to isolate the problem, and this is what I noticed. Draw code below (as stripped down as possible):

        GraphicsDevice.Clear(Color.LightGray);

        foreach (ModelMesh mesh in TIEAdvanced.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                if (effect is BasicEffect)
                    ((BasicEffect)effect).EnableDefaultLighting();
                effect.CurrentTechnique.Passes[0].Apply();
            }
        }

        spriteBatch.Begin();
        spriteBatch.DrawString(spriteFont, "Camera Position: " + cameraPosition.ToString(),
                               new Vector2(10, 10), Color.Blue);
        spriteBatch.End();

        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        TIEAdvanced.Draw(Matrix.CreateScale(0.025f), viewMatrix, projectionMatrix);

  • How to use music in a simple game?

    - by Aerovistae
    It's like this: I've got this very simple game in mind, and I happen to be lucky enough to know a guy at my college who is the best musician I've ever met in person who wasn't already on a stage. He writes these beautiful songs on piano, just meandering and mysterious. They'd add so much as background music. But here's my dilemma: say I record a 5-minute-long song from him. How do I use it? Do I set it playing, and then make it start over as soon as it ends? Do I leave a 5-minute period of silence and then start it over again? Or do I find other music and have continuous music playing? What do other people usually do for this sort of thing?

  • One-way platform collision

    - by TheBroodian
    I hate asking questions that are this specific to my own code, but I've run into a pesky roadblock and could use some help getting around it. I'm coding floating platforms into my game that will allow a player to jump onto them from underneath, but will not allow the player to fall through them once they are on top, which requires some custom collision detection code. The code I have written so far isn't working: the character passes through the platform on the way up, then on the way down stops for a moment on the platform, and then falls right through it. Here is the code that handles collisions with floating platforms:

        protected void HandleFloatingPlatforms(Vector2 moveAmount)
        {
            // if the character is traveling downward
            if (moveAmount.Y > 0)
            {
                Rectangle afterMoveRect = collisionRectangle;
                afterMoveRect.Offset((int)moveAmount.X, (int)moveAmount.Y);

                foreach (World_Objects.GameObject platform in gameplayScreen.Entities)
                {
                    if (platform is World_Objects.Inanimate_Objects.FloatingPlatform)
                    {
                        // wideProximityArea is just a rectangle surrounding the
                        // collision box of an entity, used to find nearby entities.
                        if (wideProximityArea.Intersects(platform.CollisionRectangle) ||
                            wideProximityArea.Contains(platform.CollisionRectangle))
                        {
                            if (afterMoveRect.Intersects(platform.CollisionRectangle))
                            {
                                // Before the move, the character's feet were above
                                // the top of the platform...
                                if (collisionRectangle.Bottom <= platform.CollisionRectangle.Top)
                                {
                                    // ...and after the move they would fall below it,
                                    // so reposition him on top of the platform.
                                    if (afterMoveRect.Bottom > platform.CollisionRectangle.Top)
                                    {
                                        worldLocation.Y = platform.CollisionRectangle.Y - frameHeight;
                                        hasCollidedVertically = true;
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }

    In case you are curious, the moveAmount parameter is computed by this code:

        elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

        float totalX = 0;
        float totalY = 0;
        foreach (Vector2 vector in velocities)
        {
            totalX += vector.X;
            totalY += vector.Y;
        }
        velocities.Clear();

        velocity.X = totalX;
        velocity.Y = totalY;
        velocity.Y = Math.Min(velocity.Y, 1000);

        Vector2 moveAmount = velocity * elapsed;

  • How to check battery usage of an iPhone/Android app?

    - by Gajoo
    I think the title says enough. For example, Unity can generate a report of how much CPU/GPU power an app is using, or how fast it's going to drain the device's battery, but what about applications developed with Cocos2d, or ones you develop directly using OpenGL? How should you profile them? In general, what should you profile? Or should I simply run the application and wait for its battery to run out?

  • Possible / How to render to multiple back buffers, using one as a shader resource when rendering to the other, and vice versa?

    - by Raptormeat
    I'm making a game in Direct3D 10. For several of my rendering passes, I need to change the behavior of the pass depending on what has already been rendered to the back buffer. (For example, I'd like to do some custom blending: when the destination color is dark, do one thing; when it is light, do another.) It looks like I'll need to create multiple render targets and render back and forth between them. What's the best way to do this?

      • Create my own render textures, use them, and then copy the final result into the back buffer.
      • Create multiple back buffers, render between them, and then present the last one that was rendered to.
      • Create one render texture and one back buffer, render between them, and just ensure that the back buffer is the final target rendered to.

    I'm not sure which of these is possible, and whether there are any performance issues that aren't obvious. Clearly my preference would be to have 2 rather than 3 render targets, if possible.
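
    For what it's worth, a minimal API-agnostic ping-pong sketch in C++ (the helper functions are hypothetical, not real D3D10 calls). The usual constraint driving this pattern is that a resource cannot be bound for reading and writing at the same time, so two offscreen targets alternate roles each pass:

        struct Target { int id; };   // stand-in for an RTV + SRV pair

        // Hypothetical helpers: placeholders, not real D3D10 API.
        void bindRenderTarget(Target&) {}
        void bindAsTexture(const Target&) {}
        void drawPass(int) {}

        void renderPasses(Target targets[2], int passCount)
        {
            int write = 0;
            for (int pass = 0; pass < passCount; ++pass) {
                bindRenderTarget(targets[write]);     // destination for this pass
                bindAsTexture(targets[1 - write]);    // previous result as input
                drawPass(pass);
                write = 1 - write;                    // swap roles
            }
            // targets[1 - write] now holds the final image to copy/present.
        }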

  • How to do pixel per pixel modeling in unity3d?

    - by Kabumbus
    So generally I want to have an API like pixels.addPixel3D(new Pixel3D(0xFF0000, 100, 100, 100)); (color, position), where pixels is some abstraction over a 3D scene object - a point cloud, so to say. It would be of great use in deep space/star modeling... I want to set each pixel by hand (with no image base or anything automatic). So the point is modeling something like... or look at a live Flash analog here. How do I do such a thing in Unity?

  • Mesh with Alpha Texture doesn't blend properly

    - by faulty
    I've followed examples from various places about setting the OutputMerger's BlendState to enable alpha/transparent textures on a mesh. The setup is as follows:

        var transParentOp = new BlendStateDescription
        {
            SourceBlend = BlendOption.SourceAlpha,
            DestinationBlend = BlendOption.InverseDestinationAlpha,
            BlendOperation = BlendOperation.Add,
            SourceAlphaBlend = BlendOption.Zero,
            DestinationAlphaBlend = BlendOption.Zero,
            AlphaBlendOperation = BlendOperation.Add,
        };

    I've made a sample that displays three meshes A, B and C, each overlapping the others. They are drawn sequentially, A to C, with A nearest the camera and C furthest. So the expected output is that A is see-through, showing parts of B and C behind it, and B is see-through, showing part of C. But what I get is that none of them are see-through in that order; if I move C closer to the camera, then it becomes semi-transparent and shows A and B through it, and if B is moved closer to the camera it shows A but not C. Sort of reversed. So it seems that I need to draw them in reverse order, where the mesh furthest from the camera is drawn first and the nearest is drawn last. Is it supposed to be done this way, or can I actually configure the blend state so it works no matter in which order I draw them? Thanks
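
    For reference, drawing transparent geometry back-to-front is the standard requirement for order-dependent alpha blending (order-independent techniques exist, but they are more involved). A minimal C++ sketch of the per-frame sort, with placeholder types:

        #include <algorithm>
        #include <vector>

        // Placeholder instance type; viewDepth is distance from the camera.
        struct MeshInstance { float viewDepth; /* mesh handle, transform, ... */ };

        // Sort farthest-first each frame before drawing the transparent meshes.
        void sortBackToFront(std::vector<MeshInstance>& meshes)
        {
            std::sort(meshes.begin(), meshes.end(),
                      [](const MeshInstance& a, const MeshInstance& b) {
                          return a.viewDepth > b.viewDepth;
                      });
        }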

  • Pre baked fractures and explosion : I need an answer for C++

    - by Ken
    What are prebaked (precomputed) explosions or fractures, from a programmer's viewpoint? I would like to know how to achieve this in C++ and how these things are usually represented (are they animations? textures?). It would be perfect if some examples were available, or if someone could paint a broad picture of this. I need to add really small support for this in my code and I need a hint about how to start; I would like to do this on my own, without other libraries.

  • How do history generation algorithms work?

    - by Bane
    I had heard of the game Dwarf Fortress, but only now has one of the people I follow on YouTube made a commentary on it... I was more than surprised when I noticed how Dwarf Fortress actually generates a history for the world! Now, how do these algorithms work? What do they usually take as input, other than the length of the simulation? How specific can they be? And more importantly: can they be written in JavaScript, or is JavaScript too slow? (I guess this depends on the depth of the simulation, but take Dwarf Fortress as an example.)

  • viewing fbx files in windows via xna 4.0

    - by user17753
    I've made some models in Blender and exported them in Autodesk FBX format. I'm trying to view them using XNA 4.0 Refresh. Loading them isn't much of an issue, but I'm not familiar enough with XNA 4.0 to do what I want: basically, load the model at the world-coordinate origin (0,0,0), and then rotate and/or zoom the camera about the world origin so that I can inspect the model - typically with the mouse, and maybe some arrow keys for zooming/rotating the camera. Anyway, this seems like a simple task and I shouldn't have to reinvent it; isn't there skeleton code somewhere for this kind of thing in XNA 4.0? I couldn't find a solid example on the web. I found a couple that seemed like they might work on Xbox, but I'm trying to do this on Windows only. Anyway, I'm just looking to be pointed in the right direction on this one. Thanks.

  • Architecture for html5 multiplayer game?

    - by Tihomir Iliev
    Hello, I want to write an HTML5 multiplayer game in which there are rooms with two players answering a series of questions with 3 possible answers each, 10 seconds per question, with the questions downloaded from a server. It will have ratings and so on. I want to make it as scalable as possible, and I wonder what technologies to use to accomplish that. HTML5, CSS3 and JavaScript, obviously - but what about the server side? I have been researching and found that Socket.IO + Node.js + MongoDB would do the job, but after doing some more research, maybe not. Can you suggest some kind of architecture for this game? Free technologies, if possible. Or what to read, and where to start, in order to understand how to do it. Thanks in advance! P.S. I have experience with HTML5, CSS3, JavaScript, C#, ASP.NET MVC and relational DBs.

  • Player Movement DirectX

    - by SullY
    I'm reading a book about game development with C++ and DirectX 9. There is something that interests me: it says that player movement speeds up as CPU power increases, because a faster CPU moves the player further every frame (better CPU = higher FPS). To bypass this, it says you just have to multiply time * movement factor. I'd like to know: is there another way to bypass it?
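
    For reference, a minimal C++ sketch of the time * movement-factor idea the book describes, scaling speed by the frame's elapsed time (the numbers are placeholders). The other common approach is a fixed simulation timestep, where the game logic always advances by a constant dt and only rendering runs as fast as the machine allows:

        #include <chrono>

        // Scale movement by the time the last frame took, so the player covers
        // the same distance per second at any frame rate.
        int main()
        {
            using clock = std::chrono::steady_clock;
            float playerX = 0.0f;
            const float speed = 120.0f;           // units per second, not per frame
            auto last = clock::now();

            for (int frame = 0; frame < 1000; ++frame) {
                auto now = clock::now();
                float dt = std::chrono::duration<float>(now - last).count();
                last = now;
                playerX += speed * dt;            // frame-rate independent step
            }
            return 0;
        }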
