Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • how to do partial updates in OpenGL?

    - by Will
    It is general wisdom that you redraw the entire viewport on each frame. I would like to use partial updates; what are the various ways I can do that, and what are their pros, cons and relative performance? (Using textures, FBOs, the accumulation buffer, any kind of scissoring that can affect swapbuffers, etc.?) A scenario: a scene with a fair few thousand visible trees; although the textures are mipmapped and they are drawn via VBOs roughly front-to-back and so on, it's still a lot of polys. Would streaming a single screen-sized texture be better than throwing them at the screen every frame? You'd only have to redraw and recapture them on camera movement, or as often as your wind model updates, which need not be every frame.
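
    One way to sketch the caching idea, assuming libgdx's FrameBuffer purely for illustration (the question is API-agnostic, so the framework, class and method names below are assumptions): render the trees into an offscreen texture only when the camera or the wind model changes, and otherwise draw that texture as a single screen-sized quad.

      import com.badlogic.gdx.Gdx;
      import com.badlogic.gdx.graphics.GL20;
      import com.badlogic.gdx.graphics.Pixmap;
      import com.badlogic.gdx.graphics.g2d.SpriteBatch;
      import com.badlogic.gdx.graphics.g2d.TextureRegion;
      import com.badlogic.gdx.graphics.glutils.FrameBuffer;

      public class CachedTreeLayer {
          private final FrameBuffer fbo;
          private final TextureRegion cached;
          private final SpriteBatch batch = new SpriteBatch();
          private boolean dirty = true;            // set on camera movement or wind update

          public CachedTreeLayer(int width, int height) {
              fbo = new FrameBuffer(Pixmap.Format.RGB888, width, height, true);
              cached = new TextureRegion(fbo.getColorBufferTexture());
              cached.flip(false, true);            // FBO colour attachments come out upside down
          }

          public void invalidate() { dirty = true; }

          public void render(Runnable drawTrees) {
              if (dirty) {
                  fbo.begin();                     // expensive pass: thousands of tree VBOs
                  Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
                  drawTrees.run();
                  fbo.end();
                  dirty = false;
              }
              batch.begin();                       // cheap pass: one screen-sized textured quad
              batch.draw(cached, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
              batch.end();
          }
      }

    The trade-off is fill rate plus a full-resolution texture in memory versus vertex throughput; scissor rectangles or copy-to-texture recapture are variations on the same idea.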

    Read the article

  • Navigating Libgdx Menu with arrow keys or controller

    - by Phil Royer
    I'm attempting to make my menu navigable with the arrow keys or via the d-pad on a controller. So Far I've had no luck. The question is: Can someone walk me through how to make my current menu or any libgdx menu keyboard accessible? I'm a bit noobish with some stuff and I come from a Javascript background. Here's an example of what I'm trying to do: http://dl.dropboxusercontent.com/u/39448/webgl/qb/qb.html For a simple menu that you can just add a few buttons to and it run out of the box use this: http://www.sadafnoor.com/blog/how-to-create-simple-menu-in-libgdx/ Or you can use my code but I use a lot of custom styles. And here's an example of my code: import aurelienribon.tweenengine.Timeline; import aurelienribon.tweenengine.Tween; import aurelienribon.tweenengine.TweenManager; import com.badlogic.gdx.Game; import com.badlogic.gdx.Gdx; import com.badlogic.gdx.Screen; import com.badlogic.gdx.graphics.GL20; import com.badlogic.gdx.graphics.Texture; import com.badlogic.gdx.graphics.g2d.Sprite; import com.badlogic.gdx.graphics.g2d.SpriteBatch; import com.badlogic.gdx.graphics.g2d.TextureAtlas; import com.badlogic.gdx.math.Vector2; import com.badlogic.gdx.scenes.scene2d.Actor; import com.badlogic.gdx.scenes.scene2d.InputEvent; import com.badlogic.gdx.scenes.scene2d.InputListener; import com.badlogic.gdx.scenes.scene2d.Stage; import com.badlogic.gdx.scenes.scene2d.ui.Skin; import com.badlogic.gdx.scenes.scene2d.ui.Table; import com.badlogic.gdx.scenes.scene2d.ui.TextButton; import com.badlogic.gdx.scenes.scene2d.utils.Align; import com.badlogic.gdx.scenes.scene2d.utils.ClickListener; import com.project.game.tween.ActorAccessor; public class MainMenu implements Screen { private SpriteBatch batch; private Sprite menuBG; private Stage stage; private TextureAtlas atlas; private Skin skin; private Table table; private TweenManager tweenManager; @Override public void render(float delta) { Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); batch.begin(); menuBG.draw(batch); batch.end(); //table.debug(); stage.act(delta); stage.draw(); //Table.drawDebug(stage); tweenManager.update(delta); } @Override public void resize(int width, int height) { menuBG.setSize(width, height); stage.setViewport(width, height, false); table.invalidateHierarchy(); } @Override public void resume() { } @Override public void show() { stage = new Stage(); Gdx.input.setInputProcessor(stage); batch = new SpriteBatch(); atlas = new TextureAtlas("ui/atlas.pack"); skin = new Skin(Gdx.files.internal("ui/menuSkin.json"), atlas); table = new Table(skin); table.setBounds(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); // Set Background Texture menuBackgroundTexture = new Texture("images/mainMenuBackground.png"); menuBG = new Sprite(menuBackgroundTexture); menuBG.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); // Create Main Menu Buttons // Button Play TextButton buttonPlay = new TextButton("START", skin, "inactive"); buttonPlay.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { ((Game) Gdx.app.getApplicationListener()).setScreen(new LevelMenu()); } }); buttonPlay.addListener(new InputListener() { public boolean keyDown (InputEvent event, int keycode) { System.out.println("down"); return true; } }); buttonPlay.padBottom(12); buttonPlay.padLeft(20); buttonPlay.getLabel().setAlignment(Align.left); // Button EXTRAS TextButton buttonExtras = new TextButton("EXTRAS", skin, "inactive"); buttonExtras.addListener(new ClickListener() { @Override 
public void clicked(InputEvent event, float x, float y) { ((Game) Gdx.app.getApplicationListener()).setScreen(new ExtrasMenu()); } }); buttonExtras.padBottom(12); buttonExtras.padLeft(20); buttonExtras.getLabel().setAlignment(Align.left); // Button Credits TextButton buttonCredits = new TextButton("CREDITS", skin, "inactive"); buttonCredits.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { ((Game) Gdx.app.getApplicationListener()).setScreen(new Credits()); } }); buttonCredits.padBottom(12); buttonCredits.padLeft(20); buttonCredits.getLabel().setAlignment(Align.left); // Button Settings TextButton buttonSettings = new TextButton("SETTINGS", skin, "inactive"); buttonSettings.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { ((Game) Gdx.app.getApplicationListener()).setScreen(new Settings()); } }); buttonSettings.padBottom(12); buttonSettings.padLeft(20); buttonSettings.getLabel().setAlignment(Align.left); // Button Exit TextButton buttonExit = new TextButton("EXIT", skin, "inactive"); buttonExit.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { Gdx.app.exit(); } }); buttonExit.padBottom(12); buttonExit.padLeft(20); buttonExit.getLabel().setAlignment(Align.left); // Adding Heading-Buttons to the cue table.add().width(190); table.add().width((table.getWidth() / 10) * 3); table.add().width((table.getWidth() / 10) * 5).height(140).spaceBottom(50); table.add().width(190).row(); table.add().width(190); table.add(buttonPlay).spaceBottom(20).width(460).height(110); table.add().row(); table.add().width(190); table.add(buttonExtras).spaceBottom(20).width(460).height(110); table.add().row(); table.add().width(190); table.add(buttonCredits).spaceBottom(20).width(460).height(110); table.add().row(); table.add().width(190); table.add(buttonSettings).spaceBottom(20).width(460).height(110); table.add().row(); table.add().width(190); table.add(buttonExit).width(460).height(110); table.add().row(); stage.addActor(table); // Animation Settings tweenManager = new TweenManager(); Tween.registerAccessor(Actor.class, new ActorAccessor()); // Heading and Buttons Fade In Timeline.createSequence().beginSequence() .push(Tween.set(buttonPlay, ActorAccessor.ALPHA).target(0)) .push(Tween.set(buttonExtras, ActorAccessor.ALPHA).target(0)) .push(Tween.set(buttonCredits, ActorAccessor.ALPHA).target(0)) .push(Tween.set(buttonSettings, ActorAccessor.ALPHA).target(0)) .push(Tween.set(buttonExit, ActorAccessor.ALPHA).target(0)) .push(Tween.to(buttonPlay, ActorAccessor.ALPHA, .5f).target(1)) .push(Tween.to(buttonExtras, ActorAccessor.ALPHA, .5f).target(1)) .push(Tween.to(buttonCredits, ActorAccessor.ALPHA, .5f).target(1)) .push(Tween.to(buttonSettings, ActorAccessor.ALPHA, .5f).target(1)) .push(Tween.to(buttonExit, ActorAccessor.ALPHA, .5f).target(1)) .end().start(tweenManager); tweenManager.update(Gdx.graphics.getDeltaTime()); } public static Vector2 getStageLocation(Actor actor) { return actor.localToStageCoordinates(new Vector2(0, 0)); } @Override public void dispose() { stage.dispose(); atlas.dispose(); skin.dispose(); menuBG.getTexture().dispose(); } @Override public void hide() { dispose(); } @Override public void pause() { } }
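
    As a starting point, here is a sketch (not drop-in code) that could be added at the end of show(): keep the buttons in an ordered list, listen for key events on the Stage, and move a selection index up and down. The "selected" style name and the per-button actions are assumptions you would wire to your own skin and screens; a ControllerListener from the gdx-controllers extension could feed the same move/activate methods for the d-pad.

      // assumed extra imports: com.badlogic.gdx.Input, com.badlogic.gdx.utils.Array
      // note: the button locals must be final (or promoted to fields) to be captured here
      final Array<TextButton> items = new Array<TextButton>();
      items.add(buttonPlay);
      items.add(buttonExtras);
      items.add(buttonCredits);
      items.add(buttonSettings);
      items.add(buttonExit);

      stage.addListener(new InputListener() {
          private int selected = 0;

          @Override
          public boolean keyDown(InputEvent event, int keycode) {
              if (keycode == Input.Keys.DOWN)       move(1);
              else if (keycode == Input.Keys.UP)    move(-1);
              else if (keycode == Input.Keys.ENTER) activate();
              else return false;
              return true;
          }

          private void move(int delta) {
              items.get(selected).setStyle(skin.get("inactive", TextButton.TextButtonStyle.class));
              selected = (selected + delta + items.size) % items.size;
              // "selected" is a hypothetical highlighted style you would add to menuSkin.json
              items.get(selected).setStyle(skin.get("selected", TextButton.TextButtonStyle.class));
          }

          private void activate() {
              switch (selected) {
                  case 0: ((Game) Gdx.app.getApplicationListener()).setScreen(new LevelMenu()); break;
                  case 1: ((Game) Gdx.app.getApplicationListener()).setScreen(new ExtrasMenu()); break;
                  case 2: ((Game) Gdx.app.getApplicationListener()).setScreen(new Credits()); break;
                  case 3: ((Game) Gdx.app.getApplicationListener()).setScreen(new Settings()); break;
                  case 4: Gdx.app.exit(); break;
              }
          }
      });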

    Read the article

  • running GL ES 2.0 code under Linux ( no Android no iOS )

    - by user827992
    I need to code some OpenGL ES 2.0 bits and I would like to run the programs on my desktop for practical reasons. I have already tried the official GLES SDK from ATI for my video card, but it does not even run the examples that come with the SDK itself. I'm not looking for performance here; even a software-based rendering pipeline would be enough. I just need full support for GLES 2.0 and GLSL to code and run GL stuff. Is there a reliable solution for this under Ubuntu Linux?

    Read the article

  • How are buttons made to be clicked?

    - by Johnny
    I just want to ask a general question; depending on the answer, I'll continue thinking. In games there are lots of clickable items: a play button, exit, maybe comboboxes, etc. My question is: are those buttons drawn on the same canvas as the background and everything else, or is there a separate canvas object for each different thing? The question is general; I'm not asking about a specific game, I'm asking how this is usually done. I'm planning to start a game on Android, and I'm unsure how to design buttons and other objects. I'll probably use View/SurfaceView for now, as I don't have much experience with OpenGL yet. Thanks in advance.
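
    In practice the usual answer is the former: everything, including the buttons, is drawn onto the same canvas/surface each frame, and a "button" is just a rectangle you hit-test in the touch handler. A minimal sketch for the SurfaceView route (class and field names are illustrative, not from any particular framework):

      import android.graphics.Canvas;
      import android.graphics.Paint;
      import android.graphics.RectF;

      class SimpleButton {
          private final RectF bounds;
          private final Runnable action;

          SimpleButton(float x, float y, float w, float h, Runnable action) {
              this.bounds = new RectF(x, y, x + w, y + h);
              this.action = action;
          }

          void draw(Canvas canvas, Paint paint) {
              canvas.drawRect(bounds, paint);      // or draw a bitmap for the button art
          }

          boolean onTouch(float touchX, float touchY) {
              if (bounds.contains(touchX, touchY)) {
                  action.run();                    // e.g. switch to the game screen
                  return true;
              }
              return false;
          }
      }

    Layering regular View widgets over a SurfaceView is also common for menus; the single-canvas approach matters more once you move to OpenGL, where there are no Views inside the GL surface.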

    Read the article

  • Raycasting mouse coordinates to rotated object?

    - by SPL
    I am trying to cast a ray from my mouse to a plane at a specified position with a known width, length and height. I know that you can use NDC (Normalized Device Coordinates) to cast a ray, but I don't know how I can detect whether the ray actually hit the plane, and where. The plane is translated -100 on the Y, rotated 60 on the X, then translated again by -100. Can anyone please point me to a good tutorial on this, for a complete noob? I am fairly new to matrix and vector transformations.
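
    A sketch of the test itself, using libgdx's math classes purely for illustration (any vector library works the same way): build a pick ray from the mouse through the camera, give the plane the same rotation and translation you give the mesh, then intersect. The -200 below is an assumption derived from the two -100 translations described above.

      import com.badlogic.gdx.Gdx;
      import com.badlogic.gdx.graphics.Camera;
      import com.badlogic.gdx.math.Intersector;
      import com.badlogic.gdx.math.Plane;
      import com.badlogic.gdx.math.Vector3;
      import com.badlogic.gdx.math.collision.Ray;

      public class PlanePicker {
          public static Vector3 pick(Camera camera) {
              // Ray from the mouse cursor into the scene (the NDC/unproject step).
              Ray ray = camera.getPickRay(Gdx.input.getX(), Gdx.input.getY());

              // The plane's normal starts as +Y and gets the same 60-degree X rotation
              // as the mesh; the point is any point on the plane after its translations.
              Vector3 normal = new Vector3(0, 1, 0).rotate(Vector3.X, 60f);
              Vector3 pointOnPlane = new Vector3(0, -200, 0);   // assumed from the -100, -100 above
              Plane plane = new Plane(normal, pointOnPlane);

              Vector3 hit = new Vector3();
              if (Intersector.intersectRayPlane(ray, plane, hit)) {
                  return hit;    // world-space point; compare against the quad's width/length
              }
              return null;       // ray is parallel to, or points away from, the plane
          }
      }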

    Read the article

  • Make Pong on android using OpenGL-ES

    - by brainydexter
    I am trying to make a simple Pong game using OpenGL ES. I have checked out some of the tutorials/samples, but most of them date back to 2009. I am familiar with game programming, and consider Pong to be the hello world of it! Right now I intend to make it using the Android SDK (2.3), but eventually I want to move to the NDK so I can port my other work to Android. Would anyone have a good reference for a starting point? Thanks

    Read the article

  • obj-c classes and sub classes (Cocos2d) conversion

    - by Lewis
    Hi I'm using this version of cocos2d: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers Which supports the UIGestureRecognizer within a CCLayer in a cocos2d scene like so: @interface HelloWorldLayer : CCLayer <UIGestureRecognizerDelegate> { } Now I want to make this custom gesture work within the scene, attaching it to a sprite in cocos2d: #import <Foundation/Foundation.h> #import <UIKit/UIGestureRecognizerSubclass.h> @protocol OneFingerRotationGestureRecognizerDelegate <NSObject> @optional - (void) rotation: (CGFloat) angle; - (void) finalAngle: (CGFloat) angle; @end @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer { CGPoint midPoint; CGFloat innerRadius; CGFloat outerRadius; CGFloat cumulatedAngle; id <OneFingerRotationGestureRecognizerDelegate> target; } - (id) initWithMidPoint: (CGPoint) midPoint innerRadius: (CGFloat) innerRadius outerRadius: (CGFloat) outerRadius target: (id) target; - (void)reset; - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event; @end #include <math.h> #import "OneFingerRotationGestureRecognizer.h" @implementation OneFingerRotationGestureRecognizer // private helper functions CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2); CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB); - (id) initWithMidPoint: (CGPoint) _midPoint innerRadius: (CGFloat) _innerRadius outerRadius: (CGFloat) _outerRadius target: (id <OneFingerRotationGestureRecognizerDelegate>) _target { if ((self = [super initWithTarget: _target action: nil])) { midPoint = _midPoint; innerRadius = _innerRadius; outerRadius = _outerRadius; target = _target; } return self; } /** Calculates the distance between point1 and point 2. 
*/ CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2) { CGFloat dx = point1.x - point2.x; CGFloat dy = point1.y - point2.y; return sqrt(dx*dx + dy*dy); } CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB) { CGFloat a = endLineA.x - beginLineA.x; CGFloat b = endLineA.y - beginLineA.y; CGFloat c = endLineB.x - beginLineB.x; CGFloat d = endLineB.y - beginLineB.y; CGFloat atanA = atan2(a, b); CGFloat atanB = atan2(c, d); // convert radiants to degrees return (atanA - atanB) * 180 / M_PI; } #pragma mark - UIGestureRecognizer implementation - (void)reset { [super reset]; cumulatedAngle = 0; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; if ([touches count] != 1) { self.state = UIGestureRecognizerStateFailed; return; } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; if (self.state == UIGestureRecognizerStateFailed) return; CGPoint nowPoint = [[touches anyObject] locationInView: self.view]; CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view]; // make sure the new point is within the area CGFloat distance = distanceBetweenPoints(midPoint, nowPoint); if ( innerRadius <= distance && distance <= outerRadius) { // calculate rotation angle between two points CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint); // fix value, if the 12 o'clock position is between prevPoint and nowPoint if (angle > 180) { angle -= 360; } else if (angle < -180) { angle += 360; } // sum up single steps cumulatedAngle += angle; // call delegate if ([target respondsToSelector: @selector(rotation:)]) { [target rotation:angle]; } } else { // finger moved outside the area self.state = UIGestureRecognizerStateFailed; } } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; if (self.state == UIGestureRecognizerStatePossible) { self.state = UIGestureRecognizerStateRecognized; if ([target respondsToSelector: @selector(finalAngle:)]) { [target finalAngle:cumulatedAngle]; } } else { self.state = UIGestureRecognizerStateFailed; } cumulatedAngle = 0; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; self.state = UIGestureRecognizerStateFailed; cumulatedAngle = 0; } @end Header file for view controller: #import "OneFingerRotationGestureRecognizer.h" @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate> @property (nonatomic, strong) IBOutlet UIImageView *image; @property (nonatomic, strong) IBOutlet UITextField *textDisplay; @end then this is in the .m file: gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint: midPoint innerRadius: outRadius / 3 outerRadius: outRadius target: self]; [self.view addGestureRecognizer: gestureRecognizer]; Now my question is, is it possible to add this custom gesture into the cocos2d project found on that github, and if so, what do I need to change in the OneFingerRotationGestureRecognizerDelegate to get it to work within cocos2d. Because at the minute it is setup in a standard iOS project and not a cocos2d project and I do not know enough about UIViews and classing/ sub classing in obj-c to get this to work. Also it seems to inherit from a UIView where cocos2d uses CCLayer. Kind regards, Lewis. 
I also realise I may not have included enough code from the custom gesture project for readers to interpret it fully, so the full project can be found here: https://github.com/melle/OneFingerRotationGestureDemo

    Read the article

  • Collision Detection on floor tiles Isometric game

    - by Anivrom
    I am having a very hard to time figuring out a bug in my code. It should have taken me 20 minutes but instead I've been working on it for over 12 hours. I am writing a isometric tile based game where the characters can walk freely amongst the tiles, but not be able to cross over to certain tiles that have a collides flag. Sounds easy enough, just check ahead of where the player is going to move using a Screen Coordinates to Tile method and check the tiles array using our returned xy indexes to see if its collidable or not. if its not, then don't move the character. The problem I'm having is my Screen to Tile method isn't spitting out the proper X,Y tile indexes. This method works flawlessly for selecting tiles with the mouse. NOTE: My X tiles go from left to right, and my Y tiles go from up to down. Reversed from some examples on the net. Here's the relevant code: public Vector2 ScreentoTile(Vector2 screenPoint) { //Vector2 is just a object with x and y float properties //camOffsetX,Y are my camera values that I use to shift everything but the //current camera target when the target moves //tilescale = 128, screenheight = 480, the -46 offset is to center // vertically + 16 px for some extra gfx in my tile png Vector2 tileIndex = new Vector2(-1,-1); screenPoint.x -= camOffsetX; screenPoint.y = screenHeight - screenPoint.y - camOffsetY - 46; tileIndex.x = (screenPoint.x / tileScale) + (screenPoint.y / (tileScale / 2)); tileIndex.y = (screenPoint.x / tileScale) - (screenPoint.y / (tileScale / 2)); return tileIndex; } The method that calls this code is: private void checkTileTouched () { if (Gdx.input.justTouched()) { if (last.x >= 0 && last.x < levelWidth && last.y >= 0 && last.y < levelHeight) { if (lastSelectedTile != null) lastSelectedTile.setColor(1, 1, 1, 1); Sprite sprite = levelTiles[(int) last.x][(int) last.y].sprite; sprite.setColor(0, 0.3f, 0, 1); lastSelectedTile = sprite; } } if (touchDown) { float moveX=0,moveY=0; Vector2 pos = new Vector2(); if (player.direction == direction_left) { moveX = -(player.moveSpeed); moveY = -(player.moveSpeed / 2); Gdx.app.log("Movement", String.valueOf("left")); } else if (player.direction == direction_upleft) { moveX = -(player.moveSpeed); moveY = 0; Gdx.app.log("Movement", String.valueOf("upleft")); } else if (player.direction == direction_up) { moveX = -(player.moveSpeed); moveY = player.moveSpeed / 2; Gdx.app.log("Movement", String.valueOf("up")); } else if (player.direction == direction_upright) { moveX = 0; moveY = player.moveSpeed; Gdx.app.log("Movement", String.valueOf("upright")); } else if (player.direction == direction_right) { moveX = player.moveSpeed; moveY = player.moveSpeed / 2; Gdx.app.log("Movement", String.valueOf("right")); } else if (player.direction == direction_downright) { moveX = player.moveSpeed; moveY = 0; Gdx.app.log("Movement", String.valueOf("downright")); } else if (player.direction == direction_down) { moveX = player.moveSpeed; moveY = -(player.moveSpeed / 2); Gdx.app.log("Movement", String.valueOf("down")); } else if (player.direction == direction_downleft) { moveX = 0; moveY = -(player.moveSpeed); Gdx.app.log("Movement", String.valueOf("downleft")); } //Player.moveSpeed is 1 //tileObjects.x is drawn in the center of the screen (400px,240px) // the sprite width is 64, height is 128 testX = moveX * 10; testY = moveY * 10; testX += tileObjects.get(player.zIndex).x + tileObjects.get(player.zIndex).sprite.getWidth() / 2; testY += tileObjects.get(player.zIndex).y + tileObjects.get(player.zIndex).sprite.getHeight() / 2; 
moveX += tileObjects.get(player.zIndex).x + tileObjects.get(player.zIndex).sprite.getWidth() / 2; moveY += tileObjects.get(player.zIndex).y + tileObjects.get(player.zIndex).sprite.getHeight() / 2; pos = ScreentoTile(new Vector2(moveX,moveY)); Vector2 pos2 = ScreentoTile(new Vector2(testX,testY)); if (!levelTiles[(int) pos2.x][(int) pos2.y].collides) { Vector2 newPlayerPos = ScreentoTile(new Vector2(moveX,moveY)); CenterOnCoord(moveX,moveY); player.tileX = (int)newPlayerPos.x; player.tileY = (int)newPlayerPos.y; } } } When the player is moving to the left (downleft-ish from the viewers point of view), my Pos2 X values decrease as expected but pos2 isnt checking ahead on the x tiles, it is checking ahead on the Y tiles(as if we were moving DOWN, not left), and vice versa, if the player moves down, it will check ahead on the X values (as if we are moving LEFT, instead of DOWN). instead of the Y values. I understand this is probably the most confusing and horribly written post ever, but I'm confused myself so I'm having a hard time explaining it to others lol. if you need more information please ask!! I'm so frustrated after over 12 hours of working on it I'm about to give up.
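
    One thing that might be worth trying before debugging the projection math any further (a sketch, reusing the field names above): keep player.tileX/player.tileY authoritative and do the collision lookahead in tile space, so each facing maps to an explicit tile offset and the screen-to-tile conversion is only needed for rendering and mouse picking. The sign of each offset depends on how your tile axes map to screen directions (you note X runs left to right and Y top to bottom), so treat them as placeholders to adjust.

      // Sketch: look one tile ahead based on facing, entirely in tile coordinates.
      int aheadX = player.tileX;
      int aheadY = player.tileY;
      if (player.direction == direction_left)            { aheadX -= 1; }
      else if (player.direction == direction_right)      { aheadX += 1; }
      else if (player.direction == direction_up)         { aheadY -= 1; }
      else if (player.direction == direction_down)       { aheadY += 1; }
      else if (player.direction == direction_upleft)     { aheadX -= 1; aheadY -= 1; }
      else if (player.direction == direction_upright)    { aheadX += 1; aheadY -= 1; }
      else if (player.direction == direction_downleft)   { aheadX -= 1; aheadY += 1; }
      else if (player.direction == direction_downright)  { aheadX += 1; aheadY += 1; }

      boolean blocked = aheadX < 0 || aheadY < 0
              || aheadX >= levelWidth || aheadY >= levelHeight
              || levelTiles[aheadX][aheadY].collides;
      if (!blocked) {
          // safe to apply the movement / CenterOnCoord(...) as before
      }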

    Read the article

  • Image with FadeIn effect blinks when added to scene

    - by Ef Es
    I am trying to add an image to the scene, but it should just be added to the scene invisible, FadeIn and then be deleted when the effect finishes. My problem is that the images blink once when they are added to the scene, then they do the intended effect. My best guess is that when they are added they show on the scene for a split second before starting the animation. I though of making them invisible for a split second before activating them, but I am not sure how to code it. const bool Sunbeams::add() { const CCSize kSceenSize = CCDirector::sharedDirector()->getWinSize(); const int nRayType = random( m_kRays.size()); const CCPoint kPosition( random( static_cast < int >( kSceenSize.width)), 0.0f); const float fDuration = random( m_fDurationVariance) + m_fDurationMin; CCSprite* pkLightBeam = CCSprite::spriteWithTexture( m_kRays[nRayType]); if ( !pkLightBeam) { msg::debug( "Sunbeams::add", "Failed to create sprite from ray '%d'!\n", m_kRays[nRayType]); return false; } pkLightBeam->setAnchorPoint( CCPointZero); pkLightBeam->setPosition( kPosition); m_kActiveBeams.push_back( pkLightBeam); CCDirector::sharedDirector()->getRunningScene()->addChild( pkLightBeam); CCActionInterval* pkAction = CCFadeIn::actionWithDuration( fDuration); CCActionInterval* pkActionBack = pkAction->reverse(); pkLightBeam->runAction( CCSequence::actions( pkAction, pkActionBack, 0)); return true; }

    Read the article

  • ray collision with rectangle and floating point accuracy

    - by phq
    I'm trying to solve a problem with a ray bouncing off a box. Actually it is a sphere, but for simplicity the box dimensions are expanded by the sphere radius when doing the collision test, reducing the sphere to a single ray. The test is done by projecting the ray onto all faces of the box and picking the closest hit. However, because I'm using floating-point variables, I fear that the projected point on the surface might be interpreted as being below it in the next iteration; I will also later allow the sphere to move, which might make that scenario more likely. The bounce coefficient might also be as low as zero, making the sphere continue along the surface. So my naive solution is to project not only forwards but also backwards to catch those cases. That is where I ran into the problem shown in the figure: in the first iteration the first black arrow is calculated and we end up at a point on the surface of the box; in the second iteration the "back projection" hits the other surface, making the second black arrow bounce off the wrong surface. If there are several boxes close to each other this has further consequences, making the sphere fall through them all. So my main question is how to handle possible floating-point inaccuracy when placing the sphere on the box surface so it does not fall through. In writing this question I got the idea of a threshold: only accept back projections within a certain distance, much smaller than the box but larger than the possible accuracy limitation. This would only cause a "false" back projection when the sphere hits the box on an edge, which would appear natural. To clarify my original approach: the arrows shown in the image are not only the path the sphere travels but also represent a single time step in the simulation. In reality the time step is much smaller, about 0.05 of the box size. The path travelled is projected onto possible sides to avoid travelling past a thinner object at higher speeds. In normal situations floating-point accuracy is not an issue, but there are two situations where I have the concern: when the new position at the end of the time step is located very close to the surface (very unlikely, though), and when using a bounce factor of 0, where it happens every time the sphere hits a box. Adding to the loss of accuracy, and the motivation for my concern, is that the sphere and box are in different coordinate systems and thus the sphere location is transformed for every test. This last point is why I'm not willing to rely on luck that a floating-point value lying exactly on top of the box will always be interpreted the same way. I did not know Voronoi regions by name, but looking at them I'm not sure how they would be used in the projection scenario I'm using here.
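
    For what it's worth, the thresholded back projection described above usually reduces to something like the small sketch below (plain Java, names illustrative): pick an epsilon that is orders of magnitude above float error at your coordinate scale but far below the box size, and only accept a backwards hit inside that band.

      public final class BackProjection {
          // Far above float error at this coordinate scale, far below the box size
          // (box assumed to be ~1 unit here; tune to your world).
          private static final float EPSILON = 1e-3f;

          // signedDistanceToFace < 0 means the sphere centre is already behind the face.
          // Only a shallow penetration, within EPSILON, is snapped back to the surface;
          // anything deeper is treated as a real miss, so the back projection cannot
          // latch onto the wrong face as described above.
          public static boolean acceptBackProjection(float signedDistanceToFace) {
              return signedDistanceToFace < 0f && signedDistanceToFace > -EPSILON;
          }
      }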

    Read the article

  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using render to texture concept with glFramebufferTexture1D. I am drawing a cube on non-default FBO with all the vertices as -1,1 (maximum) in X Y Z direction. Now i am setting viewport to X while rendering on non default FBO. My background is blue with white color of cube. For default FBO, i have created 1-D texture and attached this texture to above FBO with color attachment. I am setting width of texture equal to width*height of above FBO view-port. Now, when i render this texture to on another cube, i can see continuous white color on start or end of each face of the cube. That means part of the face is white and rest is blue. I am not sure whether this behavior is correct or not. I expect all the texels should be white as i am using -1 and 1 coordinates for cube rendered on non-default FBO. code: #define WIDTH 3 #define HEIGHT 3 GLfloat vertices8[]={ 1.0f,1.0f,1.0f, -1.0f,1.0f,1.0f, -1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,//face 1 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 2 1.0f,1.0f,1.0f, 1.0f,-1.0f,1.0f, 1.0f,-1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 3 -1.0f,1.0f,1.0f, -1.0f,1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f,//face 4 1.0f,1.0f,1.0f, 1.0f,1.0f,-1.0f, -1.0f,1.0f,-1.0f, -1.0f,1.0f,1.0f,//face 5 -1.0f,-1.0f,1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f//face 6 }; GLfloat vertices[]= { 0.5f,0.5f,0.5f, -0.5f,0.5f,0.5f, -0.5f,-0.5f,0.5f, 0.5f,-0.5f,0.5f,//face 1 0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 2 0.5f,0.5f,0.5f, 0.5f,-0.5f,0.5f, 0.5f,-0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 3 -0.5f,0.5f,0.5f, -0.5f,0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f,//face 4 0.5f,0.5f,0.5f, 0.5f,0.5f,-0.5f, -0.5f,0.5f,-0.5f, -0.5f,0.5f,0.5f,//face 5 -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f, 0.5f,-0.5f,0.5f//face 6 }; GLuint indices[] = { 0, 2, 1, 0, 3, 2, 4, 5, 6, 4, 6, 7, 8, 9, 10, 8, 10, 11, 12, 15, 14, 12, 14, 13, 16, 17, 18, 16, 18, 19, 20, 23, 22, 20, 22, 21 }; GLfloat texcoord[] = { 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0 }; glGenTextures(1, &id1); glBindTexture(GL_TEXTURE_1D, id1); glGenFramebuffers(1, &Fboid); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT , 0, GL_RGBA, GL_UNSIGNED_BYTE,0); glBindFramebuffer(GL_FRAMEBUFFER, Fboid); glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_1D,id1,0); draw_cube(); glBindFramebuffer(GL_FRAMEBUFFER, 0); draw(); } draw_cube() { glViewport(0, 0, WIDTH, HEIGHT); glClearColor(0.0f, 0.0f, 0.5f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(temp.psId,"position")); glVertexAttribPointer(glGetAttribLocation(temp.psId,"position"), 3, GL_FLOAT, GL_FALSE, 0,vertices8); glDrawArrays (GL_TRIANGLE_FAN, 0, 24); } draw() { glClearColor(1.0f, 0.0f, 0.0f, 1.0f); glClearDepth(1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"tk_position")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"tk_position"), 3, GL_FLOAT, GL_FALSE, 0,vertices); nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, 
GL_FALSE, 0,vertices);")); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"inputtexcoord")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0,texcoord); glBindTexture(*target11, id1); glDrawElements ( GL_TRIANGLES, 36,GL_UNSIGNED_INT, indices ); when i change WIDTH=HEIGHT=2, and call a glreadpixels with height, width equal to 4 in draw_cube() i can see first 2 pixels with white color, next two with blue(glclearcolor), next two white and then blue and so on.. Now when i change width parameter in glTeximage1D to 16 then ideally i should see alternate patches of white and blue right? But its not the case here. why so?

    Read the article

  • How can I import models from Blender into jMonkeyEngine?

    - by Nathan Sabruka
    I have some Blender model files (Blender version 2.6) which I would like to use with the jMonkeyEngine SDK. However, when I use Blender's native .obj exporter, I can't import the result in jMonkeyEngine (the model simply fails to import or looks messed up). I've tried importing .obj files and .blend files directly into the jMonkeyEngine SDK to no avail. I've also tried to use various OGRE exporters to export .scene and .material files, but only the .scene file is created. Is there a simple way to export files from Blender into the jMonkeyEngine SDK? EDIT: I seem to have found something in Blender. When I go under add-ons, there's a warning in the OGRE exporter: "'.mesh' output requires OgreCommandLineTools". However, I have already installed those tools under the C drive. Has anyone else encountered this issue?

    Read the article

  • Performance due to entity update

    - by Rizzo
    I always think about two ways to code the global Step() function, both with pros and cons. Please note that AIStep is just there to provide one more kind of step for whoever wants it. // Approach 1 step foreach( entity in entities ) { entity.DeltaStep( delta_time ); if( time_for_fixed_step ) entity.FixedStep(); if( time_for_AI_step ) entity.AIStep(); ... // all kinds of updates you want } PRO: you only have to iterate once over all entities. CON: fidelity could be lower in some scenarios, since entity.FixedStep() isn't run for all entities at the same time. // Approach 2 step foreach( entity in entities ) entity.DeltaStep( delta_time ); if( time_for_fixed_step ) foreach( entity in entities ) entity.FixedStep(); if( time_for_AI_step ) foreach( entity in entities ) entity.AIStep(); // all kinds of updates you want, SEPARATED PRO: fidelity on FixedStep is higher; there shouldn't be much time between the updates of all entities, unlike Approach 1 where FixedStep() may have to wait for the other updates. CON: you iterate once for each kind of update. Also, a third approach could be a hybrid between the two, something along the lines of foreach( entity in entities ) { entity.DeltaStep( delta_time ); if( time_for_AI_step ) entity.AIStep(); // all kinds of updates you want EXCEPT FixedStep() } if( time_for_fixed_step ) { foreach( entity in entities ) { entity.FixedStep(); } } Just two loops, not caring about time fidelity in anything other than FixedStep(). Any thoughts on this? Does it really matter whether all steps happen at once, or am I worrying about problems that don't exist?
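
    A common shape for the hybrid version is a fixed-timestep accumulator, sketched below in Java (the Entity interface simply mirrors the pseudocode above, and the 60 Hz rate is an assumption): per-frame updates run once, while FixedStep() runs zero or more times per frame, always for every entity together, which keeps the fidelity of approach 2 without touching ordinary frames.

      import java.util.List;

      interface Entity {
          void deltaStep(double dt);
          void aiStep(double dt);
          void fixedStep(double dt);
      }

      class Stepper {
          private static final double FIXED_DT = 1.0 / 60.0;  // assumed physics rate
          private double accumulator = 0.0;

          void step(double deltaTime, List<Entity> entities) {
              for (Entity e : entities) {
                  e.deltaStep(deltaTime);       // every frame
                  e.aiStep(deltaTime);          // or throttle with its own accumulator
              }
              accumulator += deltaTime;
              while (accumulator >= FIXED_DT) {
                  for (Entity e : entities) {
                      e.fixedStep(FIXED_DT);    // all entities advance together, same dt
                  }
                  accumulator -= FIXED_DT;
              }
          }
      }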

    Read the article

  • Center directional light shadow to the cameras eye

    - by Caesar
    I'm currently drawing my directional light shadow using this view and projection: XMFLOAT3 dir((float)pitch, (float)yaw, (float)roll); XMFLOAT3 center(0.0f, 0.0f, 0.0f); XMVECTOR lightDir = XMLoadFloat3(&dir); XMVECTOR lightPos = radius * lightDir; XMVECTOR targetPos = XMLoadFloat3(&center); XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f); XMMATRIX V = XMMatrixLookAtLH(lightPos, targetPos, up); // This is the view // Transform bounding sphere to light space. XMFLOAT3 sphereCenterLS; XMStoreFloat3(&sphereCenterLS, XMVector3TransformCoord(targetPos, V)); // Ortho frustum in light space encloses scene. float l = sphereCenterLS.x - radius; float b = sphereCenterLS.y - radius; float n = sphereCenterLS.z - radius; float r = sphereCenterLS.x + radius; float t = sphereCenterLS.y + radius; float f = sphereCenterLS.z + radius; XMMATRIX P = XMMatrixOrthographicOffCenterLH(l, r, b, t, n, f); // This is the projection Which works prefect if the center of my scene is at 0.0, 0.0, 0.0. What I would like to do is move the center of the scene relative to the cameras position. How can I do that?

    Read the article

  • Moving my sprite in XNA using classes

    - by Tom
    Hey, I'm a newbie at this programming lark and it's really frustrating me. I'm trying to move a snake in all directions while using classes. I've created a Vector2 for speed and I've attempted to create a method which moves the snake within the snake class. Now I'm confused and not sure what to do next. This is what I've done in terms of the method... public Vector2 direction() { Vector2 inputDirection = Vector2.Zero; if (Keyboard.GetState().IsKeyDown(Keys.Left)) inputDirection.X -= 1; if (Keyboard.GetState().IsKeyDown(Keys.Right)) inputDirection.X += 1; if (Keyboard.GetState().IsKeyDown(Keys.Up)) inputDirection.Y -= 1; if (Keyboard.GetState().IsKeyDown(Keys.Down)) inputDirection.Y += 1; return inputDirection * snakeSpeed; } Appreciate any help. Thanks :D EDIT: Let me make everything clear. I'm making a small, basic game for an assignment, similar to the old Snake game on old Nokia phones. I've created a snake class (even though I'm not sure whether this is needed, because I'm only going to have one moving sprite in the game). After I wrote the code above (in the snake class), the game ran with no errors but I couldn't actually move the image :( EDIT2: Thanks so much for everyone's responses!!

    Read the article

  • Arrive steering behavior

    - by dbostream
    I bought a book called Programming game AI by example and I am trying to implement the arrive steering behavior. The problem I am having is that my objects oscillate around the target position; after oscillating less and less for awhile they finally come to a stop at the target position. Does anyone have any idea why this oscillating behavior occur? Since the examples accompanying the book are written in C++ I had to rewrite the code into C#. Below is the relevant parts of the steering behavior: private enum Deceleration { Fast = 1, Normal = 2, Slow = 3 } public MovingEntity Entity { get; private set; } public Vector2 SteeringForce { get; private set; } public Vector2 Target { get; set; } public Vector2 Calculate() { SteeringForce.Zero(); SteeringForce = SumForces(); SteeringForce.Truncate(Entity.MaxForce); return SteeringForce; } private Vector2 SumForces() { Vector2 force = new Vector2(); if (Activated(BehaviorTypes.Arrive)) { force += Arrive(Target, Deceleration.Slow); if (!AccumulateForce(force)) return SteeringForce; } return SteeringForce; } private Vector2 Arrive(Vector2 target, Deceleration deceleration) { Vector2 toTarget = target - Entity.Position; double distance = toTarget.Length(); if (distance > 0) { //because Deceleration is enumerated as an int, this value is required //to provide fine tweaking of the deceleration.. double decelerationTweaker = 0.3; double speed = distance / ((double)deceleration * decelerationTweaker); speed = Math.Min(speed, Entity.MaxSpeed); Vector2 desiredVelocity = toTarget * speed / distance; return desiredVelocity - Entity.Velocity; } return new Vector2(); } private bool AccumulateForce(Vector2 forceToAdd) { double magnitudeRemaining = Entity.MaxForce - SteeringForce.Length(); if (magnitudeRemaining <= 0) return false; double magnitudeToAdd = forceToAdd.Length(); if (magnitudeToAdd > magnitudeRemaining) magnitudeToAdd = magnitudeRemaining; SteeringForce += Vector2.NormalizeRet(forceToAdd) * magnitudeToAdd; return true; } This is the update method of my objects: public void Update(double deltaTime) { Vector2 steeringForce = Steering.Calculate(); Vector2 acceleration = steeringForce / Mass; Velocity = Velocity + acceleration * deltaTime; Velocity.Truncate(MaxSpeed); Position = Position + Velocity * deltaTime; } If you want to see the problem with your own eyes you can download a minimal example here. Thanks in advance.

    Read the article

  • Sprite not rotating around its centre after Scaling at its centre

    - by asma.farhat
    If I scale a sprite at its centre, then try to rotate it around its centre as well, the rotation does not occur around its centre. If you need to rotate, for example, a scaled ball, the way it currently works is: set the scale center to the top left (0,0), set the scale you want, then set the rotation center to the middle of the scaled sprite, and apply the rotation modifier. blaBloBliSprite.setScaleCenter(0, 0); blaBloBliSprite.setScale(0.667f); blaBloBliSprite.setPosition(557, CAMERA_HEIGHT / 2 - blaBloBliSprite.getHeightScaled() / 2); blaBloBliSprite.setRotationCenter(blaBloBliSprite.getWidthScaled() / 2, blaBloBliSprite.getHeightScaled() / 2); But I want to scale the sprite at its centre as well. Is there any way of doing that?

    Read the article

  • Hero/Character sprite size in comparison to tile size?

    - by Kid
    So I'm making a simple platformer where the hero is 16x16 in size, and the tile size is also 16x16. Which sounds fine, right? But my game window/world is 800x416, which makes the hero really, really tiny in comparison. This surprised me, but given that I've never made a platformer before, it is also a new discovery. Is there a rule of thumb for scale in platformer games? I'd like my game window to remain the size it is (800x416), because the game involves large levels. But how big should my hero be? I hope the question is clear, and I appreciate any insight. Thanks
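
    If the art should stay 16x16 but read larger on screen, one option is to decouple the window size from the world size with a scaled camera. A libgdx-flavoured sketch (the framework choice is an assumption), where halving the virtual resolution doubles every sprite on screen:

      import com.badlogic.gdx.graphics.OrthographicCamera;
      import com.badlogic.gdx.graphics.g2d.SpriteBatch;

      public class PixelScaleExample {
          private final OrthographicCamera camera = new OrthographicCamera();
          private final SpriteBatch batch = new SpriteBatch();

          public PixelScaleExample() {
              // Window stays 800x416; the camera sees a 400x208 slice of the world,
              // so a 16x16 hero covers 32x32 window pixels without touching the art.
              camera.setToOrtho(false, 400, 208);
          }

          public void render() {
              camera.update();
              batch.setProjectionMatrix(camera.combined);
              // batch.begin(); ... draw tiles and hero in world units ... batch.end();
          }
      }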

    Read the article

  • How to make an object stay relative to another object

    - by Nick
    In the following example there are a guy and a boat. They both have a position, orientation and velocity. The guy is standing on the shore and would like to board. He changes his position so he is now standing on the boat. The boat changes velocity and orientation and heads off. My character, however, has a velocity of (0,0,0), but I would like him to stay on board. When I move the character around, I would like him to move as if the boat were the ground he was standing on. How do I keep my character aligned properly with the boat? It is exactly like in World of Warcraft when you board a boat or zeppelin. This is my physics code for the guy and the boat: this.velocity.addSelf(acceleration.multiplyScalar(dTime)); this.position.addSelf(this.velocity.clone().multiplyScalar(dTime)); The guy already has a reference to the boat he's standing on, and thus knows the boat's position, velocity and orientation (matrices or quaternions can also be used).
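
    The usual trick, sketched below with libgdx-style Vector3/Quaternion as an assumption about the math types (names are illustrative): while the guy is on board, store his position in the boat's local space and rebuild his world position from the boat's transform every frame, so walking on deck only edits the local offset.

      import com.badlogic.gdx.math.Quaternion;
      import com.badlogic.gdx.math.Vector3;

      class Boat {
          Vector3 position = new Vector3();
          Quaternion orientation = new Quaternion();
      }

      class Rider {
          Vector3 position = new Vector3();       // world space
          Vector3 localOffset = new Vector3();    // boat space, only valid while boarded
          Boat boat;                              // null while on shore

          void board(Boat boat) {
              this.boat = boat;
              // world -> boat local: remove the boat's translation, undo its rotation
              localOffset.set(position).sub(boat.position);
              boat.orientation.cpy().conjugate().transform(localOffset);
          }

          void update(float dt) {
              if (boat != null) {
                  // walking on deck edits localOffset (boat space), not position directly
                  position.set(localOffset);
                  boat.orientation.transform(position);   // boat local -> world rotation
                  position.add(boat.position);            // then the boat's translation
              } else {
                  // normal physics: position += velocity * dt, as in the code above
              }
          }
      }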

    Read the article

  • problem in array of shooter sprites which contain different colour bubbles

    - by prakash s
    everyone i am developing bubble shooter game in cocos2d I have placed shooter array which contain different color bubbles like this 00000000 it is 8 bubbles array if i tap the screen, first bubbles should move for shooting the target .png .And if i again tap the screen again 2nd position bubble should move for shooting the target.png bubbles,how it will possible for me because i have already created the array of target which contain different color bubbles, here i write the code : - (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { // Choose one of the touches to work with UITouch *touch = [touches anyObject]; CGPoint location = [touch locationInView:[touch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; // Set up initial location of projectile CGSize winSize = [[CCDirector sharedDirector] winSize]; NSMutableArray * movableSprites = [[NSMutableArray alloc] init]; NSArray *images = [NSArray arrayWithObjects:@"1.png", @"2.png", @"3.png", @"4.png",@"5.png",@"6.png",@"7.png", @"8.png", nil]; for(int i = 0; i < images.count; ++i) { int index = (arc4random() % 8)+1; NSString *image = [NSString stringWithFormat:@"%d.png", index]; CCSprite*projectile = [CCSprite spriteWithFile:image]; //CCSprite *projectile = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0,256,256)]; [self addChild:projectile]; [movableSprites addObject:projectile]; float offsetFraction = ((float)(i+1))/(images.count+1); //projectile.position = ccp(20, winSize.height/2); //projectile.position = ccp(18,0 ); //projectile.position = ccp(350*offsetFraction, 20.0f); projectile.position = ccp(10/offsetFraction, 20.0f); // projectile.position = ccp(projectile.position.x,projectile.position.y); // Determine offset of location to projectile int offX = location.x - projectile.position.x; int offY = location.y - projectile.position.y; // Bail out if we are shooting down or backwards if (offX <= 0) return; // Ok to add now - we've double checked position //[self addChild:projectile]; // Determine where we wish to shoot the projectile to int realX = winSize.width + (projectile.contentSize.width/2); float ratio = (float) offY / (float) offX; int realY = (realX * ratio) + projectile.position.y; CGPoint realDest = ccp(realX, realY); // Determine the length of how far we're shooting int offRealX = realX - projectile.position.x; int offRealY = realY - projectile.position.y; float length = sqrtf((offRealX*offRealX)+(offRealY*offRealY)); float velocity = 480/1; // 480pixels/1sec float realMoveDuration = length/velocity; // Move projectile to actual endpoint [projectile runAction:[CCSequence actions: [CCMoveTo actionWithDuration:realMoveDuration position:realDest], [CCCallFuncN actionWithTarget:self selector:@selector(spriteMoveFinished:)], nil]]; // Add to projectiles array projectile.tag = 1; [_projectiles addObject:projectile]; } }

    Read the article

  • Box2D Bicycle Wheels Motor Problem - Flash 2.1a

    - by Craig
    I have made a bicycle with Box2D using several polygons for the frame at different angles connected using weld joints, and I have revolute joints on the wheels with a motor. I have made some basic terrain (straight ground and a small ramp) and added keyboard input to control the bicycle with torque to balance it. All of this is done in with Box2D's Debug Draw. When the bicycle is on its back wheel but diagonally forward (kinda like this position - /) the motors just cause it go spinning backwards over when in reality it should either stay on its back wheel or go down onto both wheels. Here's my code the revolute joints: //Front Wheel Joint var frontWheelJointDef:b2RevoluteJointDef = new b2RevoluteJointDef(); frontWheelJointDef.Initialize(frontWheelBody, secondFrameBody, frontWheelBody.GetWorldCenter()); frontWheelJointDef.enableMotor=true; frontWheelJointDef.maxMotorTorque=10000; frontWheelJoint = _world.CreateJoint(frontWheelJointDef) as b2RevoluteJoint; //Rear Wheel Joint var rearWheelJointDef:b2RevoluteJointDef = new b2RevoluteJointDef(); rearWheelJointDef.Initialize(rearWheelBody, firstFrameBody, rearWheelBody.GetWorldCenter()); rearWheelJointDef.enableMotor=true; rearWheelJointDef.maxMotorTorque=10000; rearWheelJoint = _world.CreateJoint(rearWheelJointDef) as b2RevoluteJoint; And here's the relevant part of my update function: // up and down control wheels motor if (up) { motorSpeed-=0.5; } if (down) { motorSpeed += 0.5; } // left and right control cart torque if (left) { middleCentreFrameBody.ApplyTorque( -3); gearBody.ApplyTorque( -3); firstFrameBody.ApplyTorque( -3); secondFrameBody.ApplyTorque( -3); rearWheelToChainBody.ApplyTorque( -3); chainToFrontFrameBody.ApplyTorque( -3); topMiddleFrameBody.ApplyTorque( -3); } if (right) { middleCentreFrameBody.ApplyTorque( 3); gearBody.ApplyTorque( 3); firstFrameBody.ApplyTorque( 3); secondFrameBody.ApplyTorque( 3); rearWheelToChainBody.ApplyTorque( 3); chainToFrontFrameBody.ApplyTorque( 3); topMiddleFrameBody.ApplyTorque( 3); } // motor friction motorSpeed*=0.99; // motor max speed if (motorSpeed>100) { motorSpeed=100; } rearWheelJoint.SetMotorSpeed(motorSpeed); frontWheelJoint.SetMotorSpeed(motorSpeed); Any ideas what might be causing this? Thanks

    Read the article

  • PCF shadow shader math causing artifacts

    - by user2971069
    For a while now I used PCSS for my shadow technique of choice until I discovered a type of percentage closer filtering. This method creates really smooth shadows and with hopes of improving performance, with only a fraction of texture samples, I tried to implement PCF into my shader. This is the relevant code: float c0, c1, c2, c3; float f = blurFactor; float2 coord = ProjectedTexCoords; if (receiverDistance - tex2D(lightSampler, coord + float2(0, 0)).x > 0.0007) c0 = 1; if (receiverDistance - tex2D(lightSampler, coord + float2(f, 0)).x > 0.0007) c1 = 1; if (receiverDistance - tex2D(lightSampler, coord + float2(0, f)).x > 0.0007) c2 = 1; if (receiverDistance - tex2D(lightSampler, coord + float2(f, f)).x > 0.0007) c3 = 1; coord = (coord % f) / f; return 1 - (c0 * (1 - coord.x) * (1 - coord.y) + c1 * coord.x * (1 - coord.y) + c2 * (1 - coord.x) * coord.y + c3 * coord.x * coord.y); This is a very basic implementation. blurFactor is initialized with 1 / LightTextureSize. So the if statements fetch the occlusion values for the four adjacent texels. I now want to weight each value based on the actual position of the texture coordinate. If it's near the bottom-right pixel, that occlusion value should be preferred. The weighting itself is done with a simple bilinear interpolation function, however this function takes a 2d vector in the range [0..1] so I have to convert my texture coordinate to get the distance from my first pixel to the second one in range [0..1]. For that I used the mod operator to get it into [0..f] range and then divided by f. This code makes sense to me, and for specific blurFactors it works, producing really smooth one pixel wide shadows, but not for all blurFactors. Initially blurFactor is (1 / LightTextureSize) to sample the 4 adjacent texels. I now want to increase the blurFactor by factor x to get a smooth interpolation across maybe 4 or so pixels. But that is when weird artifacts show up. Here is an image: Using a 1x on blurFactor produces a good result, 0.5 is as expected not so smooth. 2x however doesn't work at all. I found that only a factor of 1/2^n produces an good result, every other factor produces artifacts. I'm pretty sure the error lies here: coord = (coord % f) / f; Maybe the modulo is not calculated correctly? I have no idea how to fix that. Is it even possible for pixel that are further than 1 pixel away?

    Read the article

  • "Accumulate" buffer results in XNA4?

    - by Utkarsh Sinha
    I'm trying to simulate a "heightmap" buffer in XNA4.0 but the results don't look correct. Here's what I'm hoping to achieve: http://www.youtube.com/watch?feature=player_detailpage&v=-Q6ISVaM5Ww#t=517s (8:38). From what I understand, here are the steps to reach there: Pass height buffer + current entity's heightmap Generate a stencil and update the height buffer Render sprite+stencil For now, I'm just trying to get the height buffer thing to work. So here's the problem. Inside the draw loop, I do the following: Create a new render target & set it Draw the heightmap with a sprite batch(no shaders) graphicsDevice.SetRenderTarget(null) Draw the rendertarget with SpriteBatch I expected to see all entities' heightmaps. But only the last entity's heightmap is visible. Any hints on what I'm doing wrong? Here's the code inside the draw loop: RenderTarget2D tempDepthStencil = new RenderTarget2D(graphicsDevice, graphicsDevice.Viewport.Width, graphicsDevice.Viewport.Height, false, graphicsDevice.DisplayMode.Format, DepthFormat.None); graphicsDevice.SetRenderTarget(tempDepthStencil); // Gather depth information SpriteBatch depthStencilSpriteBatch = new SpriteBatch(graphicsDevice); depthStencilSpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullCounterClockwise); depthStencilSpriteBatch.Draw(texHeightmap, pos, null, Color.White, 0, Vector2.Zero, 1, spriteEffects, 1); depthStencilSpriteBatch.End(); graphicsDevice.SetRenderTarget(null); SpriteBatch b1 = new SpriteBatch(graphicsDevice); b1.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, null); b1.Draw((Texture2D)tempDepthStencil, Vector2.Zero, null, Color.White, 0, Vector2.Zero, 1, spriteEffects, 1); b1.End();

    Read the article

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
    I have a deferred rendering system that works well with objects that appear solid and drawn using CounterClockwise culling. I have a problem with Clockwise culled objects that are supposed to represent hollow that display their inside faces only. The image below shows a CounterClockwise culled object (left) Clockwise culled object (right). The Clockwise culled object faces display what would be displayed on the CounterClockwise face. How can I get the lighting to light the inner faces for Clockwise culled objects and continue lighting the outer CounterClockwise faces as normal? My lighting method is below private void DeferredLighting(GameTime gameTime) { // Set the render target for the lights game.GraphicsDevice.SetRenderTarget(lightMap); // Clear the render target to (0, 0, 0, 0) game.GraphicsDevice.Clear(Color.Transparent); // Set the render states game.GraphicsDevice.BlendState = BlendState.Additive; game.GraphicsDevice.DepthStencilState = DepthStencilState.None; game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; // Set sampler state to Point as the Surface type requires it in XNA 4.0 game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp; // Set the camera properties for all lights BaseLight.SetCameraProperties(game.ActiveCamera); // Draw the lights int numLights = lights.Count; for (int i = 0; i < numLights; ++i) { if (lights[i].Diffuse.W > 0f) { lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap); } } // Resolve the render target game.GraphicsDevice.SetRenderTarget(null); } I have tried adjusting the render states but no combination works for both objects.

    Read the article

  • How to efficiently render resizable GUI elements in DirectX?

    - by PolGraphic
    I wonder what the most efficient way to render GUI elements would be. For constant-size elements (which can still move), a texture atlas seems good. But what about resizable elements, say a panel with textured borders? Is there any better way than just rendering 9 rectangles with textures on them (I guess one texture, with different texture coordinates for the top-left corner, borders, middle, etc., used in the shader)?
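
    For what it's worth, that 9-rectangle scheme is common enough that many frameworks ship it as a "nine patch"; a short libgdx-flavoured sketch (the framework and the 8-pixel border widths are assumptions) to illustrate the idea of corners kept at native size, edges stretched along one axis and the centre stretched along both:

      import com.badlogic.gdx.graphics.Texture;
      import com.badlogic.gdx.graphics.g2d.NinePatch;
      import com.badlogic.gdx.graphics.g2d.SpriteBatch;

      public class PanelExample {
          private final NinePatch panel =
                  new NinePatch(new Texture("panel.png"), 8, 8, 8, 8); // asset and borders assumed
          private final SpriteBatch batch = new SpriteBatch();

          public void render(float x, float y, float width, float height) {
              batch.begin();
              panel.draw(batch, x, y, width, height); // corners keep their size, edges/centre stretch
              batch.end();
          }
      }

    In DirectX the same idea would be one texture and nine quads with adjusted UVs, filled into a single dynamic vertex buffer so the panel still costs one draw call.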

    Read the article
