Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • How to make my simple round sprite look right in XNA

    - by Joshua Perina
    OK, I'm very new to graphics programming (but not new to coding). I'm trying to load a simple image in XNA, which I can do fine. It is a simple round circle which I made in Photoshop. The problem is that the edges show up rough when I draw it on the screen, even at the exact source size; the anti-aliasing is missing. I'm sure I'm missing something very simple:

        GraphicsDevice.Clear(Color.Black);

        // TODO: Add your drawing code here
        spriteBatch.Begin();
        spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White);
        spriteBatch.End();

    Couldn't post picture because I'm a first-time poster, but my smooth PNG circle has rough edges. I found that if I changed the Begin call to:

        spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.NonPremultiplied);

    I get a smooth image when it is drawn at the same size as the original PNG. But if I scale that image up or down, the rough edges return. How do I get XNA to smoothly resize my simple round image to a smaller size without getting the rough edges?
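
    A note on the likely cause: XNA 4's content pipeline premultiplies alpha by default, so drawing with a blend state that doesn't match the texture's alpha convention produces ragged fringes exactly when the sprite is scaled and linear filtering starts mixing edge texels. The same two knobs exist in any GL stack; here is a minimal sketch in Java/LWJGL (an assumption on my part, since the question is XNA, where the equivalents are the BlendState and SamplerState arguments to SpriteBatch.Begin):

        import static org.lwjgl.opengl.GL11.*;

        public final class SmoothSpriteSetup {
            /** Configure the currently bound texture for smooth scaling. */
            public static void configureFiltering() {
                // Linear min/mag filtering interpolates texels whenever the
                // sprite is drawn at a size other than the source image.
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            }

            /** Pick the blend equation matching how the texture was authored. */
            public static void configureBlending(boolean premultipliedAlpha) {
                glEnable(GL_BLEND);
                if (premultipliedAlpha) {
                    // RGB was already multiplied by alpha at build time.
                    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
                } else {
                    // Straight alpha, the analogue of BlendState.NonPremultiplied.
                    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                }
            }
        }

    Matching the blend mode to the texture's alpha convention is what keeps the edge smooth at every scale.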

    Read the article

  • Complete Guide/Tutorials on LWJGL?

    - by user43353
    Don't get me wrong, I finished the tutorials on http://lwjgl.org/wiki/index.php?title=Main_Page. I finished The Basics section and the OpenGL 3.2 and newer section, and I looked at the Example Code section. They were great tutorials, and I have looked at the external tutorials as well. I don't know where to go from here, and OpenGL is not my strong point. Someone suggested Learning Modern 3D Graphics Programming, but I didn't learn much. I looked at the port to LWJGL, but the book was in C and I couldn't really understand what the OpenGL meant. I am trying to learn 2D gaming, not 3D (maybe later). Are there any tutorials that aren't C/C++ heavy and teach 2D OpenGL?

    Read the article

  • backface culling error (in world space)

    - by acrilige
    I'm writing a simple software renderer. My pipeline has a backface culling stage, but it looks like it has an error (see picture). I perform culling right after the world transformation; is that correct? (I can't insert the picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;
        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }
        for (Triangle &t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (right-handed). What is the reason? For a better explanation I uploaded a picture with cube rotation screenshots: http://imageshack.us/photo/my-images/842/bcmove.png/

    UPDATED: The error occurs only when the triangle has a non-zero offset from the origin.

    UPDATED 2: If I do backface culling in clip space (after transforming all vertices with the view and projection matrices) and just check the z coordinate of the triangle normal, it works perfectly... Can I perform culling right before the view/projection transforms? In that case the culling would not depend on the projection, and that doesn't seem right?..

    UPDATED 3: I found the answer and will post it in two hours; again, because of the reputation limit.
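
    A note consistent with the question's updates: in world or view space, a backface test against a constant view direction is only valid for an orthographic projection (or after the perspective divide, which is why the clip-space variant worked). Under perspective, the test has to use the direction from the camera to the triangle itself. A minimal sketch in plain Java (types are illustrative stand-ins for the question's Vector3F/Triangle):

        final class Vec3 {
            final float x, y, z;
            Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
            Vec3 sub(Vec3 o)  { return new Vec3(x - o.x, y - o.y, z - o.z); }
            float dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
            Vec3 cross(Vec3 o) {
                return new Vec3(y * o.z - z * o.y,
                                z * o.x - x * o.z,
                                x * o.y - y * o.x);
            }
        }

        final class BackfaceCulling {
            /** True if a counter-clockwise triangle faces away from the camera. */
            static boolean isBackFacing(Vec3 v1, Vec3 v2, Vec3 v3, Vec3 cameraPos) {
                Vec3 normal = v2.sub(v1).cross(v3.sub(v1));
                // The key fix: compare against the camera-to-triangle vector,
                // not a fixed view direction. Normalizing is unnecessary here,
                // since only the sign of the dot product matters.
                Vec3 toTriangle = v1.sub(cameraPos);
                return normal.dot(toTriangle) >= 0;
            }
        }

    With the camera at the origin, the fixed-direction test and this one agree only for triangles near the line of sight, which matches the observation that the error appears with non-zero offsets.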

    Read the article

  • Is it ok to initialize an RB_ConstraintActor in PostBeginPlay?

    - by Almo
    I have a KActorSpawnable subclass that acts weird. In PostBeginPlay, I initialize an RB_ConstraintActor; the default is not to allow rotation. If I create one in the editor, it's fine, and won't rotate. If I spawn one, it rotates. Here's the class:

        class QuadForceKActor extends KActorSpawnable
            placeable;

        var(Behavior) bool bConstrainRotation;
        var(Behavior) bool bConstrainX;
        var(Behavior) bool bConstrainY;
        var(Behavior) bool bConstrainZ;
        var RB_ConstraintActor PhysicsConstraintActor;

        simulated event PostBeginPlay()
        {
            Super.PostBeginPlay();

            PhysicsConstraintActor = Spawn(class'RB_ConstraintActorSpawnable', self, '', Location, rot(0, 0, 0));

            if (bConstrainRotation)
            {
                PhysicsConstraintActor.ConstraintSetup.bSwingLimited = true;
                PhysicsConstraintActor.ConstraintSetup.bTwistLimited = true;
            }

            SetLinearConstraints(bConstrainX, bConstrainY, bConstrainZ);

            PhysicsConstraintActor.InitConstraint(self, None);
        }

        function SetLinearConstraints(bool InConstrainX, bool InConstrainY, bool InConstrainZ)
        {
            if (InConstrainX) { PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 1; }
            else              { PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 0; }

            if (InConstrainY) { PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 1; }
            else              { PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 0; }

            if (InConstrainZ) { PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 1; }
            else              { PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 0; }
        }

        DefaultProperties
        {
            bConstrainRotation=true
            bConstrainX=false
            bConstrainY=false
            bConstrainZ=false
            bSafeBaseIfAsleep=false
            bNoEncroachCheck=false
        }

    Here's the code I use to spawn one. It's a subclass of the one above, but it doesn't reference the constraint at all:

        local QuadForceKCreateBlock BlockActor;

        BlockActor = spawn(class'QuadForceKCreateBlock', none, 'PowerCreate_Block', BlockLocation(), m_PreparedRotation, , false);
        BlockActor.SetDuration(m_BlockDuration);
        BlockActor.StaticMeshComponent.SetNotifyRigidBodyCollision(true);
        BlockActor.StaticMeshComponent.ScriptRigidBodyCollisionThreshold = 0.001;
        BlockActor.StaticMeshComponent.SetStaticMesh(m_ValidCreationBlock.StaticMesh);
        BlockActor.StaticMeshComponent.AddImpulse(m_InitialVelocity);

    I used to initialize an RB_ConstraintActor from the outside, where I spawned it. This worked, which is why I'm pretty sure it has nothing to do with the other code in QuadForceKCreateBlock. I then added the internal constraint in QuadForceKActor for other purposes. When I realized I had two constraints on the CreateBlock doing the same thing, I removed the constraint code from the place where I spawn it. Then it started rotating. Is there a reason I should not be initializing an RB_ConstraintActor in PostBeginPlay? I feel like there's some basic thing about how the engine works that I'm missing.

    Read the article

  • Slick 2d scrolling off screen

    - by Peter
    I have something scrolling in and out of the screen. When it goes off screen, I want it to scroll into the screen at another location. What I do is grab the last pixels at the screen's edge using g.copyArea and then g.drawImage them at the edge of the screen, and then I do a g.translate to create room for the next row on the next render cycle. My problem is that I get a single pixel row which is not copied onto the canvas, whereas I want each row to be added and then translated, so that the image that scrolled off screen is recreated on the other side of the screen. Here is my code; maybe there is a better way of doing this. I'm open to any suggestions, because I'm totally stuck:

        @Override
        public void render(GameContainer gc, Graphics g) throws SlickException {
            //g.setClip(0, 0, 300, gc.getHeight());
            g.translate(0, y);
            g.drawImage(image, 0, 200);
            g.resetTransform();
            //g.clearClip();
            g.copyArea(rightImage, 0, gc.getHeight() - 1);
            g.drawImage(rightImage, 300, 0);
            g.translate(0, y);
            y = y + 3;
        }
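
    One simpler scheme that avoids accumulating rows with copyArea altogether: draw the image twice per frame, offset by the wrapped scroll value, so whatever leaves one edge reappears at the other. A sketch against Slick2D's API (the class and asset names are mine, not from the question):

        import org.newdawn.slick.*;

        public class WrapScroll extends BasicGame {
            private Image image;
            private float y;

            public WrapScroll() { super("wrap-around scrolling sketch"); }

            @Override
            public void init(GameContainer gc) throws SlickException {
                image = new Image("background.png"); // hypothetical asset
            }

            @Override
            public void update(GameContainer gc, int delta) {
                y += delta * 0.1f; // scroll speed in pixels per millisecond
            }

            @Override
            public void render(GameContainer gc, Graphics g) {
                float h = image.getHeight();
                float offset = y % h;              // wrap into [0, h)
                g.drawImage(image, 0, offset);     // main copy, sliding down
                g.drawImage(image, 0, offset - h); // second copy fills the gap
            }
        }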

    Read the article

  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geometry clipmaps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph, and more precisely the bolded part:

        The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering.

    Let's take an example image from the article. My "understanding" of the way the clip maps were updated was that you floor the position of the viewpoint to an int and thus get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case. What I am failing to understand is this: if the viewpoint were to move one unit to the right, then the inner ring (the one just around the viewpoint plus the white center square) would end up with a one-unit gap on both its left and right sides. But there is nothing in the paper that deals with this; what I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image). This is obviously not a valid state of the clip map. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004 and I just can't see what I am not getting.
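
    For what it's worth, one common reading of the scheme (mine, not verbatim from the paper): each level only ever moves in steps of two of its own grid units, i.e. one unit of the next-coarser grid, so a finer ring can shift while its parent stays fixed, and with n = 2^k - 1 it always sits one unit off-center inside its parent rather than opening a gap on both sides; the leftover strip is absorbed by the L-shaped interior trim, whose orientation flips as the viewpoint moves. A tiny sketch of that snapping rule in Java:

        final class ClipmapSnap {
            /** Origin of clip level `level` (finest = 0, grid spacing 2^level),
             *  snapped so the level moves only in steps of two of its own grid
             *  units. The next-coarser level snaps to twice this step, which is
             *  why a fine level can shift while its parent stays put. */
            static double levelOrigin(double viewX, int level) {
                double step = 2.0 * (1 << level);
                return Math.floor(viewX / step) * step;
            }
        }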

    Read the article

  • How to implement explosion in OpenGL?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion, so could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z). I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing a particle (glutSolidCube) that moves along each vector for some period of time, say 1000 updates, after which it disappears. Is this approach feasible? A minimal example would be greatly appreciated.
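
    The proposed approach is feasible; it is essentially a minimal particle system. A sketch of the CPU-side bookkeeping in Java (the question is C/OpenGL, so treat this as pseudocode for the bookkeeping; rendering each particle, e.g. the glutSolidCube call, happens wherever the draw loop lives):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        final class Explosion {
            static final class Particle {
                float x, y, z;    // current position
                float vx, vy, vz; // velocity, fixed at spawn time
                int life;         // updates remaining before it disappears
            }

            final List<Particle> particles = new ArrayList<>();
            final Random rng = new Random();

            /** Spawn `count` particles at the blast origin (ox, oy, oz). */
            void ignite(float ox, float oy, float oz, int count) {
                for (int i = 0; i < count; i++) {
                    Particle p = new Particle();
                    p.x = ox; p.y = oy; p.z = oz;
                    // Uniform random direction via spherical coordinates.
                    double theta = rng.nextDouble() * 2 * Math.PI;
                    double phi = Math.acos(2 * rng.nextDouble() - 1);
                    float speed = 0.5f + rng.nextFloat();
                    p.vx = (float) (Math.sin(phi) * Math.cos(theta) * speed);
                    p.vy = (float) (Math.sin(phi) * Math.sin(theta) * speed);
                    p.vz = (float) (Math.cos(phi) * speed);
                    p.life = 1000; // dies after 1000 updates, as proposed
                    particles.add(p);
                }
            }

            /** One simulation step; draw the survivors afterwards. */
            void update(float dt) {
                particles.removeIf(p -> --p.life <= 0);
                for (Particle p : particles) {
                    p.x += p.vx * dt;
                    p.y += p.vy * dt;
                    p.z += p.vz * dt;
                }
            }
        }

    Adding a small downward acceleration to vy each step gives the debris a more convincing arc.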

    Read the article

  • Sprite Animation in Android with OpenGL ES

    - by lijo john
    How do I do sprite animation in Android using OpenGL ES?

    What I have done: I am able to draw a rectangle and apply my texture (the sprite sheet) to it.

    What I need to know: the rectangle currently shows the whole sprite sheet at once. How do I show a single frame from the sprite sheet at a time and make the animation?

    It would be very helpful if anyone could share ideas, links to tutorials, or suggestions. Thanks in advance.
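
    The usual technique is to keep the whole sheet bound as one texture and change only the quad's UV coordinates each frame, so each animation frame maps a different cell of the sheet onto the rectangle. A sketch of the bookkeeping in Java, assuming an evenly spaced sheet laid out row-major (names are mine):

        final class SpriteSheetAnimation {
            /** UV rectangle {u0, v0, u1, v1} for cell `index` of a
             *  cols x rows sheet. Feed these into the texture-coordinate
             *  buffer instead of the full 0..1 range. */
            static float[] frameUV(int index, int cols, int rows) {
                int col = index % cols;
                int row = index / cols;
                float w = 1f / cols, h = 1f / rows;
                return new float[] { col * w, row * h, col * w + w, row * h + h };
            }

            /** Which cell to show, given elapsed time and a frame duration. */
            static int currentFrame(long elapsedMs, int frameCount, int msPerFrame) {
                return (int) (elapsedMs / msPerFrame) % frameCount;
            }
        }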

    Read the article

  • Basics of drawing in 2d with OpenGL 3 shaders

    - by davidism
    I am new to OpenGL 3 and graphics programming, and want to create some basic 2d graphics. I have the following scenario of how I might go about drawing a basic (but general) 2d rectangle. I'm not sure if this is the correct way to think about it, or, if it is, how to implement it. In my head, here's how I imagine doing it:

    1. t = make_rectangle(width, height): build a general VBO, centered at 0, 0
    2. optionally: t.set_scale(2)
    3. optionally: t.set_angle(30)
    4. t.draw_at(x, y): calculates some sort of scale/rotate/translate matrix (or matrices), passes the VBO and the matrix to a shader program
    5. Something happens to clip the world to the view visible on screen.

    I'm really unclear on how 4 and 5 will work. The main problem is that all the tutorials I find either use the fixed-function pipeline, are for 3d, or are unclear on how to do something this "simple". Can someone provide me with either a better way to think of / do this, or some concrete code detailing how to perform the transformations in a shader and how to construct and pass the data this shader transformation requires?
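
    On steps 4 and 5: draw_at boils down to building a model matrix (translate * rotate * scale) on the CPU, and the "clipping to the view" is just an orthographic projection matrix; the vertex shader then computes gl_Position = projection * model * vec4(position, 0.0, 1.0). A sketch of both matrices in Java, column-major so they can be uploaded directly with glUniformMatrix4fv (names are mine):

        final class Transform2D {
            /** Model matrix: translate(x, y) * rotate(angle) * scale(sx, sy). */
            static float[] model(float x, float y, float angleRad, float sx, float sy) {
                float c = (float) Math.cos(angleRad), s = (float) Math.sin(angleRad);
                return new float[] {
                     c * sx,  s * sx, 0, 0,  // column 0
                    -s * sy,  c * sy, 0, 0,  // column 1
                     0,       0,      1, 0,  // column 2
                     x,       y,      0, 1   // column 3: translation
                };
            }

            /** Orthographic projection mapping (0,0)..(w,h) onto clip space;
             *  this is what restricts drawing to the on-screen view (step 5). */
            static float[] ortho(float w, float h) {
                return new float[] {
                    2 / w, 0,      0, 0,
                    0,     2 / h,  0, 0,
                    0,     0,     -1, 0,
                   -1,    -1,      0, 1
                };
            }
        }

    The general unit-rectangle VBO from step 1 never changes; scale, angle, and position live entirely in the matrix.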

    Read the article

  • Interpolating between two networked states?

    - by Vaughan Hilts
    I have many entities on the client side that are simulated (their velocities are added to their positions on a per-frame basis) and I let them dead-reckon themselves. They send updates about where they were last seen and about their velocity changes. This works great, and other players see the movement fine. However, these players begin to desync after some time, because of latency. I'd like to know how I can interpolate between states so they appear to be in the correct position. I know where the player was last seen and their current velocity, but interpolating toward the last seen state causes the player to actually move backwards. I could ignore velocity entirely for other clients and simply lerp them towards the appropriate position, but I feel this would cause jaggy movement. What are the alternatives?
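
    One common alternative is projective blending: when an update arrives, project it forward by the measured latency, keep dead reckoning with the newest velocity, and fold the positional error in over a short window rather than lerping straight to the stale state (which is what causes the backwards motion). A one-dimensional sketch in Java, with names of my own choosing:

        final class NetEntity {
            float x, vx;              // state currently shown on screen
            float targetX, targetVx;  // latest authoritative state, projected
            float blend;              // seconds left in the correction window

            /** Called when a state update arrives from the network. */
            void onUpdate(float seenX, float seenVx, float latencySeconds) {
                targetX = seenX + seenVx * latencySeconds; // dead-reckon to "now"
                targetVx = seenVx;
                blend = 0.1f; // spread the correction over 100 ms
            }

            /** Called every frame. */
            void tick(float dt) {
                targetX += targetVx * dt; // keep the target moving too
                x += vx * dt;
                vx = targetVx;
                if (blend > 0) {
                    // Fold in a proportional share of the remaining error, so
                    // the entity curves toward the truth without visibly
                    // snapping or reversing.
                    x += (targetX - x) * Math.min(dt / blend, 1f);
                    blend -= dt;
                } else {
                    x = targetX; // converged; follow the projection exactly
                }
            }
        }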

    Read the article

  • XNA frame rate spikes in full screen mode

    - by ProgrammerAtWork
    I'm loading a simple texture and rotating it in XNA, and this works. But when I run it in full-screen 1920x1080 mode, I see spikes while my texture is rotating. If I run it windowed at 1920x1080 resolution, I don't get the spikes. The size of the texture does not seem to matter; I tried a 512 texture size and a 2048 texture size and the same thing happens: spikes in full screen, no spikes in windowed mode. The resolution does not seem to matter, and Debug or Release does not seem to change anything either. Anyone have ideas on what could be the problem?

    Edit: I think this problem has something to do with the vertical retrace. Set this property:

        _graphicsDeviceManager.SynchronizeWithVerticalRetrace = false;

    You'll lose vsync, but it will not stutter.

    Read the article

  • How to import or "using" a custom class in Unity script?

    - by Bobbake4
    I have downloaded the JSONObject plugin for parsing JSON in Unity, but when I use it in a script I get an error indicating that JSONObject cannot be found. My question is: how do I use a custom object class defined inside another class? I know I need a using directive to solve this, but I am not sure of the path to these custom objects I have imported. They are in the root project folder, inside a JSONObject folder, and the class is called JSONObject. Thanks.

    Read the article

  • What is causing these visual artifacts on my OpenGL sprites?

    - by Amplify91
    What could be the cause of the defects in my character's sprite? I am using OpenGL ES 2.0. I draw my sprites in a sprite batch that uses UV coordinates from one large texture atlas. If you look around the character's edges, you'll see two noticeable problems:

    1. The invisible alpha background is not invisible, but shows a strange static-like background.
    2. There are unwanted streaks where the character nears the edge of the frame (but only in some frames of the animation; this happened to be one of them).

    Any idea what could be causing these? I will provide related code if asked for, but I'll try to avoid just dumping the entire project and expecting someone to look through it all.

    EDIT: Here's a bit of code. This is how I generate my UV coordinates:

        private float[] createFrameUV(int frameWidth, int frameHeight, int x, int y) {
            float[] uv = new float[4];
            if (numberOfFrames > 1) {
                float width = (float) frameWidth / (float) mBitmap.getWidth();
                float height = (float) frameHeight / (float) mBitmap.getHeight();
                float u = (float) x / (float) mBitmap.getWidth();
                float v = (float) y / (float) mBitmap.getHeight();
                uv[0] = u;
                uv[1] = v;
                uv[2] = u + width;
                uv[3] = v + height;
            } else {
                uv[0] = 0f;
                uv[1] = 0f;
                uv[2] = 1f;
                uv[3] = 1f;
            }
            return uv;
        }

    These are some OpenGL settings:

        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
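
    Two causes commonly produce exactly these symptoms with atlases: the "static" in transparent regions appears when the RGB under zero-alpha texels is garbage and linear filtering or a mismatched blend mode lets it show through (premultiplying alpha, or flooding the transparent texels' RGB with the neighbouring edge color, usually fixes it), and the streaks at frame edges are GL_LINEAR bleeding in texels from the adjacent frame (padding between frames plus a half-texel UV inset usually fixes that). A sketch of the inset variant of the UV function above, in the same Java:

        /** Like createFrameUV, but inset by half a texel on every edge so
         *  GL_LINEAR never samples across the boundary into a neighbour frame. */
        static float[] frameUVInset(int frameW, int frameH, int x, int y,
                                    int atlasW, int atlasH) {
            float halfU = 0.5f / atlasW;
            float halfV = 0.5f / atlasH;
            return new float[] {
                (float) x / atlasW + halfU,            // u0
                (float) y / atlasH + halfV,            // v0
                (float) (x + frameW) / atlasW - halfU, // u1
                (float) (y + frameH) / atlasH - halfV  // v1
            };
        }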

    Read the article

  • Stop a rotating object at a specified angle?

    - by Krummelz
    I'm working in JavaScript with HTML5 and the canvas. I have an object which is rotating at a certain speed, and I need the object's rotation to slow down gradually and the front of the object to stop at a specified angle. (I'm using radians, not degrees.) I have a variable to keep track of the angle which the object is facing, as it rotates. How would I go about getting the object to come to rest, facing the direction I want it to?
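
    The constant-deceleration relation v^2 = 2*a*d gives a clean way to do this: keep spinning at full speed until the remaining angle drops below the stopping distance omega^2 / (2*alpha), then pin the speed to sqrt(2*alpha*remaining) each frame so it reaches exactly zero at the target. A sketch in Java (the question is JavaScript, but the arithmetic carries over directly; rotation is assumed to be in the positive direction):

        final class SpinDown {
            double angle;             // current facing, radians
            double omega;             // angular speed, radians per second
            final double alpha = 2.0; // chosen deceleration, rad/s^2

            static final double TWO_PI = 2 * Math.PI;

            void tick(double targetAngle, double dt) {
                // Remaining arc to the target, wrapped into [0, 2*pi).
                double remaining = ((targetAngle - angle) % TWO_PI + TWO_PI) % TWO_PI;
                // Once inside braking range, ease the speed down along the
                // curve that reaches zero exactly at the target.
                if (omega * omega / (2 * alpha) >= remaining) {
                    omega = Math.sqrt(2 * alpha * remaining);
                }
                double step = omega * dt;
                if (step >= remaining) { // close enough to land this frame
                    angle = targetAngle;
                    omega = 0;
                } else {
                    angle = (angle + step) % TWO_PI;
                }
            }
        }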

    Read the article

  • Issue with a point's coordinates, which creates an unwanted triangle

    - by Paul
    I would like to connect the points from the red path to the y-axis in blue. I figured out that the problem with my triangles comes from the first point (V0): it is not located where it should be. In the console, it says its location is at 0,0, but in the emulator, it is not. The code:

        for (int i = 1; i < 2; i++) {
            CCLOG(@"_polyVertices[i-1].x : %f, _polyVertices[i-1].y : %f", _polyVertices[i-1].x, _polyVertices[i-1].y);
            CCLOG(@"_polyVertices[i].x : %f, _polyVertices[i].y : %f", _polyVertices[i].x, _polyVertices[i].y);
            ccDrawLine(_polyVertices[i-1], _polyVertices[i]);
        }

    The output:

        _polyVertices[i-1].x : 0.000000, _polyVertices[i-1].y : 0.000000
        _polyVertices[i].x : 50.000000, _polyVertices[i].y : 0.000000

    And the result (the layer goes up; I could not take the screenshot before the layer started to go up, but the first red point starts at y = 0): it creates an unwanted triangle when the code continues. Would you have any idea about this? (That is, how to force the first blue point to start at 0,0, and not at 50,0 as it seems to be now.) Here is the code:

        - (void)generatePath {
            float x = 50; // first red point
            float y = 0;
            for (int i = 0; i < kMaxKeyPoints + 1; i++) {
                if (i < 3) {
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    x = 150 + (random() % (int) 30);
                    y += -40;
                } else if (i < 20) { // going right
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    x += (random() % (int) 30);
                    y += -40;
                } else if (i < 25) { // stabilize
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    x = 150 + (random() % (int) 30);
                    y += -40;
                } else if (i < 30) { // going left
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    //x -= (random() % (int) 10);
                    x = 150 + (random() % (int) 30);
                    y += -40;
                } else { // back to normal
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    x = 150 + (random() % (int) 30);
                    y += -40;
                }
            }
        }

        - (void)generatePolygons {
            static int prevFromKeyPointI = -1;
            static int prevToKeyPointI = -1;

            // key points interval for drawing
            while (_hillKeyPoints[_fromKeyPointI].y > -_offsetY + winSizeTop) {
                _fromKeyPointI++;
            }
            while (_hillKeyPoints[_toKeyPointI].y > -_offsetY - winSizeBottom) {
                _toKeyPointI++;
            }

            if (prevFromKeyPointI != _fromKeyPointI || prevToKeyPointI != _toKeyPointI) {
                _nPolyVertices = 0;
                float x1 = 0;
                int keyPoints = _fromKeyPointI;
                for (int i = _fromKeyPointI; i < _toKeyPointI; i++) {
                    // V0: at (0,0)
                    _polyVertices[_nPolyVertices] = CGPointMake(x1, y1); // first blue point
                    _polyTexCoords[_nPolyVertices++] = CGPointMake(x1, y1);

                    // V1: to the first "point"
                    _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                    _polyTexCoords[_nPolyVertices++] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                    keyPoints++; // from point at index 0 to 1

                    // V2, same y as point n°2:
                    _polyVertices[_nPolyVertices] = CGPointMake(0, _hillKeyPoints[keyPoints].y);
                    _polyTexCoords[_nPolyVertices++] = CGPointMake(0, _hillKeyPoints[keyPoints].y);

                    // V1 again
                    _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices - 2];
                    _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices - 2];

                    // V2 again
                    _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices - 2];
                    _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices - 2];

                    // V3 = same x,y as point at index 1
                    _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                    _polyTexCoords[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                    y1 = _polyVertices[_nPolyVertices].y;
                    _nPolyVertices++;
                }
                prevFromKeyPointI = _fromKeyPointI;
                prevToKeyPointI = _toKeyPointI;
            }
        }

        - (void)draw {
            // RED
            glColor4f(1, 1, 1, 1);
            for (int i = MAX(_fromKeyPointI, 1); i <= _toKeyPointI; ++i) {
                glColor4f(1.0, 0, 0, 1.0);
                ccDrawLine(_hillKeyPoints[i - 1], _hillKeyPoints[i]);
            }

            // BLUE
            glColor4f(0, 0, 1, 1);
            for (int i = 1; i < 2; i++) {
                CCLOG(@"_polyVertices[i-1].x : %f, _polyVertices[i-1].y : %f", _polyVertices[i-1].x, _polyVertices[i-1].y);
                CCLOG(@"_polyVertices[i].x : %f, _polyVertices[i].y : %f", _polyVertices[i].x, _polyVertices[i].y);
                ccDrawLine(_polyVertices[i - 1], _polyVertices[i]);
            }
        }

    Thanks

    Read the article

  • Simplest way to render image over top of another with another image used as mask in OpenGL?

    - by Adam Naylor
    The effect I'm looking for is to have a single large background image that is always visible (at full alpha) and then show a second image (what I call a light map or specular map) that is partially shown over the top, based on a third image (which is effectively a mask). The effect is similar to darkening or lightening the background image using a mask, except that the third image needs to mask the second without affecting the first at all. The third image is the only one that moves, therefore hard-baking the third image's alpha into the second image isn't an option. If my explanation isn't clear I'll provide visual examples when I have more time. I'd prefer not to go down a shader route as I haven't taught myself this area yet, so unless I have to I'd rather try to achieve this with simple alpha blending. (Edit: happy to use a shader approach.) Cheers.

    Additional: these third images are obviously light sources being cast onto the first image, showing the specular information from the second image to simulate light "shining" off the objects in the first image. The solution I implement will need to allow two light sources to potentially overlap, so my current thought is that the alpha values of the two light images will need to be combined (added?) to produce a final image which masks the second image. Don't worry about things like coloured lights; for this technique the lights are all considered white.
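
    For the overlap requirement, one shader-free approach is destination-alpha masking, assuming the framebuffer is created with alpha bits and cleared with alpha 0: draw the background to RGB only, additively accumulate every light's mask into the destination alpha channel (overlapping lights then sum, as hoped), and finally draw the specular map with a blend that scales it by that stored alpha. A sketch of the pass order in Java/LWJGL (the draw* methods are placeholders for however the quads get submitted):

        import static org.lwjgl.opengl.GL11.*;

        final class LightMaskPasses {
            static void render() {
                // 1) Background at full opacity; keep its alpha out of the
                //    framebuffer so destination alpha stays 0 for now.
                glDisable(GL_BLEND);
                glColorMask(true, true, true, false);
                drawBackground();

                // 2) Accumulate the light masks into destination alpha only.
                //    Additive blending makes two overlapping lights sum.
                glColorMask(false, false, false, true);
                glEnable(GL_BLEND);
                glBlendFunc(GL_ONE, GL_ONE);
                drawLightMasks();
                glColorMask(true, true, true, true);

                // 3) Specular map, scaled per pixel by the stored alpha:
                //    result = dstAlpha * specular + background.
                glBlendFunc(GL_DST_ALPHA, GL_ONE);
                drawSpecularMap();
            }

            static void drawBackground()  { /* full-screen background quad */ }
            static void drawLightMasks()  { /* one quad per moving light */ }
            static void drawSpecularMap() { /* full-screen specular quad */ }
        }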

    Read the article

  • Importing 3d model with multiple skeletons

    - by Sweta Dwivedi
    I have created an animated butterfly in 3ds Max and tried to export it in ".fbx" format to use in XNA. However, as soon as I compile, I get the following errors:

        Warning 1: Multiple skeletons were found in the file. The first skeleton, named "Left.Wing" has been moved to be a child of the scene root. The other, "Right.Wing", will be ignored. Fragment identifier "Right.Wing".

        Error 2: Vertex is bound to bone "Right.Wing", but this bone is not present in the skeleton.

    Which is confusing, since I do have the bone Right.Wing, and I use it to animate the butterfly. I have seen a few possible solutions for Blender but none for 3ds Max. It would be really helpful if someone could help me out with this.

    Read the article

  • How to stop an animation model in XNA?

    - by Mehdi Bugnard
    I met a Difficulty for one stoper annimation. Everything works great starter for the animation. But I do not see how stoper and can continue the annimation paused. The "animationPlayer.StartClip (clip)" is used to choke the annimation but impossible to find a way to stoper Thans's a lot Here is my code to use. protected override void LoadContent() { //Model - Player model_player = Content.Load<Model>("Models\\Player\\models"); // Look up our custom skinning information. SkinningData skinningData = model_player.Tag as SkinningData; if (skinningData == null) throw new InvalidOperationException ("This model does not contain a SkinningData tag."); // Create an animation player, and start decoding an animation clip. animationPlayer = new AnimationPlayer(skinningData); AnimationClip clip = skinningData.AnimationClips["ArmLowAction_006"]; animationPlayer.StartClip(clip); } protected overide update(GameTime gameTime) { KeyboardState key = Keyboard.GetState(); // If player don't move -> stop anim if (!key.IsKeyDown(Keys.W) && !keyStateOld.IsKeyUp(Keys.S) && !keyStateOld.IsKeyUp(Keys.A) && !keyStateOld.IsKeyUp(Keys.D)) { //animation stop ? not exist ? animationPlayer.Stop(); isPlayerStop = true; } else { if(isPlayerStop == true) { isPlayerStop = false; animationPlayer.StartClip(Clip); } }

    Read the article

  • Proportional speed movement between mouse and cube

    - by user1350772
    Hi, I'm trying to move a cube using freeglut's mouse motion callback, "glutMotionFunc(processMouseActiveMotion)". My problem is that the movement is not proportional to the mouse movement speed; the cube moves by a fixed step regardless of how fast the mouse moves. The MouseButton function:

        #define MOVE_STEP 0.04

        float g_x = 0.0f;

        glutMouseFunc(MouseButton);
        glutMotionFunc(processMouseActiveMotion);

        void MouseButton(int button, int state, int x, int y) {
            if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
                initial_x = x;
            }
        }

    When the left button gets clicked, the x coordinate is stored in the initial_x variable.

        void processMouseActiveMotion(int x, int y) {
            if (x > initial_x) {
                g_x -= MOVE_STEP;
            } else {
                g_x += MOVE_STEP;
            }
            initial_x = x;
        }

    When I move the mouse, I check which way it moved by comparing the mouse's new x coordinate with the initial_x variable: if x > initial_x the cube moves to the right, if not it moves to the left. Any idea how I can move the cube according to the mouse movement speed? Thanks.

    EDIT 1: The idea is that when you click on any point of the screen and drag to the left/right, the cube moves proportionally to the mouse movement speed.
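
    The fix is to move by the amount the mouse moved rather than a fixed MOVE_STEP per callback: the motion callback fires roughly per pixel of travel, so scaling by the delta (x - initial_x) makes fast drags move the cube further per event. A sketch of the idea in Java (freeglut is C, so treat this as pseudocode for the two callbacks; the sensitivity constant is a tuning value of mine):

        final class DragController {
            float cubeX;        // cube position in world units (plays the role of g_x)
            int lastMouseX;     // plays the role of initial_x
            static final float UNITS_PER_PIXEL = 0.01f; // sensitivity

            /** Body of MouseButton for a left-button press. */
            void onMouseDown(int mouseX) {
                lastMouseX = mouseX;
            }

            /** Body of processMouseActiveMotion: scale by the signed
             *  delta instead of adding a constant step per event. */
            void onMouseDrag(int mouseX) {
                int dx = mouseX - lastMouseX;
                cubeX += dx * UNITS_PER_PIXEL;
                lastMouseX = mouseX;
            }
        }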

    Read the article

  • Directx and Open Libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries, more to get an idea of what exists than how they compare. For example, we have DirectX (graphics) and its open counterpart OpenGL, and DirectX (sound) and OpenAL. But there are other DirectX libraries that I can't find open alternatives to, such as DirectInput, DXGI, Direct2D, and DirectWrite. Does anyone have any lists or comparisons of DirectX components and their open counterparts?

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light pass to a texture which I will later apply to the scene, but I seem to calculate the light position wrong. I am working in view space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at position (0, 10, 0), and I transform it to view space first:

        Vector4 pos;
        Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader
        Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos);
        shader.SendUniform ("LightViewPosition", ref pos);

    Now, to me that does not look as it should. What I think it should look like is that the white area should be at the center of the scene. The camera is at the corner of the scene, and it seems as if the light moves along with the camera. Here's the fragment shader code:

        void main() {
            // default black color
            vec3 color = vec3(0);

            // Pixel coordinates on screen without depth
            vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize;

            // Get pixel position using depth from texture
            vec4 depthtexel = texture( DepthTexture, PixelCoordinates );
            float depthSample = unpack_depth(depthtexel);

            // Get pixel coordinates in camera space by multiplying the
            // coordinate in screen space by the inverse projection matrix
            vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0));

            // Undo the perspective calculations
            vec3 pixelPosition = (world.xyz / world.w) * 3;

            // How far the light should reach from its point of origin
            float lightReach = LightColor.a / 2;

            // Vector between light and pixel
            vec3 lightDir = (LightViewPosition.xyz - pixelPosition);
            float lightDistance = length(lightDir);
            vec3 lightDirN = normalize(lightDir);

            // Discard pixels too far from the light source
            //if(lightReach < lightDistance) discard;

            // Get normal from texture
            vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1);

            // Half vector between the light direction and eye, used for the specular component
            vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition));

            // Dot product of normal and light direction
            float NdotL = dot(normal, lightDirN);
            float attenuation = pow(lightReach / lightDistance, LightFalloff);

            // If the pixel is lit by the light
            if(NdotL > 0) {
                // I have moved stuff from here to above so I can debug them.

                // Diffuse light color
                color += LightColor.rgb * NdotL * attenuation;

                // Specular light color
                color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation;
            }

            RT0 = vec4(color, 1);
            //RT0 = vec4(pixelPosition, 1);
            //RT0 = vec4(depthSample, depthSample, depthSample, 1);
            //RT0 = vec4(NdotL, NdotL, NdotL, 1);
            RT0 = vec4(attenuation, attenuation, attenuation, 1);
            //RT0 = vec4(lightReach, lightReach, lightReach, 1);
            //RT0 = depthtexel;
            //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1);
            //RT0 = vec4(lightDirN, 1);
            //RT0 = vec4(halfVector, 1);
            //RT0 = vec4(LightColor.xyz,1);
            //RT0 = vec4(LightViewPosition.xyz/100, 1);
            //RT0 = vec4(LightPosition.xyz, 1);
            //RT0 = vec4(normal,1);
        }

    What am I doing wrong here?

    Read the article

  • Set a drawing viewport while using camera

    - by Mariano
    I'm working with XNA. I already have a basic world made of tiles and a camera using a transform matrix. I have a character moving around and the camera follows. What I want to do now is draw the map only on a certain part of the screen, as shown in the figure below. This way I can move the map to the left of the screen and have the other fixed parts shift to the right. Do I need to modify the camera matrix? Make a new viewport?
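
    The camera matrix can generally stay as it is; what changes is where the map is allowed to draw. In XNA specifically, the usual tools are assigning a smaller GraphicsDevice.Viewport before the map pass, or enabling RasterizerState.ScissorTestEnable together with GraphicsDevice.ScissorRectangle. The same idea in raw OpenGL terms, sketched in Java/LWJGL purely as an illustration:

        import static org.lwjgl.opengl.GL11.*;

        final class MapPanel {
            /** Restrict all following draws to a panel on the left side.
             *  Scissor coordinates are in pixels from the bottom-left. */
            static void beginMapRegion(int panelWidth, int screenHeight) {
                glEnable(GL_SCISSOR_TEST);
                glScissor(0, 0, panelWidth, screenHeight);
            }

            /** Draw the fixed UI parts with the scissor off again. */
            static void endMapRegion() {
                glDisable(GL_SCISSOR_TEST);
            }
        }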

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. I'm creating a floating platform that I would like to travel from one designated point to another, then return to the first, passing between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform:

    1. It travels in multiples of whole tile values of world data; if the platform is not stationary, it travels at least one whole tile width or tile height.
    2. Within one tile length, it should accelerate from a stop to a given max speed.
    3. Upon coming within one tile length of its destination, it should slow to a stop exactly at the destination tile coordinate, and then repeat the process in reverse.

    The first two parts aren't too difficult; essentially, I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to the value storing the platform's current speed once it comes within one tile's length of the destination (assuming the platform travels more than one tile length, but to keep things simple, let's assume it does). But then the question is: what would the correct value for that deceleration be to produce this effect? How would I find that value? (A worked sketch follows below.)
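
    On the third part: with constant acceleration, speed and distance are tied by v^2 = 2*a*d, so the magnitude that takes the platform from rest to vMax in exactly one tile length d is a = vMax^2 / (2*d), and applying the same magnitude in reverse starting one tile length before the destination brings it to rest exactly there. A small Java sketch of that value, plus the end-of-travel snap that keeps floating-point drift from accumulating:

        final class PlatformMotion {
            /** Acceleration reaching vMax from rest over one tile length,
             *  from v^2 = 2*a*d; negate it for the braking phase. */
            static float accelerationFor(float vMax, float tileLength) {
                return (vMax * vMax) / (2 * tileLength);
            }

            /** Integrate one step; snap when the platform would overshoot. */
            static float step(float position, float velocity,
                              float target, float dt) {
                float next = position + velocity * dt;
                // Snap to the goal the moment we would pass it, so the stop
                // lands exactly on the tile coordinate despite rounding.
                if ((target - position) * (target - next) <= 0) {
                    return target;
                }
                return next;
            }
        }

    In practice, discrete integration means the platform rarely lands on the coordinate exactly by itself, so the snap (or re-deriving the speed from the remaining distance, as v = sqrt(2*a*d)) is what guarantees a clean stop.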

    Read the article

  • The practical cost of swapping effects

    - by sebf
    Hello, I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. I wonder if someone could explain exactly what is costly about this process, and put, if possible, "relatively" into context? For example, say I wanted to use a short shader to help with picking. I would:

    1. Change the effect on every object, calculating a unique color to identify it and providing it to the shader.
    2. Draw all the objects to a render target in memory.
    3. Get the color from the target and use it to look up the selected object.

    What portion of the total time taken to complete that process would be spent swapping the shaders? My instinct says that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
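
    As a concrete anchor for the picking example: the per-object bookkeeping is trivial, and what tends to dominate such a pass is the extra draw calls, the state changes between them, and above all the readback, since glReadPixels-style transfers stall the pipeline until the GPU catches up. A sketch of the id-to-color mapping in Java (names mine):

        final class ColorPicking {
            /** Encode an object id (up to 2^24 - 1) as a unique RGB color. */
            static float[] idToColor(int id) {
                return new float[] {
                    ((id >> 16) & 0xFF) / 255f,
                    ((id >>  8) & 0xFF) / 255f,
                    ( id        & 0xFF) / 255f
                };
            }

            /** Decode the 8-bit-per-channel pixel read back from the target. */
            static int colorToId(int r, int g, int b) {
                return (r << 16) | (g << 8) | b;
            }
        }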

    Read the article

  • Low complexity shader to indicate the sides of a polyline

    - by Pris
    I have a bunch of polylines that I draw using GL_LINES. They can have thousands of points. They actually represent the separation of land and water on a map; I don't have complete polygons, just the ordered set of points. I'm looking for a neat but efficient way to visually convey side A and side B as being different. For example, I could offset the polyline in one direction a few times and fade it out (but every offset doubles the number of points), or offset it once to make a "ribbon" and give one side a glow-like effect to mimic the outer glow or shadow of a polygon. This is for a mobile application and I'm using OpenGL ES 2. I'd like to keep the effect as simple as possible from a complexity standpoint. I'm looking for some additional ideas; maybe there's a clever shader technique out there or a visual effect I haven't considered.

    Read the article
