Search Results

Search found 28230 results on 1130 pages for 'embedded development'.


  • Light mask map and camera for static lights in XNA Platformer

    - by JiminyCricket
    Using the example for some basic light maps found here : http://blog.josack.com/2011/07/xna-2d-dynamic-lighting.html, I've managed to create a lightmap texture using individual lightmaps and display it over a 2D tiled world as in the Platformer example. I'm using the very basic 2D camera example as found here : http://www.david-amador.com/2009/10/xna-camera-2d-with-zoom-and-rotation/, and the problem is that the lightmap texture scrolls with the player sprite. This looks pretty good and would be excellent for lighting the player sprite as it moves. But, I also want to be able to place static lights (or some initial position for the lights) that do not move with the player or camera. When I turn off the camera or give it a static position, it works as a series of static lights so I believe it's probably caused by the camera transformation matrix following the player around. I'm using RenderTarget2Ds, one for the main game screen after all the backgrounds and tiles are rendered, and one for the "lightmap" which consists of a black background and a bunch of lighting textures which are merged with it using additive blending. For now, I'm doing all of this in PlatformerGame.cs where the camera transformation and position is set and the level.Draw() call is made. I can't figure out how to separate the drawing of the lightmap and the camera following the player. I was thinking it would be better to render the shadows and lighting directly in the drawing of the level itself, but I'm not sure how to do that either because this technique requires RenderTarget2Ds and calling SpriteBatch.Begin()/End().
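
    A minimal sketch of one way to get static, world-space lights out of this setup (names such as camera.GetTransformation(), lightTarget, lightMask, staticLights and multiplyBlendState are placeholders from the linked tutorials, not confirmed code): apply the same camera transform when drawing the light sprites into the lightmap render target as for the world pass, and composite the finished lightmap onto the backbuffer without any transform. Lights stored in world coordinates then stay put while the camera follows the player.

        // Build the lightmap in world space, using the same transform as the world pass.
        GraphicsDevice.SetRenderTarget(lightTarget);
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive,
                          null, null, null, null, camera.GetTransformation());
        foreach (var light in staticLights)
            spriteBatch.Draw(lightMask, light.WorldPosition, null, light.Color,
                             0f, lightOrigin, light.Scale, SpriteEffects.None, 0f);
        spriteBatch.End();

        // Composite the lightmap over the scene in screen space (no camera transform).
        GraphicsDevice.SetRenderTarget(null);
        spriteBatch.Begin(SpriteSortMode.Deferred, multiplyBlendState);   // multiply blend from the tutorial
        spriteBatch.Draw(lightTarget, Vector2.Zero, Color.White);
        spriteBatch.End();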

    Read the article

  • Point[] and Tri "could not be found"

    - by Craig Dannehl
    Hi, I'm trying to learn how to load a .obj file using OpenTK in Windows Forms. I have seen a lot of examples out there, and I see that almost everyone uses List and Point[]. The code examples show these highlighted as if the IDE knows what they are, for example List<Tri> tris = new List<Tri>(); but mine just returns "The type or namespace name 'Tri' could not be found". Is there an include I need to add or a using I am missing? Currently I have these: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Drawing; using OpenTK; using OpenTK.Graphics; using OpenTK.Graphics.OpenGL;
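
    For what it's worth, Tri is not a type from OpenTK or the .NET base libraries, so no using directive will bring it in; in most .obj-loader samples it is a small struct the tutorial defines itself. A minimal placeholder along those lines (the field layout here is an assumption; adjust it to whatever the sample actually stores):

        // Hypothetical Tri definition -- tutorials usually declare something like this themselves.
        public struct Tri
        {
            public Vector3 P1, P2, P3;   // corner positions (OpenTK.Vector3)

            public Tri(Vector3 p1, Vector3 p2, Vector3 p3)
            {
                P1 = p1; P2 = p2; P3 = p3;
            }
        }

        // With a definition like that in scope, the original line compiles:
        List<Tri> tris = new List<Tri>();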

    Read the article

  • (LWJGL) Pixel Unpack Buffer Object is Disabled? (glTexImage2D)

    - by OstlerDev
    I am trying to create a render target for my game so that I can re-render at a different screen size. But I am receiving the following error: Exception in thread "main" org.lwjgl.opengl.OpenGLException: Cannot use offsets when Pixel Unpack Buffer Object is disabled Here is the source code for my Render method: // clear screen GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // Start FBO Rendering Code // The framebuffer, which regroups 0, 1, or more textures, and 0 or 1 depth buffer. int FramebufferName = GL30.glGenFramebuffers(); GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, FramebufferName); // The texture we're going to render to int renderedTexture = glGenTextures(); // "Bind" the newly created texture : all future texture functions will modify this texture glBindTexture(GL_TEXTURE_2D, renderedTexture); // Give an empty image to OpenGL ( the last "0" ) glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); // Poor filtering. Needed ! glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // Set "renderedTexture" as our colour attachement #0 GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, renderedTexture, 0); // Set the list of draw buffers. IntBuffer drawBuffer = BufferUtils.createIntBuffer(20 * 20); GL20.glDrawBuffers(drawBuffer); // Always check that our framebuffer is ok if(GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE){ System.out.println("Framebuffer was not created successfully! Exiting!"); return; } // Resets the current viewport GL11.glViewport(0, 0, scaleWidth*scale, scaleHeight*scale); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); // let subsystem paint if (callback != null) { callback.frameRendering(); } // update window contents Display.update(); It is crashing on this line: glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); I am not really sure why it is crashing and looking around I have not been able to find out why. Any help or insight would be greatly welcome.

    Read the article

  • SEHException thrown using Microsoft XACT Audio Framework (XACT3)

    - by Sweta Dwivedi
    I have been developing a game using Kinect + XNA and using the Microsoft Audio Creation Tool (XACT3) for managing my sound files and music. However, an SEHException is thrown whenever the code tries to get the wave file from the wave bank. Sometimes the code works magically, and all of a sudden it will start throwing this exception randomly. I need help solving this exception. /*Declaring Audio Engine for music*/ AudioEngine engine; SoundBank soundBank; WaveBank waveBank; Cue cue; /*Declaring Audio engine for sound effects*/ AudioEngine engine1; SoundBank soundbank; WaveBank wavebank; Cue effect; engine = new AudioEngine(@"Content\therapy.xgs"); soundBank = new SoundBank(engine, @"Content\Sound Bank.xsb"); **waveBank = new WaveBank(engine, @"Content\Wave Bank.xwb");** cue = null; engine1 = new AudioEngine(@"Content\Music_Manager\Sound_effects.xgs"); soundbank = new SoundBank(engine1, @"Content\Music_Manager\Sound1.xsb"); **wavebank = new WaveBank(engine1, @"Content\Music_Manager\Wave1.xwb");** effect = null; cue = soundBank.GetCue("hypnotizing"); cue.Play();
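
    One mitigation worth trying (a sketch under assumptions, not a confirmed fix): if the wave bank was built as a streaming bank in the XACT project, open it with the streaming WaveBank constructor, wait until it reports IsPrepared, and keep calling AudioEngine.Update() every frame for both engines; pulling a cue from a bank that is not yet prepared is a common source of this crash.

        engine = new AudioEngine(@"Content\therapy.xgs");
        soundBank = new SoundBank(engine, @"Content\Sound Bank.xsb");
        waveBank = new WaveBank(engine, @"Content\Wave Bank.xwb", 0, 16);   // streaming overload

        while (!waveBank.IsPrepared)
            engine.Update();                 // let the engine finish preparing the bank

        cue = soundBank.GetCue("hypnotizing");
        cue.Play();

        // ...and in Game.Update(), pump both engines every frame:
        // engine.Update();
        // engine1.Update();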

    Read the article

  • 2D Grid based game - how should I draw grid lines?

    - by Adam K Dean
    I'm playing around with a 2D grid based game idea, and I am using sprites for the grid cells. Let's say there is a 10 x 10 grid and each cell is 48x48, which will have sprites drawn there. That is fine. But in design mode, I'd like to have a grid overlay the screen. I can do this either with sprites (2x600 pixel image etc) or with primitives, but which is best? Should I really be switching between sprites and 3d/2d rendering? Like so: Thanks!
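
    A minimal sketch of the sprite-only route, so no switch to primitive rendering is needed (pixel is assumed to be a 1x1 white Texture2D created at load time): stretch the single pixel into thin rectangles for each grid line and draw them in the existing SpriteBatch pass, only while in design mode.

        const int CellSize = 48, Columns = 10, Rows = 10;
        // vertical lines
        for (int x = 0; x <= Columns; x++)
            spriteBatch.Draw(pixel, new Rectangle(x * CellSize, 0, 1, Rows * CellSize), Color.White * 0.5f);
        // horizontal lines
        for (int y = 0; y <= Rows; y++)
            spriteBatch.Draw(pixel, new Rectangle(0, y * CellSize, Columns * CellSize, 1), Color.White * 0.5f);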

    Read the article

  • How to proceed on the waypoint path?

    - by Alpha Carinae
    I'm using Dijkstra's algorithm to find the shortest path and I'm drawing this path on the screen. As the character object moves on, the path updates itself (it shortens as the object approaches the target and gets longer as the object moves away from it). I tried to visualize my problem. This is the beginning state: the 'A' node is the target, the path is the blue one and the object is the green one. I draw this path from the object to the closest node. In this case my problem occurs: because the 'D' node is closer to the object than the 'C' node, something like this happens. So, how can I decide that the object has passed the 'D' node? The path should look like this instead. One thing that comes to my mind is to use some distance variables between the two closest nodes in the route path (in this example these are the 'C' and 'D' nodes). As the object approaches 'C' and moves away from the 'D' node at the same time, this means the character has passed 'D'. However, I think there are some standardized and easy ways to solve this. What approach should I take?
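
    A minimal sketch of one standard approach (names are placeholders): keep an index into the node list and advance it once the character is within a small arrival radius of the current node, or once that node falls behind the direction of travel; the path is then drawn from the character to path[currentIndex] onward, so passed nodes drop out on their own.

        Vector2 toNode = path[currentIndex] - characterPosition;
        bool reached = toNode.Length() < arrivalRadius;                  // close enough to count as visited
        bool passed  = Vector2.Dot(toNode, moveDirection) < 0f;          // node is behind the movement direction
        if ((reached || passed) && currentIndex < path.Count - 1)
            currentIndex++;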

    Read the article

  • Simulating smooth movement along a line after calculating a collision containing a restitution of zero in 2D

    - by Casey
    [for tl;dr see after listing] //...Code to determine shapes types involved in collision here... //...Rectangle-Line collision detected. if(_rbTest->GetCollisionShape()->Intersects(*_ground->GetCollisionShape())) { //Convert incoming shape to a line. a2de::Line l(*dynamic_cast<a2de::Line*>(_ground->GetCollisionShape())); //Get line's normal. a2de::Vector2D normal_vector(l.GetSlope().GetY(), -l.GetSlope().GetX()); a2de::Vector2D::Normalize(normal_vector); //Accumulate forces involved. a2de::Vector2D intermediate_forces; a2de::Vector2D normal_force = normal_vector * _rbTest->GetMass() * _world->GetGravityHandler()->GetGravityValue(); intermediate_forces += normal_force; //Calculate final velocity: See [1] double Ma = _rbTest->GetMass(); a2de::Vector2D Ua = _rbTest->GetVelocity(); double Mb = _ground->GetMass(); a2de::Vector2D Ub = _ground->GetVelocity(); double mCr = Mb * _ground->GetRestitution(); a2de::Vector2D collision_velocity( ((Ma * Ua) + (Mb * Ub) + ((mCr * Ub) - (mCr * Ua))) / (Ma + Mb)); //Calculate reflection vector: See [2] a2de::Vector2D reflect_velocity( -collision_velocity + 2 * (a2de::Vector2D::DotProduct(collision_velocity, normal_vector)) * normal_vector ); //Affect velocity to account for restitution of colliding bodies. reflect_velocity *= (_ground->GetRestitution() * _rbTest->GetRestitution()); _rbTest->SetVelocity(reflect_velocity); //THE ULTIMATE ISSUE STEMS FROM THE FOLLOWING LINE: //Move object away from collision one pixel to prevent constant collision. _rbTest->SetPosition(_rbTest->GetPosition() + normal_vector); _rbTest->ApplyImpulse(intermediate_forces); } Sources: (1) Wikipedia: Coefficient of Restitution: Speeds after impact (2) Wikipedia: Specular Reflection: Direction of reflection First, I have a system in place to account for friction (that is, a coefficient of friction) but is not used right now (in addition, it is zero, which should not affect the math anyway). I'll deal with that after I get this working. Anyway, when the restitution of either object involved in the collision is zero the object stops as required, but if movement along the same direction (again, irrespective of the friction value that isn't used) as the line is attempted the object moves as if slogging through knee deep snow. If I remove the line of code in question and the object is not push away one pixel the object barely moves at all. All because the object collides, is stopped, is pushed up, collides, is stopped...etc. OR collides, is stopped, collides, is stopped, etc... TL;DR How do I only account for a collision ONCE for restitution purposes (BONUS: but CONTINUALLY for frictional purposes, to be implemented later)

    Read the article

  • Best way to prevent UIPanGestureRecognizer from firing when moving sprites in cocos2d

    - by cjroebuck
    I'm using UIPanGestureRecognizer in my cocos2d game to do drag and drop of sprites. I have a row of sprites, and when I drag a sprite on top of another one, the sprite underneath it and any other sprites between them should shift left or right out of the way to make space to drop the currently selected sprite. This is working OK; however, if I am too quick at dragging the sprite around the screen, this triggers another round of the UIPanGestureRecognizer's callback method and screws up the logic, as the sprites are still mid-shift. I need a way to stop the callback from firing while the other sprites are shifting, then once they have finished moving, re-enable the callback. What's the best way to do this?

    Read the article

  • Most efficient AABB - Ray intersection algorithm for input/output distance calculation

    - by Tobbey
    Thanks to the following thread: most efficient AABB vs Ray collision algorithms, I have seen very fast algorithms for ray/AABB intersection point computation. Unfortunately, most of the recent algorithms are accelerated by omitting the "output" intersection point of the box. In my application, I would be interested in getting both the distance from the ray origin to the entry point, t0, and to the exit point of the bounding box, t1. I have seen, for instance, that Eisemann designed a very fast version compared against Plücker, Smits, etc., but it does not cover the case where both input/output distances should be computed; see: http://www.cg.cs.tu-bs.de/publications/Eisemann07FRA/ Does someone know where I can find more information on algorithm performance for this specific input/output problem? Thank you in advance.
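
    For reference, the classic slab test already produces both distances as a by-product; a sketch in C#/XNA terms (invDir is assumed to be the precomputed component-wise reciprocal of the ray direction, with IEEE infinities covering axis-parallel rays):

        static bool RayAabb(Vector3 origin, Vector3 invDir, Vector3 min, Vector3 max,
                            out float t0, out float t1)
        {
            float tx1 = (min.X - origin.X) * invDir.X, tx2 = (max.X - origin.X) * invDir.X;
            float ty1 = (min.Y - origin.Y) * invDir.Y, ty2 = (max.Y - origin.Y) * invDir.Y;
            float tz1 = (min.Z - origin.Z) * invDir.Z, tz2 = (max.Z - origin.Z) * invDir.Z;

            t0 = Math.Max(Math.Max(Math.Min(tx1, tx2), Math.Min(ty1, ty2)), Math.Min(tz1, tz2));   // entry distance
            t1 = Math.Min(Math.Min(Math.Max(tx1, tx2), Math.Max(ty1, ty2)), Math.Max(tz1, tz2));   // exit distance
            return t1 >= Math.Max(t0, 0f);   // hit only if the exit lies ahead of both the entry and the origin
        }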

    Read the article

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3d objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3d reflection vector in a classic way: half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb); I then use the vector to get the actual reflected color: half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0)); The cubemap I am sampling from is a RenderTarget with 6 flat faces. So my question is, given the 3d world position of an arbitrary 3d object, how can I make sure that I get accurate reflections of this object, when I re-render the cubemap. Should I build the ViewProjection matrix in a specific way? Or is there any other approach?
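
    A sketch of the usual way the six views are built when the cubemap is re-rendered (names like reflectionCube, objectPosition and DrawScene are placeholders): centre every face's view matrix on the reflecting object's world position and use a 90-degree, aspect-1 projection. The forward/up pairs below follow the common D3D cube-map convention; depending on handedness some of them may need flipping.

        Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 0.1f, 1000f);
        CubeMapFace[] faces = { CubeMapFace.PositiveX, CubeMapFace.NegativeX, CubeMapFace.PositiveY,
                                CubeMapFace.NegativeY, CubeMapFace.PositiveZ, CubeMapFace.NegativeZ };
        Vector3[] forwards  = { Vector3.UnitX, -Vector3.UnitX, Vector3.UnitY,
                                -Vector3.UnitY, Vector3.UnitZ, -Vector3.UnitZ };
        Vector3[] ups       = { Vector3.Up, Vector3.Up, -Vector3.UnitZ,
                                Vector3.UnitZ, Vector3.Up, Vector3.Up };

        for (int i = 0; i < 6; i++)
        {
            GraphicsDevice.SetRenderTarget(reflectionCube, faces[i]);
            GraphicsDevice.Clear(Color.Black);
            Matrix view = Matrix.CreateLookAt(objectPosition, objectPosition + forwards[i], ups[i]);
            DrawScene(view, projection);        // hypothetical scene-draw helper
        }
        GraphicsDevice.SetRenderTarget(null);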

    Read the article

  • Pygame: Save a list of objects/classes/surfaces

    - by Sam Tubb
    I am working on a game, in which you can create mazes. You place blocks on a 16x16 grid, while choosing from a variety of block to make the level with. Whenever you create a block, it adds this class: class Block(object): def __init__(self,x,y,spr): self.x=x self.y=y self.sprite=spr self.rect=self.sprite.get_rect(x=self.x,y=self.y) to a list called instances. I tried shelving it to a .bin file, but it returns some error dealing with surfaces. How can I go about saving and loading levels? Any help is appreciated! :) Here is the whole code for reference: import pygame from pygame.locals import * #initstuff pygame.init() screen=pygame.display.set_mode((640,480)) pygame.display.set_caption('PiMaze') instances=[] #loadsprites menuspr=pygame.image.load('images/menu.png').convert() b1spr=pygame.image.load('images/b1.png').convert() b2spr=pygame.image.load('images/b2.png').convert() currentbspr=b1spr curspr=pygame.image.load('images/curs.png').convert() curspr.set_colorkey((0,255,0)) #menu menuspr.set_alpha(185) menurect=menuspr.get_rect(x=-260,y=4) class MenuItem(object): def __init__(self,pos,spr): self.x=pos[0] self.y=pos[1] self.sprite=spr self.pos=(self.x,self.y) self.rect=self.sprite.get_rect(x=self.x,y=self.y) class Block(object): def __init__(self,x,y,spr): self.x=x self.y=y self.sprite=spr self.rect=self.sprite.get_rect(x=self.x,y=self.y) while True: #menu items b1menu=b1spr.get_rect(x=menurect.left+32,y=48) b2menu=b2spr.get_rect(x=menurect.left+64,y=48) menuitems=[MenuItem(b1menu,b1spr),MenuItem(b2menu,b2spr)] screen.fill((20,30,85)) mse=pygame.mouse.get_pos() key=pygame.key.get_pressed() placepos=((mse[0]/16)*16,(mse[1]/16)*16) if key[K_q]: if mse[0]<260: if menurect.right<255: menurect.right+=1 else: if menurect.left>-260: menurect.left-=1 else: if menurect.left>-260: menurect.left-=1 for e in pygame.event.get(): if e.type==QUIT: exit() if menurect.right<100: if e.type==MOUSEBUTTONUP: if e.button==1: to_remove = [i for i in instances if i.rect.collidepoint(placepos)] for i in to_remove: instances.remove(i) if not to_remove: instances.append(Block(placepos[0],placepos[1],currentbspr)) for i in instances: screen.blit(i.sprite,i.rect) if not key[K_q]: screen.blit(curspr,placepos) screen.blit(menuspr,menurect) for item in menuitems: screen.blit(item.sprite,item.pos) if item.rect.collidepoint(mse): if pygame.mouse.get_pressed()==(1,0,0): currentbspr=item.sprite pygame.draw.rect(screen, ((255,0,0)), item, 1) pygame.display.flip()

    Read the article

  • Can anyone recommend an AI sandbox?

    - by user19433
    I'm a passionate person who has been around AI for a long time [1] but never went deep enough. Now it's time! I've been looking for some way to concentrate on AI coding but couldn't succeed in finding an AI environment I can focus on. I just want an AI sandbox environment that gives me tools like: visibility information, a character controller, the ability to easily define a level (with obstacles of course), physics/collider management, and trigger management. It doesn't need to be a shiny, eye-candy graphical renderer: this is about pathfinding, tactical reasoning, etc. I have tried: Unreal Dev Kit: while the new release announcement mentions C++ coding, that is about external tools and will be released in 2013. CryEngine: really interesting as AI is present here, but coding with it appears to be hell: did I get it wrong? Half-Life Source, C4, Torque, DX Studio: either quite old, not very useful, or costly; these imply digging into the documentation (when provided) to code everything, graphics included. Unity 3D: the most promising platform. While you also need to create your own environment, there are lots of examples. The disadvantage, in addition to spending time getting this environment working, is the language choice: C#, JavaScript or Boo. C# is not that hard, but it implies you'll always have to convert papers (I love those from Lars Linden), book code, or anything you find on AiGameDev, which are most often in C++. This is extra work. I've looked at "Simple Path", the very good Aron Granberg work (but no source provided), and AngryAnt's work. AI Sandbox: this seems to be exactly what I, as an AI coder, want to use. I saw some previews, but since 2009 we still don't know precisely what it will be: will it be open source or free (I strongly doubt it), will I be able to buy it, and will it really provide the tools I need to focus on AI? That being said, what is the best environment for focusing on AI coding only, and is it even possible?

    Read the article

  • Split vector vs matrix notation for transformation

    - by seahorse
    Some rendering engines like Ogre prefer to use an individual vector-based notation for transformations, like the following. Split vector notation: the net transformation is represented by a scale vector (sx, sy, sz), a translation vector (tx, ty, tz), and a rotation quaternion (w, x, y, z). Matrix notation: other engines simply use a single combined transformation matrix. What are the advantages of the first notation over the second? Also, for animation interpolation, does it work in the first notation to interpolate the individual components and use the interpolated parts to get the net transformation? Is this another advantage?
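
    A short sketch of the main practical advantage for animation (XNA-style names, purely illustrative): each component is interpolated with the operation that suits it, then the parts are recombined into a matrix; lerping a full 4x4 matrix directly does not keep the rotation part orthonormal.

        Vector3    scale    = Vector3.Lerp(scaleA, scaleB, t);
        Quaternion rotation = Quaternion.Slerp(rotA, rotB, t);      // shortest-arc rotation blend
        Vector3    position = Vector3.Lerp(posA, posB, t);

        Matrix world = Matrix.CreateScale(scale)
                     * Matrix.CreateFromQuaternion(rotation)
                     * Matrix.CreateTranslation(position);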

    Read the article

  • 3D terrain map with Hexagon Grids (XNA)

    - by Rob
    I'm working on a hobby project (I'm a web/backend developer by day) and I want to create a 3D tile (terrain) engine. I'm using XNA, but I can use MonoGame, OpenGL, or straight DirectX, so the answer does not have to be XNA specific. I'm mostly looking for some high-level advice on how to approach this problem. I know about creating height maps and such; there are thousands of references out there on the net for that. This is a bit more specific: what I'm more concerned with is the approach to get a 3D hexagon tile grid out of my terrain (since the terrain, and all 3D objects, are basically triangles). The first approach I thought about is to basically draw the triangles on the screen in the following order (blue numbers) to give me the triangles for the terrain (black triangles) and then make hexes out of the triangles (red hex): http://screencast.com/t/ebrH2g5V This approach seems complicated to me since I'm basically having to draw 4 different types of triangles. The next approach I thought of was to use the existing triangles like I did for a square grid and get my hexes from 6 triangles, as follows: http://screencast.com/t/w9b7qKzVJtb8 This seems like the easier approach to me since there are only 2 types of triangles (I would have to play with the heights and widths to get a "perfect" hexagon, but the idea is the same). So I'm looking for: 1) Any suggestions on which approach I should take, and why. 2) How would I translate the mouse position to a hexagon grid position (especially when moving the camera around)? For example, in the second image, if the mouse pointer were the green circle, how would I determine to highlight that hexagon and then translate that into grid coordinates (assuming it is 0,0)? 3) Any references, articles, books, etc., to get me going in the right direction. Note: I've done hex grids and mouse-grid coordinate conversion before in 2D; I'm looking for some pointers on how to do the same in 3D. The result I would like to achieve is something similar to the following: http://www.youtube.com/watch?v=Ri92YkyC3fw (sorry about the YouTube link, but it will only let me post 2 links in this post... same rep problem I mention below...) Thanks for any help! P.S. Sorry for not posting the images inline, I apparently don't have enough rep on this stack exchange site.
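
    On question 2, a minimal sketch of the usual picking route in XNA (view/projection are the camera matrices; the other names are placeholders): unproject the mouse into a world-space ray, intersect it with the ground plane, and feed the hit point's X/Z into the same 2D world-to-hex conversion used before.

        Vector3 near = GraphicsDevice.Viewport.Unproject(new Vector3(mouseX, mouseY, 0f), projection, view, Matrix.Identity);
        Vector3 far  = GraphicsDevice.Viewport.Unproject(new Vector3(mouseX, mouseY, 1f), projection, view, Matrix.Identity);
        Ray pickRay  = new Ray(near, Vector3.Normalize(far - near));

        float? distance = pickRay.Intersects(new Plane(Vector3.Up, 0f));   // flat ground at y = 0
        if (distance.HasValue)
        {
            Vector3 hit = pickRay.Position + pickRay.Direction * distance.Value;
            // hit.X and hit.Z now go through the existing 2D pixel-to-hex math
            // (for terrain with height, test against each tile's triangles instead).
        }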

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling - I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers, it seems like it's a precision error. I don't know what specifically causes the precision to fail - it doesn't seem like it's just the size of geometry that causes this distortion, because simply scaling vertex position passed to the the vertex shader does not solve the issue. Here are some examples of the texture distortion: Distorted Texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png The texture issue is limited to small scale geometry on OpenGL ES 2.0, otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin of XYZ(0,0,0) These texture issues do not occur on desktop OpenGL (works fine under Windows XP, Windows 7, and Mac OS X) I've only seen the problem occur on Android, iPhone, or WebGL(which is similar to OpenGL ES 2.0) All textures are power of 2 but the problem still occurs Scaling the vertex data - The values of a vertex's X Y Z location are in the range of: -65536 to +65536 floating point I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating point precision, but this didn't fix or lessen the texture distortion issue Scaling the modelview or scaling the projection matrix does not help Changing texture filtering options does not help Disabling mipmapping, or using GL_NEAREST/GL_LINEAR does nothing Enabling/disabling anisotropic does nothing The banding effect still occurs even when using GL_CLAMP Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader, also does not work precision highp sampler2D, highp float, highp int - in the fragment or the vertex shader didn't change anything (lowp/mediump did not work either) I'm thinking this problem has to have been solved at one point - Seeing that OpenGL ES 2.0 -based games have been able to render large-scale, highly detailed geometry

    Read the article

  • Euler angles to Cartesian Coordinates for use with gluLookAt

    - by notrodash
    I have searched all of the internet but just couldn't find the answer. I am using LibGDX and this is part of my code that loops over and over: public void render() { GL11 gl = Gdx.gl11; float centerX = (float)Math.cos(yaw) * (float)Math.cos(pitch); float centerY = (float)Math.sin(yaw) * (float)Math.cos(pitch); float centerZ = (float)Math.sin(pitch); System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z); Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0); if(Gdx.input.isKeyPressed(Keys.A)) { yaw--; } if(Gdx.input.isKeyPressed(Keys.D)) { yaw++; } } I might just be bad at the math, but I dont get it. Does someone have a good explanation and an idea about how to deal with this? I am trying to make a first person camera. By the way, the camera is translated by +10 on the Z axis. Currently when I run the application, this is what I get: Watch video in browser | Download video (for those who cant download the video, everything shakes in a clockwise/anticlockwise action, depending on if I increase or decrease the Yaw value) -Thank you. [edit] and with this code: public void render() { GL11 gl = Gdx.gl11; float centerX = (float)(MathUtils.cosDeg(yaw)*4); float centerY = 0; float centerZ = (float)(MathUtils.sinDeg(yaw)*4); System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z); Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0); if(Gdx.input.isKeyPressed(Keys.A)) { yaw--; } if(Gdx.input.isKeyPressed(Keys.D)) { yaw++; } } it slowly swings from the left to the right. This approach worked for turning left and right for 2d games though. What am I doing wrong?

    Read the article

  • 3D Vector "End Point" Calculation for procedural Vector Graphics

    - by FrostFlame64
    Alright, so I need some help with some vector math. I've been developing some game engines that have procedural fractal generation for some graphics, such as using Lindenmayer systems for generating trees and plants. L-systems are drawn using turtle graphics, which is a form of vector graphics. I first created a system to draw in 2D graphics, which works perfectly fine. But now I want to make a 3D equivalent, and I've run into an issue. For my 2D version, I created a method for quickly determining the "end point" of a vector-like movement. Given a starting point (X, Y), a direction (between 0 and 360 degrees), and a distance, the end point is calculated by these formulas: newX = startX + distance * Sin((PI * direction) / 180) newY = startY + distance * Cos((PI * direction) / 180) Now I need something equivalent for performing this calculation in 3D, but I haven't been able to Google anything that could show me how to do this. I'm flexible enough to get whatever information is needed for this calculation, in any reasonable form (Vector3, Quaternion, etc.). To summarize: given a starting point/vector position in 3D space (X, Y, Z), a direction in 3D space (Vector3, Quaternion, etc.), and a distance, I need to find the "end point" in 3D space. Thank you for your time and help.
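
    A minimal sketch of both common forms, assuming XNA-style Vector3 (the yaw/pitch convention here -- yaw around the Y axis, pitch up/down, both in radians -- is an assumption; adjust it to match the turtle's axes): with a direction vector the 2D formula generalises directly, and with two angles it becomes the spherical-coordinate version.

        Vector3 FromDirection(Vector3 start, Vector3 direction, float distance)
        {
            return start + Vector3.Normalize(direction) * distance;
        }

        Vector3 FromAngles(Vector3 start, float yaw, float pitch, float distance)
        {
            Vector3 dir = new Vector3(
                (float)(Math.Cos(pitch) * Math.Sin(yaw)),    // X
                (float)Math.Sin(pitch),                      // Y (up)
                (float)(Math.Cos(pitch) * Math.Cos(yaw)));   // Z
            return start + dir * distance;                   // dir is already unit length
        }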

    Read the article

  • Sprites rendering blurry with velocity

    - by ashes999
    After adding velocity to my game, I feel like my textures are twitching. I thought it was just my eyes, until I finally captured it in a screenshot: The one on the left is what renders in my game; the one on the right is the original sprite, pasted over. (This is a screenshot from Photoshop, zoomed in 6x.) Notice the edges are aliasing -- it looks almost like sub-pixel rendering. In fact, if I had not forced my sprites (which have position and velocity as ints) to draw using integer values, I would swear that MonoGame is drawing with floating point values. But it isn't. What could be the cause of these things appearing blurry? It doesn't happen without velocity applied. To be precise, my SpriteComponent class has a Vector2 Position field. When I call Draw, I essentially use new Vector2((int)Math.Round(this.Position.X), (int)Math.Round(this.Position.Y)) for the position. I had a bug before where even stationary objects would jitter -- that was due to me using the straight Position vector and not rounding the values to ints. If I use Floor/Ceiling instead of round, the sprite sinks/hovers (one pixel difference either way) but still draws blurry.
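
    One thing worth ruling out (an assumption, not a confirmed diagnosis): SpriteBatch defaults to linear filtering, so any fractional value anywhere in the transform chain -- a camera matrix, an origin, a scale -- smears pixel art even when the positions passed to Draw are whole numbers. Forcing point sampling makes it obvious whether filtering or positioning is at fault:

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          SamplerState.PointClamp, null, null);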

    Read the article

  • Calculate gear rotation for a real-time simulation

    - by nkint
    Hi, I'm trying to make a game with a real-time simulation of gears. There is a big gear with a smaller gear inside it. I managed to draw gears with different diameters but equal-size teeth, but if I try to move the smaller one inside the bigger one the movement is odd (see the animated gif). The biggest gear is at center C1 and the small one at center C2. I calculate the C2 position this way: C2.x = C1.x + (C1_RADIUS - C2_RADIUS) * cos(t); C2.y = C1.y - (C1_RADIUS - C2_RADIUS) * sin(t); for t that goes from 0 to TWO_PI in n steps. I apply the angle t as the rotation, but maybe that is wrong and I have to calculate a different rotation to get a perfect joint.
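
    A sketch of the rotation that gives a clean mesh, assuming the small gear rolls without slipping inside the fixed big gear: matching arc lengths at the contact point means the small gear spins opposite to its orbit, scaled by the radius ratio, so the angle applied to the small gear should not be t itself.

        float orbit = t;                                          // angle of C2 around C1 (what you already have)
        float spin  = -t * (C1_RADIUS - C2_RADIUS) / C2_RADIUS;   // rotation of the small gear about its own centre
        // draw the small gear at (C2.x, C2.y) rotated by spin instead of by t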

    Read the article

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin-Android. The sample application produces a rotating colored cube I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by: adding an cumulative variable to store my Z distance, adding GL.Enable(All.DepthBufferBit) - unsure if I put it in the right place, adding GL.Translate(0.0f, 0.0f, Depth) - before the rotate functions, Result: cube rotates a couple of times then disappears, it seems to be getting clipped out of the frustum. So my question is what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance as this is very basic stuff but am still learning :P, I would appreciate it if anyone could show me the best way to get the cube to still rotate but to also move along the Z axis. I have commented all my modifications in the code: // This gets called when the drawing surface is ready protected override void OnLoad (EventArgs e) { // this call is optional, and meant to raise delegates // in case any are registered base.OnLoad (e); // UpdateFrame and RenderFrame are called // by the render loop. This is takes effect // when we use 'Run ()', like below UpdateFrame += delegate (object sender, FrameEventArgs args) { // Rotate at a constant speed for (int i = 0; i < 3; i ++) rot [i] += (float) (rateOfRotationPS [i] * args.Time); }; RenderFrame += delegate { RenderCube (); }; GL.Enable(All.DepthBufferBit); //Added by Noob GL.Enable(All.CullFace); GL.ShadeModel(All.Smooth); GL.Hint(All.PerspectiveCorrectionHint, All.Nicest); // Run the render loop Run (30); } void RenderCube () { GL.Viewport(0, 0, viewportWidth, viewportHeight); GL.MatrixMode (All.Projection); GL.LoadIdentity (); if ( viewportWidth > viewportHeight ) { GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f); } else { GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f); } GL.MatrixMode (All.Modelview); GL.LoadIdentity (); Depth -= 0.02f; //Added by Noob GL.Translate(0.0f,0.0f,Depth); //Added by Noob GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f); GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f); GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f); GL.ClearColor (0, 0, 0, 1.0f); GL.Clear (ClearBufferMask.ColorBufferBit); GL.VertexPointer(3, All.Float, 0, cube); GL.EnableClientState (All.VertexArray); GL.ColorPointer (4, All.Float, 0, cubeColors); GL.EnableClientState (All.ColorArray); GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles); SwapBuffers (); }
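
    A sketch of the pieces that look off, hedged because the exact OpenTK ES 1.1 binding names may differ slightly: DepthBufferBit is a clear flag rather than a capability, so depth testing should be enabled with DepthTest; the depth buffer has to be cleared every frame alongside the colour buffer; and an orthographic projection with near/far at -1/1 clips anything pushed down the Z axis almost immediately (and never shrinks it), so a perspective frustum is needed for the cube to recede into the distance.

        // in OnLoad, instead of GL.Enable(All.DepthBufferBit):
        GL.Enable(All.DepthTest);

        // in RenderCube(), replace the Ortho call with a perspective frustum
        // (note: with near = 1 the cube only becomes visible once Depth < -1, so start Depth around -3):
        GL.Frustum(-1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 100.0f);

        // ...and clear depth as well as colour:
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);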

    Read the article

  • Turn-based JRPG battle system architecture resources

    - by BenoitRen
    For the past few months I've been busy programming a 2D JRPG (Japanese-style RPG) in C++ using the SDL library. The exploration mode is more or less done. Now I'm tackling the battle mode. I have been unable to find any resources about how a classic turn-based JRPG battle system is structured. All I find are discussions about damage formulas. I've tried googling, searching gamedev.net's message board, and crawling through C++-related questions here on Stack Exchange. I've also tried reading the source code of existing open-source RPGs, but without a guide of some sort it's like trying to find a needle in a haystack. I'm not looking for a set of rules like D&D or anything similar. I'm talking purely about code and object structure design. A battle system asks the player for input using menus. Next the battle turn is executed as the heroes and the enemies execute their actions. Can anyone point me in the right direction? Thanks in advance.
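
    For a concrete starting point, here is a minimal sketch of one common structure, written in C# for brevity but mapping directly to C++ (Combatant and IAction are hypothetical types; Queue comes from System.Collections.Generic): the battle is a small state machine, and a turn is an ordered queue of command objects gathered during the menu phase and executed one at a time.

        enum BattlePhase { SelectCommands, ExecuteTurn, Victory, Defeat }

        class BattleCommand
        {
            public Combatant Actor;     // hero or enemy issuing the action
            public Combatant Target;
            public IAction Action;      // Attack, Spell, Item, ... (Command pattern)
        }

        class Battle
        {
            BattlePhase phase = BattlePhase.SelectCommands;
            readonly Queue<BattleCommand> turnQueue = new Queue<BattleCommand>();

            public void Update()
            {
                switch (phase)
                {
                    case BattlePhase.SelectCommands:
                        // menus fill turnQueue for heroes, AI fills it for enemies,
                        // then the queue is ordered by each actor's speed stat;
                        // once complete, phase = BattlePhase.ExecuteTurn
                        break;
                    case BattlePhase.ExecuteTurn:
                        if (turnQueue.Count > 0)
                            turnQueue.Dequeue().Action.Execute();   // apply damage, play animation, etc.
                        else
                            phase = BattlePhase.SelectCommands;     // back to the menus (or Victory/Defeat)
                        break;
                }
            }
        }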

    Read the article

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two pass HLSL pixel shader. It can do anything really, but the main idea is to perform "ping ponging" to take the output of the first pass and then send it for the second pass. In my example I want to draw to the R channel and then draw to the G channel and produce a simple Venn Diagram in the shader, but need to detect overlap. I can currently detect one or the other but not overlap. There are a red and green circle overlapping, and I want to put a dynamic texture map in the overlap region. I can currently put it in either or. Below is how it looks in the shader. -------------------------------- Texture2D shaderTexture; SamplerState SampleType; ////////////// // TYPEDEFS // ////////////// struct PixelInputType { float4 position : SV_POSITION; float2 tex0 : TEXCOORD0; float2 tex1 : TEXCOORD1; float4 color : COLOR; }; //////////////////////////////////////////////////////////////////////////////// // Pixel Shader //////////////////////////////////////////////////////////////////////////////// float4 main(PixelInputType input) : SV_TARGET { float4 textureColor0; float4 textureColor1; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor0 = shaderTexture.Sample(SampleType, input.tex0); textureColor1 = shaderTexture.Sample(SampleType, input.tex1); if (input.color[0]==1.0f && input.color[1]==1.0f) // Requires multi-pass textureColor0 = textureColor1; return textureColor0; } Here is the calling code (that needs to be modified) m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets); m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT,0); m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); m_d3dContext->IASetInputLayout(m_inputLayout.Get()); m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0); m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf()); m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0); m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf()); m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());

    Read the article

  • JPEG images not loading on PlayBook (Marmalade + iwgame)

    - by Vexille
    I'm using iwgame on a test project and I was trying to render different resolutions of JPG and PNG images. Everything works fine on the Marmalade Simulator; however, once I deploy the game to our PlayBook and run it, only the PNG images are shown. I have declared the images in the MKB file and in an XML file iwgame is using to load the images. I've checked the deployments folder and all images are present in the intermediatefiles/native folder. We're currently using a BlackBerry-only license, so we can only test this on the PlayBook, but we do intend to get a Community license and deploy to iOS and Android devices eventually (I'm not sure if this is a problem exclusive to the PlayBook). I really don't know if this is a Marmalade or an iwgame issue. I have a different test project without iwgame and it simply won't run with jpg images (I get the error: 'Could not find handler for extension "jpg"'). While searching for a solution, I've seen people talking about using libjpg, but I've also found that Marmalade supposedly has integrated native jpeg support (and because of that iwgame has abandoned their jpeg loading support since v0.340), so I don't know what to think. I'm currently using the most recent versions of both Marmalade and iwgame, I believe: Marmalade 6.1.2 and iwgame 0.400. Also, please let me know if there's an easier or better way to do this, such as linking libjpg or something (I'm not exactly sure how to do this). I really would appreciate some help with this; there's a huge difference in size for the images we're planning to use, from a ~500kb jpg file to a ~3.5mb png file. Thanks, guys.

    Read the article

  • Hardware instancing for voxel engine

    - by Menno Gouw
    I just did the tutorial on hardware instancing from this source: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/. Somewhere between 900,000 and 1,000,000 draw calls for the cube I get the error "XNA Framework HiDef profile supports a maximum VertexBuffer size of 67108863", while it still runs smoothly at 900k. That is slightly less than 100x100x100, which is exactly a million. Now, I have seen voxel engines with very "tiny" voxels; you easily get to 1,000,000 cubes in view with rough terrain and a decent far plane. Obviously I can optimize a lot in the geometry buffer method, like rendering only the visible faces of a cube or using larger faces covering multiple cubes if the area is flat. But is a vertex buffer of roughly 67 MB the maximum I can work with, or can I create multiple?
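
    To the last question, a sketch of the multi-buffer route (names are placeholders, not from the linked tutorial): nothing stops you from creating several instance vertex buffers, or from reusing one smaller DynamicVertexBuffer and issuing the instanced draw in batches, so no single buffer ever approaches the HiDef cap.

        const int BatchSize = 250000;   // comfortably under the HiDef vertex buffer limit
        for (int start = 0; start < instanceTransforms.Length; start += BatchSize)
        {
            int count = Math.Min(BatchSize, instanceTransforms.Length - start);
            instanceBuffer.SetData(instanceTransforms, start, count, SetDataOptions.Discard);

            GraphicsDevice.SetVertexBuffers(
                new VertexBufferBinding(cubeVertexBuffer, 0, 0),
                new VertexBufferBinding(instanceBuffer, 0, 1));   // advance once per instance
            GraphicsDevice.Indices = cubeIndexBuffer;
            GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList,
                0, 0, cubeVertexCount, 0, cubeTriangleCount, count);
        }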

    Read the article

  • How to convert from wav or mp3 to raw PCM [on hold]

    - by Komyg
    I am developing a game using Cocos2d-X and Marmalade SDK, and I am looking for any recommendations of programs that can convert audio files in mp3 or wav format to raw PCM 16 format. The problem is that I am using the SimpleAudioEngine class to play sounds in my game and in Marmalade it only supports files that are encoded as raw PCM 16. Unfortunately I've been having a very hard time finding a program that can do this type of conversion, so I am looking for a recommendation.

    Read the article
