Search Results

Search found 19281 results on 772 pages for 'blender game engine'.

Page 429 of 772

  • Drawing 2D Grid in 3D View - Need help with method

    - by Deukalion
    I'm trying to draw a simple 2D grid for an editor, to be able to navigate more clearly around the 3D space, but I can't get it to render. The Grid2D class creates a grid of a certain size at a location and should just draw lines:

        public class Grid2D : IShape
        {
            private VertexPositionColor[] _vertices;
            private Vector2 _size;
            private Vector3 _location;
            private int _faces;

            public Grid2D(Vector2 size, Vector3 location, Color color)
            {
                float x = 0, y = 0;
                if (size.X < 1f) { size.X = 1f; }
                if (size.Y < 1f) { size.Y = 1f; }
                _size = size;
                _location = location;
                List<VertexPositionColor> vertices = new List<VertexPositionColor>();
                _faces = 0;
                for (y = -size.Y; y <= size.Y; y++)
                {
                    vertices.Add(new VertexPositionColor(location + new Vector3(-size.X, y, 0), color));
                    vertices.Add(new VertexPositionColor(location + new Vector3(size.X, y, 0), color));
                    _faces++;
                }
                for (x = -size.X; x <= size.X; x++)
                {
                    vertices.Add(new VertexPositionColor(location + new Vector3(x, -size.Y, 0), color));
                    vertices.Add(new VertexPositionColor(location + new Vector3(x, size.Y, 0), color));
                    _faces++;
                }
                _vertices = vertices.ToArray();
            }

            public void Render(GraphicsDevice device)
            {
                device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.LineList, _vertices, 0, _faces);
            }
        }

    Like this:

        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+

    Anyone know what I'm doing wrong? If I add a Shape without a texture, it is automatically set to VertexColorEnabled = true and TextureEnabled = false. This is how I render it:

        foreach (RenderObject render in _renderObjects)
        {
            render.Effect.Projection = projection;
            render.Effect.View = view;
            render.Effect.World = world;
            foreach (EffectPass pass in render.Effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                try
                {
                    // Could be a Grid2D
                    render.Shape.Render(_device);
                }
                catch
                {
                    throw;
                }
            }
        }

    An exception is thrown: "The current vertex shader declaration does not include all the elements required by the current vertex shader. Normal0 is missing." Simply put, I can't figure out how to draw a few lines. I wanted to draw them one at a time, and I guess that's the part I haven't figured out; even when I tried rendering vertices[i] and vertices[i+1] with primitiveCount = 1, two vertices per call, and so on, it didn't work either. Any suggestions?
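    A quick aside on that Normal0 error (this is an assumption, since the effect setup isn't shown above): if render.Effect is a BasicEffect with lighting enabled, its vertex shader expects a normal element that VertexPositionColor doesn't provide. A minimal sketch of the state I'd expect for untextured colored lines:

        // Sketch: force the color-only shader permutation so that only
        // Position0 and Color0 are required from the vertex declaration.
        BasicEffect effect = new BasicEffect(device);   // XNA 4.0 constructor
        effect.VertexColorEnabled = true;
        effect.TextureEnabled = false;
        effect.LightingEnabled = false;                 // lighting is what pulls in Normal0

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            // primitiveCount for a LineList is one line per two vertices
            device.DrawUserPrimitives<VertexPositionColor>(
                PrimitiveType.LineList, _vertices, 0, _faces);
        }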

    Read the article

  • How to change LOD in geometry?

    - by ChaosDev
    I'm looking for a simple LOD algorithm that modifies geometry vertices to decrease frame time. I've created an octree, but now I want a vertex-modification algorithm for models or terrain - not to increase detail (I'll look at tessellation later) but to decrease it. I want something like this. Questions:

    Can the same algorithm be applied correctly to both models and terrain?
    Do the indices need to be modified?
    Must I use the octree, or is a simple distance check between the camera and the object enough for the desired effect?
    Is a new value of indexcount needed for the DrawIndexed function?

    Code:

        // m_LOD == 10 in the beginning
        // m_RawVerts - array of 3D vectors filled with values from the vertex buffer
        void DecreaseLOD()
        {
            m_LOD--;
            if (m_LOD < 1) m_LOD = 1;
            RebuildGeometry();
        }

        void IncreaseLOD()
        {
            m_LOD++;
            if (m_LOD > 10) m_LOD = 10;
            RebuildGeometry();
        }

        void RebuildGeometry()
        {
            void* vertexRawData = new byte[m_VertexBufferSize];
            void* indexRawData = new DWORD[m_IndexCount];
            auto context = mp_D3D->mp_Context;

            D3D11_MAPPED_SUBRESOURCE data;
            ZeroMemory(&data, sizeof(D3D11_MAPPED_SUBRESOURCE));
            context->Map(mp_VertexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(vertexRawData, data.pData, m_VertexBufferSize);
            context->Unmap(mp_VertexBuffer->mp_buffer, 0);

            context->Map(mp_IndexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(indexRawData, data.pData, m_IndexBufferSize);
            context->Unmap(mp_IndexBuffer->mp_buffer, 0);

            DWORD* dwI = (DWORD*)indexRawData;
            int sz = (m_VertexStride / sizeof(float)); // size of vertex element

            // algorithm must be here
            std::vector<Vector3d> vertices;
            int i = 0;
            for (int j = 0; j < m_VertexCount; j++)
            {
                float x1 = (((float*)vertexRawData)[0 + i]);
                float y1 = (((float*)vertexRawData)[1 + i]);
                float z1 = (((float*)vertexRawData)[2 + i]);
                Vector3d lv = Vector3d(x1, y1, z1);

                // my useless attempts
                if (j + m_LOD + 1 < m_RawVerts.size())
                {
                    float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD]]);
                    float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD + 1]]);
                    if (v1 > v2)
                        lv = m_RawVerts[dwI[j + 1]];
                    else if (v2 < v1)
                        lv = m_RawVerts[dwI[j + 2]];
                }
                (((float*)vertexRawData)[0 + i]) = lv.x;
                (((float*)vertexRawData)[1 + i]) = lv.y;
                (((float*)vertexRawData)[2 + i]) = lv.z;
                i += sz; // pass other vertex format values through unchanged
            }

            for (int j = 0; j < m_IndexCount; j++)
            {
                // indices ?
            }

            // set vertices to device
            UpdateVertexes(vertexRawData, mp_VertexBuffer->getSize());
            delete[] vertexRawData;
            delete[] indexRawData;
        }
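    One common decimation scheme for regular grids (terrain especially) touches only the index buffer: rebuild the indices with a larger stride and leave the vertices alone, which also answers the indexcount question, since DrawIndexed then takes the new index count. A language-agnostic sketch (written in C# here; the grid layout and all names are my assumptions, not the code above):

        // Sketch: lower a regular grid's LOD by rebuilding only the index
        // buffer with a larger stride; the vertex buffer stays untouched.
        int step = 1 << lod;                          // lod 0 = full detail
        var indices = new List<int>();
        for (int z = 0; z + step < gridHeight; z += step)
        {
            for (int x = 0; x + step < gridWidth; x += step)
            {
                int i0 = z * gridWidth + x;           // quad corners
                int i1 = i0 + step;
                int i2 = (z + step) * gridWidth + x;
                int i3 = i2 + step;
                indices.AddRange(new[] { i0, i2, i1, i1, i2, i3 }); // two triangles per quad
            }
        }
        // The new index count passed to DrawIndexed is indices.Count.

    This works for grid-like terrain; for arbitrary models you need a true mesh-simplification algorithm (e.g. edge collapse), which is a different problem from distance-based selection.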

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. I'm creating a floating platform that I would like to simply travel from one designated point to another, then return back to the first, passing between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel multiples of whole tile values of world data, so if the platform is not stationary, it will travel at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon coming within one tile length of its destination, I would like it to slow to a stop exactly at the given tile coordinate and then repeat the process in reverse. The first two parts aren't too difficult; it's the third part I'm having trouble with. I would like the platform to stop exactly at a tile coordinate, and since I'm working with acceleration, it seems natural to begin applying acceleration in the opposite direction to the platform's current speed once it comes within one tile's length of the stop (assuming the platform travels more than one tile length in total, which, to keep things simple, let's assume it does). But then the question is: what would the correct value of that deceleration be to produce this effect? How would I find that value?
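    For the stopping value, constant-deceleration kinematics gives it directly: to go from speed v to rest over a remaining distance d, you need a = v^2 / (2d). A small sketch (C#; the field names are made up):

        // Computed once, at the moment the platform is exactly one tile length away:
        // from v^2 = 2 * a * d  =>  a = v^2 / (2 * d).
        float stopDistance = tileLength;
        float decel = (maxSpeed * maxSpeed) / (2f * stopDistance);

        // Then, every update while stopping:
        speed -= decel * elapsedSeconds;
        position.X += speed * elapsedSeconds * direction;    // direction is +1 or -1
        if (speed <= 0f) { speed = 0f; position.X = stopX; } // snap to the exact tile coordinate

    The final snap is worth keeping regardless: with discrete time steps the platform will never land perfectly on the coordinate, so clamping on the last frame hides the residual error.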

    Read the article

  • Apply bone transforms when importing FBX in XNA

    - by hichaeretaqua
    Preconditions: I have some models that only contain a few meshes and one texture. There is no animation within the model. An example: a model of a table. I want to draw the model with a custom effect, so I have to swap the effect after loading the model. In order to draw it correctly, I have to apply the bone transformation manually on each draw, for each mesh and effect, as can be seen here. So there are two questions:

    Is there an option during import that applies the bone transformation to all vertices, so that I don't have to do this during the draw call?
    Is there an option during import that merges all vertices into one VertexBuffer and IndexBuffer, allowing me to draw the whole model with just one call?

    I'm pretty sure the built-in "Autodesk FBX - XNA Framework" importer does not support these features, but maybe there is another importer available, or another possibility I missed. The aim is to speed up rendering a little, especially by using instancing, so having one VertexBuffer to draw at a time would be pretty nice.
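    Short of a content-pipeline option, a load-time workaround is to bake the absolute bone transforms once and fold them into the world matrix, so the per-frame cost disappears. A sketch (the effect parameter name "World" is my assumption):

        // Bake once after loading; these never change for a static model.
        Matrix[] boneTransforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(boneTransforms);

        // At draw time, fold the baked transform into the world matrix:
        foreach (ModelMesh mesh in model.Meshes)
        {
            myEffect.Parameters["World"].SetValue(
                boneTransforms[mesh.ParentBone.Index] * world);
            mesh.Draw();
        }

    This avoids recomputing the hierarchy every frame, though it still costs one draw call per mesh; true merging into a single buffer pair would need a custom content processor.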

    Read the article

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D quad to the mouse cursor position. I'm able to:

    1.) Get values into posX, posY, posZ
    2.) Translate with the values from those 3 variables

    But I haven't been able to position the quad correctly, in such a way that it sits near the mouse cursor using the values from posX, posY and posZ. I need the mouse cursor in the center of the 2D quad. I'm hoping someone can help me achieve this; I've tried searching around to no avail. Here is the function that is meant to do the snapping, but instead it creates weird flicker or shows nothing at all - only the 3D models show up:

        void display()
        {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            for (std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I)
            {
                glCallList(*I);
            }

            if (DrawArea == true)
            {
                glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
                cerr << winZ << endl;

                glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
                glGetDoublev(GL_PROJECTION_MATRIX, projection);
                glGetIntegerv(GL_VIEWPORT, viewport);
                gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);

                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h,
                             0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels);
                glEnable(GL_TEXTURE_2D);
                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);

                glTranslatef(posX, posY, posZ);
                glBegin(GL_QUADS);
                    glTexCoord2f(0.0, 0.0); glVertex3f(0.5, 0.5, 0);
                    glTexCoord2f(1.0, 0.0); glVertex3f(0, 0.5, 0);
                    glTexCoord2f(1.0, 1.0); glVertex3f(0, 0, 0);
                    glTexCoord2f(0.0, 1.0); glVertex3f(0.5, 0, 0);
                glEnd();
            }

            SwapBuffers(hDC);
        }

    I'm using OpenGL 3.0, the WIN32 API, C++ and GLSL. If you really want the full source, here it is - http://pastebin.com/1Ncm9HNf - it's pretty messy.

    Read the article

  • X Error of failed request: BadMatch [migrated]

    - by Andrew Grabko
    I'm trying to execute some "hello world" OpenGL code:

        #include <GL/freeglut.h>

        void displayCall()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glEnable(GL_DEPTH_TEST);
            // ... some more code here
            glutSwapBuffers();
        }

        int main(int argc, char *argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(300, 200);
            glutInitContextVersion(4, 2);
            glutInitContextFlags(GLUT_FORWARD_COMPATIBLE);
            glutCreateWindow("Hello World!");
            glutDisplayFunc(displayCall);
            glutMainLoop();
            return 0;
        }

    As a result I get:

        X Error of failed request: BadMatch (invalid parameter attributes)
        Major opcode of failed request: 128 (GLX)
        Minor opcode of failed request: 34 ()
        Serial number of failed request: 39
        Current serial number in output stream: 40

    Here is the stack trace:

        fghCreateNewContext() at freeglut_window.c:737 0x7ffff7bbaa81
        fgOpenWindow() at freeglut_window.c:878 0x7ffff7bbb2fb
        fgCreateWindow() at freeglut_structure.c:106 0x7ffff7bb9d86
        glutCreateWindow() at freeglut_window.c:1,183 0x7ffff7bbb4f2
        main() at AlphaTest.cpp:51 0x4007df

    Here is the last piece of code, after which the program crashes:

        createContextAttribs = (CreateContextAttribsProc)
            fghGetProcAddress("glXCreateContextAttribsARB");
        if (createContextAttribs == NULL)
        {
            fgError("glXCreateContextAttribsARB not found");
        }
        context = createContextAttribs(dpy, config, share_list, direct, attributes);

    The "glXCreateContextAttribsARB" address is obtained successfully, but the program crashes on its invocation. If I specify an OpenGL version lower than 4.2 in glutInitContextVersion(), the program runs without errors. Here is the OpenGL version from glxinfo:

        OpenGL version string: 4.2.0 NVIDIA 285.05.09

    I would appreciate any further ideas.

    Read the article

  • How to perform simple collision detection?

    - by Rob
    Imagine two squares sitting side by side, both level with the ground like so: A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if all of the following are false:

    The right square's left side is to the right of the left square's right side.
    The right square's right side is to the left of the left square's left side.
    The right square's bottom side is above the left square's top side.
    The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. But consider a case like this, where one square is at a 45 degree angle: Is there an equally simple way to determine if those squares are touching?
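    Yes, and it is a small step from the test above: those side-by-side comparisons are the separating axis theorem (SAT) specialised to axis-aligned boxes. For rotated squares the same interval test works; you just use each square's edge normals as the candidate axes instead of the fixed x and y axes. A sketch of that idea (C#; the corner-array representation is my assumption):

        // SAT overlap test for two convex polygons given as corner lists.
        // Requires: using System.Linq; Vector2 from XNA or an equivalent.
        static bool Overlaps(Vector2[] a, Vector2[] b)
        {
            foreach (Vector2 axis in EdgeNormals(a).Concat(EdgeNormals(b)))
            {
                Project(a, axis, out float minA, out float maxA);
                Project(b, axis, out float minB, out float maxB);
                if (maxA < minB || maxB < minA)
                    return false;              // a separating axis exists: not touching
            }
            return true;                       // no separating axis found: touching
        }

        static IEnumerable<Vector2> EdgeNormals(Vector2[] pts)
        {
            for (int i = 0; i < pts.Length; i++)
            {
                Vector2 e = pts[(i + 1) % pts.Length] - pts[i];  // edge vector
                yield return new Vector2(-e.Y, e.X);             // its perpendicular
            }
        }

        static void Project(Vector2[] pts, Vector2 axis, out float min, out float max)
        {
            min = max = Vector2.Dot(pts[0], axis);
            for (int i = 1; i < pts.Length; i++)
            {
                float p = Vector2.Dot(pts[i], axis);
                if (p < min) min = p;
                else if (p > max) max = p;
            }
        }

    For two rectangles that is at most four distinct axes, so it stays cheap.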

    Read the article

  • Depth is disabled - How to turn on?

    - by marc wellman
    In XNA 3.1, is there any other way depth could be getting disabled in a 3D world using DirectX models than GraphicsDevice.RenderState.DepthBufferEnable = false;? The reason for my question: I have quite a large program which presents a 3D world with a couple of 3D DirectX models inside. Depth was never an issue - it always worked fine - but for a few days, after making some modifications, my models have all been depth-translucent, i.e. depth buffering and/or culling seems to be disabled. Yet nowhere in my source code do I touch any of the options related to depth or culling; I never turn these settings on explicitly, nor turn them off somewhere. So I am searching for some other statement, maybe related to the GraphicsDevice, that implicitly turns depth off - but I can't find it. (Sorry that I don't post any source code, but I have too much of it and I simply don't know where to search.) UPDATE: These are a couple of simple objects seen with correct depth. These are the same objects in their current state.
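    One classic XNA 3.1 culprit worth ruling out (an assumption, since the source isn't shown): SpriteBatch.Begin/End silently changes several render states, including DepthBufferEnable, so adding any 2D overlay drawn before the 3D pass can disable depth without a single explicit statement in your code. A sketch of restoring the states each frame:

        // After 2D drawing, put the 3D states back before rendering models
        // (XNA 3.1 render-state API):
        spriteBatch.End();
        GraphicsDevice.RenderState.DepthBufferEnable = true;
        GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
        GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;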

    Read the article

  • Rotation of viewplatform in Java3D

    - by user29163
    I have just started with Java3D programming. I thought I had built up some basic intuition about how the scene graph works, but something that should work does not. I made a simple program that rotates a pyramid around the y-axis, just by adding a RotationInterpolator to the TransformGroup above the pyramid. Then I thought: can I now remove the RotationInterpolator from this TransformGroup and instead add it to the TransformGroup above my ViewPlatform leaf? This should work if I have understood how things work: adding the RotationInterpolator to that TransformGroup should make its children rotate, and the ViewingPlatform is a child of the TransformGroup. Any ideas on where my reasoning is flawed? Here is the code for setting up the universe and the view branch group:

        import java.awt.*;
        import java.awt.event.*;
        import javax.media.j3d.*;
        import javax.vecmath.*;

        public class UniverseBuilder {

            // User-specified canvas
            Canvas3D canvas;

            // Scene graph elements to which the user may want access
            VirtualUniverse universe;
            Locale locale;
            TransformGroup vpTrans;
            View view;

            public UniverseBuilder(Canvas3D c) {
                this.canvas = c;

                // Establish a virtual universe that has a single hi-res Locale
                universe = new VirtualUniverse();
                locale = new Locale(universe);

                // Create a PhysicalBody and PhysicalEnvironment object
                PhysicalBody body = new PhysicalBody();
                PhysicalEnvironment environment = new PhysicalEnvironment();

                // Create a View and attach the Canvas3D and the physical
                // body and environment to the view.
                view = new View();
                view.addCanvas3D(c);
                view.setPhysicalBody(body);
                view.setPhysicalEnvironment(environment);

                // Create a BranchGroup node for the view platform
                BranchGroup vpRoot = new BranchGroup();

                // Create a ViewPlatform object, and its associated
                // TransformGroup object, and attach it to the root of the
                // subgraph. Attach the view to the view platform.
                Transform3D t = new Transform3D();
                Transform3D s = new Transform3D();
                t.set(new Vector3f(0.0f, 0.0f, 10.0f));
                t.rotX(-Math.PI / 4);
                s.set(new Vector3f(0.0f, 0.0f, 10.0f)); // change these values to alter the viewing position
                t.mul(s);

                ViewPlatform vp = new ViewPlatform();
                vpTrans = new TransformGroup(t);
                vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

                // Rotator stuff
                Transform3D yAxis = new Transform3D();
                // yAxis.rotY(Math.PI/2);
                Alpha rotationAlpha = new Alpha(
                        -1, Alpha.INCREASING_ENABLE,
                        0, 0, 4000, 0, 0, 0, 0, 0);
                RotationInterpolator rotator = new RotationInterpolator(
                        rotationAlpha, vpTrans, yAxis, 0.0f, (float) Math.PI * 2.0f);
                RotationInterpolator rotator2 = new RotationInterpolator(
                        rotationAlpha, vpTrans);
                BoundingSphere bounds = new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 1000.0);
                rotator.setSchedulingBounds(bounds);
                vpTrans.addChild(rotator);
                vpTrans.addChild(vp);
                vpRoot.addChild(vpTrans);

                view.attachViewPlatform(vp);

                // Attach the branch graph to the universe, via the
                // Locale. The scene graph is now live!
                locale.addBranchGraph(vpRoot);
            }

            public void addBranchGraph(BranchGroup bg) {
                locale.addBranchGraph(bg);
            }
        }

    Read the article

  • DX10 sprite and pixel shader

    - by Alex Farber
    I am using ID3DX10Sprite to draw a 2D image on the screen. The 3D scene contains only one textured sprite, placed over the whole window area. The render method looks like this:

        m_pDevice->ClearRenderTargetView(...);
        m_pSprite->Begin(D3DX10_SPRITE_SORT_TEXTURE);
        m_pSprite->DrawSpritesImmediate(&m_SpriteDefinition, 1, 0, 0);
        m_pSprite->End();

    Now I want to make some transformations of the sprite texture in a shader. Currently the program works without a shader. How is it possible to add a pixel shader to a program with this structure? Inside the shader, I need to set all colors equal to red and multiply the pixel values by some coefficient. Something like this:

        float4 TexturePixelShader(PixelInputType input) : SV_Target
        {
            float4 textureColor;
            textureColor = shaderTexture.Sample(SampleType, input.tex);
            textureColor.x = textureColor.x * coefficient;
            textureColor.y = textureColor.x;
            textureColor.z = textureColor.x;
            return textureColor;
        }

    Read the article

  • Ease Rotate RigidBody2D toward arbitrary angle

    - by Plastic Sturgeon
    I'm trying to make a Rigidbody2D circle return to an orientation after a collision, but there is a behavior I don't expect: it always orients to the same direction. This is what I call in FixedUpdate():

        rotationdifference = -halfPI + rigidbody2D.rotation;
        rigidbody2D.AddTorque(rotationdifference * ease);

    I would expect this to rotate the body 90 degrees (1/2 pi radians) off the neutral axis, but it does not. In fact it behaves exactly the same as:

        rotationdifference = rigidbody2D.rotation;
        rigidbody2D.AddTorque(rotationdifference * ease);

    What is going on? How would I be able to set an angle I want it to ease towards, and then have it ease towards that angle when it's not colliding with some other force?
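    One possible explanation to check (an assumption from the Unity API, not from the code shown): Rigidbody2D.rotation is measured in degrees, so mixing in a half-pi radians constant shifts the value by under two degrees, which would make both snippets behave nearly identically. A sketch that stays in degrees and eases toward an arbitrary target:

        // Spring-damper easing toward targetAngle, in degrees.
        // 'ease' and 'damping' are tuning constants of my choosing.
        float targetAngle = 90f;
        float diff = Mathf.DeltaAngle(rigidbody2D.rotation, targetAngle); // shortest signed difference
        rigidbody2D.AddTorque(diff * ease - rigidbody2D.angularVelocity * damping);

    The angularVelocity term keeps the body from oscillating forever around the target instead of settling.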

    Read the article

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I were to use multiple images in a single texture for a 3D cube, how would I go about reusing each vertex (having 8 total vs. 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help on that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy; the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version above has me quite befuddled.
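    For what it's worth, with per-face atlas regions the 8-vertex cube generally cannot be kept: each corner is shared by three faces that need three different UV pairs, and a vertex carries only one, so the usual layout is 24 vertices (4 per face) indexed by 36 indices. A sketch of one face (C#/XNA types; the atlas rectangle (u0,v0)-(u1,v1) is hypothetical):

        // One cube face as 4 unique vertices; the atlas sub-rectangle
        // (u0,v0)-(u1,v1) is whatever region holds this face's image.
        var face = new VertexPositionTexture[]
        {
            new VertexPositionTexture(new Vector3(-1,  1, 1), new Vector2(u0, v0)),
            new VertexPositionTexture(new Vector3( 1,  1, 1), new Vector2(u1, v0)),
            new VertexPositionTexture(new Vector3( 1, -1, 1), new Vector2(u1, v1)),
            new VertexPositionTexture(new Vector3(-1, -1, 1), new Vector2(u0, v1)),
        };
        short[] faceIndices = { 0, 1, 2, 0, 2, 3 };   // two triangles; repeat per face with an offset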

    Read the article

  • What is the difference between Constant Vertex Attributes and Uniforms?

    - by Samaursa
    According to the OpenGL ES 2.0 Programming Guide: "A constant vertex attribute is the same for all vertices of a primitive, and therefore only one value needs to be specified for all the vertices of a primitive." For uniforms the book states: "...any parameter to a shader that is constant across either all vertices or fragments (but that is not known at compile time) should be passed in as a uniform." I've always used uniforms for data that is constant for a primitive, but now it appears that attributes can also be used in the same way. Is there more to constant vertex attributes than simply 'they are the same as uniforms'?

    Read the article

  • Java : 2D Collision Detection

    - by neko
    I've been working on 2D rectangle collision for weeks and still cannot get this problem fixed. The problem I'm having is how to adjust a player to obstacles when it collides. I'm referencing this link. The player sometimes does not get adjusted to obstacles, and it sometimes gets stuck in the obstacle guy after colliding. Here, the player and the obstacle both inherit from the superclass Sprite. I can detect a collision between the two rectangles, and the collision point, with:

        public Point getSpriteCollision(Sprite sprite, double newX, double newY) {
            // set each rectangle
            Rectangle spriteRectA = new Rectangle(
                    (int) getPosX(), (int) getPosY(), getWidth(), getHeight());
            Rectangle spriteRectB = new Rectangle(
                    (int) sprite.getPosX(), (int) sprite.getPosY(),
                    sprite.getWidth(), sprite.getHeight());

            // if a sprite is colliding with the other sprite
            if (spriteRectA.intersects(spriteRectB)) {
                System.out.println("Colliding");
                return new Point((int) getPosX(), (int) getPosY());
            }
            return null;
        }

    and adjust sprites after a collision with:

        // Update the sprite's conditions
        public void update() {
            // only the player is moving, for simplicity
            // collision detection on the x-axis (just x-axis collision detection at this moment)
            double newX = x + vx; // calculate the x-coordinate of the sprite move
            Point sprite = getSpriteCollision(map.getSprite().get(1), newX, y); // collision coordinates (x, y)

            if (sprite == null) { // if the player is not colliding with the obstacle guy
                x = newX; // move
            } else { // if collided
                if (vx > 0) { // if the player was moving from left to right
                    x = (sprite.x - vx); // this works but is a bit strange
                } else if (vx < 0) {
                    x = (sprite.x + vx); // there's something wrong with this too
                }
            }
            vx = 0;
            y += vy;
            vy = 0;
        }

    I think there is something wrong in update() but I cannot fix it. Right now I only have a collision between the player and one obstacle guy, but in the future I'm planning to have more of them and to make them all collide with each other. What would be a good way to do it? Thanks in advance.
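    A common fix for the stuck-in-obstacle symptom is to resolve using the overlap itself instead of the velocity: when moving right, clamp the player's right edge to the obstacle's left edge, and vice versa. A sketch (written in C# like the other examples on this page; it maps one-to-one onto java.awt.Rectangle, and all names are illustrative):

        // Axis-separated AABB resolution for the x-axis.
        float newX = x + vx;
        var player   = new Rectangle((int)newX, (int)y, width, height);
        var obstacle = new Rectangle(ox, oy, ow, oh);      // the other sprite's box
        if (player.Intersects(obstacle))
        {
            if (vx > 0)      newX = obstacle.Left - width; // flush against its left side
            else if (vx < 0) newX = obstacle.Right;        // flush against its right side
            vx = 0;
        }
        x = newX;
        // ...then repeat the same idea for the y-axis with vy before applying it.

    Resolving each axis separately, against every obstacle in turn, also scales naturally to more than one obstacle later.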

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I've had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = - (zFar + zNear) / (zFar - zNear);
            Result[2][3] = - valType(1);
            Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There doesn't seem to be any error in the code. So I tried to invert the projection. The formulas for the z and w coordinates after projection are:

        z' = -((zFar + zNear) / (zFar - zNear)) * z - (2 * zFar * zNear) / (zFar - zNear)
        w' = -z

    and dividing z' by w' gives the post-projective depth (which lies in the depth buffer), so I need to solve for z, which finally gives:

        z = (2 * zFar * zNear) / (d * (zFar - zNear) - (zFar + zNear)),  where d = z'/w'

    Now, the problem is I don't get the correct position (I have compared the one reconstructed with a rendered position). I then tried using the respective formula I get by doing the same for this matrix. For some reason, using that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?
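    As a sanity check, here is that inversion as a tiny code sketch (my own derivation from the GLM matrix above; note the depth-buffer value must first be remapped from [0,1] to [-1,1] NDC, and the result is eye-space z, which is negative in front of the camera in OpenGL):

        // Reconstruct eye-space z from a sampled depth value (OpenGL conventions).
        static float EyeZ(float depthBufferValue, float zNear, float zFar)
        {
            float zNdc = 2f * depthBufferValue - 1f;            // [0,1] -> [-1,1]
            return (2f * zFar * zNear) /
                   (zNdc * (zFar - zNear) - (zFar + zNear));    // -zNear at d = 0, -zFar at d = 1
        }

    Forgetting the [0,1]-to-[-1,1] remap, or feeding the recovered z into the x/y unprojection with the wrong sign, are the two most common ways this derivation silently breaks.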

    Read the article

  • Trouble with SAT style vector projection in C#/XNA

    - by ssb
    Simply put, I'm having a hard time working out how to work with XNA's Vector2 types while keeping spatial considerations in mind. I'm working with the separating axis theorem and trying to project vectors onto an arbitrary axis to check whether the projections overlap, but between the severe lack of XNA-specific help online and pseudocode everywhere that omits key parts of the algorithm, googling has left me little help. I'm aware of HOW to project a vector, but the way I know of doing it involves two vectors starting from the same point - particularly here: http://www.metanetsoftware.com/technique/tutorialA.html. So let's say I have a simple rectangle and I store each of its corners in a list of Vector2s. How would I go about projecting that onto an arbitrary axis? The crux of my problem is that taking the dot product of, say, a Vector2 of (1, 0) and a Vector2 of (50, 50) won't get me the dot product I'm looking for... or will it? Because that (50, 50) won't be the vector of the polygon's vertex but whatever XNA calculates. It's getting the calculation from the right starting point that's throwing me off. I'm sorry if this is unclear, but my brain is fried from trying to think about this. I need a better understanding of how XNA treats Vector2s as actual vectors and not just as random points.
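    It may help to see that the dot product already is the projection: Dot((50, 50), (1, 0)) = 50 is exactly that corner's signed distance along the x-axis, treating the position as a vector from the origin. The "starting point" cancels out because both polygons' corners are projected onto the same axis line and only the resulting intervals are compared. A sketch (C#; 'corners' stands in for your list of Vector2s):

        // Project every corner onto a unit-length axis and keep the interval.
        Vector2 axis = Vector2.Normalize(new Vector2(1f, 0f)); // axis must be unit length
        float min = float.MaxValue, max = float.MinValue;
        foreach (Vector2 corner in corners)          // corner positions in world space
        {
            float s = Vector2.Dot(corner, axis);     // signed distance along the axis
            min = Math.Min(min, s);
            max = Math.Max(max, s);
        }
        // Repeat for the other polygon; the shapes overlap on this axis
        // iff maxA >= minB && maxB >= minA.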

    Read the article

  • XNA 4.0 Refresh AudioEngine, WaveBank and Others Not Found

    - by Peteyslatts
    I'm going through the Learning XNA 4.0 book, and unfortunately I installed XNA 4.0 Refresh. All the code up until now has worked, with the exception of my needing to remove the Framework.Net and Framework.Storage references. (As a side question, will this be problematic later?) The problem I'm having now is that in my Game1.cs file I have imported all of the Xna.Framework libraries, and when I try to create instances of any of the following classes, an error pops up saying Visual Studio can't find them: AudioEngine, WaveBank, SoundBank, and Cue. I have googled around for a while, and the only solution I saw was to import Microsoft.Xna.Framework.Xact, but that doesn't seem to exist for me. Any help is much appreciated. Thanks, Peter.

    Read the article

  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy, and in the linked video he explains that, for example, for the world's basement and walls we need to create the actual rendered walls (shown to the player), then duplicate them, place the duplicates at the same coordinates as the rendered walls, and mark them as collision (by setting their material to collision). He then defines, in the object-loader function, that objects with material == collision are collision planes which should not be rendered but only used to check collisions. Now I'm pretty confused. Why would we add this kind of complexity to a problem that could easily be solved by a simple loadObject(string plane_object, bool check_collision);:

    Create only the wall objects (by loading the .obj file in plane_object)
    Define them also as collision planes whenever check_collision is set to true

    In this case we have lowered the complexity of his method and made it more flexible and faster to develop with (faster because we don't always have to make a copy of each plane, and flexible because we don't hardcode the object loader). The only case in which this method would not work is when we need hidden collision planes, and for that we could modify the loadObject() function like this:

        loadObject(string plane_object, bool check_collision = true, bool hide_object = false);

    Create only the wall objects (by loading the .obj file in plane_object)
    Define them also as collision planes whenever check_collision is set to true
    Add the ability to actually show the object or hide it based on hide_object

    The final question is: am I right? What possible problems would my solution have compared with his?

    Read the article

  • Permanently Sync a wiimote with a computer

    - by Adam Geisweit
    I have tried many ways to sync my Wiimotes with my computer so that I can program games with them, but every time it only syncs them temporarily; even when a tool says it can permanently sync a Wiimote, it doesn't actually do it. It gets tiresome having to reconnect the Wiimote every time I turn it off to save battery life. How can I sync a Wiimote with my computer so that if I turn it off, I can just hit any button and it will automatically sync up again?

    Read the article

  • Calculate the intersection depth between a rectangle and a right triangle

    - by Celarix
    Hi all. I'm working on a 2D platformer built in C#/XNA, and I'm having a lot of problems calculating the intersection depth between a standard rectangle (used for sprites) and a right triangle (used for sloping tiles). Ideally, the rectangle will collide with the solid edges of the triangle, and its bottom-center point will collide with the sloped edge. I've been fighting with this for a couple of days now, and I can't make sense of it. So far, the method detects intersections (somewhat), but it reports wildly wrong depths. How does one properly calculate the depth? Is there something I'm missing? Thanks!
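    For the sloped edge specifically, a common way to get a stable depth is to evaluate the slope's surface height under the sprite's bottom-center point and measure penetration along Y only. A sketch under that rule (C#/XNA; the tile fields here are my own naming, not from the code being asked about):

        // leftY / rightY: slope surface heights at the tile's left and right edges.
        float t = MathHelper.Clamp((bottomCenterX - tile.Left) / tileWidth, 0f, 1f);
        float surfaceY = MathHelper.Lerp(leftY, rightY, t); // slope height under the sprite
        float depthY = rectBottom - surfaceY;               // > 0 means penetrating the slope
        if (depthY > 0f)
            spritePosition.Y -= depthY;                     // resolve straight up, never sideways

    Treating the hypotenuse with a general rectangle-vs-edge depth tends to produce the wild values you're seeing, because the minimum-translation direction flips as the sprite slides along the slope; resolving vertically against the surface height avoids that.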

    Read the article

  • Learning OpenGL GLSL - VAO buffer problems?

    - by Bleary
    I've just started digging through OpenGL and GLSL, and now I've stumbled on something I can't get my head around! I've stepped back to loading a simple cube and using a simple shader on it, but the result is triangles drawn incorrectly and/or missing. The code worked perfectly on meshes before I attempted to move to using VAOs, so none of the code for storing the vertices and indices has changed. http://i.stack.imgur.com/RxxZ5.jpg http://i.stack.imgur.com/zSU50.jpg What I have for creating the VAO and buffers is this:

        // Create the vertex array object
        glGenVertexArrays(1, &vaoID);

        // Finally create our vertex buffer objects
        glGenBuffers(VBO_COUNT, mVBONames);
        glBindVertexArray(vaoID);

        // Save vertex attributes into GPU
        glBindBuffer(GL_ARRAY_BUFFER, mVBONames[VERTEX_VBO]);
        // Copy data into the buffer object
        glBufferData(GL_ARRAY_BUFFER, lPolygonVertexCount * VERTEX_STRIDE * sizeof(GLfloat),
                     lVertices, GL_STATIC_DRAW);
        glEnableVertexAttribArray(pos);
        glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE * sizeof(GLfloat), 0);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBONames[INDEX_VBO]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, lPolygonCount * sizeof(unsigned int),
                     lIndices, GL_STATIC_DRAW);
        glBindVertexArray(0);

    And the code for drawing the mesh:

        glBindVertexArray(vaoID);
        glUseProgram(shader->programID);
        GLsizei lOffset = mSubMeshes[pMaterialIndex]->IndexOffset * sizeof(unsigned int);
        const GLsizei lElementCount = mSubMeshes[pMaterialIndex]->TriangleCount * TRIAGNLE_VERTEX_COUNT;
        glDrawElements(GL_TRIANGLES, lElementCount, GL_UNSIGNED_SHORT,
                       reinterpret_cast<const GLvoid*>(lOffset));
        // All the points are indeed in the correct place!?
        //glPointSize(10.0f);
        //glDrawElements(GL_POINTS, lElementCount, GL_UNSIGNED_SHORT, 0);
        glUseProgram(0);
        glBindVertexArray(0);

    My eyes have become bleary looking at this today, so any thoughts or a fresh set of eyes would be greatly appreciated.

    Read the article

  • I am going to start learning to develop games, and have a very important question

    - by levi s.
    So I am going to start learning to develop games soon, and I have already learned the basics of Java. Before I really go balls out: am I making a bad choice of language? Should I stop now and move to C++ or C#? Will that hinder me? Will Java hinder me worse? I'm kind of having regrets about saying "oh hey, Minecraft was made in Java, it must be best!" I'm mainly asking: what should I do?

    Read the article

  • Continuous Collision Detection Techniques

    - by Griffin
    I know there are quite a few continuous collision detection algorithms out there, but I can't find a list or summary of different 2D techniques - only tutorials on specific algorithms. What techniques are out there for calculating when different 2D bodies will collide, and what are the advantages and disadvantages of each? I say techniques and not algorithms because I have not yet decided how I will store different polygons, which might be concave or even have holes. I plan to make that decision based on what the algorithm requires (for instance, if an algorithm breaks a polygon down into triangles or convex shapes, I will simply store the polygon data in that form).

    Read the article

  • Java Animation Memory Overload [on hold]

    - by user2425429
    I need a way to reduce the memory usage of these programs while keeping the functionality. Every time I add 50 milliseconds or so to the set & display loop in AnimationTest1, it throws an out-of-memory error. Here is the code I have now:

        import java.awt.DisplayMode;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.Polygon;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Executor;
        import java.util.concurrent.Executors;
        import javax.swing.ImageIcon;

        public class AnimationTest1 {

            public static void main(String args[]) {
                AnimationTest1 test = new AnimationTest1();
                test.run();
            }

            private static final DisplayMode POSSIBLE_MODES[] = {
                new DisplayMode(800, 600, 32, 0),
                new DisplayMode(800, 600, 24, 0),
                new DisplayMode(800, 600, 16, 0),
                new DisplayMode(640, 480, 32, 0),
                new DisplayMode(640, 480, 24, 0),
                new DisplayMode(640, 480, 16, 0)
            };

            private static final long DEMO_TIME = 4000;
            private ScreenManager screen;
            private Image bgImage;
            private Animation anim;

            public void loadImages() {
                // create animation
                List<Polygon> polygons = new ArrayList();
                int[] x = new int[]{20, 4, 4, 20, 40, 56, 56, 40};
                int[] y = new int[]{20, 32, 40, 44, 44, 40, 32, 20};
                polygons.add(new Polygon(x, y, 8));
                anim = new Animation(); // # of frames

                long startTime = System.currentTimeMillis();
                long currTimer = startTime;
                long elapsedTime = 0;
                boolean animated = false;
                Graphics2D g = screen.getGraphics();
                int width = 200;
                int height = 200;

                // set & display loop
                while (currTimer - startTime < DEMO_TIME * 2) {
                    // draw the polygons
                    if (!animated) {
                        for (int j = 0; j < polygons.size(); j++) {
                            for (int pos = 0; pos < polygons.get(j).npoints; pos++) {
                                polygons.get(j).xpoints[pos] += 1;
                            }
                        }
                        anim.setNewPolyFrame(polygons, width, height, 64);
                    } else {
                        // update animation
                        anim.update(elapsedTime);
                        draw(g);
                        g.dispose();
                        screen.update();
                        try {
                            Thread.sleep(20);
                        } catch (InterruptedException ie) {}
                    }
                    if (currTimer - startTime == DEMO_TIME)
                        animated = true;
                    elapsedTime = System.currentTimeMillis() - currTimer;
                    currTimer += elapsedTime;
                }
            }

            public void run() {
                screen = new ScreenManager();
                try {
                    DisplayMode displayMode = screen.findFirstCompatibleMode(POSSIBLE_MODES);
                    screen.setFullScreen(displayMode);
                    loadImages();
                } finally {
                    screen.restoreScreen();
                }
            }

            public void draw(Graphics g) {
                // draw background
                g.drawImage(bgImage, 0, 0, null);
                // draw image
                g.drawImage(anim.getImage(), 0, 0, null);
            }
        }

    ScreenManager:

        import java.awt.Color;
        import java.awt.DisplayMode;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.GraphicsConfiguration;
        import java.awt.GraphicsDevice;
        import java.awt.GraphicsEnvironment;
        import java.awt.Toolkit;
        import java.awt.Window;
        import java.awt.event.KeyListener;
        import java.awt.event.MouseListener;
        import java.awt.image.BufferStrategy;
        import java.awt.image.BufferedImage;
        import javax.swing.JFrame;
        import javax.swing.JPanel;

        public class ScreenManager extends JPanel {

            private GraphicsDevice device;

            /** Creates a new ScreenManager object. */
            public ScreenManager() {
                GraphicsEnvironment environment = GraphicsEnvironment.getLocalGraphicsEnvironment();
                device = environment.getDefaultScreenDevice();
                setBackground(Color.white);
            }

            /** Returns a list of compatible display modes for the default device on the system. */
            public DisplayMode[] getCompatibleDisplayModes() {
                return device.getDisplayModes();
            }

            /** Returns the first compatible mode in a list of modes.
                Returns null if no modes are compatible. */
            public DisplayMode findFirstCompatibleMode(DisplayMode modes[]) {
                DisplayMode goodModes[] = device.getDisplayModes();
                for (int i = 0; i < modes.length; i++) {
                    for (int j = 0; j < goodModes.length; j++) {
                        if (displayModesMatch(modes[i], goodModes[j])) {
                            return modes[i];
                        }
                    }
                }
                return null;
            }

            /** Returns the current display mode. */
            public DisplayMode getCurrentDisplayMode() {
                return device.getDisplayMode();
            }

            /** Determines if two display modes "match". Two display modes match if they have
                the same resolution, bit depth, and refresh rate. The bit depth is ignored if
                one of the modes has a bit depth of DisplayMode.BIT_DEPTH_MULTI. Likewise, the
                refresh rate is ignored if one of the modes has a refresh rate of
                DisplayMode.REFRESH_RATE_UNKNOWN. */
            public boolean displayModesMatch(DisplayMode mode1, DisplayMode mode2) {
                if (mode1.getWidth() != mode2.getWidth()
                        || mode1.getHeight() != mode2.getHeight()) {
                    return false;
                }
                if (mode1.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI
                        && mode2.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI
                        && mode1.getBitDepth() != mode2.getBitDepth()) {
                    return false;
                }
                if (mode1.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN
                        && mode2.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN
                        && mode1.getRefreshRate() != mode2.getRefreshRate()) {
                    return false;
                }
                return true;
            }

            /** Enters full screen mode and changes the display mode. If the specified display
                mode is null or not compatible with this device, or if the display mode cannot
                be changed on this system, the current display mode is used. The display uses a
                BufferStrategy with 2 buffers. */
            public void setFullScreen(DisplayMode displayMode) {
                JFrame frame = new JFrame();
                frame.setUndecorated(true);
                frame.setIgnoreRepaint(true);
                frame.setResizable(true);
                device.setFullScreenWindow(frame);
                if (displayMode != null && device.isDisplayChangeSupported()) {
                    try {
                        device.setDisplayMode(displayMode);
                    } catch (IllegalArgumentException ex) {
                    }
                }
                frame.createBufferStrategy(2);
                Graphics g = frame.getGraphics();
                g.setColor(Color.white);
                g.drawRect(0, 0, frame.WIDTH, frame.HEIGHT);
                frame.paintAll(g);
                g.setColor(Color.black);
                g.dispose();
            }

            /** Gets the graphics context for the display. The ScreenManager uses double
                buffering, so applications must call update() to show any graphics drawn.
                The application must dispose of the graphics object. */
            public Graphics2D getGraphics() {
                Window window = device.getFullScreenWindow();
                if (window != null) {
                    BufferStrategy strategy = window.getBufferStrategy();
                    return (Graphics2D) strategy.getDrawGraphics();
                } else {
                    return null;
                }
            }

            /** Updates the display. */
            public void update() {
                Window window = device.getFullScreenWindow();
                if (window != null) {
                    BufferStrategy strategy = window.getBufferStrategy();
                    if (!strategy.contentsLost()) {
                        strategy.show();
                    }
                }
                // Sync the display on some systems.
                // (on Linux, this fixes event queue problems)
                Toolkit.getDefaultToolkit().sync();
            }

            /** Returns the window currently used in full screen mode.
                Returns null if the device is not in full screen mode. */
            public Window getFullScreenWindow() {
                return device.getFullScreenWindow();
            }

            /** Returns the width of the window currently used in full screen mode.
                Returns 0 if the device is not in full screen mode. */
            public int getWidth() {
                Window window = device.getFullScreenWindow();
                if (window != null) {
                    return window.getWidth();
                } else {
                    return 0;
                }
            }

            /** Returns the height of the window currently used in full screen mode.
                Returns 0 if the device is not in full screen mode. */
            public int getHeight() {
                Window window = device.getFullScreenWindow();
                if (window != null) {
                    return window.getHeight();
                } else {
                    return 0;
                }
            }

            /** Restores the screen's display mode. */
            public void restoreScreen() {
                Window window = device.getFullScreenWindow();
                if (window != null) {
                    window.dispose();
                }
                device.setFullScreenWindow(null);
            }

            /** Creates an image compatible with the current display. */
            public BufferedImage createCompatibleImage(int w, int h, int transparency) {
                Window window = device.getFullScreenWindow();
                if (window != null) {
                    GraphicsConfiguration gc = window.getGraphicsConfiguration();
                    return gc.createCompatibleImage(w, h, transparency);
                }
                return null;
            }
        }

    Animation:

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.Polygon;
        import java.awt.image.BufferedImage;
        import java.util.ArrayList;
        import java.util.List;

        /** The Animation class manages a series of images (frames) and the
            amount of time to display each frame. */
        public class Animation {

            private ArrayList frames;
            private int currFrameIndex;
            private long animTime;
            private long totalDuration;

            /** Creates a new, empty Animation. */
            public Animation() {
                frames = new ArrayList();
                totalDuration = 0;
                start();
            }

            /** Adds an image to the animation with the specified duration
                (time to display the image). */
            public synchronized void addFrame(BufferedImage image, long duration) {
                ScreenManager s = new ScreenManager();
                totalDuration += duration;
                frames.add(new AnimFrame(image, totalDuration));
            }

            /** Starts the animation over from the beginning. */
            public synchronized void start() {
                animTime = 0;
                currFrameIndex = 0;
            }

            /** Updates the animation's current image (frame), if necessary. */
            public synchronized void update(long elapsedTime) {
                if (frames.size() >= 1) {
                    animTime += elapsedTime;
                    /*if (animTime >= totalDuration) {
                        animTime = animTime % totalDuration;
                        currFrameIndex = 0;
                    }*/
                    while (animTime > getFrame(0).endTime) {
                        frames.remove(0);
                    }
                }
            }

            /** Gets the Animation's current image. Returns null if this
                animation has no images. */
            public synchronized Image getImage() {
                if (frames.size() > 0 && !(currFrameIndex >= frames.size())) {
                    return getFrame(currFrameIndex).image;
                } else {
                    System.out.println("There are no frames!");
                    System.exit(0);
                }
                return null;
            }

            private AnimFrame getFrame(int i) {
                return (AnimFrame) frames.get(i);
            }

            private class AnimFrame {
                Image image;
                long endTime;

                public AnimFrame(Image image, long endTime) {
                    this.image = image;
                    this.endTime = endTime;
                }
            }

            public void setNewPolyFrame(List<Polygon> polys, int imagewidth, int imageheight, int time) {
                BufferedImage image = new BufferedImage(imagewidth, imageheight, 1);
                Graphics g = image.getGraphics();
                for (int i = 0; i < polys.size(); i++) {
                    g.drawPolygon(polys.get(i));
                }
                addFrame(image, time);
                g.dispose();
            }
        }

    Read the article

  • How to make room reflection using Cubemap

    - by MaT
    I am trying to use a cube map of the inside of a room to create reflections on the walls, ceiling and floor. But when I use the cube map, the reflected image is not correct: the point of view seems to be false. To be correct, I have to use a different cube map for each wall, floor or ceiling, with each cube map rendered from the center of that plane looking into the room. Are there specialized techniques to achieve such an effect? Thanks a lot!
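    The usual name for the technique to look up is "parallax-corrected cubemaps" (also called box-projected cubemaps): keep one cube map captured at the room's center, but before sampling, intersect the reflection ray with the room's bounding box and build the lookup direction from the capture point to that intersection. A geometry-only sketch (C# here just for consistency with the rest of the page; in practice this runs in the fragment shader, and all names are my own):

        // Parallax correction: intersect the reflection ray with the room's AABB,
        // then aim the cubemap lookup from the probe toward that hit point.
        static Vector3 ComputeLookup(Vector3 worldPos, Vector3 reflDir,
                                     Vector3 boxMin, Vector3 boxMax, Vector3 probePos)
        {
            // Slab test: exit distance along reflDir, starting inside the box.
            float tx = IntersectSlab(worldPos.X, reflDir.X, boxMin.X, boxMax.X);
            float ty = IntersectSlab(worldPos.Y, reflDir.Y, boxMin.Y, boxMax.Y);
            float tz = IntersectSlab(worldPos.Z, reflDir.Z, boxMin.Z, boxMax.Z);
            float t = Math.Min(tx, Math.Min(ty, tz));
            Vector3 hit = worldPos + reflDir * t;   // point on a wall, ceiling, or floor
            return hit - probePos;                  // direction to sample the cubemap with
        }

        static float IntersectSlab(float p, float d, float lo, float hi)
        {
            // The ray starts inside the slab, so there is one positive exit distance.
            return d > 0f ? (hi - p) / d : d < 0f ? (lo - p) / d : float.MaxValue;
        }

    This makes a single per-room cube map look correct from any surface point, at the cost of approximating the room as a box.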

    Read the article
