Search Results

Search found 28230 results on 1130 pages for 'embedded development'.

  • Encode two integers into colour values and compare them in a HLSL shader

    - by Ben Slinger
    I am writing a 2D point and click adventure game in Monogame, and I'd like to be able to create an image mask for every room which defines which parts of the background a character can walk behind, and at which Y value a character needs to be at for the background to be drawn above the character. I haven't done any shader work before but after doing some reading I thought the following solution should work: Create a mask for the room with different walk behind areas painted in a colour that defines the baseline Y value (Walk Behind Mask) Render all objects to a RenderTarget2D (Base Texture) Render all objects to a different RenderTarget2D, but changing every pixel of each object to a colour that defines its Y value (Position Mask) Pass these two textures plus the image mask into the shader, and for each pixel compare the colour of the Position Mask to the colour of the image mask (the Walk Behind Mask) - if the Position Mask pixel is larger (thus lower on the screen and closer to the camera) than the Walk Behind Mask, draw the pixel from the Base Texture, otherwise draw a transparent pixel (allowing the background to show through). I've got it mostly working, but I'm having trouble packing and unpacking the Y values into colours and retrieving them correctly in the shader. Here are some code examples of how I'm doing it so far: (When drawing to the Position Mask RenderTarget2D) Color posColor = new Color(((int)Position.Y >> 16) & 255, ((int)Position.Y >> 8) & 255, (int)Position.Y & 255); So as far as I can tell, this should be taking the first 3 bytes of the position integer and encoding them into a 4 byte colour (ignoring the alpha as the 4th byte). This seems to work fine, as when my character is at Y = 600, the resulting Color from this is: {[Color: R=0, G=2, B=88, A=255, PackedValue=4283957760]}. I then have an area in my Walk Behind Mask that I only want the character to be displayed behind if his Y value is lower than 655, so I've painted it with R=0, G=2, B=143, A=255. Now, I think I have the shader OK as well, here's what I have: sampler BaseTexture : register(s0); sampler MaskTexture : register(s1); sampler PositionTexture : register(s2); float4 mask( float2 coords : TEXCOORD0 ) : COLOR0 { float4 color = tex2D(BaseTexture, coords); float4 maskColor = tex2D(MaskTexture, coords); float4 positionColor = tex2D(PositionTexture, coords); float maskCompare = (maskColor.r * pow(2,24)) + (maskColor.g * pow(2,16)) + (maskColor.b * pow(2,8)); float positionCompare = (positionColor.r * pow(2,24)) + (positionColor.g * pow(2,16)) + (positionColor.b * pow(2,8)); return positionCompare < maskCompare ? float4(0,0,0,0) : color; } technique Technique1 { pass NoEffect { PixelShader = compile ps_3_0 mask(); } } This isn't working, however - currently all characters are displayed behind the walk behind area, regardless of their Y value. I tried printing out some debug info by grabbing the pixel from both the Position Mask and the Walk Under Mask under the current mouse position, and it seems like maybe the colours aren't being rendered to the Position Mask correctly? When calculating the colour in that code above I'm getting R=0, G=2, B=88, A=255, but when I mouseover my character I get R=0, G=0, B=30, A=255. Any ideas what I'm doing wrong? It seems like maybe I'm losing some information when rendering to the RenderTarget2D, but I'm not knowledgeable enough to figure out what's happening. Also, I should probably ask, is this an efficient way to do this? Will there be a performance impact?
Edit: Whoops, turns out there was a bug that I'd introduced myself, I was drawing out the Position Mask with the position Color, left over from some early testing I was doing. So this solution is working perfectly, though I'm still interested in whether this is an efficient solution performance wise.
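    For reference, a minimal sketch of keeping the CPU-side packing and a matching decode in one place (hypothetical helper methods, XNA's Color assumed). Note that tex2D returns channels normalized to 0-1, so both compare values in the shader are effectively the packed integer divided by 255; the comparison still works because both sides use the same scale.

    ```csharp
    // Hypothetical helpers: pack a Y value (assumed to fit in 24 bits) into a Color
    // and decode it again, mirroring the bit layout used in the question.
    static Color EncodeY(int y)
    {
        return new Color((y >> 16) & 255, (y >> 8) & 255, y & 255);
    }

    static int DecodeY(Color c)
    {
        // Color.R/G/B are bytes, so this reverses EncodeY exactly.
        return (c.R << 16) | (c.G << 8) | c.B;
    }
    ```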

  • Setting effects variables in XNA

    - by Badescu Alexandru
    Hello! I am currently reading a book named "3D Graphics with XNA Game Studio 4.0" by Sean James and have some questions to ask: If I create an effect parameter named, let's say, SpecularPower and have in my effect a variable named SpecularPower, and I do something like effect.Parameters["SpecularPower"].SetValue(3), will that change the SpecularPower variable in my effect? And a second question, not regarding the book: If I have a spaceship and I've created a "boost" functionality that speeds up my spaceship, what effects should I implement to create the impression of high speed? I was thinking of making everything except my spaceship blurry, but I think there would be something missing. Any ideas? Regards, Alex Badescu
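    Effect parameters are looked up by the HLSL variable name, so that call updates the shader variable once the pass is applied. A hedged sketch of the usual pattern (not from the book; effect, world, view and projection are assumed to exist already), with the overload matched to the parameter's HLSL type:

    ```csharp
    // A float parameter wants a float argument; SetValue(3) binds to the int overload,
    // which may not match a float-typed shader variable.
    effect.Parameters["SpecularPower"].SetValue(3.0f);
    effect.Parameters["WorldViewProjection"].SetValue(world * view * projection); // hypothetical parameter name

    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();   // commits the parameter values to the device before drawing
        // ... issue draw calls here ...
    }
    ```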

  • What is a better abstraction layer for D3D9 and OpenGL vertex data management?

    - by Sam Hocevar
    My rendering code has always been OpenGL. I now need to support a platform that does not have OpenGL, so I have to add an abstraction layer that wraps OpenGL and Direct3D 9. I will support Direct3D 11 later. TL;DR: the differences between OpenGL and Direct3D cause redundancy for the programmer, and the data layout feels flaky. For now, my API works a bit like this. This is how a shader is created: Shader *shader = Shader::Create( " ... GLSL vertex shader ... ", " ... GLSL pixel shader ... ", " ... HLSL vertex shader ... ", " ... HLSL pixel shader ... "); ShaderAttrib a1 = shader->GetAttribLocation("Point", VertexUsage::Position, 0); ShaderAttrib a2 = shader->GetAttribLocation("TexCoord", VertexUsage::TexCoord, 0); ShaderAttrib a3 = shader->GetAttribLocation("Data", VertexUsage::TexCoord, 1); ShaderUniform u1 = shader->GetUniformLocation("WorldMatrix"); ShaderUniform u2 = shader->GetUniformLocation("Zoom"); There is already a problem here: once a Direct3D shader is compiled, there is no way to query an input attribute by its name; apparently only the semantics stay meaningful. This is why GetAttribLocation has these extra arguments, which get hidden in ShaderAttrib. Now this is how I create a vertex declaration and two vertex buffers: VertexDeclaration *decl = VertexDeclaration::Create( VertexStream<vec3,vec2>(VertexUsage::Position, 0, VertexUsage::TexCoord, 0), VertexStream<vec4>(VertexUsage::TexCoord, 1)); VertexBuffer *vb1 = new VertexBuffer(NUM * (sizeof(vec3) + sizeof(vec2)); VertexBuffer *vb2 = new VertexBuffer(NUM * sizeof(vec4)); Another problem: the information VertexUsage::Position, 0 is totally useless to the OpenGL/GLSL backend because it does not care about semantics. Once the vertex buffers have been filled with or pointed at data, this is the rendering code: shader->Bind(); shader->SetUniform(u1, GetWorldMatrix()); shader->SetUniform(u2, blah); decl->Bind(); decl->SetStream(vb1, a1, a2); decl->SetStream(vb2, a3); decl->DrawPrimitives(VertexPrimitive::Triangle, NUM / 3); decl->Unbind(); shader->Unbind(); You see that decl is a bit more than just a D3D-like vertex declaration, it kinda takes care of rendering as well. Does this make sense at all? What would be a cleaner design? Or a good source of inspiration?

  • Derive an algorithm to match the best position

    - by Farooq Arshed
    I have pieces in my game which have stats and cost assigned to them, and they can only be placed at a certain location. Let's say I have 50 pieces. e.g. Piece1 = 100 stats, 10 cost, Position A. Piece2 = 120 stats, 5 cost, Position B. Piece3 = 500 stats, 50 cost, Position C. Piece4 = 200 stats, 25 cost, Position A. and so on. I have a board on which 12 pieces have to be allocated, and their total cost has to remain within the cost assigned to the board. e.g. A board has A,B,C ... J,K,L positions and X cost assigned to it. I have to figure out a way to place the best possible piece in the correct position while remaining within the cost specified by the board. Any help would be appreciated.
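    This looks like a constrained assignment problem (knapsack-style). As a starting point, a minimal greedy sketch under assumed types; it is simple but not guaranteed optimal, and an exact answer would need something like dynamic programming or branch-and-bound over the twelve positions.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    class Piece { public int Stats; public int Cost; public char Position; }

    static List<Piece> FillBoard(List<Piece> pieces, IEnumerable<char> boardPositions, int costLimit)
    {
        var chosen = new List<Piece>();
        int remaining = costLimit;
        foreach (char pos in boardPositions)
        {
            // Highest-stat unused piece that fits this position and the remaining budget.
            Piece best = pieces
                .Where(p => p.Position == pos && p.Cost <= remaining && !chosen.Contains(p))
                .OrderByDescending(p => p.Stats)
                .FirstOrDefault();
            if (best != null)
            {
                chosen.Add(best);
                remaining -= best.Cost;
            }
        }
        return chosen;
    }
    ```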

  • ContentManager in XNA can't find any XML

    - by user36385
    I'm making a game in XNA 4 and this is the first time I'm using the Content loader to initialize a simple class with an XML file, but no matter how many guides I follow, or how simple or complicated my XML file is, the ContentManager can't find the file; the debugger keeps telling me: "A first chance exception of type 'Microsoft.Xna.Framework.Content.ContentLoadException' occurred in Microsoft.Xna.Framework.dll". I'm really confused because I can load SpriteFonts and Texture2D without a problem ... I created the following XML (the most basic XNA XML): <?xml version="1.0" encoding="utf-8" ?> <XnaContent> <Asset Type="System.String">Hello</Asset> </XnaContent> and I try to load it in the LoadContent method in my main class like this: System.String hello = Content.Load<System.String>("NewXmlFile"); Is there something I'm doing wrong? I really appreciate your help
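    For comparison, a sketch of one alternative that skips the content pipeline entirely: ship a plain XML file (not compiled, just copied to the output directory) and deserialize it at runtime with XmlSerializer. The file layout and class below are hypothetical and use ordinary XML rather than the XnaContent wrapper.

    ```csharp
    // Assumes a file Content/Settings.xml copied to output, shaped like:
    //   <Settings><Greeting>Hello</Greeting></Settings>
    using System.Xml.Serialization;
    using Microsoft.Xna.Framework;

    public class Settings
    {
        public string Greeting;
    }

    // somewhere in LoadContent():
    Settings settings;
    using (var stream = TitleContainer.OpenStream("Content/Settings.xml"))
    {
        settings = (Settings)new XmlSerializer(typeof(Settings)).Deserialize(stream);
    }
    ```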

  • Axis-Aligned Bounding Boxes vs Bounding Ellipse

    - by Griffin
    Why is it that most, if not all, collision detection algorithms today require each body to have an AABB for use in the broad phase only? It seems to me like simply placing a circle at the body's centroid, and extending the radius to where the circle encompasses the entire body, would be optimal. This would not need to be updated after the body rotates, and broad overlap calculation would be faster too. Correct? Bonus: Would a bounding ellipse be practical for broad phase calculations also, since it would better represent long, skinny shapes? Or would it require extensive calculations, defeating the purpose of broad-phase?
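    A minimal sketch of the circle-vs-circle broad-phase test described above, using squared distances so no square root is needed (hypothetical struct, XNA-style Vector2 assumed):

    ```csharp
    struct BoundingCircle
    {
        public Vector2 Center;
        public float Radius;

        public bool Overlaps(BoundingCircle other)
        {
            // Overlap when the squared centre distance is at most the squared radius sum.
            float dx = Center.X - other.Center.X;
            float dy = Center.Y - other.Center.Y;
            float r = Radius + other.Radius;
            return dx * dx + dy * dy <= r * r;
        }
    }
    // Unlike an AABB, the circle never needs recomputing when the body merely rotates.
    ```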

  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to do every frame for multiple calls (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check to see if any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles) after first screening for any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency). public static bool LOSCheck(Vector2 pos1, Vector2 pos2) { Vector2 currentPos = pos1; Vector2 perMove = (pos2 - pos1); perMove.Normalize(); HashSet<ClutterObject> clutter = new HashSet<ClutterObject>(); foreach (Room r in map.GetRooms()) { if (r != null) { foreach (ClutterObject c in r.GetClutter()) { if (c != null &&!(c.GetRectangle().X * perMove.X < 0) && !(c.GetRectangle().Y * perMove.Y < 0)) { Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y); if ((cVector - pos1).Length() < 1500) clutter.Add(c); } } } } while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500)) { Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0); foreach (ClutterObject c in clutter) { if (position.Intersects(c.GetRectangle())) return false; } currentPos += perMove; } return true; } I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient way to determine which objects may be in front of the viewer with greater precision than the rather broad 90 degree window I've given myself?
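    One common alternative to stepping zero-size rectangles along the line is a segment-vs-rectangle slab test, which removes the inner while loop entirely; each clutter rectangle is then tested once per query. A rough sketch (hypothetical helper, assuming XNA's Vector2 and Rectangle):

    ```csharp
    // Returns true if the segment p0->p1 crosses the rectangle (2D slab test).
    // Requires using System; and Microsoft.Xna.Framework.
    static bool SegmentIntersectsRect(Vector2 p0, Vector2 p1, Rectangle r)
    {
        Vector2 d = p1 - p0;
        float tMin = 0f, tMax = 1f;

        float[] min   = { r.Left, r.Top };
        float[] max   = { r.Right, r.Bottom };
        float[] start = { p0.X, p0.Y };
        float[] dir   = { d.X, d.Y };

        for (int i = 0; i < 2; i++)   // X slab, then Y slab
        {
            if (Math.Abs(dir[i]) < 1e-6f)
            {
                if (start[i] < min[i] || start[i] > max[i]) return false; // parallel and outside
            }
            else
            {
                float t1 = (min[i] - start[i]) / dir[i];
                float t2 = (max[i] - start[i]) / dir[i];
                if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                tMin = Math.Max(tMin, t1);
                tMax = Math.Min(tMax, t2);
                if (tMin > tMax) return false;  // the slab intervals no longer overlap
            }
        }
        return true;
    }
    ```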

  • Why would GLCapabilities.setHardwareAccelerated(true/false) have no effect on performance?

    - by Luke
    I've got a JOGL application in which I am rendering 1 million textures (all the same texture) and 1 million lines between those textures. Basically it's a ball-and-stick graph. I am storing the vertices in a vertex array on the card and referencing them via index arrays, which are also stored on the card. Each pass through the draw loop I am basically doing this: gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0); gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_LINES, <size>, GL.GL_UNSIGNED_INT, 0); I noticed that the JOGL library is pegging one of my CPU cores. Every frame, the run method internal to the library is taking quite long. I'm not sure why this is happening since I have called setHardwareAccelerated(true) on the GLCapabilities used to create my canvas. What's more interesting is that I changed it to setHardwareAccelerated(false) and there was no impact on the performance at all. Is it possible that my code is not using hardware rendering even when it is set to true? Is there any way to check? EDIT: As suggested, I have tested breaking my calls up into smaller chunks. I have tried using glDrawRangeElements and respecting the limits that it requests. All of these simply resulted in the same pegged CPU usage and worse framerates. I have also narrowed the problem down to a simpler example where I just render 4 million textures (no lines). The draw loop then just doing this: gl.glEnableClientState(GL.GL_VERTEX_ARRAY); gl.glEnableClientState(GL.GL_INDEX_ARRAY); gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT); gl.glMatrixMode(GL.GL_MODELVIEW); gl.glLoadIdentity(); <... Camera and transform related code ...> gl.glEnableVertexAttribArray(0); gl.glEnable(GL.GL_TEXTURE_2D); gl.glAlphaFunc(GL.GL_GREATER, ALPHA_TEST_LIMIT); gl.glEnable(GL.GL_ALPHA_TEST); <... Bind texture ...> gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0); gl.glDisable(GL.GL_TEXTURE_2D); gl.glDisable(GL.GL_ALPHA_TEST); gl.glDisableVertexAttribArray(0); gl.glFlush(); Where the first buffer contains 12 million floats (the x,y,z coords of the 4 million textures) and the second (element) buffer contains 4 million integers. In this simple example it is simply the integers 0 through 3999999. I really want to know what is being done in software that is pegging my CPU, and how I can make it stop (if I can). 
My buffers are generated by the following code: gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBufferData(GL.GL_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_FLOAT, <buffer>, GL.GL_STATIC_DRAW); gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 0, 0); and: gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_INT, <buffer>, GL.GL_STATIC_DRAW); ADDITIONAL INFO: Here is my initialization code: gl.setSwapInterval(1); //Also tried 0 gl.glShadeModel(GL.GL_SMOOTH); gl.glClearDepth(1.0f); gl.glEnable(GL.GL_DEPTH_TEST); gl.glDepthFunc(GL.GL_LESS); gl.glHint(GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_FASTEST); gl.glPointParameterfv(GL.GL_POINT_DISTANCE_ATTENUATION, POINT_DISTANCE_ATTENUATION, 0); gl.glPointParameterfv(GL.GL_POINT_SIZE_MIN, MIN_POINT_SIZE, 0); gl.glPointParameterfv(GL.GL_POINT_SIZE_MAX, MAX_POINT_SIZE, 0); gl.glPointSize(POINT_SIZE); gl.glTexEnvf(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE); gl.glEnable(GL.GL_POINT_SPRITE); gl.glClearColor(clearColor.getX(), clearColor.getY(), clearColor.getZ(), 0.0f); Also, I'm not sure if this helps or not, but when I drag the entire graph off the screen, the FPS shoots back up and the CPU usage falls to 0%. This seems obvious and intuitive to me, but I thought that might give a hint to someone else.

  • Why is permadeath essential to a roguelike design?

    - by Gregory Weir
    Roguelikes and roguelike-likes (Spelunky, The Binding of Isaac) tend to share a number of game design elements: Procedurally generated worlds Character growth by way of new abilities and powers Permanent death I can understand why starting with permadeath as a premise would lead you to the other ideas: if you're going to be starting over a lot, you'll want variety in your experiences. But why do the first two elements imply a permadeath approach?

  • How do I consistently re-size my game window and elements?

    - by Milo
    In my 2D game, I have a flow layout. Inside the flow layout are tables. I have a slider that lets the user make the tables larger or smaller. This makes the background larger or smaller too. Everything should scale proportionally which means the background should stay at the same position when I make things larger, and it almost does. When the scrollbar is at 0, it does exactly this. As the scrollbar gets further down problems arise. I'll toggle the slider maybe 3 times and on the fourth time, the background jumps a little lower on the Y axis. In order to be efficient, I only start rendering the background near the parent of the flow layout. Here it is: void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect ) { int cx, cy, cw, ch; g->getClippingRect(cx,cy,cw,ch); g->setClippingRect(absRect.getX(),absRect.getY(),absRect.getWidth(),absRect.getHeight()); float scale = 0.35f; int w = m_bgSprite->getWidth() * getTableScale() * scale; int h = m_bgSprite->getHeight() * getTableScale() * scale; int numX = ceil(absRect.getWidth() / (float)w) + 2; int numY = ceil(absRect.getHeight() / (float)h) + 2; float offsetX = m_activeTables[0]->getLocation().getX() - w; float offsetY = m_activeTables[0]->getLocation().getY() - h; int startY = childRect.getY(); if(moo) { std::cout << "S=" << startY << ","; } int numAttempts = 0; while(startY + h < absRect.getY() && numAttempts < 1000) { startY += h; if(moo) { std::cout << startY << ","; } numAttempts++; } if(moo) { std::cout << "\n"; moo = false; } g->holdDrawing(); for(int i = 0; i < numX; ++i) { for(int j = 0; j < numY; ++j) { g->drawScaledSprite(m_bgSprite,0,0,m_bgSprite->getWidth(),m_bgSprite->getHeight(), absRect.getX() + (i * w) + (offsetX),absRect.getY() + (j * h) + startY,w,h,0); } } g->unholdDrawing(); g->setClippingRect(cx,cy,cw,ch); } The numeric problem seems to be in the value of startY. I outputted startY figuring out its value: As you can see here, this is me only zooming in, pay attention to the final number before the next s=. You'll notice that, what should happen is, the numbers should be linear, ex: -40, -38, -36, -34, -32, -30, etc. As you'll notice, the start numbers linearly correlate ex: 62k, 64k, 66k, 68k, 70k etc.. but the end result is wrong every third or 4th time. Here is most of the resize code: void LobbyTableManager::setTableScale( float scale ) { scale += 0.3f; scale *= 2.0f; agui::Gui* gotGui = getGui(); float scrollRel = m_vScroll->getRelativeValue(); setScale(scale); rescaleTables(); resizeFlow(); if(gotGui) { gotGui->toggleWidgetLocationChanged(false); } updateScrollBars(); float newVal = scrollRel * m_vScroll->getMaxValue(); if((int)(newVal + 0.5f) > (int)newVal) { newVal++; } m_vScroll->setValue(newVal); static int x = 0; x++; moo = true; //std::cout << m_vScroll->getValue() << std::endl; if(gotGui) { gotGui->toggleWidgetLocationChanged(true); } if(gotGui) { gotGui->_widgetLocationChanged(); } } void LobbyTableManager::valueChanged( agui::VScrollBar* source,int val ) { if(getGui()) { getGui()->toggleWidgetLocationChanged(false); } m_flow->setLocation(0,-val); if(getGui()) { getGui()->toggleWidgetLocationChanged(true); getGui()->_widgetLocationChanged(); } }
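    One hedged suggestion about the numeric drift: the while loop that advances startY can be collapsed into a single closed-form step, so h is never accumulated repeatedly. Sketched in C# to match the other examples here; the same arithmetic applies directly in the original code.

    ```csharp
    // Equivalent of: while (startY + h < absRect.Y) startY += h;
    // computed in one step instead of by repeated addition.
    float startY = childRect.Y;
    if (startY + h < absRect.Y)
    {
        int steps = (int)Math.Ceiling((absRect.Y - h - startY) / h);
        startY += steps * h;
    }
    ```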

  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of occluded objects within walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are on the outside. This leaves a large number of objects that should be culled away through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be batched into render batches, and also a lot of objects that can be rendered with instanced rendering. The way I see it, it can be difficult to combine render batching and culling in an optimal fashion. If you batch too many objects in the same VBO it's difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip the culling on the CPU, a lot of objects will be processed by the GPU while they are not visible. If you skip batching completely in order to more easily cull on the CPU, there will be an unwanted high number of render calls. I have done some research into existing techniques and theories as to how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room or within a radius of n meters. This could be simplified and optimized through use of octrees. Does anyone have any pointers to techniques used for scene management, culling, batching etc. in state-of-the-art modern graphics engines?

  • Having a problem with texturing vertices in WebGL, think parameters are off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program, I have the parameters set as follows: gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage); I'm using this image: On the properties of this image it says it's 32 bit depth, so that should take care of the gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB to see if it's not reading the transparency. It is a 32x32 pixel image, so it's power of 2. And I've tried almost all the combinations of formats and types, but I'm not sure if this is the answer or not. I'm getting these two errors in the chrome console: INVALID_VALUE: texImage2D: invalid image (index):101 WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled. the drawArrays function is simply: "gl.drawArrays(gl.TRIANGLES, 0, 6);" using 6 vertices to make a rectangle.

  • How to fix OpenGL Co-ordinate System in SFML?

    - by Marc Alexander Reed
    My OpenGL setup is somehow configured to work like so: (-1, 1) (0, 1) (1, 1) (-1, 0) (0, 0) (1, 0) (-1, -1) (0, -1) (1, -1) How do I configure it so that it works like so: (0, 0) (SW/2, 0) (SW, 0) (0, SH/2) (SW/2, SH/2) (SW, SH/2) (0, SH) (SW/2, SH) (SW/2, SH) SW as Screen Width. SH as Screen Height. This solution would have to fix the problem of I can't translate significantly(1) on the Z axis. Depth doesn't seem to be working either. The Perspective code I'm using is that of my WORKING GLUT OpenGL code which has a cool 3d grid and camera system etc. But my OpenGL setup doesn't seem to work with SFML. Help me guys. :( Thanks in advance. :) #include <SFML/Window.hpp> #include <SFML/Graphics.hpp> #include <SFML/Audio.hpp> #include <SFML/Network.hpp> #include <SFML/OpenGL.hpp> #include "ResourcePath.hpp" //Mac-only #define _USE_MATH_DEFINES #include <cmath> double screen_width = 640.f; double screen_height = 480.f; int main (int argc, const char **argv) { sf::ContextSettings settings; settings.depthBits = 24; settings.stencilBits = 8; settings.antialiasingLevel = 2; sf::Window window(sf::VideoMode(screen_width, screen_height, 32), "SFML OpenGL", sf::Style::Close, settings); window.setActive(); glEnable(GL_DEPTH_TEST); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glEnable(GL_NORMALIZE); glEnable(GL_COLOR_MATERIAL); glShadeModel(GL_SMOOTH); glViewport(0, 0, screen_width, screen_height); glMatrixMode(GL_PROJECTION); glLoadIdentity(); //glOrtho(0.0f, screen_width, screen_height, 0.0f, -100.0f, 100.0f); gluPerspective(45.0f, (double) screen_width / (double) screen_height , 0.f, 100.f); glClearColor(0.f, 0.f, 1.f, 0.f); //blue while (window.isOpen()) { sf::Event event; while (window.pollEvent(event)) { switch (event.type) { case sf::Event::Closed: window.close(); break; } switch (event.key.code) { case sf::Keyboard::Escape: window.close(); break; case 'W': break; case 'S': break; case 'A': break; case 'D': break; } } glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(0.f, 0.f, 0.f); glPushMatrix(); glBegin(GL_QUADS); glColor3f(1.f, 0.f, 0.f); glVertex3f(-1.f, 1.f, 0.f); glColor3f(0.f, 1.f, 0.f); glVertex3f(1.f, 1.f, 0.f); glColor3f(1.f, 0.f, 1.f); glVertex3f(1.f, -1.f, 0.f); glColor3f(0.f, 0.f, 1.f); glVertex3f(-1.f, -1.f, 0.f); glEnd(); glPopMatrix(); window.display(); } return EXIT_SUCCESS; }

  • Best memory allocation strategy for iOS?

    - by Mr.Gando
    Hey guys, I'm debating with myself about memory allocation on iOS. I write most of my code in C++ and I really like using object pools, free lists, etc. in order to pre-allocate a lot of the stuff that I'll be constantly allocating and deallocating during the course of my game (like particles, game entities, etc.). Still, on iOS it's not like we are developing for a console like the PSP, where I know for a fact that I'll get a fixed amount of memory; iOS will issue "memory warnings" when the system needs memory. Does anyone have some suggestions about this? Is it less of a concern now that the new iPod touch/iPhone 4 carry more RAM, or is it still a big concern? Thanks!

  • How to raycast select a scaled OBB?

    - by user3254944
    I have the OBB picking code to select an OBB with code inspired from Real time Rendering 3 and opengl-tutorial.org. I can successfully select objects that have been moved or rotated. However, I cant correctly select an object that has been scaled. The bounding box scales right, but the I can only select the object in a thin strip on its center. How do I fix the checkForHits() function to allow it to read the scaling that I passed to it in the raycast matrix? void GLWidget::selectObjRaycast() { glm::vec2 mouse = (glm::vec2(mousePos.x(), mousePos.y()) / glm::vec2(this->width(), this->height())) * 2.0f - 1.0f; mouse.y *= -1; glm::mat4 toWorld = glm::inverse(ProjectionM * ViewM); glm::vec4 from = toWorld * glm::vec4(mouse, -1.0f, 1.0f); glm::vec4 to = toWorld * glm::vec4(mouse, 1.0f, 1.0f); from /= from.w; to /= to.w; fromAABB = glm::vec3(from); toAABB = glm::normalize(glm::vec3(to - from)); checkForHits(); } void GLWidget::checkForHits() { for (int i = 0; i < myWin.myEtc->allObj.size(); ++i) //check for hits on each obj's bb { bool miss = 0; float tMin = 0.0f; float tMax = 100000.0f; glm::vec3 bbPos(myWin.myEtc->allObj[i]->raycastM[3].x, myWin.myEtc->allObj[i]->raycastM[3].y, myWin.myEtc->allObj[i]->raycastM[3].z); glm::vec3 delta = bbPos - fromAABB; for (int j = 0; j < 3; ++j) { glm::vec3 axis(myWin.myEtc->allObj[i]->raycastM[j].x, myWin.myEtc->allObj[i]->raycastM[j].y, myWin.myEtc->allObj[i]->raycastM[j].z); float e = glm::dot(axis, delta); float f = glm::dot(toAABB, axis); if (fabs(f) > 0.001f) { float t1 = (e + myWin.myEtc->allObj[i]->bbMin[j]) / f; float t2 = (e + myWin.myEtc->allObj[i]->bbMax[j]) / f; if (t1 > t2) { float w = t1; t1 = t2; t2 = w; } if (t2 < tMax) tMax = t2; if (t1 > tMin) tMin = t1; if (tMax < tMin) miss = 1; } else { if (-e + myWin.myEtc->allObj[i]->bbMin[j] > 0.0f || -e + myWin.myEtc->allObj[i]->bbMax[j] < 0.0f) miss = 1; } } if (miss == 0) { intersection_distance = tMin; myWin.myEtc->sel.push_back(myWin.myEtc->allObj[i]); myWin.myEtc->allObj[i]->highlight = myWin.myGLHelp->highlight; break; } } } void Object::render(glm::mat4 PV) { scaleM = glm::scale(glm::mat4(), s->val_3); r_quat = glm::quat(glm::radians(r->val_3)); rotationM = glm::toMat4(r_quat); translationM = glm::translate(glm::mat4(), t->val_3); transLocal1M = glm::translate(glm::mat4(), -rsPivot->val_3); transLocal2M = glm::translate(glm::mat4(), rsPivot->val_3); raycastM = translationM * transLocal2M * rotationM * scaleM * transLocal1M; // MVP = PV * translationM * transLocal2M * rotationM * scaleM * transLocal1M; }

  • Single and Double Jump with a single button

    - by Asad
    I want to make a single jump on a single tap and a double jump on a double tap. My problem is that if I double tap on the ground then it's fine, but if I make the first tap on the ground and the second tap in the air, the player gains more height than usual, as in image 1. I want to make my jump like in image 2: no matter at which point the user gives the second tap, the player always reaches a specific height. I used both impulse and linear velocity to make the jump, but that did not solve my problem.
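    A common way to get a fixed-height double jump is to count jumps and set the vertical velocity to a constant on each jump, instead of adding an impulse on top of whatever velocity the body already has. A rough sketch with hypothetical fields; the sign of JumpSpeed depends on whether your engine's Y axis points up or down.

    ```csharp
    Vector2 velocity;
    int jumpsUsed = 0;
    const int MaxJumps = 2;
    const float JumpSpeed = 12f;   // tune for the desired fixed jump height

    void OnTap()
    {
        if (jumpsUsed < MaxJumps)
        {
            velocity.Y = JumpSpeed;  // overwrite, don't add: the second jump always restarts from the same speed
            jumpsUsed++;
        }
    }

    void OnLanded()
    {
        jumpsUsed = 0;               // reset when the player touches the ground again
    }
    ```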

  • XNA 4.0: Problem with loading XML content files

    - by 200
    I'm new to XNA so hopefully my question isn't too silly. I'm trying to load content data from XML. My XML file is placed in my Content project (the new XNA 4 thing), called "Event1.xml": <?xml version="1.0" encoding="utf-8" ?> <XnaContent> <Asset Type="MyNamespace.Event"> // error on this line <name>1</name> </Asset> </XnaContent> My "Event" class is placed in my main project: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Linq; namespace MyNamespace { public class Event { public string name; } } The XML file is being called by my main game class inside the LoadContent() method: Event temp = Content.Load<Event>("Event1"); And this is the error I'm getting: There was an error while deserializing intermediate XML. Cannot find type "MyNamespace.Event" I think the problem is because at the time the XML is being evaluated, the Event class has not been established because one file is in my main project while the other is in my Content project (being a XNA 4.0 project and such). I have also tried changing the build action of the xml file from compile to content; however the program would then not be able to find the file and would give this other warning: Project item 'Event1.xml' was not built with the XNA Framework Content Pipeline. Set its Build Action property to Compile to build it. Any suggestions are appreciated. Thanks.

  • RenderTarget2D behavior in XNA

    - by Utkarsh Sinha
    I've been dabbling with XNA for a couple of days now. This chunk of code doesn't work as I expect. The goal is to render sprites individually and composite them on another rendertarget. P = RenderTarget2D(with RenderTargetUsage.PreserveContents) D = RenderTarget2D(with RenderTargetUsage.DiscardContents) for all sprites: graphicsDevice.SetRenderTarget(D); <draw sprite i> graphicsDevice.SetRenderTarget(P); <Draw D> graphicsDevice.SetRenderTarget(null); <Draw P> The result I get is - only the last sprite is visible. I'm sure I'm missing some piece of information about RenderTarget2D. Any hints on what that might be? Cross posted from - http://stackoverflow.com/questions/9970349/weird-rendertarget2d-behaviour
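    For reference, a sketch of how such a compositing loop is often written (hypothetical sprite list; spriteBatch and both targets assumed to exist). D is cleared to transparent every iteration and drawn onto P with alpha blending, so the sprites already accumulated in P are not covered by D's background.

    ```csharp
    // P was created with RenderTargetUsage.PreserveContents; clear it once up front if needed.
    foreach (Sprite sprite in sprites)          // hypothetical list
    {
        graphicsDevice.SetRenderTarget(D);
        graphicsDevice.Clear(Color.Transparent);
        spriteBatch.Begin();
        spriteBatch.Draw(sprite.Texture, sprite.Position, Color.White);
        spriteBatch.End();

        graphicsDevice.SetRenderTarget(P);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        spriteBatch.Draw(D, Vector2.Zero, Color.White);   // RenderTarget2D is a Texture2D
        spriteBatch.End();
    }

    graphicsDevice.SetRenderTarget(null);
    spriteBatch.Begin();
    spriteBatch.Draw(P, Vector2.Zero, Color.White);
    spriteBatch.End();
    ```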

  • C++ problem with assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a seperate projection matrix. I have looked over my code over and over again, but I probably keep on missing the obvious reason why this is happening. Here is an image of my game: It's simply a 6 sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen: void C_MediaLoader::display(void) { float tmp; glTranslatef(0,0,0); // rotate it around the y axis glRotatef(angle,0.f,0.f,1.f); glColor4f(1,1,1,1); // scale the whole asset to fit into our view frustum tmp = scene_max.x-scene_min.x; tmp = aisgl_max(scene_max.y - scene_min.y,tmp); tmp = aisgl_max(scene_max.z - scene_min.z,tmp); tmp = (1.f / tmp); glScalef(tmp/5, tmp/5, tmp/5); // center the model //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z ); // if the display list has not been made yet, create a new one and // fill it with scene contents if(scene_list == 0) { scene_list = glGenLists(1); glNewList(scene_list, GL_COMPILE); // now begin at the root node of the imported data and traverse // the scenegraph by multiplying subsequent local transforms // together on GL's matrix stack. recursive_render(scene, scene->mRootNode); glEndList(); } glCallList(scene_list); } void C_MediaLoader::recursive_render (const struct aiScene *sc, const struct aiNode* nd) { unsigned int i; unsigned int n = 0, t; struct aiMatrix4x4 m = nd->mTransformation; // update transform aiTransposeMatrix4(&m); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++) { int index = face->mIndices[i]; if(mesh->mColors[0] != NULL) glColor4fv((GLfloat*)&mesh->mColors[0][index]); if(mesh->mNormals != NULL) glNormal3fv(&mesh->mNormals[index].x); glVertex3fv(&mesh->mVertices[index].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n]); } glPopMatrix(); } Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

  • 2D OBB collision detection, resolving collisions?

    - by Milo
    I currently use OBBs and I have a vehicle that is a rigid body and some buildings. Here is my update() private void update() { camera.setPosition((vehicle.getPosition().x * camera.getScale()) - ((getWidth() ) / 2.0f), (vehicle.getPosition().y * camera.getScale()) - ((getHeight() ) / 2.0f)); //camera.move(input.getAnalogStick().getStickValueX() * 15.0f, input.getAnalogStick().getStickValueY() * 15.0f); if(input.isPressed(ControlButton.BUTTON_GAS)) { vehicle.setThrottle(1.0f, false); } if(input.isPressed(ControlButton.BUTTON_BRAKE)) { vehicle.setBrakes(1.0f); } vehicle.setSteering(input.getAnalogStick().getStickValueX()); vehicle.update(16.6666f / 1000.0f); ArrayList<Building> buildings = city.getBuildings(); for(Building b : buildings) { if(vehicle.getRect().overlaps(b.getRect())) { vehicle.update(-17.0f / 1000.0f); break; } } } The collision detection works well. What doesn't is how they are dealt with. My goal is simple. If the vehicle hits a building, it should stop, and never go into the building. When I apply negative torque to reverse the car should not feel buggy and move away from the building. I don't want this to look buggy. This is my rigid body class: class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private float mass; //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position, getWidth(), getHeight(), angle); } public Vector2D getPosition() { return getRect().getCenter(); } @Override public void update(float timeStep) { //integrate physics //linear Vector2D acceleration = Vector2D.scalarDivide(forces, mass); velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); c = Vector2D.add(getRect().getCenter(), Vector2D.scalarMultiply(velocity , timeStep)); setCenter(c.x, c.y); forces = new Vector2D(0,0); //clear forces //angular float angAcc = torque / inertia; angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { Matrix mat = new Matrix(); float[] Vector2Ds = new float[2]; Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { Matrix mat = new Matrix(); float[] Vectors = new float[2]; Vectors[0] = world.x; Vectors[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vectors); return new Vector2D(Vectors[0], 
Vectors[1]); } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { Vector2D tangent = new Vector2D(-worldOffset.y, worldOffset.x); return Vector2D.add( Vector2D.scalarMultiply(tangent, angularVelocity) , velocity); } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces = Vector2D.add(forces ,worldForce); //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } } Essentially, when any rigid body hits a building it should exhibit the same behavior. How is collision solving usually done? Thanks
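    A rough sketch of the usual alternative to rewinding the whole update: compute a minimum translation vector (MTV) for the overlapping pair, push the vehicle out along it, and remove the velocity component pointing into the wall. The helper names below are hypothetical, and it assumes the OBB overlap test can also report the MTV (SAT-based tests usually can).

    ```csharp
    // Hypothetical helpers: Overlaps(building, out mtv) returns the smallest push that
    // separates the boxes, with mtv pointing from the building towards the vehicle.
    void ResolveBuildingCollision(RigidBody vehicle, OBB building)
    {
        Vector2D mtv;
        if (!vehicle.GetRect().Overlaps(building, out mtv))
            return;

        // 1. Positional correction: move the vehicle out by the penetration amount.
        vehicle.SetCenter(vehicle.GetPosition().Add(mtv));

        // 2. Remove the inward velocity component so the body doesn't sink back in next frame.
        Vector2D normal = mtv.Normalized();
        float into = Vector2D.Dot(vehicle.Velocity, normal);
        if (into < 0)
            vehicle.Velocity = vehicle.Velocity.Subtract(normal.Scale(into));
    }
    ```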

  • Loading sound in XNA without the Content Pipeline

    - by David Gouveia
    I'm working on a "Game Maker"-type of application for Windows where the user imports his own assets to be used in the game. I need to be able to load this content at runtime on the engine side. However I don't want the user to have to install anything more than the XNA runtime, so calling the content pipeline at runtime is out. For images I'm doing fine using Texture2D.FromStream. I've also noticed that XNA 4.0 added a FromStream method to the SoundEffect class but it only accepts PCM wave files. I'd like to support more than wave files though, at least MP3. Any recommendations? Perhaps some C# library that would do the decoding to PCM wave format.

  • Accept keyboard input when game is not in focus?

    - by Corey Ogburn
    I want to be able to control the game via keyboard while the game does not have focus... How can I do this in XNA? EDIT: I bought a tablet. I want to write a separate app to overlay the screen with controls that will send keyboard input to the game. Although it's not sending the input directly to the game, it's using the method discussed in this SO question: http://stackoverflow.com/questions/6446085/emulate-held-down-key-on-keyboard To my understanding, my test app is working the way it should, but the game is not responding to this input. I originally thought that Keyboard.GetState() would get the state regardless of whether the game is in focus, but that doesn't appear to be the case.
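    If Keyboard.GetState() really does ignore out-of-focus input in this setup, a Windows-only fallback is to poll the global keyboard state through user32.dll. A minimal P/Invoke sketch; it relies on XNA's Keys values matching the Win32 virtual-key codes.

    ```csharp
    using System.Runtime.InteropServices;
    using Microsoft.Xna.Framework.Input;

    static class GlobalKeyboard
    {
        [DllImport("user32.dll")]
        private static extern short GetAsyncKeyState(int vKey);

        // The high bit is set while the key is held down, regardless of window focus.
        public static bool IsDown(Keys key)
        {
            return (GetAsyncKeyState((int)key) & 0x8000) != 0;
        }
    }

    // usage inside Update():
    // if (GlobalKeyboard.IsDown(Keys.W)) { /* move forward */ }
    ```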

  • Create edges in Blender

    - by Mikey
    I've worked with 3DS Max at uni and am trying to learn Blender. My problem is I know a lot of simple techniques from 3DS Max that I'm having trouble translating into Blender. So my question is: say I have a poly in the middle of a mesh and I want to split it in two, simply adding an edge between two edges. This would create two 5-gons, one on either side. It's a simple technique I use every now and then when I want to modify geometry. It's called "Edge Connect" in 3DS Max. In Blender the only edge connect method I can find is to create edge loops, which is not helpful when aiming at low-poly iPhone games. Is there an equivalent in Blender?

  • Oscillating Sprite Movement in XNA

    - by Nick Van Hoogenstyn
    I'm working on a 2d game and am looking to make a sprite move horizontally across the screen in XNA while oscillating vertically (basically I want the movement to look like a sine wave). Currently for movement I'm using two vectors, one for speed and one for direction. My update function for sprites just contains this: Position += direction * speed * (float)t.ElapsedGameTime.TotalSeconds; How could I utilize this setup to create the desired movement? I'm assuming I'd call Math.Sin or Math.Cos, but I'm unsure of where to start to make this sort of thing happen. My attempt looked like this: public override void Update(GameTime t) { double msElapsed = t.TotalGameTime.Milliseconds; mDirection.Y = (float)Math.Sin(msElapsed); if (mDirection.Y >= 0) mSpeed.Y = moveSpeed; else mSpeed.Y = -moveSpeed; base.Update(t, mSpeed, mDirection); } moveSpeed is just some constant positive integer. With this, the sprite simply continuously moves downward until it's off screen. Can anyone give me some info on what I'm doing wrong here? I've never tried something like this so if I'm doing things completely wrong, let me know!
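    A rough sketch of one way to get the sine movement: derive the vertical offset directly from total elapsed time, with explicit amplitude and frequency, instead of flipping a direction sign. One likely issue with the attempt above is that TotalGameTime.Milliseconds is only the 0-999 millisecond component of the TimeSpan, so it wraps every second. The amplitude, frequency, baseY and horizontalSpeed fields below are hypothetical tuning values.

    ```csharp
    float elapsed;                  // accumulated seconds
    float baseY = 200f;             // centre line of the oscillation
    float amplitude = 50f;          // pixels above/below the centre line
    float frequency = 1f;           // full oscillations per second
    float horizontalSpeed = 120f;   // pixels per second

    public override void Update(GameTime t)
    {
        elapsed += (float)t.ElapsedGameTime.TotalSeconds;

        Position = new Vector2(
            Position.X + horizontalSpeed * (float)t.ElapsedGameTime.TotalSeconds,
            baseY + amplitude * (float)Math.Sin(MathHelper.TwoPi * frequency * elapsed));
    }
    ```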

  • Index out of bounds, Java bukkit plugin

    - by Robby Duke
    I'm getting index out of bounds errors in my Bukkit plugin, and it's really beginning to piss me off... I for the life of me can't figure this issue out! Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1 This is where I believe the code to be erroring... for(int i = 0; i <= staffOnline.size(); i++) { if(i == staffOnline.size()) { staffList = staffList + staffOnline.get(i); } else { staffList = staffList + staffOnline.get(i) + ", "; } }
