Search Results

Search found 12894 results on 516 pages for 'game kit'.


  • Collision within a poly

    - by G1i1ch
    For an HTML5 engine I'm making, I'm using a path poly for speed. I'm having trouble finding a way to get collision with the walls of the poly. To keep it simple, I just have a vector for the object and an array of vectors for the poly; they're all Cartesian and 2D. Say poly = [[550,0],[169,523],[-444,323],[-444,-323],[169,-523]]; it's just a pentagon I generated. The object that will collide is object; object.pos is its position and object.vel is its velocity, both 2D vectors as well. I've had some success getting it to detect a collision, but it's just black-box code I ripped from a C++ example. It's very obscure inside, and all it does is return true/false; it doesn't tell me which vertices were involved or where the collision point is. I'd really like to understand this and write my own so I can have more meaningful collisions, but I'll tackle that later. Again, the question is just: how does one find a collision with the walls of a poly, given the poly's vertices and the object's position + velocity? If more info is needed, please let me know. And if all anyone can do is point me in the right direction, that's great.
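
    One common approach, sketched below in C++ (a minimal sketch, not the black-box code the asker mentions), is to treat the object's motion as a segment from pos to pos + vel and intersect it against each wall segment of the poly. Unlike a boolean test, this yields the hit point and the index of the wall that was hit:

        #include <vector>

        struct Vec2 { double x, y; };

        // 2D cross product (the z component of the 3D cross).
        static double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

        // Intersect segment p -> p+r with segment q -> q+s. On a hit, fills
        // 'hit' and 't' (fraction 0..1 along the motion p -> p+r).
        static bool segmentHit(Vec2 p, Vec2 r, Vec2 q, Vec2 s, Vec2& hit, double& t) {
            double denom = cross(r, s);
            if (denom == 0.0) return false;              // parallel, no hit
            Vec2 qp = { q.x - p.x, q.y - p.y };
            t        = cross(qp, s) / denom;             // along the motion
            double u = cross(qp, r) / denom;             // along the wall
            if (t < 0.0 || t > 1.0 || u < 0.0 || u > 1.0) return false;
            hit = { p.x + t * r.x, p.y + t * r.y };
            return true;
        }

        // Sweep the object's motion (pos -> pos+vel) against every wall of the
        // poly; report the earliest hit and which edge it was.
        bool collidePoly(Vec2 pos, Vec2 vel, const std::vector<Vec2>& poly,
                         Vec2& hit, int& edge) {
            double best = 2.0;                           // any t <= 1 beats this
            for (size_t i = 0; i < poly.size(); ++i) {
                Vec2 a = poly[i], b = poly[(i + 1) % poly.size()];
                Vec2 wall = { b.x - a.x, b.y - a.y };
                Vec2 h; double t;
                if (segmentHit(pos, vel, a, wall, h, t) && t < best) {
                    best = t; hit = h; edge = (int)i;
                }
            }
            return best <= 1.0;
        }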

    Read the article

  • Smooth vector based jump

    - by Esa
    I started working on Wolfire's mathematics tutorials. I got the jumping working well using a step-by-step system, where you press a button and the cube moves to the next point on the jumping curve. Then I tried making the jump happen over a set time period, e.g. the jump starts and lands within 1.5 seconds. I tried the same system I used for the step-by-step implementation, but it happens instantly. After some googling I found that Time.deltaTime should be used, but I could not figure out how. Below is my current jumping code, which makes the jump happen instantly:

        while (transform.position.y > 0) {
            modifiedJumperVelocity -= jumperDrag;
            transform.position += new Vector3(modifiedJumperVelocity.x, modifiedJumperVelocity.y, 0);
        }

    Here modifiedJumperVelocity is the starting vector minus the jumper drag, and jumperDrag is the value that is subtracted from modifiedJumperVelocity during each step of the jump. Below is an image of the jumping curve:
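
    The usual fix, sketched here in plain C++ since the idea is engine-agnostic, is to drop the while loop and advance the jump by one step per frame, scaling each step by the frame's elapsed seconds (what Unity exposes as Time.deltaTime) so the arc takes the same wall-clock time at any frame rate. The names and constants below are illustrative:

        struct Vec2 { float x, y; };

        struct Jumper {
            Vec2  pos{0.0f, 0.0f};
            Vec2  vel{2.0f, 10.0f};     // initial jump velocity (units/second)
            float gravity = 20.0f;      // downward acceleration (units/second^2)

            // Call once per frame with the frame's elapsed seconds;
            // do NOT loop until landing inside a single frame.
            void update(float dt) {
                if (pos.y <= 0.0f && vel.y <= 0.0f) return;  // already landed
                vel.y -= gravity * dt;   // decay velocity over time, not per call
                pos.x += vel.x * dt;     // advance position by velocity * time
                pos.y += vel.y * dt;
                if (pos.y < 0.0f) pos.y = 0.0f;              // clamp to ground
            }
        };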

    Read the article

  • OpenGL Learning Material (that's up to date)

    - by Sauron
    So I'm sure there are topics on this, but a lot of them list older material. And the last book, http://www.amazon.com/OpenGL-SuperBible-Comprehensive-Tutorial-Reference/dp/0321712617/ref=sr_1_1?ie=UTF8&qid=1346116133&sr=8-1&keywords=opengl, REALLY, REALLY disappointed me. I DO NOT want to use someone else's library to learn this stuff; that bothers me SO much. So I was hoping there is a newer book that goes into detail and doesn't use some sort of library "hiding" everything from you. Or should I just look at older material? If so, anything that's not "too" out of date. Terrain tutorials are a plus (that's kinda my "goal"). Thanks.

    Read the article

  • Hedgewars code confusion

    - by BluFire
    I looked at an open source project (Hedgewars) that was built using several programming languages, such as C++ and Java. While I was looking through the code, I couldn't help noticing that all the math and physics were missing from the Java code. I imported the project file called "SDL-android-project", which was a subfolder of "android build" and the project files. My question is: where are all the math and physics inside the code? Do I have to look at the C++ code to see them? I think Hedgewars was originally programmed in C++, but the files are confusing to me because of the project's size and the fact that it contains several programming languages.

    Read the article

  • How can I do Mouse Selection In OpenGL 3.0?

    - by NoobScratcher
    Hello, I'm a fairly capable programmer: I've made my own 2D games in SDL and built a GUI in 3D using both old and modern OpenGL. But I'm having problems trying to click 3D models in OpenGL, and I honestly have no idea what to do. Do I read back the area of the screen that I've clicked, or what? I'm 100% sure this has been asked before, but I just don't know where to start. Using: OpenGL 3.0, Win32 API, C++.
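
    One common technique for this, instead of reading pixels back, is ray picking: unproject the mouse click into a world-space ray and test that ray against each model's bounding volume. A minimal sketch using GLM, assuming you have your view/projection matrices and viewport at hand (the bounding-sphere fields are illustrative):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>   // glm::unProject

        struct Pickable {
            glm::vec3 center;   // bounding-sphere center in world space
            float     radius;   // bounding-sphere radius
        };

        // Returns true if the mouse click (window coordinates, origin at the
        // top-left) hits the object's bounding sphere.
        bool pick(double mouseX, double mouseY,
                  const glm::mat4& view, const glm::mat4& proj,
                  const glm::vec4& viewport, const Pickable& obj)
        {
            // Flip Y: OpenGL's window origin is the bottom-left corner.
            float winY = viewport.w - (float)mouseY;

            // Unproject points on the near and far planes to form the ray.
            glm::vec3 nearPt = glm::unProject(glm::vec3((float)mouseX, winY, 0.0f),
                                              view, proj, viewport);
            glm::vec3 farPt  = glm::unProject(glm::vec3((float)mouseX, winY, 1.0f),
                                              view, proj, viewport);
            glm::vec3 dir = glm::normalize(farPt - nearPt);

            // Ray/sphere test: distance from the sphere center to the ray.
            glm::vec3 toCenter = obj.center - nearPt;
            glm::vec3 closest  = nearPt + dir * glm::dot(toCenter, dir);
            return glm::length(obj.center - closest) <= obj.radius;
        }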

    Read the article

  • What different ways are there to model restitution in a physics engine?

    - by Mikael Högström
    In my physics engine I give each body a restitution value between 0 and 1. When two bodies collide, there seem to be different views on how the restitution of the collision should be calculated. To me the most intuitive approach is to take the average of the two, but some engines take only the larger one. Are there other ways to do it? Also, could the closing velocity or some other parameter come into effect?
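
    For illustration, here is a sketch of the combine modes commonly exposed by physics engines (the mode names mirror what engines such as PhysX offer; which one is "right" is exactly the policy question being asked):

        #include <algorithm>

        enum class CombineMode { Average, Minimum, Maximum, Multiply };

        // Combine the restitution coefficients of two colliding bodies into a
        // single coefficient for the contact.
        float combineRestitution(float a, float b, CombineMode mode) {
            switch (mode) {
                case CombineMode::Average:  return 0.5f * (a + b);
                case CombineMode::Minimum:  return std::min(a, b);
                case CombineMode::Maximum:  return std::max(a, b);
                case CombineMode::Multiply: return a * b;
            }
            return 0.0f;
        }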

    Read the article

  • Error X3650 when compiling shader in XNA

    - by Saikai
    I'm attempting to convert the XBDEV.NET mosaic shader for use in my XNA project and having trouble. The compiler errors out because of the half globals. At first I tried removing the globals and writing the values explicitly in the code, but that garbles the output. Next I tried replacing all the half types with float, but that still garbles the resulting image. I call the effect file from SpriteBatch.Begin(). Is there a way to convert this shader to the new pixel shader conventions? Are there any good tutorials on this topic? Here is the shader file for reference:

        /*****************************************************************************/
        /*
            File:        tiles.fx
            Details:     Modified version of the NVIDIA Composer FX Demo Program 2004.
                         Produces a tiled mosaic effect on the output.
            Requires:    Vertex Shader 1.1, Pixel Shader 2.0
            Modified by: [email protected] (www.xbdev.net)
        */
        /*****************************************************************************/

        float4 ClearColor : DIFFUSE = { 0.0f, 0.0f, 0.0f, 1.0f };
        float ClearDepth = 1.0f;

        /******************************** TWEAKABLES ********************************/
        half NumTiles = 40.0;
        half Threshhold = 0.15;
        half3 EdgeColor = { 0.7f, 0.7f, 0.7f };

        /*****************************************************************************/
        texture SceneMap : RENDERCOLORTARGET
        <
            float2 ViewportRatio = { 1.0f, 1.0f };
            int MIPLEVELS = 1;
            string format = "X8R8G8B8";
            string UIWidget = "None";
        >;

        sampler SceneSampler = sampler_state
        {
            texture = <SceneMap>;
            AddressU = CLAMP;
            AddressV = CLAMP;
            MIPFILTER = NONE;
            MINFILTER = LINEAR;
            MAGFILTER = LINEAR;
        };

        /***************************** DATA STRUCTS *********************************/
        struct vertexInput
        {
            half3 Position : POSITION;
            half3 TexCoord : TEXCOORD0;
        };

        /* data passed from vertex shader to pixel shader */
        struct vertexOutput
        {
            half4 HPosition : POSITION;
            half2 UV : TEXCOORD0;
        };

        /******************************* Vertex shader ******************************/
        vertexOutput VS_Quad(vertexInput IN)
        {
            vertexOutput OUT = (vertexOutput)0;
            OUT.HPosition = half4(IN.Position, 1);
            OUT.UV = IN.TexCoord.xy;
            return OUT;
        }

        /******************************* Pixel shader *******************************/
        half4 tilesPS(vertexOutput IN) : COLOR
        {
            half size = 1.0 / NumTiles;
            half2 Pbase = IN.UV - fmod(IN.UV, size.xx);
            half2 PCenter = Pbase + (size / 2.0).xx;
            half2 st = (IN.UV - Pbase) / size;
            half4 c1 = (half4)0;
            half4 c2 = (half4)0;
            half4 invOff = half4((1 - EdgeColor), 1);
            if (st.x > st.y) { c1 = invOff; }
            half threshholdB = 1.0 - Threshhold;
            if (st.x > threshholdB) { c2 = c1; }
            if (st.y > threshholdB) { c2 = c1; }
            half4 cBottom = c2;
            c1 = (half4)0;
            c2 = (half4)0;
            if (st.x > st.y) { c1 = invOff; }
            if (st.x < Threshhold) { c2 = c1; }
            if (st.y < Threshhold) { c2 = c1; }
            half4 cTop = c2;
            half4 tileColor = tex2D(SceneSampler, PCenter);
            half4 result = tileColor + cTop - cBottom;
            return result;
        }

        /*****************************************************************************/
        technique tiles
        {
            pass p0
            {
                VertexShader = compile vs_1_1 VS_Quad();
                ZEnable = false;
                ZWriteEnable = false;
                CullMode = None;
                PixelShader = compile ps_2_0 tilesPS();
            }
        }

    Read the article

  • Quake 3 Bot Programming Example

    - by Manni
    I would like to implement an intelligent bot for Quake 3. I downloaded and built the code successfully under Linux. My problem is that I couldn't find any complete tutorial on how to build an agent, or which files to use (as there are many files in the source code). Can you point me to a website or a piece of source code showing me how to start, or something like an example implementation of a bot?

    Read the article

  • How do I set quad buffering with JOGL 2.0?

    - by tony danza
    I'm trying to create a 3D renderer for stereo vision with quad buffering in Processing/Java. The hardware I'm using is ready for this, so that's not the problem. I had a stereo.jar library for JOGL 1.0 working with Processing 1.5, but now I have to use Processing 2.0 and JOGL 2.0, and therefore I have to adapt the library. Some things have changed in the source code of JOGL and Processing, and I'm having a hard time figuring out how to tell Processing I want to use quad buffering. Here's the previous code:

        public class Theatre extends PGraphicsOpenGL {
            protected void allocate() {
                if (context == null) {
                    // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them
                    GLCapabilities capabilities = new GLCapabilities();
                    // Starting in release 0158, OpenGL smoothing is always enabled
                    if (!hints[DISABLE_OPENGL_2X_SMOOTH]) {
                        capabilities.setSampleBuffers(true);
                        capabilities.setNumSamples(2);
                    } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) {
                        capabilities.setSampleBuffers(true);
                        capabilities.setNumSamples(4);
                    }
                    capabilities.setStereo(true);

                    // get a rendering surface and a context for this canvas
                    GLDrawableFactory factory = GLDrawableFactory.getFactory();
                    drawable = factory.getGLDrawable(parent, capabilities, null);
                    context = drawable.createContext(null);

                    // need to get proper opengl context since will be needed below
                    gl = context.getGL();
                    // Flag defaults to be reset on the next trip into beginDraw().
                    settingsInited = false;
                } else {
                    // The following three lines are a fix for Bug #1176
                    // http://dev.processing.org/bugs/show_bug.cgi?id=1176
                    context.destroy();
                    context = drawable.createContext(null);
                    gl = context.getGL();
                    reapplySettings();
                }
            }
        }

    This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying:

        PGraphicsOpenGL pg = ((PGraphicsOpenGL)g);
        pgl = pg.beginPGL();
        gl = pgl.gl;
        glu = pg.pgl.glu;
        gl2 = pgl.gl.getGL2();

        GLProfile profile = GLProfile.get(GLProfile.GL2);
        GLCapabilities capabilities = new GLCapabilities(profile);
        capabilities.setSampleBuffers(true);
        capabilities.setNumSamples(4);
        capabilities.setStereo(true);
        GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);

    If I go on, I should do something like this:

        drawable = factory.getGLDrawable(parent, capabilities, null);

    but drawable isn't a field anymore, and I can't find another way to do it. How do I set quad buffering? If I try this:

        gl2.glDrawBuffer(GL.GL_BACK_RIGHT);

    it obviously doesn't work :/ Thanks.
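
    For reference, here is the underlying GL pattern any of these libraries ultimately has to enable, sketched in plain C++ with GLUT rather than Processing: request a stereo-capable framebuffer up front (JOGL's setStereo(true) plays the role of the GLUT_STEREO flag), then select the left and right back buffers per eye before drawing.

        #include <GL/glut.h>

        // Render one eye's view into the given back buffer.
        static void drawEye(GLenum buffer, float eyeOffsetX) {
            glDrawBuffer(buffer);             // GL_BACK_LEFT or GL_BACK_RIGHT
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glTranslatef(-eyeOffsetX, 0.0f, -5.0f);   // crude eye separation
            glutWireTeapot(1.0);
        }

        static void display() {
            drawEye(GL_BACK_LEFT,  -0.03f);
            drawEye(GL_BACK_RIGHT, +0.03f);
            glutSwapBuffers();                // swaps both eyes' buffers
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            // GLUT_STEREO requests a quad-buffered visual; window creation
            // fails unless the driver actually supports quad-buffer stereo.
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
            glutCreateWindow("quad-buffer stereo sketch");
            glutDisplayFunc(display);
            glEnable(GL_DEPTH_TEST);
            glutMainLoop();
            return 0;
        }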

    Read the article

  • Importing 3d model with multiple skeletons

    - by Sweta Dwivedi
    I have created an animated butterfly in 3ds Max and tried to export it in ".fbx" format to use in XNA. However, as soon as I compile, I get the following errors:

        Warning 1: Multiple skeletons were found in the file. The first skeleton,
        named "Left.Wing", has been moved to be a child of the scene root. The
        other, "Right.Wing", will be ignored. Fragment identifier "Right.Wing".

        Error 2: Vertex is bound to bone "Right.Wing", but this bone is not
        present in the skeleton.

    This is confusing, since I do have the bone Right.Wing, and I use it to animate the butterfly. I have seen a few possible solutions for Blender, but none for 3ds Max. It would be really helpful if someone could help me out with this.

    Read the article

  • Multiple Vertex Buffers per Mesh

    - by Daniel
    I've run into a situation where the size of my mesh, with all its vertices and indices, is larger than the (optimal) vertex buffer object upper limit (~8 MB). I was wondering if I can subdivide the mesh across multiple vertex buffers and somehow retain the validity of the indices, i.e. a triangle with one index into the first vertex buffer and another index into the last (in separate VBOs), all the while keeping this within vertex array objects. My thought is to save myself the hassle and, for meshes (messes :P) such as this, just use the necessary size (> 8 MB), which is what I do at the moment. But ideally my buffer manager (WIP) uses optimal sizes; I may just have to make a special case then... Any ideas? If necessary, a simple C++ code example is appreciated. Note: I have also cross-posted this on Stack Overflow, as I was not sure which site would be more suitable (it's partly a design question).
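
    Since the asker invites one: a minimal C++ sketch of the usual workaround, which is to split the mesh into chunks, rebase each chunk's indices so they start at 0 within its own VBO, and give each chunk its own VAO. Indices cannot span two VBOs, so any triangle must land wholly inside one chunk (glDrawElementsBaseVertex can avoid the rebase, but not the single-buffer restriction):

        #include <GL/glew.h>
        #include <cstdint>
        #include <vector>

        struct Vertex { float pos[3]; };

        struct Chunk {
            GLuint vao = 0, vbo = 0, ibo = 0;
            GLsizei indexCount = 0;
        };

        // Upload one chunk. The indices must already be rebased to this
        // chunk's own vertex range (0 .. verts.size()-1).
        Chunk uploadChunk(const std::vector<Vertex>& verts,
                          const std::vector<uint32_t>& indices)
        {
            Chunk c;
            c.indexCount = (GLsizei)indices.size();
            glGenVertexArrays(1, &c.vao);
            glBindVertexArray(c.vao);

            glGenBuffers(1, &c.vbo);
            glBindBuffer(GL_ARRAY_BUFFER, c.vbo);
            glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                         verts.data(), GL_STATIC_DRAW);

            glGenBuffers(1, &c.ibo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, c.ibo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(uint32_t),
                         indices.data(), GL_STATIC_DRAW);

            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), nullptr);
            glBindVertexArray(0);
            return c;
        }

        // Drawing is then one call per chunk; the VAO carries both buffers.
        void drawChunks(const std::vector<Chunk>& chunks) {
            for (const Chunk& c : chunks) {
                glBindVertexArray(c.vao);
                glDrawElements(GL_TRIANGLES, c.indexCount, GL_UNSIGNED_INT, nullptr);
            }
            glBindVertexArray(0);
        }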

    Read the article

  • Current state of the art for holographic displays?

    - by kamziro
    In another thread I was asking about independent arcade system development, partly because I'm interested in using available 3D holographic technology for the displays. I saw one approach that involved a mirror rotating at very high speed, so that's probably not going to be feasible (I'd hate to think of what happens when something goes wrong). I looked around for videos from trade shows and found this: http://www.youtube.com/watch?v=Vl0EIngM-RA But information is scarce, and I'm led to believe it just uses mirrors and reflections and does not convey true 3D depth, unlike the one using rotating mirrors at high speed: http://www.youtube.com/watch?v=YKCUGQ-uo8c So, does anyone know if there's something better available today?

    Read the article

  • 3D rotation matrices deform object while rotating

    - by Kevin
    I'm writing a small 3D renderer (using an orthographic projection right now). I've run into some trouble with my 3D rotation matrices: they seem to squeeze my 3D object (a box primitive) at certain angles. Here's a live demo (only tested in Google Chrome): http://dl.dropbox.com/u/109400107/3D/index.html The box is viewed from the top along the Y axis and is rotating around the X and Z axes. These are my three rotation matrices (only rX and rZ are being used):

        var rX = new Matrix([
            [1, 0, 0],
            [0, Math.cos(radiants), -Math.sin(radiants)],
            [0, Math.sin(radiants), Math.cos(radiants)]
        ]);
        var rY = new Matrix([
            [Math.cos(radiants), 0, Math.sin(radiants)],
            [0, 1, 0],
            [-Math.sin(radiants), 0, Math.cos(radiants)]
        ]);
        var rZ = new Matrix([
            [Math.cos(radiants), -Math.sin(radiants), 0],
            [Math.sin(radiants), Math.cos(radiants), 0],
            [0, 0, 1]
        ]);

    Before projecting the vertices, I multiply them by rZ and rX, like so:

        vert1.multiply(rZ);
        vert1.multiply(rX);
        vert2.multiply(rZ);
        vert2.multiply(rX);
        vert3.multiply(rZ);
        vert3.multiply(rX);

    The projection itself looks like this:

        bX = (pos.x + (vert1.x * scale));
        bY = (pos.y + (vert1.z * scale));

    where pos.x and pos.y are an offset for centering the box on the screen. I just can't seem to find a solution to this, and I'm still relatively new to working with matrices. You can view the source code of the demo page if you want to see the whole thing.
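
    A hedged aside, since the demo itself isn't reproduced here: a common cause of this kind of squeeze is rotating the already-rotated vertices in place every frame, so floating-point error accumulates and the product of many small rotations stops being orthogonal. The usual fix is to keep pristine base vertices and rebuild the rotation from a single accumulated angle each frame, as in this C++ sketch:

        #include <cmath>
        #include <vector>

        struct Vec3 { double x, y, z; };

        // Rotate about Z then X, with both matrices built fresh from 'angle'.
        Vec3 rotateZX(Vec3 v, double angle) {
            double c = std::cos(angle), s = std::sin(angle);
            Vec3 r = { c * v.x - s * v.y, s * v.x + c * v.y, v.z };  // Z rotation
            return { r.x, c * r.y - s * r.z, s * r.y + c * r.z };    // X rotation
        }

        // Every frame: transform from the untouched base vertices, never in place.
        std::vector<Vec3> transformed(const std::vector<Vec3>& base, double angle) {
            std::vector<Vec3> out;
            out.reserve(base.size());
            for (const Vec3& v : base) out.push_back(rotateZX(v, angle));
            return out;
        }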

    Read the article

  • Sharing VBO with multiple objects and fixed size buffer data

    - by Mark Ingram
    I'm just messing around with OpenGL and getting some basic structures in place. My first attempt resulted in each SceneObject class (which just contains vertex information right now) having its own VBO inside it; however, I've read that it might be better to share VBOs across multiple objects. I've also read that you should avoid resizing a VBO (repeated calls to glBufferData with different size parameters) and instead choose a fixed size for the VBO and just use a sub-range of the buffer. I don't think the size of the buffer data would change very often, but surely it would be better to allocate only the data you need? Choosing an arbitrary value seems risky. I'm looking for some advice on working with individual objects in a scene and their associated buffer data.
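
    For concreteness, a hedged sketch of the shared-buffer idea: allocate one fixed-size VBO up front and hand out byte ranges to objects via glBufferSubData, so the buffer itself is never resized. The class and its policy are illustrative, not a recommendation for every scene:

        #include <GL/glew.h>

        // A fixed-size shared VBO that hands out vertex ranges to scene objects.
        class SharedVertexBuffer {
        public:
            explicit SharedVertexBuffer(GLsizeiptr capacityBytes)
                : capacity_(capacityBytes) {
                glGenBuffers(1, &vbo_);
                glBindBuffer(GL_ARRAY_BUFFER, vbo_);
                // One allocation; the buffer is never resized afterwards.
                glBufferData(GL_ARRAY_BUFFER, capacityBytes, nullptr, GL_STATIC_DRAW);
            }

            // Copies 'bytes' of vertex data into the buffer and returns its byte
            // offset, or -1 when full (the caller then starts a second pool).
            GLintptr store(const void* data, GLsizeiptr bytes) {
                if (used_ + bytes > capacity_) return -1;
                GLintptr offset = used_;
                glBindBuffer(GL_ARRAY_BUFFER, vbo_);
                glBufferSubData(GL_ARRAY_BUFFER, offset, bytes, data);
                used_ += bytes;
                return offset;
            }

            GLuint handle() const { return vbo_; }

        private:
            GLuint     vbo_ = 0;
            GLintptr   used_ = 0;
            GLsizeiptr capacity_ = 0;
        };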

    Read the article

  • Blender to 3ds max to cal3d format

    - by Kaliber64
    There are quite a few questions on cal3d, but they are old and don't apply anymore. In Blender (it must be 2.49a for the Python export script to work!): I have a scene with 7 meshes, 1 armature, and 10 bones. I tried going down to one mesh to simplify it, but that doesn't change anything. I found a small blend file that was used for cal3d, and it exported just fine, so I tried to copy its setup, with no success. EDIT 8/13/2012: Here is what I have found in the last week. I made the mesh in the newest Blender (2.62?) and exported it to import it in the old one (2.49a). I did the animation in the old one, because importing new blend files into old Blenders just warns that keyframe data will be lost, and all was good. Then you hit the last problem: it doesn't export the meshes. BUT I found that meshes made in the old Blender export regardless; I can't find any that won't export. So if I use the old Blender to remake my model, I can get it to export :) At this point I found a modified release of cal3d (because the core model variable would not initialize; I made a really small test subject in the old Blender instead of remaking my big one, which took 4 hours). It fixes the morph objects and adds what cal3d left off. Under their license they have to release the modification, but it has no documentation, so I have to figure it out on my own; it's mostly the same. With this lib came a 3ds Max exporter. My question now is: how do I transfer armature and mesh information from Blender to 3ds Max in order to export into cal3d format? Every time I try, the models are see-through and tiny, and there are no bones. The formats I have tried to import are .3ds, .obj (mesh only), and COLLADA. In all of them the mesh is invisible and has no bones, even though it says the default texture is on, so I should be able to see it. All the vertices are present; I found a vertex highlighter, so I can see those. If any of this is confusing, let me know so I can clear it up.

    Read the article

  • Drawing a random x,y grid of objects within a perspective

    - by T Reddy
    I'm wrapping my head around OpenGL ES 2.0, and I think I'm trying to do something very simple, but the math may be eluding me. I created a simple, flat-ish cylinder in Blender that is 2 units in diameter. I want to create an arbitrary grid of these, edge to edge (think of a checkerboard). I'm using a 3D perspective with GLKit:

        CGSize size = [[self view] bounds].size;
        _projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f),
                                                      size.width / size.height,
                                                      0.1f, 100.0f);

    I managed to manually get all of these cylinders drawn on the screen just fine. However, I would like to understand how I can programmatically "fit" all of the cylinders on the screen at the same time, given the camera location, screen size, cylinder diameter, and the number of rows/columns. The net effect would be that for small grids (i.e., 5x5) the objects are closer to the camera, while for large grids (i.e., 30x30) they are farther away; in either case, all of the cylinders are visible.
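
    The math being asked for, hedged as a sketch with illustrative names, is to solve the perspective relation for the camera distance: at distance z the frustum is 2 * z * tan(fovY/2) units tall, and that times the aspect ratio wide, so a grid of half-width w fits horizontally at w / (aspect * tan(fovY/2)) and half-height h fits vertically at h / tan(fovY/2). Taking the larger of the two distances keeps the whole grid visible.

        #include <algorithm>
        #include <cmath>

        // Distance the camera must sit from the grid plane so that a
        // rows x cols grid of tiles of 'diameter' units fits the frustum.
        // fovY is the vertical field of view in radians; aspect = width/height.
        float fitGridDistance(int rows, int cols, float diameter,
                              float fovY, float aspect)
        {
            float halfW = 0.5f * cols * diameter;   // grid half-width
            float halfH = 0.5f * rows * diameter;   // grid half-height
            float t = std::tan(0.5f * fovY);
            float distForWidth  = halfW / (t * aspect);
            float distForHeight = halfH / t;
            return std::max(distForWidth, distForHeight);
        }

        // Example: a 30x30 grid of 2-unit cylinders, 45-degree fovY, 4:3 aspect.
        // float z = fitGridDistance(30, 30, 2.0f, 45.0f * 3.14159265f / 180.0f,
        //                           4.0f / 3.0f);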

    Read the article

  • Building View Matrix in Direct3D11

    - by Balls
    Am I doing it right? I converted this:

        m_ViewMatrix = XMMatrixLookAtLH(XMLoadFloat3(&m_Position), lookAtVector, upVector);

    to this:

        XMVECTOR vz = XMVector3Normalize( lookAtVector - XMLoadFloat3(&m_Position) );
        XMVECTOR vx = XMVector3Normalize( XMVector3Cross( upVector, vz ) );
        XMVECTOR vy = XMVector3Cross( vz, vx );
        m_ViewMatrix.r[0] = vx;
        m_ViewMatrix.r[1] = vy;
        m_ViewMatrix.r[2] = vz;
        m_ViewMatrix.r[3] = XMLoadFloat3(&m_Position);
        m_ViewMatrix.r[0].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[1].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[2].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[3].m128_f32[3] = 1.0f;
        m_ViewMatrix = XMMatrixInverse( &XMMatrixDeterminant(m_ViewMatrix), m_ViewMatrix );

    Everything looks fine when I run it. Another question: I saw on this site (http://webglfactory.blogspot.com/2011/06/how-to-create-view-matrix.html) that he subtracted lookAt from position in his vector vz. I tried it, but it gave me the wrong view matrix. Can anyone check my code? I'm studying linear algebra on my own right now; it sucks that my course doesn't include it. Thank you, Balls
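
    A hedged way to sanity-check a hand-built view matrix is to compare it entry by entry against the library's own XMMatrixLookAtLH result. On the sign question: lookAt - position is the forward axis in the left-handed convention used here, while the linked page builds a right-handed matrix from position - lookAt, so swapping the subtraction mixes the two conventions. A small comparison helper:

        #include <DirectXMath.h>
        #include <cmath>
        using namespace DirectX;

        // Compare two matrices element-wise within a small tolerance; useful
        // for checking a hand-rolled view matrix against XMMatrixLookAtLH.
        bool nearlyEqual(FXMMATRIX a, CXMMATRIX b, float eps = 1e-4f) {
            XMFLOAT4X4 fa, fb;
            XMStoreFloat4x4(&fa, a);
            XMStoreFloat4x4(&fb, b);
            for (int r = 0; r < 4; ++r)
                for (int c = 0; c < 4; ++c)
                    if (std::fabs(fa.m[r][c] - fb.m[r][c]) > eps) return false;
            return true;
        }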

    Read the article

  • Isometric algorithm [fixed]

    - by David
    So I've been toying with isometric, and I just can't get the tiles to be in the right order. I'm probably missing something obvious and I just can't see it... but even at the risk of looking stupid, here's my code:

        for (int i = 0; i < Tile.MapSize; i++)
        {
            for (int j = 0; j < Tile.MapSize; j++)
            {
                spriteBatch.Draw(
                    Tile.TileSetTexture,
                    new Rectangle(
                        (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2),
                        (i * (Tile.TileHeight - 9) / 2) - (-j * (Tile.TileHeight - 9) / 2),
                        Tile.TileWidth,
                        Tile.TileHeight),
                    Tile.GetSourceRectangle(tileID),
                    Color.White,
                    0.0f,
                    new Vector2(-350, -60),
                    SpriteEffects.None,
                    1.0f);
            }
        }

    and here's what I end up with: [delicious messed up map] Yep, bit of an issue. So if anyone could help, I'd appreciate it. Edit: works now <_<

    Read the article

  • Checking for alternate keys with XNA IsKeyDown

    - by jocull
    I'm working on picking up XNA, and this was a confusing point for me:

        KeyboardState keyState = Keyboard.GetState();
        if (keyState.IsKeyDown(Keys.Left) || keyState.IsKeyDown(Keys.A))
        {
            // Do stuff...
        }

    The book I'm using (Learning XNA 4.0, O'Reilly) says that this method accepts a bitwise-OR series of keys, which I think should look like this:

        KeyboardState keyState = Keyboard.GetState();
        if (keyState.IsKeyDown(Keys.Left | Keys.A))
        {
            // Do stuff...
        }

    But I can't get it to work. I also tried !IsKeyUp(... | ...), as it said that all keys had to be down for it to be true, but had no luck with that either. Ideas? Thanks.
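
    A hedged illustration of why the second form can't work: XNA's Keys is a plain enum of key codes, not a [Flags] bitmask, so OR-ing two members simply manufactures a third, unrelated code rather than a set of keys. The same pitfall, written out in C++ with the corresponding virtual-key values:

        #include <cstdio>

        // Key codes with the same values XNA uses (Left = 37, A = 65).
        enum Key { Key_Left = 37, Key_A = 65 };

        int main() {
            // OR-ing two non-flag key codes yields a different code entirely,
            // so a lookup like IsKeyDown(Left | A) tests the wrong key.
            int combined = Key_Left | Key_A;   // 37 | 65 == 101 (NumPad5!)
            std::printf("%d\n", combined);     // prints 101, not "Left and A"
            return 0;
        }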

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference between rendering two scenes with the same setup:

    Scene 1: a dirt road (nothing else), with detailed geometry: all the bumps, cracks and so forth are done in the mesh.

    Scene 2: a dirt road (nothing else), with a simple mesh in the form of a road, where maps and textures simulate the cracks, bumps, etc...

    So of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this: go heavy on the textures, or have a blend of both?

    Read the article

  • Generate texture for a heightmap

    - by James
    I've recently been trying to blend multiple textures based on the height at different points in a heightmap. However, I've been getting poor results. I decided to backtrack and just attempt to recreate one single texture from an SDL_Surface (I'm using SDL) and send that into OpenGL. Below is my code for creating the texture and reading the colour values. It is a 24-bit TGA I'm loading, and I've confirmed that the rest of my code works, because I was able to send the surface's pixels directly to my createTextureFromData function and it drew fine.

        struct RGBColour
        {
            RGBColour() : r(0), g(0), b(0) {}
            RGBColour(unsigned char red, unsigned char green, unsigned char blue)
                : r(red), g(green), b(blue) {}

            unsigned char r;
            unsigned char g;
            unsigned char b;
        };

        // main loading code
        SDLSurfaceReader* reader = new SDLSurfaceReader(m_renderer);
        reader->readSurface("images/grass.tga");

        // new texture
        unsigned char* newTexture =
            new unsigned char[reader->m_surface->w * reader->m_surface->h * 3 * reader->m_surface->w];

        for (int y = 0; y < reader->m_surface->h; y++)
        {
            for (int x = 0; x < reader->m_surface->w; x += 3)
            {
                int index = (y * reader->m_surface->w) + x;
                RGBColour colour = reader->getColourAt(x, y);
                newTexture[index] = colour.r;
                newTexture[index + 1] = colour.g;
                newTexture[index + 2] = colour.b;
            }
        }

        unsigned int id = m_renderer->createTextureFromData(newTexture, reader->m_surface->w, reader->m_surface->h, RGB);

        // functions for reading pixels
        RGBColour SDLSurfaceReader::getColourAt(int x, int y)
        {
            Uint32 pixel;
            Uint8 red, green, blue;
            RGBColour rgb;

            pixel = getPixel(m_surface, x, y);
            SDL_LockSurface(m_surface);
            SDL_GetRGB(pixel, m_surface->format, &red, &green, &blue);
            SDL_UnlockSurface(m_surface);

            rgb.r = red;
            rgb.b = blue;
            rgb.g = green;
            return rgb;
        }

        // this function taken from SDL documentation
        // http://www.libsdl.org/cgi/docwiki.cgi/Introduction_to_SDL_Video#getpixel
        Uint32 SDLSurfaceReader::getPixel(SDL_Surface* surface, int x, int y)
        {
            int bpp = m_surface->format->BytesPerPixel;
            Uint8* p = (Uint8*)m_surface->pixels + y * m_surface->pitch + x * bpp;

            switch (bpp)
            {
            case 1:
                return *p;
            case 2:
                return *(Uint16*)p;
            case 3:
                if (SDL_BYTEORDER == SDL_BIG_ENDIAN)
                    return p[0] << 16 | p[1] << 8 | p[2];
                else
                    return p[0] | p[1] << 8 | p[2] << 16;
            case 4:
                return *(Uint32*)p;
            default:
                return 0;
            }
        }

    I've been stumped by this, and I need help badly! Thanks so much for any advice.
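
    For reference alongside the loop above: the usual indexing for a tightly packed RGB destination buffer steps the pixel coordinate by 1, multiplies the flat index by the 3 channels, and allocates w * h * 3 bytes. Whether that is the actual bug here is left to the answers; the sketch below reuses the question's own names (reader, RGBColour), with w and h as shorthand for the surface dimensions:

        // w * h pixels, 3 bytes each: the flat index scales by the channel count.
        unsigned char* texture = new unsigned char[w * h * 3];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int index = ((y * w) + x) * 3;   // byte offset of pixel (x, y)
                RGBColour c = reader->getColourAt(x, y);
                texture[index]     = c.r;
                texture[index + 1] = c.g;
                texture[index + 2] = c.b;
            }
        }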

    Read the article

  • Rotating multiple points at once in 2D

    - by Deukalion
    I currently have an editor that creates shapes out of (X, Y) coordinates and then triangulates those points to make up a shape. What do I have to do to rotate all of those points simultaneously? Say I click the screen in my editor: it locates the point where I've clicked, and if I move the mouse up or down from that point, it calculates rotation on the X and Y axes depending on the new position relative to the first, so if I move up 10 on the Y axis it rotates that way, and the same for X. Or, more simply, some way to enter a rotation in degrees: 90, 180, 270, 360, for example. I use VertexPositionColor at the moment. What are the best algorithms or methods for rotating multiple points in 2D at once? Also: since this is an editor, I do not want to rotate via the Matrix; if I rotate the whole shape 180 degrees, that becomes the new "position" of all the points, so the new rotation = 0, for example. Later on I will probably use World matrix rotation for this, but not now.
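
    The standard method, sketched below in C++ and hedged as one option among several, is to rotate every point around a shared pivot (say, the shape's centroid) with the 2D rotation formula, writing the results back so the rotation is baked into the coordinates and the shape's stored rotation stays zero, as the asker wants:

        #include <cmath>
        #include <vector>

        struct Point { float x, y; };

        // Rotate all points by 'degrees' around 'pivot', in place.
        void rotatePoints(std::vector<Point>& pts, Point pivot, float degrees) {
            float rad = degrees * 3.14159265f / 180.0f;
            float c = std::cos(rad), s = std::sin(rad);
            for (Point& p : pts) {
                float dx = p.x - pivot.x;          // translate pivot to origin
                float dy = p.y - pivot.y;
                p.x = pivot.x + dx * c - dy * s;   // standard 2D rotation
                p.y = pivot.y + dx * s + dy * c;
            }
        }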

    Read the article

  • Making more complicated systems (entity-component-system model question)

    - by winch
    I'm using a model where entities are collections of components and components are just data. All the logic goes into systems which operate on components. Making basic systems (for rendering and handling collision) was easy, but how do I do more complicated systems? For example, in a CollisionSystem I can check if entity A collides with entity B. I have this code in the CollisionSystem for checking if B damages A:

        if (collides(a, b)) {
            HealthComponent* hc = a->get<HealthComponent>();
            hc->reduceHealth(b->get<DamageComponent>()->getDamage());
        }

    But I feel that this code shouldn't belong to the CollisionSystem. Where should code like this live, and which additional systems should I create to make it generic?
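
    One widely used pattern, sketched here with hypothetical names rather than anything from the asker's codebase, is to have the CollisionSystem only publish collision events and let a separate DamageSystem consume them, so the collision code never touches health or damage components directly:

        #include <queue>

        struct Entity;  // as in the asker's model: a collection of components

        // The CollisionSystem publishes facts; it knows nothing about health.
        struct CollisionEvent { Entity* a; Entity* b; };

        class EventBus {
        public:
            void publish(const CollisionEvent& e) { events_.push(e); }
            bool poll(CollisionEvent& e) {
                if (events_.empty()) return false;
                e = events_.front();
                events_.pop();
                return true;
            }
        private:
            std::queue<CollisionEvent> events_;
        };

        // The DamageSystem owns the "collision hurts" rule.
        class DamageSystem {
        public:
            explicit DamageSystem(EventBus& bus) : bus_(bus) {}
            void update() {
                CollisionEvent e;
                while (bus_.poll(e)) {
                    // Against the asker's component API (pseudocode):
                    // if (auto* dmg = e.b->get<DamageComponent>())
                    //     if (auto* hp = e.a->get<HealthComponent>())
                    //         hp->reduceHealth(dmg->getDamage());
                }
            }
        private:
            EventBus& bus_;
        };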

    Read the article

  • Pixmaps, ByteBuffers, and Textures....Oh my

    - by odaymichael
    My ultimate goal is to take a specific region of the screen and redraw it somewhere else. For example, take a square from the upper-left corner of the screen and redraw it in the lower-right corner, so that it is basically a copy of that screen section; kind of like a minimap, but at the same scale as the original. I have looked into pixmaps and ByteBuffers, and also into maybe copying that region from the back buffer somehow. I'm wondering the best way to go about this. Any help is appreciated. I am using OpenGL ES and libgdx, for what it's worth.
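
    One common GL ES technique for this, hedged as a sketch (libgdx wraps the same calls), is to copy the framebuffer rectangle into a texture with glCopyTexSubImage2D after the scene is drawn, then render that texture as a quad wherever the duplicate should appear:

        #include <GLES2/gl2.h>

        // Create an empty RGB texture sized to the region we want to duplicate.
        GLuint makeRegionTexture(int w, int h) {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, nullptr);
            return tex;
        }

        // After rendering the scene, grab the w x h rectangle whose lower-left
        // corner is (x, y) from the current framebuffer into the texture; then
        // draw the texture as a quad with the usual sprite/batch path.
        void captureRegion(GLuint tex, int x, int y, int w, int h) {
            glBindTexture(GL_TEXTURE_2D, tex);
            glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, w, h);
        }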

    Read the article
