Search Results

Search found 16410 results on 657 pages for 'game component'.


  • Retrieve the coordinates of the *occluding* (closest/drawn) pixels during 3D overlap, using OpenGL?

    - by Big Rich
    Hi, sorry if the question is not worded well; I'm new to both 3D and OpenGL. How could I go about obtaining the 3D coordinates of the occluding object at the point where occlusion is happening (i.e. the 'intersection' of the object in front/closest to the screen)? To offer a [very] rudimentary visual example: if you were to form an index-finger cross with your right hand closest to your face, I'd like to know the coordinates of the part of your right finger which obscures the other finger (obviously back within the OpenGL context - no jokers ;-) ). If there is a way to find out about both the occluder (hider) and the occluded (hidden) objects in OpenGL, that would also be of great use. Cheers, Rich
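
    One common way to get the front-most (occluding) surface point under a given pixel (a sketch only, not from the original post) is to read back the depth buffer at that pixel and unproject it; the names below assume a legacy fixed-function GL/GLU context:

        #include <GL/glu.h>

        // Returns false if nothing was drawn at (winX, winY); otherwise fills `out`
        // with the world-space position of the closest (occluding) surface there.
        bool unprojectFrontPixel(int winX, int winY, double out[3])
        {
            GLint viewport[4];
            GLdouble modelview[16], projection[16];
            GLfloat depth = 1.0f;

            glGetIntegerv(GL_VIEWPORT, viewport);
            glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
            glGetDoublev(GL_PROJECTION_MATRIX, projection);

            int glY = viewport[3] - winY;                 // GL's window origin is bottom-left
            glReadPixels(winX, glY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
            if (depth >= 1.0f)
                return false;                             // hit the far plane: nothing drawn here

            return gluUnProject(winX, glY, depth, modelview, projection, viewport,
                                &out[0], &out[1], &out[2]) == GL_TRUE;
        }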

    Read the article

  • Particle system lifetimes in OpenGL ES 2

    - by user16547
    I don't know how to work with my particles' lifetimes. My design is simple: each particle has a position, a speed and a lifetime. On each frame, each particle should update its position like this:

        position.y = position.y + INCREMENT * speed.y

    However, I'm having difficulty choosing my INCREMENT. If I set it to some sort of FRAME_COUNT, it looks fine until FRAME_COUNT has to be set back to 0; the effect is that all particles start over at the same time, which I don't want. I want my particles to live "independently" of each other. That's the reason I need a lifetime, but I don't know how to make use of it. I added a lifetime for each particle in the particle buffer, but I also need an individual increment that's updated on each frame, so that when PARTICLE_INCREMENT = PARTICLE_LIFETIME, each increment goes back to 0. How can I achieve something like that?
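
    A minimal sketch of the per-particle ageing this describes (C++, with illustrative names that are not the poster's code): each particle carries its own age and lifetime, so respawns never line up across particles.

        struct Particle {
            float position[3];
            float velocity[3];
            float age;        // seconds since this particle (re)spawned
            float lifetime;   // seconds before this particle respawns
        };

        void updateParticles(Particle* particles, int count, float dt)
        {
            for (int i = 0; i < count; ++i) {
                Particle& p = particles[i];
                p.age += dt;
                if (p.age >= p.lifetime)
                    p.age -= p.lifetime;                   // wrap this particle only; respawn it here
                for (int k = 0; k < 3; ++k)
                    p.position[k] += p.velocity[k] * dt;   // frame-rate independent step
            }
        }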

    Read the article

  • Lwjgl camera causing movement to be mirrored

    - by pangaea
    I'm having a problem in that everything is rendered and the movement is fine. However, everything seems to be mirrored, in the sense that the TriangleMob should move towards me, but instead it mirrors my actions: I move forward, the TriangleMob moves backwards; I move left, it moves right; I move backwards, it moves forward. The code works if I do this:

        glPushMatrix();
        glTranslatef(-position.x, -position.y, -position.z);
        glCallList(objectDisplayList);
        glPopMatrix();

    However, I'm scared this will cause a problem later on. I suppose the code works, but shouldn't the call be:

        glPushMatrix();
        glTranslatef(position.x, position.y, position.z);
        glCallList(objectDisplayList);
        glPopMatrix();

    I think the problem could be caused by how I'm doing the camera, which is this:

        glLoadIdentity();
        glRotatef(player.getRotation().x, 1.0f, 0.0f, 0.0f);
        glRotatef(player.getRotation().y, 0.0f, 1.0f, 0.0f);
        glRotatef(player.getRotation().z, 0.0f, 0.0f, 1.0f);
        glTranslatef(player.getPosition().x, player.getPosition().y, player.getPosition().z);
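
    For reference, a sketch of the usual convention (an assumption about the intent, not code from the post): the "camera" is the inverse of the viewer's transform applied to the whole world, so its rotation and translation are negated relative to drawing an object at the same position.

        #include <GL/gl.h>

        struct Vec3 { float x, y, z; };   // hypothetical helper type for this sketch

        // Applies the inverse of the viewer's transform: negated rotation, then
        // negated translation, so the world moves opposite to the camera.
        void applyCamera(const Vec3& camPos, const Vec3& camRot)
        {
            glLoadIdentity();
            glRotatef(-camRot.x, 1.0f, 0.0f, 0.0f);
            glRotatef(-camRot.y, 0.0f, 1.0f, 0.0f);
            glRotatef(-camRot.z, 0.0f, 0.0f, 1.0f);
            glTranslatef(-camPos.x, -camPos.y, -camPos.z);
        }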

    Read the article

  • Keep Getting Syntax Error C2199?

    - by DARK3ZOOZ
    Here's my problem: I'm trying to define something, but I keep getting a syntax error. Code:

        #define R_RegisterShader 0x50C8A0

        int (*trap_R_RegisterShader)( const char *name, int Arg_1 ) =
            (int (_cdecl *)(const char *, int ))R_RegisterShader;
        //                                      ^^^^^^^ this last part is where I keep getting the error

    If you need more lines of code, just let me know. Thanks.
    Screenshot of the error: http://gyazo.com/1a47ebc12cfbd6ea72feb72c686ae84d
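
    For what it's worth, one way this kind of raw-address-to-function-pointer cast is often structured (a sketch under the assumption that the address itself is valid; this is not a confirmed fix for C2199): put the signature in a typedef and keep the macro purely for the address.

        // The address below is the one from the post and is illustrative only.
        #define R_REGISTERSHADER_ADDR 0x50C8A0

        typedef int (__cdecl *RegisterShaderFn)(const char* name, int arg1);

        static RegisterShaderFn trap_R_RegisterShader =
            reinterpret_cast<RegisterShaderFn>(R_REGISTERSHADER_ADDR);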

    Read the article

  • Settings object with singleton pattern

    - by axis
    I need to build an object that will have only one instance, because this object is dedicated to the storage of vital settings for my application and I would like to avoid misuse of this type or a conflict at run-time. The most popular solution for this, according to the internet, is the Singleton pattern. But I would like to know about other ideas or solutions for this; I would also like to know if other solutions can be much easier to grasp for a user of this hypothetical library. Thanks.
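
    For reference, a minimal sketch of the pattern in question (a C++11 "Meyers singleton"; the class name and fields are illustrative, not from the post):

        class Settings {
        public:
            static Settings& instance()
            {
                static Settings settings;   // constructed on first use, thread-safe in C++11
                return settings;
            }

            int  screenWidth  = 1280;
            int  screenHeight = 720;
            bool fullscreen   = false;

        private:
            Settings() = default;
            Settings(const Settings&) = delete;
            Settings& operator=(const Settings&) = delete;
        };

        // Usage: Settings::instance().fullscreen = true;

    The simpler alternative the post asks about is often just constructing one Settings object at startup and passing a reference to whatever needs it (plain dependency injection), which tends to be easier for a library user to understand and to test.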

    Read the article

  • What's wrong with my lookAt and move forward code?

    - by alaslipknot
    I'm still in the process of getting familiar with libGDX, and one of the fun things I love to do is make basic methods for reuse in future projects. For now I'm stuck on getting a Sprite to rotate toward a target (Vector2) and then move forward based on that rotation. The code I'm using is this:

        // set angle
        public void lookAt(Vector2 target) {
            float angle = (float) Math.atan2(target.y - this.position.y, target.x - this.position.x);
            angle = (float) (angle * (180 / Math.PI));
            setAngle(angle);
        }

        // move forward
        public void moveForward() {
            this.position.x += Math.cos(getAngle()) * this.speed;
            this.position.y += Math.sin(getAngle()) * this.speed;
        }

    And this is my render method:

        @Override
        public void render(float delta) {
            // TODO Auto-generated method stub
            Gdx.gl.glClearColor(0, 0, 0.0f, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            // groupUpdate();
            Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(mousePos);
            ball.lookAt(new Vector2(mousePos.x, mousePos.y));
            //
            if (Gdx.input.isTouched()) {
                ball.moveForward();
            }
            batch.begin();
            batch.draw(ball.getSprite(), ball.getPos().x, ball.getPos().y,
                       ball.getSprite().getOriginX(), ball.getSprite().getOriginY(),
                       ball.getSprite().getWidth(), ball.getSprite().getHeight(),
                       .5f, .5f, ball.getAngle());
            batch.end();
        }

    The goal is to make the ball always look at the mouse cursor, and then move forward when I click. I'm also using this camera:

        // create the camera and the SpriteBatch
        camera = new OrthographicCamera();
        camera.setToOrtho(false, 800, 480);

    ...and the result was creepy. Thank you.
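
    As a side note, a small sketch of keeping the angle units consistent (C++ here, purely illustrative and not the poster's code): std::cos and std::sin expect radians, so the heading is kept in radians for the movement math and converted to degrees only where a renderer wants degrees.

        #include <cmath>

        struct Mover {
            float x = 0.0f, y = 0.0f;
            float headingRad = 0.0f;   // heading stored in radians
            float speed = 120.0f;      // units per second

            void lookAt(float targetX, float targetY) {
                headingRad = std::atan2(targetY - y, targetX - x);
            }

            void moveForward(float dt) {
                x += std::cos(headingRad) * speed * dt;   // cos/sin take radians
                y += std::sin(headingRad) * speed * dt;
            }

            float headingDegrees() const {                // for APIs that rotate in degrees
                return headingRad * (180.0f / 3.14159265358979f);
            }
        };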

    Read the article

  • SDL: How would I add tile layers with my area class as a singleton?

    - by Tony
    I'm trying to wrap my head around how to get this done, if at all possible. Basically I have an Area class, a Map class and a Tile class. My Area class is a singleton, and this is causing some confusion. I'm trying to draw in this order: Background / Tiles / Entities / Overlay Tiles / UI.

        void C_Application::OnRender() {
            // Fill the screen black
            SDL_FillRect( Surf_Screen, &Surf_Screen->clip_rect, SDL_MapRGB( Surf_Screen->format, 0x00, 0x00, 0x00 ) );

            // Draw background

            // Draw tiles
            C_Area::AreaControl.OnRender(Surf_Screen, -C_Camera::CameraControl.GetX(), -C_Camera::CameraControl.GetY());

            // Draw entities
            for(unsigned int i = 0; i < C_Entity::EntityList.size(); i++) {
                if( !C_Entity::EntityList[i] ) { continue; }
                C_Entity::EntityList[i]->OnRender( Surf_Screen );
            }

            // Draw overlay tiles

            // Draw UI

            // Update the Surf_Screen surface
            SDL_Flip( Surf_Screen );
        }

    Would be nice if someone could give a little input. Thanks.

    Read the article

  • What is the best way to manage large 3d worlds (i.e minecraft style)?

    - by SomeXnaChump
    After playing Minecraft I was marvelling a bit at their large worlds, but at the same time finding it extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume it's fairly slow because:

    A) It's written in Java, and as most of the actual spatial partitioning and other memory management activities happen there, it would be slower than a native C++ version.
    B) They are not partitioning their world very well.

    I could be wrong on both assumptions; however, it got me thinking about the best way to manage large worlds. As it is more of a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now I am assuming that to make this sort of world performant you would need to:

    a) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can just partition on a per-cube level, not a per-triangle level.
    b) Use some algorithm to work out whether the blocks within the scene can currently be seen, as blocks closer to the user could occlude the blocks behind them, making it pointless to render them.
    c) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees.

    I guess there is no right answer to this, but I would be interested to see people's opinions on the subject.
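
    As a point of reference, a minimal sketch of chunk-based storage (the names and the 16-block chunk size are assumptions, and negative coordinates are ignored for brevity): the world becomes a map of small fixed-size chunks instead of one giant array, so only nearby chunks need to exist in memory and each block stays a single byte.

        #include <array>
        #include <cstdint>
        #include <unordered_map>

        constexpr int CHUNK = 16;   // blocks per chunk edge

        struct Chunk {
            std::array<uint8_t, CHUNK * CHUNK * CHUNK> blocks{};   // 0 = BlockType.Empty

            uint8_t& at(int x, int y, int z) {
                return blocks[(y * CHUNK + z) * CHUNK + x];
            }
        };

        struct World {
            std::unordered_map<uint64_t, Chunk> chunks;

            // Packs three 21-bit chunk coordinates into one map key.
            static uint64_t key(int cx, int cy, int cz) {
                return ((uint64_t)(cx & 0x1FFFFF) << 42) |
                       ((uint64_t)(cy & 0x1FFFFF) << 21) |
                        (uint64_t)(cz & 0x1FFFFF);
            }

            // Assumes non-negative world coordinates for brevity.
            uint8_t& block(int x, int y, int z) {
                Chunk& c = chunks[key(x / CHUNK, y / CHUNK, z / CHUNK)];   // created on demand
                return c.at(x % CHUNK, y % CHUNK, z % CHUNK);
            }
        };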

    Read the article

  • How to render a texture partly transparent?

    - by megamoustache
    Good morning StackOverflow, I'm having a bit of a problem right now, as I can't seem to find a way to render part of a texture transparently with OpenGL. Here is my setup: I have a quad, representing a wall, covered with this texture (converted to PNG for uploading purposes). Obviously, I want the wall to be opaque, except for the panes of glass. There is another plane behind the wall which is supposed to show a landscape. I want to see the landscape through the window. Each texture is a TGA with an alpha channel. The "landscape" is rendered first, then the wall. I thought that would be sufficient to achieve this effect, but apparently it's not the case. The part of the window supposed to be transparent is black, and the landscape only appears when I move past the wall. I tried to fiddle with glBlendFunc() after having enabled blending, but it doesn't seem to do the trick. Am I forgetting an important step? Thank you :)
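
    For context, a sketch of a typical alpha-blended setup (assumed structure with hypothetical draw calls, not the poster's code): the opaque landscape is rendered first, then blending is enabled for the wall so the glass panes' alpha lets the landscape show through.

        #include <GL/gl.h>

        void drawLandscape();   // hypothetical: draws the opaque background plane
        void drawWall();        // hypothetical: draws the textured wall quad (RGBA texture)

        void renderScene()
        {
            drawLandscape();                                     // opaque pass first

            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // texture alpha controls transparency
            glDepthMask(GL_FALSE);                               // don't write depth for the see-through pass
            drawWall();
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
        }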

    Read the article

  • 3D Box Collision Data Import

    - by cboe
    I'm trying to implement a collision system using oriented bounding boxes, using a center for the box, its extents as a 3D vector and a rotation matrix, which is all stuff I picked up online and which seems to be somewhat standard. Detecting the center is no problem, so I'm going to leave that out here. My problem, however, is importing the data from a 3D file. Say I've placed a box with 2 units of length on each side, aligned to the world axes. The logical result here is extents of 1,1,1 and an identity matrix for rotation - easy. However, I'm stuck when I rotate the box in the 3D program, say 30 degrees on each axis. How would I parse the box? I only have these 8 vertices as information, and I guess what I would need to do is find out the rotation of said box, apply it to the vertices so they are aligned to the world axes, and then calculate the extents out of that. How do I get the rotation of the box when I only have the vertex information of the box available?
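
    One way this is commonly done (a sketch that assumes a known corner ordering, which may not match the exported file): take the three edges that share one corner; normalizing them gives the box's local axes (the rotation matrix columns), and their half-lengths are the extents.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3  sub (Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static float len (Vec3 v)         { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
        static Vec3  norm(Vec3 v)         { float l = len(v); return { v.x / l, v.y / l, v.z / l }; }

        // Assumed layout: corners[0] shares an edge with corners[1], corners[2] and corners[3].
        void buildOBB(const Vec3 corners[8], Vec3 axes[3], Vec3& extents)
        {
            Vec3 e0 = sub(corners[1], corners[0]);
            Vec3 e1 = sub(corners[2], corners[0]);
            Vec3 e2 = sub(corners[3], corners[0]);
            axes[0] = norm(e0);  axes[1] = norm(e1);  axes[2] = norm(e2);   // rotation matrix columns
            extents = { len(e0) * 0.5f, len(e1) * 0.5f, len(e2) * 0.5f };   // half edge lengths
        }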

    Read the article

  • Triangles in a C++ STL Vector as an Objective-C member sometimes draws incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When it fails, a vertex is dislocated. The polygon is consistently drawn with the wrong vertex. I checked that the vector is correct during initialization, even when it's wrongly drawn. I'm using Cocos2d. The class member:

        @interface Polygon : CCSprite {
            std::vector<float> triangleVertices;
        }

    The draw function called in [Polygon draw]:

        + (void)drawTrianglesWithVertices:(const std::vector<float> &)v
        {
            //glEnableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, v.size());

            //glDisableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }

    Any ideas?
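
    One thing worth double-checking (offered as a guess, not a confirmed diagnosis): glDrawArrays takes a vertex count, not a float count, so with two floats per vertex the last argument would be half the vector's size, for example:

        #include <vector>

        // Sketch: each vertex is 2 floats, so the draw count is v.size() / 2.
        static void drawTriangles2D(const std::vector<float>& v)
        {
            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(v.size() / 2));
        }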

    Read the article

  • Size of an image imported with FreeImage

    - by KaiserJohaan
    I'm having a bit of a brainfart and I can't quite grasp what I'm doing wrong. It's quite simple: I am importing an image with FreeImage (http://freeimage.sourceforge.net/), which has a method FreeImage_GetBits that returns a pointer to the first byte of the image data. I then try to load all the data into memory using (bitsPerPixel / 8) * pixelWidth * pixelHeight, like this:

        uint32_t bitsPerPixel   = FreeImage_GetBPP(bitmap);    // resolves to 24
        uint32_t widthInPixels  = FreeImage_GetWidth(bitmap);  // resolves to 1024
        uint32_t heightInPixels = FreeImage_GetHeight(bitmap); // resolves to 1024

        // container is a std::vector<uint8_t>
        pkgMaterial.mTextureData.insert(pkgMaterial.mTextureData.begin(),
                                        FreeImage_GetBits(bitmap),
                                        FreeImage_GetBits(bitmap) + ((bitsPerPixel / 8) * widthInPixels * heightInPixels));

    I have a JPG which is 31 kilobytes in size on disk. Yet when I load it using the above formula, I see the vector is then filled with 3145728 bytes, which is approximately 3145 kilobytes. What am I doing wrong?
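
    For reference, the arithmetic that formula works out to (the 31 KB figure is the compressed JPEG size on disk, while the formula measures the decoded bitmap in memory):

        (24 / 8) * 1024 * 1024 = 3 * 1,048,576 = 3,145,728 bytes of raw pixel data (about 3 MiB)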

    Read the article

  • A separate solution for types, etc?

    - by hayer
    I'm currently in the process of updating some engine code (which does not work, so it is more like creating an engine). I've decided to swap over to SFML (instead of my own crappy renderer, window manager and audio), Box2D (since I need physics, but have none), and some small utils I've built myself. The problem is that each of the projects mentioned above uses different types for things like Vector2, etc. So, to the question: is it a good idea to replace the Box2D and SFML vectors with my own vector class (which is one of my better implementations)? My idea then was to have a separate .lib with all my classes that should be shared between all the projects in the solution.
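
    A common middle ground (a sketch of one option, assuming the classic Box2D and SFML 2 headers; not a recommendation from the post): keep your own Vector2 in the shared .lib and convert at the library boundaries instead of replacing b2Vec2 or sf::Vector2f.

        #include <Box2D/Box2D.h>
        #include <SFML/System/Vector2.hpp>

        struct Vec2 { float x, y; };   // the shared-library type

        inline b2Vec2       toBox2D  (const Vec2& v)          { return b2Vec2(v.x, v.y); }
        inline sf::Vector2f toSFML   (const Vec2& v)          { return sf::Vector2f(v.x, v.y); }
        inline Vec2         fromBox2D(const b2Vec2& v)        { return { v.x, v.y }; }
        inline Vec2         fromSFML (const sf::Vector2f& v)  { return { v.x, v.y }; }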

    Read the article

  • is this the correct way to use glTexCoordPointer?

    - by RubyKing
    Hey all. Just trying to work out how to use the function glTexCoordPointer. Here is the man page, http://www.opengl.org/sdk/docs/man/xhtml/glTexCoordPointer.xml, which states that I must set a pointer to the first element of the array that uses the texture coordinate. Here is my array:

        static const GLfloat GUIVertices[] = {
            // FIRST QUAD
             1.0f,  1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f,  1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f,  0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f,  0.94f, 0.0f, 1.0f, 1.0f, 1.0f,

            // 2ND QUAD
            // x     y      z     w     X     Y
             1.0f, -1.0f,  0.0f, 1.0f, 1.0f, 0.0f,
            -1.0f, -1.0f,  0.0f, 1.0f, 0.0f, 0.0f,
            -1.0f, -0.94f, 0.0f, 1.0f, 0.0f, 1.0f,
             1.0f, -0.94f, 0.0f, 1.0f, 1.0f, 1.0,
        };

    But how do I set the pointer correctly? Like this,

        glTexCoordPointer(1, GL_FLOAT, 6, reinterpret_cast(29 * sizeof(float)) );

    for the fifth element on the 2nd quad, first row? Any help is appreciated.
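
    For comparison, a sketch of how an interleaved layout like this is usually addressed (assuming client-side arrays with no VBO bound; with a VBO bound the final argument becomes a byte offset instead of a pointer): each vertex is 6 floats (x y z w, then the two texture coordinates), the stride is given in bytes, and the texcoord pointer starts 4 floats into the same array.

        static void setupGUIPointers(const GLfloat* guiVertices)      // e.g. the GUIVertices array above
        {
            const GLsizei stride = 6 * sizeof(GLfloat);               // bytes from one vertex to the next
            glVertexPointer(4, GL_FLOAT, stride, guiVertices);        // x y z w
            glTexCoordPointer(2, GL_FLOAT, stride, guiVertices + 4);  // X Y of the same vertex
        }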

    Read the article

  • How to properly code in Unity? [on hold]

    - by Vincent B.
    I'm fairly new to Unity (though I've touched it and made a few prototypes with it) and I'd like to know how I'm supposed to work with it. I'm a programming student, so I'm used to C/C++ with SDL/SFML, writing code and only using input/graphics/network libs. I followed a few Unity guides and they were much more about dragging & dropping onto scenes and a bit of scripting to activate it all, which disturbed me. So I found a way to only use one GameObject and a singleton to launch code and display stuff (for 2D games at least). At the end of the day I make games without using "Instantiate" or such at all. Is this the right way? Am I supposed to do this? How populated are your scenes (in a professional environment)? When should I stop coding and start using the editor?

    Read the article

  • How do I build a matrix to translate one set of points to another?

    - by dotminic
    I've got 3 points in space that define a triangle. I've also got a vertex buffer made up of three vertices, which also represent a triangle, that I will refer to as a "model". How can I find the matrix M that will transform the vertices in my buffer to those 3 points in space? For example, let's say my three points A, B, C are at locations:

        A.x = 10, A.y = 16, A.z = 8
        B.x = 12, B.y = 11, B.z = 1
        C.x = 19, C.y = 12, C.z = 3

    Given these coordinates, how can I build a matrix that will translate and rotate my model such that both triangles occupy the exact same world space? That is, I want the first vertex in my triangle model to have the same coordinates as A, the second to have the same coordinates as B, and the same goes for C. NB: I'm using instanced rendering, so I can't just give each vertex the same position as my 3 points; I have a set of three points defining a triangle, and only three vertices in my vertex buffer.
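
    A sketch of the usual construction for this (not from the post; it assumes neither triangle is degenerate): build a 3x3 frame from each triangle's edge vectors plus their cross product, then map one frame onto the other.

        E  = [ P1-P0   P2-P0   (P1-P0) x (P2-P0) ]    columns from the model triangle (P0, P1, P2)
        F  = [ B-A     C-A     (B-A) x (C-A) ]        columns from the target triangle (A, B, C)

        M3 = F * E^-1                                 rotation/scale mapping model edges onto target edges
        x' = A + M3 * (x - P0)                        full transform; as a 4x4: T(A) * M3 * T(-P0)

    This sends P0 to A, P1 to B and P2 to C by construction.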

    Read the article

  • Really weird GL Behaviour, uniform not "hitting" proper mesh? LibGdx

    - by HaMMeReD
    OK, I've got some code where you select blocks on a grid. The selection works: I can modify the blocks to be raised when selected, and the correct one shows. I set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry (in the sequence) is the one rendered light. To debug the logic I decided to move the selected block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic and it knows the correct one is selected, it is showing in the correct place, and it is rendering it correctly. When there is only one, it works properly. There is a video of the bug in action; note how the highlighted and elevated blocks are not the same block. The code for the color and my renderer is here (for the items being drawn):

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }

    Read the article

  • What's involved in resetting the graphics device?

    - by Donutz
    I'm playing with XNA 4.0, VS2010. I've created a window (not maximized) and drawn some sprites. All is good until I resize the window, after which the sprites stop displaying or only partially display. I'm pretty sure it has something to do with needing to reset the device or something, but can't find any clear instructions or sample code. It's not just a case of needing to increase the preferredbackbuffer size, because even if I shrink the window I get this symptom. I've looked at the source code that I was able to get from Microsoft before they shut down XNA, but it doesn't actually explain anything. Any help or advice? If it makes any difference I'm creating DrawableGameComponents and doing my updates and drawing in their Draw/Update routines.

    Read the article

  • STL for games, yea or nay?

    - by munificent
    Every programming language has its standard library of containers, algorithms, and other helpful stuff. With languages like C#, Java, and Python, it's practically inconceivable to use the language without its standard lib. Yet, on many C++ games I've worked on, we either didn't use the STL at all, used a tiny fraction of it, or used our own implementation. It's hard to tell if that was a sound decision for our games, or one simply made out of ignorance of the STL. So... is the STL a good fit or not?

    Read the article

  • What is the standard way of using Q15 values?

    - by Alex
    To process 8-bit pixels and do things like gamma correction without losing information, we normally upsample the values, work in 16 bits or whatever, and then downsample them back to 8 bits. Now, this is a somewhat new area for me, so please excuse any incorrect terminology. For my needs I have chosen to work in "non-standard" Q15, where I only use the upper half of the range (0.0-1.0), and 0x8000 represents 1.0 instead of -1.0. This makes it much easier to calculate things in C. But I ran into a problem with SSSE3. It has the PMULHRSW instruction, which multiplies Q15 numbers, but it uses the "standard" Q15 range of [-1, 1-2^-15], so multiplying (my) 0x8000 (1.0) by 0x4000 (0.5) gives 0xC000 (-0.5), because it thinks 0x8000 is -1. This is quite annoying. What am I doing wrong? Should I keep my pixel values in the 0x0000-0x7FFF range? This kind of defeats the purpose of it being a fixed-point format. Is there a way around this? Maybe some trick? Is there some kind of definitive treatise on Q15 which discusses all this?
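
    For reference, a scalar sketch of the rounding multiply that PMULHRSW performs per 16-bit lane, written in plain C/C++ for standard-range Q15 (where 0x7FFF is just under 1.0):

        #include <cstdint>

        // (a * b + 0x4000) >> 15, i.e. a Q15 x Q15 -> Q15 multiply with rounding.
        static inline int16_t q15_mul(int16_t a, int16_t b)
        {
            int32_t p = (int32_t)a * (int32_t)b;   // Q30 intermediate
            p += 1 << 14;                          // round to nearest
            return (int16_t)(p >> 15);
        }

        // Example: q15_mul(0x4000, 0x4000) == 0x2000   (0.5 * 0.5 = 0.25)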

    Read the article

  • Split Texture2D Across Line

    - by Simie
    Background

    I'm using a texture as a damage map, represented by an array of floats, for displaying damage effects on a space ship in a process. An upside of this is being able to 'slice' ships in half, using an input damage map with a line down the middle, as shown in the diagram below.

    Question

    However, I'm encountering problems separating this split ship into two different entities. I would like to be able to split the image down the 'damage line', in a process similar to this:
    I don't have much idea where to start detecting the red line above. Any advice would be welcome.
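
    A sketch of one way to start (an illustration, not from the post, and it assumes the split line's endpoints are already known): classify each pixel by the sign of the 2D cross product against the line, and copy it into one of two otherwise-transparent buffers.

        #include <cstdint>
        #include <vector>

        // src is w*h RGBA pixels; pixels on one side of the line (x0,y0)-(x1,y1) go to `a`,
        // the rest go to `b`; the other buffer keeps a fully transparent pixel there.
        void splitAlongLine(const std::vector<uint32_t>& src, int w, int h,
                            float x0, float y0, float x1, float y1,
                            std::vector<uint32_t>& a, std::vector<uint32_t>& b)
        {
            a.assign(src.size(), 0u);
            b.assign(src.size(), 0u);
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x) {
                    float side = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0);
                    (side >= 0.0f ? a : b)[y * w + x] = src[y * w + x];
                }
            }
        }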

    Read the article

  • What soft can create deb repository with several versions of the same package?

    - by bessarabov
    I want to create my own deb repository to store some packages. I've tried reprepro and it works fine, except for one fundamental missing feature: reprepro can't store several versions of the same package in the repository. But the ability to store several versions of the same package is essential to me, so I'm asking what software can do such a thing. Here is a piece of the reprepro FAQ that shows it can't do it:

        3.1) Can I have two versions of a package in the same distribution?
        -------------------------------------------------------------------
        Sorry, this is not possible right now, as reprepro heavily optimizes at only
        having one version of a package in a suite-type-component-architecture
        quadruple. You can have different versions in different architectures and/or
        components within the same suite. (Even different versions of an
        "Architecture: all" package in different architectures of the same suite.)
        But within the same architecture and the same component of a distribution
        it is not possible.

    Read the article

  • Player position triggering teleports

    - by jSherz
    I'm developing a Minecraft plugin (bukkit) in which a server admin can create 'portals' - a small region that will teleport any players who enter it. I have the teleportation sorted and I know how I could define areas that the player's position could be tested against. This would involve an ArrayList containing the zones and then hooking the PlayerMoveEvent so that the ArrayList is searched each time for a matching portal region. Although this method would work, I doubt that it would be very efficient when 100+ players are all moving around at the same time. Is there a better way of checking a player position against a set of 'zones' / regions?
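
    A sketch of one way to avoid scanning every region on every move (an illustration in C++, not Bukkit API code): bucket the portal regions into a coarse spatial hash keyed by chunk-sized cells, so each position check only looks at the few portals sharing its cell.

        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Portal { int minX, minY, minZ, maxX, maxY, maxZ; /* destination, etc. */ };

        static uint64_t cellKey(int x, int z)   // 16-block cells, like chunk coordinates
        {
            return ((uint64_t)(uint32_t)(x >> 4) << 32) | (uint32_t)(z >> 4);
        }

        struct PortalIndex {
            std::unordered_map<uint64_t, std::vector<Portal>> cells;

            void add(const Portal& p) {
                for (int cx = p.minX >> 4; cx <= p.maxX >> 4; ++cx)          // every cell the region touches
                    for (int cz = p.minZ >> 4; cz <= p.maxZ >> 4; ++cz)
                        cells[((uint64_t)(uint32_t)cx << 32) | (uint32_t)cz].push_back(p);
            }

            const Portal* lookup(int x, int y, int z) const {
                auto it = cells.find(cellKey(x, z));
                if (it == cells.end()) return nullptr;                       // no portals near this position
                for (const Portal& p : it->second)
                    if (x >= p.minX && x <= p.maxX && y >= p.minY && y <= p.maxY &&
                        z >= p.minZ && z <= p.maxZ)
                        return &p;
                return nullptr;
            }
        };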

    Read the article

  • Help me choose an engine

    - by Gjorgji
    So far I've been trying to make an RTS in pygame, but I feel like 2D is not enough, and pygame has me doing a lot of stuff that I would not like doing. What I would like to be doing is working on the AI and gameplay and such, and not worrying too much about how to display stuff, physics and the like. Unity has Boo, which is supposed to be similar to Python; I wonder if that could work. How similar is it to Python, and should I use it? Other options, as far as I can see, are the Ogre3D Python bindings and UDK. Which would best suit my needs?

    Read the article
