Search Results

Search found 32277 results on 1292 pages for 'module development'.


  • Limit The Game Loop?

    - by user1758938
    How do I make a game run at the same speed on any machine? That may be unclear, so here are examples. This would loop too fast: while(true) { GetInput(); Render(); } And this just won't work either, which is hard to explain: while(true) { GetInput(); Render(); Sleep(16); } Basically, how do I sync the game to any frame rate and still have input and game logic running at a consistent rate?
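
    A common fix is a fixed timestep with an accumulator, as popularized by Glenn Fiedler's "Fix Your Timestep!" article. Below is a minimal C++ sketch of that pattern; the GetInput/Update/Render stubs are hypothetical stand-ins for the question's functions:

      #include <chrono>

      void GetInput() {}
      void Update(double dt) {}  // game logic advances only in fixed steps
      void Render() {}

      int main() {
          using clock = std::chrono::steady_clock;
          const double dt = 1.0 / 60.0;   // fixed logic step: 60 Hz
          double accumulator = 0.0;
          auto previous = clock::now();
          while (true) {
              auto now = clock::now();
              accumulator += std::chrono::duration<double>(now - previous).count();
              previous = now;
              // Run as many fixed steps as real elapsed time demands, so the
              // game advances at the same speed on fast and slow machines.
              while (accumulator >= dt) {
                  GetInput();
                  Update(dt);
                  accumulator -= dt;
              }
              Render();  // render as often as the machine allows
          }
      }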


  • Setting up cube map texture parameters in OpenGL

    - by KaiserJohaan
    I see a lot of tutorials and sources use the following code snippet when defining each face of a cube map: for (i = 0; i < 6; i++) glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, InternalFormat, size, size, 0, Format, Type, NULL); Is it safe to assume GL_TEXTURE_CUBE_MAP_POSITIVE_X + i will properly iterate over the remaining cube map targets: GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, etc.?
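
    For what it's worth, the answer is yes: the OpenGL specification assigns the six cube-map face targets consecutive enum values (0x8515 through 0x851A) in exactly the order positive X, negative X, positive Y, negative Y, positive Z, negative Z, so the + i arithmetic is safe. A compile-time sanity check, as a sketch:

      #include <GL/gl.h>

      // Every conforming GL header defines the faces as consecutive values:
      static_assert(GL_TEXTURE_CUBE_MAP_NEGATIVE_X == GL_TEXTURE_CUBE_MAP_POSITIVE_X + 1, "");
      static_assert(GL_TEXTURE_CUBE_MAP_POSITIVE_Y == GL_TEXTURE_CUBE_MAP_POSITIVE_X + 2, "");
      static_assert(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y == GL_TEXTURE_CUBE_MAP_POSITIVE_X + 3, "");
      static_assert(GL_TEXTURE_CUBE_MAP_POSITIVE_Z == GL_TEXTURE_CUBE_MAP_POSITIVE_X + 4, "");
      static_assert(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z == GL_TEXTURE_CUBE_MAP_POSITIVE_X + 5, "");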


  • Finding the contact point with SAT

    - by Kai
    The Separating Axis Theorem (SAT) makes it simple to determine the Minimum Translation Vector, i.e., the shortest vector that can separate two colliding objects. However, what I need is the vector that separates the objects along the direction the penetrating object is moving, i.e., back to the contact point. I drew a picture to help clarify. There is one box, moving from the before to the after position. In its after position, it intersects the grey polygon. SAT can easily return the MTV, which is the red vector. I am looking to calculate the blue vector. My current solution performs a binary search between the before and after positions until the length of the blue vector is known to within a certain threshold. It works, but it's a very expensive calculation, since the collision between the shapes needs to be recomputed every iteration. Is there a simpler and/or more efficient way to find the contact point vector?
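
    One cheaper route, sketched below with the caveat that it assumes the box first touches the same face the MTV points away from (near corners this breaks and a swept test is needed): the separation distance along the movement direction is the penetration depth divided by the cosine between the reversed movement direction and the contact normal, so no binary search is required.

      #include <cmath>

      struct Vec2 { float x, y; };

      float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
      float length(Vec2 v) { return std::sqrt(dot(v, v)); }

      // mtv: minimum translation vector from SAT (the red vector).
      // moveDir: normalized direction the box travelled (before -> after).
      // Returns the vector that moves the box back along its path until it
      // just touches the polygon (the blue vector, pointing backwards).
      Vec2 separationAlongMovement(Vec2 mtv, Vec2 moveDir) {
          float depth = length(mtv);                  // penetration depth
          Vec2 n = { mtv.x / depth, mtv.y / depth };  // contact normal
          float cosine = -dot(moveDir, n);            // movement opposes the normal
          float t = depth / cosine;                   // distance to back up
          return Vec2{ -moveDir.x * t, -moveDir.y * t };
      }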


  • creating bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes, one for each cube drawn, so I can intersect the boxes with the ray cast from my mouse position, but I have no idea how. I've created a list that stores the boxes, but where do I get the min and max values for each box? for (int x = 0; x < mapHeight; x++) { for (int z = 0; z < mapWidth; z++) { cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass); boxList.Add(something here); } } public Cube(GraphicsDevice graphicsDevice) { device = graphicsDevice; var vertices = new List<VertexPositionTexture>(); BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1)); BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1)); BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0)); BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0)); BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1)); BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0)); cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly); cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray()); } There aren't any clearly defined variables for the bounds of each cube created, so what do I create the bounding box from?
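
    Since each cube is built from unit coordinates (0 to 1 on every axis) and then placed at (x, map[x, z], z), its bounds are simply that corner and the corner plus one unit; XNA's BoundingBox(min, max) constructor takes exactly those two points. A sketch of the idea (plain C++ here rather than XNA, and the names are mine):

      struct Vec3 { float x, y, z; };

      // Axis-aligned box from min/max corners, mirroring XNA's BoundingBox.
      struct AABB { Vec3 min, max; };

      // A cube whose local vertices span 0..1 and whose world position is
      // `corner` occupies exactly this box:
      AABB cubeBounds(Vec3 corner) {
          return AABB{ corner,
                       Vec3{ corner.x + 1.0f, corner.y + 1.0f, corner.z + 1.0f } };
      }

    So inside the double loop, the box for each cube would be built from new Vector3(x, map[x, z], z) as the minimum corner and that corner plus (1, 1, 1) as the maximum.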


  • Efficient visualization of a large voxelized volume

    - by Alejandro Piad
    Let's consider a large voxelized volume stored in an octree or any other convenient structure. This volume represents, for instance, a landscape, where each block is either empty (air) or has a specific material that will later be used to apply a texture. Voxels that are next to each other represent connected sections of the surface. What I need is an algorithm to generate a mesh from these voxels that represents the volume, with the following characteristics: all the "holes" in the voxelized volume are correct; all the connections are correct, i.e. seamless; and the surface appears smooth. In a broad sense, I want to somehow preserve the surface topology, meaning that connected sections remain connected in the resulting mesh and that the surface has a curvature that responds to the voxel topology. Imagine trying to render the Minecraft world but getting the mountain slopes to be smooth instead of blocky.
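
    The standard family of algorithms for this is Marching Cubes, (naive) Surface Nets and Dual Contouring, all of which extract a smooth, seamless mesh from an occupancy or density field. As a flavor of the idea, here is a sketch of the vertex-placement step of naive Surface Nets over a boolean grid (the solid() accessor is hypothetical, and the step that joins neighboring cells' vertices into quads is omitted):

      struct Vec3 { float x, y, z; };

      bool solid(int x, int y, int z);  // hypothetical voxel occupancy accessor

      // For each 2x2x2 cell whose corners are neither all solid nor all empty,
      // place one vertex at the average of the midpoints of the edges where
      // occupancy flips. Joining the vertices of adjacent cells into quads
      // yields a connected, smooth surface instead of axis-aligned cube faces.
      Vec3 cellVertex(int cx, int cy, int cz) {  // call only on mixed cells
          static const int corner[8][3] = {
              {0,0,0},{1,0,0},{0,1,0},{1,1,0},{0,0,1},{1,0,1},{0,1,1},{1,1,1}};
          static const int edge[12][2] = {
              {0,1},{2,3},{4,5},{6,7},{0,2},{1,3},{4,6},{5,7},{0,4},{1,5},{2,6},{3,7}};
          Vec3 sum = {0, 0, 0};
          int crossings = 0;
          for (int e = 0; e < 12; ++e) {
              const int* a = corner[edge[e][0]];
              const int* b = corner[edge[e][1]];
              bool sa = solid(cx + a[0], cy + a[1], cz + a[2]);
              bool sb = solid(cx + b[0], cy + b[1], cz + b[2]);
              if (sa != sb) {  // the surface crosses this edge; use its midpoint
                  sum.x += (a[0] + b[0]) * 0.5f;
                  sum.y += (a[1] + b[1]) * 0.5f;
                  sum.z += (a[2] + b[2]) * 0.5f;
                  ++crossings;
              }
          }
          return Vec3{ cx + sum.x / crossings,
                       cy + sum.y / crossings,
                       cz + sum.z / crossings };
      }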


  • How do people get around the Carmack's Reverse patent?

    - by Rei Miyasaka
    Apparently, Creative has a patent on Carmack's Reverse, and they successfully forced Id to modify their techniques for the source drop, as well as to include EAX in Doom 3. But Carmack's Reverse is discussed quite often and apparently it's a good choice for deferred shading, so it's presumably used in a lot of other high-budget productions too. Even though it's unlikely that Creative would go after smaller companies, I'm wondering how the bigger studios get around this problem. Do they just cross their fingers and hope Creative doesn't troll them, or do they just not use Carmack's Reverse at all?


  • Orthographic Projection Issue

    - by Nick
    I have a problem with my ortho matrix. The engine uses the perspective projection fine, but for some reason the ortho matrix is messed up (see the screenshots below). Can anyone see what is happening here? At the moment I am taking the projection matrix * transform (translate, rotate, scale) and passing that to the vertex shader to multiply the vertices by. VIDEO showing the same scene, rotating on the Y axis: http://youtu.be/2feiZAIM9Y0 void Matrix4f::InitOrthoProjTransform(float left, float right, float top, float bottom, float zNear, float zFar) { m[0][0] = 2 / (right - left); m[0][1] = 0; m[0][2] = 0; m[0][3] = 0; m[1][0] = 0; m[1][1] = 2 / (top - bottom); m[1][2] = 0; m[1][3] = 0; m[2][0] = 0; m[2][1] = 0; m[2][2] = -1 / (zFar - zNear); m[2][3] = 0; m[3][0] = -(right + left) / (right - left); m[3][1] = -(top + bottom) / (top - bottom); m[3][2] = -zNear / (zFar - zNear); m[3][3] = 1; } This is what happens with the ortho matrix: [screenshot] This is the perspective matrix: [screenshot]
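
    For reference, the matrix above maps depth to [0, 1], which is the Direct3D convention; a GL-convention orthographic matrix maps depth to [-1, 1]. A sketch in the same storage layout as the question's code (translation in the m[3][*] row; whether that layout matches the working perspective matrix is an assumption on my part). Also note the parameter order, since (left, right, top, bottom) is easy to mix up with glOrtho's (left, right, bottom, top):

      // GL-style orthographic projection, depth mapped to [-1, 1].
      void InitOrthoProjTransformGL(float m[4][4],
                                    float left, float right,
                                    float bottom, float top,
                                    float zNear, float zFar) {
          m[0][0] = 2.0f / (right - left); m[0][1] = 0; m[0][2] = 0; m[0][3] = 0;
          m[1][0] = 0; m[1][1] = 2.0f / (top - bottom); m[1][2] = 0; m[1][3] = 0;
          m[2][0] = 0; m[2][1] = 0; m[2][2] = -2.0f / (zFar - zNear); m[2][3] = 0;
          m[3][0] = -(right + left) / (right - left);
          m[3][1] = -(top + bottom) / (top - bottom);
          m[3][2] = -(zFar + zNear) / (zFar - zNear);
          m[3][3] = 1;
      }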


  • Game Engine with a real time renderer

    - by Maik Klein
    I have been studying computer graphics for three semesters and we just started with OpenGL. I really enjoy it and want to create my own little engine for learning purposes. I have already read tons of different forum posts and looked at the following engines: Panda3D, Ogre3D, NeoAxis, Irrlicht and Horde3D (graphics only). Now, I don't want to use something like Unity or CryEngine because I want to start more low-level. Which of those engines is suited for realtime rendering, i.e. what CryEngine offers: no baked lightmaps? Or which at least gives me the option to add a realtime renderer?


  • Algorithm to reduce a bitmap mask to a list of rectangles?

    - by mos
    Before I go spend an afternoon writing this myself, I thought I'd ask if there was an implementation already available, even just as a reference. The first image is an example of a bitmap mask that I would like to turn into a list of rectangles. A bad algorithm would return every set pixel as a 1x1 rectangle. A good algorithm would look like the second image, where it returns the coordinates of the orange and red rectangles. The fact that the rectangles overlap doesn't matter, just that only two are returned. To summarize, the ideal result would be these two rectangles (x, y, w, h): [ { 3, 1, 2, 6 }, { 1, 3, 6, 2 } ]
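
    In case it helps, here is a greedy sketch that produces exactly the two overlapping rectangles for the example above: for every set pixel not yet covered, grow a maximal horizontal run, then grow that strip downward while each row repeats it, allowing overlap with earlier rectangles. It is simple and fast, though not guaranteed minimal for every mask:

      #include <vector>

      struct Rect { int x, y, w, h; };

      std::vector<Rect> maskToRects(const std::vector<std::vector<bool>>& mask) {
          const int h = (int)mask.size();
          const int w = h ? (int)mask[0].size() : 0;
          std::vector<std::vector<bool>> covered(h, std::vector<bool>(w, false));
          std::vector<Rect> out;
          for (int y = 0; y < h; ++y)
              for (int x = 0; x < w; ++x) {
                  if (!mask[y][x] || covered[y][x]) continue;
                  int rw = 0;
                  while (x + rw < w && mask[y][x + rw]) ++rw;  // extend right
                  auto rowFull = [&](int yy) {
                      for (int i = 0; i < rw; ++i)
                          if (!mask[yy][x + i]) return false;
                      return true;
                  };
                  int rh = 1;
                  while (y + rh < h && rowFull(y + rh)) ++rh;  // extend down
                  for (int yy = y; yy < y + rh; ++yy)          // mark as covered
                      for (int i = 0; i < rw; ++i)
                          covered[yy][x + i] = true;
                  out.push_back(Rect{ x, y, rw, rh });
              }
          return out;
      }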


  • Traffic estimation for a multiplayer flash game

    - by Steve Addington
    Hey, I want to know if my rough traffic estimates are right. This is for a pretty simple realtime Flash game in the style of HaxBall (but not a soccer game); here's a video of it: http://www.youtube.com/watch?v=z_xBdFg1RcI So here comes my estimate; I don't know if it is realistic, and I hope someone can help me. Consider the packet attached as a typical one, sent every 200ms: it's 148 bytes plus 64 bytes of header, which makes it around a 200-byte packet. The server will receive 200 bytes x 6 players x 5 times a second = 6000 bytes/s = 5.86 KB/s = 46.9 kbit/s, plus it has to send everything back to the players, so at this point we are at 94 kbit/s. The server receives all the information, performs the definitive calculation and sends the new positions to all players in a bigger packet of around 900 bytes, which has to be delivered to the other 6: 900 bytes x 6 players x 5 times a second = 27000 bytes/s = 26.4 KB/s = 211 kbit/s. Overall that would be about 26 KB of outbound traffic per second, which is something like 130 MB of traffic per hour for a 6-player room. But somehow I think the numbers are too high; that would be a lot of traffic for such a simple game. Did I calculate something wrong?
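
    Sanity-checking the arithmetic (decimal units, and trusting the question's own 200-byte and 900-byte packet sizes):

      inbound:  200 B x 6 players x 5/s =  6,000 B/s  ~  48 kbit/s
      outbound: 900 B x 6 players x 5/s = 27,000 B/s  ~ 216 kbit/s
      total:    (6,000 + 27,000) B/s x 3,600 s/h     ~ 119 MB per hour per room

    So the ~130 MB/h figure is the right order of magnitude, and nearly all of it comes from the 900-byte broadcast; if that is too much, shrinking that packet (e.g. sending deltas instead of full state) or lowering the 5 Hz tick rate is where the savings would be.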


  • Why does Farseer 2.x store temporaries as members and not on the stack? (.NET)

    - by Andrew Russell
    UPDATE: This question refers to Farseer 2.x. The newer 3.x doesn't seem to do this. I'm using Farseer Physics Engine quite extensively at the moment, and I've noticed that it seems to store a lot of temporary value types as members of the class, and not on the stack as one might expect. Here is an example from the Body class: private Vector2 _worldPositionTemp = Vector2.Zero; private Matrix _bodyMatrixTemp = Matrix.Identity; private Matrix _rotationMatrixTemp = Matrix.Identity; private Matrix _translationMatrixTemp = Matrix.Identity; public void GetBodyMatrix(out Matrix bodyMatrix) { Matrix.CreateTranslation(position.X, position.Y, 0, out _translationMatrixTemp); Matrix.CreateRotationZ(rotation, out _rotationMatrixTemp); Matrix.Multiply(ref _rotationMatrixTemp, ref _translationMatrixTemp, out bodyMatrix); } public Vector2 GetWorldPosition(Vector2 localPosition) { GetBodyMatrix(out _bodyMatrixTemp); Vector2.Transform(ref localPosition, ref _bodyMatrixTemp, out _worldPositionTemp); return _worldPositionTemp; } It looks like it's a by-hand performance optimisation, but I don't see how this could possibly help performance. (If anything, I think it would hurt by making objects much larger.)


  • projection / view matrix: the object is bigger than it should and depth does not affect vertices

    - by Francesco Noferi
    I'm currently trying to write a C 3D software rendering engine from scratch, just for fun and to get an insight into what OpenGL does behind the scenes and what 90's programmers had to do on DOS. I have written my own matrix library and tested it without noticing any issues, but when I tried projecting the vertices of a simple 2x2x2 cube at the origin as seen by a basic camera at (0, 0, 10), the cube appears way bigger than the application's window. If I scale the vertices' coordinates down by 8 times, I can see a proper cube centered on the screen. This cube doesn't seem to be in perspective: when seen from the front, the back vertices perfectly overlap with the front ones, so I'm quite sure it's not correct. This is how I create the view and projection matrices (vec4_initd initializes the vectors with w=0, vec4_initw initializes the vectors with w=1): void mat4_lookatlh(mat4 *m, const vec4 *pos, const vec4 *target, const vec4 *updirection) { vec4 fwd, right, up; // fwd = norm(pos - target) fwd = *target; vec4_sub(&fwd, pos); vec4_norm(&fwd); // right = norm(cross(updirection, fwd)) vec4_cross(updirection, &fwd, &right); vec4_norm(&right); // up = cross(right, forward) vec4_cross(&fwd, &right, &up); // orientation and translation matrices combined vec4_initd(&m->a, right.x, up.x, fwd.x); vec4_initd(&m->b, right.y, up.y, fwd.y); vec4_initd(&m->c, right.z, up.z, fwd.z); vec4_initw(&m->d, -vec4_dot(&right, pos), -vec4_dot(&up, pos), -vec4_dot(&fwd, pos)); } void mat4_perspectivefovrh(mat4 *m, float fovdegrees, float aspectratio, float near, float far) { float h = 1.f / tanf(ftoradians(fovdegrees / 2.f)); float w = h / aspectratio; vec4_initd(&m->a, w, 0.f, 0.f); vec4_initd(&m->b, 0.f, h, 0.f); vec4_initw(&m->c, 0.f, 0.f, -far / (near - far)); vec4_initd(&m->d, 0.f, 0.f, (near * far) / (near - far)); } This is how I project my vertices: void device_project(device *d, const vec4 *coord, const mat4 *transform, int *projx, int *projy) { vec4 result; mat4_mul(transform, coord, &result); *projx = result.x * d->w + d->w / 2; *projy = result.y * d->h + d->h / 2; } void device_rendervertices(device *d, const camera *camera, const mesh meshes[], int nmeshes, const rgba *color) { int i, j; mat4 view, projection, world, transform, projview; mat4 translation, rotx, roty, rotz, transrotz, transrotzy; int projx, projy; // vec4_unity = (0.f, 1.f, 0.f, 0.f) mat4_lookatlh(&view, &camera->pos, &camera->target, &vec4_unity); mat4_perspectivefovrh(&projection, 45.f, (float)d->w / (float)d->h, 0.1f, 1.f); for (i = 0; i < nmeshes; i++) { // world matrix = translation * rotz * roty * rotx mat4_translatev(&translation, meshes[i].pos); mat4_rotatex(&rotx, ftoradians(meshes[i].rotx)); mat4_rotatey(&roty, ftoradians(meshes[i].roty)); mat4_rotatez(&rotz, ftoradians(meshes[i].rotz)); mat4_mulm(&translation, &rotz, &transrotz); // transrotz = translation * rotz mat4_mulm(&transrotz, &roty, &transrotzy); // transrotzy = transrotz * roty = translation * rotz * roty mat4_mulm(&transrotzy, &rotx, &world); // world = transrotzy * rotx = translation * rotz * roty * rotx // transform matrix mat4_mulm(&projection, &view, &projview); // projview = projection * view mat4_mulm(&projview, &world, &transform); // transform = projview * world = projection * view * world for (j = 0; j < meshes[i].nvertices; j++) { device_project(d, &meshes[i].vertices[j], &transform, &projx, &projy); device_putpixel(d, projx, projy, color); } } } This is how the cube and camera are initialized: // test mesh cube = &meshlist[0]; mesh_init(cube, "Cube", 8);
cube->rotx = 0.f; cube->roty = 0.f; cube->rotz = 0.f; vec4_initw(&cube->pos, 0.f, 0.f, 0.f); vec4_initw(&cube->vertices[0], -1.f, 1.f, 1.f); vec4_initw(&cube->vertices[1], 1.f, 1.f, 1.f); vec4_initw(&cube->vertices[2], -1.f, -1.f, 1.f); vec4_initw(&cube->vertices[3], -1.f, -1.f, -1.f); vec4_initw(&cube->vertices[4], -1.f, 1.f, -1.f); vec4_initw(&cube->vertices[5], 1.f, 1.f, -1.f); vec4_initw(&cube->vertices[6], 1.f, -1.f, 1.f); vec4_initw(&cube->vertices[7], 1.f, -1.f, -1.f); // main camera vec4_initw(&maincamera.pos, 0.f, 0.f, 10.f); maincamera.target = vec4_zerow; and, just to be sure, this is how I compute matrix multiplications: void mat4_mul(const mat4 *m, const vec4 *va, vec4 *vb) { vb->x = m->a.x * va->x + m->b.x * va->y + m->c.x * va->z + m->d.x * va->w; vb->y = m->a.y * va->x + m->b.y * va->y + m->c.y * va->z + m->d.y * va->w; vb->z = m->a.z * va->x + m->b.z * va->y + m->c.z * va->z + m->d.z * va->w; vb->w = m->a.w * va->x + m->b.w * va->y + m->c.w * va->z + m->d.w * va->w; } void mat4_mulm(const mat4 *ma, const mat4 *mb, mat4 *mc) { mat4_mul(ma, &mb->a, &mc->a); mat4_mul(ma, &mb->b, &mc->b); mat4_mul(ma, &mb->c, &mc->c); mat4_mul(ma, &mb->d, &mc->d); }
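
    One thing stands out (an observation about the code above, not a certainty): device_project never divides by the resulting w, so the perspective terms written by mat4_perspectivefovrh never affect x and y. That would explain both the oversized cube and the missing depth, since without the divide the projection degenerates to a scaled orthographic one. The viewport mapping also scales NDC coordinates by the full window size rather than half of it. A sketch of the projection step with both fixed, using the question's own types:

      void device_project(device *d, const vec4 *coord, const mat4 *transform,
                          int *projx, int *projy)
      {
          vec4 result;
          mat4_mul(transform, coord, &result);
          /* Homogeneous -> NDC: w encodes distance from the camera, so this
           * divide is what makes far vertices converge toward the center. */
          if (result.w != 0.f) {
              result.x /= result.w;
              result.y /= result.w;
          }
          /* NDC [-1,1] -> screen: scale by half the size, then offset.
           * (The minus flips y if the framebuffer origin is top-left.) */
          *projx = (int)(result.x * (d->w / 2) + d->w / 2);
          *projy = (int)(-result.y * (d->h / 2) + d->h / 2);
      }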


  • Platformer gravity where gravity is greater than tile size

    - by Sara
    I am making a simple grid-tile-based platformer with basic physics. I have 16px tiles, and after playing with gravity it seems that to get a nice, quick, Mario-like jump feel, the player ends up moving faster than 16px per frame by the time they reach the ground. The problem is that they clip through the first layer of tiles before collisions are detected; then, when I move the player to the top of the colliding tile, they snap to the bottom-most tile. I have tried limiting their maximum velocity to less than 16px, but it does not look right. Are there any standard approaches to solving this? Thanks.
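
    The standard approaches are swept (continuous) collision tests or, simpler, substepping: split each frame's movement into steps no larger than a tile, so the player can never skip an entire row of tiles between checks. A sketch of the substepping idea, with moveAndCollide standing in for whatever per-step movement and collision resolution the game already does:

      #include <algorithm>
      #include <cmath>

      const float TILE_SIZE = 16.0f;

      void moveAndCollide(float dx, float dy);  // hypothetical existing routine

      // Split one frame's displacement into substeps shorter than a tile, so
      // a fast fall cannot tunnel through a 16px tile between collision checks.
      void moveSubstepped(float dx, float dy) {
          float dist = std::sqrt(dx * dx + dy * dy);
          int steps = std::max(1, (int)std::ceil(dist / (TILE_SIZE - 1.0f)));
          for (int i = 0; i < steps; ++i)
              moveAndCollide(dx / steps, dy / steps);
      }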


  • What are the valid DepthBuffer Texture formats in DirectX 11? And which are also valid for a staging resource?

    - by sebf
    I am trying to read the contents of the depth buffer into main memory so that my CPU-side code can do Some Stuff™ with it. I am attempting to do this by creating a staging resource which can be read by the CPU, and copying the contents of the depth buffer into it before reading it. I keep encountering errors, however, because of what I believe are incompatibilities between the resource format and the view formats. Threads like these lead me to believe it is possible in DX11 to access the depth buffer as a resource, and that I can create a resource with a typeless format and have the views interpret it as another format, but I cannot get it to work. What are the valid formats for a resource to be used as the depth buffer? Which of these are also valid for a CPU-accessible staging resource?
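
    From memory (so treat this as a sketch to verify against the docs, not gospel): depth-stencil views accept D16_UNORM, D24_UNORM_S8_UINT, D32_FLOAT and D32_FLOAT_S8X24_UINT, and to also read the resource you create the texture itself typeless (R32_TYPELESS for a D32 buffer, R24G8_TYPELESS for D24S8) and give each view its typed interpretation. The staging copy then uses the same typeless format, since CopyResource wants matching formats:

      #include <d3d11.h>

      // CPU-readable staging twin of a depth texture that was created with
      // DXGI_FORMAT_R32_TYPELESS (and a DXGI_FORMAT_D32_FLOAT DSV).
      ID3D11Texture2D* CreateDepthStaging(ID3D11Device* device,
                                          UINT width, UINT height) {
          D3D11_TEXTURE2D_DESC desc = {};
          desc.Width = width;
          desc.Height = height;
          desc.MipLevels = 1;
          desc.ArraySize = 1;
          desc.Format = DXGI_FORMAT_R32_TYPELESS;  // must match the depth resource
          desc.SampleDesc.Count = 1;               // staging cannot be multisampled
          desc.Usage = D3D11_USAGE_STAGING;        // CPU-accessible memory pool
          desc.BindFlags = 0;                      // staging binds to nothing
          desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
          ID3D11Texture2D* staging = nullptr;
          device->CreateTexture2D(&desc, nullptr, &staging);
          return staging;
      }

      // Per readback:
      //   context->CopyResource(staging, depthTexture);
      //   D3D11_MAPPED_SUBRESOURCE mapped;
      //   context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
      //   ... read floats row by row via mapped.pData and mapped.RowPitch ...
      //   context->Unmap(staging, 0);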


  • Trade offs of linking versus skinning geometry

    - by Jeff
    What are the trade-offs inherent in linking geometry to a node versus using skinned geometry? Specifically: What capabilities do you gain or lose from using each method? What are the performance impacts of doing one over the other? In what specific situations would you want to do one over the other? In addition, do the answers to these questions tend to be engine-specific? If so, how much?


  • How do I import service references to Unity3D?

    - by Timothy Williams
    I'm attempting to access a service reference in Unity. I need two: the SOAP framework and a separate service called ContentVault. The respective service URLs are: SOAP: http://api.microsofttranslator.com/V2/Soap.svc ContentVault: http://ioun.wizards.com/ContentVault.svc Both services import fine into Visual Studio. I've tried everything I can think of, but they won't work with Unity; I just get various errors (changing depending on which solution I'm trying out). I've attempted using svcutil to export the services as external scripts, but all I got was a bunch of using errors. I've tried converting the code to work with .NET 2.0, to no avail; I've even tried making the services into .DLLs, with no success. How could I get these services working with Unity?


  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points: Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light? Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
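
    On the lighting question specifically, one common convention (a sketch of one scheme, not the only one): the artist picks an ordinary LDR color and a separate scalar intensity, and the engine multiplies them in linear space after removing the color's gamma, so the hue stays what the artist picked while the multiplier carries the dynamic range:

      #include <cmath>

      struct RGB { float r, g, b; };

      // sRGB [0,1] -> linear, per channel (the standard IEC 61966-2-1 curve).
      float srgbToLinear(float c) {
          return (c <= 0.04045f) ? c / 12.92f
                                 : std::pow((c + 0.055f) / 1.055f, 2.4f);
      }

      // HDR radiance of a light authored as a color-picker value plus a
      // linear-space intensity multiplier.
      RGB lightRadiance(RGB pickedSrgb, float intensity) {
          return RGB{ srgbToLinear(pickedSrgb.r) * intensity,
                      srgbToLinear(pickedSrgb.g) * intensity,
                      srgbToLinear(pickedSrgb.b) * intensity };
      }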


  • Poor mobile performance when running from Eclipse

    - by Yajirobe_LOL
    So after weeks of thinking my rendering code was bad, I accidentally discovered the following, running my game on a Nexus S: from Eclipse (Debug As - Android application): 12fps; from the device while still attached to USB (still getting log info in Eclipse): 24fps; from the device while not attached via USB: 56fps. I was wondering if anyone else has issues like this? I mean, the problem really isn't a problem, since the final release build will likely have good performance, but for the time being I don't want to have to keep (un)plugging my device in and out when testing code all day long. Is there some remedy for this, or does anyone have any input/advice? Thanks.


  • Resolution independent physics

    - by user46877
    I'm making a game like Doodle Jump but don't know how to make the physics scale across multiple resolutions. I also can't find anything related to this on Google. Right now I'm scaling the game using letterboxing, and I tested scaling the jump height with this code: gravity = graphics.getHeight() * 0.001f; jumpVel = graphics.getHeight() * -0.04f; ... velY += gravity; y += velY; But if I test this on my smartphone or on emulators with different resolutions, I always get a slightly different jump height. I know that Farseer is resolution independent. How can I replicate this in my game? Thanks in advance.
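
    The usual fix is to run the physics in fixed world units and convert to pixels only when drawing, so gravity and jump velocity never depend on screen size; this is also why Farseer feels resolution independent, as it simulates in meters. A sketch with made-up numbers and names:

      // Physics lives in world units; only the renderer knows about pixels.
      const float GRAVITY  = 25.0f;   // units/s^2, tuned once for every device
      const float JUMP_VEL = -12.0f;  // units/s

      struct Player { float y = 0.0f; float velY = 0.0f; };

      // dt in seconds: identical behavior on every resolution and frame rate.
      void update(Player& p, float dt) {
          p.velY += GRAVITY * dt;
          p.y    += p.velY * dt;
      }

      // The only place resolution enters: the screen shows `viewHeight` world
      // units, however many pixels that happens to be on this device.
      float toScreenY(float worldY, int screenHeightPx, float viewHeight = 16.0f) {
          const float pixelsPerUnit = screenHeightPx / viewHeight;
          return worldY * pixelsPerUnit;
      }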


  • Affect movieclip scale from a .as doc to another

    - by Madcowe
    I've been working on a game following a tutorial on the internet. The game is an avoider in which your avatar has to dodge falling objects. The way it is made: I have a DocumentClass which addChild()s the screen you should be seeing and removeChild()s the one you were on. For example: first it loads the menuScreen; when you press play it unloads the menu and loads the playScreen; when you die it unloads the playScreen and loads the gameOverScreen; and from the gameOverScreen you can press the SHOP button to go to the shop. From here on I'm on my own and not following any tutorials. The shop has a button that is supposed to set the avatar's X and Y scale to 0.5, but the problem is: how do I make that work? I tried creating a sharedObject.data.avatarSize, where the store's size button would run something like: sharedObject.data.avatarSize *= 0.5; And in AvoiderGame.as, which holds most of the actual game, I tried putting this after the avatar's creation: scaleX.avatar = sharedObject.data.avatarSize; scaleY.avatar = sharedObject.data.avatarSize; This did not work; it gives me error 1009, saying I can't access something that is null. I tried this before "using" the sharedObject: if( sharedObject.data.avatarSize == null ) { sharedObject.data.avatarSize = 1; } But it did not work. So now I'm not sure what to do. I know we should reduce global variables as much as we can, but how do I do it? Also, if it helps, I'm using Flash CS5 and working with AS3.


  • box2d tween what am I missing

    - by philipp
    I have a Box2D project and I want to tween a kinematic body from position A to position B. The tween function (I got it from this blog): function easeInOut(t , b, c, d ){ if ( ( t /= d / 2 ) < 1){ return c/2 * t * t * t * t + b; } return -c/2 * ( (t -= 2 ) * t * t * t - 2 ) + b; } where t is the current frame, b the start value, c the total change and d the total number of frames (in my case). I am using the method introduced in this lesson of Todd's Box2D tutorials to move the body by setting its linear velocity, so here is the relevant update code of the sprite: if( moveData.current == moveData.total ){ this._body.SetLinearVelocity( new b2Vec2() ); return; } var t = easeNone( moveData.current, 0, 1, moveData.total ); var step = moveData.length / moveData.total * t; var dir = moveData.direction.Copy(); //this is the line that I think might be corrected dir.Multiply( t * moveData.length * fps /moveData.total ) ; var bodyPosition = this._body.GetWorldCenter(); var idealPosition = bodyPosition.Copy(); idealPosition.Add( dir ); idealPosition.Subtract( bodyPosition.Copy() ); moveData.current++; this._body.SetLinearVelocity( idealPosition ); moveData is an object that holds the global values of the tween, namely: the current frame (int), total frames (int), the length of the total distance to travel (float), the direction vector (targetposition - bodyposition) (b2Vec2), and the start of the tween (bodyposition) (b2Vec2). The goal is to tween the body over a fixed number of frames: moveData.total. The value of t is always between 0 and 1, and the only thing that is not working correctly is the resulting distance the body travels. I need to calculate the multiplier for the direction vector. What am I missing to make it work? Greetings, philipp
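
    A sketch of one way to look at it (my notation, not the question's exact code): if ease(t) is the fraction of the path covered after t frames, the velocity to feed SetLinearVelocity for the next frame is the difference between the next eased position and the current one, times the frame rate. In other words, the multiplier should come from the change in the eased value between two frames, not from the eased value itself:

      #include <cmath>

      struct Vec2 { float x, y; };

      // Quartic ease-in-out: fraction of the path covered after t of d frames.
      float easeInOut(float t, float b, float c, float d) {
          if ((t /= d / 2) < 1) return c / 2 * t * t * t * t + b;
          return -c / 2 * ((t -= 2) * t * t * t - 2) + b;
      }

      // Velocity (units per second) for frame `current`, tweening from `start`
      // to `target` over `total` frames at `fps` physics steps per second.
      Vec2 tweenVelocity(Vec2 start, Vec2 target,
                         int current, int total, float fps) {
          float fNow  = easeInOut((float)current,        0.0f, 1.0f, (float)total);
          float fNext = easeInOut((float)current + 1.0f, 0.0f, 1.0f, (float)total);
          float step  = (fNext - fNow) * fps;  // fraction covered per second
          return Vec2{ (target.x - start.x) * step,
                       (target.y - start.y) * step };
      }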


  • Alternative ways to make a battle system in a mobile indie game more fun and engaging

    - by Matt Beckman
    I'm developing an indie game for mobile platforms, and part of the game involves a PvP battle system (where the target player is passive). My vision is simple: the active player can select a weapon/item, then attack/use, and display the calculated outcome. I have a concept for battle modifiers that affect stats to make it more interesting, but I'm not convinced this by itself will add enough of a fun factor. I've received some inspiration from the game engine that powers Modern War/Kingdom Age/Crime City, but I want more control to make it more fun. In those games, you don't have the option to select weapons or use items, and the "battling" screen is simply 3D eye candy. Since this will be an indie game, I won't be spending $$$ on a team of professional 3D artists/animators, so my edge needs to be different. What are some alternatives to expensive eye candy that you or others have used to make a non-3D PvP game more fun and engaging? Did the alternative concepts survive the release?


  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders: 1) When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls, one for each of the N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset; see point 3.) 2) When dealing with logical objects (player, tree, cube, etc.), should I always use the same shader, or should I customize (or be able to customize) the shaders for each object? Considering an entity class, should I create and define the shader at object initialization? 3) When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO object at 0,0 and define a uniform offset to pass to the shader to calculate its real position? 4) Could you make an example of Data Oriented Design applied to creating a generic zombie class? Is the following good? ZombieList class: class ZombieList { GLuint vbo; // generic zombie vertex model std::vector<color>; // object default color std::vector<texture>; // objects textures std::vector<vector3D>; // objects positions public: unsigned int create(); // return object id void move(unsigned int objId, vector3D offset); void rotate(unsigned int objId, float angle); void setColor(unsigned int objId, color c); void setPosition(unsigned int objId, color c); void setTexture(unsigned int, unsigned int); ... void update(Player*); // move towards player, attack if near } Example: Player p; ZombieList zl; unsigned int first = zl.create(); zl.setPosition(first, vector3D(50, 50)); zl.setTexture(first, texture("zombie1.png")); ... while (running) { // main loop ... zl.update(&p); zl.draw(); // draw every zombie }
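
    On point 1, the usual OpenGL 2.1 pattern is exactly that: one shared VBO plus one glDrawArrays() call per zombie, with per-instance data passed as uniforms, since true instancing only arrives with later extensions (ARB_instanced_arrays) or GL 3.x. A sketch, with hypothetical uniform names and assuming the position attribute was bound to location 0 at link time:

      #include <GL/glew.h>
      #include <vector>

      struct Zombie { float x, y, z, angle; };

      // One shared mesh, many zombies: bind the buffer once, then issue one
      // cheap draw per instance with only its uniforms changing.
      void drawZombies(GLuint program, GLuint vbo, GLsizei vertexCount,
                       const std::vector<Zombie>& zombies) {
          glUseProgram(program);
          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          glEnableVertexAttribArray(0);
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

          const GLint offsetLoc = glGetUniformLocation(program, "u_offset"); // hypothetical
          const GLint angleLoc  = glGetUniformLocation(program, "u_angle");  // hypothetical

          for (const Zombie& z : zombies) {  // N draws, zero buffer re-uploads
              glUniform3f(offsetLoc, z.x, z.y, z.z);
              glUniform1f(angleLoc, z.angle);
              glDrawArrays(GL_TRIANGLES, 0, vertexCount);
          }
          glDisableVertexAttribArray(0);
      }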


  • Kepler orbit : get position on the orbit over time

    - by Artefact2
    I'm developing a space-simulation related game, and I am having some trouble implementing the movement of binary stars, like this: The two stars orbit their centroid, and their trajectories are ellipses. I basically know how to determine the angular velocity at any position, but not the angular velocity over time. So, for a given angle, I can very easily compute the star's position (cf. http://en.wikipedia.org/wiki/Orbit_equation). I want to get the star's position over time. The parametric equations of the ellipse work, but don't give the correct speed: { X(t) = a×cos(t) ; Y(t) = b×sin(t) }. Is it possible, and how can it be done?
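
    The standard route is Kepler's equation: from time, compute the mean anomaly M = 2*pi*t/T; solve M = E - e*sin(E) for the eccentric anomaly E (Newton's method converges in a handful of iterations); then read the position off the ellipse. A sketch, with the orbited focus at the origin:

      #include <cmath>

      // Position on a Kepler orbit at time t.
      // a = semi-major axis, e = eccentricity (0 <= e < 1), T = orbital period.
      void keplerPosition(double a, double e, double T, double t,
                          double* x, double* y) {
          const double PI = 3.14159265358979323846;
          double M = 2.0 * PI * std::fmod(t, T) / T;  // mean anomaly: uniform in time
          double E = M;                               // initial guess
          for (int i = 0; i < 8; ++i)                 // Newton on f(E) = E - e sin E - M
              E -= (E - e * std::sin(E) - M) / (1.0 - e * std::cos(E));
          double b = a * std::sqrt(1.0 - e * e);      // semi-minor axis
          *x = a * (std::cos(E) - e);                 // coordinates relative to the focus
          *y = b * std::sin(E);
      }

    For the binary pair, each star follows its own ellipse around the barycenter with the same period, on opposite sides of it, so the same solve serves both with the positions negated and scaled by the mass ratio.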


  • How to label a cuboid?

    - by usha
    Hi, this is how my 3D cuboid looks; I have attached the complete code. I want to label this cuboid with different names across its sides. How is this possible using OpenGL on Android? public class MyGLRenderer implements Renderer { Context context; Cuboid rect; private float mCubeRotation; // private static float angleCube = 0; // Rotational angle in degree for cube (NEW) // private static float speedCube = -1.5f; // Rotational speed for cube (NEW) public MyGLRenderer(Context context) { rect = new Cuboid(); this.context = context; } public void onDrawFrame(GL10 gl) { // TODO Auto-generated method stub gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset the model-view matrix gl.glTranslatef(0.2f, 0.0f, -8.0f); // Translate right and into the screen gl.glScalef(0.8f, 0.8f, 0.8f); // Scale down (NEW) gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f); // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f); // rotate about the axis (1,1,1) (NEW) rect.draw(gl); mCubeRotation -= 0.15f; //angleCube += speedCube; } public void onSurfaceChanged(GL10 gl, int width, int height) { // TODO Auto-generated method stub if (height == 0) height = 1; // To prevent divide by zero float aspect = (float)width / height; // Set the viewport (display area) to cover the entire window gl.glViewport(0, 0, width, height); // Setup perspective projection, with aspect ratio matches viewport gl.glMatrixMode(GL10.GL_PROJECTION); // Select projection matrix gl.glLoadIdentity(); // Reset projection matrix // Use perspective projection GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select model-view matrix gl.glLoadIdentity(); // Reset } public void onSurfaceCreated(GL10 gl, EGLConfig config) { // TODO Auto-generated method stub gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set color's clear-value to black gl.glClearDepthf(1.0f); // Set depth's clear-value to farthest gl.glEnable(GL10.GL_DEPTH_TEST); // Enables depth-buffer for hidden surface removal gl.glDepthFunc(GL10.GL_LEQUAL); // The type of depth testing to do gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view gl.glShadeModel(GL10.GL_SMOOTH); // Enable smooth shading of color gl.glDisable(GL10.GL_DITHER); // Disable dithering for better performance }} public class Cuboid{ private FloatBuffer mVertexBuffer; private FloatBuffer mColorBuffer; private ByteBuffer mIndexBuffer; private float vertices[] = { //width,height,depth -2.5f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, -1.0f, -2.5f, 1.0f, -1.0f, -2.5f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -2.5f, 1.0f, 1.0f }; private float colors[] = { // R,G,B,A COLOR 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f }; private byte indices[] = { // VERTEX 0,1,2,3,4,5,6,7 REPRESENTATION FOR FACES 0, 4, 5, 0, 5, 1, 1, 5, 6, 1, 6, 2, 2, 6, 7, 2, 7, 3, 3, 7, 4, 3, 4, 0, 4, 7, 6, 4, 6, 5, 3, 0, 1, 3, 1, 2 }; public Cuboid() { ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mVertexBuffer = byteBuf.asFloatBuffer(); mVertexBuffer.put(vertices); mVertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(colors.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mColorBuffer = byteBuf.asFloatBuffer(); mColorBuffer.put(colors); mColorBuffer.position(0); mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
mIndexBuffer.put(indices); mIndexBuffer.position(0); } public void draw(GL10 gl) { gl.glFrontFace(GL10.GL_CW); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer); gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); } } public class Draw3drect extends Activity { private GLSurfaceView glView; // Use GLSurfaceView // Call back when the activity is started, to initialize the view @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer this.setContentView(glView); // This activity sets to GLSurfaceView } // Call back when the activity is going into the background @Override protected void onPause() { super.onPause(); glView.onPause(); } // Call back after onPause() @Override protected void onResume() { super.onResume(); glView.onResume(); } }

