Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

Page 517/1021 | < Previous Page | 513 514 515 516 517 518 519 520 521 522 523 524  | Next Page >

  • Box2D `ApplyLinearImpulse` is not working whereas `SetLinearVelocity` works

    - by Narek
    I need to mimic jumping behavior for the player in my game. The player consists of two fixtures, a circle and a rectangle; the rectangle is a sensor I use to detect the ground. At the point of jumping I do this: float impulseY = body->GetMass() * PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180); body->ApplyLinearImpulse(b2Vec2(0, impulseY), body->GetWorldCenter(), true); and the player does not jump. But when I do this: body->SetLinearVelocity(b2Vec2(0, PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180))); my player jumps. Also, when I change the rectangle shape to a normal (non-sensor) shape, it works again. Why? Just in case, here are the parameters of my rectangular sensor: b2PolygonShape boxShape; boxShape.SetAsBox(width * 0.5/2/PTM_RATIO, height * 0.2/2/PTM_RATIO, b2Vec2(0, -height * 0.4 /PTM_RATIO), 0); b2FixtureDef boxFixtureDef; boxFixtureDef.friction = 0; boxFixtureDef.restitution = 0; boxFixtureDef.density = 1; boxFixtureDef.isSensor = true; boxFixtureDef.userData = static_cast<void*>(PLAYER_GROUP);
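
    Side note on the relation between the two calls: a linear impulse changes velocity by impulse / mass, so applied from (vertical) rest, an impulse of mass * v and SetLinearVelocity(v) should produce the same takeoff speed. A minimal sketch under that assumption (Box2D 2.3-style API; the helper name is hypothetical):

    ```cpp
    #include <Box2D/Box2D.h>

    // Hypothetical helper: launch a body upward with a given takeoff speed (m/s).
    // An impulse of mass * delta_v changes the velocity by delta_v, so starting
    // from rest this matches SetLinearVelocity exactly.
    void Jump(b2Body* body, float takeoffSpeed)
    {
        b2Vec2 v = body->GetLinearVelocity();
        float impulseY = body->GetMass() * (takeoffSpeed - v.y);   // kg * m/s
        body->ApplyLinearImpulse(b2Vec2(0.0f, impulseY), body->GetWorldCenter(),
                                 /*wake=*/true);
    }
    ```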

    Read the article

  • XNA 4: RenderTarget2D textures getting transparent on fullscreen

    - by Shashwat
    I'm generating a Texture2D object using RenderTarget2D as in the following code public static Texture2D GetTextTexture(string text, Vector2 position, SpriteFont font, Color foreColor, Color backColor, Texture2D background=null) { int width = (int)font.MeasureString(text).X; int height = (int)font.MeasureString(text).Y; GraphicsDevice device = Settings.game.GraphicsDevice; SpriteBatch spriteBatch = Settings.game.spriteBatch; RenderTarget2D renderTarget = new RenderTarget2D(device, width, height, false, SurfaceFormat.Color, DepthFormat.Depth24Stencil8, device.PresentationParameters.MultiSampleCount, RenderTargetUsage.DiscardContents); device.SetRenderTarget(renderTarget); device.Clear(backColor); spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque); if (background != null) spriteBatch.Draw(background, new Rectangle(0, 0, 70, 70), Color.White); spriteBatch.End(); spriteBatch.Begin(); spriteBatch.DrawString(font, text, position, foreColor, 0, new Vector2(0), 0.8f, SpriteEffects.None, 0); spriteBatch.End(); device.SetRenderTarget(null); ResetGraphicsDeviceSettings(); return (Texture2D)renderTarget; } It's working all fine. But when I ToggleFullScreen() (and vice-versa), the previous textures are getting transparent. However, the new textures after that are being generated correctly. What can be the reason for this?

    Read the article

  • FPS camera specification

    - by user1095108
    I remember I once composed an FPS viewing transformation as a composition of 3 rotations, each with an angle as a parameter. The first angle specified the left/right rotation around the y-axis, the second angle the up/down rotation around the x-axis, and the third the rotation around the z-axis. The viewing transformation was therefore specified by 3 angles. Naturally, this transformation had a gimbal lock, depending on the order in which the rotations were performed. What should I look at to derive my viewing transformation without the gimbal lock? I know the "lookAt" method already, but I consider that cumbersome. EDIT: My first guess is to do the first 2 rotations to get a viewing direction and then apply an axis-angle rotation around this axis.
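
    One common way to avoid the lock is to store only two angles and always rebuild the view from them in a fixed order, never chaining incremental rotations. A sketch with column-vector matrices (the axis convention here is an assumption):

    $$ C = T(\text{eye}) \cdot R_y(\text{yaw}) \cdot R_x(\text{pitch}), \qquad V = C^{-1} = R_x(-\text{pitch}) \cdot R_y(-\text{yaw}) \cdot T(-\text{eye}) $$

    With yaw always taken about the world up axis, pitch about the camera's local right axis, pitch clamped to (-90°, 90°) and no roll stored at all, the degenerate configuration is never reached; it is the third rotation in the question that reintroduces the problem.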

    Read the article

  • Trouble with Collada bones

    - by KyleT
    I have a Collada file with a rigged mesh. I've read the node tags in the library_visual_scenes tag, extracted the matrix for each node, and stored everything in a hierarchical bone structure. My Matrix container is "row major", so I'd store the first float of a matrix tag in the 1st row, 1st column, the second in the 1st row, 2nd column, etc. From what I gather this is the Bind Pose Matrix. After that I went through the tag and extracted the float array in the source tag of the skin tag of the controller for the mesh. I stored each matrix from this float array in its corresponding Bone as the Inverse Bind Matrix. I also extracted the bind-shape-matrix and stored it. Now I'd like to draw the skeleton with OpenGL to see if everything is working correctly before I go about skinning. I iterate once over my bones and multiply a bone's Bind Pose Matrix by its parent's and store that. After that I iterate again over the bones and multiply the result of the previous matrix multiplication by the Inverse Bind Matrix and then by the Bind Shape Matrix. The results look something like this: [0.2, 9.2, 5.8, 1.2 ] [4.6, -3.3, -0.2, -0.1 ] [-1.8, 0.2, -4.2, -3.9 ] [0, 0, 0, 1 ] I've had to go to various sources to get the little understanding of Collada I have, and books about 3D transform matrices can get pretty intense. I've hit a brick wall. If you could read through this and see whether there is something I'm doing wrong, and tell me how I'd go about getting an X,Y,Z at which to draw a point for each of these joints once I've calculated the final transform, I'd really appreciate it.
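
    For reference, the relationships usually used here, written as a sketch with column-vector (right-multiplying) matrices - with row-major storage the multiplication order may need to be flipped, which is an assumption left to the reader:

    $$ W_j = W_{\text{parent}(j)} \cdot L_j, \qquad S_j = W_j \cdot B_j^{-1} \cdot \text{BSM} $$

    where L_j is the joint node's bind-pose (local) matrix, B_j^{-1} its inverse bind matrix, and BSM the bind-shape matrix. To draw the skeleton, the point wanted is just the joint's model-space position, p_j = W_j · (0, 0, 0, 1)^T, i.e. the translation column of the accumulated matrix; the inverse bind and bind-shape matrices only enter once vertices are actually skinned with S_j. A handy sanity check: in the bind pose, W_j · B_j^{-1} should come out close to identity for every joint.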

    Read the article

  • Translate along local axis

    - by Aaron
    I have an object with a position matrix and a rotation matrix (derived from a quaternion, but I digress). I'm able to translate this object along world-relative vectors, but I'm trying to figure out how to translate it along local-relative vectors. So if the object is tilted 45 degrees around its Z-axis, the vector (1, 0, 0) would make it move to the upper right. For world-space translations I simply turn the movement vector into a matrix and multiply it by the position matrix: position_mat = translation_mat * position_mat. For local-space translations I'd think I'd have to work the rotation matrix into that formula, but when I apply a translation over time I see the object spin around instead, no matter where I multiply in the rotation matrix.
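
    A sketch of the usual approach, assuming GLM-style column-major matrices (the asker's math library isn't named, so GLM is an assumption): rotate the local offset vector into world space first, then translate exactly as before. The key point is that the rotation is applied to the offset vector only, not multiplied into the position matrix.

    ```cpp
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Move an object along one of its own axes.
    // rotation: the object's orientation (rotation-only matrix).
    // position: the object's translation matrix.
    void TranslateLocal(glm::mat4& position, const glm::mat4& rotation,
                        const glm::vec3& localOffset)
    {
        // Rotate the local-space offset into world space (w = 0: a direction, not a point).
        glm::vec3 worldOffset = glm::vec3(rotation * glm::vec4(localOffset, 0.0f));

        // Then translate in world space, same as the world-relative case.
        position = glm::translate(glm::mat4(1.0f), worldOffset) * position;
    }
    ```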

    Read the article

  • How to set sprite source coordinates?

    - by ChaosDev
    I am creating my own sprite drawer with DX11 in C++. It works fine, but I don't know how to apply a source rectangle to the texture coordinates of the rendered surface (for animation sprite sheets). //source = (0,0,32,64); //RECT D3DXVECTOR2 t0 = D3DXVECTOR2( 1.0f, 0.0f); D3DXVECTOR2 t1 = D3DXVECTOR2( 1.0f, 1.0f); D3DXVECTOR2 t2 = D3DXVECTOR2( 0.0f, 1.0f); D3DXVECTOR2 t3 = D3DXVECTOR2( 0.0f, 1.0f); D3DXVECTOR2 t4 = D3DXVECTOR2( 0.0f, 0.0f); D3DXVECTOR2 t5 = D3DXVECTOR2( 1.0f, 0.0f); VertexPositionColorTexture vertices[] = { { D3DXVECTOR3( dest.left+dest.right, dest.top, z),D3DXVECTOR4(1,1,1,1), t0}, { D3DXVECTOR3( dest.left+dest.right, dest.top+dest.bottom, z),D3DXVECTOR4(1,1,1,1), t1}, { D3DXVECTOR3( dest.left, dest.top+dest.bottom, z),D3DXVECTOR4(1,1,1,1), t2}, { D3DXVECTOR3( dest.left, dest.top+dest.bottom, z),D3DXVECTOR4(1,1,1,1), t3}, { D3DXVECTOR3( dest.left , dest.top, z),D3DXVECTOR4(1,1,1,1), t4}, { D3DXVECTOR3( dest.left+dest.right, dest.top, z),D3DXVECTOR4(1,1,1,1), t5}, };
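
    A sketch of the mapping: texture coordinates are just the source rectangle expressed as a fraction of the full texture size, substituted for the hard-coded 0s and 1s above. texWidth/texHeight (the pixel dimensions of the sprite sheet) are placeholder names.

    ```cpp
    // Convert a pixel-space source RECT into normalized texture coordinates.
    // source = {left, top, right, bottom} in pixels of the sprite sheet.
    float u0 = source.left   / (float)texWidth;
    float v0 = source.top    / (float)texHeight;
    float u1 = source.right  / (float)texWidth;
    float v1 = source.bottom / (float)texHeight;

    // Same winding as the question's quad: u1 replaces 1 on the right edge,
    // u0 replaces 0 on the left edge, v0 is the top, v1 is the bottom.
    D3DXVECTOR2 t0(u1, v0), t1(u1, v1), t2(u0, v1);
    D3DXVECTOR2 t3(u0, v1), t4(u0, v0), t5(u1, v0);
    ```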

    Read the article

  • Circular class dependency

    - by shad0w
    Is it bad design to have 2 classes which need each other? I'm writing a small game in which I have a GameEngine class which has got a few GameState objects. To access several rendering methods, these GameState objects also need to know the GameEngine class - so it's a circular dependency. Would you call this bad design? I am just asking, because I am not quite sure and at this time I am still able to refactor these things.

    Read the article

  • Best way to calculate unit deaths in browser game combat?

    - by MikeCruz13
    My browser game's combat system is written and mechanically functioning well. It's written in PHP and uses a SQL database. I'm happy with the unit balance in relation to one another. I am, however, a little worried about how I'm calculating unit deaths when one player attacks another, because the deaths seem to pile up a little fast for my taste. In this system, a battle doesn't just trigger, calculate a winner, and end. Instead, it is allowed to go on for several rounds (say one round every 15 mins.) until one side passes a threshold of being too strong for the other player, and players are allowed to send reinforcements between rounds. Each round, units pair up and attack each other. Essentially what I do is calculate the damage (AP = Attack Points, HP = Hit Points): unit's AP * quantity * random factors * other factors (such as attrition). I take that and divide it by the defending unit's HP to find the number of defending-unit casualties. So, for example (simplified to take out some factors), if I have 500 attackers with 50 AP vs 1000 defenders with 100 HP, that's 250 deaths. I wonder if that last step could be handled better to reduce the deaths piling up. Some ideas: Do I just give all the units more HP? Do I cap the attacking unit's AP at the defender's HP, so each attacker kills at most one unit (is that fair if I have a few huge units vs. many small units)? Do I spread the damage around more by bringing the defending unit's quantity into it, i.e. in that scenario some are dead and some are at 50% damage (how would I track this every round)? Are there other, better mathematical approaches?
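
    One way to spread the damage without tracking every unit individually is to carry the leftover damage from round to round; a sketch of the bookkeeping, with c the carried-over damage entering the round:

    $$ D = \text{AP} \cdot Q_{\text{att}} \cdot (\text{random and other factors}), \qquad \text{deaths} = \min\!\Big(Q_{\text{def}},\ \big\lfloor \tfrac{D + c}{\text{HP}} \big\rfloor\Big), \qquad c' = (D + c) \bmod \text{HP} $$

    In the 500 x 50 AP vs. 1000 x 100 HP example this still yields 250 deaths in the first round, so the carry only fixes rounding; to actually slow the pile-up the damage term D itself has to shrink (a global divisor, diminishing returns on attacker quantity, or attrition applied before the division are all options).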

    Read the article

  • Building (simple) stellar systems

    - by space borg
    Hi, I'm currently looking at how to easily simulate some stellar systems (meaning some central stars and then some planets, maybe with satellites), in order to later allow a space-based strategy game (hence with spaceships moving around). This should all be based around time (so the state of each system changes through time). I'm quite struggling with the math behind this topic, for example: ellipse-related math; creating the path from planet A to B with time in mind (the respective positions will change over time)... Do you know of any resources for that? I wouldn't mind even buying books about it. Thanks in advance, space borg. Side note: how to display all this stuff isn't an issue at this point in time; I have simple plans for that (basically sticking to 2D and a "high level view" with no spaceship/planet details, just markers).
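
    For the ellipse/time math, the standard starting point is Kepler's equation. A sketch of the position of a planet at time t on an orbit with semi-major axis a, eccentricity e and period T (two-body approximation, star at a focus, pericenter along +x):

    $$ M = \frac{2\pi t}{T}, \qquad M = E - e\sin E \ \ (\text{solve numerically for } E), \qquad x = a(\cos E - e), \quad y = a\sqrt{1 - e^{2}}\,\sin E $$

    Solving for E takes a few Newton iterations, E ← E − (E − e sin E − M) / (1 − e cos E); and for a strategy game, circular orbits (e = 0) collapse all of this to x = a cos M, y = a sin M, which may be enough.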

    Read the article

  • Models from 3ds max lose their transformations when input into XNA

    - by jacobian
    I am making models in 3ds Max. However, when I export them to .fbx format and then import them into XNA, they lose their scaling. It is most likely something to do with not using the transforms from the model correctly. Is the following code correct (using XNA 3.0)? Matrix[] transforms=new Matrix[playerModel.Meshes.Count]; playerModel.CopyAbsoluteBoneTransformsTo(transforms); // Draw the model. int count = 0; foreach (ModelMesh mesh in playerModel.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.World = transforms[count]* Matrix.CreateScale(scale) * Matrix.CreateRotationX((float)MathHelper.ToRadians(rx)) * Matrix.CreateRotationY((float)MathHelper.ToRadians(ry)) * Matrix.CreateRotationZ((float)MathHelper.ToRadians(rz))* Matrix.CreateTranslation(position); effect.View = view; effect.Projection = projection; effect.EnableDefaultLighting(); } count++; mesh.Draw(); }

    Read the article

  • Video playback in games - formats & decoding

    - by snake5
    What free / non-restrictive open-source solutions (not GPL) are available for decoding game videos? The requirements are simple: a relatively easy-to-use C API; encoded files must be quite small; there must be an application that converts videos from any format (whatever codec is installed on Windows, or an equivalent amount of internally decoded formats); decoding has to happen fairly quickly; bonus points go to file formats that are popular / actively supported and developed.

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place - except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and am having trouble finding a suitable one. FFmpeg is very comprehensive, but is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries - by virtue of using the same resources as VLC Player it is known to be very capable; it also renders to blocks of memory, but the API (at least of the one on Codeplex) is more of a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely: it renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily; it is super simple - all that is needed is a way to load, jump and render a frame programmatically - and ideally it would use the system's codecs and not require an assortment of plugins; it has a permissive license (LGPL or freer); and it has .NET bindings at least (all the better if it is natively managed). Can anyone suggest a lightweight (.NET) library that can take a video file and spit out some frames into a byte[]?

    Read the article

  • Unity: Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics and I'd like to know the right way to render some 2D figures at different points on a wider face of a 3D object. My 3D object is just a cube representing a poker table. I have 2D PNGs for player placeholders and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with one big picture containing all the placeholder figures. However, that would be a waste of memory and thus less efficient. What do you suggest?

    Read the article

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! During this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to pick one option from them all. This has been my decision: 1) I don't have an entity class; it is just an id. 2) I have systems that contain a list of components (the list is homogeneous, I mean RenderSystem will just have RenderComponents). 3) Components will be just data. 4) There will be some kind of "entity prototypes" in a manager or similar, from which we will create entity instances. Ideally they will define the types of components an entity has and its initialization data. 5) Prototype code to create an entity (this is off the top of my head): int id=World::getInstance()->createEntity("entity template"); 6) This will notify all systems that a new entity has been created, and if the entity needs a component that a system handles, that system will add it to the entity. OK, these are the ideas. Let's see if someone can help with the problems: 1) The main problem is the templates that are sent to the systems during the creation process to populate the entity with the needed components. What would you use: an OR'd int? A list of strings? 2) How do I do initialization for components when the entity has been created, and how do I store this in the template? I have thought about having a virtual function in the template that, after the entity is created and populated, gets the components and sets initialization values. 3) Don't you think this is a lot of work for just entity creation? Sorry for the long post; I have tried to lay out my ideas and findings so that others have a starting point, besides exposing my problems. Thanks in advance, Notbad.
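
    For problem 1, a common lightweight choice is a bitmask of component types in the template, which each system can test cheaply when it is notified. A rough C++ sketch of that idea (all names are hypothetical, not from an existing library):

    ```cpp
    #include <cstdint>
    #include <unordered_map>

    using Entity = std::uint32_t;   // an entity is just an id

    // One bit per component type; the template is a mask plus init data.
    enum ComponentBit : std::uint32_t {
        RenderBit  = 1u << 0,
        PhysicsBit = 1u << 1,
        HealthBit  = 1u << 2,
    };

    struct EntityTemplate {
        std::uint32_t components = 0;   // OR'd ComponentBits
        // per-component initialization data would live here as well
    };

    class System {
    public:
        virtual ~System() = default;
        // Called by the world when an entity is created from a template.
        virtual void OnEntityCreated(Entity e, const EntityTemplate& t) = 0;
    };

    class RenderSystem : public System {
        struct RenderComponent { /* just data */ };
        std::unordered_map<Entity, RenderComponent> components_;
    public:
        void OnEntityCreated(Entity e, const EntityTemplate& t) override {
            if (t.components & RenderBit)   // only react to templates that ask for it
                components_.emplace(e, RenderComponent{});
        }
    };
    ```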

    Read the article

  • Java chunk negative number problem

    - by user1990950
    I've got a tile-based map which is divided into chunks. I have a method which puts tiles into this map, and with positive numbers it works, but with negative numbers it won't. This is my setTile method: public static void setTile(int x, int y, Tile tile) { int chunkX = x / Chunk.CHUNK_SIZE, chunkY = y / Chunk.CHUNK_SIZE; IntPair intPair = new IntPair(chunkX, chunkY); world.put(intPair, new Chunk(chunkX, chunkY)); world.get(intPair).setTile(x - chunkX * Chunk.CHUNK_SIZE, y - chunkY * Chunk.CHUNK_SIZE, tile); } This is the setTile method in the chunk class (CHUNK_SIZE is a constant with the value 64): public void setTile(int x, int y, Tile t) { if (x >= 0 && x < CHUNK_SIZE && y >= 0 && y < CHUNK_SIZE) tiles[x][y] = t; } What's wrong with my code?
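
    For what it's worth, Java's integer division truncates toward zero, so for negative coordinates x / CHUNK_SIZE and the matching subtraction do not produce a floor-based chunk index and a local offset in [0, CHUNK_SIZE). The arithmetic the chunk lookup needs is:

    $$ \text{chunkX} = \left\lfloor \frac{x}{64} \right\rfloor, \qquad \text{localX} = x - 64 \cdot \text{chunkX} \in [0, 63] $$

    As a worked example, x = -1 gives chunk 0 and local -1 under truncation (which then fails the bounds check in the chunk's setTile), but chunk -1 and local 63 under the floor form. Math.floorDiv and Math.floorMod (Java 8+) compute exactly the floor versions.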

    Read the article

  • Updating "Inactive" Chunks

    - by Conner Bryan
    In my game, the only chunks (4x4 areas of tiles) in memory are the ones the player is in. However, chunks need to have updates applied to them over time. A (likely) well-known example would be Minecraft: even if the player isn't in a chunk, the wheat still needs to grow over time. My current solution is to call a method and pass in the time since the chunk was last active... but what if the chunk depends on nearby chunks for information, e.g. vines spreading or similar? Are there any reasonable solutions to this problem, or should I simply not depend on nearby chunks?

    Read the article

  • How can I imitate interaction and movement in Diablo II?

    - by user422318
    I'm prototyping a simple browser-based game. It's played from a top-down perspective on a 2D canvas. You left-click on a point on the map, and your character begins walking to it. If you click on a different point on the map, your character begins walking to the new point. It's similar to Diablo II: http://www.youtube.com/watch?v=EvDKt-To6K0&feature=related How can I best imitate this movement system for a player? My ideas so far: track the current coords and the target coords; if the target coords are exactly up, left, right, or down, increment in the appropriate direction until you get there; otherwise the target coords are in a quadrant, and to make the movement look natural the character will have to move diagonally. For example, pretend the target is to the northeast: for each game frame, alternate incrementing the current coordinates in the north and then east directions.
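
    Rather than special-casing the four axis directions and alternating steps, the usual approach is to move along the normalized vector from the current position to the target each frame, which handles every direction uniformly. A small sketch of that per-frame step (written in C++ here, though the idea is language-agnostic; all names are placeholders):

    ```cpp
    #include <cmath>

    struct Vec2 { float x, y; };

    // Advance 'pos' toward 'target' by at most speed * dt, without overshooting.
    void StepToward(Vec2& pos, const Vec2& target, float speed, float dt)
    {
        float dx = target.x - pos.x;
        float dy = target.y - pos.y;
        float dist = std::sqrt(dx * dx + dy * dy);
        float step = speed * dt;

        if (dist == 0.0f || dist <= step) {   // close enough: snap to the target
            pos = target;
            return;
        }
        pos.x += dx / dist * step;            // move along the unit direction
        pos.y += dy / dist * step;
    }
    ```

    Clicking a new point then just replaces the target; the character heads toward it on the next frame.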

    Read the article

  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and try to pass the effect on my Models like that: foreach (ModelMesh mesh in Model.Meshes) { foreach (ModelMeshPart part in mesh.MeshParts) { part.Effect = effect; } mesh.Draw(); } My only issue is that every Model now has its own light source in it. Why is this happening and is this a problem of my shader? Edit: These are the parameters passed to the shader: private void Get_lambertEffect() { if (_lambertEffect == null) _lambertEffect = Engine.LambertEffect; //Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF) _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"]; _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize); //ShadowMap parameters _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix); _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix); _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias); _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture); //Camera view and projection parameters _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix); _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix); _lambertEffect.Parameters["world"].SetValue( Matrix.CreateScale(Size) * world ); //Light and color _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction); _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color); _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient); _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse); _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red)); }

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height. Naturally, after examining the brief paragraph of code: // calculating correction that shifts reflection up/down according to water wave Y position float4 projected_waveheight = mul(float4(input.positionWS.x,input.positionWS.y,input.positionWS.z,1),g_ModelViewProjectionMatrix); float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w; projected_waveheight = mul(float4(input.positionWS.x,-0.8,input.positionWS.z,1),g_ModelViewProjectionMatrix); waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w; reflection_disturbance.y=max(-0.15,waveheight_correction+reflection_disturbance.y); My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where is nothing and the water is just rendered as if there is nothing there or just the sky: Now, that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem is now that I cannot really pinpoint what is the logic behind it. Projecting the actual world space position of a point of the wave/water geometry and then multiplying by -.5f, only to take another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code seem to indicate it was derived with trial and error because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide): float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w; And then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them: waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w; By removing the divide by 2, I see no difference in quality improvement (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y, why is that? This redundancy and the seemingly arbitrary selection of -.8f and -0.15f lead me to conclude that this might be a combination of heuristics/guess work. Is there a logical underpinning to this or is it just a desperate hack? Here is an exaggeration of the initial problem which the code fragment fixes, observe on the lowest tessellation level. Hopefully, it might spark an idea I'm missing. The -.8f might be a reference height from which to deduce how much to disturb the texture coordinate sampling the planarly reflected geometry render and -.15f might be the lower bound, a security measure.

    Read the article

  • How to Point sprite's direction towards Mouse or an Object [duplicate]

    - by Irfan Dahir
    This question already has an answer here: Rotating To Face a Point 1 answer I need some help with rotating sprites towards the mouse. I'm currently using the library allegro 5.XX. The rotation of the sprite works but it's constantly inaccurate. It's always a few angles off from the mouse to the left. Can anyone please help me with this? Thank you. P.S I got help with the rotating function from here: http://www.gamefromscratch.com/post/2012/11/18/GameDev-math-recipes-Rotating-to-face-a-point.aspx Although it's by javascript, the maths function is the same. And also, by placing: if(angle < 0) { angle = 360 - (-angle); } doesn't fix it. The Code: #include <allegro5\allegro.h> #include <allegro5\allegro_image.h> #include "math.h" int main(void) { int width = 640; int height = 480; bool exit = false; int shipW = 0; int shipH = 0; ALLEGRO_DISPLAY *display = NULL; ALLEGRO_EVENT_QUEUE *event_queue = NULL; ALLEGRO_BITMAP *ship = NULL; if(!al_init()) return -1; display = al_create_display(width, height); if(!display) return -1; al_install_keyboard(); al_install_mouse(); al_init_image_addon(); al_set_new_bitmap_flags(ALLEGRO_MIN_LINEAR | ALLEGRO_MAG_LINEAR); //smoother rotate ship = al_load_bitmap("ship.bmp"); shipH = al_get_bitmap_height(ship); shipW = al_get_bitmap_width(ship); int shipx = width/2 - shipW/2; int shipy = height/2 - shipH/2; int mx = width/2; int my = height/2; al_set_mouse_xy(display, mx, my); event_queue = al_create_event_queue(); al_register_event_source(event_queue, al_get_mouse_event_source()); al_register_event_source(event_queue, al_get_keyboard_event_source()); //al_hide_mouse_cursor(display); float angle; while(!exit) { ALLEGRO_EVENT ev; al_wait_for_event(event_queue, &ev); if(ev.type == ALLEGRO_EVENT_KEY_UP) { switch(ev.keyboard.keycode) { case ALLEGRO_KEY_ESCAPE: exit = true; break; /*case ALLEGRO_KEY_LEFT: degree -= 10; break; case ALLEGRO_KEY_RIGHT: degree += 10; break;*/ case ALLEGRO_KEY_W: shipy -=10; break; case ALLEGRO_KEY_S: shipy +=10; break; case ALLEGRO_KEY_A: shipx -=10; break; case ALLEGRO_KEY_D: shipx += 10; break; } }else if(ev.type == ALLEGRO_EVENT_MOUSE_AXES) { mx = ev.mouse.x; my = ev.mouse.y; angle = atan2(my - shipy, mx - shipx); } // al_draw_bitmap(ship,shipx, shipy, 0); //al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy, degree * 3.142/180,0); al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy,angle, 0); //I directly placed the angle because the allegro library calculates radians, and if i multiplied it by 180/3. 142 the rotation would go hawire, not would, it actually did. al_flip_display(); al_clear_to_color(al_map_rgb(0,0,0)); } al_destroy_bitmap(ship); al_destroy_event_queue(event_queue); al_destroy_display(display); return 0; } EDIT: This was marked duplicate by a moderator. I'd like to say that this isn't the same as that. I'm a total beginner at game programming, I had a view at that other topic and I had difficulty understanding it. Please understand this, thank you. :/ Also, while I was making a print of what the angle is I got this... Here is a screenshot:http://img34.imageshack.us/img34/7396/fzuq.jpg Which is weird because aren't angles supposed to be 360 degrees only?
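
    One thing worth checking (a guess, not a confirmed diagnosis): atan2 measures the angle from the positive x axis toward +y (which is downward in screen coordinates), so the rotation only lines up if the ship artwork faces right at zero rotation; art that faces up (or any other direction) needs a constant offset added. Also, in the loop above the angle is only recomputed inside the ALLEGRO_EVENT_MOUSE_AXES branch, so moving the ship with W/A/S/D leaves it pointing at a stale direction until the mouse next moves. A sketch of the offset idea:

    ```cpp
    // Angle of the mouse relative to the on-screen pivot (shipx, shipy), in radians.
    float angle = atan2f((float)(my - shipy), (float)(mx - shipx));

    // Only needed if the sprite art faces up rather than right:
    // rotate an extra quarter turn so 0 radians lines up with the artwork.
    angle += ALLEGRO_PI / 2.0f;

    al_draw_rotated_bitmap(ship, shipW / 2, shipH / 2, shipx, shipy, angle, 0);
    ```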

    Read the article

  • Whole continent simulation [on hold]

    - by user2309021
    Let's suppose I am planning to create a simulation of an entire continent at some point in the past (let's say around 0 A.D.). Is it feasible to spawn a hundred million actors that interact with each other and their environments, having them reproduce, extract resources, etc.? The fact is that I actually want to create a simulation that allows me to zoom in from a view of the entire continent down to a single village, and interact with it. (Think as if you could keep zooming in on the campaign map of any Total War game and the transition to the battle map were seamless, not a change of "game mode".) By the way, I have never made a game in my entire life (I have programmed normal desktop applications, though), so I am really having trouble wrapping my head around how to implement such a thing. Even while thinking about how to implement a simple population simulator, without a graphical interface, I think that the O(n) complexity of traversing an array and telling all people to get one year older each time the program ticks is kind of stupid. Any kind of help would be greatly appreciated :) EDIT: After being put on hold, I shall specify a question. How would you implement a simulation of all basic human dynamics (reproduction, resource consumption) in an entire continent (with millions of people)?

    Read the article

  • Are there any OpenGL ES 2.0 examples for JOGL?

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL, but "by Jupiter!" it has been a total failure. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide examples (while also looking at the WebGL example, which worked fine), yet without any success. Are there any examples out there? If anyone else wants some extra help regarding this question, see this thread on the official JogAmp forum.

    Read the article

  • rotate opengl mesh relative to camera

    - by shuall
    I have a cube in OpenGL. Its position is determined by multiplying its specific model matrix, the view matrix, and the projection matrix and then passing that to the shader, as per this tutorial (http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/). I want to rotate it relative to the camera. The only way I can think of to get the correct axis is by multiplying the inverse of the model matrix (because that's where all the previous rotations and transforms are stored) times the view matrix times the axis of rotation (x or y). I feel like there's got to be a better way to do this, like using something other than the model, view and projection matrices, or maybe I'm doing something wrong - that's what all the tutorials I've seen use. PS: I'm also trying to stick with OpenGL 4 core stuff. Edit: if quaternions would fix my problems, could someone point me to a good tutorial/example for switching from 4x4 matrices to quaternions? I'm a little daunted by the task.
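
    One way to write the camera-relative rotation down without touching the projection matrix at all (column-vector convention assumed): take the desired axis in view space, bring it back into world space with the inverse of the view rotation, and pre-multiply the model's accumulated rotation with a rotation about that axis.

    $$ a_{\text{world}} = R_{\text{view}}^{\top} \, a_{\text{view}}, \qquad R_{\text{model}} \leftarrow R(a_{\text{world}}, \theta) \cdot R_{\text{model}} $$

    Here R_view is the upper-left 3x3 of the view matrix (for a pure rotation its transpose is its inverse), a_view is the axis in camera space (e.g. (1, 0, 0) for the camera's right axis), and R(a, θ) is an axis-angle rotation, pre-multiplied so it acts about the object's own centre in world space. Stored as a quaternion the same composition is q_model ← q(a_world, θ) · q_model, renormalised occasionally to avoid drift.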

    Read the article

  • Soccer Game only with National Team names (country names) what about player names? [duplicate]

    - by nightkarnation
    This question already has an answer here: Legal issues around using real players names and team emblems in an open source game 2 answers Ok...this question hasn't been asked before, its very similar to some, but here's the difference: I am making a soccer/football simulator game, that only has national teams (with no official logos) just the country names and flags. Now, my doubt is the following...can I use real player names (that play or played on that national team?) From what I understand if I use a player name linked to a club like Barcelona FC (not a national team) I need the right from the club and the association that club is linked to, right? But If I am only linking the name just to a country...I might just need the permission of the actual player (that I am using his name) and not any other associations, correct? Thanks a lot in advance! Cheers, Diego.

    Read the article

  • Android Loading Screen: How do I go about using a stack to load elements, and the option of incrementing the size counter?

    - by tom_mai78101
    I have some problems figuring out what value I should put in the function: int value_needed_to_figure_out = X; ProgressBar.incrementProgressBy(value_needed_to_figure_out); I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() in a Handler.post(new Runnable()) call. I got most of the concept of using the Handler to update the ProgressBar while pretending to do some heavy crunching work. So I kept looking. I have read this thread: How do I load chunks of data from an asset manager during a loading screen? It said that I can try using a stack of the elements it needs to load, and adding a size counter as I add elements to the stack. What does that mean? This is the part where I'm totally stumped. If anyone could provide some hints, I'd greatly appreciate it. Thanks in advance.

    Read the article

< Previous Page | 513 514 515 516 517 518 519 520 521 522 523 524  | Next Page >