Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • Opengl-es picking object

    - by lacas
    I saw a lot of picking code opengl-es, but nothing worked. Can someone give me what am I missing? My code is (from tutorials/forums) Vec3 far = Camera.getPosition(); Vec3 near = Shared.opengl().getPickingRay(ev.getX(), ev.getY(), 0); Vec3 direction = far.sub(near); direction.normalize(); Log.e("direction", direction.x+" "+direction.y+" "+direction.z); Ray mouseRay = new Ray(near, direction); for (int n=0; n<ObjectFactory.objects.size(); n++) { if (ObjectFactory.objects.get(n)!=null) { IObject obj = ObjectFactory.objects.get(n); float discriminant, b; float radius=0.1f; b = -mouseRay.getOrigin().dot(mouseRay.getDirection()); discriminant = b * b - mouseRay.getOrigin().dot(mouseRay.getOrigin()) + radius*radius; discriminant = FloatMath.sqrt(discriminant); double x1 = b - discriminant; double x2 = b + discriminant; Log.e("asd", obj.getName() + " "+discriminant+" "+x1+" "+x2); } } my camera vectors: //cam Vec3 position =new Vec3(-obj.getPosX()+x, obj.getPosZ()-0.3f, obj.getPosY()+z); Vec3 direction =new Vec3(-obj.getPosX(), obj.getPosZ(), obj.getPosY()); Vec3 up =new Vec3(0.0f, -1.0f, 0.0f); Camera.set(position, direction, up); and my picking code: public Vec3 getPickingRay(float mouseX, float mouseY, float mouseZ) { int[] viewport = getViewport(); float[] modelview = getModelView(); float[] projection = getProjection(); float winX, winY; float[] position = new float[4]; winX = (float)mouseX; winY = (float)Shared.screen.width - (float)mouseY; GLU.gluUnProject(winX, winY, mouseZ, modelview, 0, projection, 0, viewport, 0, position, 0); return new Vec3(position[0], position[1], position[2]); } My camera moving all the time in 3d space. and my actors/modells moving too. my camera is following one actor/modell and the user can move the camera on a circle on this model. How can I change the above code to working?
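
    One thing that stands out in the code above: the discriminant test never subtracts the sphere's centre, so it only works for objects sitting at the world origin, and the near/far points are usually obtained by unprojecting at winZ = 0 and winZ = 1 rather than taking the camera position as one end. A minimal ray/sphere test, as a sketch (the Vec3 here is a stand-in for the type used in the question):

      #include <cmath>

      struct Vec3 {                                       // minimal stand-in for the question's Vec3
          float x, y, z;
          Vec3  sub(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
          float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
      };

      // Sketch only: ray vs. a sphere centred on the object rather than at the origin.
      bool intersectsSphere(const Vec3& rayOrigin, const Vec3& rayDir,   // rayDir normalized
                            const Vec3& centre, float radius, float& tHit)
      {
          Vec3 oc = rayOrigin.sub(centre);                // move the sphere to the origin
          float b = -oc.dot(rayDir);
          float disc = b * b - oc.dot(oc) + radius * radius;
          if (disc < 0.0f) return false;                  // ray misses this object
          float s = std::sqrt(disc);
          float t0 = b - s, t1 = b + s;
          tHit = (t0 >= 0.0f) ? t0 : t1;                  // nearest hit in front of the ray origin
          return tHit >= 0.0f;
      }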

    Read the article

  • Batching dynamic sprites in OpenGL

    - by Aaron
    I'm trying to wrap my head around how batching is done in a 2D sprite-based game. My understanding is I'd get the vertices that represent each sprite I want to draw and stuff them all into a single mesh. That way I'd only need a single draw call to render everything. Does this apply when the sprites I render are different between frames, or when some sprites are moving? Because it sounds like I'd then have to recreate my batch mesh each frame, using either glDrawArrays/glDrawElements or a streaming VBO I assume. Does this sound correct?
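
    Yes: for sprites that move or change every frame, the usual approach is exactly that, rebuild the batch on the CPU each frame and stream it into a buffer, so the draw-call count stays at one per texture/material. A rough sketch of the per-frame path in plain C++/OpenGL (a GLEW-style loader, shader, texture and vertex-attribute setup are assumed to exist elsewhere; the Sprite struct is invented):

      #include <GL/glew.h>
      #include <vector>

      struct Vertex { float x, y, u, v; };
      struct Sprite { float x, y, w, h; };                     // invented application type

      // Rebuild the whole batch on the CPU and submit it with a single draw call.
      void drawSprites(GLuint vbo, const std::vector<Sprite>& sprites)
      {
          std::vector<Vertex> verts;
          verts.reserve(sprites.size() * 6);
          for (const Sprite& s : sprites) {                    // two triangles per sprite
              verts.push_back({ s.x,       s.y,       0.f, 0.f });
              verts.push_back({ s.x + s.w, s.y,       1.f, 0.f });
              verts.push_back({ s.x + s.w, s.y + s.h, 1.f, 1.f });
              verts.push_back({ s.x,       s.y,       0.f, 0.f });
              verts.push_back({ s.x + s.w, s.y + s.h, 1.f, 1.f });
              verts.push_back({ s.x,       s.y + s.h, 0.f, 1.f });
          }
          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          // Orphan last frame's storage, then stream this frame's vertices in.
          glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)(verts.size() * sizeof(Vertex)),
                       nullptr, GL_STREAM_DRAW);
          glBufferSubData(GL_ARRAY_BUFFER, 0, (GLsizeiptr)(verts.size() * sizeof(Vertex)),
                          verts.data());
          glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
      }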

    Read the article

  • AndEngine Foreground Sprite

    - by McGrey
    I'm developing an Android game and have some troubles: I want to add some foreground sprites, that must obstruct my player. Se the following example: Its a screenshot from "Shinobi 3". We can see the player, the enemy, the background and two foreground trees, that hide the player's arm and part of the enemy. I'm using AndEngine GLES2 Anchor Center and I am trying to add a new layer to my scene. Sprite Forest = new Sprite(getWidth() * 0.5f, textureHeightForest * 0.5f + 100, ResourcesManager.getInstance().foreground_forest_region, vbom); Entity foregroundLayer = new Entity(); foregroundLayer.attachChild(hillFurthest); attachChild(foregroundLayer); But it still shows behind my player sprite. I am trying to find something in HUD-class (it's always shown in the foreground), but got no results. Can anyone help please?

    Read the article

  • How should I account for the GC when building games with Unity?

    - by Eonil
    As far as I know, Unity3D for iOS is based on the Mono runtime, and Mono only has a generational mark-and-sweep GC. This GC system can't avoid collection pauses that stop the game. Instance pooling can reduce this, but not completely, because we can't control the instantiation that happens inside the CLR's base class library. Those hidden small and frequent allocations will eventually cause non-deterministic GC pauses. Forcing a complete GC periodically will degrade performance greatly (can Mono force a complete GC, actually?). So, how can I avoid these GC pauses when using Unity3D without a huge performance hit?
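
    Since the collector itself can't be removed, the usual mitigation is to stop allocating during gameplay: pool your own objects and watch for hidden allocations (string building, boxing, closures, some foreach enumerators), which a profiler can surface. The pooling pattern itself is language-neutral; a small sketch, written in C++ purely for illustration (the Bullet type is made up):

      #include <cstddef>
      #include <vector>

      struct Bullet { bool active = false; float x = 0, y = 0; };   // hypothetical pooled object

      // Fixed-size pool: everything is allocated once up front and nothing is freed at
      // runtime, so no allocation (and no collection) happens during gameplay.
      class BulletPool {
      public:
          explicit BulletPool(std::size_t capacity) : items(capacity) {}

          Bullet* acquire() {                        // hand out an inactive slot
              for (Bullet& b : items)
                  if (!b.active) { b.active = true; return &b; }
              return nullptr;                        // pool exhausted: grow it up front, not here
          }
          void release(Bullet* b) { b->active = false; }   // return the slot, never delete

      private:
          std::vector<Bullet> items;
      };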

    Read the article

  • Apply bone tranforms when importing FBX in XNA

    - by hichaeretaqua
    Preconditions: I have some models that only contain a few meshes and one texture. There is no animation within the model. An example: a model of a table. I want to draw the model with a custom effect, so I have to swap the effect after loading the model. In order to draw them correctly, I have to apply the bone transformation manually on each draw, for each mesh and effect, as can be seen here. So there are two questions: Is there an option during import that applies the bone transformations to all vertices, so that I don't have to do this during the draw call? Is there an option during import that merges all vertices into a single VertexBuffer and IndexBuffer, allowing me to draw the whole model with just one call? I'm pretty sure that the built-in "Autodesk FBX - XNA Framework" importer does not support these features, but maybe there is another importer available or another possibility I've missed. The aim is to speed up rendering a little, especially by using instancing, so having one VertexBuffer to draw at a time would be pretty nice.

    Read the article

  • How do you set up PhysFS for use in a game?

    - by ThePlan
    After my recent question on GD I've been advised to use PhysFS to pack all my game data into one file. So I have, and the decision wasn't taken lightly, because I tried out every library suggested in the answers and none of them came with a single good tutorial; in fact, PhysFS is the most poorly documented library I've ever seen. After attempting to set up PhysFS in my game I realized it's not as simple as adding the headers to the project; it appears to be something much more complicated. In fact, after my first attempt to install PhysFS the compiler gave up displaying errors once it reached the critical count of 50. So basically what I'm asking here is: how can I set up PhysFS for my game? I'm using the Code::Blocks IDE on Windows XP SP3.
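
    For reference, the runtime side of PhysFS is only a handful of calls; the build side is mostly a matter of linking against the PhysFS library (and, on Windows, making sure the headers and import library match the DLL you ship). A minimal sketch against the PhysFS 2.x C API, with made-up archive and file names:

      #include <physfs.h>
      #include <cstdio>
      #include <vector>

      int main(int argc, char** argv)
      {
          (void)argc;
          if (!PHYSFS_init(argv[0])) { std::fprintf(stderr, "PhysFS init failed\n"); return 1; }

          // Mount the packed archive (zip, 7z, ...) at the root of the virtual file system.
          if (!PHYSFS_mount("gamedata.zip", "/", 1)) {                     // archive name is made up
              std::fprintf(stderr, "could not mount archive\n");
              PHYSFS_deinit();
              return 1;
          }

          // Read one file out of the archive into memory and hand it to your loader.
          if (PHYSFS_File* f = PHYSFS_openRead("textures/player.png")) {   // path is made up
              std::vector<char> data((size_t)PHYSFS_fileLength(f));
              PHYSFS_read(f, data.data(), 1, (PHYSFS_uint32)data.size());  // PHYSFS_readBytes in 2.1+
              PHYSFS_close(f);
          }

          PHYSFS_deinit();
          return 0;
      }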

    Read the article

  • 2D Tile based Game Collision problem

    - by iNbdy
    I've been trying to program a tile based game, and I'm stuck at the collision detection. Here is my code (not the best ^^): void checkTile(Character *c, int **map) { int x1,x2,y1,y2; /* Character position in the map */ c->upY = (c->y) / TILE_SIZE; // Top left corner c->downY = (c->y + c->h) / TILE_SIZE; // Bottom left corner c->leftX = (c->x) / TILE_SIZE; // Top right corner c->rightX = (c->x + c->w) / TILE_SIZE; // Bottom right corner x1 = (c->x + 10) / TILE_SIZE; // 10px from left side point x2 = (c->x + c->w - 10) / TILE_SIZE; // 10px from right side point y1 = (c->y + 10) / TILE_SIZE; // 10px from top side point y2 = (c->y + c->h - 10) / TILE_SIZE; // 10px from bottom side point /* Top */ if (map[c->upY][x1] > 2 || map[c->upY][x2] > 2) c->topCollision = 1; else c->topCollision = 0; /* Bottom */ if ((map[c->downY][x1] > 2 || map[c->downY][x2] > 2)) c->downCollision = 1; else c->downCollision = 0; /* Left */ if (map[y1][c->leftX] > 2 || map[y2][c->leftX] > 2) c->leftCollision = 1; else c->leftCollision = 0; /* Right */ if (map[y1][c->rightX] > 2 || map[y2][c->rightX] > 2) c->rightCollision = 1; else c->rightCollision = 0; } That calculates 8 collision points My moving function is like that: void movePlayer(Character *c, int **map) { if ((c->dirX == LEFT && !c->leftCollision) || (c->dirX == RIGHT && !c->rightCollision)) c->x += c->vx; if ((c->dirY == UP && !c->topCollision) || (c->dirY == DOWN && !c->downCollision)) c->y += c->vy; checkPosition(c, map); } and the checkPosition: void checkPosition(Character *c, int **map) { checkTile(c, map); if (c->downCollision) { if (c->state != JUMPING) { c->vy = 0; c->y = (c->downY * TILE_SIZE - c->h); } } if (c->leftCollision) { c->vx = 0; c->x = (c->leftX) * TILE_SIZE + TILE_SIZE; } if (c->rightCollision) { c->vx = 0; c->x = c->rightX * TILE_SIZE - c->w; } } This works, but sometimes, when the player is landing on ground, right and left collision points become equal to 1. So it's as if there were collision coming from left or right. Does anyone know why this is doing this?

    Read the article

  • how to modify shadow mapping in "3D Graphics with XNA Game Studio 4.0"?

    - by naprox
    So I've been following the tutorials in Sean James's book "3D Graphics with XNA Game Studio 4.0", and was doing fine until I reached the shadow mapping part. The book creates point lights with a sphere model. My first question is: how do I draw a directional light with this framework? Secondly, it only does shadow mapping for one light; how can I do shadow mapping for all (or some) of the lights in the game? I just want to know how to modify this code to do the above tasks. I've followed tutorials on MSDN and some other sites and didn't find the answer. Please help me, it's urgent. If anyone wants it, the complete code is here: http://www.mediafire.com/?6ct11mc1g8f891h

    Read the article

  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft I marveled a bit at its large worlds but at the same time I found them extremely slow to navigate, even with a quad core and meaty graphics card. Now I assume Minecraft is fairly slow because: A) It's written in Java, and as most of the spatial partitioning and memory management activities happen in there, it would naturally be slower than a native C++ version. B) It doesn't partition its world very well. I could be wrong on both assumptions; however it got me thinking about the best way to manage large voxel worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e BlockType.Empty = 0, BlockType.Dirt = 1 etc.) Now, I am assuming to make this sort of world perform well you would need to: A) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd would be the better option as you can just partition on a per cube level not a per triangle level. B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user could obfuscate the blocks behind, making it pointless to render them. C) Keep the block object themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see peoples' opinions on the subject. How would you improve performance in a large voxel-based world?
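
    Beyond the tree question, the single biggest practical win is usually chunking plus hidden-face removal: split the world into fixed-size chunks, build one static mesh per chunk, rebuild a chunk's mesh only when a block in it changes, and while building it emit only the faces that border an empty cell. A rough sketch of that face test (chunk size and block IDs are assumptions):

      #include <cstdint>

      constexpr int CX = 16, CY = 16, CZ = 16;                  // assumed chunk dimensions
      enum BlockType : std::uint8_t { Empty = 0, Dirt = 1 };    // assumed block ids

      struct Chunk { BlockType blocks[CX][CY][CZ]; };

      // A face needs geometry only if the neighbouring cell is empty. Cells outside the
      // chunk are treated as exposed here; a full version would ask the neighbouring chunk.
      bool isExposed(const Chunk& c, int x, int y, int z, int dx, int dy, int dz)
      {
          int nx = x + dx, ny = y + dy, nz = z + dz;
          if (nx < 0 || ny < 0 || nz < 0 || nx >= CX || ny >= CY || nz >= CZ)
              return true;
          return c.blocks[nx][ny][nz] == Empty;
      }

      // Mesh building then becomes: for every solid block, for each of the six directions,
      // emit that quad into the chunk's vertex buffer only when isExposed(...) is true.

    Rebuilding only the chunks whose contents changed keeps edits cheap, frustum-culling whole chunks replaces most per-block visibility work, and the blocks themselves can shrink to a single byte of type information.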

    Read the article

  • How can I select an audio output device in directshow

    - by Vibhore Tanwer
    I was wondering how I can select the output device for audio in DirectShow. I am able to get the available audio output devices in DirectShow, but how can I make one of them the audio output device? It always goes to the default audio device, and I want to be able to send audio to a device of my choice. I have been searching Google but couldn't find anything useful. All I could get was this link, but it doesn't really solve my problem. Any help would be much appreciated.

    Read the article

  • What economic books would you suggest for learning about economic valuation of goods and simulations thereof?

    - by Rushyo
    I'm looking to create an economic model for a game based on goods created procedurally. Every natural resource and produced good would be procedurally generated, with certain goods being assigned certain uses. Fakesium might be used for the production of Weapon A and produced from Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are also the product of Foo and Bar The problem is not creating the resources and their various production utlities - but getting the game's AI empires and merchants to (Addendum: somewhat) correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much its worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much of Fakesium, which is needed for Weapon A - I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value - and also increase demand for Dilithium and Widgets too." This is very naive - because it may be much much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another example is two resources might allow the creation of Weapon A (Fakesium and Lieron), so we'd need to consider that. I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised - and would then only trigger rarely to compensate for major changes (eg. if the player blows up the world's only Foo mine!) Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would never occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems and I'm sure there's people who know of good places for Games Developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling? EDIT: I think I should underline that I'm not looking for optimal solutions. I'm looking to make the actors impulsive - making rudimentary decisions based on fuzzy inputs about what they care about or don't. I'm aiming to understand the problem area better not derive answers. All the textbooks I've found seem to be about real-world economics or how to solve complex theoretical problems, neither of which are terribly relevant to the actor's decision making.
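
    On the simulation side, a common starting point is an iterative price-adjustment loop: each tick, every good's price drifts in the direction of its excess demand, and wanting a produced good adds derived demand to its reagents, which is how scarcity further down the chain eventually shows up in the price of the finished good. A toy sketch of the price step (C++, all names invented, deliberately impulsive rather than optimal):

      #include <algorithm>
      #include <string>
      #include <unordered_map>

      struct Good { double price = 1.0, supply = 0.0, demand = 0.0; };

      // One relaxation step: raise prices where demand outstrips supply, lower them otherwise.
      // 'rate' controls how impulsively actors react; small values settle, large ones oscillate.
      void adjustPrices(std::unordered_map<std::string, Good>& market, double rate = 0.1)
      {
          for (auto& entry : market) {
              Good& g = entry.second;
              double excess = (g.demand - g.supply) / std::max(g.supply, 1.0);
              g.price *= 1.0 + rate * std::clamp(excess, -0.5, 0.5);   // damped to avoid runaway spikes
              g.price  = std::max(g.price, 0.01);                      // keep a floor
          }
      }

      // Derived demand is the other half: wanting one Weapon A adds demand for each reagent in
      // its recipe (Fakesium, and through it Dilithium and Widgets) before this step runs.

    Running enough ticks of this before the player arrives lets prices settle near production cost plus a scarcity premium, and later shocks (losing the world's only Foo mine) feed back through the same rule.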

    Read the article

  • Problems with SAT Collision Detection

    - by DJ AzKai
    I'm doing a project in one of my modules for college in C++ with SFML and I was hoping someone may be able to help me. I'm using a vector of squares and triangles and I am using the SAT collision detection method to see if objects collide and to make the objects respond to the collision appropriately using the MTV(minimum translation vector) Below is my code: //from the main method int main(){ // Create the main window sf::RenderWindow App(sf::VideoMode(800, 600, 32), "SFML OpenGL"); // Create a clock for measuring time elapsed sf::Clock Clock; srand(time(0)); //prepare OpenGL surface for HSR glClearDepth(1.f); glClearColor(0.3f, 0.3f, 0.3f, 0.f); //background colour glEnable(GL_DEPTH_TEST); glDepthMask(GL_TRUE); //// Setup a perspective projection & Camera position glMatrixMode(GL_PROJECTION); glLoadIdentity(); //set up a 3D Perspective View volume //gluPerspective(90.f, 1.f, 1.f, 300.0f);//fov, aspect, zNear, zFar //set up a orthographic projection same size as window //this mease the vertex coordinates are in pixel space glOrtho(0,800,0,600,0,1); // use pixel coordinates // Finally, display rendered frame on screen vector<BouncingThing*> triangles; for(int i = 0; i < 10; i++) { //instantiate each triangle; triangles.push_back(new BouncingTriangle(Vector2f(rand() % 700, rand() % 500), 3)); } vector<BouncingThing*> boxes; for(int i = 0; i < 10; i++) { //instantiate each box; boxes.push_back(new BouncingBox(Vector2f(rand() % 700, rand() % 500), 4)); } CollisionDetection * b = new CollisionDetection(); // Start game loop while (App.isOpen()) { // Process events sf::Event Event; while (App.pollEvent(Event)) { // Close window : exit if (Event.type == sf::Event::Closed) App.close(); // Escape key : exit if ((Event.type == sf::Event::KeyPressed) && (Event.key.code == sf::Keyboard::Escape)) App.close(); } //Prepare for drawing // Clear color and depth buffer glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Apply some transformations glMatrixMode(GL_MODELVIEW); glLoadIdentity(); for(int i = 0; i < 10; i++) { triangles[i]->draw(); boxes[i]->draw(); triangles[i]->update(Vector2f(800,600)); boxes[i]->draw(); boxes[i]->update(Vector2f(800,600)); } for(int j = 0; j < 10; j++) { for(int i = 0; i < 10; i++) { triangles[j]->setCollision(b->CheckCollision(*(triangles[j]),*(boxes[i]))); } } for(int j = 0; j < 10; j++) { for(int i = 0; i < 10; i++) { boxes[j]->setCollision(b->CheckCollision(*(boxes[j]),*(triangles[i]))); } } for(int i = 0; i < triangles.size(); i++) { for(int j = i + 1; j < triangles.size(); j ++) { triangles[j]->setCollision(b->CheckCollision(*(triangles[j]),*(triangles[i]))); } } for(int i = 0; i < triangles.size(); i++) { for(int j = i + 1; j < triangles.size(); j ++) { boxes[j]->setCollision(b->CheckCollision(*(boxes[j]),*(boxes[i]))); } } App.display(); } return EXIT_SUCCESS; } (ignore this line) //from the BouncingThing.cpp BouncingThing::BouncingThing(Vector2f position, int noSides) : pos(position), pi(3.14), radius(3.14), nSides(noSides) { collided = false; if(nSides ==3) { Vector2f vert1 = Vector2f(-12.0f,-12.0f); Vector2f vert2 = Vector2f(0.0f, 12.0f); Vector2f vert3 = Vector2f(12.0f,-12.0f); verts.push_back(vert1); verts.push_back(vert2); verts.push_back(vert3); } else if(nSides == 4) { Vector2f vert1 = Vector2f(-12.0f,12.0f); Vector2f vert2 = Vector2f(12.0f, 12.0f); Vector2f vert3 = Vector2f(12.0f,-12.0f); Vector2f vert4 = Vector2f(-12.0f, -12.0f); verts.push_back(vert1); verts.push_back(vert2); verts.push_back(vert3); verts.push_back(vert4); } velocity.x = ((rand() % 
5 + 1) / 3) + 1; velocity.y = ((rand() % 5 + 1) / 3 ) +1; } void BouncingThing::update(Vector2f screenSize) { Transform t; t.rotate(0); for(int i=0;i< verts.size(); i++) { verts[i]=t.transformPoint(verts[i]); } if(pos.x >= screenSize.x || pos.x <= 0) { velocity.x *= -1; } if(pos.y >= screenSize.y || pos.y <= 0) { velocity.y *= -1; } if(collided) { //velocity.x *= -1; //velocity.y *= -1; collided = false; } pos += velocity; } void BouncingThing::setCollision(bool x){ collided = x; } void BouncingThing::draw() { glBegin(GL_POLYGON); glColor3f(0,1,0); for(int i = 0; i < verts.size(); i++) { glVertex2f(pos.x + verts[i].x,pos.y + verts[i].y); } glEnd(); } vector<Vector2f> BouncingThing::getNormals() { vector<Vector2f> normalVerts; if(nSides == 3) { Vector2f ab = Vector2f((verts[1].x + pos.x) - (verts[0].x + pos.x), (verts[1].y + pos.y) - (verts[0].y + pos.y)); ab = flip(ab); ab.x *= -1; normalVerts.push_back(ab); Vector2f bc = Vector2f((verts[2].x + pos.x) - (verts[1].x + pos.x), (verts[2].y + pos.y) - (verts[1].y + pos.y)); bc = flip(bc); bc.x *= -1; normalVerts.push_back(bc); Vector2f ac = Vector2f((verts[2].x + pos.x) - (verts[0].x + pos.x), (verts[2].y + pos.y) - (verts[0].y + pos.y)); ac = flip(ac); ac.x *= -1; normalVerts.push_back(ac); return normalVerts; } if(nSides ==4) { Vector2f ab = Vector2f((verts[1].x + pos.x) - (verts[0].x + pos.x), (verts[1].y + pos.y) - (verts[0].y + pos.y)); ab = flip(ab); ab.x *= -1; normalVerts.push_back(ab); Vector2f bc = Vector2f((verts[2].x + pos.x) - (verts[1].x + pos.x), (verts[2].y + pos.y) - (verts[1].y + pos.y)); bc = flip(bc); bc.x *= -1; normalVerts.push_back(bc); return normalVerts; } } Vector2f BouncingThing::flip(Vector2f v){ float vyTemp = v.x; float vxTemp = v.y * -1; return Vector2f(vxTemp, vyTemp); } (Ignore this line) CollisionDetection::CollisionDetection() { } vector<float> CollisionDetection::bubbleSort(vector<float> w) { int temp; bool finished = false; while (!finished) { finished = true; for (int i = 0; i < w.size()-1; i++) { if (w[i] > w[i+1]) { temp = w[i]; w[i] = w[i+1]; w[i+1] = temp; finished=false; } } } return w; } class Vector{ public: //static int dp_count; static float dot(sf::Vector2f a,sf::Vector2f b){ //dp_count++; return a.x*b.x+a.y*b.y; } static float length(sf::Vector2f a){ return sqrt(a.x*a.x+a.y*a.y); } static Vector2f add(Vector2f a, Vector2f b) { return Vector2f(a.x + b.y, a.y + b.y); } static sf::Vector2f getNormal(sf::Vector2f a,sf::Vector2f b){ sf::Vector2f n; n=a-b; n/=Vector::length(n);//normalise float x=n.x; n.x=n.y; n.y=-x; return n; } }; bool CollisionDetection::CheckCollision(BouncingThing & x, BouncingThing & y) { vector<Vector2f> xVerts = x.getVerts(); vector<Vector2f> yVerts = y.getVerts(); vector<Vector2f> xNormals = x.getNormals(); vector<Vector2f> yNormals = y.getNormals(); int size; vector<float> xRange; vector<float> yRange; for(int j = 0; j < xNormals.size(); j++) { Vector p; for(int i = 0; i < xVerts.size(); i++) { xRange.push_back(p.dot(xNormals[j], Vector2f(xVerts[i].x, xVerts[i].x))); } for(int i = 0; i < yVerts.size(); i++) { yRange.push_back(p.dot(xNormals[j], Vector2f(yVerts[i].x , yVerts[i].y))); } yRange = bubbleSort(yRange); xRange = bubbleSort(xRange); if(xRange[xRange.size() - 1] < yRange[0] || yRange[yRange.size() - 1] < xRange[0]) { return false; } float x3 = Min(xRange[0], yRange[0]); float y3 = Max(xRange[xRange.size() - 1], yRange[yRange.size() - 1]); float length = Max(x3, y3) - Min(x3, y3); } for(int j = 0; j < yNormals.size(); j++) { Vector p; for(int i = 0; i < xVerts.size(); 
i++) { xRange.push_back(p.dot(yNormals[j], xVerts[i])); } for(int i = 0; i < yVerts.size(); i++) { yRange.push_back(p.dot(yNormals[j], yVerts[i])); } yRange = bubbleSort(yRange); xRange = bubbleSort(xRange); if(xRange[xRange.size() - 1] < yRange[0] || yRange[yRange.size() - 1] < xRange[0]) { return false; } } return true; } float CollisionDetection::Min(float min, float max) { if(max < min) { min = max; } else return min; } float CollisionDetection::Max(float min, float max) { if(min > max) { max = min; } else return min; } On the screen the objects will freeze for a small amount of time before moving off again. However the problem is is that when this happens there are no collisions actually happening and I would really love to find out where the flaw is in the code. If you need any more information/code please don't hesitate to ask and I'll reply as soon as possible Regards, AzKai

    Read the article

  • 2D Side scroller collision detection

    - by Shanon Simmonds
    I am trying to do some collision detection between objects and tiles, but the tiles do not have their own x and y positions; they are just rendered at the x and y position given. There is an array of integers holding the IDs of the tiles to use (the IDs come from an image, with each color assigned a different tile): int x0 = camera.x / 16; int y0 = camera.y / 16; int x1 = (camera.x + screen.width) / 16; int y1 = (camera.y + screen.height) / 16; for(int y = y0; y < y1; y++) { if(y < 0 || y >= height) continue; // height is the height of the level for(int x = x0; x < x1; x++) { if(x < 0 || x >= width) continue; // width is the width of the level getTile(x, y).render(screen, x * 16, y * 16); } } I tried using the level's getTile method to check whether the tile the object was about to move into was a certain tile, but it seems to only work in some directions. The problem is that it doesn't collide properly in every direction. This is how I test for a collision in the object's class: if(!level.getTile((x + xa) / 16, (y + ya) / 16).isSolid()) { x += xa; y += ya; } Any ideas on what I'm doing wrong, and possible fixes, would be greatly appreciated. EDIT: xa and ya represent the direction as well as the movement: if xa is negative the object is moving left, if it's positive it is moving right, and the same goes for ya, except negative is up and positive is down.
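
    One common cause of "works only in some directions" is resolving both axes in a single test: a wall on one axis then cancels legal movement on the other, and integer division of negative coordinates rounds toward zero rather than down. A sketch of per-axis movement that checks both corners of the leading edge (C++-flavoured; Entity and Level::isSolidAt are assumed stand-ins for the question's object and tile lookup):

      // isSolidAt(px, py) is assumed to convert pixel coordinates to a tile (flooring
      // negative values downward) and report whether that tile is solid.
      struct Entity { int x, y, w, h; };
      struct Level  { bool isSolidAt(int px, int py) const; };

      void tryMove(Entity& e, const Level& level, int xa, int ya)
      {
          // Horizontal first: test both corners of the leading edge at the new position.
          int edgeX = (xa < 0) ? e.x + xa : e.x + e.w - 1 + xa;
          if (!level.isSolidAt(edgeX, e.y) && !level.isSolidAt(edgeX, e.y + e.h - 1))
              e.x += xa;

          // Then vertical, from the (possibly updated) horizontal position.
          int edgeY = (ya < 0) ? e.y + ya : e.y + e.h - 1 + ya;
          if (!level.isSolidAt(e.x, edgeY) && !level.isSolidAt(e.x + e.w - 1, edgeY))
              e.y += ya;
      }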

    Read the article

  • Make Interactive Story more Variable [on hold]

    - by Guest0343
    I'm creating an interactive story that allows users to make choices based on a story. However, it doesn't give users room to do much creatively on their own. They are bound by the script at the moment. I'm wondering if anyone can suggest any element I can add that might give users some personalization. I was thinking about maybe character editing, but that doesn't add too much. I also thought about a stats system where they can have certain attributes and stats they might earn, but I'm not sure how they might use those stats. Anything is helpful!

    Read the article

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG, that will not allow players to go down to the surface of a planet so I have ignored anything ROAM related. At the moment I am drawing a cube with VBOs and mapping onto a sphere. I am familiar with most fractal heightmap generating techniques and have already implemented my own version of midpoint displacement (not that useful in this case I know). My question is, what is the best way to procedurally generate the heightmap. I have looked at libnoise which allows me to make tilable heightmaps/textures, but as far as I can see I would need to generate a net like this. Leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated.
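
    One route that sidesteps the unfolded-net/tiling problem entirely is to keep the noise in 3D: project each cube vertex onto the unit sphere and sample a 3D noise function at that position (libnoise's Perlin module can serve, via GetValue(x, y, z)). Neighbouring cube faces share the same 3D points along their edges, so the heightmap is seamless by construction. A sketch (noise3 stands in for whichever 3D noise source is used):

      #include <cmath>

      struct Vec3 { float x, y, z; };

      // Assumed: any 3D gradient-noise function; declared here only so the sketch is
      // self-contained. With libnoise this would wrap module::Perlin::GetValue.
      float noise3(float x, float y, float z);

      // 'cubePoint' is a vertex of the cube face being generated.
      float planetHeight(Vec3 cubePoint, float baseRadius, float amplitude)
      {
          float len = std::sqrt(cubePoint.x * cubePoint.x +
                                cubePoint.y * cubePoint.y +
                                cubePoint.z * cubePoint.z);
          Vec3 p { cubePoint.x / len, cubePoint.y / len, cubePoint.z / len };  // point on unit sphere

          // Sum a few octaves for detail; sampling in 3D means no seams between cube faces.
          float h = 0.0f, freq = 1.0f, amp = 1.0f;
          for (int octave = 0; octave < 5; ++octave) {
              h    += amp * noise3(p.x * freq, p.y * freq, p.z * freq);
              freq *= 2.0f;
              amp  *= 0.5f;
          }
          return baseRadius + amplitude * h;
      }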

    Read the article

  • How can I use WebGL to create a tile-based multi-layer scrolling platform game?

    - by Nicholas Hill
    I've found WebGL (based on OpenGL) to be a fiendish and unforgiving framework for those learning to write HTML5-based games. Despite the presence of many examples on how to get started, I'm really struggling to understand how I could simply load a bunch of images and render them to a canvas quickly using WebGL. My specific scenario involves trying to render a map using a bespoke but simple multi-layered tile engine, where each value in a three dimensional array points to the image to use for that location in the rendered image. Think "Sonic the Hedgehog" via tilesets, tiles, maps, layers, sprites etc. Can anyone enlighten me: 1) How can I load an image that I can use as a texture in WebGL? 2) How can I dynamically select an image at run time and draw it at any co-ordinate, that I also select at run time?

    Read the article

  • Reversi/Othello early-game evaluation function

    - by Vladislav Il'ushin
    I've written my own Reversi player, based on the MiniMax algorithm with alpha-beta pruning, but in the first 10 moves my evaluation function is too slow. I need a good early-game evaluation function. I'm trying to do it with this matrix (corresponding to the board), which determines how favourable each square is to hold: { 30, -25, 10, 5, 5, 10, -25, 30,}, {-25, -25, 1, 1, 1, 1, -25, -25,}, { 10, 1, 5, 2, 2, 5, 1, 10,}, { 5, 1, 2, 1, 1, 2, 1, 5,}, { 5, 1, 2, 1, 1, 2, 1, 5,}, { 10, 1, 5, 2, 2, 5, 1, 10,}, {-25, -25, 1, 1, 1, 1, -25, -25,}, { 30, -25, 10, 5, 5, 10, -25, 30,},}; But it doesn't work well. Have you ever written an early-game evaluation function for Reversi?
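
    Square-weight tables on their own tend to play weakly in the opening; most Reversi programs lean mainly on mobility (and on keeping their own disc count low early), with the positional table as a tie-breaker. A sketch of a combined early-game evaluator (the Board interface and the weights are assumptions to tune):

      // Board is an assumed interface exposing legal-move counts, disc counts and the sum
      // of the weight matrix over a player's discs; none of this is from the question.
      struct Board {
          int legalMoves(int player) const;
          int discCount(int player) const;
          int positional(int player) const;
      };

      int evaluateEarly(const Board& b, int me, int opponent)
      {
          int myMoves  = b.legalMoves(me);
          int oppMoves = b.legalMoves(opponent);
          int mobility = 0;
          if (myMoves + oppMoves != 0)
              mobility = 100 * (myMoves - oppMoves) / (myMoves + oppMoves);

          int position = b.positional(me) - b.positional(opponent);
          int discs    = b.discCount(me)  - b.discCount(opponent);

          return 10 * mobility + position - 2 * discs;   // weights are only a starting point
      }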

    Read the article

  • Hydraulics in game

    - by Mungoid
    I'm not completely sure if this would be better in the Physics site or not as this question is more about how hydraulics should work in game as opposed to how they really work (although that is taken into account) - So I apologize if this is in the wrong place. A project we are on, we have a machine with hydraulics that are powered (They don't just look like they move something, they are the only thing moving/turning/lifting something) - However, the hydraulic extends the same speed no matter what it is pushing. So, say there is a 10 ton object attached to one end of the hydraulic and the other end is attached to a plate on the ground. In real life it takes a few seconds to build up pressure depending on how heavy the object is, but in our project the hydraulics don't care about that. It will lift a 100 ton object the same speed as a 10 ton object. We have a way to fake the hydraulic pressurizing by reducing the 'drive amount' (how fast or slow the hydraulic extends) when we sense that it is touching the ground and that does a relatively decent job but we would like to be able to take other things into account like engine speed, ratios, loads, etc. but we aren't too sure what we need to think about. I'm kinda wondering if anyone here has any experience with this and could offer some suggestions on what to take into account?
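
    A cheap model that gives the behaviour described here without simulating fluid is to treat the pump as a power budget: extension speed is the available hydraulic power divided by the force the load demands, clamped to the valve-limited maximum, so heavy loads extend slowly and light ones hit top speed. Ramping the available power up over a fraction of a second when the lever is engaged fakes the pressure build-up. A sketch with invented names and units:

      #include <algorithm>

      // Very rough model: P = F * v, so v = P / F, clamped to the cylinder's mechanical
      // maximum. Engine RPM and gearing would scale enginePowerWatts before this call.
      float extensionSpeed(float enginePowerWatts,   // pump power at current engine speed
                           float loadForceNewtons,   // weight (and friction) being pushed
                           float maxSpeedMps)        // flow/valve-limited top speed
      {
          if (loadForceNewtons <= 0.0f)
              return maxSpeedMps;
          float powerLimited = enginePowerWatts / loadForceNewtons;
          return std::clamp(powerLimited, 0.0f, maxSpeedMps);
      }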

    Read the article

  • How to alter image pixels of a wild life bird?

    - by NoobScratcher
    Hello, I was hoping someone knew how to move or change the color and position of actual image pixels, and could explain and show the code to do so. I know how to write pixels to a surface or the screen surface: unsigned int *ptr = static_cast<unsigned int *>(screen->pixels); int offset = y * (screen->pitch / sizeof(unsigned int)); ptr[offset + x] = color; But I don't know how to alter or manipulate the pixels of a PNG image. My thoughts on this were: how do I get the values and locations of the pixels, and what do I have to write to make that happen? Then how do I actually change the values or locations of those pixels? Any ideas, tips or suggestions are also welcome! int main(int argc , char *argv[]) { SDL_Surface *Screen = SDL_SetVideoMode(640,480,32,SDL_SWSURFACE); SDL_Surface *Image; Image = IMG_Load("image.png"); bool done = false; SDL_Event event; while(!done) { SDL_FillRect(Screen,NULL,(0,0,0)); SDL_BlitSurface(Image,NULL,Screen,NULL); while(SDL_PollEvent(&event)) { switch(event.type) { case SDL_QUIT: return 0; break; } } SDL_Flip(Screen); } return 0; }
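
    For a loaded SDL_Surface the usual pattern is: lock the surface, index into its pixel memory using the pitch, read or write values with SDL_GetRGB/SDL_MapRGB, then unlock; "moving" a pixel is just writing its value at a different index. A minimal SDL 1.2-style sketch that edits one pixel of a 32-bit surface (coordinates and the edit itself are arbitrary):

      #include <SDL/SDL.h>

      // Read, modify and write back a single pixel of a 32-bit surface.
      // For other bit depths, index by format->BytesPerPixel instead of Uint32.
      void tintPixel(SDL_Surface* img, int x, int y)
      {
          if (SDL_MUSTLOCK(img)) SDL_LockSurface(img);

          Uint32* pixels = static_cast<Uint32*>(img->pixels);
          int stride = img->pitch / sizeof(Uint32);          // pitch is in bytes
          Uint32 pixel = pixels[y * stride + x];

          Uint8 r, g, b;
          SDL_GetRGB(pixel, img->format, &r, &g, &b);        // unpack using the surface's format
          r = static_cast<Uint8>(r / 2);                     // example edit: darken the red channel
          pixels[y * stride + x] = SDL_MapRGB(img->format, r, g, b);

          if (SDL_MUSTLOCK(img)) SDL_UnlockSurface(img);
      }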

    Read the article

  • What are the different ways to texture a terrain?

    - by ApocKalipsS
    I'm working with XNA on a 3D game, and I'm trying to have a proper, nice-looking environment. I followed a tutorial to create terrain from a heightmap, and to texture it I just apply a grass texture and tile it a number of times. What I want is really realistic texturing that is also generated automatically (for example, if I use Perlin noise to generate a terrain and then texture it). I already learnt about multi-texturing and loading a map file with different colors for different textures, but I don't think this is very flexible; for instance, on cliffs or very steep areas it will tile the texture badly, since it's projected from the top. I also don't know how I'd draw roads or dirt paths with that. I hope you understood me despite my English! If not, basically, here is what I want to know: how do I texture a randomly generated terrain? :) Thank you for your answers!
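
    A fully automatic approach that handles this reasonably well is to derive per-vertex blend weights from height and slope (steep faces lean toward a rock texture, high flat areas toward snow, the rest toward grass) and feed those weights to a multitexturing shader; roads and dirt paths are usually painted into a separate splat map rather than derived. A sketch of the weight computation (plain C++, thresholds invented):

      #include <algorithm>

      struct Weights { float grass, rock, snow; };

      // 'normalY' is the Y component of the vertex normal (1 = flat, 0 = vertical cliff)
      // and 'height01' is the vertex height remapped to [0, 1].
      Weights terrainWeights(float normalY, float height01)
      {
          float slope = 1.0f - std::clamp(normalY, 0.0f, 1.0f);        // 0 = flat, 1 = cliff

          Weights w;
          w.rock  = std::clamp((slope - 0.3f) / 0.3f, 0.0f, 1.0f);     // steep -> rock
          w.snow  = std::clamp((height01 - 0.7f) / 0.2f, 0.0f, 1.0f) * (1.0f - w.rock);
          w.grass = std::max(0.0f, 1.0f - w.rock - w.snow);            // whatever is left

          return w;   // in the pixel shader: colour = grass*tex0 + rock*tex1 + snow*tex2
      }

    For the stretching on steep faces, triplanar mapping (projecting the texture along all three axes and blending by the normal) is the usual companion technique.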

    Read the article

  • Rotation of viewplatform in Java3D

    - by user29163
    I have just started with Java3D programming. I thought I had built up some basic intuition about how the scene graph works, but something that should work, does not work. I made a simple program for rotating a pyramid around the y-axis. This was done just by adding a RotationInterpolator R to the TransformGroup above the pyramid. Then I thought hey, can I now remove the RotationInterpolator from this TransformGroup, then add it to the TransformGroup above my ViewPlatform leaf. This should work if I have understood how things work. Adding the RotationInterpolator to this TransformGroup, should make the children of this TransformGroup rotate, and the ViewingPlatform is a child of the TransformGroup. Any ideas on where my reasoning is flawed? Here is the code for setting up the universe, and the view branchgroup. import java.awt.*; import java.awt.event.*; import javax.media.j3d.*; import javax.vecmath.*; public class UniverseBuilder { // User-specified canvas Canvas3D canvas; // Scene graph elements to which the user may want access VirtualUniverse universe; Locale locale; TransformGroup vpTrans; View view; public UniverseBuilder(Canvas3D c) { this.canvas = c; // Establish a virtual universe that has a single // hi-res Locale universe = new VirtualUniverse(); locale = new Locale(universe); // Create a PhysicalBody and PhysicalEnvironment object PhysicalBody body = new PhysicalBody(); PhysicalEnvironment environment = new PhysicalEnvironment(); // Create a View and attach the Canvas3D and the physical // body and environment to the view. view = new View(); view.addCanvas3D(c); view.setPhysicalBody(body); view.setPhysicalEnvironment(environment); // Create a BranchGroup node for the view platform BranchGroup vpRoot = new BranchGroup(); // Create a ViewPlatform object, and its associated // TransformGroup object, and attach it to the root of the // subgraph. Attach the view to the view platform. Transform3D t = new Transform3D(); Transform3D s = new Transform3D(); t.set(new Vector3f(0.0f, 0.0f, 10.0f)); t.rotX(-Math.PI/4); s.set(new Vector3f(0.0f, 0.0f, 10.0f)); //forandre verdier her for å endre viewing position t.mul(s); ViewPlatform vp = new ViewPlatform(); vpTrans = new TransformGroup(t); vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE); // Rotator stuff Transform3D yAxis = new Transform3D(); //yAxis.rotY(Math.PI/2); Alpha rotationAlpha = new Alpha( -1, Alpha.INCREASING_ENABLE, 0, 0,4000, 0, 0, 0, 0, 0); RotationInterpolator rotator = new RotationInterpolator( rotationAlpha, vpTrans, yAxis, 0.0f, (float) Math.PI*2.0f); RotationInterpolator rotator2 = new RotationInterpolator( rotationAlpha, vpTrans); BoundingSphere bounds = new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0); rotator.setSchedulingBounds(bounds); vpTrans.addChild(rotator); vpTrans.addChild(vp); vpRoot.addChild(vpTrans); view.attachViewPlatform(vp); // Attach the branch graph to the universe, via the // Locale. The scene graph is now live! locale.addBranchGraph(vpRoot); } public void addBranchGraph(BranchGroup bg) { locale.addBranchGraph(bg); } }

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in terms of the problem at hand. I'm creating a floating platform that I would like to simply travel from one designated point to another, then return to the first, passing between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel in multiples of whole tile values of the world data, so if the platform is not stationary, it will travel at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Once it comes within one tile length of its destination, I would like it to slow to a stop exactly at the given tile coordinate, and then repeat the process in reverse. The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, and since I'm working with acceleration, it seems natural to simply begin applying acceleration in the opposite direction to the value storing the platform's current speed once it comes within one tile's length of the stopping point (assuming the platform is travelling more than one tile length in total, but to keep things simple, let's just assume it is). The question is: what would the correct value for that opposing acceleration be to produce this effect? How would I find that value?
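
    If the platform still has to cover a distance d and is moving at speed v, a constant deceleration of a = v * v / (2 * d) brings it to rest exactly at the target, and recomputing it each frame from the remaining distance keeps it exact even with timestep jitter. A one-axis sketch (names invented):

      #include <algorithm>
      #include <cmath>

      // One axis of the platform's motion. 'target' is the tile-aligned stop coordinate.
      // Accelerate toward maxSpeed; once inside braking distance, decelerate so that
      // v reaches 0 exactly at 'target' (from v^2 = 2*a*d, so a = v^2 / (2*d)).
      void updateAxis(float& pos, float& vel, float target, float accel, float maxSpeed, float dt)
      {
          float dir  = (target > pos) ? 1.0f : -1.0f;
          float dist = std::fabs(target - pos);

          float brakingDistance = (vel * vel) / (2.0f * accel);
          if (dist <= brakingDistance && dist > 0.0f) {
              float a = (vel * vel) / (2.0f * dist);       // exact deceleration to stop on the tile
              vel -= dir * a * dt;
          } else {
              vel += dir * accel * dt;
          }
          vel = std::clamp(vel, -maxSpeed, maxSpeed);

          pos += vel * dt;
          if ((dir > 0 && pos >= target) || (dir < 0 && pos <= target)) {   // snap on arrival
              pos = target;
              vel = 0.0f;
          }
      }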

    Read the article

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen-coordinates to tile-coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close and what I have is workable, but I would like to improve on this algorithm as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM My isometric rendering algorithm is based on what is found at this stackoverflow question's answer, with the exception that my x and y axis' are inverted (x increased down-right, while y increased up-right). Here is where I am converting from screen to tiles: // these next few lines convert the mouse pointer position from screen // coordinates to tile-grid coordinates. cameraOffset captures the current // mouse location and takes into consideration the camera's position on screen. System.Drawing.Point cameraOffset = new System.Drawing.Point( 0, 0 ); cameraOffset.X = mouseLocation.X + (int)camera.Left; cameraOffset.Y = ( mouseLocation.Y + (int)camera.Top ); // the camera-aware mouse coordinates are then further converted in an attempt // to select only the "tile" portion of the grid tiles, instead of the entire // rectangle. this algorithm gets close, but could use improvement. mouseTileLocation.X = ( cameraOffset.X + 2 * cameraOffset.Y ) / Global.TileWidth; mouseTileLocation.Y = -( ( 2 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth ); Things to make note of: mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer. cameraOffset is the screen position of the mouse pointer that includes the position of the game camera. mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer. If you check out the above link to youtube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
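
    For the classic diamond layout (tile width twice the tile height), the inverse mapping is cleaner when done in floating point with an explicit floor: integer division truncates toward zero, which is one source of a down-and-right bias. A sketch of that inverse (tile sizes are assumed, and since the question's axes are mirrored relative to the usual convention, tx and ty may need swapping or negating to match):

      #include <cmath>

      constexpr float TILE_W = 64.0f;   // assumed diamond footprint: width is twice the height
      constexpr float TILE_H = 32.0f;

      struct TileCoord { int x, y; };

      // Inverse of the common diamond projection
      //   screenX = (tileX - tileY) * TILE_W / 2
      //   screenY = (tileX + tileY) * TILE_H / 2
      TileCoord screenToTile(float screenX, float screenY, float cameraLeft, float cameraTop)
      {
          float wx = screenX + cameraLeft;        // world-space pixel under the cursor
          float wy = screenY + cameraTop;

          float a = wx / (TILE_W * 0.5f);
          float b = wy / (TILE_H * 0.5f);

          int tx = static_cast<int>(std::floor((a + b) * 0.5f));
          int ty = static_cast<int>(std::floor((b - a) * 0.5f));
          return { tx, ty };
      }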

    Read the article

  • 2D scene graph not transforming relative to parent

    - by Dr.Denis McCracleJizz
    I am currently in the process of coding my own 2D Scene graph, which is basically a port of flash's render engine. The problem I have right now is my rendering doesn't seem to be working properly. This code creates the localTransform property for each DisplayObject. Matrix m_transform = Matrix.CreateRotationZ(rotation) * Matrix.CreateScale(scaleX, scaleY, 1) * Matrix.CreateTranslation(new Vector3(x, y, z)); This is my render code. float dRotation; Vector2 dPosition, dScale; Matrix transform; transform = this.localTransform; if (parent != null) transform = localTransform * parent.localTransform; DecomposeMatrix(ref transform, out dPosition, out dRotation, out dScale); spriteBatch.Draw(this.texture, dPosition, null, Color.White, dRotation, new Vector2(originX, originY), dScale, SpriteEffects.None, 0.0f); Here is the result when I try to add the Stage then to the stage a First DisplayObjectContainer and then a second one. It may look fine but the problem lies in the fact that I add a first DisplayObjectContainer at (400,400) and the second one within it (that's the smallest one) at position (0,0). So he should be right over its parent but he gets render within the parent at the same position the parent has (400, 400) for some reason. It's just as if I double the parent's localMatrix and then render the second cat there. This is the code i use to loop through every childs. base.Draw(spriteBatch); foreach (DisplayObject childs in _childs) { childs.Draw(spriteBatch); }
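
    One thing that stands out: each node is composed only with its immediate parent's local transform, so nothing above the parent ever contributes, and the composed matrix is recomputed from scratch at every level. The usual structure is to pass the parent's accumulated world transform down the tree and compose local * parentWorld recursively. A self-contained 2D sketch of that recursion (row-vector convention as in XNA; the Affine2D type is a stand-in for Matrix):

      #include <vector>

      // Minimal 2D affine transform (row-vector convention): linear part [a b; c d]
      // plus translation (tx, ty). this * rhs applies 'this' first, then 'rhs'.
      struct Affine2D {
          float a = 1, b = 0, c = 0, d = 1, tx = 0, ty = 0;
          Affine2D operator*(const Affine2D& r) const {
              return { a * r.a + b * r.c,           a * r.b + b * r.d,
                       c * r.a + d * r.c,           c * r.b + d * r.d,
                       tx * r.a + ty * r.c + r.tx,  tx * r.b + ty * r.d + r.ty };
          }
      };

      struct DisplayObject {
          Affine2D localTransform;                 // built from x, y, rotation, scale as in the question
          std::vector<DisplayObject*> children;

          // The node is composed with its parent's accumulated *world* transform, and that
          // world matrix is what gets decomposed and handed to SpriteBatch, so a child at
          // (0, 0) lands exactly on its parent, offset by one level only.
          void draw(const Affine2D& parentWorld) {
              Affine2D world = localTransform * parentWorld;
              // decompose 'world' into position/rotation/scale and issue the sprite draw here
              for (DisplayObject* child : children)
                  child->draw(world);
          }
      };

      // Usage: stage.draw(Affine2D{});   // identity at the root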

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is: template <typename valType> GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective ( valType const & fovy, valType const & aspect, valType const & zNear, valType const & zFar ) { valType range = tan(radians(fovy / valType(2))) * zNear; valType left = -range * aspect; valType right = range * aspect; valType bottom = -range; valType top = range; detail::tmat4x4<valType> Result(valType(0)); Result[0][0] = (valType(2) * zNear) / (right - left); Result[1][1] = (valType(2) * zNear) / (top - bottom); Result[2][2] = - (zFar + zNear) / (zFar - zNear); Result[2][3] = - valType(1); Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear); return Result; } There don't seem to be any errors in the code. So I tried to invert the projection: the formulas for the z' and w' clip coordinates follow directly from the matrix, and dividing z' by w' gives the post-projective depth (which is what lies in the depth buffer), so I need to solve for z. Now, the problem is that I don't get the correct position (I have compared the reconstructed position with a rendered one). I then tried the corresponding formula I get by doing the same for a different projection matrix, and for some reason that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me, please?
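
    For reference, a sketch of the inversion for this matrix (assuming OpenGL clip conventions, a symmetric frustum, depth d sampled from the buffer in [0, 1], and NDC x and y in [-1, 1]); note that the resulting view-space z comes out negative, which is an easy sign to lose when comparing against a rendered position:

      \[
      z_{ndc} = 2d - 1, \qquad
      z_{view} = \frac{2\, z_{far}\, z_{near}}{(z_{far} - z_{near})\, z_{ndc} - (z_{far} + z_{near})}
      \]
      \[
      x_{view} = -z_{view}\; x_{ndc}\; \mathrm{aspect}\, \tan(\mathrm{fovy}/2), \qquad
      y_{view} = -z_{view}\; y_{ndc}\, \tan(\mathrm{fovy}/2)
      \]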

    Read the article
