Search Results

Search found 29201 results on 1169 pages for 'game development'.


  • How can I create a sprite sheet from a 3D model (3D Studio Max)?

    - by OopsUser
    I built a simple 3D model of a car, with a simple animation in which its wheels turn. Now I want to create a sprite sheet. The only way I know how to do it is to manually render 20 frames from the front, combine them into a strip by hand, then rotate the model by 10 degrees, render 20 frames of the animation again, combine them into a strip, and so on. Is there a way to do it automatically, without rotating the scene, rendering, and combining by hand? It's a lot of work and takes more time than the modelling itself... Thanks
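
    The 3ds Max side (batch-rendering the 20 frames per angle) is not covered here, but the strip-combining step at least can be scripted instead of done by hand. A minimal sketch using .NET's System.Drawing; the frame file names are hypothetical placeholders for whatever the renderer writes out:

        using System.Drawing;            // reference System.Drawing.dll
        using System.Drawing.Imaging;

        static class SpriteStrip
        {
            // framePaths: the rendered frames (e.g. car_000.png .. car_019.png),
            // all the same size; they are pasted left-to-right into one strip.
            public static void Combine(string[] framePaths, string outputPath)
            {
                using (var first = Image.FromFile(framePaths[0]))
                using (var strip = new Bitmap(first.Width * framePaths.Length, first.Height))
                using (var g = Graphics.FromImage(strip))
                {
                    for (int i = 0; i < framePaths.Length; i++)
                    {
                        using (var frame = Image.FromFile(framePaths[i]))
                            g.DrawImage(frame, i * first.Width, 0, first.Width, first.Height);
                    }
                    strip.Save(outputPath, ImageFormat.Png);
                }
            }
        }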

    Read the article

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
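
    One common way to keep this data in a single place without a database is a small data-driven table owned by the trading system rather than by Resource itself. A minimal sketch, with hypothetical names (ResourceType and TradeRules are stand-ins for the classes described above):

        using System.Collections.Generic;

        // All tech-level requirements live in one table, so there is a single
        // place to edit them and Resource stays free of trading concerns.
        public enum ResourceType { Ore, Fuel, Medicine }

        public static class TradeRules
        {
            private static readonly Dictionary<ResourceType, int> MinTechLevel =
                new Dictionary<ResourceType, int>
                {
                    { ResourceType.Ore, 1 },
                    { ResourceType.Fuel, 2 },
                    { ResourceType.Medicine, 3 },
                };

            public static bool CanOffer(ResourceType resource, int baseTechLevel)
            {
                return baseTechLevel >= MinTechLevel[resource];
            }
        }

        // A TradingPost can then ask the table instead of each Resource subclass:
        // var offered = allResources.Where(r => TradeRules.CanOffer(r.Type, owningBase.TechLevel));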

    Read the article

  • Trying to create a sphere in UDK on which I can stand

    - by Dave
    Trying to build a globe in UDK, but when I create a sphere, my player falls straight through it. How do I make a sphere that I can walk on? Every other shape (cube, cone, etc.) works just fine. -- Edit: Specifically, I want to build a CSG/brush sphere, not a mesh sphere. It appears to work just fine if I set the "sphere extrapolation" to 1 or 2, but if I bump it up to 3 or higher, I fall right through. I literally created 2 spheres next to each other, one set at "2" and one at "3": I can walk on top of the "2" sphere and jump onto the "3" sphere, but I fall right through it.

    Read the article

  • How to do pixel-by-pixel modeling in Unity3D?

    - by Kabumbus
    So generally I want to have an API like pixels.addPixel3D(new Pixel3D(0xFF0000, 100, 100, 100)); (color, position), where pixels is some abstraction over a 3D scene object, a point cloud so to say. It would be of great use in deep-space/star modeling. I want to set each pixel by hand (with no image base or any automatic process). So the point is to model something like the example shown, or look at a live Flash analogue. How can I do such a thing in Unity?

    Read the article

  • How can I solve this SAT direct corner intersection edge case?

    - by ssb
    I have a working SAT implementation, but I am running into a problem where direct collisions at a corner do not work for tiled surfaces. That is, the player clips on the surface when moving in a certain direction because it gets hung up on one of the tiles. For example, if I walk across a floor while holding both down and left, the player will stop when meeting the next shape, because the player is colliding with the right side of that tile rather than with its top. This illustration shows what I mean: the top block will translate right first and then up. I have checked here and here, which are helpful, but they do not address what I should do in a situation where I don't have a tile-based world. My usage of the term "tile" before isn't really accurate, since what I'm doing here is manually placing square obstacles next to each other, not assigning them spots on a grid. What can I do to fix this?
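
    Not a definitive fix, but the usual mitigation when separate boxes form one flat surface is to resolve along the axis with the smaller overlap, so the vertical push out of the floor wins over the sideways push from the seam. A rough sketch of that axis choice, assuming an XNA-style Vector2 and penetration depths already reported by the SAT test:

        // overlapX/overlapY: penetration depths for this pair from the SAT test.
        // Resolving the shallower axis avoids the sideways push that snags the
        // player on the seam between two adjacent obstacles.
        static Vector2 ChooseResolution(float overlapX, float overlapY)
        {
            if (Math.Abs(overlapY) <= Math.Abs(overlapX))
                return new Vector2(0f, overlapY);   // push out vertically (top of the floor)
            else
                return new Vector2(overlapX, 0f);   // push out horizontally (a real wall)
        }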

    Read the article

  • Can DrawIndexedPrimitives() be used for drawing a loaded model mesh-wise?

    - by Afzal
    I am using DrawIndexedPrimitives() for drawing a loaded 3D model by drawing each mesh part, but this process makes my application very slow, perhaps because of the very large number of vertex/index buffers created in video memory. That is why I am looking for a way to use the same method per model mesh instead of per mesh part. The problem is that I don't know how I would then set the textures of that mesh. Can anyone offer me some guidance?
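
    Assuming this is XNA (the question talks about model mesh parts), a minimal sketch of drawing a loaded Model one mesh at a time while still assigning each part's texture through the BasicEffects that the content pipeline created; model, world, view, projection and meshTexture are placeholders for the project's own objects:

        Matrix[] transforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(transforms);

        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.World = transforms[mesh.ParentBone.Index] * world;
                effect.View = view;
                effect.Projection = projection;
                effect.TextureEnabled = true;
                effect.Texture = meshTexture;   // placeholder: the diffuse texture for this mesh
            }
            mesh.Draw();                        // draws all of the mesh's parts at once
        }

    If the model was imported with its materials, the effects may already hold the right textures, in which case the two texture lines can be dropped.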

    Read the article

  • Where have the Direct3D 11 tutorials on MSDN gone?

    - by Cam Jackson
    I've had this tutorial bookmarked for ages. I've just decided to give DX11 a real go, so I've gone through that tutorial, but I can't find where the next one in the series is! There are no links from that page to either the next in the series or back up to the table of contents that lists all of the tutorials. These are just companion tutorials to the samples that come with the SDK, but I find them very helpful. Searching MSDN from Google and from the MSDN Bing search box has turned up nothing; it's like they've removed all links to these tutorials, but the pages are still there if you have the URLs. Unfortunately, MSDN URLs are akin to YouTube URLs, so I can't just guess the URL of the next tutorial. Does anyone have any idea what happened to these tutorials, or how I can find the others?

    Read the article

  • 3d Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem more roomy to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same width. I've tried applying the scale to the model in the content pipeline (setting the scale property of the model in the properties window in VS). I've also tried to apply the scale using Matrix.CreateScale(float) with the Scale-Rotate-Transform order, with the same result. If I leave the camera speed the same, the camera moves slower, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure what part of the code to include since I don't know if it is an issue with my model, camera, or something else. Any hints at what I'm doing wrong?

    Camera:

        Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.PiOver4, _device.Viewport.AspectRatio, 1.0f, 1000.0f );
        Matrix camRotMatrix = Matrix.CreateRotationX( _cameraPitch ) * Matrix.CreateRotationY( _cameraYaw );
        Vector3 transCamRef = Vector3.Transform( _cameraForward, camRotMatrix );
        _cameraTarget = transCamRef + CameraPosition;
        Vector3 camRotUpVector = Vector3.Transform( _cameraUpVector, camRotMatrix );
        View = Matrix.CreateLookAt( CameraPosition, _cameraTarget, camRotUpVector );

    Model:

        World = Matrix.CreateTranslation( Position );
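
    One possibility is that the blocks are scaled but their positions are not, so the corridors keep their original spacing. A minimal sketch of putting the scale (and the scaled spacing) only into each block's world matrix, leaving View and Projection untouched; roomScale, Position and effect are placeholders for the project's own values:

        // Hypothetical uniform scale for each maze block; only the world transform changes.
        const float roomScale = 2.0f;

        // Scale first, then translate; the block positions are scaled as well so
        // the cells spread apart and the corridors actually get wider.
        Matrix World = Matrix.CreateScale(roomScale) *
                       Matrix.CreateTranslation(Position * roomScale);

        // View and Projection stay exactly as in the camera code above,
        // so the camera itself is never scaled.
        effect.World = World;
        effect.View = View;
        effect.Projection = Projection;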

    Read the article

  • What's a good way to do collision with 2D rectangles? Can someone give me a tip?

    - by Javier
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;

        namespace BreakOut
        {
            class Field
            {
                public static Field generateField()
                {
                    List<Block> blocks = new List<Block>();
                    for (int j = 0; j < BlockType.BLOCK_TYPES.Length; j++)
                        for (int i = 0; i < (Game1.WIDTH / Block.WIDTH); i++)
                        {
                            Block b = new Block(BlockType.BLOCK_TYPES[j], new Vector2(i * Block.WIDTH, (Block.HEIGHT + 2) * j + 5));
                            blocks.Add(b);
                        }
                    return new Field(blocks);
                }

                List<Block> blocks;

                public Field(List<Block> blocks)
                {
                    this.blocks = blocks;
                }

                public void Update(GameTime gameTime, Ball b)
                {
                    List<Block> removals = new List<Block>();
                    foreach (Block o in blocks)
                    {
                        if (o.BoundingBox.Intersects(new Rectangle((int)b.pos.X, (int)b.pos.Y, Ball.WIDTH, Ball.HEIGHT))) // collision with blocks
                        {
                            removals.Add(o);
                        }
                    }
                    foreach (Block o in removals)
                        blocks.Remove(o); // removes the blocks, but I need help hitting one at a time
                }

                public void Draw(GameTime gameTime)
                {
                    foreach (Block b in blocks)
                        b.Draw(gameTime);
                }
            }
        }

    My problem is that my collision handling here is poor. I'm trying to detect the collision between the ball and a block so that the block it hits disappears. The problem I'm having is that when the ball hits, it removes all of the blocks in one instant. Please don't be mean with your answers; I'm just in high school, still a noob, and trying to learn more C#/XNA.
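
    If the intent is for one hit to remove one block, a minimal change to the Update method above is to stop scanning after the first intersecting block. Everything here reuses the types from the question; the bounce handling is only hinted at in a comment since it is not shown above:

        public void Update(GameTime gameTime, Ball b)
        {
            Rectangle ballRect = new Rectangle((int)b.pos.X, (int)b.pos.Y, Ball.WIDTH, Ball.HEIGHT);

            foreach (Block o in blocks)
            {
                if (o.BoundingBox.Intersects(ballRect))
                {
                    blocks.Remove(o);   // remove only this block...
                    // (reflect the ball's velocity here as well)
                    break;              // ...and stop checking, so one hit removes one block
                }
            }
        }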

    Read the article

  • Where does the light come from, using Maya/Panda3D?

    - by Aerovistae
    Total noob to Maya. Total noob to Panda3D. Planning on becoming really good at both as soon as I have free time to do so, but right now I have an assignment due in a few hours which requires this (the part which confuses me is bolded):

        - Model and texture a vehicle and two different obstacles
        - Build a scene graph in Panda with a plane, the vehicle, several copies of each of the obstacles, and (at least) a directional light
        - Program vehicle movement, constrained to a plane (no terrain)
        - Working headlights
        - Vehicle collides with obstacles

    How do I attach a light source to a model? I'm assuming this is done in Panda3D, but I'm sufficiently new to this that I wouldn't be astonished to hear it's part of the model.

    Read the article

  • What has the most efficient intersection test against an AABB tree - OBB, Cylinder or Capsule?

    - by identitycrisisuk
    I'm currently trying to find collisions in 3D between a volume tighter than an AABB and a tree of AABB volumes. I just need to know whether they are intersecting; no closest distance or collision response. An OBB, cylinder or capsule would all roughly fit these purposes, but cylinders and capsules were the first things I thought of, and I have found little information online about detecting intersections with them. Am I right in thinking that they would always be more complex to perform separating-axis tests on, even though they might seem like simpler shapes? I figure that by the time I get my head around SAT for curved shapes I could have done the whole thing with OBBs, but I wanted to find out for sure.

    Read the article

  • Sensor based vs. AABB based collision

    - by Hillel
    I'm trying to write a simple collision system, which will probably be primarily used for 2D platformers, and I've been planning out an AABB system for a few weeks now, which will work seamlessly with my grid data structure optimization. I picked AABB because I want a simple system, but I also want it to be perfect. Now, I've been hearing a lot lately about a different method to handle collision, using sensors, which are placed in the important parts of the entity. I understand it's a good way to handle slopes, better than AABB collision. The thing is, I can't find a basic explanation of how it works, let alone a comparison of it and the AABB method. If someone could explain it to me, or point me to a good tutorial, I'd very much appreciate it, and also a comparison of the advantages and disadvantages of the two techniques would be nice.
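
    For what it's worth, the usual sensor setup is just a handful of sample points attached to the entity (under the feet, at the head, on the left/right sides); each frame you test what those points are inside and react per point, which is what makes slopes straightforward. A rough sketch of two foot sensors, where world.IsSolidAt and the player fields are hypothetical names:

        // Two foot sensors a little inside the entity's left and right edges.
        // On a slope, one sensor can report ground while the other does not,
        // which is information a single AABB overlap test cannot give you.
        bool leftGrounded  = world.IsSolidAt(player.Left + 2,  player.Bottom + 1);
        bool rightGrounded = world.IsSolidAt(player.Right - 2, player.Bottom + 1);

        if (leftGrounded || rightGrounded)
        {
            player.OnGround = true;
            player.VelocityY = 0;
        }
        else
        {
            player.OnGround = false;   // nothing under either foot: start falling
        }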

    Read the article

  • bump mapping with 2 normal maps

    - by DorkMonstuh
    I was wondering if it's actually possible to do bump mapping with 2 normal maps... I have tried doing it this way, however I get a function overload error on max and dot.

        uniform sampler2D n_mapTex;
        uniform sampler2D n_mapTex2;
        uniform sampler2D refTex;
        varying mediump vec2 TexCoord;
        varying mediump float vTime;

        void main()
        {
            mediump vec4 wave = texture2D(n_mapTex, TexCoord - vTime);
            mediump vec4 wave2 = texture2D(n_mapTex2, TexCoord + vTime);
            mediump vec4 bump = mix(wave2, wave, 0.5);
            // this extracts the normals from the combined normal maps
            mediump vec4 normal = normalize(bump.xyzw * 2.0 - 1.0);
            // determines light position
            mediump vec3 lightPos = normalize(vec3(0.0, 1.0, 3.0));
            mediump float diffuse = max(dot(normal, lightPos), 0.0);
            gl_FragColor = mix(texture2D(refTex, TexCoord), bump, 0.5);
        }

    Read the article

  • OpenGL polygons not displaying

    - by Darestium
    I have tried to follow NeHe's OpenGL tutorial, lesson 2. I use SFML for my window creation. The problem I have is that neither the triangle nor the quad shows up on the screen:

        #include <SFML/System.hpp>
        #include <SFML/Window.hpp>
        #include <iostream>

        void processEvents(sf::Window *app);
        void processInput(sf::Window *app, const sf::Input &input);
        void renderCube(sf::Window *app, sf::Clock *clock);
        void renderGlScene(sf::Window *app);
        void init();

        int main()
        {
            sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 2");
            app.UseVerticalSync(false);
            init();
            while (app.IsOpened())
            {
                processEvents(&app);
                renderGlScene(&app);
                app.Display();
            }
            return EXIT_SUCCESS;
        }

        void init()
        {
            glClearDepth(1.f);
            glClearColor(0.f, 0.f, 0.f, 0.f);
            // Enable z-buffer and read and write
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);
            // Setup a perspective projection
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluPerspective(45.f, 1.f, 1.f, 500.f);
            glShadeModel(GL_SMOOTH);
        }

        void processEvents(sf::Window *app)
        {
            sf::Event event;
            while (app->GetEvent(event))
            {
                if (event.Type == sf::Event::Closed)
                {
                    app->Close();
                }
                if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape)
                {
                    app->Close();
                }
            }
        }

        void renderGlScene(sf::Window *app)
        {
            app->SetActive();
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the screen and the depth buffer
            glLoadIdentity();                                   // Reset the view
            glTranslatef(-1.5f, 0.0f, -6.0f);                   // Move Left 1.5 units and into the screen 6.0
            glBegin(GL_TRIANGLES);
                glVertex3f( 0.0f, 1.0f, 0.0f); // Top
                glVertex3f(-1.0,-1.0f, 0.0f);  // Bottom Left
                glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
            glEnd();
            glTranslatef(3.0f, 0.0f, 0.0f);
            glBegin(GL_QUADS);                 // Draw a quad
                glVertex3f(-1.0f, 1.0f, 0.0f);
                glVertex3f( 1.0f, 1.0f, 0.0f);
                glVertex3f( 1.0f,-1.0f, 0.0f);
                glVertex3f(-1.0f,-1.0f, 0.0f);
            glEnd();
        }

    I would greatly appreciate it if someone could help me resolve my issue.

    Read the article

  • Box2D Difference Between WorldCenter and Position

    - by Free Lancer
    So this problem has been bothering me for a couple of days now. First off, what is the difference between, say, Body.getWorldCenter() and Body.getPosition()? I heard that WorldCenter might have to do with the center of gravity or something. Second, when I create a Box2D Body for a sprite, the Body is always at the lower left corner. I check it by printing a rectangle of 1 pixel around box.getWorldCenter(). From what I understand, the Body should be in the center of the Sprite and its bounding box should wrap around the Sprite, correct? Here's an image of what I mean (the Sprite is red, the Body blue). Here's some code:

    Body creator:

        public static Body createBoxBody( final World pPhysicsWorld, final BodyType pBodyType, final FixtureDef pFixtureDef, Sprite pSprite )
        {
            float pRotation = 0;
            float pCenterX = pSprite.getX() + pSprite.getWidth() / 2;
            float pCenterY = pSprite.getY() + pSprite.getHeight() / 2;
            float pWidth = pSprite.getWidth();
            float pHeight = pSprite.getHeight();

            final BodyDef boxBodyDef = new BodyDef();
            boxBodyDef.type = pBodyType;

            //boxBodyDef.position.x = pCenterX / Constants.PIXEL_METER_RATIO;
            //boxBodyDef.position.y = pCenterY / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.x = pSprite.getX() / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.y = pSprite.getY() / Constants.PIXEL_METER_RATIO;

            Vector2 v = new Vector2( boxBodyDef.position.x * Constants.PIXEL_METER_RATIO, boxBodyDef.position.y * Constants.PIXEL_METER_RATIO );
            Gdx.app.log("@Physics", "createBoxBody():: Box Position: " + v);

            // Temporary Box shape of the Body
            final PolygonShape boxPoly = new PolygonShape();
            final float halfWidth = pWidth * 0.5f / Constants.PIXEL_METER_RATIO;
            final float halfHeight = pHeight * 0.5f / Constants.PIXEL_METER_RATIO;
            boxPoly.setAsBox( halfWidth, halfHeight ); // set the anchor point to be the center of the sprite
            pFixtureDef.shape = boxPoly;

            final Body boxBody = pPhysicsWorld.createBody(boxBodyDef);
            Gdx.app.log("@Physics", "createBoxBody():: Box Center: " + boxBody.getPosition().mul(Constants.PIXEL_METER_RATIO));

            boxBody.createFixture(pFixtureDef);
            boxBody.setTransform( boxBody.getWorldCenter(), MathUtils.degreesToRadians * pRotation );
            boxPoly.dispose();

            return boxBody;
        }

    Making the Sprite:

        public Car( Texture texture, float pX, float pY, World world )
        {
            super( "Car" );
            mSprite = new Sprite( texture );
            mSprite.setSize( mSprite.getWidth() / 6, mSprite.getHeight() / 6 );
            mSprite.setPosition( pX, pY );
            mSprite.setOrigin( mSprite.getWidth()/2, mSprite.getHeight()/2);

            FixtureDef carFixtureDef = new FixtureDef();
            // Set the Fixture's properties, like friction, using the car's shape
            carFixtureDef.restitution = 1f;
            carFixtureDef.friction = 1f;
            carFixtureDef.density = 1f; // needed to rotate body using applyTorque

            mBody = Physics.createBoxBody( world, BodyDef.BodyType.DynamicBody, carFixtureDef, mSprite );
        }

    Read the article

  • Problems with texture mapping in modern OpenGL 3.3 using GLSL #version 150

    - by RubyKing
    Hi all, I'm trying to do texture mapping using modern OpenGL and GLSL 150. The problem is the texture shows but has this weird flicker; I can show a video here: http://www.youtube.com/watch?v=xbzw_LMxlHw. I have everything set up as best I can: the texcoords are in my vertex array and sent up to OpenGL, the fragment color is set from the texture/texel values, and the vertex shader passes the texture coordinates on to the fragment shader. My ins and outs are set up, and I still don't know what I'm missing that could be causing that flicker. Here is my code.

    Fragment shader:

        #version 150

        uniform sampler2D texture;
        in vec2 texture_coord;
        varying vec3 texture_coordinate;

        void main(void)
        {
            gl_FragColor = texture(texture, texture_coord);
        }

    Vertex shader:

        #version 150

        in vec4 position;
        out vec2 texture_coordinate;
        out vec2 texture_coord;
        uniform vec3 translations;

        void main()
        {
            texture_coord = (texture_coordinate);
            gl_Position = vec4(position.xyz + translations.xyz, 1.0);
        }

    Last bit here is my vertex array with texture coordinates:

        GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f,
                             0.0f, 0.5f, 0.0f, 1.0f, 1.0f,
                             0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
                             0.5f, 0.0f, 0.0f, 1.0f, 0.0f }; // tex x and y

    If you need to see all the code in its fullest glory, here is a link to every file: http://ideone.com/7kQN3. Thank you for your help.

    Read the article

  • Can't change color of sprites in Unity

    - by Aceleeon
    I would like to create a script that targets a 2D sprite "enemy" and changes its color to red (slightly transparent red if possible) when you hit Tab. I have this code from a 3D tutorial and hoped the transition would work, but it does not: the script cycles through the tagged enemies but never changes the color of the sprite. I'm very new to coding, and any help would be fantastic! TL;DR: Can't get 3D color targeting to work for 2D. Check out the C# code below:

        using UnityEngine;
        using System.Collections;
        using System.Collections.Generic;

        public class Targetting : MonoBehaviour
        {
            public List<Transform> targets;
            public Transform selectedTarget;

            private Transform myTransform;

            // Use this for initialization
            void Start()
            {
                targets = new List<Transform>();
                selectedTarget = null;
                myTransform = transform;
                AddAllEnemies();
            }

            public void AddAllEnemies()
            {
                GameObject[] go = GameObject.FindGameObjectsWithTag("Enemy");
                foreach (GameObject enemy in go)
                    AddTarget(enemy.transform);
            }

            public void AddTarget(Transform enemy)
            {
                targets.Add(enemy);
            }

            private void SortTargetsByDistance()
            {
                targets.Sort(delegate(Transform t1, Transform t2)
                {
                    return Vector3.Distance(t1.position, myTransform.position).CompareTo(Vector3.Distance(t2.position, myTransform.position));
                });
            }

            private void TargetEnemy()
            {
                if (selectedTarget == null)
                {
                    SortTargetsByDistance();
                    selectedTarget = targets[0];
                }
                else
                {
                    int index = targets.IndexOf(selectedTarget);
                    if (index < targets.Count - 1)
                    {
                        index++;
                    }
                    else
                    {
                        index = 0;
                    }
                    selectedTarget = targets[index];
                }
            }

            private void SelectTarget()
            {
                selectedTarget.GetComponent().color = Color.red;
            }

            private void DeselectTarget()
            {
                selectedTarget.GetComponent().color = Color.blue;
                selectedTarget = null;
            }

            // Update is called once per frame
            void Update()
            {
                if (Input.GetKeyDown(KeyCode.Tab))
                {
                    TargetEnemy();
                }
            }
        }
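
    Two things stand out in the code above: Update() only ever calls TargetEnemy(), so SelectTarget() is never reached, and the GetComponent() calls are missing their component type (likely stripped when the question was posted). A minimal sketch of how the Tab handler and the color change could look for 2D sprites, assuming each enemy carries a SpriteRenderer; anything not in the question's own code is an assumption:

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Tab))
            {
                if (selectedTarget != null)
                    selectedTarget.GetComponent<SpriteRenderer>().color = Color.white; // un-highlight the previous target
                TargetEnemy();
                SelectTarget(); // the original Update never called this, so no color ever changed
            }
        }

        private void SelectTarget()
        {
            // Slightly transparent red; requires a SpriteRenderer on the enemy.
            selectedTarget.GetComponent<SpriteRenderer>().color = new Color(1f, 0f, 0f, 0.7f);
        }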

    Read the article

  • In OpenGL ES 2, should I allocate multiple transformation matrices?

    - by thm4ter
    In OpenGL ES 2, should I declare just one transformation matrix and share it across all objects, or should I declare a transformation matrix in each object that needs it? For clarification, something like this:

        public class someclass
        {
            public static float[] transMatrix = new float[16];
            ...
            public static void translate(int x, int y)
            {
                // do translation here
            }
        }

        public class someotherclass
        {
            ...
            void draw(GL10 unused)
            {
                someclass.translate(10, 10);
                // draw
            }
        }

    versus something like this:

        public class obj1
        {
            public static float[] transMatrix = new float[16];
            ...
            void draw(GL10 unused)
            {
                // translate
                // draw
            }
        }

        public class obj2
        {
            public static float[] transMatrix = new float[16];
            ...
            void draw(GL10 unused)
            {
                // translate
                // draw
            }
        }

    Read the article

  • How to achieve selection of a tile from a tile sheet based on an ID?

    - by Bugster
    Let's say I have a tile sheet that contains 8 sprites per sheet. Each sprite is a tile of 30x30. I wrote my own custom map parser/map loader, however I'm having trouble extracting a certain tile sprite from the file. I'll describe my problem better so everyone can understand. I wrote an enum of materials; each material has a value according to its location relative to the tile sheet. For example, void is 1, grass is 2, rock is 3, etc. So in my tile sheet they are represented as such:

        +---+---+---+---+---+
        | 1 | 2 | 3 | 4 | 5 |
        +---+---+---+---+---+

    Which is equivalent to:

        +------+-------+-------+
        | void | grass | stone |
        +------+-------+-------+

    Basically, when rendering, I created a tile class; each tile has 2 coordinates, X and Y (they are calculated automatically), and a material which can be represented either as a number or as a value (ID). When rendering, I have a vector of sprites which are all taken from one file called tilesheet.png, however each of them must only draw a certain portion of the tile sheet. For example, say I have something like this:

        tile coordinateBounds(topLeftX, topLeftY, tileWidth, tileHeight);

    During the initialization of the map I calculate an array of tiles, and I give each of them their position, their material based on the values in a map file, and a few other variables such as collision. I need to apply the coordinateBounds to each of them according to their material value. For example, if the material is grass, it should only take the grass sprite from the tilesheet. I must also mention I'm using SFML, and there are no borders or spacing between the tiles.
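
    However the map is stored, the sub-rectangle for a material can be computed from its ID alone. A minimal sketch, using Rectangle as a stand-in for whatever bounds type is in use (SFML's sf::IntRect takes the same left/top/width/height values); the 30 and 8 come from the question:

        static Rectangle GetTileBounds(int materialId)
        {
            const int TileSize = 30;        // 30x30 tiles, no borders or spacing
            const int TilesPerRow = 8;      // 8 sprites per sheet
            int index = materialId - 1;     // material IDs start at 1 in the question
            int column = index % TilesPerRow;
            int row = index / TilesPerRow;  // stays 0 while the sheet is a single row
            return new Rectangle(column * TileSize, row * TileSize, TileSize, TileSize);
        }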

    Read the article

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame:

        // Bind VAO
        glBindVertexArray(m_vao);

        // Update the vertex buffer with the new data (copy data into the vertex buffer object)
        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

        // Update the index buffer with the new data (copy data into the index buffer object)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW);

        glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

        // Unbind VAO
        glBindVertexArray(0);

    What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

        std::array<VertexPosition, maxVertices> m_vertices;

    The index buffer has its elements set from an array of defined maximum size:

        std::array<unsigned short, maxIndices> indices = { 0 };

    A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

    where m_vertices.data() is the entire array being stored, and numVertices * sizeof(VertexPosition) is the amount of data to read from that array. Is this not the correct way to approach this? I do not wish to use std::vector if possible.

    Read the article

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrain the player; he can place an unlimited number of possibly overlapping (point) lights into the level. The problem is that shaders containing the dynamic loops needed to calculate the lighting tend to be very slow. My idea was to compile the shader n times, where n is the number of lights; if n is known at compile time, the loops can be unrolled automatically. Is it possible to generate n versions of the same shader that differ only in the number of lights? At runtime I could then decide which shader to use for which part of the level.

    Read the article

  • Texture not drawing on cubes

    - by Christian Frantz
    I can draw the cubes fine, but they are just solid black besides the occasional lighting that goes on. The BasicEffect is being set for each cube as well.

        public void Draw(BasicEffect effect)
        {
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                device.SetVertexBuffer(vertexBuffer);
                device.Indices = indexBuffer;
                device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 8, 0, 12);
            }
        }

    That is the cube's draw method. TextureEnabled is set to true in my main draw method. My texture is also loading fine.

        public Cube(GraphicsDevice graphicsDevice, Vector3 Position, Texture2D Texture)
        {
            device = graphicsDevice;
            texture = Texture;
            cubePosition = Position;
            effect = new BasicEffect(device);
        }

    The constructor seems fine too. Could this be caused by the Vector2s of my VertexPositionNormalTexture? Even if they were out of order, something should still be drawn other than a black cube.
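
    One thing visible in the excerpt: the constructor creates its own BasicEffect, but Draw() receives a different one, so the Texture2D stored in the cube may never reach the effect that actually renders. A minimal sketch of wiring the texture into the effect that is passed in; cubePosition, texture, vertexBuffer and indexBuffer are the fields from the question:

        public void Draw(BasicEffect effect)
        {
            // Make sure the effect that actually draws the cube knows about the texture.
            effect.TextureEnabled = true;
            effect.Texture = texture;           // the Texture2D stored in the constructor
            effect.World = Matrix.CreateTranslation(cubePosition);

            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                device.SetVertexBuffer(vertexBuffer);
                device.Indices = indexBuffer;
                pass.Apply();                   // apply after the parameters are set
                device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 8, 0, 12);
            }
        }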

    Read the article

  • How do I implement a selectable world map?

    - by Clay
    I want to have a selectable map of the world, preferably zoomable, in a cocos2d project. When I tap on a country, I want that country to be selected so that I can perform some other operations with it. It seems that the best approach would be to use a vector world map, but I'm unsure how to implement this with cocos2d. Other options include using map tiles, but it seems that still would require the implementation of country polygons for tap/click detection. Depending on user input, I want to add icons to various countries on the map. What is a good way to approach the implementation of this type of map?
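
    Whatever the map source (vector data, or tiles plus stored country outlines), the tap test usually comes down to a point-in-polygon check against each country's outline. A minimal even-odd ray-casting sketch, with the country polygons assumed to be arrays of (x, y) vertices in the same space as the converted touch location:

        // Classic even-odd rule: count how many polygon edges a horizontal ray
        // from the point crosses; an odd count means the point is inside.
        static bool ContainsPoint(float px, float py, float[] xs, float[] ys)
        {
            bool inside = false;
            int count = xs.Length;
            for (int i = 0, j = count - 1; i < count; j = i++)
            {
                bool crosses = (ys[i] > py) != (ys[j] > py) &&
                               px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i];
                if (crosses)
                    inside = !inside;
            }
            return inside;
        }

    Zooming then only changes the transform applied to the touch point before the test, not the polygons themselves.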

    Read the article

  • HTML5 clicking objects in canvas

    - by Dave
    I have a function in my JS that gets the user's mouse click on the canvas. Now let's say I have a random shape on my canvas (really it's a PNG image, which is rectangular), but I don't want to include any alpha space. My issue: say I click somewhere and the click lands on a pixel of one of the images. The first question is how you work out that the clicked pixel belongs to an object on the map (and not the grass tiles behind it). Secondly, if I clicked said image and each image contains its own unique information, how do you process the click to load the correct data? Note I don't use libraries; I personally prefer the raw method. Relying on libraries doesn't teach me much, I find.

    Read the article

  • AABB vs OBB Collision Resolution jitter on corners

    - by patt4179
    I've implemented a collision library for a character who is an AABB and am resolving collisions between AABB vs AABB and AABB vs OBB. I wanted slopes for certain sections, so I've toyed around with using several OBBs to make one, and it's working great except for one glaring issue: the collision resolution on the corner of an OBB makes the player's AABB jitter up and down constantly. I've tried a few things I've thought of, but I just can't wrap my head around what's going on exactly. Here's a video of what's happening as well as my code.

    Here's the function to get the collision resolution (I'm likely not doing this the right way, so this may be where the issue lies):

        public Vector2 GetCollisionResolveAmount(RectangleCollisionObject resolvedObject, OrientedRectangleCollisionObject b)
        {
            Vector2 overlap = Vector2.Zero;
            LineSegment edge = GetOrientedRectangleEdge(b, 0);

            if (!SeparatingAxisForRectangle(edge, resolvedObject))
            {
                LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment();
                Range axisRange = new Range(), rEdgeARange = new Range(), rEdgeBRange = new Range(), rProjection = new Range();
                Vector2 n = edge.PointA - edge.PointB;

                rEdgeA.PointA = RectangleCorner(resolvedObject, 0);
                rEdgeA.PointB = RectangleCorner(resolvedObject, 1);
                rEdgeB.PointA = RectangleCorner(resolvedObject, 2);
                rEdgeB.PointB = RectangleCorner(resolvedObject, 3);
                rEdgeARange = ProjectLineSegment(rEdgeA, n);
                rEdgeBRange = ProjectLineSegment(rEdgeB, n);
                rProjection = GetRangeHull(rEdgeARange, rEdgeBRange);
                axisRange = ProjectLineSegment(edge, n);

                float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2;
                float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2;

                if (projectionMid > axisMid)
                {
                    overlap.X = axisRange.Maximum - rProjection.Minimum;
                }
                else
                {
                    overlap.X = rProjection.Maximum - axisRange.Minimum;
                    overlap.X = -overlap.X;
                }
            }

            edge = GetOrientedRectangleEdge(b, 1);
            if (!SeparatingAxisForRectangle(edge, resolvedObject))
            {
                LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment();
                Range axisRange = new Range(), rEdgeARange = new Range(), rEdgeBRange = new Range(), rProjection = new Range();
                Vector2 n = edge.PointA - edge.PointB;

                rEdgeA.PointA = RectangleCorner(resolvedObject, 0);
                rEdgeA.PointB = RectangleCorner(resolvedObject, 1);
                rEdgeB.PointA = RectangleCorner(resolvedObject, 2);
                rEdgeB.PointB = RectangleCorner(resolvedObject, 3);
                rEdgeARange = ProjectLineSegment(rEdgeA, n);
                rEdgeBRange = ProjectLineSegment(rEdgeB, n);
                rProjection = GetRangeHull(rEdgeARange, rEdgeBRange);
                axisRange = ProjectLineSegment(edge, n);

                float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2;
                float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2;

                if (projectionMid > axisMid)
                {
                    overlap.Y = axisRange.Maximum - rProjection.Minimum;
                    overlap.Y = -overlap.Y;
                }
                else
                {
                    overlap.Y = rProjection.Maximum - axisRange.Minimum;
                }
            }

            return overlap;
        }

    And here is what I'm doing to resolve it right now:

        if (collisionDetection.OrientedRectangleAndRectangleCollide(obb, player.PlayerCollision))
        {
            var resolveAmount = collisionDetection.GetCollisionResolveAmount(player.PlayerCollision, obb);

            if (Math.Abs(resolveAmount.Y) < Math.Abs(resolveAmount.X))
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.Y);
                player.PlayerCollision._position.Y -= roundedAmount;
            }
            else if (Math.Abs(resolveAmount.Y) <= 30.0f) // Catch cases where the player should be able to step over the top of something
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.Y);
                player.PlayerCollision._position.Y -= roundedAmount;
            }
            else
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.X);
                player.PlayerCollision._position.X -= roundedAmount;
            }
        }

    Can anyone see what might be the issue here, or has anyone experienced this before that knows a possible solution? I've tried for a few days to figure this out on my own, but I'm just stumped.

    Read the article
