Search Results


  • Quaternion LookAt for camera

    - by Homar
    I am using the following code to rotate entities to look at points. glm::vec3 forwardVector = glm::normalize(point - position); float dot = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector); float rotationAngle = (float)acos(dot); glm::vec3 rotationAxis = glm::normalize(glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector)); rotation = glm::normalize(glm::quat(rotationAxis * rotationAngle)); This works fine for my usual entities. However, when I use this on my Camera entity, I get a black screen. If I flip the subtraction in the first line, so that I take the forward vector to be the direction from the point to my camera's position, then my camera works but naturally my entities rotate to look in the opposite direction of the point. I compute the transformation matrix for the camera and then take the inverse to be the View Matrix, which I pass to my OpenGL shaders: glm::mat4 viewMatrix = glm::inverse( cameraTransform->GetTransformationMatrix() ); The orthographic projection matrix is created using glm::ortho. What's going wrong?
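    For reference, here is a self-contained sketch of the same axis-angle construction with the degenerate cases (forward parallel or anti-parallel to +Z) guarded. It uses glm::angleAxis, GLM's explicit axis-angle constructor; note that the glm::quat(vec3) constructor used above interprets its argument as Euler angles, not as axis * angle. Names are illustrative only:

    ```cpp
    // Sketch: build a rotation that turns the model's +Z axis onto the direction
    // from 'position' to 'point'. Assumes GLM; the helper name is made up.
    #include <cmath>
    #include <glm/glm.hpp>
    #include <glm/gtc/constants.hpp>
    #include <glm/gtc/quaternion.hpp>

    glm::quat lookAtQuat(const glm::vec3& position, const glm::vec3& point)
    {
        const glm::vec3 modelForward(0.0f, 0.0f, 1.0f);      // axis the mesh faces by default
        glm::vec3 forward = glm::normalize(point - position);
        float d = glm::clamp(glm::dot(modelForward, forward), -1.0f, 1.0f);

        if (d > 1.0f - 1e-6f)         // already aligned: identity rotation
            return glm::quat(1.0f, 0.0f, 0.0f, 0.0f);
        if (d < -1.0f + 1e-6f)        // exactly opposite: 180 degrees about any perpendicular axis
            return glm::angleAxis(glm::pi<float>(), glm::vec3(0.0f, 1.0f, 0.0f));

        glm::vec3 axis = glm::normalize(glm::cross(modelForward, forward));
        return glm::normalize(glm::angleAxis(std::acos(d), axis));
    }
    ```

    One more thing worth checking: a view matrix conventionally looks down -Z, so a camera rotated to face +Z like the other entities ends up looking away from the scene, which may be why flipping the subtraction appears to fix the camera while breaking everything else.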

    Read the article

  • How to deal with Character body parts from Design to Cocos2d

    - by Edwin Soho
    I'm trying to figure out the pattern that game developers use together with game designers: see the picture below with the independent parts. Questions: 1) Should I create different image parts for the different body parts, or keep frame-by-frame animation? (I know both can be used, but I'm trying to figure out what is commonly used in the industry.) 2) If I'm going to generate different image parts for the different body parts (which I think is more logical), how would I export that to Cocos2d (vector or bitmap)?

    Read the article

  • How to scroll hex tiles?

    - by Chris Evans
    I don't seem to be able to find an answer to this one. I have a map of hex tiles. I wish to implement scrolling. Code at present: drawTilemap = function() { actualX = Math.floor(viewportX / hexWidth); actualY = Math.floor(viewportY / hexHeight); offsetX = -(viewportX - (actualX * hexWidth)); offsetY = -(viewportY - (actualY * hexHeight)); for(i = 0; i < (10); i++) { for(j = 0; j < 10; j++) { if(i % 2 == 0) { x = (hexOffsetX * i) + offsetX; y = j * sourceHeight; } else { x = (hexOffsetX * i) + offsetX; y = hexOffsetY + (j * sourceHeight); } var tileselected = mapone[actualX + i][j]; drawTile(x, y, tileselected); } } } The code I've written so far only handles X movement, and it doesn't yet work the way it should. If you look at my example on jsfiddle.net below, you will see that when moving to the right, once you get to the next hex tile along, there is a problem with the X position and the calculations that have taken place. It seems a simple bit of maths is missing. Unfortunately I've been unable to find an example that includes scrolling yet. http://jsfiddle.net/hd87E/1/ Make sure there is no horizontal scroll bar, then try moving right using the right arrow key on the keyboard. You will see the problem as you reach the end of the first tile. Apologies for the horrid code, I'm learning! Cheers
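    For comparison, here is a small sketch (C++, all names assumed) of the usual scrolling decomposition: the camera position is split into a first-visible-column index plus a sub-tile pixel offset, and both parts are derived from the same horizontal spacing. The snippet above divides viewportX by hexWidth but places columns using hexOffsetX, so it is worth checking that one consistent spacing is used for both steps:

    ```cpp
    // Sketch, not a drop-in fix: decompose a camera X position into the first
    // visible hex column plus a pixel offset, using one consistent column spacing.
    #include <cmath>

    struct HexScroll {
        int   firstColumn;   // index of the left-most visible column
        float pixelOffset;   // how far that column is shifted left, in pixels
    };

    HexScroll decompose(float cameraX, float columnSpacing /* e.g. hexOffsetX */)
    {
        HexScroll s;
        s.firstColumn = static_cast<int>(std::floor(cameraX / columnSpacing));
        s.pixelOffset = -(cameraX - s.firstColumn * columnSpacing);
        return s;
    }

    // Drawing then walks columns firstColumn .. firstColumn + visibleColumns,
    // placing column i at x = (i - firstColumn) * columnSpacing + pixelOffset.
    ```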

    Read the article

  • Pattern for performing game actions

    - by Arkiliknam
    Is there a generally accepted pattern for performing various actions within a game? A way for a player to perform actions, and also for an AI to perform actions, such as move, attack, self-destruct, etc. I currently have an abstract BaseAction which uses .NET generics to specify the different objects that get returned by the various actions. This is all implemented in a pattern similar to the Command pattern, where each action is responsible for itself and does all that it needs. My reasoning for making it abstract is so that I may have a single ActionHandler, and the AI can just queue up different actions implementing the BaseAction. And the reason it is generic is so that the different actions can return result information relevant to the action (as different actions can have totally different outcomes in the game), along with some common beforeAction and afterAction implementations. So... is there a more accepted way of doing this, or does this sound alright?
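    For illustration only, here is a compact C++ rendering of the same Command-style idea: a plain base class the handler can queue, with each concrete action owning its own typed result. Every name below is invented, not the poster's actual code:

    ```cpp
    // Sketch of a queued Command-style action system.
    #include <memory>
    #include <queue>

    struct BaseAction {
        virtual ~BaseAction() = default;
        virtual void BeforeAction() {}          // common pre-step
        virtual void Execute() = 0;             // the action does all that it needs here
        virtual void AfterAction() {}           // common post-step
    };

    struct MoveResult { int newX = 0, newY = 0; };

    struct MoveAction : BaseAction {
        int targetX, targetY;
        MoveResult result;                      // action-specific outcome lives on the action
        MoveAction(int x, int y) : targetX(x), targetY(y) {}
        void Execute() override { result = { targetX, targetY }; }
    };

    struct ActionHandler {
        std::queue<std::unique_ptr<BaseAction>> pending;
        void Enqueue(std::unique_ptr<BaseAction> a) { pending.push(std::move(a)); }
        void RunAll() {
            while (!pending.empty()) {
                BaseAction& a = *pending.front();
                a.BeforeAction(); a.Execute(); a.AfterAction();
                pending.pop();
            }
        }
    };
    ```

    The queue-and-execute shape is essentially the Command pattern, which is a widely used approach for exactly this kind of shared player/AI action handling; whether results live on the action (as here) or come back through a generic type parameter is largely a matter of taste.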

    Read the article

  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible fifth edition and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape using a couple shaders that implemented per-vertex coloring and lighting and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data and the fragment shader to ONLY outputting a solid color (white) the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader) the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out so you can get an idea what I was trying to do. I'm a noob at glsl so the code could probably be a lot better. My laptop is an old lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10 My desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Saphire dual booting into Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2? string vertexShaderSource = @" #version 330 precision highp float; uniform mat4 projection_matrix; uniform mat4 modelview_matrix; //uniform mat4 normal_matrix; //uniform mat4 cmv_matrix; //Camera modelview. Light sources are transformed by this matrix. 
//uniform vec3 ambient_color; //uniform vec3 diffuse_color; //uniform vec3 diffuse_direction; in vec4 in_position; in vec4 in_color; //in vec3 in_normal; //in vec3 in_tex_coords; out vec4 varyingColor; //out vec3 varyingTexCoords; void main(void) { //Get surface normal in eye coordinates //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0); //Get vertex position in eye coordinates //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0); //vec3 vPosition3 = vPosition4.xyz / vPosition4.w; //Get vector to light source in eye coordinates //vec3 lightVecNormalized = normalize(diffuse_direction); //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz); //Dot product gives us diffuse intensity //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz)); //Multiply intensity by diffuse color, force alpha to 1.0 //varyingColor.xyz = in_color * diff * diffuse_color.xyz; varyingColor = in_color; //varyingTexCoords = in_tex_coords; gl_Position = projection_matrix * modelview_matrix * in_position; }"; string fragmentShaderSource = @" #version 330 //#extension GL_EXT_gpu_shader4 : enable precision highp float; //uniform sampler2DArray colorMap; //in vec4 varyingColor; //in vec3 varyingTexCoords; out vec4 out_frag_color; void main(void) { out_frag_color = vec4(1,1,1,1); //out_frag_color = varyingColor; //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st); //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0)); //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords); }"; Note that in this code the color data is accepted but not actually used. The geometry is outputted the same (wrong) whether the fragment shader uses varyingColor or not. Only if I comment out the line varyingColor = in_color; does the geometry output correctly. Originally the shaders took in vec3 inputs, I only modified them to take vec4s while troubleshooting.
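    One portability detail worth noting with hand-written attribute setup: unless locations are assigned explicitly, the driver is free to give in_position and in_color any indices it likes, so hard-coded indices passed to glVertexAttribPointer can happen to match on one vendor's driver and not on another's. A minimal sketch of pinning the locations before linking, shown with the C API (OpenTK exposes equivalent calls); the shader handles are placeholders:

    ```cpp
    // Sketch: pin attribute locations so index 0 is always the position and
    // index 1 the color, whatever the driver would have chosen on its own.
    // Assumes an active GL context and already-compiled shader objects.
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    glBindAttribLocation(program, 0, "in_position");   // must happen before glLinkProgram
    glBindAttribLocation(program, 1, "in_color");
    glLinkProgram(program);

    // Alternatively, GLSL 3.30 lets the shader declare the locations itself:
    //   layout(location = 0) in vec4 in_position;
    //   layout(location = 1) in vec4 in_color;
    ```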

    Read the article

  • How to move an object using X and Y coordinates in JavaScript

    - by Geroy290
    I am making a 2D game with JavaScript and HTML5 and am trying to move an image that I have drawn with JavaScript like so: //canvas var c = document.getElementById("gameCanvas"); var ctx = c.getContext("2d"); //baseball var baseball = new Image(); baseball.onload = function() { ctx.drawImage(baseball, 400, 425); }; baseball.src = "baseball2.png"; I'm not sure how I would move it, though. I have seen that many people seem to just type something like ballX and ballY, but I don't understand where the actual x and y definitions come from. Here is my code so far: http://jsfiddle.net/xRfua/ I have a different image source, but it is a local source so I couldn't include it. Thanks in advance for any help!

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup: Scene 1: Example Object: A dirt road (nothing else) Geometry: Detailed road, with all the bumps, cracks and so forth done in the mesh Scene 2: Example Object: A dirt road (nothing else) Geometry: A simple mesh, in a form of a road, but in this case maps and textures are simulating cracks, bumps, etc... So of these two, which one is likely to tax the hardware more? Or is it not a like for like comparison? What would be the best way of doing something like this? Go heavy on the textures? Or have a blend of both?

    Read the article

  • Checking for alternate keys with XNA IsKeyDown

    - by jocull
    I'm working on picking up XNA and this was a confusing point for me. KeyboardState keyState = Keyboard.GetState(); if (keyState.IsKeyDown(Keys.Left) || keyState.IsKeyDown(Keys.A)) { //Do stuff... } The book I'm using (Learning XNA 4.0, O'Reilly) says that this method accepts a bitwise OR series of keys, which I think should look like this... KeyboardState keyState = Keyboard.GetState(); if (keyState.IsKeyDown(Keys.Left | Keys.A)) { //Do stuff... } But I can't get it to work. I also tried using !IsKeyUp(... | ...) as it said that all keys had to be down for it to be true, but had no luck with that either. Ideas? Thanks.

    Read the article

  • Best practice for designing a risk-style board game

    - by jyanks
    I'm just trying to figure out how to set up the code for a game like Risk... I would like it to be extensible, so that I can have multiple maps (i.e. World, North America, Eurasia, Africa), so hardcoding the map doesn't seem to make a whole lot of sense. I'm a bit confused about how/where items should be stored/accessed. Here are the objects I see the game theoretically using: -Countries/Territories -Cities (can be contained within territories) -Capitals -Connections -Continents -Map -Troops At the moment, I feel like: -A map should have a list of continents and countries. The continents would be more of a 'logical' thing, where the continents would just be lists of countries that are checked for bonuses at the start of turns. -Countries should have a list of countries that they're connected to for the connections. What I can't figure out is: Where do I store the troops? Do I have an object for every single troop, or do I just store the number of troops on a country object as an integer? What about capitals and cities? Do those just have a reference to the country they reside in? Is there anything I'm not seeing here that's going to screw me over in the long run with the way that I'm thinking about things now? Any advice would be appreciated.
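    To make the storage question concrete, here is one possible layout (sketched in C++ for brevity; every name is invented) along the lines described above: troops as a plain per-territory count, continents as logical groupings of territory ids, and cities/capitals holding a back-reference to their territory:

    ```cpp
    // Illustrative data layout only; nothing here is a required design.
    #include <string>
    #include <vector>

    struct City {
        std::string name;
        int  territoryId;                // back-reference to the territory it sits in
        bool isCapital = false;
    };

    struct Territory {
        int id;
        std::string name;
        int ownerId = -1;                // which player controls it
        int troops  = 0;                 // a count is usually enough; no per-troop objects
        std::vector<int> connections;    // ids of adjacent territories
    };

    struct Continent {
        std::string name;
        int bonusTroops;                 // awarded when one player holds every territory
        std::vector<int> territoryIds;   // purely a logical grouping
    };

    struct Map {
        std::vector<Territory> territories;
        std::vector<Continent> continents;
        std::vector<City>      cities;
    };
    ```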

    Read the article

  • How do games make money? What models do they use?

    - by cable729
    I'm trying to research the ways in which games make money. I want to know more about the models they use (free/premium, trial/subscription, free-to-play with micro-transactions, etc.). In addition, I want information on which models work for which games, what models are best for which age groups, etc. I've tried my best to find information, and Google hasn't turned anything up at all. I think I'll stop by my University's library and see if there's anything there. This may seem like a broad question, but I'm looking for links and titles of books, not typed-out answers.

    Read the article

  • Tetris Movement - Implementation

    - by James Brauman
    Hi gamedev, I'm developing a Tetris clone and working on the input at the moment. When I was prototyping, movement was triggered by releasing a directional key. However, in most Tetris games I've played the movement is a bit more complex. When a directional key is pressed, the shape moves one space in that direction. After a short interval, if the key is still held down, the shape starts moving in that direction continuously until the key is released. In the case of the down key being pressed, there is no pause between the initial movement and the subsequent continuous movement. I've come up with a solution, and it works well, but it's totally over-engineered. Hey, at least I can recognize when things are getting silly, right? :) public class TetrisMover { List<Keys> registeredKeys; Dictionary<Keys, TimeSpan> continuousPressedTime; Dictionary<Keys, TimeSpan> totalPressedTime; Dictionary<Keys, TimeSpan> initialIntervals; Dictionary<Keys, TimeSpan> continousIntervals; Dictionary<Keys, Action> keyActions; Dictionary<Keys, bool> initialActionDone; KeyboardState currentKeyboardState; public TetrisMover() { *snip* } public void Update(GameTime gameTime) { currentKeyboardState = Keyboard.GetState(); foreach (Keys currentKey in registeredKeys) { if (currentKeyboardState.IsKeyUp(currentKey)) { continuousPressedTime[currentKey] = TimeSpan.Zero; totalPressedTime[currentKey] = TimeSpan.Zero; initialActionDone[currentKey] = false; } else { if (initialActionDone[currentKey] == false) { keyActions[currentKey](); initialActionDone[currentKey] = true; } totalPressedTime[currentKey] += gameTime.ElapsedGameTime; if (totalPressedTime[currentKey] >= initialIntervals[currentKey]) { continuousPressedTime[currentKey] += gameTime.ElapsedGameTime; if (continuousPressedTime[currentKey] >= continousIntervals[currentKey]) { keyActions[currentKey](); continuousPressedTime[currentKey] = TimeSpan.Zero; } } } } } public void RegisterKey(Keys key, TimeSpan initialInterval, TimeSpan continuousInterval, Action keyAction) { if (registeredKeys.Contains(key)) throw new InvalidOperationException( string.Format("The key {0} is already registered.", key)); registeredKeys.Add(key); continuousPressedTime.Add(key, TimeSpan.Zero); totalPressedTime.Add(key, TimeSpan.Zero); initialIntervals.Add(key, initialInterval); continousIntervals.Add(key, continuousInterval); keyActions.Add(key, keyAction); initialActionDone.Add(key, false); } public void UnregisterKey(Keys key) { *snip* } } I'm updating it every frame, and this is how I'm registering keys for movement: tetrisMover.RegisterKey( Keys.Left, keyHoldStartSpecialInterval, keyHoldMovementInterval, () => { Move(Direction.Left); }); tetrisMover.RegisterKey( Keys.Right, keyHoldStartSpecialInterval, keyHoldMovementInterval, () => { Move(Direction.Right); }); tetrisMover.RegisterKey( Keys.Down, TimeSpan.Zero, keyHoldMovementInterval, () => { PerformGravity(); }); Issues that this doesn't address: If both left and right are held down, the shape moves back and forth really quickly. If a directional key is held down and the turn finishes and the shape is replaced by a new one, the new one will move quickly in that direction instead of having the little pause it is supposed to have. I could fix the issues, but I think it will make the solution even worse. How would you implement this?
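    For comparison, the same behaviour (move once on press, wait, then auto-repeat, with zero delay for the down key) can be expressed as a small per-key state object instead of parallel dictionaries. A sketch in C++, with made-up names:

    ```cpp
    // Sketch of per-key "move once, wait, then repeat" (delayed auto-shift) handling.
    struct RepeatKey {
        float initialDelay;      // e.g. 0.25f seconds; 0.0f for the down key
        float repeatInterval;    // e.g. 0.05f seconds
        float heldTime = 0.0f;
        bool  wasDown  = false;

        // Call once per frame; returns true whenever the bound action should fire.
        bool Update(bool isDown, float dt) {
            if (!isDown) { wasDown = false; heldTime = 0.0f; return false; }
            if (!wasDown) { wasDown = true; heldTime = 0.0f; return true; }   // initial press
            heldTime += dt;
            if (heldTime >= initialDelay + repeatInterval) {
                heldTime = initialDelay;   // keep the delay "spent", repeat from here on
                return true;
            }
            return false;
        }
    };
    ```

    Because each key's state is isolated and visible, the two leftover issues become local decisions: ignore one direction when both left and right report a fire in the same frame, and reset every key's state when a new piece spawns.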

    Read the article

  • How to calculate vertex normals for a mesh in Java in an OpenGL ES application?

    - by alan mc
    Can someone point me to Java code (in Java, not C or C++) that calculates all the normals for all the vertices of a mesh for an OpenGL ES application? I need this for lighting. Let's say I have a cube with the following vertices and indices: float vertices[] = { -width, -height, -depth, // 0 width, -height, -depth, // 1 width, height, -depth, // 2 -width, height, -depth, // 3 -width, -height, depth, // 4 width, -height, depth, // 5 width, height, depth, // 6 -width, height, depth // 7 }; short indices[] = { 0, 2, 1, 0, 3, 2, 1,2,6, 6,5,1, 4,5,6, 6,7,4, 2,3,6, 6,3,7, 0,7,3, 0,4,7, 0,1,5, 0,5,4 }; In the above specific example, how many normals do we need?

    Read the article

  • Struct Method for Loops Problem

    - by Annalyne
    I have tried numerous times how to make a do-while loop using the float constructor for my code but it seems it does not work properly as I wanted. For summary, I am making a TBRPG in C++ and I encountered few problems. But before that, let me post my code. #include <iostream> #include <string> #include <ctime> #include <cstdlib> using namespace std; int char_level = 1; //the starting level of the character. string town; //town string town_name; //the name of the town the character is in. string charname; //holds the character's name upon the start of the game int gems = 0; //holds the value of the games the character has. const int MAX_ITEMS = 15; //max items the character can carry string inventory [MAX_ITEMS]; //the inventory of the character in game int itemnum = 0; //number of items that the character has. bool GameOver = false; //boolean intended for the game over scr. string monsterTroop [] = {"Slime", "Zombie", "Imp", "Sahaguin, Hounds, Vampire"}; //monster name float monsterTroopHealth [] = {5.0f, 10.0f, 15.0f, 20.0f, 25.0f}; // the health of the monsters int monLifeBox; //life carrier of the game's enemy troops int enemNumber; //enemy number //inventory[itemnum++] = "Sword"; class RPG_Game_Enemy { public: void enemyAppear () { srand(time(0)); enemNumber = 1+(rand()%3); if (enemNumber == 1) cout << monsterTroop[1]; //monster troop 1 else if (enemNumber == 2) cout << monsterTroop[2]; //monster troop 2 else if (enemNumber == 3) cout << monsterTroop[3]; //monster troop 3 else if (enemNumber == 4) cout << monsterTroop[4]; //monster troop 4 } void enemDefeat () { cout << "The foe has been defeated. You are victorious." << endl; } void enemyDies() { //if the enemy dies: //collapse declaration cout << "The foe vanished and you are victorious!" << endl; } }; class RPG_Scene_Battle { public: RPG_Scene_Battle(float ini_health) : health (ini_health){}; float getHealth() { return health; } void setHealth(float rpg_val){ health = rpg_val;}; private: float health; }; //---------------------------------------------------------------// // Conduct Damage for the Scene Battle's Damage //---------------------------------------------------------------// float conductDamage(RPG_Scene_Battle rpg_tr, float damage) { rpg_tr.setHealth(rpg_tr.getHealth() - damage); return rpg_tr.getHealth(); }; // ------------------------------------------------------------- // void RPG_Scene_DisplayItem () { cout << "Items: \n"; for (int i=0; i < itemnum; ++i) cout << inventory[i] <<endl; }; In this code I have so far, the problem I have is the battle scene. For example, the player battles a Ghost with 10 HP, when I use a do while loop to subtract the HP of the character and the enemy, it only deducts once in the do while. Some people said I should use a struct, but I have no idea how to make it. Is there a way someone can display a code how to implement it on my game? Edit: I made the do-while by far like this: do RPG_Scene_Battle (player, 20.0f); RPG_Scene_Battle (enemy, 10.0f); cout << "Battle starts!" <<endl; cout << "You used a blade skill and deducted 2 hit points to the enemy!" 
conductDamage (enemy, 2.0f); while (enemy!=0) also, I made something like this: #include <iostream> using namespace std; int gems = 0; class Entity { public: Entity(float startingHealth) : health(startingHealth){}; // initialize health float getHealth(){return health;} void setHealth(float value){ health = value;}; private: float health; }; float subtractHealthFrom(Entity& ent, float damage) { ent.setHealth(ent.getHealth() - damage); return ent.getHealth(); }; int main () { Entity character(10.0f); Entity enemy(10.0f); cout << "Hero Life: "; cout << subtractHealthFrom(character, 2.0f) <<endl; cout << "Monster Life: "; cout << subtractHealthFrom(enemy, 2.0f) <<endl; cout << "Hero Life: "; cout << subtractHealthFrom(character, 2.0f) <<endl; cout << "Monster Life: "; cout << subtractHealthFrom(enemy, 2.0f) <<endl; }; Struct method, they say, should solve this problem. How can I continously deduct hp from the enemy? Whenever I deduct something, it would return to its original value -_-
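    The second snippet already contains the important part: subtractHealthFrom takes an Entity& (a reference), so it modifies the caller's object, whereas the earlier conductDamage takes its RPG_Scene_Battle parameter by value and therefore only ever changes a temporary copy, which is why the health appears to snap back. Building on the Entity class from that second snippet, a minimal loop sketch:

    ```cpp
    // Minimal battle loop using the Entity class and subtractHealthFrom from the
    // question above. Because both entities are passed by reference, each call
    // really lowers their stored health, so the loop eventually terminates.
    #include <iostream>

    int main() {
        Entity character(10.0f);
        Entity enemy(10.0f);

        while (enemy.getHealth() > 0.0f && character.getHealth() > 0.0f) {
            subtractHealthFrom(enemy, 2.0f);         // hero attacks
            std::cout << "Monster life: " << enemy.getHealth() << std::endl;
            if (enemy.getHealth() <= 0.0f) break;    // a defeated enemy does not strike back

            subtractHealthFrom(character, 2.0f);     // enemy attacks
            std::cout << "Hero life: " << character.getHealth() << std::endl;
        }
        std::cout << (character.getHealth() > 0.0f ? "You win!" : "You lose!") << std::endl;
    }
    ```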

    Read the article

  • OpenGL VertexBuffer won't render in GLFW3

    - by sm81095
    So I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it yet and how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f()), and such, I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

    Read the article

  • tic tac toe game ai as3

    - by David Jones
    I'm looking into creating a simple tic tac toe/noughts and crosses game in ActionScript 3 and am trying to understand the ideas behind the AI used in a game like this. I've seen some simplistic examples online, but from what I've read a game tree or something like minimax is the best way to go about this. Can anyone help explain or reference any good examples of this? I've seen that there is a library called as3ds - data structures for game developers - which has a number of classes that might help tie this together. Any info/examples or help is much appreciated
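    For a concrete starting point, here is a bare-bones minimax for noughts and crosses, sketched in C++ (the structure carries over to AS3 directly). The board is a 9-element array with 0 for empty, +1 for the AI and -1 for the human; the AI picks the move whose resulting position scores best under perfect play:

    ```cpp
    // Minimal minimax sketch for tic-tac-toe; no pruning, no depth weighting.
    #include <array>

    using Board = std::array<int, 9>;   // 0 empty, +1 AI, -1 human

    int winner(const Board& b) {        // +1, -1, or 0 (no winner)
        static const int lines[8][3] = {{0,1,2},{3,4,5},{6,7,8},{0,3,6},
                                        {1,4,7},{2,5,8},{0,4,8},{2,4,6}};
        for (const auto& l : lines)
            if (b[l[0]] != 0 && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                return b[l[0]];
        return 0;
    }

    int minimax(Board& b, int player) { // score with 'player' (+1 or -1) to move
        int w = winner(b);
        if (w != 0) return w;
        int best = -2 * player;         // worse than any reachable score for this player
        bool moved = false;
        for (int i = 0; i < 9; ++i) {
            if (b[i] != 0) continue;
            moved = true;
            b[i] = player;
            int score = minimax(b, -player);
            b[i] = 0;
            if ((player == 1 && score > best) || (player == -1 && score < best))
                best = score;
        }
        return moved ? best : 0;        // board full, nobody won: draw
    }

    int bestMove(Board& b) {            // best square for the AI (+1) to play
        int move = -1, best = -2;
        for (int i = 0; i < 9; ++i) {
            if (b[i] != 0) continue;
            b[i] = 1;
            int score = minimax(b, -1);
            b[i] = 0;
            if (score > best) { best = score; move = i; }
        }
        return move;
    }
    ```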

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and the extent. Simply transforming all occluded vertices to the depths as defined by the 'hull' is fairly effective, and has good performance, but it suffers the problem of not preserving the features of the mesh and requires extensive culling to avoid false-positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need to have complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but do not know where to start doing this. Can anyone suggest a suitable filter or algorithm for generating deformers? Or to put it another way 'compressing' a depth map? (*Push because its fitting to a convex 'bulgy' humanoid so transforms are likely to be 'spherical' from the POV of the surface.)

    Read the article

  • XNA: after mouse click, CPU usage goes to 100%

    - by kosnkov
    Hi, I have the following code, and it is enough to just click on the blue window for the CPU to go to 100% for at least a minute, even with my i7 with 4 cores. I checked even with an empty project and it is the same! public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; private Texture2D cursorTex; private Vector2 cursorPos; GraphicsDevice device; float xPosition; float yPosition; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { Viewport vp = GraphicsDevice.Viewport; xPosition = vp.X + (vp.Width / 2); yPosition = vp.Y + (vp.Height / 2); device = graphics.GraphicsDevice; base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); cursorTex = Content.Load<Texture2D>("strzalka"); } protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); spriteBatch.Draw(cursorTex, cursorPos, Color.White); spriteBatch.End(); base.Draw(gameTime); } }

    Read the article

  • Optimal sprite size for rotations

    - by Panda Pajama
    I am making a sprite-based game, and I have a bunch of images that I get in a ridiculously large resolution. I scale them to the desired sprite size (for example 64x64 pixels) before converting them to a game resource, so when I draw my sprite inside the game, I don't have to scale it. However, if I rotate this small sprite inside the game (engine agnostically), some destination pixels will get interpolated, and the sprite will look smudged. This is of course dependent on the rotation angle as well as the interpolation algorithm, but regardless, there is not enough data to correctly sample a specific destination pixel. So there are two solutions I can think of. The first is to use the original huge image, rotate it to the desired angles, and then downscale all the resulting variations, and put them in an atlas, which has the advantage of being quite simple to implement, but naively consumes twice as much sprite space for each rotation (each rotation must be inscribed in a circle whose diameter is the diagonal of the original sprite's rectangle, whose area is twice that of the original rectangle, supposing square sprites). It also has the disadvantage of only having a predefined set of rotations available, which may be okay or not depending on the game. So the other choice would be to store a larger image, and rotate and downscale while rendering, which leads to my question. What is the optimal size for this sprite? Optimal meaning that a larger image will have no effect on the resulting image. This definitely depends on the image size and the number of desired rotations, without data loss down to 1/256, which is the minimum representable color difference. I am looking for a theoretical general answer to this problem, because trying a bunch of sizes may be okay, but is far from optimal.

    Read the article

  • Procedural object generation and unique identification

    - by 2080
    My question relates to procedural content generation and the data management of the emerging objects in a database. I assume a networked game with a server-client model. Unspecified objects in the game world are generated while the game is running with procedural algorithms (for example Perlin noise). The players (/clients) can modify the properties of these objects, but have to notify the server of these changes. How could this communication address unique objects, so that both the server and the client know which object they are speaking of? Not only can the inner properties of the objects differ, but also visible ones, such as the position. When the player wants to select one of these objects, the game has to find out its id. Does anyone know which methods or algorithms can accomplish that?
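    One common way to get shared identifiers without any extra negotiation is to derive them deterministically from the generation inputs, so the server and every client compute the same id for the same procedurally generated object. A sketch, where every name is an assumption:

    ```cpp
    // Sketch: a deterministic 64-bit id built from the world seed, the chunk the
    // object was generated in, and its index within that chunk. Any stable hash
    // shared by server and client works; this one is FNV-1a style.
    #include <cstdint>

    std::uint64_t objectId(std::uint64_t worldSeed,
                           std::int32_t chunkX, std::int32_t chunkY,
                           std::uint32_t indexInChunk)
    {
        std::uint64_t h = 1469598103934665603ULL ^ worldSeed;
        auto mix = [&h](std::uint64_t v) { h ^= v; h *= 1099511628211ULL; };
        mix(static_cast<std::uint32_t>(chunkX));
        mix(static_cast<std::uint32_t>(chunkY));
        mix(indexInChunk);
        return h;
    }
    ```

    Client messages can then reference objects by this id, and the database only needs rows for objects a player has actually modified; anything untouched can always be regenerated from the seed.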

    Read the article

  • cocos2d event handler not fired when reentering scene

    - by Adam Freund
    I am encountering a very strange problem with my cocos2d app. I add a sprite to the page and have an event handler linked to it which replaces the scene with another scene. On that page I have another button to take me back to the original scene. When I am back on the original scene, the eventHandler doesn't get fired when I click on the sprite. Below is the relevant code. Thanks for any help! CCMenuItemImage *backBtnImg = [CCMenuItemImage itemWithNormalImage:@"btn_back.png" selectedImage:@"btn_back_pressed.png" target:self selector:@selector(backButtonTapped:)]; backBtnImg.position = ccp(45, 286); CCMenu *backBtn = [CCMenu menuWithItems:backBtnImg, nil]; backBtn.position = CGPointZero; [self addChild:backBtn]; EventHandler method (doesn't get called when the scene is re-entered). (void)backButtonTapped:(id)sender { NSLog(@"backButtonTapped\n"); CCMenuItemImage *backButton = (CCMenuItemImage *)sender; [backButton setNormalImage:[CCSprite spriteWithFile:@"btn_back_pressed.png"]]; [[CCDirector sharedDirector] replaceScene:[CCTransitionFade transitionWithDuration:.25 scene:[MenuView scene] withColor:ccBLACK]]; }

    Read the article

  • OpenGL quake 3 shader file for objects (for trees)

    - by mlodziaszka
    I decided to add a few trees to my game. I already have a Quake 3 model loader (MD3); it's for characters, and the method for texture drawing is stored in a *.ini file. I found a package of trees in MD3 and I have no problem loading a model on its own, but there is a *.shader file and I have no idea how to load it to draw the textures properly. Tree pack: http://www.custommapmakers.org/wiki/index.php/Models:GR_Trees_set I do not have to use exactly this format, I can write another loader, but trees in *.obj or *.3ds look even harder.

    Read the article

  • Path tables or real time searching for AI?

    - by SirYakalot
    What is the more common practice in commercial games: path lookup tables or real-time searches? I've read that in many games path lookup tables are pre-calculated and baked into each map, so to speak, and then steering behaviour is used to handle dynamic obstacles. Or is it better practice to use optimised hierarchical A* searches? I understand the pros and cons of each; I'm just curious as to what is most often used in the industry.

    Read the article

  • Import FBX with multiple meshes into UDK

    - by Tom
    I used this script to generate a few buildings that I was hoping to import into UDK. Each building is made of about 1000 separate objects. When I export a building as FBX and import the file into UDK, it breaks it up into its individual objects again, so I was wondering how I would avoid this: whether there is a tool to combine all of the objects into one mesh automatically before exporting, or whether I could prevent UDK from breaking them up on import.

    Read the article

  • Use PathModifier or MoveModifier for Tower of Defense Game

    - by Siddharth
    In my game I want to move enemies along a fixed path, so I have established a manual grid structure for that purpose rather than using a tile map. The game contains multiple levels, the path is different for each level, and multiple fixed paths exist for each level. So my question is: should I use a MoveModifier or a PathModifier for my game? Also, do I have to use WayPoints or not? You are all free to ask for further detail. Please help me decide what to do.

    Read the article

  • How do you produce a "mucus spreading" effect in a 2D environment?

    - by nathan
    Here is an example of such a mucus spreading. The substance spreads around the source (in this example, the source would be the main alien building). The game is StarCraft; the purple substance is called creep. How would this kind of substance spreading be achieved in a top-down 2D environment? By recalculating the substance progression and regenerating the effect on the fly each frame, by using a large collection of tiles, or by something else?
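    A tile-based take on this is common: keep a grid of creep values and, every so often, let empty tiles adjacent to covered tiles fill in, radiating outward from the source; rendering then simply draws (or blends) a creep tile wherever the grid says so. A small sketch, with all names assumed:

    ```cpp
    // Sketch: ring-by-ring creep growth on a tile grid. Each call to grow(), any
    // empty tile next to a creeped tile and within range of the source is covered.
    #include <algorithm>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    struct CreepGrid {
        int width, height;
        std::vector<std::uint8_t> creep;              // 0 = bare ground, 1 = covered

        CreepGrid(int w, int h) : width(w), height(h), creep(w * h, 0) {}

        void grow(int sx, int sy, int maxRadius) {
            creep[sy * width + sx] = 1;               // the source building itself
            std::vector<std::uint8_t> next = creep;
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x) {
                    if (creep[y * width + x]) continue;
                    if (std::max(std::abs(x - sx), std::abs(y - sy)) > maxRadius) continue;
                    bool nextToCreep =
                        (x > 0          && creep[y * width + x - 1]) ||
                        (x < width - 1  && creep[y * width + x + 1]) ||
                        (y > 0          && creep[(y - 1) * width + x]) ||
                        (y < height - 1 && creep[(y + 1) * width + x]);
                    if (nextToCreep) next[y * width + x] = 1;
                }
            creep.swap(next);
        }
    };
    ```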

    Read the article
