Search Results

Search found 32277 results on 1292 pages for 'module development'.


  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I pack multiple images into a single texture (an atlas) for a 3D cube, how would I go about re-using each vertex (having 8 total vs. 24)? With a single buffer of 8 vertices, I don't see how I'd assign the UV values properly. Any help on that? I know it's not terribly clear, but I figured it was a simple question. The 2D case is pretty easy: the next face's coordinates would be the same as the first (0,0 and 0,1 respectively). The 3D version, however, has me quite befuddled.
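
    For reference, here is a minimal sketch (plain C++; the atlas layout is an invented placeholder) of why atlas-textured cubes usually end up with 24 vertices: the same corner position needs a different UV pair for each of the three faces that share it, so positions can be reused but full vertices cannot.

        #include <array>

        struct Vertex { float px, py, pz; float u, v; };

        // Build one face of the cube. 'pos' holds the four shared corner
        // positions; the UVs are remapped into a hypothetical 0.25 x 0.25
        // atlas cell selected by integer indices (cellU, cellV).
        // Six calls -> 24 vertices, indexed per face.
        std::array<Vertex, 4> makeFace(const float pos[4][3], float cellU, float cellV)
        {
            const float uv[4][2] = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
            std::array<Vertex, 4> face{};
            for (int i = 0; i < 4; ++i) {
                face[i] = { pos[i][0], pos[i][1], pos[i][2],
                            (cellU + uv[i][0]) * 0.25f,
                            (cellV + uv[i][1]) * 0.25f };
            }
            return face;
        }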

    Read the article

  • DX10 sprite and pixel shader

    - by Alex Farber
    I am using ID3DX10Sprite to draw a 2D image on the screen. The 3D scene contains only one textured sprite, placed over the whole window area. The render method looks like this:

        m_pDevice->ClearRenderTargetView(...);
        m_pSprite->Begin(D3DX10_SPRITE_SORT_TEXTURE);
        m_pSprite->DrawSpritesImmediate(&m_SpriteDefinition, 1, 0, 0);
        m_pSprite->End();

    Now I want to apply some transformations to the sprite texture in a shader. Currently the program doesn't work with a shader. How is it possible to add a pixel shader to a program with this structure? Inside the shader, I need to set all color channels equal to the red channel and multiply the pixel values by some coefficient. Something like this:

        float4 TexturePixelShader(PixelInputType input) : SV_Target
        {
            float4 textureColor;
            textureColor = shaderTexture.Sample(SampleType, input.tex);
            textureColor.x = textureColor.x * coefficient;
            textureColor.y = textureColor.x;
            textureColor.z = textureColor.x;
            return textureColor;
        }

    Read the article

  • Making retro games: Any well-known game architectures?

    - by A.Quiroga
    I'm trying to do a remake of Snowbros. I'm building it with libgdx, but at every step I have to guess how things were done originally, for example the physics of the jump and the collisions. The originals seem to be timed perfectly, whereas I use deltaTime to approximate the values in my game. I suspect the old games tied their timing to a fixed frame rate or the processor's clock, but I don't know. So the simple question: are there any resources on how these games were programmed? Or on the basic ideas that were repeated from game to game in old-style retro titles?
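
    As an aside, the usual modern answer to that frame-perfect feel is a fixed logic timestep with rendering decoupled from it, in the spirit of Glenn Fiedler's "Fix Your Timestep" article. A minimal C++ sketch (updateGame/renderGame are hypothetical hooks):

        #include <chrono>

        void updateGame(double dt);     // hypothetical: advance the simulation one step
        void renderGame(double alpha);  // hypothetical: draw, interpolating by alpha

        // Logic always advances in exact 1/60 s steps, the way frame-locked
        // arcade games behaved, no matter how fast rendering runs.
        void runLoop()
        {
            using clock = std::chrono::steady_clock;
            const double dt = 1.0 / 60.0;
            double accumulator = 0.0;
            auto previous = clock::now();

            for (;;) {
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - previous).count();
                previous = now;

                while (accumulator >= dt) {   // catch up in fixed steps
                    updateGame(dt);
                    accumulator -= dt;
                }
                renderGame(accumulator / dt); // leftover fraction for interpolation
            }
        }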

    Read the article

  • How can I get my meshes to work with Bullet Physics?

    - by Molmasepic
    The problem is that I'm trying to use my meshes with Bullet Physics for the collision part of my game. When I attempted this with my GLM (the model-loading library by Nate Robins) model, I got a segmentation fault in the debugger, so I figured Bullet doesn't like the model's coordinate data. If I use Blender to export my model as a collision file, what file type should I use? I have heard of a .bullet exporter, but I don't know how to integrate that Python script into my Blender 2.5 installation.
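
    For reference, a minimal sketch of feeding raw triangle data to Bullet directly (C++, Bullet 2.x API; `vertices`/`indices` stand in for whatever your loader returns), which sidesteps the exporter question entirely for static level geometry:

        #include <btBulletDynamicsCommon.h>
        #include <vector>

        // Build a static collision body from an indexed triangle list.
        // 'vertices' is xyz-interleaved, 'indices' holds three entries per triangle.
        btRigidBody* makeStaticMeshBody(const std::vector<float>& vertices,
                                        const std::vector<int>& indices)
        {
            btTriangleMesh* mesh = new btTriangleMesh();
            for (size_t i = 0; i + 2 < indices.size(); i += 3) {
                const float* a = &vertices[indices[i]     * 3];
                const float* b = &vertices[indices[i + 1] * 3];
                const float* c = &vertices[indices[i + 2] * 3];
                mesh->addTriangle(btVector3(a[0], a[1], a[2]),
                                  btVector3(b[0], b[1], b[2]),
                                  btVector3(c[0], c[1], c[2]));
            }
            // true = build a BVH for fast collision queries; static geometry only.
            btBvhTriangleMeshShape* shape = new btBvhTriangleMeshShape(mesh, true);
            btRigidBody::btRigidBodyConstructionInfo info(0.0f, nullptr, shape); // mass 0 = static
            return new btRigidBody(info);
        }

    For dynamic objects a convex hull or a compound of primitives is the usual choice instead, since btBvhTriangleMeshShape is static-only.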

    Read the article

  • What are the different ways to texture a terrain?

    - by ApocKalipsS
    I'm working with XNA on a 3D game, and I'm trying to build a proper, nice-looking environment. I followed a tutorial to create a terrain from a heightmap; to texture it, I just apply a grass texture and tile it a number of times. What I want is really realistic texturing that can also be generated automatically (for example, if I generate a terrain from Perlin noise and then texture it). I've already read about multi-texturing and loading a map file with different colors for different textures, but I don't think that's very efficient; for instance, on cliffs or very steep areas it tiles the texture badly, since the map is a view from the top. I also don't know how I'd draw roads or dirt paths that way. I hope you understood me despite my English! If you didn't, here is the short version: how do I texture a randomly generated terrain? :) Thank you for your answers!
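
    One common automatic approach is to derive blend weights per vertex from slope and height and feed them to a multi-texture shader. A hedged sketch of the weight computation (plain C++, not XNA-specific; the thresholds are arbitrary tuning values):

        #include <algorithm>
        #include <cmath>

        struct SplatWeights { float grass, rock, snow; };

        // Classic slope/height splatting: flat and low -> grass,
        // steep -> rock, high -> snow.
        SplatWeights weightsFor(float height, float normalY /* 1 = flat, 0 = vertical */)
        {
            float slope = 1.0f - normalY;                        // 0 flat .. 1 cliff
            float rock  = std::clamp((slope - 0.3f) / 0.3f, 0.0f, 1.0f);
            float snow  = std::clamp((height - 80.0f) / 20.0f, 0.0f, 1.0f) * (1.0f - rock);
            float grass = std::max(0.0f, 1.0f - rock - snow);
            return { grass, rock, snow };
        }

    The same weights can also be computed per-pixel in the shader from the interpolated normal, which avoids the top-down-map problem on cliffs.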

    Read the article

  • SSAO implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world-space coordinates for the shading calculations. When filling the G-buffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation I render the scene as a full-screen quad and save an occlusion parameter in a texture. (Vertex positions in world space: link. Normals in world space: link.) SSAO implementation:

        subroutine (RenderPassType)
        void ssao()
        {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);
            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);

            float occlusion = 0.0;
            float radius = 4.0;
            for (int i = 0; i < kernelSize; ++i) {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;
                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;
                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }
            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results: camera looking towards -z in world space: link. Camera looking towards +z in world space: link. I wonder whether it is even possible to use world coordinates in the above code. When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas where there is occlusion, so the ground should have white areas only near the object.

    Read the article

  • Is it a good idea to use a formula to balance a game's complexity, in order to keep players in constant flow?

    - by user1107412
    I've read a lot about flow theory and its applications to video games, and an idea has stuck in my mind. If weight values are applied to different parameters of a game level (i.e. the size of the level, the number of enemies, their overall strength, the variance in their behavior, etc.), then it should be technically possible to compute an overall complexity score for each level in the game. If a constant ratio of complexity increase were empirically defined, for instance 1.3333, or, say, the golden ratio, would it be a good idea to arrange the levels in such an order that the overall complexity tends to increase by that ratio? Has anybody tried it?
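
    To make the proposal concrete, a minimal sketch (C++; the weights and parameters are invented placeholders) of the scoring side, with the ratio used as a design target for each successive level:

        #include <cmath>

        struct Level { float size, enemyCount, enemyStrength, behaviorVariance; };

        // Weighted linear score; the weights would have to be tuned empirically.
        float complexity(const Level& l)
        {
            return 0.2f * l.size + 0.4f * l.enemyCount
                 + 0.3f * l.enemyStrength + 0.1f * l.behaviorVariance;
        }

        // Target complexity for level n under a constant growth ratio r,
        // e.g. r = 1.3333f or the golden ratio 1.618f: c(n) = c0 * r^n.
        float targetComplexity(float c0, float r, int n)
        {
            return c0 * std::pow(r, static_cast<float>(n));
        }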

    Read the article

  • Depth buffer values reset on shader change?

    - by bobobobo
    I have two different shaders, and when I change the shader (glUseProgram), the depth information seems to be lost: everything drawn with the second shader appears completely on top of anything drawn with the first shader. If I switch the order of shader use/drawing, it's the same; the last-drawn object always appears on top of the first-drawn object if there is a shader change between the two objects, even when the last-drawn object is farther away.
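
    For what it's worth, a program switch by itself does not touch the depth buffer in OpenGL, so the usual suspects are state changes made elsewhere. A minimal checklist sketch (desktop GL, C++):

        #include <GL/gl.h>

        // Depth testing is global state, independent of the bound program.
        // If objects from the second program always win, verify before each draw:
        void assertDepthState()
        {
            glEnable(GL_DEPTH_TEST);   // the test must be on for every draw call
            glDepthMask(GL_TRUE);      // depth writes must not have been disabled
            glDepthFunc(GL_LESS);      // and the comparison left at its default
            // Also make sure the depth buffer is cleared once per frame, not per shader:
            //   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // and that the second program does not write gl_FragDepth unintentionally.
        }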

    Read the article

  • OpenGL lighting with dynamic geometry

    - by Tank
    I'm currently thinking hard about how to implement lighting in my game. The geometry is quite dynamic (a fixed 3D grid with custom geometry in each cell) and needs some light to get more depth and in general look nicer. A scene in my game always contains sunlight and local light sources like lamps (point lights). One can move underground, so sunlight must be able to illuminate as far as it can reach. Here's a render of a typical situation: the lamp is positioned behind the wall at the top, and the hollow cube has a hole in the back, so that light can shine through. (I don't want soft shadows, this is just for illustration.) While spending a whole day searching through Google, I stumbled on keywords like deferred rendering, forward rendering, ambient occlusion, screen-space ambient occlusion, etc. Some articles/tutorials even refer to "normal shading", but to be honest I don't really know how to do even simple shading. OpenGL of course has a fixed-function lighting pipeline with 8 possible light sources; however, it just illuminates all vertices without checking for occluding geometry. I'd be very thankful if someone could give me some pointers in the right direction. I don't need complete solutions or similar, just good sources with information understandable to someone with nearly no lighting experience (preferably for OpenGL).
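
    As a starting point for the "simple shading" part: the basic per-fragment diffuse (Lambert) term is only a few lines. A hedged C++ sketch of the math (the same expression would normally live in a fragment shader):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

        // Lambert diffuse: brightness is the cosine of the angle between the
        // surface normal and the direction towards the light, clamped at 0.
        // This lights every surface facing the light; occlusion by other
        // geometry needs an extra technique such as shadow mapping.
        float lambert(Vec3 normal, Vec3 fragPos, Vec3 lightPos)
        {
            Vec3 toLight = normalize({lightPos.x - fragPos.x,
                                      lightPos.y - fragPos.y,
                                      lightPos.z - fragPos.z});
            return std::fmax(0.0f, dot(normalize(normal), toLight));
        }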

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
    Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from a simple "deal 25% extra damage" to more complicated things like "deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant. I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of an abstract Buff base class, where only the relevant events are overridden. Each character could then hold a vector of the buffs currently applied. Does this solution make sense? I can certainly see dozens of event types being necessary; it feels like making a new subclass for each buff is overkill, and it doesn't seem to allow for any buff "interactions". That is, suppose I wanted to cap damage boosts so that even if you had 10 different buffs which all give 25% extra damage, you would only do 100% extra instead of 250%. There are more complicated situations that ideally I could control, too; I'm sure everyone can come up with examples of how sophisticated buffs can interact with each other in ways that, as a game developer, I may not want. As a relatively inexperienced C++ programmer (I have generally used C in embedded systems), I feel my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
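
    For illustration, a minimal sketch along the lines described (C++; all names are invented): event hooks on a base class, with interactions like caps applied at query time in one place rather than inside any individual buff:

        #include <algorithm>
        #include <memory>
        #include <vector>

        class Buff {
        public:
            virtual ~Buff() = default;
            // Hooks default to no-ops; subclasses override only what they need.
            virtual void  onTurnStart() {}
            virtual float damageBonus() const { return 0.0f; }           // 0.25 = +25%
            virtual int   onReceiveDamage(int amount) { return amount; } // may modify it
        };

        class Character {
            std::vector<std::unique_ptr<Buff>> buffs;
        public:
            void addBuff(std::unique_ptr<Buff> b) { buffs.push_back(std::move(b)); }

            // "Damage bonuses cap at +100%" lives here, once, instead of
            // being re-implemented inside every buff subclass.
            float totalDamageBonus() const {
                float sum = 0.0f;
                for (const auto& b : buffs) sum += b->damageBonus();
                return std::min(sum, 1.0f);
            }

            int applyIncomingDamage(int amount) {
                for (const auto& b : buffs) amount = b->onReceiveDamage(amount);
                return amount;
            }
        };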

    Read the article

  • How can I fix latency problems for a car game?

    - by Freddy
    Basically I'm trying to make an online car racing game for iOS using Game Center real-time multiplayer. I have set up a timer that sends data every 0.02 seconds to the other player with the current position and current angle. However, sometimes it takes longer than 0.02 seconds for the packet to be sent and received. For that case I have implemented a method that calculates what the next position should be, based on the last position and angle, when no new position has been received. However, when the data then arrives, say, 0.04 seconds late, the car changes back to the received (older) position, which results in the car "jumping" back and lagging. And if I just keep ignoring the data, I never take any input from the other user. Is there any way to prevent this? I suppose this needs to be fixed with some client-side algorithm.
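
    The standard client-side fix (hedged sketch, C++; vx/vy are hypothetical velocity values assumed to travel in the packet) is not to snap to late authoritative data, but to keep extrapolating and blend the displayed state towards the corrected one over a few frames:

        #include <algorithm>

        struct CarState { float x, y, angle; };

        CarState displayed; // what we draw every frame
        CarState target;    // latest network state, extrapolated forward

        // On receipt, extrapolate the (already old) packet to "now" so we
        // chase a current target instead of snapping back in time.
        void onPacket(const CarState& received, float latencySeconds,
                      float vx, float vy)
        {
            target = received;
            target.x += vx * latencySeconds;
            target.y += vy * latencySeconds;
        }

        // Each frame, remove only a fraction of the error; the car drifts
        // onto the corrected path with no visible jump. (Angle blending
        // would also need the usual wrap-around handling.)
        void smoothUpdate(float dt)
        {
            const float rate = 10.0f;            // tuning: higher = snappier
            float t = std::min(1.0f, rate * dt);
            displayed.x     += (target.x     - displayed.x)     * t;
            displayed.y     += (target.y     - displayed.y)     * t;
            displayed.angle += (target.angle - displayed.angle) * t;
        }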

    Read the article

  • How to add an isometric (RTS-like) perspective and scrolling in Unity?

    - by keinabel
    I want to develop an RTS/simulation game. I therefore need a camera perspective like the one in Anno 1602 through Anno 1404, as well as camera scrolling. I think this is called an isometric perspective (and scrolling). But I honestly have no clue how to achieve it. I tried some things I found on Google, but they were not satisfying. Can you give me some good tutorials or advice on this? Thanks in advance.

    Read the article

  • Updating sprite location with controls

    - by iQue
    So I've got a character in a 2D game using SurfaceView that I want to be able to move using a button (eventually a joystick), but my game crashes as soon as I try to move my sprite. This is my onTouch method for my steering button (I'm using MotionEvent.ACTION_DOWN to check whether the user is pressing the screen):

        public void handleActionDown(int eventX, int eventY) {
            if (eventX >= (x - bitmap.getWidth() / 2) && (eventX <= (x + bitmap.getWidth() / 2))) {
                if (eventY >= (y - bitmap.getHeight() / 2) && (y <= (y + bitmap.getHeight() / 2))) {
                    setTouched(true);
                } else {
                    setTouched(false);
                }
            } else {
                setTouched(false);
            }
        }

    If I put this in my update method, the sprite moves on its own just fine:

        public void update() {
            x += (speed.getXv() * speed.getxDirection());
            y += (speed.getYv() * speed.getyDirection());
        }

    But as soon as I guard it with the steering check, the game crashes:

        public void update() {
            if (steering.isTouched()) {
                x += (speed.getXv() * speed.getxDirection());
                y += (speed.getYv() * speed.getyDirection());
            }
        }

    Does anyone know why this is, or how to fix it? I cannot figure it out.

    Read the article

  • Falling CCSprites

    - by Coder404
    I'm trying to make CCSprites fall from the top of the screen. I'm planning to use a touch delegate to determine when they fall. How could I make CCSprites fall from the screen, given code like this:

        -(void)addTarget {
            Monster *target = nil;
            if ((arc4random() % 2) == 0) {
                target = [WeakAndFastMonster monster];
            } else {
                target = [StrongAndSlowMonster monster];
            }

            // Determine where to spawn the target along the Y axis
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            int minY = target.contentSize.height / 2;
            int maxY = winSize.height - target.contentSize.height / 2;
            int rangeY = maxY - minY;
            int actualY = (arc4random() % rangeY) + minY;

            // Create the target slightly off-screen along the right edge,
            // and along a random position along the Y axis as calculated above
            target.position = ccp(winSize.width + (target.contentSize.width / 2), actualY);
            [self addChild:target z:1];

            // Determine speed of the target
            int minDuration = target.minMoveDuration; // 2.0;
            int maxDuration = target.maxMoveDuration; // 4.0;
            int rangeDuration = maxDuration - minDuration;
            int actualDuration = (arc4random() % rangeDuration) + minDuration;

            // Create the actions
            id actionMove = [CCMoveTo actionWithDuration:actualDuration
                                                position:ccp(-target.contentSize.width / 2, actualY)];
            id actionMoveDone = [CCCallFuncN actionWithTarget:self
                                                     selector:@selector(spriteMoveFinished:)];
            [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];

            // Add to targets array
            target.tag = 1;
            [_targets addObject:target];
        }

    This code makes CCSprites move from the right side of the screen to the left. How could I change it to make the CCSprites move from the top of the screen to the bottom?
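
    For reference, the change amounts to swapping the two axes in the spawn and destination math. A language-neutral sketch of just that math (written as plain C++ for brevity; the cocos2d calls stay the same):

        #include <cstdlib>

        struct Spawn { float startX, startY, endX, endY; };

        // Mirror of the original addTarget math with the axes swapped:
        // random X along the top edge, CCMoveTo destination below the bottom.
        Spawn topToBottomSpawn(float winW, float winH, float targetW, float targetH)
        {
            int minX = static_cast<int>(targetW / 2);
            int maxX = static_cast<int>(winW - targetW / 2);
            float x  = static_cast<float>(std::rand() % (maxX - minX) + minX);
            return { x, winH + targetH / 2,   // spawn just above the top edge
                     x, -targetH / 2 };       // destination just below the bottom
        }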

    Read the article

  • Best frameworks/libraries/engines for a 2D multiplayer C# web-based RPG

    - by Thirlan
    The title is a mouthful but important, because I'm looking to meet specific criteria, and it's complex enough that I need a lot of help finding what I'm looking for. I really want people's suggestions, because I trust them a lot more than anything else, so I just need to clearly define what I want heh : P

    1. The game is a 2D RPG. Think of Secret of Mana.
    2. The game is online multiplayer, but not MMO-sized.
    3. The game must be web-based! I'm looking to the future and want to hit as many platforms as possible. I'm leaning towards WebGL because of this, but still looking around.
    4. Since the users see the game through the web browser, the front end should mainly be responsible for drawing, taking input, and some basic checking such as preliminary collision detection. This is important because it means the game engine is NOT on the client's machine.
    5. The server should be responsible for the game engine and all the calculations. This means the server does all the work and the client is mostly a dumb terminal.
    6. The server language is C#.
    7. I'm looking for fast project execution, so I want to use as many pre-existing tools as possible. This makes sense because I'm making a game here, not an engine. I'm not creating revolutionary graphics or pushing physics engines to the next level.
    8. Preference for commercially supported tools.

    For game-mechanics reasons, and for reasons 4 and 5, I don't think I can use existing 2D RPG engines. I've seen them out there, and I fear that if I try to use them they will have too many restrictions, but I'll be happy to hear suggestions. So all this means I need a game engine on the server, or maybe just a physics engine, and then another engine/library to draw in the web browser everything the server sends to the client. Maybe this is how 50% of games on the web work and there are plenty of frameworks that support this; I honestly don't know :( but my gut tells me most web games are single-player, with 90% of the game running on the client. So... any suggestions?

    Read the article

  • Alternative to NV Occlusion Query - getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the NV occlusion query extension provides a way to get the number of fragments that passed the depth test. However, on the iPad/iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implementing similar behaviour in the fragment shader? Some of my ideas: (1) Render the object completely in white, then count all the colors together using a two-pass shader, where first a vertical line is rendered and for each of its fragments the shader computes the sum over the whole row; then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem very efficient. (2) Render the object completely in white over a black background. Downsample recursively, abusing the hardware's linear interpolation between texels, until the image is at a reasonably small resolution. This yields fragments whose greyscale level depends on the number of white pixels that were in their corresponding region. Is this even accurate enough? (3) ... ?
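
    For what it's worth, idea (2) is essentially a parallel reduction over the mip chain. A hedged GL sketch (C++; note the ES 2.0 caveat in the comments, so treat this as an outline rather than something guaranteed to run on every device):

        #include <GLES2/gl2.h>
        #include <cmath>

        // Render the object white-on-black into a power-of-two texture first,
        // build mipmaps (each level box-filters 2x2 texels), then read back
        // the topmost level; its grey value times the pixel count approximates
        // the number of fragments that passed.
        float estimateCoverage(GLuint fbo, GLuint tex, int size)
        {
            glBindTexture(GL_TEXTURE_2D, tex);
            glGenerateMipmap(GL_TEXTURE_2D);  // hardware 2x2 averaging per level

            int topLevel = (int)std::log2((float)size);
            // Caveat: core ES 2.0 only allows mip level 0 as an FBO attachment,
            // so on such devices the last few levels need a manual shader
            // reduction instead of this direct attach.
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex, topLevel);

            unsigned char grey[4] = {0};
            glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, grey);
            return (grey[0] / 255.0f) * (float)size * (float)size;
        }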

    Read the article

  • Determining relative velocities on impact?

    - by meds
    I'm trying to figure out a way to determine the relative velocity of one body colliding with another in a 2D environment. For example, if one body is moving at (1,0) and another traveling behind it collides with it from behind at (2,0), the velocity of the impact relative to the first body was (1,0). I need a method that takes two velocities, one belonging to the body the velocity is measured against and the other belonging to the impacting body, and returns the relative velocity.
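
    As described, this is plain vector subtraction. A minimal C++ sketch:

        struct Vec2 { float x, y; };

        // Velocity of 'impactor' as seen from 'reference':
        // relativeVelocity({2,0}, {1,0}) == {1,0}, matching the example above.
        Vec2 relativeVelocity(Vec2 impactor, Vec2 reference)
        {
            return { impactor.x - reference.x, impactor.y - reference.y };
        }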

    Read the article

  • How do I plot individual pixels using the XNA APIs?

    - by izb
    If I wanted to fill my game screen with individually coloured pixels, how would I do this? For example, if I wanted to write a 'game of life'-type game where each pixel is a cell, how would I achieve this using XNA? I've tried just calling SetData() on a Texture2D object using a screen-sized array of Color values, but it complains with:

        You may not call SetData on a resource while it is actively set on the
        GraphicsDevice. Unset it from the device before calling SetData.

    How do I do as it asks? Or better still, is there an alternative, more efficient way to fill a screen with arbitrary pixels?

    Read the article

  • OpenGL: managing images, VBOs and shaders

    - by roxlu
    I'm working on a game where I use shaders with vertex attributes (so not immediate mode). I'm drawing lots of images, and I change the width/height of the quads I use to draw them a lot. To optimize this it's probably a good idea to have one big buffer, but then one needs to update the complete buffer when one image changes (or only a part of the buffer, using glBufferSubData...). I was just wondering what kind of strategies you guys are using?
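
    For comparison, a common pattern for exactly this case is a single streaming VBO that is "orphaned" and refilled each frame. A hedged C++ sketch:

        #include <GL/gl.h>
        #include <vector>

        struct QuadVertex { float x, y, u, v; };

        // One shared buffer for all sprites, rewritten every frame.
        // Passing NULL to glBufferData orphans the old storage, so the driver
        // doesn't stall waiting on draws that still read the previous contents.
        void uploadQuads(GLuint vbo, const std::vector<QuadVertex>& verts)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(QuadVertex),
                         NULL, GL_STREAM_DRAW);                       // orphan
            glBufferSubData(GL_ARRAY_BUFFER, 0,
                            verts.size() * sizeof(QuadVertex), verts.data());
        }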

    Read the article

  • Help with converting an XML into a 2D level (ActionScript 3.0)

    - by inzombiak
    I'm making a little platformer and wanted to use Ogmo to create my level. I've gotten everything to work, except that the level my code generates is not the same as what I see in Ogmo. I've checked the array, and it matches the level in Ogmo, but when I loop through it in my code I get the wrong result. I've included my code for creating the level, as well as an image of what I get and what I'm supposed to get. (EDIT: I tried to add the image, but I couldn't get it to display properly.) Also, if you know of better level editors, please let me know.

        xmlLoader.addEventListener(Event.COMPLETE, LoadXML);
        xmlLoader.load(new URLRequest("Level1.oel"));

        function LoadXML(e:Event):void
        {
            levelXML = new XML(e.target.data);
            xmlFilter = levelXML.*;
            for each (var levelTest:XML in levelXML.*)
            {
                crack = levelTest;
            }
            levelArray = crack.split('');
            trace(levelArray);

            count = 0;
            for (i = 0; i <= 23; i++)
            {
                for (j = 0; j <= 35; j++)
                {
                    if (levelArray[i * 36 + j] == 1)
                    {
                        block = new Platform();
                        s.addChild(block);
                        block.x = j * 20;
                        block.y = i * 20;
                        count++;
                        trace(i);
                        trace(block.x);
                        trace(j);
                        trace(block.y);
                    }
                }
            }
            trace(count);
        }

    Read the article

  • Finding the shorter turning direction towards a target

    - by A.B.
    I'm trying to implement a type of movement where the object gradually turns to face the target. The problem I've run into is figuring out which turning direction is shorter. The following code works until the object's orientation crosses the -PI or PI threshold, at which point it starts turning in the opposite direction:

        void moveToPoint(sf::Vector2f destination)
        {
            if (destination == position) return;

            auto distance = distanceBetweenPoints(position, destination);
            auto direction = angleBetweenPoints(position, destination);

            /// Decides whether incrementing or decrementing orientation is faster.
            /// The next line is the problem:
            if (atan2(sin(direction - rotation), cos(direction - rotation)) > 0) {
                /// Increment rotation
                rotation += rotation_speed;
            } else {
                /// Decrement rotation
                rotation -= rotation_speed;
            }

            if (distance < movement_speed) {
                position = destination;
            } else {
                position.x = position.x + movement_speed * cos(rotation);
                position.y = position.y + movement_speed * sin(rotation);
            }
            updateGraphics();
        }

    'rotation' and 'rotation_speed' are implemented as a custom data type for radians which cannot have values lower than -PI or greater than PI. Any excess or deficit "wraps around"; for example, -3.2 becomes ~3.08.
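
    For reference, a self-contained sketch of the usual shortest-angle test in plain doubles (C++, independent of the custom radians type), which stays correct across the ±PI seam:

        #include <cmath>

        // Signed shortest angular difference from 'from' to 'to', in (-PI, PI].
        // atan2 of the sine/cosine of the raw difference performs the wrap,
        // so the sign is reliable even when the angles straddle the +/-PI seam.
        double shortestDelta(double from, double to)
        {
            return std::atan2(std::sin(to - from), std::cos(to - from));
        }

        // Turn at most 'speed' radians towards 'target' along the shorter way.
        double turnTowards(double rotation, double target, double speed)
        {
            double delta = shortestDelta(rotation, target);
            if (std::fabs(delta) <= speed) return target;   // close enough: snap
            return rotation + (delta > 0 ? speed : -speed);
        }

    Since sine and cosine are periodic, the wrap-around itself should not affect this expression, which suggests the remaining problem may sit in how the custom radians type implements the subtraction or comparison rather than in the atan2 test.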

    Read the article

  • I am going to start learning to develop games, and have a very important question

    - by levi s.
    So I am going to start learning to develop games soon, and I have already learned the basics of Java. Before I really commit, am I making a bad choice of language? Should I stop now and move to C++ or C#? Will that hinder me? Will Java hinder me more? I'm kind of having regrets about saying "oh hey, Minecraft was made in Java, it must be the best!" Mainly I'm asking: what should I do?

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines that define the path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get around this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails
        {
            public SplineDetails()
            {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }

            public float Alpha;      // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward;  // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a position and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, but I'm stuck trying to figure out the logic. My first problem is: how can I determine which two points the object is between? Given that the points can be placed at different intervals along the spline, two points in front of or behind the object could be closer to it. The other problem is figuring out the proportion between the two points it's between, i.e. if point a is at coordinate (0,0,0) and point b at coordinate (1,0,0), and the object is at position (0.5,0,0), the result should be 0.5 (as it is an equal distance from point a and point b). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
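
    For both sub-questions, the standard tool is projecting the object onto each cached segment. A hedged C++ sketch (handles points off the segment, such as (0.5, 3, 0)):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Proportion of 'p' along segment a->b, clamped to [0,1]. A point off
        // the segment projects onto it, so (0.5, 3, 0) between (0,0,0) and
        // (1,0,0) still yields 0.5.
        float segmentT(Vec3 p, Vec3 a, Vec3 b)
        {
            Vec3 ab = sub(b, a);
            float t = dot(sub(p, a), ab) / dot(ab, ab);
            return t < 0 ? 0.0f : (t > 1 ? 1.0f : t);
        }

        // Picking the segment: test consecutive cached points and keep the
        // pair whose projected point lies closest to 'p'.
        float distSqToSegment(Vec3 p, Vec3 a, Vec3 b)
        {
            float t = segmentT(p, a, b);
            Vec3 q = { a.x + (b.x - a.x) * t,
                       a.y + (b.y - a.y) * t,
                       a.z + (b.z - a.z) * t };
            Vec3 d = sub(p, q);
            return dot(d, d);
        }

    Since the character moves continuously along the path, in practice only the segment found last frame and its neighbours need testing, rather than all 30.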

    Read the article

  • Creating a kind of interface in C++ [migrated]

    - by Liuka
    I'm writing a little 2D rendering framework with managers for input and for resources like textures and meshes (for 2D geometry models, like quads). They are all contained in an "engine" class that interacts with them and with a DirectX class. Each manager has some public methods like init or update, which are called by the engine class to create and render the resources, but a lot of them should not be called by the user:

        // in pseudo C++
        // the texture manager class
        class TManager
        {
        private:
            vector textures;
            ...
        public:
            init();
            update();
            renderTexture(); // called by the engine class
            loadtexture();
            gettexture();    // called by the user
        };

        class Engine
        {
        private:
            TManager texManager;
        public:
            Init() { /* initialize all the managers */ }
            Render() { ... }
            Update() { ... }
            // to get a pointer to the manager, if I want to create or get textures:
            TManager* GetTManager() { return &texManager; }
        };

    This way the user, calling Engine::GetTManager, has access to all the public methods of TManager, including init, update and renderTexture, which must be called only by Engine inside its init, render and update functions. So, is it a good idea to implement a user interface in the following way?

        // in pseudo C++
        // the texture manager class
        class TManager
        {
        private:
            vector textures;
            ...
        public:
            init();
            update();
            renderTexture(); // called by the engine class

            friend class TManager_UserInterface;
            operator TManager_UserInterface*()
            {
                return reinterpret_cast<TManager_UserInterface*>(this);
            }
        };

        class TManager_UserInterface : private TManager
        {
            // deleted constructor
            // in this class there will be only methods like:
            loadtexture();
            gettexture();
        };

        class Engine
        {
        private:
            TManager texManager;
        public:
            Init();
            Render();
            Update();
            TManager_UserInterface* GetTManager() { return texManager; }
        };

        // in the main function, when I need to load a texture
        // (I always have access to the Engine class):
        engine->GetTManager()->LoadTexture(...); // only load/get texture are accessible

    This way I can implement several interfaces for each object, keeping visible only the functions I (and the user) will need. Is there a better way to do the same thing? Or is it just useless (i.e. don't hide the framework-private functions and trust the user not to call them)? Previously I used this method:

        class Manager
        {
        public:
            // engine functions
            userfunction();
        };

        class Engine
        {
        private:
            Manager m;
        public:
            init() { /* call Manager init function */ }
            manageruserfunction() { /* call Manager::userfunction() */ }
        };

    This way the user has no access to the Manager class, but it's a bad approach: every time I add a new feature to a manager, I need to add a new method to the Engine class, and that takes a lot of time. Sorry for the bad English.
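
    As a point of comparison, a common way to get the same effect without the reinterpret_cast (which is formally undefined behaviour here) is a small non-owning facade that holds a pointer to the manager and forwards only the user-facing calls. A hedged sketch with invented names:

        #include <string>

        class Texture {};

        class TManager {
        public:
            // engine-only: init(), update(), renderTexture() ...
            Texture* loadTexture(const std::string& path) { /* ... */ return nullptr; }
            Texture* getTexture(const std::string& name)  { /* ... */ return nullptr; }
        };

        // Narrow, copyable view over TManager: the user can reach only what
        // the facade chooses to forward. No cast, no inheritance tricks.
        class TextureAPI {
            TManager* m;                       // non-owning
        public:
            explicit TextureAPI(TManager* mgr) : m(mgr) {}
            Texture* load(const std::string& path) { return m->loadTexture(path); }
            Texture* get(const std::string& name)  { return m->getTexture(name); }
        };

        class Engine {
            TManager texManager;
        public:
            TextureAPI textures() { return TextureAPI(&texManager); }
        };

    Adding a feature then costs one forwarding line in the facade rather than a new method on Engine.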

    Read the article

  • Writing a Master's Thesis on evaluating visual scripting systems

    - by user1107412
    I am thinking of writing my Master's thesis around theorizing about, and then implementing, a PlayMaker- or Kismet-like tool (building game logic by visually arranging FSMs) in Unity. The only thing I am still unsure about is the actual research question I should pose. I was kind of hoping the more experienced game designers out there might know. Update: What about reducing the use of visual programming to graphically designing FSM-Action-Transition flows, which can then be attached to game entities (very much like http://playmaker.com does it)?

    Read the article
