Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • Deserialize inherited classes into the same list in XNA

    - by M0rgenstern
    I am writing a GUI for a game (for what else...). For this I wrote a class GuiElement which has some serializable fields. From this class I derive a class Button, which has one additional serializable field. Furthermore, I have a class GuiWindow, which is likewise derived from GuiElement. An object of this class has a field HandledElements of the type List. To describe the layout of the menus I use XML files, which look like this (for example):

        <?xml version="1.0" encoding="utf-8" ?>
        <XnaContent xmlns:Generic="System.Collections.Generic">
          <Asset Type="System.Collections.Generic.List[GUI.GuiWindow]">
            <Item>
              <Position>0 0</Position>
              <AlternativeImagePath></AlternativeImagePath>
              <IsActive>true</IsActive>
              <Name>MainMenu</Name>
              <HandledElements>
                <Item>
                  <Position>100 100</Position>
                  <AlternativeImagePath></AlternativeImagePath>
                  <IsActive>true</IsActive>
                  <Name>Optionen</Name>
                  <Caption>Optionen</Caption>
                </Item>
              </HandledElements>
            </Item>
            <Item>
              <Position>0 0</Position>
              <AlternativeImagePath></AlternativeImagePath>
              <IsActive>false</IsActive>
              <Name>Options</Name>
              <HandledElements>
              </HandledElements>
            </Item>
          </Asset>
        </XnaContent>

    As you can see, the first window has in its HandledElements list an item with the field <Caption>, a field which only a Button has. The problem is: I can't deserialize this XML file, because GuiElement does not have this field; it only has the few fields above. I thought the serializer would automatically know which class to use, but it doesn't. Do I really have to give my windows a separate list for each child class of GuiElement, or is there another workaround?


  • Maya .IFF plugins for Gimp

    - by Kara Marfia
    Maya's preferred format for saving off a UV snapshot is its own .IFF format, so I was hoping to find a plugin that allows GIMP 2 (Windows) to read it. I've found plenty of plugins for different Linux distros, but none are Windows-friendly (that I can discern - admittedly I'm no whiz with GIMP). Does anyone know of one? Alternatively, .tiff seems to work just fine, so if there's no good reason to bother fiddling with IFFs, I'd appreciate the input there, too. (Sorry if this isn't on-topic.)


  • How to do geometric projection shadows?

    - by John Murdoch
    I have decided that since my game world is mostly flat, I don't need better shadows than geometric projections - at least for now. The only problem is I don't even know how to do those properly - that is, how to produce a 4x4 matrix which would render shadows for my objects (that is, I guess, project them onto a horizontal XZ plane). I would like a light source at infinity (e.g., the sun at some point in the sky) and thus a parallel projection. My current code does something that looks almost right for small flying objects, but it is actually a very crude approximation, as it doesn't project the objects onto the ground but simply moves them there (I think). It also always wrongly assumes the sun is at the zenith (projecting straight down).

        Gdx.gl20.glEnable(GL10.GL_BLEND);
        Gdx.gl20.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);

        // shells
        shellTexture.bind();
        shader.begin();
        for (ShellState state : shellStates.values()) {
            transform.set(camera.combined);
            transform.mul(state.transform);
            shader.setUniformMatrix("u_worldView", transform);
            shader.setUniformi("u_texture", 0);
            shellMesh.render(shader, GL10.GL_TRIANGLES);
        }
        shader.end();

        // shadows
        shader.begin();
        for (ShellState state : shellStates.values()) {
            transform.set(camera.combined);
            m4.set(state.transform);
            state.transform.getTranslation(v3);
            // TODO HACK: + 0.5f ensures the shadow appears above the ground; this is
            // overall a hack, as we are just moving the shell to the surface instead
            // of projecting it onto the surface!
            m4.translate(0, -v3.y + 0.5f, 0);
            transform.mul(m4);
            shader.setUniformMatrix("u_worldView", transform);
            shader.setUniformi("u_texture", 0);
            // TODO: make the shadow black somehow
            shellMesh.render(shader, GL10.GL_TRIANGLES);
        }
        shader.end();

        Gdx.gl.glDisable(GL10.GL_BLEND);

    So my questions are: a) What is the proper way to produce a Matrix4 to pass to OpenGL which would render the shadows for my objects? b) I am supposed to use another fragment shader for the shadows, one that paints them in semi-transparent grey, correct? c) The limitation of this simplistic approach is that wherever there is some object on the ground (i.e. the ground is not flat), the shadows will not be drawn, correct? d) Do I need to add something very small to the y (up) coordinate to avoid z-fighting with ground textures, or is the fact that they are semi-transparent enough to resolve that problem?
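
    A parallel projection onto a ground plane can be written directly as a Matrix4. The following is only a sketch, assuming libgdx; the helper name and the lightDir/groundY parameters are illustrative, not from the question:

        import com.badlogic.gdx.math.Matrix4;
        import com.badlogic.gdx.math.Vector3;

        public class ShadowMath {
            /** Flattens geometry onto the plane y = groundY, projecting along the
             *  light's travel direction (lightDir.y must be non-zero). */
            public static Matrix4 planarShadow(Vector3 lightDir, float groundY) {
                float kx = lightDir.x / lightDir.y;
                float kz = lightDir.z / lightDir.y;
                Matrix4 m = new Matrix4(); // starts as identity
                m.val[Matrix4.M01] = -kx; m.val[Matrix4.M03] = groundY * kx; // x' = x - (y - groundY) * kx
                m.val[Matrix4.M11] = 0f;  m.val[Matrix4.M13] = groundY;      // y' = groundY
                m.val[Matrix4.M21] = -kz; m.val[Matrix4.M23] = groundY * kz; // z' = z - (y - groundY) * kz
                return m;
            }
        }

    Setting transform to camera.combined times this matrix times state.transform would flatten each shell onto the ground; a second fragment shader that outputs semi-transparent black would cover question (b), and passing a slightly raised groundY addresses the z-fighting concern in (d).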


  • Time calculation between OpenGL update calls

    - by Vijayendra
    In XNA, the system calls the update and draw functions with timing information, such as how much time has passed since update was last called. This makes it easy to integrate over time and do animation calculations accordingly. But I don't see any such mechanism in OpenGL; it seems OpenGL requires programmers to write their own implementation, which could be buggy or inefficient. Is there any standard (and efficient) code that demonstrates this practice in OpenGL?
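
    OpenGL leaves timing out of scope by design, so the usual practice is to measure elapsed time yourself and feed a fixed-timestep update from it. A minimal sketch in Java, assuming the host application calls frame() once per frame; the update/render methods are placeholders:

        public final class GameLoop {
            private static final double STEP = 1.0 / 60.0; // fixed physics step, in seconds
            private double accumulator = 0.0;
            private long previous = System.nanoTime();

            public void frame() {
                long now = System.nanoTime();
                double delta = (now - previous) / 1_000_000_000.0;
                previous = now;
                accumulator += Math.min(delta, 0.25); // clamp to avoid the "spiral of death"
                while (accumulator >= STEP) {
                    update(STEP);                     // advance the simulation by a fixed step
                    accumulator -= STEP;
                }
                render(accumulator / STEP);           // leftover fraction, usable for interpolation
            }

            private void update(double dt) { /* integrate physics here */ }
            private void render(double alpha) { /* draw, interpolating by alpha */ }
        }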


  • Cocos2d and a body with a few collision shapes using Chipmunk

    - by Eimantas
    I'm using Cocos2d (0.99.5) with the Chipmunk physics engine. Currently I'm trying to place a body into the space which is built from a few circle shapes. Let's say I have a corresponding sprite image that displays an atom (a nucleus + 3 electrons around it; something like this without the orbit lines). In its simplest form, only one circle shape at the center should be enough, which would detect collisions of other objects with the nucleus. Now I'd like to add further circle shapes, one for each electron. How can I do that? Currently, when I add those shapes to the body and add the body into the Chipmunk space, the shapes (together with the body/sprite) start flickering and spinning with no recognizable pattern (or reason, for that matter).


  • Smooth animation when using fixed time step

    - by sythical
    I'm trying to implement a game loop where the physics is independent from rendering, but my animation isn't as smooth as I would like it to be and it seems to jump periodically. Here is my code:

        // alpha is used for interpolation
        double alpha = 0, counter_old_time = 0;
        double accumulator = 0, delta_time = 0, current_time = 0, previous_time = 0;
        unsigned frame_counter = 0, current_fps = 0;
        const unsigned physics_rate = 40, max_step_count = 5;
        const double step_duration = 1.0 / 40.0, accumulator_max = step_duration * 5;

        // information about the circle (position and velocity)
        int old_pos_x = 100, new_pos_x = 100, render_pos_x = 100, velocity_x = 60;

        previous_time = al_get_time();

        while(true)
        {
            current_time = al_get_time();
            delta_time = current_time - previous_time;
            previous_time = current_time;
            accumulator += delta_time;

            if(accumulator > accumulator_max)
            {
                accumulator = accumulator_max;
            }

            while(accumulator >= step_duration)
            {
                if(new_pos_x > 1330) velocity_x = -15;
                else if(new_pos_x < 70) velocity_x = 15;

                old_pos_x = new_pos_x;
                new_pos_x += velocity_x;

                accumulator -= step_duration;
            }

            alpha = accumulator / static_cast<double>(step_duration);
            render_pos_x = old_pos_x + (new_pos_x - old_pos_x) * alpha;

            al_clear_to_color(al_map_rgb(20, 20, 40)); // clears the screen
            al_draw_textf(font, al_map_rgb(255, 255, 255), 20, 20, 0, "current_fps: %i", current_fps); // print fps
            al_draw_filled_circle(render_pos_x, 400, 15, al_map_rgb(255, 255, 255)); // draw circle

            // I've added this to test how the program will behave when rendering takes
            // considerably longer than updating the game.
            al_rest(0.008);

            al_flip_display(); // swaps the buffers

            frame_counter++;
            if(al_get_time() - counter_old_time >= 1)
            {
                current_fps = frame_counter;
                frame_counter = 0;
                counter_old_time = al_get_time();
            }
        }

    I added a pause during the rendering part because I wanted to see how the code would behave when a lot of rendering is involved. Removing it makes the animation smooth, but then I'd have to make sure that I don't let the frame rate drop too much, and that doesn't seem like a good solution. I've been trying to fix this for a week with no luck, so I'd be very grateful if someone could read through my code. Thank you! Edit: I added the following code to work out the actual velocity (pixels per second) of the ball each time it is rendered, and surprisingly it's not constant, so I'm guessing that's the issue. I'm not sure why it isn't constant.

        alpha = accumulator / static_cast<double>(step_duration);
        render_pos_x = old_pos_x + (new_pos_x - old_pos_x) * alpha;
        cout << (render_pos_x - old_render_pos) / delta_time << endl;
        old_render_pos = render_pos_x;


  • How to get all keys and values of PlayerPrefs in Unity [JavaScript]

    - by Akari
    In the first test game I've developed, if the player passes all the levels and wins, he must enter his name, and his name and score are then stored in PlayerPrefs. There is another scene that displays the names and scores of all the users who have passed the game. I've searched since this morning and tried all the ways I know, and in the end I failed to get it working. Is it possible to display all the key/value pairs previously stored in PlayerPrefs? Or can someone provide me with a JavaScript snippet to do this? Thanks.


  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see was: But what I got was this: I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.


  • 2D platformers: why make the physics dependent on the framerate?

    - by Archagon
    "Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a patformer in that vein without having the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing.


  • Help, I can't reference my vars!

    - by SystemNetworks
    I have a sub-class (let's call it sub) and it contains all the functions of an object in my game. In my main class (let's call it main), I create an instance of sub:

        s = new sub();

    Then I call sub's function in the update method:

        s.myFunc();

    My sub has booleans, integers, floats and more. The problem is that I don't want my sub to use main's ints, booleans and the others directly; if I connect them that way, I get a stack overflow. This is what I put in my sub:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.state.StateBasedGame;

        public class Store {

            public Integer wood;
            public Float probePositionX;
            public Float probePositionY;
            public Boolean StoreOn;
            public Boolean darkBought;
            public Integer money;
            public Integer darkEnergy;
            public Integer lifeLeft;
            public Integer powerLeft;

            public void darkStores(GameContainer gc, StateBasedGame sbg, GameContainer gc2)
            {
                Input input1 = gc.getInput();
                // The player needs wood to enter (200). If not, there will be an error.
                if (wood >= 200)
                {
                    // Enter store!
                    if (input1.isKeyDown(Input.KEY_Q))
                    {
                        // The player must be within these coordinates!
                        if ((probePositionX > 393 && probePositionX < 555) && (probePositionY < 271 && probePositionY > 171))
                        {
                            // The store is on
                            StoreOn = true;
                        }
                    }
                }
            }
        }

    In my main update function I put:

        s.darkBought = darkBought;
        s.darkEnergy = darkEnergy;
        s.lifeLeft = lifeLeft;
        s.money = money;
        s.powerLeft = powerLeft;
        s.probePositionX = probePositionX;
        s.probePositionY = probePositionY;
        s.StoreOn = StoreOn;
        s.wood = wood;
        s.darkStores(gc, sbg, gc);

    The problem is that when I go to the place and press Q, nothing shows up. It should show another image. Is there anything wrong?


  • Largest sphere inside a frustum

    - by Will
    How do you find the largest sphere that you can draw in perspective? Viewed from the top, it'd be this: Added: on the frustum on the right, I've marked four points I think we know something about. We can unproject all eight corners of the frustum, and the centres of the near and far ends. So we know points 1, 3 and 4. We also know that point 2 is the same distance from 3 as 4 is from 3. So then we can compute the nearest point on the line 1-4 to point 2 in order to get the centre? But the actual math and code escape me. I want to draw models (which are approximately spherical and for which I have a miniball bounding sphere) as large as possible. Update: I've tried to implement the incircle-on-two-planes approach as suggested by bobobobo and Nathan Reed:

        function getFrustumsInsphere(viewport, invMvpMatrix) {
            var midX = viewport[0]+viewport[2]/2,
                midY = viewport[1]+viewport[3]/2,
                centre = unproject(midX,midY,null,null,viewport,invMvpMatrix),
                incircle = function(a,b) {
                    var c = ray_ray_closest_point_3(a,b);
                    a = a[1]; // far clip plane
                    b = b[1]; // far clip plane
                    c = c[1]; // camera
                    var A = vec3_length(vec3_sub(b,c)),
                        B = vec3_length(vec3_sub(a,c)),
                        C = vec3_length(vec3_sub(a,b)),
                        P = 1/(A+B+C),
                        x = ((A*a[0])+(B*a[1])+(C*a[2]))*P,
                        y = ((A*b[0])+(B*b[1])+(C*b[2]))*P,
                        z = ((A*c[0])+(B*c[1])+(C*c[2]))*P;
                    c = [x,y,z]; // now the centre of the incircle
                    c.push(vec3_length(vec3_sub(centre[1],c))); // add its radius
                    return c;
                },
                left = unproject(viewport[0],midY,null,null,viewport,invMvpMatrix),
                right = unproject(viewport[2],midY,null,null,viewport,invMvpMatrix),
                horiz = incircle(left,right),
                top = unproject(midX,viewport[1],null,null,viewport,invMvpMatrix),
                bottom = unproject(midX,viewport[3],null,null,viewport,invMvpMatrix),
                vert = incircle(top,bottom);
            return horiz[3]<vert[3]? horiz: vert;
        }

    I admit I'm winging it; I'm trying to adapt 2D code by extending it into 3 dimensions. It doesn't compute the insphere correctly; the centre-point of the sphere seems to be on the line between the camera and the top-left each time, and it's too big (or too close). Are there any obvious mistakes in my code? Does the approach, if fixed, work?
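
    For comparison, the textbook incenter of a triangle (A, B, C) weights each vertex by the length of the side opposite it, and each output coordinate combines the same component of all three vertices; in the snippet above, the x/y/z lines each read three different components of a single vertex (a[0], a[1], a[2]), which looks suspicious. A sketch of the standard formula, written in Java with libgdx's Vector3 purely for illustration:

        import com.badlogic.gdx.math.Vector3;

        public class TriMath {
            /** Incenter of triangle (A, B, C): weight each vertex by the length
             *  of the side opposite it, then divide by the perimeter. */
            public static Vector3 incenter(Vector3 A, Vector3 B, Vector3 C) {
                float a = B.dst(C); // side opposite A
                float b = A.dst(C); // side opposite B
                float c = A.dst(B); // side opposite C
                float p = a + b + c;
                return new Vector3(
                    (a * A.x + b * B.x + c * C.x) / p,
                    (a * A.y + b * B.y + c * C.y) / p,
                    (a * A.z + b * B.z + c * C.z) / p);
            }
        }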


  • Using 2D sprites and 3D models together

    - by Sweta Dwivedi
    I have gone through a few posts that talk about changing the GraphicsDevice.BlendState and GraphicsDevice.DepthStencilState (SpriteBatch & render states). However, even after changing the states I can't see my 3D model on the screen. I see the model for a second before I draw my video in the background. Here is the code:

        case GameState.InGame:
            GraphicsDevice.Clear(Color.AliceBlue);
            spriteBatch.Begin();

            if (player.State != MediaState.Stopped)
            {
                videoTexture = player.GetTexture();
            }

            Rectangle screen = new Rectangle(GraphicsDevice.Viewport.X,
                GraphicsDevice.Viewport.Y,
                GraphicsDevice.Viewport.Width,
                GraphicsDevice.Viewport.Height);

            // Draw the video, if we have a texture to draw.
            if (videoTexture != null)
            {
                spriteBatch.Draw(videoTexture, screen, Color.White);

                if (Selected_underwater == true)
                {
                    spriteBatch.DrawString(font, "MaxX , MaxY" + maxWidth + "," + maxHeight, new Vector2(400, 10), Color.Red);
                    spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White);
                    spriteBatch.Draw(butterfly, handPosition, Color.White);

                    foreach (AnimatedSprite a in aSprites)
                    {
                        a.Draw(spriteBatch);
                    }
                }

                if (Selected_planet == true)
                {
                    spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White);
                    spriteBatch.Draw(butterfly, handPosition, Color.White);
                    spriteBatch.Draw(videoTexture, screen, Color.White);

                    GraphicsDevice.BlendState = BlendState.Opaque;
                    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
                    GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

                    foreach (_3DModel m in Solar)
                    {
                        m.DrawModel();
                    }
                }
            }

            spriteBatch.End();
            break;


  • Exporting XNA class library as a DLL file

    - by Will Bagley
    I have downloaded an open source project that I intend to use with my current game. The download came with all the class files from the original project as well as a pre-compiled DLL file representing the project. I was able to easily link this DLL with my current project and get it working just fine, no problems there. The problem I now have is that I want to make a couple of changes to the original library (extend its functionality a bit to better suit my needs) and re-export the class library as a DLL again, but I have no clue how to do this. Is there some simple way in VS where I can just take the class library and export/compile it as a DLL file again, or is there more to it than that? This seems like something that should be pretty simple, but my efforts to find an answer have so far come up with nothing. Thanks in advance.


  • Custom Music in Skyrim's Creation Kit?

    - by CptSupermrkt
    Can you bring in external music such as MP3s? If so, how? I didn't see anything about this in the wiki Bethesda released. Also, how does this work with regard to the Steam Workshop? I don't imagine they would appreciate people uploading copyrighted content. I don't particularly care about making a public mod; I just want to screw around privately and create dungeons/towns using music from some of my favorite games. Thanks.


  • What's the best game engine to use for my PC game project? [closed]

    - by user19860
    I'm in the planning phase of creating an action RPG for the PC, and I'd like to create a League of Legends style look for the game (animated/cartoony). Any idea which engine best replicates this look? I ask because when I look at a lot of the UDK/Unreal games, they've all got the more realistic 3D look that I'd like to avoid, so I was wondering if an alternate look is possible on that type of engine. Source SDK and Unity also look very interesting; I just don't know what types of visual capabilities these engines have. Thanks in advance.


  • How to use the zoom gesture in libgdx?

    - by user3452725
    I found the example code for the GestureListener class, but I don't understand the zoom method:

        private float initialScale = 1;

        public boolean zoom (float originalDistance, float currentDistance) {
            float ratio = originalDistance / currentDistance; // I get this
            // This doesn't make sense to me, because it seems like every time you
            // pinch to zoom, it resets to the original zoom, which is 1. So
            // basically it wouldn't 'save' the zoom, right?
            camera.zoom = initialScale * ratio;
            System.out.println(camera.zoom); // prints the camera zoom
            return false;
        }

    Am I not interpreting this right?
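
    The snippet only works as intended if initialScale is re-captured at the start of each gesture, which the example omits. A common pattern, sketched here under the assumption of an OrthographicCamera (the class name is illustrative):

        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.input.GestureDetector;

        public class CameraGestureListener extends GestureDetector.GestureAdapter {
            private final OrthographicCamera camera;
            private float initialScale = 1f;

            public CameraGestureListener(OrthographicCamera camera) {
                this.camera = camera;
            }

            @Override
            public boolean touchDown(float x, float y, int pointer, int button) {
                initialScale = camera.zoom; // remember where this gesture starts
                return false;
            }

            @Override
            public boolean zoom(float originalDistance, float currentDistance) {
                // accumulates across gestures, because initialScale was snapshotted above
                camera.zoom = initialScale * (originalDistance / currentDistance);
                return true;
            }
        }

    Registered with, for example, Gdx.input.setInputProcessor(new GestureDetector(new CameraGestureListener(camera))).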


  • How to define a field of view for the entire map for shadows?

    - by Mehdi Bugnard
    I recently added shadow mapping to my XNA game to include shadows. I followed the nice and famous tutorial from Riemers: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shadow_map.php . The code works and I can see my light source and shadows. But the problem is that my light source does not match the field of view that I created. I want the light to cover the entire map of my game. I don't know why, but the light only affects 2-3 cubes of my map. Screenshot: (the emission of light illuminates only 2-3 blocks and not the full map.) Here is the code where I create the field of view for the light's view-projection matrix:

        Vector3 lightDir = new Vector3(10, 52, 10);
        lightPos = new Vector3(10, 52, 10);
        Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(105, 50, 105), new Vector3(0, 1, 0));
        Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 20f, 1000f);
        lightsViewProjectionMatrix = lightsView * lightsProjection;

    As you can see, my near and far planes are set to 20f and 1000f, so I don't know why the light stops after two cubes; it should reach much further. Here I set the values of my custom HLSL effect in the shader file:

        /* SHADOW VALUES */
        effectWorld.Parameters["LightDirection"].SetValue(lightDir);
        effectWorld.Parameters["xLightsWorldViewProjection"].SetValue(Matrix.Identity * lightsViewProjectionMatrix);
        effectWorld.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * arcadia.camera.View * arcadia.camera.Projection);
        effectWorld.Parameters["xLightPower"].SetValue(1f);
        effectWorld.Parameters["xAmbient"].SetValue(0.3f);

    Here is my custom HLSL shader effect file (*.fx); I edited it to show you only the information and functions relevant to the shadow ;-)

        // This sample uses a simple Lambert lighting model.
        float3 LightDirection = normalize(float3(-1, -1, -1));
        float3 DiffuseLight = 1.25;
        float3 AmbientLight = 0.25;

        uniform const float3 DiffuseColor = 1;
        uniform const float Alpha = 1;
        uniform const float3 EmissiveColor = 0;
        uniform const float3 SpecularColor = 1;
        uniform const float SpecularPower = 16;
        uniform const float3 EyePosition;

        // FOG attributes
        uniform const float FogEnabled;
        uniform const float FogStart;
        uniform const float FogEnd;
        uniform const float3 FogColor;

        float3 cameraPos : CAMERAPOS;

        texture Texture;
        sampler Sampler = sampler_state
        {
            Texture = (Texture);
            magfilter = LINEAR;
            minfilter = LINEAR;
            mipfilter = LINEAR;
            AddressU = mirror;
            AddressV = mirror;
        };

        texture xShadowMap;
        sampler ShadowMapSampler = sampler_state
        {
            Texture = <xShadowMap>;
            magfilter = LINEAR;
            minfilter = LINEAR;
            mipfilter = LINEAR;
            AddressU = clamp;
            AddressV = clamp;
        };

        /* *************** */
        /* SHADOW MAP CODE */
        /* *************** */

        struct SMapVertexToPixel
        {
            float4 Position : POSITION;
            float4 Position2D : TEXCOORD0;
        };

        struct SMapPixelToFrame
        {
            float4 Color : COLOR0;
        };

        struct SSceneVertexToPixel
        {
            float4 Position : POSITION;
            float4 Pos2DAsSeenByLight : TEXCOORD0;
            float2 TexCoords : TEXCOORD1;
            float3 Normal : TEXCOORD2;
            float4 Position3D : TEXCOORD3;
        };

        struct SScenePixelToFrame
        {
            float4 Color : COLOR0;
        };

        float DotProduct(float3 lightPos, float3 pos3D, float3 normal)
        {
            float3 lightDir = normalize(pos3D - lightPos);
            return dot(-lightDir, normal);
        }

        SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL)
        {
            SSceneVertexToPixel Output = (SSceneVertexToPixel)0;
            Output.Position = mul(inPos, xWorldViewProjection);
            Output.Pos2DAsSeenByLight = mul(inPos, xLightsWorldViewProjection);
            Output.Normal = normalize(mul(inNormal, (float3x3)World));
            Output.Position3D = mul(inPos, World);
            Output.TexCoords = inTexCoords;
            return Output;
        }

        SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn)
        {
            SScenePixelToFrame Output = (SScenePixelToFrame)0;

            float2 ProjectedTexCoords;
            ProjectedTexCoords[0] = PSIn.Pos2DAsSeenByLight.x / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
            ProjectedTexCoords[1] = -PSIn.Pos2DAsSeenByLight.y / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f;

            float diffuseLightingFactor = 0;
            if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y))
            {
                float depthStoredInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).r;
                float realDistance = PSIn.Pos2DAsSeenByLight.z / PSIn.Pos2DAsSeenByLight.w;
                if ((realDistance - 1.0f / 100.0f) <= depthStoredInShadowMap)
                {
                    diffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal);
                    diffuseLightingFactor = saturate(diffuseLightingFactor);
                    diffuseLightingFactor *= xLightPower;
                }
            }

            float4 baseColor = tex2D(Sampler, PSIn.TexCoords);
            Output.Color = baseColor * (diffuseLightingFactor + xAmbient);
            return Output;
        }

        SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION)
        {
            SMapVertexToPixel Output = (SMapVertexToPixel)0;
            Output.Position = mul(inPos, xLightsWorldViewProjection);
            Output.Position2D = Output.Position;
            return Output;
        }

        SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn)
        {
            SMapPixelToFrame Output = (SMapPixelToFrame)0;
            Output.Color = PSIn.Position2D.z / PSIn.Position2D.w;
            return Output;
        }

        /* ******************* */
        /* END SHADOW MAP CODE */
        /* ******************* */

        // For rendering without instancing.
        technique ShadowMap
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 ShadowMapVertexShader();
                PixelShader = compile ps_2_0 ShadowMapPixelShader();
            }
        }

        technique ShadowedScene
        {
            /*
            pass Pass0
            {
                VertexShader = compile vs_2_0 VSBasicTx();
                PixelShader = compile ps_2_0 PSBasicTx();
            }
            */
            pass Pass1
            {
                VertexShader = compile vs_2_0 ShadowedSceneVertexShader();
                PixelShader = compile ps_2_0 ShadowedScenePixelShader();
            }
        }

        technique SimpleFog
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 VSBasicTx();
                PixelShader = compile ps_2_0 PSBasicTx();
            }
        }


  • How to limit click'n'drag movement to an area?

    - by Vexille
    I apologize for the somewhat generic title. I really don't have much of a clue about how to accomplish what I'm trying to do, which makes it hard even to research a possible solution. I'm trying to implement a path marker of sorts (maybe there's a more suitable name for it, but this is the best I could come up with). In front of the player there will be a path marker, which determines how the player will move once he finishes planning his turn. The player may click and drag the marker to the position they choose, but the marker can only be moved within a defined working area (the gray bit). So I'm now stuck with two problems: First of all, how exactly should I define that workable area? I can imagine maybe two vectors that have the player as a starting point to form the working angle, and maybe those two arcs could come from circles that have their center where the player is, but I definitely don't know how to put this all together. And secondly, after I've defined the area where the marker can be placed, how can I enforce that the marker stays within that area? For example, if the player clicks and drags the marker around, it may move freely within the working area, but must not leave its boundaries. So for example, if the player starts dragging the marker upwards, it will move upwards until it hits the end of the working area (first diagram below), but if after that the player starts dragging sideways, the marker must follow the drag while still staying within the area (second diagram below). I hope this wasn't all too confusing. Thanks, guys.
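
    One way to read that working area is as an annular sector around the player: an inner and an outer radius plus a maximum angle on either side of the facing direction, which matches the two-vectors-and-two-arcs description. Clamping the dragged point then reduces to clamping its polar coordinates. A minimal sketch in Java; every name here is illustrative:

        public class SectorClamp {
            /** Clamps point (x, y) into the sector around player (px, py):
             *  radius in [rMin, rMax], angle within halfAngle of facing (radians). */
            public static double[] clampToSector(double px, double py, double facing,
                                                 double halfAngle, double rMin, double rMax,
                                                 double x, double y) {
                double dx = x - px, dy = y - py;
                double r = Math.min(Math.max(Math.hypot(dx, dy), rMin), rMax);
                double raw = Math.atan2(dy, dx) - facing;
                double delta = Math.atan2(Math.sin(raw), Math.cos(raw)); // wrap to [-pi, pi]
                delta = Math.min(Math.max(delta, -halfAngle), halfAngle);
                double a = facing + delta;
                return new double[] { px + r * Math.cos(a), py + r * Math.sin(a) };
            }
        }

    Calling this every frame while the mouse button is held gives exactly the slide-along-the-boundary behaviour described: the radius and angle are clamped independently, so a sideways drag keeps following the arc.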


  • RTS design with libgdx?

    - by user36531
    I am attempting to create a simple RTS multiplayer strategy game using libgdx. I am stumped at the moment. I want the underlying game world to run at all times and be aware of where all items are on the map, so that if player A logs in, moves a unit to some location on the grid and logs off, that unit's info is still there and can be accessed again by player A when they log back on to move it somewhere else (if it didn't get attacked while player A was logged off). How can I do this? Do I create a main game world on the server and, when players connect, have the client just sequentially request what's in each visible tile? Is there an easier way to get this done? Or should I go the SQL route? What's better?
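
    A server-side authoritative world is the usual answer: the server owns all unit state and keeps ticking whether or not the owning player is connected, and clients only receive what they can see. A bare-bones sketch; all class and method names here are invented for illustration:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class World {
            static final class Unit { int x, y, targetX, targetY; String ownerId; }

            private final Map<Long, Unit> units = new ConcurrentHashMap<>();

            // Called at a fixed rate (e.g. 10 ticks/s), regardless of who is online.
            void tick() {
                for (Unit u : units.values()) {
                    if (u.x != u.targetX) u.x += Integer.signum(u.targetX - u.x);
                    if (u.y != u.targetY) u.y += Integer.signum(u.targetY - u.y);
                }
            }

            // When a player logs in or scrolls, send only the units near their view.
            Map<Long, Unit> visibleTo(int viewX, int viewY, int range) {
                Map<Long, Unit> out = new ConcurrentHashMap<>();
                units.forEach((id, u) -> {
                    if (Math.abs(u.x - viewX) <= range && Math.abs(u.y - viewY) <= range)
                        out.put(id, u);
                });
                return out;
            }
        }

    Whether the state also gets persisted to SQL is then just a durability question (periodic snapshots), not something the game loop itself needs to touch.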


  • Developing an Elo-like point system for a multiplayer gaming site

    - by Alejandro Piad
    I'm currently working on a gaming site where users will submit virtual players for different games, like Chess, Nash, Backgammon, Go, etc. The idea is that users don't compete themselves, but through their virtual players. There will be leagues, tournaments, and other competition formats. The question is which would be a good rating system for users in this environment. Take into account that every user may have many different virtual players playing in many different games. As a general guideline I would like to guarantee the following properties: Users who have a lot of mediocre players should not score higher than users with a few very good players. A user with a high rating should not be penalized if he adds a new bad player, until he has had enough time to improve his player. Users who don't play often should not score higher than users who play every day. Thanks in advance.
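
    For reference, the core Elo update is only a couple of lines; the sketch below uses the conventional chess constants (the 400 scale and a K factor). Turning per-player ratings into a per-user score that satisfies the three properties above is a separate design layer on top:

        public class Elo {
            /** Probability that A beats B under the Elo model. */
            static double expectedScore(double ratingA, double ratingB) {
                return 1.0 / (1.0 + Math.pow(10.0, (ratingB - ratingA) / 400.0));
            }

            /** score: 1 for a win, 0.5 for a draw, 0 for a loss. */
            static double updatedRating(double rating, double score, double expected, double k) {
                return rating + k * (score - expected);
            }
        }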


  • In a Tower defense game, how to do buffs/debuffs

    - by Gabe
    The question is at the very bottom. If you understand buffs/debuffs in tower defense games, then you should skip the bulk of this question and go to the bottom (separated with the long line). I plan on making an iPhone TD game. The fact that it's an iPhone game isn't relevant, but I am coding in Objective-C with Cocos2D. I am relatively inexperienced in the field of game design, so I'm looking for some advice from someone experienced in this field. In tower defense, there are two things that are relevant to my question: towers and enemies (both have their own classes/children). They each have stats like hp, damage, speed, etc. I want to add buffs/debuffs, for instance: towers A, B and C each have 15 base damage. Tower D would be a buff tower with no damage, a tower with an AOE (area of effect) aura that gives 10% damage to all towers in range. Tower E might slow enemies in its AOE, a debuff. Stuff like that. The same could go for enemies. Enemy A is a boss that has a slow aura that affects towers and slows their base attack speed, or something along those lines. So the question is, what would be the most effective way to implement this? If it was just towers, then I would just mess around with the tower classes, but since tower classes and enemy classes are both affected, should I make a buff class? TD games can consume quite a bit of memory with large amounts of creeps and towers, and I feel like buffs would also consume quite a bit... so I'm trying to be as efficient as possible.
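
    A buff class is the common route: hang a list of lightweight modifier objects off a base class shared by towers and enemies, so an aura only has to add or remove one small object per affected unit. A sketch in Java (the structure carries over to Objective-C; all names are illustrative):

        import java.util.ArrayList;
        import java.util.List;

        abstract class Unit {                     // shared by towers and enemies
            float baseDamage, baseAttackSpeed;
            final List<Buff> buffs = new ArrayList<>();

            float effectiveDamage() {
                float d = baseDamage;
                for (Buff b : buffs) d = b.modifyDamage(d);
                return d;
            }
        }

        interface Buff {                          // a buff is data, not a heavyweight entity
            default float modifyDamage(float damage) { return damage; }
            // expiry timers, stacking rules, speed modifiers etc. would go here
        }

        class DamageAura implements Buff {        // Tower D's +10% damage aura
            @Override
            public float modifyDamage(float damage) { return damage * 1.10f; }
        }

    Memory-wise this stays cheap: a stateless buff like the aura above can even be a single instance shared by every unit in range.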


  • Double-sided face with two normals

    - by Marnix
    I think this isn't possible, but I just want to check: is it possible to create a face in OpenGL that has two normals? That is, I want both the inside and the outside of a cylinder to be drawn, with the lighting behaving as expected rather than being calculated from the single given normal. I was trying to do this with backface culling off, so I would have both faces, but the light was of course calculated wrongly for one of them. Is this possible, or do I have to draw an inside and an outside, i.e. draw twice?
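
    For the fixed-function pipeline there is a built-in answer: two-sided lighting, which lights back faces with the negated normal, so one face effectively gets two normals without drawing twice. A sketch using LWJGL's Java bindings as an assumed binding; in C the calls are the same glDisable/glLightModeli:

        import org.lwjgl.opengl.GL11;

        public class GlState {
            static void enableTwoSidedLighting() {
                GL11.glDisable(GL11.GL_CULL_FACE); // keep both faces visible
                // light back faces with the flipped normal
                GL11.glLightModeli(GL11.GL_LIGHT_MODEL_TWO_SIDE, GL11.GL_TRUE);
            }
        }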


  • HLSL - Creating Shadows in 2D

    - by richard
    The way that I create shadows is with the following technique: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/ But I have questions about the HLSL. The way I currently do it is: I have a black and white image, where black means 'object' and white means 'nothing'. I then distort the image like in the tutorial. I do this with a pixel shader, but instead of rendering to the screen, I render to a texture and read it back into my application. I then take this, create the shadows, and send it back to the graphics card to undo the distortion after the shadow has been added; this comes back and I have a stencil of shadow. I can put this on top of the original image and send them back to the graphics card, which then puts them on the screen. To me this is a lot of back and forth. Is there a way I can avoid this? The problem I am having is that I basically need to go through all positions in the texture 3 times, using the new texture each time instead of the original one. I tried to read up on passes, but I don't think I am heading in the right direction there. Help?


  • OpenGL: Filtering/antialiasing textures in a 2D game

    - by futlib
    I'm working on a 2D game using OpenGL 1.5 that uses rather large textures. I'm seeing aliasing effects and am wondering how to tackle those. I'm finding lots of material about antialiasing in 3D games, but I don't see how most of it applies to 2D games - e.g. anisotropic filtering seems to make no sense, and FSAA doesn't sound like the best bet either. I suppose this means texture filtering is my best option? Right now I'm using bilinear filtering, I think:

        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    From what I've read, I'd have to use mipmaps to get trilinear filtering, which would drive memory usage up, so I'd rather not. I know the final sizes of all the textures when they are loaded, so can't I somehow size them correctly at that point (using some form of texture filtering)?
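
    Two data points worth noting: a full mipmap chain costs only about one third extra memory, and since OpenGL 1.4 the driver can generate the chain automatically via the GL_GENERATE_MIPMAP texture parameter, which fits a 1.5 target. A sketch with LWJGL's Java bindings as an assumed binding (the parameter must be set before the image is uploaded with glTexImage2D):

        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL14;

        public class TexSetup {
            static void requestTrilinear(int textureId) {
                GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
                // ask the driver to build mipmaps for subsequent glTexImage2D uploads
                GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL14.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
                GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER,
                                     GL11.GL_LINEAR_MIPMAP_LINEAR);
                GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
            }
        }

    That said, if the textures can be pre-sized to a 1:1 texel-to-pixel mapping at load time, plain GL_LINEAR avoids the aliasing without mipmaps at all; it is minification that makes mipmaps matter.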


  • Generating geometry when using VBOs

    - by onedayitwillmake
    Currently I am working on a project in which I generate geometry based on the player's movement - a glorified, very long trail composed of quads. I am doing this by storing the vertices in a std::vector, removing the oldest vertices once enough exist, and then calling glDrawArrays. I am interested in switching to a shader-based model; in the examples I usually see, the VBO is generated at start and then that's basically it. What is the best route for creating geometry in real time, using a shader / VBO approach?
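
    The usual pattern for real-time geometry is a dynamic VBO: allocate a buffer once with GL_DYNAMIC_DRAW, then re-upload the trail's vertices each frame, orphaning the old storage so the driver never stalls on in-flight draws. A sketch using LWJGL's Java bindings as an assumed binding (sizes and names illustrative; the same GL15 calls apply from C++):

        import java.nio.FloatBuffer;
        import org.lwjgl.BufferUtils;
        import org.lwjgl.opengl.GL15;

        public class TrailVbo {
            static final int MAX_FLOATS = 4096; // capacity for the longest trail
            static int vbo;
            static FloatBuffer scratch = BufferUtils.createFloatBuffer(MAX_FLOATS);

            static void create() {
                vbo = GL15.glGenBuffers();
            }

            static void upload(float[] vertices, int count) {
                scratch.clear();
                scratch.put(vertices, 0, count).flip();
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
                // orphan the previous contents, then write the new ones
                GL15.glBufferData(GL15.GL_ARRAY_BUFFER, MAX_FLOATS * 4L, GL15.GL_DYNAMIC_DRAW);
                GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, scratch);
            }
        }

    The std::vector-backed trail bookkeeping on the CPU side pairs naturally with this; only the upload step changes.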

