Search Results

Search found 43935 results on 1758 pages for 'development process'.


  • Handling window resize with arbitrary aspect ratios

    - by DormoTheNord
    I'm currently making a 2D game using SFML. I want the aspect ratio to be maintained when the user resizes the window. I also want the game to work with any arbitrary aspect ratio (like any media player would). Here is the code I have so far: void os::GameEngine::setCameraViewport() { sf::FloatRect tempViewport; float viewAspectRatio = (float)aspectRatio.x / aspectRatio.y; float screenAspectRatio = (float)gameWindow.getSize().x / gameWindow.getSize().y; if (viewAspectRatio > screenAspectRatio) { // Viewport is wider than screen, fit on X } else if (viewAspectRatio < screenAspectRatio) { // Screen is wider than viewport, fit on Y } else // window aspect ratio matches view aspect ratio { tempViewport.height = 1; tempViewport.width = 1; tempViewport.left = 0; tempViewport.top = 0; } viewport = tempViewport; camera.setViewport(viewport); gameWindow.setView(camera); } The problem is I'm having trouble with the logic to determine the properties of the viewport.
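
    One way to fill in the two empty branches, sketched under the assumption that the viewport fields are fractions of the window (which is how sf::View::setViewport interprets them): letterbox when the view is proportionally wider than the window, pillarbox when it is narrower, and centre the bars on the unused axis.

      if (viewAspectRatio > screenAspectRatio)
      {
          // View is wider than the window: use the full width, letterbox top/bottom
          tempViewport.width  = 1.f;
          tempViewport.height = screenAspectRatio / viewAspectRatio;
          tempViewport.left   = 0.f;
          tempViewport.top    = (1.f - tempViewport.height) / 2.f;
      }
      else if (viewAspectRatio < screenAspectRatio)
      {
          // Window is wider than the view: use the full height, pillarbox left/right
          tempViewport.height = 1.f;
          tempViewport.width  = viewAspectRatio / screenAspectRatio;
          tempViewport.top    = 0.f;
          tempViewport.left   = (1.f - tempViewport.width) / 2.f;
      }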

    Read the article

  • Formula for three competing heroes, each has one they can beat and one they're beaten by

    - by Georgiadis Abraam
    I am trying to design a game for a project I have. The main idea is: 3 types of heroes, 3 stats per hero. There are no levels involved, so the differences must come from the stats. Fight logic: type1hero has good chances of beating type2hero, type2hero has good chances of beating type3hero, and type3hero has good chances of beating type1hero. For over a week I have been trying to find a stats-based formula that produces this, but I can't. I was playing with numbers yesterday and the results were decent, but I can't extract a formula out of them. Could you please guide me, or give me hints on how I should start creating formulas for a level-less game that fulfils this fight logic?
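
    One common way to get that cycle without levels is to keep the three stat blocks symmetric and express the type relationship as a fixed advantage multiplier applied on top of the stat comparison. A rough C++ sketch of the idea; the 1.25/0.8 bonuses and the particular win-chance formula are arbitrary illustrative choices, not a tuned design:

      // Hero types form a cycle: type 0 beats 1, 1 beats 2, 2 beats 0.
      struct Hero { int type; float attack, defense, speed; };

      // Multiplier > 1 if 'a' has the type advantage over 'b', < 1 if disadvantaged.
      float typeBonus(const Hero& a, const Hero& b)
      {
          if ((a.type + 1) % 3 == b.type) return 1.25f;  // a beats b
          if ((b.type + 1) % 3 == a.type) return 0.80f;  // b beats a
          return 1.0f;                                   // mirror match
      }

      // Win chance of 'a' against 'b': a stat score scaled by the type bonus,
      // normalised so the two chances sum to 1.
      float winChance(const Hero& a, const Hero& b)
      {
          float sa = (a.attack + a.speed) / (b.defense + 1.0f) * typeBonus(a, b);
          float sb = (b.attack + b.speed) / (a.defense + 1.0f) * typeBonus(b, a);
          return sa / (sa + sb);
      }

    With identical stat blocks this gives roughly a 60/40 split in favour of the advantaged type, which can then be tuned through the two multipliers.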

    Read the article

  • vector rotations for branches of a 3d tree

    - by freefallr
    I'm attempting to create a 3d tree procedurally. I'm hoping that someone can check my vector rotation maths, as I'm a bit confused. I'm using an l-system (a recursive algorithm for generating branches). The trunk of the tree is the root node. It's orientation is aligned to the y axis. In the next iteration of the tree (e.g. the first branches), I might create a branch that is oriented say by +10 degrees in the X axis and a similar amount in the Z axis, relative to the trunk. I know that I should keep a rotation matrix at each branch, so that it can be applied to child branches, along with any modifications to the child branch. My questions then: for the trunk, the rotation matrix - is that just the identity matrix * initial orientation vector ? for the first branch (and subsequent branches) - I'll "inherit" the rotation matrix of the parent branch, and apply x and z rotations to that also. e.g. using glm::normalize; using glm::rotateX; using glm::vec4; using glm::mat4; using glm::rotate; vec4 vYAxis = vec4(0.0f, 1.0f, 0.0f, 0.0f); vec4 vInitial = normalize( rotateX( vYAxis, 10.0f ) ); mat4 mRotation = mat4(1.0); // trunk rotation matrix = identity * initial orientation vector mRotation *= vInitial; // first branch = parent rotation matrix * this branches rotations mRotation *= rotate( 10.0f, 1.0f, 0.0f, 0.0f ); // x rotation mRotation *= rotate( 10.0f, 0.0f, 0.0f, 1.0f ); // z rotation Are my maths and approach correct, or am I completely wrong? Finally, I'm using the glm library with OpenGL / C++ for this. Is the order of x rotation and z rotation important?
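
    As a side note, mRotation *= vInitial will not compile (a mat4 cannot be multiplied in place by a vec4), so one way to restructure the idea is to keep only a rotation matrix per branch and derive each branch's direction from it when needed. A small sketch, assuming a recent GLM where glm::rotate takes radians:

      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>

      // Each branch stores the rotation that orients it; children inherit and extend it.
      glm::mat4 trunkRotation = glm::mat4(1.0f);        // trunk: identity, aligned to +Y

      // First branch: parent rotation followed by this branch's own X and Z tilts.
      glm::mat4 branchRotation = trunkRotation
          * glm::rotate(glm::mat4(1.0f), glm::radians(10.0f), glm::vec3(1, 0, 0))   // X tilt
          * glm::rotate(glm::mat4(1.0f), glm::radians(10.0f), glm::vec3(0, 0, 1));  // Z tilt

      // The branch's growth direction is the rotated Y axis.
      glm::vec4 branchDir = branchRotation * glm::vec4(0.0f, 1.0f, 0.0f, 0.0f);

    And yes, the order matters: 3D rotations do not commute, so applying the X tilt before the Z tilt generally gives a different direction than the reverse.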

    Read the article

  • Complexity of defense AI

    - by Fredrik Johansson
    I have a non-released game, and currently it's only possible to play with another human being. As the game rules are made up by me, I think it would be great if new players could learn basic game play by playing against an AI opponent. I mean it's not like Tennis, where the majority knows at least the fundamental rules. On the other hand, I'm a bit concerned that this AI implementation can be quite complex. I hope you can help me with an complexity estimation. I've tried to summarize the gameplay below. Is this defense AI very hard to do? Basic Defense Game Play Player Defender can move within his land, i.e. inside a random, non-convex, polygon. This land will also contain obstacles modeled as polygons, that Defender has to move around. Player Attacker has also a land, modeled as another such polygon. Assume that Defender shall defend against Attacker. Attacker will then throw a thingy towards Defender's land. To be rewarded, Attacker wants to hit Defender's land, and Defender will want to strike away the thingy from his land before it stops to prevent Attacker from scoring. To feint Defender, Attacker might run around within his land before the throw, and based on these attacker movements Defender shall then continuously move to the best defense position within his land.

    Read the article

  • Custom extensible file format for 2d tiled maps

    - by Christian Ivicevic
    I have implemented much of my game logic by now, but I still create my maps with nasty for-loops on the fly just to have something to work with. Now I want to move on and do some research on how to (un)serialize this data. (I am not looking for a map editor - I am speaking of the map file itself.) For now I am looking for suggestions and resources on how to implement a custom file format for my maps, which should provide the following functionality (based on the MoSCoW method): Must have: extensibility and backward compatibility; handling of different layers; metadata on whether a tile is solid or can be passed through; special serialization of entities/triggers with associated properties/metadata. Could have: some kind of inclusion of the tileset to prevent having scattered files/tilesets. I am developing with C++ (using SDL) and targeting only Windows. Any useful help, tips, suggestions, ... would be appreciated!
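
    For the extensibility and backward-compatibility requirement, one pattern that has worked well elsewhere (PNG, IFF, RIFF) is a chunked container: a small header followed by typed, length-prefixed chunks, so an old reader can skip chunk types it does not understand and a new writer can add data without breaking anything. A rough sketch of what the on-disk layout could look like; the names, field sizes and chunk tags are illustrative, not a finished spec:

      #include <cstdint>

      #pragma pack(push, 1)
      // File header: magic, format version, and the number of chunks that follow.
      struct MapFileHeader {
          char     magic[4];     // e.g. "MAP1"
          uint16_t version;      // bump on breaking changes; readers skip unknown chunks
          uint16_t chunkCount;
      };

      // Every chunk is self-describing, so unknown chunk types can be skipped.
      struct ChunkHeader {
          uint32_t type;         // e.g. 'LAYR', 'ENTS', 'TSET'
          uint32_t byteSize;     // size of the payload that follows
      };

      // Payload of a 'LAYR' chunk: one tile layer plus per-tile collision flags.
      struct LayerChunk {
          uint16_t width, height;
          uint8_t  layerIndex;
          // followed by width*height uint16_t tile ids,
          // then width*height uint8_t flags (bit 0 = solid)
      };
      #pragma pack(pop)

    Layers, entity/trigger lists and an embedded tileset then each become their own chunk type.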

    Read the article

  • Would someone please explain Octree Collisions to me?

    - by A-Type
    I've been reading everything I can find on the subject and I feel like the pieces are just about to fall into place, but I just can't quite get it. I'm making a space game, where collisions will occur between planets, ships, asteroids, and the sun. Each of these objects can be subdivided into 'chunks', which I have implemented to speed up rendering (the vertices can and will change often at runtime, so I've separated the buffers). These subdivisions also have bounding primitives to test for collision. All of these objects are made of blocks (yeah, it's that kind of game). Blocks can also be tested for rough collisions, though they do not have individual bounding primitives for memory reasons. I think the rough testing seems to be sufficient, though. So, collision needs to be fairly precise; at block resolution. Some functions rely on two blocks colliding. And, of course, attacking specific blocks is important. Now what I am struggling with is filtering my collision pairs. As I said, I've read a lot about Octrees, but I'm having trouble applying it to my situation as many tutorials are vague with very little code. My main issues are: Are Octrees recalculated each frame, or are they stored in memory and objects are shuffled into different divisions as they move? Despite all my reading I still am not clear on this... the vagueness of it all has been frustrating. How far do Octrees subdivide? Planets in my game are quite large, while asteroids are smaller. Do I subdivide to the size of the planet, or asteroid (where planet is in multiple divisions)? Or is the limit something else entirely, like number of elements in the division? Should I load objects into the octrees as 'chunks' or in the whole, then break into chunks later? This could be specific to my implementation, I suppose. I was going to ask about how big my root needed to be, but I did manage to find this question, and the second answer seems sufficient for me. I'm afraid I don't really get what he means by adding new nodes and doing subdivisions upon adding new objects, probably because I'm confused about whether the tree is maintained in memory or recalculated per-frame.
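
    To the first two questions, the usual answers are: the tree is kept in memory and updated incrementally (an object that moves out of its node is removed and re-inserted), not rebuilt every frame, and subdivision is driven by an object-count threshold plus a maximum depth rather than by the size of any one object, so a planet simply straddles or owns more nodes than an asteroid. A minimal C++ sketch of that idea, inserting chunks as the objects and using arbitrary thresholds:

      #include <array>
      #include <memory>
      #include <vector>

      struct AABB {
          float min[3], max[3];
          bool contains(const AABB& o) const {
              for (int i = 0; i < 3; ++i)
                  if (o.min[i] < min[i] || o.max[i] > max[i]) return false;
              return true;
          }
      };

      struct Object { AABB bounds; /* pointer back to the chunk, etc. */ };

      // One octree node: objects that straddle several children stay in the parent.
      struct OctreeNode {
          AABB bounds;
          std::vector<Object*> objects;
          std::array<std::unique_ptr<OctreeNode>, 8> children;  // empty until subdivided

          static constexpr std::size_t kMaxObjects = 8;  // split threshold (tunable)
          static constexpr int         kMaxDepth   = 8;  // hard cap on subdivision

          void insert(Object* obj, int depth = 0) {
              if (!children[0] && objects.size() >= kMaxObjects && depth < kMaxDepth)
                  subdivide();
              if (children[0]) {
                  for (auto& child : children) {
                      if (child->bounds.contains(obj->bounds)) {  // fits fully in one child
                          child->insert(obj, depth + 1);
                          return;
                      }
                  }
              }
              objects.push_back(obj);  // leaf, or the object straddles children: keep it here
          }

          void subdivide() {
              // Create the eight octants; in this sketch already-stored objects stay put,
              // a fuller version would re-insert them so they can sink into the children.
              float cx = (bounds.min[0] + bounds.max[0]) * 0.5f;
              float cy = (bounds.min[1] + bounds.max[1]) * 0.5f;
              float cz = (bounds.min[2] + bounds.max[2]) * 0.5f;
              for (int i = 0; i < 8; ++i) {
                  AABB b = bounds;
                  (i & 1 ? b.min[0] : b.max[0]) = cx;  // pick which half of each axis
                  (i & 2 ? b.min[1] : b.max[1]) = cy;
                  (i & 4 ? b.min[2] : b.max[2]) = cz;
                  children[i] = std::make_unique<OctreeNode>();
                  children[i]->bounds = b;
              }
          }
      };

    Moving objects are handled by removing them from their node and re-inserting them when their bounds leave it, and collision pairs are then only generated between objects that share a node or an ancestor/descendant of it.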

    Read the article

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D Quad to the mouse cursor position I'm able : 1.) To get values into posX, posY, posZ 2.) Translate with the values from those 3 variables But the quad positioning I'm not able to do correctly in such a way that the 2D Quad is near the mouse cursor using those values from those 3 variables eg."posX, posY, posZ" I need the mouse cursor in the center of the 2D Quad. I'm hoping someone can help me achieve this. I've tried searching around with no avail. Heres the function that is ment to do the snapping but instead creates weird flicker or shows nothing at all only the 3d models show up : void display() { glClearColor(0.0,0.0,0.0,1.0); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); for(std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I) { glCallList(*I); } if(DrawArea == true) { glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); cerr << winZ << endl; glGetDoublev(GL_MODELVIEW_MATRIX, modelview); glGetDoublev(GL_PROJECTION_MATRIX, projection); glGetIntegerv(GL_VIEWPORT, viewport); gluUnProject(winX, winY, winZ , modelview, projection, viewport, &posX, &posY, & posZ); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glTranslatef(posX , posY, posZ); glBegin(GL_QUADS); glTexCoord2f (0.0, 0.0); glVertex3f(0.5, 0.5, 0); glTexCoord2f (1.0, 0.0); glVertex3f(0, 0.5, 0); glTexCoord2f (1.0, 1.0); glVertex3f(0, 0, 0); glTexCoord2f (0.0, 1.0); glVertex3f(0.5, 0, 0); glEnd(); } SwapBuffers(hDC); } I'm using : OpenGL 3.0 WIN32 API C++ GLSL if you really want the full source here it is - http://pastebin.com/1Ncm9HNf , Its pretty messy.
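
    Two things stand out in that function: nothing saves and restores the modelview matrix, so the glTranslatef calls accumulate from frame to frame (one plausible cause of the flicker/drift), and the quad's vertices run from (0,0) to (0.5,0.5), so a corner rather than the centre ends up at the unprojected point. A sketch of the drawing part only, under those assumptions:

      // Save the matrix, move to the unprojected point, and draw the quad centred
      // on it (halfW/halfH are half the quad's world-space size).
      const float halfW = 0.25f, halfH = 0.25f;

      glPushMatrix();                      // so the translation doesn't accumulate
      glTranslatef(posX, posY, posZ);

      glBegin(GL_QUADS);
          glTexCoord2f(0.0f, 0.0f); glVertex3f(-halfW,  halfH, 0.0f);
          glTexCoord2f(1.0f, 0.0f); glVertex3f( halfW,  halfH, 0.0f);
          glTexCoord2f(1.0f, 1.0f); glVertex3f( halfW, -halfH, 0.0f);
          glTexCoord2f(0.0f, 1.0f); glVertex3f(-halfW, -halfH, 0.0f);
      glEnd();

      glPopMatrix();

    It is also worth checking that winY is flipped (viewport height minus the mouse y) before the glReadPixels/gluUnProject calls, since window mouse coordinates usually run top-down while OpenGL's run bottom-up.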

    Read the article

  • Save programmatically created Mesh to .X Files using SlimDX throw null exception

    - by zionpi
    The mesh has been created properly using SlimDX, but when I use the following line: Mesh.ToXFile(barMesh, "foo.x", XFileFormat.Text, CharSet.Unicode); it throws a NullReferenceException. Through the debugger window I can see that barMesh is not null; inside the mesh structure, SkinInfo is null. If SkinInfo is the problem, then how can I initialize it properly? The internet doesn't seem to have much information on this.

    Read the article

  • Generating grammatically correct MUD-style attack descriptions

    - by Extrakun
    I am currently working on a text-based game where the outcome of a combat round goes something like this: %attacker% inflicts a serious wound (12 points damage) on %defender%. Right now, I just swap %attacker% with the name of the attacker, and %defender% with the name of the defender. The description works, but it doesn't read correctly. Since the game is all text, I don't want to resort to generic descriptions (such as "You use Attack on Goblin for 5 damage", which arguably solves the problem). How do I generate correct descriptions for cases where: %attacker% refers to "You", the player? "You inflicts..." is wrong. %attacker% is "Bees" or some other plural? %attacker% is a generic noun such as "Goblin"? I somehow need to know that I should prefix the name with "The ", since "Goblin inflicts..." reads weird compared to a proper name - compare "Goblin inflicts..." vs. "Aldraic Swordbringer inflicts...". How do text-based games usually resolve such issues?
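
    A common approach is to attach a little grammatical metadata to each combatant (player or not, proper name or not, plural or not) and build both the subject and the verb from that, instead of patching the final string. A small C++ sketch; the fields and the rules are illustrative:

      #include <string>

      struct Combatant {
          std::string name;    // "Goblin", "Bees", "Aldraic Swordbringer"
          bool isPlayer;       // rendered as "You", takes plural-style verbs
          bool isProperName;   // proper names take no article
          bool isPlural;       // "Bees" -> plural verb, still takes "The"
      };

      // Subject as it should appear at the start of a sentence.
      std::string subject(const Combatant& c)
      {
          if (c.isPlayer)     return "You";
          if (c.isProperName) return c.name;
          return "The " + c.name;              // "The Goblin", "The Bees"
      }

      // Third-person singular verbs get an "s"; "You" and plurals do not.
      std::string verb(const Combatant& c, const std::string& base)
      {
          return (c.isPlayer || c.isPlural) ? base : base + "s";
      }

      // e.g. subject(att) + " " + verb(att, "inflict") + " a serious wound (12 points damage) on " + ...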

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is: template <typename valType> GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective ( valType const & fovy, valType const & aspect, valType const & zNear, valType const & zFar ) { valType range = tan(radians(fovy / valType(2))) * zNear; valType left = -range * aspect; valType right = range * aspect; valType bottom = -range; valType top = range; detail::tmat4x4<valType> Result(valType(0)); Result[0][0] = (valType(2) * zNear) / (right - left); Result[1][1] = (valType(2) * zNear) / (top - bottom); Result[2][2] = - (zFar + zNear) / (zFar - zNear); Result[2][3] = - valType(1); Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear); return Result; } There don't seem to be any errors in that code. So I tried to invert the projection: dividing z' by w' gives the post-projective depth (which lies in the depth buffer), so I need to solve for z. Now, the problem is I don't get the correct position (I have compared the reconstructed one with a rendered position). I then tried the formula I get by doing the same for a different projection matrix, and for some reason that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?
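
    For reference, a reconstruction of the missing formulas, assuming the matrix above (n = zNear, f = zFar, z_e = eye-space z) and that d is the post-projective depth in [-1, 1]; a raw depth-buffer value in [0, 1] needs the remap d = 2*depth - 1 first, and forgetting that remap is a common reason one variant of the inversion "works" while another does not:

      z' = -\frac{f+n}{f-n}\, z_e - \frac{2fn}{f-n}, \qquad w' = -z_e

      d = \frac{z'}{w'} = \frac{f+n}{f-n} + \frac{2fn}{(f-n)\, z_e}
      \quad\Longrightarrow\quad
      z_e = \frac{2fn}{(f-n)\, d - (f+n)}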

    Read the article

  • XNA Skinned Model - Keyframe.Bone out of range exception

    - by idlackage
    I'm getting an IndexOutOfRangeException on this line of AnimationPlayer.cs: boneTransforms[keyframe.Bone] = keyframe.Transform; I don't get what it's really referring to. The error happens when keyframe.Bone is 14, but I have no idea what that's supposed to mean. The 14th bone of my model? What would that even be? I read this thread, but nothing there seemed to work. I don't have many bones, stray edges/verts, unassigned verts, unparented/non-root bones, or bones with dots in the name. What else can I be missing? Thank you for any help!

    Read the article

  • Using a permutation table for simplex noise without storing it

    - by J. C. Leitão
    Generating Simplex noise requires a permutation table for randomisation (e.g. see this question or this example). In some applications, we need to persist the state of the permutation table. This can be done by creating the table, e.g. using def permutation_table(seed): table_size = 2**10 # arbitrary for this question l = range(1, table_size + 1) random.seed(seed) # ensures the same shuffle for a given seed random.shuffle(l) return l + l # see shared link why l + l; is a detail and storing it. Can we avoid storing the full table by generating the required elements every time they are required? Specifically, currently I store the table and call it using table[i] (table is a list). Can I avoid storing it by having a function that computes the element i, e.g. get_table_element(seed, i). I'm aware that cryptography already solved this problem using block cyphers, however, I found it too complex to go deep and implement a block cypher. Does anyone knows a simple implementation of a block cypher to this problem?
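
    The snippet in the question is Python, but for concreteness here is the shape of the usual answer as a C++ sketch: a tiny Feistel network over the index bits is a bijection on [0, 1024) no matter what the round function does, so element i of a seeded permutation can be computed on demand without ever materialising the table. The mixing constants and the round count are arbitrary:

      #include <cstdint>

      // Round function: any seeded integer hash works; keep only 5 bits (one half).
      uint32_t roundFn(uint32_t half, uint32_t seed, uint32_t round)
      {
          uint32_t h = half * 2654435761u ^ seed ^ (round * 0x9E3779B9u);
          h ^= h >> 13;
          h *= 0x85EBCA6Bu;
          h ^= h >> 16;
          return h & 0x1F;
      }

      // Element i of a pseudo-random permutation of [0, 1024) for the given seed.
      uint32_t permute(uint32_t i, uint32_t seed)
      {
          uint32_t left = (i >> 5) & 0x1F, right = i & 0x1F;
          for (uint32_t r = 0; r < 4; ++r) {     // Feistel rounds: always invertible
              uint32_t next = left ^ roundFn(right, seed, r);
              left  = right;
              right = next;
          }
          return (left << 5) | right;
      }

    permute(i, seed) then plays the role of table[i]: the same seed always yields the same permutation, and nothing is stored.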

    Read the article

  • New to CG shader programming, what program should I use to write and test them?

    - by Notbad
    I have started writing some shaders. The first ones were fairly easy to write in Notepad, but now I need something with a bit more meat. I have looked at RenderMonkey, which seems to support Cg, but it is really old and I don't know if it is a good option. On the other hand there is FX Composer 2.0, but it seems like something that could really distract me from learning shaders because it looks like a pretty deep program. Are there any other possibilities? There's a really nice alternative for writing shaders named ShaderToy, but it only supports GLSL. Any information would be really welcome. Thanks in advance.

    Read the article

  • SDL2 with OpenGL -- weird results, what's wrong?

    - by ber4444
    I'm porting an app to iOS and therefore need to upgrade it from SDL 1.2 to SDL2 (so far I'm testing it as an OS X desktop app only). However, when running the code with SDL2, I'm getting weird results, as shown in the second image below (the first image is how it looks with SDL 1.2, correctly). The single changeset that causes this is this one; do you see something obviously wrong there, or does SDL2 have some OpenGL nuances I'm unaware of? My SDL is based on changeset dd7e57847ea9 from HG (since then there is one "Allow specifying of OpenGL 3.2 Core Profile on Mac OS X" commit, not sure if that would help).

    Read the article

  • Checking whether a specific key was pressed in enchantJS

    - by MxyL
    I am using enchantJS and would like to use the letters and numbers as well as numpad on a keyboard to do different things (eg: hotkeys). From this page http://users.csc.calpoly.edu/~foaad/enchant/guide/playerInput.html By default, enchant.js provides input listeners for six buttons: UP, DOWN, LEFT, RIGHT, A, and B. By default, the directions are bound to the arrow keys. Any of the six buttons may also be bound to any key with an ASCII value. We’ll address that later. So enchant provides the ability to bind keys to different input such as up, down, left, right...but how can I simply check whether the D or X key was pressed, and if so, perform certain actions based on that event?

    Read the article

  • how do I set quad buffering with jogl 2.0

    - by tony danza
    I'm trying to create a 3d renderer for stereo vision with quad buffering with Processing/Java. The hardware I'm using is ready for this so that's not the problem. I had a stereo.jar library in jogl 1.0 working for Processing 1.5, but now I have to use Processing 2.0 and jogl 2.0 therefore I have to adapt the library. Some things are changed in the source code of Jogl and Processing and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code: public class Theatre extends PGraphicsOpenGL{ protected void allocate() { if (context == null) { // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them GLCapabilities capabilities = new GLCapabilities(); // Starting in release 0158, OpenGL smoothing is always enabled if (!hints[DISABLE_OPENGL_2X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(2); } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); } capabilities.setStereo(true); // get a rendering surface and a context for this canvas GLDrawableFactory factory = GLDrawableFactory.getFactory(); drawable = factory.getGLDrawable(parent, capabilities, null); context = drawable.createContext(null); // need to get proper opengl context since will be needed below gl = context.getGL(); // Flag defaults to be reset on the next trip into beginDraw(). settingsInited = false; } else { // The following three lines are a fix for Bug #1176 // http://dev.processing.org/bugs/show_bug.cgi?id=1176 context.destroy(); context = drawable.createContext(null); gl = context.getGL(); reapplySettings(); } } } This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying: PGraphicsOpenGL pg = ((PGraphicsOpenGL)g); pgl = pg.beginPGL(); gl = pgl.gl; glu = pg.pgl.glu; gl2 = pgl.gl.getGL2(); GLProfile profile = GLProfile.get(GLProfile.GL2); GLCapabilities capabilities = new GLCapabilities(profile); capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); capabilities.setStereo(true); GLDrawableFactory factory = GLDrawableFactory.getFactory(profile); If I go on, I should do something like this: drawable = factory.getGLDrawable(parent, capabilities, null); but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try this: gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work :/ Thanks.

    Read the article

  • Inverse projection: question about w coordinate

    - by fayeWilly
    I have to perform, in a shader, an inverse projection from the u/v of a render target. What I do is: get NDC as 2*(u,v,depth) - 1, then world space as tmp = (P*V)^-1 * (NDC, 1.0); world space = tmp/tmp.w. This apparently works, but I am confused about the w division there. Why does this work? Shouldn't there be a multiplication by w somewhere (since in the "forward" pipeline there is a perspective division)? Thank you, Faye
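
    The division is the perspective divide, just showing up on the other side: (NDC, 1) differs from the true clip-space point only by the unknown factor 1/w_clip, a matrix multiplication preserves that scale factor, and dividing by tmp.w removes it at the end. Roughly:

      p_{clip} = PV\,(p_{world},\,1), \qquad (NDC,\,1) = p_{clip} / w_{clip}

      tmp = (PV)^{-1}(NDC,\,1) = (p_{world},\,1) / w_{clip}
      \quad\Longrightarrow\quad
      tmp.w = 1/w_{clip}, \qquad p_{world} = tmp.xyz / tmp.w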

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader: in vec3 position; uniform mat4 lightWVP; void main() { gl_Position = lightWVP * vec4(position, 1.0); } Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cube map texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand there is no other work to do in the fragment shader other than writing this value. Is this correct?

    Read the article

  • MonoGame not all letters being drawn with DrawString

    - by Lex Webb
    I'm currently making a dynamic user interface for my game and are setting up having text on my buttons. I'm having an odd issue where, when i use a specific piece of code to determine the text position, it will not render all of the text passed to DrawString. Even weirder, is if i insert another DrawString after this, drawing more text at a different place, different parts of the text will be drawn. The code for drawing my button with the text attached is: public override void Draw(SpriteBatch sb, GameTime gt) { sb.Draw(currentImage, GetRelativeRectangle(), Color.White); sb.DrawString(font, text, new Vector2(this.GetRelativeDrawOffset().X + this.Width / 2 - font.MeasureString(text).X / 2, this.GetRelativeDrawOffset().Y + this.Height / 2 - font.MeasureString(text).Y / 2), textColor); } The methods in the creation of the Vector2 simply get the draw position of the button. I'm then doing some calculation to center the text. This produces this when the text is set to 'Test': And when i enter this piece of code below the first DrawString: sb.DrawString(font, "test", new Vector2(500, 50), Color.Pink); I should mention that that grey square is being drawn in the same spritebatch, before the button and the text. Any ideas as to what could be causing this? I have a feeling it may be due to draw order, but i have no idea how to control that.

    Read the article

  • Particle System in XNA - cannot draw particle

    - by Dave Voyles
    I'm trying to implement a simple particle system in my XNA project. I'm going by RB Whitaker's tutorial, and it seems simple enough. I'm trying to draw particles within my menu screen. Below I've included the code which I think is applicable. I'm coming up with one error in my build, and it is stating that I need to create a new instance of the EmitterLocation from the particleEngine. When I hover over particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y); it states that particleEngine is returning a null value. What could be causing this? /// <summary> /// Base class for screens that contain a menu of options. The user can /// move up and down to select an entry, or cancel to back out of the screen. /// </summary> abstract class MenuScreen : GameScreen ParticleEngine particleEngine; public void LoadContent(ContentManager content) { if (content == null) { content = new ContentManager(ScreenManager.Game.Services, "Content"); } base.LoadContent(); List<Texture2D> textures = new List<Texture2D>(); textures.Add(content.Load<Texture2D>(@"gfx/circle")); textures.Add(content.Load<Texture2D>(@"gfx/star")); textures.Add(content.Load<Texture2D>(@"gfx/diamond")); particleEngine = new ParticleEngine(textures, new Vector2(400, 240)); } public override void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen) { base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen); // Update each nested MenuEntry object. for (int i = 0; i < menuEntries.Count; i++) { bool isSelected = IsActive && (i == selectedEntry); menuEntries[i].Update(this, isSelected, gameTime); } particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y); particleEngine.Update(); } public override void Draw(GameTime gameTime) { // make sure our entries are in the right place before we draw them UpdateMenuEntryLocations(); GraphicsDevice graphics = ScreenManager.GraphicsDevice; SpriteBatch spriteBatch = ScreenManager.SpriteBatch; SpriteFont font = ScreenManager.Font; spriteBatch.Begin(); // Draw stuff logic spriteBatch.End(); particleEngine.Draw(spriteBatch); }

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • Writing to a structured buffer with a compute shader (D3D11)

    - by Vertexwahn
    I have some problems writing to a structured buffer. First I create a structured buffer that is filled with float values from 0 to 99. Afterwards a copy of the structured buffer is made into a CPU-accessible buffer in order to print the contents of the structured buffer to the console. The output is as expected (the numbers 0 to 99 appear on the console). Then I use a compute shader that should change the contents of the structured buffer: RWStructuredBuffer<float> Result : register( u0 ); [numthreads(1, 1, 1)] void CS_main( uint3 GroupId : SV_GroupID ) { Result[GroupId.x] = GroupId.x * 10; } But the compute shader does not change the contents of the structured buffer. The source code can be found here (main.cpp): https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/main.cpp?at=default FillCS.hlsl: https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/FillCS.hlsl?at=default
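
    Without having run the linked code, the usual suspects when a compute shader "writes nothing" are the buffer not being created with D3D11_BIND_UNORDERED_ACCESS, the UAV never being bound to u0 before the Dispatch, or the dispatch grid not covering all 100 elements. A sketch of the dispatch path this shader relies on, with hypothetical variable names:

      // Assumes the structured buffer was created with D3D11_BIND_UNORDERED_ACCESS
      // and that bufferUAV is a UAV created for it.
      context->CSSetShader(fillCS, nullptr, 0);
      context->CSSetUnorderedAccessViews(0, 1, &bufferUAV, nullptr);  // matches register(u0)
      context->Dispatch(100, 1, 1);   // numthreads(1,1,1): one group per element

      // Unbind the UAV before the buffer is copied to the staging buffer and read back.
      ID3D11UnorderedAccessView* nullUAV = nullptr;
      context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);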

    Read the article

  • Basic procedural generated content works, but how could I do the same in reverse?

    - by andrew
    My 2D world is made up of blocks. At the moment, I create a block and assign it a number between 1 and 4. The number assigned to the nth block is always the same (i.e if the player walks backwards or restarts the game.) and is generated in the function below. As shown here by this animation, the colours represent the number. function generate_data(n) math.randomseed(n) -- resets the random so that the 'random' number for n is always the same math.random() -- fixes lua random bug local no = math.random(4) --print(no, n) return no end Now I want to limit the next block's number - a block of 1 will always have a block 2 after it, while block 2 will either have a block 1,2 or 3 after it, etc. Before, all the blocks data was randomly generated, initially, and then saved. This data was then loaded and used instead of being randomly called. While working this way, I could specify what the next block would be easily and it would be saved for consistency. I have now removed this saving/loading in favour of procedural generation as I realised that save whiles would get very big after travelling. Back to the present. While travelling forward (to the right), it is easy to limit what the next blocks number will be. I can generate it at the same time as the other data. The problem is when travelling backwards (to the left) I can not think of a way to load the previous block so that it is always the same. Does anyone have any ideas on how I could sort this out?

    Read the article

  • Adding tolerance to a point in polygon test

    - by David Gouveia
    I've been using this method which was taken from Game Coding Complete to detect whether a point is inside of a polygon. It works in almost every case, but is failing on a few edge cases, and I can't figure out the reason. For example, given a polygon with vertices at (0,0) (0,100) and (100,100), the algorithm is returning: True for any point strictly inside the polygon False for any of the vertices False for (0, 50) which lies on one of the edges of the polygon True (?) for (50,50) which is also on one of the edges of the polygon I'd actually like to relax the algorithm so that it returns true in all of these cases. In other words, it should return true for points that are strictly inside, for the vertices themselves, and for points on the edges of the polygon. If possible I'd also like to give it enough tolerance so that it always tend towards "true" in face of floating point fluctuations. For example, I have another method, that given a line segment and a point, returns the closest location on the line segment to the given point. Currently, given any point outside the polygon and one of its edges, there are cases where the result is categorized as being inside by the method above, while other points are considered outside. I'd like to give it enough tolerance so that it always returns true in this situation. The way I've currently solved the problem is an hack, which consists of using an external library to inflate the polygon by a few pixels, and performing the tests on the inflated polygon, but I'd really like to replace this with a proper solution.
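
    One way to relax the test without touching the original algorithm is to define "inside" as "strictly inside, or within some epsilon of an edge or vertex": run the strict test first, then fall back to a point-to-segment distance check against every edge. A C++ sketch, with the standard even-odd crossing test standing in for the book's method:

      #include <vector>

      struct Vec2 { float x, y; };

      // Standard even-odd (crossing number) test; stands in for the existing method.
      bool strictlyInside(const std::vector<Vec2>& poly, Vec2 p)
      {
          bool inside = false;
          for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
              if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
                  p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) / (poly[j].y - poly[i].y) + poly[i].x)
                  inside = !inside;
          }
          return inside;
      }

      // Squared distance from point p to segment ab.
      float distSqToSegment(Vec2 p, Vec2 a, Vec2 b)
      {
          float abx = b.x - a.x, aby = b.y - a.y;
          float apx = p.x - a.x, apy = p.y - a.y;
          float lenSq = abx * abx + aby * aby;
          float t = lenSq > 0.0f ? (apx * abx + apy * aby) / lenSq : 0.0f;
          t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);   // clamp onto the segment
          float dx = p.x - (a.x + t * abx), dy = p.y - (a.y + t * aby);
          return dx * dx + dy * dy;
      }

      // True if p is inside the closed polygon or within epsilon of its boundary.
      bool containsWithTolerance(const std::vector<Vec2>& poly, Vec2 p, float epsilon)
      {
          if (strictlyInside(poly, p))
              return true;
          float epsSq = epsilon * epsilon;
          for (std::size_t i = 0; i < poly.size(); ++i) {
              if (distSqToSegment(p, poly[i], poly[(i + 1) % poly.size()]) <= epsSq)
                  return true;               // on or near an edge or a vertex
          }
          return false;
      }

    Vertices and points exactly on an edge then always report true, and epsilon provides the slack needed to absorb floating-point fluctuations from the closest-point-on-segment routine.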

    Read the article

  • Moving from XNA/C# to DirectX/C++ quite confused

    - by misiMe
    I made some games with XNA/C# for Windows Phone and Windows 8. Since XNA is dead and Visual Studio doesn't support it (I have to target Windows Phone 7.1 to build with XNA), I want to start learning something more "consistent in time" and improve my skills. I'm a little confused about the possibilities, because C++/DirectX alone seems difficult, so I found some high-level libraries to help: DirectX Toolkit and Cocos2D. My questions are: What will happen when they "die" like XNA? Is the C++ approach more "professional" than C#/XNA, and why? Is the C++ approach more "portable"? Is the C++ approach more resistant in terms of time? Are there any considerations about DirectX TK and Cocos2D in terms of performance? I ask because I have found that every game software house in my country looks for skilled C++ programmers.

    Read the article
