Search Results

Search found 25496 results on 1020 pages for 'development fabric'.


  • Best practice for designing a Risk-style board game

    - by jyanks
    I'm just trying to figure out how to set up the code for a game like Risk. I would like it to be extensible, so that I can have multiple maps (e.g. World, North America, Eurasia, Africa); hardcoding the map doesn't seem to make a whole lot of sense. I'm a bit confused about how and where items should be stored and accessed. Here are the objects I see the game theoretically using:
    - Countries/Territories
    - Cities (can be contained within territories)
    - Capitals
    - Connections
    - Continents
    - Map
    - Troops
    At the moment, I feel like:
    - A map should have a list of continents and countries. The continents would be more of a 'logical' thing, where each continent is just a list of countries that is checked for bonuses at the start of a turn.
    - Each country should have a list of the countries it is connected to, to represent the connections.
    What I can't figure out is: where do I store the troops? Do I have an object for every single troop, or do I just store the number of troops on a country object as an integer? What about capitals and cities? Do those just hold a reference to the country they reside in? Is there anything I'm not seeing here that's going to screw me over in the long run with the way I'm thinking about things now? Any advice would be appreciated.
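
    One way to sketch this (illustrative Java; the class and field names are my own assumptions, not from the question) is to keep troops as a simple integer count on each territory and let the map own everything, so different maps are just different data:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // A territory stores its own troop count as an int and the names of its neighbours.
    class Territory {
        final String name;
        final List<String> connections = new ArrayList<>(); // names of adjacent territories
        String owner;          // player id, null if unowned
        int troops;            // troop count as a plain integer
        boolean hasCity;
        boolean isCapital;

        Territory(String name) { this.name = name; }
    }

    // A continent is purely a logical grouping checked for bonuses at the start of a turn.
    class Continent {
        final String name;
        final int bonus;
        final List<String> territoryNames = new ArrayList<>();

        Continent(String name, int bonus) { this.name = name; this.bonus = bonus; }
    }

    // The map owns all territories and continents.
    class GameMap {
        final Map<String, Territory> territories = new HashMap<>();
        final List<Continent> continents = new ArrayList<>();

        // Bonus for a player who owns every territory of a continent.
        int continentBonusFor(String player) {
            int total = 0;
            for (Continent c : continents) {
                boolean ownsAll = c.territoryNames.stream()
                        .allMatch(n -> player.equals(territories.get(n).owner));
                if (ownsAll) total += c.bonus;
            }
            return total;
        }
    }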

    Read the article

  • Doing a passable 4X game AI

    - by Extrakun
    I am coding a rather "simple" 4X game (if a 4X game can be simple). It's indie in scope, and I am wondering if there's any way to come up with a passable AI without spending months coding it. The game has three major decision-making portions: spending of production points, spending of movement points and spending of tech points (basically there are 3 different 'currencies'; currency unspent at the end of a turn is not saved).
    Spend production points:
    - Upgrade a planet (increase its tech and production)
    - Build ships (3 types)
    Move ships from planet to planet (costing movement points):
    - Move to attack
    - Move to fortify
    Research tech:
    - Can partially research a tech (as in Master of Orion)
    The plan for me right now is a brute-force approach. There are basically 4 broad options for the player:
    - Upgrade planet(s) to boost their production and tech output
    - Conquer as many planets as possible
    - Secure as many planets as possible
    - Get to a certain tech as soon as possible
    For each decision, I will iterate through the possible options and come up with a score; the AI will then choose the decision with the highest score (a minimal sketch of this scoring idea follows below). Right now I have no idea how to 'mix' decisions - that is, for example, when the AI wishes to upgrade and conquer planets at the same time. I suppose I can have another piece of logic which does a brute-force optimization over a combination of those 4 decisions... at least, that's my plan if I can't think of anything better. Is there any faster way to make a passable AI? I don't need a very good one, to rival Deep Blue or such, just something that has the illusion of intelligence. This is my first time doing an AI on this scale, so I dare not try something too grand. So far I have experience with FSMs, DFS, BFS and A*.
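
    As a rough illustration of the score-and-pick approach described above (Java; all names and the greedy loop are invented for the example), each candidate action exposes a score and the AI simply executes the best one for a given currency. Tuning the score functions per AI personality is a cheap way to get variety without a deeper search:

    import java.util.Comparator;
    import java.util.List;

    // A candidate decision the AI can take with one of its 'currencies'.
    interface Candidate {
        double score();   // heuristic utility of taking this action now
        void execute();   // apply the action to the game state
    }

    class SimpleAi {
        // Pick the highest-scoring candidate and execute it.
        // Calling this once per currency (production, movement, tech) gives a
        // passable, if greedy, turn; calling it repeatedly until the currency is
        // spent lets the AI "mix" upgrades, builds and attacks within one turn.
        void spend(List<Candidate> candidates) {
            candidates.stream()
                    .max(Comparator.comparingDouble(Candidate::score))
                    .ifPresent(Candidate::execute);
        }
    }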

    Read the article

  • Flood fill algorithm for Go

    - by user1048606
    The flood fill algorithm is used by the bucket tool in MS Paint and Photoshop, but it can also be used for Go and Minesweeper. http://en.wikipedia.org/wiki/Flood_fill In Go you can capture groups of stones; this website portrays it with two stones: http://www.connectedglobe.com/mindy/cap6.html This is my flood fill method in Java. It is not capturing a group of stones and I have no idea why, because to me it makes sense.
    public void floodfill(int turn, int col, int row){
        for(int a = col; a<19; a++){
            for(int b = row; b<19; b++){
                if(turn == black){
                    if(stones[col][row] == white){
                        stones[col][row] = 0;
                        floodfill(black, col-1, row);
                        floodfill(black, col+1, row);
                        floodfill(black, col, row-1);
                        floodfill(black, col, row+1);
                    }
                }
            }
        }
    }
    It searches up, down, left and right for all the stones on the board. If the stones are white it captures them by making them 0, which represents empty.
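
    For comparison, a typical recursive flood fill does not loop over the rest of the board at each call; it tests only the current cell and then recurses on its four neighbours. A minimal sketch (Java; the 19x19 board, the colour values and the 0-for-empty convention are taken from the question, the method name is mine):

    // Recursively clears one connected group of 'color' stones starting at (col, row).
    // Note: a real Go capture should first verify the group has no liberties before
    // removing it; this sketch only shows the fill/removal part.
    void removeGroup(int[][] stones, int col, int row, int color) {
        if (col < 0 || col >= 19 || row < 0 || row >= 19) return; // off the board
        if (stones[col][row] != color) return;                    // not part of the group
        stones[col][row] = 0;                                     // 0 represents empty
        removeGroup(stones, col - 1, row, color);
        removeGroup(stones, col + 1, row, color);
        removeGroup(stones, col, row - 1, color);
        removeGroup(stones, col, row + 1, color);
    }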

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games - something the player downloads and runs on their local computer. There are many memory editors that let a player find and freeze values, like the player's health. How do you prevent this kind of cheating? What strategies are effective against it? I'm looking for some good ones. Two I use that are mediocre are:
    - Displaying values as a percentage instead of the raw number (e.g. 46/50 = 92% health)
    - A low-level class that holds values in an array and moves them with each change
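
    A related, commonly used trick is to never store the sensitive value in plain form at all, for example XOR-ing it with a per-instance random key so a scan for the on-screen number finds nothing. A minimal sketch (Java; purely illustrative, and like the strategies above it only raises the bar rather than defeating a determined cheater):

    import java.util.Random;

    // Holds an int XOR-ed with a random key, so the raw value never sits in
    // memory where a naive scan for "46" would find it.
    class ObfuscatedInt {
        private final int key = new Random().nextInt();
        private int stored;

        ObfuscatedInt(int value) { set(value); }

        void set(int value) { stored = value ^ key; }

        int get() { return stored ^ key; }
    }

    // Usage: ObfuscatedInt health = new ObfuscatedInt(46);
    //        health.set(health.get() - 5);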

    Read the article

  • Trying to figure out SDL pixel manipulation?

    - by NoobScratcher
    Hello, so I've found code that plots a pixel in an SDL screen surface:
    void putpixels(int x, int y, int color)
    {
        unsigned int *ptr = (unsigned int*)Screen->pixels;
        int lineoffset = y * (Screen->pitch / 4);
        ptr[lineoffset + x] = color;
    }
    But I have no idea what it's actually doing. Here are my thoughts: you make an unsigned integer pointer to the surface's pixels, then you make another integer to hold the line offset, which equals y multiplied by the pitch divided by 4... Now why am I dividing by 4, what is the pitch, and why do I multiply by it? Why must I take the line offset, add the x value to it, and assign the color to that element? I'm so confused. I found this function here - http://sol.gfxile.net/gp/ch02.html
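
    For what it's worth, the pitch is the length of one row of the surface in bytes (which may include padding), so dividing it by 4 converts it to a row length in 32-bit (4-byte) pixels; the line offset is then y rows' worth of pixels, and adding x steps to the right column. The same arithmetic in a Java-style sketch (names mine, not SDL's):

    // A surface as a flat array of 32-bit pixels; 'pitchInBytes' is the row length
    // in bytes, as SDL reports it. Dividing by 4 converts bytes to 4-byte pixels.
    void putPixel(int[] pixels, int pitchInBytes, int x, int y, int color) {
        int pixelsPerRow = pitchInBytes / 4;    // 4 bytes per 32-bit pixel
        pixels[y * pixelsPerRow + x] = color;   // skip y full rows, then x pixels in
    }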

    Read the article

  • DirectX and open libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries - more to get an idea of what exists than how they compare. For example, we have DirectX (graphics) and its open counterpart OpenGL, and DirectX (sound) and its counterpart OpenAL. But there are other DirectX libraries that I can't find open alternatives to, such as:
    - DirectInput
    - DXGI
    - Direct2D
    - DirectWrite
    Does anyone have any lists or comparisons between the DirectX libraries and their open counterparts?

    Read the article

  • Saving an interface instance into a Bundle

    - by user22241
    I have an interface that allows me to switch between different scenes in my Android game. When the home key is pressed, I save all of my state (score, sprite positions etc.) into a Bundle. When re-launching, I restore all my state and all is OK - however, I can't figure out how to save my 'Scene', so when I return it always starts at the default screen, which is the 'Main Menu'. How would I go about saving my 'Scene' (into a Bundle)?
    Code:
    import android.view.MotionEvent;

    public interface Scene{
        void render();
        void updateLogic();
        boolean onTouchEvent(MotionEvent event);
    }
    I assume the interface is the relevant piece of code, which is why I've posted that snippet. I set my scene like so ('options' is an object of my Options class, which extends MainMenu (another custom class), which in turn implements the interface 'Scene'):
    SceneManager.getInstance().setCurrentScene(options); //Current scene is option screen
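
    An interface instance itself can't go into a Bundle, but a common workaround is to save a simple identifier for the current scene and rebuild the scene from it on restore. A hedged sketch (Java; Scene, Options, MainMenu and SceneManager are the question's own classes, while the key name and the recreate-by-id idea are my assumptions):

    import android.app.Activity;
    import android.os.Bundle;

    // Assumes the question's Scene, Options, MainMenu and SceneManager classes exist.
    public class GameActivity extends Activity {
        private static final String KEY_SCENE = "currentScene";

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            // Store a plain String (or enum name) identifying the scene, not the Scene object.
            outState.putString(KEY_SCENE, "OPTIONS"); // e.g. derived from the current scene
        }

        @Override
        protected void onRestoreInstanceState(Bundle savedInstanceState) {
            super.onRestoreInstanceState(savedInstanceState);
            String id = savedInstanceState.getString(KEY_SCENE, "MAIN_MENU");
            // Recreate the matching Scene and hand it back to the manager.
            Scene scene = "OPTIONS".equals(id) ? new Options() : new MainMenu();
            SceneManager.getInstance().setCurrentScene(scene);
        }
    }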

    Read the article

  • Variables in static library are never initialized. Why?

    - by Coyote
    I have a bunch of variables that should be initialized when my game launches, but most of them are never initialized. Here is an example of the code:
    MyClass.h
    class MyClass : public BaseObject
    {
        DECLARE_CLASS_RTTI(MyClass, BaseObject);
        ...
    };
    MyClass.cpp
    REGISTER_CLASS(MyClass)
    where REGISTER_CLASS is a macro defined as follows:
    #define REGISTER_CLASS(className)\
        class __registryItem##className : public __registryItemBase {\
            virtual className* Alloc(){ return NEW className(); }\
            virtual BaseObject::RTTI& GetRTTI(){ return className::RTTI; }\
        }\
        \
        const __registryItem##className __registeredItem##className(#className);
    and __registryItemBase looks like this:
    class __registryItemBase
    {
        __registryItemBase(const _string name):mName(name){ ClassRegistry::Register(this); }
        const _string mName;
        virtual BaseObject* Alloc() = 0;
        virtual BaseObject::RTTI& GetRTTI() = 0;
    }
    Now, the code is similar to what I currently have, and what I have works flawlessly: all the registered classes are registered to a ClassManager before main(...) is called. I'm able to instantiate and configure components from scripts, auto-register them to the right system, etc. The problem arises when I create a static library (currently for the iPhone, but I fear it will happen with Android as well). In that case the code in the .cpp files is never registered. Why is the resulting code not executed when it is in the library, while the same code in the program's binary is always executed?
    Bonus questions: For this to work in the static library, what should I do? Is there something I am missing? Do I need to pass a flag when building the lib? Should I create another structure and init all the __registeredItem##className using that structure?

    Read the article

  • Creating a level editor event system

    - by Vaughan Hilts
    I'm designing a level editor for a game, and I'm trying to create a sort of 'event' system so I can chain things together. If anyone has used RPG Maker, I'm trying to do something similar to its system. Right now, I have an 'EventTemplate' class and a bunch of sub-classed 'EventNodes' which basically just contain properties of their data. Originally, the IAction and IExecute interfaces performed logic, but that was moved into a DLL to share between the two projects.
    Question: How can I abstract logic from data in this case? Is my model wrong? Isn't cast typing expensive when parsing these actions all the time? Should I write a 'Processor' class to execute them all? But then these actions, which can do all sorts of things, need to interact with all sorts of sub-systems.
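
    One common shape for this (a hedged sketch in Java rather than the question's presumable C#, with all names invented) is to keep the nodes as pure data and register one handler per node type in a processor. The processor is the only thing that references the sub-systems, and the single cast lives in one place instead of at every call site:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    // Event nodes are plain data - no game logic inside them.
    interface EventNode { }

    class ShowTextNode implements EventNode { String text; }
    class GiveItemNode implements EventNode { String itemId; int amount; }

    // The processor owns the logic and the references to sub-systems.
    class EventProcessor {
        private final Map<Class<? extends EventNode>, Consumer<EventNode>> handlers = new HashMap<>();

        <T extends EventNode> void register(Class<T> type, Consumer<T> handler) {
            // One checked cast here, instead of casts scattered through the game code.
            handlers.put(type, node -> handler.accept(type.cast(node)));
        }

        void execute(EventNode node) {
            Consumer<EventNode> handler = handlers.get(node.getClass());
            if (handler != null) handler.accept(node);
        }
    }

    // Usage (dialogueSystem is hypothetical):
    //   processor.register(ShowTextNode.class, n -> dialogueSystem.show(n.text));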

    Read the article

  • What can make a peaceful game successful?

    - by Miro
    Today, the most successful games are action games like FPSs, RPGs, MMORPGs... I'd like to make a peaceful game, but I don't know how to attract people. I can make good graphics, but that's not the main thing that makes people like a game for more than a couple of minutes. The content is important. In the game styles mentioned at the beginning, the main content is fighting, killing others, and making yourself the predator, the most powerful creature or player in the game. But what content can attract people to a peaceful game?

    Read the article

  • How do I pass an object location into a vertex shader?

    - by Greg Kassapidis
    I am using the Blender Game Engine. I want to create a large flat plane and deform it locally near a moving object. So far (despite being a beginner at shaders) I've written a vertex shader for the plane which moves the vertices to their correct positions (constant positions, for now). I cannot find a way to swap that constant location for an object's location, updated every frame, while the shader is running. I am not even sure if it's possible. I only want to access a specific object's center from the shader.

    Read the article

  • Hedgewars code confusion

    - by BluFire
    I looked at an open source project (Hedgewars) that was built using several programming languages, such as C++ and Java. While I was looking through the code, I couldn't help noticing that all the math and physics were missing from the Java code. I imported the project folder called "SDL-android-project", which was a subfolder of the "android build" project files. My question is: where are all the math and physics in the code? Do I have to look at the C++ code in order to see them? I think Hedgewars was originally programmed in C++, but the files are confusing to me because of their size and the fact that the project contains several programming languages.

    Read the article

  • Love2D engine for Lua; What about 3D?

    - by shadowprotocol
    Lua has been really awesome to learn, it's so simple. I really enjoy scripting languages, and I had an equally enjoyable time learning Python. The Love engine, http://love2d.org/, is really awesome, but I'm looking for something that can handle 3D as well. Is there anything that accommodates 3D in Lua? I'm still intrigued by the particle system of LOVE anyway and may just turn my idea into a 2D project with Particle lighting :) EDIT: I removed comments about Python - I want this to be a Lua topic. Thanks

    Read the article

  • Rotations and Origins

    - by Theodore Enderby
    I was hoping someone could explain to me, or help me understand, the math behind rotations and origins. I'm working on a little top-down space sim and I can rotate my ship just how I want it. Now, when I get my blasters going, it'd be nice if they shared the same rotation. Here's a picture, and here's some code:
    blast.X = ship.X+5;
    blast.Y = ship.Y;
    blast.RotationAngle = ship.RotationAngle;
    blast.Origin = new Vector2(ship.Origin.X,ship.Origin.Y);
    I add five so the sprite lines up when facing right. I tried adding five to the blast origin instead, but no go. Any help is much appreciated.
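
    The usual fix is to rotate the muzzle offset by the ship's angle instead of always adding it along +X. A small sketch of the math (plain Java-style pseudocode for the snippet above; variable names are mine, and the exact sign convention depends on whether Y points up or down on screen):

    // Returns the blast spawn position: the ship position plus the muzzle offset
    // (5, 0) rotated by the ship's current angle (standard 2D rotation).
    static double[] blastPosition(double shipX, double shipY, double shipAngle) {
        double offsetX = 5, offsetY = 0;       // offset that is correct when facing right
        double cos = Math.cos(shipAngle);
        double sin = Math.sin(shipAngle);
        double x = shipX + offsetX * cos - offsetY * sin;
        double y = shipY + offsetX * sin + offsetY * cos;
        return new double[] { x, y };          // the blast still copies the ship's rotation
    }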

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RT will be affected, but you cannot specify to output this data anywhere else but to COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target will become zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point and I need to write to only the specified ones afterwards. It must be the first texture, as I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • Level and Player objects - which should contain which?

    - by Thane Brimhall
    I've been working on several simple games, and I've always come to a decision point where I have to choose whether to have the Level object as an attribute of the Player class or the Player as an attribute of the Level class. I can see arguments for both. The Level should contain the player because it also contains every other entity; in fact it just makes sense this way: "John is in the room." It makes it a bit more difficult to move the player to a new level, however, because then each level has to pass its player object to the upcoming level. On the other hand, it makes programming sense to me to leave the player as the top-level object that is persistent between levels, where the environment changes because the player decides to change his level and location. It becomes very easy to change levels, because all I have to do is replace the level variable on the player. What's the most common practice here? Or better yet, is there a "right" way to architect this relationship?
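
    A third option worth sketching (Java; purely illustrative, not from the question) is to let neither own the other: a game/world object holds the persistent player and the current level, the level only keeps a list of entities, and changing levels is a single assignment:

    import java.util.ArrayList;
    import java.util.List;

    class Player { /* persistent stats, inventory, ... */ }

    class Level {
        final List<Object> entities = new ArrayList<>(); // monsters, items, and the player while present
    }

    // The world owns both; neither Player nor Level needs a reference to the other.
    class World {
        final Player player = new Player();
        Level currentLevel = new Level();

        void changeLevel(Level next) {
            currentLevel.entities.remove(player); // leave the old room
            next.entities.add(player);            // "John is in the room"
            currentLevel = next;                  // switching levels is one assignment
        }
    }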

    Read the article

  • Given a RGB color x, how to find the most contrasting color y?

    - by Arthur Wulf White
    I have to mark a certain item in a way that will make it stand out against the background. I need it to be surrounded with the color that contrasts with the background as much as possible, so it will pop out and be easily noticed by the player. Let's say I know the background is color 'x'. How do I find 'y' such that it contrasts strongly with 'x' and is easy to notice against a background where 'x' is the dominant color? I first thought about inverting color 'x', but then I noticed that when 'x' is a medium shade of gray, if I invert 'x' to get 'y', then 'y' is also a medium shade of gray, which does not work.
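
    One simple heuristic that avoids the mid-gray problem is to decide by luminance rather than by inversion: compute the background's relative luminance and pick black or white (or any fixed dark/light pair), which always gives a large contrast. A minimal sketch (Java; the luminance weights are the standard Rec. 709 ones). For a coloured highlight you can additionally rotate the hue by 180 degrees and push the lightness toward the opposite extreme:

    import java.awt.Color;

    // Returns black for light backgrounds and white for dark ones, so the result
    // never lands on a similar mid-gray the way plain inversion can.
    static Color contrastingColor(Color background) {
        double luminance = 0.2126 * background.getRed()
                         + 0.7152 * background.getGreen()
                         + 0.0722 * background.getBlue();   // 0..255 range
        return luminance > 128 ? Color.BLACK : Color.WHITE;
    }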

    Read the article

  • Facebook Game database design

    - by facebook-100000781341887
    Hi, I'm currently developing a Facebook mafia-like PHP game (a lightweight version, of course). Here is a simplified database (MySQL) of the game:
    - id-a <int3> - for index
    - uid <chr15> - Facebook uid
    - HP <int3> - health points
    - exp <int3> - experience
    - money <int3> - money
    - list_inventory <chr5> - the inventory the user holds (something special here, discussed next)
    ... and 20 other fields, such as reputation, number of combats... (the number next to the type is the size of the type in bytes).
    For list_inventory, there are 40 inventory items in my game (actually, I have 5 lists of this kind in my database), and each user can only hold 1 of each item. Therefore I assign 5 chars to this field and use each bit of each char as 1 item (5 chars * 8 bits = 40 slots), and I do some manipulation in PHP to extract the data from these 5 bytes.
    I was thinking about this: if the game has 100,000 users and only 10% are active, then with my method the space used is 5 bytes * 100,000 = 500 KB. If I use another method and create a table user_hold_inventory, inserting a record whenever a user holds an item, then for the 10,000 active users I assume they hold every item, and for the others I assume they hold none. Here are the fields of the new table:
    - id-b <int3> - for index
    - id-a <int3> - id of the user table
    - inv_no <int1> - inventory item the user holds
    For the space used: ([id] (3+3) bytes + [inv_no] 1 byte) * [active users] 10,000 * [all inventory] 40 = 2.8 MB. It seems method 2 uses more space, but it consumes less CPU power. Please comment on these 2 methods, or correct me if there is a better method than what I've thought of.
    Another question: my database contains 26 fields, but I counted 5 of them that don't change frequently. Should I separate them into another table or not? So many words - thanks for reading :)
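
    For the bit-packed variant, the per-item manipulation is just "which byte, which bit". A small sketch of that packing (shown in Java rather than PHP, names mine; 40 item flags in 5 bytes as described above):

    // 40 inventory flags packed into 5 bytes: item i lives in byte i/8, bit i%8.
    class PackedInventory {
        private final byte[] slots = new byte[5];

        void setOwned(int item, boolean owned) {
            int index = item / 8, bit = item % 8;
            if (owned) slots[index] |= (1 << bit);
            else       slots[index] &= ~(1 << bit);
        }

        boolean isOwned(int item) {
            return (slots[item / 8] & (1 << (item % 8))) != 0;
        }
    }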

    Read the article

  • Best way to do large XNA animations?

    - by Harold
    What's the best way to have large animations in XNA 4.0? I have created a sprite sheet with each sprite being 250x400 (more of an image than a sprite, but hey ho) and there are approximately 45 frames in the animation. This causes problems for XNA, as it says that the maximum texture size for the Reach profile is 2048. I'd rather not switch to HiDef, as I've heard that means the game is less compatible with some computers and systems, so does anyone have any idea what the best thing to do is? The only thing I could come up with is to have a list of textures to flick through, but that's not ideal.

    Read the article

  • How can I easily create cloud texture maps?

    - by EdwardTeach
    I am making 3d planets in my game; these will be viewed as "globes". Some of them will need cloud layers. I looked at various Blender tutorials for creating "earth", and for their cloud layers they use earth cloud maps from NASA. However I will be creating a fictional universe with many procedurally-generated planets. So I would like to use many variations. I'm hoping there's a way to procedurally generate cloud maps such as the NASA link. I will also need to create gas giants, so I will also need other kinds of cloud texture maps. If that is too difficult, I could fall back to creating several variations of cloud maps. For example, 3 for earth-like, 3 for gas giants, etc. So how do I statically create or programmatically generate such cloud maps?
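
    A common way to get endless variations is fractal noise: sum several octaves of a smooth 2D noise function and shape the result into a grayscale cloud-cover map you can wrap onto the globe (gas giants usually stretch the same noise into horizontal bands). A rough sketch (Java; hash-based value noise, and all constants are just starting points to tweak, not anything from the NASA maps):

    import java.util.Random;

    // Generates a width x height grayscale cloud map (values 0..1) from fractal value noise.
    class CloudMap {
        static float[][] generate(int width, int height, long seed, int octaves) {
            float[][] map = new float[height][width];
            int hashSeed = new Random(seed).nextInt();
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    float value = 0, amplitude = 1, frequency = 4f / width, total = 0;
                    for (int o = 0; o < octaves; o++) {
                        value += amplitude * smoothNoise(x * frequency, y * frequency, hashSeed + o);
                        total += amplitude;
                        amplitude *= 0.5f;   // each octave contributes half as much...
                        frequency *= 2f;     // ...at twice the detail
                    }
                    float v = value / total;                    // normalise to 0..1
                    map[y][x] = Math.max(0f, (v - 0.5f) * 2f);  // crude "cloud cover" shaping
                }
            }
            return map;
        }

        // Bilinearly interpolated noise from a hash of the integer lattice points.
        private static float smoothNoise(float x, float y, int seed) {
            int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
            float tx = x - x0, ty = y - y0;
            float a = hash(x0, y0, seed),     b = hash(x0 + 1, y0, seed);
            float c = hash(x0, y0 + 1, seed), d = hash(x0 + 1, y0 + 1, seed);
            float top = a + (b - a) * tx, bottom = c + (d - c) * tx;
            return top + (bottom - top) * ty;
        }

        // Deterministic pseudo-random value in 0..1 for an integer grid point.
        private static float hash(int x, int y, int seed) {
            int h = x * 374761393 + y * 668265263 + seed * 1442695041;
            h = (h ^ (h >>> 13)) * 1274126177;
            return ((h ^ (h >>> 16)) & 0x7fffffff) / (float) Integer.MAX_VALUE;
        }
    }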

    Read the article

  • What are good JS libraries for game dev?

    - by acidzombie24
    If I decide to write a simple game, both text-based and graphical (2D), what libraries would I use? (Assume we are using an HTML5-compatible browser.) The main things I can think of:
    - Rendering text on screen
    - Animating sprites (using images/CSS)
    - Input (capturing the arrow keys and getting relative mouse positions)
    - Perhaps preloading resources, or dynamically loading resources and choosing the order
    - Sound (but I am unsure how important this will be to me at first) - perhaps with mixing and chaining sounds, or looping forever until stopped
    - Networking (low priority), to connect one user to another or to continuously GET data without multiple requests (I know this exists, but I don't know how easy it is to set up or use; it isn't important to me, it's just for the question)

    Read the article

  • Surface normal to screen angle

    - by Tannz0rz
    I've been struggling to get this working. I simply wish to take a surface normal and convert it to a screen angle. As an example, assuming we're working with the highlighted surface on the sphere below, where the arrow is the normal, the 2D angle would obviously be PI/4 radians. Here's one of the many things I've tried, to no avail:
    float4 A = v.vertex;
    float4 B = v.vertex + float4(v.normal, 0.0);
    A = mul(VP, A);
    B = mul(VP, B);
    A.xy = (0.5 * (A.xy / A.w)) + 0.5;
    B.xy = (0.5 * (B.xy / B.w)) + 0.5;
    o.theta = atan2(B.y - A.y, B.x - A.x);
    I'm finally at my wit's end. Thanks for any and all help.

    Read the article

  • Multiplayer Network Game - Interpolation and Frame Rate

    - by J.C.
    Consider the following scenario: Let's say, for sake of example and simplicity, that you have an authoritative game server that sends state to its clients every 45ms. The clients are interpolating state with an interpolation delay of 100 ms. Finally, the clients are rendering a new frame every 15ms. When state is updated on the client, the client time is set from the incoming state update. Each time a frame renders, we take the render time (client time - interpolation delay) and identify a previous and target state to interpolate from. To calculate the interpolation amount/factor, we take the difference of the render time and previous state time and divide by the difference of the target state and previous state times: var factor = ((renderTime - previousStateTime) / (targetStateTime - previousStateTime)) Problem: In the example above, we are effectively displaying the same interpolated state for 3 frames before we collected the next server update and a new client (render) time is set. The rendering is mostly smooth, but there is a dash of jaggedness to it. Question: Given the example above, I'd like to think that the interpolation amount/factor should increase with each frame render to smooth out the movement. Should this be considered and, if so, what is the best way to achieve this given the information from above?
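
    To make the factor move every frame, the render time has to advance with the local clock between server updates rather than staying frozen at the value set by the last packet; the factor then grows naturally on every 15 ms frame without any extra fudging. A hedged sketch of that bookkeeping (Java; the 100 ms delay and the previous/target state terminology mirror the question, everything else is an assumption):

    // Called once per rendered frame (~15 ms apart in the question's example).
    // clientTimeMs should be lastStateTime + timeSinceLastPacket, i.e. it keeps
    // advancing with the local clock, so renderTime keeps moving and the factor
    // grows smoothly between the 45 ms server updates.
    double interpolationFactor(long clientTimeMs,
                               long previousStateTimeMs,
                               long targetStateTimeMs,
                               long interpolationDelayMs) {
        long renderTime = clientTimeMs - interpolationDelayMs;
        double factor = (renderTime - previousStateTimeMs)
                      / (double) (targetStateTimeMs - previousStateTimeMs);
        // Clamp to 0..1 (or allow > 1 if you want brief extrapolation past the target).
        return Math.max(0.0, Math.min(1.0, factor));
    }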

    Read the article

  • Who should map physical keys to abstract keys?

    - by Paul Manta
    How do you bridge the gap between the library's low-level event system and your engine's high-level event system? (I'm not necessarily talking about key events, but also about quit events.) At the top level of my event system, I send out KeyPressedEvents, KeyReleasedEvents and others of this kind. These high-level events only contain the abstract values of the keys (they don't say that Space was pressed, but that the JumpKey was pressed, for example). Whose responsibility should it be to map the "JumpKey" to an actual key on the keyboard?
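
    A common answer is a small input-mapping layer that sits exactly where low-level events enter the engine: it owns a table from physical key codes to abstract actions, and everything above it only ever sees the abstract events. A minimal sketch (Java; the key codes and action names are placeholders, not from the question):

    import java.util.HashMap;
    import java.util.Map;

    enum GameAction { JUMP, MOVE_LEFT, MOVE_RIGHT, QUIT }

    // The only place that knows about physical keys; rebinding = editing this table.
    class InputMapper {
        private final Map<Integer, GameAction> bindings = new HashMap<>();

        void bind(int physicalKeyCode, GameAction action) {
            bindings.put(physicalKeyCode, action);
        }

        // Called from the low-level event loop; returns null for unbound keys.
        GameAction translate(int physicalKeyCode) {
            return bindings.get(physicalKeyCode);
        }
    }

    // Usage: mapper.bind(32 /* space */, GameAction.JUMP);
    //        GameAction a = mapper.translate(rawKeyCode);  // then emit JumpKeyPressedEvent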

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article
