Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.


  • What program should I use for Ludum Dare?

    - by mFontoura
    I want to participate for the first time in Ludum Dare, but I'm not yet comfortable enough with any language to pick one for making a game in a weekend. So I was looking for a 'GameMaker'-style program, just to make something for LD. I was going to go with Construct 2, but I use Linux and they don't have a Linux version. The alternative I use is Stencyl, which is great and is probably what I'm going to use. However, I wanted to know if there is something similar, and better, for Linux. Also, if I get a computer with Win8, is Construct 2 worth the trouble?


  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.

    Setup:
    - 512x448 window
    - 64x64 grid
    - gl_Position = projection * world * position;
    - projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f), a textbook orthographic projection function
    - world is defined by a fixed camera position at (0, 0)
    - position is defined by the sprite's position

    Problem: In the screenshot below (1:1 scaling) the grid spacing is 64x64 and I am drawing the unit at (64, 64); however, the unit draws roughly ~10px from the correct position. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost as to the proper way of producing a 1:1 pixel-to-world-unit projection. Anyhow, here are some quick images to aid with the problem. I super-imposed a bunch of the sprites at what the engine believes are 64px offsets. When this seemed out of place, I went back to the base case of 1 unit, which lined up as expected. The yellow shows a 1px difference in the movement.

    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data looks like this in the VBO (left: untranslated, right: translated by 64):

                 x      y            x      y
        tl |    0.0   24.0    ->   64.0   24.0
        bl |    0.0    0.0    ->   64.0    0.0
        tr |   16.0    0.0    ->   80.0    0.0
        br |   16.0   24.0    ->   80.0   24.0

    With that said, all I am left to believe is that I am munging up my actual projection, so I am looking for any insight into maintaining a 1:1 pixel-to-world-unit projection.
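    For what it's worth, with a 512x448 window, ortho(-256, 256, -224, 224) does map one world unit to exactly one pixel, so a ~10px error usually creeps in through the world/camera matrix or non-integer translations. A minimal sketch of a pixel-perfect setup, written with XNA's matrix helpers since most of this page's code is C# (names are illustrative, not from the question; glOrtho with the same numbers behaves identically):

        // Map one world unit to exactly one pixel and keep the camera on whole
        // pixels, so sprites can't land between texels.
        Matrix projection = Matrix.CreateOrthographicOffCenter(
            -viewportWidth / 2f, viewportWidth / 2f,    // left, right
            -viewportHeight / 2f, viewportHeight / 2f,  // bottom, top
            0f, 1f);                                    // near, far

        Vector2 snapped = new Vector2((float)Math.Floor(cameraPos.X),
                                      (float)Math.Floor(cameraPos.Y));
        Matrix world = Matrix.CreateTranslation(-snapped.X, -snapped.Y, 0f);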


  • Is there a way to legally create a game mod?

    - by Rodrigo Guedes
    Some questions about it: If I create a funny version of a copyrighted game and sell it (crediting the original developers), would it be considered a parody, or would I need to pay royalties? If I create a game mod for my own personal use, would it be legal? What if I gave it to a friend for free? Is there a general rule about this, or does it depend on the developer's will? P.S.: I'm not talking about cloning games, as in this question; it's all about a game clearly based on another. Something like "GTA Gotham City" ;) EDIT: This picture that I found on the internet illustrates what I'm talking about: Just in case I was not clear: I have never created a game mod. I was just wondering if it would be legally possible before trying to do it. I'm not condoning piracy; I pay dearly for my games (you guys have no idea how expensive games are in Brazil due to taxes). Once more I say that the question is not about cloning. Cloning is copying something and trying to make your version look like a brand-new product. Mods are intended to make reference to one or more of their sources. I'm not sure if it can be done legally (if I knew, I wouldn't be asking), but I'm sure this question is not a duplicate. Even so, I trust the moderators, and if they close my question I will not be offended - at least I had an opportunity to explain myself and got one good answer (by the time I write this; maybe some more will be given later).


  • What does SetTextureStage(0, D3DTSS_COLORARG2, 0) in DirectX mean?

    - by Vite Falcon
    I'm trying to convert some DirectX code to Ogre3D and was wondering what the following translates to (these are IDirect3DDevice9::SetTextureStageState calls):

        pDev->SetTextureStageState(0, D3DTSS_TEXCOORDINDEX, 0);
        pDev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
        pDev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
        pDev->SetTextureStageState(0, D3DTSS_COLORARG2, 0);

    What is the modulation operation happening here? Is the texture being modulated with the background, or is it being zeroed? I've tried searching for what this means and unfortunately I haven't come across anything meaningful. Any help shedding light on this matter would be much appreciated.
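    For what it's worth, in the d3d9 headers D3DTA_DIFFUSE is defined as 0x00000000, so passing 0 as COLORARG2 selects the interpolated vertex diffuse colour rather than zeroing anything: the stage outputs texture colour multiplied by vertex diffuse. A SlimDX (C#) sketch of the same setup with the argument named explicitly (assuming a Direct3D9 device; check the exact overloads against your SlimDX version):

        // Equivalent stage setup; "ColorArg2 = 0" really means "vertex diffuse".
        device.SetTextureStageState(0, TextureStage.TexCoordIndex, 0);
        device.SetTextureStageState(0, TextureStage.ColorArg1, (int)TextureArgument.Texture);
        device.SetTextureStageState(0, TextureStage.ColorOperation, (int)TextureOperation.Modulate);
        device.SetTextureStageState(0, TextureStage.ColorArg2, (int)TextureArgument.Diffuse);
        // => output = texture colour * interpolated vertex diffuse colour

    If memory serves, Ogre's material scripts default to exactly this (colour_op modulate on the first texture_unit), so a plain textured material should already match.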


  • How to draw textures on a model

    - by marc wellman
    The following code is a complete XNA 3.1 program, almost unaltered from the code skeleton Visual Studio creates for a new project. The only things I have changed are:
    - imported a .x model into the content folder of the VS solution (the model is a simple square with a texture spanning it - made in Google Sketchup and exported with several .x exporters);
    - in the LoadContent() method, loaded the .x model into the game;
    - in the Draw() method, used a BasicEffect to render the model.
    Except for these three things I haven't added any code. Why does the model not show the texture? What can I do to make the texture visible? This is the texture file (a standard SketchUp texture from the palette): And this is what my program looks like - as you can see: no texture! Find below the complete source code of the program AND the complete .x file:

        namespace WindowsGame1
        {
            /// <summary>
            /// This is the main type for your game
            /// </summary>
            public class Game1 : Microsoft.Xna.Framework.Game
            {
                GraphicsDeviceManager graphics;
                SpriteBatch spriteBatch;

                public Game1()
                {
                    graphics = new GraphicsDeviceManager(this);
                    Content.RootDirectory = "Content";
                }

                /// <summary>
                /// Allows the game to perform any initialization it needs to
                /// before starting to run.
                /// </summary>
                protected override void Initialize()
                {
                    // TODO: Add your initialization logic here
                    base.Initialize();
                }

                Model newModel;

                /// <summary>
                /// LoadContent will be called once per game and is the place to
                /// load all of your content.
                /// </summary>
                protected override void LoadContent()
                {
                    // Create a new SpriteBatch, which can be used to draw textures.
                    spriteBatch = new SpriteBatch(GraphicsDevice);

                    // TODO: use this.Content to load your game content here
                    newModel = Content.Load<Model>(@"aau3d");

                    foreach (ModelMesh mesh in newModel.Meshes)
                    {
                        foreach (ModelMeshPart meshPart in mesh.MeshParts)
                        {
                            meshPart.Effect = new BasicEffect(this.GraphicsDevice, null);
                        }
                    }
                }

                /// <summary>
                /// UnloadContent will be called once per game and is the place to
                /// unload all content.
                /// </summary>
                protected override void UnloadContent()
                {
                    // TODO: Unload any non ContentManager content here
                }

                /// <summary>
                /// Allows the game to run logic such as updating the world,
                /// checking for collisions, gathering input, and playing audio.
                /// </summary>
                protected override void Update(GameTime gameTime)
                {
                    // Allows the game to exit
                    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                        this.Exit();

                    // TODO: Add your update logic here
                    base.Update(gameTime);
                }

                /// <summary>
                /// This is called when the game should draw itself.
                /// </summary>
                protected override void Draw(GameTime gameTime)
                {
                    if (newModel != null)
                    {
                        GraphicsDevice.Clear(Color.CornflowerBlue);

                        Matrix[] transforms = new Matrix[newModel.Bones.Count];
                        newModel.CopyAbsoluteBoneTransformsTo(transforms);

                        foreach (ModelMesh mesh in newModel.Meshes)
                        {
                            foreach (BasicEffect effect in mesh.Effects)
                            {
                                effect.EnableDefaultLighting();
                                effect.TextureEnabled = true;
                                effect.World = transforms[mesh.ParentBone.Index]
                                    * Matrix.CreateRotationY(0)
                                    * Matrix.CreateTranslation(new Vector3(0, 0, 0));
                                effect.View = Matrix.CreateLookAt(
                                    new Vector3(200, 1000, 200), Vector3.Zero, Vector3.Up);
                                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                                    MathHelper.ToRadians(45.0f), 0.75f, 1.0f, 10000.0f);
                            }
                            mesh.Draw();
                        }
                    }
                    base.Draw(gameTime);
                }
            }
        }

    This is the model I am using (.x):

        xof 0303txt 0032
        // SketchUp 6 -> DirectX (c)2008 edecadoudal,
        // supports: faces, normals and textures
        Material Default_Material{
            1.0;1.0;1.0;1.0;;
            3.2;
            0.000000;0.000000;0.000000;;
            0.000000;0.000000;0.000000;;
        }
        Material _Groundcover_RiverRock_4inch_{
            0.568627450980392;0.494117647058824;0.427450980392157;1.0;;
            3.2;
            0.000000;0.000000;0.000000;;
            0.000000;0.000000;0.000000;;
            TextureFilename { "aau3d.xGroundcover_RiverRock_4inch.jpg"; }
        }
        Mesh mesh_0{
            4;
            -81.6535;0.0000;74.8031;,
            -0.0000;0.0000;0.0000;,
            -81.6535;0.0000;0.0000;,
            -0.0000;0.0000;74.8031;;
            2;
            3;0,1,2,
            3;1,0,3;;
            MeshMaterialList {
                2;
                2;
                1,
                1;
                { Default_Material }
                { _Groundcover_RiverRock_4inch_ }
            }
            MeshTextureCoords {
                4;
                -2.1168,-3.4022;
                1.0000,-0.0000;
                1.0000,-3.4022;
                -2.1168,-0.0000;;
            }
            MeshNormals {
                4;
                0.0000;1.0000;-0.0000;
                0.0000;1.0000;-0.0000;
                0.0000;1.0000;-0.0000;
                0.0000;1.0000;-0.0000;;
                2;
                3;0,1,2;
                3;1,0,3;;
            }
        }
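    A hedged guess at the culprit: the LoadContent() loop replaces each mesh part's pipeline-built effect with a bare new BasicEffect(GraphicsDevice, null), which discards the texture the content pipeline had bound to the model. Either delete that loop entirely, or carry a texture across when swapping effects (loading the .jpg as its own asset; "aau3d_texture" is a hypothetical asset name):

        foreach (ModelMesh mesh in newModel.Meshes)
        {
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                // Keep TextureEnabled in sync with an actually-assigned texture.
                BasicEffect effect = new BasicEffect(GraphicsDevice, null);
                effect.Texture = Content.Load<Texture2D>(@"aau3d_texture");
                effect.TextureEnabled = true;
                meshPart.Effect = effect;
            }
        }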


  • Stop map from scrolling but let player still move?

    - by ChocoMan
    I have a basic method of scrolling around on a map (moving the map instead of the player), but when the player gets within a certain proximity of the edge, how do you stop the map from scrolling, yet still allow the player to move around until it is away from that proximity? I'm not looking for any code, just a suggestion so that I can implement it myself. I can see it visually (creating four intersecting boxed boundaries for the player to enter), but I'm not sure how to go about stopping and resuming the scrolling of the map.
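    (Although the question asks for no code, the usual shape of the fix is short enough to show for other readers: clamp the camera instead of gating the player. Center the camera on the player, then clamp it to the map bounds; near an edge the clamp wins, the map stops, and the player keeps moving relative to it. All names below are illustrative.)

        // Center the camera on the player, then clamp to the map bounds.
        float camX = player.X - screenWidth / 2f;
        float camY = player.Y - screenHeight / 2f;
        camX = Math.Max(0, Math.Min(camX, mapWidth - screenWidth));
        camY = Math.Max(0, Math.Min(camY, mapHeight - screenHeight));
        // Draw everything offset by (-camX, -camY).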


  • How to refactor and improve this XNA mouse input code?

    - by Andrew Price
    Currently I have something like this:

        public bool IsLeftMouseButtonDown()
        {
            return currentMouseState.LeftButton == ButtonState.Pressed
                && previousMouseState.LeftButton == ButtonState.Pressed;
        }

        public bool IsLeftMouseButtonPressed()
        {
            return currentMouseState.LeftButton == ButtonState.Pressed
                && previousMouseState.LeftButton == ButtonState.Released;
        }

        public bool IsLeftMouseButtonUp()
        {
            return currentMouseState.LeftButton == ButtonState.Released
                && previousMouseState.LeftButton == ButtonState.Released;
        }

        public bool IsLeftMouseButtonReleased()
        {
            return currentMouseState.LeftButton == ButtonState.Released
                && previousMouseState.LeftButton == ButtonState.Pressed;
        }

    This is fine. In fact, I kind of like it. However, I'd hate to have to repeat this same code five times (for right, middle, X1, X2). Is there any way to pass the button I want into the function, so I could have something like this?

        public bool IsMouseButtonDown(MouseButton button)
        {
            return currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonPressed(MouseButton button)
        {
            return currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonUp(MouseButton button)
        {
            return !currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonReleased(MouseButton button)
        {
            return !currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button);
        }

    I suppose I could create some custom enumeration and switch on it in each function, but I'd first like to see if there is a built-in solution or a better way. Thanks!
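    XNA's MouseState exposes each button as a separate property, so there is no built-in IsPressed(button); but an extension method plus an enum gives exactly the generic version sketched above (MouseButton and IsPressed are assumptions, not framework members):

        using Microsoft.Xna.Framework.Input;

        public enum MouseButton { Left, Right, Middle, X1, X2 }

        public static class MouseStateExtensions
        {
            // Map each logical button to the property that reports it.
            public static bool IsPressed(this MouseState state, MouseButton button)
            {
                switch (button)
                {
                    case MouseButton.Left:   return state.LeftButton == ButtonState.Pressed;
                    case MouseButton.Right:  return state.RightButton == ButtonState.Pressed;
                    case MouseButton.Middle: return state.MiddleButton == ButtonState.Pressed;
                    case MouseButton.X1:     return state.XButton1 == ButtonState.Pressed;
                    case MouseButton.X2:     return state.XButton2 == ButtonState.Pressed;
                    default:                 return false;
                }
            }
        }

    The switch then lives in one place instead of four, and the four generic methods in the question compile as written.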


  • Strange rendering in XNA/Monogame

    - by Gerhman
    I am trying to render G-code generated for a 3D printer as the printed product, by reading the file as line segments and then drawing cylinders with the diameter of the filament around each segment. I think I have managed to do this part right, because the vertices I am sending to the graphics device appear to have been processed correctly. My problem, I think, lies somewhere in the rendering. What basically happens is that when I start rotating my model in the X or Y axis, it renders perfectly for half of the rotation, but for the other half it has this weird effect where you start seeing through the outer filament into some of the shapes inside. This effect is strongest with X rotations, though.

    Here is a picture of the part of the rotation that looks correct: And here is one that looks horrible:

    I am still quite new to XNA/MonoGame and 3D programming as a whole. I have no idea what could possibly be causing this, and even less of an idea of what this type of behavior is called. I am guessing this has something to do with rendering, so I have added the code for that part:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);

            basicEffect.World = world;
            basicEffect.View = view;
            basicEffect.Projection = projection;
            basicEffect.VertexColorEnabled = true;
            basicEffect.EnableDefaultLighting();

            GraphicsDevice.SetVertexBuffer(vertexBuffer);

            RasterizerState rasterizerState = new RasterizerState();
            rasterizerState.CullMode = CullMode.CullClockwiseFace;
            rasterizerState.ScissorTestEnable = true;
            GraphicsDevice.RasterizerState = rasterizerState;

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexBuffer.VertexCount);
            }

            base.Draw(gameTime);
        }

    I don't know if it could be because I am shading something that does not really have a texture. I am using this custom vertex declaration I found in some tutorial, which allows me to store a vertex with a position, color and normal:

        public struct VertexPositionColorNormal
        {
            public Vector3 Position;
            public Color Color;
            public Vector3 Normal;

            public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
                new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
            );
        }

    If any of you have ever seen this type of thing, please help. Also, if you think the problem might lie somewhere else in my code, please just request the part you would like to see in the comments section.
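    This looks like classic missing-depth-buffer behaviour: with back-face culling but no depth test, draw order decides visibility, so the model looks right from one side and inside-out from the other. A hedged sketch of the usual fix in XNA/MonoGame, applied before drawing:

        // SpriteBatch and manual state changes can knock the depth buffer off;
        // restore it explicitly each frame so nearer triangles win regardless
        // of the order they were emitted in.
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.BlendState = BlendState.Opaque;

        // Side note (an assumption about intent): DrawPrimitives takes a
        // *primitive* count, so a triangle list normally wants VertexCount / 3.
        GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexBuffer.VertexCount / 3);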


  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlaid on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly and further back the grid is pretty much invisible. With these settings, I don't get any moiré patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropy level, the more the terrain looks like it did without mipmapping. The lines are much crisper nearby, but in the distance I start to see terrible moiré patterns. My understanding was that mipmapping is supposed to get rid of moiré patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moiré patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth, however; could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using SlimDX):

        ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription
        {
            AddressU = TextureAddressMode.Clamp,
            AddressV = TextureAddressMode.Clamp,
            AddressW = TextureAddressMode.Clamp,
            Filter = Filter.Anisotropic,
            MaximumAnisotropy = anisotropicLevel,
            MinimumLod = 0,
            MaximumLod = float.MaxValue
        });
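    Anisotropic filtering deliberately samples sharper (lower-index) mip levels along the dominant view direction, so a high-frequency grid texture can start aliasing again at grazing angles; the logarithmic depth buffer should be unrelated, since depth has no influence on texture LOD selection. A hedged mitigation sketch (values are illustrative and meant to be tuned by eye; MipLodBias is a standard SamplerDescription field):

        var description = new SamplerDescription
        {
            AddressU = TextureAddressMode.Clamp,
            AddressV = TextureAddressMode.Clamp,
            AddressW = TextureAddressMode.Clamp,
            Filter = Filter.Anisotropic,
            MaximumAnisotropy = Math.Min(anisotropicLevel, 4), // cap the sharpening
            MipLodBias = 0.5f,                                 // bias toward blurrier mips
            MinimumLod = 0,
            MaximumLod = float.MaxValue
        };
        ssa = SamplerState.FromDescription(Engine.Device, description);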


  • How do I generate terrain like that of Scorched Earth?

    - by alex
    Hi, I'm a web developer and I am keen to start writing my own games. For familiarity, I've chosen JavaScript and the canvas element for now. I want to generate some terrain like that in Scorched Earth. My first attempt made me realise I couldn't just randomise the y value; there had to be some sanity in the peaks and troughs. I have Googled around a bit, but either I can't find something simple enough for me or I am using the wrong keywords. Can you please show me what sort of algorithm I would use to generate something like the example, keeping in mind that I am completely new to games programming (since making Breakout in 2003 with Visual Basic, anyway)?
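    One classic fit for Scorched Earth-style hills is 1D midpoint displacement: fix two endpoint heights, then repeatedly displace midpoints by a shrinking random amount, which gives correlated peaks and troughs instead of noise. A minimal sketch (in C#, matching the rest of this page's code; it translates line-for-line to JavaScript):

        using System;

        class TerrainGen
        {
            // detail = 9 gives (1 << 9) + 1 = 513 height samples, roughly in [0, 1].
            public static float[] Generate(int detail, float roughness, Random rng)
            {
                int size = (1 << detail) + 1;
                float[] h = new float[size];
                h[0] = 0.5f;                            // endpoint heights
                h[size - 1] = 0.5f;

                float spread = 0.35f;                   // initial displacement range
                for (int step = size - 1; step > 1; step /= 2)
                {
                    for (int i = 0; i + step < size; i += step)
                    {
                        int mid = i + step / 2;
                        float avg = (h[i] + h[i + step]) / 2f;
                        h[mid] = avg + ((float)rng.NextDouble() * 2f - 1f) * spread;
                    }
                    spread *= roughness;                // roughness in (0,1); ~0.5 works well
                }
                return h;                               // scale by canvas height (and clamp) to draw
            }
        }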


  • How to display the path a ball will bounce?

    - by boolean
    I'm trying to figure out a way to show the path a ball will travel, so that the player can line up a shot before they fire the ball. I can't think of a way to calculate this path in advance and show it to the player, especially if it involves collision detection. At first I thought I would run the game at a super high speed for one update, plot the path with some dotted lines where the ball bounced, and then in the next frame hide the 'tracer' ball. This seems to have two issues: calculating collision detection without actually updating the frames, and collision detection getting less reliable at high speeds. If the paths were straight lines I think I could figure this out in a while loop, but trying to take into account the speed of the ball, the curve of the path, the reflection off other objects... it all seems a bit much. I'm not looking for any code, and this isn't a platform-specific question; it's more about figuring out conceptually how this would work. Can this be done? Are there techniques to achieve this?
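    Conceptually, the usual answer is to decouple simulation from rendering: step a throwaway copy of the ball's state through the same physics function with a small fixed timestep, record positions, and draw them as dots. No high game speed is needed, and the small step keeps collisions reliable. A minimal sketch under those assumptions (Gravity and TryGetCollision are stand-ins for whatever the game already uses):

        // Run the real physics step on a copy of the ball's state and collect
        // sample points for the aiming line.
        List<Vector2> PredictPath(Vector2 position, Vector2 velocity, int samples, float dt)
        {
            var points = new List<Vector2>();
            for (int i = 0; i < samples; i++)
            {
                velocity += Gravity * dt;          // same integration as the live ball
                position += velocity * dt;
                if (TryGetCollision(position, out Vector2 normal))
                    velocity = Vector2.Reflect(velocity, normal);  // bounce
                if (i % 5 == 0) points.Add(position);              // thin out for dots
            }
            return points;
        }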


  • Creating a newspaper that affects the game's economy?

    - by zardon
    I am writing a game in Objective-C/cocos2d where a newspaper is a central part of what controls, or rather affects, the game's world economy, as well as what a city might do (such as increase X, reduce Y). The newspaper is a bit like a "Chance" card in Monopoly: it has an effect on something. My question is: what is the best way to write a newspaper that has both a random and a specific effect within the game? Would the best strategy be to write out all the things a newspaper can affect in a PLIST of headlines (with placeholders)? I think Tiny Tower uses a PLIST of events and randomly picks one, but I'm not sure how it actually parses it, because certain events do different things. But then how do I parse all the scenarios that a newspaper can deliver? A big switch statement seems very long and complicated, and I am wondering if there is a simpler way to handle this kind of thing. Related to this: there might be no news on a given day, and I'm not sure what the newspaper should display then; should it just display the last headline?

    So, in summary:
    1) A newspaper generates a headline that affects different things, such as the world economy, prices, or how a city reacts.
    2) I need the newspaper to generate headlines (although there may be days when there are no headlines at all), but I am not sure how to parse them without using a big switch statement.

    Thanks in advance.
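    One common way out of the switch statement is to make each headline pure data: a target stat, a delta, and a weight, applied by one generic routine, so adding headlines means adding entries, not code. Sketched here in C# for brevity (all names are illustrative); the same shape works as a PLIST of dictionaries in Objective-C, including a weighted "no news" entry for quiet days:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        record Headline(string Text, string TargetStat, float Delta, float Weight);

        class Newspaper
        {
            readonly List<Headline> headlines = new()
            {
                new("Harvest fails!",      "FoodPrice",  +0.25f, 1.0f),
                new("Gold rush downtown!", "CityGrowth", +0.10f, 0.5f),
                new("Quiet day.",          "None",        0.00f, 2.0f),  // the "no news" case
            };

            // Weighted random pick, like drawing a Chance card.
            public Headline PickRandom(Random rng)
            {
                float roll = (float)rng.NextDouble() * headlines.Sum(h => h.Weight);
                foreach (var h in headlines)
                    if ((roll -= h.Weight) < 0) return h;
                return headlines[^1];
            }
        }
        // A single generic call like economy.Adjust(h.TargetStat, h.Delta)
        // replaces the big switch.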


  • Cocos2d copied actions not responding?

    - by Stephen
    I am running an animation on two sprites like so:

        -(void) startFootballAnimation
        {
            CCAnimation* footballAnim = [CCAnimation animationWithFrame:@"Football"
                                                             frameCount:60
                                                                  delay:0.005f];
            spiral = [CCAnimate actionWithAnimation:footballAnim];
            CCRepeatForever* repeat = [CCRepeatForever actionWithAction:spiral];
            [self runAction:repeat];
            [secondFootball runAction:[[repeat copy] autorelease]];
        }

    The problem I am having is that when I call this method:

        - (void) slowAnimation
        {
            [spiral setDuration:[spiral duration] + 0.01];
        }

    it only slows down the first sprite's animation and not the second one. Do I need to do something different with copied actions to get them to react to the slowing of the animation?


  • Android - Rendering HUD View to SurfaceView

    - by Jon
    I have developed a relatively simple game for Android, to get my head around it all, and on the back of it developed a crude game engine (in the loosest sense!). I use a SurfaceView and canvas (no OpenGL - I'll cross that bridge another time!). I have implemented a game HUD, title screens, etc. by overlaying standard Android view widgets over my SurfaceView. This all works reasonably well, maintaining an acceptable frame rate, but it is a simple game with not a lot happening on or off screen. What I am wondering now is whether one could (and whether one would get any advantage from) drawing all my views to the one SurfaceView, all controlled by the main game thread. At the moment I have handlers flinging messages around and runOnUiThread calls here, there and everywhere - quite cumbersome. Any thoughts on this would be much appreciated (before I perhaps waste time trying to do it!).


  • Tool for creating complex paths?

    - by TerryB
    I want to create some fairly complex predefined paths for my AI sprites to follow. I'll need to use curves, splines, etc. to get the effect I want. Is there a drawing tool out there that will allow me to draw such curves, "mesh" them by placing lots of points along them at some defined density, and then output the coordinates of all of those points for me? I could write this tool myself, but hopefully one of the drawing packages can do this? Cheers!
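    If no tool fits, the sampling step is small to roll by hand: draw the control points in any vector editor, export them, and step along a Catmull-Rom spline at a fixed parameter density. An XNA-style sketch (Vector2.CatmullRom is a real framework helper; the rest of the names are illustrative):

        // Sample a Catmull-Rom spline through "pts" and collect waypoints an
        // AI sprite can follow. Needs four control points per segment.
        List<Vector2> SamplePath(IList<Vector2> pts, int samplesPerSegment)
        {
            var path = new List<Vector2>();
            for (int i = 1; i < pts.Count - 2; i++)
                for (int s = 0; s < samplesPerSegment; s++)
                {
                    float t = s / (float)samplesPerSegment;
                    path.Add(Vector2.CatmullRom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2], t));
                }
            return path;
        }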


  • Game World Design [on hold]

    - by GameDev
    I have a question about game world development. I want to make a kind of platform game mixed with an RPG (side-scrolling). What's the best way to draw the world:
    - draw everything, then use the camera to move around the world, or
    - draw just what you see and, as the player moves, draw the new stuff?
    I'm new at this and haven't taken any course on it, so if anyone can help, thanks :) PS: Any recommendations for learning game concepts, like drawing-world theory, play, etc.? (Not code; and I want to do 2D, but I only see books for 3D stuff.)
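    In practice most 2D engines do both: keep the whole world in data, but each frame draw only the tiles and sprites that intersect the camera rectangle. A hedged sketch for a tile map (all names are illustrative):

        // Draw only the tile range covered by the camera; the world itself
        // stays loaded in "tiles".
        int firstCol = Math.Max(0, (int)(camera.X / tileSize));
        int firstRow = Math.Max(0, (int)(camera.Y / tileSize));
        int lastCol  = Math.Min(mapCols - 1, (int)((camera.X + screenWidth) / tileSize));
        int lastRow  = Math.Min(mapRows - 1, (int)((camera.Y + screenHeight) / tileSize));

        for (int row = firstRow; row <= lastRow; row++)
            for (int col = firstCol; col <= lastCol; col++)
                DrawTile(tiles[row, col],
                         col * tileSize - camera.X,   // world -> screen
                         row * tileSize - camera.Y);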


  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens?

    At the moment, I have a GameManager class which owns a component manager and an entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists.

    A problem with this is that the collection of entities operated on by each system is determined solely by the individual systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only update/draw those collections. They'd have to either be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or I'd have to enable/disable entities in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges.

    What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window.

    Another option would be to add a "layer" key to each entity, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack.

    If I've coded myself into a corner, feel free to throw tomatoes, as long as they're labeled with a helpful suggestion. Hooray for learning curves.
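    For illustration, a minimal sketch of the "layer key" option described above, assuming a bitmask per entity and an active-layer mask supplied by the screen stack (all names are hypothetical, not from the poster's engine):

        using System;
        using System.Collections.Generic;

        [Flags]
        enum Layer { None = 0, World = 1, Hud = 2, PauseMenu = 4 }

        class RenderSystem
        {
            // Maintained by the same UpdateEntity event that fills the
            // required-component collections.
            readonly List<(int entityId, Layer layers)> entities = new();

            public void Draw(Layer activeLayers)
            {
                foreach (var (id, layers) in entities)
                {
                    if ((layers & activeLayers) == 0) continue; // not on an active screen
                    DrawEntity(id);
                }
            }

            void DrawEntity(int id) { /* fetch Renderable + Transform and draw */ }
        }

        // Game loop, with a translucent menu over the live game:
        // renderSystem.Draw(Layer.World | Layer.PauseMenu);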


  • How was collision detection handled in The Legend of Zelda: A Link to the Past?

    - by Restart
    I would like to know how collision detection was done in The Legend of Zelda: A Link to the Past. The game is 16x16 tile-based, so how did they do the tiles where only a quarter or half of the tile is occupied? Did they use a smaller grid for collision detection, like 8x8 tiles, so that four of them make up one 16x16 tile of the texture grid? But then, they also have true half-tiles which are cut diagonally, and the corners of the tiles seem to be rounded or something: if Link walks into a tile's corner, he can keep on walking and automatically moves around the corner. How is that done? I hope someone can help me out here.
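    Whatever the original ROM does internally, the behaviour can be reproduced with exactly the finer grid the question guesses at, plus a corner "nudge". A hedged sketch (Vector2 is XNA-style; thresholds are illustrative):

        // Collision map at 8x8 resolution: four collision cells per 16x16 art
        // tile, so quarter- and half-occupied tiles are just patterns of cells.
        const int Cell = 8;
        bool[,] solid;   // [row, col], built from the tile map (2x2 cells per art tile)

        bool IsBlocked(float x, float y)
        {
            return solid[(int)(y / Cell), (int)(x / Cell)];
        }

        // Corner assist (the "rounded corner" feel): if a vertical move is
        // blocked but sidestepping one pixel would clear it, slide instead of
        // stopping; mirror the same idea for horizontal moves.
        Vector2 MoveVertically(Vector2 pos, float dy)
        {
            if (!IsBlocked(pos.X, pos.Y + dy)) return new Vector2(pos.X, pos.Y + dy);
            if (!IsBlocked(pos.X - 1, pos.Y + dy)) return new Vector2(pos.X - 1, pos.Y);
            if (!IsBlocked(pos.X + 1, pos.Y + dy)) return new Vector2(pos.X + 1, pos.Y);
            return pos;
        }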


  • New way of integrating Openfeint in Cocos2d-x 0.12.0

    - by Ef Es
    I am trying to implement OpenFeint for Android in my cocos2d-x project. My approach so far has been to create a button that calls a static Java method in a class Bridge using the JNI helper functions (the JNI helper only accepts statics). Bridge has one singleton attribute of type OFAndroid, which is the class dynamically calling the OpenFeint API methods, and every method in the bridge just forwards to the OFAndroid object. What I am trying to do now is to initialize the OpenFeint libraries in the main Java class, the one calling the static C++ libraries. My problem right now is that the initializing function

        void com.openfeint.api.OpenFeint.initialize(Context ctx, OpenFeintSettings settings, OpenFeintDelegate delegate)

    is not accepting the context parameter that I am giving it, which is a "this" reference to the main class. The main class extends Cocos2dxActivity, but I don't have any other class that extends Application. Any suggestions on fixing it or improving the architecture?

    EDIT: I am trying a new solution: make the Bridge class into an Application child, have it called from the Main object, initialize OpenFeint when created, and let it call the OpenFeint functions instead of needing an additional class. The problem is I still get the error:

        03-30 14:39:22.661: E/AndroidRuntime(9029): Caused by: java.lang.NullPointerException
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at android.content.ContextWrapper.getPackageManager(ContextWrapper.java:85)
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at com.openfeint.internal.OpenFeintInternal.validateManifest(OpenFeintInternal.java:885)
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at com.openfeint.internal.OpenFeintInternal.initializeWithoutLoggingIn(OpenFeintInternal.java:829)
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at com.openfeint.internal.OpenFeintInternal.initialize(OpenFeintInternal.java:852)
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at com.openfeint.api.OpenFeint.initialize(OpenFeint.java:47)
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at nurogames.fastfish.NuroFeint.onCreate(NuroFeint.java:23)
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at nurogames.fastfish.FastFish.onCreate(FastFish.java:47)
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1069)
        03-30 14:39:22.661: E/AndroidRuntime(9029):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2751)


  • Modern Shader Book?

    - by Michael Stum
    I'm interested in learning about shaders: what they are, when/for what I would use them, and how to use them. (Specifically, I'm interested in water and bloom effects, but I know close to zero about shaders, so I need a general introduction.) I saw a lot of books that are a couple of years old, so I don't know if they still apply. I'm targeting XNA 4.0 at the moment (which I believe means HLSL shaders; as far as I can tell, XNA 4.0's HiDef profile tops out at Shader Model 3.0, not 4.0), but anything that generally targets DirectX 11 and OpenGL 4 is helpful, I guess.


  • How can I test if my rotated rectangle intersects a corner?

    - by Raven Dreamer
    I have a square, tile-based collision map. To check if one of my (square) entities is colliding, I get the vertices of its 4 corners and test those 4 points against my collision map. If none of those points are intersecting, I know I'm good to move to the new position. I'd like to allow entities to rotate. I can still calculate the 4 corners of the square, but once you factor in rotation, those 4 corners alone don't seem to be enough information to determine whether the entity is trying to move to a valid state. For example: in the picture below, black is unwalkable terrain and red is the player's hitbox. The left scenario is allowed because the 4 corners of the red square are not in the black terrain. The right scenario would also be (incorrectly) allowed, because the player, cleverly turned at a 45° angle, has its corners in valid spaces, even though it is (quite literally) cutting the corner. How can I detect scenarios similar to the situation on the right?
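    Corner tests miss anything that pokes through an edge. One cheap fix that stays compatible with a point-sampled collision map is to also sample along each edge at a spacing no larger than one tile, so no solid tile can slip between samples. A hedged sketch (IsSolidAt stands in for the existing point-vs-map query; Vector2 is XNA-style):

        bool Collides(Vector2[] corners, float tileSize)
        {
            for (int i = 0; i < 4; i++)
            {
                Vector2 a = corners[i];
                Vector2 b = corners[(i + 1) % 4];
                // Enough samples that adjacent points are < one tile apart.
                int steps = (int)Math.Ceiling(Vector2.Distance(a, b) / tileSize) + 1;
                for (int s = 0; s <= steps; s++)
                {
                    Vector2 p = Vector2.Lerp(a, b, s / (float)steps);
                    if (IsSolidAt(p)) return true;
                }
            }
            return false;
        }

    If an entity can be larger than a tile, also sample the interior at the same spacing (otherwise a solid tile could sit fully inside it), or switch to a tile-overlap test such as SAT against each candidate tile.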


  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for a Unity3D web project. I want to implement something like the great answer by DMGregory to this question, in order to achieve a final look something like this: metaballs with specular and shading. The steps to make this shader are:
    1. Convert the feathered blobs into a heightmap.
    2. Generate a normal map from the heightmap.
    3. Feed the normal map and heightmap into a standard Unity shader, for instance Transparent/Parallax Specular.

    I pretty much have all the pieces I need assembled, but I am new to shaders and need help putting them together. I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here because I don't know if you can access the brightness):

        half4 frag (v2f i) : COLOR
        {
            half4 texcol, finalColor;
            texcol = tex2D(_MainTex, i.uv);
            finalColor = _MyColor;
            if (texcol.r < _botmcut)
            {
                finalColor.r = 0;
            }
            else if (texcol.r > _topcut)
            {
                finalColor.r = 0;
            }
            else
            {
                float r = _topcut - _botmcut;
                float xpos = _topcut - texcol.r;
                finalColor.r = (_botmcut + sqrt((xpos * xpos) - (r * r))) / _constant;
            }
            return finalColor;
        }

    This turns these blobs... into this heightmap.

    Also, I've found some CG code that generates a normal map from a heightmap. The bit that builds the normal from finite differences is here:

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = fixed3(0.5);
            float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex));
            float me = tex2D(_HeightMap, IN.uv_MainTex).x;
            float n = tex2D(_HeightMap, float2(IN.uv_MainTex.x, IN.uv_MainTex.y + 1.0/_HeightmapDimY)).x;
            float s = tex2D(_HeightMap, float2(IN.uv_MainTex.x, IN.uv_MainTex.y - 1.0/_HeightmapDimY)).x;
            float e = tex2D(_HeightMap, float2(IN.uv_MainTex.x - 1.0/_HeightmapDimX, IN.uv_MainTex.y)).x;
            float w = tex2D(_HeightMap, float2(IN.uv_MainTex.x + 1.0/_HeightmapDimX, IN.uv_MainTex.y)).x;

            float3 norm = normal;
            float3 temp = norm;  // a temporary vector that is not parallel to norm
            if (norm.x == 1)
                temp.y += 0.5;
            else
                temp.x += 0.5;

            // form a basis with norm being one of the axes:
            float3 perp1 = normalize(cross(norm, temp));
            float3 perp2 = normalize(cross(norm, perp1));

            // use the basis to move the normal in its own space by the offset
            float3 normalOffset = -_HeightmapStrength *
                ( ((n - me) - (s - me)) * perp1 + ((e - me) - (w - me)) * perp2 );
            norm += normalOffset;
            norm = normalize(norm);
            o.Normal = norm;
        }

    Also, here is the built-in Transparent/Parallax Specular shader for Unity:

        Shader "Transparent/Parallax Specular" {
        Properties {
            _Color ("Main Color", Color) = (1,1,1,1)
            _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0)
            _Shininess ("Shininess", Range (0.01, 1)) = 0.078125
            _Parallax ("Height", Range (0.005, 0.08)) = 0.02
            _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {}
            _BumpMap ("Normalmap", 2D) = "bump" {}
            _ParallaxMap ("Heightmap (A)", 2D) = "black" {}
        }
        SubShader {
            Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
            LOD 600

            CGPROGRAM
            #pragma surface surf BlinnPhong alpha
            #pragma exclude_renderers flash

            sampler2D _MainTex;
            sampler2D _BumpMap;
            sampler2D _ParallaxMap;
            fixed4 _Color;
            half _Shininess;
            float _Parallax;

            struct Input {
                float2 uv_MainTex;
                float2 uv_BumpMap;
                float3 viewDir;
            };

            void surf (Input IN, inout SurfaceOutput o) {
                half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w;
                float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir);
                IN.uv_MainTex += offset;
                IN.uv_BumpMap += offset;

                fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
                o.Albedo = tex.rgb * _Color.rgb;
                o.Gloss = tex.a;
                o.Alpha = tex.a * _Color.a;
                o.Specular = _Shininess;
                o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
            }
            ENDCG
        }
        FallBack "Transparent/Bumped Specular"
        }


  • How to design a character animation system?

    - by Andrea Benedetti
    I'm searching for suggestions and resources on possible ways to design a character animation system. I mean a system built on top of the graphics engine (as the graphics engine I use Ogre3D, which provides an animation layer), and in close contact with the logic of the game. It's for a sports title, so the question is not easy. Edit: What I'm searching for are suggestions and resources about action state machines (or animation state machines) built on top of the animation pipeline already provided by the graphics engine - that is, a state-driven animation interface for use by virtually all higher-level game code.
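    For illustration, a minimal shape for such an action state machine, sketched in C# to match the rest of this page (the play/blend call is a stand-in for Ogre's AnimationState layer; all names are hypothetical):

        using System.Collections.Generic;

        // Each state names a clip plus its legal transitions; gameplay code only
        // requests logical actions ("Run", "Kick") and the machine owns blending.
        class AnimState
        {
            public string Clip;
            public readonly Dictionary<string, string> Transitions = new(); // action -> next state
        }

        class AnimStateMachine
        {
            readonly Dictionary<string, AnimState> states = new();
            AnimState current;

            public void Add(string name, AnimState s) { states[name] = s; current ??= s; }

            // Called by higher-level game code; illegal requests are ignored.
            public void Request(string action)
            {
                if (current.Transitions.TryGetValue(action, out var next))
                {
                    current = states[next];
                    PlayClip(current.Clip, blendTime: 0.15f);
                }
            }

            void PlayClip(string clip, float blendTime)
            {
                /* forward to the engine's animation layer, e.g. Ogre AnimationState */
            }
        }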


  • Better way to generate enemies of different sub-classes

    - by KDiTraglia
    So let's pretend I have an enemy class that has some generic implementation, and inheriting from it I have all the specific enemies of my game. There are points in my code where I need to check whether an enemy is a specific type, but in Java I have found no easier way than this monstrosity...

        // Must be a better way to do this
        if ( Ninja.class.isAssignableFrom(enemy.getClass()) ) {
            ...
        }

    My partner on the project saw these and changed them to use an enum system instead:

        public class Ninja extends Enemy {
            // EnemyType is an enum containing all our enemy types
            public EnemyType enemyType = EnemyType.NINJA;
        }

        if (enemy.enemyType == EnemyType.NINJA) {
            ...
        }

    I also have found no way to generate enemies with varying probabilities besides this:

        for (EnemyType type : enemyTypes) {
            if ( (randomNext = (randomNext - type.getFrequency())) < 0 ) {
                enemy = createEnemy(type);
                break;
            }
        }

        private static Enemy createEnemy(EnemyType type) {
            switch (type) {
                case NINJA:
                    return new Ninja(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                case GORILLA:
                    return new Gorilla(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                case TREX:
                    return new TRex(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                // etc
            }
            return null;
        }

    I know Java is a little weak at dynamic object creation, but is there a better way to implement this, in a way such as this?

        for (EnemyType type : enemyTypes) {
            if ( (randomNext = (randomNext - type.getFrequency())) < 0 ) {
                // Change enemyTypes to hold the classes of the enemies I can spawn
                enemy = type.getEnemyClass().newInstance();
                break;
            }
        }

    Is the above possible? How would I declare enemyTypes to hold the classes if so? Everything I have tried so far has generated compile errors and general frustration, but I figured I might ask here before I completely give up to the huge mass that is the createEveryEnemy() method. All the enemies do inherit from the Enemy class (which is what the enemy variable is declared as). Also, is there a shorter way to check which type a particular enemy is than Ninja.class.isAssignableFrom(enemy.getClass())? I'd like to ditch the enums entirely if possible, since they seem repetitive when the class name itself holds that information.


  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light with falloff and attenuation. It seems to be working OK, except I have a bit of a space problem: whenever I move the camera, the light moves to maintain the same relative position rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces; it's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like: my camera rotating around a point inside a box, with the light fixed at the centre top and its "look at" point in a fixed position in front of it. The camera rotates around the origin (always looking at the centre) - the box itself is not rotating (!) - yet the lighting follows the camera around.

    To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space):

        // Compute eye-space light position.
        Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
        MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);

        // Compute eye-space light direction vector.
        Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
        MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
        MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);

    Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders:

        ///////////////////////////////////////////////////
        // Vertex Shader
        ///////////////////////////////////////////////////
        #version 420

        // Camera.
        layout (std140) uniform Camera
        {
            mat4 Camera_View;
            mat4 Camera_ViewInverseTranspose;
            mat4 Camera_Projection;
        };

        // Matrices per model.
        layout (std140) uniform Model
        {
            mat4 Model_World;
            mat4 Model_WorldView;
            mat4 Model_WorldViewInverseTranspose;
            mat4 Model_WorldViewProjection;
        };

        // Spotlight.
        layout (std140) uniform OmniLight
        {
            float Light_Intensity;
            vec3  Light_Position;
            vec3  Light_Direction;
            vec4  Light_Ambient_Colour;
            vec4  Light_Diffuse_Colour;
            vec4  Light_Specular_Colour;
            float Light_Attenuation_Min;
            float Light_Attenuation_Max;
            float Light_Cone_Min;
            float Light_Cone_Max;
        };

        // Streams (per vertex).
        layout(location = 0) in vec3 attrib_Position;
        layout(location = 1) in vec3 attrib_Normal;
        layout(location = 2) in vec3 attrib_Tangent;
        layout(location = 3) in vec3 attrib_BiNormal;
        layout(location = 4) in vec2 attrib_Texture;

        // Output streams (per vertex).
        out vec3 attrib_Fragment_Normal;
        out vec4 attrib_Fragment_Position;
        out vec2 attrib_Fragment_Texture;
        out vec3 attrib_Fragment_Light;
        out vec3 attrib_Fragment_Eye;

        void main()
        {
            // Transform normal into eye space.
            attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

            // Transform vertex into eye space (world * view * vertex = eye).
            vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);

            // Compute vector from eye-space vertex to light (light is in eye space already).
            attrib_Fragment_Light = Light_Position - position.xyz;

            // Compute vector from the vertex to the eye (which is now at the origin).
            attrib_Fragment_Eye = -position.xyz;

            // Output texture coord.
            attrib_Fragment_Texture = attrib_Texture;

            // Compute vertex position by applying camera projection.
            gl_Position = Camera_Projection * position;
        }

    and the pixel shader:

        ///////////////////////////////////////////////////
        // Pixel Shader
        ///////////////////////////////////////////////////
        #version 420

        uniform sampler2D Map_Diffuse;

        // Material.
        layout (std140) uniform Material
        {
            vec4  Material_Ambient_Colour;
            vec4  Material_Diffuse_Colour;
            vec4  Material_Specular_Colour;
            vec4  Material_Emissive_Colour;
            float Material_Shininess;
            float Material_Strength;
        };

        // Spotlight.
        layout (std140) uniform OmniLight
        {
            float Light_Intensity;
            vec3  Light_Position;
            vec3  Light_Direction;
            vec4  Light_Ambient_Colour;
            vec4  Light_Diffuse_Colour;
            vec4  Light_Specular_Colour;
            float Light_Attenuation_Min;
            float Light_Attenuation_Max;
            float Light_Cone_Min;
            float Light_Cone_Max;
        };

        // Input streams (per vertex).
        in vec3 attrib_Fragment_Normal;
        in vec3 attrib_Fragment_Position;
        in vec2 attrib_Fragment_Texture;
        in vec3 attrib_Fragment_Light;
        in vec3 attrib_Fragment_Eye;

        // Result.
        out vec4 Out_Colour;

        void main(void)
        {
            // Compute N dot L.
            vec3 N = normalize(attrib_Fragment_Normal);
            vec3 L = normalize(attrib_Fragment_Light);
            vec3 E = normalize(attrib_Fragment_Eye);
            vec3 H = normalize(L + E);
            float NdotL = clamp(dot(L, N), 0.0, 1.0);
            float NdotH = clamp(dot(N, H), 0.0, 1.0);

            // Compute ambient term.
            vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

            // Diffuse.
            vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture)
                         * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

            // Specular.
            float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
            vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

            // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
            float d = length(-attrib_Fragment_Light);
            float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);

            // Adjust attenuation based on light cone.
            float LdotS = dot(-L, Light_Direction),
                  CosI  = Light_Cone_Min - Light_Cone_Max;
            attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

            // Final colour.
            Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
        }
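    A hedged observation on the CPU-side code above: the light position goes through ViewMatrix() but the direction goes through ViewMatrixInverseTranspose(), and if TransformNormal returns its result rather than mutating the argument, the transformed direction is discarded entirely. For a rigid view matrix (rotation + translation, no scaling), the consistent choice is to push both through the same view transform each frame, the position as a point (w = 1) and the direction as a vector (w = 0). An XNA-style sketch of the idea (shader.Set is a stand-in for the poster's MyShaderVariables):

        // Recompute both every frame, after the camera moves, with the SAME matrix.
        Matrix view = camera.ViewMatrix;

        Vector3 eyeSpacePosition = Vector3.Transform(lightPosition, view);        // point, w = 1
        Vector3 eyeSpaceDirection = Vector3.TransformNormal(                      // vector, w = 0
            Vector3.Normalize(lightLookAt - lightPosition), view);

        shader.Set("Light_Position", eyeSpacePosition);
        shader.Set("Light_Direction", Vector3.Normalize(eyeSpaceDirection));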

