Search Results

Search found 28914 results on 1157 pages for 'cloud development'.


  • Appropriate level of granularity for component-based architecture

    - by Jon Purdy
    I'm working on a game with a component-based architecture. An Entity owns a set of Component instances, each of which has a set of Slot instances with which to store, send, and receive values. Factory functions such as Player produce entities with the required components and slot connections. I'm trying to determine the best level of granularity for components. For example, right now Position, Velocity, and Acceleration are all separate components, connected in series. Velocity and Acceleration could easily be rewritten into a uniform Delta component, or Position, Velocity, and Acceleration could be combined alongside such components as Friction and Gravity into a monolithic Physics component. Should a component have the smallest responsibility possible (at the cost of lots of interconnectivity) or should related components be combined into monolithic ones (at the cost of flexibility)? I'm leaning toward the former, but I could use a second opinion.
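
    For a concrete feel of the trade-off, a minimal sketch of the fine-grained style (hypothetical names; the real Slot wiring is this project's own): each component stays tiny, and a system owns the knowledge that they are chained in series, so the interconnection cost lives in one place instead of inside a monolithic Physics component.

        class Position     { public float X, Y; }
        class Velocity     { public float DX, DY; }
        class Acceleration { public float DDX, DDY; }

        static class Integrator
        {
            // Only this system knows Position <- Velocity <- Acceleration.
            public static void Step(Position p, Velocity v, Acceleration a, float dt)
            {
                v.DX += a.DDX * dt; v.DY += a.DDY * dt;
                p.X  += v.DX * dt;  p.Y  += v.DY * dt;
            }
        }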

    Read the article

  • Breakout clone, how to handle/design for collision detection/physics between objects?

    - by Zolomon
    I'm working on a breakout clone, and I wish to create some realistic physics effects for collisions - the angle at which the ball hits the paddle should affect how it bounces, and allow for curve balls, etc. I could use per-pixel collision detection, but then I thought it might be easier with line/circle intersection testing. So I naturally considered making a polygon class for the line-based objects and using the built-in circle class for the circular objects. That sounds like an OK approach, right? And then I'd just check for collisions using the appropriate algorithm for whichever objects are within each other's range?
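
    For reference, a sketch of the circle-versus-segment test this describes (assuming XNA's Vector2; any 2D vector type with a dot product works the same way):

        // Does a circle at c with radius r touch the segment from a to b?
        // Assumes a != b.
        static bool CircleIntersectsSegment(Vector2 c, float r, Vector2 a, Vector2 b)
        {
            Vector2 ab = b - a;
            // Project the circle center onto the segment, clamped to [0, 1].
            float t = Vector2.Dot(c - a, ab) / Vector2.Dot(ab, ab);
            if (t < 0f) t = 0f; else if (t > 1f) t = 1f;
            Vector2 closest = a + ab * t;
            return (c - closest).LengthSquared() <= r * r;
        }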

    Read the article

  • Method for spawning enemies according to player score and game time

    - by Sun
    I'm making a top-down shooter and want to scale the difficulty of the game according to what the score is and how much time has passed. Along with this, I want to spawn enemies in different patterns and increase the intervals at which these enemies appear. I'm going for a similar effect to Geometry Wars. However, I can't think of a way to do this other than having multiple if-else statements, e.g.: if (score > 1000) { //spawn x amount of enemies } else if (score > 10000) { //spawn x amount of enemy type 1 & 2 } else if (score > 15000) { //spawn x amount of enemy type 1 & 2 & 3 } else if (score > 25000) { //spawn x amount of enemy type 1 & 2 & 3 //create patterns with enemies } ...etc. What would be a better method of spawning enemies as I have described?
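
    One common alternative to the if/else chain is a data-driven spawn table: each tier records the score it unlocks at, which enemy types it allows, and its spawn interval, and the spawner just scans for the highest unlocked tier. A sketch (all names and numbers invented for illustration); note that the chain above tests score > 1000 first, so the higher-score branches can never run - a table sorted by threshold avoids that ordering pitfall:

        struct SpawnTier
        {
            public int MinScore;       // score at which this tier unlocks
            public int[] EnemyTypes;   // which enemy types may spawn
            public float Interval;     // seconds between spawns at this tier
        }

        static readonly SpawnTier[] Tiers =
        {
            new SpawnTier { MinScore = 0,     EnemyTypes = new[] { 1 },       Interval = 2.0f },
            new SpawnTier { MinScore = 10000, EnemyTypes = new[] { 1, 2 },    Interval = 1.5f },
            new SpawnTier { MinScore = 15000, EnemyTypes = new[] { 1, 2, 3 }, Interval = 1.0f },
        };

        static SpawnTier CurrentTier(int score)
        {
            SpawnTier best = Tiers[0];
            foreach (var t in Tiers)            // Tiers is sorted by MinScore
                if (score >= t.MinScore) best = t;
            return best;
        }

    Time-based difficulty fits the same shape: make the lookup key a function of score and elapsed time instead of score alone.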

    Read the article

  • Spritebatch drawing sprite with jagged borders

    - by Mutoh
    Alright, I've been on the making of a sprite class and a sprite sheet manager, but have come across this problem. Pretty much, the project is acting like so; for example: Let's take this .png image, with a transparent background. Note how it has alpha-transparent pixels around it in the lineart. Now, in the latter link's image, in the left (with CornflowerBlue background) it is shown the image drawn in another project (let's call it "Project1") with a simpler sprite class - there, it works. The right (with Purple background for differentiating) shows it drawn with a different class in "Project2" - where the problem manifests itself. This is the Sprite class of Project1: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; namespace WindowsGame2 { class Sprite { Vector2 pos = new Vector2(0, 0); Texture2D image; Rectangle size; float scale = 1.0f; // --- public float X { get { return pos.X; } set { pos.X = value; } } public float Y { get { return pos.Y; } set { pos.Y = value; } } public float Width { get { return size.Width; } } public float Height { get { return size.Height; } } public float Scale { get { return scale; } set { if (value < 0) value = 0; scale = value; if (image != null) { size.Width = (int)(image.Width * scale); size.Height = (int)(image.Height * scale); } } } // --- public void Load(ContentManager Man, string filename) { image = Man.Load<Texture2D>(filename); size = new Rectangle( 0, 0, (int)(image.Width * scale), (int)(image.Height * scale) ); } public void Become(Texture2D frame) { image = frame; size = new Rectangle( 0, 0, (int)(image.Width * scale), (int)(image.Height * scale) ); } public void Draw(SpriteBatch Desenhista) { // Desenhista.Draw(image, pos, Color.White); Desenhista.Draw( image, pos, new Rectangle( 0, 0, image.Width, image.Height ), Color.White, 0.0f, Vector2.Zero, scale, SpriteEffects.None, 0 ); } } } And this is the code in Project2, a rewritten, pretty much, version of the previous class. In this one I added sprite sheet managing and, in particular, removed Load and Become, to allow for static resources and only actual Sprites to be instantiated. 
using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; namespace Mobby_s_Adventure { // Actually, I might desconsider this, and instead use static AnimationLocation[] and instanciated ID and Frame; // For determining the starting frame of an animation in a sheet and being able to iterate through // the Rectangles vector of the Sheet; class AnimationLocation { public int Location; public int FrameCount; // --- public AnimationLocation(int StartingRow, int StartingColumn, int SheetWidth, int NumberOfFrames) { Location = (StartingRow * SheetWidth) + StartingColumn; FrameCount = NumberOfFrames; } public AnimationLocation(int PositionInSheet, int NumberOfFrames) { Location = PositionInSheet; FrameCount = NumberOfFrames; } public static int CalculatePosition(int StartingRow, int StartingColumn, SheetManager Sheet) { return ((StartingRow * Sheet.Width) + StartingColumn); } } class Sprite { // The general stuff; protected SheetManager Sheet; protected Vector2 Position; public Vector2 Axis; protected Color _Tint; public float Angle; public float Scale; protected SpriteEffects _Effect; // --- // protected AnimationManager Animation; // For managing the animations; protected AnimationLocation[] Animation; public int AnimationID; protected int Frame; // --- // Properties for easy accessing of the position of the sprite; public float X { get { return Position.X; } set { Position.X = Axis.X + value; } } public float Y { get { return Position.Y; } set { Position.Y = Axis.Y + value; } } // --- // Properties for knowing the size of the sprite's frames public float Width { get { return Sheet.FrameWidth * Scale; } } public float Height { get { return Sheet.FrameHeight * Scale; } } // --- // Properties for more stuff; public Color Tint { set { _Tint = value; } } public SpriteEffects Effect { set { _Effect = value; } } public int FrameID { get { return Frame; } set { if (value >= (Animation[AnimationID].FrameCount)) value = 0; Frame = value; } } // --- // The only things that will be constantly modified will be AnimationID and FrameID, anything else only // occasionally; public Sprite(SheetManager SpriteSheet, AnimationLocation[] Animations, Vector2 Location, Nullable<Vector2> Origin = null) { // Assign the sprite's sprite sheet; // (Passed by reference! To allow STATIC sheets!) Sheet = SpriteSheet; // Define the animations that the sprite has available; // (Passed by reference! To allow STATIC animation boundaries!) 
Animation = Animations; // Defaulting some numerical values; Angle = 0.0f; Scale = 1.0f; _Tint = Color.White; _Effect = SpriteEffects.None; // If the user wants a default Axis, it is set in the middle of the frame; if (Origin != null) Axis = Origin.Value; else Axis = new Vector2( Sheet.FrameWidth / 2, Sheet.FrameHeight / 2 ); // Now that we have the axis, we can set the position with no worries; X = Location.X; Y = Location.Y; } // Simply put, draw the sprite with all its characteristics; public void Draw(SpriteBatch Drafter) { Drafter.Draw( Sheet.Texture, Position, Sheet.Rectangles[Animation[AnimationID].Location + FrameID], // Find the rectangle which frames the wanted image; _Tint, Angle, Axis, Scale, _Effect, 0.0f ); } } } And, in any case, this is the SheetManager class found in the previous code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; namespace Mobby_s_Adventure { class SheetManager { protected Texture2D SpriteSheet; // For storing the sprite sheet; // Number of rows and frames in each row in the SpriteSheet; protected int NumberOfRows; protected int NumberOfColumns; // Size of a single frame; protected int _FrameWidth; protected int _FrameHeight; public Rectangle[] Rectangles; // For storing each frame; // --- public int Width { get { return NumberOfColumns; } } public int Height { get { return NumberOfRows; } } // --- public int FrameWidth { get { return _FrameWidth; } } public int FrameHeight { get { return _FrameHeight; } } // --- public Texture2D Texture { get { return SpriteSheet; } } // --- public SheetManager (Texture2D Texture, int Rows, int FramesInEachRow) { // Normal assigning SpriteSheet = Texture; NumberOfRows = Rows; NumberOfColumns = FramesInEachRow; _FrameHeight = Texture.Height / NumberOfRows; _FrameWidth = Texture.Width / NumberOfColumns; // Framing everything Rectangles = new Rectangle[NumberOfRows * NumberOfColumns]; int ID = 0; for (int i = 0; i < NumberOfRows; i++) { for (int j = 0; j < NumberOfColumns; j++) { Rectangles[ID] = new Rectangle ( _FrameWidth * j, _FrameHeight * i, _FrameWidth, _FrameHeight ); ID++; } } } public SheetManager (Texture2D Texture, int NumberOfFrames): this(Texture, 1, NumberOfFrames) { } } } For even more comprehending, if needed, here is how the main code looks like (it's just messing with the class' capacities, nothing actually; the result is a disembodied feet walking in place animation on the top-left of the screen and a static axe nearby): using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; using System.Threading; namespace Mobby_s_Adventure { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; static List<Sprite> ToDraw; static Texture2D AxeSheet; static Texture2D FeetSheet; static SheetManager Axe; static Sprite Jojora; static AnimationLocation[] Hack = new AnimationLocation[1]; static SheetManager Feet; static Sprite Mutoh; static AnimationLocation[] FeetAnimations = new AnimationLocation[2]; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory 
= "Content"; this.TargetElapsedTime = TimeSpan.FromMilliseconds(100); this.IsFixedTimeStep = true; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // Loading logic ToDraw = new List<Sprite>(); AxeSheet = Content.Load<Texture2D>("Sheet"); FeetSheet = Content.Load<Texture2D>("Feet Sheet"); Axe = new SheetManager(AxeSheet, 1); Hack[0] = new AnimationLocation(0, 1); Jojora = new Sprite(Axe, Hack, new Vector2(100, 100), new Vector2(5, 55)); Jojora.AnimationID = 0; Jojora.FrameID = 0; Feet = new SheetManager(FeetSheet, 8); FeetAnimations[0] = new AnimationLocation(1, 7); FeetAnimations[1] = new AnimationLocation(0, 1); Mutoh = new Sprite(Feet, FeetAnimations, new Vector2(0, 0)); Mutoh.AnimationID = 0; Mutoh.FrameID = 0; } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // Update logic Mutoh.FrameID++; ToDraw.Add(Mutoh); ToDraw.Add(Jojora); base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Purple); // Drawing logic spriteBatch.Begin(); foreach (Sprite Element in ToDraw) { Element.Draw(spriteBatch); } spriteBatch.Draw(Content.Load<Texture2D>("Sheet"), new Rectangle(50, 50, 55, 60), Color.White); spriteBatch.End(); base.Draw(gameTime); } } } Please help me find out what I'm overlooking! One thing that I have noticed and could aid is that, if inserted the equivalent of this code spriteBatch.Draw( Content.Load<Texture2D>("Image Location"), new Rectangle(X, Y, images width, height), Color.White ); in Project2's Draw(GameTime) of the main loop, it works. EDIT Ok, even if the matter remains unsolved, I have made some more progress! As you see, I managed to get the two kinds of rendering in the same project (the aforementioned Project2, with the more complex Sprite class). 
This was achieved by adding the following code to Draw(GameTime): protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Purple); // Drawing logic spriteBatch.Begin(); foreach (Sprite Element in ToDraw) { Element.Draw(spriteBatch); } // Starting here spriteBatch.Draw( Axe.Texture, new Vector2(65, 100), new Rectangle ( 0, 0, Axe.FrameWidth, Axe.FrameHeight ), Color.White, 0.0f, new Vector2(0, 0), 1.0f, SpriteEffects.None, 0.0f ); // Ending here spriteBatch.End(); base.Draw(gameTime); } (Supposing that Axe is the SheetManager containing the texture, sorry if the "jargons" of my code confuse you :s) Thus, I have noticed that the problem is within the Sprite class. But I only get more clueless, because even after modifying its Draw function to this: public void Draw(SpriteBatch Drafter) { /*Drafter.Draw( Sheet.Texture, Position, Sheet.Rectangles[Animation[AnimationID].Location + FrameID], // Find the rectangle which frames the wanted image; _Tint, Angle, Axis, Scale, _Effect, 0.0f );*/ Drafter.Draw( Sheet.Texture, Position, new Rectangle( 0, 0, Sheet.FrameWidth, Sheet.FrameHeight ), Color.White, 0.0f, Vector2.Zero, Scale, SpriteEffects.None, 0 ); } to make it as simple as the patch of code that works, it still draws the sprite jaggedly!
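
    One thing worth ruling out (an educated guess from the symptoms, not a confirmed diagnosis): XNA 4's content pipeline premultiplies alpha by default, and a parameterless SpriteBatch.Begin() uses BlendState.AlphaBlend, which assumes premultiplied textures. If the two projects differ in the content processor's "Premultiply Alpha" setting, or one loads the PNG outside the pipeline, the feathered fringe pixels will ring exactly like in the right-hand screenshot. A quick test in Project2's Draw:

        // If the texture's alpha is NOT premultiplied, tell SpriteBatch so:
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied);
        foreach (Sprite element in ToDraw)
            element.Draw(spriteBatch);
        spriteBatch.End();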

    Read the article

  • Physics System ignores collision in some rare cases

    - by Gajoo
    I've been developing a simple physics engine for my game. Since the game physics is very simple, I've decided to increase accuracy a little bit. Instead of formal integration methods like Fourier or RK4, I'm directly computing the results after delta time "dt", based on the very first laws of physics: dx = 0.5 * a * dt^2 + v0 * dt and dv = a * dt, where a is acceleration and v0 is the object's previous velocity. Also, to handle collisions I've used a method which is somewhat different from those I've seen so far. I'm detecting all the collisions in the given time frame, stepping the world forward to the nearest collision, resolving it, and checking again for possible collisions. As I said, the world consists of very simple objects, so I'm not losing any performance due to multiple collision checks. First I'm checking if the ball collides with any walls around it (which is working perfectly) and then I'm checking if it collides with the edges of the walls (yellow points in the picture). The algorithm seems to work without any problem, except in some rare cases in which the collisions with points are ignored. I've tested everything and all the variables seem to be what they should be, but after letting the system run for a minute or two, the ball passes through one of those points. Here is the collision portion of my code; hopefully one of you can give me a hint about where to look for a potential bug! void PhysicalWorld::checkForPointCollision(Vec2 acceleration, PhysicsComponent& ball, Vec2& collisionNormal, float& collisionTime, Vec2 target) { // this function checks if there will be any collision between a circle and a point // ball contains information about the circle (its current velocity, position and radius) // collisionNormal is an output variable // collisionTime is also an output variable // target is the point I want to check for collisions Vec2 V = ball.mVelocity; Vec2 A = acceleration; Vec2 P = ball.mPosition - target; float wallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2; float r = ball.mRadius / (mMap->getWallWidth() + mMap->getHallWidth()); // r is ball radius scaled to match actual rendered object. if (A.any()) // todo : I need to first correctly solve the collisions in case there is no acceleration return; if (V.any()) // if object is not moving there will be no collisions! { float D = P.x * V.y - P.y * V.x; float Delta = r*r*V.length2() - D*D; if(Delta < eps) return; Delta = sqrt(Delta); float sgnvy = V.y > 0 ? 1: (V.y < 0?-1:0); Vec2 c1(( D*V.y+sgnvy*V.x*Delta) / V.length2(), (-D*V.x+fabs(V.y)*Delta) / V.length2()); Vec2 c2(( D*V.y-sgnvy*V.x*Delta) / V.length2(), (-D*V.x-fabs(V.y)*Delta) / V.length2()); float t1 = (c1.x - P.x) / V.x; float t2 = (c2.x - P.x) / V.x; if(t1 > eps && t1 <= collisionTime) { collisionTime = t1; collisionNormal = c1; } if(t2 > eps && t2 <= collisionTime) { collisionTime = t2; collisionNormal = c2; } } } // this function should step the world forward by dt. it doesn't check for collision of any two balls (components) // it just checks if there is a collision between the current component and 4 points forming a rectangle around it.
void PhysicalWorld::step(float dt) { for (unsigned i=0;i<mObjects.size();i++) { PhysicsComponent &current = *mObjects[i]; Vec2 acceleration = current.mForces * current.mInvMass; float rt=dt; // stores how much more the world should advance while(rt > eps) { float collisionTime = rt; Vec2 collisionNormal = Vec2(0,0); float halfWallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2; // we check if there is any collision with any of those 4 points around the ball // if there is a collision both collisionNormal and collisionTime variables will change // after these functions collisionTime will be exactly the value of nearest collision (if any) // and if there was, collisionNormal will report in which direction the ball should return. checkForPointCollision(acceleration,current,collisionNormal,collisionTime,Vec2(floor(current.mPosition.x) + halfWallWidth,floor(current.mPosition.y) + halfWallWidth)); checkForPointCollision(acceleration,current,collisionNormal,collisionTime,Vec2(floor(current.mPosition.x) + halfWallWidth, ceil(current.mPosition.y) - halfWallWidth)); checkForPointCollision(acceleration,current,collisionNormal,collisionTime,Vec2( ceil(current.mPosition.x) - halfWallWidth,floor(current.mPosition.y) + halfWallWidth)); checkForPointCollision(acceleration,current,collisionNormal,collisionTime,Vec2( ceil(current.mPosition.x) - halfWallWidth, ceil(current.mPosition.y) - halfWallWidth)); // either if there is a collision or if there is not we step the forward since we are sure there will be no collision before collisionTime current.mPosition += collisionTime * (collisionTime * acceleration * 0.5 + current.mVelocity); current.mVelocity += collisionTime * acceleration; // if the ball collided with anything collisionNormal should be at least none zero in one of it's axis if (collisionNormal.any()) { collisionNormal *= Dot(collisionNormal, current.mVelocity) / collisionNormal.length2(); current.mVelocity -= 2 * collisionNormal; // simply reverse velocity along collision normal direction } rt -= collisionTime; } // reset all forces for current object so it'll be ready for later game event current.mForces.zero(); } }
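
    One fragile spot in the code above (a place to look, not a verified fix): the collision times come from t = (c.x - P.x) / V.x, which loses precision or produces Inf/NaN whenever V.x is near zero - e.g. when the ball moves almost vertically past a corner point, which would match an intermittent, rare miss. The same times can be recovered without that division by solving |P + V t|^2 = r^2 directly; a sketch in C# (Vec2 swapped for XNA's Vector2):

        // (V.V) t^2 + 2 (P.V) t + (P.P - r^2) = 0
        static bool EarliestHit(Vector2 P, Vector2 V, float r, out float t)
        {
            float a = Vector2.Dot(V, V);
            float b = 2f * Vector2.Dot(P, V);
            float c = Vector2.Dot(P, P) - r * r;
            float disc = b * b - 4f * a * c;
            t = 0f;
            if (a < 1e-8f || disc < 0f) return false;   // not moving, or no hit
            float s = (float)Math.Sqrt(disc);
            float t0 = (-b - s) / (2f * a);             // earlier root first
            float t1 = (-b + s) / (2f * a);
            t = t0 > 1e-6f ? t0 : t1;
            return t > 1e-6f;
        }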

    Read the article

  • Entity Component System for HUD and GUI

    - by Jason L.
    This is a very rough sketch of how I currently have things designed. It should, at least, give an idea of how my ECS is currently designed. If you notice in that diagram, I have basically split the HUD out of the ECS. They have their own set of things (HudLayer, HudComponent, etc) and are handled differently. This is where I'm struggling, though. There are many different instances in which the HUD will need to know about entities. Not just data changing (I have an event dispatcher for that), but the actual entity and all it encompasses. There are also situations where entities will need to be able to query the HUD for data. Let's take a couple examples: First, my equipment screen. On here I can change the equipment on a character (Entity). In order for this to happen, I need to know about the entity. At least I think I do? How can I handle this? The second scenario involves my Systems needing to query a HudComponent for data. A specific example would be my battle system. Each "team" is given a 3x3 grid they can move around in. See here: Skills target these cells, and not the player, so I would need a way for my systems to determine which cells are occupied and which are not. Basically I need a way for two way communication between Systems and my HUD. I know it's recommended (by some people, anyways) to take your HUD out of the ECS. Is that appropriate in my case?
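
    One middle ground that keeps the HUD outside the ECS: let the two sides talk only through narrow query interfaces plus the existing event dispatcher, so neither side holds the other's objects. A rough sketch (all names hypothetical):

        // What a HUD screen may ask of the world (the equipment-screen case).
        interface IWorldQuery
        {
            IReadOnlyList<int> GetEquippedItemIds(int entityId);
        }

        // What a system may ask of the HUD (the battle-grid case).
        interface IBattleGridQuery
        {
            bool IsCellOccupied(int col, int row);   // the 3x3 grid
        }

    The HUD renders from IWorldQuery results and dispatched events; the battle system consults IBattleGridQuery without knowing it is implemented by a HudComponent.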

    Read the article

  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for unity3d web project. I want to implement something like in the great answer by DMGregory in this question. in order to achieve a final look something like this.. Its metaballs with specular and shading. The steps to make this shader are. 1. Convert the feathered blobs into a heightmap. 2. Generate a normalmap from the heightmap 3. Feed the normal map and height map into a standard unity shader, for instance transparent parallax specular. I pretty much have all the pieces I need assembled but I am new to shaders and need help putting them together I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here cus i dont know if you can access the brightness) half4 frag (v2f i) : COLOR{ half4 texcol,finalColor; texcol = tex2D (_MainTex, i.uv); finalColor=_MyColor; if(texcol.r<_botmcut) { finalColor.r= 0; } else if((texcol.r>_topcut)) { finalColor.r= 0; } else { float r = _topcut-_botmcut; float xpos = _topcut - texcol.r; finalColor.r= (_botmcut + sqrt((xpos*xpos)-(r*r)))/_constant; } return finalColor; } turns these blobs.. into this heightmap Also I've found some CG code that generates a normal map from a height map. The bit of code that makes the normal map from finite differences is here void surf (Input IN, inout SurfaceOutput o) { o.Albedo = fixed3(0.5); float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex)); float me = tex2D(_HeightMap,IN.uv_MainTex).x; float n = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y+1.0/_HeightmapDimY)).x; float s = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y-1.0/_HeightmapDimY)).x; float e = tex2D(_HeightMap,float2(IN.uv_MainTex.x-1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float w = tex2D(_HeightMap,float2(IN.uv_MainTex.x+1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float3 norm = normal; float3 temp = norm; //a temporary vector that is not parallel to norm if(norm.x==1) temp.y+=0.5; else temp.x+=0.5; //form a basis with norm being one of the axes: float3 perp1 = normalize(cross(norm,temp)); float3 perp2 = normalize(cross(norm,perp1)); //use the basis to move the normal in its own space by the offset float3 normalOffset = -_HeightmapStrength * ( ( (n-me) - (s-me) ) * perp1 + ( ( e - me ) - ( w - me ) ) * perp2 ); norm += normalOffset; norm = normalize(norm); o.Normal = norm; } Also here is the built-in transparent parallax specular shader for unity. 
Shader "Transparent/Parallax Specular" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0) _Shininess ("Shininess", Range (0.01, 1)) = 0.078125 _Parallax ("Height", Range (0.005, 0.08)) = 0.02 _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {} _BumpMap ("Normalmap", 2D) = "bump" {} _ParallaxMap ("Heightmap (A)", 2D) = "black" {} } SubShader { Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"} LOD 600 CGPROGRAM #pragma surface surf BlinnPhong alpha #pragma exclude_renderers flash sampler2D _MainTex; sampler2D _BumpMap; sampler2D _ParallaxMap; fixed4 _Color; half _Shininess; float _Parallax; struct Input { float2 uv_MainTex; float2 uv_BumpMap; float3 viewDir; }; void surf (Input IN, inout SurfaceOutput o) { half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w; float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir); IN.uv_MainTex += offset; IN.uv_BumpMap += offset; fixed4 tex = tex2D(_MainTex, IN.uv_MainTex); o.Albedo = tex.rgb * _Color.rgb; o.Gloss = tex.a; o.Alpha = tex.a * _Color.a; o.Specular = _Shininess; o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap)); } ENDCG } FallBack "Transparent/Bumped Specular" }
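
    As for wiring the passes together: in Unity this is usually done on the C# side with temporary render textures and Graphics.Blit, one material per shader above. A sketch (material and property names are placeholders):

        // blobs -> heightmap -> normal map -> parallax specular surface
        void Compose(RenderTexture blobs)
        {
            RenderTexture height  = RenderTexture.GetTemporary(blobs.width, blobs.height);
            RenderTexture normals = RenderTexture.GetTemporary(blobs.width, blobs.height);

            Graphics.Blit(blobs, height, heightmapMaterial);    // pass 1: fragment shader above
            Graphics.Blit(height, normals, normalmapMaterial);  // pass 2: finite differences

            finalMaterial.SetTexture("_BumpMap", normals);      // pass 3 samples both
            finalMaterial.SetTexture("_ParallaxMap", height);

            RenderTexture.ReleaseTemporary(height);
            RenderTexture.ReleaseTemporary(normals);
        }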

    Read the article

  • Writing to a D3DFMT_R32F render target clamps to 1

    - by Mike
    I'm currently implementing a picking system. I render some objects in a frame buffer, which has a render target, which has the D3DFMT_R32F format. For each mesh, I set an integer constant evaluator, which is its material index. My shader is simple: I output the position of each vertex, and for each pixel, I cast the material index in float, and assign this value to the Red channel: int ObjectIndex; float4x4 WvpXf : WorldViewProjection< string UIWidget = "None"; >; struct VS_INPUT { float3 Position : POSITION; }; struct VS_OUTPUT { float4 Position : POSITION; }; struct PS_OUTPUT { float4 Color : COLOR0; }; VS_OUTPUT VSMain( const VS_INPUT input ) { VS_OUTPUT output = (VS_OUTPUT)0; output.Position = mul( float4(input.Position, 1), WvpXf ); return output; } PS_OUTPUT PSMain( const VS_OUTPUT input, in float2 vpos : VPOS ) { PS_OUTPUT output = (PS_OUTPUT)0; output.Color.r = float( ObjectIndex ); output.Color.gba = 0.0f; return output; } technique Default { pass P0 { VertexShader = compile vs_3_0 VSMain(); PixelShader = compile ps_3_0 PSMain(); } } The problem I have, is that somehow, the values written in the render target are clamped between 0.0f and 1.0f. I've tried to change the rendertarget format, but I always get clamped values... I don't know what the root of the problem is. For information, I have a depth render target attached to the frame buffer. I disabled the blend in the render state the stencil is disabled Any ideas?

    Read the article

  • Creating models in 3ds max and exporting as .x for XNA

    - by Sweta Dwivedi
    I have created a few models in 3ds Max which contain textures, geometry and animations. However, .fbx doesn't really support textures, so I'm planning to use the .x format. I have seen a few converters from Pandasoft, but once I unzip the file and place the .dle file in the plugins folder of 3ds Max, it gives an error saying it failed to initialize. Is there any way to convert my .max models into .x format? I don't know Blender, so that isn't an option. I'm currently using 3ds Max 2013. After adding the .3DS object content importer, I get the following error:

    Read the article

  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It has been frequently used in animated short subjects, such as those in the Looney Tunes and Merrie Melodies cartoon series, to signify the end of a story. When used in this manner, the iris wipe may be centered around a certain focal point and may be used as a device for a "parting shot" joke, a fourth-wall-breaching wink by a character, or other purposes. Example from flasheff.com Your answer may or may not include a coding sample; a language-agnostic explanation is considered enough.
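
    A shader-free way to get the effect (a sketch, XNA-style; maskTexture is assumed to be a large opaque black texture, much bigger than the screen, with an anti-aliased transparent circle punched in its center): draw the scene normally, then draw the mask centered on the focal point, scaling it down over time so the hole shrinks to nothing.

        // t runs from 1 (hole covers the screen) down to 0 (fully closed).
        void DrawIrisWipe(SpriteBatch batch, Texture2D maskTexture,
                          Vector2 focalPoint, float t)
        {
            Vector2 center = new Vector2(maskTexture.Width / 2f, maskTexture.Height / 2f);
            float scale = MathHelper.Max(t, 0.001f) * 4f;   // keep the mask covering the screen
            batch.Draw(maskTexture, focalPoint, null, Color.White,
                       0f, center, scale, SpriteEffects.None, 0f);
        }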

    Read the article

  • How is this lighting effect done?

    - by Mike
    This is the most beautiful 2D lighting I have ever seen. Does anyone know how he went about doing it? http://www.youtube.com/watch?v=BIQRhOFkvQY http://www.youtube.com/watch?v=tnTYXPuecMs http://www.youtube.com/watch?v=rhC_jVM8IYU http://www.youtube.com/watch?v=_Aw5BdjWqqU Or download it here: http://grantkot.com/PollutedPlanet/publish.htm Edit: I am not asking how the particles are simulated; I don't care about the physics.

    Read the article

  • Component-based design: handling objects interaction

    - by Milo
    I'm not sure how exactly objects do things to other objects in a component-based design. Say I have an Obj class. I do: Obj obj; obj.add(new Position()); obj.add(new Physics()); How could I then have another object not only move this object but also have those physics applied? I'm not looking for implementation details, but rather, abstractly, how objects communicate. In an entity-based design, you might just have: obj1.emitForceOn(obj2,5.0,0.0,0.0); Any article or explanation that would give me a better grasp of component-driven design and how to do basic things would be really helpful.
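
    Abstractly, the two usual answers are message passing between entities, or fetching another entity's component through a typed lookup and talking to it directly. A minimal sketch of the second, matching the Obj class above (Get<T> and ApplyForce are hypothetical):

        // obj1's behavior asks obj2 for its Physics component.
        Physics target = obj2.Get<Physics>();
        if (target != null)
            target.ApplyForce(5.0f, 0.0f, 0.0f);

    The message-passing variant keeps the coupling looser - obj1 posts a ForceMessage addressed to obj2, and whichever components care subscribe to it - which is essentially what the emitForceOn call hides.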

    Read the article

  • How do I draw anti-aliased holes in a bitmap

    - by gyozo kudor
    I have an artillery game (a hobby/learning project), and when the projectile hits, it leaves a hole in the ground. I want this hole to have anti-aliased edges. I'm using System.Drawing for this. I've tried clipping paths, and drawing with a transparent color using gfx.CompositingMode = CompositingMode.SourceCopy, but it gives me the same result. If I draw a circle with a solid color it works fine, but I need a hole - a circle with 0 alpha values. I have enabled these, but they work only with solid colors: gfx.CompositingQuality = CompositingQuality.HighQuality; gfx.InterpolationMode = InterpolationMode.HighQualityBicubic; gfx.SmoothingMode = SmoothingMode.AntiAlias; In the two pictures, consider black as being transparent. This is what I have (zoomed in): And what I need is something like this (made with Photoshop): This will be just a visual effect; in code, for collision detection, I still treat everything with alpha 128 as solid. Edit: I'm using OpenTK for this game, but for this question I don't think that really matters; it is probably GDI+-related.
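
    One approach that works around GDI+'s behavior here (a sketch; there may be simpler routes): render the anti-aliased circle into its own ARGB mask bitmap, then subtract the mask's alpha from the terrain's alpha per pixel, so the circle's smooth coverage becomes a smooth hole. GetPixel/SetPixel keeps the sketch short; LockBits would be the fast version.

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        static void PunchHole(Bitmap terrain, float cx, float cy, float r)
        {
            using (var mask = new Bitmap(terrain.Width, terrain.Height, PixelFormat.Format32bppArgb))
            {
                using (var g = Graphics.FromImage(mask))
                {
                    g.SmoothingMode = SmoothingMode.AntiAlias;
                    g.FillEllipse(Brushes.White, cx - r, cy - r, 2 * r, 2 * r);
                }
                for (int y = 0; y < terrain.Height; y++)
                    for (int x = 0; x < terrain.Width; x++)
                    {
                        int m = mask.GetPixel(x, y).A;
                        if (m == 0) continue;
                        Color c = terrain.GetPixel(x, y);
                        // terrainAlpha *= (1 - maskAlpha/255): smooth edge, full hole inside.
                        terrain.SetPixel(x, y, Color.FromArgb(c.A * (255 - m) / 255, c));
                    }
            }
        }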

    Read the article

  • Adding sub-entities to existing entities. Should it be done in the Entity and Component classes?

    - by Coyote
    I'm in a situation where a player can be given control of small parts of an entity (e.g. the left missile battery). Therefore I started implementing sub-entities as follows. Entities are objects with 3 arrays: pointers to components, pointers to sub-entities, and communication subscribers (a temporary implementation). Now, when an entity is built it has a few components, as you might expect, and I can also attach sub-entities, which are handled with some dedicated code in the Entity and Component classes. I noticed the sub-entities share data in 3 areas: position: the sub-entities use the parent's position, with their own as an offset; scripts: sub-entities drain ammo and energy from the parent; physics: sub-entities add weight to the parent. I made it this way to move forward quickly, but as I'm slowly fixing the current implementation I wonder if this wasn't a mistake. Is my current implementation something commonly done? Will it paint me into a corner? I thought it might be better to create some sort of SubEntityComponent where sub-entities are attached and handled. But before changing anything I wanted to seek the community's wisdom.

    Read the article

  • GetContactList stops reporting collisions on welded bodies

    - by Henrique Jung
    I have a strange problem with my game, which uses Box2D as its physics engine, and I'm out of ideas on how to solve it. My game is a class assignment where I need to build a simple game in which the main character moves in a 2D environment while square blocks come from below him. Each time a collision occurs, that block is attached to the character using a weld joint; when three blocks of the same color are together, they annihilate themselves (an effect similar to Bejeweled). I'm using a recursive function to iterate through all the attached blocks of a given block to see if there are enough blocks for them to be deleted, and the GetContactList function to iterate through the list of contacts to see which blocks are adjacent to each other. The results are quite disappointing: the blocks only get annihilated in a few cases. After a lot of debugging, I found the issue, but I still don't know how to solve it. My issue is: after some time, GetContactList STOPS returning contacts (returns NULL) for blocks that were already attached for some time. I spent some time reading the Box2D manual as well as some tutorials and still didn't find any clue about what is happening. Below is a simplified version of the code that I wrote. for(int a = 0; a < blocksList.size(); a++) { blocksList[a].BuildConnections(); } And in BuildConnections: b2ContactEdge* edge = body->GetContactList(); while(edge != NULL) { if (long_check_to_see_if_there's_a_block_nearby) { // add itself to the list to be annihilated globalList.push_back(this); //if there is, call BuildConnections again on the adjacent block adjacentBody->GetUserData()->BuildConnections; } edge = edge->next; } I know that there's another issue related to circular inclusions, but I'm fairly sure that problem isn't causing the problem with the collisions. You can download my entire code from this page if you'd like: http://code.google.com/p/fellz/source/list
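
    One Box2D detail worth checking (a guess from the symptoms, not a verified diagnosis): every b2JointDef has a collideConnected flag that defaults to false, and when it is false Box2D suppresses collision - and therefore contacts - between the two jointed bodies. That would look exactly like GetContactList "forgetting" blocks some time after they are welded. A sketch of the idea, written against a hypothetical C# port that keeps Box2D's names (in the C++ API the flag is b2WeldJointDef::collideConnected):

        WeldJointDef jd = new WeldJointDef();
        jd.Initialize(characterBody, blockBody, anchorPoint);
        jd.CollideConnected = true;   // keep generating contacts between welded bodies
        world.CreateJoint(jd);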

    Read the article

  • What tool should I use for drawing 2D OpenGL shapes?

    - by Kenny Winker
    I'm working on a very simple OpenGL ES 2.0 game, and I'm not sure what tool to use to create the vertex data I need. My first thought was Adobe Illustrator, but I can't seem to find any info on how to convert an .ai file to vertices. I'm only going to be using very simple 2D shapes, so I wonder if I need to use a 3D modelling program? How is this typically done when you are working with 2D non-sprite shapes?

    Read the article

  • Get coordinates of an ArrayList

    - by opiop65
    Here's my map class: public class map{ public static final int CLEAR = 0; public static final ArrayList<Integer> STONE = new ArrayList<Integer>(); public static final int GRASS = 2; public static final int DIRT = 3; public static final int WIDTH = 32; public static final int HEIGHT = 24; public static final int TILE_SIZE = 25; // static int[][] map = new int[WIDTH][HEIGHT]; ArrayList<ArrayList<Integer>> map = new ArrayList<ArrayList<Integer>>(WIDTH * HEIGHT); enum tiles { air, grass, stone, dirt } Image air, grass, stone, dirt; Random rand = new Random(); public Map() { /* default map */ /*for(int y = 0; y < WIDTH; y++){ map[y][y] = (rand.nextInt(2)); System.out.println(map[y][y]); }*/ /*for (int y = 18; y < HEIGHT; y++) { for (int x = 0; x < WIDTH; x++) { map[x][y] = STONE; } } for (int y = 18; y < 19; y++) { for (int x = 0; x < WIDTH; x++) { map[x][y] = GRASS; } } for (int y = 19; y < 20; y++) { for (int x = 0; x < WIDTH; x++) { map[x][y] = DIRT; } }*/ for (int y = 0; y < HEIGHT; y++) { for(int x = 0; x < WIDTH; x++){ map.set(x * WIDTH + y, STONE); } } try { init(null, null); } catch (SlickException e) { e.printStackTrace(); } render(null, null, null); } public void init(GameContainer gc, StateBasedGame sbg) throws SlickException { air = new Image("res/air.png"); grass = new Image("res/grass.png"); stone = new Image("res/stone.png"); dirt = new Image("res/dirt.png"); } public void render(GameContainer gc, StateBasedGame sbg, Graphics g) { for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { switch (map.get(x * WIDTH + y)) { case CLEAR: air.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case STONE: stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case GRASS: grass.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case DIRT: dirt.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; } } } } public static boolean blocked(float x, float y) { return map[(int) x][(int) y] == STONE; } public static Rectangle blockBounds(int x, int y) { return (new Rectangle(x, y, TILE_SIZE, TILE_SIZE)); } } Specifically I am looking at this: for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { switch (map.get(x * WIDTH + y).intValue()) { case CLEAR: air.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case STONE: stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case GRASS: grass.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case DIRT: dirt.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; } } } How can I access the coordinates of my arraylist map and then draw the tiles to the screen? Thanks!
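
    The index math is the whole trick (sketched in C#; the same arithmetic applies in the Java above): a flat list standing in for map[WIDTH][HEIGHT] must map each (x, y) to one unique index in [0, WIDTH*HEIGHT). Note that the code above uses x * WIDTH + y, which with WIDTH=32 and HEIGHT=24 runs past the end of the list for x >= 24; x * HEIGHT + y (or y * WIDTH + x) is the consistent form. Also, Java's new ArrayList<>(n) only reserves capacity - the list is still empty, so set(i, v) throws until n elements have actually been added.

        // Flat grid with the (x, y) <-> index mapping kept in one place.
        int[] map = new int[WIDTH * HEIGHT];

        int Index(int x, int y)           { return x * HEIGHT + y; }
        int GetTile(int x, int y)         { return map[Index(x, y)]; }
        void SetTile(int x, int y, int v) { map[Index(x, y)] = v; }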

    Read the article

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128×128×128 texels in size and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine: #version 430 layout (location = 0, rgba16f) uniform image3D ping; layout (location = 1, rgba16f) uniform image3D pong; layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in; void main() { ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz); imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord)); } Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset: #version 430 layout (location = 0, rgba16f) uniform image3D ping; layout (location = 1, rgba16f) uniform image3D pong; layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in; void main() { ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz); imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1,0,0))); } the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots on the texture. What am I doing wrong here?

    Read the article

  • Circular motion on low powered hardware

    - by Akroy
    I was thinking about platforms and enemies moving in circles in old 2D games, and I was wondering how that was done. I understand parametric equations, and it's trivial to use sin and cos to do it, but could an NES or SNES make real time trig calls? I admit heavy ignorance, but I thought those were expensive operations. Is there some clever way to calculate that motion more cheaply? I've been working on deriving an algorithm from trig sum identities that would only use precalculated trig, but that seems convoluted.
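
    For the record, the classic answers needed no runtime trig at all: either a small precomputed sine table (the usual route on that era of hardware), or incremental rotation - precompute the sine and cosine of the fixed per-frame step once, then rotate the offset vector with two multiplies and two adds per axis, which also maps well to fixed-point. A sketch of the incremental form:

        // One-time setup for a fixed angular step per frame.
        readonly float c = (float)Math.Cos(0.02), s = (float)Math.Sin(0.02);
        float x, y;   // offset from the orbit's center; starts at (radius, 0)

        void Step()   // per frame: no sin/cos calls
        {
            float nx = x * c - y * s;
            y = x * s + y * c;
            x = nx;
        }

    Rounding drift grows slowly; renormalizing (x, y) back to the orbit radius once in a while keeps it stable.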

    Read the article

  • Collision with half semi-circle

    - by heitortsergent
    I am trying to port a game I made using Flash/AS3, to the Windows Phone using C#/XNA 4.0. You can see it here: http://goo.gl/gzFiE In the flash version I used a pixel-perfect collision between meteors (it's a rectangle, and usually rotated) that spawn outside the screen, and move towards the center, and a shield in the center of the screen(which is half of a semi-circle, also rotated by the player), which made the meteor bounce back in the opposite direction it came from, when they collided. My goal now is to make the meteors bounce in different angles, depending on the position it collides with the shield (much like Pong, hitting the borders causes a change in the ball's angle). So, these are the 3 options I thought of: -Pixel-perfect collision (microsoft has a sample(http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel_transformed)) , but then I wouldn't know how to change the meteor angle after the collision -3 BoundingCircle's to represent the half semi-circle shield, but then I would have to somehow move them as I rotate the shield. -Farseer Physics. I could make a shape composed of 3 lines, and use that as the collision object for the shield. Is there any other way besides those? Which would be the best way to do it(it's aimed towards mobile devices, so pixel-perfect is probably not a good choice)? Most of the time there's always a easier/better way than what we think of...
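
    Whichever collision test wins out, the Pong-like angle variation falls out of the arc's geometry: the surface normal of a circular arc at the contact point is simply the direction from the arc's center to the ball, so the bounce is a plain reflection about that normal. A sketch (XNA types):

        // shieldCenter: center of the circle the shield arc lies on.
        Vector2 n = Vector2.Normalize(ball.Position - shieldCenter);
        float d = Vector2.Dot(ball.Velocity, n);
        ball.Velocity -= n * (2f * d);   // v' = v - 2 (v . n) n

    Hitting nearer the shield's tip automatically sends the ball off at a different angle, with no per-pixel work.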

    Read the article

  • Collision Resolution

    - by CiscoIPPhone
    I know quite well how to check for collisions, but I don't know how to handle the collision in a good way. Simplified, if two objects collide I use some calculations to change the velocity direction. If I don't move the two objects they will still overlap and if the velocity is not big enough they will still collide after next update. This can cause objects to get stuck in each other. But what if I try to move the two objects so they do not overlap. This sounds like a good idea but I have realised that if there is more than two objects this becomes very complicated. What if I move the two objects and one of them collides with other objects so I have to move them too and they may collide with walls etc. I have a top down 2D game in mind but I don't think that has much to do with it. How are collisions usually handled? This question is asked on behalf of Wooh
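
    The usual answer is to split the job in two: resolve the velocities (the bounce) and separately resolve the overlap by pushing the pair apart along the contact normal - but only by a fraction of the penetration per frame, so stacks and chains of objects relax over a few updates instead of being teleported into new collisions. A sketch of the positional half (names invented):

        // n: contact normal from a to b; depth: how far they overlap.
        const float Percent = 0.8f;   // correct 80% of the overlap per frame
        const float Slop    = 0.01f;  // ignore tiny overlaps to avoid jitter

        float push  = Math.Max(depth - Slop, 0f) * Percent;
        float total = a.InverseMass + b.InverseMass;   // mass-weighted split
        a.Position -= n * (push * a.InverseMass / total);
        b.Position += n * (push * b.InverseMass / total);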

    Read the article

  • Pixel Collision - Detecting corners

    - by Milkboat
    How would I go about detecting the corners of a texture when I use pixel collision detection? I read about corner collision with rectangles, but I am unsure how to adapt it to my situation. Right now my map is tile based and I do rectangular collision until the player is intersecting with a blocked tile, then I switch to pixel collision. The effect I would like to achieve is when the player hits the corner of an object to push him around the side so he doesn't just hit the edge and stop. Any ideas?

    Read the article

  • Collision detection, stop gravity

    - by Scott Beeson
    I just started using Gamemaker Studio and so far it seems fairly intuitive. However, I set a room to "Room is Physics World" and set gravity to 10. I then enabled physics on my player object and created a block object to match a platform on my background sprite. I set up a Collision Detection event for the player and the block objects that sets the gravity to 0 (and even sets the vspeed to 0). I also put a notification in the collision event and I don't get that either. I have my key down and key up events working well, moving the player left and right and changing the sprites appropriately, so I think I understand the event system. I must just be missing something simple with the physics. I've tried making both and neither of the objects "solid". Pretty frustrating since it looks so easy. The player starting point is directly above the block object in the grid and the player does fall through the block. I even made the block sprite solid red so I could see it (initially it was invisible, obviously).

    Read the article

  • Software design of a browser-based strategic MMO game

    - by Mehran
    I wonder if there are any known, tested software designs for Travian-like browser-based strategic MMO games? I mean, how would one implement the server of such games: what is stored in the database and what is stored in RAM? Is the state of the world stored in one piece or is it distributed among a number of storage nodes? Does anyone know a resource to study the problems and solutions of creating such games? [UPDATE] As suggested in the comments, I'm going to give an example of how I would design such a project, even though I'm not sure it's the right one. Having stored the world state in MongoDB, I would implement an event collection in which all the changes to the world are registered. Changes that are meant to happen in the future will come with an action date set in the future, and those that are to be carried out immediately will be set to now. Having this datastore as the central point of the system, players issue their actions as events inserted into the datastore. At the other end of the system, I'll have a constantly-running process taking events out of the datastore which are due to be carried out and not done yet. Executing an event means applying some update to the world's state and thus the datastore. As scalable as this design sounds, I'm not sure if it will be worth implementing. For one, it is pointless to cache the datastore, as most updates happen once without any follow-ups. For instance, if you have the growth of resources in your game, you'll be updating the whole world state periodically, in which case, having incorporated a cache, you are keeping the whole world in RAM (which most likely is impossible). So can someone come up with a better design?
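
    A sketch of the due-event worker described above (collection and field names invented for illustration): every event carries an execute-at time, immediate actions are events due now, and the worker applies whatever has come due to the world state.

        class GameEvent
        {
            public DateTime DueAt;            // "now" for immediate actions
            public Action<WorldState> Apply;  // the world mutation to perform
        }

        // queue is kept ordered by DueAt; a real one needs a tie-breaker
        // for events sharing the same timestamp.
        void ProcessDueEvents(WorldState world, SortedList<DateTime, GameEvent> queue)
        {
            DateTime now = DateTime.UtcNow;
            while (queue.Count > 0 && queue.Keys[0] <= now)
            {
                GameEvent e = queue.Values[0];
                queue.RemoveAt(0);
                e.Apply(world);   // e.g. finish a building, resolve an attack
            }
        }

    For the periodic resource growth, the standard trick is to store a last-updated timestamp per player and compute resources lazily on read, rather than sweeping the whole world on a timer.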

    Read the article

  • How AlphaBlend BlendState works in XNA 4 when accumulating light into a RenderTarget

    - by cubrman
    I am using a deferred rendering engine from Catalin Zima's tutorial. His lighting shader returns the color of the light in the RGB channels and the specular component in the alpha channel. Here is how light gets accumulated: Game.GraphicsDevice.SetRenderTarget(LightRT); Game.GraphicsDevice.Clear(Color.Transparent); Game.GraphicsDevice.BlendState = BlendState.AlphaBlend; // Continuously draw 3d spheres with lighting pixel shader. ... Game.GraphicsDevice.BlendState = BlendState.Opaque; MSDN states that the AlphaBlend field of the BlendState class uses the following formula for alpha blending: (source × Blend.SourceAlpha) + (destination × Blend.InvSourceAlpha), where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel in the rendertarget. My question is: why are my colors accumulated correctly in the light rendertarget even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader: float specularLight = 0; float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight); if (light4.a == 0) light4 = 0; return light4; This prevents lighting from being accumulated and, subsequently, drawn on the screen. But when I do the following: float specularLight = 0; float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight); return light4; the light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above, (source x 0) + (destination x 1) should equal destination, so the "LightRT" rendertarget must not change when I draw light spheres into it! It feels like the GPU is using additive blending instead: (source × Blend.One) + (destination × Blend.One)
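
    One fact resolves this: in XNA 4, BlendState.AlphaBlend is not the formula quoted above - it is the premultiplied-alpha blend, (source × Blend.One) + (destination × Blend.InvSourceAlpha). With a source alpha of 0 that reduces to source.rgb + destination, which is precisely the additive accumulation observed. The quoted MSDN formula corresponds to BlendState.NonPremultiplied, or explicitly:

        // What the MSDN formula in the question actually describes:
        var classicAlphaBlend = new BlendState
        {
            ColorSourceBlend      = Blend.SourceAlpha,
            AlphaSourceBlend      = Blend.SourceAlpha,
            ColorDestinationBlend = Blend.InverseSourceAlpha,
            AlphaDestinationBlend = Blend.InverseSourceAlpha,
        };
        // BlendState.AlphaBlend, by contrast, uses Blend.One on the source,
        // i.e. (source * 1) + (destination * (1 - source.a)).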

    Read the article
