Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.

Page 417/1022 | < Previous Page | 413 414 415 416 417 418 419 420 421 422 423 424  | Next Page >

  • Java 2D Rectangle Collision? [on hold]

    - by Andreas Elia
    I just want to know of another (longer or shorter) way of getting 100% reliable collisions in a 2D platformer. The current collision system in place works from coordinates on the level and does not always behave reliably. Thank you in advance for any help/support. The current system draws a rectangle and checks whether any two points collide. From testing, the system can sometimes "glitch" and let the player pass into walls etc. Player Class http://pastebin.com/2zE8vz8R Main Class http://pastebin.com/A6Utb3ti
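
    A minimal sketch of one common alternative, using java.awt.Rectangle and resolving each axis separately so the player can slide along walls (class and field names are hypothetical, not taken from the linked pastebins):

      import java.awt.Rectangle;
      import java.util.List;

      public class Player {
          float x, y;
          static final int SIZE = 32;

          // Try one axis at a time; revert only the axis that collides.
          void move(float dx, float dy, List<Rectangle> solids) {
              x += dx;
              if (collides(solids)) x -= dx;   // undo horizontal step only
              y += dy;
              if (collides(solids)) y -= dy;   // undo vertical step only
          }

          boolean collides(List<Rectangle> solids) {
              Rectangle me = new Rectangle((int) x, (int) y, SIZE, SIZE);
              for (Rectangle r : solids) {
                  if (me.intersects(r)) return true;
              }
              return false;
          }
      }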

    Read the article

  • GameplayScreen does not contain a definition for GraphicsDevice

    - by Dave Voyles
    Long story short: I'm trying to intergrate my game with Microsoft's Game State Management. In doing so I've run into some errors, and the latest one is in the title. I'm not able to display my HUD for the reasons listed above. Previously, I had much of my code in my Game.cs class, but the GSM has a bit of it in Game1, and most of what you have drawn for the main screen in your GameplayScreen class, and that is what is causing confusion on my part. I've created an instance of the GameplayScreen class to be used in the HUD class (as you can see below). Before integrating with the GSM however, I created an instance of my Game class, and all worked fine. It seems that I need to define my graphics device somewhere, but I am not sure of where exactly. I've left some code below to help you understand. public class GameStateManagementGame : Microsoft.Xna.Framework.Game { #region Fields GraphicsDeviceManager graphics; ScreenManager screenManager; // Creates a new intance, which is used in the HUD class public static Game Instance; // By preloading any assets used by UI rendering, we avoid framerate glitches // when they suddenly need to be loaded in the middle of a menu transition. static readonly string[] preloadAssets = { "gradient", }; #endregion #region Initialization /// <summary> /// The main game constructor. /// </summary> public GameStateManagementGame() { Content.RootDirectory = "Content"; graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = 1280; graphics.PreferredBackBufferHeight = 720; graphics.IsFullScreen = false; graphics.ApplyChanges(); // Create the screen manager component. screenManager = new ScreenManager(this); Components.Add(screenManager); // Activate the first screens. screenManager.AddScreen(new BackgroundScreen(), null); //screenManager.AddScreen(new MainMenuScreen(), null); screenManager.AddScreen(new PressStartScreen(), null); } namespace Pong { public class HUD { public void Update(GameTime gameTime) { // Used in the Draw method titleSafeRectangle = new Rectangle (GameplayScreen.Instance.GraphicsDevice.Viewport.TitleSafeArea.X, GameplayScreen.Instance.GraphicsDevice.Viewport.TitleSafeArea.Y, GameplayScreen.Instance.GraphicsDevice.Viewport.TitleSafeArea.Width, GameplayScreen.Instance.GraphicsDevice.Viewport.TitleSafeArea.Height); } } } class GameplayScreen : GameScreen { #region Fields ContentManager content; public static GameStates gamestate; private GraphicsDeviceManager graphics; public int screenWidth; public int screenHeight; private Texture2D backgroundTexture; private SpriteBatch spriteBatch; private Menu menu; private SpriteFont arial; private HUD hud; Animation player; // Creates a new intance, which is used in the HUD class public static GameplayScreen Instance; public GameplayScreen() { TransitionOnTime = TimeSpan.FromSeconds(1.5); TransitionOffTime = TimeSpan.FromSeconds(0.5); } protected void Initialize() { lastScored = false; menu = new Menu(); resetTimer = 0; resetTimerInUse = true; ball = new Ball(content, new Vector2(screenWidth, screenHeight)); SetUpMulti(); input = new Input(); hud = new HUD(); // Places the powerup animation inside of the surrounding box // Needs to be cleaned up, instead of using hard pixel values player = new Animation(content.Load<Texture2D>(@"gfx/powerupSpriteSheet"), new Vector2(103, 44), 64, 64, 4, 5); // Used by for the Powerups random = new Random(); vec = new Vector2(100, 50); vec2 = new Vector2(100, 100); promptVec = new Vector2(50, 25); timer = 10000.0f; // Starting value for the cooldown 
for the powerup timer timerVector = new Vector2(10, 10); //JEP - one time creation of powerup objects playerOnePowerup = new Powerup(); playerOnePowerup.Activated += PowerupActivated; playerOnePowerup.Deactivated += PowerupDeactivated; playerTwoPowerup = new Powerup(); playerTwoPowerup.Activated += PowerupActivated; playerTwoPowerup.Deactivated += PowerupDeactivated; //JEP - moved from events since these only need set once activatedVec = new Vector2(100, 125); deactivatedVec = new Vector2(100, 150); powerupReady = false; }

    Read the article

  • Which Game Engine to Use for an Angry Birds-style game? [JAVA] [on hold]

    - by Arch1tect
    Our team is building an Angry Birds style game, and we have only about ten days. The game is a little more complex than Angry Birds because there are two players, each with a castle with pigs to protect (not destroy :)). The goal is to destroy the other player's pigs. I wonder which game engine would help us finish this game most efficiently. We at least need a physics engine, but I guess a full game engine is more helpful since it usually includes a physics engine. Correct me if I'm wrong. (So I'm wondering which game engine I should use; if it's just a physics engine, I'll use Box2D.) Networking may or may not be added later depending on the time we have. Thanks in advance for any advice! EDIT: the image looks small, I'll add one:
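
    Since the post mentions Box2D and Java, here is a minimal JBox2D world-setup sketch (assuming a recent JBox2D release; the gravity value and box size are placeholder numbers):

      import org.jbox2d.common.Vec2;
      import org.jbox2d.collision.shapes.PolygonShape;
      import org.jbox2d.dynamics.*;

      public class PhysicsDemo {
          public static void main(String[] args) {
              World world = new World(new Vec2(0f, -9.8f));   // gravity pulls down

              // A dynamic "projectile" body
              BodyDef bd = new BodyDef();
              bd.type = BodyType.DYNAMIC;
              bd.position.set(0f, 10f);
              Body bird = world.createBody(bd);

              PolygonShape box = new PolygonShape();
              box.setAsBox(0.5f, 0.5f);                       // half-extents in meters
              FixtureDef fd = new FixtureDef();
              fd.shape = box;
              fd.density = 1.0f;
              fd.restitution = 0.3f;
              bird.createFixture(fd);

              // Step the simulation at 60 Hz for one second
              for (int i = 0; i < 60; i++) {
                  world.step(1f / 60f, 8, 3);
              }
              System.out.println("Bird is now at " + bird.getPosition());
          }
      }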

    Read the article

  • Draw Rectangle To All Dimensions of Image

    - by opiop65
    I have some rudimentary collision code: public class Collision { static boolean isColliding = false; static Rectangle player; static Rectangle female; public static void collision(){ Rectangle player = Game.Playerbounds(); Rectangle female = Game.Femalebounds(); if(player.intersects(female)){ isColliding = true; }else{ isColliding = false; } } } And this is the rectangle code: public static Rectangle Playerbounds() { return(new Rectangle(posX, posY, 25, 25)); } public static Rectangle Femalebounds() { return(new Rectangle(femaleX, femaleY, 25, 25)); } My InputHandling class: public static void movePlayer(GameContainer gc, int delta){ Input input = gc.getInput(); if(input.isKeyDown(input.KEY_W)){ Game.posY -= walkSpeed * delta; walkUp = true; if(Collision.isColliding == true){ Game.posY += walkSpeed * delta; } } if(input.isKeyDown(input.KEY_S)){ Game.posY += walkSpeed * delta; walkDown = true; if(Collision.isColliding == true){ Game.posY -= walkSpeed * delta; } } if(input.isKeyDown(input.KEY_D)){ Game.posX += walkSpeed * delta; walkRight = true; if(Collision.isColliding == true){ Game.posX -= walkSpeed * delta; } } if(input.isKeyDown(input.KEY_A)){ Game.posX -= walkSpeed * delta; walkLeft = true; if(Collision.isColliding == true){ Game.posX += walkSpeed * delta; } } } The code works partially. Only the right and top side of the images collide. How do I correct the rectangle so it will draw on all sides? Thanks for any suggestions!
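
    One way to restructure this, sketched below: the shared isColliding flag is only updated after a move, so it always reflects the previous frame. Testing the proposed rectangle before committing each axis avoids that, and makes all four sides behave the same (the fields here are stand-ins for the post's Game.posX/posY and female bounds):

      import java.awt.Rectangle;

      public class PlayerMovement {
          int posX, posY;                                   // stand-ins for Game.posX / Game.posY
          Rectangle female = new Rectangle(200, 200, 25, 25);

          // Build the rectangle at the *proposed* position, not the current one.
          boolean wouldCollide(int newX, int newY) {
              return new Rectangle(newX, newY, 25, 25).intersects(female);
          }

          void move(boolean up, boolean down, boolean left, boolean right, int step) {
              if (up    && !wouldCollide(posX, posY - step)) posY -= step;
              if (down  && !wouldCollide(posX, posY + step)) posY += step;
              if (left  && !wouldCollide(posX - step, posY)) posX -= step;
              if (right && !wouldCollide(posX + step, posY)) posX += step;
          }
      }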

    Read the article

  • Adding sub-entities to existing entities. Should it be done in the Entity and Component classes?

    - by Coyote
    I'm in a situation where a player can be given control of small parts of an entity (e.g. the left missile battery). Therefore I started implementing sub-entities as follows. Entities are objects with 3 arrays: pointers to components, pointers to sub-entities, and communication subscribers (temporary implementation). Now when an entity is built it has a few components, as you might expect, and I can also attach sub-entities, which are handled with some dedicated code in the Entity and Component classes. I noticed the sub-entities share data with the parent in 3 areas: position: the sub-entities use the parent's position plus their own as an offset; scripts: sub-entities drain ammo and energy from the parent; physics: sub-entities add weight to the parent. I did this to move forward quickly, but as I slowly fix the current implementation I wonder if it wasn't a mistake. Is my current implementation something commonly done? Will this implementation paint me into a corner? I thought it might be better to create some sort of SubEntityComponent where sub-entities are attached and handled. But before changing anything I wanted to seek the community's wisdom.
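
    For comparison, a rough Java sketch of the SubEntityComponent idea mentioned above (all names are hypothetical; the actual engine code isn't shown in the post, so this only illustrates the shape of the design):

      import java.util.ArrayList;
      import java.util.List;

      abstract class Component {}

      // Attached to a child entity: where it sits relative to its parent.
      class AttachmentComponent extends Component {
          Entity parent;
          float offsetX, offsetY;
      }

      // Attached to the parent entity: the list of children it owns.
      class SubEntityComponent extends Component {
          final List<Entity> children = new ArrayList<>();
      }

      class Entity {
          final List<Component> components = new ArrayList<>();

          <T extends Component> T get(Class<T> type) {
              for (Component c : components) {
                  if (type.isInstance(c)) return type.cast(c);
              }
              return null;
          }
      }

    With this split, the position/ammo/weight sharing becomes ordinary systems reading the AttachmentComponent instead of special-case code inside Entity and Component.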

    Read the article

  • How can I attach a model to the bone of another model?

    - by kaykayman
    I am trying to attach one animated model to one of the bones of another animated model in an XNA game. I've found a few questions/forum posts/articles online which explain how to attach a weapon model to the bone of another model (which is analogous to what I'm trying to achieve), but they don't seem to work for me. So as an example: I want to attach Model A to a specific bone in Model B. Question 1. As I understand it, I need to calculate the transforms which are applied to the bone on Model B and apply these same transforms to every bone in Model A. Is this right? Question 2. This is my code for calculating the Transforms on a specific bone. private Matrix GetTransformPaths(ModelBone bone) { Matrix result = Matrix.Identity; while (bone != null) { result = result * bone.Transform; bone = bone.Parent; } return result; } The maths of Matrices is almost entirely lost on me, but my understanding is that the above will work its way up the bone structure to the root bone and my end result will be the transform of the original bone relative to the model. Is this right? Question 3. Assuming that this is correct I then expect that I should either apply this to each bone in Model A, or in my Draw() method: private void DrawModel(SceneModel model, GameTime gametime) { foreach (var component in model.Components) { Matrix[] transforms = new Matrix[component.Model.Bones.Count]; component.Model.CopyAbsoluteBoneTransformsTo(transforms); Matrix parenttransform = Matrix.Identity; if (!string.IsNullOrEmpty(component.ParentBone)) parenttransform = GetTransformPaths(model.GetBone(component.ParentBone)); component.Player.Update(gametime.ElapsedGameTime, true, Matrix.Identity); Matrix[] bones = component.Player.GetSkinTransforms(); foreach (SkinnedEffect effect in mesh.Effects) { effect.SetBoneTransforms(bones); effect.EnableDefaultLighting(); effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(MathHelper.ToRadians(model.Angle)) * Matrix.CreateTranslation(model.Position) * parenttransform; effect.View = getView(); effect.Projection = getProjection(); effect.Alpha = model.Opacity; } } mesh.Draw(); } I feel as though I have tried every conceivable way of incorporating the parenttransform value into the draw method. The above is my most recent attempt. Is what I'm trying to do correct? And if so, is there a reason it doesn't work? The above Draw method seems to transpose the models x/z position - but even at these wrong positions, they do not account for the animation of Model B at all. Note: As will be evident from the code my "model" is comprised of a list of "components". It is these "components" that correspond to a single "Microsoft.Xna.Framework.Graphics.Model"
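
    For what it's worth, the composition usually suggested for attaching one model to a bone of another in XNA (row-vector convention, so transforms apply left to right) is the following; treat it as a common convention rather than a guaranteed fix:

      W_A = L_A \times \mathrm{AnimatedBoneTransform}_{B}(\text{bone}) \times W_B

    where L_A is Model A's local offset relative to the bone, the middle factor is the target bone's per-frame animated transform accumulated up to Model B's root, and W_B is Model B's world matrix (the rotation/translation already built in the draw call above). One thing worth checking in GetTransformPaths: ModelBone.Transform is the bind-pose transform, so accumulating it will never reflect Model B's animation; the animated per-bone transforms have to come from the animation player instead.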

    Read the article

  • Rotate an image in a scaled context

    - by nathan
    Here is my working piece of code to rotate an image toward a point (in my case, the mouse cursor): float dx = newx - ploc.x; float dy = newy - ploc.y; float angle = (float) Math.toDegrees(Math.atan2(dy, dx)); where ploc is the location of the image I'm rotating. And here is the rendering code: g.rotate(loc.x + width / 2, loc.y + height / 2, angle); g.drawImage(frame, loc.x, loc.y); where loc is the location of the image and "width" and "height" are respectively the width and height of the image. What changes are needed to make this work in a scaled context, e.g. with something like g.scale(sx, sy)?
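
    A sketch of one way this is often handled with Slick2D's transform stack (sx, sy, mouseX and mouseY are stand-ins for the scale factors and raw mouse position): keep drawing in unscaled world coordinates and convert the mouse position back into that space before computing the angle.

      // Inside render(GameContainer gc, Graphics g); world coordinates stay unscaled:
      g.pushTransform();
      g.scale(sx, sy);                                        // view scale only
      g.rotate(loc.x + width / 2, loc.y + height / 2, angle);
      g.drawImage(frame, loc.x, loc.y);
      g.popTransform();

      // When computing the angle, bring the mouse back into world space first:
      float worldMouseX = mouseX / sx;
      float worldMouseY = mouseY / sy;
      float dx = worldMouseX - ploc.x;
      float dy = worldMouseY - ploc.y;
      float angle = (float) Math.toDegrees(Math.atan2(dy, dx));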

    Read the article

  • Protection against CheatEngine and other injectors [duplicate]

    - by Lucas
    This question already has an answer here: Strategies to Defeat Memory Editors for Cheating - Desktop Games (10 answers). Is protection against Cheat Engine and other injection tools possible at all? I thought about it for a day and the only idea I've got is to write some small application that scans the running processes every second and, if any injector is found, exits the game client immediately. I'm writing here to hear your opinions on this, as some of you may have experience protecting game clients against DLL or PYC injection or the like.

    Read the article

  • Controlling a GameObject from another GameObject's script component

    - by OhMrBigshot
    I'm creating a game where, on startup, a Cube is duplicated GridSize * GridSize times. Now, after the cubes are duplicated I want to set a variable on them, say "Flag", which is a bool, from another script component (let's say I have a Prefab that generates the cloned cubes). In short, I have something like this: CreateTiles.cs : Attached to Prefab void Start() { createMyTiles(); // a function that clones the tiles flagRandomTiles(); // a function that (what I'm trying to do) "Flags" 10 random cubes } CubeBehavior.cs : Attached to each Cube public bool hasFlag; // other stuff Now, I want flagRandomTiles() to set a Cube's hasFlag property via code, assuming I have access to them via a GameObject[] array. Here's what I've tried: Cubes[x].hasFlag = true; - no access. Making a function such as Cubes[x].setHasFlag(true) - still no access. Initializing Cubes as a CubeBehavior array, then doing the above - "GameObjects can't be converted to CubeBehaviors" is the error I get when I try to assign the Cubes into the array. How do I do this?

    Read the article

  • What's the most efficient way to find barycentric coordinates?

    - by bobobobo
    In my profiler, finding barycentric coordinates is apparently somewhat of a bottleneck. I am looking to make it more efficient. It follows the method in shirley, where you compute the area of the triangles formed by embedding the point P inside the triangle. Code: Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const { Vector bary ; // The area of a triangle is real areaABC = DOT( normal, CROSS( (b - a), (c - a) ) ) ; real areaPBC = DOT( normal, CROSS( (b - P), (c - P) ) ) ; real areaPCA = DOT( normal, CROSS( (c - P), (a - P) ) ) ; bary.x = areaPBC / areaABC ; // alpha bary.y = areaPCA / areaABC ; // beta bary.z = 1.0f - bary.x - bary.y ; // gamma return bary ; } This method works, but I'm looking for a more efficient one!
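
    One commonly cited alternative (it appears in Ericson's Real-Time Collision Detection) trades the cross products for a few dot products solved via Cramer's rule; with a, b, c the triangle vertices and P the query point:

      v_0 = b - a,\quad v_1 = c - a,\quad v_2 = P - a
      d_{00} = v_0 \cdot v_0,\quad d_{01} = v_0 \cdot v_1,\quad d_{11} = v_1 \cdot v_1,\quad d_{20} = v_2 \cdot v_0,\quad d_{21} = v_2 \cdot v_1
      \mathrm{denom} = d_{00} d_{11} - d_{01}^{2}
      \beta = (d_{11} d_{20} - d_{01} d_{21}) / \mathrm{denom},\quad \gamma = (d_{00} d_{21} - d_{01} d_{20}) / \mathrm{denom},\quad \alpha = 1 - \beta - \gamma

    The first three dot products depend only on the triangle, so they can be precomputed and cached per triangle, leaving two dot products and a handful of multiplies per query.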

    Read the article

  • Character creation using spritesheets

    - by Patrick Developer
    I am currently creating a 2D fighting game and have implemented a system where, upon starting a new game, the player is presented with the option to create a custom character. I have a set of string arrays with values that correspond to hair, headgear, chest, lower body and shoes. When the player is done selecting items from the lists, a code is generated based off the index of each item (e.g. 01123), which is then used to assign the correct spritesheet to the player character. This has already been a lot of work, as I have had to create quite a few spritesheets based off the possible combinations, and I am now looking at a massive amount of work to implement each variation. I have started to look into using layers for each item to reduce the workload, but I am also looking at having different stances for the character - depending on the currently equipped weapon - so this may be a lot of work either way. My question is, do I have any alternatives or am I stuck creating masses of spritesheets to cover all combinations? As a side note, how much impact will assigning layered items have on overall performance?
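
    A small sketch of the layering alternative in plain Java2D, where each equipment slot is its own sheet drawn in a fixed order so combinations never have to be pre-baked (file paths and the square frame size are made-up illustrative details):

      import java.awt.Graphics2D;
      import java.awt.image.BufferedImage;
      import java.io.File;
      import java.io.IOException;
      import javax.imageio.ImageIO;

      public class LayeredCharacter {
          // Draw order matters: body first, then chest/lower body/shoes, then hair and headgear.
          private final BufferedImage[] layers;

          public LayeredCharacter(String... sheetPaths) throws IOException {
              layers = new BufferedImage[sheetPaths.length];
              for (int i = 0; i < sheetPaths.length; i++) {
                  layers[i] = ImageIO.read(new File(sheetPaths[i]));
              }
          }

          // Composites the same frame cell from every layer sheet at the destination position.
          public void drawFrame(Graphics2D g, int frameX, int frameY, int frameSize, int destX, int destY) {
              for (BufferedImage sheet : layers) {
                  g.drawImage(sheet,
                          destX, destY, destX + frameSize, destY + frameSize,
                          frameX, frameY, frameX + frameSize, frameY + frameSize,
                          null);
              }
          }
      }

    One frame then costs one draw call per layer (typically five or six here), which is rarely a bottleneck compared to authoring every combination by hand.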

    Read the article

  • How does the AlphaBlend BlendState work in XNA when accumulating light into a RenderTarget?

    - by cubrman
    I am using a Deferred Rendering engine from Catalin Zima's tutorial: His lighting shader returns the color of the light in the rgb channels and the specular component in the alpha channel. Here is how light gets accumulated: Game.GraphicsDevice.SetRenderTarget(LightRT); Game.GraphicsDevice.Clear(Color.Transparent); Game.GraphicsDevice.BlendState = BlendState.AlphaBlend; // Continuously draw 3d spheres with lighting pixel shader. ... Game.GraphicsDevice.BlendState = BlendState.Opaque; MSDN states that AlphaBlend field of the BlendState class uses the next formula for alphablending: (source × Blend.SourceAlpha) + (destination × Blend.InvSourceAlpha), where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel in the rendertarget. My question is why do my colors are accumulated correctly in the Light rendertarget even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader: float specularLight = 0; float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight); if (light4.a == 0) light4 = 0; return light4; This prevents lighting from getting accumulated and, subsequently, drawn on the screen. But when I do the following: float specularLight = 0; float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight); return light4; The light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above: (source x 0) + (destination x 1) should equal destination, so the "LightRT" rendertarget must not change when I draw light spheres into it! It feels like the GPU is using the Additive blend instead: (source × Blend.One) + (destination × Blend.One)
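
    A possible explanation, hedged because it depends on the XNA version in use: in XNA 4.0 the built-in BlendState.AlphaBlend is configured for premultiplied alpha (ColorSourceBlend = Blend.One, ColorDestinationBlend = Blend.InverseSourceAlpha) rather than the SourceAlpha/InvSourceAlpha pair quoted above. With a source alpha of zero, that preset reduces to additive accumulation, which matches the observed behaviour:

      \mathrm{dst}' = \mathrm{src} \times 1 + \mathrm{dst} \times (1 - \alpha_{\mathrm{src}}) \;\longrightarrow\; \mathrm{src} + \mathrm{dst} \quad \text{when } \alpha_{\mathrm{src}} = 0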

    Read the article

  • Collision detection, stop gravity

    - by Scott Beeson
    I just started using Gamemaker Studio and so far it seems fairly intuitive. However, I set a room to "Room is Physics World" and set gravity to 10. I then enabled physics on my player object and created a block object to match a platform on my background sprite. I set up a Collision Detection event for the player and the block objects that sets the gravity to 0 (and even sets the vspeed to 0). I also put a notification in the collision event and I don't get that either. I have my key down and key up events working well, moving the player left and right and changing the sprites appropriately, so I think I understand the event system. I must just be missing something simple with the physics. I've tried making both and neither of the objects "solid". Pretty frustrating since it looks so easy. The player starting point is directly above the block object in the grid and the player does fall through the block. I even made the block sprite solid red so I could see it (initially it was invisible, obviously).

    Read the article

  • How do I calculate the opposite of a vector, with some slack?

    - by Jason94
    How can I calculate a valid range (RED) for my object's (BLACK) traveling direction (GREEN)? The green is a Vector2 whose x and y range from -1 to 1. What I'm trying to do here is create a rocket fuel burn effect. So what I've got is: rocket speed (float) and rocket direction (Vector2, x = [-1, 1], y = [-1, 1]). I think the rocket speed may not matter, as the fuel burn effect (particle) is created at a position with its own speed.
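
    A sketch of the usual approach in Java (the spread angle is an arbitrary illustrative value): negate the travel direction to get the exhaust direction, then jitter it by a small random angle so particles fan out behind the rocket.

      import java.util.Random;

      public class Exhaust {
          private static final Random RNG = new Random();

          // dirX/dirY: rocket travel direction, components in [-1, 1].
          // spreadRadians: half-angle of the exhaust cone, e.g. Math.toRadians(15).
          public static float[] exhaustDirection(float dirX, float dirY, double spreadRadians) {
              double base = Math.atan2(-dirY, -dirX);                  // opposite of travel direction
              double jitter = (RNG.nextDouble() * 2.0 - 1.0) * spreadRadians;
              double angle = base + jitter;
              return new float[] { (float) Math.cos(angle), (float) Math.sin(angle) };
          }
      }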

    Read the article

  • Splitting Graph into distinct polygons in O(E) complexity

    - by Arthur Wulf White
    If you have seen my last question (trapped inside a Graph: Find paths along edges that do not cross any edges): how do you split an entire graph into the distinct shapes 'trapped' inside the graph (like the ones described in my last question) with good complexity? What I am doing now is iterating over all edges and then traversing while always taking the rightmost turn. This does split the graph into distinct shapes. Then I eliminate all the excess shapes (that are repeats of previous shapes) and return the result. The complexity of this algorithm is O(E^2). I am wondering if I could do it in O(E) by removing edges I have already traversed. My current implementation of that returns unexpected results.
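
    A compact Java sketch of that bookkeeping (rightmostTurn is left abstract because it depends on the post's geometry representation): marking each directed edge as it is consumed means every directed edge is visited exactly once, so the whole traversal is O(E) and no duplicate faces are produced.

      import java.util.ArrayList;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      class DirectedEdge {
          final int from, to;
          DirectedEdge(int from, int to) { this.from = from; this.to = to; }
          @Override public boolean equals(Object o) {
              if (!(o instanceof DirectedEdge)) return false;
              DirectedEdge d = (DirectedEdge) o;
              return d.from == from && d.to == to;
          }
          @Override public int hashCode() { return 31 * from + to; }
      }

      abstract class FaceSplitter {
          // Must return the directed edge reached by turning as far right as possible at e.to.
          protected abstract DirectedEdge rightmostTurn(DirectedEdge e);

          List<List<DirectedEdge>> splitIntoFaces(List<DirectedEdge> allDirectedEdges) {
              Set<DirectedEdge> used = new HashSet<>();
              List<List<DirectedEdge>> faces = new ArrayList<>();
              for (DirectedEdge start : allDirectedEdges) {
                  if (used.contains(start)) continue;    // already part of some face
                  List<DirectedEdge> face = new ArrayList<>();
                  DirectedEdge current = start;
                  do {
                      face.add(current);
                      used.add(current);
                      current = rightmostTurn(current);
                  } while (!current.equals(start));
                  faces.add(face);
              }
              return faces;
          }
      }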

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    Im trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up with trying to implement the approach NVIDIA used in their Cascade Demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at Slide 82. At the moment I am completly stuck with following problem: When I am far away the displacement seems to work. But as more I move closer to my surface, the texture gets bent in x-axis and somehow it looks like there is a little bent in general in one direction. EDIT: I added a video: click I added some screen to illustrate the problem: Well I tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just bad named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix, from incoming normal, tangent and binormal in world space. Calculates the light and eye direction in worldspace. And finally transforms the light and eye direction into tangent space. 
FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in TraceRay method, while always using mipmap level 0. Most of the code is from NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords for getting the displaced normal from the normal map and the color from the color map. I looking forward for some ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something wrong in here?

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster), to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view) (Pardon the gap in the middle - I drew one side and mirrored it) The triangle sizes are chosen so that all are approximately the same size when projected. Then, that mesh would be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain. Obviously this doesn't address terrain with overhangs, etc, but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store specific information needed to correctly load a few things, such as a list of specified characters, scenes and music. Additionally, I use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is loaded to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first using XML worked fine, but as the skill list grows I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL would possibly be a better move. Can anyone explain the key differences and the pros and cons of each? I guess I'm looking for the best way to store the data so that it's more convenient and easier to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have can just be concatenated into a single field and parsed later.

    Read the article

  • Questions about game states

    - by MrPlow
    I'm trying to make a framework for a game I've wanted to do for quite a while. The first thing that I decided to implement was a state system for game states. When my "original" idea of having a doubly linked list of game states failed I found This blog and liked the idea of a stack based game state manager. However there were a few things I found weird: Instead of RAII two class methods are used to initialize and destroy the state Every game state class is a singleton(and singletons are bad aren't they?) Every GameState object is static So I took the idea and altered a few things and got this: GameState.h class GameState { private: bool m_paused; protected: StateManager& m_manager; public: GameState(StateManager& manager) : m_manager(manager), m_paused(false){} virtual ~GameState() {} virtual void update() = 0; virtual void draw() = 0; virtual void handleEvents() = 0; void pause() { m_paused = true; } void resume() { m_paused = false; } void changeState(std::unique_ptr<GameState> state) { m_manager.changeState(std::move(state)); } }; StateManager.h class GameState; class StateManager { private: std::vector< std::unique_ptr<GameState> > m_gameStates; public: StateManager(); void changeState(std::unique_ptr<GameState> state); void StateManager::pushState(std::unique_ptr<GameState> state); void popState(); void update(); void draw(); void handleEvents(); }; StateManager.cpp StateManager::StateManager() {} void StateManager::changeState( std::unique_ptr<GameState> state ) { if(!m_gameStates.empty()) { m_gameStates.pop_back(); } m_gameStates.push_back( std::move(state) ); } void StateManager::pushState(std::unique_ptr<GameState> state) { if(!m_gameStates.empty()) { m_gameStates.back()->pause(); } m_gameStates.push_back( std::move(state) ); } void StateManager::popState() { if(!m_gameStates.empty()) m_gameStates.pop_back(); } void StateManager::update() { if(!m_gameStates.empty()) m_gameStates.back()->update(); } void StateManager::draw() { if(!m_gameStates.empty()) m_gameStates.back()->draw(); } void StateManager::handleEvents() { if(!m_gameStates.empty()) m_gameStates.back()->handleEvents(); } And it's used like this: main.cpp StateManager states; states.changeState( std::unique_ptr<GameState>(new GameStateIntro(states)) ); while(gamewindow::gameWindow.isOpen()) { states.handleEvents(); states.update(); states.draw(); } Constructors/Destructors are used to create/destroy states instead of specialized class methods, state objects are no longer static but

    Read the article

  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading: Duplication & enlargement of model with flipped normals (not an option for me) Sobel filter / fragment shader approaches to edge detection Stencil buffer approaches to edge detection Geometry (or vertex) shader approaches that calculate face and edge normals Am I correct in assuming the geometry-centric approach gives the greatest amount of control over lighting and line thickness, as well eg. for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need pixel lighting on my terrain surfaces? (And I probably won't as I plan to use cell-based vertex- or texturemap-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or go for a screen space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline? Lastly, is it possible to cheaply emulate the flipped-normals approach, using a geometry shader? Is that exactly what the GS approaches do? What I want - varying line thickness with intrusive lines inside the silhouette... What I don't want...

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an Entity System, in C++, it is almost completed (I have all the code there, I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This Entity System is based off a bit from the Artemis framework, however it is different. I'm not sure if I'll be able to type this out the way my head processing it. I'm going to basically ask whether I should do something over something else. Okay, now I'll give a little detail on my Entity System itself. Here are the basic classes that my Entity System uses to actually work: Entity - An Id (and some methods to add/remove/get/etc Components) Component - An empty abstract class ComponentManager - Manages ALL components for ALL entities within a Scene EntitySystem - Processes entities with specific components Aspect - The class that is used to help determine what Components an Entity must contain so a specific EntitySystem can process it EntitySystemManager - Manages all EntitySystems within a Scene EntityManager - Manages entities (i.e. holds all Entities, used to determine whether an Entity has been changed, enables/disables them, etc.) EntityFactory - Creates (and destroys) entities and assigns an ID to them Scene - Contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager. Has functions to update and initialise the scene. Now in order for an EntitySystem to efficiently know when to check if an Entity is valid for processing (so I can add it to a specific EntitySystem), it must recieve a message from the EntityManager (after a call of activate(Entity& e)). Similarly the EntityManager must know when an Entity is destroyed from the EntityFactory in the Scene, and also the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern I may remove a Listener (which is this case is dependent on the method being called). I mainly have this implemented for specific things related to a game, i.e. Teams, Tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to send out when an Entity has been activated, deleted, etc. i.e. taken from my EntityFactory void EntityFactory::killEntity(Entity& e) { // if the entity doesn't exsist in the entity manager within the scene if(!getScene()->getEntityManager().doesExsist(e)) { return; // go back to the caller! (should throw an exception or something..) } // tell the ComponentManager and the EntityManager that we killed an Entity getScene()->getComponentManager().doOnEntityWillDie(e); getScene()->getEntityManager().doOnEntityWillDie(e); // notify the listners for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i) { (*i)->onEntityWillDie(*this, e); } _idPool.addId(e.getId()); // add the ID to the pool delete &e; // delete the entity } As you can see on the lines where I am telling the ComponentManager and the EntityManager that an Entity will die, I am calling a method to make sure it handles it appropriately. Now I realise I could do this without calling it explicitly, with the help of that for loop notifying all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event), however is this a good idea (good design, or what)? I've gone over the PROS and CONS, I just can't decide what I want to do. Calling Explicitly: PROS Faster? 
Since these functions are explicitly called, they can't be "removed" CONS Not flexible Bad design? (friend functions) Calling through Listener objects (i.e. ComponentManager/EntityManager inherits from a EntityFactoryListener) PROS More Flexible? Better Design? CONS Slower? (virtual functions) Listeners can be removed, i.e. may be removed and not get called again during the program, which could cause in a crash. P.S. If you wish to view my current source code, I am hosting it on BitBucket.

    Read the article

  • Tool to convert Textures to power of two?

    - by 3nixios
    I'm currently porting a game to a new platform; the problem is that the old platform accepted non-power-of-two textures and this new platform doesn't. To add to the headache, the new platform has much less memory, so we want to use the tools provided by the vendor to compress the textures, which of course only accept power-of-two textures. The current workflow is to convert the non-power-of-two textures to DDS with 'texconv', then use the vendor's compression tools in a batch. So, does anyone know of a tool to convert textures to their nearest power-of-two counterparts? Thanks
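
    If no off-the-shelf tool turns up, the rounding itself is easy to script; a rough Java sketch that pads each image up to the next power of two on both axes (padding rather than stretching is an arbitrary choice here, and the PNG output is just a stand-in before the texconv/DDS step):

      import java.awt.image.BufferedImage;
      import java.io.File;
      import java.io.IOException;
      import javax.imageio.ImageIO;

      public class Pow2Pad {
          // Smallest power of two that is >= n (n must be >= 1).
          static int nextPowerOfTwo(int n) {
              int p = 1;
              while (p < n) p <<= 1;
              return p;
          }

          public static void main(String[] args) throws IOException {
              BufferedImage src = ImageIO.read(new File(args[0]));
              int w = nextPowerOfTwo(src.getWidth());
              int h = nextPowerOfTwo(src.getHeight());
              BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
              dst.createGraphics().drawImage(src, 0, 0, null);   // original pixels, transparent padding
              ImageIO.write(dst, "png", new File(args[1]));
          }
      }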

    Read the article

  • How can player actions be "judged morally" in a measurable way?

    - by Sebastien Diot
    While measuring the player "skills" and "effort" is usually easy, adding some "less objective" statistics can give the player supplementary goals, especially in a MUD/RPG context. What I mean is that apart from counting how many orcs were killed, and gems collected, it would be interesting to have something along the line of the traditional Good/Evil, Lawful/Chaotic ranking of paper-based RPG, to add "dimension" to the game. But computers cannot differentiate good/evil effectively (nor can humans in many cases), and if you have a set of "laws" which are precise enough that you can tell exactly when the player breaks them, then it generally makes more sense to actually prevent them from doing that action in the first place. One example could be the creation/destruction axis (if players are at all allowed to create/build things), possibly in the form of the general effect of the player actions on "ecology". So what else is there left that can be effectively measured and would provide a sense of "moral" for the player? The more axis I have to measure, the more goals the player can have, and therefore the longer the game can last. This also gives the players more ways of "differentiating" themselves among hordes of other players of the same "class" and similar "kit".

    Read the article

  • Ensuring that saved data has not been edited in a game with both offline and online components

    - by Omar Kooheji
    I'm in the pre-planning phase of coming up with a game design and I was wondering if there was a sensible way to stop people from editing saves in a game with offline and online components. The offline component would allow the player to play through the game and the online component would allow them to play against other players, so I would need to make sure that people hadn't edited the source code/save files while offline to gain an advantage while online. Game likely to be developed in either .Net or Java, both of which are unfortunately easy to decompile.
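
    Detection (as opposed to prevention, which is effectively impossible on the client) is usually done by signing the save data; below is a hedged Java sketch using the standard javax.crypto HMAC API, with the caveat that a key embedded in a decompilable client can always be extracted, so the authoritative check belongs on the online side.

      import javax.crypto.Mac;
      import javax.crypto.spec.SecretKeySpec;
      import java.nio.charset.StandardCharsets;
      import java.util.Base64;

      public class SaveSigner {
          private final SecretKeySpec key;

          public SaveSigner(byte[] secret) {
              this.key = new SecretKeySpec(secret, "HmacSHA256");
          }

          // Returns a signature to store alongside the save payload.
          public String sign(String savePayload) throws Exception {
              Mac mac = Mac.getInstance("HmacSHA256");
              mac.init(key);
              byte[] tag = mac.doFinal(savePayload.getBytes(StandardCharsets.UTF_8));
              return Base64.getEncoder().encodeToString(tag);
          }

          // True if the payload still matches the signature it was saved with.
          public boolean verify(String savePayload, String signature) throws Exception {
              return sign(savePayload).equals(signature);
          }
      }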

    Read the article

  • Mapping a 3D texture to a standard hollow-hull 3D model

    - by John
    I have 3D models which are typical hollow hulls. If such a model also had a 3D volumetric/voxel texture map then given a point P inside such a model, I'd like to be able to find its uvw coordinates within the 3D texture. Is this possible by simply setting 3D texcoords on my existing mesh or does it have to be broken up into polyhedra? Is there a way to map a 3D texture onto a mesh without doing this?

    Read the article
