Search Results



  • Projecting onto different size screens by cropping

    - by Jason
    Hi, I am building a phone application which will display a shape on screen. The shape should look the same on different screen sizes. I decided the best way to do this is to show more of the background on larger screens, keeping the shape's proportions the same on all screens. My problem is that I am not sure how to achieve this: I can query the screen size at runtime and calculate how different it is from the size it is designed for, but I am not sure what to do with that value. What kind of projection should I use for my orthographic matrix, and how will I display more on larger screens without losing information on smaller screens? Thanks, Jason.
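
    One way to get this behaviour, sketched below in XNA-style C# (Matrix.CreateOrthographic is real XNA API; the design-size constant, the visible-world height, and the method name are my own assumptions): keep the ratio of world units per pixel fixed, so a larger screen simply sees a wider and taller slice of the world while shapes keep the same on-screen size.

        // Minimal sketch: fix world-units-per-pixel so bigger screens show more world.
        const float DesignHeight = 320f;           // assumed design resolution height
        const float WorldHeightAtDesignSize = 10f; // world units visible on the design screen

        Matrix BuildProjection(int screenWidth, int screenHeight)
        {
            // Constant world-units-per-pixel keeps the shape's apparent size identical.
            float unitsPerPixel = WorldHeightAtDesignSize / DesignHeight;
            float worldWidth  = screenWidth  * unitsPerPixel; // larger screen -> more world visible
            float worldHeight = screenHeight * unitsPerPixel;
            return Matrix.CreateOrthographic(worldWidth, worldHeight, 0.1f, 100f);
        }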


  • AS3 Calculating Delta Time In Seconds

    - by user1133079
    Here is how I've been trying to implement delta time, based on different internet resources:

        var startTime:Number = getTimer();
        game.Update(deltaTime);
        deltaTime = Number(getTimer() - startTime) * 0.001;

    My issue with this is it doesn't seem to be giving me accurate timing. The main update shows the frame time at 0.001, and when reinitializing the level it goes to 0.002. I'm using dt elsewhere for a timer, and later on for time-based physics, so I would like it to work as expected. I must be missing something silly.
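
    As written, the snippet measures only how long game.Update itself takes, not the time elapsed since the previous frame, which would explain the tiny 0.001-0.002 values. A frame-to-frame measurement looks like the sketch below (written in C#, the dominant language on this page, since the pattern is language-agnostic; all names are mine):

        using System.Diagnostics;

        class GameClock
        {
            private readonly Stopwatch stopwatch = Stopwatch.StartNew();
            private long previousMs;

            // Seconds elapsed since the previous call: a true frame-to-frame delta.
            public float NextDelta()
            {
                long now = stopwatch.ElapsedMilliseconds;
                float delta = (now - previousMs) * 0.001f;
                previousMs = now;
                return delta;
            }
        }

        // Per frame: game.Update(clock.NextDelta());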


  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation.

    Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
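
    From the projection described above (screen = (x, 0.707·y + 0.707·z)), a faked scale can be derived by pushing the aimer's two ground-plane axes through the same mapping and measuring their projected lengths. A hedged C# sketch (my own derivation and names, not code from the question; Vector2 is XNA's):

        // Sketch: fake ground-plane foreshortening for an arrow on the XZ plane.
        // aimAngle is the aimer's rotation on the ground; 0.707f = cos(45 degrees),
        // the perspective factor described in the question.
        const float Persp = 0.707f;

        Vector2 AimerScale(float aimAngle)
        {
            float c = (float)Math.Cos(aimAngle);
            float s = (float)Math.Sin(aimAngle);
            // Projected length of the arrow's forward axis (cos, 0, sin):
            float along = (float)Math.Sqrt(c * c + (s * Persp) * (s * Persp));
            // Projected length of the arrow's sideways axis (-sin, 0, cos):
            float across = (float)Math.Sqrt(s * s + (c * Persp) * (c * Persp));
            return new Vector2(along, across); // feed to SpriteBatch.Draw as the scale
        }

    This reproduces the behaviour described in the question: longest and narrowest when firing left or right, shortest and widest when firing north or south.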


  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft I marveled a bit at its large worlds, but at the same time I found them extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume Minecraft is fairly slow because: A) It's written in Java, and as most of the spatial partitioning and memory management activities happen in there, it would naturally be slower than a native C++ version. B) It doesn't partition its world very well.

    I could be wrong on both assumptions; however, it got me thinking about the best way to manage large voxel worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (e.g. BlockType.Empty = 0, BlockType.Dirt = 1, etc.)

    Now, I am assuming that to make this sort of world perform well you would need to: A) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level. B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user could obscure the blocks behind them, making it pointless to render them. C) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees.

    I guess there is no right answer to this, but I would be interested to see people's opinions on the subject. How would you improve performance in a large voxel-based world?
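
    A minimal sketch of the flat-array-plus-chunks idea (C#; the chunk size, the byte block type, and all names are my own assumptions, not Minecraft's actual scheme). Storing block types in a flat array keeps blocks lightweight, whole chunks can be frustum-culled in one test, and only faces adjacent to an empty cell need rendering:

        // Sketch: a 16x16x16 chunk of block-type bytes (0 = BlockType.Empty).
        class Chunk
        {
            public const int Size = 16;
            private readonly byte[] blocks = new byte[Size * Size * Size];

            private static int Index(int x, int y, int z)
                => x + Size * (y + Size * z); // flat 3D indexing, no per-block objects

            public byte Get(int x, int y, int z) => blocks[Index(x, y, z)];
            public void Set(int x, int y, int z, byte type) => blocks[Index(x, y, z)] = type;

            // A face is worth drawing only if the neighbouring cell is empty.
            public bool FaceVisible(int x, int y, int z, int dx, int dy, int dz)
            {
                int nx = x + dx, ny = y + dy, nz = z + dz;
                if (nx < 0 || ny < 0 || nz < 0 || nx >= Size || ny >= Size || nz >= Size)
                    return true; // chunk border; a full version would ask the neighbour chunk
                return Get(nx, ny, nz) == 0;
            }
        }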


  • How can I pass an external instance to the constructor of an object that's being created using the default XNA XML content loader?

    - by Michael
    I'm trying to understand how to use the XNA XML content importer to instantiate non-trivial objects that are more than a collection of basic properties (e.g., a class that inherits from DrawableGameComponent or GameComponent and requires other things to be passed into its constructor). Is it possible to pass existing external instances (e.g., an instance of the current Game) to the constructor of an object that's being created using the default XNA XML content loader? For example, imagine that I have the following class, inheriting from DrawableGameComponent:

        public class Character : DrawableGameComponent
        {
            public string Name { get; set; }

            public Character(Game game) : base(game) { }

            public override void Update(GameTime gameTime) { }
            public override void Draw(GameTime gameTime) { }
        }

    If I had a simple class that did not need other parameters in its constructor (i.e., the Game instance), then I could simply use this XML:

        <XnaContent>
          <Asset Type="MyNamespace.Character">
            <Name>John Doe</Name>
          </Asset>
        </XnaContent>

    ...and then create an instance of Character using this code:

        var character = Content.Load<Character>("MyXmlAssetName");

    But that won't work, because I need to pass the Game into the constructor. What's the best way to handle this situation? Is there a way to pass in things like the current Game using the default XNA XML content loader? Do I need to write my own XML loader? (If so, how?) Is there a better object-oriented design that I should be using for my classes?

    Note: Although I used Game in this example, I'm really just asking how to pass any type of existing instance to my constructors. (For example, I'm using the Farseer Physics Engine, and some of my classes also need a reference to the Farseer World object too.) Thanks in advance.
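
    One common workaround is a two-phase pattern: let the content pipeline deserialize a plain data class, then construct the real component yourself, handing it the Game (or Farseer World) in ordinary code. A hedged sketch (CharacterData and the wiring are my inventions, not the only design):

        // Plain data class: the XML loader only ever has to construct this.
        public class CharacterData
        {
            public string Name { get; set; }
        }

        public class Character : DrawableGameComponent
        {
            public string Name { get; private set; }

            // Live services (Game, physics World, ...) arrive through normal C#,
            // after the data has been deserialized.
            public Character(Game game, CharacterData data) : base(game)
            {
                Name = data.Name;
            }
        }

        // Usage inside Game.LoadContent():
        //   var data = Content.Load<CharacterData>("MyXmlAssetName");
        //   Components.Add(new Character(this, data));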


  • Per-pixel collision detection - why does XNA transform matrix return NaN when adding scaling?

    - by JasperS
    I looked at the TransformCollision sample on MSDN and added the Matrix.CreateTranslation part to a property in my collision detection code, but I wanted to add scaling. The code works fine when I leave scaling commented out, but when I add it and then do a Matrix.Invert() on the created translation matrix the result is NaN ({NaN,NaN,NaN},{NaN,NaN,NaN},...). Can anyone tell me why this is happening please? Here's the code from the sample:

        // Build the block's transform
        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            // Matrix.CreateScale(block.Scale) * would go here
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        public static bool IntersectPixels(
            Matrix transformA, int widthA, int heightA, Color[] dataA,
            Matrix transformB, int widthB, int heightB, Color[] dataB)
        {
            // Calculate a matrix which transforms from A's local space into
            // world space and then into B's local space
            Matrix transformAToB = transformA * Matrix.Invert(transformB);

            // When a point moves in A's local space, it moves in B's local space with a
            // fixed direction and distance proportional to the movement in A.
            // This algorithm steps through A one pixel at a time along A's X and Y axes.
            // Calculate the analogous steps in B:
            Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, transformAToB);
            Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, transformAToB);

            // Calculate the top left corner of A in B's local space.
            // This variable will be reused to keep track of the start of each row.
            Vector2 yPosInB = Vector2.Transform(Vector2.Zero, transformAToB);

            // For each row of pixels in A
            for (int yA = 0; yA < heightA; yA++)
            {
                // Start at the beginning of the row
                Vector2 posInB = yPosInB;

                // For each pixel in this row
                for (int xA = 0; xA < widthA; xA++)
                {
                    // Round to the nearest pixel
                    int xB = (int)Math.Round(posInB.X);
                    int yB = (int)Math.Round(posInB.Y);

                    // If the pixel lies within the bounds of B
                    if (0 <= xB && xB < widthB && 0 <= yB && yB < heightB)
                    {
                        // Get the colors of the overlapping pixels
                        Color colorA = dataA[xA + yA * widthA];
                        Color colorB = dataB[xB + yB * widthB];

                        // If both pixels are not completely transparent,
                        if (colorA.A != 0 && colorB.A != 0)
                        {
                            // then an intersection has been found
                            return true;
                        }
                    }

                    // Move to the next pixel in the row
                    posInB += stepX;
                }

                // Move to the next row
                yPosInB += stepY;
            }

            // No intersection found
            return false;
        }
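
    A likely culprit (my own diagnosis, hedged, not from the sample): Matrix.Invert returns all-NaN when the matrix is singular, and the transform above becomes singular whenever the scale is zero on any axis, e.g. a Scale field that was never initialized and so defaults to 0 instead of 1. A quick guard, reusing the sample's structure (block.Scale is the hypothetical field from the question):

        // Sketch: guard against a non-invertible (zero-scale) transform.
        float scale = block.Scale == 0f ? 1f : block.Scale;

        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            Matrix.CreateScale(scale) *
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        System.Diagnostics.Debug.Assert(blockTransform.Determinant() != 0f,
            "Singular transform; Matrix.Invert would return NaNs.");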


  • GLSL: Strange light reflections

    - by Tom
    According to this tutorial I'm trying to make normal mapping work in GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1. In this image there is a plane with two triangles, and each of them is illuminated differently (that is bad). The plane has 6 vertices; in the upper left side of the plane are 2 identical vertices (same in the lower right). Here are some vectors that are the same for every vertex:

        normal vector    =  0, 1,  0 (red lines in the image)
        tangent vector   =  0, 0, -1 (green lines in the image)
        bitangent vector = -1, 0,  0 (blue lines in the image)

    Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried other values for the tangents, but the effect was still similar. Here are my shaders.

    Vertex shader:

        #version 130

        // Input vertex data, different for all executions of this shader.
        in vec3 vertexPosition_modelspace;
        in vec2 vertexUV;
        in vec3 vertexNormal_modelspace;
        in vec3 vertexTangent_modelspace;
        in vec3 vertexBitangent_modelspace;

        // Output data; will be interpolated for each fragment.
        out vec2 UV;
        out vec3 Position_worldspace;
        out vec3 EyeDirection_cameraspace;
        out vec3 LightDirection_cameraspace;
        out vec3 LightDirection_tangentspace;
        out vec3 EyeDirection_tangentspace;

        // Values that stay constant for the whole mesh.
        uniform mat4 MVP;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Output position of the vertex, in clip space : MVP * position
            gl_Position = MVP * vec4(vertexPosition_modelspace,1);

            // Position of the vertex, in worldspace : M * position
            Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;

            // Vector that goes from the vertex to the camera, in camera space.
            // In camera space, the camera is at the origin (0,0,0).
            vec3 vertexPosition_cameraspace = (V * M * vec4(vertexPosition_modelspace,1)).xyz;
            EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

            // Vector that goes from the vertex to the light, in camera space.
            // M is omitted because it's identity.
            vec3 LightPosition_cameraspace = (V * vec4(LightPosition_worldspace,1)).xyz;
            LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

            // UV of the vertex. No special space for this one.
            UV = vertexUV;

            // model to camera = ModelView
            vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace;
            vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace;
            vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace;

            // You can use dot products instead of building this matrix
            // and transposing it. See References for details.
            mat3 TBN = transpose(mat3(
                vertexTangent_cameraspace,
                vertexBitangent_cameraspace,
                vertexNormal_cameraspace
            ));

            LightDirection_tangentspace = TBN * LightDirection_cameraspace;
            EyeDirection_tangentspace = TBN * EyeDirection_cameraspace;
        }

    Fragment shader:

        #version 130

        // Interpolated values from the vertex shaders
        in vec2 UV;
        in vec3 Position_worldspace;
        in vec3 EyeDirection_cameraspace;
        in vec3 LightDirection_cameraspace;
        in vec3 LightDirection_tangentspace;
        in vec3 EyeDirection_tangentspace;

        // Output data
        out vec3 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D DiffuseTextureSampler;
        uniform sampler2D NormalTextureSampler;
        uniform sampler2D SpecularTextureSampler;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Light emission properties
            // You probably want to put them as uniforms
            vec3 LightColor = vec3(1,1,1);
            float LightPower = 40.0;

            // Material properties
            vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb;
            vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
            //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3;
            vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5);

            // Local normal, in tangent space. V tex coordinate is inverted because
            // normal map is in TGA (not in DDS) for better quality
            vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0);

            // Distance to the light
            float distance = length( LightPosition_worldspace - Position_worldspace );

            // Normal of the computed fragment, in camera space
            vec3 n = TextureNormal_tangentspace;

            // Direction of the light (from the fragment to the light)
            vec3 l = normalize(LightDirection_tangentspace);

            // Cosine of the angle between the normal and the light direction,
            // clamped above 0
            //  - light is at the vertical of the triangle -> 1
            //  - light is perpendicular to the triangle -> 0
            //  - light is behind the triangle -> 0
            float cosTheta = clamp( dot( n,l ), 0,1 );

            // Eye vector (towards the camera)
            vec3 E = normalize(EyeDirection_tangentspace);

            // Direction in which the triangle reflects the light
            vec3 R = reflect(-l,n);

            // Cosine of the angle between the Eye vector and the Reflect vector,
            // clamped to 0
            //  - Looking into the reflection -> 1
            //  - Looking elsewhere -> < 1
            float cosAlpha = clamp( dot( E,R ), 0,1 );

            color =
                // Ambient : simulates indirect lighting
                MaterialAmbientColor +
                // Diffuse : "color" of the object
                MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
                // Specular : reflective highlight, like a mirror
                MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);

            //color.xyz = E;
            //color.xyz = LightDirection_tangentspace;
            //color.xyz = EyeDirection_tangentspace;
        }

    I have replaced the original color value by the EyeDirection_tangentspace vector and then I got another strange effect, but I can not link the image (not enough reputation). Is it possible that something is wrong in these shaders, or maybe somewhere else in my code, e.g. with my matrices?

    SOLVED Solved... 3 days needed for changing one letter, from this:

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(
            4,                        // attribute
            3,                        // size
            GL_FLOAT,                 // type
            GL_FALSE,                 // normalized?
            sizeof(VboVertex),        // stride
            (void*)(12*sizeof(float)) // array buffer offset
        );

    to this:

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(
            4,                        // attribute
            3,                        // size
            GL_FLOAT,                 // type
            GL_FALSE,                 // normalized?
            sizeof(VboVertex),        // stride
            (void*)(11*sizeof(float)) // array buffer offset
        );

    See the difference? :)


  • Alpha From PNGs Butchered

    - by ashes999
    I have a pretty vanilla MonoGame game. I'm using PNGs for all my sprites (made in Photoshop). I noticed that XNA is butchering the anti-aliasing; no matter what I do, my graphics appear jaggedy. Below is a screenshot. The bottom half is what XNA shows me when I zoom in 2X using a Matrix on my GraphicsDevice (to make the effect more obvious). The top is when I pasted the same sprites from Photoshop and scaled them to 200%. Note that partially transparent pixels are turning whiteish. Is there a way to fix this? What am I doing wrong? Here's the relevant call to draw to the SpriteBatch:

        spriteBatch.Draw(this.texture, this.positionVector, null, Color.White,
            this.Angle, this.originVector, 1f, SpriteEffects.None, 0f);

    (this.positionVector can easily be Vector2.Zero; Color.White is 100% alpha, I think; this.Angle can be a real angle (the small > in the image) or zero (the orb itself).)
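
    White fringes on partially transparent pixels are the classic symptom of a premultiplied-alpha mismatch: the XNA 4 / MonoGame content pipeline premultiplies PNG alpha by default, and drawing such a texture with the wrong blend state butchers the edges. A hedged sketch of the two combinations to try (these are real XNA/MonoGame blend states; which one is right depends on how the texture was imported):

        // Sketch: match the blend state to how the PNG's alpha was processed.
        // Content-pipeline textures (premultiplied by default) want AlphaBlend:
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        // ... draw pipeline-processed sprites ...
        spriteBatch.End();

        // Raw textures (e.g. Texture2D.FromStream, straight alpha) want NonPremultiplied:
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied);
        // ... draw straight-alpha sprites ...
        spriteBatch.End();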


  • Creating a 2D Line Branch

    - by Danran
    I'm looking into creating a 2D line branch, something for a "lightning effect". I did ask a question before on creating a "lightning effect" (mainly referring to the process of the glow and after-effects the lightning has, and to whether it was a good method to use or not): Methods of Creating a "Lightning" effect in 2D. However, I never did get around to getting it working. So I've been making a second attempt today, but I'm getting nowhere :/. To be clear on what I'm trying to do: in this article, http://drilian.com/2009/02/25/lightning-bolts/, I'm trying to create the line segments seen in the images on the site. I'm confused mainly by this line in the pseudo code:

        // Offset the midpoint by a random amount along the normal.
        midPoint += Perpendicular(Normalize(endPoint-startPoint))*RandomFloat(-offsetAmount,offsetAmount);

    If someone could explain this to me, I would be really grateful :).
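
    That line finds the segment's direction (endPoint - startPoint), normalizes it, rotates it 90 degrees to get the segment's normal, and nudges the midpoint a random distance along that normal, so each subdivision adds a kink. A minimal C# sketch of the whole midpoint-displacement loop (names are mine; halving offsetAmount per generation is from the linked article; Vector2 is XNA's):

        // Sketch: build a jagged bolt from start to end by midpoint displacement.
        static Vector2 Perpendicular(Vector2 v) => new Vector2(-v.Y, v.X); // 90-degree rotation

        static List<Vector2> Bolt(Vector2 start, Vector2 end,
                                  int generations, float offsetAmount, Random rng)
        {
            var points = new List<Vector2> { start, end };
            for (int g = 0; g < generations; g++)
            {
                // Walk backwards so inserting midpoints doesn't disturb the indices.
                for (int i = points.Count - 2; i >= 0; i--)
                {
                    Vector2 a = points[i], b = points[i + 1];
                    Vector2 mid = (a + b) / 2f;
                    // Offset the midpoint by a random amount along the normal.
                    float offset = ((float)rng.NextDouble() * 2f - 1f) * offsetAmount;
                    mid += Perpendicular(Vector2.Normalize(b - a)) * offset;
                    points.Insert(i + 1, mid);
                }
                offsetAmount /= 2f; // smaller kinks at finer detail levels
            }
            return points;
        }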


  • Has a multiplayer graphic adventure* ever been made?

    - by Petruza
    By graphic adventure, I mean point & click LucasArts-type games. Those games have a mostly linear structure by nature, and usually don't offer as many variants as other game types like action, RPG, or strategy, which makes a multiplayer feature difficult to implement in this genre. I'd like to know if there have been any attempts at doing such a thing, and whether it would be viable, as players going offline or leaving a game in the middle would significantly affect the other players' game.


  • Make objects slide across the screen in random positions

    - by user3475907
    I want to make an object appear randomly at the right-hand side of the screen and then slide across the screen and disappear at the left-hand side. I am working with libgdx. I have this bit of code, but it makes items fall from the top down. Please help.

        public EntityManager(int amount, OrthoCamera camera) {
            player = new Player(new Vector2(15, 230), new Vector2(0, 0), this, camera);
            for (int i = 0; i < amount; i++) {
                float x = MathUtils.random(0, MainGame.HEIGHT - TextureManager.ENEMY.getHeight());
                float y = MathUtils.random(MainGame.WIDTH, MainGame.WIDTH * 10);
                float speed = MathUtils.random(2, 10);
                addEntity(new Enemy(new Vector2(x, y), new Vector2(-0, -speed)));
            }
        }
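
    In the snippet above the axes appear swapped: x is drawn from the screen height and y from a multiple of the width, and the velocity is vertical, which is why entities fall from the top. A sketch of the intended spawn logic (in C# to match the other sketches on this page; the libgdx/Java translation is mechanical, and screenWidth, screenHeight, and enemyHeight are assumed values):

        // Sketch: spawn off the right edge at a random height, sliding left.
        var rng = new Random();
        for (int i = 0; i < amount; i++)
        {
            float y = rng.Next(0, screenHeight - enemyHeight);    // random vertical position
            float x = screenWidth + rng.Next(0, screenWidth * 9); // staggered, off the right edge
            float speed = rng.Next(2, 11);                        // like MathUtils.random(2, 10)
            AddEntity(new Enemy(new Vector2(x, y), new Vector2(-speed, 0f))); // move left
        }

    Entities whose x falls below the left edge can then be removed or recycled.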


  • Backface culling error

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;

        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }

        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason?


  • Developing games using virtualization on macOS (or Linux) [on hold]

    - by zpinner
    From what I've seen, most of the gamedev tools and engines that can generate cross-platform games (e.g. Havok/Project Anarchy, UDK, GameMaker) are not supported on Mac. Basically, the only options I found are Unity3D and MonoGame + Xamarin. Unity is nice and I've been playing with it for some time, but the free version is quite limited when we're talking about shaders, which made me consider that, as an indie developer, I might want more freedom to experiment with new things without paying for the expensive Unity license. I didn't try MonoGame + Xamarin yet, and although XNA is a very nice game framework, I'd like the freedom to experiment and finish a game first before paying for the IDE, which is not possible with the current Xamarin business model. That leaves me with the thought that I must go back to Windows, which I'd prefer to do only partially, if possible. Using Boot Camp is something I'd like to avoid, since it's a pain to reboot when changing OS, and that would probably force me to become a 100% Windows user. Is anyone actually developing a game using virtualization solutions like Parallels or VMware Fusion? How was your experience?


  • 2D management game [on hold]

    - by Simon Bull
    Very newbie question, but I have a game idea in mind. It will be 2D and data-centric, like Football Manager. However, I am struggling to find a platform that would suit. I am an experienced line-of-business developer, so I am happy to write code, but I would like a platform that does some of the legwork for me, so I was avoiding OpenGL. I would also like to be able to deploy to iOS, Android, Windows and OS X. What are the options? To be clearer: the game is not a normal platformer or shooter type game, so GameMaker is likely to be way too basic, and Unity seems a little over the top (though I am not sure if the GUI options would fit?). The majority of the game is more like business screens, just displaying data and having buttons to click. Are there options for this type of game? (It may help to look at Football Manager.)


  • Will making players pay a virtual currency before entering a match discourage them from playing?

    - by Bane
    I'm making a multiplayer match-making game, and by my current design, people will need to pay a small fee before joining a match. At the end of the match, the team that won gets the money. It will be a virtual currency, but still: will it discourage people from entering matches? I introduced it to make the matches matter more, because there's always the fear of losing your investment. I'm not talking about anything big here, but even a small amount might have a similar psychological effect as a bigger one.


  • Sending state diffs (deltas) and unreliable connections

    - by spaceOwl
    We're building a realtime multiplayer game, in which each player is responsible for reporting its state on every iteration of the game loop. The state updates are broadcast using unreliable UDP. To minimize the state data sent, we've come up with a system that sends only deltas (whatever state data has changed). This method is flawed, however, since a lost packet means that other players will not receive the delta, making the game behave in an unexpected way. For example, assume that the state is comprised of: { positionX, positionY, health }

        Frame 1 - positionX changed --> send a packet with positionX only.
        Frame 2 - health changed        // packet lost!
        Frame 3 - positionY changed --> send a packet with positionY only.
        // Other players never learn about the health change.

    How can one overcome this issue, then? Sending the entire state every time is not always feasible.
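
    One standard remedy (a sketch, not the only protocol; all names are mine): tag each packet with a sequence number, have receivers acknowledge what they have seen, and keep re-sending every still-unacked changed field in each outgoing packet, so a lost datagram only delays a delta instead of silently dropping it.

        // Sketch: per-field dirty tracking with acks over unreliable UDP.
        [Flags]
        enum StateField { None = 0, PositionX = 1, PositionY = 2, Health = 4 }

        class DeltaSender
        {
            private StateField unacked = StateField.None; // changed but not yet acked
            private readonly Dictionary<ushort, StateField> inFlight =
                new Dictionary<ushort, StateField>();
            private ushort nextSeq;

            public void MarkChanged(StateField field) => unacked |= field;

            // Every outgoing packet carries *all* currently unacked fields.
            public (ushort seq, StateField fields) BuildPacket()
            {
                ushort seq = nextSeq++;
                inFlight[seq] = unacked;
                return (seq, unacked);
            }

            // An ack clears those fields from the resend set. (A full version would
            // version each field so a change made after the packet was built is not
            // cleared by this older ack.)
            public void OnAck(ushort seq)
            {
                if (inFlight.Remove(seq, out StateField acked))
                    unacked &= ~acked;
            }
        }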


  • Render angles of a 3D model into 2D images?

    - by Ricket
    Is there a tool out there that you can give a 3D model file, and it will output 2D renders of it from various angles? For example if you were making a 2D RPG but you want to make your character look nice, you might make the character in 3D and then just render the character from 8 or more angles into images which then are used by the 2D engine to give a pseudo-3D look. Does such a tool exist or will it need to be custom-written or done manually?


  • Can I use remade sprites in my game?

    - by John Skridles
    Can I use remade sprites in my game? I am making a game and I used some sprites, but I didn't copy them; I remade them completely, and the character looks nothing like the original. I only did this to get the movement of the character right (moving, running, jumping, punching). I've been working on the game for a long time, so I really need to know: is it safe and legal to do this? I do intend to make a small profit.


  • Designing generic render/graphics component in C++?

    - by s73v3r
    I'm trying to learn more about Component Entity systems. So I decided to write a Tetris clone. I'm using the "style" of component-entity system where the Entity is just a bag of Components, the Components are just data, a Node is a set of Components needed to accomplish something, and a System is a set of methods that operates on a Node. All of my components inherit from a basic IComponent interface.

    I'm trying to figure out how to design the Render/Graphics/Drawable Components. Originally, I was going to use SFML, and everything was going to be good. However, as this is an experimental system, I got the idea of being able to change out the render library at will. I thought that since the Rendering would be fairly componentized, this should be doable. However, I'm having problems figuring out how I would design a common Interface for the different types of Render Components. Should I be using C++ Template types? It seems that having the RenderComponent somehow return it's own mesh/sprite/whatever to the RenderSystem would be the simplest, but would be difficult to generalize. However, letting the RenderComponent just hold on to data about what it would render would make it hard to re-use this component for different renderable objects (background, falling piece, field of already fallen blocks, etc).

    I realize this is fairly over-engineered for a regular Tetris clone, but I'm trying to learn about component entity systems and making interchangeable components. It's just that rendering seems to be the hardest to split out for me.


  • Why can't I compare two Texture2D's?

    - by Fiona
    I am trying to use an accessor, as it seems to me that that is the only way to accomplish what I want to do. Here is my code:

    Game1.cs:

        public class GroundTexture
        {
            private Texture2D dirt;

            public Texture2D Dirt
            {
                get { return dirt; }
                set { dirt = value; }
            }
        }

        public class Main : Game
        {
            public static Texture2D texture = tile.Texture;
            GroundTexture groundTexture = new GroundTexture();
            public static Texture2D dirt;

            protected override void LoadContent()
            {
                Tile tile = (Tile)currentLevel.GetTile(20, 20);
                dirt = Content.Load<Texture2D>("Dirt");
                groundTexture.Dirt = dirt;
                Texture2D texture = tile.Texture;
            }

            protected override void Update(GameTime gameTime)
            {
                if (texture == groundTexture.Dirt)
                {
                    player.TileCollision(groundBounds);
                }
                base.Update(gameTime);
            }
        }

    I removed irrelevant information from the LoadContent and Update functions. On the following line:

        if (texture == groundTexture.Dirt)

    I am getting the error:

        Operator '==' cannot be applied to operands of type
        'Microsoft.Xna.Framework.Graphics.Texture2D' and 'Game1.GroundTexture'

    Am I using the accessor correctly? And why do I get this error? Dirt is a Texture2D, so they should be comparable. This uses a few functions from a program called Realm Factory, which is a tile editor. The numbers "20, 20" are just a sample of the level I made; tile.Texture returns the sprite, which here is the content item Dirt.png. Thank you very much! (I posted this on the main Stack Overflow site, but after several days didn't get a response. Since it has to do mainly with Texture2D, I figured I'd ask here.)
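
    For what it's worth (my reading of the error message, hedged): two Texture2D references can be compared with ==, since that is plain reference equality. The compiler names the operand types Texture2D and Game1.GroundTexture, which means the comparison that failed was against the wrapper object itself, not its Texture2D property:

        // Sketch: reference comparison between Texture2D variables is legal.
        Texture2D a = Content.Load<Texture2D>("Dirt");
        Texture2D b = a;
        bool same = (a == b);              // true: same texture object

        // This is what the error message describes -- comparing against the wrapper:
        // if (texture == groundTexture)   // Texture2D == GroundTexture: no such operator
        if (texture == groundTexture.Dirt) // compare texture to texture instead
        {
            // ...
        }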


  • Picking a suitable resolution for a modern low-res game?

    - by MrKatSwordfish
    I'm working on a 2D game project right now (using SFML + OpenGL and C++) and I'm trying to figure out how to go about choosing a resolution. I want my game to have a pixel resolution around that of classic '16-bit' era consoles like the Super Nintendo or Neo Geo. However, I'd also like my game to fit the 16:9 aspect ratio that most modern PC monitors use. Finally, I'd like to be able to include an option for running full screen. I know that I could create my own low-res 16:9 resolution that is more or less around the size of SNES or Neo Geo games. However, the problem seems to be that doing so would leave me with a non-standard resolution that my monitor would not be able to support in fullscreen mode. For example, if I divide the common 16:9 resolution 1920x1080 by 4, I get a 16:9 resolution that is relatively close to the resolution used by 16-bit era games: 480x270. That would be fine in a windowed mode, but I don't think it would be supported in fullscreen mode. How can I choose a resolution that suits my needs? Can I use something like 480x270? If so, how would I go about getting fullscreen mode to work with such a non-standard resolution? (I'm guessing OpenGL/SFML might have a way of up-scaling... but...)
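
    The usual trick is to never ask the monitor for 480x270 at all: render the scene to a fixed 480x270 off-screen target, then draw that target scaled up to whatever fullscreen mode the display actually supports (1920x1080 is exactly 4x). Sketched below in XNA-style C# for consistency with the other sketches on this page (RenderTarget2D and these SpriteBatch overloads are real XNA API; SFML's sf::RenderTexture plays the same role):

        // Sketch: draw the game at 480x270, then upscale to the real backbuffer.
        RenderTarget2D lowRes = new RenderTarget2D(GraphicsDevice, 480, 270);

        GraphicsDevice.SetRenderTarget(lowRes);
        // ... draw the whole scene in 480x270 coordinates ...
        GraphicsDevice.SetRenderTarget(null); // back to the real backbuffer

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
            SamplerState.PointClamp, null, null); // PointClamp keeps pixels crisp
        spriteBatch.Draw(lowRes, GraphicsDevice.Viewport.Bounds, Color.White);
        spriteBatch.End();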


  • What are effective marketing strategies for iPhone games?

    - by Artemix
    So, long story short: some days ago I published an iPhone game. I think the game wasn't that bad, tbh, and still I got only 10 sales at $0.99. Are there any publishers, sponsors, or distributors who make your game "visible" on the App Store market, or is the only thing you need an amazing game, and that's all? Somehow I think that even if you have an awesome game, if you don't do that "marketing magic" correctly you will not exist in the store. Now I'm making a second game, completely different, and I want to know how to do things right. If anyone knows something about this topic, let me know.


  • How do I find which isometric tiles are inside the cameras current view?

    - by Steve
    I'm putting together an isometric engine and need to cull the tiles that aren't in the camera's current view. My tile coordinates go from left to right on the X and top to bottom on the Y with (0,0) being the top left corner. If I have access to say the top left, top right, bottom left and bottom right corner coordinates, is there a formula or something I could use to determine which tiles fall in range? This is a screenshot of the layout of the tiles for reference. If there isn't one, or there's a better way to determine which tiles are on screen and which to cull, I'm all ears and am grateful for any ideas. I've got a few other methods I may be able to try such as checking the position of the tile against a rectangle. I pretty much just need something quick. Thanks for giving this a read =)
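
    For the standard 2:1 diamond layout there is a closed-form answer (a hedged C# sketch; the formulas assume tile (0,0) at the world origin and the common mapping shown below, so the signs may need adjusting to this engine's exact layout): convert the camera's corners back to tile space, then iterate the bounding range.

        // Forward mapping (tile -> screen) for a 2:1 diamond isometric map:
        //   screenX = (tx - ty) * tileWidth  / 2
        //   screenY = (tx + ty) * tileHeight / 2
        // Inverting it recovers fractional tile coordinates from a world point:
        static (float tx, float ty) ScreenToTile(float x, float y, int tileWidth, int tileHeight)
        {
            float tx = x / tileWidth + y / tileHeight;
            float ty = y / tileHeight - x / tileWidth;
            return (tx, ty);
        }

        // Cull: run all four camera corners through ScreenToTile, take
        // floor(min) - 1 .. ceil(max) + 1 (the pad covers partial tiles),
        // clamp to the map bounds, and draw only the tiles in that range.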


  • What is a good practice for 2D scene graph partitioning for culling?

    - by DevilWithin
    I need to know an efficient way to cull the scene graph objects, rendering exclusively the ones in view, as fast as possible. I am thinking of doing it the following way: each object has a local bounding box, which holds the object's own bounds, and a global bounding box, which holds the bounds of the object and all its children. When the camera is moved, the render list is updated by traversing the global bounding boxes. When only an object is moved, it enlarges or shrinks its ancestors' global bounding boxes as needed, and in the end the render list may or may not be updated. What do you think of this approach? Do you think it will provide fast and efficient culling? Also, because the render list is a contiguous list, it could accelerate the rendering, right? Any further tips for 2D scene graphs are highly appreciated!
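
    A minimal sketch of the traversal described above (C#; SceneNode, RectangleF from System.Drawing, and the method names are my assumptions): because a parent's global box encloses every descendant, one failed test prunes the whole subtree, and survivors are appended to the contiguous render list.

        // Sketch: hierarchical AABB culling over a 2D scene graph.
        class SceneNode
        {
            public RectangleF LocalBounds;  // this object alone
            public RectangleF GlobalBounds; // this object plus all children
            public List<SceneNode> Children = new List<SceneNode>();

            public void CollectVisible(RectangleF view, List<SceneNode> renderList)
            {
                if (!view.IntersectsWith(GlobalBounds))
                    return;                 // whole subtree culled in one test
                if (view.IntersectsWith(LocalBounds))
                    renderList.Add(this);   // flat list -> cache-friendly draw pass
                foreach (var child in Children)
                    child.CollectVisible(view, renderList);
            }
        }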

