Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 528/1039 | < Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >

  • XNA 4: RenderTarget2D textures getting transparent on fullscreen

    - by Shashwat
    I'm generating a Texture2D object using RenderTarget2D, as in the following code:

      public static Texture2D GetTextTexture(string text, Vector2 position, SpriteFont font, Color foreColor, Color backColor, Texture2D background = null)
      {
          int width = (int)font.MeasureString(text).X;
          int height = (int)font.MeasureString(text).Y;
          GraphicsDevice device = Settings.game.GraphicsDevice;
          SpriteBatch spriteBatch = Settings.game.spriteBatch;

          RenderTarget2D renderTarget = new RenderTarget2D(device, width, height, false,
              SurfaceFormat.Color, DepthFormat.Depth24Stencil8,
              device.PresentationParameters.MultiSampleCount, RenderTargetUsage.DiscardContents);

          device.SetRenderTarget(renderTarget);
          device.Clear(backColor);

          spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
          if (background != null)
              spriteBatch.Draw(background, new Rectangle(0, 0, 70, 70), Color.White);
          spriteBatch.End();

          spriteBatch.Begin();
          spriteBatch.DrawString(font, text, position, foreColor, 0, new Vector2(0), 0.8f, SpriteEffects.None, 0);
          spriteBatch.End();

          device.SetRenderTarget(null);
          ResetGraphicsDeviceSettings();

          return (Texture2D)renderTarget;
      }

    It all works fine, but when I ToggleFullScreen() (and vice versa) the previously generated textures become transparent, while textures generated after the toggle come out correctly. What can be the reason for this?

    Read the article

  • Is white the best base color to start with when planning to shade sprites within Unity?

    - by SpartanDonut
    I'm looking into prototyping a game in Unity which will consist of solid square sprites / tiles. I figure I can represent different types of objects with different colors for each of the tiles in the game. I figure that I can import a single square sprite and shade it appropriately in Unity as opposed to imported squares of many different colors. My experience with adjusting the hue and saturation within Photoshop shows that white is not an easy color to change as things that are white often stay white. My testing in Unity shows that I can change the "color" of a sprite to anything other than white and the sprite is seemingly shaded appropriately, despite what I would have thought given my Photoshop experience. Since white objects do seem to take on the appropriate color shading when changed within Unity my gut tells me that this is the best base color to begin with, meaning that I can import a single white square sprite and simply adjust the color to represent different objects and object states. Is a white sprite actually the best color sprite to begin with and why does something like this work in Unity as opposed to adjusting the hue and saturation within Photoshop?
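
    For what it's worth, a sprite tint in most engines is a straight per-channel multiply of the texture colour by the tint colour, so a pure white texel (1, 1, 1) can become any colour, whereas a hue/saturation shift in Photoshop works in hue space and cannot add chroma to a pixel that has none. A minimal sketch of that multiply in plain C++ (illustrative only, not Unity API):

      #include <cstdio>

      struct Color { float r, g, b; };

      // Multiplicative tint: what setting a sprite's "color" effectively does.
      Color tint(Color texel, Color c) {
          return { texel.r * c.r, texel.g * c.g, texel.b * c.b };
      }

      int main() {
          Color white = {1.0f, 1.0f, 1.0f};   // white base sprite
          Color grey  = {0.5f, 0.5f, 0.5f};   // mid-grey base sprite
          Color red   = {1.0f, 0.2f, 0.2f};   // desired object colour

          Color a = tint(white, red);         // (1.0, 0.2, 0.2): the full tint colour
          Color b = tint(grey,  red);         // (0.5, 0.1, 0.1): only a darker version
          std::printf("white*red = (%.1f, %.1f, %.1f)\n", a.r, a.g, a.b);
          std::printf("grey*red  = (%.1f, %.1f, %.1f)\n", b.r, b.g, b.b);
      }

    Since the multiply can only darken, white is indeed the base that preserves the widest range of tints.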

    Read the article

  • Updating "Inactive" Chunks

    - by Conner Bryan
    In my game, the only chunks (4x4 areas of tiles) kept in memory are the ones the player is in. However, chunks need to have updates applied to them over time. A (likely) well-known example would be Minecraft: even if the player isn't in a chunk, the wheat still needs to grow over time. My current solution is to call a method and pass in the time since the chunk was last active... but what if the chunk depends on nearby chunks for information, e.g. vines spreading or similar? Are there any reasonable solutions to this problem, or should I simply not depend on nearby chunks?
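
    One common pattern, sketched below in C++ with made-up names (not tied to any particular engine), is to stamp each chunk with the time it was last simulated and run a capped catch-up loop when it is loaded again; for neighbour-dependent rules such as spreading vines, the catch-up is run on the chunk plus a one-chunk border ring so the needed neighbours are in memory at the same time.

      #include <vector>

      struct Chunk {
          double lastSimulated = 0.0;   // game-time at which this chunk was last updated
          // ... tiles, crops, vines, etc.
      };

      // One fixed step of the chunk's rules. "neighbors" lets rules such as vine
      // spreading look across chunk borders (hypothetical signature, game-specific body).
      void simulateStep(Chunk& c, const std::vector<Chunk*>& neighbors, double step) {
          // grow wheat, spread vines, ...
          (void)c; (void)neighbors; (void)step;
      }

      // Run when a chunk (plus its border ring) has just been loaded.
      void catchUp(Chunk& c, const std::vector<Chunk*>& neighbors, double now) {
          const double step     = 1.0;   // simulate in one-second slices
          const int    maxSteps = 600;   // cap so a chunk idle for days doesn't stall loading

          int steps = static_cast<int>((now - c.lastSimulated) / step);
          if (steps > maxSteps) steps = maxSteps;   // or switch to a statistical "N steps at once" rule

          for (int i = 0; i < steps; ++i)
              simulateStep(c, neighbors, step);

          c.lastSimulated = now;
      }

    The cap matters: past some idle time it is usually cheaper (and indistinguishable to the player) to apply a closed-form "this much growth happened" update instead of replaying every step.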

    Read the article

  • How can I get my first-person character in Unity to move to a ledge with an animation?

    - by BallzOfSteel
    I'm trying to get this to happen: the character walks up to a large crate, the player presses a button, and an animation plays in which the character climbs up onto the crate (all in first-person view). So far I've tried this with the normal "First Person Controller" prefab in Unity. My code so far:

      function OnTriggerStay(other : Collider) {
          if (other.tag == "GrabZone") {
              if (Input.GetKeyDown("e")) {
                  animation.Play("JumpToLedge");
              }
          }
      }

    However, when I use this on the FPC, the animation always plays from the position it was created at. I also tried creating an empty game object and placing the FPC inside it, which gives the same effect. I also tried animating only the graphics of the FPC; this seems to work, but since the Character Controller itself is not animated it stays on the ground, so the whole FPC stops working. Is there any way I could make this animation play from the local position the player is at at that time? Or can you think of any other sensible approach to a grab and climb?

    Read the article

  • Need efficient way to keep enemy from getting hit multiple times by same source

    - by TenFour04
    My game's a simple 2D one, but this probably applies to many types of scenarios. Suppose my player has a sword, or a gun that shoots a projectile that can pass through and hit multiple enemies. While the sword is swinging, there is a duration where I am checking for the sword making contact with any enemy on every frame. But once an enemy is hit by that sword, I don't want him to continue getting hit over and over as the sword follows through. (I do want the sword to continue checking whether it is hitting other enemies.) I've thought of a couple different approaches (below), but they don't seem like good ones to me. I'm looking for a way that doesn't force cross-referencing (I don't want the enemy to have to send a message back to the sword/projectile). And I'd like to avoid generating/resetting multiple array lists with every attack. Each time the sword swings it generates a unique id (maybe by just incrementing a global static long). Every enemy keeps a list of id's of swipes or projectiles that have already hit them, so the enemy knows not to get hurt by something multiple times. Downside is that every enemy may have a big list to compare to. So projectiles and sword swipes would have to broadcast their end-of-life to all enemies and cause a search and remove on every enemy's array list. Seems kind of slow. Each sword swipe or projectile keeps its own list of enemies that it has already hit so it knows not to apply damage. Downsides: Have to generate a new list (probably pull from a pool and clear one) every time a sword is swung or a projectile shot. Also, this breaks down modularity, because now the sword has to send a message to the enemy, and the enemy has to send a message back to the sword. Seems to me that two-way streets like this are a great way to create very difficult-to-find bugs.
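
    One compact variant of the second idea keeps the bookkeeping on the attack rather than on every enemy: each swing or projectile carries a small set of entity ids it has already damaged, and that set dies with the attack, so nothing has to broadcast end-of-life and no enemy-side cleanup pass is needed. A rough C++ sketch (names invented for illustration):

      #include <cstdint>
      #include <unordered_set>

      using EntityId = std::uint32_t;

      struct Enemy {
          EntityId id;
          int      hp;
      };

      // One live sword swing or projectile. The hit-set is owned by the attack,
      // so it is discarded automatically when the attack object goes away.
      struct Attack {
          int damage = 10;
          std::unordered_set<EntityId> alreadyHit;

          // Called once per frame per overlapping enemy by the collision pass.
          void onOverlap(Enemy& e) {
              if (alreadyHit.insert(e.id).second) {   // insert() reports whether the id was new
                  e.hp -= damage;                     // first contact this swing: apply damage
              }
              // later frames of the same swing fall through and do nothing
          }
      };

    If attacks are pooled, clearing alreadyHit when an attack is recycled replaces the per-enemy search-and-remove the question worries about, and the enemy never has to know the attack exists.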

    Read the article

  • How do I best remove an entity from my game loop when it is dead?

    - by Iain
    Ok so I have a big list of all my entities which I loop through and update. In AS3 I can store this as an Array (dynamic length, untyped), a Vector (typed) or a linked list (not native). At the moment I'm using Array but I plan to change to Vector or linked list if it is faster. Anyway, my question, when an Entity is destroyed, how should I remove it from the list? I could null its position, splice it out or just set a flag on it to say "skip over me, I'm dead." I'm pooling my entities, so an Entity that is dead is quite likely to be alive again at some point. For each type of collection what is my best strategy, and which combination of collection type and removal method will work best?
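
    For a flat Array/Vector where update order doesn't matter, the usual answer is swap-and-pop: overwrite the dead slot with the last element and shrink the list, which is O(1), leaves no null holes, and plays nicely with pooling (the removed entity just goes back to the pool). A language-agnostic sketch in C++; the same idea ports directly to an AS3 Array or Vector using pop():

      #include <cstddef>
      #include <utility>
      #include <vector>

      struct Entity {
          bool alive = true;
          void update(float dt) { (void)dt; /* ... */ }
      };

      // Update and remove dead entities in one pass. Iteration order is not preserved,
      // which is usually fine for an update list.
      void updateAll(std::vector<Entity*>& entities, float dt) {
          for (std::size_t i = 0; i < entities.size(); ) {
              entities[i]->update(dt);
              if (!entities[i]->alive) {
                  std::swap(entities[i], entities.back());   // swap-and-pop: O(1) removal
                  entities.pop_back();
                  // don't advance i: the swapped-in entity still gets updated this frame
              } else {
                  ++i;
              }
          }
      }

    A "skip me, I'm dead" flag plus a periodic compaction pass is the usual alternative when order must be stable; splicing out of the middle of a large Array every frame is the option to avoid.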

    Read the article

  • Sorting for 2D Drawing

    - by Nexian
    okie, looked through quite a few similar questions but still feel the need to ask mine specifically (I know, crazy). Anyhoo: I am drawing a game in 2D (isometric). My objects have their own arrays (i.e. Tiles[], Objects[], Particles[], etc.). I want to have a draw[] array to hold anything that will be drawn. Because it is 2D, I assume I must prioritise depth over any other sorting or things will look weird. My game is turn based so Tiles and Objects won't be changing position every frame. However, Particles probably will. So I am thinking I can populate the draw[] array (probably a vector?) with what is on-screen and have it add/remove object, tile & particle references when I pan the screen or when a tile or object is specifically moved. No idea how often I'm going to have to update for particles right now. I want to do this because my game may have many thousands of objects and I want to iterate through as few as possible when drawing. I plan to give each element a depth value to sort by. So, my questions: Does the above method sound like a good way to deal with the actual drawing? What is the most efficient way to sort a vector? Most of the time it won't require efficiency. But for panning the screen it will. And I imagine if I have many particles on screen moving across multiple tiles, it may happen quite often. For reference, my screen will be drawing about 2,800 objects at any one time. When panning, it will be adding/removing about ~200 elements every second, and each new element will need adding in the correct location based on depth.
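
    For the sort itself, a plain std::sort over ~2,800 items is cheap, and std::stable_sort keeps insertion order between equal depths so equal-depth sprites don't swap (flicker) from frame to frame; because the list is nearly sorted between frames, a sorted insert for newly visible elements also works well. A minimal C++ sketch with an invented Drawable type:

      #include <algorithm>
      #include <vector>

      struct Drawable {
          float depth;   // larger = further back, drawn first (pick one convention and keep it)
          int   id;      // or a handle back into Tiles[], Objects[], Particles[]
      };

      void sortForDraw(std::vector<Drawable>& drawList) {
          // stable_sort preserves relative order for equal depths, avoiding flicker.
          std::stable_sort(drawList.begin(), drawList.end(),
                           [](const Drawable& a, const Drawable& b) {
                               return a.depth > b.depth;   // back-to-front
                           });
      }

      // Insert a newly visible element at its correct depth without a full re-sort.
      void insertSorted(std::vector<Drawable>& drawList, const Drawable& d) {
          auto pos = std::lower_bound(drawList.begin(), drawList.end(), d,
                                      [](const Drawable& a, const Drawable& b) {
                                          return a.depth > b.depth;
                                      });
          drawList.insert(pos, d);
      }

    At roughly 200 additions and removals per second the lower_bound insert is more than fast enough, so a full sort is only really needed when many depths change at once.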

    Read the article

  • Mouse pointer position to screen space

    - by Ylisar
    If I have a mouse pointer position in canvas pixels, I can easily convert it to the -1..1 range for both X and Y by dividing by the canvas dimensions and remapping. However, the problem is what I should put in Z and W if I want my screen-space position to lie on the near plane. The step afterwards would be to multiply by the inverse of the view-projection matrix to take me to world space, where I can easily construct a ray from the camera's world-space position.
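
    In OpenGL-style clip conventions the near plane sits at NDC z = -1 (it is 0 in Direct3D-style conventions), and before the un-projection you can simply use w = 1, then divide by the resulting w afterwards. A sketch using GLM (assuming GL depth conventions; the function is illustrative, not engine API):

      #include <glm/glm.hpp>

      // Convert a mouse position in pixels to a world-space point on the near plane.
      // viewProj = projection * view. Assumes a GL-style near plane at NDC z = -1.
      glm::vec3 mouseToNearPlane(float mouseX, float mouseY,
                                 float canvasW, float canvasH,
                                 const glm::mat4& viewProj)
      {
          // pixel -> NDC; Y is flipped because pixel Y usually grows downwards
          float x =  (mouseX / canvasW) * 2.0f - 1.0f;
          float y = -((mouseY / canvasH) * 2.0f - 1.0f);

          glm::vec4 ndc(x, y, -1.0f, 1.0f);            // z = -1: near plane, w = 1
          glm::vec4 world = glm::inverse(viewProj) * ndc;
          return glm::vec3(world) / world.w;           // perspective divide back to a point
      }

    The ray direction is then normalize(nearPoint - cameraPosition), or the difference between the un-projected near-plane and far-plane (z = +1) points.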

    Read the article

  • Is this the most effective simple way to display a moving image? SDL2

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I'm curious: I was messing around, and is this an effective way to move an image? One problem is that the image leaves a trail behind it as it moves.

      #include "SDL.h"
      #include "SDL_image.h"

      int main(int argc, char* argv[]) {
          bool exit = false;
          SDL_Init(SDL_INIT_EVERYTHING);
          SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480, SDL_WINDOW_SHOWN);
          SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
          SDL_Surface *png = IMG_Load("character.png");

          SDL_Rect src;
          src.x = 0;   src.y = 0;   src.w = 161;  src.h = 159;
          SDL_Rect dest;
          dest.x = 50; dest.y = 50; dest.w = 161; dest.h = 159;

          SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
          SDL_FreeSurface(png);

          while (exit == false) {
              dest.x++;
              SDL_RenderClear(ren);
              SDL_RenderCopy(ren, tex, &src, &dest);
              SDL_RenderPresent(ren);
          }

          SDL_Delay(5000);
          SDL_DestroyTexture(tex);
          SDL_DestroyRenderer(ren);
          SDL_DestroyWindow(win);
          SDL_Quit();
      }
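
    The usual shape of an SDL2 frame loop is: poll events (so exit can ever become true and the window stays responsive), set the draw colour, clear, copy, present. A hedged sketch of that loop is below; whether it also cures the trailing depends on what the program really executes, since an SDL_RenderClear at the top of every frame should already wipe the previous frame.

      #include "SDL.h"

      // Minimal event-driven loop sketch (window/renderer/texture creation omitted).
      void runLoop(SDL_Renderer* ren, SDL_Texture* tex, SDL_Rect src, SDL_Rect dest) {
          bool running = true;
          while (running) {
              SDL_Event e;
              while (SDL_PollEvent(&e)) {                // without this the window never responds
                  if (e.type == SDL_QUIT) running = false;
              }

              dest.x++;                                   // better: scale movement by frame time

              SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);  // colour used by SDL_RenderClear
              SDL_RenderClear(ren);                       // erase last frame so nothing smears
              SDL_RenderCopy(ren, tex, &src, &dest);
              SDL_RenderPresent(ren);
          }
      }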

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

      #version 330

      in vec3 WorldPos0;
      in vec2 TexCoord0;
      in vec3 Normal0;
      in vec3 Tangent0;

      layout(location = 0) out vec3 WorldPos;
      layout(location = 1) out vec3 Diffuse;
      layout(location = 2) out vec3 Normal;

      uniform sampler2D gColorMap;
      uniform sampler2D gNormalMap;

      vec3 CalcBumpedNormal()
      {
          vec3 Normal = normalize(Normal0);
          vec3 Tangent = normalize(Tangent0);
          Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
          vec3 Bitangent = cross(Tangent, Normal);
          vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
          BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
          vec3 NewNormal;
          mat3 TBN = mat3(Tangent, Bitangent, Normal);
          NewNormal = TBN * BumpMapNormal;
          NewNormal = normalize(NewNormal);
          return NewNormal;
      }

      void main()
      {
          WorldPos = WorldPos0;
          Diffuse = texture(gColorMap, TexCoord0).xyz;
          Normal = CalcBumpedNormal();
      }

    If my render target textures are configured as:

      RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
      RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
      RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    and assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs:

    1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.

    2. Use a typed output array variable of the expected type. In this case, I think it would be something like this:

      out vec3 [3] output;

      void main()
      {
          output[0] = WorldPos0;
          output[1] = texture(gColorMap, TexCoord0).xyz;
          output[2] = CalcBumpedNormal();
      }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
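
    For reference, the host-side half of the layout(location = N) approach is attaching one texture per colour attachment and telling the FBO which attachments are live with glDrawBuffers; the textures do not need to be bound to texture units while being rendered into (binding to a unit only matters later, when sampling them). A hedged C++ setup sketch (error handling trimmed; assumes the textures were created with GL_RGB32F):

      #include <GL/glew.h>   // or whichever GL loader the project already uses

      // Attach three float textures to an FBO and route fragment outputs 0..2 to them.
      void setupGBuffer(GLuint fbo, const GLuint tex[3])
      {
          glBindFramebuffer(GL_FRAMEBUFFER, fbo);
          for (int i = 0; i < 3; ++i)
              glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                     GL_TEXTURE_2D, tex[i], 0);

          // Fragment output location 0 -> attachment 0, 1 -> 1, 2 -> 2.
          const GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
          glDrawBuffers(3, bufs);

          if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
              // report/handle an incomplete framebuffer here
          }
          glBindFramebuffer(GL_FRAMEBUFFER, 0);
      }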

    Read the article

  • Information about rendering, batches, the graphics card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague but it's hard to describe what I'm really looking for, but here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU due to my lack of technical background information, I'm clueless. I'm using XNA so it'd be nice if theory could be related to that. So what I actually wanna know is, what happens when and where (CPU/GPU) when you do specific draw actions? What is a batch? What influence do effects, projections etc have? Is data persisted on the graphics card or is it transferred over every step? When there's talk about bandwidth, are you talking about a graphics card internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process happens, that's the GPU's business, I'm interested on all the overhead that precedes that. I'd like to understand what's going on when I do action X, to adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, tutorials that give more insight in how to write better games are very much appreciated. Thanks :)

    Read the article

  • How to calculate direction from initial point and another point?

    - by Dvole
    I'm making a simple game where I shoot things from a certain point on the screen (A). I tap the screen and shoot a projectile from the initial point (A) towards the tap point (B). But instead of stopping at B, I want the projectile to keep moving along the same path and fly out of the bounds of the screen. How do I calculate a point that is on the same line as these two points, but further away? This is simple math, but I can't figure it out.
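
    The line through A and B can be written as P = A + t * (B - A): t = 0 gives A, t = 1 gives B, and any t > 1 is further along the same path; normalising (B - A) lets you step a fixed distance instead. A tiny C++ sketch with a generic 2D vector (no particular engine assumed):

      #include <cmath>
      #include <cstdio>

      struct Vec2 { float x, y; };

      // Point at parameter t along the line through a and b (t > 1 lies beyond b).
      Vec2 along(Vec2 a, Vec2 b, float t) {
          return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
      }

      // Point a fixed distance from a in the direction of b.
      Vec2 alongByDistance(Vec2 a, Vec2 b, float distance) {
          float dx = b.x - a.x, dy = b.y - a.y;
          float len = std::sqrt(dx * dx + dy * dy);
          return { a.x + dx / len * distance, a.y + dy / len * distance };
      }

      int main() {
          Vec2 a{100, 100}, b{200, 150};
          Vec2 target = along(a, b, 20.0f);   // 20x the A->B distance: safely off-screen
          std::printf("(%.0f, %.0f)\n", target.x, target.y);
      }

    Picking t (or the distance) larger than the screen diagonal divided by the length of B - A guarantees the target lies outside the visible area.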

    Read the article

  • How can I locate the frames of a spritesheet PNG based on this PLIST data?

    - by kitsune
    Someone asked me to reskin a certain game. He only sent me the whole sprite-sheet PNG and the PLIST files for the sprites, and instructed me to name each reskinned sprite with the same name as the corresponding original sprite. The problem is that he gave me the whole sprite sheet instead of the individual sprites plus the PLIST. Yes, I can read the PNG filenames from the PLIST, but I cannot rename the reskinned sprites I made because I'm not sure which sprite is, say, boy_gun_3_3.png; there are multiple guns and I don't know which is which. Is there a way to extract accurately named individual PNG files from the single sprite sheet using the PLIST?

    Read the article

  • How do I simplify a 2D game grid for level management while keeping its by-pixel features?

    - by Eric Thoma
    (I cross-posted this from Stack Overflow as this seems to be a more appropriate forum. I've looked around a little here and did not find an answer, so I hope this is not a recurring question.) This is a question dealing with 2D world design. I am playing around with creating a 2D bird's-eye-view shooter game, and I am looking to make the game sleek and advanced. I hope to be able to write physics so projectiles have momentum and knock-down properties. I am immediately running into the problem of world design. I need a way to have level files that store everything there is about the game world. This is easiest with just a grid of objects, but there are thin walls and other objects that don't seem to fit into a traditional grid cell. I want to be able to fit all of these together so I can streamline level design, so I don't have to put in the exact pixel-specific start and end of a wall. There doesn't seem to be an obvious translation from level file to game without forcing myself into a Pac-Man-like scenario, meaning a scenario where the game feels boxy and discrete. There is a contrast between the (relatively) smoothly moving characters and the finite jumps of a grid. I would appreciate an answer that describes implementation options or points me to resources that do. I would also appreciate references to sites that teach game design. The language I am using is Java (although I would love to use C or C++, but I can never find convenient resources in those languages). Thank you for any answers. Please leave any questions in the space below; I will be able to answer them later tonight (28th Nov).

    Read the article

  • "Unclutter" units in RTS game

    - by TravisG
    For intentional reasons, certain units in the game I'm currently programming don't have any collision detection and response among each other. This enables them to clutter right on top of each other. This is a wanted behavior, since there will be situations in the game when the player does want them to stack like that. However, I want to make the process of uncluttering them easy for the player, so that they just have to press a hotkey or click some button on the screen and have the units disperse just enough so it's easy to select a group of them with the mouse (if they stand on top of each other one mouseclick selects all units). How could I do this without running a brute force N^2 nearest neighbor search on all units?
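
    A coarse spatial hash (bucket units by cell) turns the neighbour query from N^2 into roughly N times the handful of units in nearby cells; a one-shot "spread out" command can then nudge each unit away from overlapping units found in its own and adjacent buckets, or simply assign each stacked unit a slot on a small ring around the stack's centre. A hedged C++ sketch of the bucketing part (invented types):

      #include <cmath>
      #include <cstdint>
      #include <unordered_map>
      #include <vector>

      struct Unit { float x, y; };

      // Pack a 2D cell coordinate into one 64-bit map key.
      static std::uint64_t cellKey(int cx, int cy) {
          return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(cx)) << 32)
               |  static_cast<std::uint32_t>(cy);
      }

      // For each unit, indices of units in its own and the 8 neighbouring cells.
      // cellSize should be roughly the desired separation distance.
      std::vector<std::vector<int>> nearbyUnits(const std::vector<Unit>& units, float cellSize) {
          std::unordered_map<std::uint64_t, std::vector<int>> grid;
          auto cellOf = [cellSize](float v) { return (int)std::floor(v / cellSize); };

          for (int i = 0; i < (int)units.size(); ++i)
              grid[cellKey(cellOf(units[i].x), cellOf(units[i].y))].push_back(i);

          std::vector<std::vector<int>> result(units.size());
          for (int i = 0; i < (int)units.size(); ++i) {
              int cx = cellOf(units[i].x), cy = cellOf(units[i].y);
              for (int dx = -1; dx <= 1; ++dx)
                  for (int dy = -1; dy <= 1; ++dy) {
                      auto it = grid.find(cellKey(cx + dx, cy + dy));
                      if (it == grid.end()) continue;
                      for (int j : it->second)
                          if (j != i) result[i].push_back(j);
                  }
          }
          return result;   // feed this into the separation / slot-assignment pass
      }

    Since the units only need uncluttering when the player asks for it, even a simple iterative "push apart overlapping pairs" loop over these neighbour lists converges quickly without ever touching distant units.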

    Read the article

  • How do I optimize searching for the nearest point?

    - by Rootosaurus
    For a little project of mine I'm trying to implement a space colonization algorithm in order to grow trees. The current implementation of the algorithm works fine, but I have to optimize the whole thing to make it generate faster. I work with 1 to 300K random attraction points to generate one tree, and it takes a lot of time to compute and compare the distances between attraction points and tree nodes in order to keep only the closest tree node for each attraction point. So I was wondering what solutions exist (I know they must exist) to avoid the time lost looping over every tree node for every attraction point to find the closest one, and so on until the tree is finished.
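
    The usual fixes are a spatial index over the tree nodes (a k-d tree, or a uniform grid, since space colonization already has an influence radius that bounds the search) so each attraction point only examines nodes in nearby cells, plus incremental updates: when a new node is added, only attraction points within its radius need re-checking. A rough C++ grid sketch with invented names; the cell size equals the influence radius so the nearest in-range node is always in the point's cell or one of its neighbours:

      #include <cmath>
      #include <unordered_map>
      #include <vector>

      struct Node { float x, y, z; };

      struct NodeGrid {
          float cell;                                           // = influence radius
          std::unordered_map<long long, std::vector<int>> cells;
          std::vector<Node> nodes;

          explicit NodeGrid(float influenceRadius) : cell(influenceRadius) {}

          long long key(int x, int y, int z) const {            // simple spatial hash
              return ((long long)x * 73856093LL) ^ ((long long)y * 19349663LL) ^ ((long long)z * 83492791LL);
          }
          int cellOf(float v) const { return (int)std::floor(v / cell); }

          void add(const Node& n) {                              // call whenever the tree grows
              nodes.push_back(n);
              cells[key(cellOf(n.x), cellOf(n.y), cellOf(n.z))].push_back((int)nodes.size() - 1);
          }

          // Index of the nearest node within the influence radius of p, or -1 if none.
          int nearest(const Node& p) const {
              int best = -1;
              float bestD2 = cell * cell;
              int cx = cellOf(p.x), cy = cellOf(p.y), cz = cellOf(p.z);
              for (int dx = -1; dx <= 1; ++dx)
              for (int dy = -1; dy <= 1; ++dy)
              for (int dz = -1; dz <= 1; ++dz) {
                  auto it = cells.find(key(cx + dx, cy + dy, cz + dz));
                  if (it == cells.end()) continue;
                  for (int i : it->second) {
                      float ex = nodes[i].x - p.x, ey = nodes[i].y - p.y, ez = nodes[i].z - p.z;
                      float d2 = ex * ex + ey * ey + ez * ez;
                      if (d2 < bestD2) { bestD2 = d2; best = i; }
                  }
              }
              return best;
          }
      };

    Hash collisions only merge buckets (extra candidates are rejected by the distance check), so the result stays correct; with hundreds of thousands of attraction points this typically turns the per-iteration cost from points times nodes into something close to linear in the number of points.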

    Read the article

  • How do I multiply pixels on an SDL Surface?

    - by NoobScratcher
    Okay, so I'm able to put blank pixels into a surface and also draw gradient pixels, rectangles, etc., but I don't know how to multiply the pixels on a surface, so I was hoping someone could provide me with information on this topic. I was thinking you could take the surface's pixels member and multiply it by 2, but that didn't give the results I wanted, so now I'm thinking you have to actually get to each pixel's position in bytes, read it into memory, and multiply that by 2. Am I correct or what? If so, what is it that lets me do that, and how do I do it?
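
    Multiplying the pixels member as one number won't work because it points at a packed array of per-pixel values; you have to lock the surface, walk each pixel, unpack it into channels, scale the channels with clamping, and pack it back. A hedged SDL sketch in C++ (a 32-bit surface is assumed):

      #include "SDL.h"
      #include <algorithm>
      #include <cstdint>

      // Multiply every pixel's RGB by `factor` (e.g. 2.0f to brighten). Assumes a 32-bit surface.
      void multiplyPixels(SDL_Surface* s, float factor) {
          if (SDL_MUSTLOCK(s)) SDL_LockSurface(s);

          auto* pixels = static_cast<std::uint8_t*>(s->pixels);
          for (int y = 0; y < s->h; ++y) {
              auto* row = reinterpret_cast<std::uint32_t*>(pixels + y * s->pitch);  // use pitch, not w*4
              for (int x = 0; x < s->w; ++x) {
                  std::uint8_t r, g, b, a;
                  SDL_GetRGBA(row[x], s->format, &r, &g, &b, &a);   // unpack according to the surface format
                  r = (std::uint8_t)std::min(255.0f, r * factor);   // clamp so channels don't wrap around
                  g = (std::uint8_t)std::min(255.0f, g * factor);
                  b = (std::uint8_t)std::min(255.0f, b * factor);
                  row[x] = SDL_MapRGBA(s->format, r, g, b, a);      // pack the scaled pixel back
              }
          }

          if (SDL_MUSTLOCK(s)) SDL_UnlockSurface(s);
      }

    The clamping is the part that a raw "multiply the bytes by 2" misses: without it, bright channels overflow and wrap to dark values.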

    Read the article

  • Artifacts when drawing particles with some alpha

    - by piotrek
    I want to draw some particles in my game, but when I draw one particle on top of another, the alpha channel of the particle on top "clears" the previously drawn particle. I set up blending in OpenGL this way:

      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    My fragment shader for the particles is very simple:

      precision highp float;
      precision highp int;

      uniform sampler2D u_maps[2];
      varying vec2 v_texture;
      uniform float opaque;
      uniform vec3 colorize;

      void main()
      {
          vec4 texColor = texture2D(u_maps[0], v_texture);
          gl_FragColor.rgb = texColor.rgb * colorize.rgb;
          gl_FragColor.a = texColor.a * opaque;
      }

    I attached a screenshot of the result. Do you know what I did wrong? I'm using OpenGL ES 2.0.
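
    A common cause of a front particle "erasing" the one behind it is depth writing: the mostly transparent fragments of the first-drawn quad still write depth, so a quad drawn later at the same spot is rejected by the depth test instead of being blended. The typical hedged fix on the host side is to draw particles after the opaque scene, roughly sorted back to front, with depth testing on but depth writes off:

      #include <GLES2/gl2.h>   // or the GL header the project already uses

      // State for the blended particle pass, after the opaque geometry is drawn.
      void beginParticlePass() {
          glEnable(GL_BLEND);
          glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
          glEnable(GL_DEPTH_TEST);   // still test against the opaque scene...
          glDepthMask(GL_FALSE);     // ...but don't write depth for transparent fragments
      }

      void endParticlePass() {
          glDepthMask(GL_TRUE);      // restore depth writes for the rest of the frame
      }

    If the particles are drawn without a depth buffer at all, the other usual suspect is draw order combined with a blend state that isn't actually enabled at draw time.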

    Read the article

  • Developing GLSL Shaders?

    - by skln
    I want to create shaders, but I need a tool to author them and see the visual result before I put them into my game, so I can tell whether a problem is in my game or in the shader I created. I've looked at a few, like RenderMonkey and OpenGL Shader Designer. From what I recall, RenderMonkey had a way to easily define your own attributes (now "in" variables for #version 330 vertex shaders), though I can't remember to what extent. Shader Designer requires a plugin that I didn't even bother looking into creating, because it's an external process and plugin. Are there any tools out there that support a scripting language, where I could easily provide specific input such as float movement = sin(elapsedTime()); and then declare in float movement; in the vertex shader? It'd also be good to hear how others develop shaders: whether they just code away and then plug the shader into their game, hoping to get the result they wanted.

    Read the article

  • Android loading screens blocking, good practice?

    - by Oren
    I've noticed that many (if not all) Android games don't support the "back" button during their loading screens, which leads to some frustrating moments when a user accidentally starts up the game and has to wait for the long loading stage to end in order to close it. So my questions are: 1) Why is that? Is there a good reason to avoid something like asynchronous loading (or some other solution to this problem) in Android games? 2) If there is no good reason not to support this functionality, what would be the best way to accomplish it?

    Read the article

  • Need to produce an animated texture of Water where each image tiles in all directions

    - by ProfVersaggi
    I need to produce a 2D 'animated' texture of "water" for a game, in which each image tiles in 'all' directions, much like those produced by the Caustics Generator, but with the power and flexibility of something like Blender. The final result from Caustics Generator is 32 images animated such that when all 32 are played in a loop they repeat seamlessly forever. They not only loop in time, but each image also tiles in all directions. This is nice, but it comes in only one flavor, so to speak. I'd like to accomplish the same thing with a Blender-type tool, and I have actually gotten to the point where I generate the X number of images, but they do not tile in 'all' directions, nor are they slightly animated. I've tried Blender texture animations using offsets, but with only limited success. Does anyone know how (or of a tool) to animate textures such that they tile in all (4) directions? Many thanks in advance.

    Read the article

  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top one. Instead, the transparent parts come out as opaque black (the clear colour), and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:

      uniform sampler2D texture;
      varying vec2 f_texcoord;

      void main()
      {
          gl_FragColor = texture2D(texture, f_texcoord);
      }

    I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng, and I make sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height. Naturally, I examined the brief paragraph of code:

      // calculating correction that shifts reflection up/down according to water wave Y position
      float4 projected_waveheight = mul(float4(input.positionWS.x, input.positionWS.y, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
      float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;
      projected_waveheight = mul(float4(input.positionWS.x, -0.8, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
      waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;
      reflection_disturbance.y = max(-0.15, waveheight_correction + reflection_disturbance.y);

    My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, so the water is just rendered as if there is nothing there, or just the sky (in the screenshot from the original post, that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline). My problem is that I cannot really pinpoint the logic behind it: projecting the actual world-space position of a point of the wave/water geometry and multiplying by -0.5, only to take another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code seem to indicate it was derived by trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide):

      float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;

    and then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them:

      waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;

    By removing the divide by 2, I see no difference in quality (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y; why is that? This redundancy and the seemingly arbitrary choice of -0.8 and -0.15 lead me to conclude that this might be a combination of heuristics and guesswork. Is there a logical underpinning to this, or is it just a desperate hack? The original post also includes an exaggerated screenshot of the initial problem the code fragment fixes, visible at the lowest tessellation level; hopefully it might spark an idea I'm missing. The -0.8 might be a reference height from which to deduce how much to disturb the texture coordinate sampling the planarly reflected geometry render, and -0.15 might be the lower bound, a safety measure.

    Read the article

  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials, and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position at which the shapes get drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1, 0, 0, 1), it makes my quad spin around. If I just do glRotatef(1, 0, 0, 0), it makes the quad smaller (further away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.
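
    glRotatef(angle, x, y, z) multiplies the current matrix by a rotation of angle degrees around the axis (x, y, z), and glTranslatef moves the coordinate system that subsequent draw calls use, rather than a camera as such; an all-zero axis, as in glRotatef(1, 0, 0, 0), is degenerate and gives unreliable results, which is why only the (0, 0, 1) case behaved sensibly. A small fixed-function sketch showing the usual order (window/context setup assumed elsewhere):

      #include <GL/gl.h>

      // Draw a unit quad rotated `angleDeg` degrees about the Z axis, centred at (x, y).
      // The call issued last (glRotatef) is applied to the vertices first, so the quad
      // spins about its own centre and is then moved to (x, y).
      void drawRotatedQuad(float x, float y, float angleDeg) {
          glMatrixMode(GL_MODELVIEW);
          glPushMatrix();
          glTranslatef(x, y, 0.0f);                // move the local origin to (x, y)
          glRotatef(angleDeg, 0.0f, 0.0f, 1.0f);   // rotate about Z (the axis pointing out of the screen)
          glBegin(GL_QUADS);
              glVertex2f(-0.5f, -0.5f);
              glVertex2f( 0.5f, -0.5f);
              glVertex2f( 0.5f,  0.5f);
              glVertex2f(-0.5f,  0.5f);
          glEnd();
          glPopMatrix();
      }

    Swapping the glTranslatef and glRotatef calls instead makes the quad orbit the origin, which is usually the easiest way to see that the order of these calls matters.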

    Read the article

  • HowTo Enable jBullet DebugMode

    - by Kenneth Bray
    I would like to render the physics world of jBullet to debug some issues in my game, and I am not finding too much on enabling the debugDraw method of jBullet. Do I need to write my own debugDraw method, or is there an easier way to draw the physics models to the screen? If there is already a built in method I would prefer to use that, otherwise I guess I will start making my own functions to handle this.

    Read the article

< Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >