Search Results

Search found 33194 results on 1328 pages for 'development approach'.

Page 420/1328 | < Previous Page | 416 417 418 419 420 421 422 423 424 425 426 427  | Next Page >

  • What are the cons of using DrawableGameComponent for every instance of a game object?

    - by Kensai
    I've read in many places that DrawableGameComponents should be reserved for things like "levels" or manager classes, rather than used for, say, individual characters or tiles (like this guy says here). But I don't understand why this is so. I read this post and it made a lot of sense to me, but views like that seem to be in the minority. I usually wouldn't pay much attention to things like this, but in this case I would like to know why the apparent majority believes one component per object is not the way to go. Maybe I'm missing something.
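
    A commonly cited reason for the majority view is per-component overhead and loss of control: every object registered as its own component gets separate Update/Draw dispatch and its own slot in the component collection, which is hard to batch, sort, or cull once there are thousands of tiles. A minimal sketch of the alternative, where a single DrawableGameComponent manages many plain objects (XNA 4.0 assumed; Tile and TileManager are hypothetical names, not from the question):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Plain game object: no component registration, just data plus methods.
        public class Tile
        {
            public Texture2D Texture;
            public Vector2 Position;

            public void Update(GameTime gameTime) { /* per-tile logic */ }
        }

        // One component owns the whole collection and batches the drawing.
        public class TileManager : DrawableGameComponent
        {
            private readonly List<Tile> tiles = new List<Tile>();
            private SpriteBatch spriteBatch;

            public TileManager(Game game) : base(game) { }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
            }

            public override void Update(GameTime gameTime)
            {
                foreach (var tile in tiles)
                    tile.Update(gameTime);
            }

            public override void Draw(GameTime gameTime)
            {
                spriteBatch.Begin();
                foreach (var tile in tiles)
                    spriteBatch.Draw(tile.Texture, tile.Position, Color.White);
                spriteBatch.End();
            }
        }

    With this layout the manager can sort, cull, and batch its objects in one place, which is awkward to do when each object is an independent component.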

    Read the article

  • How can I implement 2D cel shading in XNA?

    - by Artii
    So I was just wondering how to give a scene I am rendering a hand-drawn look (like, say, Crayon Physics). I don't really want to preprocess the sprites and was thinking of using a shader. Cel shading gives the effect I want to achieve, but I am only aware of 3D implementations of it. So I wanted to ask if anyone knows a way to get this effect in 2D, or whether cel shading works just as well on 2D scenes?
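
    Cel shading carries over to 2D fairly directly as a full-screen post-process: draw the sprite scene to an offscreen target, then draw that target through a pixel shader that quantizes colours (and optionally darkens edges). The sketch below covers only the C#/XNA composition side; celEffect stands for a hypothetical .fx shader that does the posterize/outline work and is not shown here:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class CelPostProcess
        {
            private readonly GraphicsDevice device;
            private readonly SpriteBatch spriteBatch;
            private readonly RenderTarget2D sceneTarget;
            private readonly Effect celEffect; // assumed shader: quantize colours / add outlines

            public CelPostProcess(GraphicsDevice device, SpriteBatch spriteBatch, Effect celEffect)
            {
                this.device = device;
                this.spriteBatch = spriteBatch;
                this.celEffect = celEffect;
                sceneTarget = new RenderTarget2D(device, device.Viewport.Width, device.Viewport.Height);
            }

            public void Draw(System.Action drawScene)
            {
                // 1. Render the normal 2D scene into an offscreen target.
                device.SetRenderTarget(sceneTarget);
                device.Clear(Color.CornflowerBlue);
                drawScene();

                // 2. Draw that target to the backbuffer through the cel shader.
                device.SetRenderTarget(null);
                spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                                  null, null, null, celEffect);
                spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
                spriteBatch.End();
            }
        }

    Because the effect runs on the composed frame, individual sprites never need preprocessing, which matches the constraint in the question.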

    Read the article

  • How to efficiently render resizable GUI elements in DirectX?

    - by PolGraphic
    I wonder what would be the most efficient way to render GUI elements. For constant-size elements (which can still move), a texture atlas seems good. But what about resizable elements, say a panel with textured borders? Is there any better way than rendering 9 rectangles with textures on them (I guess one texture with different texture coordinates for the top-left corner, borders, middle, etc., used in the shader)?
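
    For the resizable case the nine rectangles can be computed on the CPU and drawn from a single atlas texture, so a panel still costs only one texture switch. A minimal sketch of that 9-slice lookup, assuming an XNA-style SpriteBatch (the NinePatch name and the uniform border thickness are assumptions, not from the question):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public static class NinePatch
        {
            // Draws a resizable panel from one source region split 3x3.
            // 'border' is the corner/edge thickness in the source texture, in pixels.
            public static void Draw(SpriteBatch batch, Texture2D texture,
                                    Rectangle source, Rectangle destination, int border, Color color)
            {
                int[] srcX = { source.X, source.X + border, source.Right - border, source.Right };
                int[] srcY = { source.Y, source.Y + border, source.Bottom - border, source.Bottom };
                int[] dstX = { destination.X, destination.X + border, destination.Right - border, destination.Right };
                int[] dstY = { destination.Y, destination.Y + border, destination.Bottom - border, destination.Bottom };

                for (int row = 0; row < 3; row++)
                {
                    for (int col = 0; col < 3; col++)
                    {
                        var src = new Rectangle(srcX[col], srcY[row],
                                                srcX[col + 1] - srcX[col], srcY[row + 1] - srcY[row]);
                        var dst = new Rectangle(dstX[col], dstY[row],
                                                dstX[col + 1] - dstX[col], dstY[row + 1] - dstY[row]);
                        batch.Draw(texture, dst, src, color);
                    }
                }
            }
        }

    Corners keep their size, edges stretch along one axis and the middle stretches along both, which is the layout described in the question; moving the slicing into a shader is only worth it if nine quads per panel ever shows up as a real bottleneck.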

    Read the article

  • Rendering performance in FlasCC + UDK when compared to Stage3d and UDK on Windows?

    - by Arthur Wulf White
    Adobe recently released the Flash C++ Compiler, which UDK uses to target Flash Player, so developers can now use UDK for browser applications. Does this mean greater performance than using a Stage3D engine (Away3D 4), and how noticeable a difference in rendering speed would it make? Is there any benchmark you could propose that would allow a fair comparison? I am asking this to help myself understand the performance consequences of deciding to use UDK in a browser-based game. I would also like to know how it compares with UDK running natively on Windows. I am not asking which technology to use or which is better; I am only interested in optimizing rendering speed in a 3D browser game with Flash.

    Read the article

  • How to make unit selection circles merge?

    - by MaT
    I would like to know how to achieve this effect of merged unit-selection circles (the original question included reference images). How can the merging of the circles be achieved? I haven't found any explanation of this effect. I know that to project those textures I can build a decal system, but I don't know how to create the merging effect. If possible, I'm looking for a purely shader-based solution.
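
    One common way to get the fused look is metaball-style thresholding: accumulate every circle's radial falloff into one offscreen mask, then draw that mask once through a shader that keeps only values above a threshold, so overlapping circles share a single outline. The sketch below shows only the C#/XNA-style screen-space composition (the decal projection onto terrain mentioned in the question is left out); circleFalloff, outlineEffect, and the threshold shader itself are assumed, not part of the original post:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class SelectionCircleRenderer
        {
            private readonly GraphicsDevice device;
            private readonly SpriteBatch batch;
            private readonly RenderTarget2D mask;       // accumulation target
            private readonly Texture2D circleFalloff;   // radial gradient sprite (assumed asset)
            private readonly Effect outlineEffect;      // assumed shader: keeps only mask > threshold

            public SelectionCircleRenderer(GraphicsDevice device, SpriteBatch batch,
                                           Texture2D circleFalloff, Effect outlineEffect)
            {
                this.device = device;
                this.batch = batch;
                this.circleFalloff = circleFalloff;
                this.outlineEffect = outlineEffect;
                mask = new RenderTarget2D(device, device.Viewport.Width, device.Viewport.Height);
            }

            public void Draw(IEnumerable<Vector2> unitScreenPositions)
            {
                // Accumulate all circles into one mask; additive blending lets overlaps fuse.
                device.SetRenderTarget(mask);
                device.Clear(Color.Transparent);
                batch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
                foreach (var pos in unitScreenPositions)
                    batch.Draw(circleFalloff,
                               pos - new Vector2(circleFalloff.Width, circleFalloff.Height) / 2f,
                               Color.White);
                batch.End();

                // Composite the mask once; the shader turns the accumulated falloff into one merged ring.
                device.SetRenderTarget(null);
                batch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, outlineEffect);
                batch.Draw(mask, Vector2.Zero, Color.White);
                batch.End();
            }
        }

    The same idea works projected onto terrain: render the falloffs into a top-down mask and sample that mask from the terrain or decal shader.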

    Read the article

  • tunnel effect cocos2d

    - by samfisher
    I am looking to create a tunnel effect like the ones referenced below in cocos2d (iOS). Could anyone suggest any pointers? (ref Video 1, ref Video 2) So far I have tried several ring-shaped sprites with decreasing scale, all centered on the same point, with Z decreasing as well for each smaller sprite. I then animate each ring with CCScaleTo, scaling it up to 2.0 over the animation duration, but it does not come anywhere near the tunnel effect shown in the references. Thanks, sam

    Read the article

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
    I have a deferred rendering system that works well with solid objects drawn using counter-clockwise culling. I have a problem with clockwise-culled objects that are supposed to represent hollow objects displaying only their inside faces. The image (omitted here) showed a counter-clockwise culled object on the left and a clockwise-culled object on the right; the clockwise-culled object's faces display what would be displayed on the counter-clockwise faces. How can I get the lighting to light the inner faces of clockwise-culled objects while continuing to light the outer counter-clockwise faces as normal? My lighting method is below:

        private void DeferredLighting(GameTime gameTime)
        {
            // Set the render target for the lights
            game.GraphicsDevice.SetRenderTarget(lightMap);

            // Clear the render target to (0, 0, 0, 0)
            game.GraphicsDevice.Clear(Color.Transparent);

            // Set the render states
            game.GraphicsDevice.BlendState = BlendState.Additive;
            game.GraphicsDevice.DepthStencilState = DepthStencilState.None;
            game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

            // Set sampler state to Point as the Surface type requires it in XNA 4.0
            game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;

            // Set the camera properties for all lights
            BaseLight.SetCameraProperties(game.ActiveCamera);

            // Draw the lights
            int numLights = lights.Count;
            for (int i = 0; i < numLights; ++i)
            {
                if (lights[i].Diffuse.W > 0f)
                {
                    lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap);
                }
            }

            // Resolve the render target
            game.GraphicsDevice.SetRenderTarget(null);
        }

    I have tried adjusting the render states, but no combination works for both objects.

    Read the article

  • JMonkey Engine (JME) load Blender scene with textures?

    - by leigero
    I am having the hardest time trying to accomplish the simplest task. I have created a floor and 4 walls (not that complicated) in Blender. I added a basic material and a cloud texture so they have something to look at other than gray. When I import them into JMonkey they show up as solid white objects with no shading or depth: white silhouettes. I thought this might be a lighting issue, but I have an ambient light added to the scene. I can remove that light or adjust its intensity and it has no effect on the scene. I exported all Blender files to OgreXML format, then converted them to .j3o format in JMonkey. I renamed the textures to match their corresponding meshes and this didn't do anything. Does anybody know how to create a flat object and put it into JMonkey with a texture? This sounds simple, and there is absolutely no information on this. This should be step 1!

    Read the article

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "Minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this: I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values. I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that, given a very low isovalue, the accuracy of the surface being rendered should increase, in short "fewer 45-degree slopes, more rolling hills" type mesh output. However, this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of a fix? EDIT: A little more detail on what I am seeing when I "marching cube" the data. Firstly, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet in the marching-cubes image there are sharp falling edges even in the most central areas, where the gradient in the first image looks the smoothest. The data is regenerated each time I run this, so no two islands come out the same, and it's purely random rather than noise-based, but still: how can it look so smooth in one image and so unsmooth in the other?
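
    The "move vertex along the edge" step the question is asking about is normally a linear interpolation of each cube edge against the isolevel: the crossing point is placed according to the two corner densities. If that step is skipped, or the density data is effectively boolean (0 or 1 only), every vertex lands on an edge midpoint and the mesh comes out blocky with 45-degree facets, which matches the symptom described. A minimal sketch of the interpolation (hypothetical helper name, XNA-style Vector3 assumed):

        using Microsoft.Xna.Framework;

        public static class MarchingCubesHelper
        {
            // Returns the point on edge p1-p2 where the density field crosses 'isoLevel'.
            // With boolean data this always lands on the midpoint, which is why float
            // densities are needed for smooth rolling hills.
            public static Vector3 InterpolateVertex(float isoLevel,
                                                    Vector3 p1, Vector3 p2,
                                                    float density1, float density2)
            {
                const float epsilon = 0.00001f;

                if (System.Math.Abs(isoLevel - density1) < epsilon) return p1;
                if (System.Math.Abs(isoLevel - density2) < epsilon) return p2;
                if (System.Math.Abs(density1 - density2) < epsilon) return p1;

                float t = (isoLevel - density1) / (density2 - density1);
                return p1 + t * (p2 - p1);
            }
        }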

    Read the article

  • Detecting long held keys on keyboard

    - by Robinson Joaquin
    I just want to ask whether I can check for a keyboard key that is held/pressed for a long time, because I am creating a clone of Breakout crossed with air hockey for 2 human players. Here is the list of my concerns: Do I need a third-party library for key holds? Is multi-threading needed? I don't know anything about multi-threading and don't plan on using it (I'm just a newbie). One more thing: what if the two players press their respective keys at the same time? How can I program it to avoid errors, or worse, one player's key being prioritized before the key of the other? Example: Player 1 = W for UP & S for DOWN, Player 2 = O for UP & L for DOWN (say W & L are pressed at the same time). PS: I use GLUT for the visuals of the game.
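
    No extra library or thread is needed for this: the usual pattern is to record key-down and key-up events in a per-key "is held" table and read that table once per frame, which also makes simultaneously held keys independent of each other (W and L simply both read as held). The question uses GLUT and C++; the sketch below expresses the same idea in C# with hypothetical names, and the two callbacks would map to glutKeyboardFunc / glutKeyboardUpFunc:

        using System.Collections.Generic;

        public class KeyStateTracker
        {
            private readonly HashSet<char> held = new HashSet<char>();

            // Wire these to the window system's key-down / key-up callbacks.
            public void OnKeyDown(char key) { held.Add(char.ToLower(key)); }
            public void OnKeyUp(char key)   { held.Remove(char.ToLower(key)); }

            public bool IsHeld(char key) { return held.Contains(char.ToLower(key)); }
        }

        // Per-frame update: each paddle reads its own keys, so neither player is prioritized.
        public static class PaddleInput
        {
            public static float Direction(KeyStateTracker keys, char upKey, char downKey)
            {
                float dir = 0f;
                if (keys.IsHeld(upKey))   dir -= 1f; // move up
                if (keys.IsHeld(downKey)) dir += 1f; // move down
                return dir;
            }
        }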

    Read the article

  • What is the best way to code the XNA Game Server for FPS game?

    - by AgentFire
    I'm writing an FPS XNA game. It's going to be multiplayer, so I came up with the following: I'm making two different assemblies, one for the game logic and a second for drawing it and the gameplay-irrelevant stuff (like rocket trails). The connection type is client-server (not peer-to-peer), so every client first connects to the server and then the game begins. I have decided to use the XNA.Framework.Game class for the clients, to run the game in a window (or fullscreen), and the GameComponent/DrawableGameComponent classes to store the game objects and update and draw them each frame. Next, I want an answer to this question: what should I do on the server side? I have a few options: Create my own Game-like class on the server, which processes all the game logic (only, no graphics). The reason I am not using the standard Game class is that when I call Game.Run() a white window appears and I can't figure out how to get rid of it. Or somehow use the original XNA Game class, which already has the GameComponent collection and an Update event (60 times per second, just what I need). UPDATE: I have more questions. First, which socket protocol should I use, TCP or UDP? And how do I actually let the client know that one packet is meant to be processed after another? Second, if I am going to use the GameComponent class for the game objects stored and processed on the server, how do I make them drawn on the client? Inherit from them (while they are combined into one assembly)? Something else?
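
    For the first option, a headless server does not need the Game class at all; a plain loop with a fixed timestep gives the same 60 updates per second without ever creating a window. A minimal sketch (class and method names are hypothetical, not from the question):

        using System;
        using System.Diagnostics;
        using System.Threading;

        public class HeadlessServer
        {
            private const double StepSeconds = 1.0 / 60.0;  // fixed 60 Hz tick, like XNA's default
            private bool running = true;

            public void Run()
            {
                var timer = Stopwatch.StartNew();
                double accumulator = 0.0;
                double previous = timer.Elapsed.TotalSeconds;

                while (running)
                {
                    double now = timer.Elapsed.TotalSeconds;
                    accumulator += now - previous;
                    previous = now;

                    // Run as many fixed updates as the elapsed time requires.
                    while (accumulator >= StepSeconds)
                    {
                        UpdateWorld(StepSeconds);   // game logic from the shared logic assembly
                        accumulator -= StepSeconds;
                    }

                    Thread.Sleep(1);                // yield; avoids burning a whole core
                }
            }

            private void UpdateWorld(double dt)
            {
                // process received input packets, advance the simulation, queue state snapshots to send
            }
        }

    On the protocol question, FPS state updates are usually sent over UDP with a sequence number written into each packet so the receiver can order or discard them, while reliable one-off events can use TCP or a reliability layer on top of UDP.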

    Read the article

  • Trying to develop a game with android for cracking glasses in different dimensions

    - by user46514
    I am trying to develop a game on Android where I have to punch a hole through glass without shattering the glass completely. The glass will show up as different polygon shapes, and when a projectile creates a hole, the rest of the polygon must remain intact; only a polygonal opening is created at the point of impact. I am new to game design on Android, but my idea is to generate a random polygon shape in the path and then, at the point where the projectile hits it, create glass polygons for a splinter effect. The glass fragments generated around the point of impact could then be splintered further into polygons flying off at different angles, since I also need to capture the bits of glass flying off and falling down with gravity. Is my solution efficient in terms of performance and threading, or is there a better approach to this glass-breaking effect? Thanks, Dhiren

    Read the article

  • Center directional light shadow to the cameras eye

    - by Caesar
    I'm currently drawing my directional light shadow using this view and projection:

        XMFLOAT3 dir((float)pitch, (float)yaw, (float)roll);
        XMFLOAT3 center(0.0f, 0.0f, 0.0f);

        XMVECTOR lightDir = XMLoadFloat3(&dir);
        XMVECTOR lightPos = radius * lightDir;
        XMVECTOR targetPos = XMLoadFloat3(&center);
        XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
        XMMATRIX V = XMMatrixLookAtLH(lightPos, targetPos, up); // This is the view

        // Transform bounding sphere to light space.
        XMFLOAT3 sphereCenterLS;
        XMStoreFloat3(&sphereCenterLS, XMVector3TransformCoord(targetPos, V));

        // Ortho frustum in light space encloses scene.
        float l = sphereCenterLS.x - radius;
        float b = sphereCenterLS.y - radius;
        float n = sphereCenterLS.z - radius;
        float r = sphereCenterLS.x + radius;
        float t = sphereCenterLS.y + radius;
        float f = sphereCenterLS.z + radius;
        XMMATRIX P = XMMatrixOrthographicOffCenterLH(l, r, b, t, n, f); // This is the projection

    This works perfectly if the center of my scene is at (0.0, 0.0, 0.0). What I would like to do is move the center of the scene relative to the camera's position. How can I do that?
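
    One way to do it (a sketch of the idea rather than the asker's code, written in XNA/C# terms instead of DirectXMath): treat the camera's position, or a point a little way along its look direction, as the centre of the shadowed sphere and rebuild the light view from that moving centre every frame. Snapping the centre to shadow-map texel increments is a common extra step to stop edge shimmering and is omitted here; lightDirection is assumed to be normalized.

        using Microsoft.Xna.Framework;

        public static class DirectionalShadow
        {
            // Builds a light view/projection whose orthographic box of radius 'radius'
            // is centered on the camera instead of the world origin.
            public static void BuildLightMatrices(Vector3 cameraPosition, Vector3 lightDirection,
                                                  float radius, out Matrix view, out Matrix projection)
            {
                Vector3 center = cameraPosition;                     // moving center of the scene
                Vector3 lightPos = center - lightDirection * radius; // back up along the light
                view = Matrix.CreateLookAt(lightPos, center, Vector3.Up);

                // Transform the sphere center into light space and enclose it in an ortho box.
                Vector3 centerLS = Vector3.Transform(center, view);
                projection = Matrix.CreateOrthographicOffCenter(
                    centerLS.X - radius, centerLS.X + radius,
                    centerLS.Y - radius, centerLS.Y + radius,
                    -centerLS.Z - radius, -centerLS.Z + radius);     // XNA view space looks down -Z
            }
        }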

    Read the article

  • How to Hide the code of HTML5 games [closed]

    - by jeyanthinath
    Possible Duplicate: HTML5 game obfuscation I am beginning to develop games in HTML5, and my doubt is this: when a game is used online, its source is visible to others, even if we use complex code and references to JavaScript files. So what is the point of HTML5 if everyone can download the code and use their own modified version? Is it possible to hide the code of HTML5 games in a web page, or is there some other way to make it invisible to users? If not, what is the use of HTML5, since it is open to the user as well?

    Read the article

  • Box2D Bicycle Wheels Motor Problem - Flash 2.1a

    - by Craig
    I have made a bicycle with Box2D using several polygons for the frame at different angles connected using weld joints, and I have revolute joints on the wheels with a motor. I have made some basic terrain (straight ground and a small ramp) and added keyboard input to control the bicycle with torque to balance it. All of this is done with Box2D's Debug Draw. When the bicycle is on its back wheel but leaning diagonally forward (kinda like this position: /), the motors just cause it to spin over backwards, when in reality it should either stay on its back wheel or come down onto both wheels. Here's my code for the revolute joints:

        //Front Wheel Joint
        var frontWheelJointDef:b2RevoluteJointDef = new b2RevoluteJointDef();
        frontWheelJointDef.Initialize(frontWheelBody, secondFrameBody, frontWheelBody.GetWorldCenter());
        frontWheelJointDef.enableMotor=true;
        frontWheelJointDef.maxMotorTorque=10000;
        frontWheelJoint = _world.CreateJoint(frontWheelJointDef) as b2RevoluteJoint;

        //Rear Wheel Joint
        var rearWheelJointDef:b2RevoluteJointDef = new b2RevoluteJointDef();
        rearWheelJointDef.Initialize(rearWheelBody, firstFrameBody, rearWheelBody.GetWorldCenter());
        rearWheelJointDef.enableMotor=true;
        rearWheelJointDef.maxMotorTorque=10000;
        rearWheelJoint = _world.CreateJoint(rearWheelJointDef) as b2RevoluteJoint;

    And here's the relevant part of my update function:

        // up and down control wheels motor
        if (up) { motorSpeed-=0.5; }
        if (down) { motorSpeed += 0.5; }

        // left and right control cart torque
        if (left) {
            middleCentreFrameBody.ApplyTorque( -3);
            gearBody.ApplyTorque( -3);
            firstFrameBody.ApplyTorque( -3);
            secondFrameBody.ApplyTorque( -3);
            rearWheelToChainBody.ApplyTorque( -3);
            chainToFrontFrameBody.ApplyTorque( -3);
            topMiddleFrameBody.ApplyTorque( -3);
        }
        if (right) {
            middleCentreFrameBody.ApplyTorque( 3);
            gearBody.ApplyTorque( 3);
            firstFrameBody.ApplyTorque( 3);
            secondFrameBody.ApplyTorque( 3);
            rearWheelToChainBody.ApplyTorque( 3);
            chainToFrontFrameBody.ApplyTorque( 3);
            topMiddleFrameBody.ApplyTorque( 3);
        }

        // motor friction
        motorSpeed*=0.99;

        // motor max speed
        if (motorSpeed>100) { motorSpeed=100; }

        rearWheelJoint.SetMotorSpeed(motorSpeed);
        frontWheelJoint.SetMotorSpeed(motorSpeed);

    Any ideas what might be causing this? Thanks

    Read the article

  • What could cause a pixel shader to paint outside the lines of the vertex shader output?

    - by Rei Miyasaka
    From what I understand, the pixels that a pixel shader operates on are specified implicitly by the SV_POSITION output (in DirectX) of the vertex shader. What, then, could cause a pixel shader to render in the middle of nowhere? I used the new Visual Studio 2012 graphics debugger to visualize my vertex and pixel shader output. This is the output from a DrawIndexed() call that draws a cube (screenshot omitted): the pink part is the rendered output of the pixel shader, which takes the cube on its left as its input. The vertex shader code:

        cbuffer Buf
        {
            float4x4 final;
        };

        struct In
        {
            float4 pos:POSITION;
            float3 norm:NORMAL;
            float2 texuv:TEXCOORD;
        };

        struct Out
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        Out main(In input)
        {
            Out output;
            output.pos = mul(input.pos, final);
            output.col = float4(1.0f, 0.5f, 0.5f, 1.0f);
            output.tex = input.texuv;
            return output;
        }

    And the pixel shader:

        struct In
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        float4 main(In input) : SV_TARGET
        {
            return input.col;
        }

    The rasterizer stage is the only thing between the vertex shader and the pixel shader, so my suspicion is that it's some rasterizer stage setting. But the rasterizer stage shouldn't change the shape of the vertex shader output so drastically, should it?

    Read the article

  • Java game object pool management

    - by Kenneth Bray
    Currently I am using arrays to handle all of my game objects in the game I am making, and I know how terrible this is for performance. My question is: what is the best way to handle game objects without hurting performance? Here is how I am creating an array and then looping through it to update the objects in it:

        public static ArrayList<VboCube> game_objects = new ArrayList<VboCube>();

        /* add objects to the game */

        while (!Display.isCloseRequested() && !Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
            for (int i = 0; i < game_objects.size(); i++){
                // draw the object
                game_objects.get(i).Draw();
                game_objects.get(i).Update();
                //world.updatePhysics();
            }
        }

    I am not looking for someone to write me code for asset or object management, just to point me in a better direction to get better performance. I appreciate the help you guys have provided me in the past, and I don't think I would be as far along with my project without the support on Stack Exchange!
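
    The list itself is rarely the bottleneck; in a garbage-collected language the bigger win is usually to stop allocating and discarding objects every frame. An object pool parks dead objects and hands them back out on demand. The question's code is Java/LWJGL, but the structure is language-neutral; sketched here in C# with hypothetical names:

        using System;
        using System.Collections.Generic;

        // A minimal generic object pool: reuse instances instead of allocating new ones,
        // which keeps garbage-collection pauses out of the game loop.
        public class ObjectPool<T> where T : class
        {
            private readonly Stack<T> free = new Stack<T>();
            private readonly Func<T> create;

            public ObjectPool(Func<T> create, int preallocate = 0)
            {
                this.create = create;
                for (int i = 0; i < preallocate; i++)
                    free.Push(create());
            }

            public T Obtain()
            {
                return free.Count > 0 ? free.Pop() : create();
            }

            public void Release(T item)
            {
                free.Push(item);   // caller is responsible for resetting the object's state
            }
        }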

    Read the article

  • How do I efficiently code both the client and server at the same time?

    - by liamzebedee
    I'm coding my game using a client-server model. When playing singleplayer, the game starts a local server and interacts with it just like a remote server (multiplayer). I have done this to avoid writing separate singleplayer and multiplayer code. I have just started coding and have encountered a major problem. Currently I'm developing the game in Eclipse, with all the game classes organized into packages. Then, in my server code, I just use all the classes in the client packages. The problem is that these client classes have variables that are specific to rendering, which obviously wouldn't be performed on a server. Should I create modified versions of the client classes to use on the server? Or should I just modify the client classes with a boolean to indicate whether it's the client or the server using them? Are there any other options? I just had a thought about maybe using the server class as the core class and then extending it with rendering stuff.
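
    Beyond the boolean flag, one option is to keep rendering state out of the shared classes entirely: the shared (server-safe) package holds plain simulation classes, and the client wraps each one in a view object that owns textures, trails, and other draw-only data. The question's code base is Java, but the split looks the same in any language; a sketch in C# with hypothetical names (Sprite and TrailEmitter stand in for client-only types):

        // Shared logic package: no rendering data at all, safe to run on the server.
        public class Projectile
        {
            public float X, Y;
            public float VelocityX, VelocityY;

            public void Update(float dt)
            {
                X += VelocityX * dt;
                Y += VelocityY * dt;
            }
        }

        // Client-only package: wraps a logic object and owns everything draw-related.
        public class ProjectileView
        {
            private readonly Projectile projectile;   // the shared simulation object
            private readonly Sprite sprite;           // hypothetical client-side sprite type
            private readonly TrailEmitter trail;      // purely cosmetic, never exists on the server

            public ProjectileView(Projectile projectile, Sprite sprite, TrailEmitter trail)
            {
                this.projectile = projectile;
                this.sprite = sprite;
                this.trail = trail;
            }

            public void Draw()
            {
                sprite.DrawAt(projectile.X, projectile.Y);
                trail.EmitAt(projectile.X, projectile.Y);
            }
        }

    The server only ever loads the logic package; the client holds one view per logic object it wants to draw, so no class needs to know which side it is running on.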

    Read the article

  • add collision detection to sprite?

    - by xBroak
    bassically im trying to add collision detection to the sprite below, using the following: self.rect = bounds_rect collide = pygame.sprite.spritecollide(self, wall_list, False) if collide: # yes print("collide") However it seems that when the collide is triggered it continuously prints 'collide' over and over when instead i want them to simply not be able to walk through the object, any help? def update(self, time_passed): """ Update the creep. time_passed: The time passed (in ms) since the previous update. """ if self.state == Creep.ALIVE: # Maybe it's time to change the direction ? # self._change_direction(time_passed) # Make the creep point in the correct direction. # Since our direction vector is in screen coordinates # (i.e. right bottom is 1, 1), and rotate() rotates # counter-clockwise, the angle must be inverted to # work correctly. # self.image = pygame.transform.rotate( self.base_image, -self.direction.angle) # Compute and apply the displacement to the position # vector. The displacement is a vector, having the angle # of self.direction (which is normalized to not affect # the magnitude of the displacement) # displacement = vec2d( self.direction.x * self.speed * time_passed, self.direction.y * self.speed * time_passed) self.pos += displacement # When the image is rotated, its size is changed. # We must take the size into account for detecting # collisions with the walls. # self.image_w, self.image_h = self.image.get_size() global bounds_rect bounds_rect = self.field.inflate( -self.image_w, -self.image_h) if self.pos.x < bounds_rect.left: self.pos.x = bounds_rect.left self.direction.x *= -1 elif self.pos.x > bounds_rect.right: self.pos.x = bounds_rect.right self.direction.x *= -1 elif self.pos.y < bounds_rect.top: self.pos.y = bounds_rect.top self.direction.y *= -1 elif self.pos.y > bounds_rect.bottom: self.pos.y = bounds_rect.bottom self.direction.y *= -1 self.rect = bounds_rect collide = pygame.sprite.spritecollide(self, wall_list, False) if collide: # yes print("collide") elif self.state == Creep.EXPLODING: if self.explode_animation.active: self.explode_animation.update(time_passed) else: self.state = Creep.DEAD self.kill() elif self.state == Creep.DEAD: pass #------------------ PRIVATE PARTS ------------------# # States the creep can be in. # # ALIVE: The creep is roaming around the screen # EXPLODING: # The creep is now exploding, just a moment before dying. # DEAD: The creep is dead and inactive # (ALIVE, EXPLODING, DEAD) = range(3) _counter = 0 def _change_direction(self, time_passed): """ Turn by 45 degrees in a random direction once per 0.4 to 0.5 seconds. """ self._counter += time_passed if self._counter > randint(400, 500): self.direction.rotate(45 * randint(-1, 1)) self._counter = 0 def _point_is_inside(self, point): """ Is the point (given as a vec2d) inside our creep's body? """ img_point = point - vec2d( int(self.pos.x - self.image_w / 2), int(self.pos.y - self.image_h / 2)) try: pix = self.image.get_at(img_point) return pix[3] > 0 except IndexError: return False def _decrease_health(self, n): """ Decrease my health by n (or to 0, if it's currently less than n) """ self.health = max(0, self.health - n) if self.health == 0: self._explode() def _explode(self): """ Starts the explosion animation that ends the Creep's life. 
""" self.state = Creep.EXPLODING pos = ( self.pos.x - self.explosion_images[0].get_width() / 2, self.pos.y - self.explosion_images[0].get_height() / 2) self.explode_animation = SimpleAnimation( self.screen, pos, self.explosion_images, 100, 300) global remainingCreeps remainingCreeps-=1 if remainingCreeps == 0: print("all dead")

    Read the article

  • FreeType2 Crash on FT_Init_FreeType

    - by JoeyDewd
    I'm currently trying to learn how to use the FreeType2 library for drawing fonts with OpenGL. However, when I start the program it immediately crashes with the following error: "(Can't correctly start the application (0xc000007b))". Commenting out the FT_Init_FreeType call removes the error and my game starts just fine. I'm wondering if it's my code or if it has something to do with loading the DLL file. My code:

        #include "SpaceGame.h"
        #include <ft2build.h>
        #include FT_FREETYPE_H

        //Freetype test
        FT_Library library;

        Game::Game(int Width, int Height)
        {
            //Freetype
            FT_Error error = FT_Init_FreeType(&library);
            if(error)
            {
                cout << "Error occured during FT initialisation" << endl;
            }

    And my current use of the FreeType2 files: inside my bin folder (where the debug .exe is located) are freetype6.dll, libfreetype.dll.a, and libfreetype-6.dll. In Code::Blocks, I've linked to the lib and include folders of the FreeType 2.3.5.1 version and included the compiler flag -lfreetype. My program starts perfectly fine if I comment out the FT_Init call, which means the includes and library files should be fine. I can't find a solution to my problem and Google isn't helping me, so any help would be greatly appreciated.

    Read the article

  • Problems Rendering Text in OpenGL Using FreeType

    - by Sean M.
    I've been following both the FreeType2 tutorial and the WikiBooks tutorial, trying to combine things from both in order to load and render fonts using the FreeType library. I used the font loading code from the FreeType2 tutorial and tried to implement the rendering code from the WikiBooks tutorial (tried being the key word, as I'm still trying to learn modern OpenGL; I'm using 3.2). Everything loads correctly and I have the shader program to render the text with working, but I can't get the text to render. I'm 99% sure that it has something to do with how I am passing data to the shader, or how I set up the screen. These are the code segments that handle OpenGL initialization, as well as font initialization and rendering:

        //Init glfw
        if (!glfwInit())
        {
            fprintf(stderr, "GLFW Initialization has failed!\n");
            exit(EXIT_FAILURE);
        }
        printf("GLFW Initialized.\n");

        //Process the command line arguments
        processCmdArgs(argc, argv);

        //Create the window
        glfwWindowHint(GLFW_SAMPLES, g_aaSamples);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard",
                                        g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr);
        if (!g_mainWindow)
        {
            fprintf(stderr, "Could not create GLFW window!\n");
            closeOGL();
            exit(EXIT_FAILURE);
        }
        glfwMakeContextCurrent(g_mainWindow);
        printf("Window and OpenGL rendering context created.\n");

        glClearColor(0.2f, 0.2f, 0.2f, 1.0f);

        //Are these necessary for Modern OpenGL (3.0+)?
        glViewport(0, 0, g_screenWidth, g_screenHeight);
        glOrtho(0, g_screenWidth, g_screenHeight, 0, -1, 1);

        //Init glew
        int err = glewInit();
        if (err != GLEW_OK)
        {
            fprintf(stderr, "GLEW initialization failed!\n");
            fprintf(stderr, "%s\n", glewGetErrorString(err));
            closeOGL();
            exit(EXIT_FAILURE);
        }
        printf("GLEW initialized.\n");

    Here is the font file (it's slightly too big to post): CFont.h/CFont.cpp Here is the solution zipped up: [solution] (https://dl.dropboxusercontent.com/u/36062916/VoxelShipyard.zip), if anyone feels they need the entire solution. If anyone could take a look at the code, it would be greatly appreciated. Also, if someone has a tutorial that is a little more user friendly, that would also be appreciated. Thanks.

    Read the article

  • How do I calculate the boundary of the game window after transforming the view?

    - by Cypher
    My Camera class handles zoom, rotation, and of course panning. It's invoked through SpriteBatch.Begin, like so many other XNA 2D camera classes. It calculates the view Matrix like so:

        public Matrix GetViewMatrix()
        {
            return Matrix.Identity *
                Matrix.CreateTranslation(new Vector3(-this.Spatial.Position, 0.0f)) *
                Matrix.CreateTranslation(-(this.viewport.Width / 2), -(this.viewport.Height / 2), 0.0f) *
                Matrix.CreateRotationZ(this.Rotation) *
                Matrix.CreateScale(this.Scale, this.Scale, 1.0f) *
                Matrix.CreateTranslation(this.viewport.Width * 0.5f, this.viewport.Height * 0.5f, 0.0f);
        }

    I was having a minor issue with performance which, after doing some profiling, led me to add a culling feature to my rendering system. Before I implemented the camera's zoom feature, it would simply grab the camera's boundaries and cull any game objects that did not intersect with the camera. However, after giving the camera the ability to zoom, that no longer works. The reason why was visible in the (omitted) screenshot: the navy blue rectangle represents the camera's boundaries when zoomed out all the way (Camera.Scale = 0.5f). So, when zoomed out, game objects are culled before they reach the boundaries of the window. The camera's width and height are determined by the Viewport properties of the same name (maybe this is my mistake? I wasn't expecting the camera to "resize" like this). What I'm trying to calculate is a Rectangle that defines the boundaries of the screen, as indicated by my awesome blue arrows, even after the camera is rotated, scaled, or panned. Here is how I've more recently found out how not to do it:

        public Rectangle CullingRegion
        {
            get
            {
                Rectangle region = Rectangle.Empty;

                Vector2 size = this.Spatial.Size;
                size *= 1 / this.Scale;

                Vector2 position = this.Spatial.Position;
                position = Vector2.Transform(position, this.Inverse);

                region.X = (int)position.X;
                region.Y = (int)position.Y;
                region.Width = (int)size.X;
                region.Height = (int)size.Y;

                return region;
            }
        }

    It seems to calculate the right size, but when I render this region it moves around, which will obviously cause problems. It needs to be "static", so to speak. It's also obscenely slow, which causes more of a problem than it solves. What am I missing?
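
    A common way to get a stable world-space culling rectangle (a sketch of the idea, not the asker's code): transform the four viewport corners by the inverse of the view matrix the camera already produces, then take the axis-aligned bounds of the four results. Inverting the matrix once per frame, or only when the camera actually changes, keeps it cheap.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public static class CameraCulling
        {
            // Returns the world-space rectangle currently visible through the camera.
            public static Rectangle GetVisibleArea(Matrix view, Viewport viewport)
            {
                Matrix inverse = Matrix.Invert(view);

                // Screen-space corners of the window, mapped back into world space.
                Vector2 tl = Vector2.Transform(Vector2.Zero, inverse);
                Vector2 tr = Vector2.Transform(new Vector2(viewport.Width, 0), inverse);
                Vector2 bl = Vector2.Transform(new Vector2(0, viewport.Height), inverse);
                Vector2 br = Vector2.Transform(new Vector2(viewport.Width, viewport.Height), inverse);

                // Axis-aligned bounds of the transformed corners (handles rotation and zoom).
                float minX = MathHelper.Min(MathHelper.Min(tl.X, tr.X), MathHelper.Min(bl.X, br.X));
                float maxX = MathHelper.Max(MathHelper.Max(tl.X, tr.X), MathHelper.Max(bl.X, br.X));
                float minY = MathHelper.Min(MathHelper.Min(tl.Y, tr.Y), MathHelper.Min(bl.Y, br.Y));
                float maxY = MathHelper.Max(MathHelper.Max(tl.Y, tr.Y), MathHelper.Max(bl.Y, br.Y));

                return new Rectangle((int)minX, (int)minY, (int)(maxX - minX), (int)(maxY - minY));
            }
        }

    Any object whose bounds intersect the returned rectangle is potentially visible; everything else can be skipped before drawing.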

    Read the article

  • Cropping a line (laser beam) in XNA

    - by electroflame
    I have a laser sprite that I wish to crop. I want to crop it so that when it collides with an item, I can calculate the distance between the starting point and the ending point and only draw that. This eliminates the "overdraw" of a laser drawing past an item. Essentially, I'm trying to crop a line, but also keep that line "attached" to the nose of my ship. The line should not be drawn past the nose of my ship; that should be the starting point. There is no rotation to worry about. Currently, I thought that doing this through SpriteBatch would be best. This is my current SpriteBatch code:

        spriteBatch.Draw(Laser.sprite,
            new Rectangle((int)Laser.position.X, (int)Laser.position.Y, Laser.sprite.Width, LaserHeight),
            new Rectangle(0, 0, (int)(Laser.sprite.Width), LaserHeight),
            new Color(255, 255, 255, (byte)MathHelper.Clamp(Laser.Alpha, 0, 255)),
            Laser.rotation,
            new Vector2(Laser.sprite.Width/2, LaserHeight/2),
            SpriteEffects.None,
            0);

    But this doesn't quite work. It does only draw part of the sprite, but when LaserHeight is incremented, it lengthens the line in both directions! I believe this is due to some stupid error on my part with the origin of the draw. Quick recap: I need to have my laser sprite drawn with the bottom of it at the nose of my ship, and then use LaserHeight to crop the image so only part of it is drawn. I have a feeling my explanation is a bit...lacking. So if you require more information, please say so and I will try to provide more information. Thanks in advance!
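
    The both-directions growth does point at the origin: with the origin set to (Width/2, LaserHeight/2), the sprite is centered on Laser.position, so every extra pixel of LaserHeight pushes the beam half a pixel toward the ship and half a pixel away. A sketch of a draw call with the origin at the bottom-center of the cropped region, so the beam stays anchored at the ship's nose and only grows away from it (this assumes Laser.position is the ship's nose and that the beam's base is at the bottom edge of the texture; adjust the source rectangle if the art is laid out differently):

        // Anchor the beam at its bottom-center so increasing LaserHeight only extends the far end.
        Rectangle source = new Rectangle(0, Laser.sprite.Height - LaserHeight,
                                         Laser.sprite.Width, LaserHeight);   // crop from the sprite's bottom edge
        Vector2 origin = new Vector2(Laser.sprite.Width / 2f, LaserHeight);  // bottom-center of the cropped region

        spriteBatch.Draw(
            Laser.sprite,
            Laser.position,                // nose of the ship; the beam extends away from here
            source,
            new Color(255, 255, 255, (byte)MathHelper.Clamp(Laser.Alpha, 0, 255)),
            Laser.rotation,
            origin,
            1f,                            // scale
            SpriteEffects.None,
            0f);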

    Read the article

  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this:

        sampler input : register(s0);

        float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0
        {
            float4 colour = tex2D(input, coords);

            if(colour.r == sourceColours[0].r &&
               colour.g == sourceColours[0].g &&
               colour.b == sourceColours[0].b)
                return targetColours[0];

            return colour;
        }

    What I would like to do is have the function take in 2 textures: a default table and a lookup table (both the same dimensions). Grab the current pixel, find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured out how to pass the textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.

    Read the article

  • How to fetch only the sprites in the player's range of motion for collision testing? (2D, axis aligned sprites)

    - by Twodordan
    I am working on a 2D sprite game for educational purposes. (In case you want to know, it uses WebGL and JavaScript.) I've implemented movement using the Euler method (and delta time) to keep things simple. Now I'm trying to tackle collisions. The way I wrote things, my game only has rectangular sprites (axis-aligned, never rotated) of various/variable sizes. So I need to figure out what I hit and which side of the target sprite I hit (and I'm probably going to use these intersection tests). The old-fashioned method seems to be tile-based grids, to target only a few tiles at a time, but that sounds silly and impractical for my game. (Splitting the whole level into blocks, with each sprite's bounding box spanning multiple blocks, I could accept. But if the sprites change size and move around, you have to keep changing which tiles they belong to, every frame; it doesn't sound right.) In Flash you can test collision under one point, but it's not efficient to iterate through all the elements on stage each frame (hence why people use the tile method). Bottom line: I'm trying to figure out how to test only the elements within the player's range of motion. (I know how to get the range of motion, and I have a good idea of how to write a collisionCheck(playerSprite, targetSprite) function. But how do I know which sprites are currently in the player's vicinity, so I can fetch only them?) Please discuss. Cheers!
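
    A middle ground between "check everything" and a fixed tile map is a uniform grid (spatial hash) that is rebuilt from each sprite's current AABB every frame; re-bucketing is just integer division, so changing sizes and positions are not a problem. The game in the question is JavaScript/WebGL; the sketch below is in C# for brevity, with hypothetical names, and translates directly:

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        // A simple uniform-grid broad phase: sprites are bucketed by the cells their
        // AABB overlaps, and a query returns only the sprites sharing cells with the
        // player's swept bounds.
        public class SpatialGrid<T>
        {
            private readonly int cellSize;
            private readonly Dictionary<long, List<T>> cells = new Dictionary<long, List<T>>();

            public SpatialGrid(int cellSize) { this.cellSize = cellSize; }

            public void Clear() { cells.Clear(); }

            public void Insert(T item, Rectangle bounds)
            {
                ForEachCell(bounds, key =>
                {
                    if (!cells.TryGetValue(key, out var list))
                        cells[key] = list = new List<T>();
                    list.Add(item);
                });
            }

            public HashSet<T> Query(Rectangle bounds)
            {
                var result = new HashSet<T>();
                ForEachCell(bounds, key =>
                {
                    if (cells.TryGetValue(key, out var list))
                        foreach (var item in list) result.Add(item);
                });
                return result;
            }

            private void ForEachCell(Rectangle bounds, Action<long> action)
            {
                // Note: for worlds with negative coordinates, use floor division instead of truncation.
                int minX = bounds.Left / cellSize, maxX = bounds.Right / cellSize;
                int minY = bounds.Top / cellSize, maxY = bounds.Bottom / cellSize;
                for (int x = minX; x <= maxX; x++)
                    for (int y = minY; y <= maxY; y++)
                        action(((long)x << 32) ^ (uint)y);   // pack cell coords into one key
            }
        }

    Each frame: Clear(), Insert every sprite with its bounds, then Query() the player's current bounds unioned with its destination for the frame, and run the precise intersection tests only on what comes back.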

    Read the article
