Search Results

Search found 26043 results on 1042 pages for 'development trunk'.


  • How do you calculate UVW coordinates?

    - by Jenko
    I'm working on a 3D engine and calculating UVT coordinates, where U and V represent pixel positions on the texture measured from 0 to 1, and T is: T = perspective / Z. But I'm trying to use this perspective-correct triangle rasteriser, which requires a W per vertex. How do I calculate the W for each vertex for the drawPerspectiveTexturedPolygon() function? Hint: the code comments refer to W as the "homogeneous coordinate"... does that mean anything?
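    A minimal sketch of what that "homogeneous coordinate" usually amounts to, assuming a standard pinhole projection; the Vertex class and the focal length d below are illustrative and not taken from the rasteriser in question. Under this common convention, W is simply the view-space depth Z, and the rasteriser interpolates u/w, v/w and 1/w linearly in screen space, recovering u and v per pixel by dividing by the interpolated 1/w.

        // Illustrative sketch: per-vertex values a perspective-correct rasteriser
        // typically needs. Assumes a simple pinhole projection with focal length d;
        // the names (Vertex, d) are made up for this example.
        final class Vertex {
            double x, y, z;   // view-space position
            double u, v;      // texture coordinates in [0, 1]

            // Screen-space projection plus the "homogeneous coordinate" w.
            // With this convention w is just the view-space depth z; the rasteriser
            // interpolates u/w, v/w and 1/w linearly in screen space, then recovers
            // u = (u/w) / (1/w) per pixel.
            double[] project(double d) {
                double w = z;              // homogeneous coordinate
                double sx = x * d / z;     // projected screen x
                double sy = y * d / z;     // projected screen y
                return new double[] { sx, sy, u / w, v / w, 1.0 / w };
            }
        }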

    Read the article

  • Dynamic/Adaptive RLE

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Worst case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
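    A rough illustration of how cheap a per-row re-encode is (plain Java, not tied to the HashMap layout above): re-encoding one 128-tile row after an edit touches at most 128 entries, which should be negligible even inside a map editor.

        import java.util.ArrayList;
        import java.util.List;

        // Minimal sketch: RLE for one row of tile ids. Re-encoding a row after an
        // edit is O(row length), which for 128 tiles is negligible per edit.
        final class RleRow {
            // Runs stored as (tileId, runLength) pairs.
            static List<int[]> encode(int[] row) {
                List<int[]> runs = new ArrayList<>();
                int i = 0;
                while (i < row.length) {
                    int id = row[i], len = 1;
                    while (i + len < row.length && row[i + len] == id) len++;
                    runs.add(new int[] { id, len });
                    i += len;
                }
                return runs;
            }

            static int[] decode(List<int[]> runs, int length) {
                int[] row = new int[length];
                int pos = 0;
                for (int[] run : runs)
                    for (int k = 0; k < run[1]; k++) row[pos++] = run[0];
                return row;
            }

            // One possible edit strategy: decode, change, re-encode the single row.
            static List<int[]> setTile(List<int[]> runs, int length, int x, int newId) {
                int[] row = decode(runs, length);
                row[x] = newId;
                return encode(row);
            }
        }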

    Read the article

  • Monogame - Input sequence game (Scripting?)

    - by user2662567
    I'm starting to program my very first game: a clone of DDR/Stepmania done for research purposes and learning. At this early stage I have most of the UI/music/input work done, but what I still can't grasp is scripting. I've read about Lua, and that you shouldn't use it with XNA/MonoGame as C# is capable enough, but I can't see the utility of it. Given the needs of my game, what would be the ideal way to implement the input sequences it needs? I thought of XML/JSON, say for Stage 1: <game> <level id="1"> <step id="1" key="up" time="00:00:01"/> <step id="2" key="left" time="00:00:02"/> </level> </game> Is that a correct implementation, or are there better ways with more benefits?
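    A data-driven layout like that XML maps onto a very small in-memory model regardless of whether it is loaded from XML or JSON. A minimal sketch with hypothetical class names, including the timing window a rhythm game typically uses to judge a key press:

        import java.util.List;

        // Sketch of the data a step chart boils down to. Names are illustrative.
        final class Step {
            final String key;        // e.g. "up", "left"
            final double timeSec;    // when the step should be hit
            Step(String key, double timeSec) { this.key = key; this.timeSec = timeSec; }
        }

        final class Level {
            final List<Step> steps;
            // e.g. new Level(List.of(new Step("up", 1.0), new Step("left", 2.0)))
            Level(List<Step> steps) { this.steps = steps; }

            // A press counts as a hit if it matches the key and lands within the
            // timing window around the step's target time.
            boolean isHit(Step step, String pressedKey, double pressTimeSec, double windowSec) {
                return step.key.equals(pressedKey)
                    && Math.abs(pressTimeSec - step.timeSec) <= windowSec;
            }
        }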

    Read the article

  • How can I test if an oriented rectangle contains another oriented rectangle?

    - by gronzzz
    I have the following situation: To detect whether the red rectangle is inside the orange area I use this function: - (BOOL)isTile:(CGPoint)tile insideCustomAreaMin:(CGPoint)min max:(CGPoint)max { if ((tile.x < min.x) || (tile.x > max.x) || (tile.y < min.y) || (tile.y > max.y)) { NSLog(@" Object is out of custom area! "); return NO; } return YES; } But what if I need to detect whether the red tile is inside of the blue rectangle? I wrote this function which uses the world position: - (BOOL)isTileInsidePlayableArea:(CGPoint)tile { // get world positions from tiles CGPoint rt = [[CoordinateFunctions shared] worldFromTile:ccp(24, 0)]; CGPoint lb = [[CoordinateFunctions shared] worldFromTile:ccp(24, 48)]; CGPoint worldTile = [[CoordinateFunctions shared] worldFromTile:tile]; return [self isTile:worldTile insideCustomAreaMin:ccp(lb.x, lb.y) max:ccp(rt.x, rt.y)]; } How could I do this without converting to the global position of the tiles?
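    One way to avoid converting everything to world space is to express the point in the oriented rectangle's own frame: project the offset from one corner onto the two edge vectors and check that both parameters fall in [0, 1]. A language-agnostic sketch (written in Java here, with hypothetical names); to test the whole red tile, run it on each of the tile's four corners:

        // Sketch: test whether a point lies inside an oriented rectangle by
        // projecting it onto the rectangle's edge axes. corner is one corner,
        // edgeU and edgeV are the two edge vectors leaving that corner.
        final class OrientedRect {
            final double cx, cy;   // corner
            final double ux, uy;   // edge U vector
            final double vx, vy;   // edge V vector

            OrientedRect(double cx, double cy, double ux, double uy, double vx, double vy) {
                this.cx = cx; this.cy = cy; this.ux = ux; this.uy = uy; this.vx = vx; this.vy = vy;
            }

            boolean contains(double px, double py) {
                double dx = px - cx, dy = py - cy;
                double du = (dx * ux + dy * uy) / (ux * ux + uy * uy); // 0..1 along U if inside
                double dv = (dx * vx + dy * vy) / (vx * vx + vy * vy); // 0..1 along V if inside
                return du >= 0 && du <= 1 && dv >= 0 && dv <= 1;
            }
        }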

    Read the article

  • Only draw visible objects to the camera in 2D

    - by Deukalion
    I have a Map; each map has an array of Ground, and each Ground consists of an array of VertexPositionTexture and a texture name reference, so it renders a texture at these points (as a shape through triangulation). Now when I render my map I only want to get a list of the objects that are visible to the camera (so I won't loop through more than I have to). Structs: public struct Map { public Ground[] Ground { get; set; } } public struct Ground { public int[] Indexes { get; set; } public VertexPositionNormalTexture[] Points { get; set; } public Vector3 TopLeft { get; set; } public Vector3 TopRight { get; set; } public Vector3 BottomLeft { get; set; } public Vector3 BottomRight { get; set; } } public struct RenderBoundaries<T> { public BoundingBox Box; public T Items; } When I load a map: foreach (Ground ground in CurrentMap.Ground) { Boundaries.Add(new RenderBoundaries<Ground>() { Box = BoundingBox.CreateFromPoints(new Vector3[] { ground.TopLeft, ground.TopRight, ground.BottomLeft, ground.BottomRight }), Items = ground }); } TopLeft, TopRight, BottomLeft, BottomRight are simply the locations of the shape's four corners (a rectangle). When I try to loop through only the objects that are visible I do this in my Draw method: public int Draw(GraphicsDevice device, ICamera camera) { BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection); // Visible count int count = 0; EffectTexture.World = camera.World; EffectTexture.View = camera.View; EffectTexture.Projection = camera.Projection; foreach (EffectPass pass in EffectTexture.CurrentTechnique.Passes) { pass.Apply(); foreach (RenderBoundaries<Ground> render in Boundaries.Where(m => frustum.Contains(m.Box) != ContainmentType.Disjoint)) { // Draw ground count++; } } return count; } When I try adding just one ground and then moving the camera so the ground is out of frame, it still returns 1, which means it still gets drawn even though it's not within the camera's view. Am I doing something wrong, or could it be because of my camera? Any ideas why it doesn't work?
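    Independent of XNA's BoundingFrustum, the pattern being aimed for is a broad-phase cull: precompute bounds per ground piece and skip anything whose bounds do not overlap the camera's visible region. A simplified 2D sketch with illustrative names, not the XNA API:

        import java.util.ArrayList;
        import java.util.List;

        // Sketch of broad-phase culling against the camera's visible rectangle.
        final class Aabb {
            double minX, minY, maxX, maxY;
            Aabb(double minX, double minY, double maxX, double maxY) {
                this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
            }
            boolean overlaps(Aabb o) {
                return minX <= o.maxX && maxX >= o.minX && minY <= o.maxY && maxY >= o.minY;
            }
        }

        final class Culler {
            // Returns only the bounds that intersect the camera's view rectangle,
            // so the draw loop never touches off-screen pieces.
            static List<Aabb> visible(List<Aabb> all, Aabb cameraView) {
                List<Aabb> result = new ArrayList<>();
                for (Aabb b : all)
                    if (b.overlaps(cameraView)) result.add(b);
                return result;
            }
        }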

    Read the article

  • Smartphone apps: music, licenses and fees... a nightmare

    - by mm24
    I have recently asked a question about music in games like Guitar Hero. I have found that in Europe (at least), if I want to use a track composed by a musician who is a member of a royalty collecting society, I need to pay a flat fee to the society and not only to the member. So a "one-to-one" agreement is not valid, and the society can come to me and ask for money for each download, even if the app is FREE! This is the fee sheet of the UK agency (for the fee, see "Permanent download services"). It is about 1,200 GBP for fewer than 22,000 copies; they DON'T specify anything more, and they told me on the phone that I need to wait and see how many downloads I get before knowing the price. This is kind of crazy, as if I give away the app for free I will still have to PAY 1,200 GBP! I am shocked and I feel very bad. One agency suggested I use a fake artist name, but that is not fair to my collaborators: what they hope is that the app gets lots of downloads so that other people get to know about them and hopefully commission more work. The other solution is to work only with non-registered musicians. My question to you is: has anyone found a legal way to use music from registered authors in a game?

    Read the article

  • 3D texture coordinates for a cube

    - by Roshan
    I want to use glTexImage3D with a cube. What should the texture coordinates be? I am using GL_TEXTURE_3D as the target. I tried UV coordinates the same as 2D texture coordinates, with the Z component running from 0 to depth for each face, but that goes wrong. How do I apply each layer to each face of the cube with target = GL_TEXTURE_3D? Let's assume I have 8 layers of 2D images in my 3D texture. I want all 8 layers to apply to each face of the cube, not 1 layer on 1 face of the cube.

    Read the article

  • Organising levels / rooms in a MUD-style text based world

    - by Polynomial
    I'm thinking of writing a small text-based adventure game, but I'm not particularly sure how I should design the world from a technical standpoint. My first thought is to do it in XML, designed something like the following. Apologies for the huge pile of XML, but I felt it important to fully explain what I'm doing. <level> <start> <!-- start in kitchen with empty inventory --> <room>Kitchen</room> <inventory></inventory> </start> <rooms> <room> <name>Kitchen</name> <description>A small kitchen that looks like it hasn't been used in a while. It has a table in the middle, and there are some cupboards. There is a door to the north, which leads to the garden.</description> <!-- IDs of the objects the room contains --> <objects> <object>Cupboards</object> <object>Knife</object> <object>Batteries</object> </objects> </room> <room> <name>Garden</name> <description>The garden is wild and full of prickly bushes. To the north there is a path, which leads into the trees. To the south there is a house.</description> <objects> </objects> </room> <room> <name>Woods</name> <description>The woods are quite dark, with little light bleeding in from the garden. It is eerily quiet.</description> <objects> <object>Trees01</object> </objects> </room> </rooms> <doors> <!-- a door isn't necessarily a door. each door has a type, i.e. "There is a <type> leading to..." from and to are references to the rooms that this door joins. direction specifies the direction (N,S,E,W,Up,Down) from <from> to <to> --> <door> <type>door</type> <direction>N</direction> <from>Kitchen</from> <to>Garden</to> </door> <door> <type>path</type> <direction>N</direction> <from>Garden</from> <to>Woods</to> </door> </doors> <variables> <!-- variables set by actions --> <variable name="cupboard_open">0</variable> </variables> <objects> <!-- definitions for objects --> <object> <name>Trees01</name> <displayName>Trees</displayName> <actions> <!-- any actions not defined will show the default failure message --> <action> <command>EXAMINE</command> <message>The trees are tall and thick. There aren't any low branches, so it'd be difficult to climb them.</message> </action> </actions> </object> <object> <name>Cupboards</name> <displayName>Cupboards</displayName> <actions> <action> <!-- requirements make the command only work when they are met --> <requirements> <!-- equivalent of "if(cupboard_open == 1)" --> <require operation="equal" value="1">cupboard_open</require> </requirements> <command>EXAMINE</command> <!-- fail message is the message displayed when the requirements aren't met --> <failMessage>The cupboard is closed.</failMessage> <message>The cupboard contains some batteries.</message> </action> <action> <requirements> <require operation="equal" value="0">cupboard_open</require> </requirements> <command>OPEN</command> <failMessage>The cupboard is already open.</failMessage> <message>You open the cupboard. 
It contains some batteries.</message> <!-- assigns is a list of operations performed on variables when the action succeeds --> <assigns> <assign operation="set" value="1">cupboard_open</assign> </assigns> </action> <action> <requirements> <require operation="equal" value="1">cupboard_open</require> </requirements> <command>CLOSE</command> <failMessage>The cupboard is already closed.</failMessage> <message>You closed the cupboard.</message> <assigns> <assign operation="set" value="0">cupboard_open</assign> </assigns> </action> </actions> </object> <object> <name>Batteries</name> <displayName>Batteries</displayName> <!-- by setting inventory to non-zero, we can put it in our bag --> <inventory>1</inventory> <actions> <action> <requirements> <require operation="equal" value="1">cupboard_open</require> </requirements> <command>GET</command> <!-- failMessage isn't required here, it'll just show the usual "You can't see any <blank>." message --> <message>You picked up the batteries.</message> </action> </actions> </object> </objects> </level> Obviously there'd need to be more to it than this. Interaction with people and enemies as well as death and completion are necessary additions. Since the XML is quite difficult to work with, I'd probably create some sort of world editor. I'd like to know if this method has any drawbacks, and if there's a "better" or more standard way of doing it.
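    Whatever on-disk format is chosen, it usually helps to keep it separate from the runtime model. A minimal sketch (illustrative names only) of the in-memory structures that XML, or an equivalent JSON file, could be deserialized into:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch of an in-memory world model that an XML (or JSON) level file
        // could be loaded into. Illustrative names only.
        final class Room {
            String name;
            String description;
            List<String> objectIds;
        }

        final class Door {
            String type;        // "door", "path", ...
            char direction;     // N, S, E, W
            String from, to;    // room names
        }

        final class World {
            Map<String, Room> rooms = new HashMap<>();
            List<Door> doors;
            Map<String, Integer> variables = new HashMap<>(); // e.g. cupboard_open
        }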

    Read the article

  • Velocity control of the player, why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop: if abs(playerx) < MAXSPEED: if moveLeft: playerx -= 1 if moveRight: playerx += 1 if abs(playery) < MAXSPEED: if moveDown: playery += 1 if moveUp: playery -= 1 if moveLeft == False and abs(playerx) > 0: playerx += 1 if moveRight == False and abs(playerx) > 0: playerx -= 1 if moveUp == False and abs(playery) > 0: playery += 1 if moveDown == False and abs(playery) > 0: playery -= 1 player.x += playerx player.y += playery if player.left < 0 or player.right > 1000: player.x -= playerx if player.top < 0 or player.bottom > 600: player.y -= playery The intended result is that while an arrow key is pressed, playerx or playery increments by one at every iteration until it reaches MAXSPEED and stays at MAXSPEED. And that when the player stops pressing that arrow key, his speed decreases until it reaches 0. To me, this code explicitly says that... But what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED and continues moving even after the player stops pressing the arrow key. I keep rereading but I'm completely baffled by this weird behavior. Any insights? Thanks.
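    For comparison, one common way to express the intended behaviour is to clamp the velocity explicitly and apply deceleration only when no key on that axis is held. A rough per-axis sketch in Java; this is not a diagnosis of the code above, just the target behaviour written out:

        // Sketch: accelerate toward MAX_SPEED while a key is held, otherwise
        // decay toward zero, and clamp so the speed can never overshoot.
        final class AxisVelocity {
            static final int MAX_SPEED = 5;

            static int update(int velocity, boolean negativeHeld, boolean positiveHeld) {
                if (negativeHeld) velocity -= 1;
                if (positiveHeld) velocity += 1;
                if (!negativeHeld && !positiveHeld) {
                    // decelerate toward zero only when no input on this axis
                    if (velocity > 0) velocity -= 1;
                    else if (velocity < 0) velocity += 1;
                }
                // clamp after applying input, so the cap always holds
                return Math.max(-MAX_SPEED, Math.min(MAX_SPEED, velocity));
            }
        }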

    Read the article

  • How to position a sprite in a 2D animation skeleton?

    - by Paul Manta
    Given two joints that define a bone, I would like to know how to decide where, between those two joints, I should draw the sprite. This should be a fairly simple thing to solve, but there is one thing that I am not sure about. After I've determined the rotation of the sprite (which is the absolute angle the joints form with the x-axis), I also need to determine the origin point from where I need to start drawing the transformed image. So how should I position the sprite between the two joints? Should I make the center of the image be the midpoint between the two joints, or should I make one of the joints the origin? Do these things matter that much (could the wrong positioning make the sprite move oddly during the animation)?
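    Either convention can work as long as the sprite art matches it; a common choice is to anchor the sprite at the parent joint and rotate about that anchor, since that keeps the joint fixed while the bone swings. A small sketch of the numbers involved (illustrative names):

        // Sketch: place a bone sprite between two joints. The sprite is anchored
        // at joint A, rotated by the bone angle, and stretched to the bone length.
        final class BoneSprite {
            static double[] placement(double ax, double ay, double bx, double by) {
                double dx = bx - ax, dy = by - ay;
                double angle = Math.atan2(dy, dx);      // rotation of the sprite
                double length = Math.hypot(dx, dy);     // how far to stretch it
                // Draw with the origin at (ax, ay); alternatively use the midpoint
                // ((ax+bx)/2, (ay+by)/2) and center the sprite on it.
                return new double[] { ax, ay, angle, length };
            }
        }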

    Read the article

  • How to optimize collision detection

    - by Niklas
    I am developing a 2D Java game with LibGDX. This is what it kinda looks like (simplified): the big black circle is the player, which you can move by tilting the smartphone. The red circles and blue rectangles are enemies, which move from the right of the screen to the left. The player has to avoid crashing into them. Right now, in the game loop, I check every enemy against the player to see whether they collide. This seems kinda inefficient to me, but I don't know how to improve it. I have tried the Quadtree approach, but it did not really work: the player could easily glitch through enemies and the collision was not detected. Unfortunately, I have destroyed that Quadtree implementation. I used this tutorial/blog as my Quadtree implementation: http://gamedevelopment.tutsplus.com/tutorials/quick-tip-use-quadtrees-to-detect-likely-collisions-in-2d-space--gamedev-374.
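    An alternative broad phase that is often simpler to get right than a quadtree is a uniform grid: bucket each enemy by the cell containing it and test the player only against enemies in nearby cells. A rough sketch, not LibGDX-specific:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch of a uniform-grid broad phase. Each enemy is bucketed by the cell
        // containing its centre; the player only checks the surrounding cells.
        final class SpatialGrid<T> {
            private final double cellSize;
            private final Map<Long, List<T>> cells = new HashMap<>();

            SpatialGrid(double cellSize) { this.cellSize = cellSize; }

            private long key(double x, double y) {
                long cx = (long) Math.floor(x / cellSize);
                long cy = (long) Math.floor(y / cellSize);
                return (cx << 32) ^ (cy & 0xffffffffL);
            }

            void insert(double x, double y, T item) {
                cells.computeIfAbsent(key(x, y), k -> new ArrayList<>()).add(item);
            }

            // Candidates in the 3x3 block of cells around (x, y); the exact
            // circle/rectangle tests then run only on these few enemies.
            List<T> nearby(double x, double y) {
                List<T> result = new ArrayList<>();
                for (int dx = -1; dx <= 1; dx++)
                    for (int dy = -1; dy <= 1; dy++) {
                        List<T> cell = cells.get(key(x + dx * cellSize, y + dy * cellSize));
                        if (cell != null) result.addAll(cell);
                    }
                return result;
            }
        }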

    Read the article

  • Dynamic Environment Creation

    - by Jack
    I was wondering (on a more small-scale, abstract level) how one creates a dynamic environment a la Minecraft. Specifically, if I think of the world as a 3-dimensional array of block objects, how are large features such as oceans created? The language isn't important, as I'm thinking on a conceptual level, but if it helps, I use C# or C++. Thanks for any help!
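    One common conceptual approach is to generate a 2D heightmap from smooth noise, fill the block array column by column, and treat everything below a sea-level threshold as water, so oceans fall out of the low regions automatically. A very small sketch; the layered sine waves below are a deliberately crude stand-in for real Perlin or simplex noise:

        // Sketch: fill a 3D block array from a heightmap. The "noise" here is just
        // a few layered sine waves standing in for real Perlin/simplex noise.
        final class TerrainSketch {
            static final int SIZE = 128, HEIGHT = 64, SEA_LEVEL = 24;
            static final int AIR = 0, WATER = 1, STONE = 2;

            static int[][][] generate() {
                int[][][] blocks = new int[SIZE][HEIGHT][SIZE];
                for (int x = 0; x < SIZE; x++) {
                    for (int z = 0; z < SIZE; z++) {
                        // Smoothly varying ground height; oceans appear wherever it
                        // dips below SEA_LEVEL.
                        double h = 28
                                + 10 * Math.sin(x * 0.05) * Math.cos(z * 0.07)
                                + 4 * Math.sin(x * 0.21 + z * 0.17);
                        int ground = (int) h;
                        for (int y = 0; y < HEIGHT; y++) {
                            if (y <= ground) blocks[x][y][z] = STONE;
                            else if (y <= SEA_LEVEL) blocks[x][y][z] = WATER;
                            else blocks[x][y][z] = AIR;
                        }
                    }
                }
                return blocks;
            }
        }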

    Read the article

  • How do you turn a cube into a sphere?

    - by Tom Dalling
    I'm trying to make a quad sphere based on an article, which shows results like this: I can generate a cube correctly: But when I convert all the points according to this formula (from the page linked above): x = x * sqrtf(1.0 - (y*y/2.0) - (z*z/2.0) + (y*y*z*z/3.0)); y = y * sqrtf(1.0 - (z*z/2.0) - (x*x/2.0) + (z*z*x*x/3.0)); z = z * sqrtf(1.0 - (x*x/2.0) - (y*y/2.0) + (x*x*y*y/3.0)); My sphere looks like this: As you can see, the edges of the cube still poke out too far. The cube ranges from -1 to +1 on all axes, like the article says. Any ideas what is wrong?
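    For reference, here is that mapping transcribed directly (plus the simpler alternative of normalizing each cube point, which also lands every vertex on the unit sphere). Note that all three outputs are computed from the original x, y, z rather than from components overwritten in place, which is one thing worth checking in the conversion loop:

        // Sketch: the cube-to-sphere mapping from the question, applied to a point
        // on the unit cube ([-1, 1] on each axis). Normalizing the vector is a
        // simpler alternative that also puts every point on the unit sphere.
        final class CubeToSphere {
            static double[] map(double x, double y, double z) {
                double sx = x * Math.sqrt(1.0 - (y * y / 2.0) - (z * z / 2.0) + (y * y * z * z / 3.0));
                double sy = y * Math.sqrt(1.0 - (z * z / 2.0) - (x * x / 2.0) + (z * z * x * x / 3.0));
                double sz = z * Math.sqrt(1.0 - (x * x / 2.0) - (y * y / 2.0) + (x * x * y * y / 3.0));
                return new double[] { sx, sy, sz };
            }

            static double[] normalize(double x, double y, double z) {
                double len = Math.sqrt(x * x + y * y + z * z);
                return new double[] { x / len, y / len, z / len };
            }
        }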

    Read the article

  • Resources for 2D rendering using OpenGL?

    - by nightcracker
    I noticed that there is quite a difference between 3D and 2D rendering using OpenGL; the techniques are different - pixel-perfect placement is a lot more desirable, among other things. Are there any good (complete) references on using OpenGL for rendering 2D graphics? There are quite a few "tutorials" around on the net that help you open a window, set up a half-decent environment and draw a sprite, but no really good information on rotation, blending, lighting, drawing order, using the z-buffer, particles, "complex" primitives (circles, stars, cross symbols), ensuring pixel-perfect rendering, instancing and many other staple 2D effects/techniques. Any books, great blogs, anything? Any particularly awesome libraries to read?

    Read the article

  • Using Shader causes triangle to disappear

    - by invisal
    The following is my rendering code. Private Sub GameRender() GL.Clear(ClearBufferMask.ColorBufferBit + ClearBufferMask.DepthBufferBit) GL.ClearColor(Color.SkyBlue) GL.UseProgram(theProgram) GL.EnableClientState(ArrayCap.VertexArray) GL.EnableClientState(ArrayCap.ColorArray) GL.BindBuffer(BufferTarget.ArrayBuffer, vertexPositionID) GL.DrawArrays(BeginMode.Triangles, 0, 3) GL.DisableClientState(ArrayCap.ColorArray) GL.DisableClientState(ArrayCap.VertexArray) GlControl1.SwapBuffers() End Sub This is a screenshot without GL.UseProgram(theProgram). This is a screenshot with GL.UseProgram(theProgram). Here is my shader code, which I picked up from an online tutorial. Vertex Shader #version 330 layout(location = 0) in vec4 position; void main() { gl_Position = position; } Fragment Shader #version 330 out vec4 outputColor; void main() { outputColor = vec4(1.0f, 1.0f, 1.0f, 1.0f); } This is my shader creation code. '' Initialize Shader Dim shaderList(1) As Integer shaderList(0) = CreateShader(ShaderType.VertexShader, strVertexShader) shaderList(1) = CreateShader(ShaderType.FragmentShader, strFragShader) theProgram = CreateProgram(shaderList) GL.DeleteShader(shaderList(0)) GL.DeleteShader(shaderList(1)) Here are my helper functions: Private Function CreateShader(ByVal shaderType As ShaderType, ByVal code As String) Dim shader As Integer = GL.CreateShader(shaderType) GL.ShaderSource(shader, code) GL.CompileShader(shader) Dim status As Integer GL.GetShader(shader, ShaderParameter.CompileStatus, status) If status = False Then MsgBox(GL.GetShaderInfoLog(shader)) End If Return shader End Function Private Function CreateProgram(ByVal shaderList() As Integer) As Integer Dim program As Integer = GL.CreateProgram() For i As Integer = 0 To shaderList.Length - 1 GL.AttachShader(program, shaderList(i)) Next GL.LinkProgram(program) Dim status As Integer GL.GetProgram(program, ProgramParameter.LinkStatus, status) If status = False Then MsgBox(GL.GetProgramInfoLog(program)) End If For i As Integer = 0 To shaderList.Length - 1 GL.DetachShader(program, shaderList(i)) Next Return program End Function

    Read the article

  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, then Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oopsie, forgot my codeses. Here's my raw Draw call. D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData))); D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1)); D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3))); D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size())); D3DCALL(device->SetIndices(LineIndices)); PerInstanceData* data; std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end()); D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData), reinterpret_cast<void**>(&data), D3DLOCK_DISCARD)); std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) { data->Color = D3DXColor(ptr->Colour); D3DXMATRIXA16 Translate, Scale, Rotate; D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z); D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1); D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation)); data->World = Scale * Rotate * Translate; }); D3DCALL(PerBoneBuffer->Unlock()); D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1)); Here's my vertex declaration: D3DVERTEXELEMENT9 BasicMeshVertices[] = { {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0}, {1, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0}, {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1}, {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2}, {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3}, {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0}, D3DDECL_END() }; The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.

    Read the article

  • Greiner-Hormann clipping problem

    - by Belgin
    I have a set of planar polygons in 3D space defined by their vertices in counterclockwise order. Let's define the 'positive face' as the face of the 3D polygon such that, when observed, the vertices appear in counterclockwise order, and the 'negative face' as the face which, when observed, shows the vertices in clockwise order. I'm doing perspective projection of the set of polygons onto a projection polygon defined by the points in this order: (0, h, 0), (0, 0, 0), (w, 0, 0), and (w, h, 0), where w and h are strictly positive integers. The positive face of this projection polygon is oriented towards positive Z, and the camera point is somewhere at (0, 0, d), where d is a strictly negative number. In order to 'clip' the projected polygons into the projection polygon, I'm applying the Greiner-Hormann (PDF) clipping algorithm, which requires that the clipper and the to-be-clipped polygons be in the same order (i.e. clockwise or counterclockwise). My question is the following: How can I determine whether the projected face of the 3D polygon is the negative or the positive one? Meaning, how do I find out if I have to work with the vertices in normal or inverted order for the algorithm to work? I noticed that the two are in the same order (counterclockwise) only when the 3D polygon faces the projection polygon with its negative face; otherwise, a modification needs to be made. Here is a picture (PNG) that illustrates this. Note that the planes described by the polygon from the set and the projection polygon may not always be parallel.
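    One standard way to decide this after projection is the signed area (shoelace) test on the projected vertices: with y pointing up, a positive area means the projected vertices run counterclockwise and a negative area means clockwise (the signs flip if y points down), so the vertex order can be reversed whenever it does not match the clipper. A small sketch:

        // Sketch: signed area (shoelace formula) of a projected 2D polygon.
        // area > 0 means counterclockwise on screen (y up), area < 0 means
        // clockwise; reverse the order when it does not match the clipper.
        final class Winding {
            static double signedArea(double[] xs, double[] ys) {
                double area = 0;
                int n = xs.length;
                for (int i = 0; i < n; i++) {
                    int j = (i + 1) % n;
                    area += xs[i] * ys[j] - xs[j] * ys[i];
                }
                return area / 2.0;
            }

            static boolean isCounterClockwise(double[] xs, double[] ys) {
                return signedArea(xs, ys) > 0;
            }
        }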

    Read the article

  • How to read BC4 texture in GLSL?

    - by Question
    I'm supposed to receive a texture in BC4 format. In OpenGL, I guess this format is called GL_COMPRESSED_RED_RGTC1. The texture is not really a "texture", more like data to handle in the fragment shader. Usually, to get colors from a texture within a fragment shader, I do: uniform sampler2D TextureUnit; void main() { vec4 TexColor = texture2D(TextureUnit, vec2(gl_TexCoord[0])); (...) the result of which is obviously a vec4, for RGBA. But now, I'm supposed to receive a single float from the read. I'm struggling to understand how this is achieved. Should I still use a texture sampler and expect the value to be in a specific position (for example, in TexColor.r?), or should I use something else?

    Read the article

  • Fog shader camera problem

    - by MaT
    I have some difficulties with my vertex-fragment fog shader in Unity. I have a good visual result but the problem is that the gradient is based on the camera's position, it moves as the camera moves. I don't know how to fix it. Here is the shader code. struct v2f { float4 pos : SV_POSITION; float4 grabUV : TEXCOORD0; float2 uv_depth : TEXCOORD1; float4 interpolatedRay : TEXCOORD2; float4 screenPos : TEXCOORD3; }; v2f vert(appdata_base v) { v2f o; o.pos = mul(UNITY_MATRIX_MVP, v.vertex); o.uv_depth = v.texcoord.xy; o.grabUV = ComputeGrabScreenPos(o.pos); half index = v.vertex.z; o.screenPos = ComputeScreenPos(o.pos); o.interpolatedRay = mul(UNITY_MATRIX_MV, v.vertex); return o; } sampler2D _GrabTexture; float4 frag(v2f IN) : COLOR { float3 uv = UNITY_PROJ_COORD(IN.grabUV); float dpth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv)); dpth = LinearEyeDepth(dpth); float4 wsPos = (IN.screenPos + dpth * IN.interpolatedRay); // Here is the problem but how to fix it float fogVert = max(0.0, (wsPos.y - _Depth) * (_DepthScale * 0.1f)); fogVert *= fogVert; fogVert = (exp (-fogVert)); return fogVert; } Thanks a lot !

    Read the article

  • Can't use the hardware scissor any more, should I use the stencil buffer or manually clip sprites?

    - by Alex Ames
    I wrote a simple UI system for my game. There is a clip flag on my widgets that you can use to tell a widget to clip any children that try to draw outside their parent's box (for scrollboxes for example). The clip flag uses glScissor, which is fed an axis aligned rectangle. I just added arbitrary rotation and transformations to my widgets, so I can rotate or scale them however I want. Unfortunately, this breaks the scissor that I was using as now my clip rectangle might not be axis aligned. There are two ways I can think of to fix this: either by using the stencil buffer to define the drawable area, or by having a wrapper function around my sprite drawing function that will adjust the vertices and texture coords of the sprites being drawn based on the clipper on the top of a clipper stack. Of course, there may also be other options I can't think of (something fancy with shaders possibly?). I'm not sure which way to go at the moment. Changing the implementation of my scissor functions to use the stencil buffer probably requires the smallest change, but I'm not sure how much overhead that has compared to the coordinate adjusting or if the performance difference is even worth considering.
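    For scale, here is roughly what the vertex-adjustment route amounts to for an axis-aligned clipper and an unrotated sprite: clamp the quad to the clip rectangle and shrink the texture coordinates proportionally (rotated clippers would need per-edge clipping or the stencil buffer). The names below are illustrative:

        // Sketch: clip an axis-aligned sprite quad against an axis-aligned clip
        // rectangle, adjusting texture coordinates proportionally. Returns null
        // if the sprite lies fully outside the clipper.
        final class SpriteClipper {
            // quad = {x0, y0, x1, y1}, uv = {u0, v0, u1, v1}, clip = {x0, y0, x1, y1}
            static double[][] clip(double[] quad, double[] uv, double[] clip) {
                double nx0 = Math.max(quad[0], clip[0]);
                double ny0 = Math.max(quad[1], clip[1]);
                double nx1 = Math.min(quad[2], clip[2]);
                double ny1 = Math.min(quad[3], clip[3]);
                if (nx0 >= nx1 || ny0 >= ny1) return null;

                double w = quad[2] - quad[0], h = quad[3] - quad[1];
                double uw = uv[2] - uv[0], vh = uv[3] - uv[1];
                double[] newUv = {
                    uv[0] + uw * (nx0 - quad[0]) / w,
                    uv[1] + vh * (ny0 - quad[1]) / h,
                    uv[0] + uw * (nx1 - quad[0]) / w,
                    uv[1] + vh * (ny1 - quad[1]) / h
                };
                return new double[][] { { nx0, ny0, nx1, ny1 }, newUv };
            }
        }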

    Read the article

  • Applying effects to an existing program that uses BasicEffect

    - by Fibericon
    Using the finished product from the tutorial here. Is it possible to apply the grayscale effect from here: Making entire scene fade to grayscale Or would you basically have to rewrite everything? EDIT: It's doing something now, but the whole grayscale seems extremely blue. It's like I'm looking at it through dark blue sunglasses. Here's my draw function: protected override void Draw(GameTime gameTime) { device.SetRenderTarget(renderTarget); graphics.GraphicsDevice.Clear(Color.CornflowerBlue); //Drawing models, bullets, etc. device.SetRenderTarget(null); spriteBatch.Begin(0, BlendState.Additive, SamplerState.PointWrap, DepthStencilState.Default, RasterizerState.CullNone, grayScale); Texture2D temp = (Texture2D)renderTarget; grayScale.Parameters["coloredTexture"].SetValue(temp); grayScale.CurrentTechnique = grayScale.Techniques["Grayscale"]; foreach (EffectPass pass in grayScale.CurrentTechnique.Passes) { pass.Apply(); } spriteBatch.Draw(temp, new Vector2(GraphicsDevice.PresentationParameters.BackBufferWidth/2, GraphicsDevice.PresentationParameters.BackBufferHeight/2), null, Color.White, 0f, new Vector2(renderTarget.Width/2, renderTarget.Height/2), 1.0f, SpriteEffects.None, 0f); spriteBatch.End(); base.Draw(gameTime); } Another edit: figured out what I was doing wrong. I have Blendstate.Additive in the spriteBatch.Draw() call. It should be Blendstate.Opaque, or it literally tries to add the blank blue image to the grayscale image.
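    For reference, the grayscale conversion itself is just a weighted luminance sum; the Rec. 601 weights below are the ones a grayscale pixel shader commonly applies, shown here on the CPU side as a plain sketch rather than as the XNA effect:

        // Sketch: standard Rec. 601 luminance weights for converting a colour to
        // grayscale, the same dot product a grayscale pixel shader usually applies.
        final class Grayscale {
            static int toGray(int r, int g, int b) {
                double y = 0.299 * r + 0.587 * g + 0.114 * b;
                return (int) Math.round(y);
            }
        }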

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask. There are a lot of cases where the wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm: Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based so I create one point for every tile on the grid) - the size of the square's side is close to the radius where I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points total. For every visibility point on the grid, check if that point is in the player's line of sight. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent. This algorithm works (not perfectly, but it only requires some tuning); however, it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D but I'm not looking for an engine-specific solution.
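    One way to cut the cost without changing the approach is to reuse the previous result while the player stays near where it was computed, re-running the ray casts only after the player moves some distance (or after a short interval). An engine-agnostic sketch of that caching; the trade-off is that transparency updates lag slightly behind fast movement:

        import java.util.Set;

        // Sketch: cache the set of transparent walls and only recompute when the
        // player has moved far enough since the last update.
        final class TransparencyCache<T> {
            private double lastX = Double.NaN, lastY = Double.NaN;
            private Set<T> lastResult = java.util.Collections.emptySet();
            private final double recomputeDistance;

            TransparencyCache(double recomputeDistance) { this.recomputeDistance = recomputeDistance; }

            interface Recompute<T> { Set<T> run(double x, double y); }

            Set<T> get(double playerX, double playerY, Recompute<T> raycastPass) {
                double dx = playerX - lastX, dy = playerY - lastY;
                if (Double.isNaN(lastX) || dx * dx + dy * dy > recomputeDistance * recomputeDistance) {
                    lastResult = raycastPass.run(playerX, playerY); // the expensive ray-cast pass
                    lastX = playerX;
                    lastY = playerY;
                }
                return lastResult;
            }
        }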

    Read the article

  • Retrieving model position after applying modeltransforms in XNA

    - by Glen Dekker
    For this method that the goingBeyond XNA tutorial provides, it would be really convenient if I could retrieve the new position of the model after I apply all the transforms to the mesh. I have edited the method a little for what I need. Does anyone know a way I can do this? public void DrawModel( Camera camera ) { Matrix scaleY = Matrix.CreateScale(new Vector3(1, 2, 1)); Matrix temp = Matrix.CreateScale(100f) * scaleY * rotationMatrix * translationMatrix * Matrix.CreateRotationY(MathHelper.Pi / 6) * translationMatrix2; Matrix[] modelTransforms = new Matrix[model.Bones.Count]; model.CopyAbsoluteBoneTransformsTo(modelTransforms); if (camera.getDistanceFromPlayer(position+position1) > 3000) return; foreach (ModelMesh mesh in model.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting(); effect.World = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix; effect.View = camera.viewMatrix; effect.Projection = camera.projectionMatrix; } mesh.Draw(); } }
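    Since the mesh's final placement is entirely determined by the combined matrix, one general approach is to read the translation part of that matrix, which equals the transformed origin. A plain sketch, not the XNA API, assuming XNA's row-vector convention with the translation stored in the fourth row:

        // Sketch: with a row-major, row-vector matrix layout (translation in the
        // fourth row, as in XNA's Matrix), the model's world-space position is the
        // translation part of the combined world matrix, i.e. the image of the
        // origin (0, 0, 0, 1) under that matrix.
        final class ModelPosition {
            static double[] positionFrom(double[][] world) {
                return new double[] { world[3][0], world[3][1], world[3][2] };
            }
        }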

    Read the article

  • Is there any existing (old) game that released graphic as free or open source?

    - by Alexey Petrushin
    I'd like to (re)create an online version (HTML5/JS) of some old game, for example something like HoM&M 2. Maybe some old games were released as free or open source (I'm interested in the graphical assets only)? I heard something about Red Alert being released as free, but I'm not sure if it's permitted to reuse its graphical assets in such a manner. Do you know of such games? Another question - can you please share your thoughts and a rough estimate of how much it would cost to pay an artist to create graphics similar to HoM&M 2?

    Read the article

  • Game programming course materials: What should it include?

    - by Esa
    I am tasked with creating the course materials for a game programming class, and I'd like your opinion on which aspects and areas of game programming, such as game state management, game object storage or simple AI, I should include in it. The course is intended to be the first step into game programming for students with novice programming skills. There will be mathematics as well, but I found that there are already multiple questions, with good answers, on that subject.

    Read the article
