Search Results

  • Problems in exporting terrain from Autodesk 3ds

    - by Jatin Kumar
    I am trying to make a small Counter-Strike sort of game, and for the terrain part I have exported the terrain in 3DS format from Autodesk 3ds Max and imported it into OpenGL using lib3ds. It's working fine, but with a few problems: The terrain is mainly made up of some cuboid boxes with textures on them, placed on a big flat surface with a boundary wall. In OpenGL I have enabled anti-aliasing, but there is still too much aliasing on the boundaries (visible when rotating the camera). I have tiled the floor with an image, but in OpenGL it is just the single image stretched over the complete surface. I have exported an animated model (skeleton + mesh + material + animation) from 3ds Max and used the cal3d library for reading it. The model also has a gun, which is not appearing in OpenGL, and it too has too much of an aliasing problem. I have googled around but couldn't find any relevant solutions. Thanks in advance.
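    On the stretched floor: the usual cause is texture coordinates that stay in the 0-1 range combined with clamped wrapping, so the image is stretched exactly once across the surface. A minimal fixed-function OpenGL sketch of tiling, assuming a hypothetical floorTexture and an arbitrary repeat factor of 8:

        // Enable repeat wrapping so coordinates outside 0..1 tile the image.
        glBindTexture(GL_TEXTURE_2D, floorTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

        // Texture coordinates running to 8.0 repeat the image 8x8 times
        // across the floor instead of stretching it once over it.
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex3f(-100.0f, 0.0f, -100.0f);
            glTexCoord2f(8.0f, 0.0f); glVertex3f( 100.0f, 0.0f, -100.0f);
            glTexCoord2f(8.0f, 8.0f); glVertex3f( 100.0f, 0.0f,  100.0f);
            glTexCoord2f(0.0f, 8.0f); glVertex3f(-100.0f, 0.0f,  100.0f);
        glEnd();

    The boundary aliasing is a separate issue; glEnable(GL_MULTISAMPLE) only helps if the context was actually created with a multisampled framebuffer.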

  • How can I use iteration to lead targets?

    - by e100
    In my 2D game, I have stationary AI turrets firing constant-speed bullets at moving targets. So far I have used a quadratic-solver technique to calculate where the turret should aim in advance of the target, which works well (see "Algorithm to shoot at a target in a 3d game" and "Predicting enemy position in order to have an object lead its target"). But it occurs to me that an iterative technique might be more realistic (e.g. it should fire even when there is no exact solution), efficient and tunable - for example one could change the number of iterations to improve accuracy. I thought I could calculate the current range and thus an initial (inaccurate) bullet flight time to target, then work out where the target would actually be by that time, then recalculate a more accurate range, then recalculate flight time, and so on. I think I am missing something obvious to do with the time term, but my aimpoint calculation does not currently converge after the significant initial correction in the first iteration:

        import math

        def aimpoint(iters, target_x, target_y, target_vel_x, target_vel_y, bullet_speed):
            aimpoint_x = target_x
            aimpoint_y = target_y
            range = math.sqrt(aimpoint_x**2 + aimpoint_y**2)
            time_to_target = range / bullet_speed
            time_delta = time_to_target
            n = 0
            while n <= iters:
                print "iteration:", n, "target:", "(", aimpoint_x, aimpoint_y, ")", "time_delta:", time_delta
                aimpoint_x += target_vel_x * time_delta
                aimpoint_y += target_vel_y * time_delta
                range = math.sqrt(aimpoint_x**2 + aimpoint_y**2)
                new_time_to_target = range / bullet_speed
                time_delta = new_time_to_target - time_to_target
                n += 1

        aimpoint(iters=5, target_x=0, target_y=100, target_vel_x=1, target_vel_y=0, bullet_speed=100)
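    The missing piece is that time_to_target is never updated inside the loop, so after the first pass time_delta is always measured against the original estimate and stops shrinking. A sketch of a converging fixed-point version, recomputing the aim point from the target's start position and the latest flight-time estimate on every pass (same hypothetical inputs as above):

        import math

        def aimpoint(iters, target_x, target_y, target_vel_x, target_vel_y, bullet_speed):
            # Initial flight-time estimate from the current range.
            time_to_target = math.sqrt(target_x**2 + target_y**2) / bullet_speed
            for n in range(iters):
                # Predict the target position for the current estimate,
                # then refine the estimate from the predicted range.
                aim_x = target_x + target_vel_x * time_to_target
                aim_y = target_y + target_vel_y * time_to_target
                time_to_target = math.sqrt(aim_x**2 + aim_y**2) / bullet_speed
                print("iteration:", n, "aim:", (aim_x, aim_y), "time:", time_to_target)
            return aim_x, aim_y

        aimpoint(iters=5, target_x=0, target_y=100, target_vel_x=1, target_vel_y=0, bullet_speed=100)

    Each pass feeds the new flight time straight back in, so the aim point settles once the predicted range stops changing.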

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly some info - I'm using DirectX 11 and C++, and I'm a fairly good programmer, but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. When I was using a simple vertex and pixel shader combination, everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as the texturing. Below is a copy of my hull shader, which does not work correctly because I think the way I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding;
        };

        struct HullInputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
        };

        struct ConstantOutputType
        {
            float edges[3] : SV_TessFactor;
            float inside : SV_InsideTessFactor;
        };

        struct HullOutputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
            float4 depthPosition : TEXCOORD2;
        };

        ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
        {
            ConstantOutputType output;
            output.edges[0] = tessellationAmount;
            output.edges[1] = tessellationAmount;
            output.edges[2] = tessellationAmount;
            output.inside = tessellationAmount;
            return output;
        }

        [domain("tri")]
        [partitioning("integer")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("ColorPatchConstantFunction")]
        HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
        {
            HullOutputType output;
            output.position = patch[pointId].position;
            output.tex = patch[pointId].tex;
            output.tex2 = patch[pointId].tex2;
            output.normal = patch[pointId].normal;
            output.tangent = patch[pointId].tangent;
            output.binormal = patch[pointId].binormal;
            return output;
        }

    Edited to include the domain shader:

        [domain("tri")]
        PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
        {
            float3 vertexPosition;
            PixelInputType output;

            // Determine the position of the new vertex.
            vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;

            output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);
            output.depthPosition = output.position;

            output.tex = patch[0].tex;
            output.tex2 = patch[0].tex2;
            output.normal = patch[0].normal;
            output.tangent = patch[0].tangent;
            output.binormal = patch[0].binormal;
            return output;
        }
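    The solid-colour patches match what the domain shader above actually does: every attribute except position is read from patch[0], so each generated vertex in a patch shares one texture coordinate and one shading basis. A hedged sketch of the usual fix - interpolate every attribute with the barycentric uvwCoord exactly as position is interpolated, and re-normalise the basis vectors:

        // Interpolate all per-vertex attributes with the barycentric
        // coordinates, not just position, so each generated vertex gets
        // its own texture coordinates and shading basis.
        output.tex  = uvwCoord.x * patch[0].tex  + uvwCoord.y * patch[1].tex  + uvwCoord.z * patch[2].tex;
        output.tex2 = uvwCoord.x * patch[0].tex2 + uvwCoord.y * patch[1].tex2 + uvwCoord.z * patch[2].tex2;
        output.normal   = normalize(uvwCoord.x * patch[0].normal   + uvwCoord.y * patch[1].normal   + uvwCoord.z * patch[2].normal);
        output.tangent  = normalize(uvwCoord.x * patch[0].tangent  + uvwCoord.y * patch[1].tangent  + uvwCoord.z * patch[2].tangent);
        output.binormal = normalize(uvwCoord.x * patch[0].binormal + uvwCoord.y * patch[1].binormal + uvwCoord.z * patch[2].binormal);

    The hull shader itself is fine as a pass-through; per-control-point data goes through HullOutputType, and only the tessellation factors live in the patch-constant function.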

  • Using XNA for a 2D isometric game, but wanna move on

    - by Daniel Ribeiro
    I've been building a 2D isometric game (for learning purposes) in C# using XNA. I found it's really easy to manage sprite sheet loading, collision, basic physics and such with the XNA API. The thing is, I want to move on. My real goal is to learn C++ and develop a game in that language. What engine/library would you recommend for continuing in that same 2D isometric direction, using sprite sheets for most of the graphical part of the game?

  • DirectWrite Producing Strange Artifacts?

    - by smoth190
    I've written the basis of my UI system around Direct2D. I like it because it's fast and easy to use (even if I had to do some messy work to get it to work with DirectX 11). However, I notice I'm getting strange problems with my text when using DirectWrite. As you can see, the "e" is a little screwed up, and the text overall looks a little bumpy. This only happens with certain fonts in certain sizes, and with certain arrangements of letters. This particular example is Verdana at size 16.0. Can I fix this? It's pretty annoying to have to change all my words and fonts because of this problem.
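    Two things worth checking (hedged suggestions, since the screenshot isn't reproducible here): the render target's text antialiasing mode, and whether the layout rectangle starts on a whole pixel, since fractional origins smear stems on small glyphs. A sketch, where pos, width, height, m_textFormat and m_brush are hypothetical members of the UI code:

        // Prefer ClearType over grayscale antialiasing for small UI text.
        m_renderTarget->SetTextAntialiasMode(D2D1_TEXT_ANTIALIAS_MODE_CLEARTYPE);

        // Snap the layout rect to integer pixel coordinates.
        D2D1_RECT_F layout = D2D1::RectF(
            floorf(pos.x), floorf(pos.y),
            floorf(pos.x) + width, floorf(pos.y) + height);
        m_renderTarget->DrawText(text, textLength, m_textFormat, layout, m_brush);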

  • Drawing isometric map in canvas / javascript

    - by Dave
    I have a problem with my map design for my tiles. I set the player position, which is meant to be the middle tile that the canvas is looking at. However, the calculation to put the tiles in x:y pixel locations is completely messed up for me and I don't know how to fix it. This is what I tried:

        var offset_x = 0;    //used for scrolling on x
        var offset_y = 0;    //used for scrolling on y
        var prev_mousex = 0; //for movePos function
        var prev_mousey = 0; //for movePos function

        function movePos(e){
            if (prev_mousex === 0 && prev_mousey === 0) {
                prev_mousex = e.pageX;
                prev_mousey = e.pageY;
            }
            offset_x = offset_x + (e.pageX - prev_mousex);
            offset_y = offset_y + (e.pageY - prev_mousey);
            prev_mousex = e.pageX;
            prev_mousey = e.pageY;
            run = true;
        }

        player_posx = 5;
        player_posy = 55;
        ct = 19;

        for (i = (player_posx-ct); i < (player_posx+ct); i++){ //horizontal
            for (j=(player_posy-ct); j < (player_posy+ct); j++){ //vertical
                //img[0] is 64by64 but the graphic is 64by32; the rest is alpha space
                var x = (i-j)*(img[0].height/2) + (canvas.width/2)-(img[0].width/2);
                var y = (i+j)*(img[0].height/4);
                var abposx = x - offset_x;
                var abposy = y - offset_y;
                ctx.drawImage(img[0],abposx,abposy);
            }
        }

    Now, based on these numbers, the first renderable tile is i = 0, j = 36, as negative indices are not in the array. But for i = 0 and j = 36 the position it calculates is -1120 : 592. Does anyone know how to center it in the canvas view properly?
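    The projection above centres map tile (0,0), not the player, because (i-j) and (i+j) use absolute map coordinates; at i = 0, j = 36 that lands far off screen, which matches the -1120 result. A sketch of one fix, projecting coordinates relative to the player so the player's tile lands mid-canvas (same img[0] metrics assumed):

        var tileW = img[0].width;      // 64
        var tileH = img[0].height / 2; // 32, the visible diamond height

        for (var i = player_posx - ct; i < player_posx + ct; i++) {
            for (var j = player_posy - ct; j < player_posy + ct; j++) {
                if (i < 0 || j < 0) continue; // outside the map array
                // Work relative to the player before projecting, so the
                // player's tile is centred on the canvas.
                var ri = i - player_posx, rj = j - player_posy;
                var x = (ri - rj) * tileH + canvas.width / 2 - tileW / 2;
                var y = (ri + rj) * (tileH / 2) + canvas.height / 2 - tileH / 2;
                ctx.drawImage(img[0], x - offset_x, y - offset_y);
            }
        }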

  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light-indexed deferred rendering from here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas clearly, but there was one point that I failed to understand. It is in fact one of the most interesting ones, as it explains how to implement transparency with this approach: "Typically when rendering light volumes in deferred rendering, only surfaces that intersect the light volume are marked and lit. This is generally accomplished by a 'shadow volume like' technique of rendering back faces – incrementing stencil where depth is greater than – then rendering front faces and only accepting when depth is less than and stencil is not zero. By only rendering front faces where depth is less than, all future lookups by fragments in the forward rendering pass will get all possible lights that could hit the fragment." Can anyone explain how exactly you need to render only front faces? Another question: why do you need the front faces at all? Why can't we simply render all the lights and store the ones that overlap at a given pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?
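    For reference, a sketch of how that two-pass stencil marking is commonly written in OpenGL terms (drawLightVolume() is a hypothetical call that draws the light's bounding mesh; the article itself is API-neutral):

        // Pass 1: back faces only. Increment stencil where the volume's
        // back face is behind the scene depth ("depth is greater than").
        glEnable(GL_STENCIL_TEST);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glClear(GL_STENCIL_BUFFER_BIT);
        glCullFace(GL_FRONT);
        glDepthFunc(GL_GREATER);
        glStencilFunc(GL_ALWAYS, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        drawLightVolume();

        // Pass 2: front faces only. Accept where depth is less than and
        // stencil was marked; these are the pixels inside the volume.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glCullFace(GL_BACK);
        glDepthFunc(GL_LESS);
        glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawLightVolume();

    A pixel passes the second pass only if scene geometry lies between the volume's front and back faces, which is what restricts the light-index write to surfaces the light can actually reach.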

  • If I use my own normal values, should I turn off winding order culling?

    - by Phil
    I've discovered that I managed to program a series of boxes with indexed vertices in such a way that every other triangle (half of each face) has a backwards winding order. As a result, XNA is culling half of them. However, my vertex objects contain normal data that I have explicitly set, and I am going to implement my own backface culling shortly to reduce the size of the vertex buffer. Should I turn off winding-order culling and manage it myself, or should I make sure the winding order is consistent and let XNA handle it?
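    For reference, toggling the XNA 4.0 side of that decision is a one-liner either way (a sketch, assuming the standard GraphicsDevice):

        // Default: XNA culls by winding order (counter-clockwise survives).
        GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

        // Alternative: disable winding-order culling and cull yourself.
        GraphicsDevice.RasterizerState = RasterizerState.CullNone;

    Note that normal-based culling on the CPU trades work the GPU does almost for free for per-frame CPU work, so it is worth profiling before committing to it.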

  • Does Unity's "Transparent Bumped Specular" translate to "semi-shiny must be semi-transparent"?

    - by Shivan Dragon
    Unity's documentation for the "Transparent Bumped Specular" shader/material type is simply a concatenation of the descriptions of its Transparent and Specular shaders (and also Bumped, but that doesn't apply to the question):

    Transparent Properties: This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent while 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque. (...)

    Specular Properties: (...) Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called "gloss map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection.

    To me this translates to the following situation: I have a mesh representing a car tire. The texture needs to be very shiny on the rim parts and almost not shiny at all on the rubber parts. Also, since the rim is really complex (with cut-out decorations and such), I will not build that into the mesh, but fake it with transparency in the texture. I can't do all this using Unity's "Transparent Bumped Specular" shader, because the rubber part of the texture will become semi-transparent due to me painting the alpha channel dark grey (because I also want it to be less shiny). Is this correct? If not, how can I make this work?
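    The reading is right: in that built-in shader one alpha channel drives both transparency and gloss, so dark-grey alpha on the rubber dulls the specular and fades the surface at the same time. The usual way out is a small custom surface shader that takes gloss from a second texture and keeps main alpha for transparency only. A hedged sketch in Unity's legacy surface-shader style (property names are made up):

        Shader "Custom/TransparentSpecularGlossMap" {
            Properties {
                _Color ("Main Color", Color) = (1,1,1,1)
                _MainTex ("Base (RGB) Transparency (A)", 2D) = "white" {}
                _GlossMap ("Gloss Map (R)", 2D) = "white" {}
                _Shininess ("Shininess", Range(0.03, 1)) = 0.078125
            }
            SubShader {
                Tags { "Queue"="Transparent" "RenderType"="Transparent" }
                CGPROGRAM
                #pragma surface surf BlinnPhong alpha

                sampler2D _MainTex;
                sampler2D _GlossMap;
                fixed4 _Color;
                half _Shininess;

                struct Input { float2 uv_MainTex; };

                void surf (Input IN, inout SurfaceOutput o) {
                    fixed4 tex = tex2D(_MainTex, IN.uv_MainTex) * _Color;
                    o.Albedo = tex.rgb;
                    // Gloss comes from its own texture...
                    o.Gloss = tex2D(_GlossMap, IN.uv_MainTex).r;
                    o.Specular = _Shininess;
                    // ...so main alpha is free to drive transparency alone.
                    o.Alpha = tex.a;
                }
                ENDCG
            }
            FallBack "Transparent/VertexLit"
        }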

  • JMonkey Engine (JME) load Blender scene with textures?

    - by leigero
    I am having the hardest time trying to accomplish the simplest task. I have created a floor and four walls (not that complicated) in Blender. I added a basic material and a cloud texture so they have something to look at other than grey. When I import them into JMonkey, they show up as solid white objects with no shading or depth: white silhouettes. I thought this might be a lighting issue, but I have an ambient light added to the scene; I can remove that light or adjust its intensity and it has no effect on the scene. I exported all Blender files into OgreXML format, then converted them to .j3o format in JMonkey. I renamed the textures to match their corresponding mesh, and this didn't do anything. Does anybody know how to create a flat object and put it into JMonkey with a texture? This sounds simple, and there is absolutely no information on this. This should be step 1!
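    Solid white usually means the geometry is drawn with a lighting material that is not receiving a light it responds to; in jME3, Ogre-imported materials typically need a directional light, not just ambient. A hedged sketch of the usual minimal setup (the asset path is hypothetical):

        // Load the converted scene and attach it.
        Spatial scene = assetManager.loadModel("Scenes/room.j3o");
        rootNode.attachChild(scene);

        // A directional light gives lighting materials their shading...
        DirectionalLight sun = new DirectionalLight();
        sun.setDirection(new Vector3f(-1f, -2f, -3f).normalizeLocal());
        sun.setColor(ColorRGBA.White);
        rootNode.addLight(sun);

        // ...and a little ambient keeps unlit faces from going black.
        AmbientLight ambient = new AmbientLight();
        ambient.setColor(ColorRGBA.White.mult(0.3f));
        rootNode.addLight(ambient);

    If the faces light up but stay untextured, the next thing to check is whether the OgreXML .material file sat next to the mesh when it was converted, since the texture reference travels through it.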

  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE - Code updated below, but I still need help adjusting my math. I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated right 30 degrees, when it moves forwards it should travel along the 30-degree angle on a new axis). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube:

        glm::vec3 vel; //velocity vector

        void renderMovingCube(){
            glUseProgram(movingCubeShader.handle());
            GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

            vel.x = cos(rotX);
            vel.y = sin(rotX);
            vel *= moveCube; //move cube
            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos*vel);

            //bring ground and cube to bottom of screen
            ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0,-48,0));
            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0,1,0)); //manually turn

            //pass matrix to shader
            glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]);
            movingCube.render(); //draw
            glUseProgram(0);
        }

    Keyboard input:

        void keyboard()
        {
            char BACKWARD = keys['S'];
            char FORWARD = keys['W'];
            char ROT_LEFT = keys['A'];
            char ROT_RIGHT = keys['D'];

            if (FORWARD) //W - move forwards
            {
                globalPos += vel;
                //globalPos.z -= moveCube;
                BACKWARD = false;
            }
            if (BACKWARD) //S - move backwards
            {
                globalPos.z += moveCube;
                FORWARD = false;
            }
            if (ROT_LEFT) //A - turn left
            {
                rotX += 0.01f;
                ROT_LEFT = false;
            }
            if (ROT_RIGHT) //D - turn right
            {
                rotX -= 0.01f;
                ROT_RIGHT = false;
            }
        }

    Where am I going wrong with my vectors? I would like to change the direction of the cube (which it does), but then move forwards in that direction.
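    Two things stand out in the math above (offered as a sketch, since the variable roles are inferred): the cube turns about the Y axis, so its heading lives in the XZ plane, not XY; and globalPos*vel multiplies position by direction componentwise, where what is wanted is to accumulate displacement into the position. Something like:

        // Heading for a yaw of rotX about the Y axis. This assumes the
        // cube faces +Z at rotX == 0; flip signs if it faces -Z.
        glm::vec3 forward(sin(rotX), 0.0f, cos(rotX));

        // On W / S, accumulate displacement along the heading.
        if (FORWARD)  globalPos += forward * moveCube;
        if (BACKWARD) globalPos -= forward * moveCube;

        // In renderMovingCube(), translate by the accumulated position only:
        ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos);

    The key idea is that position integrates velocity over time, so the per-frame step is position += heading * speed, never position * heading.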

  • Issues with touch buttons in XNA (Release state to be precise)

    - by Aditya
    I am trying to make touch buttons in WP8 with all the states (Pressed, Released, Moved), but TouchLocationState.Released is not working. Here's my code. Class variables:

        bool touching = false;
        int touchID;
        Button tempButton;

    Button is a separate class with a method to switch states when touched. The Update method contains the following code:

        TouchCollection touchCollection = TouchPanel.GetState();
        if (!touching && touchCollection.Count > 0)
        {
            touching = true;
            foreach (TouchLocation location in touchCollection)
            {
                for (int i = 0; i < menuButtons.Count; i++)
                {
                    touchID = location.Id; // store the ID of the current touch
                    Point touchLocation = new Point((int)location.Position.X, (int)location.Position.Y);
                    Button button = menuButtons[i];
                    if (GetMenuEntryHitBounds(button).Contains(touchLocation)) // a method which returns a rectangle
                    {
                        button.SwitchState(true); // change the button state
                        tempButton = button;      // store the pressed button for later
                    }
                }
            }
        }
        else if (touchCollection.Count == 0) // clear all buttons if no touch is detected
        {
            touching = false;
            for (int i = 0; i < menuButtons.Count; i++)
            {
                Button button = menuButtons[i];
                button.SwitchState(false);
            }
        }

    menuButtons is a list of buttons on the menu. A separate block (within the Update method) runs once the touching variable is true:

        if (touching)
        {
            TouchLocation location;
            TouchLocation prevLocation;
            if (touchCollection.FindById(touchID, out location))
            {
                if (location.TryGetPreviousLocation(out prevLocation))
                {
                    Point point = new Point((int)location.Position.X, (int)location.Position.Y);
                    if (prevLocation.State == TouchLocationState.Pressed && location.State == TouchLocationState.Released)
                    {
                        if (GetMenuEntryHitBounds(tempButton).Contains(point))
                        {
                            // Execute the button action. (I removed the excess.)
                        }
                    }
                }
            }
        }

    The code for switching the button state is working fine, but the code where I want to trigger the action is not. location.State == TouchLocationState.Released mostly ends up being false (even after I release the touch, it has a value of TouchLocationState.Moved). And what is more irritating is that it sometimes works! I am really confused and have been stuck for days now. Is this the right way? If yes, where am I going wrong? Or is there some other, more effective way to do this? PS: I also posted this question on Stack Overflow, then realized it is more appropriate here on the game development site. Sorry if it counts as being redundant.
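    One pattern that sidesteps the unreliable Released state (a sketch, untested on WP8 hardware) is to treat "the tracked touch ID is no longer present in this frame's TouchCollection" as the release event, remembering the last known position from the previous frame:

        // Extra field: last frame's position for the tracked touch.
        Vector2 lastTouchPosition;

        // In Update(), after TouchPanel.GetState():
        TouchLocation tracked;
        if (touching && !touchCollection.FindById(touchID, out tracked))
        {
            // The finger lifted: the ID vanished from the collection even
            // though no location ever reported Released.
            Point point = new Point((int)lastTouchPosition.X, (int)lastTouchPosition.Y);
            if (tempButton != null && GetMenuEntryHitBounds(tempButton).Contains(point))
            {
                // Execute the button action here.
            }
            touching = false;
        }
        else if (touching)
        {
            lastTouchPosition = tracked.Position; // keep tracking while held
        }

    This makes the release logic independent of whether the platform ever delivers a Released sample for that touch.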

  • Unity: parallel vectors and cross product, how to compare vectors

    - by Heisenbug
    I read a post explaining a method to determine whether the angle between two given vectors, relative to the normal of the plane described by them, is clockwise or anticlockwise:

        public static AngleDir GetAngleDirection(Vector3 beginDir, Vector3 endDir, Vector3 upDir)
        {
            Vector3 cross = Vector3.Cross(beginDir, endDir);
            float dot = Vector3.Dot(cross, upDir);
            if (dot > 0.0f)
                return AngleDir.CLOCK;
            else if (dot < 0.0f)
                return AngleDir.ANTICLOCK;
            return AngleDir.PARALLEL;
        }

    After having used it a little, I think it's wrong. If I supply the same vector as both inputs (beginDir equal to endDir), the cross product is zero, but the dot product comes out a little bit more than zero. I think that to fix it I can simply check whether the cross product is zero, which means the two vectors are parallel, but my code doesn't work. I tried the following solution:

        Vector3 cross = Vector3.Cross(beginDir, endDir);
        if (cross == Vector3.zero)
            return AngleDir.PARALLEL;

    It doesn't work, because the comparison between Vector3.zero and cross always comes out different from zero (even if cross is actually [0.0f, 0.0f, 0.0f]). I also tried this:

        Vector3 cross = Vector3.Cross(beginDir, endDir);
        if (cross.magnitude == 0.0f)
            return AngleDir.PARALLEL;

    It also fails, because the magnitude is slightly more than zero. So my question is: given two Vector3 in Unity, how do I compare them? I need an elegant equivalent of this:

        if (beginDir.x == endDir.x && beginDir.y == endDir.y && beginDir.z == endDir.z)
            return true;
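    Exact equality is the wrong tool for floating point; the standard approach is an explicit tolerance. A sketch, where the epsilon (1e-6f) is an arbitrary value to tune for your coordinate scale:

        const float kEpsilon = 1e-6f; // tolerance; tune for your scale

        // Treat the cross product as zero when its squared length falls
        // below the tolerance (sqrMagnitude avoids a square root).
        Vector3 cross = Vector3.Cross(beginDir, endDir);
        if (cross.sqrMagnitude < kEpsilon)
            return AngleDir.PARALLEL;

        // The same idea for comparing two vectors directly:
        bool nearlyEqual = (beginDir - endDir).sqrMagnitude < kEpsilon;

    Comparing squared magnitudes against a tolerance is cheaper and numerically saner than testing individual components for exact equality.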

  • Good book or tutorial for learning how to apply integration methods

    - by Cumatru
    I'm looking to animate a graph layout using edges as springs and nodes as weights (a node with more links will have a bigger weight). I'm not capable of wrapping my head around the usage of mathematical and physics relations in my application. As far as I've read, Runge-Kutta 4 (preferably) or Verlet would be a good choice, but I have problems understanding how they really work and which physics equations I should apply. If I can't understand them, I can't use them. I'm looking for a book or a tutorial which describes the things that I need.
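    To make the integrators less abstract while hunting for a book, here is a hedged one-dimensional sketch of Stormer-Verlet for a single node pulled toward the origin by one spring (all constants made up). The scheme needs no explicit velocity, which is part of what makes it pleasant for layout animation:

        # Stormer-Verlet: x_new = 2x - x_prev + a*dt^2
        k = 0.5                 # spring stiffness
        m = 1.0                 # node "weight"
        dt = 0.01               # time step
        x, x_prev = 10.0, 10.0  # start at rest, stretched 10 units

        for step in range(1000):
            a = -k * x / m                            # Hooke's law
            x, x_prev = 2 * x - x_prev + a * dt * dt, x

        print(x)  # oscillates forever; add damping so a layout settles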

  • About Alpha blending sprites in Direct3D9

    - by ambrozija
    I have a Direct3D9 application that is rendering ID3DXSprites. The problem I am experiencing is best described by this situation: I have a texture that is totally opaque. On top of it I draw a rectangle filled with a solid colour and an alpha of 128. On top of the rectangle I have text that is totally opaque. I draw all of this and get the resulting image through a GetRenderTarget call. The problem is that on the resulting image, in the area where the transparent rectangle is, I have semi-transparent pixels. It is not a problem that the rectangle is transparent; the problem is that the resulting image is. The question is how to set up the blending so that in this situation I don't get transparent pixels in the resulting image. I use the sprite with D3DXSPRITE_ALPHABLEND, which sets the device state to D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA. I tried a couple of combinations of SetRenderState, like D3DBLEND_SRCALPHA, D3DBLEND_DESTALPHA etc., but couldn't make it work. Thanks.
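    The semi-transparent result comes from the same SRCALPHA/INVSRCALPHA factors being applied to the alpha channel itself, so the render target's alpha drops below 1 wherever the rectangle lands. Direct3D 9 can blend alpha separately from colour; a hedged sketch of the idea (these states would need to be set after ID3DXSprite::Begin, since D3DXSPRITE_ALPHABLEND resets blend state):

        // Keep the usual blend for colour...
        device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
        device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
        device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

        // ...but blend alpha with the "over" operator, so an opaque
        // target stays opaque: destA' = srcA + destA * (1 - srcA).
        device->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, TRUE);
        device->SetRenderState(D3DRS_SRCBLENDALPHA, D3DBLEND_ONE);
        device->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_INVSRCALPHA);

    With a destination alpha of 1 and a source alpha of 0.5, that formula yields 0.5 + 1*0.5 = 1, so the captured image stays fully opaque.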

  • Seeking a C/C++ OBJ geometry read/write that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation; i.e. read geometry, immediately write it, and a diff of the source OBJ and the one just written will be identical. Every OBJ-writing utility I've been able to find online fails this test. I am writing small command-line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nate Robins's GLUT library includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! Redundant vertices are on purpose, for keeping ourselves in cache on our weird little low-memory mobile devices.
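    Given those constraints, the safest architecture may be line-preserving: keep every source line verbatim, let a tool parse and rewrite only the records it actually edits, and emit everything else untouched. A hedged C++ sketch of that core (round-trips byte for byte, modulo line-ending normalisation):

        #include <fstream>
        #include <string>
        #include <vector>

        // One OBJ line, kept verbatim. Tools that edit a record replace
        // 'text'; every line they ignore survives the round trip intact.
        struct ObjLine { std::string text; };

        std::vector<ObjLine> readObj(const std::string& path) {
            std::vector<ObjLine> lines;
            std::ifstream in(path);
            for (std::string s; std::getline(in, s); )
                lines.push_back({s});
            return lines;
        }

        void writeObj(const std::string& path, const std::vector<ObjLine>& lines) {
            std::ofstream out(path);
            for (const ObjLine& l : lines)
                out << l.text << '\n';
        }

    A tool then parses "v " and "f " lines into numbers only when it needs to move vertices, and re-serialises just those lines, confining any formatting drift to the records that genuinely changed.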

  • Camera closes in on the fixed point

    - by V1ncam
    I've been trying to create a camera that is controlled by the mouse and rotates around a fixed point (read: (0,0,0)), both vertically and horizontally. This is what I've come up with:

        camera.Eye = Vector3.Transform(camera.Eye, Matrix.CreateRotationY(camRotYFloat));
        Vector3 customAxis = new Vector3(-camera.Eye.Z, 0, camera.Eye.X);
        camera.Eye = Vector3.Transform(camera.Eye, Matrix.CreateFromAxisAngle(customAxis, camRotXFloat * 0.0001f));

    This works quite well, except that when I use the second transformation (going up and down with the mouse), the camera not only goes up and down, it also closes in on the point. It zooms in. How do I prevent this? Thanks in advance.
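    A likely culprit (hedged, but this is the classic cause): Matrix.CreateFromAxisAngle expects a unit-length axis, and customAxis here has the length of the eye vector's horizontal projection, so the resulting matrix is not a pure rotation and shortens the eye vector every frame. A sketch of the fix:

        // Normalize the horizontal axis before building the rotation, so
        // the transform is a pure rotation and preserves the eye distance.
        Vector3 customAxis = new Vector3(-camera.Eye.Z, 0, camera.Eye.X);
        customAxis.Normalize();
        camera.Eye = Vector3.Transform(camera.Eye,
            Matrix.CreateFromAxisAngle(customAxis, camRotXFloat * 0.0001f));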

  • android: How to apply pinch zoom and pan to 2D GLSurfaceView

    - by mak_just4anything
    I want to apply a pinch-zoom and panning effect to a GLSurfaceView. It is an image editor, so there are no 3D objects. I tried to implement it using these links: https://groups.google.com/forum/#!topic/android-developers/EVNRDNInVRU ("Want to apply pinch and zoom to GLSurfaceView (3d Object)") and http://www.learnopengles.com/android-lesson-one-getting-started/ - but these are all links for 3D object rendering. I cannot use an ImageView, as I need to work with OpenGL, so I have to implement it on a GLSurfaceView. Any suggestions or reference links for such an implementation? I need it for 2D only.
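    For a 2D editor the "camera" is just an orthographic projection, so pinch and pan reduce to a zoom factor and an offset that the renderer feeds into its projection. A hedged Android sketch (field names and the exact ortho bounds are assumptions):

        import android.content.Context;
        import android.opengl.GLSurfaceView;
        import android.view.MotionEvent;
        import android.view.ScaleGestureDetector;

        public class EditorSurfaceView extends GLSurfaceView {
            private final ScaleGestureDetector scaleDetector;
            private float zoom = 1f, panX = 0f, panY = 0f;
            private float lastX, lastY;

            public EditorSurfaceView(Context context) {
                super(context);
                scaleDetector = new ScaleGestureDetector(context,
                    new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                        @Override public boolean onScale(ScaleGestureDetector d) {
                            zoom *= d.getScaleFactor(); // pinch adjusts zoom
                            return true;
                        }
                    });
            }

            @Override public boolean onTouchEvent(MotionEvent e) {
                scaleDetector.onTouchEvent(e);
                if (!scaleDetector.isInProgress()
                        && e.getAction() == MotionEvent.ACTION_MOVE) {
                    panX -= (e.getX() - lastX) / zoom; // one finger drags
                    panY += (e.getY() - lastY) / zoom; // GL y is flipped
                }
                lastX = e.getX();
                lastY = e.getY();
                requestRender();
                return true;
            }
        }

    The renderer then rebuilds its projection each frame from zoom and pan, e.g. glOrthof(panX - w/(2*zoom), panX + w/(2*zoom), panY - h/(2*zoom), panY + h/(2*zoom), -1, 1), and the image quad itself never moves at all.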

  • Playing repeated sound in Java

    - by Diogo Schneider
    I'm trying to play sounds in a Java game with the following code:

        AudioStream audioStream = new AudioStream(stream);
        AudioPlayer.player.start(audioStream);

    The stream variable is just an InputStream for the resource. The first time this code is called, the sound plays as expected, but the second time the program just hangs; not even an exception is thrown. I don't know what's going on or how to prevent this. If I close either stream or audioStream after the above code, the program doesn't hang, but no sound is ever played at all. Any tips are welcome, thanks.
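    AudioStream and AudioPlayer are undocumented sun.audio classes, and hangs like this are one reason they are best avoided. The supported route is javax.sound.sampled, where a Clip is loaded once and replayed by rewinding. A hedged sketch:

        import java.io.BufferedInputStream;
        import java.io.InputStream;
        import javax.sound.sampled.AudioInputStream;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Clip;

        static Clip loadClip(InputStream stream) throws Exception {
            // BufferedInputStream adds the mark/reset support that
            // getAudioInputStream requires of its argument.
            AudioInputStream audio =
                AudioSystem.getAudioInputStream(new BufferedInputStream(stream));
            Clip clip = AudioSystem.getClip();
            clip.open(audio);
            return clip;
        }

        static void play(Clip clip) {
            clip.stop();
            clip.setFramePosition(0); // rewind, then replay from the start
            clip.start();
        }

    Loading once and rewinding also avoids re-reading the resource stream, which can only be consumed a single time.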

  • Interesting/Innovative Open Source tools for indie games [closed]

    - by Gastón
    Just out of curiosity, I want to know about open-source tools or projects that can add interesting features to indie games, preferably features that could otherwise only be found in big-budget games. EDIT: As suggested by The Communist Duck and Joe Wreschnig, I'm putting the examples as answers. EDIT 2: Please do not post tools like PyGame, Inkscape, Gimp, Audacity, Slick2D, Phys2D, or Blender (except for interesting plugins) and the like. I know they are great tools/libraries, and some would argue they are essential to developing good games, but I'm looking for rarer projects. It could be something really specific or niche, like generating realistic trees and plants, or realistic AI for animals.

  • Game programming in C++ [closed]

    - by Asaf
    I am a new programmer. I know C++ quite well and I know C# very well. I'm really eager to learn how to program games properly, and I can't really find where to start learning. I have never done any graphics work in C++, only a crappy game with Windows Forms graphics. I'm really into game programming and hoping I can get employed in it in the future. I'd be glad to have some advice about this. Thanks in advance, Asaf

  • Asset missing problem XNA

    - by ChocoMan
    I'm using VS2010 with XNA 4.0, and I'm trying to load an FBX model with a texture onto the screen. The problem I'm having is this error:

        Missing Asset: C:\Users\ChocoMan\Documents\Visual Studio 2010\Projects\XNAGame\Documents\Visual Studio\Projects\XNAGame\XNAGameContent\Textures\texture.bmp

    But the actual path to the texture is:

        C:\Users\ChocoMan\Documents\Visual Studio\Projects\XNAGame\XNAGameContent\Textures\texture.bmp

    Also, when I linked the texture in Maya, I used the actual path above. Does anyone know why VS is looking for an incorrect address that doesn't exist?

  • Z Order in 2D with orthographic projection and texture atlas

    - by Carbon Crystal
    I am working with a 2D game in OpenGL ES and have a question about z-order together with a texture atlas. I am using an orthographic projection because I want pixel-perfect rendering of 2D sprites. However, from what I can determine, the draw order is really the only thing that will determine which textures (sprites) appear above or below their neighbours; that is, the "z-index" is a function of the order in which the textures are drawn, as opposed to the z coordinate of the vertex array being drawn. So I have a texture atlas to save binding multiple textures for each draw call, but this immediately creates a problem if there is more than one atlas in play. If I need to draw textures from more than one atlas (typically the case if I have too many sprites to fit in a single atlas of a reasonable size), then I can't maintain a draw order across atlases unless I am willing to bind/unbind the atlas textures more than once, which kind of defeats the purpose. Does anyone have any clues as to the best approach here? Currently I'm running under the assumption that I will have to declare different fixed "depths" (e.g. foreground, background, etc.) in my 2D scene and assume that the z-order for sprites at a given depth is the same. Then I can have as many atlases as I need at each depth and simply draw the depths in order (along with their associated atlases). I'd love to hear what other people are doing.
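    That fixed-depth plan boils down to a two-level sort key, which can be written directly (a sketch of the idea, not taken from the thread):

        #include <algorithm>
        #include <vector>

        struct SpriteDraw {
            int depth;    // fixed layer: background, foreground, ...
            int atlasId;  // which atlas texture the sprite lives in
            // ... per-sprite vertex data ...
        };

        // Group equal depths by atlas, so walking the sorted list needs
        // at most one texture bind per (depth, atlas) run.
        void sortForBatching(std::vector<SpriteDraw>& draws) {
            std::sort(draws.begin(), draws.end(),
                [](const SpriteDraw& a, const SpriteDraw& b) {
                    if (a.depth != b.depth) return a.depth < b.depth;
                    return a.atlasId < b.atlasId;
                });
        }

    Within one depth, sprites from different atlases are assumed either not to overlap or not to care about their mutual order, which is exactly the assumption the fixed-depth scheme already makes.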

  • Should NPC dialog be stored in XML or in a script?

    - by Andrea Tucci
    I'm developing an action RPG with some friends. I would like to know the differences and pros/cons of writing NPC dialogue in an XML file versus a script. I see that the script method is often used by game developers for NPC text, but is it better than an XML file? We've thought that an XML file with tags like <FirstText>[text1]</FirstText><SecondText>[text2]</SecondText> et cetera is perfect for NPC text, and also for possible quests to give the player. So what are the differences between these two methods? Is a script suitable for this aim?

  • What's the best way to generate an NPC's face using web technologies?

    - by Vael Victus
    I'm in the process of creating a web app. I have many randomly-generated non-player characters in a database. I can pull a lot of information about them: their height and weight, down to eye color, hair color, and hair style. For this, I am solely interested in generating a graphical representation of the face. Currently the information is displayed as nicely as possible with text, but I believe it's worth generating these faces for a more... human experience. Problem is, I'm no artist. I wouldn't mind commissioning an artist for this system, but I wouldn't know where to start. Were it 2007, I'd naturally think to myself that using Flash would be the best choice; I'd love to see "breathing" simulated. However, since Flash is on its way out, I'm not sure of a solid solution. With a previous game, I simply used layered .pngs to represent various aspects of the player's body: their armor, the face, the skin color. However, those solutions weren't very dynamic and felt very amateur. I can't go deep into this project feeling like that's an inferior way to present these faces, and I'm certain there's a better way. Can anyone give some suggestions on how to pull this off well?
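    The layered-PNG idea carries over directly to the HTML5 canvas, which may be the natural substitute for Flash here: draw feature layers in order onto one canvas, choosing each layer image from the NPC's stored attributes. A hedged sketch with made-up asset names:

        function drawFace(ctx, npc) {
            // Layer file names are hypothetical; they would be derived
            // from the NPC row (skin tone, eye color, hair...).
            var names = [
                'skin_' + npc.skinTone + '.png',
                'eyes_' + npc.eyeColor + '.png',
                'hair_' + npc.hairStyle + '_' + npc.hairColor + '.png'
            ];
            var images = [], loaded = 0;
            names.forEach(function (name, i) {
                var img = new Image();
                img.onload = function () {
                    // Draw only once everything is in, so the stacking
                    // order stays back-to-front regardless of load order.
                    if (++loaded === names.length)
                        images.forEach(function (im) { ctx.drawImage(im, 0, 0); });
                };
                img.src = '/faces/' + name;
                images[i] = img;
            });
        }

    For the "breathing" touch, redrawing the composite each frame via requestAnimationFrame with a slow sinusoidal vertical scale is a cheap approximation of what Flash tweening would have done.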
