Search Results

Search found 26043 results on 1042 pages for 'development trunk'.

Page 452/1042 | < Previous Page | 448 449 450 451 452 453 454 455 456 457 458 459  | Next Page >

  • SDL2 with OpenGL -- weird results, what's wrong?

    - by ber4444
    I'm porting an app to iOS, and therefore need to upgrade it from SDL 1.2 to SDL2 (so far I'm testing it as an OS X desktop app only). However, when running the code with SDL2, I'm getting weird results, as shown in the second image below (the first image shows how it looks with SDL 1.2, correctly). The single changeset that causes this is this one; do you see something obviously wrong there, or does SDL2 have some OpenGL nuances I'm unaware of? My SDL is based on changeset dd7e57847ea9 from HG (since then there is one "Allow specifying of OpenGL 3.2 Core Profile on Mac OS X" commit; I'm not sure if that would help).

    Read the article

  • Handling window resize with arbitrary aspect ratios

    - by DormoTheNord
    I'm currently making a 2D game using SFML. I want the aspect ratio to be maintained when the user resizes the window. I also want the game to work with any arbitrary aspect ratio (like any media player would). Here is the code I have so far:

        void os::GameEngine::setCameraViewport()
        {
            sf::FloatRect tempViewport;
            float viewAspectRatio = (float)aspectRatio.x / aspectRatio.y;
            float screenAspectRatio = (float)gameWindow.getSize().x / gameWindow.getSize().y;

            if (viewAspectRatio > screenAspectRatio)
            {
                // Viewport is wider than screen, fit on X
            }
            else if (viewAspectRatio < screenAspectRatio)
            {
                // Screen is wider than viewport, fit on Y
            }
            else // window aspect ratio matches view aspect ratio
            {
                tempViewport.height = 1;
                tempViewport.width = 1;
                tempViewport.left = 0;
                tempViewport.top = 0;
            }

            viewport = tempViewport;
            camera.setViewport(viewport);
            gameWindow.setView(camera);
        }

    The problem is I'm having trouble with the logic to determine the properties of the viewport.
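
    A minimal sketch of the two missing branches, assuming SFML's normalized viewport coordinates where (0, 0, 1, 1) covers the whole window: shrink one axis by the ratio of the two aspect ratios and center it, which letterboxes or pillarboxes as needed:

        if (viewAspectRatio > screenAspectRatio)
        {
            // View is wider than the window: full width, letterbox top/bottom.
            tempViewport.width  = 1.f;
            tempViewport.height = screenAspectRatio / viewAspectRatio;
            tempViewport.left   = 0.f;
            tempViewport.top    = (1.f - tempViewport.height) / 2.f;
        }
        else if (viewAspectRatio < screenAspectRatio)
        {
            // Window is wider than the view: full height, pillarbox left/right.
            tempViewport.width  = viewAspectRatio / screenAspectRatio;
            tempViewport.height = 1.f;
            tempViewport.left   = (1.f - tempViewport.width) / 2.f;
            tempViewport.top    = 0.f;
        }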

    Read the article

  • Facebook Game database design

    - by facebook-100000781341887
    Hi, I'm currently developing a Facebook mafia-like PHP game (a lightweight version, of course). Here is a simplified version of the game's database (MySQL):

        id-a <int3> <for index>
        uid <chr15> <facebook uid>
        HP <int3> <health points>
        exp <int3> <experience>
        money <int3> <money>
        list_inventory <chr5> <the inventory the user holds; something special here, discussed next>
        ...

    plus about 20 other fields such as reputation, number of combats, and so on. (The number next to each type is the size of that type in bytes.)

    For list_inventory, there are 40 inventory items in my game (actually, I have 5 lists of this kind in my database), and each user can hold at most 1 of each item. I therefore assign 5 chars to this field and treat each bit of each char as 1 item (5 chars * 8 bits = 40 slots), doing some manipulation in PHP to extract the data from these 5 bytes.

    I was thinking about this: if the game has 100,000 users and only 10% are active, then with my method the space used is 5 bytes * 100,000 = 500 KB.

    Alternatively, I could create a table user_hold_inventory and insert a record whenever a user holds an item. For the 10,000 active users I assume they hold every item, and for the rest I assume they hold none. Here are the fields of the new table:

        id-b <int3> <for index>
        id-a <int3> <id of the user table>
        inv_no <int1> <inventory item the user holds>

    The space used is ([id] (3+3) bytes + [inv_no] 1 byte) * [active users] 10,000 * [all inventory] 40 = 2.8 MB.

    Method 2 seems to use more space but less CPU power. Please comment on these two methods, or correct me if there is a better method than what I have in mind. Another question: my table contains 26 fields, but I count 5 of them that do not change frequently. Should I separate those into another table or not? So many words; thanks for reading :)

    Read the article

  • backface culling error (in world space)

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface-culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation (is that correct?). (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;

        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();

            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }

        for (Triangle &t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason? For a better explanation I uploaded a picture with cube rotation screenshots: http://imageshack.us/photo/my-images/842/bcmove.png/

    UPDATE: The error occurs only when the triangle has a non-zero offset from the origin.

    UPDATE 2: If I do backface culling in clip space (after transforming all vertices with the view and projection matrices) and just check the z coordinate of the triangle normal, it works perfectly... Can I perform culling RIGHT BEFORE the view/projection transforms? In that case the culling would not depend on the projection, and that doesn't seem right...

    UPDATE 3: I found the answer and will post it in two hours, again because of the reputation requirement.
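
    For reference, a hedged sketch of the usual world-space fix, reusing the question's types (it assumes the camera position is available; here it is the origin, per the setup above). In world space the eye ray differs per triangle, so the test must use the vector from the camera to the triangle instead of a constant view direction; the comparison sign depends on your winding order:

        // Cull against the per-triangle eye vector, not a constant view_dir.
        Vector3F cam_pos(0.0f, 0.0f, 0.0f);      // camera at the origin
        Vector3F to_tri(t.v1.x - cam_pos.x,
                        t.v1.y - cam_pos.y,
                        t.v1.z - cam_pos.z);
        if (Dot(to_tri, normal) >= 0)             // normal faces away from the eye
            to_remove.push_back(t);

    This also explains the first update: with a constant direction the test only agrees with the true eye ray for triangles near the view axis, so the error grows with the triangle's offset from the origin.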

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is... unproject the mouse into the camera to get the ray:

        Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 dir = p2 - p1;
        dir.Normalize();
        Ray ray = Ray(p1, dir);

    Then get the Y-intercept by using algebra:

        float t = -ray.Position.Y / ray.Direction.Y;
        Vector3 p = ray.Position + t * ray.Direction;

    The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the projected position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements by doing some computations at double precision, and possibly by abusing the fact that floating-point calculations are done at 80-bit precision on x86; however, before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can make to improve this?

    EDIT: A little snooping around in .NET Reflector on SlimDX.dll:

        public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection)
        {
            Vector3 coordinate = new Vector3();
            Matrix result = new Matrix();
            Matrix.Invert(ref worldViewProjection, out result);
            coordinate.X = (float) ((((vector.X - x) / ((double) width)) * 2.0) - 1.0);
            coordinate.Y = (float) -((((vector.Y - y) / ((double) height)) * 2.0) - 1.0);
            coordinate.Z = (vector.Z - minZ) / (maxZ - minZ);
            TransformCoordinate(ref coordinate, ref result, out coordinate);
            return coordinate;
        }

        // ...

        public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result)
        {
            Vector3 vector;
            Vector4 vector2 = new Vector4 {
                X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41,
                Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42,
                Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43
            };
            float num = (float) (1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44));
            vector2.W = num;
            vector.X = vector2.X * num;
            vector.Y = vector2.Y * num;
            vector.Z = vector2.Z * num;
            result = vector;
        }

    ...which seems to be a pretty standard method of unprojecting a point from a projection matrix; however, this serves to introduce another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.
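
    One algorithmic change worth sketching (illustrative only, written in C++ with GLM rather than SlimDX; the function name and scaling are assumptions, not the poster's code): invert the matrix once and do the unproject plus the Y-intercept entirely in double precision, dropping to float only at the end. This removes most of the single-precision round-off that makes the intercept jitter at far zoom:

        #include <glm/glm.hpp>

        glm::dvec3 MouseOnXZPlane(double mx, double my, double width, double height,
                                  const glm::dmat4 &viewProj)
        {
            glm::dmat4 inv = glm::inverse(viewProj);
            // Mouse in NDC on the near (z = -1) and far (z = 1) planes.
            glm::dvec4 n = inv * glm::dvec4(2.0 * mx / width - 1.0,
                                            1.0 - 2.0 * my / height, -1.0, 1.0);
            glm::dvec4 f = inv * glm::dvec4(2.0 * mx / width - 1.0,
                                            1.0 - 2.0 * my / height,  1.0, 1.0);
            glm::dvec3 p1 = glm::dvec3(n) / n.w;   // world-space ray endpoints
            glm::dvec3 p2 = glm::dvec3(f) / f.w;
            double t = -p1.y / (p2.y - p1.y);      // where the ray crosses y = 0
            return p1 + t * (p2 - p1);
        }

    The same idea carries over to SlimDX by inverting the matrix once and transforming the two NDC points with hand-written double arithmetic instead of calling the float-only Unproject.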

    Read the article

  • Drawing of a huge model - How to regain performance?

    - by marc wellman
    I have a huge model I want to draw in my XNA application, but because of its size I am experiencing a tremendous loss of performance. The model has ~50,000,000 edges and a size on disk of 205 MB in DirectX format. Please don't ask whether this model has to be that big: yes, it does! Is there a way to transfer the model directly to my GPU in order to let the GPU do the drawing, like when transferring a VertexBuffer like this:

        graphicsDevice.Vertices[1].SetSource(_instanceBuffers[i], 0, _sizeofMatrix);

    because when I try to fill a vertex buffer with all the vertices I get an OutOfMemoryException.

    Read the article

  • Moving from XNA/C# to DirectX/C++ quite confused

    - by misiMe
    I made some games with XNA/C# for Windows Phone and Windows 8. Since XNA is dead and Visual Studio doesn't support it (I have to target Windows Phone 7.1 to build with XNA), I want to start learning something more "consistent in time" and improve my skills. I'm a little confused about the possibilities, because C++/DirectX alone seems difficult, so I found some high-level libraries to help:

    - DirectX Toolkit
    - Cocos2D

    My questions are:

    - What will happen when they "die" like XNA did?
    - Is the C++ approach more "professional" than C#/XNA, and why?
    - Is the C++ approach more portable?
    - Is the C++ approach more resistant in terms of time?
    - Are there any performance considerations between DirectX TK and Cocos2D?

    I ask because I found that every game software house in my country looks for skilled C++ programmers.

    Read the article

  • Camera movement and threshold not working

    - by irish guy mcconagheh
    I have a platformer in progress. Part of this is a camera which I only want to move when the character moves out of a certain threshold. To try to accomplish this I have the following if statement, in Unity/C#:

        if (((Mathf.Abs(target.transform.position.x)) - (Mathf.Abs(transform.position.x))) > thres)
        {
            x = moveTo(transform.position.x, target.position.x, trackSpeed);
        }

    In pseudocode it means:

        if ((absolute value of player x) - (absolute value of camera x) is greater than the threshold) {
            move
        }

    However, this does not seem to work correctly. It appears to work the first couple of times the threshold is reached, but then the distance between the camera and the player has to increase every time for the camera to move. I do not believe the movement of the camera is the problem, but its code is as follows:

        private float moveTo(float n, float target, float accel)
        {
            if (n == target)
            {
                return n;
            }
            else
            {
                float dir = Mathf.Sign(target - n);
                n += accel * Time.deltaTime * dir;
                return (dir == Mathf.Sign(target - n)) ? n : target;
            }
        }

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I'm having some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = - (zFar + zNear) / (zFar - zNear);
            Result[2][3] = - valType(1);
            Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There don't seem to be any errors in the code. So I tried to invert the projection. The formulas for the z and w coordinates after projection are:

        z' = -((zFar + zNear) / (zFar - zNear)) * z - (2 * zFar * zNear) / (zFar - zNear)
        w' = -z

    Dividing z' by w' gives the post-projective depth d (which lies in the depth buffer), so I need to solve for z, which finally gives:

        z = (2 * zFar * zNear) / (d * (zFar - zNear) - (zFar + zNear))

    Now, the problem is I don't get the correct position (I have compared the reconstructed one with a rendered position). I then tried using the respective formula I get by doing the same for this Matrix, which gives a different expression. For some reason, using that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me, please?

    Read the article

  • How can I gain access to a player instance in a Minecraft mod?

    - by Andrew Graber
    I'm creating a Minecraft mod with a pickaxe that takes away experience when you break a block. The method for taking away experience from a player is addExperience on EntityPlayer, so I need to get an instance of EntityPlayer for the player using my pickaxe when the pickaxe breaks a block, so that I can remove the appropriate amount of experience. My pickaxe class currently looks like this:

        public class ExperiencePickaxe extends ItemPickaxe {
            public ExperiencePickaxe(int ItemID, EnumToolMaterial material) {
                super(ItemID, material);
            }

            public boolean onBlockDestroyed(ItemStack par1ItemStack, World par2World, int par3,
                                            int par4, int par5, int par6, EntityLiving par7EntityLiving) {
                if ((double) Block.blocksList[par3].getBlockHardness(par2World, par4, par5, par6) != 0.0D) {
                    EntityPlayer e = new EntityPlayer(); // create an instance
                    e.addExperience(-1);
                }
                return true;
            }
        }

    Obviously, I cannot actually create a new EntityPlayer, since it is an abstract class. How can I get access to the player using my pickaxe?

    Read the article

  • How can I deal with actor translations and other "noise" in third-party motion capture data?

    - by Charles
    I'm working on a game, and I've run into a problem with motion capture data. My team is using 3DS Max 2011 and trying to put free motion capture files on our models. The problem we're having is that it has become extremely hard to find motion capture data that stays in place. We've found some great motion captures of things like walking and jumping, but the actors themselves move within the data, so when we attach these animations to our models and bring them into XNA, the models walk forward even when they should technically be standing still (and then there's also the problem of them resetting at the end of the animation). How can we clean up the animation in these motion capture files, at runtime or at asset-processing time?

    Read the article

  • Formula for three competing heroes, each has one they can beat and one they're beaten by

    - by Georgiadis Abraam
    I am trying to design a game for a project I have. The main idea is: 3 types of heroes, 3 stats per hero, and no levels involved, so the differences must come from the stats. Fight logic: type 1 heroes have good chances of beating type 2 heroes, type 2 heroes have good chances of beating type 3 heroes, and type 3 heroes have good chances of beating type 1 heroes. For over a week I have been trying to find a stats-based formula that satisfies this, but I can't. I was playing with numbers yesterday and got something decent, but I can't extract a formula out of it. Could you please guide me, or give me hints on how to start creating formulas for a no-level game that fulfills this fight logic?
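
    One hedged sketch of such a formula (the names and constants below are illustrative, not from the question): give each hero type a cyclic advantage (1 beats 2, 2 beats 3, 3 beats 1), scale the countering side's stat total by a tunable factor, and map the stat difference through a logistic curve so the output is a win probability:

        #include <cmath>

        // statsA/statsB: each hero's three stats summed; types are 1, 2 or 3.
        double winChance(int typeA, double statsA, int typeB, double statsB,
                         double advantage = 1.25)   // tunable type-advantage factor
        {
            if ((typeA % 3) + 1 == typeB) statsA *= advantage;  // A counters B
            if ((typeB % 3) + 1 == typeA) statsB *= advantage;  // B counters A
            // Logistic curve: symmetric, always in (0, 1); the divisor sets
            // how quickly a stat lead turns into a near-certain win.
            return 1.0 / (1.0 + std::exp(-(statsA - statsB) / 10.0));
        }

    With equal stat totals, the hero holding the type advantage wins more than half the time, and the two constants let you tune exactly how much more.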

    Read the article

  • Interpolation gives the appearance of collisions

    - by Akroy
    I'm implementing a simple 2D platformer with the game logic updated at a constant rate, but with the rendering done as fast as the machine can handle. I interpolate positions between actual game updates by just using the position and velocity of objects at the last update. This makes things look really smooth in general, but when something hits a wall or floor, it appears to go through the wall for a moment before being positioned correctly. This is because the interpolator is not taking walls into account, so it guesses the position into walls until the actual game update fixes it. Are there any particularly elegant solutions for this? Simply increasing the update rate seems like a band-aid solution, and I'm trying to avoid raising the system requirements. I could also check for collisions in the actual interpolator, but that seems like heavy overhead, and then I'm no longer dividing the drawing and the game updating.
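
    One cheap compromise, sketched below with assumed names (the solid-point query is hypothetical): keep the extrapolation, but veto predictions that land inside solid geometry with a single point test, which is far cheaper than running real collision detection in the interpolator:

        #include <functional>

        struct Vec2 { float x, y; };

        // pos/vel: state at the last fixed update; dt: the fixed timestep;
        // alpha in [0, 1): how far we are into the current logic frame.
        Vec2 interpolate(Vec2 pos, Vec2 vel, float dt, float alpha,
                         const std::function<bool(Vec2)> &solidAt)
        {
            Vec2 guess{ pos.x + vel.x * dt * alpha,
                        pos.y + vel.y * dt * alpha };
            // Point test only; real collision response still happens in the
            // fixed-rate update, so walls merely stop the visual prediction early.
            return solidAt(guess) ? pos : guess;
        }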

    Read the article

  • How to categorize textures into atlases

    - by Esa
    I am going to use texture atlasing for the first time in my games, and at first it seemed like a great idea to split textures into atlases by terrain theme, e.g. ForestTextures, WinterTextures, etc. But that causes a problem when, for example, a flower has to use a transparency shader while other models use a diffuse shader; those cannot be atlased into the same texture. So, would atlasing textures into themes as described and then splitting them by shader, like ForestDiffuse and ForestTransparent, be good? Or is there a better way to categorize and build them?

    Read the article

  • Game programming basics under Windows

    - by dreta
    I've been trying to learn some Windows programming using the Win32 API. Now, I'm used to working with the OS layer abstracted away, mostly thanks to libraries like SFML or Allegro. Could you help me out and tell me if I'm thinking right here? Is the place for my game loop where I'm reading the messages?

        while (TRUE)
        {
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            else
            {
                // my game loop goes here
            }
        }

    Now the slightly bigger issue, that is, drawing. Do I run my drawing where I normally do it, inside the game loop after the game logic? Or do I do it when WM_PAINT is being handled, and just call InvalidateRect(hwnd, NULL, TRUE); when I want to draw? This does feel weird: WM_PAINT is a queued message, so I don't know for sure when it'll be called. So if I wanted to avoid this, do I just get the device handle inside the game loop and only call ValidateRect(hwnd, NULL); in the WM_PAINT case (besides the ValidateRect(hwnd, NULL); called after drawing in the game loop)? Actually, now that I think about it, do I even need WM_PAINT in this situation, or can I skip it and let DefWindowProc handle it (does it validate the screen if WM_PAINT isn't processed)? If it matters, I'm setting up my code for OpenGL.
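
    A common pattern for GL-backed windows, sketched under the same setup (the msg/hwnd/hdc variables and the Update/Render helpers are assumed, not a complete program): render unconditionally once per loop iteration and keep WM_PAINT trivial, since OpenGL redraws the whole client area every frame anyway. DefWindowProc does validate the window if you leave WM_PAINT unhandled, so the minimal handler below is optional:

        // In the message pump: drain messages, then draw one frame.
        while (running)
        {
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT) { running = FALSE; break; }
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            UpdateGame();        // hypothetical: your game logic
            RenderFrame();       // hypothetical: your GL draw calls
            SwapBuffers(hdc);    // present; hdc obtained from GetDC(hwnd)
        }

        // In the window procedure: keep WM_PAINT cheap.
        case WM_PAINT:
        {
            PAINTSTRUCT ps;
            BeginPaint(hwnd, &ps);   // validates the update region
            EndPaint(hwnd, &ps);
            return 0;
        }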

    Read the article

  • Multiple Vertex Buffers per Mesh

    - by Daniel
    I've run into a situation where the size of my mesh, with all its vertices and indices, is larger than the (optimal) vertex buffer object upper limit (~8 MB). I was wondering if I can sub-divide the mesh across multiple vertex buffers and somehow retain the validity of the indices, i.e. a triangle with an index into the first vertex buffer and an index into the last (in separate VBOs), all the while maintaining this within vertex array objects. My thought is to save myself the hassle and, for meshes (messes :P) such as this, just use the necessary size (> 8 MB), which is what I do at the moment. But ideally my buffer manager (WIP) uses optimal sizes; I may just have to make a special case then... Any ideas? If necessary, a simple C++ code example is appreciated. Note: I have also cross-posted this on Stack Overflow, as I was not sure which site would be more suitable (it's partly a design question).
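
    For what it's worth, an index buffer can only address the vertex buffer(s) bound to its VAO, so indices cannot straddle two VBOs directly; the usual answer is to split the mesh into self-contained chunks and rebase the indices per chunk (one VAO + VBO + IBO each). A hedged sketch of the splitting step, with illustrative types:

        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Vertex { float px, py, pz, nx, ny, nz, u, v; };  // illustrative layout

        struct Chunk {
            std::vector<Vertex>        vertices;  // this chunk's VBO contents
            std::vector<std::uint32_t> indices;   // rebased to local numbering
        };

        // Split an indexed triangle mesh so no chunk exceeds maxVerts vertices.
        // Vertices shared across a chunk border get duplicated: the usual trade-off.
        std::vector<Chunk> splitMesh(const std::vector<Vertex> &verts,
                                     const std::vector<std::uint32_t> &indices,
                                     std::size_t maxVerts)
        {
            std::vector<Chunk> chunks(1);
            std::unordered_map<std::uint32_t, std::uint32_t> remap;  // global -> local
            for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
                if (chunks.back().vertices.size() + 3 > maxVerts) {
                    chunks.emplace_back();   // start a fresh chunk
                    remap.clear();
                }
                Chunk &c = chunks.back();
                for (int k = 0; k < 3; ++k) {
                    std::uint32_t g = indices[i + k];
                    auto it = remap.find(g);
                    if (it == remap.end()) {
                        it = remap.emplace(g, (std::uint32_t)c.vertices.size()).first;
                        c.vertices.push_back(verts[g]);
                    }
                    c.indices.push_back(it->second);
                }
            }
            return chunks;
        }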

    Read the article

  • Adding tolerance to a point in polygon test

    - by David Gouveia
    I've been using this method, taken from Game Coding Complete, to detect whether a point is inside a polygon. It works in almost every case, but is failing on a few edge cases, and I can't figure out the reason. For example, given a polygon with vertices at (0,0), (0,100), and (100,100), the algorithm returns:

    - True for any point strictly inside the polygon
    - False for any of the vertices
    - False for (0, 50), which lies on one of the edges of the polygon
    - True (?) for (50, 50), which is also on one of the edges of the polygon

    I'd actually like to relax the algorithm so that it returns true in all of these cases. In other words, it should return true for points that are strictly inside, for the vertices themselves, and for points on the edges of the polygon. If possible, I'd also like to give it enough tolerance so that it always tends towards "true" in the face of floating-point fluctuations. For example, I have another method that, given a line segment and a point, returns the closest location on the line segment to the given point. Currently, given any point outside the polygon and one of its edges, there are cases where the result is categorized as inside by the method above, while other points are considered outside. I'd like to give it enough tolerance so that it always returns true in this situation. The way I've currently solved the problem is a hack: I use an external library to inflate the polygon by a few pixels and perform the tests on the inflated polygon, but I'd really like to replace this with a proper solution.
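
    A sketch of the relaxed test (strictlyInside stands in for the existing Game Coding Complete routine; the epsilon is an assumed tuning value): accept the strict result, or any point within EPS of an edge, which covers vertices, edge points, and float jitter in one rule:

        #include <cmath>
        #include <vector>

        struct Pt { float x, y; };

        // Distance from p to the segment a-b (clamped to the segment's ends).
        float distToSegment(Pt p, Pt a, Pt b)
        {
            float dx = b.x - a.x, dy = b.y - a.y;
            float len2 = dx * dx + dy * dy;
            float t = len2 > 0 ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2 : 0;
            t = std::fmax(0.f, std::fmin(1.f, t));
            float ex = a.x + t * dx - p.x, ey = a.y + t * dy - p.y;
            return std::sqrt(ex * ex + ey * ey);
        }

        bool insideWithTolerance(Pt p, const std::vector<Pt> &poly, float eps = 1e-3f)
        {
            if (strictlyInside(p, poly))   // the existing point-in-polygon test
                return true;
            for (std::size_t i = 0; i < poly.size(); ++i)
                if (distToSegment(p, poly[i], poly[(i + 1) % poly.size()]) <= eps)
                    return true;
            return false;
        }

    Sizing eps slightly above your expected floating-point error makes the test always lean towards "inside" near the boundary, which is the behavior asked for.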

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally I guess you'd use BlendState.AlphaBlend, because when you load your textures through the content pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup cost. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?

    Read the article

  • New to CG shader programming, what program should I use to write and test shaders?

    - by Notbad
    I have started writing some shaders. The first ones were fairly easy to write in Notepad, but now I need something with a bit more meat. I have checked RenderMonkey, which seems to support CG, but it is really old and I don't know if it is a good option. On the other hand, there is FX Composer 2.0, but it seems like something that could really distract me from learning shaders, because it is a pretty deep program. Are there any other possibilities? There's a really nice alternative for writing shaders named ShaderToy, but it just supports GLSL. Any information will be really welcome. Thanks in advance.

    Read the article

  • How can I use the dualforward parameter in my unity shader to use lightmaps and normal maps together?

    - by Raphaeltm
    I'm using the free version of Unity and I would like to combine lightmaps with specularity and normal maps. After doing a bunch of research, I've figured out that there doesn't seem to be any easy way to do this in the free version of Unity, which doesn't support deferred rendering/easy use of dual lightmaps. However, it looks like it's possible by writing a custom shader: using the "dualforward" parameter in the shader, switching the lightmapping mode to "dual lightmaps", and turning on "Use in forward ren." (basically, writing a shader that specifies the use of dual lightmaps, which should allow for a combination of lightmaps and normal maps). So I downloaded the source code for the default shaders (because all I need is a normal specular bumped shader) and added "dualforward" to the parameters:

        Shader "Bumped Specular Dual Lightmaps" {
            Properties {
                _Color ("Main Color", Color) = (1,1,1,1)
                _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
                _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
                _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
                _BumpMap ("Normalmap", 2D) = "bump" {}
            }
            SubShader {
                Tags { "RenderType"="Opaque" }
                LOD 400

                CGPROGRAM
                #pragma surface surf BlinnPhong dualforward

                sampler2D _MainTex;
                sampler2D _BumpMap;
                fixed4 _Color;
                half _Shininess;

                struct Input {
                    float2 uv_MainTex;
                    float2 uv_BumpMap;
                };

                void surf (Input IN, inout SurfaceOutput o) {
                    fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
                    o.Albedo = tex.rgb * _Color.rgb;
                    o.Gloss = tex.a;
                    o.Alpha = tex.a * _Color.a;
                    o.Specular = _Shininess;
                    o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
                }
                ENDCG
            }
            FallBack "Specular"
        }

    This, however, doesn't seem to work. When I keep the "dualforward" parameter, every object that uses it seems to be lit by the one directional light in the scene. When I remove the "dualforward" parameter, they look like normal lightmapped objects with no normal maps or specularity. I noticed that support for "dualforward" seems to be new in v3.4.2, so I made sure to download it (I was running 3.4.1), but it still doesn't work. Anybody have any advice for me?

    Read the article

  • How to modify VBO data

    - by Romeo
    I am learning LWJGL so I can start working on my game. To learn it, I got the idea to implement the map builder first, so I can get comfortable with graphics programming. Now, for the map creation tool I need to draw new elements, or redraw old ones with different coordinates. Let me explain: my game will be a 2D scroller. The map will consist of multiple rectangles (two triangles in a strip each). When I click the left mouse button I want to start the rectangle, and when I release it I want the rectangle's bottom-right corner to stop at that position. As I want to use VBOs, I want to know how to modify data inside a VBO based on user input. Should I keep a copy of the vertex array and re-upload the whole array to the VBO on each user input? How is a VBO update usually implemented?
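
    A common pattern, sketched in raw GL calls (LWJGL mirrors these one-to-one in its GL15 class; the capacity, variable names, and vertex layout below are illustrative): allocate once with GL_DYNAMIC_DRAW at the maximum capacity you expect, then patch only the rectangle that changed with glBufferSubData. Re-uploading the whole array on every input also works, but this keeps each edit's upload constant-size:

        // Assumes: vbo already generated; (x0, y0)-(x1, y1) from the mouse;
        // rectIndex is this rectangle's slot in the map.
        const int MAX_RECTS  = 1024;                   // illustrative capacity
        const int RECT_BYTES = 4 * 2 * sizeof(float);  // 4 vertices * (x, y)

        // Once, at startup: reserve the full buffer, no data yet.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, MAX_RECTS * RECT_BYTES, NULL, GL_DYNAMIC_DRAW);

        // On each edit (mouse drag/release): overwrite just the changed slot.
        float quad[8] = { x0, y0,  x1, y0,  x0, y1,  x1, y1 };  // strip order
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, rectIndex * RECT_BYTES, RECT_BYTES, quad);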

    Read the article

  • Weird rotation problem

    - by Phil
    I'm creating a simple tank game. No matter what I do, the turret keeps facing the target with its side. I just can't figure out how to rotate it 90 degrees on Y once so it faces the target correctly. I've checked the pivot in Maya, and it doesn't matter how I change it. This is the code I use to calculate how to face the target:

        void LookAt()
        {
            var forwardA = transform.forward;
            var forwardB = (toLookAt.transform.position - transform.position);

            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);
            //print(angleDiff.ToString());

            if (angleDiff > 20)
            {
                //Rotate to
                transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
            else if (angleDiff < 20)
            {
                transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
        }

    I'm using Unity3D and would appreciate any help I can get! Thanks!

    Read the article

  • DirectX and open libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries, more to get an idea of what exists than how they compare. For example, we have:

    - DirectX (graphics) and its open counterpart, OpenGL
    - DirectX (sound) and OpenAL

    But there are other DirectX libraries that I can't find open alternatives to, such as:

    - DirectInput
    - DXGI
    - Direct2D
    - DirectWrite

    Does anyone have any lists or comparisons of DirectX libraries and their open counterparts?

    Read the article

  • Generating grammatically correct MUD-style attack descriptions

    - by Extrakun
    I am currently working on a text-based game, where the outcome of a combat round goes something like this:

        %attacker% inflicts a serious wound (12 points damage) on %defender%

    Right now, I just swap %attacker% with the name of the attacker and %defender% with the name of the defender. However, while the descriptions work, they don't read correctly. Since the game is all text, I don't want to resort to generic descriptions (such as "You use Attack on Goblin for 5 damage", which arguably solves the problem). How do I generate correct descriptions for cases where:

    - %attacker% refers to "You", the player? "You inflicts..." is wrong.
    - %attacker% is "Bees", or another plural?
    - %attacker% is a generic noun? I somehow need to know that I should prefix the name with "The". If %attacker% is a generic noun such as "Goblin", it reads oddly compared to a proper name: compare "Goblin inflicts..." with "Aldraic Swordbringer inflicts...".

    How do text-based games usually resolve such issues?
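
    One pattern many MUD codebases use, sketched below with illustrative field names: carry a little grammatical metadata on each actor (person, plurality, proper-noun flag) and derive the subject and verb from it, instead of substituting raw strings:

        #include <string>

        // Grammatical metadata per actor; the flags drive the sentence builder.
        struct Actor {
            std::string name;
            bool isPlayer;  // second person: "You inflict", not "You inflicts"
            bool isProper;  // proper names take no article: "Aldraic" vs "the goblin"
            bool isPlural;  // plural subjects: "The bees inflict"
        };

        std::string subject(const Actor &a)
        {
            if (a.isPlayer) return "You";
            return a.isProper ? a.name : "The " + a.name;
        }

        // base is the plain verb stem, e.g. "inflict"; naive +s covers most verbs.
        std::string verb(const Actor &a, const std::string &base)
        {
            return (a.isPlayer || a.isPlural) ? base : base + "s";
        }

        // subject(att) + " " + verb(att, "inflict") + " a serious wound on " + ...

    Irregular verbs ("is", "has") usually get a small lookup table layered on top of the naive +s rule.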

    Read the article

  • Object-Oriented OpenGL

    - by Sullivan
    I have been using OpenGL for a while and have read a large number of tutorials. Aside from the fact that a lot of them still use the fixed pipeline, they usually throw all the initialisation, state changes, and drawing into one source file. This is fine for the limited scope of a tutorial, but I'm having a hard time working out how to scale it up to a full game. How do you split your usage of OpenGL across files? Conceptually, I can see the benefits of having, say, a rendering class that purely renders stuff to screen, but how would things like shaders and lights work? Should I have separate classes for lights and shaders?
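
    A hedged skeleton of one common split (class names and responsibilities are illustrative, not a canonical design): thin RAII wrappers own GL object lifetimes, lights stay plain data, and only the renderer issues GL calls or changes state:

        #include <vector>

        struct Mesh;   // owns a VAO/VBO elsewhere; forward-declared for the sketch

        // Thin RAII wrapper: one GL program, deleted when the object dies.
        class Shader {
        public:
            Shader(const char *vertexSrc, const char *fragmentSrc); // compile + link
            ~Shader();                                              // glDeleteProgram
            void bind() const;                                      // glUseProgram
            void setUniformMat4(const char *name, const float *m);
        private:
            unsigned int program = 0;
        };

        // Lights are plain data; they don't need to know about GL at all.
        struct Light {
            float position[3];
            float color[3];
        };

        // The only class that issues draw calls or touches GL state.
        class Renderer {
        public:
            void setLights(const std::vector<Light> &lights); // upload as uniforms
            void submit(const Mesh &mesh, const Shader &shader);
            void flush();  // sort queued calls by shader/material, then draw
        private:
            // queued draw calls, cached GL state, etc.
        };

    The payoff of this split is that lights and meshes stay testable without a GL context, while state changes funnel through one place that can batch and sort them.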

    Read the article
