Search Results

Search found 33194 results on 1328 pages for 'development approach'.

Page 544/1328

  • I need help with a timer for a text-based game; I need to include a MySQL query in it, but I'm not sure how.

    - by Hijumper
    I would like to add a MySQL query somewhere in my timer code so that every time it restarts, one item is added to the database. I can get it to show how many items you have gotten since the timer has been running, but I'm not quite sure how to add that to a MySQL database. Any help would be appreciated :D Here is my timer code thus far:

      <head>
      <script type="text/javascript">
      var c=10;
      var mineCount = 0;
      var t;
      var timer_is_on=0;

      function timedCount() {
          document.getElementById('txt').value = c;
          c = c - 1;
          if (c <= -1) {
              mineCount++;
              var _message = "You have mined " + mineCount + " iron ore" + (((mineCount > 1) ? "s" : "") + "!");
              document.getElementById('message').innerHTML = _message;
              startover();
          }
      }

      function startover() {
          c = 10;
          clearTimeout(t);
          timer_is_on=0;
          doMining();
      }

      function doMining() {
          if (!timer_is_on) {
              timer_is_on = true;
              t = setInterval(function () { timedCount(); }, 1000);
          }
      }
      </script>
      <SPAN STYLE="float:left">
      <form>
      <input type="button" value="Mining" onClick="doMining()">
      <input type="text" id="txt">
      </form>
      </SPAN>
      <html>
      <center>
      <div id='message'></div>
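
    Browser-side JavaScript cannot talk to MySQL directly, so the usual pattern is for the timer (inside startover(), for instance) to fire an AJAX request at a server-side script that performs the INSERT. As a loose sketch of what such an endpoint could do, written here in Java with JDBC purely for illustration; the JDBC URL, credentials, and table/column names are all assumptions, not part of the original code:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;

      // Hypothetical server-side handler: record one mined item per timer cycle.
      // The URL, credentials, and table/column names below are placeholders.
      public final class MineRecorder {
          public static void recordOre(String playerName) throws SQLException {
              String url = "jdbc:mysql://localhost:3306/game";
              try (Connection conn = DriverManager.getConnection(url, "user", "password");
                   PreparedStatement stmt = conn.prepareStatement(
                           "INSERT INTO inventory (player, item, amount) VALUES (?, 'iron_ore', 1)")) {
                  stmt.setString(1, playerName);
                  stmt.executeUpdate();
              }
          }
      }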

    Read the article

  • Where must I put .xnb files in a MonoGame project using VS2010?

    - by user23899
    Hello there, my problem is described in "The Content Pipeline" paragraph of http://blogs.msdn.com/b/bobfamiliar/archive/2012/08/07/windows-8-xna-and-monogame-part-3-code-migration-and-windows-8-feature-support.aspx#comments The author describes how to fix it in VS2012 by putting the .xnb files in the \AppX\Content folder, but I use VS2010 and the MonoGame templates for it, and there are no folders like that. So where must I put these assets to run the game correctly?

    Read the article

  • How to calculate direction from initial point and another point?

    - by Dvole
    I'm making a simple game where I shoot things from a certain point on screen (A). I tap the screen and shoot the projectile from the initial point (A) towards the tap point (B). But I want the projectile to keep moving along the same path and fly out of the bounds of the screen. How do I calculate a point that is on the same line as these two points, but further away? This is simple math, but I can't figure it out.
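
    A minimal sketch of the calculation: take the direction from A to B, normalize it, and step far enough along it that the target is guaranteed to be off-screen. Names and the 2000-unit distance are illustrative:

      final class Aim {
          // Given origin A and tap point B, return a point on the ray from A through B
          // that lies well outside the screen. Anything larger than the screen
          // diagonal works as the distance.
          static float[] extendPastScreen(float aX, float aY, float bX, float bY) {
              float dirX = bX - aX;
              float dirY = bY - aY;
              float len = (float) Math.sqrt(dirX * dirX + dirY * dirY);
              dirX /= len;                          // unit direction from A toward B
              dirY /= len;
              float farDistance = 2000f;
              return new float[] { aX + dirX * farDistance, aY + dirY * farDistance };
          }
      }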

    Read the article

  • FBO rendering gives different results on the Galaxy S2 and S3

    - by BruceJones
    I'm working on a pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. This proceeds as follows:

      1. Bind texture A to the framebuffer
      2. Draw balls
      3. Bind texture B to the framebuffer
      4. Draw texture A using the fade shader on a fullscreen quad
      5. Bind the screen to the framebuffer
      6. Draw texture B using the normal textured-quad shader

    Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. See below for the fade shader.

    Fade Shader

      private final String fragmentShaderCode =
          "precision highp float;" +
          "uniform sampler2D u_Texture;" +
          "varying vec2 v_TexCoordinate;" +
          "vec4 color;" +
          "void main(void)" +
          "{" +
          " color = texture2D(u_Texture, v_TexCoordinate);" +
          " color.a *= 0.8;" +
          " gl_FragColor = color;" +
          "}";

    This works fine on the Samsung Galaxy S3/Note 2, but causes a strange effect (it doesn't work) on the Galaxy S2 or Note. See the linked pictures for the Galaxy S3/Note 2 and the Galaxy S2/Note. Can anyone explain the difference?

    Read the article

  • System hangs at glReadPixels call with GL_TEXTURE_2D_ARRAY for texturing

    - by Roshan
    I am calling glReadPixels after a glDrawArrays call. I am rendering a geometry with a 3D texture on it using the target GL_TEXTURE_2D_ARRAY. My system hangs at the glReadPixels call. When I use GL_TEXTURE_3D as the target, the issue does not occur and it correctly reads the framebuffer contents.

      glReadPixels(0, 0, GetViewportWidth(), GetViewportHeight(), GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)rendered_pixels);

    I am using SNORM textures with GL_BYTE data in the glTexImage3D call, and I am not calling glPixelStorei. Is it because of this? What should the parameters for the glPixelStorei call be?
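
    For reference, the usual pixel-store settings for tightly packed byte data look like the sketch below (shown with LWJGL's GL11 bindings purely as an illustration of the call). The default pack alignment of 4 mainly matters when the row size is not a multiple of 4, and is unlikely to explain a hang by itself:

      import org.lwjgl.opengl.GL11;

      // Pixel-store sketch: request tightly packed rows for both texture upload
      // and framebuffer readback.
      final class PixelStoreSetup {
          static void useTightPacking() {
              GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 1); // before glTexImage3D with GL_BYTE data
              GL11.glPixelStorei(GL11.GL_PACK_ALIGNMENT, 1);   // before glReadPixels into a byte buffer
          }
      }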

    Read the article

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap-generating techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like this, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated. Thanks, Henry.
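
    One common way to sidestep 2D tiling entirely is to sample a 3D noise function at each cube vertex's position after projecting it onto the sphere; neighbouring cube faces then agree automatically and there are no seams. A minimal sketch, where noise3 is a placeholder for whatever 3D noise implementation is used (libnoise's Perlin module, for example), and the octave settings are illustrative:

      // Sketch: seamless planet heightmap by sampling 3D noise on the unit sphere.
      final class PlanetHeight {
          static float heightAt(float cubeX, float cubeY, float cubeZ, float frequency) {
              // Project the cube-face vertex onto the unit sphere first.
              float len = (float) Math.sqrt(cubeX * cubeX + cubeY * cubeY + cubeZ * cubeZ);
              float sx = cubeX / len, sy = cubeY / len, sz = cubeZ / len;

              // Simple fractal sum of octaves; tune octaves/gain/lacunarity to taste.
              float height = 0f, amplitude = 1f, freq = frequency;
              for (int octave = 0; octave < 5; octave++) {
                  height += amplitude * noise3(sx * freq, sy * freq, sz * freq);
                  amplitude *= 0.5f;
                  freq *= 2f;
              }
              return height;
          }

          // Placeholder for a real 3D noise function returning values in [-1, 1].
          private static float noise3(float x, float y, float z) {
              throw new UnsupportedOperationException("plug in a 3D noise implementation");
          }
      }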

    Read the article

  • Will we see a trend of "3d" games coming up in the near future?

    - by Vish
    I've noticed that movies are diving into the world of the 3-dimensional camera. For me it provoked the thought that it is the same feeling people got when they saw a colour movie for the first time: as in the transition from black and white to colour, it is a whole new experience. For the first time we are experiencing the Z (depth) factor, and I really do mean "experiencing". So my question, or maybe not quite a question, is: is there a possibility of a genre of 3D-camera games coming up?

    Read the article

  • Java 2D World question

    - by Munkybunky
    I have a 2D world background made up of a grid of graphics, which I display on screen with a viewport (800x600), and it all works. My question: I have the following code to convert the mouse coordinates to world coordinates, then world coordinates to grid coordinates, then grid coordinates to screen coordinates.

      // Add camerax to the mouse screen co-ords to convert to world co-ords.
      int cursorx_world = (int) camerax + (int) GameInput.mousex;
      // World co-ords / grid size give grid co-ords.
      int cursorx_grid = cursorx_world / blocksize;
      int cursorx_screen = -(int) camerax + (cursorx_grid * blocksize);

    So is there any way I can convert straight from mouse screen co-ords to screen co-ordinates?
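
    If the goal is just the grid-snapped screen position, the three steps above fold into a single expression (a sketch using the same variables as the snippet; the snapping comes from the integer division):

      // Mouse screen co-ords -> grid-snapped screen co-ords in one step.
      int cursorx_screen = ((((int) camerax + (int) GameInput.mousex) / blocksize) * blocksize)
              - (int) camerax;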

    Read the article

  • Need to produce an animated texture of Water where each image tiles in all directions

    - by ProfVersaggi
    I need to produce a 2D 'animated' texture of "water" for a game, in which each image tiles in all directions, much like those produced by the Caustics Generator, but with the power and flexibility of something like Blender. The final result from the Caustics Generator is 32 images that are animated such that when the full 32 images are played in a loop they will loop seamlessly forever. They not only loop in time, but each image also tiles in all directions. This is nice, but it comes in only one flavour, so to speak. I'd like to accomplish the same thing with a Blender-type tool, and I have actually gotten to the point where I generate the X number of images, but they do not tile in all directions, nor are they slightly animated. I've tried Blender texture animations using offsets, but with only limited success. Does anyone know how (or of a tool) to animate textures such that they tile in all four directions? Many thanks in advance ....

    Read the article

  • Adding a short delay between bullets

    - by Sun
    I'm having some trouble simulating bullets in my 2D shooter. I want mechanics similar to Megaman, where the user can hold down the shoot button and a continuous stream of bullets is fired, but with a slight delay. Currently, when the user fires a bullet in my game I get an almost laser-like effect. Below is a screenshot of some bullets being fired while running and jumping. In my update method I have the following:

      if (gc.getInput().isKeyDown(Input.KEY_SPACE)) {
          bullets.add(new Bullet(player.getPos().getX() + 30, player.getPos().getY() + 17));
      }

    Then I simply iterate through the array list, increasing each bullet's x value on each update. Moreover, pressing the shoot button (space bar) creates multiple bullets instead of just one, even though I am adding only one new bullet to my array list. What would be the best way to solve this problem?
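
    One common fix is a cooldown: accumulate the elapsed milliseconds that Slick2D passes into update() and only spawn a bullet when a minimum gap has passed. A sketch reusing the names from the snippet above; FIRE_DELAY_MS is an illustrative value:

      private static final int FIRE_DELAY_MS = 250;   // minimum gap between bullets
      private int timeSinceLastShot = FIRE_DELAY_MS;  // allow an immediate first shot

      public void update(GameContainer gc, int delta) {
          timeSinceLastShot += delta;
          if (gc.getInput().isKeyDown(Input.KEY_SPACE) && timeSinceLastShot >= FIRE_DELAY_MS) {
              bullets.add(new Bullet(player.getPos().getX() + 30, player.getPos().getY() + 17));
              timeSinceLastShot = 0;
          }
      }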

    Read the article

  • Are there any reasons to use Legacy (2.X) OpenGL?

    - by user27886
    The benefits of the modern OpenGL 3.x and 4.x APIs are well documented, but I'm wondering if there are ANY benefits to sticking with the old OpenGL, or if learning OpenGL 2.x is a complete waste of time now no matter what. In particular, I've wondered whether using the OpenGL 2.x API is appropriate if the target platform has graphics hardware capable of only up to OpenGL 2.x. Would a driver update on said target platform allow programs compiled using the modern OpenGL APIs to be released on this old platform? If they both work, which would be faster? Thanks

    Read the article

  • What is involved with writing a lobby server?

    - by Kira
    So I'm writing a Chess matchmaking system based on a lobby view with gaming rooms, general chat, etc. So far I have a working prototype, but I have big doubts regarding some things I did with the server. Writing a gaming lobby server is a new programming experience for me, so I don't have a clear or precise programming model for it. I also couldn't find a paper that describes how it should work. I ordered "Java Network Programming, 3rd edition" from Amazon and am still waiting for shipment; hopefully I'll find some useful examples/information in this book. Meanwhile, I'd like to gather your opinions and see how you would handle some things so I can learn how to write a server correctly. Here are a few questions off the top of my head (more may come):

      - First, let's define what a server does. Its primary function is to hold TCP connections with clients, listen to the events they generate and dispatch them to the other players. But is there more to it than that?
      - Should I use one thread per client? If so, 300 clients = 300 threads. Isn't that too much? What hardware is needed to support that, and roughly how much bandwidth does a lobby consume?
      - What kind of data structure should be used to hold the clients' sockets? How do you protect it from concurrent modification (e.g. a player enters or exits the lobby) while iterating through it to dispatch an event, without hurting throughput? Is ConcurrentHashMap the correct answer here, or are there some techniques I should know?
      - When a user enters the lobby, what mechanism would you use to transfer the state of the lobby to him? And while this is happening, where do the other events bubble up?

    Screenshot: http://imageshack.us/photo/my-images/695/sansrewyh.png/
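
    On the ConcurrentHashMap point, here is a minimal sketch of one workable shape (class and method names are illustrative, not a prescribed design). Its iterators are weakly consistent, so a broadcast can run while players join or leave without throwing ConcurrentModificationException:

      import java.io.IOException;
      import java.io.OutputStream;
      import java.net.Socket;
      import java.util.concurrent.ConcurrentHashMap;

      // Lobby registry that can be iterated safely while clients join and leave.
      class LobbyRegistry {
          private final ConcurrentHashMap<String, Socket> clients = new ConcurrentHashMap<>();

          void join(String playerId, Socket socket)  { clients.put(playerId, socket); }
          void leave(String playerId)                { clients.remove(playerId); }

          // Players joining or leaving mid-broadcast simply may or may not see this event.
          void broadcast(byte[] event) {
              for (Socket socket : clients.values()) {
                  try {
                      OutputStream out = socket.getOutputStream();
                      out.write(event);
                      out.flush();
                  } catch (IOException e) {
                      // Drop broken connections lazily.
                  }
              }
          }
      }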

    Read the article

  • Prevent collisions between mobs/NPCs/units piloted by computer AI: how to avoid mobile obstacles?

    - by Arthur Wulf White
    Let's say we have character a starting at point A and character b starting at point B. Character a is headed to point B and character b is headed to point A. There are several simple ways to find the path (I will be using Dijkstra). The question is, how do I take preventative action in the code to stop the two from colliding with one another?

    Case 2: characters a and b start from the same point at different times. Character b starts later and is the faster of the two. How do I make character b walk around character a without going through it?

    Case 3: let's say we have m such characters on each side and there is sufficient room to pass through without the characters overlapping one another. How do I stop the two groups of characters from "walking on top of one another" and allow them to pass around one another in a natural, organic way?

    A correct answer would be any algorithm that, given the path to the destination and a list of mobile objects that block the path, finds an alternative path or stops, without stopping all units when there is sufficient room to traverse.
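
    One simple grid-level approach that fits the description above is to let each unit reserve the tile it is about to enter and wait or re-plan when the reservation fails. A minimal sketch; the class and method names are illustrative, the Dijkstra path is assumed to already exist, and the replanning hook is left out:

      import java.awt.Point;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      // Shared set of tiles currently claimed by some unit.
      class TileReservations {
          private final Set<Point> reserved = new HashSet<>();

          boolean tryEnter(Point tile)  { return reserved.add(new Point(tile)); }
          void    release(Point tile)   { reserved.remove(tile); }
      }

      class Unit {
          private final TileReservations reservations;
          private final List<Point> path;   // remaining tiles from Dijkstra
          private Point current;

          Unit(TileReservations r, List<Point> path, Point start) {
              this.reservations = r;
              this.path = path;
              this.current = start;
              reservations.tryEnter(start);
          }

          void step() {
              if (path.isEmpty()) return;
              Point next = path.get(0);
              if (reservations.tryEnter(next)) {    // tile is free: move in
                  reservations.release(current);
                  current = next;
                  path.remove(0);
              } else {
                  // Blocked by another unit: wait this tick, or ask the pathfinder
                  // for a detour around the occupied tile (replanning not shown).
              }
          }
      }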

    Read the article

  • Problem with Update(GameTime) Methods and Pause implementation

    - by Adam
    I have the pause function implemented and it works correctly, in that it dims the player screen and stops updating the gameplay. The problem is that GameTime continues to increase while it is paused, so my method that checks gameTime versus previousSpawnTime before spawning another enemy gets messed up, and if the game is paused too long it is noticeable that the next enemy draws far too early. Here is my code for the enemy update.

      private void UpdateEnemies(GameTime gameTime)
      {
          // Spawn a new enemy every 1.5 seconds
          if (gameTime.TotalGameTime - previousSpawnTime > enemySpawnTime)
          {
              previousSpawnTime = gameTime.TotalGameTime;

              // Add an Enemy
              AddEnemy();
          }
          ...

    I also have other methods that depend on gameTime. I've tried getting the total pause time and subtracting that from the total game time, but I can't seem to get it to work correctly, if that is the way I should go about solving this. If you need to see any other code, let me know. Thank you.
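
    The subtract-the-paused-time idea mentioned above does work; the main bookkeeping is to accumulate the paused duration in one place and derive an "effective" game time from it, which every spawn check then uses instead of the raw total. Here is a minimal sketch of that bookkeeping, written as plain Java rather than XNA, with illustrative names:

      // Effective time stops advancing while the game is paused.
      class GameplayClock {
          private long totalPausedNanos = 0;
          private long pauseStartedAt   = -1;

          void pause(long nowNanos)  { if (pauseStartedAt < 0) pauseStartedAt = nowNanos; }

          void resume(long nowNanos) {
              if (pauseStartedAt >= 0) {
                  totalPausedNanos += nowNanos - pauseStartedAt;
                  pauseStartedAt = -1;
              }
          }

          // Time that only advances while the game is unpaused.
          long effectiveNanos(long nowNanos) {
              long paused = totalPausedNanos
                      + (pauseStartedAt >= 0 ? nowNanos - pauseStartedAt : 0);
              return nowNanos - paused;
          }
      }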

    Read the article

  • Rotating a quad around its center

    - by Trixmix
    How can you rotate a quad around its center? This is what I'm trying to do, but it isn't working:

      GL11.glTranslatef(x - getWidth() / 2, y - getHeight() / 2, 0);
      GL11.glRotatef(30, 0.0f, 0.0f, 1.0f);
      GL11.glTranslatef(x + getWidth() / 2, y + getHeight() / 2, 0);
      DRAW

    My main problem is that it renders the quad off the screen. Draw code:

      GL11.glBegin(GL11.GL_QUADS);
      {
          GL11.glTexCoord2f(0, 0);
          GL11.glVertex2f(0, 0);
          GL11.glTexCoord2f(0, getTexture().getHeight());
          GL11.glVertex2f(0, height);
          GL11.glTexCoord2f(getTexture().getWidth(), getTexture().getHeight());
          GL11.glVertex2f(width, height);
          GL11.glTexCoord2f(getTexture().getWidth(), 0);
          GL11.glVertex2f(width, 0);
      }
      GL11.glEnd();
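
    For reference, one common fix is to move the pivot to the quad's centre first, rotate, and then step back by half the quad's size (not by x/y again). A sketch reusing the variables from the question, with the draw code left unchanged:

      GL11.glPushMatrix();
      GL11.glTranslatef(x + getWidth() / 2f, y + getHeight() / 2f, 0f); // pivot = quad centre
      GL11.glRotatef(30f, 0f, 0f, 1f);
      GL11.glTranslatef(-getWidth() / 2f, -getHeight() / 2f, 0f);       // back to the top-left corner
      // ... existing GL_QUADS draw code, with vertices from (0, 0) to (width, height) ...
      GL11.glPopMatrix();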

    Read the article

  • game engine done, ideas missing

    - by Thoms
    I read in many places how people have these GREAT ideas but are not able to program them themselves. I have quite the opposite problem. I have developed a game engine, a level editor, embedded the Lua scripting language, I have even made a wrapper for Android, and it all works well. But I have no good ideas about how to proceed with actual levels. The engine itself is very generic and can be used for many game concepts, but I just cannot think of anything useful. Do you have any thoughts on how to proceed? Where should I seek ideas? Whom should I ask? I am sorry if this question is a duplicate.

    Read the article

  • Why won't LibGDX's main class initialize in the Android launcher?

    - by BluFire
    So I was searching for different approaches that could suit my programming style and came across LibGDX. Naturally I looked at the tutorial. As I was doing it, I followed the steps word for word, except for naming the classes. In the end, I was able to create the desktop launcher for the game but not the Android launcher. The following is my error:

      Cannot instantiate the type Game

    (Game is the name of the class.) I got the tutorial from http://steigert.blogspot.com.au/2012/02/1-libgdx-tutorial-introduction.html The link in the tutorial is the original, but it uses JOGL instead of LWJGL.
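
    For what it's worth, libgdx's com.badlogic.gdx.Game is an abstract class, so a launcher cannot do new Game() directly; the usual pattern is for the launchers to construct a concrete subclass of it (and if your own class is also called Game, the compiler may be resolving the libgdx one instead of yours). A minimal sketch with illustrative names; the Android launcher would then pass an instance of this subclass to its initialize(...) call:

      import com.badlogic.gdx.Game;

      // The class handed to the desktop and Android launchers should be a concrete
      // subclass of libgdx's abstract Game (or any ApplicationListener).
      public class MyGame extends Game {
          @Override
          public void create() {
              // set up the first Screen here, e.g. setScreen(new MenuScreen(this));
          }
      }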

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world: draw all opaque objects front-to-back, and draw all alpha-blended objects back-to-front. Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done with regard to batching. I see one possibility to batch while still adhering to the above two rules: opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, and state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. However, non-opaque objects, those that require alpha blending at least, must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?
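
    One way the trade-off above is often expressed in code is a per-draw sort key: opaque draws sort primarily by state and only secondarily by depth, so batching wins without abandoning front-to-back entirely, while blended draws sort strictly by depth. A minimal sketch, illustrative rather than taken from any particular engine:

      final class DrawKey {
          // Opaque: state id dominates, quantized depth breaks ties front-to-back.
          static long opaqueKey(int stateId, float normalizedDepth) {
              return ((long) stateId << 32) | quantizeDepth(normalizedDepth);
          }

          // Blended: strictly back-to-front, regardless of state.
          static long blendedKey(float normalizedDepth) {
              return ~quantizeDepth(normalizedDepth) & 0xFFFFFFFFL;
          }

          // Assumes depth is already normalized to [0, 1] (e.g. view-space z / far plane).
          private static long quantizeDepth(float normalizedDepth) {
              float clamped = Math.max(0f, Math.min(1f, normalizedDepth));
              return (long) (clamped * 4294967295.0f) & 0xFFFFFFFFL;
          }
      }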

    Read the article

  • XNA GameTime TotalGameTime slower than real time

    - by robasaurus
    I have set up an empty test project consisting of a System.Diagnostics.Stopwatch and this in the draw method:

      spriteBatch.DrawString(font, gameTime.TotalGameTime.TotalSeconds.ToString(), new Vector2(100, 100), Color.White);
      spriteBatch.DrawString(font, stopwatch.Elapsed.TotalSeconds.ToString(), new Vector2(100, 200), Color.White);

    The GameTime.TotalGameTime displayed is slower than the stopwatch (by about 5 seconds per minute), even though GameTime.IsRunningSlowly is always false. Why is this? The reason this is an issue is that I have a server which uses a Stopwatch, and it runs faster than my client game. For instance, my client notifies the server it has dropped a mine which explodes in one minute. Because the stopwatch is faster, the server state explodes the mine before the client does and they are out of sync. I don't want to have to notify the client when the server explodes it, as this would use unnecessary bandwidth.

    Read the article

  • Material System

    - by Towelie
    I'm designing a material/shader system (the target API is DX10+, and maybe OpenGL 3+ later; for now only DX10). I know there have been a lot of topics about this, but I can't find what I need. I don't want to do any kind of compilation/parsing of scripts at run time. So there is some artist-created material, written in some analog of Cg. It is compiled to HLSL code, and after that to the final shader. Also there are some hard-coded constant buffers, like:

      cbuffer EveryFrameChanging
      {
          float4x4 matView;
          float time;
          float delta;
      }

    Shaders use shared constant buffers to get parameters. For each mesh in the scene, we determine what it needs and what it can provide (normals, binormals, etc.) and find the corresponding permutation of the shader, or calculate the missing parts. Also, during the build we calculate render states and a permutation hash for each shader, which is later used for sorting, or we even assign it an ID from 0 to ShaderCount without gaps for sorting. The final shader has only one technique and one pass. After that, each mesh is assigned a shader and is ready to render. Some pseudo code:

      SetConstantBuffer(ConstantBuffer::PerFrame);
      foreach (shader in FinalShaders)
          SetConstantBuffer(ConstantBuffer::PerShader, shader);
          SetRenderState(shader);
          foreach (mesh in shader.GetAllMeshes)
              SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
              SetBuffers(mesh);
              Draw();

      class FinalShader
      {
      public:
          UUID m_ID;
          RenderState m_RenderState;
          CBufferBindings m_BufferBindings;
      }

    But I have no idea how to create this Cg-like language, and do I really need it?

    Read the article

  • Algorithm for procedural city generation?

    - by Zove Games
    I am planning on making a (simple) procedural city generator in Java. I need ideas on what algorithm to use for the layout, and for the actual buildings. The city will mostly have skyscrapers, nothing really complex. For the layout I already have a simple algorithm implemented:

      1. Create a Map with java.awt.Point keys and Integer values.
      2. Fill it with all the points in the city's bounds, with the value -1 (unassigned).
      3. Shuffle the map and assign the first 10 of the keys IDs (from 1-10).
      4. Loop until all points have IDs: loop through all points, assigning each point next to an assigned point the ID of that neighbour; if 2 or more assigned points border the point, randomly choose which ID it gets.
      5. You will end up with 10 random regions. Make roads bordering these regions.
      6. Fill the inside of each region with a randomly spaced and randomly rotated grid.

    PROBLEM: this is not the fastest way to do it. What algorithm should I use for the layout? And what should I use to design each building? I don't even know how I'm going to do that yet (fractals maybe). I just need some ideas, not actual code.
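
    As one point of comparison for the layout step (an idea rather than a prescription): the iterative region-growing loop above can be replaced by a single nearest-seed sweep, which produces a similar partition into 10 regions in one pass (a simple Voronoi diagram over the grid). A minimal sketch, with grid size and seed count as illustrative parameters:

      import java.awt.Point;
      import java.util.Random;

      final class RegionLayout {
          // Assign every cell to its nearest seed; roads then follow region borders.
          static int[][] assignRegions(int width, int height, int seedCount, long seed) {
              Random rng = new Random(seed);
              Point[] seeds = new Point[seedCount];
              for (int i = 0; i < seedCount; i++) {
                  seeds[i] = new Point(rng.nextInt(width), rng.nextInt(height));
              }

              int[][] regionId = new int[width][height];
              for (int x = 0; x < width; x++) {
                  for (int y = 0; y < height; y++) {
                      int best = 0;
                      long bestDist = Long.MAX_VALUE;
                      for (int i = 0; i < seedCount; i++) {
                          long dx = x - seeds[i].x, dy = y - seeds[i].y;
                          long d = dx * dx + dy * dy;
                          if (d < bestDist) { bestDist = d; best = i; }
                      }
                      regionId[x][y] = best;
                  }
              }
              return regionId;
          }
      }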

    Read the article

  • Sorting for 2D Drawing

    - by Nexian
    Okie, looked through quite a few similar questions but still feel the need to ask mine specifically (I know, crazy). Anyhoo: I am drawing a game in 2D (isometric). My objects have their own arrays (i.e. Tiles[], Objects[], Particles[], etc.). I want to have a draw[] array to hold anything that will be drawn. Because it is 2D, I assume I must prioritise depth over any other sorting or things will look weird. My game is turn-based, so tiles and objects won't be changing position every frame. However, particles probably will. So I am thinking I can populate the draw[] array (probably a vector?) with what is on-screen and have it add/remove object, tile and particle references when I pan the screen or when a tile or object is specifically moved. No idea how often I'm going to have to update it for particles right now. I want to do this because my game may have many thousands of objects and I want to iterate through as few as possible when drawing. I plan to give each element a depth value to sort by. So, my questions: does the above method sound like a good way to deal with the actual drawing? What is the most efficient way to sort a vector? Most of the time it won't require efficiency, but for panning the screen it will, and I imagine if I have many particles on screen moving across multiple tiles it may happen quite often. For reference, my screen will be drawing about 2,800 objects at any one time. When panning, it will be adding/removing about ~200 elements every second, and each new element will need adding in the correct location based on depth.
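
    A sketch of the two operations described above: a full depth sort when the visible set is rebuilt, and a binary-search insert for the ~200 elements per second that enter during panning. Drawable and getDepth() are illustrative names:

      import java.util.ArrayList;
      import java.util.Collections;
      import java.util.Comparator;
      import java.util.List;

      interface Drawable { float getDepth(); }

      final class DrawList {
          private static final Comparator<Drawable> BY_DEPTH =
                  Comparator.comparingDouble(Drawable::getDepth);

          private final List<Drawable> items = new ArrayList<>();

          void rebuildAndSort(List<Drawable> visible) {
              items.clear();
              items.addAll(visible);
              items.sort(BY_DEPTH);                  // O(n log n), fine for ~2,800 items
          }

          void insertSorted(Drawable d) {
              int idx = Collections.binarySearch(items, d, BY_DEPTH);
              if (idx < 0) idx = -(idx + 1);         // insertion point from binarySearch
              items.add(idx, d);                     // O(n) shift, cheap at this size
          }
      }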

    Read the article

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice.

    Hybrid rendering

    Now I know that ray-tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray-tracing) is a well known theory. However I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" object's shader. This would allow you to only ray-trace exactly what you need to. Could this be fast enough for real-time effects?

    Rendered physics

    I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels across a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would draw a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. Afaik traditional OpenGL "picking" is a similar method. This could be extended in a few ways:

      - For larger (non-bullet) objects you render a larger portion of the screen.
      - If you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works as the traditional slow-moving iterative physics test as well.
      - You could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So are these techniques in use anywhere, and/or are they practical at all?

    Read the article

  • Unity , libgdx, or something else to develop my first game for Android?

    - by capcom
    I want to start by saying that I absolutely love Unity (even more when I team it up with Blender). I really want to start developing games for Android, but it seems like Unity poses way too many roadblocks in terms of which devices it supports (and even if it does support them, it doesn't work well on all of them). I've been looking around for alternatives, and found something called libgdx. Well, it's nothing like Unity unfortunately, but at least it seems like I may be able to reach a larger audience in the market. I'd like to start by making 2D games, but with 3D graphics (say, imported from Blender). I can do this very easily in Unity, and it seems like it should be alright with libgdx too. But I really want to know if ditching Unity is a smart idea, considering how comfortable I am with it already, and how much I like it. Finally, is libgdx something you would recommend considering my requirements/situation? BTW, I am quite familiar with Eclipse too. Many thanks. Feel free to request further details.

    Read the article

  • infer half vector length in BRDF

    - by cician
    It's my first question on Stack. Is it possible to infer the length of the half-angle vector for specular lighting from N·L and N·V, without the whole view and light vectors? I may be completely off-track, but I have this gut feeling it's possible... Why? I'm working on a skin shader and I'm already doing one texture lookup with N·L + N·E and one texture lookup for specular with N·H + N·V. The latter could be turned into an N·L + N·E lookup if only I had the half-vector length. Doing so could simplify the shader a bit and move some operations into the pre-computed lookup texture. It would make a huge difference, since I'm trying to squeeze as much functionality as possible into a single-pass mobile version, so instruction count matters. Thanks.
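
    For reference, with unit-length L, V and N (the standard BRDF convention), the relevant relations are:

      \[
        H = \frac{L + V}{\lVert L + V \rVert}, \qquad
        \lVert L + V \rVert^{2} = \lVert L \rVert^{2} + \lVert V \rVert^{2} + 2\,(L \cdot V) = 2 + 2\,(L \cdot V)
      \]
      \[
        N \cdot H = \frac{N \cdot L + N \cdot V}{\sqrt{2 + 2\,(L \cdot V)}}
      \]

    So the missing ingredient is L·V: N·L and N·V alone do not pin it down, since L and V can rotate around N independently without changing either dot product, which is why some extra per-pixel quantity would be needed to recover the half-vector length exactly.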

    Read the article
