Search Results

Search found 25550 results on 1022 pages for 'mere development'.


  • Postgres: clear entire database before re-creating / re-populating from bash script

    - by Hoff
    Hi folks, I'm writing a shell script (it will become a cron job) that will: 1) dump my production database, and 2) import the dump into my development database. Between steps 1 and 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this:

        #!/bin/bash
        time=`date '+%Y-%m-%d'`

        # 1. export (dump) the current production database
        # (note: pg_dump and psql take the database name as an argument; -U is the user)
        pg_dump production_db_name > /backup/dir/backup-${time}.sql

        # missing step: drop all tables from the development database so it can be
        # re-populated. One straightforward way, assuming the role running this
        # script may drop/create databases, is to drop and re-create the whole db:
        dropdb development_db_name
        createdb development_db_name

        # 2. load the backup into the development database
        psql development_db_name < /backup/dir/backup-${time}.sql

    Many thanks in advance! Martin

    Read the article

  • How to code Time Stop or Bullet Time in a game?

    - by David Miler
    I am developing a single-player RPG platformer in XNA 4.0. I would like to add an ability that makes time "stop" or slow down, with only the player character moving at the original speed (similar to the Time Stop spell from the Baldur's Gate series). I am not looking for an exact implementation, rather some general ideas and design patterns. EDIT: Thanks all for the great input. I have come up with the following solution:

        public void Update(GameTime gameTime)
        {
            GameTime newGameTime = new GameTime(
                gameTime.TotalGameTime,
                new TimeSpan(gameTime.ElapsedGameTime.Ticks / DESIRED_TIME_MODIFIER));
            gameTime = newGameTime;
            // ... update as usual with the slowed-down time ...
        }

    or something along these lines. This way I can set a different time for the player component and a different one for the rest. It certainly is not universal enough to work for a game where warping time like this would be a central element, but I hope it should work for this case. I kind of dislike the fact that it litters the main Update loop, but it certainly is the easiest way to implement it. I guess that is essentially the same as tesselode suggested, so I'm going to give him the green tick :)
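
    A variation on the same idea (a minimal sketch, not from the thread; TimeGroup and the names in it are illustrative): keep one time-scale factor per group of entities and hand each group a scaled copy of the real GameTime, so the scaling lives in one place instead of being repeated in the Update loop.

        // Scales elapsed game time for one group of entities (e.g. world vs. player).
        public sealed class TimeGroup
        {
            public float Scale = 1.0f; // 1 = normal speed, 0 = frozen

            // Returns a copy of the real GameTime with the elapsed span scaled.
            public GameTime Scaled(GameTime real)
            {
                long ticks = (long)(real.ElapsedGameTime.Ticks * Scale);
                return new GameTime(real.TotalGameTime, new TimeSpan(ticks));
            }
        }

        // In the main loop, during "time stop":
        //   worldTime.Scale = 0.1f; playerTime.Scale = 1.0f;
        //   foreach (var e in worldEntities) e.Update(worldTime.Scaled(gameTime));
        //   player.Update(playerTime.Scaled(gameTime));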

    Read the article

  • using heightmap to simulate 3d in an isometric 2d game

    - by VaTTeRGeR
    I saw a video of a 2.5D engine that used heightmaps to do z-buffering. Is this hard to do? I have more or less no experience with OpenGL (LWJGL) and that kind of thing. I could imagine that you compare each pixel and its depth map against the depth map of the already-drawn background, to determine whether it gets drawn or not. Are there any tutorials on how to do this? Is this a common problem? It would already be awesome if somebody knew the names of the OpenGL commands, so that I can go through some general tutorials on them. Greets! (There is a great 2.5D engine video with the needed effect; see its last 30 seconds.) Edit, just realised that my question wasn't expressed quite clearly: how can I tell OpenGL to compare the existing depth buffer with a grayscale texture, to determine whether a pixel should get drawn or not?

    Read the article

  • Material, Pass, Technique and shaders

    - by Papi75
    I'm trying to make a clean and advanced Material class for the rendering of my game. Here is my architecture:

        class Material
        {
            void sendToShader()
            {
                program->sendUniform( nameInShader, valueInMaterialOrOther );
            }

        private:
            Blend blendmode;          ///< Alpha, Add, Multiply, ...
            Color ambient;
            Color diffuse;
            Color specular;
            DrawingMode drawingMode;  ///< Lines, Triangles, ...
            Program* program;
            std::map<std::string, TexturePacket> textures; ///< TexturePacket = { Texture*, vec2 offset, vec2 scale }
        };

    How can I handle the link between the Shader and the Material (the sendToShader method)? And if the user wants to send additional information to the shader (like elapsed time), how can I allow that, given that the user can't edit the Material class? Thanks!
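
    One pattern for the second part (a hedged sketch of the idea; the pattern is language-agnostic and is shown in C# here, and ShaderProgram.SendUniform plus the other names are invented for illustration): let callers register uniform providers by name, and re-evaluate them each time the material binds.

        // Holds the material's fixed values plus caller-registered uniform providers.
        public sealed class Material
        {
            private readonly Dictionary<string, Func<object>> extraUniforms =
                new Dictionary<string, Func<object>>();

            // Callers register e.g. material.SetUniform("time", () => clock.Elapsed);
            public void SetUniform(string name, Func<object> provider)
            {
                extraUniforms[name] = provider;
            }

            public void SendToShader(ShaderProgram program)
            {
                // ... send blend mode, colors, textures here ...
                foreach (var kv in extraUniforms)
                    program.SendUniform(kv.Key, kv.Value()); // re-evaluated on every bind
            }
        }

    This keeps Material closed for modification while still letting user code feed it per-frame values such as elapsed time.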

    Read the article

  • Projectiles in tile mapped turn-based tactics game?

    - by Petteri Hietavirta
    I am planning to make a Laser Squad clone and I think I have most of the aspects covered. But the major headache is the projectiles shot or thrown. The easy way would be to figure out the probability of a hit and just mark miss/hit. But I want the projectile to be able to hit something else eventually (collateral damage!). Currently everything is a flat 2D tile map, and there would be full-height (wall, door) and half-height (desk, chair, window) obstacles. My idea is to draw an imaginary line from the shooter to the target, add some horizontal and vertical error based on the player's skills, and then trace the modified path until it hits something. This is basically what the original Laser Squad seems to do. Can you recommend any algorithms or other approaches for this?
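
    A minimal sketch of that idea (not from the post; the grid walk, the error model, and the isBlocking callback are assumptions, with XNA's integer Point used for tile coordinates): perturb the aim point by a skill-based error, then step along the line and stop at the first blocking tile.

        // Walks the line from shooter to a perturbed aim point; returns the cell hit, if any.
        public static Point? TraceShot(Point shooter, Point target, float skill,
                                       Func<int, int, bool> isBlocking, Random rng)
        {
            // Lower skill => larger spread, in tiles. The error model is illustrative.
            float spread = (1.0f - skill) * 3.0f;
            float aimX = target.X + (float)(rng.NextDouble() * 2 - 1) * spread;
            float aimY = target.Y + (float)(rng.NextDouble() * 2 - 1) * spread;

            float dx = aimX - shooter.X, dy = aimY - shooter.Y;
            int steps = (int)Math.Ceiling(Math.Max(Math.Abs(dx), Math.Abs(dy)));
            for (int i = 1; i <= steps; i++)
            {
                int x = (int)Math.Round(shooter.X + dx * i / steps);
                int y = (int)Math.Round(shooter.Y + dy * i / steps);
                if (isBlocking(x, y))
                    return new Point(x, y); // first obstacle on the path (collateral damage)
            }
            return null; // flew past everything along the traced path
        }

    Half-height obstacles fit in naturally: make isBlocking return true for them only some of the time (or only beyond a certain range), so shots can clear a desk but never a wall.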

    Read the article

  • High CPU usage on Pong clone

    - by max
    I just made my first game, a clone of Pong, using OpenGL and C++. But it's using ~50% of the CPU, which I guess is very high for a game like this. How can I improve that? Can you please look at my code and tell me what I am doing wrong? Any feedback is welcome. http://pastebin.com/L5zE3axh Also, it would be extremely helpful if you could give some general pointers on how to develop games in OpenGL efficiently. Thanks in advance!
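
    The usual culprit in a game this small is a main loop that never yields, redrawing as fast as the CPU allows. Enabling vsync in the windowing library is the simplest fix; the hand-rolled alternative looks something like this (a hedged, language-agnostic sketch written in C#; running, UpdateGame, and RenderFrame are stand-ins):

        // Caps the loop at roughly 60 FPS by sleeping away the leftover frame time.
        var frame = TimeSpan.FromSeconds(1.0 / 60.0);
        var clock = System.Diagnostics.Stopwatch.StartNew();
        while (running)
        {
            var start = clock.Elapsed;
            UpdateGame();
            RenderFrame();
            var remaining = frame - (clock.Elapsed - start);
            if (remaining > TimeSpan.Zero)
                System.Threading.Thread.Sleep(remaining); // yield the CPU instead of spinning
        }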

    Read the article

  • Love2D engine for Lua; What about 3D?

    - by shadowprotocol
    Lua has been really awesome to learn, it's so simple. I really enjoy scripting languages, and I had an equally enjoyable time learning Python. The Love engine, http://love2d.org/, is really awesome, but I'm looking for something that can handle 3D as well. Is there anything that accommodates 3D in Lua? I'm still intrigued by the particle system of LOVE anyway and may just turn my idea into a 2D project with Particle lighting :) EDIT: I removed comments about Python - I want this to be a Lua topic. Thanks

    Read the article

  • Water Simulation in LIBGDX [on hold]

    - by Noah Huppert
    I am doing some R&D for a game and am now tackling the topic of water.

    The goal: make water that can flow (aka you can have an origin point that water shoots out from, or a downhill slope), and make it so water splashes when an object hits it. Aka: an actual physics water sim.

    The current way I know how to do it: I know how to create a shader that makes an object look like water by making waves. Combined with that, you can check whether an object is colliding and apply an upward force to simulate buoyancy.

    What is wrong with that way: the water does not flow, and there are no splashes.

    A possible solution: particles that are fairly large and interact with each other to simulate water. A possible drawback: performance.

    Question: is there a better way to do water, or is using particles as described the only way?

    Read the article

  • Render graphics using Doubles in Graphics2D

    - by thedeadlybutter
    Currently, I have a JFrame for my game to render in, and I'm using Graphics2D for drawing (the game's graphics are fairly simple 2D sprites). However, my delta variable is a double, and all of the Graphics2D methods (and Graphics) use int. I tried to type-cast the delta to an int, but it just rounds down to 0. So my question is: how can I render graphics using Graphics2D in Java with coordinates that are doubles? Can I convert it to work with Graphics2D if there is no built-in way? Or is there a graphics library that supports doubles for coordinates?
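
    Two notes on this. First, Graphics2D can already consume double precision through the Shape API (e.g. g2d.draw(new java.awt.geom.Rectangle2D.Double(x, y, w, h))) or an AffineTransform. Second, the "casts to 0" symptom suggests the tiny per-frame delta is being truncated before it can accumulate; the usual pattern is to keep positions as doubles and round only at draw time. A sketch of that pattern (shown in C#; it maps one-to-one to Java):

        // Keep the model in doubles; convert to ints only when handing
        // coordinates to an int-based drawing API.
        double x = 0, y = 0;           // sprite position
        double vx = 37.5, vy = 12.25;  // pixels per second

        void Update(double dtSeconds)
        {
            x += vx * dtSeconds;       // tiny increments accumulate correctly here
            y += vy * dtSeconds;
        }

        int DrawX() { return (int)Math.Round(x); } // round once, at render time
        int DrawY() { return (int)Math.Round(y); }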

    Read the article

  • Hidden Loading with UDK

    - by CyrusFiredawn
    I was wondering, how would I go about creating hidden loading scenes with UDK? For example, a character walks into an elevator, the elevator fakes movement, whilst the previous floor is destroyed and the next floor is loaded on top. I assume it's possible with UDK, since it's supposedly rather flexible, but I've never used UDK before (I decided to ask this question first, to save me learning it all and then finding out it isn't possible and giving up). So yeah, is hiding the loading process possible? And if so, how would I go about doing it?

    Read the article

  • Unity3d web player fails to load textures

    - by José Franco
    I'm having a problem with the Unity3d Web Player. I have developed a project and successfully deployed it in a web app. It works with absolutely no problem on my PC. This app is to be installed on two identical machines; I have installed it on both, and it only works properly on one. The issue is that on one computer it fails to properly load the models and textures, so the game runs, but instead of the models I can only see black rectangles on a blue background. It has the same problem in all browsers, and I get no errors either from the player or from JavaScript. The only difference between these computers is that the one that has the problem is running Windows 8.1 and the other one Windows 8 only. Could this be the cause of the issue? It works fine on my computer with Windows 8.1; however, both of the other computers have specs that are significantly lower than mine. I have already searched everywhere, and it usually seems to have to do with the individual games; however, I think it may have to do with the computer itself, because it runs properly on the other two. The specs of the computers I'm installing the app on are as follows: Intel Celeron 1.40 GHz, 2 GB RAM, Intel HD Graphics. If anybody could point me in the right direction I would be very grateful. I forgot to mention, I'm running Unity Web Player 4.3.5, and the version on the other two computers is 4.5.0.

    Read the article

  • Biome Transition in a Grid & Borderless World

    - by API-Beast
    I have a universe: a list of "Systems", each with its own center, type and radius. A small part of such a universe could look like this. Systems:

    - can be very close to a different system, e.g. overlap
    - can be inside another, much bigger system
    - can be very far away from any other system
    - spawn system-specific entities and particles inside the system radius
    - have some properties like background color

    So far so good. However, the player can fly around freely, inside and outside of systems, in real time. How do I interpolate and determine things like the background color now, depending on camera position? E.g. if you are halfway between a green and a red system you should see a background halfway between red and green, or if you are inside a lilac system near the center and at the border of a green system you should get a mostly lilac background, etc.
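
    One standard approach (a sketch, not from the post; SystemInfo and the linear falloff are assumptions, using XNA-style types): give each system an influence that is 1 at its center and fades to 0 at its radius, then blend all system colors by normalized influence, falling back to a default space color when nothing is in range.

        // Blends system background colors by distance-based influence.
        public static Color BackgroundAt(Vector2 camera, IEnumerable<SystemInfo> systems)
        {
            float totalWeight = 0f;
            Vector3 sum = Vector3.Zero;
            foreach (var s in systems)
            {
                float d = Vector2.Distance(camera, s.Center);
                // 1 at the center, 0 at the radius; swap in a smoother curve to taste.
                float w = MathHelper.Clamp(1f - d / s.Radius, 0f, 1f);
                sum += s.BackgroundColor.ToVector3() * w;
                totalWeight += w;
            }
            if (totalWeight <= 0f)
                return Color.Black;               // far away from every system
            return new Color(sum / totalWeight);  // normalized weighted average
        }

    Deep inside a lilac system its weight dominates, so a bordering green system only tints the result, which matches the behaviour described above.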

    Read the article

  • What is the XACT API?

    - by EddieV223
    I wanted to use DirectMusic in my game, but it's not in the June 2010 SDK, so I thought that I had to use DirectSound. Then I saw the XAudio2.h header in the SDK's include folder and found that XAudio2 is the replacement for DirectSound. Both are low-level. During my research I stumbled across the XACT API, but can't find a good explanation on it. Is XACT to XAudio2 what DirectMusic was to DirectSound? By which I mean, is the XACT API a high-level, easier-to-use API for playing sounds that abstracts away the details of XAudio2? If not, what is it?

    Read the article

  • How to work with scenes in a 2D game

    - by Anearion
    I'm a Java/Android programmer, but I don't have any experience in game programming. I'm already reading proper books, like "Pro Android Games", but my concerns are more about the ideas behind game programming than the techniques themselves. I'm working on a 2D game, something like Cluedo, to give you an idea of the genre. I would like to know how I should handle the "scenes", for example, a room with a desk, TV, windows and a lamp. I need to make some items tappable and others not. Is it common to use one image (invisible to the user) with every different item in a different color, then call the getColor() method on the image? Or use one image as the background, and separate images for all the items? If the latter, how can I set the positioning? And should I use ImageView or ImageButton? I'm sorry if these are really low-quality questions, but as an "outsider" (I'm 23 and still finishing university) it's pretty hard to learn alone.
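
    The first idea is a known technique (color-mask picking), and it works well for irregular shapes. A hedged sketch of the idea (in C# with System.Drawing; on Android the equivalent lookup would be Bitmap.getPixel on a mask scaled to the view): keep an invisible mask image in which each tappable item is filled with a unique flat color, and map the tapped pixel's color to an item id.

        // Maps a tap position to an item id via an invisible color mask.
        public static string ItemAt(System.Drawing.Bitmap mask, int x, int y,
                                    Dictionary<int, string> itemsByArgb)
        {
            if (x < 0 || y < 0 || x >= mask.Width || y >= mask.Height)
                return null;
            int argb = mask.GetPixel(x, y).ToArgb(); // mask must use flat, unantialiased colors
            string item;
            return itemsByArgb.TryGetValue(argb, out item) ? item : null;
        }

        // e.g. itemsByArgb[System.Drawing.Color.Red.ToArgb()] = "lamp";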

    Read the article

  • Calculate the intersection depth between a rectangle and a right triangle

    - by Celarix
    Hi all. I'm working on a 2D platformer built in C#/XNA, and I'm having a lot of problems calculating the intersection depth between a standard rectangle (used for sprites) and a right triangle (used for sloping tiles). Ideally, the rectangle will collide with the solid edges of the triangle, and its bottom-center point will collide with the sloped edge. I've been fighting with this for a couple of days now, and I can't make sense of it. So far, the method detects intersections (somewhat), but it reports wildly wrong depths. How does one properly calculate the depth? Is there something I'm missing? Thanks!
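
    For the sloped edge specifically, a common simplification (a sketch under assumptions: axis-aligned tiles, hypotenuse rising from left to right, XNA Rectangle types) is to evaluate the slope's surface height at the rectangle's bottom-center X; the penetration depth is then how far the bottom sits below that surface.

        // Penetration of a rectangle's bottom-center against a "/"-shaped slope tile.
        public static float SlopeDepth(Rectangle rect, Rectangle tile)
        {
            // How far across the tile the bottom-center is, clamped to [0, 1].
            float t = MathHelper.Clamp(
                (rect.Center.X - tile.Left) / (float)tile.Width, 0f, 1f);

            // The surface rises left-to-right: tile.Bottom at t = 0, tile.Top at t = 1.
            float surfaceY = tile.Bottom - t * tile.Height;

            float depth = rect.Bottom - surfaceY; // > 0 means the rect is inside the slope
            return Math.Max(depth, 0f);
        }

    Resolving the collision is then just moving the rectangle up by the returned depth; the triangle's two solid edges can keep using the ordinary rectangle-vs-rectangle depth.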

    Read the article

  • Setting Krypton Light to Screen Pixels

    - by Adam Jerrett
    So a few days back, I started playing around with Krypton XNA for 2D lighting in my game. I noticed that, in general, spawning a light at (0,0) with Krypton causes the light to appear in pretty much the centre of the game screen. Is there any way to change this, so that a Krypton light's "starting point" at [0,0] would spawn at the top left of the screen and thus follow the standard screen coordinates for position? I ask because currently I'm busy working on my game where my spawn point is [512,512]. With hard-coded values, the closest I've got to the light being "central" to this point is the vector position [12,-20], which makes no sense and is impossible to derive mathematically if I want the light to move with the camera (the position [480,512] maps roughly to [10,-20]). So, is there any way to "normalise" the Krypton lights to use standard screen coordinates? If you can, play around with the demo from the site and see if you can find anything out about it. Documentation on the engine is rather scarce, so it's difficult to find anything relevant to my "pixel-perfect" need. It might also just be something in the code with regards to the matrices that I'm not fully understanding. Any help would be useful. Thanks.
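
    A symptom like this usually means the lighting pass is drawing with a projection whose origin is the screen center (clip-space style) rather than top-left pixels. A hedged sketch of the usual fix (CreateOrthographicOffCenter is standard XNA; the exact property Krypton expects the matrix on is an assumption, so check its docs): build a pixel-space view-projection and hand it to the lighting engine each frame.

        // Maps [0,0]..[width,height] (top-left origin, Y down) to clip space.
        Matrix projection = Matrix.CreateOrthographicOffCenter(
            0, graphics.PreferredBackBufferWidth,
            graphics.PreferredBackBufferHeight, 0, // bottom/top swapped so Y grows downward
            0f, 1f);

        // Follow the camera by translating world coordinates before projecting.
        Matrix view = Matrix.CreateTranslation(-cameraPosition.X, -cameraPosition.Y, 0f);

        // Hypothetical usage; the property name depends on the Krypton version:
        // krypton.Matrix = view * projection;

    With this in place, a light spawned at [512,512] shows up at the same pixel as a sprite drawn there.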

    Read the article

  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models can get removed in the process of optimizing the geometry (degenerate vertices) by 3D graphics software (Blender, ...) when exporting, because a vertex that is reused by multiple triangles only needs to be stored once. (In the current case, 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, ... But the per-vertex data that is collapsed through this vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, which are also collapsed. Does this mean IBOs must not be used with complex shader techniques? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?

    Read the article

  • How to make the Angry Birds "shot arch" dotted line? [duplicate]

    - by unexpected62
    (This question already has an answer here: "Show path of a body of where it should go after linear impulse is applied", 2 answers.) I am making a game that includes 2D projectile flight paths like those of Angry Birds. Angry Birds shows a previous shot as a dotted-line arch, showing the player where that last shot went. Recording that data is simple enough once a shot is fired, but in my game I want to show the arch preemptively, i.e. before the shot. How would I go about calculating this dotted line? The other caveat is that I have wind in my game. How can you determine a projectile's path preemptively when wind will affect it too? This seems like a pretty tough problem. My wind right now just applies a constant force every step of the animation in the direction of the wind flow. I'm using Box2D and AndEngine, if it matters.
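
    Because the wind is a constant force, the preview can run exactly the same integration the live shot will use, just ahead of time. A sketch (in C#, though the asker's stack is Java/AndEngine; the step size and names are illustrative, and wind is expressed as an acceleration, i.e. force divided by mass):

        // Pre-computes the dotted arc by integrating the same forces the shot will feel.
        public static List<Vector2> PreviewArc(Vector2 start, Vector2 velocity,
                                               Vector2 gravity, Vector2 windAccel,
                                               float dt, int steps, int dotEvery)
        {
            var dots = new List<Vector2>();
            Vector2 pos = start, vel = velocity;
            for (int i = 0; i < steps; i++)
            {
                vel += (gravity + windAccel) * dt; // constant wind, matching the game's step
                pos += vel * dt;                   // semi-implicit Euler, as Box2D uses
                if (i % dotEvery == 0)
                    dots.Add(pos);
            }
            return dots;
        }

    As long as dt matches the physics step, the preview dots land where the real body will go; and if the wind ever becomes non-constant, the same loop still works because it just replays the per-step forces.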

    Read the article

  • CW/CCW Rotation of a Vector

    - by user23132
    Suppose I have a vector A, and after an arbitrary rotation I get vector B. I want to apply this same rotation to other vectors as well, but I'm having problems doing that. My idea is to calculate the vector C perpendicular to the plane of A and B (by calculating AxB). This vector C is the axis I'll need to rotate around. To find the angle I used the dot product between A and B; the acos of the dot product returns the smallest angle between A and B, the angle ang. The rotation I need to do is then: rotate ang° around the C axis. The problem is that I don't know whether this rotation is a CW or CCW rotation, since the cos from the dot product does not give me the sign of the angle. There's a trick in 2D (A.x * B.y - A.y * B.x) that you can use to discover whether vector A is to the left/right of vector B, but I don't know how to do this in 3D space. Can anyone help me?
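
    The sign question answers itself if the axis comes from the cross product: by the right-hand rule, rotating by the positive angle around C = AxB always turns A toward B, so no CW/CCW test is needed. A short sketch (XNA-style types; the degenerate-case handling is an assumption):

        // Builds the rotation that turns direction a into direction b,
        // reusable on any other vector afterwards.
        public static Quaternion RotationBetween(Vector3 a, Vector3 b)
        {
            a.Normalize();
            b.Normalize();
            Vector3 axis = Vector3.Cross(a, b);

            // atan2(|AxB|, A.B) is more robust than acos(A.B) near 0 and 180 degrees.
            float angle = (float)Math.Atan2(axis.Length(), Vector3.Dot(a, b));

            if (axis.LengthSquared() < 1e-12f)
            {
                // a and b are parallel or opposite; any axis perpendicular to a works.
                axis = Vector3.Cross(a, Vector3.UnitX);
                if (axis.LengthSquared() < 1e-12f)
                    axis = Vector3.Cross(a, Vector3.UnitY);
            }
            axis.Normalize();
            return Quaternion.CreateFromAxisAngle(axis, angle);
        }

        // Usage: Vector3 rotated = Vector3.Transform(other, RotationBetween(A, B));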

    Read the article

  • FBX Importer - Texture Name

    - by CmasterG
    I have a problem with the FBX SDK. I read in the data for the vertex positions and the UV coordinates. It works fine, but now I want to read, for each polygon, which texture it belongs to, so that I can have models with multiple textures. Can anyone tell me how I can get the texture name (file name) for a polygon? My code to read in vertex positions and UV coordinates is the following:

        int i, j, lPolygonCount = pMesh->GetPolygonCount();
        FbxVector4* lControlPoints = pMesh->GetControlPoints();
        int vertexId = 0;

        for (i = 0; i < lPolygonCount; i++)
        {
            int lPolygonSize = pMesh->GetPolygonSize(i);
            for (j = 0; j < lPolygonSize; j++)
            {
                int lControlPointIndex = pMesh->GetPolygonVertex(i, j);
                FbxVector4 pos = lControlPoints[lControlPointIndex];
                current_model[vertex_index].x = pos.mData[0] - pivot_offset[0];
                current_model[vertex_index].y = pos.mData[1] - pivot_offset[1];
                current_model[vertex_index].z = pos.mData[2] - pivot_offset[2];

                FbxVector4 vertex_normal;
                pMesh->GetPolygonVertexNormal(i, j, vertex_normal);
                current_model[vertex_index].nx = vertex_normal.mData[0];
                current_model[vertex_index].ny = vertex_normal.mData[1];
                current_model[vertex_index].nz = vertex_normal.mData[2];

                // read in UV data
                FbxStringList lUVSetNameList;
                pMesh->GetUVSetNames(lUVSetNameList);

                // get the lUVSetIndex-th UV set (here: the first)
                const char* lUVSetName = lUVSetNameList.GetStringAt(0);
                const FbxGeometryElementUV* lUVElement = pMesh->GetElementUV(lUVSetName);
                if (!lUVElement)
                    continue;

                // only support mapping modes eByPolygonVertex and eByControlPoint
                if (lUVElement->GetMappingMode() != FbxGeometryElement::eByPolygonVertex &&
                    lUVElement->GetMappingMode() != FbxGeometryElement::eByControlPoint)
                    return;

                // index array, which holds the indices referencing the UV data
                const bool lUseIndex = lUVElement->GetReferenceMode() != FbxGeometryElement::eDirect;
                const int lIndexCount = lUseIndex ? lUVElement->GetIndexArray().GetCount() : 0;
                FbxVector2 lUVValue;

                // get the index of the current vertex in the control points array
                int lPolyVertIndex = pMesh->GetPolygonVertex(i, j);

                // the UV index depends on the reference mode
                //int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex;
                int lUVIndex = pMesh->GetTextureUVIndex(i, j);
                lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);

                current_model[vertex_index].tu = (float)lUVValue.mData[0];
                current_model[vertex_index].tv = (float)lUVValue.mData[1];

                vertex_index++;
            }
        }

        float v1[3], v2[3], v3[3];
        v1[0] = current_model[vertex_index - 3].x;
        v1[1] = current_model[vertex_index - 3].y;
        v1[2] = current_model[vertex_index - 3].z;
        v2[0] = current_model[vertex_index - 2].x;
        v2[1] = current_model[vertex_index - 2].y;
        v2[2] = current_model[vertex_index - 2].z;
        v3[0] = current_model[vertex_index - 1].x;
        v3[1] = current_model[vertex_index - 1].y;
        v3[2] = current_model[vertex_index - 1].z;
        collision_model->addTriangle(v1, v2, v3);

    Read the article

  • Detecting walls or floors in pygame

    - by Serial
    I am trying to make bullets bounce off walls, but I can't figure out how to correctly do the collision detection. What I am currently doing is iterating through all the solid blocks, and if the bullet hits the bottom, top or sides, its vector is adjusted accordingly. However, sometimes when I shoot, the bullet doesn't bounce; I think it happens when I shoot at a border between two blocks. Here is the update method for my Bullet class:

        def update(self, dt):
            if self.can_bounce:
                # if the bullet hasn't bounced, find its vector using the mouse click pos and player pos
                speed = -10.
                range = 200
                distance = [self.mouse_x - self.player[0], self.mouse_y - self.player[1]]
                norm = math.sqrt(distance[0] ** 2 + distance[1] ** 2)
                direction = [distance[0] / norm, distance[1] / norm]
                bullet_vector = [direction[0] * speed, direction[1] * speed]
                self.dx = bullet_vector[0]
                self.dy = bullet_vector[1]

            # check each block for collision
            for block in self.game.solid_blocks:
                last = self.rect.copy()
                if self.rect.colliderect(block):
                    # each test checks whether the bullet hit the top, bottom,
                    # left or right side of the block it's colliding with
                    topcheck = self.rect.top < block.rect.bottom and self.rect.top > block.rect.top
                    bottomcheck = self.rect.bottom > block.rect.top and self.rect.bottom < block.rect.bottom
                    rightcheck = self.rect.right > block.rect.left and self.rect.right < block.rect.right
                    leftcheck = self.rect.left < block.rect.right and self.rect.left > block.rect.left
                    if self.can_bounce:
                        if topcheck:
                            self.rect = last
                            self.dy *= -1
                            self.can_bounce = False
                            print "top"
                        if bottomcheck:
                            self.rect = last
                            self.dy *= -1  # bottom check
                            self.can_bounce = False
                            print "bottom"
                        if rightcheck:
                            self.rect = last
                            self.dx *= -1  # right check
                            self.can_bounce = False
                            print "right"
                        if leftcheck:
                            self.rect = last
                            self.dx *= -1  # left check
                            self.can_bounce = False
                            print "left"
                    else:
                        # if it has already bounced and is colliding again, kill it
                        self.kill()

            for enemy in self.game.enemies_list:
                if self.rect.colliderect(enemy):
                    self.kill()

            # update position
            self.rect.x -= self.dx
            self.rect.y -= self.dy

    This definitely isn't the best way to do it, but I can't think of another way. If anyone has done this or can help, that would be awesome!

    Read the article

  • How do I create a curved line or filled circle or generally a circle using C++/SDL?

    - by NoobScratcher
    Hello, I've been trying for ages to make a pixel circle using the putpixel function provided by the SDL main website. Here is that function:

        void putpixel(int x, int y, int color, SDL_Surface* surface)
        {
            unsigned int *ptr = static_cast<unsigned int *>(surface->pixels);
            int offset = y * (surface->pitch / sizeof(unsigned int));
            ptr[offset + x] = color;
        }

    My question is: how do I curve a line, or create a circle arc of pixels, or generally any curved shape other than a rectangle, a single pixel, or a line? For example, a filled pixel circle (link in the original post). My idea was to change the x and y values of the pixel position using + and - to create the curves, but in practice that didn't produce the correct results. What I'm after is a circle made out of pixels only, nothing else. Thank you to anyone who takes the time to read this question!
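
    The textbook answer here is the midpoint (Bresenham) circle algorithm: compute one octant with integer arithmetic and mirror each point into the other seven. A sketch (in C#; it translates line-for-line to C++, with putPixel standing in for the putpixel above):

        // Draws a one-pixel-thick circle outline with the midpoint circle algorithm.
        public static void Circle(int cx, int cy, int r, Action<int, int> putPixel)
        {
            int x = r, y = 0, err = 1 - r;
            while (x >= y)
            {
                // Mirror the computed octant point into all eight octants.
                putPixel(cx + x, cy + y); putPixel(cx - x, cy + y);
                putPixel(cx + x, cy - y); putPixel(cx - x, cy - y);
                putPixel(cx + y, cy + x); putPixel(cx - y, cy + x);
                putPixel(cx + y, cy - x); putPixel(cx - y, cy - x);
                y++;
                if (err < 0) err += 2 * y + 1;
                else { x--; err += 2 * (y - x) + 1; }
            }
        }

    For a filled circle, draw a horizontal span per row instead of single pixels: for each dy in [-r, r], the half-width of the span is sqrt(r*r - dy*dy).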

    Read the article

  • How do client-server cooperation based games like Diablo 3 work?

    - by edgar
    Diablo 3 cooperates with Blizzard servers even during single player games. In fact, Blizzard has had problems with the games "melting their servers." I would like to ask: How do the client and the server communicate? What details does the client leave to the server, and vice versa? What details are redundant - both the client and the server know - and how often do they disagree? The previous paragraph contains the important questions, but I have a few more that I must explain my motivation towards. I am interested in the programming of botting. Ethical botting - I don't plan on actually abusing the automation to run 24/7. I just find it to be a great programming challenge to glean information from a game, and then make decisions from that information. I am stuck in the starting gate. The unofficial questions from this post would be: How can I make a bot (language, tools, libraries)? Can I get information through the communication between client and server, rather than the brute force pixel detection easily used in more static games? There probably is a trust issue, and to that all I can say is that I promise not to abuse the answers. But please feel free to answer any of the questions you feel comfortable with. Thank you!

    Read the article

  • Camera movement and threshold not working

    - by irish guy mcconagheh
    I have a platformer that is in progress; part of this is a camera which I only want to move when the character moves out of a certain threshold. To try to accomplish this I have the following if statement (in Unity/C#):

        if (((Mathf.Abs(target.transform.position.x)) - (Mathf.Abs(transform.position.x))) > thres)
        {
            x = moveTo(transform.position.x, target.position.x, trackSpeed);
        }

    In pseudocode: if ((absolute value of player x) - (absolute value of camera x) is greater than the threshold) { move }. However, this does not seem to work correctly. It appears to work the first couple of times the threshold is reached, but then the distance between the camera and the player has to increase every time for the camera to move. I do not believe the movement of the camera is the problem, but the code for it is as follows:

        private float moveTo(float n, float target, float accel)
        {
            if (n == target)
            {
                return n;
            }
            else
            {
                float dir = Mathf.Sign(target - n);
                n += accel * Time.deltaTime * dir;
                return (dir == Mathf.Sign(target - n)) ? n : target;
            }
        }
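
    For what it's worth, the condition as written subtracts absolute positions, which only equals the camera-to-player distance while both are on the same side of x = 0 and ordered the same way; that would produce exactly the "has to move further each time" symptom. The usual form (a sketch of the likely fix, not from the thread) takes the absolute value of the difference instead:

        // Distance between camera and player, independent of which side of x = 0 they are on.
        if (Mathf.Abs(target.transform.position.x - transform.position.x) > thres)
        {
            x = moveTo(transform.position.x, target.position.x, trackSpeed);
        }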

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load time; that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static.

    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs).

    Current considerations: (1) Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING). (2) Create one VBO for each animation, where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Each character then simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices + numNormals) * frameNumber). (VBO in STATIC)

    Known trade-offs: In 1 above, each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame; both client and pipeline intensive. In 2 above, there would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?
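
    For what it's worth, under option 2 the per-character animation state collapses to a byte offset into the one static VBO. A hedged sketch of the offset arithmetic with OpenTK-style C# bindings (the attribute layout and all names are assumptions):

        // One static VBO holds all frames back to back: [frame0][frame1]...
        int floatsPerVertex = 6;                          // 3 position + 3 normal, interleaved
        int stride = floatsPerVertex * sizeof(float);
        int frameBytes = vertexCount * stride;
        int offset = character.CurrentFrame * frameBytes; // per character, per draw

        GL.BindBuffer(BufferTarget.ArrayBuffer, animationVbo);
        GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, stride, offset);                     // position
        GL.VertexAttribPointer(1, 3, VertexAttribPointerType.Float, false, stride, offset + 3 * sizeof(float)); // normal
        GL.DrawArrays(PrimitiveType.Triangles, 0, vertexCount);

    Binding the next frame as a second pair of attributes and lerping in the vertex shader would also buy smooth interpolation between key frames without touching the buffer.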

    Read the article
