Search Results

Search found 35343 results on 1414 pages for 'development tools'.

  • How do I deal with the problems of a fast side-scroller?

    - by Ska
    I'm making a side-scrolling airplane game, and when it gets very fast I begin to experience some problems as a player: elements are not distinguishable (e.g. power-ups from bullets), I start to feel dizzy and uncomfortable, and there isn't enough time to see what's coming. How can I sort this out? Should I use less detail in all the graphics? Tiny Wings has the same horizontal movement speed as my game but doesn't suffer from these problems. Are there any other really fast side-scrollers I could take as a reference?

  • How does a single programmer make a game?

    - by Mike
    I have always been a software developer, but lately I've been wanting to get into games. The only thing stopping me is the fact that I'm a programmer, not an artist. I've made some simple stuff (Tetris, 2D chess, things like that), but I can't do much art, and that's really what holds me back. The problem is, I've yet to go to college, so most commercial projects wouldn't accept me even to work for free and learn a bit, especially with my lack of experience in games; and any indie projects I've looked into either have trouble responding to people who are interested, or never complete (or even start, really; most don't get past ideas on paper) the project they want to do. I've looked around locally for artists, anyone who can do modeling, textures or animation, or even anyone who can make some more advanced 2D assets for something like a side-scrolling RPG, but I haven't been able to find anyone. So how do you do it? Do I really just have to wait until I can go to college to see if I like working on games, or is there some way I can get art (for free; anything I do is just going to be for fun, so I don't want to sink money into it) and just start messing around on my own? Or am I just having bad luck and not looking in the right places for other people interested in having me help? I'm not looking for anything in particular, just something to fill some time with and see if I like making games. If not, I'll go back to my software projects. I have one more year of high school left, and I'd like to try a few different areas before I go to college.

  • What's the best way to add some particle or laser effects to an already animated character?

    - by Scott
    I just purchased some rigged and animated robot characters from 3drt for a game I'm making in Unity. I would like to be able to add some weapon effects to the characters. For example, I would like the robots to be able to shoot lasers from their hands at enemies. I have no idea where to even start with this task, as I'm more of a programmer than a graphics guy. Can some experienced developers / designers please point me in a good direction? Thanks. Note: as of right now I have Maya and Blender installed on my computer.

  • 2D Ragdoll - should it collide with itself?

    - by Axarydax
    I'm working on a ragdoll fighting game as a hobby project, but I have one dilemma: I am not sure whether my ragdoll should collide with itself or not, i.e. whether the ragdoll's body parts should collide with each other. A 2D world is somewhat different from 3D, because there are several implied layers (for example, in Super Mario you jump through a platform above you while going up). The setup I'm currently most satisfied with is one where only the parts joined by a joint don't collide, so the head doesn't collide with the neck, the neck with the chest, the chest with the upper arm, etc., but the head can collide with the chest, arms and legs. I've tried every different way, but I'm not content with any of them. Which way would you recommend?
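
    As an aside, in case the ragdoll happens to be built on Box2D (a guess; the question doesn't say which engine is used), the "jointed parts don't collide, everything else does" setup can be expressed directly on the joint definition via collideConnected. A minimal sketch with made-up sizes and positions; exact b2World setup varies slightly between Box2D versions:

      #include <Box2D/Box2D.h>

      int main() {
          b2World world(b2Vec2(0.0f, -10.0f));

          b2BodyDef bd;
          bd.type = b2_dynamicBody;
          bd.position.Set(0.0f, 2.0f);
          b2Body* head = world.CreateBody(&bd);
          bd.position.Set(0.0f, 1.5f);
          b2Body* neck = world.CreateBody(&bd);

          b2CircleShape headShape;  headShape.m_radius = 0.25f;
          b2PolygonShape neckShape; neckShape.SetAsBox(0.1f, 0.2f);
          head->CreateFixture(&headShape, 1.0f);
          neck->CreateFixture(&neckShape, 1.0f);

          b2RevoluteJointDef jd;
          jd.Initialize(head, neck, b2Vec2(0.0f, 1.75f)); // world-space anchor between the parts
          jd.collideConnected = false;                    // jointed neighbours ignore each other
          world.CreateJoint(&jd);

          // Parts that are NOT joined (head vs. chest, arms, legs, ...) still collide
          // normally; b2Filter category/group bits can tune that further if needed.
          return 0;
      }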

  • Understanding the Microsoft Permissive License

    - by cable729
    I want to use certain parts of the Game State Management example in a game I'm making, but I'm not sure how to do this legally. The license says I'm supposed to include a copy of the license with it. So if I make a Visual Studio solution, do I just add the license.txt to the solution? Also, if I use a class and change it, do I have to keep the license text at the top, note that I changed it, or what?

  • Searching for fast skeletal animation algorithm

    - by igf
    Is it theoretically possible to dynamically animate a scene with 100-150 character meshes of about 400 polys each on high-end GL ES 2.0 mobile devices, or should I definitely use pre-rendered keyframe animation? The scene has only one light source and precalculated shadow maps, viewed from the top like in Warcraft 3, with no other meshes or objects. 2D collision detection between objects is calculated via spatial hashing. It can be any algorithm besides ragdoll, but it must provide fast and simple skeletal animation for frames with 100+ low-poly meshes. Any ideas?

  • Why are my scene's depth values not being written to my DepthStencilView?

    - by dotminic
    I'm rendering to a depth map in order to use it as a shader resource view, but when I sample the depth map in my shader, the red component has a value of 1 while all other channels have a value of 0. The Texture2D I use to create the DepthStencilView is bound with the D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE flags, the DepthStencilView has the DXGI_FORMAT_D32_FLOAT format, and the ShaderResourceView has the DXGI_FORMAT_R32_FLOAT format with a D3D11_SRV_DIMENSION_TEXTURE2D view dimension. I set the depth map as the render target and draw my scene; once that is done, I set the back buffer render target and depth stencil on the output merger and use the depth map shader resource view as a texture in my shader, but the depth value in the red channel is constantly 1. I'm not getting any runtime errors from D3D, and no compile-time warnings or anything. I'm not sure what I'm missing here at all. I have the impression the depth value is always being set to 1. I have not set any depth/stencil states, and AFAICT depth writing is enabled by default. The geometry is being rendered correctly, so I'm pretty sure depth writing is enabled. The device is created with the appropriate debug flags:
      #if defined(DEBUG) || defined(_DEBUG)
          deviceFlags |= D3D11_CREATE_DEVICE_DEBUG | D3D11_RLDO_DETAIL;
      #endif
    This is how I create my depth map (error checking omitted for brevity):
      D3D11_TEXTURE2D_DESC td;
      td.Width = width;
      td.Height = height;
      td.MipLevels = 1;
      td.ArraySize = 1;
      td.Format = DXGI_FORMAT_R32_TYPELESS;
      td.SampleDesc.Count = 1;
      td.SampleDesc.Quality = 0;
      td.Usage = D3D11_USAGE_DEFAULT;
      td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
      td.CPUAccessFlags = 0;
      td.MiscFlags = 0;
      _device->CreateTexture2D(&td, 0, &this->_depthMap);

      D3D11_DEPTH_STENCIL_VIEW_DESC dsvd;
      ZeroMemory(&dsvd, sizeof(dsvd));
      dsvd.Format = DXGI_FORMAT_D32_FLOAT;
      dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
      dsvd.Texture2D.MipSlice = 0;
      _device->CreateDepthStencilView(this->_depthMap, &dsvd, &this->_dmapDSV);

      D3D11_SHADER_RESOURCE_VIEW_DESC srvd;
      srvd.Format = DXGI_FORMAT_R32_FLOAT;
      srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
      srvd.Texture2D.MipLevels = td.MipLevels;
      srvd.Texture2D.MostDetailedMip = 0;
      _device->CreateShaderResourceView(this->_depthMap, &srvd, &this->_dmapSRV);

  • Collision Systems Implementation

    - by hrr4
    Just curious what might be a good way to implement a decent collision system. As a class inherited by a base Entity class? Currently I'm stuck and could use a couple of ideas better than my own. Any help is appreciated! Edit: Sorry, it's 2D collision, but honestly I'm not looking for specific collision methods; I'm asking more along the lines of implementation. I'm just curious about some of the common ways to implement collision systems, such as: should the entire collision system be its own class? What, if anything, should be inheritable? These are some of my questions. Sorry for the confusion.
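
    For what it's worth, one common arrangement keeps collision out of the Entity inheritance tree entirely: entities own a small collider component, and a separate CollisionSystem class owns the pairwise testing. A rough C++ sketch with made-up names, not a definitive design:

      #include <vector>

      struct Entity;                        // whatever your game object already is

      struct CircleCollider {               // plain data component, no inheritance needed
          Entity* owner;
          float   x, y, radius;
      };

      class CollisionSystem {
      public:
          void Register(CircleCollider* c) { colliders.push_back(c); }

          // Called once per frame; reports overlapping pairs to a callback.
          template <typename OnHit>
          void Update(OnHit onHit) {
              for (size_t i = 0; i < colliders.size(); ++i)
                  for (size_t j = i + 1; j < colliders.size(); ++j)
                      if (Overlaps(*colliders[i], *colliders[j]))
                          onHit(colliders[i]->owner, colliders[j]->owner);
          }

      private:
          static bool Overlaps(const CircleCollider& a, const CircleCollider& b) {
              float dx = a.x - b.x, dy = a.y - b.y, r = a.radius + b.radius;
              return dx * dx + dy * dy <= r * r;
          }
          std::vector<CircleCollider*> colliders;
      };

    The pairwise loop is deliberately brute force; the point is the shape of the ownership, and the inner test can later be swapped for a grid or quadtree without touching the entities.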

  • Box2D `ApplyLinearImpulse` is not working whereas `SetLinearVelocity` works

    - by Narek
    I need to mimic jumping behavior for the player in my game. The player consists of two fixtures with circle and rectangle shapes. The rectangle is a sensor I use to detect the ground. At some point, to make the player jump, I do this:
      float impulseY = body->GetMass() * PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180);
      body->ApplyLinearImpulse(b2Vec2(0, impulseY), body->GetWorldCenter(), true);
    and the player does not jump. But when I do this:
      body->SetLinearVelocity(b2Vec2(0, PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180)));
    my player jumps. Also, when I change the rectangle shape to be a normal (non-sensor) shape, it works again. Why? Just in case, here are the parameters of my rectangular sensor:
      b2PolygonShape boxShape;
      boxShape.SetAsBox(width * 0.5/2/PTM_RATIO, height * 0.2/2/PTM_RATIO, b2Vec2(0, -height * 0.4 /PTM_RATIO), 0);
      b2FixtureDef boxFixtureDef;
      boxFixtureDef.friction = 0;
      boxFixtureDef.restitution = 0;
      boxFixtureDef.density = 1;
      boxFixtureDef.isSensor = true;
      boxFixtureDef.userData = static_cast<void*>(PLAYER_GROUP);

  • Transparent JPanel, Canvas background in JFrame

    - by Andy Tyurin
    I want to make a Canvas background and add some elements on top of it. For this, I made a JPanel transparent with setOpaque(false) and added it to the JFrame container first, then I added a Canvas with a black background (in the future I want to run an animation on it) to the JFrame as the second element. But I can't understand why I see a grey background instead of a black one. Any suggestions?
      public class Game extends JFrame {
          public Container container;          // Game container with components
          public Canvas backgroundLayer;       // Background layer of the game
          public JPanel elementsLayer;         // elements panel (top of backgroundLayer), holds different elements
          private Dimension startGameDimension = new Dimension(800, 600); // start game dimension

          public Game() {
              // init main window
              super("Astra LaserForces");
              setSize(startGameDimension);
              setBackground(Color.CYAN);
              container = getContentPane();
              container.setLayout(null);
              setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

              // init jpanel elements layer
              elementsLayer = new JPanel();
              elementsLayer.setSize(startGameDimension);
              elementsLayer.setBackground(Color.BLUE);
              elementsLayer.setOpaque(false);
              container.add(elementsLayer);

              // init canvas background layer
              backgroundLayer = new Canvas();
              backgroundLayer.setSize(startGameDimension);
              backgroundLayer.setBackground(Color.BLACK); // set default black color
              container.add(backgroundLayer);
          }

          // start game
          public void start() {
              setVisible(true);
          }

          // create new instance of game and start it
          public static void main(String[] args) {
              new Game().start();
          }
      }

  • How can I obtain in-game data from Warcraft 3 from an external process?

    - by Slav
    I am implementing a behavior algorithm and would like to test it with my lovely Warcraft III game, to watch how it fights against real players. The problem I'm having is that I don't know how to obtain information about the in-game state (units, structures, environment, etc.) from the running WC3 game. My algorithm needs access to the hard drive and possibly distributed computing, which is why JASS (WC3's editor language) isn't appropriate; I need to run my algorithm from a separate process. Direct3D hooking is one approach, but it hasn't been done for WC3 yet, and a significant drawback would be the inability to watch how the AI performs online, since it would use the viewport to issue commands. How can I read in-game data from WC3 from a separate process in the fastest and easiest way?

  • Need the co-ordinates of innerPolygon

    - by user960567
    Let's say I have this diagram: given the coordinates of the outer polygon and the distance d between the inner and outer polygon, how do I calculate the inner polygon's coordinates? Edit: I was able to solve the issue by getting the midpoints of all the lines. From these midpoints I can move a distance d, so I get three points. Now I have 3 points and 3 slopes, which give three new equations; solving them simultaneously gives the 3 inner vertices.
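
    Another way to look at the same construction, sketched in C++ (assuming a convex polygon with vertices in counter-clockwise order; all names are made up for illustration): offset every edge inward by d along its normal, then intersect each pair of adjacent offset edges to get the inner vertices.

      #include <cmath>
      #include <vector>

      struct Pt { double x, y; };

      // Inset a convex CCW polygon by distance d: shift each edge inward along its
      // normal, then intersect neighbouring shifted edges to recover the vertices.
      std::vector<Pt> InsetPolygon(const std::vector<Pt>& poly, double d) {
          size_t n = poly.size();
          std::vector<Pt> out(n);
          for (size_t i = 0; i < n; ++i) {
              const Pt& a0 = poly[(i + n - 1) % n], &a1 = poly[i];           // previous edge a0->a1
              const Pt& b0 = poly[i],               &b1 = poly[(i + 1) % n]; // next edge b0->b1

              auto inward = [](const Pt& p, const Pt& q) {   // unit inward normal (CCW polygon)
                  double dx = q.x - p.x, dy = q.y - p.y, len = std::sqrt(dx * dx + dy * dy);
                  return Pt{ -dy / len, dx / len };
              };
              Pt na = inward(a0, a1), nb = inward(b0, b1);

              // Shifted edges: A(t) = A0 + t*Ad,  B(s) = B0 + s*Bd
              Pt A0{ a0.x + d * na.x, a0.y + d * na.y }, Ad{ a1.x - a0.x, a1.y - a0.y };
              Pt B0{ b0.x + d * nb.x, b0.y + d * nb.y }, Bd{ b1.x - b0.x, b1.y - b0.y };

              double denom = Ad.x * Bd.y - Ad.y * Bd.x;      // zero only if edges are parallel
              double t = ((B0.x - A0.x) * Bd.y - (B0.y - A0.y) * Bd.x) / denom;
              out[i] = Pt{ A0.x + t * Ad.x, A0.y + t * Ad.y };
          }
          return out;
      }

    For the triangle in the question this gives the same three inner vertices as the midpoint/slope construction, but it also works unchanged for any convex polygon.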

  • Handling game logic events by behavior components

    - by chehob
    My question continues on a topic discussed here. I have tried implementing the attribute/behavior design, and here is a quick example demonstrating the issue:
      class HealthAttribute : public ActorAttribute
      {
      public:
          HealthAttribute( float val ) : mValue( val ) { }
          float Get( void ) const { return mValue; }
          void Set( float val ) { mValue = val; }
      private:
          float mValue;
      };

      class HealthBehavior : public ActorBehavior
      {
      public:
          HealthBehavior( shared_ptr< HealthAttribute > health ) : pHealth( health )
          {
              // Set OnDamage() to listen for game logic event "DamageEvent"
          }

          void OnDamage( IEventDataPtr pEventData )
          {
              // Check DamageEvent target entity
              // ( compare my entity ID with event's target entity ID )
              // If not my entity, do nothing
              // Else, modify health attribute with received DamageEvent data
          }
      protected:
          shared_ptr< HealthAttribute > pHealth;
      };
    My question: is it possible to get rid of this annoying check for game logic events? In the current implementation, when some entity must receive damage, the game logic just fires off an event that contains the damage value and the ID of the entity that should receive the damage. All HealthBehaviors are subscribed to the DamageEvent type, which leads to every entity possessing a HealthBehavior calling OnDamage(), even if it is not the addressee.
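
    One way to avoid the broadcast-and-filter pattern (a rough C++ sketch with made-up names, assuming events carry a target entity ID as in the question) is to key the subscription on the entity ID, so the event manager only invokes the behaviors attached to the target entity:

      #include <cstdint>
      #include <functional>
      #include <unordered_map>
      #include <vector>

      using EntityID = std::uint32_t;

      struct DamageEvent { EntityID target; float amount; };

      // Listeners register per entity, so a DamageEvent is delivered only to the
      // behaviors owned by event.target instead of every HealthBehavior in the game.
      class DamageEventRouter {
      public:
          void Subscribe(EntityID id, std::function<void(const DamageEvent&)> fn) {
              listeners[id].push_back(std::move(fn));
          }
          void Fire(const DamageEvent& ev) const {
              auto it = listeners.find(ev.target);
              if (it == listeners.end()) return;          // nobody on that entity cares
              for (const auto& fn : it->second) fn(ev);   // no per-listener ID check needed
          }
      private:
          std::unordered_map<EntityID, std::vector<std::function<void(const DamageEvent&)>>> listeners;
      };

    HealthBehavior would then subscribe with its own entity ID in its constructor and never see events addressed to anyone else.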

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1 m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage: (1) use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and divide by 10 in the vertex shader to scale incoming decimetre values back to metres; arbitrary (non-PoT) divisions tend to be slow, however. (2) Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division; not sure how performant that is in shaders, though, and it's not as intuitive to read the values, but it's possibly quite a bit faster than dividing by a non-PoT value. (3) Use a texture as a lookup table; I've heard that this is really fast. Or whatever solutions others can offer to achieve the same result: minimal vertex data with sensible scaling.
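
    For reference, a small CPU-side sketch of the power-of-two variant (made-up numbers: 8 short units per metre, so 12.5 cm height steps). Passing the shorts as a non-normalized integer attribute and multiplying by the constant 1/8 in the vertex shader is mathematically the same as the bitshift, without the readability cost:

      #include <cstdint>
      #include <cmath>
      #include <cstdio>

      // 8:1 quantisation: 1 short unit = 1/8 m.
      constexpr float kUnitsPerMetre = 8.0f;          // power of two
      constexpr float kMetresPerUnit = 1.0f / kUnitsPerMetre;

      int16_t PackMetres(float metres) {              // CPU side, when building the VBO
          return static_cast<int16_t>(std::lround(metres * kUnitsPerMetre));
      }

      float UnpackMetres(int16_t units) {             // what the vertex shader would do:
          return units * kMetresPerUnit;              // position * 0.125, a single multiply
      }

      int main() {
          float h = 37.46f;                           // height in metres
          int16_t packed = PackMetres(h);             // 300 units
          std::printf("%.3f m -> %d units -> %.3f m\n", h, packed, UnpackMetres(packed));
      }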

  • movement of sprites with kinect and xna

    - by pablopp83
    I'm working on a project with the Kinect SDK and XNA 4.0. I need to take the position of the hands and draw a sprite over it. I was doing this directly and, because of that, I get a "trembling hands" effect. So I was thinking of making the sprite move from the previous position to the new one given in every frame by the new hand position; this way, the sprite does not jump from one position to another. This is working just fine, but I'm using a constant value for the velocity, and I really would like to use a variable velocity given by the difference between the previous and the new position. That is, if the hand moves more quickly in reality, the velocity will be higher. I really don't have a clue on how to make this work. Can somebody point me in the right direction? Thanks.
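
    A minimal sketch of that idea (language-agnostic, shown here in C++): every frame, move the sprite by a fraction of the remaining distance to the hand, so the speed is automatically proportional to how far the hand moved and small jitter is damped away.

      #include <cstdio>

      struct Vec2 { float x, y; };

      // Exponential smoothing: the sprite moves toward the hand at a rate
      // proportional to the remaining distance (roughly `smoothing` per second).
      // A fast-moving hand drags the sprite quickly; a jittering hand barely moves it.
      Vec2 Follow(Vec2 sprite, Vec2 hand, float smoothing, float dt) {
          sprite.x += (hand.x - sprite.x) * smoothing * dt;
          sprite.y += (hand.y - sprite.y) * smoothing * dt;
          return sprite;
      }

      int main() {
          Vec2 sprite{0, 0}, hand{100, 50};
          for (int frame = 0; frame < 5; ++frame) {
              sprite = Follow(sprite, hand, 10.0f, 1.0f / 60.0f);   // 60 fps update
              std::printf("frame %d: %.1f, %.1f\n", frame, sprite.x, sprite.y);
          }
      }

    The smoothing factor (10 here) is a made-up tuning value: higher tracks the hand more tightly, lower filters out more trembling.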

  • Lerping to a center point while in motion

    - by Fibericon
    I have an enemy that initially flies in a circular motion while facing away from the center point. This is how I achieve that:
      position.Y = (float)(Math.Cos(timeAlive * MathHelper.PiOver4) * radius + origin.Y);
      position.X = (float)(Math.Sin(timeAlive * MathHelper.PiOver4) * radius + origin.X);
      if (timeAlive < 5)
      {
          angle = (float)Math.Atan((0 - position.X) / (0 - position.Y));
          if (0 < position.Y)
              RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(-1 * angle);
          else
              RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi - angle);
      }
    That part works just fine. After five seconds of this, I want the enemy to turn inward, facing the center point. However, I've been trying to lerp to that point, since I don't want it to simply jump to the new rotation. Here's my code for trying to do that:
      else
      {
          float newAngle = -1 * (float)Math.Atan((0 - position.X) / (0 - position.Y));
          angle = MathHelper.Lerp(angle, newAngle, (float)gameTime.ElapsedGameTime.Milliseconds / 1000);
          if (0 < position.Y)
              RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi - angle);
          else
              RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(-1 * angle);
      }
    That doesn't work so well. It seems like it's going to at first, but then it just sort of skips around. How can I achieve what I want here?
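
    One thing that commonly produces this kind of "skipping around" when interpolating angles derived from Atan is wrap-around: a plain lerp between, say, +170 and -170 degrees sweeps the long way through 0 instead of crossing 180 (and Atan(x/y), unlike Atan2, also throws away the quadrant). A small sketch of lerping via the shortest angular difference, shown in C++ but directly translatable to the XNA code above:

      #include <cmath>
      #include <cstdio>

      constexpr float kPi = 3.14159265358979f;

      // Wrap an angle to (-pi, pi].
      float WrapAngle(float a) {
          while (a >  kPi) a -= 2.0f * kPi;
          while (a <= -kPi) a += 2.0f * kPi;
          return a;
      }

      // Lerp from `a` toward `b` by fraction `t`, always taking the shortest way around.
      float LerpAngle(float a, float b, float t) {
          return a + WrapAngle(b - a) * t;
      }

      int main() {
          float from = 170.0f * kPi / 180.0f;   // +170 degrees
          float to   = -170.0f * kPi / 180.0f;  // -170 degrees
          // A naive lerp would pass through 0; the shortest path crosses +/-180 instead.
          std::printf("halfway: %.1f degrees\n", LerpAngle(from, to, 0.5f) * 180.0f / kPi);
      }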

  • Spherical to Cartesian Coordinates

    - by user1258455
    Well, I'm reading Frank Luna's DirectX 10 book and, while trying to understand the first demo, I found something that's not very clear, at least to me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:
      // Convert Spherical to Cartesian coordinates: mPhi measured from +y
      // and mTheta measured counterclockwise from -z.
      float x = 5.0f*sinf(mPhi)*sinf(mTheta);
      float z = -5.0f*sinf(mPhi)*cosf(mTheta);
      float y = 5.0f*cosf(mPhi);
    I mean, the comment explains what they do: it says they convert spherical coordinates to Cartesian coordinates. But mathematically, why? Why is the x value calculated from the product of the sines of both angles, z from the product of the sine and cosine, and y from just the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain (mathematically) why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain exactly what happens in those lines of code, or just give me a link that helps with the math part. Thanks in advance!
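
    For reference, the decomposition behind those three lines (with radius r = 5, \phi measured from the +y axis and \theta measured counterclockwise from -z, exactly as the comment states) is just two right triangles applied in sequence:

      y = r\cos\phi,            \rho = r\sin\phi    (length of the projection onto the xz-plane)
      x = \rho\sin\theta = r\sin\phi\,\sin\theta
      z = -\rho\cos\theta = -r\sin\phi\,\cos\theta

    The first triangle splits the vector of length r between the y axis (adjacent side, r cos(phi)) and the xz-plane (opposite side, r sin(phi)); the second splits that in-plane length rho between the -z direction (rho cos(theta), hence the minus sign on z) and the +x direction (rho sin(theta)). As a sanity check, at theta = 0 the point sits on the -z axis, and at phi = 0 it sits on the +y axis, matching the comment's conventions.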

  • Isometric tile selection

    - by Dylan Lundy
    I'm not all that good with maths. I'm trying to make a function to convert mouse coordinates into a particular tile in an isometric view. All of the algorithms I have seen so far work with the X and Y axes going diagonally; my game is currently set up like this, and I would like to keep it so. Is there an algorithm so that if the mouse were at the red dot, it would return the coordinates of the tile it is sitting on? (6,2)
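
    Without the original picture it's hard to match the exact axis convention, but a common starting point for diamond-shaped isometric tiles (assuming tile (0,0)'s corner is at the screen origin and tiles are tileW wide and tileH tall on screen; all names made up) is simply to invert the usual tile-to-screen mapping:

      #include <cmath>
      #include <cstdio>

      // Standard diamond ("2:1") isometric mapping and its inverse.
      // tile -> screen:  sx = (tx - ty) * tileW / 2;   sy = (tx + ty) * tileH / 2
      // screen -> tile:  solve those two equations for tx and ty.
      void ScreenToTile(float sx, float sy, float tileW, float tileH,
                        int& tx, int& ty) {
          float a = sx / (tileW * 0.5f);   // = tx - ty
          float b = sy / (tileH * 0.5f);   // = tx + ty
          tx = static_cast<int>(std::floor((a + b) * 0.5f));
          ty = static_cast<int>(std::floor((b - a) * 0.5f));
      }

      int main() {
          int tx, ty;
          // With 64x32 tiles, a point one tile "down-right" of the origin:
          ScreenToTile(32.0f, 16.0f, 64.0f, 32.0f, tx, ty);
          std::printf("tile %d, %d\n", tx, ty);   // prints "tile 1, 0"
      }

    Depending on where the map's origin sits on screen and which diagonal each axis follows, you may need to subtract an offset from sx/sy or swap the signs, but the inversion step stays the same.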

  • XNA 2D Top Down game - FOREACH didn't work for checking Enemy and Switch-Tile

    - by aldroid16
    Here is the gameplay. There are three conditions. The player steps on a Switch-Tile and it becomes false. 1) When an Enemy steps on it (is trapped) AND the player steps on it too, the Enemy is destroyed. 2) But when the Enemy steps on it AND the player DOESN'T step on it too, the Enemy escapes. 3) If the Switch-Tile is true, nothing happens. The effect is activated when the Switch-Tile is false (the player stepped on it). Because there are a lot of Enemies and a lot of Switch-Tiles, I have to use foreach loops. The problem is that after an Enemy has ESCAPED (case 2) and steps on another Switch-Tile again, nothing happens to it! I don't know what's wrong. The effect should be the same, but the Enemy passes over the Switch-Tile like nothing happened (it should be trapped). Can someone tell me what's wrong? Here is the code:
      public static void switchUpdate(GameTime gameTime)
      {
          foreach (SwitchTile switchTile in switchTiles)
          {
              foreach (Enemy enemy in EnemyManager.Enemies)
              {
                  if (switchTile.Active == false)
                  {
                      if (!enemy.Destroyed)
                      {
                          if (switchTile.IsCircleColliding(enemy.EnemyBase.WorldCenter, enemy.EnemyBase.CollisionRadius))
                          {
                              // reduce Enemy speed while it is on the tile (for about two seconds)
                              enemy.EnemySpeed = 10;
                              enemy.Trapped = true;
                              float elapsed = (float)gameTime.ElapsedGameTime.Milliseconds;
                              moveCounter += elapsed;
                              if (moveCounter > minMoveTime)
                              {
                                  // After two seconds, if the player didn't step on the Switch-Tile,
                                  // the Enemy escapes and its speed goes back to normal
                                  enemy.EnemySpeed = 60f;
                                  enemy.Trapped = false;
                              }
                          }
                      }
                  }
                  else if (switchTile.Active == true && enemy.Trapped == true
                           && switchTile.IsCircleColliding(enemy.EnemyBase.WorldCenter, enemy.EnemyBase.CollisionRadius))
                  {
                      // When the Player steps on the Switch-Tile and
                      // there is a trapped enemy on the same tile, destroy the enemy
                      enemy.Destroyed = true;
                  }
              }
          }
      }

  • iOS + cocos2d: how to account for sprite's position for the different device dimensions in an universal app?

    - by fuzzlog
    All the questions I've seen regarding iOS universal apps (with or without cocos2d) deal with the "how to add graphics to a universal app". My question is, how does the code need to be written to ensure that the sprites appear appropriately on the screen (given that an iPhone 5's resolution is not proportional to an iPad's resolution)? Is it just a bunch of "if" statements and duplicate code or do iOS/cocos2d provide common function calls that will place the sprites at an appropriate position?

  • Should I use OpenGL or DX11 for my game?

    - by Sundareswaran Senthilvel
    I'm planning to write a game from scratch (a BIG game, for commercial purposes). I'm aware that there are certain compute libraries like OpenCL, AMD APP SDK and C++ AMP, as well as DirectCompute from MS, available on the market (I'm NOT interested in CUDA). I'm planning to write the game from scratch, which includes the following engines: a physics engine, an AI engine, and the main game engine (and anything else I've missed). I'm aware that there are some free physics engine libraries on the market; I'm not sure about free AI engine libraries. I'm a bit confused in choosing between the OpenCL, AMD APP SDK and C++ AMP libraries (as already mentioned, I'm NOT interested in CUDA). I want my game to be published on Windows/Android/Mac OS X, which means it should be a cross-platform game: I will have one source code base that I'll compile for the various platforms like Windows/Android/Mac OS X, and any others I've missed. Note: since I'm NOT a Java guy, kindly do NOT suggest the Java language. For the graphics API, should I use OpenGL or DirectX 11? I have heard that OpenGL runs on a single core, and I'm not sure about DirectX 11. Between OpenGL and DirectX, which one should I follow? Or are there any other graphics APIs I should start with? I want to make use of the parallelism in the GPU as well as the CPU.

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap generating techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like this, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated. Thanks, Henry.
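
    One route that sidesteps the tiling/seam problem entirely (sketched below; not necessarily the best, and the libnoise usage is from memory, so treat the header path and module calls as assumptions) is to skip 2D heightmaps altogether and evaluate a 3D noise function at each vertex's direction on the unit sphere. Because all six cube faces sample the same 3D field, they agree along shared edges with no visible net:

      #include <cmath>
      #include <noise/noise.h>   // libnoise; header path may differ per install

      struct Vec3 { double x, y, z; };

      // Displace a cube-sphere vertex by 3D noise evaluated at its unit direction.
      Vec3 PlanetVertex(Vec3 dir, double planetRadius, double heightScale,
                        const noise::module::Perlin& noiseModule) {
          double len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
          dir.x /= len; dir.y /= len; dir.z /= len;             // cube vertex -> unit sphere

          double h = noiseModule.GetValue(dir.x, dir.y, dir.z); // roughly in [-1, 1]
          double r = planetRadius + h * heightScale;            // displace along the normal
          return { dir.x * r, dir.y * r, dir.z * r };
      }

    Feature size and roughness would then be tuned on the noise module itself (frequency, octave count) rather than on a 2D map.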

  • How to synchronise the acceleration, velocity and position of the monsters on the server with the players?

    - by Nick
    I'm building an MMO using Node.js, and there are monsters roaming around. I can make them move around on the server using the vector variables acceleration, velocity and position:
      acceleration = steeringForce / mass;
      velocity += acceleration * dTime;
      position += velocity * dTime;
    Right now I just send the positions over, tell the players these are the "target positions" of the monsters, and let the monsters move towards the target positions on the client with a speed dependent on the distance to the target position. It works, but it looks rather strange. How do I synchronise these properly with the players, taking server lag into account, without it looking funny to them? The problem is that I don't know how to make use of the correct acceleration/velocity values here; right now the monsters just move directly in a straight line to the target position instead of properly accelerating and braking. How can I implement such behaviour?
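
    A common approach (a rough sketch of the idea in C++; the same arithmetic applies in the Node.js client, and the names are made up) is to send position and velocity together with a timestamp, let the client extrapolate with the same integration the server uses, and ease the displayed position toward that extrapolation instead of steering straight at a stale target position:

      #include <cstdio>

      struct Vec2 { float x, y; };

      // Last state received from the server for one monster.
      struct ServerState { Vec2 pos, vel; float receivedAt; };

      // Dead reckoning: extrapolate the server state forward by its age, then ease
      // the currently shown position toward that prediction so corrections from new
      // packets don't cause visible snapping.
      Vec2 DisplayPosition(const ServerState& s, Vec2 shown, float now, float dt,
                           float correctionRate /* e.g. 10: close the gap quickly */) {
          float age = now - s.receivedAt;                 // how stale the packet is
          Vec2 predicted{ s.pos.x + s.vel.x * age,        // same integration as the server
                          s.pos.y + s.vel.y * age };
          shown.x += (predicted.x - shown.x) * correctionRate * dt;
          shown.y += (predicted.y - shown.y) * correctionRate * dt;
          return shown;
      }

      int main() {
          ServerState s{ {0, 0}, {5, 0}, 0.0f };          // moving 5 units/s along x
          Vec2 shown{0, 0};
          for (int i = 1; i <= 3; ++i) {
              float now = i / 60.0f;
              shown = DisplayPosition(s, shown, now, 1.0f / 60.0f, 10.0f);
              std::printf("t=%.3f shown=(%.3f, %.3f)\n", now, shown.x, shown.y);
          }
      }

    Sending acceleration as well lets the prediction curve instead of moving in a straight line between updates, which is what gives the accelerating/braking look.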

  • TGA loader: reverse height

    - by aVoX
    I wrote a TGA image loader in Java which works perfectly for files created with GIMP, as long as they are saved with the "origin" option set to "Top Left" (note: TGA files are actually meant to be stored upside down, i.e. "Bottom Left" in GIMP). My problem is that I want my image loader to be capable of reading all the different kinds of TGAs, so my question is: how do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D to be specific) requires it that way. Thanks in advance.
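
    The flip itself is just a row-by-row swap of the packed pixel data (sketched here in C++ for brevity; the same loop translates one-to-one to a Java byte[] using System.arraycopy into a temporary row buffer):

      #include <algorithm>
      #include <cstdint>
      #include <vector>

      // Flip a tightly packed image in place by swapping whole rows.
      // bytesPerPixel is 3 for 24-bit TGA data, 4 for 32-bit.
      void FlipRows(std::vector<std::uint8_t>& pixels, int width, int height, int bytesPerPixel) {
          const int rowSize = width * bytesPerPixel;
          for (int y = 0; y < height / 2; ++y) {
              std::uint8_t* top    = pixels.data() + y * rowSize;
              std::uint8_t* bottom = pixels.data() + (height - 1 - y) * rowSize;
              std::swap_ranges(top, top + rowSize, bottom);   // swap row y with its mirror row
          }
      }

    Whether to flip can be decided from the TGA image-descriptor byte (byte 17 of the header): as far as I recall, bit 5 set means top-left origin and clear means the usual bottom-left, so you only flip in one of the two cases.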

  • Maya is lagging in a specific way...?

    - by Aerovistae
    My Maya installation worked perfectly. It is not my computer. Something caused it to stop working overnight, somehow. When I try to drag a vertex or something like that, it moves the vertex, but then I have to click like 3 times somewhere outside the mesh before the actual mesh will catch up and follow the vertex. Until I do that, it just stays as it was, with a floating vertex somewhere inside it or outside it. It makes modeling borderline impossible and completely infuriating. What ought to be happening is what we're all used to-- as I move the vertex, the mesh follows it actively, so I can see what it looks like at every given moment until I release the vertex in its new position. Other weird thing: this only applies to complex meshes, like a couple thousand faces. A simple cube works fine. What gives?? Anybody?
