Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • Rotate Rigged and Animated Scene?

    - by Nick
    I have a rigged and animated mesh that I need to import into Unity. We have several characters that all use the same script and access their bones to do procedural animations as well. The problem is that the new model I was given is facing the wrong way: instead of facing forward, the model faces to the right. Is there any way to rotate the model together with its animations, without breaking them, so that it imports into Unity facing forward? Because of the way the rig was built, selecting everything in the scene and rotating it by 90 degrees ruins some of the animations, so I need a program that can fix this.

    Read the article

  • Free Models and Related Animations for AI project in Unity [on hold]

    - by zhed
    Does anybody know a good website where I can find free models and animations for AI projects? I'm not talking about anything good looking, like the assets you would look for when building a proper game, but, for example, a bunch of male/female models that can walk around and replace my ugly "capsules", just to give a better - yet still rough - idea of what's going on in the scene. On the Unity Asset Store there are a bunch of nice male/female models, but I haven't found any free general-purpose (i.e. normal walking) animations attachable to them. Any tip would be appreciated, thanks :)

    Read the article

  • TGA loader: reverse y-axis

    - by aVoX
    I've written a TGA image loader in Java which is working perfectly for files created with GIMP as long as they are saved with the option origin set to Top Left (Note: Actually TGA files are meant to be stored upside down - Bottom Left in GIMP). My problem is that I want my image loader to be capable of reading all different kinds of TGA, so my question is: How do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D to be specific) requires it that way. Thanks in advance.
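
    A minimal sketch of one way to do the flip, assuming the pixel data is tightly packed row by row (no padding) and bytesPerPixel matches the TGA pixel depth; the method name is illustrative. Whether a flip is needed at all can be read from bit 5 of the image descriptor byte (byte 17 of the TGA header), which is set when the origin is at the top.

        // Sketch: flip a tightly packed image buffer vertically, in place.
        public static void flipVertically(byte[] pixels, int width, int height, int bytesPerPixel) {
            int rowSize = width * bytesPerPixel;
            byte[] rowBuffer = new byte[rowSize];
            for (int y = 0; y < height / 2; y++) {
                int top = y * rowSize;
                int bottom = (height - 1 - y) * rowSize;
                System.arraycopy(pixels, top, rowBuffer, 0, rowSize);    // save the top row
                System.arraycopy(pixels, bottom, pixels, top, rowSize);  // move the bottom row up
                System.arraycopy(rowBuffer, 0, pixels, bottom, rowSize); // write the saved row at the bottom
            }
        }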

    Read the article

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a Game Engine

    I am planning a computer game and its engine. There will be a 3-dimensional world with first person view and it will be single player for now. The programming language is C++ and it uses OpenGL.

    Data Centered Design Decision

    My design decision is to use a data centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, ... Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.

    Whether to Use a Relational Database

    Should I use an SQL database, e.g. SQLite or MySQL, to store the game data? This contains virtually all game content like items, characters, inventories, ... except for meshes and textures, which are even more performance related, so I will keep them in memory. Is an SQL database fast enough for realtime reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    Advantages Would Be

    The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects for representing entities. I could easily query the data of objects near the player needed for rendering, and I wouldn't have to care about the data of objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there already is a place where the whole game state is stored.
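
    Purely as an illustration of the access pattern (the question's engine is C++, but this sketch uses Java and the standard JDBC API for brevity, and it assumes an SQLite JDBC driver such as xerial's sqlite-jdbc is on the classpath): the per-frame writes the question worries about would look like ordinary prepared statements, and timing a loop of them against an in-memory structure is the quickest way to answer the "fast enough" question for a specific game.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class PositionStore {
            public static void main(String[] args) throws Exception {
                // In-memory SQLite database; swap the URL for a file path to get persistence "for free".
                Connection db = DriverManager.getConnection("jdbc:sqlite::memory:");
                db.createStatement().execute(
                    "CREATE TABLE entity (id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL)");
                db.createStatement().execute(
                    "INSERT INTO entity (id, x, y, z) VALUES (1, 0, 0, 0)");

                // The kind of realtime write the question asks about: update a moving character's position.
                PreparedStatement move =
                    db.prepareStatement("UPDATE entity SET x = ?, y = ?, z = ? WHERE id = ?");
                long start = System.nanoTime();
                for (int frame = 0; frame < 10_000; frame++) {
                    move.setDouble(1, frame * 0.016);
                    move.setDouble(2, 0);
                    move.setDouble(3, 0);
                    move.setInt(4, 1);
                    move.executeUpdate();
                }
                System.out.println("10k updates took " +
                    (System.nanoTime() - start) / 1_000_000 + " ms");
            }
        }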

    Read the article

  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs. new way of doing things confusing.

    I guess my first question is whether anyone knows some figures about the percentage of gamers that can run each version of OpenGL. What's the market share like? 2.x, 3.x, 4.x... I looked into the requirements for Half Life 2, since I know Valve updated it with OpenGL to run on Mac and I know they usually try to hit a very wide user base, and they say a minimum of GeForce 8 Series. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which, maybe I'm wrong, sounds ancient to me since there's already 4.x. Then I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supports up to a certain version, or OS X only supports 3.2, or all kinds of other things. Overall, I'm just confused as to how much support there is for the various versions and which version to learn/use.

    I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and there are a couple of things in the reviews that mention the 4th edition is better and that the 5th edition doesn't properly teach the new features. Basically, even within learning material I'm seeing discrepancies and I just don't know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read in various articles and reviews that you want to stay away from deprecated features like glBegin()/glEnd(), yet a lot of books and tutorials I've seen use exactly that method. I've seen people saying that, basically, the new way of doing things is more complicated, yet the old way is bad.

    Just a side note: personally, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors into it as well, because as far as I understand that's only in 4.x? (Btw, my desktop supports 4.2.)

    Read the article

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down view over a collection of points and lines in a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is, how do I render a point A(Xp, Yp), given the following details:

    The offset of the camera POV from the origin of the map is (Xc, Yc), meaning the camera center is positioned on top of that point; if there's a point at (Xc, Yc), it is drawn in the center of the screen. The rotation angle is alpha, and the scale is S.

    Read my answer first; I am thinking there is a more optimized solution, thanks. My question is how to include the following improvement: I read in the AS3 Bible book that, in regards to ShaderInput, "You can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images." Meaning, if I am performing the same linear function on a lot of items, I can do it all at once if I use Shaders correctly and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0/
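
    For reference, the per-point transform itself is just translate, rotate, then scale and re-center. A minimal sketch follows (in Java for brevity; the rotation sign and the screen-centre names halfW/halfH are assumed conventions, and this does not cover the Pixel Bender batching part of the question):

        // Sketch: map a world point (xp, yp) to screen space for a camera at (xc, yc)
        // with rotation alpha (radians) and scale s; (halfW, halfH) is the screen centre.
        double dx = xp - xc;                          // translate so the camera is the origin
        double dy = yp - yc;
        double cos = Math.cos(alpha), sin = Math.sin(alpha);
        double rx = dx * cos - dy * sin;              // rotate around the camera
        double ry = dx * sin + dy * cos;
        double screenX = rx * s + halfW;              // scale, then re-centre on the screen
        double screenY = ry * s + halfH;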

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that real-time frame elapses (which should vary around 16-17 ms on average) are provided by a lot of frameworks, e.g. something like GetTimeElapsedSinceLastFrame, which gives you the wall-clock time. But should we use this information in a basic physics simulation? It looks to me to be a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up): the calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time (besides stopwatch timing for racing games)? I don't see a need to use real time and not a fixed 16 ms time step.
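
    For comparison, the usual compromise is a fixed-timestep loop: the simulation always advances in fixed virtual steps, and wall-clock time only decides how many steps to run each frame. A minimal sketch in Java (updatePhysics, render and running are assumed names):

        final double STEP = 1.0 / 60.0;                      // fixed "virtual" step, in seconds
        double accumulator = 0.0;
        long previous = System.nanoTime();
        while (running) {
            long now = System.nanoTime();
            accumulator += (now - previous) / 1_000_000_000.0;
            previous = now;
            accumulator = Math.min(accumulator, 0.25);       // clamp after a stall (virus scanner, debugger, ...)
            while (accumulator >= STEP) {
                updatePhysics(STEP);                         // deterministic: always the same dt
                accumulator -= STEP;
            }
            render();
        }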

    Read the article

  • Tutorial on OpenGL texture formats

    - by Cyan
    Looking at the documentation of glGetTexImage(), one can see that there are plenty of available texture targets: GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, GL_TEXTURE_1D_ARRAY, GL_TEXTURE_2D_ARRAY, GL_TEXTURE_RECTANGLE, GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. I've only used GL_TEXTURE_2D for the time being. Is there any place / documentation where one can learn about these other targets? PS: and yes, of course, I've googled for it; the results are pretty poor.

    Read the article

  • Resume Button error

    - by user3178359
    I have two classes. If I press the pause button, it shows the resume, retry and menu buttons and the game time is paused, but when I press resume the game time is still paused. Please help me: how do I make the game time continue?

    Code for the pause button:

        using UnityEngine;
        using System.Collections;

        public class pause : MonoBehaviour
        {
            public GUITexture showMenu;
            public GUITexture btnResume;
            public bool gamePaused = false;

            void OnMouseDown()
            {
                gamePaused = true;
                Time.timeScale = 0;
                showMenu.pixelInset = new Rect(220, 200, showMenu.pixelInset.width, showMenu.pixelInset.height);
                btnResume.pixelInset = new Rect(300, 300, btnResume.pixelInset.width, btnResume.pixelInset.height);
            }
        }

    Code for the resume button:

        using UnityEngine;
        using System.Collections;

        public class btResume : pause
        {
            //public GUITexture shoe;

            void onMouseDown()
            {
                base.gamePaused = false;
                Time.timeScale = 1;
                btnResume.pixelInset = new Rect(300, -300, btnResume.pixelInset.width, btnResume.pixelInset.height);
                showMenu.pixelInset = new Rect(220, -200, showMenu.pixelInset.width, showMenu.pixelInset.height);
            }
        }

    Read the article

  • Updating entities in response to collisions - should this be in the collision-detection class or in the entity-updater class?

    - by Prog
    In a game I'm working on, there's a class responsible for collision detection. Its method detectCollisions(List<Entity> entities) is called from the main game loop. The code to update the entities (i.e. where the entities 'act': update their positions, invoke AI, etc.) is in a different class, in the method updateEntities(List<Entity> entities), also called from the game loop, after the collision detection.

    When there's a collision between two entities, usually something needs to be done. For example, zero the velocity of both entities in the collision, or kill one of the entities. It would be easy to have this code in the CollisionDetector class, e.g. in pseudocode:

        for (Entity entityA in entities) {
            for (Entity entityB in entities) {
                if (collision(entityA, entityB)) {
                    if (entityA instanceof Robot && entityB instanceof Robot) {
                        entityA.setVelocity(0, 0);
                        entityB.setVelocity(0, 0);
                    }
                    if (entityA instanceof Missile || entityB instanceof Missile) {
                        entityA.die();
                        entityB.die();
                    }
                }
            }
        }

    However, I'm not sure if updating the state of entities in response to a collision should be the job of CollisionDetector. Maybe it should be the job of EntityUpdater, which runs after the collision detection in the game loop. Is it okay to have the code responding to collisions in the collision detection system? Or should the collision detection class only detect collisions, report them to some other class and have that class affect the state of the entities?
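
    A minimal sketch of the second option the question describes (class and method names are illustrative; collision(...), Entity and Missile are assumed to exist as in the pseudocode above): the detector only reports pairs, and a separate resolver applies the game rules.

        import java.util.ArrayList;
        import java.util.List;

        class CollisionEvent {
            final Entity a, b;
            CollisionEvent(Entity a, Entity b) { this.a = a; this.b = b; }
        }

        class CollisionDetector {
            List<CollisionEvent> detectCollisions(List<Entity> entities) {
                List<CollisionEvent> events = new ArrayList<>();
                for (int i = 0; i < entities.size(); i++)
                    for (int j = i + 1; j < entities.size(); j++)
                        if (collision(entities.get(i), entities.get(j)))
                            events.add(new CollisionEvent(entities.get(i), entities.get(j)));
                return events;                      // no entity state is touched here
            }
        }

        class CollisionResolver {
            void resolve(List<CollisionEvent> events) {
                for (CollisionEvent e : events) {
                    if (e.a instanceof Missile || e.b instanceof Missile) {
                        e.a.die();
                        e.b.die();
                    }
                    // other rules (robot vs robot, etc.) live here, not in the detector
                }
            }
        }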

    Read the article

  • What's the difference between a "Release" Xbox 360 build and a "Debug" one?

    - by Sebastian Gray
    I've got a build of my game that works on Windows under both release and debug builds as expected. When I deploy the debug version of the game to the Xbox, it works as expected and runs the same as on Windows; however, when I deploy the release version to the Xbox, I get different behaviour within the game. I'm using a 3rd party library for the collisions (which is where I am seeing the differences between the release and debug versions of my game), so I can't see what's actually different, but I suspect it has some compiler directive that makes the Debug build behave differently from the Release build on the Xbox. As such, I'm thinking that I may need to release my game as the Debug build instead of the Release build, but I want to know what issues I can expect by doing so. Are there any significant performance issues between the two build profiles?

    Read the article

  • Maya and Game Engines (i.e. Environment Testing)

    - by DiscreteGenius
    What does it mean if I'm designing an environment and I want to test it in the game engine, to see what it's like to "run" [or fly] around my environment? I heard an instructor say that exact thing in a Maya training video and I'd like to know more about how game engines and Maya are related to each other. He stated this would be done to see how things look in "size" (e.g. I assume he meant: 'How big is the cathedral, bridge, wall, building, etc.?'). I've tried to research such information but it's too complicated and detailed. I just want a simple answer to my query. Thanks to everyone willing to help and not criticize my question.

    Read the article

  • Identifying connected lines drawn free-hand by a user

    - by rawrgoesthelion
    I have a series of 'images' described by a mixture of connected lines and curves. Users will draw on the screen, free-hand, and my goal is to break their drawing down into a series of lines and curves that can be matched with the 'images' in my set. For the sake of simplicity, let's assume this is occurring on a touch screen. These lines will be connected. Each time the user's finger moves, the dx and dy are recorded. The drawing is considered complete and analyzed when the user's finger leaves the screen. I'm having trouble figuring out a good way to break the user's drawing down into lines. Is there any well-known approach to this problem, a C++ library that solves it, or any good articles/technical papers on how to achieve this?
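
    One well-known starting point (an approach suggestion, not from the question) is Ramer-Douglas-Peucker simplification: recursively keep only the points that deviate most from the segment between the current endpoints, so a wobbly free-hand stroke collapses into a few straight segments. A minimal sketch (in Java for brevity; Point is assumed to be a simple class with double x and y fields):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        static List<Point> simplify(List<Point> pts, double epsilon) {
            if (pts.size() < 3) return new ArrayList<>(pts);
            int index = -1;
            double maxDist = 0;
            Point a = pts.get(0), b = pts.get(pts.size() - 1);
            for (int i = 1; i < pts.size() - 1; i++) {
                double d = distanceToLine(pts.get(i), a, b);
                if (d > maxDist) { maxDist = d; index = i; }
            }
            if (maxDist <= epsilon)                               // everything is close enough to one line
                return new ArrayList<>(Arrays.asList(a, b));
            List<Point> left = simplify(pts.subList(0, index + 1), epsilon);
            List<Point> right = simplify(pts.subList(index, pts.size()), epsilon);
            left.remove(left.size() - 1);                         // don't duplicate the split point
            left.addAll(right);
            return left;
        }

        static double distanceToLine(Point p, Point a, Point b) {
            double dx = b.x - a.x, dy = b.y - a.y;
            double len = Math.hypot(dx, dy);
            if (len == 0) return Math.hypot(p.x - a.x, p.y - a.y);
            return Math.abs((p.x - a.x) * dy - (p.y - a.y) * dx) / len;   // perpendicular distance
        }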

    Read the article

  • Why do we move the world instead of the camera

    - by sharethis
    I heard that in an OpenGL game, what we do to let the player move is not to move the camera but to move the whole world around. For example, here is an extract from this tutorial: http://open.gl/transformations

    "In real life you're used to moving the camera to alter the view of a certain scene, in OpenGL it's the other way around. The camera in OpenGL cannot move and is defined to be located at (0,0,0) facing the negative Z direction. That means that instead of moving and rotating the camera, the world is moved and rotated around the camera to construct the appropriate view."

    Why do we do that?
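
    As a small illustration (the names and the column-major layout are assumptions, not from the tutorial): a "camera" is just the inverse transform applied to everything else, so moving the camera +5 on X is implemented by shifting the whole world by -5 on X before projecting.

        // Build a 4x4 view translation for a camera at (camX, camY, camZ):
        // the world is shifted by the *negated* camera position.
        static float[] viewTranslation(float camX, float camY, float camZ) {
            return new float[] {
                1, 0, 0, 0,
                0, 1, 0, 0,
                0, 0, 1, 0,
                -camX, -camY, -camZ, 1    // column-major, translation in the last column
            };
        }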

    Read the article

  • 3D models overlapping each other

    - by Auren
    I have a problem at the moment when drawing some models to teach myself more about 3D game programming. The models currently overlap each other from some angles, which makes sense since the game at the moment draws from left to right, line after line. My question is: is there any easy escape from this issue, or is there any way to draw the in-game world from the player's position? I would really appreciate it if someone could give me some answers on this.

    Read the article

  • Why isn't one of the constant buffers being loaded inside the shader?

    - by Paul Ske
    I got the model to load under tessellation; the only problem is that one of the constant buffers isn't actually updating the shader's tessellation factor inside the hull shader. I created a message box at the rendering point, so I know for sure the tessellation factor is assigned to the dynamic constant buffer. Inside the shader code, where it says .Edges[1] = tessellationAmount;, the tessellationAmount is supposed to be sent from the dynamic buffer to the shader. Otherwise it's just a plain box. To explain better: there are matrixBuffer, cameraBuffer and TessellationBuffer constant buffers, and a multiBuffer array that holds the matrix, camera and tessellation buffers. So when I set the hull shader, pixel shader, vertex shader and domain shader, the buffers get assigned from the multiBuffer, e.g. devcon->HSSetConstantBuffers(0, 3, multibuffer);. The only workaround would be to go into the shader and hard-code how much the edges (and the inside factors) tessellate with the same number. My question is: why wouldn't the TessellationBuffer work in the shader?

    Read the article

  • Help with timebased scoring algorithm

    - by Dave
    I'm trying to devise an appropriate scoring system for my game. The game in essence has a finite number of tasks to complete (say 20), and the quicker you complete these tasks, the more points you get. I had devised a basic way of doing this using bands of time, where the score for a band is multiplied by the number of tasks solved within that time band:

    1-5 secs = 15 points, 5-10 secs = 10 points, 10-20 secs = 5 points, 20-30 secs = 3 points, 40 secs onwards = 1 point.

    So for example, if I did 3 tasks in the 1-5 sec band I'd get 15*3 = 45 points; if I did 10 in the 20-30 sec band I'd get 3*10 = 30 points. I'm sure there is a more mathematical way of doing this using powers of some kind, but I just can't think how, and I'm hoping someone has already done something similar. Many thanks in advance.
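
    One possible "powers" formulation (an assumption, not the asker's system) is exponential decay from a maximum score, which gives the same general shape as the bands: high early, tapering to a floor. A minimal sketch in Java, where maxPoints, minPoints and halfLife are tuning knobs:

        // Score for one task completed after 'seconds': starts at maxPoints,
        // halves every halfLife seconds, never drops below minPoints.
        static int scoreForTime(double seconds) {
            double maxPoints = 15.0;
            double minPoints = 1.0;
            double halfLife  = 8.0;    // score halves every 8 seconds
            double score = maxPoints * Math.pow(0.5, seconds / halfLife);
            return (int) Math.round(Math.max(score, minPoints));
        }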

    Read the article

  • What is causing this behavior with the movement of Pong Ball in 2D? [closed]

    - by thegermanpole
    Edit: after running it through the debugger, it turned out I had the display function set to x,x... TIL how to use a debugger.

    I've been trying to teach myself C++ SDL with the Lazy Foo tutorial and seem to have run into a roadblock. The code below is the movement function of my Dot class, which controls the ball. The ball seems to ignore yvel and moves with xvel to the bottom right. The code should be pretty readable; the rest of the relevant facts are: all variables are names and constants are in caps, dotrad is the radius of my dot, yvel and xvel are set to 5 in the constructor, and the dot is created at x and y equal to 100. When I comment out the x movement block it doesn't move, but if I comment out the y movement block it keeps on going down to the right.

        void Dot::move()
        {
            if (((y + yvel + dotrad) <= SCREEN_HEIGHT) && (0 <= (y - dotrad + yvel))) {
                y += yvel;
            } else {
                yvel = -1 * yvel;
            }
            if (((x + xvel + dotrad) <= SCREEN_WIDTH) && (0 <= (x - dotrad + xvel))) {
                x += xvel;
            } else {
                xvel = -1 * xvel;
            }
        }

    Read the article

  • How to pass an interface to Java from Unity code?

    - by nickbadal
    First, let me say that this is my first experience with Unity, so the answer may be right under my nose. I've also posted this question on Unity's answers site, but plugin questions don't seem to be answered as frequently there. I'm trying to create a plugin that allows me to access an SDK from my game. I can call SDK methods just fine using AndroidJavaObject and I can pass data to them with no issue. But there are some SDK methods that require an interface to be passed. For example, my Java function:

        public void attemptLogin(String username, String password, LoginListener listener);

    where listener is a callback interface. I would normally call this code from Java as such:

        attemptLogin("username", "password", new LoginListener() {
            @Override
            public void onSuccess() {
                // Yay! do some stuff in the game
            }

            @Override
            public void onFailure(int error) {
                // Uh oh, find out what happened based on error
            }
        });

    Is there a way to pass a C# interface through JNI/Unity to my attemptLogin function? Or is there a way to create a mimicking interface in C# that I can call from inside the Java code (and pass in any kind of parameter)? Thanks in advance! :)

    Read the article

  • How to prioritize related game entity components?

    - by Paul Manta
    I want to make a game where you have to run over a bunch of zombies with your car. When moving around, the zombies have a few things to take into consideration:

    1. When there's no player around, they might just roam about randomly, and even when some other component dictates a specific direction, they should wobble to the left and right randomly (like drunk people). This implies a small, random deviation in their movement.
    2. They should avoid static obstacles. When they see they are headed towards a wall, they should reorient themselves.
    3. They should avoid the car. They should try to predict where the car will be based on its velocity and try to move out of the way.
    4. When they can, they should try to get near the player.

    All these types of decisions seem like they should be implemented in different components. But how should I manage them? How can I give different components different weights that reflect the importance of each decision (in a given situation)? I would need some other component that acts as a manager, but do you have any tips on how I should implement it? Or maybe there's a better solution?...
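
    A minimal sketch of one common way to do the weighting (names are illustrative, not from a specific library; Zombie and World stand in for whatever the game uses): each behaviour returns a desired direction, and a small manager sums them with per-behaviour weights before normalising.

        import java.util.Map;

        interface Steering {
            // returns a desired direction as {dx, dy}; zero when the behaviour has nothing to add
            double[] steer(Zombie zombie, World world);
        }

        static double[] combine(Zombie zombie, World world, Map<Steering, Double> weightedBehaviours) {
            double x = 0, y = 0;
            for (Map.Entry<Steering, Double> entry : weightedBehaviours.entrySet()) {
                double[] v = entry.getKey().steer(zombie, world);
                x += v[0] * entry.getValue();        // the weight reflects the behaviour's priority
                y += v[1] * entry.getValue();
            }
            double len = Math.hypot(x, y);
            return len == 0 ? new double[] {0, 0} : new double[] {x / len, y / len};
        }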

    Read the article

  • What libgdx project files can I ignore from version control?

    - by Zhen
    In an automatically created libgdx project, what files can I safely tell Git (or other revision control systems) to ignore? I'm considering these:

        *-android/.settings/
        *-android/bin/
        *-desktop/.settings/
        *-desktop/bin/
        *-html/.settings/
        *-html/gwt-unitCache/
        *-html/war/WEB-INF/classes/
        *-html/war/WEB-INF/deploy/
        *-html/war/assets/
        *-html/war/
        */.settings/
        */bin/

    Am I missing some? Is there a complete list somewhere?

    Read the article

  • Will C++ remain viable for game engines in somewhat distant future?

    - by samual
    C++11 has opened up possibilities that C++ programmers could previously only dream of. I have been learning C++ for three years now, and I am doing well. Now I want to get into video games. Every game engine core I have looked at was monstrously written in C++. My question is: if I get into serious game engine development, and perfecting it would take, say, 10 years, would we still be writing game engines in C++ (a newer standard)? Or will John Carmack write id Tech 7 in C++? Note: I am strictly talking about game engines.

    Read the article

  • Polygon is rotating too fast

    - by Manderin87
    I am going to be using a polygon collision detection method to test when objects collide. I am attempting to rotate a polygon to match the sprite's rotation. However, the polygon is rotating too fast, much faster than the sprite is. I feel it's a timing issue, but the sprite rotates like it is supposed to. Can anyone look at my code and tell me what could be causing this issue?

        public void rotate(float x0, float y0, double angle) {
            for (Point point : mPoints) {
                float x = (float) (x0 + (point.x - x0) * Math.cos(Utilities.toRadians(angle))
                                      - (point.y - y0) * Math.sin(Utilities.toRadians(angle)));
                float y = (float) (y0 + (point.x - x0) * Math.sin(Utilities.toRadians(angle))
                                      + (point.y - y0) * Math.cos(Utilities.toRadians(angle)));
                point.x = x;
                point.y = y;
            }
        }

    This algorithm works when done once, but once I plug it into the update method the rotation is too fast. The points used are P1 (608, 368), P2 (640, 464), P3 (672, 400), and the origin (x0, y0) is (640, 400). The angle goes from 0 to 360 as the sprite rotates. When the code executes, the triangle looks like a star because it's moving so fast. The rotation is done in the sprite's update method; the rotation method just increases the sprite's degree by 0.5 each time it executes.

        public void update() {
            if (isActive()) {
                rotate();
                mBounding.rotate(mPosition.x, mPosition.y, mDegree);
            }
        }

    Read the article

  • How to reduce the time it takes to load my web game? [closed]

    - by Danial
    I created a puzzle game with Unity and uploaded it to one server. This works fine, but I bought a new server and uploaded my game to it as well, and there the loading time is much longer. These are the servers: http://pinheadsinteractive.com/Mozzie/ (fast) and http://operation-mozzie-free.com/ (slow). The Unity files are exactly the same from one server to the next. My client is dissatisfied with the new, slow loading time, and in some cases they could not load the game at all. So, how can I reduce the time my Unity game takes to load? For the moment, I'm using an iframe on the new server as a workaround, but the issue remains unsolved.

    Read the article

  • What should I do if my text exceeds my text render target boundaries?

    - by user1423893
    I have a method for drawing strings in 3D that does the following:

    1. Set a render target.
    2. Draw each character as a quadrangle to the render target, using an orthographic projection.
    3. Unset the render target.
    4. Draw the render target texture using a perspective projection and a world transform.

    My problem is how to deal with strings whose character length exceeds the render target dimensions. For example, if I have the string "This is a reallllllllllly long string" and the render target can't accommodate it, it will only capture "This is a realllll". The render target (and its size) could be recreated each frame, but wouldn't that be far too costly?

    Read the article
