Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • How is the gimbal lock problem solved using accumulative matrix transformations

    - by Luke San Antonio
    I am reading the online "Learning Modern 3D Graphics Programming" book by Jason L. McKesson. As of now, I am up to the gimbal lock problem and how to solve it using quaternions. However, right here on the Quaternions page:

        "Part of the problem is that we are trying to store an orientation as a series of 3 accumulated axial rotations. Orientations are orientations, not rotations. And orientations are certainly not a series of rotations. So we need to treat the orientation of the ship as an orientation, as a specific quantity."

    I guess this is the first spot where I start to get confused, because I don't see the dramatic difference between orientations and rotations. I also don't understand why an orientation cannot be represented by a series of rotations... Also:

        "The first thought towards this end would be to keep the orientation as a matrix. When the time comes to modify the orientation, we simply apply a transformation to this matrix, storing the result as the new current orientation. This means that every yaw, pitch, and roll applied to the current orientation will be relative to that current orientation. Which is precisely what we need. If the user applies a positive yaw, you want that yaw to rotate them relative to where they are currently pointing, not relative to some fixed coordinate system."

    The concept I understand; however, I don't understand how, if accumulating matrix transformations is a solution to this problem, the code given on the previous page isn't exactly that. Here's the code:

        void display()
        {
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            glClearDepth(1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glutil::MatrixStack currMatrix;
            currMatrix.Translate(glm::vec3(0.0f, 0.0f, -200.0f));

            currMatrix.RotateX(g_angles.fAngleX);
            DrawGimbal(currMatrix, GIMBAL_X_AXIS, glm::vec4(0.4f, 0.4f, 1.0f, 1.0f));
            currMatrix.RotateY(g_angles.fAngleY);
            DrawGimbal(currMatrix, GIMBAL_Y_AXIS, glm::vec4(0.0f, 1.0f, 0.0f, 1.0f));
            currMatrix.RotateZ(g_angles.fAngleZ);
            DrawGimbal(currMatrix, GIMBAL_Z_AXIS, glm::vec4(1.0f, 0.3f, 0.3f, 1.0f));

            glUseProgram(theProgram);
            currMatrix.Scale(3.0, 3.0, 3.0);
            currMatrix.RotateX(-90);
            //Set the base color for this object.
            glUniform4f(baseColorUnif, 1.0, 1.0, 1.0, 1.0);
            glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, glm::value_ptr(currMatrix.Top()));
            g_pObject->Render("tint");
            glUseProgram(0);

            glutSwapBuffers();
        }

    To my understanding, isn't what he is doing (modifying a matrix on a stack) considered accumulating matrices, since the author combined all the individual rotation transformations into one matrix, which is stored on the top of the stack?

    My understanding of a matrix is that it is used to take a point which is relative to one origin (let's say... the model) and make it relative to another origin (the camera). I'm pretty sure this is a safe definition; however, I feel like there is something missing which is blocking me from understanding this gimbal lock problem.

    One thing that doesn't make sense to me: if a matrix determines the relative difference between two "spaces," how come a rotation around the Y axis for, let's say, roll, doesn't put the point in "roll space," which can then be transformed once again in relation to this roll? In other words, shouldn't any further transformations to this point be in relation to this new "roll space," and therefore not have the rotation be relative to the previous "model space," which is what causes the gimbal lock? That's why gimbal lock occurs, right? It's because we are rotating the object around set X, Y, and Z axes rather than rotating the object around its own, relative axes. Or am I wrong? Since apparently the code I linked isn't an accumulation of matrix transformations, can you please give an example of a solution using this method?

    So in summary:

    - What is the difference between a rotation and an orientation?
    - Why is the code linked not an example of accumulating matrix transformations?
    - What is the real, specific purpose of a matrix, if I had it wrong?
    - How could a solution to the gimbal lock problem be implemented using accumulation of matrix transformations?
    - As a bonus: why are the transformations after the rotation still relative to "model space"?
    - Another bonus: am I wrong in the assumption that after a transformation, further transformations will occur relative to the current one?

    Also, if it wasn't implied, I am using OpenGL, GLSL, C++, and GLM, so examples and explanations in terms of these are greatly appreciated, if not necessary. The more detail the better! Thanks in advance...
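
    For reference, "keeping the orientation as a matrix" usually means persisting one matrix across frames and post-multiplying each incremental rotation into it, rather than rebuilding the stack from raw angles every frame as display() does. A minimal sketch with GLM (names hypothetical, not from the book):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        glm::mat4 g_orientation(1.0f); // persists between frames

        // Post-multiplying makes each new rotation relative to the current
        // orientation instead of the fixed world axes.
        void offsetOrientation(const glm::vec3 &axis, float angleDeg)
        {
            g_orientation = g_orientation *
                glm::rotate(glm::mat4(1.0f), glm::radians(angleDeg), axis);
        }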

    Read the article

  • Collision detection code style

    - by Marian Ivanov
    Not only are there two useful broad-phase algorithms and a lot of useful narrow-phase algorithms, there are also multiple code styles.

    Arrays vs. calling:

    1. Make an array of broadphase checks, then filter them with narrowphase checks, then resolve them:

        function resolveCollisions(thingyStructure *a, thingyStructure *b, int index){
            possibleCollisions = getPossibleCollisions(b, a->get(index));
            for(i = 0; i < possibleCollisionsNumber; i++){
                if(narrowphase(possibleCollisions[i], a[index])){
                    collisions->push(possibleCollisions[i]);
                };
            };
            for(i = 0; i < collisionsNumber; i++){
                //CODE FOR RESOLUTION
            };
        };

    2. Make the broadphase call the narrowphase, and the narrowphase call the resolution:

        function resolveCollisions(thingyStructure *a, thingyStructure *b, int index){
            broadphase(b, a->get(index));
        };

        function broadphase(thingy *with, thingy *what){
            while(blah){
                //blahcode
                narrowphase(what, collidingThing);
            };
        };

    Events vs. in-the-loop:

    1. Fire an event. This abstracts the check away, but it's trickier to make an equal interaction:

        a[index]->collisionEvent(eventdata);
        //much later
        int collisionEvent(eventdata){
            //resolution goes here
        }

    2. Resolve the collision inside the loop. This glues narrowphase and resolution into one layer:

        if(narrowphase(possibleCollisions[i], a[index])){
            //CODE GOES HERE
        };

    The questions are: which of the first two is better, and how am I supposed to make a zero-sum Newtonian interaction under B1 (the event-based style)?
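
    On the zero-sum point, a sketch of what a symmetric, momentum-conserving resolution could look like if the event carries both bodies, so the pair is resolved once rather than once per object (types and names illustrative):

        struct Body { float vx, vy, invMass; };

        // Equal and opposite impulses along the contact normal (nx, ny),
        // so total momentum is conserved — the zero-sum Newtonian case.
        void resolve(Body &a, Body &b, float nx, float ny)
        {
            float rvx = b.vx - a.vx, rvy = b.vy - a.vy;
            float vn  = rvx * nx + rvy * ny;
            if (vn > 0) return;                              // already separating
            float j = -2.0f * vn / (a.invMass + b.invMass);  // perfectly elastic
            a.vx -= j * a.invMass * nx;  a.vy -= j * a.invMass * ny;
            b.vx += j * b.invMass * nx;  b.vy += j * b.invMass * ny;
        }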

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me to either perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here is to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:

    - Reading 5 private entity variables (fetched from the datastore)
    - Fetching as many as 20 static variables (from the datastore, or persisted in server memory)
    - Writing 5 entity variables

    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore, and would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
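
    For scale, a back-of-the-envelope count (my own arithmetic, assuming one datastore read and one write per entity per update, with the static variables cached in memory):

        10,000 entities × 1,440 updates/day ≈ 14.4M reads + 14.4M writes per day

    Both architectures pay roughly this datastore cost; the choice mainly moves where the per-update CPU work is billed.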

    Read the article

  • MMORPG design for time-limited players

    - by Philipp
    I believe that there is a significant market of players who would enjoy the exploration and interaction aspects of MMORPGs, but simply don't have the time for the endless grinding marathons that are part of the average MMORPG. MMORPGs are all about interaction between players. But when different players have different amounts of time to invest in a game, those with less time to spend will soon lag behind their power-leveling friends and won't be able to interact with them anymore. One way to solve this would be to limit the progress a player can achieve per day, so that it simply doesn't make sense to play more than one or two hours a day. But even the busiest casual players sometimes like to spend a whole Sunday afternoon playing a video game. Just stopping them after two hours would be really frustrating. It also creates pressure to use the daily progress limit every day, because otherwise the player would feel like they are wasting something. This pressure would be detrimental for casual gamers. What else could be done to level the playing field between those players who play 40+ hours a week and those who can't play more than 10?

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-buffers for storing position (R32F), normal (G16R16F), and albedo (ARGB8). I use the sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value.

    1. I want to avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-buffer without the need for a matrix multiplication?
    2. I want to store the normal in view space. All lighting in my engine is currently in world space, and I want to do the lighting in view space to speed up my lighting pass.

    In short, I want an optimized lighting pass for my deferred engine.
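
    One commonly cited alternative for the first point (a sketch of the idea, not something from the question): store only depth, interpolate a view ray from the frustum corners across the full-screen quad, and reconstruct the view-space position with a single multiply. In C++/GLM notation (the shader version is the same arithmetic):

        #include <glm/glm.hpp>

        // eyeRay is the interpolated frustum-corner ray, scaled so eyeRay.z == 1;
        // linearDepth is the view-space depth stored in the G-buffer.
        glm::vec3 reconstructViewPos(float linearDepth, const glm::vec3 &eyeRay)
        {
            return eyeRay * linearDepth; // replaces the inverse view*projection multiply
        }

    Storing the normal in view space then just means applying the view transform's normal matrix in the geometry pass, so the lighting pass never touches world space.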

    Read the article

  • OpenGL fovx question

    - by Nick
    To boil my question down to the simplest form, I fear I am oversimplifying how mat4 perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL, fwiw). With a fovy of 45 and an aspect ratio of 2, I expect to have a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and the actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, nick
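
    For reference, under the usual gluPerspective-style convention the aspect ratio scales the tangent of the half-angle, not the angle itself, which would explain the observation:

        fovx = 2 · atan(aspect · tan(fovy / 2)) = 2 · atan(2 · tan(22.5°)) ≈ 79.3°

        visible x at distance 50:  50 · tan(fovx / 2) ≈ 50 · 0.828 ≈ 41.4

    which matches the reported ~41.1 within measurement error.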

    Read the article

  • Deferred Rendering With Diffuse, Specular, and Normal Maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model (which can be found here) as my sandbox. Note that I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp, and I extract all geometry information including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file:

    - Ambient/Diffuse/Specular/Emissive reflectivity coefficients (Ka, Kd, Ks, Ke)
    - Shininess
    - Diffuse map
    - Specular map
    - Normal map

    I understand that I must render vertex attributes such as position, normals, and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a G-buffer in the initial render pass, but do you not require the diffuse, specular, and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents, and normals into G-buffers, and then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.
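
    For orientation, a common arrangement (an illustration of typical practice, not the only layout) is to sample the diffuse, specular, and normal maps in the geometry pass and store only their results, so the lighting pass needs no material textures at all:

        struct GBufferPixel {            // one common layout, names illustrative
            glm::vec3 albedo;    // diffuse map sample                   (RT0.rgb)
            float     specular;  // specular map sample                  (RT0.a)
            glm::vec3 normal;    // normal map applied via the TBN matrix (RT1.rgb)
            float     shininess; // material shininess                   (RT1.a)
            // position is reconstructed from the depth buffer in the light pass
        };

    The tangent and bitangent are consumed in the geometry pass when building the TBN matrix; they normally do not need to be written to the G-buffer themselves.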

    Read the article

  • How can I make an infinite cave using stage3d?

    - by ifree
    I want to make an infinite cave in my 3D game using Flash Stage3D, but I have no idea how to build that cave. Can anyone give me a solution or a hint?

    Update: I've tried an AGAL fragment shader like the square tunnel on Shadertoy. Code:

        var fragmentProgramCode:String = AGALUtils.build()
            .mov("ft0","v0")
            .div("ft1","ft0.xy","fc3.xy")
            .mul("ft2","fc6.x","ft1")
            .sub("ft3","ft2","fc5.x")        //vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
            .mul("ft1","ft3.x","ft3.x")
            .mul("ft2","ft3.y","ft3.y")
            .pow("ft4","ft1","fc6.z")        //float r = pow( pow(p.x*p.x,16.0) + pow(p.y*p.y,16.0), 1.0/32.0 );
            .pow("ft5","ft2","fc6.z")
            .add("ft1","ft4","ft5")
            .pow("ft4","ft1","fc6.w")
            .mov("ft5","fc5")                //uv
            .sub("ft1","fc7.x","ft4")
            .add("ft5.x","fc7.x","ft1")      //uv.x = .5*time + 0.5/r;
            .mov("ft6","fc0")                //for atan
            .atan2("ft5.y","ft3.y","ft3.x",
                   new <String>["fc7.y","fc5.x","fc7.z","fc7.w","fc8.x",
                                "fc8.y","fc8.z","fc8.w","fc9.x","fc9.y"],"ft6")
            .tex("ft0","ft5","fs0","repeat","linear","nomip") //tex
            .mul("ft1","ft4","ft4")
            .mul("ft2","ft1","ft4")          //r*r*r
            .mul("ft1","ft0.xyz","ft2")
            .mov("ft0.w","fc5.x")
            .mov("oc","ft1").toString()

    It can only apply one material, but my project requires different types of material (like floor and ceiling), so I created a 3D model. Is there any way to make that 3D model render like an "infinite cave"? Should I use AGAL to make each side of the cave's texture move? Thanks for your help.

    Read the article

  • Good 2D Platformer Physics

    - by Joe Wreschnig
    I have a basic character controller set up for a 2D platformer with Box2D, and I'm starting to tweak it to try to make it feel good. Physics engines have a lot of knobs to tweak, and it's not clear to me, writing with a physics engine for the first time, which ones I should use. Should jumping apply a force for several ticks? An impulse? Directly set velocity? How do I stop the avatar from sticking to walls without taking away all its friction (or do I take away all the friction, but only in the air)? Should I model the character as a capsule? A box with rounded corners? A box with two wheels? Just one big wheel? I feel like someone must have done this before!

    There seem to be very few resources available on the web that are not "baby's first physics", which all cut off where I'm hoping someone has already solved the issues. Most examples of physics engines for platformers have floaty-feeling controls, or in-air jumps, or easily exploitable behavior when temporary penetration is too high, etc. Some examples of what I mean:

    - A short tap of jump jumps a short distance; a long tap jumps higher.
    - Short skidding when stopping or reversing directions at high velocity.
    - Standing stably on inclines (but maybe sliding down them when ducking).
    - Analog speed when using an analog controller.
    - All the other things that separate good platformers from bad platformers.
    - Dare I suggest, stable moving platforms?

    I'm not really looking for "hey, do this." Obviously, the right thing to do is dependent on what I want in the game. But I'm hoping someone somewhere has gone through the possibilities and said "well, technique A does feature X well, technique B does Y well, but that doesn't work with C", or has some worked examples beyond "if (key == space) character.impulse(0, 1)".
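
    As one concrete data point for the first item in that list, a widely used variable-height jump (a sketch, not a recommendation over the alternatives the question names):

        struct Avatar { float vy = 0; bool onGround = false; };

        const float JUMP_SPEED   = 12.0f;
        const float RELEASE_DAMP = 0.5f; // fraction of upward speed kept on early release

        void onJumpPressed(Avatar &a)  { if (a.onGround) { a.vy = JUMP_SPEED; a.onGround = false; } }
        void onJumpReleased(Avatar &a) { if (a.vy > 0) a.vy *= RELEASE_DAMP; } // cuts the jump short

    A short tap releases while still rising, so the upward velocity is cut and the arc stays low; holding the button lets the full impulse play out.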

    Read the article

  • DX11 - Weird shader behavior with and without branching

    - by Martin Perry
    I have found a problem in my shader code which I don't know how to solve. I want to rewrite this code without "if"s:

        tmp = evaluate;  // result is 0 or 1 (nothing else)
        if (tmp == 1) val = X1;
        if (tmp == 0) val = X2;

    I rewrote it this way, but this piece of code doesn't work correctly:

        tmp = evaluate;  // result is 0 or 1 (nothing else)
        val = tmp * X1;
        val = !tmp * X2;

    However, if I change it to:

        tmp = evaluate;  // result is 0 or 1 (nothing else)
        val = tmp * X1;
        if (!tmp) val = !tmp * X2;

    it works fine... but it is useless because of the "if", which needs to be eliminated. I honestly don't understand it. I tried compilation with no optimization and with full optimization; the result is the same.
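
    For what it's worth, the second snippet's last line overwrites val instead of combining the two terms, which would explain the behavior; the usual branch-free select folds both into one expression (a sketch, with tmp assumed to be exactly 0 or 1):

        val = tmp * X1 + (1 - tmp) * X2;
        // equivalently, with the lerp intrinsic: val = lerp(X2, X1, tmp);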

    Read the article

  • Fixed timestep with interpolation in AS3

    - by Jim Sreven
    I'm trying to implement Glenn Fiedler's popular fixed timestep system, as documented here: http://gafferongames.com/game-physics/fix-your-timestep/, in Flash. I'm fairly sure that I've got it set up correctly, along with state interpolation. The result is that if my character is supposed to move at 6 pixels per frame, 35 frames per second = 210 pixels a second, it does exactly that, even if the framerate climbs or falls. The problem is it looks awful. The movement is very stuttery and just doesn't look good. I find that the amount of time between ENTER_FRAME events, which I'm adding onto my accumulator, averages out to 28.5ms (1000/35), just as it should, but individual frame times vary wildly: sometimes an ENTER_FRAME event will come 16ms after the last, sometimes 42ms. This means that at each graphical redraw the character graphic moves by a different amount, because a different amount of time has passed since the last draw. In theory it should look smooth, but it doesn't at all. In contrast, if I just use the ultra-simple system of moving the character 6px every frame, it looks completely smooth, even with these large variances in frame times. How can this be possible? I'm using getTimer() to measure these time differences; are they even reliable?
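
    For reference, the interpolation step from Fiedler's article looks like this in outline (names are placeholders; the blend happens at render time, not in the physics update):

        // after the accumulator loop: 0 <= alpha < 1 is the leftover fraction
        double alpha = accumulator / dt;                  // dt = fixed step, e.g. 1000.0 / 35.0
        double renderX = prevX + (currX - prevX) * alpha; // draw at the blended position

    If the drawn position comes straight from the current physics state instead of this blend of the two most recent states, variable frame times show up as exactly the stutter described.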

    Read the article

  • Mac mini 2012 graphics upgrade for UE4, Unity3D, Blender

    - by DaCrAn
    I have a Mac mini (late 2012): i7, 16 GB Vengeance RAM, Intel HD 4000 integrated graphics. I recently bought a Thunderbolt PCIe expansion chassis that supports a PCIe 2.0 x16 graphics card, with space for a full-length card. I have doubts about which graphics card will give me the best results for using Unreal Engine 4 (UE4) or Unity3D, and Blender. My budget covers an Nvidia Quadro K4000 3 GB or an ATI FirePro W7000 4 GB. Any recommendations? Which professional graphics card would be better for designing 3D games? Thanks. DaCrAn

    Read the article

  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue in color because the z component maps to blue, and since normals point out of the surface in the z direction, we see blue as the predominant component. If the above is true, then why aren't normal maps just one single color, i.e. blue, without any other shades (not even shades of blue)? Since by definition tangent space is perpendicular to the normal at any point, the normal should always point in the Z (blue) direction, with no X (red component) and no Y (green component). Thus the normal map (since it is a "normal map") should have the color of the normals, which is just blue (Z = blue component = 1, R = 0, G = 0), and the normal map should be only blue, with no shades in between. But normal maps are not like that; they have gradients of shades in them. Why is this so?
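
    For reference, the standard encoding maps each component of the unit normal from [-1, 1] to a color channel in [0, 1]:

        rgb = n · 0.5 + 0.5
        flat surface   n = (0, 0, 1)        →  (0.5, 0.5, 1.0)   — the characteristic light blue
        tilted surface n = (0.7, 0, 0.714)  →  (0.85, 0.5, 0.857) — a different shade

    So a single solid blue would encode a perfectly flat surface; the shades are precisely the per-texel deviations from the surface normal that the map exists to store.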

    Read the article

  • SFML - Moving a sprite on mouseclick

    - by Mike
    I want to be able to move a sprite from a current location to another based upon where the user clicks in the window. This is the code that I have:

        #include <SFML/Graphics.hpp>

        int main()
        {
            // Create the main window
            sf::RenderWindow App(sf::VideoMode(800, 600), "SFML window");

            // Load a sprite to display
            sf::Texture Image;
            if (!Image.LoadFromFile("cb.bmp"))
                return EXIT_FAILURE;
            sf::Sprite Sprite(Image);

            // Define the speed of the sprite
            float spriteSpeed = 200.f;

            // Start the game loop
            while (App.IsOpened())
            {
                if (sf::Keyboard::IsKeyPressed(sf::Keyboard::Escape))
                    App.Close();

                if (sf::Mouse::IsButtonPressed(sf::Mouse::Right))
                {
                    Sprite.SetPosition(sf::Mouse::GetPosition(App).x, sf::Mouse::GetPosition(App).y);
                }

                // Clear screen
                App.Clear();

                // Draw the sprite
                App.Draw(Sprite);

                // Update the window
                App.Display();
            }

            return EXIT_SUCCESS;
        }

    But instead of just setting the position, I want to use Sprite.Move() and gradually move the sprite from one position to another. The question is how? Later I plan on adding a node system to each map so I can use Dijkstra's algorithm, but I'll still need this for moving between nodes.
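
    A minimal sketch of the gradual movement (my own example, not SFML documentation; targetX/targetY are stored when the click happens, and dt is the seconds elapsed since the last frame):

        float dx = targetX - Sprite.GetPosition().x;
        float dy = targetY - Sprite.GetPosition().y;
        float dist = std::sqrt(dx * dx + dy * dy);
        float step = spriteSpeed * dt;
        if (dist <= step)
            Sprite.SetPosition(targetX, targetY);            // close enough: snap to target
        else
            Sprite.Move(dx / dist * step, dy / dist * step); // advance along the direction

    The same per-frame step works unchanged for following a path of Dijkstra nodes: when one node is reached, swap in the next node as the target.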

    Read the article

  • HTML5 ping pong game side collision problem

    - by Gurjit
    I am making a simple ping pong game, and I am facing a side collision problem: when the ball collides with either side of the paddle, my handling fails. I have written code that is meant to handle it, but something is wrong, and hitting the ball with the side face of the paddle causes problems. I would appreciate suggestions on how to fix or avoid this. Here is the main part of the code causing the problem:

        function checkCollision() {
            // Collision detection for the upper part
            if (cy + radius >= paddleTop &&
                cx + radius > paddleLeft &&
                cy + radius >= paddleTop + 5 &&
                cx - radius <= paddleLeft + paddleWidth) {
                dy = -dy;
                ++hits; // on collision we increase the score
                playSound();
            } else if (cy + radius >= paddleTop &&
                       cy + radius <= paddleTop + paddleHeight &&
                       cx + radius >= paddleLeft &&
                       cy - radius <= paddleLeft - (radius + 1)) {
                dx = -dx;
            }
        }

    Here is a working fiddle for it: http://jsfiddle.net/gurjitmehta/orzpzf69/
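
    One way to avoid special-casing the sides entirely (a sketch in C-style notation, reusing the question's variable names; the same logic ports directly to JavaScript): do a circle-vs-rectangle test against the closest point on the paddle, then bounce on whichever axis the ball penetrated least:

        float closestX = fmaxf(paddleLeft, fminf(cx, paddleLeft + paddleWidth));
        float closestY = fmaxf(paddleTop,  fminf(cy, paddleTop  + paddleHeight));
        float ddx = cx - closestX, ddy = cy - closestY;
        if (ddx * ddx + ddy * ddy <= radius * radius) {
            if (fabsf(ddx) > fabsf(ddy)) dx = -dx;  // hit a side face
            else                         dy = -dy;  // hit top or bottom
        }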

    Read the article

  • Power Distribution amongst connected nodes

    - by Perky
    In my game the map is represented by connected nodes; each node has a number of connected nodes. The nodes represent a system in which players can build structures and move units about. If you're familiar with Sins of a Solar Empire, the game map is very similar. I want each node to be able to produce power and share it with all connected nodes. For example, if A, B, C & D are all connected and each produces 100 power units, then each system should have 400 power units available. If node B builds a structure that consumes 100 power units, then A, B, C & D should have 300 power units available.

    I've been working on this system all day and haven't been able to get it working quite the way I want. My current implementation is to first recurse through each node's connected nodes, adding up the power; I keep a list of closed nodes so it doesn't loop. It's quite similar to A*, actually. Pseudocode: all nodes start with the properties

        node.power = 0
        node.basePower = 100   // could be different for each node
        node.initialPower = node.basePower

        function propagatePower( node, initialPower, closedNodes )
            node.power += initialPower
            add( closedNodes, node )
            connectedNodes = connected_nodes_except_from( closedNodes )
            foreach node in connectedNodes do
                propagatePower( node, initialPower, closedNodes )
            end
        end

    After this I iterate through all power consumers:

        foreach consumer in consumers do
            node = consumer.parentNode
            if node.power >= consumer.powerConsumption then
                consumer.powerConsumed += consumer.powerConsumption
                node.producedPower -= consumer.powerConsumption
            end
        end

    Then I adjust the initial power for the next propagation cycle:

        foreach node in nodes do
            node.initialPower = node.basePower - node.producedPower
            node.displayPower = node.power   // for rendering the power
            node.power = 0
        end

    This seemed to work at first, but then I ran into a problem. Say two nodes A & B produce 100 Pu each; it's shared, so both A & B have 200 Pu. I then build two structures that consume 80 Pu each on A (160 Pu total). The node's power is adjusted to basePower - producedPower (100 - 160 = -60). Power is propagated, and both nodes now have 40 Pu (A: -60 + B: 100 = 40), which is correct, because they started with 200 Pu - 160 Pu = 40 Pu. However, now node.power >= consumer.powerConsumption is false. What's worse, it's false for any structure that uses more than 40 Pu, so the whole system goes down. I could deduct from consumer.powerConsumption, but what do I do if power is reduced elsewhere? I don't have the correct data to perform the necessary checks. It's late, so I'm probably not thinking straight, but I thought I'd ask on here to see if anyone has any other implementations; better or worse, I'd be interested to know.
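
    An alternative worth comparing against (a sketch, not the game's code): treat each connected component as a single pool, so production and consumption are netted in one place and the negative-feedback problem never arises:

        #include <vector>

        struct Node { float basePower; std::vector<float> consumerDraws; };

        // Returns the remaining shared power, shown identically on every node.
        float resolvePool(const std::vector<Node*> &component)
        {
            float pool = 0;
            for (const Node *n : component) pool += n->basePower;   // total production
            for (const Node *n : component)
                for (float draw : n->consumerDraws)
                    if (draw <= pool) pool -= draw;                 // consumer powered
                    // else: this consumer stays unpowered this tick
            return pool;
        }

    With A and B producing 100 Pu each and two 80 Pu structures, the pool goes 200 → 120 → 40, and any further consumer needing ≤ 40 Pu still works.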

    Read the article

  • (libgdx) Button doesn't work

    - by StercoreCode
    In the game I switch to a StopScreen. On this screen a button is displayed, but if I click it, nothing happens. What I expect: when I press the button, it should restart the game; at this stage it should at least display a message that the button was pressed. I tried creating a new, clean project whose main class implements ApplicationListener, put the same code in the appropriate methods, and it works! But if I create this button in my game, it doesn't work. When I play and go to the StopScreen, I see the button, but if I click or touch it, nothing happens. I think the problem is with the InputListener, although I set the stage as the InputProcessor:

        Gdx.input.setInputProcessor(stage);

    I also tried adding a ClickListener to the button, but that gave no results. Or maybe the problem is that I implement the Screen interface, not ApplicationListener or Game? But if StopScreen implemented ApplicationListener, I couldn't call setScreen from the main game. The question remains: why does the button display, but nothing happens when it is pressed? Here is the code of StopScreen, if it helps to find my mistake:

        public class StopScreen implements Screen {
            private OrthographicCamera camera;
            private SpriteBatch batch;
            public Stage stage;                 //** stage holds the Button **//
            private BitmapFont font;            //** same as that used in Tut 7 **//
            private TextureAtlas buttonsAtlas;  //** image of buttons **//
            private Skin buttonSkin;            //** images are used as skins of the button **//
            public TextButton button;           //** the button - the only actor in program **//

            public StopScreen(CurrusGame currusGame) {
                camera = new OrthographicCamera();
                camera.setToOrtho(false, 800, 480);
                batch = new SpriteBatch();
                buttonsAtlas = new TextureAtlas("button.pack"); //** button atlas image **//
                buttonSkin = new Skin();
                buttonSkin.addRegions(buttonsAtlas);            //** skins for on and off **//
                font = AssetLoader.font;                        //** font **//
                stage = new Stage();
                stage.clear();
                Gdx.input.setInputProcessor(stage);

                TextButton.TextButtonStyle style = new TextButton.TextButtonStyle();
                style.up = buttonSkin.getDrawable("ButtonOff");
                style.down = buttonSkin.getDrawable("ButtonOn");
                style.font = font;

                button = new TextButton("PRESS ME", style); //** Button text and style **//
                button.setPosition(100, 100);               //** Button location **//
                button.setHeight(100);                      //** Button Height **//
                button.setWidth(100);                       //** Button Width **//
                button.addListener(new InputListener() {
                    public boolean touchDown(InputEvent event, float x, float y, int pointer, int button) {
                        Gdx.app.log("my app", "Pressed");
                        return true;
                    }

                    public void touchUp(InputEvent event, float x, float y, int pointer, int button) {
                        Gdx.app.log("my app", "Released");
                    }
                });
                stage.addActor(button);
            }

            @Override
            public void render(float delta) {
                Gdx.gl.glClearColor(0, 1, 0, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                stage.act();
                batch.setProjectionMatrix(camera.combined);
                batch.begin();
                stage.draw();
                batch.end();
            }
        }
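
    One detail worth double-checking (an observation about the snippet, not a confirmed diagnosis): Stage owns its own SpriteBatch, so stage.draw() is normally called on its own rather than between batch.begin() and batch.end() of a second batch, and stage.act() is usually given the frame delta as stage.act(delta). Neither point is guaranteed to explain the dead button, but both differ from the usual scene2d setup.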

    Read the article

  • Lock mouse in center of screen, and still use it to move camera (Unity)

    - by Flotolk
    I am making a program with a first-person point of view. I would like the camera to be moved using the mouse, preferably using simple code, like this from XNA:

        var center = this.Window.ClientBounds;
        MouseState newState = Mouse.GetState();
        if (Keyboard.GetState().IsKeyUp(Keys.Escape))
        {
            Mouse.SetPosition((int)center.X, (int)center.Y);
            camera.Rotation -= (newState.X - center.X) * 0.005f;
            camera.UpDown += (newState.Y - center.Y) * 0.005f;
        }

    Is there any code that lets me do this in Unity? Since Unity does not support XNA, I need a new library to use and a new way to collect this input. This is also a little tougher, since I want one object to go up and down based on whether you move the mouse up and down, and another object to be the one turning left and right. I am also very concerned about clamping the mouse to the center of the screen, since you will be selecting items, and it is easiest to have simple cross-hairs in the center of the screen for this purpose. Here is the code I am using to move right now:

        using UnityEngine;
        using System.Collections;

        [AddComponentMenu("Camera-Control/Mouse Look")]
        public class MouseLook : MonoBehaviour
        {
            public enum RotationAxes { MouseXAndY = 0, MouseX = 1, MouseY = 2 }
            public RotationAxes axes = RotationAxes.MouseXAndY;
            public float sensitivityX = 15F;
            public float sensitivityY = 15F;
            public float minimumX = -360F;
            public float maximumX = 360F;
            public float minimumY = -60F;
            public float maximumY = 60F;
            float rotationY = 0F;

            void Update()
            {
                if (axes == RotationAxes.MouseXAndY)
                {
                    float rotationX = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivityX;
                    rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
                    rotationY = Mathf.Clamp(rotationY, minimumY, maximumY);
                    transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);
                }
                else if (axes == RotationAxes.MouseX)
                {
                    transform.Rotate(0, Input.GetAxis("Mouse X") * sensitivityX, 0);
                }
                else
                {
                    rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
                    rotationY = Mathf.Clamp(rotationY, minimumY, maximumY);
                    transform.localEulerAngles = new Vector3(-rotationY, transform.localEulerAngles.y, 0);
                }

                while (Input.GetKeyDown(KeyCode.Space) == true)
                {
                    Screen.lockCursor = true;
                }
            }

            void Start()
            {
                // Make the rigid body not change rotation
                if (GetComponent<Rigidbody>())
                    GetComponent<Rigidbody>().freezeRotation = true;
            }
        }

    This code does everything except lock the mouse to the center of the screen. Screen.lockCursor = true; does not work, though, since then the camera no longer moves, and the cursor does not allow you to click anything else either.

    Read the article

  • How to get warnings when compiling fx files

    - by jdv-Jan de Vaan
    When I compile DirectX shaders (.fx files), I don't see any compiler warnings unless there was an error in the effect. This happens both when using the offline FXC compiler and when calling SlimDX's CompileEffect (which is what we normally do). I could force warnings to be treated as errors (/WX), but if you enable that, you get an error that compilation failed, without the warning that caused the problem. So how can I output warnings for shaders that compile properly?
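
    One avenue that may help (an assumption about the D3DX9-era API, worth verifying): the compilation-errors buffer can contain warning text even when the call succeeds, so checking it unconditionally would surface warnings without /WX. A sketch in C++:

        ID3DXBuffer *errors = NULL;
        ID3DXEffect *effect = NULL;
        HRESULT hr = D3DXCreateEffectFromFile(device, L"shader.fx", NULL, NULL,
                                              0, NULL, &effect, &errors);
        if (errors) {                          // may be present on warnings as well as errors
            OutputDebugStringA((const char *)errors->GetBufferPointer());
            errors->Release();
        }

    If SlimDX exposes the same buffer through its compile results, inspecting it on success as well would be the managed equivalent.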

    Read the article

  • How to have operations with character/items in binary with concrete operations?

    - by Piperoman
    I have the following problem. An item can have a lot of states:

        NORMAL   = 0000000
        DRY      = 0000001
        HOT      = 0000010
        BURNING  = 0000100
        WET      = 0001000
        COLD     = 0010000
        FROZEN   = 0100000
        POISONED = 1000000

    An item can have several states at the same time, but not all of them:

    - It is impossible to be DRY and WET at the same time.
    - If you apply COLD to a WET item, it turns into FROZEN.
    - If you apply HOT to a WET item, it turns into NORMAL.
    - An item can be BURNING and POISONED.
    - Etc.

    I have tried to set binary flags for the states and use AND to combine different states, checking beforehand whether the combination is possible or whether it changes into another state. Does there exist a concrete approach to solve this problem efficiently, without an interminable switch that checks every state against every new state? It is relatively easy to check 2 different states, but if a third state is involved, it is not trivial to do.
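
    One concrete approach (a sketch of a data-driven rule table; all names illustrative): encode exclusions and reactions as bitmask rules and scan a small table instead of switching over every pair:

        #include <cstdint>

        enum State : uint8_t {
            DRY = 1 << 0, HOT = 1 << 1, BURNING = 1 << 2, WET = 1 << 3,
            COLD = 1 << 4, FROZEN = 1 << 5, POISONED = 1 << 6
        };

        struct Rule { uint8_t applied, present, cleared, added; };

        const Rule rules[] = {
            { COLD, WET, WET, FROZEN },  // COLD on a WET item -> FROZEN
            { HOT,  WET, WET, 0      },  // HOT on a WET item  -> NORMAL
            { WET,  DRY, DRY, WET    },  // WET replaces DRY (mutually exclusive)
        };

        uint8_t apply(uint8_t state, uint8_t incoming)
        {
            for (const Rule &r : rules)
                if (incoming == r.applied && (state & r.present))
                    return (state & ~r.cleared) | r.added; // rule consumed the input
            return state | incoming;                       // default: just set the flag
        }

    Adding a new interaction is then one table row rather than another branch in a switch, and three-state chains fall out of repeated application.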

    Read the article

  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure as to some of the implementation. I have done file-I/O maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think heightmaps might be a good approach, but I don't know how to represent them effectively.

    For a heightmap, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item 1 is the key reference, item 2 is the elevation), or another approach that I have not thought of yet?

    When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make slopes noticeable at about a 60-degree angle, while not really affecting gravity-driven physics (assuming some effect while moving up/downhill)?

    I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing with thin walls between open spaces is better? This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls.

    EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation that the player has to negotiate. Elevation will have effects on "combat" vision and movement.
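
    On the delta-h question, a worked number (plain trigonometry, assuming "60 degrees" means the slope of the terrain between adjacent samples): for grid cells of width w,

        Δh = w · tan(60°) ≈ 1.73 · w   per cell

    so with, say, 1-unit cells, neighboring samples would differ by about 1.73 units to read as a 60-degree incline.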

    Read the article

  • Importing FBX with multiple meshes in UDK

    - by andresp
    I need to import into UDK a number of FBX models (representing buildings) which are composed of various submeshes (walls, windows, roof...). I need to keep the individual meshes (I can't use the merge option), but I also need to work with the building as a whole. Do you know if this is possible? How? Also, is there a way to keep the texture assignments for the FBX models after importing them into Unreal? Doing the process manually (importing the model, importing the texture, assigning it to the material, assigning the material to each mesh and submesh) for 100 or 200 models (to import an entire city from CityEngine) isn't viable.

    Read the article

  • Finding direction of travel in a world with wrapped edges

    - by crazy
    I need to find the shortest-distance direction from one point in my 2D world to another point, where the edges are wrapped (like Asteroids etc.). I know how to find the shortest distance but am struggling to find which direction it's in. The shortest distance is given by:

        int rows = MapY;
        int cols = MapX;
        int d1 = abs(S.Y - T.Y);
        int d2 = abs(S.X - T.X);
        int dr = min(d1, rows - d1);
        int dc = min(d2, cols - d2);
        double dist = sqrt((double)(dr*dr + dc*dc));

    Example of the world (edges are shown with : and -; a wrapped repeat of the world is shown at the top right):

        :              :    T
        :--------------:---------
        :              :
        :      S       :
        :              :
        :   T          :
        :              :
        :--------------:

    In the diagram, the shortest path is to the top-right repeat of T, but how do I calculate the direction in degrees from S to that repeated T? I know the positions of both S and T, but I suppose I need to find the position of the repeated T; however, there is more than one. The world's coordinate system starts at 0,0 at the top left, and 0 degrees for the direction could start at west. It seems like this shouldn't be too hard, but I haven't been able to work out a solution. I hope someone can help; any websites would be appreciated.
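
    A sketch of one way to get the direction without locating the repeated T explicitly (the signed wrapped delta on each axis is enough):

        #include <cmath>

        // Signed shortest delta on a wrapped axis of the given size.
        double wrappedDelta(double from, double to, double size)
        {
            double d = to - from;
            if (d >  size / 2) d -= size;  // going the other way is shorter
            if (d < -size / 2) d += size;
            return d;
        }

        // 0 degrees = east here; add an offset if 0 should start at west.
        double angleDeg = std::atan2(wrappedDelta(S.Y, T.Y, MapY),
                                     wrappedDelta(S.X, T.X, MapX)) * 180.0 / M_PI;

    The two wrapped deltas have the same magnitudes as the dr/dc already used for the distance; keeping their signs is what yields the direction.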

    Read the article

  • Creating a curved mesh on inside of sphere based on texture image coordinates

    - by user5025
    In Blender, I have created a sphere with a panoramic texture on the inside. I have also manually created a plane mesh (curved to match the size of the sphere) that sits on the inside wall, where I can draw a different texture. This is great, but I really want to reduce the manual labor and do some of this work in a script -- with a variable for the panoramic image, and the coordinates of the area in the photograph that I want to replace with a new mesh. The hardest part is going to be creating a curved mesh in code that can sit on the inside wall of a sphere. Can anyone point me in the right direction?
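
    A sketch of the underlying math (C++-style notation for illustration; the same loops translate directly to a Blender script): for an equirectangular panorama, image coordinates map linearly to longitude/latitude, so a patch covering a chosen image rectangle can be generated by sweeping spherical coordinates:

        #include <cmath>
        #include <vector>

        struct V3 { float x, y, z; };

        // theta = longitude, phi = latitude; use a radius slightly smaller than
        // the sphere's so the patch sits just inside its wall.
        std::vector<V3> spherePatch(float radius, float theta0, float theta1,
                                    float phi0, float phi1, int nu, int nv)
        {
            std::vector<V3> verts;
            for (int j = 0; j <= nv; ++j)
                for (int i = 0; i <= nu; ++i) {
                    float t = theta0 + (theta1 - theta0) * i / nu;
                    float p = phi0   + (phi1   - phi0)   * j / nv;
                    verts.push_back({ radius * std::cos(p) * std::cos(t),
                                      radius * std::sin(p),
                                      radius * std::cos(p) * std::sin(t) });
                }
            return verts;
        }

    With u, v in [0, 1] on the panorama, theta = 2π·u and phi = π·(v − 0.5) pick out the region of the sphere corresponding to the chosen image rectangle.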

    Read the article

  • What is UVIndex and how do I use it on OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL), and I'm trying to draw a simple model I've made with a 3D tool and exported to the .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer, and a texture coordinate buffer, but this model now has a "UVIndex" and I'm not sure where I am supposed to put it. My code looks like this:

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);
        GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
        GL.activeTexture(GL.TEXTURE0);
        GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file, since I've never drawn a model that uses one and I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
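
    For context (background on how FBX data is commonly structured, not a reading of this particular file): FBX can index each attribute separately, so UVs may come with their own index stream, while GL's drawElements uses a single index for all attributes. The usual fix is to expand the separately indexed data into one combined vertex stream; a sketch in C++-style pseudocode (names illustrative):

        // positions/uvs are the raw arrays; positionIndex/uvIndex are the
        // per-corner index streams from the FBX file.
        for (size_t i = 0; i < positionIndex.size(); ++i) {
            Vertex v;
            v.position = positions[positionIndex[i]];
            v.uv       = uvs[uvIndex[i]];   // this is where the UVIndex is consumed
            outVertices.push_back(v);       // element indices become 0, 1, 2, ...
        }

    Ignoring the UV index stream and addressing the texture coordinates with the position indices would scramble the mapping, which matches the "totally incorrect" rendering described.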

    Read the article
