Search Results

Search found 24037 results on 962 pages for 'game design'.


  • Rotate 3D Model from a custom position

    - by Nipuna Silva
    I have a 3D model like the one above, which I want to rotate around a given location (marked in red), but I can only rotate it around its middle. How can I rotate it around a custom point? Edit: I was able to rotate the model around the desired position by taking the model's radius and applying it to the world matrix:

        Vector3 point = new Vector3(-radius, 0, 0);
        world = Matrix.CreateTranslation(-radius, 0, 0);

    But now I cannot change the position of the object, and it is always centered in the middle of the screen. I think that's because of the code above. How can I place it anywhere I want?
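
    A minimal sketch of the usual fix, in XNA-style C#: compose the pivot shift, the rotation, and the final placement into one world matrix, so the rotation happens about the custom point while the position stays free. The pivot, axis, and position values here are illustrative assumptions, not the asker's scene data.

        using Microsoft.Xna.Framework;

        // Sketch (XNA 4.0): rotate a model about an arbitrary pivot given in
        // model space, then place it anywhere in the world.
        static Matrix BuildWorld(Vector3 pivot, float angle, Vector3 worldPosition)
        {
            return Matrix.CreateTranslation(-pivot)         // move the pivot to the origin
                 * Matrix.CreateRotationY(angle)            // rotate about the pivot
                 * Matrix.CreateTranslation(pivot)          // restore the model's local placement
                 * Matrix.CreateTranslation(worldPosition); // place the model freely in the world
        }

    Because the placement is its own translation at the end, moving the model no longer fights the pivot correction.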

    Read the article

  • Lock mouse in center of screen, and still use it to move camera in Unity

    - by Flotolk
    I am making a program with a first-person point of view. I would like the camera to be moved using the mouse, preferably with simple code like this XNA snippet:

        var center = this.Window.ClientBounds;
        MouseState newState = Mouse.GetState();
        if (Keyboard.GetState().IsKeyUp(Keys.Escape))
        {
            Mouse.SetPosition((int)center.X, (int)center.Y);
            camera.Rotation -= (newState.X - center.X) * 0.005f;
            camera.UpDown += (newState.Y - center.Y) * 0.005f;
        }

    Is there any code that lets me do this in Unity? Since Unity does not support XNA, I need a new library and a new way to collect this input. It is also a little tougher here, since I want one object to pitch up and down when the mouse moves up and down, and another object to do the turning left and right. I am also very concerned about clamping the mouse to the center of the screen, since the player will be selecting items, and it is easiest to keep a simple cross-hair in the center of the screen for that purpose. Here is the code I am using to move right now:

        using UnityEngine;
        using System.Collections;

        [AddComponentMenu("Camera-Control/Mouse Look")]
        public class MouseLook : MonoBehaviour
        {
            public enum RotationAxes { MouseXAndY = 0, MouseX = 1, MouseY = 2 }
            public RotationAxes axes = RotationAxes.MouseXAndY;
            public float sensitivityX = 15F;
            public float sensitivityY = 15F;
            public float minimumX = -360F;
            public float maximumX = 360F;
            public float minimumY = -60F;
            public float maximumY = 60F;
            float rotationY = 0F;

            void Update()
            {
                if (axes == RotationAxes.MouseXAndY)
                {
                    float rotationX = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivityX;
                    rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
                    rotationY = Mathf.Clamp(rotationY, minimumY, maximumY);
                    transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);
                }
                else if (axes == RotationAxes.MouseX)
                {
                    transform.Rotate(0, Input.GetAxis("Mouse X") * sensitivityX, 0);
                }
                else
                {
                    rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
                    rotationY = Mathf.Clamp(rotationY, minimumY, maximumY);
                    transform.localEulerAngles = new Vector3(-rotationY, transform.localEulerAngles.y, 0);
                }

                // GetKeyDown's value cannot change within a frame, so test it
                // with "if" rather than looping on it
                if (Input.GetKeyDown(KeyCode.Space))
                {
                    Screen.lockCursor = true;
                }
            }

            void Start()
            {
                // make the rigidbody not change rotation
                if (GetComponent<Rigidbody>())
                    GetComponent<Rigidbody>().freezeRotation = true;
            }
        }

    This code does everything except lock the mouse to the center of the screen. Screen.lockCursor = true; does not work, though: once the cursor is locked, the camera no longer moves, and the cursor can no longer click anything else either.
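
    For reference, a minimal sketch of the usual Unity pattern: lock the cursor once and keep reading Input.GetAxis, which still reports mouse deltas while the cursor is pinned to the center, with yaw and pitch applied to two separate transforms as described above. Field names are illustrative; on Unity 5 and later the lock API is Cursor.lockState rather than Screen.lockCursor.

        using UnityEngine;

        // Sketch: cursor locked to the screen center (for a fixed cross-hair),
        // yaw on one object, pitch on another.
        public class LockedMouseLook : MonoBehaviour
        {
            public Transform yawTarget;    // e.g. the player body (turns left/right)
            public Transform pitchTarget;  // e.g. the camera (tilts up/down)
            public float sensitivity = 2f;
            float pitch;

            void Start()
            {
                Cursor.lockState = CursorLockMode.Locked; // older Unity: Screen.lockCursor = true;
                Cursor.visible = false;                   // a HUD cross-hair replaces it
            }

            void Update()
            {
                // while locked, the mouse axes still report movement deltas
                yawTarget.Rotate(0f, Input.GetAxis("Mouse X") * sensitivity, 0f);
                pitch = Mathf.Clamp(pitch - Input.GetAxis("Mouse Y") * sensitivity, -60f, 60f);
                pitchTarget.localEulerAngles = new Vector3(pitch, pitchTarget.localEulerAngles.y, 0f);
            }
        }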

    Read the article

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (P_i), i = 0..N, where each P_i is linked to its direct neighbours P_(i-1) and P_(i+1). The goal: perform efficient collision detection between any two non-adjacent links, (P_i, P_(i+1)) vs. (P_j, P_(j+1)). The question: works on this subject consistently recommend using a broad phase for collision detection and implementing it via a bounding volume hierarchy. For a chain made of the nodes P_i, it can look like this: I imagine the big blue sphere containing all links, the green ones half of them, the red ones a quarter, and so on (the picture is not accurate, but it should help in understanding the question). What I do not understand is: how can such a hierarchy speed up computations between segment collision pairs if, for a deformable linear object such as a chain or wire, it has to be updated every frame? More clearly: what is the actual principle of a broad phase in this particular case? How can it work when computing the bounding spheres is itself a time-consuming task and, since the geometry changes, has to be redone on every frame update? I think I am missing a key point: if we look at the picture where the chain is in a spiral pose, we see that most spheres are already contained within half of the others or intersect them; it would be odd if this is the way it is supposed to work.
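
    For context, a common answer is that the hierarchy's topology is built once and kept fixed; each frame only the bounding volumes are refitted bottom-up, which visits every node once and is far cheaper than a rebuild. A sketch with a hypothetical sphere-tree node type:

        using System;
        using System.Numerics;

        // Sketch: sphere tree over chain segments. The tree shape is built once;
        // Refit() recomputes the spheres bottom-up each frame in O(n).
        class SphereNode
        {
            public SphereNode Left, Right;  // null for leaves
            public int SegmentIndex;        // leaf: index i of segment P[i]..P[i+1]
            public Vector3 Center;
            public float Radius;

            public void Refit(Vector3[] points)
            {
                if (Left == null) // leaf: bound one segment
                {
                    Vector3 a = points[SegmentIndex], b = points[SegmentIndex + 1];
                    Center = (a + b) * 0.5f;
                    Radius = Vector3.Distance(a, b) * 0.5f;
                    return;
                }
                Left.Refit(points);
                Right.Refit(points);
                // smallest sphere enclosing both child spheres
                Vector3 d = Right.Center - Left.Center;
                float dist = d.Length();
                if (dist + Right.Radius <= Left.Radius) { Center = Left.Center; Radius = Left.Radius; }
                else if (dist + Left.Radius <= Right.Radius) { Center = Right.Center; Radius = Right.Radius; }
                else
                {
                    Radius = 0.5f * (dist + Left.Radius + Right.Radius);
                    Center = Left.Center + d * ((Radius - Left.Radius) / dist);
                }
            }
        }

    The payoff is in the query, not the update: overlapping parent spheres only cost a few extra tests, while whole subtrees whose spheres do not intersect are culled without testing any of their segment pairs.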

    Read the article

  • Role of an entity state in a component based system?

    - by Paul
    Component-based entity systems are all the rage these days; everyone seems to agree they are the way to go, but no one really has a definitive implementation of such a system. I was wondering: what role do entity states (walking-left, standing, jumping, etc.) have in a CBS? Do they act like controllers, i.e. they handle events and change the entity's attributes based on those events? What about cases where a state would, for example, require that the entity enter no-clip mode? Should that state, when entered, set the CollisionComponent of the entity to a null pointer or something? (Then, on exit, the state would restore the entity's CollisionComponent to its previous value.) Also, I guess it's the current state's job to change the entity's state to something else, right?
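
    One common pattern, sketched below with hypothetical types rather than as a definitive CBS answer: states are small controller objects with enter/exit hooks that reconfigure components (disabling rather than nulling them, so nothing has to be remembered and restored), and the current state returns its successor from its update.

        using System;
        using System.Collections.Generic;

        class CollisionComponent { public bool Enabled = true; }

        class Entity
        {
            readonly Dictionary<Type, object> components = new Dictionary<Type, object>();
            public void Add<T>(T component) { components[typeof(T)] = component; }
            public T Get<T>() { return (T)components[typeof(T)]; }
        }

        interface IEntityState
        {
            void Enter(Entity e);
            void Exit(Entity e);
            IEntityState Update(Entity e); // the current state picks the next one
        }

        class NoClipState : IEntityState
        {
            // toggle the component instead of removing it, so Exit() does not
            // need to reconstruct anything
            public void Enter(Entity e) { e.Get<CollisionComponent>().Enabled = false; }
            public void Exit(Entity e)  { e.Get<CollisionComponent>().Enabled = true; }
            public IEntityState Update(Entity e) { return this; }
        }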

    Read the article

  • Love2D engine for Lua; What about 3D?

    - by shadowprotocol
    Lua has been really awesome to learn; it's so simple. I really enjoy scripting languages, and I had an equally enjoyable time learning Python. The Love engine, http://love2d.org/, is really awesome, but I'm looking for something that can handle 3D as well. Is there anything that accommodates 3D in Lua? I'm still intrigued by the particle system of LOVE anyway, and may just turn my idea into a 2D project with particle lighting :) EDIT: I removed comments about Python - I want this to be a Lua topic. Thanks

    Read the article

  • GLSL billboard move center of rotation

    - by Jacob Kofoed
    I have successfully set up a billboard shader that works: it can take a quad and rotate it so it always points toward the screen. I am using this vertex shader:

        void main() {
            vec4 tmpPos = (MVP * bufferMatrix * vec4(0.0, 0.0, 0.0, 1.0)) +
                          (MV * vec4(vertexPosition.x * 1.0 * bufferMatrix[0][0],
                                     vertexPosition.y * 1.0 * bufferMatrix[1][1],
                                     vertexPosition.z * 1.0 * bufferMatrix[2][2],
                                     0.0));
            UV = UVOffset + vertexUV * UVScale;
            gl_Position = tmpPos;
        }

    bufferMatrix is the model matrix; it is a vertex attribute to support instanced drawing. The problem is best explained through pictures. This is the start position of the camera: And this is the position, looking in from 45 degrees to the right: Obviously, as each character is its own quad, the shader rotates each one around its own center towards the camera. What I in fact want is for them to rotate around a shared center; how would I do this? What I have been trying so far is:

        mat4 translation = mat4(1.0);
        translation = glm::translate(translation, vec3(pos) * 1.f * 2.f);
        translation = glm::scale(translation, vec3(scale, 1.f));
        translation = glm::translate(translation, vec3(anchorPoint - pos) / vec3(scale, 1.f));

    where this translation is the bufferMatrix sent to the shader. What I am trying to do is offset the center, but this might not be possible with a single matrix..? I am interested in a solution that doesn't require CPU calculations each frame, but rather sets things up once and then lets the shader do the billboard rotation. I realize there are many different solutions, like merging all the quads together, but I would first like to know whether the approach of offsetting the center is possible. If it all seems a bit confusing, it's because I'm a little confused myself.
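
    For comparison, here is the underlying idea sketched in XNA-style C# rather than the asker's GLSL: build one billboard rotation at the shared anchor and keep each glyph's offset from that anchor in model space, so every quad is rotated by the same matrix about the same center. The Glyph type and the draw call are hypothetical stand-ins.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        class Glyph { public Vector3 OffsetFromAnchor; }

        static void DrawString(Vector3 anchor, Vector3 camPos, Vector3 camUp, IEnumerable<Glyph> glyphs)
        {
            // one shared rotation, computed at the anchor point
            Matrix billboard = Matrix.CreateBillboard(anchor, camPos, camUp, null);
            foreach (Glyph g in glyphs)
            {
                // the per-glyph offset is rotated by the same matrix as all the
                // others, so the whole string turns about the shared center
                Matrix world = Matrix.CreateTranslation(g.OffsetFromAnchor) * billboard;
                // submit the quad with this world matrix (engine-specific draw call)
            }
        }

    The same composition should work on the GPU: upload the shared anchor per instance, billboard the anchor once, and apply the rotated per-quad offset afterwards instead of rotating each quad about its own origin.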

    Read the article

  • Trying to figure out SDL pixel manipulation?

    - by NoobScratcher
    Hello, so I've found code that plots a pixel in an SDL screen surface:

        void putpixels(int x, int y, int color)
        {
            // pixels is a raw byte buffer; treat it as 32-bit pixels
            unsigned int *ptr = (unsigned int*)Screen->pixels;
            // pitch is the length of one surface row in bytes, so dividing by 4
            // gives the row length counted in 4-byte pixels
            int lineoffset = y * (Screen->pitch / 4);
            ptr[lineoffset + x] = color;
        }

    But I have no idea what it's actually doing. Here are my thoughts: you make an unsigned int pointer to hold the pixel data, then you make another integer to hold the line offset, which is y multiplied by the pitch divided by 4... Now why am I dividing by 4, what is the pitch, and why do I multiply by it? Why must I take the line offset, add the x value to it, and assign the color there? I'm so confused. ;/ I found this function here: http://sol.gfxile.net/gp/ch02.html

    Read the article

  • Finding closest object to a location within a specific perpendicular distance to direction vector

    - by Sniper
    I have a location and a direction vector indicating facing; I want to find the closest object to that location that is within some tolerance (perpendicular) distance of the ray formed by the location and the direction vector. Basically, I want to get the object that is being aimed at. I have thought about finding all objects within a box and then finding the closest object to my vector from those results, but I am sure there is a more efficient way. The Z axis is optional; the objects are most likely within a few meters of the search vector.
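
    A minimal brute-force sketch of the test itself (a broad-phase box query would just shrink the candidate list): project each object onto the ray, reject anything behind the origin, and keep the nearest candidate whose perpendicular distance is within tolerance. Assumes a normalized direction and a hypothetical list of object positions.

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        static Vector3? FindAimTarget(Vector3 origin, Vector3 dir,
                                      IEnumerable<Vector3> objects, float tolerance)
        {
            Vector3? best = null;
            float bestT = float.MaxValue;
            foreach (Vector3 p in objects)
            {
                Vector3 toObj = p - origin;
                float t = Vector3.Dot(toObj, dir);            // distance along the ray
                if (t < 0 || t >= bestT) continue;            // behind us, or not closer
                float perpSq = toObj.LengthSquared() - t * t; // squared perpendicular distance
                if (perpSq <= tolerance * tolerance)
                {
                    best = p;
                    bestT = t;
                }
            }
            return best;
        }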

    Read the article

  • Linking one uniform variable to many shaders

    - by Winged
    Let's say that I have 3 programs, and in each of those programs there is a view matrix uniform which should be the same in all of them. Right now, when my camera moves, I need to re-upload the modified matrix to every program separately. Is it possible to create some kind of global uniform that is shared by all programs linked to it, so I could upload the matrix just once? I tried creating a globalUniforms object which looked kind of like this:

        var globalUniforms = {
            program: {}, // (...)
            vMatrixUniform: null, // (...)
            initialize: function() {
                vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');
            }
        };

    so I could link it to the proper programs like this: program.vMatrixUniform = globalUniforms.vMatrixUniform; and then pass the matrix like this:

        if (camera.isDirty.viewMatrix !== false) {
            camera.isDirty.viewMatrix = false;
            gl.uniformMatrix4fv(globalUniforms.vMatrixUniform, false, camera.viewMatrix.element);
        }

    but unfortunately it throws an error:

        Uncaught exception: gl.INVALID_VALUE was caused by call to: getUniformLocation
        called from line 272, column 2 in () in mysite/js/mesh.js:
        vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');

    Summing up: is there a more efficient way of managing shaders that follows my logic?

    Read the article

  • Rotating multiple points at once in 2D

    - by Deukalion
    I currently have an editor that creates shapes out of (X, Y) coordinates and then triangulates them to make up a shape from those points. What do I have to do to rotate all of those points simultaneously? Say I click the screen in my editor: it locates the point where I clicked, and if I move the mouse up or down from that point, it calculates the rotation on the X and Y axes depending on the new position relative to the first one; say I move up 10 on the Y axis, it rotates that way, and the same for X. Or, more simply, some way to enter a rotation in degrees: 90, 180, 270, 360, for example. I use VertexPositionColor at the moment. What are the best algorithms or methods I can look at to rotate multiple points in 2D at once? Also: since this is an editor, I do not want to rotate it via the world matrix; if I rotate the whole shape 180 degrees, that becomes the new "position" of all the points, so that pose is the new rotation = 0, for example. Later on I will probably use world matrix rotation for this, but not now.
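
    A minimal sketch of the usual approach: translate each point so the pivot (the clicked point, or the shape's centroid) sits at the origin, apply the standard 2D rotation, and translate back, writing the results into the point list so the shape's stored rotation stays zero. System.Numerics.Vector2 stands in for XNA's Vector2 here; the math is identical.

        using System;
        using System.Numerics;

        static void RotatePoints(Vector2[] points, Vector2 pivot, float radians)
        {
            float cos = MathF.Cos(radians);
            float sin = MathF.Sin(radians);
            for (int i = 0; i < points.Length; i++)
            {
                Vector2 d = points[i] - pivot;    // move the pivot to the origin
                points[i] = pivot + new Vector2(  // standard 2D rotation, then move back
                    d.X * cos - d.Y * sin,
                    d.X * sin + d.Y * cos);
            }
        }

    After the loop the new coordinates are the baseline, which matches the editor's "rotated 180 degrees becomes the new rotation = 0" requirement.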

    Read the article

  • Everything "invisible" when launching map from launcher

    - by Predanoob
    Excuse my noobiness, but I downloaded the SDK and tried the map Forest from within the editor, and it worked fine. However, if I launch it from the launcher using the console, it looks like this: http://i.stack.imgur.com/U7rPU.jpg I can use the weapons (although they are invisible) and interact with objects despite not seeing them. I also tried my own map: same problem. What am I doing wrong?

    Read the article

  • How to import or "using" a custom class in a Unity script?

    - by Bobbake4
    I have downloaded the JSONObject plugin for parsing JSON in Unity, but when I use it in a script I get an error indicating that JSONObject cannot be found. My question is: how do I use a custom class defined inside another class? I know I need a using directive to solve this, but I am not sure of the path to these custom objects I have imported. They are in a JSONObject folder inside the root project folder, and the class is called JSONObject. Thanks
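
    One thing worth noting, sketched below with a hypothetical namespace: C#'s using directive imports namespaces, never folder paths, so the plugin's location under the project root is irrelevant. If the class is declared in a namespace, import that; if it is nested inside another class, qualify it with the outer class's name; if it is in the global namespace, no directive is needed at all.

        // Suppose the plugin declares (hypothetically):
        //     namespace ThirdParty.Json
        //     {
        //         public class JSONObject { /* ... */ }
        //     }
        using ThirdParty.Json;
        using UnityEngine;

        public class Example : MonoBehaviour
        {
            void Start()
            {
                JSONObject obj = new JSONObject(); // resolved via the using directive
            }
            // If JSONObject were instead nested inside another class:
            //     Outer.JSONObject obj = new Outer.JSONObject();
        }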

    Read the article

  • Who should map physical keys to abstract keys?

    - by Paul Manta
    How do you bridge the gap between the library's low-level event system and your engine's high-level event system? (I'm not necessarily talking about key events, but also about quit events.) At the top level of my event system, I send out KeyPressedEvents, KeyReleasedEvents, and others of this kind. These high-level events only contain the abstract values of the keys (they don't say that Space was pressed, but that the JumpKey was pressed, for example). Whose responsibility should it be to map the "JumpKey" to an actual key on the keyboard?
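
    A common answer, sketched here with hypothetical enums: a dedicated input-mapping layer sits between the two event systems and owns the binding table, so the low-level library never sees abstract actions and the gameplay code never sees physical keys; rebinding from an options screen then touches only this one place.

        using System.Collections.Generic;

        enum PhysicalKey { Space, W, A, S, D, Escape }
        enum GameAction { Jump, MoveForward, MoveLeft, MoveBack, MoveRight, Quit }

        class InputMapper
        {
            readonly Dictionary<PhysicalKey, GameAction> bindings =
                new Dictionary<PhysicalKey, GameAction>
                {
                    { PhysicalKey.Space,  GameAction.Jump },
                    { PhysicalKey.W,      GameAction.MoveForward },
                    { PhysicalKey.Escape, GameAction.Quit },
                };

            // called by the low-level event pump; the result feeds the
            // high-level JumpKey-style events
            public GameAction? Translate(PhysicalKey key)
            {
                return bindings.TryGetValue(key, out GameAction action)
                    ? action : (GameAction?)null;
            }

            // rebinding lives here too
            public void Bind(PhysicalKey key, GameAction action) { bindings[key] = action; }
        }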

    Read the article

  • Surface normal to screen angle

    - by Tannz0rz
    I've been struggling to get this working. I simply wish to take a surface normal and convert it to a screen angle. As an example, assuming we're working with the highlighted surface on the sphere below, where the arrow is the normal, the 2D angle would obviously be PI/4 radians. Here's one of the many things I've tried, to no avail:

        float4 A = v.vertex;
        float4 B = v.vertex + float4(v.normal, 0.0);
        A = mul(VP, A);
        B = mul(VP, B);
        A.xy = (0.5 * (A.xy / A.w)) + 0.5;
        B.xy = (0.5 * (B.xy / B.w)) + 0.5;
        o.theta = atan2(B.y - A.y, B.x - A.x);

    I'm finally at my wit's end. Thanks for any and all help.

    Read the article

  • What different ways are there to model restitution in a physics engine?

    - by Mikael Högström
    In my physics engine I give a body a restitution value between 0 and 1. When two bodies collide, there seem to be different views on how the restitution of the collision should be calculated. To me the most intuitive approach seems to be taking the average of the two, but some engines seem to take only the larger one. Are there other ways to do it? Also, could the closing velocity or some other parameter come into play?
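
    For reference, a sketch of the combine modes commonly seen in engines; which one is "correct" is really a tuning choice (Box2D, for instance, takes the maximum, so a bouncy ball still bounces off dull ground). On the closing-velocity question: engines also commonly zero out restitution when the closing velocity is below a small threshold, so stacked bodies can come to rest.

        using System;

        enum CombineMode { Average, Maximum, Minimum, Multiply }

        static float CombineRestitution(float a, float b, CombineMode mode)
        {
            switch (mode)
            {
                case CombineMode.Average:  return 0.5f * (a + b);
                case CombineMode.Maximum:  return Math.Max(a, b); // Box2D-style
                case CombineMode.Minimum:  return Math.Min(a, b);
                case CombineMode.Multiply: return a * b;
                default: return 0f;
            }
        }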

    Read the article

  • Level and Player objects - which should contain which?

    - by Thane Brimhall
    I've been working on several simple games, and I've always come to a decision point where I have to choose whether to have the Level object as an attribute of the Player class or the Player as an attribute of the Level class. I can see arguments for both. The Level should contain the player because it also contains every other entity; in fact it just makes sense this way: "John is in the room." It makes it a bit more difficult to move the player to a new level, however, because then each level has to pass its player object to the upcoming level. On the other hand, it makes programming sense to me to leave the player as the top-level object that is persistent between levels, with the environment changing because the player decides to change his level and location. It becomes very easy to change levels, because all I have to do is replace the level variable on the player. What's the most common practice here? Or better yet, is there a "right" way to architect this relationship?
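
    A third option often comes up, sketched below with hypothetical types: neither object owns the other. A top-level Game (or World) object owns both, the player persists across levels, and a level transition just swaps the level reference.

        using System.Collections.Generic;

        class Entity { }
        class Player { /* inventory, stats, ... persists across levels */ }

        class Level
        {
            public List<Entity> Entities = new List<Entity>(); // monsters, items, ...
            public Player ActivePlayer; // set only while the player is in this level
        }

        class Game
        {
            readonly Player player = new Player();
            Level current;

            public void ChangeLevel(Level next)
            {
                if (current != null) current.ActivePlayer = null;
                current = next;
                current.ActivePlayer = player; // the player persists; only the level changes
            }
        }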

    Read the article

  • Rotations and Origins

    - by Theodore Enderby
    I was hoping someone could explain to me, or help me understand, the math behind rotations and origins. I'm working on a little top-down space sim, and I can rotate my ship just how I want it. Now, when I get my blasters going, it'd be nice if they shared the same rotation. Here's a picture. And here's some code:

        blast.X = ship.X + 5;
        blast.Y = ship.Y;
        blast.RotationAngle = ship.RotationAngle;
        blast.Origin = new Vector2(ship.Origin.X, ship.Origin.Y);

    I add five so the sprite lines up when facing right. I tried adding five to the blast origin, but no go. Any help is much appreciated.
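
    A sketch of the usual fix: the +5 is a fixed world-space offset, so it only lines up when the ship faces right; rotating that local offset by the ship's angle before adding it keeps the blast at the muzzle for any facing. System.Numerics.Vector2 stands in for XNA's Vector2, and the field names follow the question's snippet.

        using System;
        using System.Numerics;

        static Vector2 MuzzleWorldPosition(Vector2 shipPos, float shipRotation, Vector2 localOffset)
        {
            float cos = MathF.Cos(shipRotation);
            float sin = MathF.Sin(shipRotation);
            // rotate the local offset (e.g. new Vector2(5, 0)) into world space
            return shipPos + new Vector2(
                localOffset.X * cos - localOffset.Y * sin,
                localOffset.X * sin + localOffset.Y * cos);
        }

        // usage, mirroring the question's fields:
        //   Vector2 p = MuzzleWorldPosition(new Vector2(ship.X, ship.Y),
        //                                   ship.RotationAngle, new Vector2(5, 0));
        //   blast.X = p.X; blast.Y = p.Y;
        //   blast.RotationAngle = ship.RotationAngle;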

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first one. When you write a shader you can simply output less data than you have render targets (say, a single float4 when writing to three), and only the first targets will be affected; but you cannot direct this data anywhere other than COLOR0, then 1, etc. Is there a way to write to several render targets but skip the first? If I output zeroes, the data in the target becomes zeroes; I need it to remain untouched in the first target and change only in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents render targets. I write to all the render targets at one point and afterwards need to write to only the specified ones. It must be the first texture, as I have a depth buffer linked to it (XNA 4.0). Thanks.
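
    One lever worth knowing about, sketched here as an assumption for the HiDef profile rather than a verified fix for this exact setup: XNA 4.0's BlendState carries a per-target color write mask, so the shader can keep outputting COLOR0..COLOR2 while writes to target 0 are simply discarded.

        // Sketch (XNA 4.0): mask off color writes to render target 0 while
        // targets 1 and 2 are written normally.
        BlendState skipFirstTarget = new BlendState
        {
            ColorWriteChannels  = ColorWriteChannels.None, // target 0: leave untouched
            ColorWriteChannels1 = ColorWriteChannels.All,  // target 1: normal writes
            ColorWriteChannels2 = ColorWriteChannels.All,  // target 2: normal writes
        };

        // usage: GraphicsDevice.BlendState = skipFirstTarget; before the draw.
        // Caveat: with DiscardContents targets, "untouched" only holds within
        // one SetRenderTargets binding; a round-trip still discards contents.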

    Read the article

  • Floodfill algorithm for Go

    - by user1048606
    The flood fill algorithm is used by the bucket tool in MS Paint and Photoshop, but it can also be used for Go and Minesweeper: http://en.wikipedia.org/wiki/Flood_fill In Go you can capture groups of stones; this website portrays it with two stones: http://www.connectedglobe.com/mindy/cap6.html This is my flood fill method in Java; it is not capturing a group of stones, and I have no idea why, because to me it makes sense:

        public void floodfill(int turn, int col, int row) {
            for (int a = col; a < 19; a++) {
                for (int b = row; b < 19; b++) {
                    if (turn == black) {
                        if (stones[col][row] == white) {
                            stones[col][row] = 0;
                            floodfill(black, col - 1, row);
                            floodfill(black, col + 1, row);
                            floodfill(black, col, row - 1);
                            floodfill(black, col, row + 1);
                        }
                    }
                }
            }
        }

    It searches up, down, left, and right for all the stones on the board. If the stones are white, it captures them by setting them to 0, which represents empty.
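
    For what it's worth, a likely culprit is visible in the snippet itself: the loop body tests and clears stones[col][row] using the original arguments rather than a and b, so the two loops never advance the fill, and the recursion has no bounds check. A recursive flood fill needs no loops at all; a C# transcription of the corrected idea (board size and color encodings are assumptions, and real Go capture logic would first confirm the group has no liberties):

        // Sketch: clear the connected group of white stones containing (col, row).
        static void Floodfill(int[][] stones, int black, int white, int turn, int col, int row)
        {
            if (col < 0 || col >= 19 || row < 0 || row >= 19) return;  // stay on the board
            if (turn != black || stones[col][row] != white) return;   // spread only through white
            stones[col][row] = 0;                                     // capture: mark the point empty
            Floodfill(stones, black, white, turn, col - 1, row);      // then recurse 4 ways
            Floodfill(stones, black, white, turn, col + 1, row);
            Floodfill(stones, black, white, turn, col, row - 1);
            Floodfill(stones, black, white, turn, col, row + 1);
        }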

    Read the article

  • Stop a rotating object at a specified angle?

    - by Krummelz
    I'm working in JavaScript with HTML5 and the canvas. I have an object which is rotating at a certain speed, and I need the object's rotation to slow down gradually and the front of the object to stop at a specified angle. (I'm using radians, not degrees.) I have a variable to keep track of the angle which the object is facing, as it rotates. How would I go about getting the object to come to rest, facing the direction I want it to?
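
    One way to make this deterministic, sketched in C# (the math is identical in JavaScript): pick a constant angular deceleration, and start braking once the remaining angle to the target equals the stopping distance v^2 / (2a), snapping to the target on arrival. Assumes rotation in the positive direction and angles in radians.

        using System;

        // returns the updated (angle, speed) for one frame of duration dt
        static (float angle, float speed) Step(float angle, float speed,
                                               float targetAngle, float decel, float dt)
        {
            // remaining angle ahead of us, wrapped into [0, 2*PI)
            float remaining = (targetAngle - angle) % (2f * MathF.PI);
            if (remaining < 0f) remaining += 2f * MathF.PI;

            // brake once the stopping distance reaches the remaining angle
            if (remaining <= speed * speed / (2f * decel))
                speed = MathF.Max(0f, speed - decel * dt);

            if (speed * dt >= remaining)
                return (targetAngle, 0f); // arrived: snap and stop

            return (angle + speed * dt, speed);
        }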

    Read the article

  • How do game engines stop pixel seams appearing at adjacent mesh boundaries due to FP imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise. So when meshes are joined at their borders, the graphics card sometimes decides that a pixel on the seam belongs to neither object, and unwanted pixels appear. It's natural behaviour on all graphics cards. How are such worries avoided in professional games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera distance tweaks? Screencap of a fix that ended up not working:

    Read the article

  • DX11 - Weird shader behavior with and without branching

    - by Martin Perry
    I have found a problem in my shader code which I don't know how to solve. I want to rewrite this code without "ifs":

        tmp = (evaluate; the result is 0 or 1, nothing else)
        if (tmp == 1) val = X1;
        if (tmp == 0) val = X2;

    I rewrote it this way, but this piece of code doesn't work correctly:

        tmp = (evaluate; the result is 0 or 1, nothing else)
        val = tmp * X1;
        val = !tmp * X2;

    However, if I change it to:

        tmp = (evaluate; the result is 0 or 1, nothing else)
        val = tmp * X1;
        if (!tmp) val = !tmp * X2;

    it works fine... but it is useless because of the "if", which needs to be eliminated. I honestly don't understand it. I tried compiling with no optimization and with full optimization; the result is the same.
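
    For what it's worth, the middle version fails for a reason visible in the snippet itself: the second assignment overwrites the first, so val ends up holding only !tmp * X2. The branch-free form needs a single blended expression; in C# for illustration (in HLSL the same thing is typically spelled lerp(X2, X1, tmp)):

        // tmp is guaranteed to be exactly 0 or 1
        static float Select(float tmp, float x1, float x2)
        {
            // blend instead of assigning twice: one term is always zero
            return tmp * x1 + (1f - tmp) * x2;
        }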

    Read the article

  • Which Flash 3D particle engine generates such an XML file?

    - by Huang F. Lei
    I found some particle config files like the one below, but I don't know which Flash 3D particle engine uses them; they are different from Away3D's, which use 'root' as the root element of the XML.

        <effect pos="0 0 0">
            <property cache="1" lifetime="10000"/>
            <mesh blendmode="add">
                <path>
                    <frame y="100" durtime="1000" x="0" z="0"/>
                </path>
                <scale>
                    <frame y="0.2000000001" durtime="300" x="2.2" z="2.2"/>
                    <frame y="0.4" durtime="300" x="2.7" z="2.7"/>
                </scale>
            </mesh>
            <vibrate delayTime="100" amplitude="10" durationTime="750" intension="50"/>
            <quad billboard="false">
            </quad>
            <particle global="false" pos="">
                <scale>
                    <frame y="1" durtime="0" x="1" z="1"/>
                    <frame y="1" durtime="2000" x="1.5" z="1.5"/>
                </scale>
            </particle>
        </effect>

    Read the article

  • Playing part of a sfx audio file in HTML5 using WebAudio

    - by Matthew James Davis
    I have compiled all of my sound effects into one sequenced .ogg file, and I have the start and stop times for each effect. How do I play the individual effects? That is, how do I play part of an audio file? More specifically, I've created a dictionary:

        {
            'sword_hit': {
                src: 'sfx.ogg',
                start: 265, // ms
                length: 212 // ms
            }
        }

    so that my play_sound() function can look up 'sword_hit' and play the correct audio file at the correct start time for the correct duration. I simply need to know how to tell the Web Audio API to start playing at start ms and play for only length ms.

    Read the article

  • Error loading PCX image in FreeImage library

    - by khanhhh89
    I'm using FreeImage in C++ to load a texture from a PCX image. My FreeImage code is as follows:

        FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
        // pointer to the image data
        BYTE* bits(0);
        fif = FreeImage_GetFileType(m_fileName.c_str(), 0);
        if (FreeImage_FIFSupportsReading(fif))
            dib = FreeImage_Load(fif, m_fileName.c_str());
        // retrieve the image data
        bits = FreeImage_GetBits(dib);
        // get the image width and height
        width = FreeImage_GetWidth(dib);
        height = FreeImage_GetHeight(dib);

    My problem is that the width and height variables are both 512 while the bits array is empty, which makes the following OpenGL call fail:

        glTexImage2D(m_textureTarget, 0, GL_RGB, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, bits);

    While debugging, I noticed that the fif variable (which holds the detected image format) is JPEG, while the image is actually PCX. I wonder whether FreeImage has recognized the wrong format (JPEG instead of PCX), so that the bits array comes back empty. I hope to see your explanation of this problem. Thanks so much.

    Read the article
