Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.


  • Per-pixel displacement mapping GLSL

    - by Chris
    I'm trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found, and ended up trying to implement the approach NVIDIA used in their Cascades demo (http://www.slideshare.net/icastano/cascades-demo-secrets), starting at slide 82. At the moment I am completely stuck on the following problem: when I am far away, the displacement seems to work, but the closer I move to the surface, the more the texture gets bent along the x-axis, and in general it looks like there is a slight bend in one direction. EDIT: I added a video: click. I also added some screenshots to illustrate the problem. I have tried lots of things already and I am starting to get a bit frustrated as my ideas run out. Here is my full VS and FS code.

    VS:

        #version 400
        layout(location = 0) in vec3 IN_VS_Position;
        layout(location = 1) in vec3 IN_VS_Normal;
        layout(location = 2) in vec2 IN_VS_Texcoord;
        layout(location = 3) in vec3 IN_VS_Tangent;
        layout(location = 4) in vec3 IN_VS_BiTangent;

        uniform vec3 uLightPos;
        uniform vec3 uCameraDirection;
        uniform mat4 uViewProjection;
        uniform mat4 uModel;
        uniform mat4 uView;
        uniform mat3 uNormalMatrix;

        out vec2 IN_FS_Texcoord;
        out vec3 IN_FS_CameraDir_Tangent;
        out vec3 IN_FS_LightDir_Tangent;

        void main( void )
        {
            IN_FS_Texcoord = IN_VS_Texcoord;

            vec4 posObject     = uModel * vec4(IN_VS_Position, 1.0);
            vec3 normalObject  = (uModel * vec4(IN_VS_Normal, 0.0)).xyz;
            vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz;
            //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz;
            vec3 binormalObject = normalize(cross(tangentObject, normalObject));

            // uCameraDirection is the camera position, just badly named
            vec3 fvViewDirection  = normalize( uCameraDirection - posObject.xyz );
            vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz );

            IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection );
            IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection );
            IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection );

            IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection );
            IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection );
            IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection );

            gl_Position = (uViewProjection * uModel) * vec4(IN_VS_Position, 1.0);
        }

    The VS just builds the TBN basis from the incoming normal, tangent and binormal in world space, calculates the light and eye directions in world space, and finally transforms the light and eye directions into tangent space.
    FS:

        #version 400

        // uniforms
        uniform Light
        {
            vec4 fvDiffuse;
            vec4 fvAmbient;
            vec4 fvSpecular;
        };

        uniform Material
        {
            vec4 diffuse;
            vec4 ambient;
            vec4 specular;
            vec4 emissive;
            float fSpecularPower;
            float shininessStrength;
        };

        uniform sampler2D colorSampler;
        uniform sampler2D normalMapSampler;
        uniform sampler2D heightMapSampler;

        in vec2 IN_FS_Texcoord;
        in vec3 IN_FS_CameraDir_Tangent;
        in vec3 IN_FS_LightDir_Tangent;

        out vec4 color;

        vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap)
        {
            vec2 NewCoords = coords;
            vec2 dUV = -dir.xy * height * 0.08;
            float SearchHeight = 1.0;
            float prev_hits = 0.0;
            float hit_h = 0.0;

            // coarse linear search
            for (int i = 0; i < 10; i++)
            {
                SearchHeight -= 0.1;
                NewCoords += dUV;
                float CurrentHeight = textureLod(heightMapSampler, NewCoords.xy, mipmap).r;
                float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0, 0.0, 1.0);
                hit_h += first_hit * SearchHeight;
                prev_hits += first_hit;
            }

            NewCoords = coords + dUV * (1.0 - hit_h) * 10.0f - dUV;
            vec2 Temp = NewCoords;
            SearchHeight = hit_h + 0.1;
            float Start = SearchHeight;
            dUV *= 0.2;
            prev_hits = 0.0;
            hit_h = 0.0;

            // refinement pass
            for (int i = 0; i < 5; i++)
            {
                SearchHeight -= 0.02;
                NewCoords += dUV;
                float CurrentHeight = textureLod(heightMapSampler, NewCoords.xy, mipmap).r;
                float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0, 0.0, 1.0);
                hit_h += first_hit * SearchHeight;
                prev_hits += first_hit;
            }

            NewCoords = Temp + dUV * (Start - hit_h) * 50.0f;
            return NewCoords;
        }

        void main( void )
        {
            vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent );
            vec3 fvViewDirection  = normalize( IN_FS_CameraDir_Tangent );

            float mipmap = 0;
            vec2 NewCoord = TraceRay(0.1, IN_FS_Texcoord, fvViewDirection, mipmap);
            //vec2 ddx = dFdx(NewCoord);
            //vec2 ddy = dFdy(NewCoord);

            vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz;
            BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0));
            vec3 fvNormal = BumpMapNormal;

            float fNDotL = dot( fvNormal, fvLightDirection );
            vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection );
            float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) );

            vec4 fvBaseColor     = textureLod( colorSampler, NewCoord.xy, mipmap );
            vec4 fvTotalAmbient  = fvAmbient * fvBaseColor;
            vec4 fvTotalDiffuse  = fvDiffuse * fNDotL * fvBaseColor;
            vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) );

            color = fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular);
        }

    The FS implements the displacement technique in the TraceRay method, always using mipmap level 0. Most of the code is from the NVIDIA sample and another paper I found on the web, so I assume there cannot be much wrong in it. At the end it uses the modified UV coordinates to fetch the displaced normal from the normal map and the color from the color map. I'm looking forward to some ideas. Thanks in advance!

    Edit: Here is the code loading the heightmap:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData);
        glGenerateMipmap(GL_TEXTURE_2D);
        //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    Maybe something is wrong in here?
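    One thing worth double-checking in that loading code: with the glTexParameteri calls commented out, the height map is sampled with OpenGL's defaults, i.e. GL_NEAREST_MIPMAP_LINEAR minification and GL_REPEAT wrapping, and GL_REPEAT lets the UV ray in TraceRay wrap around at the texture borders. A minimal sketch of explicit sampler state (a suggestion to try, not a confirmed fix for this exact artifact):

        // explicit sampler state for the height map (suggested, untested here)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // keep the ray march from wrapping
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);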


  • SceneManagers as systems in entity system or as a core class used by a system?

    - by Hatoru Hansou
    It seems entity systems are really popular here. Links posted by other users convinced me of the power of such systems, and I decided to try one. (Well, that and my original code getting messy.) In my project I originally had a SceneManager class that maintained the logic and structures needed to organize the scene (a QuadTree; it's a 2D game). Before rendering, I call selectRect(), pass the x,y of the camera and the width and height of the screen, and obtain a minimized list containing only visible entities, ordered from back to front. Now with systems: in my first attempt, my Render system required all the entities it should handle to be added to it. This may sound like the correct approach, but I realized it was not efficient. Trying to optimize it, I reused the SceneManager class internally in the Renderer system, but then I realized I needed methods such as selectRect() in other systems too (AI, principally) and had to make the SceneManager globally accessible again. Currently I have converted SceneManager to a system, and ended up with the following interface (only relevant methods):

        /// Base system interface
        class System
        {
        public:
            virtual void tick (double delta_time) = 0;
            // (methods to add and remove entities)
        };

        typedef std::vector<Entity*> EntitiesVector;

        /// Specialized system interface that allows querying the scene
        class SceneManager: public System
        {
        public:
            virtual EntitiesVector& cull () = 0;

            /// Sets the entity to be used as the camera and replaces previous ones.
            virtual void setCamera (Entity* entity) = 0;
        };

        class SceneRenderer // Not a system
        {
            virtual void render (EntitiesVector& entities) = 0;
        };

    Also, I could not work out how to convert renderers to systems. My game separates logic updates from screen updates: my main class has a tick() method and a render() method that may not be called the same number of times. In my first attempt renderers were systems, but they were stored in a separate manager, updated only in render() and not in tick() like all the other systems. I realized that was silly, so I created a SceneRenderer interface and gave up on converting them to systems, but that may be for another question. Then... something does not feel right, does it? If I understood correctly, a system should not depend on another system, or even count on another system exposing a specific interface. Each system should care only about its entities, or nodes (as an optimization, so they have direct references to the relevant components without having to constantly call the component() or getComponent() method of the entity).
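    For reference, a minimal sketch of one common compromise: keep the spatial index as a plain service owned by the scene and hand it to the systems that need culling queries (the names here are hypothetical, not from the question's codebase):

        #include <vector>

        class Entity;
        typedef std::vector<Entity*> EntitiesVector;

        // A plain service, not a system: owns the quadtree and answers queries.
        class SpatialIndex
        {
        public:
            EntitiesVector selectRect(float x, float y, float w, float h) const; // sketch only
        };

        // Systems receive the service explicitly instead of reaching for a global.
        class AISystem
        {
        public:
            explicit AISystem(const SpatialIndex& index) : mIndex(index) {}

            void tick(double delta_time)
            {
                // query mIndex.selectRect(...) for nearby entities here
            }

        private:
            const SpatialIndex& mIndex;
        };

    With this arrangement no system depends on another system; the renderer and the AI both depend on a shared, passive service, which keeps the "systems only know their entities" rule intact.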


  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an Entity System in C++. It is almost completed (I have all the code there; I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This Entity System is based a bit on the Artemis framework, but it is different. I'm not sure if I'll be able to type this out the way my head is processing it; I'm basically going to ask whether I should do one thing over another. First, a little detail on the Entity System itself. Here are the basic classes it uses to actually work:

    - Entity - an Id (and some methods to add/remove/get/etc. Components)
    - Component - an empty abstract class
    - ComponentManager - manages ALL components for ALL entities within a Scene
    - EntitySystem - processes entities with specific components
    - Aspect - the class used to help determine which Components an Entity must contain so a specific EntitySystem can process it
    - EntitySystemManager - manages all EntitySystems within a Scene
    - EntityManager - manages entities (i.e. holds all Entities, is used to determine whether an Entity has been changed, enables/disables them, etc.)
    - EntityFactory - creates (and destroys) entities and assigns an ID to them
    - Scene - contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager; has functions to update and initialise the scene

    Now, in order for an EntitySystem to know efficiently when to check whether an Entity is valid for processing (so I can add it to a specific EntitySystem), it must receive a message from the EntityManager (after a call to activate(Entity& e)). Similarly, the EntityManager must know when an Entity is destroyed by the EntityFactory in the Scene, and the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern I may remove a Listener (which in this case is dependent on the method being called). I mainly have this implemented for game-specific things, i.e. teams, tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to announce when an Entity has been activated, deleted, etc. - i.e., taken from my EntityFactory:

        void EntityFactory::killEntity(Entity& e)
        {
            // if the entity doesn't exist in the entity manager within the scene
            if(!getScene()->getEntityManager().doesExsist(e))
            {
                return; // go back to the caller! (should throw an exception or something..)
            }

            // tell the ComponentManager and the EntityManager that we killed an Entity
            getScene()->getComponentManager().doOnEntityWillDie(e);
            getScene()->getEntityManager().doOnEntityWillDie(e);

            // notify the listeners
            for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i)
            {
                (*i)->onEntityWillDie(*this, e);
            }

            _idPool.addId(e.getId()); // add the ID to the pool
            delete &e;                // delete the entity
        }

    As you can see, on the lines where I am telling the ComponentManager and the EntityManager that an Entity will die, I am calling a method explicitly to make sure each handles it appropriately. Now, I realise I could do this without the explicit calls, relying on the loop that notifies all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event), but is that a good idea (good design, or what)? I've gone over the pros and cons, and I just can't decide what I want to do.

    Calling explicitly:
    - PRO: Faster? Since these functions are explicitly called, they can't be "removed".
    - CON: Not flexible.
    - CON: Bad design? (friend functions)

    Calling through listener objects (i.e. ComponentManager/EntityManager inherits from an EntityFactoryListener):
    - PRO: More flexible?
    - PRO: Better design?
    - CON: Slower? (virtual functions)
    - CON: Listeners can be removed, i.e. one may be removed and never get called again during the program, which could result in a crash.

    P.S. If you wish to view my current source code, I am hosting it on BitBucket.
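    For concreteness, a minimal sketch of the listener variant being weighed here (hypothetical names, not the poster's actual interface):

        class Entity;
        class EntityFactory;

        // ComponentManager and EntityManager would each implement this.
        class EntityFactoryListener
        {
        public:
            virtual ~EntityFactoryListener() {}
            virtual void onEntityWillDie(EntityFactory& source, Entity& e) = 0;
        };

    The virtual-call overhead is normally dwarfed by the bookkeeping done inside the handlers; the real trade-off, as the pros/cons above suggest, is that registration-based wiring can silently come undone, while explicit friend calls cannot.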


  • Pixel alignment algorithm

    - by user42325
    I have a set of square blocks that I want to draw in a window. I am sure the coordinate calculation is correct, but on the screen some squares' edges overlap with others and some do not. I remember this kind of problem is caused by pixel accuracy, and that there's a specific topic related to it in 2D image rendering, but I don't remember what exactly it is or how to solve it. Look at this screenshot: each block should have a fixed-width margin, but in the image the vertical white lines have different widths, though the horizontal lines look fine.
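    The usual remedy (a sketch of the standard sub-pixel rounding fix; whether it is the exact bug here can't be confirmed from the screenshot alone) is to round each block's edges to whole pixels independently, rather than rounding a position and then adding a float width, so that adjacent blocks agree on their shared boundary:

        #include <cmath>

        // Snap a block's horizontal span to whole pixels by rounding its
        // edges, not its width: neighbours then agree on shared boundaries
        // and the margins stay uniform.
        void snapSpan(float x, float width, int& pixelLeft, int& pixelWidth)
        {
            int left  = (int)std::floor(x + 0.5f);          // round left edge
            int right = (int)std::floor(x + width + 0.5f);  // round right edge
            pixelLeft  = left;
            pixelWidth = right - left; // block widths may differ by 1px, margins will not
        }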


  • Marketing: Angry Birds - How it's done

    - by John
    Why do some apps, like Angry Birds, dominate the market while other cool/fun/addictive apps are never heard of? I'm trying to figure out the best marketing strategy, or the best way to sell an app to the mass market. Does anybody have any ideas or observations about the marketing of major blockbuster apps like Angry Birds - why do they get so popular and stay at the top of the charts? Thanks for any ideas, comments ...


  • Android Touch Event Collision Detection

    - by chrissb
    I'm relatively new to both Java and Android, so hopefully the problem I'm having stems from something pretty minor that I've overlooked. I've got a (very early stage) game that I've started working on for Android, using Java. At this stage, when the user touches the screen, if they touched a point where there is an enemy, the enemy's health is decreased and it becomes immobile (in the current implementation at least). The issue I'm having is that the touch detection doesn't always seem to work. I've got a testing sprite set up that goes to the eventX and eventY coordinates of the touch-down event, and it always seems to collide with the enemy object. Yet the enemy doesn't always register as being hit, and sometimes a hit is registered when the sprite indicates the touch coordinates were outside of the enemy's bounding box. I realise that this probably doesn't mean much without any code, so here's what I've got so far. Be gentle, as this is literally my first attempt at something more than basic movement etc. First off, the MainGamePanel class registers the touch event and informs the levelmanager class (which is what I set up to monitor/handle enemies):

        public boolean onTouchEvent(MotionEvent event) {
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                levelManager.handleActionDown((int) event.getX(), (int) event.getY());
                targetX = event.getX();
                targetY = event.getY();
            }
            if (event.getAction() == MotionEvent.ACTION_MOVE) {
                // the gestures
            }
            if (event.getAction() == MotionEvent.ACTION_UP) {
                // touch was released
            }
            return true;
        }

    From there, in the levelmanager class the touch event is passed on to all of the enemies within a list array:

        public static void handleActionDown(int eventX, int eventY) {
            hit = false;
            for (enemy1 en : enemy1array) {
                en.handleActionDown(eventX, eventY);
            }
        }

    The rest of the collision code is handled within the enemy's handleActionDown function:

        public void handleActionDown(int eventX, int eventY) {
            // NOTE: the last condition tests eventY against this.x rather than
            // this.y - that mismatch would explain exactly this kind of spotty,
            // position-dependent detection.
            if (eventX > this.x - enemy1bitmap.getWidth()
                    && eventX < this.x + enemy1bitmap.getWidth()
                    && eventY > this.y - enemy1bitmap.getHeight()
                    && eventY < this.x + enemy1bitmap.getHeight()) {
                takeDamage(1);
                levelmanager.setHit();
            }
        }

    I should probably be using getWidth()/2 and getHeight()/2 for it to be more accurate, but I expanded the area to test this - although I've noticed no improvement. At this stage, the game's detection of whether or not the enemy is hit is spotty at best. Generally it takes two or three attempts before a collision is successfully registered, even though the sprite that is set to the eventX and eventY coordinates always indicates that the collision should have worked. Hopefully someone can steer me in the right direction here, and if more information is needed, ask away! Cheers, -Chris


  • Splitting Graph into distinct polygons in O(E) complexity

    - by Arthur Wulf White
    Following up on my last question (trapped inside a graph: find paths along edges that do not cross any edges): how do you split an entire graph into the distinct shapes 'trapped' inside it (like the ones described in that question), with good complexity? What I am doing now is iterating over all edges and then traversing from each one while always taking the rightmost turn. This does split the graph into distinct shapes. Then I eliminate all the excess shapes (repeats of previous shapes) and return the result. The complexity of this algorithm is O(E^2). I am wondering if I could do it in O(E) by removing edges I have already traversed. My current implementation of that returns unexpected results.
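    For reference, the standard linear-time version of this idea treats every undirected edge as two directed half-edges and marks a half-edge as used the first time any face walk consumes it; each half-edge is consumed exactly once, so no face is produced twice and nothing needs de-duplicating afterwards. A sketch (hypothetical names; nextCW stands in for the "rightmost turn" step, and swapping the std::set for a flat array of half-edge flags gives strict O(E)):

        #include <set>
        #include <utility>
        #include <vector>

        struct Graph
        {
            std::vector<std::vector<int>> adj;  // neighbours sorted by angle
            int nextCW(int u, int v) const;     // neighbour after u in clockwise order around v
        };

        std::vector<std::vector<int>> enumerateFaces(
            const Graph& g, const std::vector<std::pair<int,int>>& edges)
        {
            std::set<std::pair<int,int>> used;  // visited directed half-edges
            std::vector<std::vector<int>> faces;
            for (const auto& e : edges)
            {
                int halves[2][2] = { { e.first, e.second }, { e.second, e.first } };
                for (const auto& h : halves)
                {
                    int x = h[0], y = h[1];
                    if (used.count({x, y})) continue;
                    std::vector<int> face;
                    while (!used.count({x, y}))  // walk until the cycle closes
                    {
                        used.insert({x, y});
                        face.push_back(x);
                        int z = g.nextCW(x, y);  // rightmost turn at y
                        x = y;
                        y = z;
                    }
                    faces.push_back(face);
                }
            }
            return faces;
        }

    The outer boundary also comes out as one face, so the excess-shape elimination reduces to dropping the single face whose signed area marks it as the unbounded one.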


  • what is the best way to use loops to detect events while the main loop is running?

    - by yao jiang
    I am making a "game" that has pathfinding, using pygame. I am using the A* algorithm. I have a main loop which draws the whole map; in the loop I check for events. If the user presses "enter" or "space", a random start and end are selected, then the animation starts and it will try to get from start to end. My draw function is stupid as hell right now; it works as expected, but I feel that I am doing it wrong: it draws everything up to the end of the animation, and I am also detecting events in there as well. What is a better way of implementing the draw function such that it will draw one "step" at a time while checking for events?

        animating = False
        while loop:
            check events:
                if not animating:
                    # space or enter press will choose random start/end coords
                    if enter_pressed or space_pressed:
                        start, end = choose_coords
                        route = find_route(start, end)
                        draw(start, end, grid, route)
                else:
                    # left click == generate an event to block the path
                    # right click == user can choose a new destination
                    if left_mouse_click:
                        gen_event()
                        reroute()
                    elif right_mouse_click:
                        new_end = new_end()
                        new_start = current_pos()
                        route = find_route(new_start, new_end)
                        draw(new_start, new_end, grid, route)

        # draw out the grid
        def draw(start, end, grid, route_coord):
            # draw the end coords
            color = red
            pick_image(screen, color, width * end[1], height * end[0])
            pygame.display.flip()

            # then draw the rest of the route
            for i in range(len(route_coord)):
                # pausing because we want animation
                time.sleep(speed)

                # get the x/y coords
                x, y = route_coord[i]
                event_on = False

                if grid[x][y] == 2:
                    color = green
                elif grid[x][y] == 3:
                    color = blue

                for event in pygame.event.get():
                    if event.type == pygame.MOUSEBUTTONDOWN:
                        if event.button == 3:
                            print "destination change detected, rerouting"
                            # get mouse position, px coords
                            pos = pygame.mouse.get_pos()
                            # get grid coord
                            c = pos[0] // width
                            r = pos[1] // height
                            grid[r][c] = 4
                            end = [r, c]
                        elif event.button == 1:
                            print "user generated event"
                            pos = pygame.mouse.get_pos()
                            # get grid coord
                            c = pos[0] // width
                            r = pos[1] // height
                            # mark it as a block for now
                            grid[r][c] = 1
                            event_on = True

                if check_events([x, y]) or event_on:
                    # there is an event
                    # mark it as a block for now
                    grid[y][x] = 1
                    pick_image(screen, event_x, width * y, height * x)
                    pygame.display.flip()
                    # then find a new route
                    new_start = route_coord[i - 1]
                    marked_grid, route_coord = find_route(new_start, end, grid)
                    draw(new_start, end, grid, route_coord)
                    return  # just end draw here so it won't throw the "index out of range" error
                elif grid[x][y] == 4:
                    color = red

                pick_image(screen, color, width * y, height * x)
                pygame.display.flip()

            # clear route coord list, otherwise it'll just add more unwanted coords
            route_coord_list[:] = []


  • Calculating 3D camera positions from a video

    - by Geotarget
    I need to calculate the 3D camera position and rotation for each frame in a given video. This is typically used for motion tracking and for inserting 3D objects into a video. I'm currently using VideoTrace to calculate this for me, and I'm getting the data exported as a 3ds Max MAXScript file. However, when I try to use the 3D camera rotations, I get strange errors in my 3D calculations, as if there were an error in the 3x3 rotation matrices. Can you spot any error in the data itself? Or is it my other calculations that are erroneous?

        frame 1
        rotation = (matrix3 [-0.011938,  0.756018, -0.654442]
                            [-0.382040, -0.608284, -0.695727]
                            [-0.924068,  0.241718,  0.296091]
                            [ 0, 0, 0 ]).rotationpart
        position = [-0.767177, 0.308723, -0.232722]
        fov = 57.352135

        frame 2
        rotation = (matrix3 [-0.460922, -0.726580, -0.509541]
                            [-0.200163,  0.644491, -0.737947]
                            [ 0.864572, -0.238145, -0.442495]
                            [ 0, 0, 0 ]).rotationpart
        position = [-0.856630, 0.198654, -0.243853]
        fov = 57.352135
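    Two quick checks are worth running on data like this before blaming the downstream math. First, a proper rotation matrix must be orthonormal with determinant +1 (the rows above do check out to within float precision). Second, MAXScript's matrix3 stores its rows as the basis vectors of a row-vector convention, so if the consuming code multiplies column vectors, the matrix needs transposing first - a very common source of "almost right but bent" transforms. A sketch of the validity check:

        #include <cmath>

        // Verify that a 3x3 matrix (rows as given in the MAXScript export)
        // is a proper rotation: R * R^T == I and det(R) == +1.
        bool isRotation(const double m[3][3], double eps = 1e-4)
        {
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                {
                    double dot = 0.0;
                    for (int k = 0; k < 3; ++k) dot += m[i][k] * m[j][k];
                    if (std::fabs(dot - (i == j ? 1.0 : 0.0)) > eps) return false;
                }

            double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                       - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                       + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
            return std::fabs(det - 1.0) < eps; // -1 would mean a reflection
        }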


  • How to correctly export UV coordinates from Blender

    - by KlashnikovKid
    Alright, so I'm just now getting around to texturing some assets. After much trial and error I feel I'm pretty good at UV unwrapping now, and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't export the correct UV coordinates into the obj file, because the texture is mapped differently in my game engine. In Blender I've played with the texture panel and its mapping options, and I've noticed that it doesn't appear to affect the exported obj file's UV coordinates. So I guess my question is: is there something I need to do prior to exporting in order to bake the correct UV coordinates into the obj file? Or does something else need to be done to massage the texture coordinates for sampling? Or any thoughts at all on what could be going wrong? (Also, here is a screenshot of my diffuse texture in Blender and in the game engine. As you can see in the images, I have the same problem with a simple test cube not getting correct UVs either.) http://www.digitalinception.net/blenderSS.png http://www.digitalinception.net/gameSS.png
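    One classic mismatch to rule out first: the OBJ format (and Blender) treats UV (0,0) as the bottom-left of the image, while Direct3D-style engines and most raw image loaders treat the top-left as the origin. When that is the culprit, every face's texture appears vertically flipped, and the fix is a one-line flip at import time (a sketch, assuming a simple loader):

        #include <vector>

        struct UV { float u, v; };

        // Flip V when importing OBJ UVs into a top-left-origin engine.
        void flipV(std::vector<UV>& uvs)
        {
            for (UV& t : uvs)
                t.v = 1.0f - t.v;
        }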


  • How is this lighting effect done?

    - by Mike
    This is the most beautiful 2D lighting I have ever seen. Does anyone know how he went about doing it? http://www.youtube.com/watch?v=BIQRhOFkvQY http://www.youtube.com/watch?v=tnTYXPuecMs http://www.youtube.com/watch?v=rhC_jVM8IYU http://www.youtube.com/watch?v=_Aw5BdjWqqU Or download it here: http://grantkot.com/PollutedPlanet/publish.htm Edit: I am not asking how the particles are simulated; I don't care about the physics.


  • Appropriate level of granularity for component-based architecture

    - by Jon Purdy
    I'm working on a game with a component-based architecture. An Entity owns a set of Component instances, each of which has a set of Slot instances with which to store, send, and receive values. Factory functions such as Player produce entities with the required components and slot connections. I'm trying to determine the best level of granularity for components. For example, right now Position, Velocity, and Acceleration are all separate components, connected in series. Velocity and Acceleration could easily be rewritten into a uniform Delta component, or Position, Velocity, and Acceleration could be combined alongside such components as Friction and Gravity into a monolithic Physics component. Should a component have the smallest responsibility possible (at the cost of lots of interconnectivity) or should related components be combined into monolithic ones (at the cost of flexibility)? I'm leaning toward the former, but I could use a second opinion.
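    To make the trade-off concrete, a small sketch contrasting the two extremes (the names mirror the question; the merged update is illustrative only):

        // Fine-grained: three tiny components, wired together through slots.
        struct Position     { float x, y; };
        struct Velocity     { float dx, dy; };
        struct Acceleration { float ddx, ddy; };

        // Coarse-grained: one monolithic Physics component owns the chain.
        struct Physics
        {
            float x, y, dx, dy, ddx, ddy;
            float gravity, friction;

            void update(float dt)
            {
                dy += gravity * dt;            // gravity folded in
                dx += ddx * dt;
                dy += ddy * dt;
                dx *= 1.0f - friction * dt;    // friction folded in
                x  += dx * dt;
                y  += dy * dt;
            }
        };

    The fine-grained split keeps each piece reusable (an entity can have Position without Velocity), at the price of slot plumbing; the monolith buries the wiring in one update and gives up exactly that flexibility.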


  • Android Live Testing

    - by Matthew Dockerty
    I am making a game for Android, and in it I am using sensors which are not available in the emulator. At the moment I am connecting my device, transferring the apk, and then installing it to test, but that is a pain to do, and I have gotten to the stage where I need to start logging values for debugging. I have gone into the run configs of my app and set it to prompt me to pick a device, but my device is never in the list when it is connected to my PC and I try to run it. How am I supposed to set this up to work properly? Thanks for the help.


  • Modern Shader Book?

    - by Michael Stum
    I'm interested in learning about shaders: what they are, when/for what I would use them, and how to use them. (Specifically, I'm interested in water and bloom effects, but I know close to 0 about shaders, so I need a general introduction.) I saw a lot of books that are a couple of years old, so I don't know if they still apply. I'm targeting XNA 4.0 at the moment (which I believe means HLSL shaders for Shader Model 4.0), but anything that generally targets DirectX 11 and OpenGL 4 is helpful, I guess.


  • Align tetrahedrons

    - by thedeadlybutter
    I'm currently generating tetrahedron meshes in Unity. When a player clicks the side of a mesh, a new one spawns aligned with it, like this. I'm not sure how to do this, nor can I find any information on implementing a tetrahedron grid. I tried playing around with the vertices until I realized I need to adjust position & rotation. Any ideas? EDIT: To be clear, the second image shows objects placed manually in the Unity Editor. I'm looking for an algorithm that places the meshes correctly.
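    One purely geometric route (a sketch, with the caveat that reflection mirrors the orientation, which may or may not matter for the mesh): the tetrahedron sharing a clicked face is obtained by reflecting the remaining vertex across that face's plane, which yields all four vertices of the neighbour directly, with no separate position/rotation bookkeeping:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
        static Vec3 cross(Vec3 a, Vec3 b)
        {
            return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
        }
        static Vec3 normalize(Vec3 v) { return mul(v, 1.0f / std::sqrt(dot(v, v))); }

        // Reflect the apex across the clicked face (a, b, c): the result plus
        // a, b, c are the four vertices of the neighbouring tetrahedron.
        Vec3 reflectAcrossFace(Vec3 apex, Vec3 a, Vec3 b, Vec3 c)
        {
            Vec3 n = normalize(cross(sub(b, a), sub(c, a))); // face normal
            float d = dot(sub(apex, a), n);                  // signed distance to the plane
            return sub(apex, mul(n, 2.0f * d));              // mirror across the plane
        }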


  • What's the most efficient way to find barycentric coordinates?

    - by bobobobo
    In my profiler, finding barycentric coordinates is apparently somewhat of a bottleneck, and I am looking to make it more efficient. It follows the method in Shirley, where you compute the areas of the triangles formed by embedding the point P inside the triangle. Code:

        Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const
        {
            Vector bary ;

            // The area of a triangle is
            real areaABC = DOT( normal, CROSS( (b - a), (c - a) ) ) ;
            real areaPBC = DOT( normal, CROSS( (b - P), (c - P) ) ) ;
            real areaPCA = DOT( normal, CROSS( (c - P), (a - P) ) ) ;

            bary.x = areaPBC / areaABC ;      // alpha
            bary.y = areaPCA / areaABC ;      // beta
            bary.z = 1.0f - bary.x - bary.y ; // gamma

            return bary ;
        }

    This method works, but I'm looking for a more efficient one!
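    A well-known alternative is the dot-product formulation from Christer Ericson's Real-Time Collision Detection (section 3.4), which replaces the per-query cross products with five dot products; better still, the terms that depend only on the triangle (d00, d01, d11 and denom) can be precomputed once per triangle and cached. A sketch reusing the question's Vector/real/DOT names (assumed available from the class above):

        // Barycentric coordinates via dot products (after Ericson, RTCD, section 3.4).
        Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const
        {
            Vector v0 = b - a, v1 = c - a, v2 = P - a ;

            real d00 = DOT( v0, v0 ) ;
            real d01 = DOT( v0, v1 ) ;
            real d11 = DOT( v1, v1 ) ;
            real d20 = DOT( v2, v0 ) ;
            real d21 = DOT( v2, v1 ) ;
            real denom = d00 * d11 - d01 * d01 ; // cacheable per triangle

            Vector bary ;
            bary.y = ( d11 * d20 - d01 * d21 ) / denom ; // beta  (weight of b)
            bary.z = ( d00 * d21 - d01 * d20 ) / denom ; // gamma (weight of c)
            bary.x = 1.0f - bary.y - bary.z ;            // alpha (weight of a)
            return bary ;
        }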


  • Moving player in direction camera is facing

    - by Samurai Fox
    I have a 3rd person camera which can rotate around the player. My problem is that wherever the camera is facing, the player's forward is always the same direction. For example, when the camera is facing the right side of the player and I press the button to move forward, I want the player to turn to the left and make that the "new forward". My camera script so far:

        using UnityEngine;
        using System.Collections;

        public class PlayerScript : MonoBehaviour
        {
            public float RotateSpeed = 150, MoveSpeed = 50;
            float DeltaTime;

            void Update()
            {
                DeltaTime = Time.deltaTime;
                transform.Rotate(0, Input.GetAxis("LeftX") * RotateSpeed * DeltaTime, 0);
                transform.Translate(0, 0, -Input.GetAxis("LeftY") * MoveSpeed * DeltaTime);
            }
        }

        public class CameraScript : MonoBehaviour
        {
            public GameObject Target;
            public float RotateSpeed = 170, FollowDistance = 20, FollowHeight = 10;
            float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight,
                  CurrentRotationAngle, CurrentHeight, Yaw, Pitch;
            Quaternion CurrentRotation;

            void LateUpdate()
            {
                RotateSpeedPerTime = RotateSpeed * Time.deltaTime;
                DesiredRotationAngle = Target.transform.eulerAngles.y;
                DesiredHeight = Target.transform.position.y + FollowHeight;
                CurrentRotationAngle = transform.eulerAngles.y;
                CurrentHeight = transform.position.y;
                CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0);
                CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0);
                CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0);
                transform.position = Target.transform.position;
                transform.position -= CurrentRotation * Vector3.forward * FollowDistance;
                transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z);
                Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime;
                Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime;
                transform.Translate(new Vector3(Yaw, -Pitch, 0));
                transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z);
                transform.LookAt(Target.transform);
            }
        }
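    The standard fix is to derive the movement direction from the camera each frame: flatten the camera's forward vector onto the ground plane, build a right vector from it, and combine the two with the input axes; the result is both the movement direction and the player's new forward. A sketch in plain vector math (engine-agnostic, hypothetical names):

        #include <cmath>

        struct V3 { float x, y, z; };

        // Build a camera-relative move direction from stick input.
        // camForward: the camera's forward vector; inputX/inputY: the stick axes.
        V3 cameraRelativeMove(V3 camForward, float inputX, float inputY)
        {
            // flatten the camera forward onto the ground plane (y is up)
            float len = std::sqrt(camForward.x * camForward.x + camForward.z * camForward.z);
            V3 fwd = { camForward.x / len, 0.0f, camForward.z / len };

            // right vector: the flattened forward rotated 90 degrees about up
            // (this sign matches Unity's left-handed convention)
            V3 right = { fwd.z, 0.0f, -fwd.x };

            // combined direction: face the player along it, then move
            return { right.x * inputX + fwd.x * inputY,
                     0.0f,
                     right.z * inputX + fwd.z * inputY };
        }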


  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is on the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on screen resize, so am I safe to assume all the trouble is from the matrix class? Also, the cube follows the mouse, but it's only vertically aligned and ahead of the mouse when going left or right from the center of the screen. Perspective function:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15
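    For what it's worth, this matrix matches the standard column-major OpenGL perspective layout (element 11 = -1, element 14 = 2fn/(n-f)), and strong stretching of off-centre objects on a very wide screen is expected behaviour at large fields of view. One way to rule the matrix class in or out (a sketch, assuming the project can link against GLM) is to diff it against glm::perspective:

        #include <cmath>
        #include <cstdio>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Compare a hand-rolled projection against GLM's reference version.
        // fov is in radians, matching the setPerspective documentation above.
        void diffAgainstGlm(const float* data, float fov, float aspect, float near, float far)
        {
            glm::mat4 ref = glm::perspective(fov, aspect, near, far);
            const float* r = &ref[0][0]; // GLM is column-major, like OpenGL
            for (int i = 0; i < 16; ++i)
                if (std::fabs(data[i] - r[i]) > 1e-5f)
                    std::printf("element %d differs: mine=%f glm=%f\n", i, data[i], r[i]);
        }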


  • Embed IF text parser in another game?

    - by DragonFax
    Are there any existing interactive fiction text-parsing engines that I can embed in another game or application? I'm looking to use something as a library: I would pass it the available objects and verbs from my own side, it would parse the sentences from the user and give me back some sort of structure/AST describing what the user asked for, and my own code would then act upon that request. I don't need something SIRI-level; the simple sentences and actions that current IF games support are fine. But I'm not looking to write a whole text/sentence parser myself. This isn't an IF game, and I can't write it entirely in an interactive-fiction language like Inform 7. Unfortunately, I can't seem to find any examples of anyone using the text-parsing capabilities of these engines without writing the entire game in that engine's language.


  • Handling Players, enemies and attacks in HTML5

    - by Chris Morris
    I'm building a simple (for now) game with a free-roaming player and monsters on a map built on a 2D grid. I've been looking at methods for putting characters and enemies onto the screen, and I've seen two separate approaches for doing this online:

    1. Drawing the player onto the screen canvas directly and refreshing the entire screen every FPS tick.
    2. Having a separate canvas to handle the player, and moving the player canvas on top of the screen canvas via absolute positioning.

    I can see some pros and cons of both methods, but what is generally the best way of doing this? I assume the second, since it does not drain resources by refreshing the map when the user is not moving, but this type of game will generally have constant movement anyway.


  • Rubik's cube array rotation

    - by Ace
    I'm about to make a 3D Rubik's cube based game in Flash AS3 and Away3D. I don't really know how to manage the 2D arrays of the Rubik's cube: for example, how do I rotate the corresponding arrays when I rotate a side, or just a middle slice? At this stage I also don't know how to rotate the smaller cube pieces all together when a side is rotating. First I was thinking of "groups" (like in SketchUp, 3ds Max or Blender), but that would be tricky, because the group's components would change every time. So I was thinking of just rotating each individual piece around a global axis. However, I only know the Away3D functions for rotating a cube on its local X, Y or Z axis - how do I rotate around a global axis? Does anyone know an algorithm for doing these types of rotations?
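    For the array half of the question, a quarter turn of a face is just a 90-degree rotation of that face's 3x3 sticker grid (plus cycling the four adjacent edge rows, omitted here). The index mapping is the whole trick; a sketch (in C++ for brevity - the AS3 translation is mechanical):

        #include <array>

        // A face's 3x3 sticker grid.
        using Face = std::array<std::array<int, 3>, 3>;

        // Rotate 90 degrees clockwise: element (r, c) moves to (c, 2 - r).
        Face rotateFaceCW(const Face& f)
        {
            Face out{};
            for (int r = 0; r < 3; ++r)
                for (int c = 0; c < 3; ++c)
                    out[c][2 - r] = f[r][c];
            return out;
        }

    For the global-axis half, the usual approach is to keep each piece's position as a vector, rotate that vector about the world axis, and accumulate the piece's orientation, rather than driving the local rotation properties directly.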


  • Tiled/TMX C++ Library/Parser

    - by Ben
    Where can I find an easy-to-use and up-to-date C++ parser/library for the .tmx map format (used by the Tiled map editor)? EDIT: David's comment, 'Unless you want to build your game around the format of the parser...', got me thinking. So I have downloaded pugixml, which is an easy-to-use XML parser with very straightforward documentation. Together with the spec for the TMX map format, I think I'll give it a try myself. I'll probably compare with Cocos2d-x's CCTMXTiledMap at some point.
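    To illustrate how small the pugixml route can start out, a sketch that reads a map's top-level attributes (hypothetical file name; error handling and tile-data decoding omitted):

        #include <cstdio>
        #include <pugixml.hpp>

        int main()
        {
            pugi::xml_document doc;
            pugi::xml_parse_result ok = doc.load_file("level1.tmx"); // hypothetical path
            if (!ok) { std::printf("parse error: %s\n", ok.description()); return 1; }

            pugi::xml_node map = doc.child("map");
            int width      = map.attribute("width").as_int();      // in tiles
            int height     = map.attribute("height").as_int();
            int tileWidth  = map.attribute("tilewidth").as_int();  // in pixels
            int tileHeight = map.attribute("tileheight").as_int();
            std::printf("%dx%d tiles of %dx%d px\n", width, height, tileWidth, tileHeight);

            // Each <layer> holds a <data> element whose "encoding" attribute
            // (csv or base64) says how to decode the tile indices.
            for (pugi::xml_node layer : map.children("layer"))
                std::printf("layer: %s\n", layer.attribute("name").as_string());
            return 0;
        }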


  • Tile map collision is not working properly

    - by Sigh-AniDe
    I am having problems setting up collision between my sprite and the tiles. I have only written the collision code for moving upwards, but in some places on the map the sprite moves up and in some places it doesn't. Here is what I have so far:

        Vector2 position;
        private static float scalingFactor = 32;
        static int tileWidth = 32;
        static int tileHeight = 32;

        int[ , ] map =
        {
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
        };

        // This is in turtle.Update
        if ( keyboardState.IsKeyDown( Keys.Up ) )
        {
            if ( position.Y > screenHeight / 4 )
            {
                // current position of the turtle on the tiles
                int mapX = ( int )( position.X / scalingFactor );
                int mapY = ( int )( position.Y / scalingFactor ) - 1;

                if ( isMovable( mapX, mapY, map ) )
                {
                    position.Y = position.Y - scalingFactor;
                }
            }
            else
            {
                MoveUp();
            }
        }

        private void MoveUp()
        {
            motion.Y = -1;
        }

        public bool isMovable( int mapX, int mapY, int[ , ] map )
        {
            if ( mapX < 0 || mapX > 19 || mapY < 0 || mapY > 20 )
            {
                return false;
            }

            // NOTE: the array is declared as map[row, col], i.e. map[y, x],
            // but it is indexed here as map[mapX, mapY] - these swapped
            // indices (and the swapped bounds above) would make movement
            // work in some places and fail in others.
            int tile = map[ mapX, mapY ];

            if ( tile == 0 )
            {
                return false;
            }
            return true;
        }

        protected override void Update( GameTime gameTime )
        {
            turtle.Update( screenHeight, scalingFactor, map );
            base.Update( gameTime );
        }


  • Character creation using spritesheets

    - by Patrick Developer
    I am currently creating a 2D fighting game, and I have implemented a system where, upon starting a new game, the player is presented with the option to create a custom character. I have a set of string arrays with values that correspond to hair, headgear, chest, lower body and shoes. When the player is done selecting items from the lists, a code is generated based on the index of each item (e.g. 01123), which is then used to assign the correct spritesheet to the player character. This has already been a lot of work, as I have had to create quite a few spritesheets to cover the possible combinations, and I am now looking at a massive amount of work to implement each variation. I have started to look into using layers for each item to reduce the workload, but I am also looking at having different stances for the character - depending on the currently equipped weapon - so that may mean a lot of work either way. My question is: do I have any alternatives, or am I stuck creating masses of spritesheets to cover all combinations? As a side note, how much impact will assigning layered items have on overall performance?

    Read the article

  • how to retain the animated position in opengl es 2.0

    - by Arun AC
    I am doing frame-based animation over 300 frames in OpenGL ES 2.0. I want a rectangle to translate by +200 pixels on the X axis and scale up by double (2 units) during the first 100 frames. Then the animated rectangle has to stay there for the next 100 frames. Then I want the same rectangle to translate by another +200 pixels on the X axis and scale down by half (0.5 units) during the last 100 frames. I am using simple linear interpolation to calculate the per-frame animation delta. Pseudo code - the drawFrame() below is executed 300 times (once per frame) in a loop:

        float RectMVMatrix[4][4] = { 1, 0, 0, 0,
                                     0, 1, 0, 0,
                                     0, 0, 1, 0,
                                     0, 0, 0, 1 }; // identity matrix

        int totalframes = 300;
        float translate-delta; // interpolated translation value for each frame
        float scale-delta;     // interpolated scale value for each frame

        void drawFrame(int iCurrentFrame)
        {
            // mySetIdentity(RectMVMatrix); // comment this line to retain the animated position
            mytranslate(RectMVMatrix, translate-delta, X_AXIS); // translate the mv matrix on the x axis by translate-delta
            myscale(RectMVMatrix, scale-delta);                 // scale the mv matrix by scale-delta
            ... // opengl calls
            glDrawArrays(...);
            eglswapbuffers(...);
        }

    The above code works fine for the first 100 frames. In order to retain the animated rectangle during frames 101 to 200, I removed the mySetIdentity(RectMVMatrix) call in drawFrame(). Now, on entering drawFrame() for the 2nd frame, RectMVMatrix holds the animated value of the first frame, e.g.:

        RectMVMatrix[4][4] = { 1.01, 0, 0, 2,
                               0,    1, 0, 0,
                               0,    0, 1, 0,
                               0,    0, 0, 1 }; // 2 pixels translation and 1.01 units scaling after the first frame

    This RectMVMatrix is used by mytranslate() in the 2nd frame. The translate function modifies RectMVMatrix[0][0]; thus translation affects the scaling values as well, and eventually the output is wrong. How do I retain the animated position without affecting the current model-view matrix?

    ===========================================

    I got the solution... Thanks to Sergio. I created separate matrices for translation and scaling, e.g. CurrentTranslateMatrix[4][4] and CurrentScaleMatrix[4][4]. Then, for every frame, I reset CurrentTranslateMatrix to identity and call myTranslate(CurrentTranslateMatrix, translateDelta_X, X_AXIS), reset CurrentScaleMatrix to identity and call myScale(CurrentScaleMatrix, scaleDelta), and finally multiply the two to get the final RectMVMatrix for the frame.

    Pseudo code:

        float RectMVMatrix[4][4] = {0};
        float CurrentTranslateMatrix[4][4] = {0};
        float CurrentScaleMatrix[4][4] = {0};

        int iTotalFrames = 300;
        int iAnimationFrames = 100;
        int iTranslate_X = 200; // in pixels
        float fScale_X = 2.0f;
        float scaleDelta;
        float translateDelta_X;

        void DrawRect(int iTotalFrames)
        {
            mySetIdentity(RectMVMatrix);
            for (int i = 0; i < iTotalFrames; i++)
            {
                DrawFrame(i);
            }
        }

        void getInterpolatedValue(int iStartFrame, int iEndFrame, int iTotalFrame,
                                  int iCurrentFrame, float *scaleDelta, float *translateDelta_X)
        {
            // cast before dividing: an integer division here would truncate to 0
            float fDelta = float(iCurrentFrame - iStartFrame) / float(iEndFrame - iStartFrame);

            float fStartX = 0.0f;
            float fEndX = ConvertPixelsToOpenGLUnit(iTranslate_X);
            *translateDelta_X = fStartX + fDelta * (fEndX - fStartX);

            float fStartScaleX = 1.0f;
            float fEndScaleX = fScale_X;
            *scaleDelta = fStartScaleX + fDelta * (fEndScaleX - fStartScaleX);
        }

        void DrawFrame(int iCurrentFrame)
        {
            getInterpolatedValue(0, iAnimationFrames, iTotalFrames, iCurrentFrame,
                                 &scaleDelta, &translateDelta_X);

            // build this frame's translation and scale from identity each time
            mySetIdentity(CurrentTranslateMatrix);
            myTranslate(CurrentTranslateMatrix, translateDelta_X, X_AXIS);
            mySetIdentity(CurrentScaleMatrix);
            myScale(CurrentScaleMatrix, scaleDelta);

            // RectMVMatrix = CurrentTranslateMatrix * CurrentScaleMatrix
            myMultiplyMatrix(RectMVMatrix, CurrentTranslateMatrix, CurrentScaleMatrix);

            ... // opengl calls
            glDrawArrays(...);
            eglswapbuffers(...);
        }

    I keep this RectMVMatrix value unchanged when there is no animation for the current frame (e.g. from frame 101 onwards). Thanks, Arun AC

