Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • Changing State in PlayerController from PlayerInput

    - by Jeremy Talus
    From my player input I want to change the "State" of my player controller, but I am having some trouble doing it. My player input is declared like this: class ResistancePlayerInput extends PlayerInput within ResistancePlayerController config(ResistancePlayerInput); and in my PlayerController I have this: class ResistancePlayerController extends GamePlayerController; var name PreviousState; DefaultProperties { CameraClass = class 'ResistanceCamera' //Telling the player controller to use your custom camera script InputClass = class'ResistanceGame.ResistancePlayerInput' DefaultFOV = 90.f //Telling the player controller what the default field of view (FOV) should be } simulated event PostBeginPlay() { Super.PostBeginPlay(); } auto state Walking { event BeginState(name PreviousStateName) { Pawn.GroundSpeed = 200; `log("Player Walking"); } } state Running extends Walking { event BeginState(name PreviousStateName) { Pawn.GroundSpeed = 350; `log("Player Running"); } } state Sprinting extends Walking { event BeginState(name PreviousStateName) { Pawn.GroundSpeed = 800; `log("Player Sprinting"); } } I have tried PCOwner.GotoState(); and ResistancePlayerController(PCOwner).GotoState(); but neither works. I have also tried a plain GotoState, and nothing happens. How can I call GotoState on the PlayerController class from my player input?

    Read the article

  • box2d resize bodies around point

    - by philipp
    I have a compound object, consisting of a b2Body, vector graphics and a list of polygons which describe the b2Body's shapes. This object has its own transformation matrix to centralize the storage of transformations. So far everything is working quite fine, even scaling works, but not if I scale around a point. In the initialization phase of the object it is scaled around a point. This happens in this order: transform the main matrix, transform the vector graphics and the polygons, recreate the b2Body. After this function runs, the shapes and all the graphics are exactly where they should be, BUT: after the first steps of the b2World the graphical stuff moves away from the body. When I ran the debugger I found out that the position of the body is 0/0. The red dot shows the center of scaling; the first image shows the basic setup and the second the final position of the graphics. This distance stays constant for the rest of the simulation. If I set the position via myBody.SetPosition( sx, sy ); the whole scenario just plays out a bit more distant from the origin. Any idea how to fix this? EDIT: I dug deeper into the problem and it lies in the fact that I must not scale the transform matrix for the b2Body shapes around the center, but set the b2Body's position back to the point after scaling. But how can I calculate that point? EDIT 2: I went even deeper and solved it, but it is a slow solution and I hope somebody understands what formula I need. Assuming I have a set of polygons relative to an origin as the basis shapes for a b2Body, scaling the whole object around a certain point is done in the following steps: I scale everything around the center except the polygons; I create a clone of the polygons' matrix; I scale this clone around the point; I calculate dx, dy as the difference clone.tx - original.tx and clone.ty - original.ty; I scale the original polygon matrix NOT around the point; I recreate the body; I create the fixture; I set the position of the body to dx and dy; done! So what I am interested in is a formula for dx and dy without cloning matrices, scaling the clone around a point, getting dx and dy and finally scaling the vertex matrix.
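
    A possible closed form, assuming a uniform scale factor s and affine matrices (so the polygons' matrix carries a translation t = (tx, ty)): scaling around a point c is the same as scaling around the origin followed by a fixed translation,

        S_c(p) = c + s * (p - c) = s * p + (1 - s) * c

    so the clone's translation after scaling around c is s*t + (1 - s)*c. Comparing that against the untouched original gives

        dx = (1 - s) * (cx - tx),    dy = (1 - s) * (cy - ty)

    and comparing it against a copy scaled around the origin gives dx = (1 - s)*cx, dy = (1 - s)*cy. Either difference comes straight from s, c and t, with no cloned matrix needed.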

    Read the article

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons of Direct3D 11 and the newest OpenGL versions? Simply put, Direct3D 11 introduced several main features (taken from Wikipedia): tessellation, multithreaded rendering, compute shaders, and an increased texture cache. Now I'm wondering, how do the newest versions of OpenGL cope with these features? And since I have a feeling that there are features on OpenGL's side that Direct3D lacks, what are those?

    Read the article

  • Moving a body in a specific direction using XNA with Farseer Physics

    - by Code Assasssin
    I have a custom polygon attached to a body, which looks like this: What I am trying to accomplish is getting the body to move in whatever direction the tip of the body is pointing. So far this is what I've tried: if (ks.IsKeyDown(Keys.Up)) { body.ApplyForce(new Vector2(0, -20),body.GetLocalPoint(new Vector2(0,0))); } if (ks.IsKeyDown(Keys.Left)) { body.ApplyTorque(-500); } if (ks.IsKeyDown(Keys.Right)) { body.ApplyTorque(500); } The body rotates fine, but when I try making the body accelerate according to the tip of the body - assuming I have specified the tip correctly (I am pretty sure I haven't) - it just spins around, as if I had applied torque to it. Can anyone point me in the right direction on how to fix this problem?
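
    For what it's worth, a minimal sketch of "thrust along the facing direction" (assuming Farseer Physics 3.x and that the polygon's tip points along the body's local +X axis at zero rotation; the thrust value of 20 is a placeholder):

        if (ks.IsKeyDown(Keys.Up))
        {
            // Unit vector in the direction the body is currently facing.
            Vector2 forward = new Vector2((float)Math.Cos(body.Rotation),
                                          (float)Math.Sin(body.Rotation));

            // Applying the force at the center of mass adds no torque,
            // so the body accelerates forwards instead of spinning.
            body.ApplyForce(forward * 20f);
        }

    Passing a second argument to ApplyForce only makes sense if that point really is the world-space location where the force acts; an off-center application point is exactly what turns a push into spin.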

    Read the article

  • Converting a GameObject method call from UnityScript to C#

    - by Crims0n_
    Here is the UnityScript implementation of the method I use to generate a randomly tiled background. The problem I'm having relates to how to translate the call to the newTile method into C#; so far I've had no luck fiddling... can anyone point me in the right direction? Thanks #pragma strict import System.Collections.Generic; var mapSizeX : int; var mapSizeY : int; var xOffset : float; var yOffset : float; var tilePrefab : GameObject; var tilePrefab2 : GameObject; var tiles : List.<Transform> = new List.<Transform>(); function Start () { var i : int = 0; var xIndex : int = 0; var yIndex : int = 0; xOffset = 2.69; yOffset = -1.97; while(yIndex < mapSizeY){ xIndex = 0; while(xIndex < mapSizeX){ var z = Random.Range(0, 5); if (z > 2) { var newTile : GameObject = Instantiate (tilePrefab, Vector3(xIndex*0.64 - (xOffset * (mapSizeX/10)), yIndex*-0.64 - (yOffset * (mapSizeY/10)), 0), Quaternion.identity); tiles.Add(newTile.transform); newTile.transform.parent = transform; newTile.transform.name = "tile_"+i; i++; xIndex++; } if (z < 2) { var newTile2 : GameObject = Instantiate (tilePrefab2, Vector3(xIndex*0.64 - (xOffset * (mapSizeX/10)), yIndex*-0.64 - (yOffset * (mapSizeY/10)), 0), Quaternion.identity); tiles.Add(newTile2.transform); newTile2.transform.parent = transform; newTile2.transform.name = "Ztile_"+i; i++; xIndex++; } } yIndex++; } } C# Version [Fixed] using UnityEngine; using System.Collections; public class LevelGen : MonoBehaviour { public int mapSizeX; public int mapSizeY; public float xOffset; public float yOffset; public GameObject tilePrefab; public GameObject tilePrefab2; int i; public System.Collections.Generic.List<Transform> tiles = new System.Collections.Generic.List<Transform>(); // Use this for initialization void Start () { int i = 0; int xIndex = 0; int yIndex = 0; xOffset = 1.58f; yOffset = -1.156f; while (yIndex < mapSizeY) { xIndex = 0; while(xIndex < mapSizeX) { int z = Random.Range(0, 5); if (z > 5) { GameObject newTile = (GameObject)Instantiate(tilePrefab, new Vector3(xIndex*0.64f - (xOffset * (mapSizeX/10.0f)), yIndex*-0.64f - (yOffset * (mapSizeY/10.0f)), 0), Quaternion.identity); tiles.Add(newTile.transform); newTile.transform.parent = transform; newTile.transform.name = "tile_"+i; i++; xIndex++; } if (z < 5) { GameObject newTile2 = (GameObject)Instantiate(tilePrefab, new Vector3(xIndex*0.64f - (xOffset * (mapSizeX/10.0f)), yIndex*-0.64f - (yOffset * (mapSizeY/10.0f)), 0), Quaternion.identity); tiles.Add(newTile2.transform); newTile2.transform.parent = transform; newTile2.transform.name = "tile2_"+i; i++; xIndex++; } } yIndex++; } } // Update is called once per frame void Update () { } }

    Read the article

  • Importing FBX with multiple meshes in UDK

    - by andresp
    I need to import into UDK a number of FBX models (representing buildings) which are composed of various submeshes (walls, windows, roof...). I need to keep the individual meshes (I can't use the merge option), but I also need to work with the building as a whole. Do you know if this is possible? How? Also, is there a way to keep the texture assignments for the FBX models after importing them into Unreal? Doing the process manually (importing the model, importing the texture, assigning it to the material, assigning the material to each mesh and submesh) for 100 or 200 models (to import an entire city from City Engine) isn't viable.

    Read the article

  • Arbitrary projection matrix from 6 arbitrary frustum planes

    - by Doub
    A projection matrix represents a transformation from camera view space to the rendering system's clip space. In other words, it defines the transformation from a 6-sided frustum to the clip cube. glOrtho and glFrustum use only 6 parameters to define such a projection, but impose several constraints on the frustum that gets projected to the clip cube: the near and far planes are parallel, the left and right planes intersect on a vertical line, and the top and bottom planes intersect on a horizontal line, both lines being parallel to the near and far planes. I'd like to lift these restrictions. So, from the definition of the 6 frustum side planes (in whatever representation you see fit), how can I compute a general projection matrix?
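
    One possible route, sketched under the assumption that each plane is given as a 4-vector (n, d) with the frustum interior on its positive side, and that the six planes really do bound a frustum that is the projective image of a cube: the standard clip-plane extraction (Gribb/Hartmann) says that for a projection matrix with rows r1..r4,

        L = r4 + r1,  R = r4 - r1,  B = r4 + r2,  T = r4 - r2,  N = r4 + r3,  F = r4 - r3

    each only up to a positive scale. Running it backwards: first solve for positive scale factors a_L..a_F such that a_L*L + a_R*R = a_B*B + a_T*T = a_N*N + a_F*F (this pins the plane 4-vectors down consistently), then

        r4 = (a_L*L + a_R*R) / 2,   r1 = (a_L*L - a_R*R) / 2,
        r2 = (a_B*B - a_T*T) / 2,   r3 = (a_N*N - a_F*F) / 2

    This is a sketch rather than a turnkey recipe; the scale-solving step is where the "cube-compatible frustum" condition on the six planes shows up.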

    Read the article

  • Deferred Rendering With Diffuse,Specular, and Normal maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model, which can be found here, as my sandbox. Note I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp. I extract all geometry information including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file: Ambient/Diffuse/Specular/Emissive Reflectivity Coefficients (Ka, Kd, Ks, Ke), Shininess, Diffuse Map, Specular Map, Normal Map. I understand that I must render vertex attributes such as position, normals and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but do you not require the diffuse, specular and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents and normals into g-buffers, and then construct the tangent matrix and apply it to the normal sampled from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.
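
    For reference, the usual arrangement (not the only possible one): the geometry pass samples the diffuse, specular and normal maps and writes the results, so the G-buffer ends up holding the already-textured albedo and specular terms plus a world- or view-space normal, and the lighting pass never touches the maps themselves. The tangent-space sample n_t from the normal map is expanded and rotated in the geometry pass,

        n = normalize( TBN * (2 * n_t - 1) ),    TBN = [ T  B  N ]  (columns: tangent, bitangent, vertex normal)

    so only the resulting n needs to be stored; the tangents and bitangents themselves never have to reach the G-buffer.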

    Read the article

  • What happened to .fx files in D3D11?

    - by bobobobo
    It seems they completely ruined .fx file loading / parsing in D3D11. In D3D9, loading an entire effect file was D3DXCreateEffectFromFile( .. ), and you got an ID3DXEffect9, which had great methods like SetTechnique and BeginPass, making it easy to load and execute a shader with multiple techniques. Is this completely manual now in D3D11? The highest-level functionality I can find is loading a SINGLE shader from an FX file using D3DX11CompileFromFile. Does anyone know if there's an easier way to load FX files and choose a technique? With the level of functionality provided in D3D11 now, it seems like you're better off just writing .hlsl files and forgetting about the whole idea of Techniques.

    Read the article

  • Creating a curved mesh on inside of sphere based on texture image coordinates

    - by user5025
    In Blender, I have created a sphere with a panoramic texture on the inside. I have also manually created a plane mesh (curved to match the size of the sphere) that sits on the inside wall where I can draw a different texture. This is great, but I really want to reduce the manual labor, and do some of this work in a script -- like having a variable for the panoramic image, and coordinates of the area in the photograph that I want to replace with a new mesh. The hardest part of doing this is going to be creating a curved mesh in code that can sit on the inside wall of a sphere. Can anyone point me in the right direction?
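
    One way to sketch the geometry, assuming a sphere of radius r centred at the origin and a rectangular patch described in spherical coordinates: sample an n x m grid over the angular extents of the region to replace,

        theta_i = theta0 + (theta1 - theta0) * i / (n - 1)
        phi_j   = phi0   + (phi1   - phi0)   * j / (m - 1)
        p_ij    = r * ( sin(theta_i) * cos(phi_j),  sin(theta_i) * sin(phi_j),  cos(theta_i) )

    connect neighbouring samples into quads (or two triangles each), and flip the face normals so they point towards the centre, since the patch is viewed from inside. If the panorama is equirectangular, the angular extents follow directly from the pixel region: phi = 2*pi * u / width, theta = pi * v / height.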

    Read the article

  • Changing Ogre3D terrain lighting in real time

    - by lezebulon
    I'm looking at the Ogre 3D library and I'm browsing through some examples / tutorials. My question is about terrain. There are a few examples showing how great the terrain system is, but I think that the global illumination and shadows of the terrain have to be pre-computed, which kind of makes it impossible to integrate this with a day / night cycle. Is there a way to change the terrain light sources in real time? If so, is it possible to do it and keep a decent FPS?

    Read the article

  • getting bone base and tip positions from a transform matrix?

    - by ddos
    I need this for a Blender 3D script, but you don't really need to know Blender to answer this. I need to get bone base and tip positions from a transform matrix read from a file. The position of the base is the location part of the matrix, the length of the bone (distance from base to tip) is the scale, and the position of the tip is calculated from the scale (distance from the bone base) and the rotation part of the matrix. So how do I calculate these? bone.base([x,y,z]) # x,y,z - floats bone.tip([x,y,z])
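
    A sketch of the decomposition, assuming the matrix factors as M = T * R * S (column-vector convention) and that the bone's rest direction is its local +Y axis, as in Blender, with the bone length stored in the Y scale:

        base = t                      (the translation part of M)
        L    = scale along Y          (the bone length)
        tip  = base + L * R_y         (R_y = the rotated Y axis, i.e. the second column of R)

    Equivalently, transforming the point (0, 1, 0) by the full matrix gives the tip directly: tip = M * (0, 1, 0, 1).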

    Read the article

  • Implementing top view physics using box2D

    - by humbleBee
    How can top view physics games be done in box2D? One idea I have is to set the linear velocity of an object manually, or to alter the linear and angular damping as my object moves over different surfaces. For example, if my object is over a wet surface it'll have less linear damping, and if it is over a rough surface it'll have more damping. And to see if my object has fallen over an edge, I'll try to use an AABB and check if it's still inside, or manually see if object.x > boundary.x etc. Is there any better way?
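
    A minimal sketch of the damping idea in C# (Farseer-style names; SurfaceAt and bounds are placeholders for your own tile lookup and playable area):

        // A top-down world has no gravity acting in the screen plane.
        World world = new World(Vector2.Zero);

        // Each step, pick damping from whatever surface is under the body.
        float damping = 2.0f;                            // default ground
        switch (SurfaceAt(body.Position))
        {
            case Surface.Wet:   damping = 0.5f; break;   // keeps sliding
            case Surface.Rough: damping = 6.0f; break;   // stops quickly
        }
        body.LinearDamping = damping;

        // "Fallen over an edge" is just a containment test against the playable bounds.
        bool onSurface = body.Position.X > bounds.Left && body.Position.X < bounds.Right
                      && body.Position.Y > bounds.Top  && body.Position.Y < bounds.Bottom;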

    Read the article

  • Can anyone recommend an AI sandbox?

    - by user19433
    I'm a passionate person who has been around AI for a long time [1] but never gone deep enough. Now it's time! I've been really looking for some way to concentrate on AI coding, but couldn't manage to find an AI environment I can focus on. I just want to use an AI sandbox environment which would give me tools like: visibility information; a character controller; the ability to easily define a level, with obstacles of course; physics; collider management; trigger management. It doesn't need to be a shiny, eye-candy graphical renderer: this is about pathfinding, tactical reasoning, etc. I have tried: Unreal Dev Kit: while the new release announcement is about C++ coding, that is about external tools and will be released in 2013. CryEngine: really interesting, as AI is present here, but coding with it appears to be hell: did I get it wrong? Half-Life Source, C4, Torque, DX Studio: either quite old, not very useful, or costly; these imply digging into the documentation (when provided) to code everything, graphics included. Unity 3D: the most promising platform. While you also need to create your own environment, there are lots of examples. The disadvantage, in addition to spending time getting this environment working, is the language choice: C#, JavaScript or Boo. C# is not that hard, but this implies you'll always have to convert papers (I love those from Lars Linden), book code, or anything you can find on AiGameDev, which are most often in C++. This is extra work. I've looked at "Simple Path", the very good Arong Greenberg work (but no source provided), and AngryAnt's work. AI Sandbox: this seems to be exactly what, as an AI coder, I want to use. I saw some previews, but since 2009 we still don't know precisely what it will be about: will it be open source or free (I strongly doubt it), will I be able to buy it, will it really provide the tools I need to focus on AI? That being said, what is the best environment for focusing on AI coding only, and is it even possible?

    Read the article

  • How to manage my model

    - by Christophe Debove
    I have in my model a list of classes: Player, NonPlayerCharacter, Monster, Item, NonMovableItem, etc. With AndEngine I have a list of sprites for each piece of my model. How can I manage the relationship between my model's classes and the graphical elements, and what degree of abstraction is recommended for my problem? One sprite for one model, one model for one sprite, or n to n, for example? If I do drag & drop, do I have to abstract away the Sprite class? Another example: is a map a list of sprites or a list of elements of my model?

    Read the article

  • 3D zooming technique to maintain the relative position of an object on screen

    - by stark
    Is it possible to zoom to a certain point on screen by modifying the field of view and rotating the view of the camera, so as to keep that point/object in the same place on screen while zooming? Changing the camera position is not allowed. I projected the 3D position of the object to the screen and remembered it. Then on each frame I calculate the direction to it in camera space, and then I construct a rotation matrix to align this direction to the Z axis (in camera space). After this, I calculate the direction from the camera to the object in world space, transform this vector with the matrix I obtained earlier, and then use this final vector as the camera's new direction. And it's actually "kinda working"; the problem is that it is more or less off from the camera's rotation before starting to zoom, depending on the area you are trying to zoom in on (larger error at edges/corners). It looks acceptable, but I'm not settling for only this. Any suggestions/resources for doing this technique perfectly? If some of you want to explain the math in detail, be my guest, I can understand these things well.
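
    One way to state the exact condition, assuming a symmetric perspective projection and a point lying on the horizontal (or vertical) axis of the screen: a point at angle alpha off the view axis lands at the normalized screen coordinate

        x_ndc = tan(alpha) / tan(fov / 2)

    so keeping it at the same screen position while the field of view goes from f0 to f1 means rotating the camera so that the new off-axis angle satisfies

        tan(alpha1) = tan(alpha0) * tan(f1 / 2) / tan(f0 / 2)

    (applied per axis in the camera's frame for a general point). Rotating the target direction onto the view axis only enforces alpha1 = 0, which is exact at the centre and drifts towards the edges and corners, matching the error pattern described above.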

    Read the article

  • blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to model from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only allows you to see the background pictures when in orthographic mode and only from right|front|top, which doesn't help me. How do I proceed to model from non-perfect guide images?

    Read the article

  • How can I transform a Point2f with a matrix on Android?

    - by Vivendi
    I'm developing for Android and I'm using the android.renderscript.Matrix3f class to do some calculations. What I need to do now is something like mat.transform(pointIn, pointOut); So I need to transform a point with a given matrix. In AWT I would simply do this: AffineTransform t = new AffineTransform(); Point2D.Float p = new Point2D.Float(); t.transform( p, p ); But in Android I now have this: Matrix3f t = new Matrix3f(); PointF p = new PointF(); // Now I need to transform it somehow.. But the Matrix3f class in Android doesn't have a Matrix.transform(Point2D ptSrc, Point2D ptDst) method. So I guess I have to do the transformation manually, but I'm not really sure how that works. From what I've seen it's something like a translate and then a rotate? Could anyone please tell me how to do this in code?
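
    The manual version is just a matrix-vector product, assuming the 3x3 matrix encodes a 2D affine transform (last row 0, 0, 1) and the point is treated as the column vector (x, y, 1):

        x' = m00 * x + m01 * y + m02
        y' = m10 * x + m11 * y + m12

    Whether the translation sits in m02/m12 or in m20/m21 depends on whether the values were written into Matrix3f row-major or column-major, so that is worth checking against how the matrix was built.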

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane: glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get: glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); which produces a reasonable display (camera at 0,5,0 looking down the y axis). So rather than do the calculation manually, I should be able to use the OpenGL modelview transformation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result (of course!)
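
    Two facts about the fixed-function pipeline may explain the difference between the two paths. First, glLoadMatrixf reads the 16 floats in column-major order; with that reading, the matrix applied to the first vertex (0.5, 0.2, -0.5, 1) gives the homogeneous point (-0.3, 0, 0.3, -0.8), and dividing by w = -0.8 by hand produces the expected (0.375, 0, -0.375), so the hand calculation does match the column-major interpretation. But OpenGL performs no divide after the modelview stage: dividing by a negative w flips the point through the origin, so the manually projected quad sits in front of the camera while the un-divided points the pipeline sees lie on the opposite side and are clipped by a standard perspective projection. Negating the entire matrix (the same projective transform, but with positive w for these vertices) is one way to test this. Second, glLoadMatrixf, and glLoadIdentity followed by glMultMatrixf, both replace whatever viewing transform was on the modelview stack, whereas the hand-projected quad was presumably drawn with the normal camera transform still in place; the usual idiom for planar shadow matrices is to multiply them onto the existing modelview after the camera is set up.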

    Read the article

  • DX11 - Weird shader behavior with and without branching

    - by Martin Perry
    I have found a problem in my shader code which I don't know how to solve. I want to rewrite this code without "ifs": tmp = evaluate and result is 0 or 1 (nothing else) if (tmp == 1) val = X1; if (tmp == 0) val = X2; I rewrote it this way, but this piece of code doesn't work correctly: tmp = evaluate and result is 0 or 1 (nothing else) val = tmp * X1 val = !tmp * X2 However, if I change it to: tmp = evaluate and result is 0 or 1 (nothing else) val = tmp * X1 if (!tmp) val = !tmp * X2 it works fine... but it is useless because of the "if", which needs to be eliminated. I honestly don't understand it. I tried compiling with NO and FULL optimization; the result is the same.
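
    For reference: in the branchless version the second assignment overwrites the first, so the X1 term is simply discarded; the select has to be a single expression,

        val = tmp * X1 + (1 - tmp) * X2

    which, for tmp equal to 0 or 1, is exactly what HLSL's lerp(X2, X1, tmp) computes.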

    Read the article

  • XNA texture stretching at extreme coordinates

    - by Shaun Hamman
    I was toying around with infinitely scrolling 2D textures using the XNA framework and came across a rather strange observation. Using the basic draw code: spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.PointWrap, null, null); spriteBatch.Draw(texture, Vector2.Zero, sourceRect, Color.White, 0.0f, Vector2.Zero, 2.0f, SpriteEffects.None, 1.0f); spriteBatch.End(); with a small 32x32 texture and a sourceRect defined as: sourceRect = new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height); I was able to scroll the texture across the window infinitely by changing the X and Y coordinates of the sourceRect. Playing with different coordinate locations, I noticed that if I made either of the coordinates too large, the texture no longer drew and was instead replaced by either a flat color or alternating bands of color. Tracing the coordinates back down, I found the following at around (0, -16,777,000): As you can see, the texture in the top half of the image is stretched vertically. My question is why is this occurring? Certainly I can do things like bind the x/y position to some low multiple of 32 to give the same effect without this occurring, so fixing it isn't an issue, but I'm curious about why this happens. My initial thought was perhaps it was overflowing the coordinate value or some such thing, but looking at a data type size chart, the next closest below is an unsigned short with a range of about 32,000, and above is an unsigned int with a range of around 2,000,000,000 so that isn't likely the cause.
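
    A worked number that lines up with the observation (assuming the sprite positioning math ends up in 32-bit floats, which is what XNA's vector and matrix types use): a single-precision float has a 24-bit significand, so consecutive integers stop being representable at

        2^24 = 16,777,216

    Beyond that point coordinates snap to every 2nd (then 4th, 8th, ...) unit, so values derived from the source rectangle quantize and whole rows of texels repeat, which shows up as the banding and stretching in the screenshot. A threshold of roughly 16,777,000 sits right at this limit, pointing at float precision rather than an integer overflow.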

    Read the article

  • XNA Transparency depending on drawing order?

    - by DarthRoman
    I am drawing two 3D objects; both of them can fade from opaque to transparent independently, and they can intersect each other (so you cannot say that one of them is in front of the other). Look at the image for a better understanding (one of the objects is a terrain and the other one an area): Now, if I apply transparency to both of them and draw the terrain before the area, the terrain is not transparent with respect to the area, but the area is: And finally, if I draw the area before the terrain, then the area is not transparent with respect to the terrain: QUESTION: How can I make all the objects transparent to the rest of the objects without depending on the drawing order?

    Read the article

  • OpenGL position from depth is wrong

    - by CoffeeandCode
    My engine is currently implemented using a deferred rendering technique, and today I decided to change it up a bit. First I was storing 5 textures as so: DEPTH24_STENCIL8 - Depth and stencil RGBA32F - Position RGBA10_A2 - Normals RGBA8 x 2 - Specular & Diffuse I decided to minimize it and reconstruct positions from the depth buffer. Trying to figure out what is wrong with my method currently has not been fun :/ Currently I get this: which changes whenever I move the camera... weird Vertex shader really simple #version 150 layout(location = 0) in vec3 position; layout(location = 1) in vec2 uv; out vec2 uv_f; void main(){ uv_f = uv; gl_Position = vec4(position, 1.0); } Fragment shader Where the fun (and not so fun) stuff happens #version 150 uniform sampler2D depth_tex; uniform sampler2D normal_tex; uniform sampler2D diffuse_tex; uniform sampler2D specular_tex; uniform mat4 inv_proj_mat; uniform vec2 nearz_farz; in vec2 uv_f; ... other uniforms and such ... layout(location = 3) out vec4 PostProcess; vec3 reconstruct_pos(){ float z = texture(depth_tex, uv_f).x; vec4 sPos = vec4(uv_f * 2.0 - 1.0, z, 1.0); sPos = inv_proj_mat * sPos; return (sPos.xyz / sPos.w); } void main(){ vec3 pos = reconstruct_pos(); vec3 normal = texture(normal_tex, uv_f).rgb; vec3 diffuse = texture(diffuse_tex, uv_f).rgb; vec4 specular = texture(specular_tex, uv_f); ... do lighting ... PostProcess = vec4(pos, 1.0); // Just for testing } Rendering code probably nothing wrong here, seeing as though it always worked before this->gbuffer->bind(); gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT); gl::Enable(gl::DEPTH_TEST); gl::Enable(gl::CULL_FACE); ... bind geometry shader and draw models and shiz ... gl::Disable(gl::DEPTH_TEST); gl::Disable(gl::CULL_FACE); gl::Enable(gl::BLEND); ... bind textures and lighting shaders shown above then draw each light ... gl::BindFramebuffer(gl::FRAMEBUFFER, 0); gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT); gl::Disable(gl::BLEND); ... bind screen shaders and draw quad with PostProcess texture ... Rinse_and_repeat(); // not actually a function ;) Why are my positions being output like they are?
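
    For comparison, the usual reconstruction with OpenGL's default depth range maps both the texture coordinates and the stored depth from [0, 1] back to [-1, 1] before applying the inverse projection:

        d      = texture(depth_tex, uv).x
        ndc    = ( 2*uv - 1,  2*d - 1,  1 )
        p_view = inverse(P) * ndc        (as a vec4 with w = 1, then divide by w)

    The shader above remaps uv_f but feeds the raw depth value straight into the z slot, so unless a [0, 1] depth convention is deliberately in use, that z also needs the 2*d - 1 mapping. It is also worth confirming that inv_proj_mat is the inverse of the projection alone: if it includes the view matrix, the result is world-space rather than view-space and will change as the camera moves.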

    Read the article

  • Can't get sprite to rotate correctly?

    - by rphello101
    I'm attempting to play with graphics using Java/Slick 2D. I'm trying to get my sprite to rotate to wherever the mouse is on the screen and then move accordingly. I figured the best way to do this was to keep track of the angle the sprite is at, since I have to multiply the cosine/sine of the angle by the move speed in order to get the sprite to go "forwards" even if it is, say, facing 45 degrees in quadrant 3. However, before I even worry about that, I'm having trouble even getting my sprite to rotate in the first place. Preliminary console tests showed that this code worked, but when applied to the sprite, it just kind of twitches. Anyone know what's wrong? int mX = Mouse.getX(); int mY = HEIGHT - Mouse.getY(); int pX = sprite.x; int pY = sprite.y; int tempY, tempX; double mAng, pAng = sprite.angle; double angRotate=0; if(mX!=pX){ tempY=pY-mY; tempX=mX-pX; mAng = Math.toDegrees(Math.atan2(Math.abs((tempY)),Math.abs((tempX)))); if(mAng==0 && mX<=pX) mAng=180; } else{ if(mY>pY) mAng=270; else mAng=90; } //Calculations if(mX<pX&&mY<pY){ //If in Q2 mAng = 180-mAng; } if(mX<pX&&mY>pY){ //If in Q3 mAng = 180+mAng; } if(mX>pX&&mY>pY){ //If in Q4 mAng = 360-mAng; } angRotate = mAng-pAng; sprite.angle = mAng; sprite.image.setRotation((float)angRotate);
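
    Two general observations, hedged because the surrounding sprite code isn't shown: the full-range arctangent already covers every quadrant, so the manual quadrant cases can collapse to

        angle = Math.toDegrees( Math.atan2(dy, dx) )

    with dx, dy the mouse offset from the sprite (no Math.abs and no special cases; depending on which way the y axis and the rotation run, dy may just need its sign flipped). And in Slick, Image.setRotation sets an absolute angle while rotate() is the incremental one, so passing the difference mAng - pAng only behaves if pAng really tracks the image's current rotation every frame; passing the absolute mAng avoids the accumulated-difference twitch.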

    Read the article

  • How to automatically render all opaque meshes with a specific shader?

    - by dsilva.vinicius
    I have a specular outline shader that I want to be used on all opaque meshes of the scene whenever a specific camera renders. The shader is working properly when it is manually applied to some material. The shader is as follows: Shader "Custom/Outline" { Properties { _Color ("Main Color", Color) = (.5,.5,.5,1) _OutlineColor ("Outline Color", Color) = (1,0.5,0,1) _Outline ("Outline width", Range (0.0, 0.1)) = .05 _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1) _Shininess ("Shininess", Range (0.03, 1)) = 0.078125 _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {} } SubShader { Tags { "Queue"="Overlay" "RenderType"="Opaque" } Pass { Name "OUTLINE" Tags { "LightMode" = "Always" } Cull Off ZWrite Off // Uncomment to show outline always. //ZTest Always CGPROGRAM #pragma target 3.0 #pragma vertex vert #pragma fragment frag #include "UnityCG.cginc" struct appdata { float4 vertex : POSITION; float3 normal : NORMAL; }; struct v2f { float4 pos : POSITION; float4 color : COLOR; }; float _Outline; float4 _OutlineColor; v2f vert(appdata v) { // just make a copy of incoming vertex data but scaled according to normal direction v2f o; o.pos = mul(UNITY_MATRIX_MVP, v.vertex); float3 norm = mul ((float3x3)UNITY_MATRIX_IT_MV, v.normal); float2 offset = TransformViewToProjection(norm.xy); o.pos.xy += offset * o.pos.z * _Outline; o.color = _OutlineColor; return o; } float4 frag(v2f fromVert) : COLOR { return fromVert.color; } ENDCG } UsePass "Specular/FORWARD" } FallBack "Specular" } The camera used fot the effect has just a script component which setups the shader replacement: using UnityEngine; using System.Collections; public class DetectiveEffect : MonoBehaviour { public Shader EffectShader; // Use this for initialization void Start () { this.camera.SetReplacementShader(EffectShader, "RenderType=Opaque"); } // Update is called once per frame void Update () { } } Unfortunately, whenever I use this camera I just see the background color. Any ideas?
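
    One detail worth checking against Unity's Camera.SetReplacementShader(Shader shader, string replacementTag) signature: the second argument is the name of the tag to match, not a tag=value pair. A minimal sketch of the usual call:

        // Render only objects whose shader defines a RenderType tag, using the
        // subshader in EffectShader whose RenderType value matches theirs
        // (so the "Opaque" subshader above replaces other "Opaque" shaders).
        this.camera.SetReplacementShader(EffectShader, "RenderType");

    Passing "RenderType=Opaque" asks for a tag literally named "RenderType=Opaque", which no shader defines, so nothing matches and only the camera's clear colour is rendered, which would explain the symptom described.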

    Read the article
