Search Results

Search found 2086 results on 84 pages for 'pixel shader'.

Page 32/84 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • How do I create a simple video editing filter for DirectShow or DMO?

    - by Ole Jak
    How do I create a simple video editing filter for DirectShow or DMO? What I need is simple: a tutorial or tutorials on how to create a simple filter (like a brightness/contrast adjustment filter, or any other per-pixel kind of filter) for filtering a DirectShow video stream, so I want to have a graph like "my web camera" - "my Photoshop-like filter" - "rendering (or saving to file)".
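
    Not a full DirectShow/DMO walkthrough, but for illustration, a rough sketch of the per-pixel math such a brightness/contrast transform would apply to its samples inside the filter's processing callback, assuming 8-bit channels (the function and parameter names are made up):

        #include <algorithm>
        #include <cstddef>
        #include <cstdint>

        // Clamp-adjust every 8-bit sample: scale around mid-grey for contrast,
        // then shift for brightness. 'pixels' is the raw sample buffer.
        void AdjustBrightnessContrast(uint8_t* pixels, size_t count, int brightness, float contrast)
        {
            for (size_t i = 0; i < count; ++i)
            {
                float v = (static_cast<float>(pixels[i]) - 128.0f) * contrast + 128.0f + brightness;
                pixels[i] = static_cast<uint8_t>(std::max(0.0f, std::min(255.0f, v)));
            }
        }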

    Read the article

  • ImageMagick: text arc

    - by manu
    system(' convert -size 320x100 xc:lightblue -font Courier -pointsize 72 \ -fill navy -annotate +25+65 \'Ernakulam1\' \ -virtual-pixel transparent -distort arc 120 \ -bordercolor lightblue font_arcnew.jpg'); This code is not working. The example is supposed to arc the text; mainly, the part that is not working is -virtual-pixel transparent -distort arc 120.
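
    If it helps, one likely culprit is the literal backslash line-continuations left inside the single-line string: they are meaningless there and get passed straight to the shell. A hedged rewrite of the same call with them removed (this assumes convert is on the PATH and the installed ImageMagick build is recent enough to support -distort Arc):

        // Same command on one logical line; the annotation text stays single-quoted for the shell.
        system("convert -size 320x100 xc:lightblue -font Courier -pointsize 72 "
             . "-fill navy -annotate +25+65 'Ernakulam1' "
             . "-virtual-pixel transparent -distort Arc 120 "
             . "-bordercolor lightblue font_arcnew.jpg");

    If it still fails, running the command directly in a shell and reading its error output is the quickest way to see what ImageMagick objects to.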

    Read the article

  • GDI+: Set all pixels to given color

    - by Charles
    What is the best way to set the RGB components of every pixel in a System.Drawing.Bitmap to a single, solid color? If possible, I'd like to avoid manually looping through each pixel to do this. Note: I want to keep the same alpha component from the original bitmap. I only want to change the RGB values. I looked into using a ColorMatrix or ColorMap, but I couldn't find any way to set all pixels to a specific given color with either approach.
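
    For what it's worth, a sketch of one ColorMatrix-based attempt that, in principle, zeroes the incoming RGB and adds a constant colour while leaving alpha untouched; it may be exactly the kind of thing already ruled out, so treat it as an assumption rather than a confirmed answer:

        using System.Drawing;
        using System.Drawing.Imaging;

        static Bitmap FillRgbKeepAlpha(Bitmap source, Color color)
        {
            // Rows 0-2 zeroed: the original RGB contributes nothing to the output.
            // Row 3 keeps alpha as-is. Row 4 adds the constant target colour.
            var matrix = new ColorMatrix(new float[][]
            {
                new float[] { 0, 0, 0, 0, 0 },
                new float[] { 0, 0, 0, 0, 0 },
                new float[] { 0, 0, 0, 0, 0 },
                new float[] { 0, 0, 0, 1, 0 },
                new float[] { color.R / 255f, color.G / 255f, color.B / 255f, 0, 1 },
            });

            var result = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppArgb);
            using (var g = Graphics.FromImage(result))
            using (var attrs = new ImageAttributes())
            {
                attrs.SetColorMatrix(matrix);
                g.DrawImage(source,
                    new Rectangle(0, 0, source.Width, source.Height),
                    0, 0, source.Width, source.Height,
                    GraphicsUnit.Pixel, attrs);
            }
            return result;
        }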

    Read the article

  • How do I create a simple video effect filter for DirectShow or DMO?

    - by Ole Jak
    How do I create a simple video effect filter for DirectShow or DMO? What I need is simple: a tutorial or tutorials on how to create a simple filter (like a brightness/contrast adjustment filter, or any other per-pixel kind of filter) for filtering a DirectShow video stream, so I want to have a graph like "my web camera" - "my Photoshop-like filter" - "rendering (or saving to file)".

    Read the article

  • Cookie not getting tracked: why?

    - by user158721
    Hi, on one domain I set a cookie with: setcookie( "cookiename", "cookievalue", expires after two days, "/", "domain1.com" ); On another domain I embed that URL as a tracking pixel, and from there it is not able to read the cookie; the same URL is able to read the cookie when it is called directly. What might be the problem? Best Regards, Satish Kalepu
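
    For reference, a minimal sketch of the direct-visit case, with the "expires after two days" argument written out and a leading dot on the domain so subdomains can see the cookie too. The pixel-on-another-domain case is a separate issue: that makes it a third-party cookie, which browsers (notably IE without a P3P header, or users with strict privacy settings) may refuse to send at all.

        // Expire in two days; the leading dot makes the cookie visible to subdomains as well.
        setcookie("cookiename", "cookievalue", time() + 2 * 24 * 60 * 60, "/", ".domain1.com");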

    Read the article

  • jQuery: snapped scrolling - possible?

    - by Fuxi
    Hi all, I have a scrollable table with a fixed header. Would it be possible to have "snapped scrolling" on the scrollbar, meaning the table rows won't scroll pixel by pixel but instead snap according to the row height, for better viewing? Thanks.
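
    For illustration, a small jQuery sketch of one way to fake it: let the element scroll freely, then snap scrollTop to the nearest multiple of the row height once scrolling pauses ('#tableBody' and the 24px row height are made-up values):

        var rowHeight = 24;               // assumed fixed row height in pixels
        var $body = $('#tableBody');      // the scrollable element (hypothetical selector)
        var snapTimer;

        $body.on('scroll', function () {
            clearTimeout(snapTimer);
            snapTimer = setTimeout(function () {
                var top = $body.scrollTop();
                // animate to the closest row boundary once the user stops scrolling
                $body.animate({ scrollTop: Math.round(top / rowHeight) * rowHeight }, 100);
            }, 80);
        });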

    Read the article

  • Center image over background color taken from corner

    - by joebert
    I'm looking for a way to take, say, a 200x200 pixel image and center it over a background that's 500x500 pixels. The background should be the color of the top-left corner of the 200x200 pixel image. I get the -gravity and -fill flags, but I'm having trouble finding a way to grab that top-left corner's color to pass to the -fill flag.
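
    For illustration, a shell sketch of one way to feed that corner colour back into convert, using the %[pixel:...] format escape to read the pixel at (0,0) (fg.png and the output name are placeholders; this needs a reasonably recent ImageMagick):

        # Grab the top-left pixel's colour of the foreground image...
        corner=$(convert fg.png -format '%[pixel:p{0,0}]' info:)

        # ...then build a 500x500 canvas of that colour and composite the image centred on it.
        convert -size 500x500 xc:"$corner" fg.png -gravity center -composite out.png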

    Read the article

  • matplotlib equivalent for MATLAB's truesize()

    - by Lucas
    I am new to matplotlib and python and would like to display an image so that 1 pixel of the image is actually represented by 1 pixel in the figure. In MATLAB, this is achieved with the command truesize(). How can I do this in Python? I tried playing around with the imshow() arguments as well as set_dpi() and set_figwidth()/set_figheight(), but with no luck. Thanks.
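
    For illustration, a sketch of the usual workaround: size the figure from the image dimensions and the figure dpi, and let the axes fill the whole canvas. One image pixel then maps to one figure pixel, provided the dpi passed in matches what the backend actually uses (imshow_truesize is a made-up name):

        import matplotlib.pyplot as plt

        def imshow_truesize(img, dpi=80):
            """Display img so one image pixel occupies one figure pixel (roughly MATLAB's truesize)."""
            height, width = img.shape[:2]
            fig = plt.figure(figsize=(width / float(dpi), height / float(dpi)), dpi=dpi)
            ax = fig.add_axes([0, 0, 1, 1])          # axes cover the whole figure, no borders
            ax.set_axis_off()
            ax.imshow(img, interpolation='nearest')  # no resampling blur
            plt.show()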

    Read the article

  • Overlay an image over video using OpenGL ES shaders

    - by BlueVoodoo
    I am trying to understand the basic concepts of OpenGL. A week into it, I am still far from there. Once I am in glsl, I know what to do but I find getting there is the tricky bit. I am currently able to pass in video pixels which I manipulate and present. I have then been trying to add still image as an overlay. This is where I get lost. My end goal is to end up in the same fragment shader with pixel data from both my video and my still image. I imagine this means I need two textures and pass on two pixel buffers. I am currently passing the video pixels like this: glGenTextures(1, &textures[0]); //target, texture glBindTexture(GL_TEXTURE_2D, textures[0]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, buffer); Would I then repeat this process on textures[1] with the second buffer from the image? If so, do I then bind both GL_TEXTURE0 and GL_TEXTURE1? ...and would my shader look something like this? uniform sampler2D videoData; uniform sampler2D imageData; once I am in the shader? It seems no matter what combination I try, image and video always ends up being just video data in both these. Sorry for the many questions merged in here, just want to clear my many assumptions and move on. To clarify the question a bit, what do I need to do to add pixels from a still image in the process described? ("easy to understand" sample code or any types of hints would be appreciated).
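
    Roughly, yes: two textures on two units, two samplers. A hedged sketch of the binding side and a matching fragment shader, where 'program' and v_texCoord are assumed names. The common gotcha is that sampler uniforms default to unit 0 when never set, which would make both samplers read the video texture:

        /* Bind the video to unit 0 and the still image to unit 1, then tell each
           sampler uniform which unit it should read from. */
        glUseProgram(program);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, textures[0]);                    /* video */
        glUniform1i(glGetUniformLocation(program, "videoData"), 0);

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, textures[1]);                    /* still image */
        glUniform1i(glGetUniformLocation(program, "imageData"), 1);

        // Fragment shader: composite the overlay over the video using the image's alpha.
        precision mediump float;
        uniform sampler2D videoData;
        uniform sampler2D imageData;
        varying vec2 v_texCoord;

        void main() {
            vec4 video = texture2D(videoData, v_texCoord);
            vec4 image = texture2D(imageData, v_texCoord);
            gl_FragColor = mix(video, image, image.a);
        }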

    Read the article

  • Get all Vector2 Points Within Radius

    - by CheapDevotion
    I am looking for a formula that will give me all of the Vector2 Points within a certain radius given the center. Essentially what I am trying to do is change the color of each pixel in a 256 x 256 texture that is within a certain radius from a specific pixel (Using the Unity3d Game Engine). Programming Language doesn't really matter, as I can probably convert it to something I can use.
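
    For illustration, a C# sketch of the usual approach in Unity: walk the circle's bounding box and keep the pixel coordinates whose squared distance from the centre is within the radius (the class name and default 256x256 size are just placeholders):

        using System.Collections.Generic;
        using UnityEngine;

        public static class RadiusUtil
        {
            // Returns every integer pixel coordinate in the texture that lies within
            // 'radius' of 'center'. Comparing squared distances avoids a sqrt per pixel.
            public static List<Vector2> PointsInRadius(Vector2 center, float radius,
                                                       int texWidth = 256, int texHeight = 256)
            {
                var points = new List<Vector2>();
                float r2 = radius * radius;
                int xMin = Mathf.Max(0, Mathf.FloorToInt(center.x - radius));
                int xMax = Mathf.Min(texWidth - 1, Mathf.CeilToInt(center.x + radius));
                int yMin = Mathf.Max(0, Mathf.FloorToInt(center.y - radius));
                int yMax = Mathf.Min(texHeight - 1, Mathf.CeilToInt(center.y + radius));

                for (int y = yMin; y <= yMax; y++)
                    for (int x = xMin; x <= xMax; x++)
                    {
                        float dx = x - center.x;
                        float dy = y - center.y;
                        if (dx * dx + dy * dy <= r2)
                            points.Add(new Vector2(x, y));
                    }
                return points;
            }
        }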

    Read the article

  • Implementing fog of war in opengl es 2.0 game

    - by joxnas
    Hi game development community, this is my first question here! ;) I'm developing a tactics/strategy real time android game and I've been wondering for some time what's the best way to implement an efficient and somewhat nice looking fog of war to incorporate in it. My experience with OpenGL or Android is not vast by any means, but I think it is sufficient for what I'm asking here. So far I have thought in some solutions: Draw white circles to a dark background, corresponding to the units visibility, then render to a texture, and then drawing a quad with that texture with blend mode set to multiply. Will this approach be efficient? Will it take too much memory? (I don't know how to render to texture and then use the texture. Is it too messy?) Have a grid object with a vertex shader which has an array of uniforms having the coordinates of all units, and another array which has their visibility range. The number of units will very probably never be bigger then 100. The vertex shader needs to test for each considered vertex, if there is some unit which can see it. In order to do this it, will have to loop the array with the coordinates and do some calculations based on distance. The efficiency of this is inversely proportional to the looks of it. A more dense grid will result in a more beautiful fog of war... but will require a greater amount of vertexes to be checked. Is it possible to find a nice compromise or is this a bad solution from the start? Which solution is the best? Are there better alternatives? Which ones? Thank you for your time.
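
    For what it's worth, a GLSL ES sketch of the second option (all names invented): each unit's position and visibility range are packed into one vec3 per slot so a 100-unit array stays within typical ES 2.0 vertex-uniform limits, and the fragment shader would simply darken the terrain colour by the interpolated v_visibility:

        const int MAX_UNITS = 100;
        uniform vec3 u_units[MAX_UNITS];  // xy = unit position, z = visibility range (0.0 for unused slots)
        uniform mat4 u_mvp;
        attribute vec2 a_position;
        varying float v_visibility;

        void main() {
            float vis = 0.0;
            for (int i = 0; i < MAX_UNITS; i++) {
                vec3 unit = u_units[i];
                float d = distance(a_position, unit.xy);
                // fully lit inside most of the range, fading to dark towards its edge
                float unitVis = (unit.z > 0.0) ? 1.0 - smoothstep(unit.z * 0.8, unit.z, d) : 0.0;
                vis = max(vis, unitVis);
            }
            v_visibility = vis;
            gl_Position = u_mvp * vec4(a_position, 0.0, 1.0);
        }

    Whether 100 iterations per vertex is acceptable depends on how dense the grid is, which is exactly the trade-off described above.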

    Read the article

  • OpenGL ES 2 shaders for drawing buildings and roads like Google Maps does

    - by Pris
    I'm trying to create a shader that'll give me an effect similar to what buildings and roads look like on 3D Google Maps. You can see the effect interactively if you enable WebGL at maps.google.com, and I also found a couple of screenshots that illustrate what I'm trying to achieve: Thing I noticed: There's some kind of transparency thing going on with the roads/ground and the buildings, but not between the buildings themselves. It might be that they're rendering the ground and roads after the buildings with the right blend functions to achieve that effect. If you look closely, you'll see parts of the building profiles have an outline. The roads also have nice clean outlines. There are a lot of techniques for outlining things with shaders... but I'm curious to find out what might have been used in this case considering mobile hardware and a large number of entities with outlines (roads and buildings) I'm assuming that for the lighting, some sort of simple diffuse per-vertex shader is being used for the buildings though I could be wrong. I'm especially curious about the 'look' they achieved with buildings (clean, precise outlines/shading). It reminds me a little of what you'd see when designing stuff with CAD applications like SolidWorks: I'd appreciate any advice on achieving this kind of look with ES 2 shaders.

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights. // Shadow term (1 = no shadow) float shadow = 1; // [Light Space -> Shadow Map Space] // Transform the surface into light space and project // NB: Could be done in the vertex shader, but doing it here keeps the // "light shader" abstraction and doesn't limit the number of shadowed lights float4x4 LightViewProjection = mul(LightView, LightProjection); float4 surf_tex = mul(position, LightViewProjection); // Re-homogenize // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized) surf_tex.xyz /= surf_tex.w; // Rescale viewport to be [0,1] (texture coordinate system) float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = -surf_tex.y * 0.5f + 0.5f; // Half texel offset //shadow_tex += (0.5 / 512); // Scaled distance to light (instead of 'surf_tex.z') float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; //float rescaled_dist_to_light = surf_tex.z; // [Variance Shadow Map Depth Calculation] // No filtering float2 moments = tex2D(ShadowSampler, shadow_tex).xy; // Flip the moments values to bring them back to their original values moments.x = 1.0 - moments.x; moments.y = 1.0 - moments.y; // Compute variance float E_x2 = moments.y; float Ex_2 = moments.x * moments.x; float variance = E_x2 - Ex_2; variance = max(variance, Bias.y); // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1) // One-tailed inequality valid if float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x); // Compute probabilistic upper bound (mean distance) float m_d = moments.x - rescaled_dist_to_light; // Chebychev's inequality float p = variance / (variance + m_d * m_d); p = ReduceLightBleeding(p, Bias.z); // Adjust the light color based on the shadow attenuation shadow *= max(lit_factor, p); This is what I know for certain so far: The lighting is correct if I do not try and calculate the shadow term. (No shadows) The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights) With the current rescaled light distance (lightAttenuation.y is the far plane value): float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; The light is correct and the shadow appears to be zoomed in and misses the blurring: When I do not rescale the light and use the homogenized 'surf_tex': float rescaled_dist_to_light = surf_tex.z; the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows: // [Position] float4 position; // [Screen Position] position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component position.z = 1.0 - position.z; position.w = 1.0; // 1.0 = position.w / position.w // [World Position] position = mul(position, CameraViewProjectionInverse); // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close) position.xyz /= position.w; position.w = 1.0; Using the inverse matrix of the camera's view x projection matrix does work for lighting but maybe it is incorrect for shadow calculation? 
EDIT: Light calculations for shadow including 'dist_to_light' // Work out the light position and direction in world space float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43); // Direction might need to be negated float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33); // Unnormalized light vector float3 dir_to_light = light_position - position; // Direction from vertex float dist_to_light = length(dir_to_light); // Normalise 'toLight' vector for lighting calculations dir_to_light = normalize(dir_to_light); EDIT2: These are the calculations for the moments (depth) //============================================= //---[Vertex Shaders]-------------------------- //============================================= DepthVSOutput depth_VS( float4 Position : POSITION, uniform float4x4 shadow_view, uniform float4x4 shadow_view_projection) { DepthVSOutput output = (DepthVSOutput)0; // First transform position into world space float4 position_world = mul(Position, World); output.position_screen = mul(position_world, shadow_view_projection); output.light_vec = mul(position_world, shadow_view).xyz; return output; } //============================================= //---[Pixel Shaders]--------------------------- //============================================= DepthPSOutput depth_PS(DepthVSOutput input) { DepthPSOutput output = (DepthPSOutput)0; // Work out the depth of this fragment from the light, normalized to [0, 1] float2 depth; depth.x = length(input.light_vec) / FarPlane; depth.y = depth.x * depth.x; // Flip depth values to avoid floating point inaccuracies depth.x = 1.0f - depth.x; depth.y = 1.0f - depth.y; output.depth = depth.xyxy; return output; } EDIT 3: I have tried the folloiwng: float4 pp; pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component pp.z = 1.0 - pp.z; pp.w = 1.0; // 1.0 = position.w / position.w // Determine the depth of the pixel with respect to the light float4x4 LightViewProjection = mul(LightView, LightProjection); float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection); float4 vPositionLightCS = mul(pp, matViewToLightViewProj); float fLightDepth = vPositionLightCS.z / vPositionLightCS.w; // Transform from light space to shadow map texture space. float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f); vShadowTexCoord.y = 1.0f - vShadowTexCoord.y; // Offset the coordinate by half a texel so we sample it correctly vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix: output.position_screen = mul(position_world, shadow_view_projection); //output.light_vec = mul(position_world, shadow_view); output.light_vec = output.position_screen; depth.x = input.light_vec.z / input.light_vec.w; This gives a shadow that has lots surface acne due to horrible floating point precision errors. Everything is lit correctly though. EDIT 4: Found an OpenGL based tutorial here I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler /// <summary> /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region. 
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0. /// <summary> const float4x4 ScaleMatrix = float4x4 ( 0.5, 0.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct. Is this really the correct way to scale? float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = surf_tex.y * -0.5f + 0.5f; The depth calculations are exactly the same as the source code yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

    Read the article

  • Use depth bias for shadows in deferred shading

    - by cubrman
    We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera and the second one stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the shadowed areas ready for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve this is to use slope-scale depth bias and depth offsets; however, as we are doing things in a deferred way, we cannot employ this algorithm. Any attempt to set a depth bias when capturing the light's view depth produced no results, or unsatisfying ones. So here is my question: the MSDN article has a convoluted explanation of the slope-scale: bias = (m × SlopeScaleDepthBias) + DepthBias, where m is the maximum depth slope of the triangle being rendered, defined as: m = max( abs(delta z / delta x), abs(delta z / delta y) ). Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?
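
    For illustration, a hedged HLSL sketch of one common stand-in for deferred renderers: since the caster's triangle slope is not available at lighting time, approximate the slope from the receiver's G-buffer normal and the light direction, and apply the bias when comparing depths. All of N, L, lightDepth, shadowMapDepth and the three bias constants are assumed names:

        // Slope-scaled bias applied at comparison time in the deferred lighting pass.
        float nDotL  = saturate(dot(N, L));
        float slope  = tan(acos(nDotL));    // surfaces at grazing angles to the light get more bias
        float bias   = clamp(DepthBias + SlopeScaleDepthBias * slope, 0.0, MaxDepthBias);
        float shadow = (lightDepth - bias <= shadowMapDepth) ? 1.0 : 0.0;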

    Read the article

  • Writing the correct value in the depth buffer when using ray-casting

    - by hidayat
    I am doing a ray-casting in a 3d texture until I hit a correct value. I am doing the ray-casting in a cube and the cube corners are already in world coordinates so I don't have to multiply the vertices with the modelviewmatrix to get the correct position. Vertex shader world_coordinate_ = gl_Vertex; Fragment shader vec3 direction = (world_coordinate_.xyz - cameraPosition_); direction = normalize(direction); for (float k = 0.0; k < steps; k += 1.0) { .... pos += direction*delta_step; float thisLum = texture3D(texture3_, pos).r; if(thisLum > surface_) ... } Everything works as expected, what I now want is to sample the correct value to the depth buffer. The value that is now written to the depth buffer is the cube coordinate. But I want the value of pos in the 3d texture to be written. So lets say the cube is placed 10 away from origin in -z and the size is 10*10*10. My solution that does not work correctly is this: pos *= 10; pos.z += 10; pos.z *= -1; vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0); float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5; gl_FragDepth = depth;
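
    For reference, one thing worth checking: the depth written out has to go through the same view and projection transforms the rasterizer would use, not the projection alone. A hedged sketch, assuming the modelview matrix holds the camera view (the cube's vertices are already in world space) and the default depth range:

        // worldHitPos is the ray hit mapped from texture coordinates back into world space
        // (the same mapping the *10 / +10 / negate code above is attempting).
        vec4 clipPos = gl_ModelViewProjectionMatrix * vec4(worldHitPos, 1.0);
        float ndcDepth = clipPos.z / clipPos.w;      // normalized device coordinates, [-1, 1]
        gl_FragDepth = (ndcDepth + 1.0) * 0.5;       // window depth, assuming glDepthRange(0, 1)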

    Read the article

  • Cocos2d: Tongue effect like in Munch Time

    - by Joey Green
    I want to do a tongue effect for my character like the one in Munch Time (shown in the pic). The player does some action and his tongue attaches to the nearest platform. I'm thinking this is simply a matter of getting the distance to the platform and keeping the player at that distance as he moves back and forth, giving him the swinging effect. For the drawing, I want the same effect where the tongue sprite is skinniest in the middle of the distance between the character and the platform. I know how to do this in a shader (I'm using cocos2d v2, btw), but I'm wondering if there is some built-in functionality that would allow me to do this. First, is this the right approach, using distance? Second, is there an easy way to do the tongue sprite effect without a shader? Third, I want to have the player spring up at will in the direction of the platform. I'm using Box2D. Would there be a way to do this using forces, or would it be easier to write my own code?

    Read the article

  • BlitzMax - generating 2D neon glowing line effect to png file

    - by zanlok
    Originally asked on StackOverflow, but it became tumbleweed. I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or laserbeam. Doesn't have to be realtime, but just to TImage objects and then maybe saved to PNG for later use in animation. I'm happy to use 3D features, but it will be for use in a 2D game. Since it will be on black/space background, my strategy is to draw a series of white blurred lines with color and high transparency, then eventually central lines less blurred and more white. What I want to draw is actually bezier curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser/neon effect because it comes out looking very segmented. So, I think it may be better to use a blur effect/shader on what does render well, which is a 1-pixel bezier curve. The problems I've been having are: Applying a shader to just a certain area of the screen where lines are drawn. If there's a way to do draw lines to a texture and then blur that texture and save the png, that would be great to hear about. There's got to be a way to do this, but I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated. Using just 2D calls could be advantageous, simpler to understand and re-use. It would be very nice to know how to save a PNG that preserves the transparency/alpha stuff. p.s. I've reviewed this post (and many many others on the Blitz site), have samples working, and even developed my own 5x5 frag shaders. But, it's 3D and a scene-wide thing that doesn't seem to convert to 2D or just a certain area very well. I'd rather understand how to apply shading to a 2D scene, especially using the specifics of BlitzMax.

    Read the article

  • How best to handle ID3D11InputLayout in rendering code?

    - by JohnB
    I'm looking for an elegant way to handle input layouts in my directx11 code. The problem I have that I have an Effect class and a Element class. The effect class encapsulates shaders and similar settings, and the Element class contains something that can be drawn (3d model, lanscape etc) My drawing code sets the device shaders etc using the effect specified and then calls the draw function of the Element to draw the actual geometry contained in it. The problem is this - I need to create an D3D11InputLayout somewhere. This really belongs in the Element class as it's no business of the rest of the system how that element chooses to represent it's vertex layout. But in order to create the object the API requires the vertex shader bytecode for the vertex shader that will be used to draw the object. In directx9 it was easy, there was no dependency so my element could contain it's own input layout structures and set them without the effect being involved. But the Element shouldn't really have to know anything about the effect that it's being drawn with, that's just render settings, and the Element is there to provide geometry. So I don't really know where to store and how to select the InputLayout for each draw call. I mean, I've made something work but it seems very ugly. This makes me thing I've either missed something obvious, or else my design of having all the render settings in an Effect, the Geometry in an Element, and a 3rd party that draws it all is just flawed. Just wondering how anyone else handles their input layouts in directx11 in a elegant way?

    Read the article

  • Billboarding restricted to an axis (cylindrical)

    - by user8709
    I have succesfully created a GLSL shader for a billboarding effect. I want to tweak this to restrict the billboarding to an arbitrary axis, i.e. a billboarded quad only rotates itself about the y-axis. I use the y-axis as an example, but essentially I would like this to be an arbitrary axis. Can anyone show me how to modify my existing shader below, or if I need to start from scratch, point me towards some resources that could be helpful? precision mediump float; uniform mat4 u_modelViewProjectionMat; uniform mat4 u_modelMat; uniform mat4 u_viewTransposeMat; uniform vec3 u_axis; // <------------ !!! the arbitrary axis to restrict rotation around attribute vec3 a_position0; attribute vec2 a_texCoord0; varying vec2 v_texCoord0; void main() { vec3 pos = (a_position0.x * u_viewTransposeMat[0] + a_position0.y * u_viewTransposeMat[1]).xyz; vec4 position = vec4(pos, 1.0); v_texCoord0 = a_texCoord0; gl_Position = u_modelViewProjectionMat * position; }
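
    A hedged sketch of the cylindrical variant: keep u_axis as the quad's vertical basis and build the horizontal basis from the camera direction with its along-axis component removed. u_cameraPosLocal (the camera position expressed in the quad's model space) is an extra uniform not in the original shader, and u_axis is assumed to be normalized:

        precision mediump float;
        uniform mat4 u_modelViewProjectionMat;
        uniform vec3 u_axis;            // normalized axis to rotate around, e.g. (0.0, 1.0, 0.0)
        uniform vec3 u_cameraPosLocal;  // camera position in the quad's model space (assumed uniform)
        attribute vec3 a_position0;
        attribute vec2 a_texCoord0;
        varying vec2 v_texCoord0;

        void main() {
            // Drop the camera direction's component along the axis, so the quad
            // can only turn around u_axis (degenerate if the camera sits on the axis).
            vec3 look  = normalize(u_cameraPosLocal - u_axis * dot(u_cameraPosLocal, u_axis));
            vec3 right = normalize(cross(u_axis, look));
            vec3 pos   = a_position0.x * right + a_position0.y * u_axis;
            v_texCoord0 = a_texCoord0;
            gl_Position = u_modelViewProjectionMat * vec4(pos, 1.0);
        }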

    Read the article

  • Stage3D: Camera pans the whole screen

    - by Thomas Versteeg
    I am trying to create a 2D Stage3D game where you can move the camera around the level in an RTS style. I thought about using Orthographic Matrix3D functions for this but when I try to scroll the whole "stage" also scrolls. This is the Camera code: public function Camera2D(width:int, height:int, zoom:Number = 1) { resize(width, height); _zoom = zoom; } public function resize(width:Number, height:Number):void { _width = width; _height = height; _projectionMatrix = makeMatrix(0, width, 0, height); _recalculate = true; } protected function makeMatrix(left:Number, right:Number, top:Number, bottom:Number, zNear:Number = 0, zFar:Number = 1):Matrix3D { return new Matrix3D(Vector.<Number>([ 2 / (right - left), 0, 0, 0, 0, 2 / (top - bottom), 0, 0, 0, 0, 1 / (zFar - zNear), 0, 0, 0, zNear / (zNear - zFar), 1 ])); } public function get viewMatrix():Matrix3D { if (_recalculate) { _recalculate = false; _viewMatrix.identity(); _viewMatrix.appendTranslation( -_width / 2 - _x, -_height / 2 - y, 0); _viewMatrix.appendScale(_zoom, _zoom, 1); _renderMatrix.identity(); _renderMatrix.append(_viewMatrix); _renderMatrix.append(_projectionMatrix); } return _renderMatrix; } And the camera is send directly to the GPU with: c3d.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, cameraMatrix, true); And these are the shaders: ------Vertex Shader------ m44 op, va0, vc0 mov v0, va1.xy mov v0.z, va0.z ------Fragment Shader------ tex ft0, v0, fs0 <2d,linear,nomip> mov oc, ft1 Here is a example and here are two screenshots to show what I mean: How do I only let the inside of the stage3D scroll and not the whole stage?

    Read the article

  • Tessellating to a curve?

    - by Avi
    I'm creating a game engine, and I'm trying to define a 3D model format I want to use. I haven't come across a format that quite does what I want. My game engine assumes a shader model 5+ environment. By the time I'm finished with it, that won't be a very unreasonable requirement. Because it assumes such a modern environment, I'm going to try and exploit tessellation. The most popular way, it seems, to procedurally increase geometry through tessellation is to tessellate to a height map. This works for a lot of things, but has limitations in that height maps still use up VRAM and also only have finite scalability. So I want to be able to use curves to define what a mesh should tessellate to. The thing is, I have no idea what definition of curves I should use, how I should store it, and how I should tessellate to it. Do I use NURBS curves? Bezier? Hermite? And once I figure that out, is there an algorithm to determine how the tessellation shader should produce and move vertices to match the curve as closely as possible? Is the infinite scalability and lower memory usage when compared to height maps worth the added computational complexity? I'm sorry I'm kind if ignorant as to these matters. I just don't know where to start.

    Read the article

  • Is there a way to display navmesh agent path in Unity?

    - by Antoine Guillien
    I'm currently making a prototype for a game I plan to develop. As far as I did, I managed to set up the navigation mesh and my navmeshagents. I would like to display the path they are following when setDestination() is fired. I did some researches but didn't find anything about it. EDIT 1 : So I instantiate an empty object with a LineRenderer and I have a line bewteen my agent and the destination. Still I've not all the points when the path has to avoid an obstacle. Furthermore, I wonder if the agent.path does reflect the real path that the agent take as I noticed that it actually follow a "smoothier" path. Here is the code so far : GameObject container = new GameObject(); container.transform.parent = agent.gameObject.transform; LineRenderer ligne = container.AddComponent<LineRenderer>(); ligne.SetColors(Color.white,Color.white); ligne.SetWidth(0.1f,0.1f); //Get def material ligne.gameObject.renderer.material.color = Color.white; ligne.gameObject.renderer.material.shader = Shader.Find("Sprites/Default"); ligne.gameObject.AddComponent<LineScript>(); ligne.SetVertexCount(agent.path.corners.Length+1); int i = 0; foreach(Vector3 v in p.corners) { ligne.SetPosition(i,v); //Debug.Log("position agent"+g.transform.position); //Debug.Log("position corner = "+v); i++; } ligne.SetPosition(p.corners.Length,agent.destination); ligne.gameObject.tag = "ligne"; So How can I get the real coordinates my agent is going to walk throught ?

    Read the article

  • Why does the order of uniforms get changed by the compiler?

    - by Aybe
    I have the following shader, everything works fine when setting the value of one of the matrices but I've discovered that getting a value back is incorrect for View and Projection, they are in reverse order. #version 430 precision highp float; layout (location = 0) uniform mat4 Model; layout (location = 1) uniform mat4 View; layout (location = 2) uniform mat4 Projection; layout (location = 0) in vec3 in_position; layout (location = 1) in vec4 in_color; out vec4 out_color; void main(void) { gl_Position = Projection * View * Model * vec4(in_position, 1.0); out_color = in_color; } When querying their location they are effectively reversed, I did a small test by renaming View to Piew which puts it before Projection if sorted alphabetically and the order is correct. Now if I do remove layout (location = ...) from the uniforms, the problem disappears !? I am starting to think that this is a driver bug as explained in the wiki. Do you know why the order of the uniforms is changed whenever the shader is compiled ? (using an AMD HD7850)

    Read the article

  • Creating a retro-style palette swapping effect in OpenGL

    - by Zack The Human
    I'm working on a Megaman-like game where I need to change the color of certain pixels at runtime. For reference: in Megaman when you change your selected weapon then main character's palette changes to reflect the selected weapon. Not all of the sprite's colors change, only certain ones do. This kind of effect was common and quite easy to do on the NES since the programmer had access to the palette and the logical mapping between pixels and palette indices. On modern hardware, though, this is a bit more challenging because the concept of palettes is not the same. All of my textures are 32-bit and do not use palettes. There are two ways I know of to achieve the effect I want, but I'm curious if there are better ways to achieve this effect easily. The two options I know of are: Use a shader and write some GLSL to perform the "palette swapping" behavior. If shaders are not available (say, because the graphics card doesn't support them) then it is possible to clone the "original" textures and generate different versions with the color changes pre-applied. Ideally I would like to use a shader since it seems straightforward and requires little additional work opposed to the duplicated-texture method. I worry that duplicating textures just to change a color in them is wasting VRAM -- should I not worry about that?
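
    For illustration, a sketch of what the shader route usually looks like: the sprite is authored so its red channel stores a palette index, and a small Nx1 lookup texture (sampled with nearest filtering) supplies the actual colours; swapping weapons then just means binding a different palette texture. All names here are made up:

        uniform sampler2D u_sprite;    // indexed sprite: red channel = palette index, alpha kept as-is
        uniform sampler2D u_palette;   // Nx1 palette lookup texture, nearest-filtered
        varying vec2 v_texCoord;

        void main() {
            vec4 indexed = texture2D(u_sprite, v_texCoord);
            // read the palette entry addressed by the stored index (0..1 across the palette width)
            vec4 color = texture2D(u_palette, vec2(indexed.r, 0.5));
            gl_FragColor = vec4(color.rgb, indexed.a);
        }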

    Read the article

  • HLSL - Creating Shadows in 2D

    - by richard
    The way that I create shadows is by the following technique: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/ But I have questions about the HLSL. The way that I currently do it is: I have a black and white image, where black means 'object' and white means 'nothing'. I then distort the image like in the tutorial. I do this with a pixel shader, but instead of rendering to the screen, I render to a texture and bring it back to my application. I then take this, create the shadows, and then send it back to the graphics card to undo the distortion after the shadow has been added; this comes back and I have a stencil of the shadow. I can put this on top of the original image and send them back to the graphics card, which then puts them on the screen. To me this is a lot of back and forth. Is there a way I can avoid this? The problem that I am having is that I need to basically go through all positions in the texture 3 times, and use the new texture every time instead of the original one. I tried to read up on Passes, but I don't think that I am heading in the right direction there. Help?

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >