Search Results

Search found 19855 results on 795 pages for 'game console'.

Page 344 of 795

  • Need a bounding box for CCSprite that includes all children/subchildren

    - by prototypical
    I have a CCSprite that has CCSprite children, and those CCSprite children have CCSprite children. The contentSize property doesn't seem to include all children/subchildren, and seems to only work for the base node. I could write a recursive method to traverse a CCSprite for all children/subchildren and calculate a proper bounding box, but am curious as to whether I am missing something and it's possible to get that information without doing so. I'll be a little surprised if such a method doesn't exist, but I can't seem to find it.
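
    A recursive union over the node hierarchy is the usual answer if no built-in exists. The sketch below is plain C++ with hypothetical stand-ins (worldBoundingBox(), children) for whatever the node class actually exposes; in cocos2d you would first convert each child's boundingBox into a common space before taking the union.

        #include <algorithm>
        #include <vector>

        struct Rect { float minX, minY, maxX, maxY; };

        struct Node {
            Rect worldBox;                           // this node's own box, already in a common space
            std::vector<Node*> children;             // sub-sprites
            Rect worldBoundingBox() const { return worldBox; }
        };

        Rect unionRect(const Rect& a, const Rect& b) {
            return { std::min(a.minX, b.minX), std::min(a.minY, b.minY),
                     std::max(a.maxX, b.maxX), std::max(a.maxY, b.maxY) };
        }

        // Box enclosing the node and every descendant.
        Rect fullBoundingBox(const Node& node) {
            Rect box = node.worldBoundingBox();
            for (const Node* child : node.children)
                box = unionRect(box, fullBoundingBox(*child));
            return box;
        }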

    Read the article

  • Ray Intersecting Plane Formula in C++/DirectX

    - by user4585
    I'm developing a picking system that will use rays that intersect volumes and I'm having trouble with ray intersection versus a plane. I was able to figure out spheres fairly easily, but planes are giving me trouble. I've tried to understand various sources and get hung up on some of the variables used within their explanations. Here is a snippet of my code: bool Picking() { D3DXVECTOR3 vec; D3DXVECTOR3 vRayDir; D3DXVECTOR3 vRayOrig; D3DXVECTOR3 vROO, vROD; // vect ray obj orig, vec ray obj dir D3DXMATRIX m; D3DXMATRIX mInverse; D3DXMATRIX worldMat; // Obtain project matrix D3DXMATRIX pMatProj = CDirectXRenderer::GetInstance()->Director()->Proj(); // Obtain mouse position D3DXVECTOR3 pos = CGUIManager::GetInstance()->GUIObjectList.front().pos; // Get window width & height float w = CDirectXRenderer::GetInstance()->GetWidth(); float h = CDirectXRenderer::GetInstance()->GetHeight(); // Transform vector from screen to 3D space vec.x = (((2.0f * pos.x) / w) - 1.0f) / pMatProj._11; vec.y = -(((2.0f * pos.y) / h) - 1.0f) / pMatProj._22; vec.z = 1.0f; // Create a view inverse matrix D3DXMatrixInverse(&m, NULL, &CDirectXRenderer::GetInstance()->Director()->View()); // Determine our ray's direction vRayDir.x = vec.x * m._11 + vec.y * m._21 + vec.z * m._31; vRayDir.y = vec.x * m._12 + vec.y * m._22 + vec.z * m._32; vRayDir.z = vec.x * m._13 + vec.y * m._23 + vec.z * m._33; // Determine our ray's origin vRayOrig.x = m._41; vRayOrig.y = m._42; vRayOrig.z = m._43; D3DXMatrixIdentity(&worldMat); //worldMat = aliveActors[0]->GetTrans(); D3DXMatrixInverse(&mInverse, NULL, &worldMat); D3DXVec3TransformCoord(&vROO, &vRayOrig, &mInverse); D3DXVec3TransformNormal(&vROD, &vRayDir, &mInverse); D3DXVec3Normalize(&vROD, &vROD); When using this code I'm able to detect a ray intersection via a sphere, but I have questions when determining an intersection via a plane. First off should I be using my vRayOrig & vRayDir variables for the plane intersection tests or should I be using the new vectors that are created for use in object space? When looking at a site like this for example: http://www.tar.hu/gamealgorithms/ch22lev1sec2.html I'm curious as to what D is in the equation AX + BY + CZ + D = 0 and how does it factor in to determining a plane intersection? Any help will be appreciated, thanks.
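
    For the plane question: in Ax + By + Cz + D = 0 the normal is N = (A, B, C), and D = -dot(N, P0) for any point P0 on the plane; when N is unit length, -D is the plane's signed distance from the origin. Whether to use vRayOrig/vRayDir or vROO/vROD depends only on which space the plane is defined in: a world-space plane is tested against the world-space ray, an object-space plane against the transformed one. A minimal sketch of the test in plain C++ (the same math works with D3DXVECTOR3 and D3DXVec3Dot):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Plane: dot(n, p) + d = 0, with d = -dot(n, p0) for a known point p0 on the plane.
        // Returns true and writes t if origin + t*dir hits the plane in front of the origin.
        bool rayPlane(const Vec3& origin, const Vec3& dir,
                      const Vec3& n, float d, float& t)
        {
            float denom = dot(n, dir);
            if (std::fabs(denom) < 1e-6f)    // ray (nearly) parallel to the plane
                return false;
            t = -(dot(n, origin) + d) / denom;
            return t >= 0.0f;
        }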

    Read the article

  • Normal map lighting bug in bottom right quadrant

    - by Ryan Capote
    I am currently working on getting normal maps working in my project, and have run into a problem with lighting. As you can see, the normals in the bottom right quadrant of the lighting isn't calculating the correct direction to the light or something. Best seen by the red light If I use flat normals (z normal = 1.0), it seems to be working fine: normals for the tile sheet: Shader: #version 330 uniform sampler2D uDiffuseTexture; uniform sampler2D uNormalsTexture; uniform sampler2D uSpecularTexture; uniform sampler2D uEmissiveTexture; uniform sampler2D uWorldNormals; uniform sampler2D uShadowMap; uniform vec4 uLightColor; uniform float uConstAtten; uniform float uLinearAtten; uniform float uQuadradicAtten; uniform float uColorIntensity; in vec2 TexCoords; in vec2 GeomSize; out vec4 FragColor; float sample(vec2 coord, float r) { return step(r, texture2D(uShadowMap, coord).r); } float occluded() { float PI = 3.14; vec2 normalized = TexCoords.st * 2.0 - 1.0; float theta = atan(normalized.y, normalized.x); float r = length(normalized); float coord = (theta + PI) / (2.0 * PI); vec2 tc = vec2(coord, 0.0); float center = sample(tc, r); float sum = 0.0; float blur = (1.0 / GeomSize.x) * smoothstep(0.0, 1.0, r); sum += sample(vec2(tc.x - 4.0*blur, tc.y), r) * 0.05; sum += sample(vec2(tc.x - 3.0*blur, tc.y), r) * 0.09; sum += sample(vec2(tc.x - 2.0*blur, tc.y), r) * 0.12; sum += sample(vec2(tc.x - 1.0*blur, tc.y), r) * 0.15; sum += center * 0.16; sum += sample(vec2(tc.x + 1.0*blur, tc.y), r) * 0.15; sum += sample(vec2(tc.x + 2.0*blur, tc.y), r) * 0.12; sum += sample(vec2(tc.x + 3.0*blur, tc.y), r) * 0.09; sum += sample(vec2(tc.x + 4.0*blur, tc.y), r) * 0.05; return sum * smoothstep(1.0, 0.0, r); } float calcAttenuation(float distance) { float linearAtten = uLinearAtten * distance; float quadAtten = uQuadradicAtten * distance * distance; float attenuation = 1.0 / (uConstAtten + linearAtten + quadAtten); return attenuation; } vec3 calcFragPosition(void) { return vec3(TexCoords*GeomSize, 0.0); } vec3 calcLightPosition(void) { return vec3(GeomSize/2.0, 0.0); } float calcDistance(vec3 fragPos, vec3 lightPos) { return length(fragPos - lightPos); } vec3 calcLightDirection(vec3 fragPos, vec3 lightPos) { return normalize(lightPos - fragPos); } vec4 calcFinalLight(vec2 worldUV, vec3 lightDir, float attenuation) { float diffuseFactor = dot(normalize(texture2D(uNormalsTexture, worldUV).rgb), lightDir); vec4 diffuse = vec4(0.0); vec4 lightColor = uLightColor * uColorIntensity; if(diffuseFactor > 0.0) { diffuse = vec4(texture2D(uDiffuseTexture, worldUV.xy).rgb, 1.0); diffuse *= diffuseFactor; lightColor *= diffuseFactor; } else { discard; } vec4 final = (diffuse + lightColor); if(texture2D(uWorldNormals, worldUV).g > 0.0) { return final * attenuation; } else { return final * occluded(); } } void main(void) { vec3 fragPosition = calcFragPosition(); vec3 lightPosition = calcLightPosition(); float distance = calcDistance(fragPosition, lightPosition); float attenuation = calcAttenuation(distance); vec2 worldPos = gl_FragCoord.xy / vec2(1024, 768); vec3 lightDir = calcLightDirection(fragPosition, lightPosition); lightDir = (lightDir*0.5)+0.5; float atten = calcAttenuation(distance); vec4 emissive = texture2D(uEmissiveTexture, worldPos); FragColor = calcFinalLight(worldPos, lightDir, atten) + emissive; }
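
    Two things stand out in the shader as worth checking: the value sampled from uNormalsTexture is used without being remapped from its [0,1] storage range back to [-1,1], and lightDir is repacked with *0.5+0.5 before the dot product, which removes the negative components that the bottom/right quadrants need. A small illustration of the usual convention in plain C++ (the same expressions translate directly to GLSL):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // [0,1] texture value -> [-1,1] direction
        Vec3 decodeNormal(Vec3 n) {
            return { n.x * 2.0f - 1.0f, n.y * 2.0f - 1.0f, n.z * 2.0f - 1.0f };
        }

        float diffuseFactor(Vec3 sampledNormal, Vec3 lightDir) {
            Vec3 n = decodeNormal(sampledNormal);
            // lightDir is the raw normalized direction; packing it with dir*0.5+0.5
            // before the dot would skew every direction with negative components.
            return std::fmax(0.0f, dot(n, lightDir));
        }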

    Read the article

  • Biome Transition in a Grid & Borderless World

    - by API-Beast
    I have a universe: a list of "Systems", each with their own center, type and radius. A small part of such a universe could look like this: Systems: Can be very close to a different system, e.g. overlap Can be inside another, much bigger system Can be very far away from any other systems Spawn system specific entities and particles inside the system radius Have some properties like background color So far so good. However, the player can fly around freely, inside and outside of systems, in real time. How do I interpolate and determine things like the background color now, depending on camera position? E.g. if you are halfway between a green and a red system you should see a background halfway between red and green, or if you are inside a lilac system near the center and at the border of a green system you should get a mostly lilac background etc.
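
    One possible scheme, sketched below in C++ under the assumption that each system contributes a weight falling off from 1 at its centre to 0 at its radius: the background is the weight-normalized mix of the overlapping systems' colours, fading to a default deep-space colour when the camera is far from everything. The falloff curve (linear here) is a free choice; a smoothstep gives softer transitions.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Color  { float r, g, b; };
        struct System { float x, y, radius; Color background; };

        Color backgroundAt(float camX, float camY,
                           const std::vector<System>& systems, Color deepSpace)
        {
            float totalW = 0.0f;
            Color sum = { 0.0f, 0.0f, 0.0f };
            for (const System& s : systems) {
                float dx = camX - s.x, dy = camY - s.y;
                float dist = std::sqrt(dx * dx + dy * dy);
                float w = std::max(0.0f, 1.0f - dist / s.radius);   // linear falloff
                sum.r += s.background.r * w;
                sum.g += s.background.g * w;
                sum.b += s.background.b * w;
                totalW += w;
            }
            if (totalW <= 0.0f) return deepSpace;                   // far from everything
            float blend = std::min(totalW, 1.0f);                   // fade in from deep space
            return { sum.r / totalW * blend + deepSpace.r * (1.0f - blend),
                     sum.g / totalW * blend + deepSpace.g * (1.0f - blend),
                     sum.b / totalW * blend + deepSpace.b * (1.0f - blend) };
        }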

    Read the article

  • List of Open Source Java Games for Android

    - by BluFire
    I'm wondering if there are any more open-source games than the ones you can plainly see when you search for a list of open-source games for Android on Google. For example, is there a good website that has compiled open-source games? I don't want an answer of "go google it" or "en.wikipedia.org/wiki/List_of_open_source_Android_applications"; it gets really annoying on posts when people just give lazy answers.

    Read the article

  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly using a 4x4 matrix. This is working for the vertex positions, but it is not working for the normals (and probably not the tangents either). Here is what I have: // Transform vertex positions - Works like a charm! vertices = mesh.vertices; for (int i = 0; i < vertices.Length; ++i) vertices[i] = transform.MultiplyPoint(vertices[i]); // Does not work, lighting is messed up on mesh normals = mesh.normals; for (int i = 0; i < normals.Length; ++i) normals[i] = transform.MultiplyVector(normals[i]); Note: The input matrix converts from local to world space and is needed to combine multiple meshes together.
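
    Normals are direction vectors, so they generally need the inverse transpose of the upper-left 3x3 of the transform (for pure rotation plus uniform scale, the 3x3 itself followed by renormalisation is enough); MultiplyVector alone leaves any scale baked into the normal, which shows up as broken lighting. A sketch of the idea in plain C++ (in Unity the equivalent would be something like transform.inverse.transpose.MultiplyVector(n), then normalising):

        #include <cmath>

        struct Vec3 { float x, y, z; };
        struct Mat3 { float m[3][3]; };   // upper-left 3x3 of the 4x4 transform

        Vec3 mul(const Mat3& m, const Vec3& v) {
            return { m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z,
                     m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z,
                     m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z };
        }

        Vec3 normalize(Vec3 v) {
            float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // normalMatrix should be the inverse transpose of the model's 3x3 part
        // (identical to the 3x3 itself when there is no non-uniform scale).
        Vec3 transformNormal(const Mat3& normalMatrix, const Vec3& n) {
            return normalize(mul(normalMatrix, n));
        }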

    Read the article

  • HLSL: Pack 4 values into 32 bit float.

    - by TheBigO
    I can't find any useful information on packing 4 values into a 32 bit float in HLSL. Ideally, what I want to be able to do in HLSL is: float4 values = ... // Some values where each component is between 0 and 1. float packedValues = pack32R(values); float4 values2 = unpack32R(packedValues); I realize that there will be precision limitations, and performance tradeoffs between different precisions in different methods. I'm just wondering what ideas are out there.
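
    One common scheme is to quantise each of the four values to 6 bits and stack them positionally: 4 x 6 = 24 bits still fits exactly in a float's mantissa, so the round trip is lossless at 64 levels per channel. A sketch in plain C++ (floor and fmod translate directly to HLSL):

        #include <cmath>

        // Inputs in [0,1]; each is quantised to 6 bits (0..63) and stacked positionally.
        float pack4(float a, float b, float c, float d)
        {
            float qa = std::floor(a * 63.0f);
            float qb = std::floor(b * 63.0f);
            float qc = std::floor(c * 63.0f);
            float qd = std::floor(d * 63.0f);
            return qa + qb * 64.0f + qc * 4096.0f + qd * 262144.0f;   // max is 2^24 - 1
        }

        void unpack4(float packed, float& a, float& b, float& c, float& d)
        {
            a = std::fmod(packed, 64.0f) / 63.0f;
            b = std::fmod(std::floor(packed / 64.0f),   64.0f) / 63.0f;
            c = std::fmod(std::floor(packed / 4096.0f), 64.0f) / 63.0f;
            d =           std::floor(packed / 262144.0f)        / 63.0f;
        }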

    Read the article

  • IDirect3DDevice9Ex and D3DPOOL_MANAGED?

    - by bluescrn
    So I wanted to switch to IDirect3DDevice9Ex, purely for the SetFrameLatency function, as fullscreen vsynced D3D seemed to produce noticeable input lag. But then it tells me 'ha ha ha! now you can't use D3DPOOL_MANAGED!': Direct3D9: (ERROR) :D3DPOOL_MANAGED is not valid with IDirect3DDevice9Ex Is this really as unpleasant as it looks (when you're relying quite heavily on managed resources) - or is there a simple solution? If it really does mean manual management of everything (reloading all static textures, VBs, and IBs on a device reset), is it worth the hassle? Will IDirect3DDevice9Ex bring enough benefit to make it worth writing a new resource manager? Starting to think I must be doing something wrong, due to this: Direct3D9: (ERROR) :Lock is not supported for textures allocated with POOL_DEFAULT unless they are marked D3DUSAGE_DYNAMIC. So if I put my (static) textures in POOL_DEFAULT, they need flagging as D3DUSAGE_DYNAMIC, just because I lock them once to load the data in?

    Read the article

  • dynamic 2d texture creation in unity from script

    - by gman
    I'm coming from HTML5 and I'm used to having the 2D Canvas API I can use to generate textures. Is there anything similar in Unity3D? For example, let's say at runtime I want to render a circle, put 3 initials in the middle and then take the result and put that in a texture. In HTML5 I'd do this var initials = "GAT"; var textureWidth = 256; var textureHeight = 256; // create a canvas var c = document.createElement("canvas"); c.width = textureWidth; c.height = textureHeight; var ctx = c.getContext("2d"); // Set the origin to the center of the canvas ctx.translate(textureWidth / 2, textureHeight / 2); // Draw a yellow circle ctx.fillStyle = "rgb(255,255,0)"; // yellow ctx.beginPath(); var radius = (Math.min(textureWidth, textureHeight) - 2) / 2; ctx.arc(0, 0, radius, 0, Math.PI * 2, true); ctx.fill(); // Draw some black initials in the middle. ctx.fillStyle = "rgb(0,0,0)"; ctx.font = "60pt Arial"; ctx.textAlign = "center"; ctx.fillText(initials, 0, 30); // now I can make a texture from that var tex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, tex); gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, c); gl.generateMipmap(gl.TEXTURE_2D); I know I can edit individual pixels in a Unity texture but is there any higher level API for drawing to texture in unity?

    Read the article

  • Implementing a wheeled character controller

    - by Lazlo
    I'm trying to implement Boxycraft's character controller in XNA (with Farseer), as Bryan Dysmas did (minus the jumping part, yet). My current implementation seems to sometimes glitch in between two parallel planes, and fails to climb 45 degree slopes. (YouTube videos in links, plane glitch is subtle). How can I fix it? From the textual description, I seem to be doing it right. Here is my implementation (it seems like a huge wall of text, but it's easy to read. I wish I could simplify and isolate the problem more, but I can't): public Body TorsoBody { get; private set; } public PolygonShape TorsoShape { get; private set; } public Body LegsBody { get; private set; } public Shape LegsShape { get; private set; } public RevoluteJoint Hips { get; private set; } public FixedAngleJoint FixedAngleJoint { get; private set; } public AngleJoint AngleJoint { get; private set; } ... this.TorsoBody = BodyFactory.CreateRectangle(this.World, 1, 1.5f, 1); this.TorsoShape = new PolygonShape(1); this.TorsoShape.SetAsBox(0.5f, 0.75f); this.TorsoBody.CreateFixture(this.TorsoShape); this.TorsoBody.IsStatic = false; this.LegsBody = BodyFactory.CreateCircle(this.World, 0.5f, 1); this.LegsShape = new CircleShape(0.5f, 1); this.LegsBody.CreateFixture(this.LegsShape); this.LegsBody.Position -= 0.75f * Vector2.UnitY; this.LegsBody.IsStatic = false; this.Hips = JointFactory.CreateRevoluteJoint(this.TorsoBody, this.LegsBody, Vector2.Zero); this.Hips.MotorEnabled = true; this.AngleJoint = new AngleJoint(this.TorsoBody, this.LegsBody); this.FixedAngleJoint = new FixedAngleJoint(this.TorsoBody); this.Hips.MaxMotorTorque = float.PositiveInfinity; this.World.AddJoint(this.Hips); this.World.AddJoint(this.AngleJoint); this.World.AddJoint(this.FixedAngleJoint); ... public void Move(float m) // -1, 0, +1 { this.Hips.MotorSpeed = 0.5f * m; }

    Read the article

  • XNA frame rate spikes in full screen mode

    - by ProgrammerAtWork
    I'm loading a simple texture and rotating it in XNA, and this works. But when I run it in full screen 1920x1080 mode I see spikes while my texture is rotating. If I run it windowed with 1920x1080 resolution, I don't get the spikes. The size of the texture does not seem to matter, I tried 512 texture size and 2048 texture size, same thing happens. Spikes in full screen, no spikes in windowed, resolution does not seem to matter, Debug or Release does not seem to do anything either. Anyone got ideas of what could be the problem? Edit: I think this problem has something to do with the vertical retrace. Set this property: _graphicsDeviceManager.SynchronizeWithVerticalRetrace = false; you'll lose vsync but it will not stutter.

    Read the article

  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geo clip maps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph and more precisely the bolded part: The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering. Let's take an example image from the article: My "understanding" of the way the clip maps were updated was that you floor the position of the viewpoint to an int, and thus get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case - but what I am failing to understand is this: If you look at the image above, if the viewpoint was to move one unit to the right, then the inner ring (the one just around the view point + white center square) would end up getting a 1 unit space on both the left and right side of itself. But there is nothing in the paper that deals with this; what I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image): This is obviously not a valid state of the clip map. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004 and I just can't see what I am not getting.

    Read the article

  • how do I set quad buffering with jogl 2.0

    - by tony danza
    I'm trying to create a 3d renderer for stereo vision with quad buffering with Processing/Java. The hardware I'm using is ready for this so that's not the problem. I had a stereo.jar library in jogl 1.0 working for Processing 1.5, but now I have to use Processing 2.0 and jogl 2.0 therefore I have to adapt the library. Some things are changed in the source code of Jogl and Processing and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code: public class Theatre extends PGraphicsOpenGL{ protected void allocate() { if (context == null) { // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them GLCapabilities capabilities = new GLCapabilities(); // Starting in release 0158, OpenGL smoothing is always enabled if (!hints[DISABLE_OPENGL_2X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(2); } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); } capabilities.setStereo(true); // get a rendering surface and a context for this canvas GLDrawableFactory factory = GLDrawableFactory.getFactory(); drawable = factory.getGLDrawable(parent, capabilities, null); context = drawable.createContext(null); // need to get proper opengl context since will be needed below gl = context.getGL(); // Flag defaults to be reset on the next trip into beginDraw(). settingsInited = false; } else { // The following three lines are a fix for Bug #1176 // http://dev.processing.org/bugs/show_bug.cgi?id=1176 context.destroy(); context = drawable.createContext(null); gl = context.getGL(); reapplySettings(); } } } This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying: PGraphicsOpenGL pg = ((PGraphicsOpenGL)g); pgl = pg.beginPGL(); gl = pgl.gl; glu = pg.pgl.glu; gl2 = pgl.gl.getGL2(); GLProfile profile = GLProfile.get(GLProfile.GL2); GLCapabilities capabilities = new GLCapabilities(profile); capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); capabilities.setStereo(true); GLDrawableFactory factory = GLDrawableFactory.getFactory(profile); If I go on, I should do something like this: drawable = factory.getGLDrawable(parent, capabilities, null); but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try this: gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work :/ Thanks.

    Read the article

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Will the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If that's the case, what happens if I have 5 shaders all using the same texture: does that mean that I send 5 instances of that texture data to the GPU, or am I simply telling the GPU what preloaded texture to use? Or does XNA do the heavy lifting in the background?

    Read the article

  • Frame timing for GLFW versus GLUT

    - by linello
    I need a library that ensures the timing between frames is as constant as possible during a visual psychophysics experiment. This is usually done by synchronizing the refresh rate of the screen with the main loop. For example, if my monitor runs at 60Hz I would like to specify that frequency to my framework. If my game loop is the following: void gameloop() { // do some computation printDeltaT(); Flip buffers } I would like the printed time interval to be constant. Is it possible with GLFW?
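
    GLFW can lock buffer swaps to the monitor's refresh with glfwSwapInterval(1), driver permitting, so the swap call blocks once per refresh and the measured delta sits close to 1/60 s on a 60 Hz display. A minimal GLFW 3 sketch (GLFW 2 creates the window differently, but the swap-interval idea is the same):

        #include <GLFW/glfw3.h>
        #include <cstdio>

        int main() {
            if (!glfwInit()) return 1;
            GLFWwindow* window = glfwCreateWindow(640, 480, "timing", nullptr, nullptr);
            if (!window) { glfwTerminate(); return 1; }
            glfwMakeContextCurrent(window);
            glfwSwapInterval(1);                        // sync buffer swaps to the refresh rate

            double last = glfwGetTime();
            while (!glfwWindowShouldClose(window)) {
                // ... draw the stimulus here ...
                double now = glfwGetTime();
                std::printf("delta: %f\n", now - last); // printDeltaT()
                last = now;
                glfwSwapBuffers(window);                // blocks until the next vsync
                glfwPollEvents();
            }
            glfwTerminate();
            return 0;
        }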

    Read the article

  • How can I store all my level data in a single file instead of spread out over many files?

    - by Jon
    I am currently generating my level data, and saving to disk to ensure that any modifications done to the level are saved. I am storing "chunks" of 2048x2048 pixels into a file. Whenever the player moves over a section that doesn't have a file associated with the position, a new file is created. This works great, and is very fast. My issue is that, as you play, the file count gets larger and larger. I'm wondering what techniques can be used to alleviate the file count, without taking a performance hit. I am interested in how you would store/seek/update this data in a single file instead of multiple files efficiently.
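
    One approach is a single data file of fixed-size chunk slots plus an index that maps chunk coordinates to a slot, so every read or write is one seek. The sketch below assumes every chunk serialises to the same size and that the file has been opened elsewhere in binary read/write mode; a real version would also persist the index (for example at the front of the file).

        #include <cstddef>
        #include <cstdint>
        #include <fstream>
        #include <map>
        #include <utility>
        #include <vector>

        const std::size_t kChunkBytes = 2048 * 2048 * 4;          // e.g. one RGBA chunk

        struct ChunkStore {
            std::fstream file;                                     // opened elsewhere: binary in/out
            std::map<std::pair<int,int>, std::uint64_t> index;     // chunk coord -> slot
            std::uint64_t nextSlot = 0;

            void writeChunk(int cx, int cy, const std::vector<char>& data) {
                auto key = std::make_pair(cx, cy);
                if (!index.count(key)) index[key] = nextSlot++;    // new chunk: append a slot
                file.seekp(index[key] * kChunkBytes);
                file.write(data.data(), kChunkBytes);              // data holds exactly kChunkBytes
            }

            bool readChunk(int cx, int cy, std::vector<char>& out) {
                auto it = index.find(std::make_pair(cx, cy));
                if (it == index.end()) return false;               // not generated yet
                out.resize(kChunkBytes);
                file.seekg(it->second * kChunkBytes);
                file.read(out.data(), kChunkBytes);
                return true;
            }
        };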

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. So I'm creating a floating platform that I would like to simply travel from one designated point to another, and then return to the first, passing between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel multiples of whole tile values of world data. So if the platform is not stationary, then it will travel at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon reaching one tile length's distance, I would like it to slow to a stop at a given tile coordinate and then repeat the process in reverse. The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to a value storing the platform's current speed once it reaches one tile's length of distance (assuming that the platform is traveling more than one tile-length, but to keep things simple, let's just assume it is) - but then the question is: what would the correct value be for that deceleration to produce this effect? How would I find that value?
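
    The kinematic relation v^2 = 2*a*d gives the braking value directly: to come to rest after a remaining distance d from current speed v, decelerate at a = v*v / (2*d). A per-frame sketch in C++, with a snap at the end since fixed timesteps rarely land exactly on the target:

        #include <cmath>

        void updatePlatform(float& position, float& speed, float target,
                            float maxSpeed, float accel, float dt)
        {
            float remaining = target - position;                 // assumes motion toward +x
            if (remaining <= 0.0f) { position = target; speed = 0.0f; return; }

            float stopDist = (speed * speed) / (2.0f * accel);   // distance needed to brake at 'accel'
            if (remaining <= stopDist)
                // brake just hard enough to stop exactly at the target
                speed = std::fmax(0.0f, speed - (speed * speed) / (2.0f * remaining) * dt);
            else
                speed = std::fmin(maxSpeed, speed + accel * dt);

            position += speed * dt;
            if (position >= target) { position = target; speed = 0.0f; }   // snap
        }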

    Read the article

  • Pixmaps, ByteBuffers, and Textures....Oh my

    - by odaymichael
    My ultimate goal is to take a specific region of the screen, and redraw it somewhere else. For example, take a square from the upper left hand corner of the screen and redraw it on the lower right hand corner, so that it is basically a copy of that screen section; kind of like a minimap, but at the same scale as the original. I have looked into pixmaps and ByteBuffers. Also maybe copying that region from the backbuffer somehow. Wondering the best way to go about this. Any help is appreciated. I am using OpenGL ES and libgdx for what it's worth.
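
    glCopyTexSubImage2D does exactly this without a CPU round trip: it copies a rectangle of the current framebuffer into an existing texture, which you can then draw in the opposite corner at the same scale. Sketched below in desktop C/C++ GL; the same entry point exists in OpenGL ES 2.0 and should be reachable through libgdx's GL20 interface. It assumes a texture at least w x h in size already exists.

        #include <GL/gl.h>

        // Copies a w x h rectangle of the current framebuffer, starting at (srcX, srcY),
        // into offset (0, 0) of an existing texture -- no round trip through CPU memory.
        void copyScreenRegionToTexture(GLuint texture, int srcX, int srcY, int w, int h)
        {
            glBindTexture(GL_TEXTURE_2D, texture);
            glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, srcX, srcY, w, h);
        }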

    Read the article

  • How to perform efficient 2D picking in HTML5?

    - by jSepia
    I'm currently using an R-Tree for both picking and collision testing. Each entity on screen has a bounding box for collisions and a separate one for picking. Since entities may change position very frequently, both trees must be updated/reordered once per frame. While this is very efficient for collisions, because the tree is used in hundreds of collision queries every frame, I'm finding it too costly for picking, because it only gets queried when the user clicks, thus leading to a lot of wasted tree updates. What would be a more efficient way to implement picking without as much overhead?

    Read the article

  • Getting FEATURE_LEVEL_9_3 to work in DX11

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the the vertex/pixel shader files themselves being incompatible for the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below: Vertex shader: cbuffer MatrixBuffer { matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; PixelInputType LightVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Calculate the normal vector against the world matrix only. output.normal = mul(input.normal, (float3x3)worldMatrix); // Normalize the normal vector. output.normal = normalize(output.normal); return output; } Pixel Shader: Texture2D shaderTexture; SamplerState SampleType; cbuffer LightBuffer { float4 ambientColor; float4 diffuseColor; float3 lightDirection; float padding; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; float4 LightPixelShader(PixelInputType input) : SV_TARGET { float4 textureColor; float3 lightDir; float lightIntensity; float4 color; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor = shaderTexture.Sample(SampleType, input.tex); // Set the default output color to the ambient light value for all pixels. color = ambientColor; // Invert the light direction for calculations. lightDir = -lightDirection; // Calculate the amount of light on this pixel. lightIntensity = saturate(dot(input.normal, lightDir)); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. color += (diffuseColor * lightIntensity); } // Saturate the final light color. color = saturate(color); // Multiply the texture pixel and the final diffuse color to get the final pixel color result. color = color * textureColor; return color; }
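
    Feature level 9_3 expects the "level 9" compile targets rather than vs_3_0/ps_2_0, i.e. "vs_4_0_level_9_3" and "ps_4_0_level_9_3". A sketch below using D3DCompileFromFile with the entry points from the shaders above; the file names are placeholders, the same target strings should also work with D3DX11CompileFromFile, and error handling is trimmed.

        #include <windows.h>
        #include <d3dcompiler.h>
        #pragma comment(lib, "d3dcompiler.lib")

        HRESULT CompileShaders(ID3DBlob** vsBlob, ID3DBlob** psBlob)
        {
            ID3DBlob* errors = nullptr;    // inspect errors->GetBufferPointer() on failure

            HRESULT hr = D3DCompileFromFile(L"light.vs", nullptr, nullptr,
                                            "LightVertexShader", "vs_4_0_level_9_3",
                                            0, 0, vsBlob, &errors);
            if (FAILED(hr)) return hr;

            hr = D3DCompileFromFile(L"light.ps", nullptr, nullptr,
                                    "LightPixelShader", "ps_4_0_level_9_3",
                                    0, 0, psBlob, &errors);
            return hr;
        }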

    Read the article

  • Rendering order in an Entity System

    - by Daedalus
    Say I use a basic ES approach, and also inside Systems I hold lists of all entities that Systems are required to process. How do I maintain this list of entities in desired rendering order, i.e. for a dumb 2D RenderingSystem? I saw this discussion, and what they suggest is to do something like Z ordering - what I would probably do is just to store a "layer" int in DrawableComponent and then, inside RenderingSystem, just sort entities by mentioned "layer" whenever the entity list for RenderingSystem changes. They also say we could just delete and recreate the entity whenever we want it on the top, but it seems too inflexible to me. How is this problem usually solved?
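
    The "layer int plus sort" variant is straightforward to keep incremental: hold the rendering system's list in sorted order with a stable sort (so insertion order breaks ties) and only re-sort when an entity is added, removed, or changes layer. A sketch in C++:

        #include <algorithm>
        #include <vector>

        struct Drawable { int entityId; int layer; };

        struct RenderingSystem {
            std::vector<Drawable> drawables;
            bool orderDirty = false;

            void add(Drawable d) { drawables.push_back(d); orderDirty = true; }

            void render() {
                if (orderDirty) {
                    std::stable_sort(drawables.begin(), drawables.end(),
                                     [](const Drawable& a, const Drawable& b) {
                                         return a.layer < b.layer;   // low layers drawn first
                                     });
                    orderDirty = false;
                }
                // for (const Drawable& d : drawables) draw(d);
            }
        };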

    Read the article

  • libgdx actors and instant actions

    - by vaati
    I'm having trouble with actors and actions. I have a list of actors; they all have either no action or one sequence action. This sequence action has either: a couple of actions (some are instant, some have duration 0), or a couple of actions followed by a parallel action. My problem is the following: some of the instant actions are used to set the position and the alpha of the actor. So when one of the actions is "move to x,y and set alpha to 0", the actor is visible for one frame at position 0,0, moves instantly to x,y for the next frame, and then disappears. Though this behaviour is to be expected, I want to avoid it. How can I achieve that? I tried to intercept the actions before I put the actors in the stage, but I need the stage width/height for some actions. So something like: Action actionSequence = actor.getActions().get(0); Array<Action> actions = ((SequenceAction) actionSequence).getActions(); for(Action act : actions){ if(act.act(0)) System.out.println("action " + act.toString() + " successfully run"); else System.out.println("action " + act.toString() + " wasn't instant"); } won't work. It gets even more complicated when an actor can also have a repeat action instead of the sequence action (because you have to run the actions that have duration 0 only once without repeat, and then start the repeat). Any help is appreciated.

    Read the article

  • How to implement explosion in OpenGL?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion. Could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z); I'm thinking of randomly generating a collection of vectors with (x, y, z) as origin, then drawing particles (glutSolidCube) that move along these vectors for some period of time, say 1000 updates, and then disappear. Is this approach feasible? A minimal example would be greatly appreciated.
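
    The approach is feasible and is essentially a minimal particle system. A sketch in plain C++ under that description: spawn particles at the explosion centre with random directions, integrate them each frame, and drop them when their lifetime runs out; each live particle would then be drawn (for example with glTranslatef plus glutSolidCube) at its current position.

        #include <algorithm>
        #include <cstdlib>
        #include <vector>

        struct Particle { float x, y, z, vx, vy, vz, life; };

        float rand01() { return std::rand() / (float)RAND_MAX; }

        std::vector<Particle> spawnExplosion(float x, float y, float z, int count)
        {
            std::vector<Particle> particles;
            for (int i = 0; i < count; ++i) {
                // crude random direction; spherical angles or normalisation give a rounder burst
                float vx = rand01() * 2.0f - 1.0f;
                float vy = rand01() * 2.0f - 1.0f;
                float vz = rand01() * 2.0f - 1.0f;
                particles.push_back({ x, y, z, vx, vy, vz, 1.0f });   // 1 second to live
            }
            return particles;
        }

        void update(std::vector<Particle>& particles, float dt)
        {
            for (Particle& p : particles) {
                p.x += p.vx * dt;  p.y += p.vy * dt;  p.z += p.vz * dt;
                p.life -= dt;
            }
            particles.erase(std::remove_if(particles.begin(), particles.end(),
                                           [](const Particle& p) { return p.life <= 0.0f; }),
                            particles.end());
        }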

    Read the article

  • How to do directional per fragment lighting in world space?

    - by user
    I am attempting to create a GLSL shader for simple, per-fragment directional light. So far, after following many tutorials, I have continually run into the same issue: my light is specified in world coordinates; however, the shader treats the light's position as being in eye space, and thus the light direction changes when I move the camera. My question is, how do I transform a directional light position such as (50, 50, 50, 0) into eye space, or would doing things this way be the incorrect approach to the problem?
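
    If the shader lights in eye space, the world-space light direction has to be rotated into eye space once per frame (on the CPU or in the vertex shader) before it is compared against eye-space normals; a direction has w = 0, so only the upper-left 3x3 of the view matrix applies. A sketch in plain C++:

        struct Vec3 { float x, y, z; };
        struct Mat4 { float m[4][4]; };   // view matrix, m[row][col], column-vector convention

        // Rotate a world-space direction into eye space: translation is ignored
        // because a direction has w = 0.
        Vec3 toEyeSpaceDirection(const Mat4& view, const Vec3& d)
        {
            return { view.m[0][0]*d.x + view.m[0][1]*d.y + view.m[0][2]*d.z,
                     view.m[1][0]*d.x + view.m[1][1]*d.y + view.m[1][2]*d.z,
                     view.m[2][0]*d.x + view.m[2][1]*d.y + view.m[2][2]*d.z };
        }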

    Read the article

  • Rope Colliding with a Rectangle

    - by Colton
    I have my rope, and I have my rectangles. The rope is similar to the implementation found here: http://nehe.gamedev.net/tutorial/rope_physics/17006/ Now, I want to make the rope properly collide with the rectangle such that the rope will not pass through a rectangle, and wrap around the rectangle and all that good stuff. Currently, I have it set so no rope node can pass through a rect (successfully); however, this means a rope segment can still pass through a block. Ex: So the question is, what can I do to fix this? What I have tried: I create a rectangle between two nodes of a rope, calculate rotation between the nodes, and get myself a transformed rectangle. I can successfully detect a collision between rope segments and a (non-transformed) rectangle. Create a new node or pivot point around the corner of the block, and rearrange nodes to point to the corner node. Trouble is determining what corner the rope segment is passing through. And then the current rope setup goes wonky (based on Verlet integration, so a sudden change in position causes the rope to wiggle like a seismograph during a magnitude 8 earthquake). Among other issues that might be solvable, but it's turning into a case-by-case thing, which doesn't seem right. I think the best answer here would just be a link to a tutorial (I simply can't find any; most lead to Box2D or Farseer, but I want to at least learn how it works before I hide behind an engine).

    Read the article
