Search Results

Search found 19281 results on 772 pages for 'blender game engine'.


  • How to set a target as image [on hold]

    - by Zadalaxmi
    How do I set a target as an image in the given code?

        public void addListenerForImage(final Image roomImage) {
            final DragAndDrop dragAndDrop = new DragAndDrop();
            dragAndDrop.addSource(new DragAndDrop.Source(roomImage) {
                public DragAndDrop.Payload dragStart(InputEvent event, float x, float y, int pointer) {
                    DragAndDrop.Payload payload = new DragAndDrop.Payload();
                    payload.setDragActor(roomImage);
                    dragAndDrop.setDragActorPosition(-x, -y + roomImage.getHeight());
                    return payload;
                }

                public void dragStop(InputEvent event, float x, float y, int pointer, Target target) {
                    roomImage.setBounds(50, 125, roomImage.getWidth(), roomImage.getHeight());
                    if (target != null) {
                        roomImage.setPosition(target.getActor().getX(), target.getActor().getY());
                    }
                    System.out.println(target);
                    stage.addActor(roomImage);
                }
            });
        }

    My problem is that I can drag the images, but I am not able to set an image as the target, and the target shows as null. One more thing: if some of the images in a group are invisible, how can I test whether they are overlapped or not? Please give me some links and suggestions.

    Read the article

  • Making a full-screen animation on Android? Should I use OpenGL?

    - by Roger Travis
    Say I need to make several full-screen animations, each consisting of about 500+ frames, similar to the Talking Tom app ( https://play.google.com/store/apps/details?id=com.outfit7.talkingtom2free ). The animation should play at a reasonable speed - not less than 20 fps - and the pictures should be of reasonable quality, not overly compressed. What method do you think I should use? So far I have tried storing each frame as a compressed JPEG before the animation starts, loading each frame into a byteArray as the animation plays, then decoding the corresponding byteArray into a bitmap and drawing it on a SurfaceView. The problem is that the speed is too low, usually about 5-10 fps. I have thought of two other options: turning all the animations into one movie file, but I guess there might be problems with starting, pausing and seeking to exactly the right frame - what do you think? The other option I thought about was using OpenGL (I have never worked with it before) to play the animation frame by frame. What do you think, would OpenGL be able to handle it? Thanks!

    Read the article

  • Java Slick2d - How to translate mouse coordinates to world coordinates

    - by Corey
    I am translating in my main class's render method. How do I get the position where my mouse actually is after I scroll the screen?

        public void render(GameContainer gc, Graphics g) throws SlickException {
            float centerX = 800 / 2;
            float centerY = 600 / 2;
            g.translate(centerX, centerY);
            g.translate(-player.playerX, -player.playerY);
            gen.render(g);
            player.render(g);
        }

        playerX = 800 / 2 - sprite.getWidth();
        playerY = 600 / 2 - sprite.getHeight();

    (An image to help with the explanation was included in the original post.) I tried implementing a camera, but it seems that no matter what I do I can't get the mouse position. I was told to do worldX = mouseX + camX, but it didn't work; the mouse was still off. Here is my Camera class if that helps:

        public class Camera {
            public float camX;
            public float camY;
            Player player;

            public void init() {
                player = new Player();
            }

            public void update(GameContainer gc, int delta) {
                Input input = gc.getInput();
                if (input.isKeyDown(Input.KEY_W)) { camY -= player.speed * delta; }
                if (input.isKeyDown(Input.KEY_S)) { camY += player.speed * delta; }
                if (input.isKeyDown(Input.KEY_A)) { camX -= player.speed * delta; }
                if (input.isKeyDown(Input.KEY_D)) { camX += player.speed * delta; }
            }
        }

    Code used to convert the mouse position:

        worldX = (int) (mouseX + cam.camX);
        worldY = (int) (mouseY + cam.camY);
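
    For reference, one way to reason about the conversion (a minimal sketch in plain Python, not Slick2D; the names mirror the question and the 800x600 window size is an assumption) is to invert exactly the translation applied during rendering:

        SCREEN_W, SCREEN_H = 800, 600  # assumed window size from the question

        def screen_to_world(mouse_x, mouse_y, player_x, player_y):
            # The render translates by (+centre, -player), so the inverse mapping is:
            # world = mouse - centre + player
            world_x = mouse_x - SCREEN_W / 2 + player_x
            world_y = mouse_y - SCREEN_H / 2 + player_y
            return world_x, world_y

        # The mouse at the screen centre maps to the player's world position.
        print(screen_to_world(400, 300, 1234.0, 567.0))  # -> (1234.0, 567.0)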

    Read the article

  • 2D animation example in pyglet (python) looping through 2 images/sprites every x seconds

    - by Bentley4
    Suppose you have two images: step1.png and step2.png . Can anyone show me a very simple example in pyglet how to loop through those 2 images say every 0.5 seconds? The character doesn't have to move, just a simple black screen with a fixed region wherein the two images continually change every 0.5 secs. I know how to make a character move, shoot projectiles etc. but I just can't figure out how to control the looping speed of the images.
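
    For what it's worth, a minimal pyglet sketch of one way to do this (it assumes step1.png and step2.png sit next to the script; a clock callback swaps the frame index every 0.5 seconds and on_draw blits whichever frame is active):

        import pyglet

        window = pyglet.window.Window(640, 480)
        frames = [pyglet.image.load('step1.png'), pyglet.image.load('step2.png')]
        current = [0]  # index of the frame currently shown

        def flip_frame(dt):
            # Called by pyglet's clock every 0.5 seconds; switch to the other image.
            current[0] = (current[0] + 1) % len(frames)

        pyglet.clock.schedule_interval(flip_frame, 0.5)

        @window.event
        def on_draw():
            window.clear()
            # Draw the active frame in a fixed region of the window.
            frames[current[0]].blit(100, 100)

        pyglet.app.run()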

    Read the article

  • Popular genres in Asian (non-Japanese) markets?

    - by mummey
    Hello. From time to time I've wondered what kinds of games are popular in Asia (India, China, Korea, Singapore, etc.). I hear about developers in the US and UK who outsource work there, but what goes into the games they make for themselves? Relatedly, you hear these days about how Japanese developers have been marketing their games more toward American audiences (with mixed success). In what ways could American developers aim their development toward Asian audiences?

    Read the article

  • Performance of pixel shaders vs. SpriteBatch: XNA

    - by ashes999
    Precondition: I read this question/answer about using shaders, or SpriteBatch, to render and mark a sprite. I need to do something like that. I also have a 2D lighting proof-of-concept which I need to write. The way it will work is basically: draw all the sprites, draw lighting gradients to create a lighting texture, then multiply/add the lighting texture in multiple passes to achieve different effects. My question is really about a generalization: can I say with certainty that pixel shaders are always faster than adding/multiplying textures with the SpriteBatch? Or that adding/multiplying is always faster? Or, if it's not generalizable, how do I decide which approach to take, given that I can probably code either of them? (If it matters, I'm using MonoGame 3.0 beta for Windows games.)

    Read the article

  • Formula for replicating glTexGen in OpenGL ES 2.0 GLSL

    - by visualjc
    I also posted this on the main StackExchange site, but this seems like a better place; forgive me for the double post if it shows up twice. I have been trying for several hours to implement a GLSL replacement for glTexGen with GL_OBJECT_LINEAR for OpenGL ES 2.0. In desktop GLSL there is gl_TextureMatrix, which makes this easier, but that is not available in OpenGL ES 2.0 / OpenGL ES Shading Language 1.0. Several sites have mentioned that this should be "easy" to do in a GLSL vertex shader, but I just cannot get it to work. My hunch is that I'm not setting the planes up correctly, or I'm missing something in my understanding. I've pored over the web, but most sites are talking about projected textures; I'm just looking to create UVs based on a planar projection. The models are being built in Maya, have 50k polygons, and the modeler is using planar mapping, but Maya will not export the UVs, so I'm trying to figure this out. I've looked at the glTexGen man page information:

        g = p1*xo + p2*yo + p3*zo + p4*wo

    What is g? Is g the value of s in the texture2D call? I've looked at the site http://www.opengl.org/wiki/Mathematics_of_glTexGen . Another site explains the same function as:

        coord = P1*X + P2*Y + P3*Z + P4*W

    I don't get how coord (a UV vec2 in my mind) can be equal to a dot product (a scalar value)? Same problem I had before with "g". What do I set the plane to be? In my OpenGL 3.0 C++ code I set it to [0, 0, 1, 0] (basically unit z) and glTexGen works great. I'm still missing something. My vertex shader looks basically like this (WVPMatrix is the world-view-projection matrix; POSITION is the model vertex position):

        varying vec4 kOutBaseTCoord;

        void main()
        {
            gl_Position = WVPMatrix * vec4(POSITION, 1.0);

            vec4 sPlane = vec4(1.0, 0.0, 0.0, 0.0);
            vec4 tPlane = vec4(0.0, 1.0, 0.0, 0.0);
            vec4 rPlane = vec4(0.0, 0.0, 0.0, 0.0);
            vec4 qPlane = vec4(0.0, 0.0, 0.0, 0.0);

            kOutBaseTCoord.s = dot(vec4(POSITION, 1.0), sPlane);
            kOutBaseTCoord.t = dot(vec4(POSITION, 1.0), tPlane);
            //kOutBaseTCoord.r = dot(vec4(POSITION, 1.0), rPlane);
            //kOutBaseTCoord.q = dot(vec4(POSITION, 1.0), qPlane);
        }

    The fragment shader:

        precision mediump float;
        uniform sampler2D BaseSampler;
        varying mediump vec4 kOutBaseTCoord;

        void main()
        {
            //gl_FragColor = vec4(kOutBaseTCoord.st, 0.0, 1.0);
            gl_FragColor = texture2D(BaseSampler, kOutBaseTCoord.st);
        }

    I've tried texture2DProj in the fragment shader. Here is one of the other links I've looked at: http://www.gamedev.net/topic/407961-texgen-not-working-with-glsl-with-fixed-pipeline-is-ok/ Thank you in advance.
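
    As a plain-language restatement of the man-page formula (a hedged sketch in Python rather than GLSL; the plane values are the same illustrative s/t planes used in the question), each output coordinate has its own plane, and "g" is the scalar result of one dot product, so s and t together form the vec2 passed to texture2D:

        def dot4(a, b):
            return sum(x * y for x, y in zip(a, b))

        # Illustrative GL_OBJECT_LINEAR planes: object-space x drives s, object-space y drives t
        # (a planar projection straight down the z axis).
        s_plane = (1.0, 0.0, 0.0, 0.0)
        t_plane = (0.0, 1.0, 0.0, 0.0)

        def object_linear_texgen(obj_pos):
            # obj_pos is the object-space vertex position (x, y, z, w).
            s = dot4(obj_pos, s_plane)  # the scalar "g" for the s coordinate
            t = dot4(obj_pos, t_plane)  # the scalar "g" for the t coordinate
            return (s, t)               # the pair used to sample the texture

        print(object_linear_texgen((0.25, -0.5, 3.0, 1.0)))  # -> (0.25, -0.5)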

    Read the article

  • Using Unity Android in a sub view and adding an action bar and style

    - by aeroxr1
    I exported a simple animation from Unity3D (version 4.5) as an Android project. With Eclipse I modified the manifest and added another activity. In this activity I put a button that starts the animation, and this is the result: the action bar appears in the main activity but not in the Unity activity. How can I add the action bar and the style of the first activity to Unity's animation activity? This is the Unity activity's code:

        package com.rabidgremlin.tut.redcube;

        import android.app.NativeActivity;
        import android.content.res.Configuration;
        import android.graphics.PixelFormat;
        import android.os.Bundle;
        import android.view.KeyEvent;
        import android.view.MotionEvent;
        import android.view.View;
        import android.view.ViewGroup;
        import android.view.Window;
        import android.view.WindowManager;
        import com.unity3d.player.UnityPlayer;

        public class UnityPlayerNativeActivity extends NativeActivity {
            protected UnityPlayer mUnityPlayer; // don't change the name of this variable; referenced from native code

            // Setup activity layout
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                //requestWindowFeature(Window.FEATURE_NO_TITLE);
                super.onCreate(savedInstanceState);
                getWindow().takeSurface(null);
                //setTheme(android.R.style.Theme_NoTitleBar_Fullscreen);
                getWindow().setFormat(PixelFormat.RGB_565);
                mUnityPlayer = new UnityPlayer(this);
                /*if (mUnityPlayer.getSettings().getBoolean("hide_status_bar", true))
                    getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                                         WindowManager.LayoutParams.FLAG_FULLSCREEN); */
                setContentView(mUnityPlayer);
                mUnityPlayer.requestFocus();
            }

            // Quit Unity
            @Override
            protected void onDestroy() {
                mUnityPlayer.quit();
                super.onDestroy();
            }

            // Pause Unity
            @Override
            protected void onPause() {
                super.onPause();
                mUnityPlayer.pause();
            }

            // let's remove this onResume() and try modifying the onResume()
            // Resume Unity
            @Override
            protected void onResume() {
                super.onResume();
                mUnityPlayer.resume();
            }

            // let's insert some changes here
            // This ensures the layout will be correct.
            @Override
            public void onConfigurationChanged(Configuration newConfig) {
                super.onConfigurationChanged(newConfig);
                mUnityPlayer.configurationChanged(newConfig);
            }

            // Notify Unity of the focus change.
            @Override
            public void onWindowFocusChanged(boolean hasFocus) {
                super.onWindowFocusChanged(hasFocus);
                mUnityPlayer.windowFocusChanged(hasFocus);
            }

            // For some reason the multiple keyevent type is not supported by the ndk.
            // Force event injection by overriding dispatchKeyEvent().
            @Override
            public boolean dispatchKeyEvent(KeyEvent event) {
                if (event.getAction() == KeyEvent.ACTION_MULTIPLE)
                    return mUnityPlayer.injectEvent(event);
                return super.dispatchKeyEvent(event);
            }

            // Pass any events not handled by (unfocused) views straight to UnityPlayer
            @Override
            public boolean onKeyUp(int keyCode, KeyEvent event) { return mUnityPlayer.injectEvent(event); }

            @Override
            public boolean onKeyDown(int keyCode, KeyEvent event) { return mUnityPlayer.injectEvent(event); }

            @Override
            public boolean onTouchEvent(MotionEvent event) { return mUnityPlayer.injectEvent(event); }

            /*API12*/
            public boolean onGenericMotionEvent(MotionEvent event) { return mUnityPlayer.injectEvent(event); }
        }

    And this is the AndroidManifest.xml (its opening <manifest> tag was cut off in the post):

            android:versionCode="1"
            android:versionName="1.0" >

            <!-- android:theme="@android:style/Theme.NoTitleBar" -->
            <supports-screens
                android:anyDensity="true"
                android:largeScreens="true"
                android:normalScreens="true"
                android:smallScreens="true"
                android:xlargeScreens="true" />

            <application
                android:icon="@drawable/app_icon"
                android:label="@string/app_name"
                android:theme="@android:style/Theme.Holo.Light" >
                <activity
                    android:name="com.rabidgremlin.tut.redcube.UnityPlayerNativeActivity"
                    android:configChanges="mcc|mnc|locale|touchscreen|keyboard|keyboardHidden|navigation|orientation|screenLayout|uiMode|screenSize|smallestScreenSize|fontScale"
                    android:label="@string/app_name"
                    android:screenOrientation="portrait" >
                    <!-- android:launchMode="singleTask" -->
                    <meta-data android:name="unityplayer.UnityActivity" android:value="true" />
                    <meta-data android:name="unityplayer.ForwardNativeEventsToDalvik" android:value="false" />
                </activity>
                <activity
                    android:name="com.rabidgremlin.tut.redcube.MainActivity"
                    android:label="@string/title_activity_main" >
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>

            <uses-sdk android:minSdkVersion="17" android:targetSdkVersion="19" />
            <uses-feature android:glEsVersion="0x00020000" />
        </manifest>

    Read the article

  • XNA `tex2Dlod` always returns transparent black

    - by feralin
    I want to sample a texture in a vertex shader, so at first I just tried:

        float2 texcoords = ...;
        color = tex2D(texture, texcoords);

    But apparently I cannot use tex2D in a vertex shader and must use tex2Dlod, so I changed the above code to:

        color = tex2Dlod(texture, float4(texcoords, 0, 0));

    But now color is always float4(0, 0, 0, 0) (i.e. transparent black). Why is this, and how can I fix it? EDIT: I know for a fact that the texture does not contain just transparent black pixels.

    Read the article

  • Heightmap generation

    - by Ziaix
    I want to implement something like this to create a heightmap: place a group of coordinates evenly across a map and give them height values within a certain range; then repeatedly create coordinates between all of those coordinates, setting each one's height to the mean of the surrounding coordinates. However, I'm not sure how I would go about it - in particular, I'm not sure how to code the part where I place the new coordinates in between the existing ones. Can anyone give any help/advice?
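
    One hedged sketch of that idea in Python (the jitter range and grid sizes are made up for illustration): each refinement pass doubles the grid resolution, keeps the original points, and fills every new point with the mean of its already-known neighbours.

        import random

        def refine(grid):
            # grid is a 2D list of heights; the output has twice the resolution.
            h, w = len(grid), len(grid[0])
            new_h, new_w = h * 2 - 1, w * 2 - 1
            out = [[None] * new_w for _ in range(new_h)]
            for y in range(h):
                for x in range(w):
                    out[y * 2][x * 2] = grid[y][x]  # keep the original coordinates
            for y in range(new_h):
                for x in range(new_w):
                    if out[y][x] is None:
                        # Mean of the surrounding coordinates that already have a height,
                        # plus a little jitter so the terrain isn't perfectly smooth.
                        neighbours = [out[ny][nx]
                                      for ny in (y - 1, y, y + 1)
                                      for nx in (x - 1, x, x + 1)
                                      if 0 <= ny < new_h and 0 <= nx < new_w
                                      and out[ny][nx] is not None]
                        out[y][x] = sum(neighbours) / len(neighbours) + random.uniform(-0.5, 0.5)
            return out

        coarse = [[random.uniform(0.0, 10.0) for _ in range(3)] for _ in range(3)]
        fine = refine(refine(coarse))  # each call doubles the resolution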

    Read the article

  • How can I write only to the stencil buffer in OpenGL ES 2.0?

    - by stephelton
    I'd like to write to the stencil buffer without incurring the cost of my expensive shaders. As I understand it, I write to the stencil buffer as a 'side effect' of rendering something. In this first pass where I write to the stencil buffer, I don't want to write anything to the color or depth buffer, and I definitely don't want to run through my lighting equations in my shaders. Do I need to create no-op shaders for this (and can I just discard fragments), or is there a better way to do this? As the title says, I'm using OpenGL ES 2.0. I haven't used the stencil buffer before, so if I seem to be misunderstanding something, feel free to be verbose.
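
    For illustration only, a hedged sketch of the usual state setup for a stencil-only pass, written with PyOpenGL-style calls (the same entry points exist in OpenGL ES 2.0; a trivial shader program still has to be bound, but its colour output is discarded by the mask):

        from OpenGL.GL import (
            glColorMask, glDepthMask, glEnable, glStencilFunc, glStencilOp,
            GL_FALSE, GL_TRUE, GL_STENCIL_TEST, GL_ALWAYS, GL_KEEP, GL_REPLACE)

        def begin_stencil_only_pass():
            # Mask off colour and depth writes; draw calls now only touch the stencil buffer.
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
            glDepthMask(GL_FALSE)
            glEnable(GL_STENCIL_TEST)
            glStencilFunc(GL_ALWAYS, 1, 0xFF)          # always pass the stencil test
            glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE)  # write 1 wherever geometry lands

        def end_stencil_only_pass():
            # Restore colour and depth writes for the expensive lighting pass.
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
            glDepthMask(GL_TRUE)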

    Read the article

  • Morph a sphere to a cube and a cube to a sphere with GLSL

    - by nkint
    I'm getting started with GLSL in Quartz Composer. I have a patch with a particle system in which each particle is mapped onto a sphere with a blend value: with blend = 0 the particles are in random positions, with blend = 1 the particles are on the sphere. The code is here:

        vec3 sphere(vec2 domain)
        {
            vec3 range;
            range.x = radius * cos(domain.y) * sin(domain.x);
            range.y = radius * sin(domain.y) * sin(domain.x);
            range.z = radius * cos(domain.x);
            return range;
        }

        // in main:
        vec2 p0 = gl_Vertex.xy * twopi;
        vec3 normal = sphere(p0);
        vec3 r0 = radius * normal;
        vec3 vertex = r0;
        normal = normal * blend + gl_Normal * (1.0 - blend);
        vertex = vertex * blend + gl_Vertex.xyz * (1.0 - blend);

    I'd like the particles to be on a cube when blend = 0. I've tried to find some parametric equation for the cube, but I can't figure it out. Maybe it is not the right way?
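
    One common alternative to a parametric cube (a hedged sketch in Python, purely to show the math) is to reuse the sphere positions and push each point out onto the cube by dividing by its largest absolute component:

        def sphere_to_cube(p, half_size=1.0):
            # p is a point on (or near) the unit sphere, as an (x, y, z) tuple.
            # Scaling by 1 / max(|x|, |y|, |z|) moves the point outward until one
            # coordinate reaches +/- half_size, i.e. onto the cube's surface.
            x, y, z = p
            m = max(abs(x), abs(y), abs(z))  # assumes p is not the origin
            return (x / m * half_size, y / m * half_size, z / m * half_size)

        print(sphere_to_cube((0.577, 0.577, 0.577)))  # corner direction -> (1.0, 1.0, 1.0)
        print(sphere_to_cube((1.0, 0.0, 0.0)))        # face direction   -> (1.0, 0.0, 0.0)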

    Read the article

  • Camera rotation - First Person Camera using GLM

    - by tempvar
    I've just switched from the deprecated OpenGL functions to using shaders and the GLM math library, and I'm having a few problems setting up my camera rotations (first-person camera). I'll show what I've got set up so far. I'm setting up my view matrix using the glm::lookAt function, which takes an eye position, target and up vector:

        // arbitrary pos and target values
        pos = glm::vec3(0.0f, 0.0f, 10.0f);
        target = glm::vec3(0.0f, 0.0f, 0.0f);
        up = glm::vec3(0.0f, 1.0f, 0.0f);
        m_view = glm::lookAt(pos, target, up);

    I'm using glm::perspective for my projection, and the model matrix is just the identity:

        m_projection = glm::perspective(m_fov, m_aspectRatio, m_near, m_far);
        model = glm::mat4(1.0);

    I send the MVP matrix to my shader to multiply the vertex position:

        glm::mat4 MVP = camera->getProjection() * camera->getView() * model;
        // in shader
        gl_Position = MVP * vec4(vertexPos, 1.0);

    My camera class has standard rotate and translate functions which call glm::rotate and glm::translate respectively:

        void camera::rotate(float amount, glm::vec3 axis) { m_view = glm::rotate(m_view, amount, axis); }
        void camera::translate(glm::vec3 dir) { m_view = glm::translate(m_view, dir); }

    and I usually just use the mouse delta position as the amount for rotation. Now, in my previous OpenGL applications I'd just set up yaw and pitch angles and use sin and cos to change the direction vector (with gluLookAt), but I'd like to be able to do this using GLM and matrices. So at the moment I have my camera set 10 units away from the origin, facing it. I can see my geometry fine; it renders perfectly. When I use my rotation function...

        camera->rotate(mouseDeltaX, glm::vec3(0, 1, 0));

    ...what I want is to look to the right and left (like I would by manipulating the lookAt vector with gluLookAt), but what's happening is that it just rotates the model I'm looking at around the origin, as if I'm doing a full circle around it. Because I've translated my view matrix, shouldn't I need to translate it to the centre, do the rotation, then translate back away for it to be rotating around the origin? Also, I've tried using the rotate function around the x axis to get pitch working, but as soon as I rotate the model about 90 degrees it starts to roll instead of pitch (gimbal lock?). Thanks for your help guys, and if I've not explained it well: basically I'm trying to get a first-person camera working with matrix multiplication, and rotating my view matrix just rotates the model around the origin.

    Read the article

  • Marketing iOS games (and other mobile platforms)

    - by MrDatabase
    I'd like to market my existing and/or upcoming mobile games. Specifically, I want a "revenue sharing" agreement with the "marketing company", i.e. I don't want to pay anything up front, and I'm willing to give the marketing company a sizable chunk of the revenue (say up to 50%). Is a publisher the only entity that does this? Or do marketing companies exist that would be interested in this type of arrangement?

    Read the article

  • java slick2D - problem using ScalableGame class

    - by nellykvist
    I have a problem adjusting the size of the screen using the ScalableGame class from the Slick2D library. What I want to achieve: whenever I change the display size, the background should adjust to the screen size, and objects (images, graphic shapes) should fit (scale). This is how the state looks by default; I can change the screen size, but images and graphic shapes do not scale:

        appGameContainer = new AppGameContainer(
            new ScalableGame(new AppStateController(), Settings.video.getWidth(), Settings.video.getHeight(), true)
        );
        appGameContainer.setDisplayMode(Settings.video.getWidth(), Settings.video.getHeight(), Settings.video.isFullScreen());
        appGameContainer.start();

    If I add 100 to the width/height in the ScalableGame constructor:

        appGameContainer = new AppGameContainer(
            new ScalableGame(new AppStateController(), Settings.video.getWidth() + 100, Settings.video.getHeight() + 100, true)
        );
        appGameContainer.setDisplayMode(Settings.video.getWidth(), Settings.video.getHeight(), Settings.video.isFullScreen());
        appGameContainer.start();

    If I add 100 to the width/height of the display mode:

        appGameContainer = new AppGameContainer(
            new ScalableGame(new AppStateController(), Settings.video.getWidth(), Settings.video.getHeight(), true)
        );
        appGameContainer.setDisplayMode(Settings.video.getWidth() + 100, Settings.video.getHeight() + 100, Settings.video.isFullScreen());
        appGameContainer.start();

    Read the article

  • Bullet physics in python and pygame

    - by Pomg
    I am programming a 2D sidescroller in Python and pygame and am having trouble making a bullet go farther than just a little past the player. The bullet travels straight to the ground after I fire it. How, in Python code using pygame, do I make the bullet go farther? If you need code, here is the method that handles the bullet firing:

        self.xv += math.sin(math.radians(self.angle)) * self.attrs['speed']
        self.yv += math.cos(math.radians(self.angle)) * self.attrs['speed']
        self.rect.left += self.xv
        self.rect.top += self.yv
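
    For comparison, a hedged sketch of one common way to handle a fired projectile in pygame (names like speed and angle mirror the question; the key differences are that the velocity is set once when firing, only gravity changes it afterwards, and the y component is negated because pygame's y axis points down):

        import math
        import pygame

        class Bullet:
            GRAVITY = 0.3  # pixels per frame^2, purely illustrative

            def __init__(self, x, y, angle_degrees, speed):
                # Set the velocity once, at the moment the bullet is fired.
                self.rect = pygame.Rect(x, y, 4, 4)
                self.xv = math.cos(math.radians(angle_degrees)) * speed
                self.yv = -math.sin(math.radians(angle_degrees)) * speed  # negative = up

            def update(self):
                # Only gravity modifies the velocity each frame; the firing speed is
                # not re-added, so the bullet keeps travelling along its arc.
                self.yv += self.GRAVITY
                self.rect.x += int(self.xv)
                self.rect.y += int(self.yv)

        bullet = Bullet(100, 300, 45, 12)  # fired up and to the right
        for _ in range(10):
            bullet.update()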

    Read the article

  • Networking for RTS games with lockstep using UDP

    - by user782220
    Apparently, from what I can gather, StarCraft 2 moved to UDP in a patch. Now, obviously, with FPS games there is no dispute that UDP is the only way to go. But with RTS games, what benefits does UDP give over TCP, given that the network model is lockstep? I suppose another way to phrase this is: what features of TCP make it inferior to UDP with resend, etc. implemented, in the context of the RTS lockstep networking model?

    Read the article

  • OpenGL - Stack overflow if I do, Stack underflow if I don't!

    - by Wayne Werner
    Hi, I'm in a multimedia class in college, and we're "learning" OpenGL as part of the class. I'm trying to figure out how the OpenGL camera vs. modelview works, and so I found this example. I'm trying to port the example to Python using the OpenGL bindings - it starts up OpenGL much faster, so for testing purposes it's a lot nicer - but I keep running into a stack overflow error with the glPushMatrix in this code:

        def cube():
            for x in xrange(10):
                glPushMatrix()
                glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10)  # translate the cube
                glutSolidCube(2)  # draw the cube
                glPopMatrix()

    According to this reference, that happens when the matrix stack is full. So I thought, "well, if it's full, let me just pop the matrix off the top of the stack, and there will be room". I modified the code to:

        def cube():
            glPopMatrix()
            for x in xrange(10):
                glPushMatrix()
                glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10)  # translate the cube
                glutSolidCube(2)  # draw the cube
                glPopMatrix()

    And now I get a buffer underflow error - which apparently happens when the stack has only one matrix. So am I just waaay off base in my understanding? Or is there some way to increase the matrix stack size? Also, if anyone has some good (online) references (examples, etc.) for understanding how the camera/model matrices work together, I would sincerely appreciate them! Thanks!
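
    As a point of comparison, a hedged PyOpenGL sketch of the balanced pattern (the display function and offsets are hypothetical): the fixed-function modelview stack is only guaranteed to be about 32 entries deep, so an overflow with a balanced push/pop inside cube() usually means something else pushes once per frame without ever popping.

        from OpenGL.GL import (glPushMatrix, glPopMatrix, glTranslated,
                               glMatrixMode, glLoadIdentity, GL_MODELVIEW)
        from OpenGL.GLUT import glutSolidCube

        def draw_cubes(offsets):
            # One push per pop, both inside the loop: the stack depth is unchanged
            # after this function, no matter how many cubes are drawn.
            for ox, oz in offsets:
                glPushMatrix()
                glTranslated(-ox * 10, 0, -oz * 10)
                glutSolidCube(2)
                glPopMatrix()

        def display():
            glMatrixMode(GL_MODELVIEW)
            glLoadIdentity()  # reset each frame instead of pushing a new matrix
            # ... camera setup (e.g. gluLookAt) goes here ...
            draw_cubes([(1, 2), (3, 4)])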

    Read the article

  • GLSL Atmospheric Scattering Issue

    - by mtf1200
    I am attempting to use Sean O'Neil's shaders to accomplish atmospheric scattering. For now I am just using SkyFromSpace and GroundFromSpace. The atmosphere works fine, but the planet itself is just a giant dark sphere with a white blotch that follows the camera. I think the problem might rest in the "v3Attenuate" variable, as when this is removed the sphere is shown (albeit without scattering). Here is the vertex shader. Thanks for your time!

        uniform mat4 g_WorldViewProjectionMatrix;
        uniform mat4 g_WorldMatrix;
        uniform vec3 m_v3CameraPos;             // The camera's current position
        uniform vec3 m_v3LightPos;              // The direction vector to the light source
        uniform vec3 m_v3InvWavelength;         // 1 / pow(wavelength, 4) for the red, green, and blue channels
        uniform float m_fCameraHeight;          // The camera's current height
        uniform float m_fCameraHeight2;         // fCameraHeight^2
        uniform float m_fOuterRadius;           // The outer (atmosphere) radius
        uniform float m_fOuterRadius2;          // fOuterRadius^2
        uniform float m_fInnerRadius;           // The inner (planetary) radius
        uniform float m_fInnerRadius2;          // fInnerRadius^2
        uniform float m_fKrESun;                // Kr * ESun
        uniform float m_fKmESun;                // Km * ESun
        uniform float m_fKr4PI;                 // Kr * 4 * PI
        uniform float m_fKm4PI;                 // Km * 4 * PI
        uniform float m_fScale;                 // 1 / (fOuterRadius - fInnerRadius)
        uniform float m_fScaleDepth;            // The scale depth (i.e. the altitude at which the atmosphere's average density is found)
        uniform float m_fScaleOverScaleDepth;   // fScale / fScaleDepth

        attribute vec4 inPosition;

        vec3 v3ELightPos = vec3(g_WorldMatrix * vec4(m_v3LightPos, 1.0));
        vec3 v3ECameraPos = vec3(g_WorldMatrix * vec4(m_v3CameraPos, 1.0));

        const int nSamples = 2;
        const float fSamples = 2.0;

        varying vec4 color;

        float scale(float fCos)
        {
            float x = 1.0 - fCos;
            return m_fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
        }

        void main(void)
        {
            gl_Position = g_WorldViewProjectionMatrix * inPosition;

            // Get the ray from the camera to the vertex and its length
            // (which is the far point of the ray passing through the atmosphere)
            vec3 v3Pos = vec3(g_WorldMatrix * inPosition);
            vec3 v3Ray = v3Pos - v3ECameraPos;
            float fFar = length(v3Ray);
            v3Ray /= fFar;

            // Calculate the closest intersection of the ray with the outer atmosphere
            // (which is the near point of the ray passing through the atmosphere)
            float B = 2.0 * dot(m_v3CameraPos, v3Ray);
            float C = m_fCameraHeight2 - m_fOuterRadius2;
            float fDet = max(0.0, B*B - 4.0 * C);
            float fNear = 0.5 * (-B - sqrt(fDet));

            // Calculate the ray's starting position, then calculate its scattering offset
            vec3 v3Start = m_v3CameraPos + v3Ray * fNear;
            fFar -= fNear;
            float fDepth = exp((m_fInnerRadius - m_fOuterRadius) / m_fScaleDepth);
            float fCameraAngle = dot(-v3Ray, v3Pos) / fFar;
            float fLightAngle = dot(v3ELightPos, v3Pos) / fFar;
            float fCameraScale = scale(fCameraAngle);
            float fLightScale = scale(fLightAngle);
            float fCameraOffset = fDepth * fCameraScale;
            float fTemp = (fLightScale + fCameraScale);

            // Initialize the scattering loop variables
            float fSampleLength = fFar / fSamples;
            float fScaledLength = fSampleLength * m_fScale;
            vec3 v3SampleRay = v3Ray * fSampleLength;
            vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

            // Now loop through the sample rays
            vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
            vec3 v3Attenuate;
            for (int i = 0; i < nSamples; i++)
            {
                float fHeight = length(v3SamplePoint);
                float fDepth = exp(m_fScaleOverScaleDepth * (m_fInnerRadius - fHeight));
                float fScatter = fDepth * fTemp - fCameraOffset;
                v3Attenuate = exp(-fScatter * (m_v3InvWavelength * m_fKr4PI + m_fKm4PI));
                v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
                v3SamplePoint += v3SampleRay;
            }

            vec3 first = v3FrontColor * (m_v3InvWavelength * m_fKrESun + m_fKmESun);
            vec3 secondary = v3Attenuate;
            color = vec4((first + vec3(0.25, 0.25, 0.25) * secondary), 1.0);
            // ^^ that color is passed to the frag shader and is used as the gl_FragColor
        }

    An image of the problem was linked in the original post.

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward-rendered lights but fails with deferred lights.

        // Shadow term (1 = no shadow)
        float shadow = 1;

        // [Light Space -> Shadow Map Space]
        // Transform the surface into light space and project
        // NB: Could be done in the vertex shader, but doing it here keeps the
        // "light shader" abstraction and doesn't limit the number of shadowed lights
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4 surf_tex = mul(position, LightViewProjection);

        // Re-homogenize
        // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized)
        surf_tex.xyz /= surf_tex.w;

        // Rescale viewport to be [0,1] (texture coordinate system)
        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = -surf_tex.y * 0.5f + 0.5f;

        // Half texel offset
        //shadow_tex += (0.5 / 512);

        // Scaled distance to light (instead of 'surf_tex.z')
        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;
        //float rescaled_dist_to_light = surf_tex.z;

        // [Variance Shadow Map Depth Calculation]
        // No filtering
        float2 moments = tex2D(ShadowSampler, shadow_tex).xy;

        // Flip the moments values to bring them back to their original values
        moments.x = 1.0 - moments.x;
        moments.y = 1.0 - moments.y;

        // Compute variance
        float E_x2 = moments.y;
        float Ex_2 = moments.x * moments.x;
        float variance = E_x2 - Ex_2;
        variance = max(variance, Bias.y);

        // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1)
        // One-tailed inequality valid if
        float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x);

        // Compute probabilistic upper bound (mean distance)
        float m_d = moments.x - rescaled_dist_to_light;

        // Chebychev's inequality
        float p = variance / (variance + m_d * m_d);
        p = ReduceLightBleeding(p, Bias.z);

        // Adjust the light color based on the shadow attenuation
        shadow *= max(lit_factor, p);

    This is what I know for certain so far: the lighting is correct if I do not try to calculate the shadow term (no shadows), and the shadow term is correct when calculated using forward-rendered lighting (VSM works with forward-rendered lights). With the current rescaled light distance (LightAttenuation.y is the far plane value):

        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;

    the light is correct but the shadow appears to be zoomed in and misses the blurring. When I do not rescale the light and use the homogenized 'surf_tex':

        float rescaled_dist_to_light = surf_tex.z;

    the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit. Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows:

        // [Position]
        float4 position;

        // [Screen Position]
        position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
        position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
        position.z = 1.0 - position.z;
        position.w = 1.0; // 1.0 = position.w / position.w

        // [World Position]
        position = mul(position, CameraViewProjectionInverse);

        // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close)
        position.xyz /= position.w;
        position.w = 1.0;

    Using the inverse of the camera's view x projection matrix does work for lighting, but maybe it is incorrect for the shadow calculation?

    EDIT: Light calculations for the shadow, including 'dist_to_light':

        // Work out the light position and direction in world space
        float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43);
        // Direction might need to be negated
        float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33);
        // Unnormalized light vector
        float3 dir_to_light = light_position - position;
        // Direction from vertex
        float dist_to_light = length(dir_to_light);
        // Normalise 'toLight' vector for lighting calculations
        dir_to_light = normalize(dir_to_light);

    EDIT 2: These are the calculations for the moments (depth):

        //=============================================
        //---[Vertex Shaders]--------------------------
        //=============================================
        DepthVSOutput depth_VS(
            float4 Position : POSITION,
            uniform float4x4 shadow_view,
            uniform float4x4 shadow_view_projection)
        {
            DepthVSOutput output = (DepthVSOutput)0;
            // First transform position into world space
            float4 position_world = mul(Position, World);
            output.position_screen = mul(position_world, shadow_view_projection);
            output.light_vec = mul(position_world, shadow_view).xyz;
            return output;
        }

        //=============================================
        //---[Pixel Shaders]---------------------------
        //=============================================
        DepthPSOutput depth_PS(DepthVSOutput input)
        {
            DepthPSOutput output = (DepthPSOutput)0;
            // Work out the depth of this fragment from the light, normalized to [0, 1]
            float2 depth;
            depth.x = length(input.light_vec) / FarPlane;
            depth.y = depth.x * depth.x;
            // Flip depth values to avoid floating point inaccuracies
            depth.x = 1.0f - depth.x;
            depth.y = 1.0f - depth.y;
            output.depth = depth.xyxy;
            return output;
        }

    EDIT 3: I have tried the following:

        float4 pp;
        pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
        pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
        pp.z = 1.0 - pp.z;
        pp.w = 1.0; // 1.0 = position.w / position.w

        // Determine the depth of the pixel with respect to the light
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection);
        float4 vPositionLightCS = mul(pp, matViewToLightViewProj);
        float fLightDepth = vPositionLightCS.z / vPositionLightCS.w;

        // Transform from light space to shadow map texture space.
        float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f);
        vShadowTexCoord.y = 1.0f - vShadowTexCoord.y;

        // Offset the coordinate by half a texel so we sample it correctly
        vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize

    This suffers the same problem as the second picture. I have also tried storing the depth based on the view x projection matrix:

        output.position_screen = mul(position_world, shadow_view_projection);
        //output.light_vec = mul(position_world, shadow_view);
        output.light_vec = output.position_screen;

        depth.x = input.light_vec.z / input.light_vec.w;

    This gives a shadow that has lots of surface acne due to horrible floating-point precision errors, but everything is lit correctly.

    EDIT 4: I found an OpenGL-based tutorial here. I have followed it to the letter, and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scale matrix to get the uv coordinates for the shadow map sampler:

        /// <summary>
        /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
        /// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0.
        /// </summary>
        const float4x4 ScaleMatrix = float4x4
        (
            0.5,  0.0, 0.0, 0.0,
            0.0, -0.5, 0.0, 0.0,
            0.0,  0.0, 0.5, 0.0,
            0.5,  0.5, 0.5, 1.0
        );

    I had to negate the 0.5 for the y scaling (M22) in order for it to work, but the shadowing is still not correct. Is this really the correct way to scale?

        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = surf_tex.y * -0.5f + 0.5f;

    The depth calculations are exactly the same as in the source code, yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

    Read the article

  • Speed, delta time and movement

    - by munchor
        player.vx = scroll_speed * dt

        /* Update positions */
        player.x += player.vx
        player.y += player.vy

    I have a delta time in milliseconds, and I was wondering how I can use it properly. I tried the above, but that makes the player go fast when the computer is fast and slow when the computer is slow. The same thing happens with jumping: the player can jump really high when the computer is faster. This is sort of unfair, I think. Should I be doing this some other way? Thanks.
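
    A hedged sketch of the usual convention, in plain Python with made-up numbers: speeds are expressed per second, the delta time is converted to seconds, and dt is applied when integrating the position rather than when setting the velocity, so the result no longer depends on the frame rate.

        # Speeds in pixels per second, so behaviour is the same on fast and slow machines.
        SCROLL_SPEED = 200.0   # horizontal speed
        GRAVITY = 1500.0       # downward acceleration, pixels per second^2
        JUMP_SPEED = -600.0    # initial jump velocity (negative = up)

        def update(player, dt_ms):
            dt = dt_ms / 1000.0           # convert milliseconds to seconds
            player["vx"] = SCROLL_SPEED   # constant speed, not accumulated every frame
            player["vy"] += GRAVITY * dt  # gravity accelerates over time
            player["x"] += player["vx"] * dt
            player["y"] += player["vy"] * dt

        player = {"x": 0.0, "y": 0.0, "vx": 0.0, "vy": JUMP_SPEED}  # just jumped
        for frame_ms in (16, 33, 16, 100):  # uneven frame times
            update(player, frame_ms)
        print(player["x"], player["y"])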

    Read the article

  • How can I run the pixel shader effect?

    - by Yashwinder
    Stated below is the code for my pixel shader, which I am rendering after the vertex shader. I have set the worldViewProjection matrix in my program, but I don't know how to set the progress variable in my pixel shader file, which will make the image (displayed with the help of a quad) give a transition effect. My pixel shader currently gives a static effect, and now I want to use it to give some effect. For this I have to add a progress variable to my pixel shader and initialize it through the constant table function, i.e. constantTable.SetValue(D3DDevice, "progress", progress). I am having a problem using this function for progress in my program. Does anybody know how to set this variable in my program? My new pixel shader code is:

        float progress : register(C0);
        sampler2D implicitInput : register(s0);
        sampler2D oldInput : register(s1);

        struct VS_OUTPUT
        {
            float4 Position : POSITION;
            float4 Color : COLOR0;
            float2 UV : TEXCOORD0;
        };

        float4 Blinds(float2 uv)
        {
            if (frac(uv.y * 5) < progress)
            {
                return tex2D(implicitInput, uv);
            }
            else
            {
                return tex2D(oldInput, uv);
            }
        }

        // Pixel Shader
        {
            return Blinds(input.UV);
        }

    Read the article

  • shader coding: calculate screen coordinates of fragment

    - by Jay
    Good morning. I'm new to shader coding and trying to implement some visual-effects code in shaders using billboards. (Yes, I couldn't have picked anything harder to start with, but I'm lucky that way.) Setup: I have rendered the full-screen z depth to an array of floats in a previous pass. In the fragment shader I need the scene depth where the rendered fragment is displayed (to see if it's occluded). I can use tex2D() to get the depth value if I have the screen coordinates of the point being rendered in the fragment shader. Question: in the fragment shader, how do you calculate the screen coordinates of the pixel (in the range 0-1.0)? Is the position passed to the fragment shader a pixel offset? If so, I guess it would be: float2( position.x / screen-width, position.y / screen-height ). Thanks for any help!
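
    For reference, a hedged sketch of the usual clip-space to screen-UV math, written in plain Python rather than shader code (in a real fragment shader the clip-space position is typically passed down from the vertex shader and divided by w per fragment; whether v needs flipping depends on the API and on how the depth texture was written):

        def clip_to_screen_uv(clip_x, clip_y, clip_w):
            # Perspective divide gives normalized device coordinates in [-1, 1]...
            ndc_x = clip_x / clip_w
            ndc_y = clip_y / clip_w
            # ...which are remapped to the [0, 1] range used to sample a
            # full-screen texture such as a previously rendered depth buffer.
            u = ndc_x * 0.5 + 0.5
            v = ndc_y * 0.5 + 0.5
            return u, v

        print(clip_to_screen_uv(0.0, 0.0, 1.0))   # centre of the screen -> (0.5, 0.5)
        print(clip_to_screen_uv(-2.0, 2.0, 2.0))  # a corner -> (0.0, 1.0)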

    Read the article

  • Interpolation using a sprite's previous frame and current frame

    - by user22241
    Overview: I'm currently using a method which, as has been pointed out to me, is extrapolation rather than interpolation. As a result, I'm now also looking into the possibility of using another method, based on a sprite's position at its last (rendered) frame and its current one. Assuming an interpolation value of 0.5, this is (visually) how I understand it should affect my sprite's position. This is how I'm obtaining an interpolation value:

        public void onDrawFrame(GL10 gl) {
            // Set/re-set loop back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            interpolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks;
            render(interpolation);
        }

    I am then applying it like so (in my rendering call):

        render(float interpolation) {
            spriteScreenX = (spriteScreenX - spritePreviousX) * interpolation + spritePreviousX;
            spritePreviousX = spriteScreenX; // update and store this for next time
        }

    Results: this unfortunately does nothing to smooth the movement of my sprite. It's pretty much the same as without the interpolation code. I can't get my head around how this is supposed to work, and I honestly can't find any decent resources which explain it in any detail. My understanding of extrapolation is that when we arrive at the rendering call, we calculate the time between the last update call and the render call, and then adjust the sprite's position to reflect this time (moving the sprite forward). And yet this (interpolation) is moving the sprite back, so how can this produce smooth results? Any advice on this would be very much appreciated.

    Edit: I've implemented the code from OriginalDaemon's answer like so:

        @Override
        public void onDrawFrame(GL10 gl) {
            newTime = System.currentTimeMillis() * 0.001;
            frameTime = newTime - currentTime;
            if (frameTime > (dt * 25)) frameTime = (dt * 25);
            currentTime = newTime;
            accumulator += frameTime;
            while (accumulator >= dt) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                previousState = currentState;
                t += dt;
                accumulator -= dt;
            }
            interpolation = (float)(accumulator / dt);
            render();
        }

    Interpolation values are now being produced between 0 and 1 as expected (similar to how they were in my original loop); however, the results are the same as with my original loop (my original loop allowed frames to skip if they took too long to draw, which I think this loop is also doing). I appear to have made a mistake in my previous logging; it is logging as I would expect (the interpolated position does appear to be in between the previous and current positions), but the sprites are most definitely choppy when the render() skipping happens.
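
    A hedged sketch of the fixed-timestep pattern in plain Python, to highlight the part that usually goes wrong: the previous position is snapshotted once per logic update (not per render), and the render only computes a temporary draw position instead of writing the blended value back into the sprite.

        import time

        DT = 1.0 / 30.0  # fixed logic step: 30 updates per second

        class Sprite:
            def __init__(self):
                self.x = 0.0           # current simulated position
                self.previous_x = 0.0  # position at the previous logic update

            def update_logic(self):
                self.previous_x = self.x  # snapshot BEFORE moving
                self.x += 5.0             # move 5 units per logic step

            def draw_x(self, alpha):
                # Blend between the last two simulated states; never store this back.
                return self.previous_x + (self.x - self.previous_x) * alpha

        sprite = Sprite()
        accumulator, current_time = 0.0, time.time()
        for _ in range(5):  # stand-in for the render loop
            new_time = time.time()
            accumulator += new_time - current_time
            current_time = new_time
            while accumulator >= DT:
                sprite.update_logic()
                accumulator -= DT
            alpha = accumulator / DT  # 0..1: how far we are into the next logic step
            print(sprite.draw_x(alpha))
            time.sleep(0.02)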

    Read the article

  • Can GMod/SFM models be converted to Unity GameObjects?

    - by Supuhstar
    Someone made a suite of GMod/SFM models available for free for people making games and videos in GMod and SFM. These are of type .dmx, .dx80.vtx, .dx90.vtx, .mdl, .phy, .sw.vtx, .vvd, .vmt, and .vtf. I don't use GMod or SFM, so I don't know what these are, which makes it hard for me to convert them manually. Is there any way to change these into files Unity can recognize and use? I'd prefer an easy one-step conversion, but I would also accept instructions on how to export them to generic mesh/skeleton/texture files and then how to import and combine these in Unity.

    Read the article
