Search Results

Search found 39156 results on 1567 pages for 'device driver development'.

Page 546/1567 | < Previous Page | 542 543 544 545 546 547 548 549 550 551 552 553  | Next Page >

  • C#/XNA get hardware mouse position

    - by Sunder
    I'm using C# and trying to get the hardware mouse position. The first thing I tried was the simple XNA functionality: Vector2 position = new Vector2(Mouse.GetState().X, Mouse.GetState().Y); After that I draw the mouse as well, and compared to the Windows hardware mouse, this new mouse with XNA-provided coordinates is "slacking off". By that I mean that it is behind by a few frames. For example, if the game is running at 600 fps it will of course be responsive, but at 60 fps the software mouse delay is no longer acceptable. Therefore I tried using what I thought was a hardware mouse, [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool GetCursorPos(out POINT lpPoint); but the result was exactly the same. I also tried getting the Windows Forms cursor, and that was a dead end as well: it worked, but with the same delay. Messing around with XNA functionality: GraphicsDeviceManager.SynchronizeWithVerticalRetrace = true/false Game.IsFixedTimeStep = true/false did change the delay time for somewhat obvious reasons, but the bottom line is that regardless, it was still behind the default Windows mouse. I've seen that some games provide an option for a hardware-accelerated mouse, and in others (I think) it is on by default. Can anyone give me some leads on how to achieve that?
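
    For reference, games that offer a "hardware-accelerated" cursor typically just set Game.IsMouseVisible = true and let the OS composite the pointer, since anything the game draws itself is presented at least a frame after input is sampled. Below is a minimal sketch of reading the OS cursor in client coordinates; the POINT struct and helper are illustrative, not from the question.

      // Sketch: query the OS cursor directly and convert to window (client) coordinates.
      using System;
      using System.Runtime.InteropServices;

      public static class HardwareCursor
      {
          [StructLayout(LayoutKind.Sequential)]
          public struct POINT { public int X; public int Y; }

          [DllImport("user32.dll")]
          [return: MarshalAs(UnmanagedType.Bool)]
          static extern bool GetCursorPos(out POINT lpPoint);

          [DllImport("user32.dll")]
          [return: MarshalAs(UnmanagedType.Bool)]
          static extern bool ScreenToClient(IntPtr hWnd, ref POINT lpPoint);

          // windowHandle would be Game.Window.Handle in an XNA game (assumption about the setup).
          public static POINT GetClientCursor(IntPtr windowHandle)
          {
              POINT p;
              GetCursorPos(out p);
              ScreenToClient(windowHandle, ref p);
              return p;
          }
      }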

    Read the article

  • How do I get FEATURE_LEVEL_9_3 to work with shaders in Direct3D11?

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the the vertex/pixel shader files themselves being incompatible for the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below: Vertex shader: cbuffer MatrixBuffer { matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; PixelInputType LightVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Calculate the normal vector against the world matrix only. output.normal = mul(input.normal, (float3x3)worldMatrix); // Normalize the normal vector. output.normal = normalize(output.normal); return output; } Pixel Shader: Texture2D shaderTexture; SamplerState SampleType; cbuffer LightBuffer { float4 ambientColor; float4 diffuseColor; float3 lightDirection; float padding; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; float4 LightPixelShader(PixelInputType input) : SV_TARGET { float4 textureColor; float3 lightDir; float lightIntensity; float4 color; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor = shaderTexture.Sample(SampleType, input.tex); // Set the default output color to the ambient light value for all pixels. color = ambientColor; // Invert the light direction for calculations. lightDir = -lightDirection; // Calculate the amount of light on this pixel. lightIntensity = saturate(dot(input.normal, lightDir)); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. color += (diffuseColor * lightIntensity); } // Saturate the final light color. color = saturate(color); // Multiply the texture pixel and the final diffuse color to get the final pixel color result. color = color * textureColor; return color; }
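
    In case it helps frame an answer: Direct3D 11 ships downlevel compile targets for the 9.x feature levels, so when targeting 9_3 the profile strings change to the "level_9_x" variants rather than dropping to "vs_3_0"/"ps_2_0". A small hedged sketch of the mapping (the FeatureLevel enum is illustrative, not an actual D3D type):

      // Sketch: pick HLSL compile targets for a given feature level.
      public enum FeatureLevel { Level_9_1, Level_9_3, Level_10_0, Level_11_0 }

      public static class ShaderProfiles
      {
          public static string VertexProfile(FeatureLevel level)
          {
              switch (level)
              {
                  case FeatureLevel.Level_9_1:  return "vs_4_0_level_9_1";
                  case FeatureLevel.Level_9_3:  return "vs_4_0_level_9_3";
                  case FeatureLevel.Level_10_0: return "vs_4_0";
                  default:                      return "vs_5_0";
              }
          }

          public static string PixelProfile(FeatureLevel level)
          {
              switch (level)
              {
                  case FeatureLevel.Level_9_1:  return "ps_4_0_level_9_1";
                  case FeatureLevel.Level_9_3:  return "ps_4_0_level_9_3";
                  case FeatureLevel.Level_10_0: return "ps_4_0";
                  default:                      return "ps_5_0";
              }
          }
      }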

    Read the article

  • Collision filtering techniques

    - by Griffin
    I was wondering what efficient techniques are out there for mapping collision filtering between various bodies, sub-bodies, and so forth. I'm familiar with the simple idea of having different layers of 2D bodies, but this is not sufficient for more complex mapping. (Think of having sub-bodies of a body, such as limbs, collide with each other by placing them on the same layer, and then wanting only the legs to collide with the ground while the arms do not.) This can be solved with a multidimensional layer setup, but I would probably end up creating more and more layers, to the point where the simplicity and efficiency of layer filtering would be gone. Are there any more complex ways to solve even more complex situations than this?
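
    For reference, one common scheme (used by Box2D-style engines) replaces flat layers with per-fixture category and mask bitfields. A minimal sketch, with the specific categories as made-up examples:

      // Sketch: category/mask collision filtering, Box2D style.
      // Two fixtures collide only if each one's category is in the other's mask.
      [System.Flags]
      public enum CollisionCategory : ushort
      {
          None   = 0,
          Ground = 1 << 0,
          Leg    = 1 << 1,
          Arm    = 1 << 2,
          Torso  = 1 << 3,
      }

      public struct CollisionFilter
      {
          public CollisionCategory Category; // what this fixture is
          public CollisionCategory Mask;     // what this fixture collides with

          public static bool ShouldCollide(CollisionFilter a, CollisionFilter b)
          {
              return (a.Category & b.Mask) != 0 && (b.Category & a.Mask) != 0;
          }
      }

      // Example: legs hit the ground and other limbs; arms hit limbs but not the ground.
      // var leg = new CollisionFilter { Category = CollisionCategory.Leg,
      //     Mask = CollisionCategory.Ground | CollisionCategory.Leg | CollisionCategory.Arm };
      // var arm = new CollisionFilter { Category = CollisionCategory.Arm,
      //     Mask = CollisionCategory.Leg | CollisionCategory.Arm };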

    Read the article

  • Small 3D Scene Graph

    - by Alon
    I'm looking for a 3D graphics library (not a complete game engine), preferably a scene graph. Something small (unlike jME, XNA or Unity) that I can easily expand and change. Preferred features: cross-platform; written in Java/Scala (JOGL or LWJGL), C# (preferably OpenTK), Python, or JavaScript/WebGL; support for OpenGL is a must (Direct3D is optional); some material system; full support for some model format with full animation support (preferably COLLADA); Level of Detail (LOD) support; lighting support. Shaders, GUI, input and terrain/water support are also preferred, but not required. Thanks!

    Read the article

  • Synchronization between game logic thread and rendering thread

    - by user782220
    How does one separate game logic and rendering? I know there already seem to be questions on here asking exactly that, but the answers are not satisfactory to me. From what I understand so far, the point of separating them into different threads is so that game logic can start running for the next tick immediately, instead of waiting for the next vsync where rendering finally returns from the swap-buffer call it has been blocking on. But specifically, what data structures are used to prevent race conditions between the game logic thread and the rendering thread? Presumably the rendering thread needs access to various variables to figure out what to draw, but game logic could be updating these same variables. Is there a de facto standard technique for handling this problem? Maybe something like copying the data needed by the rendering thread after every execution of the game logic? Whatever the solution is, will the overhead of synchronization be less than just running everything single-threaded?
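
    A common arrangement matching the "copy after every tick" idea is to have the logic thread build an immutable snapshot of render-relevant state and publish it atomically, while the render thread always draws the latest published snapshot. A minimal C# sketch under that assumption (the snapshot fields are placeholders):

      // Sketch: logic thread publishes immutable snapshots; render thread reads the latest one.
      using System.Threading;

      public sealed class RenderSnapshot
      {
          // Placeholder render state; in practice: transforms, animation poses, etc.
          public readonly float PlayerX, PlayerY;
          public RenderSnapshot(float x, float y) { PlayerX = x; PlayerY = y; }
      }

      public sealed class SnapshotChannel
      {
          private RenderSnapshot _latest = new RenderSnapshot(0, 0);

          // Called by the game logic thread at the end of each tick.
          public void Publish(RenderSnapshot snapshot)
          {
              Interlocked.Exchange(ref _latest, snapshot); // atomic reference swap, no lock
          }

          // Called by the render thread each frame; never blocks the logic thread.
          public RenderSnapshot Acquire()
          {
              return Volatile.Read(ref _latest);
          }
      }

    Because snapshots are immutable, the renderer can keep using an old one while the logic thread builds the next; the cost is one copy per tick, which is usually cheaper than fine-grained locking.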

    Read the article

  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from fbo, it only renders everything blue, but doesn't show up the triangle. I can't seem to figure out why this is happening. Can someone please help me out with this ? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I setup the FBO. I've pasted the setup code at the end. Here is my rendering code: void FrameBufferObject::Render(unsigned int elapsedGameTime) { glBindFramebuffer(GL_FRAMEBUFFER, m_FBO); glClearColor(0.0, 0.6, 0.5, 1); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // adjust viewport and projection matrices to texture dimensions glPushAttrib(GL_VIEWPORT_BIT); glViewport(0,0, m_FBOWidth, m_FBOHeight); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); DrawTriangle(); glPopAttrib(); // setting FrameBuffer back to window-specified Framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); //unbind // back to normal viewport and projection matrix //glViewport(0, 0, 1280, 768); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0, 1.33, 1.0, 1000.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClearColor(0, 0, 0, 0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); render(elapsedGameTime); } void FrameBufferObject::DrawTriangle() { glPushMatrix(); glBegin(GL_TRIANGLES); glColor3f(1, 0, 0); glVertex2d(0, 0); glVertex2d(m_FBOWidth, 0); glVertex2d(m_FBOWidth, m_FBOHeight); glEnd(); glPopMatrix(); } void FrameBufferObject::render(unsigned int elapsedTime) { glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, m_TextureID); glPushMatrix(); glTranslated(0, 0, -20); glBegin(GL_QUADS); glColor4f(1, 1, 1, 1); glTexCoord2f(1, 1); glVertex3f(1,1,1); glTexCoord2f(0, 1); glVertex3f(-1,1,1); glTexCoord2f(0, 0); glVertex3f(-1,-1,1); glTexCoord2f(1, 0); glVertex3f(1,-1,1); glEnd(); glPopMatrix(); glBindTexture(GL_TEXTURE_2D, 0); glDisable(GL_TEXTURE_2D); } void FrameBufferObject::Initialize() { // Generate FBO glGenFramebuffers(1, &m_FBO); glBindFramebuffer(GL_FRAMEBUFFER, m_FBO); // Add depth buffer as a renderbuffer to fbo // create depth buffer id glGenRenderbuffers(1, &m_DepthBuffer); glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer); // allocate space to render buffer for depth buffer glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight); // attaching renderBuffer to FBO // attach depth buffer to FBO at depth_attachment glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer); // Adding a texture to fbo // Create a texture glGenTextures(1, &m_TextureID); glBindTexture(GL_TEXTURE_2D, m_TextureID); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // onlly allocating space glBindTexture(GL_TEXTURE_2D, 0); // attach texture to FBO glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0); // Check FBO Status if( glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) std::cout 
<< "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n"; // switch back to window system Framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); } Thanks!

    Read the article

  • Scripts won't affect clones - Unity3d

    - by user3666251
    I made a script which swaps two game objects on click. But the script won't work, because the objects are actually clones of the original prefab. This is the script (UnityScript): #pragma strict var object1 : GameObject; var object2 : GameObject; function OnMouseDown () { Instantiate(object2,object1.transform.position,object1.transform.rotation); Destroy(object1); } I use this script to create the other game objects (clones) [C#]: using UnityEngine; using System.Collections; public class Spawner : MonoBehaviour { public GameObject[] obj; public float spawnMin = 1f; public float spawnMax = 2f; // Use this for initialization void Start () { Spawn (); } void Spawn() { Instantiate(obj[Random.Range(0, obj.GetLength(0))],transform.position, Quaternion.identity); Invoke ("Spawn", Random.Range (spawnMin, spawnMax)); } } The objects get renamed to NAME (Clone). What I want to do is make the script affect clones too, so they will swap when I click on them.
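
    For context, the usual cause of this behaviour is that object1 still points at the prefab asset rather than at the clone that was actually clicked. A hedged sketch of one common fix, letting the swap script act on its own GameObject (swapPrefab is an illustrative field name):

      // Sketch: swap "this" object for another prefab when it is clicked.
      // Attach to the spawned prefab; OnMouseDown requires a Collider on the object.
      using UnityEngine;

      public class SwapOnClick : MonoBehaviour
      {
          public GameObject swapPrefab;   // the object to swap in (assumption: assigned in the Inspector)

          void OnMouseDown()
          {
              // Use this clone's own transform, not a reference to the original prefab.
              Instantiate(swapPrefab, transform.position, transform.rotation);
              Destroy(gameObject);        // destroy the clicked clone itself
          }
      }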

    Read the article

  • What's the future of online gamedev: Flash or Unity?

    - by Cpucpu
    Currently I develop for Flash. Not long ago I discovered Unity; I haven't played with it yet, but what I have seen so far looks cool. Here are my thoughts: Flash is more casual, and getting started costs less, in time and money. With Unity you'd likely have to go more business-serious (real money). There are proven business models in Flash, like advergaming, ads, and micro-transactions. I have not seen much movement on this in Unity; maybe it is too soon. Flash is too heavy. Unity, by its nature (made for games), is way faster. Flash is 2D; doing something 3D with it turns weird and slow. Unity is natively 3D, and although it is not optimized for 2D, that is likely feasible as well. I am overlooking the plug-in install base; that gap will get closed over time.

    Read the article

  • Restricting joystick within a radius of center

    - by Phil
    I'm using Unity3d iOs and am using the example joysticks that came with one of the packages. It works fine but the area the joystick moves in is a rectangle which is unintuitive for my type of game. I can figure out how to see if the distance between the center and the current point is too far but I can't figure out how to constrain it to a certain distance without interrupting the finger tracking. Here's the relevant code: using UnityEngine; using System.Collections; public class Boundary { public Vector2 min = Vector2.zero; public Vector2 max = Vector2.zero; } public class Joystick : MonoBehaviour{ static private Joystick[] joysticks; // A static collection of all joysticks static private bool enumeratedJoysticks=false; static private float tapTimeDelta = 0.3f; // Time allowed between taps public bool touchPad; // Is this a TouchPad? public Rect touchZone; public Vector2 deadZone = Vector2.zero; // Control when position is output public bool normalize = false; // Normalize output after the dead-zone? public Vector2 position; // [-1, 1] in x,y public int tapCount; // Current tap count private int lastFingerId = -1; // Finger last used for this joystick private float tapTimeWindow; // How much time there is left for a tap to occur private Vector2 fingerDownPos; private float fingerDownTime; private float firstDeltaTime = 0.5f; private GUITexture gui; // Joystick graphic private Rect defaultRect; // Default position / extents of the joystick graphic private Boundary guiBoundary = new Boundary(); // Boundary for joystick graphic public Vector2 guiTouchOffset; // Offset to apply to touch input private Vector2 guiCenter; // Center of joystick private Vector3 tmpv3; private Rect tmprect; private Color tmpclr; public float allowedDistance; public enum JoystickType { movement, rotation } public JoystickType joystickType; public void Start() { // Cache this component at startup instead of looking up every frame gui = (GUITexture) GetComponent( typeof(GUITexture) ); // Store the default rect for the gui, so we can snap back to it defaultRect = gui.pixelInset; if ( touchPad ) { // If a texture has been assigned, then use the rect ferom the gui as our touchZone if ( gui.texture ) touchZone = gui.pixelInset; } else { // This is an offset for touch input to match with the top left // corner of the GUI guiTouchOffset.x = defaultRect.width * 0.5f; guiTouchOffset.y = defaultRect.height * 0.5f; // Cache the center of the GUI, since it doesn't change guiCenter.x = defaultRect.x + guiTouchOffset.x; guiCenter.y = defaultRect.y + guiTouchOffset.y; // Let's build the GUI boundary, so we can clamp joystick movement guiBoundary.min.x = defaultRect.x - guiTouchOffset.x; guiBoundary.max.x = defaultRect.x + guiTouchOffset.x; guiBoundary.min.y = defaultRect.y - guiTouchOffset.y; guiBoundary.max.y = defaultRect.y + guiTouchOffset.y; } } public void Disable() { gameObject.active = false; enumeratedJoysticks = false; } public void ResetJoystick() { if (joystickType != JoystickType.rotation) { //Don't do anything if turret mode // Release the finger control and set the joystick back to the default position gui.pixelInset = defaultRect; lastFingerId = -1; position = Vector2.zero; fingerDownPos = Vector2.zero; if ( touchPad ){ tmpclr = gui.color; tmpclr.a = 0.025f; gui.color = tmpclr; } } else { //gui.pixelInset = defaultRect; lastFingerId = -1; position = position; fingerDownPos = fingerDownPos; if ( touchPad ){ tmpclr = gui.color; tmpclr.a = 0.025f; gui.color = tmpclr; } } } public bool IsFingerDown() { return 
(lastFingerId != -1); } public void LatchedFinger( int fingerId ) { // If another joystick has latched this finger, then we must release it if ( lastFingerId == fingerId ) ResetJoystick(); } public void Update() { if ( !enumeratedJoysticks ) { // Collect all joysticks in the game, so we can relay finger latching messages joysticks = (Joystick[]) FindObjectsOfType( typeof(Joystick) ); enumeratedJoysticks = true; } //CHeck if distance is over the allowed amount //Get centerPosition //Get current position //Get distance //If over, don't allow int count = iPhoneInput.touchCount; // Adjust the tap time window while it still available if ( tapTimeWindow > 0 ) tapTimeWindow -= Time.deltaTime; else tapCount = 0; if ( count == 0 ) ResetJoystick(); else { for(int i = 0;i < count; i++) { iPhoneTouch touch = iPhoneInput.GetTouch(i); Vector2 guiTouchPos = touch.position - guiTouchOffset; bool shouldLatchFinger = false; if ( touchPad ) { if ( touchZone.Contains( touch.position ) ) shouldLatchFinger = true; } else if ( gui.HitTest( touch.position ) ) { shouldLatchFinger = true; } // Latch the finger if this is a new touch if ( shouldLatchFinger && ( lastFingerId == -1 || lastFingerId != touch.fingerId ) ) { if ( touchPad ) { tmpclr = gui.color; tmpclr.a = 0.15f; gui.color = tmpclr; lastFingerId = touch.fingerId; fingerDownPos = touch.position; fingerDownTime = Time.time; } lastFingerId = touch.fingerId; // Accumulate taps if it is within the time window if ( tapTimeWindow > 0 ) { tapCount++; print("tap" + tapCount.ToString()); } else { tapCount = 1; print("tap" + tapCount.ToString()); //Tell gameobject that player has tapped turret joystick if (joystickType == JoystickType.rotation) { //TODO: Call! } tapTimeWindow = tapTimeDelta; } // Tell other joysticks we've latched this finger foreach ( Joystick j in joysticks ) { if ( j != this ) j.LatchedFinger( touch.fingerId ); } } if ( lastFingerId == touch.fingerId ) { // Override the tap count with what the iPhone SDK reports if it is greater // This is a workaround, since the iPhone SDK does not currently track taps // for multiple touches if ( touch.tapCount > tapCount ) tapCount = touch.tapCount; if ( touchPad ) { // For a touchpad, let's just set the position directly based on distance from initial touchdown position.x = Mathf.Clamp( ( touch.position.x - fingerDownPos.x ) / ( touchZone.width / 2 ), -1, 1 ); position.y = Mathf.Clamp( ( touch.position.y - fingerDownPos.y ) / ( touchZone.height / 2 ), -1, 1 ); } else { // Change the location of the joystick graphic to match where the touch is tmprect = gui.pixelInset; tmprect.x = Mathf.Clamp( guiTouchPos.x, guiBoundary.min.x, guiBoundary.max.x ); tmprect.y = Mathf.Clamp( guiTouchPos.y, guiBoundary.min.y, guiBoundary.max.y ); //Check distance float distance = Vector2.Distance(new Vector2(defaultRect.x, defaultRect.y), new Vector2(tmprect.x, tmprect.y)); float angle = Vector2.Angle(new Vector2(defaultRect.x, defaultRect.y), new Vector2(tmprect.x, tmprect.y)); if (distance < allowedDistance) { //Ok gui.pixelInset = tmprect; } else { //This is where I don't know what to do... 
} } if ( touch.phase == iPhoneTouchPhase.Ended || touch.phase == iPhoneTouchPhase.Canceled ) ResetJoystick(); } } } if ( !touchPad ) { // Get a value between -1 and 1 based on the joystick graphic location position.x = ( gui.pixelInset.x + guiTouchOffset.x - guiCenter.x ) / guiTouchOffset.x; position.y = ( gui.pixelInset.y + guiTouchOffset.y - guiCenter.y ) / guiTouchOffset.y; } // Adjust for dead zone float absoluteX = Mathf.Abs( position.x ); float absoluteY = Mathf.Abs( position.y ); if ( absoluteX < deadZone.x ) { // Report the joystick as being at the center if it is within the dead zone position.x = 0; } else if ( normalize ) { // Rescale the output after taking the dead zone into account position.x = Mathf.Sign( position.x ) * ( absoluteX - deadZone.x ) / ( 1 - deadZone.x ); } if ( absoluteY < deadZone.y ) { // Report the joystick as being at the center if it is within the dead zone position.y = 0; } else if ( normalize ) { // Rescale the output after taking the dead zone into account position.y = Mathf.Sign( position.y ) * ( absoluteY - deadZone.y ) / ( 1 - deadZone.y ); } } } So the later portion of the code handles the updated position of the joystick thumb. This is where I'd like it to track the finger position in a direction it still is allowed to move (like if the finger is too far up and slightly to the +X I'd like to make sure the joystick is as close in X and Y as allowed within the radius) Thanks for reading!
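
    For what it's worth, a common way to handle the "too far" case is not to reject the touch but to clamp the offset vector to the allowed radius, so the thumb keeps tracking the finger's direction even when the finger leaves the circle. A minimal sketch of that clamp (allowedDistance mirrors the question; the rest is illustrative):

      // Sketch: clamp the joystick thumb to a circle of radius allowedDistance
      // around its default (centre) position, while still following the finger.
      using UnityEngine;

      public static class JoystickMath
      {
          public static Vector2 ClampToRadius(Vector2 centre, Vector2 desired, float allowedDistance)
          {
              Vector2 offset = desired - centre;
              if (offset.magnitude > allowedDistance)
              {
                  // Keep the direction of the finger, but cap the length at the radius.
                  offset = offset.normalized * allowedDistance;
              }
              return centre + offset;
          }
      }

      // Usage inside Update(), instead of the distance < allowedDistance test:
      //   Vector2 clamped = JoystickMath.ClampToRadius(
      //       new Vector2(defaultRect.x, defaultRect.y),
      //       new Vector2(tmprect.x, tmprect.y),
      //       allowedDistance);
      //   tmprect.x = clamped.x;  tmprect.y = clamped.y;
      //   gui.pixelInset = tmprect;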

    Read the article

  • XNA GUI: Creating a 'scroll pane' widget

    - by Keith Myers
    I'm trying to create a simple GUI system and am currently stuck on how to implement a textarea with a scrollbar. In other words, the text is too large to fit into the view area. I want to learn how to do this, so I'd rather not use an already rolled API. I believe this could be done if the text were part of a texture, but if the game had a lot of unique dialog, this seems expensive. I researched creating a texture on the fly and writing to it, but came up with nothing. Any suggested strategies would be appreciated. I believe it boils down to: text in a texture and how? Or something I have not thought of...
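
    One approach that avoids baking text into textures is to clip draw calls instead: draw the full string at a scrolled offset and let a scissor rectangle cut it to the widget's view area. A hedged XNA 4.0 sketch, assuming a Game subclass with spriteBatch and font already loaded (field names are placeholders):

      // Sketch: scrollable text area using the scissor test (XNA 4.0).
      Rectangle viewArea = new Rectangle(100, 100, 300, 150);   // the widget's screen rectangle
      float scrollOffset = 0f;                                   // grows as the user scrolls down
      RasterizerState scissorState = new RasterizerState { ScissorTestEnable = true };

      void DrawTextArea(string longDialogText)
      {
          GraphicsDevice.ScissorRectangle = viewArea;            // pixels outside are discarded
          spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                            null, null, scissorState);
          Vector2 textPos = new Vector2(viewArea.X, viewArea.Y - scrollOffset);
          spriteBatch.DrawString(font, longDialogText, textPos, Color.White);
          spriteBatch.End();
      }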

    Read the article

  • What are the valid DepthBuffer Texture formats in DirectX 11? And which are also valid for a staging resource?

    - by sebf
    I am trying to read the contents of the depth buffer into main memory so that my CPU side code can do Some Stuff™ with it. I am attempting to do this by creating a staging resource which can be read by the CPU, which I will copy the contents of the depth buffer into before reading it. I keep encountering errors however, because of, I believe, incompatibilities between the resource format and the view formats. Threads like these lead me to believe it is possible in DX11 to access the depth buffer as a resource, and that I can create a resource with a typeless format and have it interpreted in the view as another, but I cannot get it to work. What are the valid formats for the resource to be used as the depth buffer? Which of these are also valid for a CPU accessible staging resource?

    Read the article

  • Sony PSM SDK and 2D game engine

    - by Notbad
    I started with the Sony PSM SDK this week. I'm interested in creating a little 2D game and have been reading around the web about a so-called "2D game engine" integrated into PSM. Some information I read suggested that it was going to be added in January 2012, but I have been going through the documentation and haven't been able to find any reference to it. Does anybody know if they finally introduced the 2D game engine for PSM? Thanks in advance.

    Read the article

  • Should I continue reading Frank Luna's Introduction to 3D Game Programming with DirectX 11 book after D3DX and XNA Math Library have been deprecated? [on hold]

    - by milindsrivastava1997
    I recently started learning DirectX 11 (C++) by reading Frank Luna's Introduction to 3D Game Programming with DirectX 11. In it the author uses D3DX and the XNA Math library. Since they have been deprecated, should I continue using that book? If yes, should I use the deprecated libraries or should I switch to some other libraries? If no, which book should I consult for up-to-date content with no use of deprecated libraries? Thanks!

    Read the article

  • Coolbits not working

    - by usk
    I want to use Coolbits to increase the fan speed of my Fermi GPU. The 280.13 driver is installed, on Ubuntu 11.10. I edited /etc/X11/xorg.conf as follows, by pressing Alt+F2 and running gksu gedit /etc/X11/xorg.conf. I get Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 470" Option "NoLogo" "True" Option "Coolbits" "4" EndSection I started getting these messages: Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap" So I did this: sudo aptitude install gtk2-engines-pixbuf In the terminal: a@z:~$ nvidia-settings -a [gpu:0]/GPUFanControlState=1 a@z:~$ nvidia-settings -q fans 1 Fan on z:0 [0] z:0[fan:0] (Fan 0) a@z:~$ nvidia-settings -a [fan:0]/GPUCurrentFanSpeed=80 ERROR: Error assigning value 80 to attribute 'GPUCurrentFanSpeed' (z:0[fan:0]) as specified in assignment '[fan:0]/GPUCurrentFanSpeed=80' (Unknown Error). So it's not working; I can't enter the fan speed percentage. Also, the NVIDIA X Server Settings panel shows no fan controls. http://en.gentoo-wiki.com/wiki/Nvidia#Manual_Fan_Control_for_nVIDIA_Settings

    Read the article

  • WoW lua: Getting quest attributes before the QUEST_DETAIL event

    - by Matt DiTrolio
    I'd like to determine the attributes of a quest (i.e., information provided by functions such as QuestIsDaily and IsQuestCompletable) before the player clicks on the quest detail. I'm trying to write an add-on that handles accepting and completing of daily quests with a single click on the NPC, but I'm running into a problem whereby I can't find out anything about a given quest unless the quest text is currently being displayed, defeating the purpose of the add-on. Other add-ons of this nature seem to be getting around this limitation by hard-coding information about quests, an approach I don't much like as it requires constant maintenance. It seems to me that this information must be available somehow, as the game itself can properly figure out which icon to display over the head of the NPC without player interaction. The only question is, are add-on authors allowed access to this information? If so, how? EDIT: What I originally left out was that the situations I'm trying to address are when: An NPC has multiple quests The quest detail is not the first thing that shows up upon right-click Otherwise, the situation is much simpler, as I have the information I need provided immediately.

    Read the article

  • Fog shader camera problem

    - by MaT
    I have some difficulties with my vertex-fragment fog shader in Unity. I have a good visual result but the problem is that the gradient is based on the camera's position, it moves as the camera moves. I don't know how to fix it. Here is the shader code. struct v2f { float4 pos : SV_POSITION; float4 grabUV : TEXCOORD0; float2 uv_depth : TEXCOORD1; float4 interpolatedRay : TEXCOORD2; float4 screenPos : TEXCOORD3; }; v2f vert(appdata_base v) { v2f o; o.pos = mul(UNITY_MATRIX_MVP, v.vertex); o.uv_depth = v.texcoord.xy; o.grabUV = ComputeGrabScreenPos(o.pos); half index = v.vertex.z; o.screenPos = ComputeScreenPos(o.pos); o.interpolatedRay = mul(UNITY_MATRIX_MV, v.vertex); return o; } sampler2D _GrabTexture; float4 frag(v2f IN) : COLOR { float3 uv = UNITY_PROJ_COORD(IN.grabUV); float dpth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv)); dpth = LinearEyeDepth(dpth); float4 wsPos = (IN.screenPos + dpth * IN.interpolatedRay); // Here is the problem but how to fix it float fogVert = max(0.0, (wsPos.y - _Depth) * (_DepthScale * 0.1f)); fogVert *= fogVert; fogVert = (exp (-fogVert)); return fogVert; } Thanks a lot !

    Read the article

  • How does a segment based rendering engine work?

    - by Calmarius
    As far as I know, Descent was one of the first games that featured a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (these cubes may be deformed, as long as they remain convex and their sides remain roughly flat). The cubes are connected by their sides. The connected sides are traversable (doors or grids may be placed on these sides), while the unconnected sides are non-traversable walls. So the game is played inside this complex. Descent was software rendered and had to be very fast to be playable on the 10-100 MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, but these levels are still rendered reasonably fast. So I think they tried to minimize the number of cubes rendered somehow. How do you choose which cubes to render for a given location? As far as I know they used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
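
    For reference, the classic answer for this kind of cell-and-portal level is a traversal that starts in the segment containing the camera and recursively steps through open sides, narrowing the visible window to each portal's projected bounds as it goes. A rough sketch of that idea follows; Segment, Portal and ScreenRect are invented for illustration and simplify what Descent actually did:

      // Sketch: cell-and-portal visibility for a segment (cube) based level.
      using System.Collections.Generic;

      public struct ScreenRect
      {
          public float MinX, MinY, MaxX, MaxY;

          public bool TryIntersect(ScreenRect other, out ScreenRect result)
          {
              result = new ScreenRect
              {
                  MinX = System.Math.Max(MinX, other.MinX),
                  MinY = System.Math.Max(MinY, other.MinY),
                  MaxX = System.Math.Min(MaxX, other.MaxX),
                  MaxY = System.Math.Min(MaxY, other.MaxY),
              };
              return result.MinX < result.MaxX && result.MinY < result.MaxY;
          }
      }

      public sealed class Portal
      {
          public Segment Target;             // segment on the other side of this face
          public ScreenRect ProjectedBounds; // portal polygon projected to the screen (assumed precomputed)
      }

      public sealed class Segment
      {
          public List<Portal> OpenSides = new List<Portal>();
          public void Render() { /* draw this cube's walls, doors, objects */ }
      }

      public static class PortalRenderer
      {
          // Start from the segment containing the camera, with window = the full screen.
          // Simplification: each segment is drawn at most once; real engines may revisit
          // a segment through different portals.
          public static void RenderVisible(Segment current, ScreenRect window, HashSet<Segment> drawn)
          {
              if (!drawn.Add(current)) return;
              current.Render();
              foreach (Portal p in current.OpenSides)
              {
                  // A neighbour is only visible through the part of its portal that
                  // overlaps the window we are currently looking through.
                  ScreenRect narrowed;
                  if (window.TryIntersect(p.ProjectedBounds, out narrowed))
                      RenderVisible(p.Target, narrowed, drawn);
              }
          }
      }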

    Read the article

  • Resource management question. Resource containing resource

    - by bobenko
    I have a resource manager handling the usual resource loading, unloading, etc. With resources such as images and meshes there is no problem. But what should I do when I have a resource containing another resource (for example, a spriteFont contains a reference to a sprite and letter descriptions)? Should that sprite be added to the resource manager, or must my spriteFont be the only owner of that resource? Any thoughts on this? Have you faced such a problem? Thanks in advance.
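
    One common arrangement, sketched below, is to let composite resources acquire their sub-resources through the same manager with reference counting, so a sprite shared by a spriteFont and by other users is loaded once and freed only when its last owner releases it. The interfaces here are illustrative, not from any particular engine:

      // Sketch: composite resources acquire sub-resources through the manager (ref-counted).
      using System;
      using System.Collections.Generic;

      public interface IResource : IDisposable { }

      public sealed class ResourceManager
      {
          private sealed class Entry { public IResource Resource; public int RefCount; }
          private readonly Dictionary<string, Entry> _cache = new Dictionary<string, Entry>();
          private readonly Func<string, IResource> _loader;

          public ResourceManager(Func<string, IResource> loader) { _loader = loader; }

          public IResource Acquire(string path)
          {
              Entry e;
              if (!_cache.TryGetValue(path, out e))
              {
                  e = new Entry { Resource = _loader(path), RefCount = 0 };
                  _cache[path] = e;
              }
              e.RefCount++;
              return e.Resource;
          }

          public void Release(string path)
          {
              Entry e;
              if (_cache.TryGetValue(path, out e) && --e.RefCount == 0)
              {
                  e.Resource.Dispose();
                  _cache.Remove(path);
              }
          }
      }

      // A composite resource: the sprite font owns a reference, not the sprite itself.
      public sealed class SpriteFontResource : IResource
      {
          private readonly ResourceManager _manager;
          private readonly string _spritePath;
          public IResource Sprite { get; private set; }

          public SpriteFontResource(ResourceManager manager, string spritePath)
          {
              _manager = manager;
              _spritePath = spritePath;
              Sprite = manager.Acquire(spritePath);   // shared with any other user of the sprite
          }

          public void Dispose() { _manager.Release(_spritePath); }
      }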

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction -- I think. But the problem is a little more deep -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like: VERTEX: uniform mat4 LightModelViewProjectionMatrix; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { Normal = normalize(gl_NormalMatrix * gl_Normal); LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz); LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex; LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5; gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; } FRAGMENT: uniform sampler2D DiffuseMap; uniform sampler2D ShadowMap; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0])); // Directional lighting //Build ambient lighting vec4 AmbientElement = gl_LightSource[0].ambient; //Build diffuse lighting float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0); vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert ); vec4 LightingColor = ( DiffuseElement + AmbientElement ); LightingColor.r = min(LightingColor.r, 1.0); LightingColor.g = min(LightingColor.g, 1.0); LightingColor.b = min(LightingColor.b, 1.0); LightingColor.a = min(LightingColor.a, 1.0); LightingColor *= Texel; //Everything up to this point is PERFECT // Shadow mapping // ------------------------------ vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w; float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z; float DepthBias = 0.001; float ShadowFactor = 1.0; if( LightCoordinate.w > 0.0 ) { ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0; } LightingColor.rgb *= ShadowFactor; //gl_FragColor = LightingColor; //Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st ); } I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing else is going in unusual. 
When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I do the correct command at the end of the GLSL: That's the image when the last line is just glFragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • Rain effect looks like snowfall effect?

    - by Nikhil Lamba
    I am making a game, and in that game I want a rain effect. I am a little bit far from this right now. I am doing it like below: particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1)); particleSystem.addParticleInitializer(new AlphaInitializer(0)); particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE); particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10)); particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f)); particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150)); particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3)); particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6)); particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3)); particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125)); particleSystem.addParticleModifier(new ExpireModifier(50, 50)); scene.attachChild(particleSystem); But it looks like a snowfall effect. What changes can I make so it looks like rain? Please correct me. EDIT: here is a link to a snapshot http://i.imgur.com/bRIMP.png

    Read the article

  • Sphere-Sphere intersection and Circle-Sphere intersection

    - by cagirici
    I have code for circle-circle intersection, but I need to expand it to 3D. How do I calculate: the radius and center of the intersection circle of two spheres, and the points of intersection of a sphere and a circle? Given two spheres (sc0,sr0) and (sc1,sr1), I need to calculate a circle of intersection whose center is ci and whose radius is ri. Moreover, given a sphere (sc0,sr0) and a circle (cc0, cr0), I need to calculate the two intersection points (pi0, pi1). I have checked this link and this link, but I could not understand the logic behind them and how to code them. I tried the ProGAL library for sphere-sphere-sphere intersection, but the resulting coordinates are rounded. I need precise results.
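
    For the sphere-sphere case, the intersection circle can be computed directly from the distance between the centres: with d the centre distance, the circle's plane sits at a = (d^2 + r0^2 - r1^2) / (2d) from the first centre along the centre axis, and its radius is sqrt(r0^2 - a^2). A small sketch of that calculation in plain vector math (no particular library assumed):

      // Sketch: circle of intersection of two spheres (centres c0, c1; radii r0, r1).
      public struct Vec3
      {
          public double X, Y, Z;
          public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
          public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
          public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
          public static Vec3 operator *(Vec3 a, double s) => new Vec3(a.X * s, a.Y * s, a.Z * s);
          public double Length() => System.Math.Sqrt(X * X + Y * Y + Z * Z);
      }

      public static class SphereSphere
      {
          // ci = centre of the intersection circle, ri = its radius, n = the circle plane's unit normal.
          // Returns false if the spheres do not intersect in a proper circle.
          public static bool Intersect(Vec3 c0, double r0, Vec3 c1, double r1,
                                       out Vec3 ci, out double ri, out Vec3 n)
          {
              ci = new Vec3(0, 0, 0); n = new Vec3(0, 0, 0); ri = 0;
              Vec3 d = c1 - c0;
              double dist = d.Length();
              if (dist <= 0 || dist > r0 + r1 || dist < System.Math.Abs(r0 - r1))
                  return false;                              // coincident, separate, or one inside the other

              // Distance from c0 to the plane of the intersection circle.
              double a = (dist * dist + r0 * r0 - r1 * r1) / (2.0 * dist);
              double h2 = r0 * r0 - a * a;
              if (h2 < 0) return false;

              n  = d * (1.0 / dist);                         // unit axis from c0 towards c1
              ci = c0 + n * a;                               // circle centre lies on that axis
              ri = System.Math.Sqrt(h2);                     // circle radius
              return true;
          }
      }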

    Read the article

  • Open source level editor for HTML5 platform game?

    - by Lai Yu-Hsuan
    A natty GUI editor is very helpful for creating level maps. I want to use an open-source choice rather than build my own from scratch. I found the Tiled Map Editor, but it doesn't work for what I want. Though I'm building an HTML5 game, I don't have to use an HTML5 level editor, as long as it can output well-formatted map files which my JavaScript can read. Edit: Sorry for the confusion. Tiled does not work for me because, to make the player perform a 'tricky' jump, sometimes I want to set the distance between two platforms to, say, 7/3 or 8/3 tiles. But in Tiled I get only 2 or 3. If Tiled can do this, please teach me.

    Read the article

  • Camera not staying behind model while moving in circle

    - by ChocoMan
    I have a camera behind a model (3rd Person) and I'm having problems KEEPING it behind the model. When I first start my game, you see the back of the model. If the model moves forward, backward or strafe left or right, the camera moves along accordingly. When the model rotates (stationary), the camera rotates accordingly with the model still pointing at the model's back. So far, so good. The problem comes when the player is BOTH moving and rotating at the same time. Take for example a model moving in a circular pattern like running around a track. As the model moves in this motion, the model rotates slightly more with each complete rotation. Eventually, instead of looking at the model's back, eventually you will see the model in a profile view and before you know it, the model's front is facing the camera. And when you stop moving the model, the model stays in that position. So, as long as my model is stationary and rotating in one place, the camera rotates correctly. But as soon as there is any sort movement while rotating, the model is offset by a mysterious increasing amount. How can I keep the camera maintaining the same view no matter how I move AND rotate at the same time? // Rotates model and pitches camera on its own axis public void modelRotMovement(GamePadState pController) { /* For rotating the model left or right. * Camera maintains distance from model * throughout rotation and if model moves * to a new position. */ Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX); AddRotation = Quaternion.CreateFromAxisAngle(Vector3.Up, yaw); //AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0); ModelLoad.MRotation *= AddRotation; MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation); Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX); AddPitch = Quaternion.CreateFromAxisAngle(Vector3.Up, pitch); ModelLoad.CRotation *= AddPitch; COrientation = Matrix.CreateFromQuaternion(ModelLoad.CRotation); } // Orbit (yaw) Camera around model public void cameraYaw(float yaw) { Vector3 yawAngle = ModelLoad.CameraPos - ModelLoad.camTarget; Vector3 axisYaw = Vector3.Up; ModelLoad.CameraPos = Vector3.Transform(yawAngle, Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget; }
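
    As a point of comparison, many third-person cameras avoid accumulating a separate camera rotation at all: every frame they rebuild the camera position from the model's current orientation and a fixed local offset, so the two can never drift apart. A minimal XNA-style sketch of that idea (field names are placeholders):

      // Sketch: keep the camera glued behind the model by deriving it from the
      // model's orientation every frame instead of rotating the camera separately.
      Vector3 cameraLocalOffset = new Vector3(0f, 2f, 6f);   // behind and above the model

      void UpdateCamera(Vector3 modelPosition, Quaternion modelRotation, out Matrix view)
      {
          // Rotate the fixed local offset by the model's current orientation.
          Vector3 offsetWorld = Vector3.Transform(cameraLocalOffset,
                                                  Matrix.CreateFromQuaternion(modelRotation));
          Vector3 cameraPosition = modelPosition + offsetWorld;
          Vector3 cameraTarget   = modelPosition + Vector3.Up * 1.5f;  // look slightly above the feet

          view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
      }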

    Read the article

  • 3D texture coordinates for a cube

    - by Roshan
    I want to use glTexImage3D with a cube. What will the texture coordinates for it be? I am using GL_TEXTURE_3D as the target. I tried u v coordinates the same as the 2D texture coordinates, with the z component going from 0 to depth for each face, but that goes wrong. How do I apply each layer to each face of the cube with target = GL_TEXTURE_3D? Let's assume I have 8 layers of 2D images in my 3D texture. I want all 8 layers to apply to each face of the cube, and not 1 layer on 1 face of the cube.

    Read the article

  • Need to combine a color, mask, and sprite layer in a shader

    - by Donutz
    My task: to display a sprite using different team colors. I have a sprite graphic, part of which has to be displayed as a team color. The color isn't 'flat', i.e. it shades from brighter to darker. I can't "pre-build" the graphics because there are just too many, so I have to generate them at runtime. I've decided to use a shader, and supply it with a texture consisting of the team color, a texture consisting of a mask (black=no color, white=full color, gray=progressively dimmed color), and the sprite graphic, with the areas where the team color shows being transparent. So here's my shader code: // Effect attempts to merge a color layer, a mask layer, and a sprite layer // to produce a complete sprite sampler UnitSampler : register(s0); // the unit sampler MaskSampler : register(s1); // the mask sampler ColorSampler : register(s2); // the color float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0 { float4 tex1 = tex2D(ColorSampler, texCoord); // get the color float4 tex2 = tex2D(MaskSampler, texCoord); // get the mask float4 tex3 = tex2D(UnitSampler,texCoord); // get the unit float4 tex4 = tex1 * tex2.r * tex3; // color * mask * unit return tex4; } My problem is the calculations involving tex1 through tex4. I don't really understand how the manipulations work, so I'm just thrashing around, producing lots of different incorrect effects. So given tex1 through tex3, what calculations do I do in order to take the color (tex1), mask it (tex2), and apply the result to the unit if it's not zero? And would I be better off making the mask just on/off (white/black) and putting the color shading in the unit graphic?

    Read the article
