Search Results

Search found 8165 results on 327 pages for '3d graphics'.

Page 72/327 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • About floating point precision and why we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains what goes on behind the scenes and offers the obvious alternative - fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is: why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are they so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
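
    As a quick sanity check of the quoted claim: 2^63 micrometres is roughly 9.2 billion km, which does cover Pluto's 7.4 billion km with sub-micrometre resolution to spare. Below is a minimal sketch (not from the article; the type name and the one-unit-per-micrometre scale are assumptions for illustration) of what such a 64-bit fixed-point coordinate could look like:

        // Minimal sketch (assumption: 1 unit = 1 micrometre). 2^63 um ~= 9.2 billion km,
        // which is why 64 bits cover Pluto's 7.4 billion km with sub-micrometre steps.
        #include <cstdint>
        #include <cstdio>

        struct FixedCoord {
            int64_t raw;  // position in micrometres

            static FixedCoord fromMetres(double m) { return { (int64_t)(m * 1e6) }; }
            double toMetres() const { return raw * 1e-6; }

            FixedCoord operator+(FixedCoord o) const { return { raw + o.raw }; }
            FixedCoord operator-(FixedCoord o) const { return { raw - o.raw }; }
        };

        int main() {
            FixedCoord pluto  = FixedCoord::fromMetres(7.4e12);    // 7.4 billion km
            FixedCoord offset = FixedCoord::fromMetres(0.000001);  // one micrometre
            FixedCoord moved  = pluto + offset;
            // The micrometre offset survives, unlike with 32-bit floats at this range.
            std::printf("%.7f m difference\n", (moved - pluto).toMetres());
            return 0;
        }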

    Read the article

  • Cannot use second display with 12.04 and Intel 2000/3000

    - by Carolyn Marenger
    I am unable to get anything to display on my second monitor, or even get the system to recognize that there is a second screen. I am running Ubuntu 12.04 on a Gigabyte GA-H61M-S2PV, revision 2.0 box. The integrated chipset is an Intel 2000/3000, and there are both D-Sub and DVI-D display ports on the motherboard. This is the first operating system I have installed on this system. I have a second monitor plugged into the DVI-D port via a DVI-D to D-Sub adapter. I cannot verify that the motherboard or adapter were/are working, short of installing Windows to test the theory. When I go into the "System Settings - Displays" control window, it shows one display. I have rebooted with the second monitor attached, and I have perused the BIOS settings in case it might have been disabled. So far, I have had no indication that the second monitor is recognized, not even a flicker at power on. If I swap monitors and cables between the DVI-D and D-Sub ports, the other monitor lights up, so I know the monitor and video cable are not the issue. Any suggestions would be appreciated. Thanks, Carolyn

    Read the article

  • OpenGL Drawing textured model (OBJ) black texture

    - by andrepcg
    I'm using OpenGL, GLEW, GLFW and GLUT to create a simple game. I've been following some tutorials and I now have a good model importer with textures (from ogldev.atspace.co.uk), but I'm having an issue with the model textures. I have a skybox with a beautiful texture, as you can see in the picture. That weird texture behind the helicopter (model) is the heli texture that I've applied on purpose to that wall, to demonstrate that the specific texture is working - just not on the helicopter. I'll include the files I'm working on so you can check them out:

        Mesh.cpp - http://pastebin.com/pxDuKyQa
        Texture.cpp - http://pastebin.com/AByWjwL6
        Render function + skybox - http://pastebin.com/Vivc9qnT

    I'm just calling mesh->Render(); before the drawSkyBox function, in the render loop. Why is the heli black when I can perfectly apply its texture to another quad? I've debugged the code and the mesh->Render() call is correctly fetching the texture number and passing it to the texture->Bind() function.

    Read the article

  • Ubuntu 12.04 - default Radeon driver does not work at all

    - by mumble
    I've recently upgraded to 12.04 LTS and I have an ATI Radeon HD 5670. I've heard that the open source 'Radeon' driver is used by default. However, it wasn't showing anything for me. What I did was add the 'nomodeset' option at boot and install fglrx. But that didn't work well for me, as it introduced a lot of problems (freezes/glitches). So I removed/purged fglrx and am planning to use the open source drivers instead. So my question is this: why is the default Radeon driver not working? Is anyone having a similar issue? I've also tried using the ubuntu-x-swat drivers by running the following commands:

        sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update

    But the result was the same as with the Radeon driver: nothing shows up on system boot. Any ideas? Thanks in advance!

    Update: Running lspci -nn | grep VGA gives me the following:

        02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Redwood [Radeon HD 5670] [1002:68d8]

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices:

    The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix.

    The second matrix (W) is the one that transforms from inertial space into world space, which is just a scale transform matrix with factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin.

            /a 0 0 0\
        W = |0 a 0 0|
            |0 0 a 0|
            \0 0 0 1/

    The third matrix (C) is the one that transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units down the z axis.

            /1 0 0 0\
        C = |0 1 0 0|
            |0 0 1 10|
            \0 0 0 1/

    And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of the world space and the projection plane is defined by z = 1, the projection matrix is:

            /1 0 0   0\
        P = |0 1 0   0|
            |0 0 1   0|
            \0 0 1/d 0/

    where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column vector form:

            /x\
        V = |y|
            |z|
            \1/

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon: Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
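
    To make the chain easier to check numerically, here is a small standalone sketch (assuming row-major storage and the column-vector convention used above; the test vertex is hypothetical) that multiplies P x C x W and applies the perspective divide to one vertex, so intermediate values can be compared against the software renderer:

        // Applies the described chain P*C*W*I to one vertex (I is the identity and is omitted).
        #include <cstdio>

        struct Vec4 { double x, y, z, w; };
        struct Mat4 { double m[4][4]; };

        Mat4 mul(const Mat4& a, const Mat4& b) {
            Mat4 r{};
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    for (int k = 0; k < 4; ++k)
                        r.m[i][j] += a.m[i][k] * b.m[k][j];
            return r;
        }

        Vec4 mul(const Mat4& a, const Vec4& v) {
            double in[4] = { v.x, v.y, v.z, v.w }, out[4] = {};
            for (int i = 0; i < 4; ++i)
                for (int k = 0; k < 4; ++k)
                    out[i] += a.m[i][k] * in[k];
            return { out[0], out[1], out[2], out[3] };
        }

        int main() {
            double a = 14.1, d = 1.0;
            Mat4 W = {{ {a,0,0,0}, {0,a,0,0}, {0,0,a,0}, {0,0,0,1} }};      // scale
            Mat4 C = {{ {1,0,0,0}, {0,1,0,0}, {0,0,1,10}, {0,0,0,1} }};     // translate +10 in z
            Mat4 P = {{ {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,1.0/d,0} }};  // plane at z = d

            Mat4 PCW = mul(mul(P, C), W);
            Vec4 v = { 1.0, 1.0, 1.0, 1.0 };   // hypothetical object-space vertex
            Vec4 clip = mul(PCW, v);
            std::printf("screen x=%f y=%f (w=%f)\n", clip.x / clip.w, clip.y / clip.w, clip.w);
            return 0;
        }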

    Read the article

  • 3ds Max CAT to XNA

    - by user12214
    Has anyone successfully used the CAT bone system in 3ds Max and exported the file into XNA? If so, what was your method of doing so? There are a number of methods of doing this apparently, but the ones I've tried have not worked. I used the Panda Exporter which creates a .X file. This seems to be the latest way of going about this, but when it's loaded in XNA, there is an error saying something about the bone weights. This happens when I export with and without CAT bones.

    Read the article

  • How to generate portal zones?

    - by Meow
    I'm developing a portal-based scene manager. Basically, all it does is check the portals against the camera frustum and render their associated portal zones accordingly. Is there any way my editor can generate the portal zones automatically, with the user only having to place the portals themselves? For example, the Max Payne 1/2 engine ("Max-FX") only required setting the portal quads, unlike the C4 engine where you also have to explicitly define the portal zones.
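
    One possible way to generate the zones automatically - a sketch of the general idea, not how Max-FX or C4 actually do it - is to rasterise the level geometry and the user-placed portal quads into a coarse voxel grid and flood-fill the empty cells; each connected region of empty space bounded by geometry and portals becomes one zone, and the portals touching a region define its connections:

        #include <cstdio>
        #include <queue>
        #include <vector>

        struct Grid {
            int nx, ny, nz;
            std::vector<int> cell;   // 0 = empty, 1 = solid geometry, 2 = portal quad
            int& at(int x, int y, int z) { return cell[(z * ny + y) * nx + x]; }
        };

        // Assigns a zone id to every empty cell; portals and solid cells stop the fill.
        std::vector<int> buildZones(const Grid& g) {
            std::vector<int> zone(g.cell.size(), -1);
            int nextZone = 0;
            const int dx[] = {1,-1,0,0,0,0}, dy[] = {0,0,1,-1,0,0}, dz[] = {0,0,0,0,1,-1};
            for (int z = 0; z < g.nz; ++z)
            for (int y = 0; y < g.ny; ++y)
            for (int x = 0; x < g.nx; ++x) {
                int start = (z * g.ny + y) * g.nx + x;
                if (g.cell[start] != 0 || zone[start] != -1) continue;
                int id = nextZone++;
                std::queue<int> q; q.push(start); zone[start] = id;
                while (!q.empty()) {
                    int c = q.front(); q.pop();
                    int cx = c % g.nx, cy = (c / g.nx) % g.ny, cz = c / (g.nx * g.ny);
                    for (int d = 0; d < 6; ++d) {
                        int px = cx + dx[d], py = cy + dy[d], pz = cz + dz[d];
                        if (px < 0 || py < 0 || pz < 0 || px >= g.nx || py >= g.ny || pz >= g.nz) continue;
                        int n = (pz * g.ny + py) * g.nx + px;
                        if (g.cell[n] == 0 && zone[n] == -1) { zone[n] = id; q.push(n); }
                    }
                }
            }
            std::printf("generated %d zones\n", nextZone);
            return zone;
        }

        int main() {
            // Tiny 5x1x1 corridor with a portal cell in the middle: expect two zones.
            Grid g{ 5, 1, 1, std::vector<int>(5, 0) };
            g.at(2, 0, 0) = 2;   // portal quad rasterised into cell (2, 0, 0)
            buildZones(g);
            return 0;
        }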

    Read the article

  • Why do my pyramids fade to black and then back to colour again?

    - by geminiCoder
    I have the following vertices and normals:

        GLfloat verts[36] = {
            -0.5, 0, 0.5,    0, 0, -0.5,    0.5, 0, 0.5,
             0, 0, -0.5,     0.5, 0, 0.5,   0, 1, 0,
            -0.5, 0, 0.5,    0, 0, -0.5,    0, 1, 0,
             0.5, 0, 0.5,   -0.5, 0, 0.5,   0, 1, 0
        };

        GLfloat norms[36] = {
             0, -1, 0,       0, -1, 0,      0, -1, 0,
            -1, 0.25, 0.5,  -1, 0.25, 0.5, -1, 0.25, 0.5,
             1, 0.25, -0.5,  1, 0.25, -0.5, 1, 0.25, -0.5,
             0, -0.5, -1,    0, -0.5, -1,   0, -0.5, -1
        };

    I am writing my first OpenGL game, but I need to know for sure whether my normals are correct, as the colours aren't rendering correctly: my pyramids are coloured, then fade to black every half rotation, then back again. My app so far is based on the boilerplate code provided by Apple. Here's my modified setup method:

        [EAGLContext setCurrentContext:self.context];
        [self loadShaders];

        self.effect = [[GLKBaseEffect alloc] init];
        self.effect.light0.enabled = GL_TRUE;
        self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f);

        glEnable(GL_DEPTH_TEST);

        glGenVertexArraysOES(1, &_vertexArray); // create vertex array
        glBindVertexArrayOES(_vertexArray);

        glGenBuffers(1, &_vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
        // create a vertex buffer big enough for both verts and norms, passing NULL as data
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(norms), NULL, GL_STATIC_DRAW);

        // map the buffer to pass data to it
        uint8_t *ptr = (uint8_t *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
        memcpy(ptr, verts, sizeof(verts));                  // copy verts
        memcpy(ptr + sizeof(verts), norms, sizeof(norms));  // copy norms to the position after verts
        glUnmapBufferOES(GL_ARRAY_BUFFER);

        glEnableVertexAttribArray(GLKVertexAttribPosition);
        // tell GL where the verts are in the buffer
        glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
        glEnableVertexAttribArray(GLKVertexAttribNormal);
        // tell GL where the norms are in the buffer
        glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(verts)));

        glBindVertexArrayOES(0);

    And the update method:

        - (void)update
        {
            float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
            GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
            self.effect.transform.projectionMatrix = projectionMatrix;

            GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
            baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);

            // Compute the model view matrix for the object rendered with GLKit
            GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f);
            modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
            modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
            self.effect.transform.modelviewMatrix = modelViewMatrix;

            // Compute the model view matrix for the object rendered with ES2
            modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f);
            modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
            modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);

            _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
            _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

            _rotation += self.timeSinceLastUpdate * 0.5f;
        }

    But provided I understand this correctly, one pyramid is using the GLKit base effect shaders and the other uses the shaders included in the project. So for both of them to have the same error, I thought it would be the normals?
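
    One way to sanity-check the hand-written normals, independent of GLKit, is to recompute each face normal from the triangle's vertices with a cross product and compare it against the values in the norms array. A minimal sketch (plain C++, using the second triangle from the data above) follows; a sign or normalisation mismatch between the computed and hand-written normals could explain the dark half-rotation:

        // Recompute a flat face normal from three vertices (in the given winding order).
        #include <cmath>
        #include <cstdio>

        struct V3 { float x, y, z; };

        V3 faceNormal(V3 a, V3 b, V3 c) {
            V3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
            V3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
            V3 n = { u.y * v.z - u.z * v.y,
                     u.z * v.x - u.x * v.z,
                     u.x * v.y - u.y * v.x };
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            return { n.x / len, n.y / len, n.z / len };
        }

        int main() {
            // Second triangle from the verts array above (one of the side faces).
            V3 a = { 0.0f, 0.0f, -0.5f }, b = { 0.5f, 0.0f, 0.5f }, c = { 0.0f, 1.0f, 0.0f };
            V3 n = faceNormal(a, b, c);
            std::printf("computed face normal: %f %f %f\n", n.x, n.y, n.z);
            // Compare against the three per-vertex normals given for this face in norms[];
            // a flipped sign or an unnormalised value would show up here.
            return 0;
        }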

    Read the article

  • Using glReadBuffer/glReadPixels returns black image instead of the actual image only on Intel cards

    - by cloudraven
    I have this piece of code:

        glReadBuffer( GL_FRONT );
        glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer );

    It works just perfectly on all the Nvidia and AMD GPUs I have tried, but it fails on almost every single Intel built-in video chip that I have tried. It actually works on a very old 945GME, but fails on all the others. Instead of getting a screenshot, I am actually getting a black screen. If it helps, I am working with the Doom 3 engine, and that code is derived from its built-in screen capture code. By the way, even with the original game I cannot do screen capture on those Intel devices anyway. My guess is that they are not implementing the standard correctly or something. Is there a workaround for this?
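
    A commonly suggested workaround - a sketch, not a guaranteed fix - is to capture the back buffer right before the swap instead of the front buffer afterwards: the contents of the front buffer are not guaranteed to survive a swap or compositing, and many Intel drivers simply don't keep them. Setting the pack alignment to 1 also avoids row-padding surprises with GL_RGB:

        #include <GL/gl.h>

        // Call this at the end of the frame, before SwapBuffers.
        void captureBeforeSwap(int width, int height, unsigned char* buffer) {
            glPixelStorei(GL_PACK_ALIGNMENT, 1);   // RGB rows are not always 4-byte multiples
            glReadBuffer(GL_BACK);                 // read what was just rendered
            glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
            // ... then call the platform's SwapBuffers as usual.
        }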

    Read the article

  • How well does Intel HD 3000 work on Ubuntu?

    - by Simon
    Right now I have a notebook with an Nvidia 8400M GS (I know, it's not a good card) and it's impossible to work normally when I plug in an external monitor (1920x1080). Windows 7 can deal with it without problems (1440x900 on the notebook + 1920x1080 external). On Ubuntu I have to choose one screen and turn off the second one. Even with only one screen, Ubuntu (Unity or even GNOME 3) sometimes hangs for a while; I've not found a solution for this yet, but never mind, it's probably because of my card and/or Nvidia's drivers. I'm going to buy a new PC, but for now only with the integrated Intel HD 3000, and my question is: should I expect similar problems with this card? Here I've found a link to Intel's webpage about the drivers - "only the community develops them" - and I'm a bit concerned. I'll then use only one monitor (the bigger one), but how well do those drivers work? Are there any performance tests?

    Read the article

  • Strategy to prevent players from seeing through walls in an online FPS?

    - by geneotech
    Why do we still moan about wallhackers in multiplayer first-person shooters? Isn't it possible to perform occlusion culling for all players server-side? For example, send player xyz's information to a client only when that player is visible in the client's frustum and not occluded by any object. Even if the collision geometry is very simplified, most of the time the cheater won't receive the tactical information. Why not do this?
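
    For reference, the per-pair gate described above would look roughly like the following sketch (all names are hypothetical placeholders; the frustum test is approximated by a view cone, and the occlusion query is stubbed out where a real server would raycast against simplified level geometry):

        #include <cmath>
        #include <cstdio>
        #include <vector>

        struct Vec3 { float x, y, z; };

        static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
        static Vec3  norm(Vec3 v) { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

        struct Player { int id; Vec3 position; Vec3 viewDir; };

        // Conservative cone test standing in for a real frustum test.
        static bool inViewCone(const Player& viewer, Vec3 target, float cosHalfFov) {
            return dot(norm(sub(target, viewer.position)), viewer.viewDir) > cosHalfFov;
        }

        // Placeholder: a real server would raycast against simplified occluder geometry here.
        static bool occluded(Vec3 /*from*/, Vec3 /*to*/) { return false; }

        static void sendUpdate(const Player& viewer, const Player& target) {
            std::printf("send player %d to player %d\n", target.id, viewer.id);
        }

        static void replicateTick(const std::vector<Player>& players) {
            const float cosHalfFov = std::cos(3.14159f * 0.30f);  // generous ~108 degree cone
            for (const Player& viewer : players)
                for (const Player& target : players) {
                    if (viewer.id == target.id) continue;
                    // Err on the side of sending: a false "hidden" makes enemies pop in,
                    // a false "visible" only leaks a little information and bandwidth.
                    if (!inViewCone(viewer, target.position, cosHalfFov)) continue;
                    if (occluded(viewer.position, target.position)) continue;
                    sendUpdate(viewer, target);
                }
        }

        int main() {
            std::vector<Player> players = {
                { 1, { 0, 0, 0 }, { 0, 0, 1 } },
                { 2, { 0, 0, 5 }, { 0, 0, -1 } },
            };
            replicateTick(players);
            return 0;
        }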

    Read the article

  • I want to learn to program in SDL C++. Where do I start? I want to learn only what I need to start making 2D games [on hold]

    - by user2644399
    Lazyfoo of Lazyfoo.net, in the SDL 2D tutorials, wrote that in order for me to start game programming in SDL, I need to know these concepts well: operators, control structures, loops, functions, structures, arrays, references, pointers, classes, objects, how to use a template, and bitwise and/or. I want to know the fastest way to learn as much of basic C++ as I need to make 2D games. Thanks in advance.

    Read the article

  • How are trajectories calculated and transmitted to other players in multiplayer?

    - by giulio
    I play a lot of COD4 and can see tracers for gunfire, missiles, care packages falling from helicopters, etc. There is a lot of activity. I am curious to know the algorithm (at a high level) that manages all this action when you have 20 people on a map shooting each other to death. This question touches on the subject, but doesn't go into depth on how developers calculate and transmit movement and collision detection for projectiles, be they missiles, bullets, or any other object flying through the air in real time.
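
    One common pattern - a sketch of the general approach, not necessarily what COD4 does - is to send a single compact spawn event per projectile (shooter, weapon, launch time, origin, velocity) and let every client evaluate the deterministic trajectory locally, while authoritative hit detection stays on the server:

        #include <cstdint>
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        // Compact message sent once when the projectile is fired.
        struct ProjectileSpawn {
            uint32_t shooterId;
            uint8_t  weaponType;
            float    launchTimeMs;   // server timestamp
            Vec3     origin;
            Vec3     velocity;       // direction * muzzle speed
        };

        // Every client can evaluate the trajectory at any local render time.
        Vec3 evaluate(const ProjectileSpawn& p, float nowMs, float gravity = 9.81f) {
            float t = (nowMs - p.launchTimeMs) * 0.001f;   // seconds since launch
            return { p.origin.x + p.velocity.x * t,
                     p.origin.y + p.velocity.y * t - 0.5f * gravity * t * t,
                     p.origin.z + p.velocity.z * t };
        }

        int main() {
            ProjectileSpawn rocket = { 7, 3, 1000.0f, { 0, 1.5f, 0 }, { 0, 5.0f, 40.0f } };
            for (float t = 1000.0f; t <= 2000.0f; t += 250.0f) {
                Vec3 pos = evaluate(rocket, t);
                std::printf("t=%.0fms  pos=(%.2f, %.2f, %.2f)\n", t, pos.x, pos.y, pos.z);
            }
            // Authoritative collision and hit detection still run on the server; clients
            // only use this locally to render tracers and predict impacts.
            return 0;
        }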

    Read the article

  • Can't Run Assault Cube

    - by Debashis Pradhan
    I installed AssaultCube from the Software Centre and it just opens for half a second and closes. When I run it from the terminal, this is what I get:

        d@d-platform:~$ assaultcube
        Using home directory: /home/d/.assaultcube_v1.104
        current locale: en_IN
        init: sdl
        init: net
        init: world
        init: video: sdl
        init: video: mode
        X Error of failed request:  BadValue (integer parameter out of range for operation)
          Major opcode of failed request:  129 (XFree86-VidModeExtension)
          Minor opcode of failed request:  10 (XF86VidModeSwitchToMode)
          Value in failed request:  0xb3
          Serial number of failed request:  131
          Current serial number in output stream:  133

    Read the article

  • Why would video stutter on HDMI but not on DVI?

    - by CorvT
    I've got a system running Ubuntu 12.04 with an i3 2120T CPU/GPU. When I play video through mplayer, I notice that when I'm hooked up to a screen via HDMI there is a small stutter (1-2 frames) every few seconds. I don't see this happening when I connect via DVI on the same screen. Resolution and refresh rate are the same for both HDMI and DVI, so I'm not sure where else the problem could be coming from. I've also tried two different screens, and different cables. I see the stutter with either HDMI-HDMI cables, or a DVI-HDMI cable with DVI from the PC and HDMI into the screen. I don't see the stutter with DVI-DVI cables, or when I use HDMI-DVI cables with HDMI from the PC and DVI into the screen. I've also tried using an AMD 5XXX series card with the open source radeon driver, and saw the same problem. I then tried an nVidia GeForce 210 card with the closed source driver, and the stutter went away. To me this smells like a driver/mesa/glx issue (since the problem went away with the nvidia card/driver), but I have no idea how to track this down.

    Read the article

  • Terrain square loading

    - by AndroidXTr3meN
    Games like Skyrim, Morrowind, and others use quads or squares to divide the terrain, if I'm correct. The player is always at #5:

        1 | 2 | 3
        4 | 5 | 6
        7 | 8 | 9

    So whenever you cross a border, you unload and load the new "areas". But if the user steps just over the edge and a second later goes back to the previous area, a lot of unnecessary loading and unloading is done. Is there a general approach to this? I don't think games like Skyrim have this issue. Cheers!
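
    The usual answer is hysteresis: load the new row or column when the player crosses a border, but only re-centre (and unload) once they are some margin past it, so hovering right on the edge never thrashes. A minimal sketch follows (the cell size and margin values are made up for illustration):

        #include <cstdio>

        const float CELL_SIZE = 512.0f;   // world units per terrain square (assumed)
        const float MARGIN    = 32.0f;    // how far past the border before re-centring

        struct ChunkWindow {
            int centreX = 0, centreZ = 0;  // cell coordinates of square #5

            void update(float playerX, float playerZ) {
                // Player position relative to the centre of the current centre cell.
                float localX = playerX - (centreX + 0.5f) * CELL_SIZE;
                float localZ = playerZ - (centreZ + 0.5f) * CELL_SIZE;
                int shiftX = 0, shiftZ = 0;
                if (localX >  0.5f * CELL_SIZE + MARGIN) shiftX = 1;
                if (localX < -0.5f * CELL_SIZE - MARGIN) shiftX = -1;
                if (localZ >  0.5f * CELL_SIZE + MARGIN) shiftZ = 1;
                if (localZ < -0.5f * CELL_SIZE - MARGIN) shiftZ = -1;
                if (shiftX || shiftZ) {
                    centreX += shiftX;
                    centreZ += shiftZ;
                    std::printf("re-centre on cell (%d, %d): load the new row/column, unload the old\n",
                                centreX, centreZ);
                }
            }
        };

        int main() {
            ChunkWindow window;
            // Hovering right on the border between cells (0,0) and (1,0) does not thrash...
            window.update(CELL_SIZE + 1.0f, 0.0f);
            window.update(CELL_SIZE - 1.0f, 0.0f);
            // ...but moving clearly into the next cell re-centres exactly once.
            window.update(CELL_SIZE + MARGIN + 300.0f, 0.0f);
            return 0;
        }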

    Read the article

  • Why doesn't my graphics card support 1280x1024?

    - by Allwar
    Hi, I have an external monitor which is a 20" 1280x1024 display. In Windows 7 it works fine at that resolution, but in Ubuntu it doesn't. Example: in Windows I connect it and activate it, done. In Ubuntu I connect it and the only resolutions available are the ones my laptop screen supports, 12" 1366x768. My laptop is an Asus 1201N. If I force it to use 1280x1024, both screens crash and I have to force a reboot. What should I do?

        alvar@alvars-laptop:~$ disper -l
        display DFP-0: HSD121PHW1
        resolutions: 320x175, 320x200, 360x200, 320x240, 400x300, 416x312, 512x384, 640x350, 576x432, 640x400, 680x384, 720x400, 640x480, 720x450, 640x512, 700x525, 800x512, 840x525, 800x600, 960x540, 832x624, 1024x768, 1366x768
        display CRT-0: CRT-0
        resolutions: 320x240, 400x300, 512x384, 680x384, 640x480, 800x600, 1024x768, 1152x864, 1360x768

    Read the article

  • Nvidia Driver versions?

    - by Patrick Krenz
    I've looked all over and can't find any reason as to why or how Nvidia names their drivers. For example, they have a 330.xxx/340.xxx series that is current, but also a 300.xxx series, and I've found that they aren't always released in order by number. Here's an example from their site with version and release date:

        331.38 - January 13
        334.16 - Feb 7
        331.49 - Feb 18

    I'm really confused about which driver to actually go with; a few different series versions seem to work adequately, and I just want to understand the scheme and what the best option to work from would be. I really appreciate any information.

    Read the article

  • Enabling a multi display desktop completely broke Gnome Shell. Help?

    - by Chintan Parikh
    I've been trying to get my dual desktops working on Ubuntu for a while. I previously had them as one large desktop, but that was incredibly slow for some reason. I tried to switch them to a multi-display desktop in the AMD Catalyst Control Center. Here's what I get after restarting and logging in: http://i.imgur.com/SEjgU.png I'm running an AMD quad-core A6, an AMD Radeon 6540G2 GPU, and 16GB RAM, on Ubuntu 12.04. Any ideas?

    Read the article

  • OpenGL and switchable graphics cards

    - by Orcun
    I use a laptop that has an AMD Radeon HD 6470M and an onboard graphics card. When I run fglrxinfo, I get this error:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  136 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

    Is this a problem? For some reason I can't use OpenGL: I can't run any OpenGL applications.

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3ds Max 2013? I have two vertices which I wish to connect with a line to create an edge (actually several). I have tried everything I can think of and done several Google searches, but they only turn up the method for older versions, which says to use the "Connect" button... but I can't find the Connect button in my version (see below). This is what my menu looks like: These are the vertices I'm trying to connect: Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • Fatal X server error: Failed to submit to batchbuffer

    - by Jan
    Ubuntu 10.04 Lucid Lynx used to run fine on my computer. For a few weeks now, my X server has been crashing out of the blue while the computer is idle and I'm logged into a Gnome session (I'm then greeted with a new GDM login prompt). After the crash, /var/log/gdm/:0.log.1 has the following:

        Fatal server error:
        Failed to submit batchbuffer: Input/output error

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.

    ~/.xsession-errors.old has symptoms of X clients dying (the German strerror means "the resource is temporarily unavailable"):

        nm-applet: Fatal IO error 11 (Die Ressource ist zur Zeit nicht verfügbar) on X server :0.0.

    dmesg says:

        [191848.390081] [drm:i915_hangcheck_elapsed] ERROR Hangcheck timer elapsed... GPU hung
        [191848.390086] render error detected, EIR: 0x00000010
        [191848.390088] IPEIR: 0x00000000
        [191848.390090] IPEHR: 0x01800002
        [191848.390091] INSTDONE: 0xffffffff
        [191848.390093] INSTPS: 0x8001e020
        [191848.390095] INSTDONE1: 0xbfffffff
        [191848.390097] ACTHD: 0x0a47b014
        [191848.390099] page table error
        [191848.390100] PGTBL_ER: 0x00000002
        [191848.390103] [drm:i915_handle_error] ERROR EIR stuck: 0x00000010, masking
        [191848.390127] [drm:i915_do_wait_request] ERROR i915_do_wait_request returns -5 (awaiting 5617217 at 5617205)

    Is this a known problem that can be traced back to the X server from the Ubuntu repositories? How would I debug this?

    Edit: There's a relevant bug on LP.

    Read the article

  • Better way to go up/down slope based on yaw?

    - by CyanPrime
    Alright, so I've got a bit of movement code and I'm thinking I'm going to need to manually input when to go up or down a slope. All I've got to work with is the slope's normal and vector, my current and previous positions, and my yaw. Is there a better way to work out whether I go up or down the slope based on my yaw?

        Vector3f move = new Vector3f(0, 0, 0);
        move.x = (float) -Math.toDegrees(Math.cos(Math.toRadians(yaw)));
        move.z = (float) -Math.toDegrees(Math.sin(Math.toRadians(yaw)));
        move.normalise();

        if (move.z < 0 && slopeNormal.z > 0 || move.z > 0 && slopeNormal.z < 0) {
            if (move.x < 0 && slopeNormal.x > 0 || move.x > 0 && slopeNormal.x < 0) {
                move.y += slopeVec.y;
            }
        }
        if (move.z > 0 && slopeNormal.z > 0 || move.z < 0 && slopeNormal.z < 0) {
            if (move.x > 0 && slopeNormal.x > 0 || move.x < 0 && slopeNormal.x < 0) {
                move.y -= slopeVec.y;
            }
        }

        move.scale(movementSpeed * delta);
        Vector3f.add(pos, move, pos);
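
    A common alternative - shown here as a standalone C++ sketch rather than a drop-in replacement for the code above - is to skip the sign checks entirely and project the yaw-based direction onto the slope plane using the normal, which tilts the motion up or down the slope automatically for any yaw:

        // moveOnSlope = move - n * dot(move, n): remove the component along the normal.
        #include <cmath>
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
        static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

        Vec3 moveAlongSlope(float yawDegrees, Vec3 slopeNormal, float speed) {
            float yaw = yawDegrees * 3.14159265f / 180.0f;
            Vec3 move = { -std::cos(yaw), 0.0f, -std::sin(yaw) };   // flat heading from yaw
            float d = dot(move, slopeNormal);
            Vec3 onSlope = { move.x - slopeNormal.x * d,            // project onto the
                             move.y - slopeNormal.y * d,            // slope plane
                             move.z - slopeNormal.z * d };
            onSlope = normalize(onSlope);
            return { onSlope.x * speed, onSlope.y * speed, onSlope.z * speed };
        }

        int main() {
            Vec3 n = normalize({ 0.0f, 1.0f, -0.3f });    // hypothetical slope normal
            Vec3 step = moveAlongSlope(90.0f, n, 0.1f);   // walking "into" the slope
            std::printf("step = (%f, %f, %f)\n", step.x, step.y, step.z);
            return 0;
        }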

    Read the article

  • Character with several colliders and rigidbodies

    - by Lautaro
    I am doing a PvP fighting game. This is the GameObject hierarchy of the player character - Player contains:

        - Legs
        - Sword
        - Torso
        - Head

    I want to be able to:

        - register impacts of the sword on a specific body part
        - use AddForce on the whole player entity when a body part is struck
        - change the animation of the player that owns the sword that hit

    Questions:

        - Is it correct that the only Rigidbody should be on the root Player GameObject?
        - Is it correct that the body parts should have colliders and be triggers?
        - Is it correct that the swords should have colliders but not be triggers?

    Read the article

  • Is using a dedicated thread just for sending gpu commands a good idea?

    - by tigrou
    The most basic game loop is like this:

        while(1)
        {
            update();
            draw();
            swapbuffers();
        }

    This is very simple but has a problem: some drawing commands can block, and the CPU will wait when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU:

        // first thread
        while(1)
        {
            update();
            render();   // use gamestate to generate all needed triangles and commands for the gpu;
                        // put them in a buffer, no command is sent to the gpu yet
                        // (two buffers will be used, see below)
            pulse();    // signal the other thread that data is ready
        }

        // second thread
        while(1)
        {
            wait();               // wait for the first thread's data to arrive
            send_data_togpu();    // send prepared commands from the buffer to the graphics card
            swapbuffers();
        }

    Also, two buffers would be used, so one buffer can be filled with GPU commands while the other is being processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially compared to a simpler one (e.g. single-threaded with triple buffering enabled)?
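
    As a concrete illustration of the double-buffered handoff described above, here is a minimal C++ sketch (all names are placeholders, and the printf stands in for the actual GPU submission and buffer swap). The update thread fills one buffer while the render thread drains the other, and they exchange buffers under a mutex, so neither thread ever touches the buffer the other is using:

        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <thread>
        #include <vector>

        struct DrawCommand { int mesh; int material; };

        std::mutex mtx;
        std::condition_variable ready;
        std::vector<DrawCommand> buffers[2];
        int fillIndex = 0;          // buffer the update thread writes
        bool frameReady = false;
        bool quit = false;

        void updateThread() {
            for (int frame = 0; frame < 3; ++frame) {
                // update(); render(): produce this frame's commands into buffers[fillIndex]
                buffers[fillIndex].push_back({ frame, 0 });
                {
                    std::unique_lock<std::mutex> lock(mtx);
                    ready.wait(lock, [] { return !frameReady; });  // renderer took the last frame
                    fillIndex = 1 - fillIndex;                     // hand over, fill the other one next
                    frameReady = true;
                }
                ready.notify_one();
            }
            { std::lock_guard<std::mutex> lock(mtx); quit = true; }
            ready.notify_one();
        }

        void renderThread() {
            while (true) {
                std::unique_lock<std::mutex> lock(mtx);
                ready.wait(lock, [] { return frameReady || quit; });
                if (!frameReady && quit) break;
                std::vector<DrawCommand>& toDraw = buffers[1 - fillIndex];  // the handed-over buffer
                lock.unlock();
                for (const DrawCommand& c : toDraw)            // send_data_togpu(); swapbuffers();
                    std::printf("draw mesh %d\n", c.mesh);
                toDraw.clear();
                { std::lock_guard<std::mutex> lock2(mtx); frameReady = false; }
                ready.notify_one();
            }
        }

        int main() {
            std::thread r(renderThread), u(updateThread);
            u.join(); r.join();
            return 0;
        }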

    Read the article
