Search Results

Search found 4224 results on 169 pages for 'dual gpu'.

Page 114/169

  • Should ATI Catalyst be installed for the sake of OpenCL?

    - by G Sree Teja Simha
    I have an HP Envy 4 1025tx with hybrid graphics. Although this is a 64-bit system, I've installed 32-bit Ubuntu on it for a few reasons (hybrid graphics reportedly don't do well with 64-bit Ubuntu, according to someone on a forum). I had heating problems with the GPU but I've fixed them all with vgaswitcheroo. Now I want to use Blender on Ubuntu, but to my surprise Blender didn't detect the dedicated 7670M card in my machine. I've confirmed with cat /sys/kernel/debug/vgaswitcheroo/switch that both IGD and DIS are up and running. I don't seem to have libOpenCL in /usr/lib, even though Synaptic says I have it installed; I'm not quite sure what I've installed - it says I've installed "ocl-icd-libopencl1". So my questions are: Do I have OpenCL on my system? If not, do I have to get the proprietary ATI drivers for the sake of OpenCL (fglrx wrecks Unity completely on my system; I'd need directions to fix that if this is the choice)? Should I install 64-bit Ubuntu on this system instead?
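
    For anyone in the same spot, a quick way to check whether a usable OpenCL ICD is actually present is to compile a tiny program against the OpenCL headers and enumerate the platforms. This is only a sketch, assuming the usual Ubuntu package names (opencl-headers plus a vendor ICD such as the one shipped with fglrx); it will report zero platforms if only the ICD loader (ocl-icd-libopencl1) is installed:

        // Build with: g++ clcheck.cpp -lOpenCL
        // Reports the OpenCL platforms and device counts the ICD loader can see.
        #include <cstdio>
        #include <CL/cl.h>

        int main()
        {
            cl_uint nplatforms = 0;
            cl_int err = clGetPlatformIDs(0, nullptr, &nplatforms);
            if (err != CL_SUCCESS || nplatforms == 0) {
                std::printf("No OpenCL platforms found (err=%d) - no vendor ICD is installed.\n", (int)err);
                return 1;
            }
            cl_platform_id platforms[8];
            clGetPlatformIDs(nplatforms > 8 ? 8 : nplatforms, platforms, nullptr);
            for (cl_uint i = 0; i < nplatforms && i < 8; ++i) {
                char name[256] = {0};
                cl_uint ndevices = 0;
                clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, nullptr);
                clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, nullptr, &ndevices);
                std::printf("Platform %u: %s (%u device(s))\n", i, name, ndevices);
            }
            return 0;
        }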

  • How can I set my resolution to 1280x1024 on an Acer Aspire Revo 3700?

    - by torbengb
    I've just set up a new nettop computer (Acer Aspire Revo 3700: CPU: Atom D525, GPU: Nvidia ION2) with a clean install of Ubuntu 10.10 using the standard USB pendrive method. Almost everything works OK, but the graphics are not: the recommended Nvidia driver is activated but the monitor is not detected, so the resolution is wrong. How can I make Ubuntu detect my monitor? How can I get the proper resolution (1280x1024) in Ubuntu? My monitor is not a CRT but an LCD: it's a BenQ, model T905, with 1280x1024 resolution at 60Hz, connected via a normal VGA cable. DVI or HDMI is not an option. When I go to System > Preferences > Monitors, I get: "It appears that your graphics driver does not support the necessary extensions to use this tool. Do you want to use your graphics driver vendor's tool instead?" If I answer No I get one window, and for Yes I get another (screenshots omitted); in neither case do I see a way to fix the problem. The main reason for getting this new computer was that I was sick of having graphics problems on the old one, with a very ugly workaround that didn't give me hardware support - but at least I got the resolution. Why is this so difficult... sigh!

  • Ubuntu 12.04 server froze during the first boot after it was installed.

    - by user69021
    I installed Ubuntu Server 12.04 on my new server and it froze on the first boot. It just stopped and I can't proceed any further. Server specifications: Dell PowerEdge T620; CPU: Xeon E5-2665 2.4GHz x 2; RAM: 8GB RDIMM 1333MHz x 12; HDD: 3TB Near Line SAS 7.2K x 8; RAID controller: PERC H710; GPU: NVIDIA Tesla C2075 x 4. I have a screenshot of the screen it stopped on, but I cannot attach it because my privilege level is currently too low. Here are the last messages while booting: [5.048743] Freeing unused kernel memory : 920k freed [5.049046] Write protecting the kernel read-only data : 12288k [5.052973] Freeing unused kernel memory : 1608k freed [5.056132] Freeing unused kernel memory : 1196k freed Loading, please wait... [5.070236] udevd[218]: starting version 175 Begin: Loading essential drivers ... done. Begin: Running /scripts/init-premount ... done. Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done. [5.089030] megasas: 00.00.06.12-rc1 Wed. Oct. 5 17:00:00 PDT 2011 [5.089518] megasas: 0x1000:0x005b:0x1028:0x1f35: bus 1:slot 0:func 0 [5.089739] megaraid_sas 0000:01:00.0: PCI INT A -> GSI 34 (level, low) -> IRQ 34 [5.089937] megasas: FW now in Ready state [5.090427] dca service started, version 1.12.1 [5.091463] Intel(R) Gigabit Ethernet Network Driver - version 3.2.10-k [5.091578] Copyright (c) 2007-2011 Intel Corporation. [5.091712] igb 0000:06:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [5.111090] megasas: IOC Init cmd success [5.123124] usb 1-1: new high-speed USB device number 2 using ehci_hcd What can I do about this?

  • Switching to workspaces view shows buggy blue background

    - by G1i1ch
    When switching to the workspaces view, the background turns blue. The same happens when I switch between multiple windows of the same app. This started after I tried GNOME Shell out of curiosity, installed through the official repos as normal. I tried it out but switched back; does anyone have an idea why this is happening? I've got an Intel GPU and Unity 3D. If anyone can give me some direction, thanks a lot. Update: it looks like during the switch OpenGL was somehow disabled. glxinfo returns: name of display: :0 Xlib: extension "GLX" missing on display ":0". (that line repeats several times) Error: couldn't find RGB GLX visual or fbconfig (followed by several more of the same Xlib lines) This is very distressing; all I wanted to do was try out GNOME 3. Does anyone have an idea how I can get OpenGL back? Update: OK, I found out what happened. Apparently it's not that hard to accidentally install an nvidia driver. All I had to do was remove all the packages with nvidia in their name and I got 3D back! For anyone else, this is how I found out: check /var/log/Xorg.0.log with `$ cat /var/log/Xorg.0.log | more`; if you see a line like `(EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)` somewhere, you've got an nvidia problem. Thanks everyone for trying to help :D

  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time during the 3AM hour, my Xorg process will increase to 100% CPU and GPU load will also increase to 100%. The process also becomes unkillable: I cannot sudo kill -9 it or get back control with sudo service lightdm restart, and I also cannot switch to a tty with ctrl + alt + f1. To reboot I have to log in over ssh, but this is not ideal because if I reboot while it is doing this my ZFS pool will fail to mount when it comes back up (that is where my /home is). Does anyone have any ideas as to why I can't stop and restart Xorg, or even better, know why this is happening? Thanks. NOTE: for anyone who comes looking for the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem; my all-time record is 6 days without a crash. I'll post here if it crashes again or I'm able to set a new record.

  • Why would video stutter on HDMI but not on DVI?

    - by CorvT
    I've got a system running Ubuntu 12.04 with an i3 2120T CPU/GPU. When I play video through mplayer, I notice that when I'm hooked up to a screen via HDMI there is a small stutter (1-2 frames) every few seconds. I don't see this happening when I connect via DVI on the same screen. Resolution and refresh rate are the same for both HDMI and DVI, so I'm not sure where else the problem could be coming from. I've also tried two different screens and different cables. I see the stutter with either HDMI-HDMI cables, or a DVI-HDMI cable with DVI from the PC and HDMI into the screen. I don't see the stutter with DVI-DVI cables, or when I use an HDMI-DVI cable with HDMI from the PC and DVI into the screen. I've also tried an AMD 5XXX series card with the open source radeon driver and saw the same problem. I then tried an nVidia GeForce 210 card with the closed source driver, and the stutter went away. To me this smells like a driver/mesa/GLX issue (since the problem went away with the nvidia card/driver), but I have no idea how to track it down.

  • How do I debug an overheating problem?

    - by Tab
    Hello guys. I have a problem with my laptop (Dell Inspiron 1564, Core i5, 4GB RAM, ATI Mobility Radeon HD 4300, running Ubuntu 10.10 32-bit). It shuts down abruptly, without even a lag in the application I am working in before the shutdown. I think it's an overheating problem. The laptop is hot all the time when I am running Ubuntu. When I switch back to Windows, even under intense load it won't shut down or show any problem as long as I keep proper ventilation (when the air openings are blocked it does the same thing). On Ubuntu I don't usually do things that need much CPU power - mostly surfing the internet, coding web pages and sometimes playing with Python and Ruby. I am not enabling desktop effects, so there is no GPU load except the normal GNOME GUI. As I write this, the processor load in the panel monitor applet is 0%, memory is 11% used by programs and 22% by cache, and I have the CPU frequency monitor for each of the 4 cores set to 1.20GHz (the lowest possible value; I am not sure if this applet really limits CPU usage). Running sensors in a terminal gave me: temp1: +26.8°C (crit = +100.0°C) temp2: +0.0°C (crit = +100.0°C) hddtemp /dev/sda in the terminal gave me: /dev/sda: WDC WD3200BEVT-75ZCT2: 46°C All of that looks fine, but the laptop is really hot: I can feel it in the keyboard, the touchpad is painful to touch, and the fan is always spinning. I am also placing 2 small USB-powered fans under the laptop right now, and the laptop is lifted over the fans so it's well ventilated. When I am running Windows it doesn't get this hot except under a really big CPU load, and this is keeping me away from using Linux for everyday tasks. I don't care much about speed - I can deal with a slow machine as long as it's not going to shut down abruptly. So please, if you can help me: what are the possible causes, and where should I start?

  • Logitech C510 HD Webcam related question

    - by Ashfame
    I am going to buy a Logitech C510 HD webcam, and I've checked in other questions here on AskUbuntu that it works out of the box with Cheese. My question is: is there any functionality I'd want that could be limited? I would like to use it with everything - Skype, Gtalk video chat, Facebook, YouTube, etc. I would also like the ability to record or do a video call at a lower resolution (it's a 720p camera). Also, since I read that I should have a Core 2 Duo at 2.2GHz for 720p but I have a 2.0GHz one, would it be possible to record first and encode afterwards if my processor really starts struggling with on-the-fly encoding? Anything else that I should consider? I also have an ATI HD 4850 512MB card - can it help with encoding on the fly, or is there a chance that my graphics card alone can handle it and those specs were just for a system without a graphics card? I believe so. Also, I have no problem working in the console if I have to do some of these things in a terminal. Other possibly significant details: I have a dual screen setup, 29" (1360x768) and 22" (1680x1050), which might be drawing some power from the GPU, and I have 2GB of DDR2 800MHz RAM.

  • Laptop runs HOT after 12.10 upgrade!

    - by dinkelk
    I was running 12.04 for 6 months; my laptop ran almost silently and cool enough to hold on my lap. I updated to 12.10 and now my computer gets too hot to hold on my lap and the fan is constantly running at full blast. This is the output of sensors: acpitz-virtual-0 Adapter: Virtual device temp1: +84.0°C (crit = +99.0°C) coretemp-isa-0000 Adapter: ISA adapter Physical id 0: +84.0°C (high = +86.0°C, crit = +100.0°C) Core 0: +74.0°C (high = +86.0°C, crit = +100.0°C) Core 1: +72.0°C (high = +86.0°C, crit = +100.0°C) Core 2: +75.0°C (high = +86.0°C, crit = +100.0°C) Core 3: +84.0°C (high = +86.0°C, crit = +100.0°C) radeon-pci-0100 Adapter: PCI adapter temp1: +76.0°C I have an HP Pavilion dv6, i7, AMD Radeon graphics. Please let me know if you need additional information. What could be different between the two Ubuntu versions that caused such a drastic change? Edit 1: Per @Paul's suggestion, I ran htop to try to narrow down the problem (screenshot omitted). This is about 10 minutes after boot-up; htop, Yakuake, and a Chrome window with one tab opened to this question are all that I have manually opened. The most taxing program on the CPU is htop itself. I think the problem must lie elsewhere; my temps are already up to ~65C for the CPU and ~69C for the GPU, with nearly 0% CPU usage.

  • How can I get nvidia-96 installed?

    - by Bob
    I'm at my wits' end here. This is my last effort before I go back to Windows. I need to get the nvidia-96 proprietary driver installed. Synaptic won't install it because of unmet dependencies. I installed every single dependency it listed except for "xorg-video-abi-10", which does not show up as an item that can be installed. I have no idea what to do. Using 11.10 with an NVIDIA GeForce 3 GPU. Anyone know how to get this dang driver installed? @fossfreedom: the open source driver is extremely slow. So slow that the OS is unusable - words appear seconds after I type them, and programs take forever to perform actions. It is also causing my monitor to turn on and off for no reason. @yossile: Synaptic shows that I have xserver-xorg-core installed, and xserver-xorg-core-udeb does not show up as something that can be installed. @papseddy: when I try to install the downloaded nvidia driver it says it won't work until I disable the Nouveau kernel driver. I have tried everything to get the Nouveau kernel driver disabled. Nothing has been successful.

  • Learning OpenGL GLSL - VAO buffer problems?

    - by Bleary
    I've just started digging through OpenGL and GLSL, and now I've stumbled on something I can't get my head around. I've stepped back to loading a simple cube and using a simple shader on it, but the result is triangles drawn incorrectly and/or missing. The code was working perfectly on plain meshes before I attempted to move to VAOs, so none of the code for storing the vertices and indices has changed. Screenshots: http://i.stack.imgur.com/RxxZ5.jpg http://i.stack.imgur.com/zSU50.jpg What I have for creating the VAO and buffers is this:

        //Create the Vertex array object
        glGenVertexArrays(1, &vaoID);
        // Finally create our vertex buffer objects
        glGenBuffers(VBO_COUNT, mVBONames);
        glBindVertexArray(vaoID);
        // Save vertex attributes into GPU
        glBindBuffer(GL_ARRAY_BUFFER, mVBONames[VERTEX_VBO]);
        // Copy data into the buffer object
        glBufferData(GL_ARRAY_BUFFER, lPolygonVertexCount*VERTEX_STRIDE*sizeof(GLfloat), lVertices, GL_STATIC_DRAW);
        glEnableVertexAttribArray(pos);
        glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE*sizeof(GLfloat), 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBONames[INDEX_VBO]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, lPolygonCount*sizeof(unsigned int), lIndices, GL_STATIC_DRAW);
        glBindVertexArray(0);

    And the code for drawing the mesh:

        glBindVertexArray(vaoID);
        glUseProgram(shader->programID);
        GLsizei lOffset = mSubMeshes[pMaterialIndex]->IndexOffset*sizeof(unsigned int);
        const GLsizei lElementCount = mSubMeshes[pMaterialIndex]->TriangleCount*TRIAGNLE_VERTEX_COUNT;
        glDrawElements(GL_TRIANGLES, lElementCount, GL_UNSIGNED_SHORT, reinterpret_cast<const GLvoid*>(lOffset));
        // All the points are indeed in the correct place!?
        //glPointSize(10.0f);
        //glDrawElements(GL_POINTS, lElementCount, GL_UNSIGNED_SHORT, 0);
        glUseProgram(0);
        glBindVertexArray(0);

    My eyes have become bleary looking at this today, so any thoughts or a fresh set of eyes would be greatly appreciated.
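
    One detail worth double-checking in a setup like this is that the index type passed to glDrawElements matches what was actually uploaded: the buffer above is sized with sizeof(unsigned int) while the draw call uses GL_UNSIGNED_SHORT. As a point of comparison only (not the poster's code; names like ibo are placeholders and a GLEW-style loader is assumed), a consistent upload and draw looks like this:

        #include <GL/glew.h>
        #include <vector>

        // Upload 32-bit indices and draw them with the matching GL_UNSIGNED_INT type.
        void uploadAndDraw(GLuint ibo, const std::vector<unsigned int>& indices)
        {
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                         indices.size() * sizeof(unsigned int),   // bytes, based on the real index type
                         indices.data(), GL_STATIC_DRAW);

            // The type argument describes what is stored in the bound element buffer:
            // GL_UNSIGNED_INT for unsigned int indices, GL_UNSIGNED_SHORT for unsigned short.
            glDrawElements(GL_TRIANGLES,
                           static_cast<GLsizei>(indices.size()),
                           GL_UNSIGNED_INT,
                           nullptr); // byte offset 0 into the element buffer
        }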

  • Trying to install Proprietary Nvidia Graphics Drivers

    - by Peter Snow
    After reading and trying many different suggestions for some hours, I returned to this how-to: https://help.ubuntu.com/community/BinaryDriverHowto/Nvidia The first problem I encounter is how to identify which of the listed drivers supports my Nvidia GeForce 630M graphics card. Following the links doesn't really help, since it is not stated there either (except where support for a new driver was added later, which is explicitly stated - the devices covered originally are not). However, even if I knew, if it doesn't appear in the 'Additional Drivers' dialogue (see below), how would I install it? Second issue: the article goes on to say that available drivers for my hardware are usually listed in 'Additional Drivers'. In my case, they aren't. Unfortunately, it doesn't tell me how to correct that or work around it. I've checked the BIOS and there is no way offered there to disable the integrated graphics, only the Nvidia graphics. I've also tried each available option in this: $ sudo update-alternatives --config i386-linux-gnu_gl_conf My system is an Acer Aspire 4752G bought in May 2012. I'm running Ubuntu 12.04 LTS. uname -a: 3.2.0-38-generic-pae #61-Ubuntu SMP Tue Feb 19 12:39:51 UTC 2013 i686 i686 i386 GNU/Linux It's 64-bit hardware but I installed a 32-bit OS for greater software compatibility. Running $ sudo tail -fn 500 /var/log/Xorg.0.log | grep '(EE)' returns: (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 28.886] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found) The reason for wanting the proprietary drivers is that my laptop comes with a 3D-accelerated graphics adapter, and rather than confining myself to struggling with the on-board graphics, I would rather use it. I also want to experiment with using it for bitmining (which uses the GPUs for computing power).

  • How can I mark a pixel in the stencil buffer?

    - by János Turánszki
    I never used the stencil buffer for anything until now, but I want to change this. I have an idea of how it should work: the GPU discards or keeps rasterized pixels before the pixel shader based on the stencil buffer value at the given position and some stencil operation. What I don't know is how I would mark a pixel in the stencil buffer with a specific value. For example, I draw my scene and want to mark everything which is drawn with a specific material (this material could be looked up from a texture, so ideally I should mark the pixel in the pixel shader), so that later, when I do some post-processing on my scene, I would only do it on the marked pixels. I didn't find anything on the internet besides how to set up a stencil buffer and explanations of the different stencil operations. I was expecting to find some system-value semantic like SV_Depth to write to in the pixel shader (because the stencil buffer shares the same resource with the depth buffer in D3D11), but there is no such thing on MSDN. So how should I do this? If I am misunderstanding something, please help me clear it up.
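
    For what it's worth, in plain D3D11 the marking usually happens through the stencil reference value of a draw call (stencil pass op REPLACE) rather than from the pixel shader, which matches the observation above that no SV_* semantic exists for it; that gives per-draw-call marking rather than per-texel marking. A rough, untested sketch of the two depth-stencil states involved (device/context names are placeholders):

        #include <d3d11.h>

        // State used while rendering the material to mark: every pixel that passes
        // the depth test gets the stencil reference value written into the stencil buffer.
        ID3D11DepthStencilState* CreateMarkState(ID3D11Device* device)
        {
            D3D11_DEPTH_STENCIL_DESC desc = {};
            desc.DepthEnable      = TRUE;
            desc.DepthWriteMask   = D3D11_DEPTH_WRITE_MASK_ALL;
            desc.DepthFunc        = D3D11_COMPARISON_LESS;
            desc.StencilEnable    = TRUE;
            desc.StencilReadMask  = 0xFF;
            desc.StencilWriteMask = 0xFF;
            desc.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
            desc.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_REPLACE;
            desc.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
            desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
            desc.BackFace = desc.FrontFace;

            ID3D11DepthStencilState* state = nullptr;
            device->CreateDepthStencilState(&desc, &state);
            return state;
        }

        // While drawing the marked material:   context->OMSetDepthStencilState(markState, 1);
        // For the post-process pass, use an otherwise identical state whose
        // FrontFace.StencilFunc is D3D11_COMPARISON_EQUAL (pass op KEEP) and bind it
        // with the same reference value, so only the marked pixels are shaded.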

  • Fatal X server error: Failed to submit to batchbuffer

    - by Jan
    Ubuntu 10.04 Lucid Lynx used to run fine on my computer. Since a few weeks ago, my X server crashes out of the blue while the computer is idle and I'm logged into a GNOME session (I'm then greeted with a new GDM login prompt). After the crash, /var/log/gdm/:0.log.1 has the following: Fatal server error: Failed to submit batchbuffer: Input/output error Please consult the The X.Org Foundation support at http://wiki.x.org for help. ~/.xsession-errors.old has symptoms of X clients dying: nm-applet: Fatal IO error 11 (Die Ressource ist zur Zeit nicht verfügbar - the resource is temporarily unavailable) on X server :0.0. dmesg says: [191848.390081] [drm:i915_hangcheck_elapsed] ERROR Hangcheck timer elapsed... GPU hung [191848.390086] render error detected, EIR: 0x00000010 [191848.390088] IPEIR: 0x00000000 [191848.390090] IPEHR: 0x01800002 [191848.390091] INSTDONE: 0xffffffff [191848.390093] INSTPS: 0x8001e020 [191848.390095] INSTDONE1: 0xbfffffff [191848.390097] ACTHD: 0x0a47b014 [191848.390099] page table error [191848.390100] PGTBL_ER: 0x00000002 [191848.390103] [drm:i915_handle_error] ERROR EIR stuck: 0x00000010, masking [191848.390127] [drm:i915_do_wait_request] ERROR i915_do_wait_request returns -5 (awaiting 5617217 at 5617205) Is this a known problem that can be traced back to the X server from the Ubuntu repositories? How would I debug this? Edit: There's a relevant bug on LP.

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place - except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is one based on the VLC player libraries - by virtue of using the same resources as VLC Player it is known to be very capable; it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more of a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely: renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily; super simple - all that is needed is a way to load, jump and render a frame programmatically, ideally using the system's codecs and not requiring an assortment of plugins; permissive license (LGPL or freer); .NET bindings at least, all the better if it is natively managed. Can anyone suggest a lightweight .NET library that can take a video file and spit out some frames into a byte[]?
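
    For reference, the VLC route mentioned above ultimately drives libVLC's C video callbacks, which do exactly this: the player decodes into a caller-supplied buffer that can then be copied into a texture. A rough sketch against the C API follows (the file name, dimensions and pixel handling are invented for illustration; the .NET wrappers expose the same mechanism):

        #include <vlc/vlc.h>
        #include <cstdint>
        #include <vector>

        static const unsigned kWidth = 640, kHeight = 360;
        static std::vector<uint8_t> frame(kWidth * kHeight * 4); // one RV32 (32-bit RGB) frame

        // libVLC asks where to decode the next frame...
        static void* lock_cb(void* /*opaque*/, void** planes)
        {
            *planes = frame.data();
            return nullptr; // picture identifier, unused here
        }

        // ...and signals when the frame is complete - this is the point where the
        // bytes would be uploaded to a GPU texture each frame.
        static void unlock_cb(void* /*opaque*/, void* /*picture*/, void* const* /*planes*/) {}
        static void display_cb(void* /*opaque*/, void* /*picture*/) {}

        int main()
        {
            libvlc_instance_t*     vlc = libvlc_new(0, nullptr);
            libvlc_media_t*        med = libvlc_media_new_path(vlc, "video.mp4"); // placeholder path
            libvlc_media_player_t* mp  = libvlc_media_player_new_from_media(med);

            libvlc_video_set_format(mp, "RV32", kWidth, kHeight, kWidth * 4);
            libvlc_video_set_callbacks(mp, lock_cb, unlock_cb, display_cb, nullptr);
            libvlc_media_player_play(mp);
            // ...run the render loop here, then release the player, media and instance.
            return 0;
        }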

  • Why is rvalue write in shared memory array serialised?

    - by CJM
    I'm using CUDA 4.0 on a GPU with compute capability 2.1. One of my device functions is the following:

        __device__ void test(int n, int* itemp) // itemp is shared memory pointer
        {
            const int tid = threadIdx.x;
            const int bdim = blockDim.x;
            int i, j, k;
            bool flag = 0;
            itemp[tid] = 0;
            for (i = tid; i < n; i += bdim)
            {
                // ... code that produces some values of "flag" ...
            }
            itemp[tid] = flag;
        }

    Each thread is checking some conditions and producing a 0/1 flag. Then each thread writes flag at the tid-th location of a shared int array. The write statement "itemp[tid] = flag;" gets serialized -- though "itemp[tid] = 0;" is not. This is causing a huge performance lag which technically should not be there -- I want to avoid it. Please help.

  • Checking for collisions on a 3D heightmap

    - by Piku
    I have a 3D heightmap drawn using OpenGL (which isn't important). It's represented by a 2D array of height data. To draw this I go through the array, using each point as a vertex. Three vertices are wound together to form a triangle, two triangles to make a quad. To stop the whole mesh being tiny I scale it by a certain amount called 'gridsize'. This produces a fairly nice, lumpy, angular terrain, kind of similar to something you'd see in old Atari/Amiga or DOS '3D' games (think Virus/Zarch on the Atari ST). I'm now trying to work out how to do collision with the terrain, testing to see if the player is about to collide with a piece of scenery sticking upwards or fall into a hole. At the moment I am simply dividing the player's co-ordinates by the gridsize to find which vertex the player is on top of, and it works well when the player is exactly over the corner of a triangle of terrain. However... How can I make it more accurate for the bits between the vertices? I get confused since they don't exist in my heightmap data; they're a product of the GPU drawing a triangle between three points. I can calculate the height of the point closest to the player, but not the space between them. I.e. if the player is hovering over the centre of one of these 'quads', rather than over a corner vertex, how do I work out the height of the terrain below them? Later on I may want the player to slide down the slopes in the terrain.
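
    One way to handle the space between vertices is to interpolate across the triangle the player is standing in, instead of snapping to the nearest corner. A small sketch follows, assuming the heightmap is stored row-major as height[z * mapWidth + x] and that each quad's diagonal runs between its (1,0) and (0,1) corners (swap the two branches if the winding uses the other diagonal):

        #include <cmath>

        // Returns the terrain height under world position (worldX, worldZ).
        // gridsize is the scale applied when the mesh is built; bounds checks omitted.
        float terrainHeightAt(const float* height, int mapWidth,
                              float gridsize, float worldX, float worldZ)
        {
            float gx = worldX / gridsize;
            float gz = worldZ / gridsize;
            int   ix = static_cast<int>(std::floor(gx));
            int   iz = static_cast<int>(std::floor(gz));
            float fx = gx - ix; // 0..1 across the quad
            float fz = gz - iz;

            // Heights at the four corners of the quad the player is inside.
            float h00 = height[ iz      * mapWidth + ix    ];
            float h10 = height[ iz      * mapWidth + ix + 1];
            float h01 = height[(iz + 1) * mapWidth + ix    ];
            float h11 = height[(iz + 1) * mapWidth + ix + 1];

            // Pick the triangle of the quad and interpolate along its plane.
            if (fx + fz <= 1.0f)
                return h00 + fx * (h10 - h00) + fz * (h01 - h00);
            else
                return h11 + (1.0f - fx) * (h01 - h11) + (1.0f - fz) * (h10 - h11);
        }

    Comparing the player's height against this value each frame then covers both the "about to collide with scenery" and "falling into a hole" cases, and the same value can later drive sliding down slopes.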

  • Acer Aspire 5542G overheating with Ubuntu/Kubuntu 12.04

    - by james
    I have an Acer Aspire 5542G laptop purchased a couple of years ago. Until now I used Windows 7 on it; then I tried Ubuntu 12.04. Everything was fine except the overheating issue. I updated Ubuntu with all security fixes and available updates, but nothing solved the problem. With idle use like internet browsing, the CPU fan speeds up a lot and I can feel very hot air coming from the vent (comparable to playing a serious 3D game in Windows). It never quite reaches the point of freezing or shutting down, but as long as I'm using it, with no intensive tasks at all, the laptop stays too hot. This wasn't the case with Windows 7: there the fan does not spin up at all under normal use. I heard there was a manufacturing defect with some Acer laptops, but I don't think that's the case with my laptop, since Windows 7 runs perfectly. I updated the BIOS to the latest version. I cleaned the dust from the vents. I tried Kubuntu 12.04, fully up to date. Nothing solved the issue. My laptop specs are: CPU: AMD Turion II X2 M500 @ 2.2GHz; GPU: AMD Mobility Radeon HD 4570; 3GB RAM and a 320GB hard disk.

  • Should I use OpenGL or DX11 for my game?

    - by Sundareswaran Senthilvel
    I'm planning to write a game from scratch (a BIG game, for commercial purposes). I'm aware that certain compute libraries are available on the market: OpenCL, the AMD APP SDK, and C++ AMP as well as DirectCompute (the latter two from MS); I'm NOT interested in CUDA. I'm planning to write the game from scratch, which includes the following engines: a physics engine, an AI engine, and the main game engine (and anything else I've missed). I'm aware that there are some free physics engine libraries on the market; I'm not sure about free AI engine libraries. I'm a bit confused about choosing between the OpenCL, AMD APP SDK, and C++ AMP libraries (as already mentioned, I'm NOT interested in CUDA). I want my game to be published on Windows/Android/Mac OS X, which means it should be a cross-platform game: I will have one source code base that I'll compile for the various platforms like Windows/Android/Mac OS X, and any others I've missed. Note: since I'm NOT a Java guy, kindly do NOT suggest the Java language. For graphics, should I use OpenGL or DirectX 11? I have heard that OpenGL runs on a single core, and I'm not sure about DirectX 11. Between OpenGL and DirectX, which one should I follow? Or are there other graphics APIs I should start with? I want to make use of the parallelism in the GPU as well as the CPU.

  • Ubuntu 12.04 won't shut down - stopping winbind daemon

    - by jan
    My Precise Pangolin sometimes won't shut down - the screen goes black with text on it. Usually the last line says something like "stopping winbind daemon" (sometimes it's VirtualBox, which is listed above the winbind daemon; edit: sometimes the last line says "running unattended updates"), and it stays like this for about ten minutes. Then I usually hold the power button for 5s to shut it down. It's very unpredictable - sometimes the computer shuts down without a problem and sometimes it hangs. I've tried many ways to shut it down: the hardware button, the panel applet, sudo shutdown -h now, sudo poweroff, sudo halt, etc. Even sudo reboot or restart from the panel applet has this problem. Sometimes it works OK, but every method named hung at least once on the same (damned) line. My specs: FUJITSU SIEMENS LIFEBOOK E8310, Intel Core2 Duo T7300 @ 2.00GHz, 3GB RAM, GPU: Mobile Intel(R) 965 Express Chipset Family, Ubuntu 12.04.2 32-bit, 3.5.0-41-generic kernel (but it did this on older kernels and 12.04.x releases too). Any ideas what I should try next? Thanks a lot! Jan

  • AMD E-450 APU with HD-6320 graphics produces jerky videos

    - by user80424
    I'm trying to get smooth video playback on a Lenovo E325 laptop equipped with an AMD E-450 APU. This processor has an ATI HD 6320 GPU integrated. I installed the ATI proprietary driver (Catalyst 12.04) as described here. Everything went fine and I got no errors. However, I cannot play HD video smoothly: almost every second frame is dropped in VLC with hardware acceleration enabled. vainfo shows: libva: VA-API version 0.32.0 Xlib: extension "XFree86-DRI" missing on display ":0". libva: va_getDriverName() returns 0 libva: Trying to open /usr/lib/x86_64-linux-gnu/dri/fglrx_drv_video.so libva: va_openDriver() returns 0 vainfo: VA-API version: 0.32 (libva 1.0.15) vainfo: Driver version: Splitted-Desktop Systems XvBA backend for VA-API - 0.7.8 vainfo: Supported profile and entrypoints VAProfileH264High : VAEntrypointVLD VAProfileVC1Advanced : VAEntrypointVLD fglrxinfo says: display: :0 screen: 0 OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: AMD Radeon HD 6320 Graphics OpenGL version string: 4.2.11631 Compatibility Profile Context and fgl_glxgears produces ~250fps. Why are HD video frames being dropped? The CPU doesn't go above 50% during playback.

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being sent to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish. We logically split our terrain mesh into smaller grid cells which are 32x32 world units. Each cell has metadata that dictates the four 256x256 textures used for splatting, along with the alpha blend data, shadow, and light mappings. Each cell contains 81 vertices in a 9x9 grid. Presently, we examine each cell and determine the four textures being used to splat it. We combine that geometry with any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order is kept the same as the other like cells and blending still happens in the right order. But even with this batching approach, it isn't uncommon when looking out across an area of open terrain to have a batch count between 1200 and 1700, depending upon how frequently the textures or texture blends differ between cells. We are only doing frustum culling presently. So, using texture splatting, are there other alternatives that can reduce the batch count and keep rendering performance-friendly even under DirectX 9c? We considered using texture atlases, since we're targeting DirectX 9c and older OpenGL platforms, but trying to repeat textures using atlases and shaders results in seam artifacts which we haven't been able to eliminate, except by disabling mipmapping. Disabling mipmapping results in poor quality textures at a distance. How have others batched together terrain geometry such that one could splat terrain using various textures, minimizing batch count and texture state switches so that rendering performance isn't negatively impacted?
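
    For reference, the grouping step described above (merging cells whose four splat textures match, regardless of splat order) boils down to keying batches on the sorted set of texture IDs. This is only an illustrative sketch with made-up types, not the engine's actual code:

        #include <algorithm>
        #include <array>
        #include <cstdint>
        #include <map>
        #include <vector>

        // Each cell references four splat texture IDs plus an index to its geometry.
        struct Cell {
            std::array<uint32_t, 4> splatTextures;
            int geometryIndex;
        };

        // Key on the sorted texture IDs so splat order does not matter; the blend
        // maps are assumed to have been re-ordered to match, as described above.
        using TextureSetKey = std::array<uint32_t, 4>;

        std::map<TextureSetKey, std::vector<int>> buildBatches(const std::vector<Cell>& visibleCells)
        {
            std::map<TextureSetKey, std::vector<int>> batches;
            for (const Cell& cell : visibleCells) {
                TextureSetKey key = cell.splatTextures;
                std::sort(key.begin(), key.end());
                batches[key].push_back(cell.geometryIndex);
            }
            return batches; // one draw call (and one texture-state switch) per map entry
        }

    Every distinct key then costs one batch, which is why open terrain with frequently changing texture sets pushes the count toward the 1200-1700 range mentioned above.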

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning, I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or white texture. Here are my questions: (1) I assume the position that's automatically sent to the vertex shader from Ogre is in object space? (2) The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader? (3) Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly. Here's my shader code:

        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float3 eyePosition,
                                out float4 outPosition : POSITION,
                                out float Depth )
        {
            // position is in object space
            // outPosition is in camera space
            outPosition = mul( worldViewProjMatrix, position );

            // calculate distance from camera to vertex
            Depth = length( eyePosition - position );
        }

        void DepthFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fNear,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // clamp output using clip planes
            float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
            outColor = float4( fColor, fColor, fColor, 1.0 );
        }

    fNear is the near clip plane for the scene; fFar is the far clip plane for the scene.

  • Can't login to Unity 3d after enabling Xinerama for a short moment

    - by Amir Adar
    Today I connected a second monitor to my computer. I set it up using nVidia's control panel and all was working quite well, so I figured it wouldn't be a problem to try Xinerama, just to see the difference between it and TwinView. After enabling Xinerama and restarting the X session, I found I was logged into a Unity 2D session. I thought it was a problem with Xinerama, so I switched back to TwinView, but it still logged me into Unity 2D. I tried disconnecting the second monitor - no luck: still Unity 2D. I tried changing GPU drivers and installing drivers from a separate PPA, and still I was logged into Unity 2D. Up until this point, I never had any problem logging into Unity 3D; it only happened after I tried using Xinerama. I should note that I was doing all this while updates were going on in the background, so it could be related to that, though I can't imagine how (I tried booting with another kernel, but no luck). So what exactly happened? Did changing the mode to Xinerama trigger some other changes that I'm not aware of? Did the updates cause some malfunction in the driver? Is it something else?
