Search Results

Search found 955 results on 39 pages for 'gpu acceleration'.

Page 22 of 39

  • How do I set the correct monitor resolution with Nvidia drivers for a monitor that does not send EDID?

    - by Torben Gundtofte-Bruun
    I keep having trouble getting the correct monitor resolution - every time I reinstall, I happen to use a newer Ubuntu release, and the old tricks I used to know no longer work. Instead of leaving a long trail of questions for every new release, I am looking for a more universal and timeless solution. What is the correct way to set the correct monitor resolution with an Nvidia GPU for a screen that does not send EDID values? Note: This is a "dummy" question -- with help from the chat I already found the answer, and I am now going to add my own answer to document a solution that is hopefully universal (a configuration sketch also follows this entry).

    Read the article
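    A minimal xorg.conf sketch for this scenario (an assumption-laden illustration, not the asker's posted answer): the section names and the "UseEDID" / "ModeValidation" options are standard X.Org / nvidia driver ones, but the sync ranges and the resolution below are placeholders that must be replaced with the actual panel's values.

        Section "Monitor"
            Identifier  "Monitor0"
            HorizSync   30.0 - 83.0        # placeholder range; use the panel's specs
            VertRefresh 56.0 - 75.0        # placeholder range
            # If needed, add a Modeline generated with `cvt <width> <height> <refresh>` here.
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            Option      "UseEDID" "FALSE"                  # ignore the (missing) EDID
            Option      "ModeValidation" "NoEdidModes"     # do not validate modes against EDID
        EndSection

        Section "Screen"
            Identifier  "Screen0"
            Device      "Device0"
            Monitor     "Monitor0"
            SubSection "Display"
                Modes   "1280x1024"                        # placeholder resolution
            EndSubSection
        EndSection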

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for a few days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be as trivial as "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques I have seen so far for controlling the FPS use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in milliseconds for when rendering started, but then rely on the CPU to do the math (a sketch of that pattern follows this entry). Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.

    Read the article
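    A minimal sketch of the CPU-side pacing the question refers to (assumes GLUT/freeglut and double buffering; TARGET_FPS and onTimer are names made up for the example). The GPU simply renders whenever a frame is submitted; the cadence is set either by vsync or by pacing like this on the CPU.

        #include <GL/glut.h>

        const int TARGET_FPS    = 60;
        const int FRAME_TIME_MS = 1000 / TARGET_FPS;

        void onTimer(int) {
            glutPostRedisplay();                            // schedule the next frame
        }

        void display() {
            int frameStart = glutGet(GLUT_ELAPSED_TIME);    // milliseconds since glutInit()

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the scene here ...
            glutSwapBuffers();

            // The pacing happens on the CPU: wait out the remainder of the frame budget.
            int elapsed = glutGet(GLUT_ELAPSED_TIME) - frameStart;
            int wait    = (elapsed < FRAME_TIME_MS) ? FRAME_TIME_MS - elapsed : 0;
            glutTimerFunc(wait, onTimer, 0);
        }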

  • What is a good method for coloring textures based on a palette in XNA?

    - by Bob
    I've been trying to work on a game with the look of an 8-bit game using XNA, specifically using the NES as a guide. The NES has a very specific palette, and each sprite can use up to 4 colors from that palette. How could I emulate this? The way I currently accomplish it is with a texture whose values act as indices into an array of colors that I pass to the GPU. I imagine there must be a better way than this, but maybe this is the best way? I don't want to simply make sure I draw every sprite with the right colors, because I want to be able to dynamically alter the palette. I'd also prefer not to alter the texture directly using the CPU.

    Read the article

  • Files power_profile and power_method missing on Ubuntu 12.04 after a clean install

    - by Nikola
    OK, here is the problem: I am using GNOME Shell, Ubuntu 12.04, kernel 3.2.0-32-generic-pae and the proprietary drivers for my ATI card (installed via "Additional Drivers"). The laptop is an HP ProBook 4310s, and I want to control power_profile and power_method because my GPU temperature is high. Before I reinstalled Ubuntu 12.04, I used a .sh script on startup to write to those files and everything worked like a charm, but now they are missing and I can't create them. This is what I get when I try to create the directories: mkdir: cannot create directory `/sys/class/drm': No such file or directory. How can I get them back? If you need more information, just ask and I will provide it.

    Read the article

  • Lexmark X7170 shows documents as printed when they haven't

    - by Mehmet
    I made the move from Windows 7 to Ubuntu (no dual boot) because I have decided to quit gaming to free up more time for my studies; I just needed an OS for browsing the web, word processing and so on. After I installed Ubuntu I installed the AMD GPU drivers, then clicked the little printer icon, selected Add Printer, and it found the drivers for the Lexmark 7000 series, which I installed. Now my problem is that when I print something from Writer, it processes the job and thinks it has been completed, when in fact nothing has been printed. I tried printing a test page, but it was stuck on "processing" for 5 minutes. I have restarted my computer and turned the printer off and on. I'm running 64-bit, if that changes anything.

    Read the article

  • DirectCompute information

    - by N0xus
    I've been trying to make use of the GPU as part of a project of mine. I've looked into both CUDA and OpenCL, but the lack of information showing you how to introduce these into a project is shocking; even their dedicated forum groups are dead. So now I'm looking into DirectCompute. From what I can tell, it's simply a new type of shader file that makes use of HLSL. My question is this: does my program (aside from being DirectX 10 / 11) need its structure changed? I mean, is it simply a case of creating the CS file, setting it up in the project like I would any other shader, and watching the magic happen (roughly as in the sketch after this entry)? Any information on this would be appreciated.

    Read the article
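    A minimal D3D11 sketch of how a compute shader typically slots into an existing program (an illustration under assumptions, not a confirmed answer: it presumes an existing device/context, an already-compiled shader blob, and a UAV for the output; the names and thread-group count are placeholders). The application structure stays the same; one extra shader stage is created, bound and dispatched.

        #include <d3d11.h>

        // Assumes `device` and `context` already exist and `csBlob` holds the
        // compiled HLSL compute shader bytecode (e.g. from D3DCompileFromFile).
        void RunCompute(ID3D11Device* device, ID3D11DeviceContext* context,
                        ID3DBlob* csBlob, ID3D11UnorderedAccessView* outputUAV)
        {
            ID3D11ComputeShader* cs = nullptr;
            device->CreateComputeShader(csBlob->GetBufferPointer(),
                                        csBlob->GetBufferSize(), nullptr, &cs);

            context->CSSetShader(cs, nullptr, 0);
            context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);

            // Launch 64x1x1 thread groups; the group size itself is declared
            // in the HLSL with [numthreads(x, y, z)].
            context->Dispatch(64, 1, 1);

            // Unbind the UAV so the resource can be used by other stages afterwards.
            ID3D11UnorderedAccessView* nullUAV = nullptr;
            context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
            cs->Release();
        }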

  • How do I get started with embedded development for building a handheld game console?

    - by Quakeboy
    I work as an iPhone app developer now, so I know a bit of C, C++ and Objective-C, and I have also fiddled with Java and many other languages. All of it has been high-level application/game development. My final goal is to make a handheld game console, more like a home-made NES/SNES handheld console or even an Atari. I have found out about the Raspberry Pi and Arduino, but I need more information about how to approach this. 1) How do I learn to pick the best board/CPU/controller/GPU/LCD screen/LCD controller, etc.? 2) Will learning to make a NES emulator first help me understand this field? If so, are there any tutorials?

    Read the article

  • High temperature on my laptop with Radeon Mobility HD4670

    - by Lorthirk
    Like almost everyone here these days, I guess, I downloaded Quantal Quetzal to give it a try. However, I noticed that my laptop runs fairly hot, with the cooling fans almost always on, even when it is sitting at the desktop doing nothing. I downloaded XSensor to read the temperature sensors, and I saw that while the CPU stays at about 65°C, which I guess is quite normal, the GPU sits at 75°C. In comparison, my current Windows 7 installation, which dual boots with Quantal, stays at 59°C for the CPU and 65°C for the GPU. So I went reading and learned that AMD dropped support for my video card from the fglrx package, and that fglrx-legacy won't support Xorg 1.13, so I'm basically stuck with the open-source drivers. Is there anything I can try, and is it possible that the open-source drivers are the cause of the high temperature?

    Read the article

  • Flash 10.1 is here: hardware acceleration and 32 security flaws patched

    Update of 11 June 2010. Flash 10.1: hardware acceleration and 32 security flaws patched. Flash 10.1 is here. This new version of Flash brings hardware acceleration and fixes for 32 security vulnerabilities. The first of these should silence, at least in part, the criticism of the performance of Adobe's technology. Hardware acceleration allows (H.264) video to be played using the resources of the graphics card (GPU) rather than the CPU, resulting in faster, smoother playback and a processor less burdened by the player. That is how it looks on paper, at least. ...

    Read the article

  • Chrome 18: 3D for everyone and improved Canvas2D acceleration

    Chrome 18: 3D for everyone, and improved Canvas2D support. Chrome 18 has just gone stable. On the menu: improved support for Canvas2D that takes advantage of hardware acceleration (and therefore of the GPU). It should allow web applications, such as games, to run faster. For Google, with this support, 100% web versions of applications could even perform as well as traditional versions. Hardware acceleration for Canvas2D had until now been reserved for Chrome's beta channel, so the feature may still have a few small glitches. ...

    Read the article

  • Black frame around screen after HDMI connection failure

    - by Wolter Hellmund
    I was trying to watch a movie from my computer on the TV, so I connected the two with an HDMI cable. I was unable to get a working setup (the colors were all weird on the TV and the screen size was incorrect). I tried many resolutions using the nvidia-settings application, and somehow my screen ended up framed by a black border which I have been unable to remove since, even after restarting the computer and disconnecting the HDMI cable. I am using Ubuntu 11.10 amd64, my GPU is an nVidia GeForce 8600M GT, and I am using version 280 of the proprietary driver. The problem is due to some setting in my account only: I logged in to the guest session and the resolution is right there. Also, my desktop "thinks" the resolution is right (i.e. 1280x800), but it must be right at another scale, because there is pixel area occupied by the black frame.

    Read the article

  • Bumblebee optirun appears to depend on Intel

    - by user206398
    I have a Lenovo T420 with Intel and Nvidia graphics. On upgrading to Ubuntu Saucy, I had to purge and reinstall bumblebee-nvidia to get past optirun failing to find a GPU driver. Now "optirun glxgears" and "optirun sol" succeed, but optirun fails on two virtual-world viewers that it supported in the past, Cool VL (CoolVLViewer-1.26.8.34-Linux-x86) and Imprudence (Imprudence 1.4.0 beta2). In both cases the error output is huge, but it starts with libGL error: failed to load driver: i965 and libGL error: failed to load driver: swrast. From the little I can discover, i965 is an Intel graphics driver, which should not be invoked at all, and I haven't found any information about swrast. I suspect that some of the X configuration associated with Bumblebee has an Intel dependency that is triggered by certain library calls but not others, but I haven't found anything definite along those lines. The Cool VL Viewer runs without optirun, but complains that the Intel graphics are insufficient.

    Read the article

  • How many BasicEffects do you have in a Game? What is the best way to render multiple objects/shapes at once?

    - by Deukalion
    I'm trying to understand 3D rendering, and it seems that every time you render a new object (a 3D cube or something) you need a new BasicEffect for each box you render, unless you want the exact same texture. So if I have over a hundred boxes, each with a different texture, do I need at least as many BasicEffects? Won't that be too much for the CPU/GPU in the end, or result in lag? Is there a good way to render multiple objects (cubes or other shapes) at the same time? I've tried changing BasicEffect.Texture with each cube drawn, but that resulted in changing the first cube's texture too. Any suggestions would be really appreciated; I'm really new to 3D in XNA, so I'm trying to wrap my head around the best methods, for example for rendering a map of objects (of various shapes).

    Read the article

  • XBMC is slow and sluggish in Ubuntu 13.04; how can I speed it up?

    - by Dreamdealer
    I have a Zotac ZBOX ID84 with an Intel D2550 processor, an Nvidia GT520M GPU, 2GB of memory and a 320GB hard disk. I tried XBMCbuntu first and it ran perfectly for a few months, until I started to mess around with the terminal. After a re-install I couldn't get the HDMI sound to work again, so I installed the latest version of Ubuntu (13.04) instead. That worked fine: the sound and everything else worked right away, but the Ubuntu interface is sluggish and XBMC doesn't run as well as it did in XBMCbuntu. Video playback is slow and stutters; it speeds up and slows down with the complexity of the video. So the PC is more than capable of running XBMC and playing the videos, but something in Ubuntu (the GUI?) slows it down to an unusable pace. Can anybody tell me what I can do to speed things up? Since I'm new to Ubuntu, I have no clue where to start looking.

    Read the article

  • PowerXpress error with Driver Catalyst. How can I fix it?

    - by J03Bukowski
    I have installed Ubuntu 11.10 64-bit on my HP dv6-3150el. My notebook has two graphics cards:
        lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
        01:00.0 VGA compatible controller: ATI Technologies Inc Madison [AMD Radeon HD 5000M Series]
    I tried to install the proprietary graphics driver ''fglrx'' available in "Additional Drivers", which does not give me 3D graphics acceleration (and I can't install the post-release updates). Then I tried downloading and installing from the website (I tried Catalyst versions 11.8 and 11.12). The installation goes perfectly (I followed this guide and others), except that when I configure the Xorg.conf file:
        sudo aticonfig --initial
        PowerXpress error: Cannot stat '/usr/lib64/fglrx': No such file or directory
        Failed to initialize libglx for discrete GPU

    Read the article

  • How do I configure screens from the console or create screen configuration profiles?

    - by uncle Lem
    I have two monitors and an integrated GPU (Intel® HD Graphics 4600). It works fine for work or movies, but if I launch games in fullscreen mode I get artifacts, glitches and so on. Temporarily disabling the second monitor solves the problem, but then I have to enable it again and set its properties manually (by default the additional screen attaches its top-left corner to the main monitor's top-right corner, but I need the bottom-left and bottom-right corners aligned). So I need some kind of automation here. The best option would be a tool to create and swap between configuration profiles; alternatively, some console commands that I could put into script files would be fine too. (Ubuntu 13.04, if it matters)

    Read the article

  • Installed Ubuntu 14.04; how do I get it working with my new AMD R9 280X? When I run it I get garbage on the display

    - by user289455
    I installed Ubuntu 14.04 on my Gigabyte A88X FM2+ motherboard with an A10 7700K processor, and everything worked perfectly. The system dual boots with Windows 8.1 for gaming, and I subsequently installed an AMD R9 280X GPU. Windows was easy: it kept working with the new card installed, and I just needed to download and install the driver, reboot, and it was done. Ubuntu, however, boots into a blank display. I can enter the password and log in, and the menu on the left and the bar at the top are visible, but when I try to open an app I get blocks and rubbish on the display. My preference is not to remove the card. So my question is: what procedure could I follow to log in at a text-based terminal and download and install the AMD drivers?

    Read the article

  • Dual 7950, freezes during startup

    - by Hiro
    I have two identical video cards right now. Xubuntu boots with no problem if I only have one card installed, but if I install the second card it just hangs midway through the loading screen. Regardless of which card I use and which slot on the motherboard, it freezes if I have both installed. I've tried installing the proprietary drivers, both fglrx and fglrx-updates, and it still freezes in the same spot, so I uninstalled those and went back to the default drivers. The only way I can get a terminal is by booting into failsafe mode. Booting 13.10 i386 from a USB stick gives me some errors related to what I think is the second video card:
        [drm:r600_ring_test] ERROR radeon: ring 1 test failed (scratch(0x850C)=0xCAFEDEAD)
        [drm:r600_dma_ring_test] ERROR radeon: ring 3 test failed (0x02FFFF6B)
        radeon 0000:0200.2 disabling GPU acceleration

    Read the article

  • Y Axis inverted on vertex output

    - by Yonathan Klijnsma
    I've got my project running, and somehow it seems my vertex Y components are inverted: 10 in the positive Y direction goes down, and -10 on the Y axis goes up. I can't find anything wrong with the initialization, and I am not doing any negative scaling in the view matrix. I've never had something like this happen before; does anyone have some tips or things to look for? This is how I am sending vertices to the GPU (currently immediate mode): glVertex3f( x_pos_n, 10, z_pos ); I am using Cg in the project, but even without shaders the Y axis seems to be inverted. (A sketch of one common cause follows this entry.)

    Read the article
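    One common cause of this symptom (offered as an illustration, not a diagnosis of this particular project) is a projection set up with the top and bottom extents swapped, which flips the Y axis even though no negative scale appears anywhere in the view matrix. A minimal fixed-function sketch, since the question uses immediate mode:

        #include <GL/gl.h>

        // With (bottom = 0, top = height) the +Y axis points up on screen; swapping
        // them -- a common choice for 2D/window coordinates -- makes +Y point down,
        // which looks like "inverted" vertex Y components.
        void setProjection(int width, int height, bool yDown)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            if (yDown)
                glOrtho(0.0, width, height, 0.0, -1.0, 1.0);  // +Y down
            else
                glOrtho(0.0, width, 0.0, height, -1.0, 1.0);  // +Y up
            glMatrixMode(GL_MODELVIEW);
        }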

  • How can I make zaz save its profile data?

    - by RolandiXor - The Ice Man
    I've been playing Zaz recently as a time waster and stress beater, but it seems not to be actively maintained and is not saving profile data under Ubuntu 12.10. It's getting to be more stressful than fun, because it keeps crashing, and under Unity, GNOME Shell and KDE (in other words, any OpenGL-enabled WM) it makes the GPU lock up. How can I make it save the profile data, or create a profile into which I can manually place my level info? I'm tired of playing the same levels over and over and not being able to start from the ones I've already passed. I have yet to find any info on fixing this. Any clues?

    Read the article

  • Developing Games for Samsung Smart TV

    - by Caner Öncü
    We are planning to develop a game for Samsung Smart TVs. Although those TVs support Flash and HTML5, the other specs fall short of supporting a game engine. For example, using an engine that needs the GPU is not possible with the default Samsung Smart TV set. Likewise, WebGL is supported with Samsung SDK 4.1, but we don't know whether SDK 4.1 is available for the Smart TV 7000-9000 series or not. We have tried to contact Samsung, but they don't really seem to respond. Has anyone here developed a game for Samsung Smart TVs? If so, can you name the game engines that work with those TVs?

    Read the article

  • Computer crashing after a few minutes of logging in

    - by user88612
    I have a fresh install of 12.04.1 on my machine, and it is constantly crashing after logging in. I noticed it crashed repeatedly when using Minitube and would also crash in Chrome; it seems that anything graphics-related causes it. I've booted the system 7 times, and each time it crashed within 5-10 minutes of logging in. I have no swap, and it's a dual-boot system with Windows 7. My system specs: AMD Phenom II X4 925, 8GB DDR3 (new), AMD Radeon HD 7770 GPU (new), Seagate 1TB Barracuda HDD (new), Catalyst drivers.

    Read the article

  • What techniques can I use to render very large numbers of objects more efficiently in OpenGL?

    - by Luke
    You can think of my application as drawing a very large ball-and-stick diagram (or graph). At times this graph can get very large, to the point where the number of elements even outnumbers the pixels on the screen. Currently I simply pass all of my textures (as GL_POINTS) and lines to the graphics card using VBOs (roughly as in the sketch after this entry). When the number of elements outnumbers the number of pixels, is this the most efficient approach, or should I do some calculations on the CPU side before handing everything over to the GPU? If it matters, I use GL_DEPTH_TEST and GL_ALPHA_TEST, and I do some alpha blending, but probably not enough to make a huge performance difference. My scene can be static at times, but the user has control over a typical arc-ball camera and can pan, rotate or zoom; it is during these operations that the performance degradation is noticeable.

    Read the article
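    For reference, a minimal sketch of the VBO path the question describes (plain C++/OpenGL with made-up function names; it is a baseline to compare against, not a recommendation). The point data is uploaded once with GL_STATIC_DRAW, so camera movement costs one draw call per frame rather than a re-upload.

        #include <GL/glew.h>
        #include <vector>

        // Upload the x,y,z triplets once; GL_STATIC_DRAW hints that the data rarely changes.
        GLuint uploadPoints(const std::vector<float>& xyz)
        {
            GLuint vbo = 0;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, xyz.size() * sizeof(float),
                         xyz.data(), GL_STATIC_DRAW);
            return vbo;
        }

        // Draw every point with a single call; `count` is the number of points.
        void drawPoints(GLuint vbo, GLsizei count)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, nullptr);
            glDrawArrays(GL_POINTS, 0, count);
            glDisableClientState(GL_VERTEX_ARRAY);
        }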

  • Write depth buffer to texture

    - by innochenti
    I need to read the depth buffer on the GPU and write it to a texture. How can this be done? Here is how the texture for the depth buffer is created:
        depthBufferDesc.Width = screenWidth;
        depthBufferDesc.Height = screenHeight;
        depthBufferDesc.MipLevels = 1;
        depthBufferDesc.ArraySize = 1;
        depthBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthBufferDesc.SampleDesc.Count = 1;
        depthBufferDesc.SampleDesc.Quality = 0;
        depthBufferDesc.Usage = D3D10_USAGE_DEFAULT;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL;
        depthBufferDesc.CPUAccessFlags = 0;
        depthBufferDesc.MiscFlags = 0;
        m_device->CreateTexture2D(&depthBufferDesc, NULL, &m_depthStencilBuffer);
    Also, I've got another question: is it possible to bind the depth buffer texture as a sampler to the pixel shader? (A sketch of one common approach follows this entry.)

    Read the article
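    A sketch of one common D3D10 approach (an illustration under assumptions, not a verified answer for this exact codebase): create the texture with a typeless format and both bind flags, then give the depth-stencil view and the shader-resource view their own typed formats, so the same texture can be written as a depth buffer and later sampled from the pixel shader. Note that it cannot be bound as a depth target and as a shader resource at the same time; unbind it from one before using it as the other.

        #include <d3d10.h>

        // Assumes `device` is a valid ID3D10Device*.
        void CreateReadableDepthBuffer(ID3D10Device* device,
                                       UINT screenWidth, UINT screenHeight,
                                       ID3D10Texture2D** tex,
                                       ID3D10DepthStencilView** dsv,
                                       ID3D10ShaderResourceView** srv)
        {
            D3D10_TEXTURE2D_DESC desc = {};
            desc.Width            = screenWidth;
            desc.Height           = screenHeight;
            desc.MipLevels        = 1;
            desc.ArraySize        = 1;
            desc.Format           = DXGI_FORMAT_R24G8_TYPELESS;   // typeless instead of D24_UNORM_S8_UINT
            desc.SampleDesc.Count = 1;
            desc.Usage            = D3D10_USAGE_DEFAULT;
            desc.BindFlags        = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;
            device->CreateTexture2D(&desc, NULL, tex);

            // The depth-stencil view sees the resource as a depth format...
            D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
            dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
            dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
            device->CreateDepthStencilView(*tex, &dsvDesc, dsv);

            // ...while the shader-resource view exposes the 24 depth bits as
            // sampleable data (the stencil bits are masked out).
            D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
            srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
            srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
            srvDesc.Texture2D.MipLevels = 1;
            device->CreateShaderResourceView(*tex, &srvDesc, srv);
        }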

  • Problem installing libva (VAAPI), vainfo fails

    - by satuon
    I'm following a wiki on how to make VLC use the GPU in the Core i3's integrated graphics chipset, but I'm stuck at one of the first steps: installing VAAPI. I installed libva1 and vainfo with "sudo apt-get install libva1 vainfo", but when I run vainfo it says:
        libva: libva version 0.32.0
        libva: va_getDriverName() returns 0
        libva: Trying to open /usr/lib/dri/i965_drv_video.so
        libva: va_openDriver() returns -1
        vaInitialize failed with error code -1 (unknown libva error),exit
    According to the wiki it should report entries such as VAProfileH264High : VAEntrypointVLD and VAProfileVC1Advanced : VAEntrypointVLD. /usr/lib/dri/i965_drv_video.so doesn't exist on my system, even though I installed libva1 with apt-get.

    Read the article
