Search Results

Search found 4224 results on 169 pages for 'dual gpu'.

Page 85/169

  • Android: Layouts and views, or a single full-screen custom view?

    - by futlib
    I'm developing an Android game and making it run on low-end devices without a GPU, so I'm using the 2D API. So far I have tried to use Android's mechanisms such as layouts and activities where possible, but I'm beginning to wonder if it isn't easier to just create a single custom view (or one per activity) and do all the work there. Here's an example of how I currently do things: I use a layout to display the game's background as an image view, with the square game area, a custom view, centered in the middle. What would you say? Should I continue to use layouts where possible, or is it more common/reasonable to use one large custom view? I'm thinking the latter would probably also make it easier to port my code to other platforms.

    Read the article

  • pwmconfig: "There are no pwm-capable sensor modules installed"

    - by Sman789
    I'm trying to reduce my fan speed with fancontrol and pwmconfig because, despite the temperatures being the same, the fans are much louder on Linux (Ubuntu GNOME 14.04) than on Windows. I've followed the instructions in the first answer here, but when running pwmconfig I get: "There are no pwm-capable sensor modules installed". I know that my system has working thermal sensors, because Psensor has no trouble telling me my CPU and GPU temperatures. I would appreciate any help in reducing my fan speed to Windows levels (Windows uses the ASUS AI Suite 3 software that came with the Z87-A motherboard, if that's relevant). A sketch of the usual lm-sensors workflow follows below.

    Read the article
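
    A hedged sketch for the pwmconfig question above, assuming the usual lm-sensors workflow: the "no pwm-capable sensor modules" error generally means the kernel driver for the motherboard's Super-I/O fan controller is not loaded; the CPU/ACPI sensors that Psensor reads are not PWM-capable. The module name below is an assumption (nct6775 matches many ASUS boards of that era, including Z87 ones); sensors-detect reports the right one for your chip.

        sudo sensors-detect             # answer yes to the Super-I/O probe
        sudo modprobe nct6775           # assumed module; use whatever sensors-detect reports
        sudo pwmconfig                  # should now find PWM-capable modules
        echo nct6775 | sudo tee -a /etc/modules   # load it on every boot
        # Some boards also need acpi_enforce_resources=lax on the kernel command line.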

  • What do the different hardware temperatures listed in Psensor, Sensors Viewer, etc. refer to?

    - by cipricus
    I have installed Psensor and see a list of temperatures, but they are listed only as "Temperature 1", 2, 3, etc. I can guess where the processor is, but which is which, for sure? The same question stands for Sensors Viewer. I can also run sensors in a terminal, but I get no more than what acpi -t gives:
      Thermal 0: ok, 65.0 degrees C
      Thermal 1: ok, 37.9 degrees C
      Thermal 2: ok, 56.0 degrees C
      Thermal 3: active, 71.0 degrees C
    Regarding Psensor, I know for a fact that:
      - the temperature that varies most with CPU use is Temp1, and it is one of the two highest
      - the other high temperature is Temp4, and it goes through the ceiling when playing YouTube/Flash
      - Temp2 is very stable at a medium level of 50-60 degrees Celsius
      - Temp3 is by far the lowest and the most static
    So I guess Temp1 is the CPU temperature and Temp4 is the GPU temperature; Temp2 and Temp3 must be the motherboard and the HDD. Does anybody know for sure?

    Read the article

  • Downgrade PPA packages to versions available at a previous point in time

    - by Will
    The backstory is that the normal Intel GPU drivers don't support the various OpenGL extensions that my hobby coding and some games want, so I have to install xorg-edgers and then it's happy. However, last Wednesday or so there was an update to xorg-edgers (lots of packages) and it broke badly; the drivers lock up and take the whole computer with them, hard reset required. So how can you downgrade, i.e. select package versions in a PPA that represent a point in the past, ignoring versions newer than that? (A sketch of the usual apt workflow follows below.)

    Read the article
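
    A hedged sketch for the PPA downgrade question above. The package name and version are examples only. Note that Launchpad PPAs usually delete superseded binaries, so a true point-in-time downgrade only works if the old .debs are still published or still sitting in /var/cache/apt/archives; otherwise, ppa-purge rolls everything from the PPA back to the Ubuntu archive versions.

        apt-cache policy xserver-xorg-video-intel      # list the versions apt can still see
        sudo apt-get install xserver-xorg-video-intel=2:2.17.0-1ubuntu4   # pin one (example version)
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:xorg-edgers/ppa             # downgrade the whole PPA to archive versions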

  • What is the situation with OpenGL under Ubuntu Unity and Gnome3?

    - by user827992
    A GNU/Linux distribution usually installs Xorg as its main graphical server. It operates with a client-server logic: a special window is designated as the desktop environment, and that window handles all the eye-candy stuff like decorations, icons and effects. The problem is that the latest UIs rely heavily on hardware acceleration: Unity is an overlay on Compiz, and GNOME Shell also requires an active GPU driver to work well. So my questions are: given that I can find multiple OpenGL implementations on the same OS, which one is handling my OpenGL buffer? How is the OpenGL buffer managed compared to the other windows? How can I be sure that my OpenGL implementation is glued to the hardware and not routed through Xorg's client-server logic? For example, I have tried the Clutter library and have experienced problems under both Unity and GTK/GNOME, but none under other OSes.

    Read the article

  • Loud fans despite cool system under Linux (but not Windows)

    - by Sman789
    My new desktop computer runs almost silently under Windows, but the fans seem to run at a constantly high setting under Linux. Psensor shows that the GPU (with Nvidia drivers) is at thirty-something degrees and the CPU is about the same, so it's not just down to Linux somehow being more processor-intensive. I've read that the BIOS controls the fans under Linux, which makes sense given the high fan speeds while in the BIOS as well. It's under Windows, when the ASUS AI Suite 3 software takes control, that the system runs more quietly and only speeds the fans up when required. So is there a Linux app which offers similar dynamic control of the fans, or a setting hidden somewhere in the ASUS BIOS which does the same regardless of the OS? EDIT: I've tried using lm-sensors and fancontrol, but pwmconfig tells me "There are no pwm-capable sensor modules installed" (see the sketch under the pwmconfig question above). This is after sensors-detect does find an "Intel digital thermal sensor", and despite the sensors working fine in apps like Psensor. Help getting this to work would likely solve the problem.

    Read the article

  • Moonlight 4 beta gets closer to Silverlight 4: the open-source implementation adds hardware acceleration and H.264 support

    The Moonlight 4 beta gets closer to Silverlight 4; the open-source implementation now offers hardware acceleration and H.264 support. Moonlight 4 has just been released in beta. The open-source implementation of Silverlight now provides hardware acceleration (GPU handling of video and 3D) and support for the H.264 codec. With this development version, Moonlight integrates several new features of Silverlight 4, notably support for the Silverlight 3 and 4 APIs. It also makes it possible to build and run applications "out of the browser". However, this beta does not yet offer all the fea...

    Read the article

  • How can I generate signed distance fields (2D) in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime, and used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there on methods that are supposed to be viable in real-time environments, such as Chamfer distance transforms and Voronoi diagram-approximation based transforms (as suggested in this presentation by the PixelJunk Shooter dev), but I (and, one can assume, a lot of other people) have a very hard time actually putting them to use, since they're usually long, heavy on math, and not very algorithmic in their explanations. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide deck, this question will receive a bounty once it's eligible for one :-). (A CPU reference sketch of the chamfer transform follows below.) Here's why I need to do it in real time: There's something else:

    Read the article
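
    For the signed-distance-field question above, here is a minimal CPU reference of the 3-4 chamfer transform the question mentions: two raster sweeps over the image, O(n) total. It is a sketch under the assumption of a binary inside/outside mask, not anyone's production pipeline; real-time implementations usually port this idea, or jump flooding, to the GPU. Run it once on the mask and once on the inverted mask, then subtract, to get a signed field.

        // Two-pass 3-4 chamfer distance transform over a binary mask (C++).
        // Distances are scaled by 3; dividing by 3.0f approximates pixels.
        #include <algorithm>
        #include <cstdint>
        #include <vector>

        std::vector<float> chamfer(const std::vector<std::uint8_t>& inside, int w, int h)
        {
            const int INF = 1 << 20;
            std::vector<int> d(w * h);
            for (int i = 0; i < w * h; ++i)
                d[i] = inside[i] ? 0 : INF;        // seed: zero inside the shape

            auto relax = [&](int x, int y, int dx, int dy, int cost) {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) return;
                d[y * w + x] = std::min(d[y * w + x], d[ny * w + nx] + cost);
            };

            for (int y = 0; y < h; ++y)            // forward sweep: top-left to bottom-right
                for (int x = 0; x < w; ++x) {
                    relax(x, y, -1,  0, 3);  relax(x, y, 0, -1, 3);
                    relax(x, y, -1, -1, 4);  relax(x, y, 1, -1, 4);
                }
            for (int y = h - 1; y >= 0; --y)       // backward sweep: bottom-right to top-left
                for (int x = w - 1; x >= 0; --x) {
                    relax(x, y,  1,  0, 3);  relax(x, y, 0,  1, 3);
                    relax(x, y,  1,  1, 4);  relax(x, y, -1, 1, 4);
                }

            std::vector<float> out(w * h);         // unsigned distance field
            for (int i = 0; i < w * h; ++i)
                out[i] = d[i] / 3.0f;
            return out;
        }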

  • Firefox 4: beta 12 released, with improvements to Flash support and hardware acceleration

    Firefox 4: beta 12 released, with improvements to Flash support and hardware acceleration. Update of 28/02/11. The twelfth, and a priori last, beta of Firefox 4 came out this weekend. It fixes 7,000 bugs and improves (Flash) video playback. The integration of hardware acceleration (handing specific computing tasks to the GPU rather than the CPU) has also been reworked, all of which makes the browser more stable. Unfortunately, it does not yet include the "miracle" patches that halve its startup time (read elsew...

    Read the article

  • What is the ideal laptop for creative coding applications?

    - by Jason
    Hi, I am a creative coder using C++ (Cinder and openFrameworks). I am looking to upgrade from my MacBook, which slowed down to about 3 fps this morning. My project involves particle systems and fluids reacting to audio-analysis and computer-vision data in real time. SD or HD? No biggie. I have asked many people what computer I need. Ideally, I want a MacBook Pro, but is that enough power? I've been told that I need a desktop for what I am doing, though I'd rather stay portable. I've been told that I should go PC/Linux to get the most power, but I'd rather stay on a Mac. I've been told that RAM is more of a bottleneck than processor speed. I've been told that the graphics card is more important than the CPU, and that code optimizations such as using trees over lists, proper threading, and sending tasks to the GPU make a bigger difference than the hardware! What's true? What do I need? Any suggestions are greatly appreciated.

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and have found no definitive answer. When would you normalize the vertex data in OpenGL using the following call: glVertexAttribPointer(index, size, type, normalize, stride, pointer); i.e., when would normalize == GL_TRUE? In what situations, and why, would you choose to let the GPU do the conversion instead of preprocessing it? All the examples I have ever seen have this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common). (A sketch of the classic use case follows below.)

    Read the article
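
    For the normalization question above, the classic case for normalize == GL_TRUE is packing per-vertex colors (or normals) as integers and letting the GPU expand them to [0, 1] floats when the attribute is fetched, which shrinks the vertex and saves bandwidth. A hedged C++ sketch; the struct layout and attribute indices are illustrative, not from the question:

        // Assumes a GL 3.0+ context and a bound VBO already filled with Vertex data.
        #include <cstddef>   // offsetof
        #include <cstdint>

        struct Vertex {
            float        position[3];  // 12 bytes
            std::uint8_t color[4];     // 4 bytes, 0-255 per channel
        };

        // Attribute 0: plain floats, no normalization needed.
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void*)offsetof(Vertex, position));
        // Attribute 1: normalize == GL_TRUE, so the GPU maps 0..255 to
        // 0.0..1.0 and the shader sees a ready-made vec4 color. The vertex
        // stays 16 bytes instead of the 28 it would take with four floats.
        glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex),
                              (void*)offsetof(Vertex, color));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);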

  • Internal error message pops up each time the system is rebooted

    - by Biju
    I installed Ubuntu 12.04 using Wubi, but each time I boot the system an internal error message pops up, as shown below:
      Executable path: /usr/share/apport/apport-gpu-error-intel.py
      Package: xserver-xorg-video-intel 2:2.17.0-1ubuntu4
      Problem type: crash
      ApportVersion: 2.0.1-0ubuntu7
    and so on. I had earlier upgraded to Ubuntu 12.04 from Ubuntu 11.10 and encountered the same issue, hence I uninstalled the OS and reinstalled using Wubi. I posted the same query on ubuntu.com/support (question number 195525) but couldn't find a solution. I am using a Dell Inspiron with an Intel Pentium. I need your help in resolving this issue. Thanking you, Biju

    Read the article

  • 12.10 live DVD: no video input

    - by mark kirby
    Hi, I have been trying to install Ubuntu 12.10, but as soon as it gets past my BIOS and reaches the screen with the blinking cursor in the top left, I get a "no video input" message on my TV (like when you turn the TV on with nothing connected). I have used live DVDs of the betas, alphas and daily builds, all with exactly the same result. Has anyone else had this? Is there a fix? Does this mean I can never upgrade my Ubuntu again? (12.04 works; I've been using it since beta.) My PC, while old, should run this fine:
      CPU: 2x Intel P4 HT @ 3 GHz
      GPU: Nvidia GeForce 310 via HDMI
      RAM: 2 GB DDR2
      HDD: 2x 7200 rpm SATA
    Please help; I use Ubuntu exclusively on my PC and would like to keep doing so.

    Read the article

  • Avoiding lag when rendering a Texture2D for the first time

    - by Emir Lima
    I have found a similar question here, but it is about playing sounds. I am using 2048 x 2048 textures for sprite sheets, and every time I call spriteBatch.Draw using a sheet for the first time in the game's execution, it causes considerable lag; the lag doesn't appear on subsequent draws. Has someone faced this problem before? What can I do to overcome it? Update: I inserted code at the end of the content-load routine that draws EVERY Texture2D loaded into the ContentManager before proceeding to the game screen. This works well: no lag occurs when different textures are rendered over time, EXCEPT if IsFullScreen is changed. Apparently, changing this property makes the textures loaded on the GPU go away. Is that correct?

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After having played for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be as trivial as "it's the same as the refresh rate of your monitor", but I would like some solid answers to be clear on this. For instance, it looks really odd to me that all the FPS-control techniques I have seen so far call the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve the elapsed milliseconds since rendering started, but then rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+. (A frame-limiting sketch follows below.)

    Read the article
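
    For the shader-frequency question above: vertex and fragment shaders have no default frequency of their own. The vertex stage runs once per vertex and the fragment stage once per rasterized fragment of every draw call, and draw calls happen exactly as often as the application submits frames, so OpenGL has no FPS setting to expose; the pacing lives on the CPU side (or in vsync). A hedged C++ sketch of the usual frame cap, assuming classic GLUT since the question uses glutGet(GLUT_ELAPSED_TIME):

        #include <GL/glut.h>

        static const int kTargetFps = 60;

        static void display() {
            glClear(GL_COLOR_BUFFER_BIT);
            // ... draw calls here: shaders run per vertex / per fragment ...
            glutSwapBuffers();
        }

        static void timer(int) {
            glutPostRedisplay();                        // request one frame
            glutTimerFunc(1000 / kTargetFps, timer, 0); // re-arm the timer
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutCreateWindow("fps-capped");
            glutDisplayFunc(display);
            glutTimerFunc(1000 / kTargetFps, timer, 0);
            glutMainLoop();
            return 0;
        }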

  • How do I set the correct monitor resolution with Nvidia drivers for a monitor that does not send EDID?

    - by Torben Gundtofte-Bruun
    I keep having trouble getting the correct monitor resolution: every time I reinstall, I happen to use a newer Ubuntu release, and the old tricks I used to know no longer work. Instead of leaving a long trail of questions for every new release, I am looking for a more universal and timeless solution. What's the correct way to set the correct monitor resolution with an Nvidia GPU for a screen that does not send EDID values? Note: this is a "dummy" question; with the help of the chat I already found the answer, and I am now going to add my own answer to document a solution that is hopefully universal. (A hedged xorg.conf sketch follows below.)

    Read the article
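
    For the EDID question above, a hedged xorg.conf sketch of the usual approach with the proprietary Nvidia driver: generate a modeline with cvt and tell the driver not to insist on EDID data. The option names are the Nvidia driver's own; the modeline shown is cvt's output for 1920x1080 at 60 Hz and is an assumption; substitute your panel's actual timings.

        # cvt 1920 1080 60   -> produces the Modeline below
        Section "Monitor"
            Identifier "Monitor0"
            Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver     "nvidia"
            Option     "UseEDID" "FALSE"               # do not wait for EDID
            Option     "ModeValidation" "NoEdidModes"  # accept our modeline instead
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Device0"
            Monitor    "Monitor0"
            SubSection "Display"
                Modes "1920x1080_60.00"
            EndSubSection
        EndSection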

  • What is a good method for coloring textures based on a palette in XNA?

    - by Bob
    I've been trying to give a game the look of an 8-bit title in XNA, specifically using the NES as a guide. The NES has a very specific palette, and each sprite can use up to 4 colors from that palette. How could I emulate this? The way I currently accomplish it, I have a texture whose values act as indexes into an array of colors that I pass to the GPU. I imagine there must be a better way than this, but maybe this is the best way? I don't want to simply draw every sprite with the right colors baked in, because I want to be able to alter the palette dynamically. I'd also prefer not to alter the texture directly using the CPU.

    Read the article

  • How can I generate signed distance fields in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime, and used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there on methods that are supposed to be viable in real-time environments, such as Chamfer distance transforms and Voronoi diagram-approximation based transforms (as suggested in this presentation by the PixelJunk Shooter dev), but I (and, one can assume, a lot of other people) have a very hard time actually putting them to use, since they're usually long, heavy on math, and not very algorithmic in their explanations. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide deck, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time:

    Read the article

  • power_profile and power_method files missing on Ubuntu 12.04 after clean install

    - by Nikola
    OK, here is the problem: I am using GNOME Shell, Ubuntu 12.04, kernel 3.2.0-32-generic-pae and the proprietary drivers for my ATI card (installed via "Additional Drivers"). The laptop is an HP ProBook 4310s, and I want to control power_profile and power_method because my GPU temperature is high. Before I reinstalled Ubuntu 12.04, I used a .sh script on startup to write to those files and everything worked like a charm, but now they are missing and I can't create them. This is what I get when I try to create the directories: mkdir: cannot create directory `/sys/class/drm': No such file or directory. How can I get them back? If you need any information, just ask and I will give it. (A note on the usual sysfs interface follows below.)

    Read the article
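
    A hedged note for the power_profile question above: those sysfs files belong to the open-source radeon driver; the proprietary fglrx driver does not expose the /sys/class/drm power files at all, which would explain why they vanished after installing it via Additional Drivers (an inference, not confirmed in the question). With the open-source driver loaded, the usual interface looks like this:

        # Present only with the open-source radeon driver, not fglrx:
        cat /sys/class/drm/card0/device/power_method        # "profile" or "dynpm"
        echo profile | sudo tee /sys/class/drm/card0/device/power_method
        echo low     | sudo tee /sys/class/drm/card0/device/power_profile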

  • Lexmark X7170 shows documents as printed when they haven't

    - by Mehmet
    I made the move from Windows 7 to Ubuntu (not dual-booting) because I have decided to quit gaming to spare more time for my studies; I just needed an OS for browsing the web, word processing, etc. After I installed Ubuntu I installed the AMD GPU drivers, then clicked on the little printer icon, selected "Add printer", and it found the drivers for the Lexmark 7000 series, which I installed. Now my problem is that when I print something from Writer, it processes the job and thinks it has completed it, when in fact nothing has been printed. I tried printing a test page, but it was stuck on "processing" for 5 minutes. I have restarted my computer and turned the printer on and off. I'm running 64-bit, if that changes anything.

    Read the article

  • How do I get started with embedded development to build a handheld game console?

    - by Quakeboy
    I work as an iPhone app developer now, so I know a bit of C, C++ and Objective-C, and I have fiddled with Java and many other languages. All of it has been high-level application/game development. My final goal is to make a handheld game console, more like a home-made NES/SNES handheld console, or even an Atari. I have found out about the Raspberry Pi and Arduino, but I need more information about how to approach this. 1) How do I learn to pick the best board/CPU/controller/GPU/LCD screen/LCD controller, etc.? 2) Will learning to make a NES emulator first help me understand this field? If so, are there any tutorials?

    Read the article

  • DirectCompute information

    - by N0xus
    I've been trying to make use of the GPU as part of a project of mine. I've looked into both CUDA and OpenCL, but the lack of information showing how to introduce them into a project is shocking; even their dedicated forum groups are dead. So now I'm looking into DirectCompute. From what I can tell, it's simply a new type of shader file that makes use of HLSL. My question is this: does my program (aside from being DirectX 10/11) need its structure changed? I mean, is it simply a case of creating the CS (compute shader) file, setting it up in the project like I would any other shader, and watching the magic happen? Any information on this would be appreciated.

    Read the article

  • Flash 10.1 is here: hardware acceleration and 32 security holes patched

    Update of 11/06/10. Flash 10.1: hardware acceleration and 32 security holes patched. Flash 10.1 is here. This new version of Flash brings hardware acceleration and fixes for 32 security vulnerabilities. The first of these should silence, at least in part, the criticism of the performance of Adobe's technology: hardware acceleration plays (H.264) video using the resources of the graphics card (GPU) rather than the CPU. The result is faster, smoother playback and a processor less burdened by the player. That is the theory, at least. ...

    Read the article

  • Chrome 18: 3D for everyone, and improved Canvas2D acceleration

    Chrome 18: 3D for everyone, and improved Canvas2D support. Chrome 18 has just gone stable. On the menu: improved Canvas2D support that takes advantage of hardware acceleration (and thus of the GPU). It should allow web applications, such as games, to run faster. For Google, with this support, 100%-web versions of applications could even perform as well as their traditional counterparts. Hardware acceleration for Canvas2D was until now reserved for Chrome's beta channel, so the feature may still have a few small hiccups. ...

    Read the article

  • High temperature on my laptop with Radeon Mobility HD4670

    - by Lorthirk
    Like almost everyone here, I guess, I downloaded Quantal Quetzal these days to give it a try. However, I noticed that my laptop runs fairly hot, with the cooling fans almost always on, even sitting at the desktop doing nothing. I downloaded XSensor to read the temperature sensors and saw that while the CPU stays at about 65°C, quite normal I guess, the GPU sits at 75°C. In comparison, my current Windows 7 installation, which dual-boots with Quantal, stays at 59°C CPU and 65°C GPU. So I went reading and learned that AMD dropped support for my video card from the fglrx package, and that fglrx-legacy won't support Xorg 1.13, so I'm basically stuck with the OSS drivers. So I was wondering if there's anything I can try, and whether it's possible that the OSS drivers are the cause of the high temperature?

    Read the article
