Search Results

Search found 5564 results on 223 pages for 'multi gpu'.


  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game. It will have many planets and other objects that all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity related methods.

    EntityEngine.cs (only the relevant piece of code, self explanatory; when an instance of Gravity is made in EntityEngine, it passes itself (this) in, so that Gravity can access entityEngine.Entities, a dictionary of all planet objects):

        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in Entities)
            {
                e.Value.Update();
            }
            gravity.Update();
        }

    Gravity.cs:

        namespace ExplorationEngine
        {
            public class Gravity
            {
                private EntityEngine entityEngine;
                private Vector2 Force;
                private Vector2 VecForce;
                private float distance;
                private float mult;

                public Gravity(EntityEngine e)
                {
                    entityEngine = e;
                }

                public void Update()
                {
                    // First loop
                    foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                    {
                        // Reset the force vector
                        Force = new Vector2();

                        // Second loop
                        foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                        {
                            // Make sure the second value is not the current value from the first loop
                            if (e2.Value != e.Value)
                            {
                                // Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2),
                                // using Vector2.Distance() and then squaring it is pointless and inefficient,
                                // because Distance uses a sqrt; squaring the result simply cancels that sqrt.
                                distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                                // This makes sure that two planets do not attract each other if they are touching.
                                // It will be completely unnecessary once I add collision; for now it just keeps
                                // the planets from being glitchy. Performance is not significantly improved
                                // by removing this if.
                                if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                                {
                                    // Calculate the magnitude of Fg. (I'm using my own gravitational constant (G)
                                    // for the sake of time; I know it's 1 at the moment, but I've been changing it.)
                                    mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                    // Calculate the direction of the force. Simply subtracting the positions and
                                    // normalizing works; this keeps diagonal vectors from having a larger value
                                    // and basically makes VecForce a direction.
                                    VecForce = e2.Value.Position - e.Value.Position;
                                    VecForce.Normalize();

                                    // Add the vector for each planet in the second loop to a force var.
                                    // (I have tried Force += VecForce * mult and have not noticed much of an
                                    // increase in speed.)
                                    Force = Vector2.Add(Force, VecForce * mult);
                                }
                            }
                        }

                        // Add that force to the first loop's planet's position. (Later on I'll instead add it
                        // to acceleration, to account for inertia.)
                        e.Value.Position += Force;
                    }
                }
            }
        }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I asked yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it, but this O(N^2) algorithm still seems to be eating up all of my CPU power. Here is a LINK (Google Drive; go to File > Download, keep the .exe with the Content folder; you will need the XNA Framework 4.0 Redistributable if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. The mouse moves the camera, and the scroll wheel zooms in and out. Watch the FPS and planet count to see what I mean about performance issues past 70 planets. (All 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still getting 300 FPS. The issue arises when 70+ are moving around.) After 70 planets are made, performance tanks exponentially. With fewer than 70 planets I get 330 FPS (I have it capped at 300). At 90 planets the FPS is about 2, and with more than that it sticks around 0 FPS. Strangely enough, when all planets are stationary the FPS climbs back up to around 300, but as soon as something moves it goes right back down to what it was; I have no systems in place to make this happen, it just does. I considered multithreading, but that previous question taught me a thing or two, and I see now that it's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this; it is not a simple concept, and I want to avoid it unless someone knows a really noob-friendly, simple way to do it that will work for an n-body gravity calculation. (I have an NVIDIA GTX 660.) Lastly, I've considered using a quadtree-type system (a Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, but the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand or uses code I can eventually figure out. So my question is this: how can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets at a constant 300+ FPS without gravity calculations)? And if I can't do much to improve performance (including with some kind of quadtree system), could I use my GPU to do the calculations?
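
    A minimal sketch of one standard optimization, not the poster's code: because the force on A from B is the exact negative of the force on B from A (Newton's third law), each pair only needs to be computed once, halving the inner-loop work, and comparing squared distances removes the Math.Sqrt call. The indexable bodies list and the Force accumulator field on Entity are assumptions, not part of the original code:

        // Hedged sketch: a pair-symmetry version of Gravity.Update().
        // Assumes Entity has a Vector2 Force accumulator (not in the original code).
        List<Entity> bodies = new List<Entity>(entityEngine.Entities.Values);
        foreach (Entity b in bodies)
            b.Force = Vector2.Zero;

        for (int i = 0; i < bodies.Count; i++)
        {
            for (int j = i + 1; j < bodies.Count; j++)   // each pair visited exactly once
            {
                Vector2 delta = bodies[j].Position - bodies[i].Position;
                float distSq = delta.LengthSquared();
                float minDist = (bodies[i].Texture.Width + bodies[j].Texture.Width) / 2f;
                if (distSq > minDist * minDist)          // compare squared values, no sqrt
                {
                    float mult = 1.0f * (bodies[i].Mass * bodies[j].Mass) / distSq;
                    delta.Normalize();
                    Vector2 force = delta * mult;
                    bodies[i].Force += force;            // equal and opposite forces, so
                    bodies[j].Force -= force;            // the pair is computed only once
                }
            }
        }

        foreach (Entity b in bodies)
            b.Position += b.Force;

    Accumulating all forces before moving anything also removes the order dependence of updating positions inside the outer loop. Note this only halves the constant factor; the algorithm is still O(N^2). Barnes-Hut is what changes the asymptotics, to roughly O(N log N), by approximating distant clusters of planets as single point masses.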


  • Is there any guarantee about the graphical output of different GPUs in DirectX?

    - by cloudraven
    Let's say that I run the same game on two different computers with different GPUs, and that, for example, both are certified for DirectX 10. Is there a guarantee that the output of a given program (game) will be the same regardless of the manufacturer or model of the GPU? I am assuming the configurable settings are exactly the same in both cases. I heard that this is not the case for DirectX 9 and older, but that it is true for DirectX 10. If someone could provide a source confirming or denying it, that would be great. Also, what is the guarantee offered? Will the output be exactly the same, or just perceptually the same to the human eye?


  • How do I turn off nVidia high performance mode?

    - by gpen06
    I am new to Ubuntu 11.10 and never touched Linux until today. I installed Ubuntu alongside Windows 7. I have an issue with my laptop overheating and freezing when it uses the high-performance GPU graphics mode (it has a hybrid card). There is an easy fix for this in Windows 7: I simply set the graphics mode manually to the low-performance integrated graphics (power saving) mode using the advanced power settings. In Ubuntu, I'm stumped as to how to do this, and my laptop is showing the same symptoms it did in Windows before the fix. In Windows I have the nVidia graphics driver installed, which lets me see which mode I am in; I downloaded it from my laptop manufacturer's website (ASUS), but they do not offer driver downloads for Linux. [Screenshot of power settings in Windows 7]


  • Problems booting Ubuntu 10.10 with an Nvidia GeForce 6600GT

    - by SlyrNemesis
    I am quite new to Ubuntu and I have already stumbled upon a problem. I am running Ubuntu 10.10 Maverick Meerkat. The install went fine, but when trying to boot, the screen went black and I was faced with this message: "gpu lockup - switching to software fbcon." When I set the display to onboard VGA through the BIOS, Ubuntu has no problem at all, but when I switch back to my AGP card (which is an Nvidia GeForce 6600GT) I get that message again. I do not use the VGA output on that card; I use its DVI cable. Does anyone have an answer for me to make this work? Thank you


  • Out of memory with multiple images in one PictureBox

    - by Nikom
    Hi, I have an out-of-memory problem when I'm trying to load a few images into one PictureBox. Please help :)

        public void button2_Click(object sender, EventArgs e)
        {
            FolderBrowserDialog dialog = new FolderBrowserDialog();
            dialog.ShowDialog();
            string selected = dialog.SelectedPath;
            string[] imageFileList = Directory.GetFiles(selected);
            int iCtr = 0, zCtr = 0;
            foreach (string imageFile in imageFileList)
            {
                if (Image.FromFile(imageFile) != null)
                {
                    Image.FromFile(imageFile).Dispose();
                }
                PictureBox eachPictureBox = new PictureBox();
                eachPictureBox.Size = new Size(100, 100);
                //if (iCtr % 8 == 0)
                //{
                //    zCtr++;
                //    iCtr = 0;
                //}
                eachPictureBox.Location = new Point(iCtr * 100 + 1, 1);
                eachPictureBox.Image = Image.FromFile(imageFile);
                iCtr++;
                panel1.Controls.Add(eachPictureBox);
            }
        }
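
    A hedged sketch of one likely culprit and fix, not a confirmed answer: every Image.FromFile call decodes the entire file into memory, so the loop above loads each image twice and never disposes the copy handed to the PictureBox. Loading each file once and keeping only a thumbnail bounds the memory use (the 100x100 size mirrors the code above; everything else is illustrative):

        // Sketch of a leak-free version of the loop body, under the
        // assumptions stated above.
        int iCtr = 0;
        foreach (string imageFile in imageFileList)
        {
            using (Image full = Image.FromFile(imageFile))
            {
                // Keep only a 100x100 thumbnail in memory; the full-size
                // bitmap is disposed as soon as this using block ends.
                Bitmap thumb = new Bitmap(full, new Size(100, 100));
                PictureBox box = new PictureBox();
                box.Size = thumb.Size;
                box.Location = new Point(iCtr * 100 + 1, 1);
                box.Image = thumb;   // the PictureBox now owns only the small bitmap
                panel1.Controls.Add(box);
            }
            iCtr++;
        }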


  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches:

    1. All data is stored in one database, giving me the advantage of a single point of updates.
    2. Separate databases, so each client has its own database, giving me the advantage of measuring bandwidth per client.

    Option 1 gives me the disadvantage of measuring bandwidth, while option 2 gives me the disadvantage of losing the single-point-of-update structure. Are there any generic approaches for creating a sort of update system, so my clients can download a small package (maybe a zip with a conf file telling the update script where to put all the files and how to extend the database)? Do you have some thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you, and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen


  • ValueError with multi-table inheritance in Django Admin

    - by jorde
    I created two new classes which inherit from the model Entry:

        class Entry(models.Model):
            LANGUAGE_CHOICES = settings.LANGUAGES
            language = models.CharField(max_length=2, verbose_name=_('Comment language'), choices=LANGUAGE_CHOICES)
            user = models.ForeignKey(User)
            country = models.ForeignKey(Country, null=True, blank=True)
            created = models.DateTimeField(auto_now=True)

        class Comment(Entry):
            comment = models.CharField(max_length=2000, blank=True, verbose_name=_('Comment in English'))

        class Discount(Entry):
            discount = models.CharField(max_length=2000, blank=True, verbose_name=_('Comment in English'))
            coupon = models.CharField(max_length=2000, blank=True, verbose_name=_('Coupon code if needed'))

    After adding these new models to the admin via admin.site.register, I get a ValueError when trying to create a Comment or a Discount via the admin. Adding Entry objects works fine. The error message:

        ValueError at /admin/reviews/discount/add/
        Cannot assign "''": "Discount.discount" must be a "Discount" instance.
        Request Method: GET
        Request URL: http://127.0.0.1:8000/admin/reviews/discount/add/
        Exception Type: ValueError
        Exception Value: Cannot assign "''": "Discount.discount" must be a "Discount" instance.
        Exception Location: /Library/Python/2.6/site-packages/django/db/models/fields/related.py in set, line 211
        Python Executable: /usr/bin/python
        Python Version: 2.6.1


  • Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (104)

    - by user309670
    Hi. I've been a C coder for a while now; neither a newbie nor an expert. I have a daemonized application written in C on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for multiplexing connections via a Unix socket. A user-submitted string is parsed for certain characters/words using strstr() and, if found, spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write, and read to interact with the said webservers via TCP on their port 80 in each thread. All connections and writes seem successful. Reads from the webserver sockets fail, however, in one of two ways: (A) three threads seem to hang, and only one thread returns -1 with errno set to 104; the responding thread takes something like 10 minutes, an eternity :-(. (Note: errno 104 is ECONNRESET, "connection reset by peer", not EINTR, which is 4.) Or (B) three threads read 0 bytes, and only 1 of the 4 threads actually returns some data. Aren't socket read/write calls thread-safe? I use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt that the webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal), everything works perfectly, though of course in series rather than in parallel. There's a second problem too (oops): I can't write back to the client that connects to my epoll-ed Unix socket. My daemon application will hang and hog the CPU at 100% forever, yet nothing is written to the client's end. I am sure the client (a very typical PHP socket application) hasn't closed the connection when this happens; no errors are detected either. Any ideas? I cannot figure out what is wrong, even with Valgrind, GDB, or extensive logging. Kindly help where you can.


  • T-Mobile G1 (MSM7200) GPU Memory

    - by Reflog
    Hello. I'm trying to find some information regarding the GPU memory available (for OpenGL) on the T-Mobile G1. This phone has a Qualcomm MSM7200 chip inside with an ATI Imageon GPU. Unfortunately I have not been able to dig up any info regarding the specifics of GPU memory usage. How much memory is available in total for textures? Is the memory shared with the CPU memory? Thanks in advance, Eli


  • Windows Screensaver Multi Monitor Problem

    - by Bryan
    I have a simple screensaver that I've written, which has been deployed to all of our company's client PCs. As most of our PCs have dual monitors, I took care to ensure that the screensaver ran on both displays. This works fine; however, on some systems where the primary screen has been swapped (to the left monitor), the screensaver only works on the left (primary) screen. The offending code is below. Can anyone see anything I've done wrong, or a better way of handling this? For info, the context of "this" is the screensaver form itself.

        // Make form full screen and on top of all other forms
        int minY = 0;
        int maxY = 0;
        int maxX = 0;
        int minX = 0;
        foreach (Screen screen in Screen.AllScreens)
        {
            // Find the bounds of all screens attached to the system
            if (screen.Bounds.Left < minX) minX = screen.Bounds.Left;
            if (screen.Bounds.Width > maxX) maxX = screen.Bounds.Width;
            if (screen.Bounds.Bottom < minY) minY = screen.Bounds.Bottom;
            if (screen.Bounds.Height > maxY) maxY = screen.Bounds.Height;
        }
        // Set the location and size of the form, so that it
        // fills the bounds of all screens attached to the system
        Location = new Point(minX, minY);
        Height = maxY - minY;
        Width = maxX - minX;
        Cursor.Hide();
        TopMost = true;
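
    A hedged observation and sketch, not a confirmed diagnosis: Bounds.Width and Bounds.Height are sizes, while minX and minY track absolute coordinates, so the loop mixes the two. When the left monitor is primary, the right monitor's extent (Bounds.Right) exceeds either monitor's Width and is never accounted for, so the form only covers the primary screen. Accumulating the union of the screen bounds avoids the mismatch:

        // Minimal sketch: take the union of every screen's bounds so the form
        // covers all monitors regardless of which one is primary.
        Rectangle union = Rectangle.Empty;
        foreach (Screen screen in Screen.AllScreens)
        {
            union = Rectangle.Union(union, screen.Bounds);   // accumulate the bounding box
        }
        Bounds = union;   // sets the form's Location and size in one assignment
        Cursor.Hide();
        TopMost = true;

    WinForms also precomputes this union as SystemInformation.VirtualScreen, which could replace the loop entirely.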


  • Qt as a true multi-platform dev-env

    - by ruralcoder
    Inspired by the maturity problems I am facing porting to Mono on Mac and Linux, I am investigating the use of Qt as an alternative. I am curious to hear about your favorite Qt experiences, tips, or lesser-known but useful features you know of. Please include only one experience per answer.


  • .NET Remoting client problems when running on a machine with multiple NICs

    - by cui chun
    I built a .NET Remoting client which works quite well on a machine with a single NIC, and lots of test messages are received via a remoting event. But when an additional NIC was added, the client still seemed able to connect to the remoting server, yet the test messages no longer arrived. From debugging, the server end did trigger the event, but the client never received it. I searched Google and found a similar problem report: http://www.eggheadcafe.com/community/aspnet/2/10061570/reply.aspx I just wonder if there are any solutions? Thanks in advance!
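
    A hedged sketch of one remedy commonly suggested for multi-NIC Remoting setups, not a confirmed fix for this report: pin the TCP channel to a specific interface so the address Remoting advertises for callbacks is one the other machine can actually reach. The IP address and port below are placeholders, not values from the original report:

        // Needs: System.Collections, System.Runtime.Remoting.Channels,
        // System.Runtime.Remoting.Channels.Tcp, System.Runtime.Serialization.Formatters
        IDictionary props = new Hashtable();
        props["port"] = 8085;                    // placeholder port
        props["bindTo"] = "192.168.1.10";        // NIC address the peer can reach
        props["machineName"] = "192.168.1.10";   // address advertised for callbacks

        BinaryServerFormatterSinkProvider server = new BinaryServerFormatterSinkProvider();
        server.TypeFilterLevel = System.Runtime.Serialization.Formatters.TypeFilterLevel.Full;

        // Register a channel bound to one interface instead of letting
        // Remoting pick whichever NIC it enumerates first.
        ChannelServices.RegisterChannel(new TcpChannel(props, null, server), false);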



  • Problem with Eclipse and a Maven multi-module project

    - by earth
    I have created a Maven project with the following structure:

        + root-project
            pom.xml (pom)
            + sub-projectA (jar)
            + sub-projectB (jar)

    I have done the following steps:

        mvn archetype:create -DgroupId=my.group.id -DartifactId=root-project
        mvn archetype:create -DgroupId=my.group.id -DartifactId=sub-projectA
        mvn archetype:create -DgroupId=my.group.id -DartifactId=sub-projectB

    So I have, obviously, the following elements in the top-level pom.xml:

        <modules>
            <module>sub-projectA</module>
            <module>sub-projectB</module>
        </modules>

    The last step was:

        mvn eclipse:clean eclipse:eclipse

    Now if I import root-project into Eclipse, it seems to treat my projects as resources and not as Java projects. However, if I import each of the child projects sub-projectA and sub-projectB individually, it sees them as Java projects. This is a big problem for me because I have a deeper hierarchy. Any help would be appreciated!


  • mysql multi count() in one query

    - by atno
    Hi, I'm trying to count rows from several joined tables in one query, but without any luck; what I get is the same number in every column (tUsers, tLists, tItems). My query is:

        select COUNT(users.*) as tUsers,
               COUNT(lists.*) as tLists,
               COUNT(items.*) as tItems,
               companyName
        from users as c
        join lists as l on c.userID = l.userID
        join items as i on c.userID = i.userID
        group by companyID

    The result I want to get is:

        # | CompanyName | tUsers | tLists | tItems
        1 | RealCoName  |      5 |      2 |     15

    What modifications do I have to make to my query to get those results? Cheers


  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle-specific DML for inserting into multiple target tables from a single query result, without reprocessing the query or staging its result. When designing to exploit this IKM you must split the problem into reusable parts: the select part goes in one interface (I named it SELECT_PART), and each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below:

        /* INSERT_SPECIAL interface */
        insert all
        when 1=1 and (INCOME_LEVEL > 250000) then
          into SCOTT.CUSTOMERS_NEW
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL,
             CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
          values
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL,
             CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        /* INSERT_REGULAR interface */
        when 1=1 then
          into SCOTT.CUSTOMERS_SPECIAL
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL,
             CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
          values
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL,
             CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        /* SELECT_PART interface */
        select
            CUSTOMERS.EMAIL EMAIL,
            CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,
            UPPER(CUSTOMERS.NAME) NAME,
            CUSTOMERS.USER_MODIFIED USER_MODIFIED,
            CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,
            CUSTOMERS.BIRTH_DATE BIRTH_DATE,
            CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,
            CUSTOMERS.ID ID,
            CUSTOMERS.USER_CREATED USER_CREATED,
            CUSTOMERS.GENDER GENDER,
            CUSTOMERS.DATE_CREATED DATE_CREATED,
            CUSTOMERS.INCOME_LEVEL INCOME_LEVEL
        from SCOTT.CUSTOMERS CUSTOMERS
        where (1=1)

    Firstly, I create a SELECT_PART temporary interface for the query to be reused, and in the IKM assignment I state that it defines the query, that it is not a target, and that it should not be executed. Then, in my INSERT_SPECIAL interface loading a target with a filter, I set "define query" to false, "target table" to true, and "execute" to false. This interface uses the SELECT_PART query-definition interface as a source. Finally, in my last interface loading another target, I set "define query" to false again, "target table" to true, and "execute" to true; this is the go-run-it indicator! To coordinate the statement construction you will need to create a package with the select and insert statements. With 11g you can now execute the package in simulation mode and preview the generated code, including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement. A similar IKM exists for Teradata: the ODI IKM Teradata Multi Statement supports this multi-statement request in 11g. Here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/: "Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them." It works in the same way as the ODI MTI IKM: multiple interfaces are orchestrated in a package, each interface contributes some SQL, and the last interface in the chain executes the multi-statement.


  • How to configure Multi-tenant plugin as single-tenant with Spring security plugin as resolver?

    - by Fabien Barbier
    I can create a secure, multi-tenant web app with Grails by:

    - setting up the Spring Security plugin,
    - setting up the Multi-Tenant plugin (via multi-tenant install and multi-tenant-spring-security),
    - updating Config.groovy:

        tenant {
            mode = "multiTenant"
            resolver.type = "springSecurity"
        }

    - adding Integer userTenantId to the User domain class,
    - adding a domain class Organization for the tenant,
    - associating the tenants with Organization,
    - editing BootStrap.groovy.

    Everything works fine in multi-tenant mode, but how do I use mode = "singleTenant"? This configuration does not seem to work:

        tenant {
            mode = "singleTenant"
            resolver.type = "springSecurity"
        }

    Edit: I tried this config:

        tenant {
            mode = "singleTenant"
            resolver.type = "springSecurity"
            datasourceResolver.type = "config"
            dataSourceTenantMap {
                t1 = "jdbc:hsqldb:file:custFoo"
                t2 = "jdbc:hsqldb:file:custBar"
            }
        }

    But I get:

        ERROR errors.GrailsExceptionResolver - Executing action [list] of controller [org.example.TicketController] caused exception: java.lang.StackOverflowError

    and:

        Caused by: java.lang.StackOverflowError
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.getTenantIdFromSpringSecurity(SpringSecurityCurrentTenant.groovy:50)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.this$2$getTenantIdFromSpringSecurity(SpringSecurityCurrentTenant.groovy)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant$this$2$getTenantIdFromSpringSecurity.callCurrent(Unknown Source)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.get(SpringSecurityCurrentTenant.groovy:41)
            at com.infusion.tenant.spring.TenantBeanContainer.getBean(TenantBeanContainer.java:53)
            at com.infusion.tenant.spring.TenantMethodInterceptor.invoke(TenantMethodInterceptor.java:32)
            at $Proxy14.getConnection(Unknown Source)


  • Dedicated GPU in Dell PowerEdge C1100

    - by Eli Gundry
    We recently purchased a Dell PowerEdge C1100 off lease with the intention of using it for graphics processing. We installed an AMD HD 7000 series GPU in it that runs off of board power, and it sends video to the display. That said, the video is very choppy, leading us to believe that the onboard video is doing all the processing and sending it to the card. Is there any way to either disable the onboard VGA on this server or tell the OS to use only the dedicated card? More info: the server is running RHEL 6.4; the card is running the proprietary AMD drivers; the video card only works in the OS and does not show the BIOS on boot (we know that it's impossible to change this). Any ideas, guys? Update: we are now thinking that the GPU is doing the graphics processing but not working at the full speed of the PCI bus, which is odd, because it is an x16 slot, though probably optimized for a RAID card (if that makes any sense). Is there any way to remedy the choppy graphics on this server?


  • Brightness going up to 100% on loading certain websites in Chrome

    - by picheto
    I'm using Google Chrome version 21.0.1180.89 on Ubuntu 12.04 and my laptop is a Sony VAIO VPCCW15FL (spec sheet). My video driver is the proprietary "NVIDIA accelerated graphics driver (post-release updates) (version-current updates)". After installing Ubuntu, I discovered that neither the brightness control buttons (hardware) nor the brightness slider (software) worked, and found out I could get the hardware buttons to work by installing the nvidiabl.deb package and the oBacklight script. I'm using nvidiabl-dkms 0.77 and oBacklight 0.3.8. The slider in the Ubuntu Settings still does not work, but I don't care. There is an annoying thing happening when loading certain pages in Google Chrome: the brightness goes up to 100% when loading the webpage or when leaving it (closing the tab or typing a different URL in the omnibox). However, the brightness tooltip (the default brightness notification) remembers the position it was set to, so if I adjust the brightness with the hardware buttons, the level gets adjusted relative to the value it was set to before going to 100%. I disabled the Flash PPAPI plugin but left the NPAPI plugin enabled, and the problem went away for pages with Flash content. Still, the same thing happens when viewing HTML5 video, or when loading, for example, the Chrome Web Store, or when using the Scratchpad extension. I suppose it has to do with the rendering of certain elements using the GPU, but this is just a guess. This brightness thing does not happen when using Firefox 15.0 or any other application I have used yet. Does anybody know why this may be happening and what I could do to fix it without changing browsers? Thanks a lot.


  • Is using a dedicated thread just for sending gpu commands a good idea?

    - by tigrou
    The most basic game loop is like this:

        while (1)
        {
            update();
            draw();
            swapbuffers();
        }

    This is very simple but has a problem: some drawing commands can block, and the CPU will wait when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU:

        // first thread
        while (1)
        {
            update();
            render();   // use game state to generate all needed triangles and
                        // commands for the gpu and put them in a buffer; no
                        // command is sent to the gpu yet (two buffers will be
                        // used, see below)
            pulse();    // signal the other thread that data is ready
        }

        // second thread
        while (1)
        {
            wait();             // wait for the first thread's data to arrive
            send_data_to_gpu(); // send prepared commands from the buffer to the graphics card
            swapbuffers();
        }

    Also, two buffers would be used, so one buffer can be filled with GPU commands while the other is being processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially compared with a simpler one (e.g. single-threaded with triple buffering enabled)?
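
    For illustration, here is a minimal C# sketch of the two-thread, double-buffer handoff the question describes. It is a sketch of the idea only; RenderCommand, Update, BuildCommands, SubmitToGpu, and SwapBuffers are placeholder names, not a real graphics API:

        using System.Collections.Generic;
        using System.Threading;

        class RenderCommand { }   // placeholder for whatever a queued draw call holds

        class DoubleBufferedLoop
        {
            List<RenderCommand> front = new List<RenderCommand>();  // read by the GPU thread
            List<RenderCommand> back  = new List<RenderCommand>();  // filled by the update thread
            readonly object gate = new object();
            bool frameReady = false;

            public void UpdateThread()   // the "first thread" in the question
            {
                while (true)
                {
                    Update();
                    back.Clear();
                    BuildCommands(back);                 // CPU-side work only, no GPU calls
                    lock (gate)
                    {
                        while (frameReady)               // GPU thread still owns front
                            Monitor.Wait(gate);
                        var tmp = front; front = back; back = tmp;   // swap the two buffers
                        frameReady = true;
                        Monitor.Pulse(gate);             // pulse(): a frame is ready
                    }
                }
            }

            public void GpuThread()      // the "second thread" in the question
            {
                while (true)
                {
                    lock (gate)
                    {
                        while (!frameReady)              // wait(): block until a frame arrives
                            Monitor.Wait(gate);
                    }
                    SubmitToGpu(front);                  // safe: the update thread cannot swap
                    SwapBuffers();                       // while frameReady is still true
                    lock (gate)
                    {
                        frameReady = false;              // hand front back to the update thread
                        Monitor.Pulse(gate);
                    }
                }
            }

            // Game-specific stubs; illustrative placeholders only.
            void Update() { }
            void BuildCommands(List<RenderCommand> buffer) { }
            void SubmitToGpu(List<RenderCommand> buffer) { }
            void SwapBuffers() { }
        }

    The design trades one frame of latency for overlap: while the GPU thread is blocked submitting frame N, the update thread is free to simulate and build frame N+1.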


  • Enabling GTX 570

    - by Silas
    Hello, I just built my new system:

        ASRock Z77 Extreme4
        Intel i7 3770K
        16 GB Corsair RAM
        Zotac Nvidia GTX 570
        be quiet! 630W power supply
        120 GB SSD

    After I installed Ubuntu 12.04 64-bit, it ran smoothly. I downloaded and installed all the recommended updates. After checking the system details, the GTX 570 didn't show up as the graphics unit, so I figured I needed to download the drivers. I did, but being a complete newbie to Linux, I didn't succeed in installing them (I think). Anyway, after several tries and errors I shut down the PC and restarted it, resulting in no signal to my screen. After trying to reboot and trying all the monitor outputs with no result, I took out the graphics card, and now it boots normally, but after booting it says there is a problem with the system and that the graphics can't be recognized. So:

    Question A: What do I do? I like Linux, but the arbitrariness of errors that occur without any changes to the system scares me.

    Question B: Is there a beginner's guide to Ubuntu where I could start from scratch? I really want this to work.

    Question C: Now that the system is (suddenly) showing these graphics errors, so far without visible consequence despite the error message, should I reinstall the GPU and give the driver installation another try, or the other way around?

    I'll be very grateful for any help. Thank you in advance!


  • What is causing sudden freezing while running a real-time program?

    - by Trevor Boyd Smith
    So I run a highly intensive (CPU/GPU) real-time program. During normal execution, everything suddenly freezes for 1-4 seconds. I opened Process Explorer in the background to help gain insight and maybe identify something. Aligning the CPU and GPU usage graphs in time shows 4 distinct drops in both, where usage goes from some positive CPU/GPU level to almost zero; these drops align with the moments the real-time program suddenly freezes. How do I find what is causing these sudden drops?

    NOTE: When you put your mouse over the graph it tells you the time, accurate to the second, at your cursor's position. Maybe this mouse-over feature could be helpful in some way (e.g. what if you had a log of all processes every 100 ms).

    EDIT: The real-time program is a video game, so I can't watch some sort of instrumentation while the game is running. I need a solution that lets me look back in time somehow to see what was happening when the slowdown occurred.

    EDIT: Re: recording data vs. using a real-time monitor: the Windows Performance Recorder is for some reason not recording what I expect it to record, so I switched to using perfmon and then opening its Resource Monitor. Re: setting it up so I can view the real-time monitor: in the video game I set it to spectate and then put the game in windowed mode so that I can view the real-time display that Resource Monitor has. Now that I can get semi-real-time data (only once per second; how do you get more than once per second?) I started looking at the various real-time data readouts. Getting to the cause: I noticed a strong correlation between high disk I/O and low CPU usage (which also coincides with the in-game freezing). How do I use Resource Monitor to find out what is doing all this offending disk I/O?

