Search Results

Search found 5756 results on 231 pages for 'cpu utilization'.

Page 166/231 | < Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >

  • Is using a dedicated thread just for sending GPU commands a good idea?

    - by tigrou
    The most basic game loop looks like this:

        while(1) {
            update();
            draw();
            swapbuffers();
        }

    This is very simple but has a problem: some drawing commands can block, and the CPU will wait when it could be doing other things (like processing the next update() call). Another possible solution I have in mind is to use two threads: one for updating the game state and preparing commands to be sent to the GPU, and one for sending those commands to the GPU:

        // first thread
        while(1) {
            update();
            render(); // use game state to generate all needed triangles and commands for the GPU
                      // and put them in a buffer; no command is sent to the GPU yet
                      // (two buffers are used, see below)
            pulse();  // signal the other thread that data is ready
        }

        // second thread
        while(1) {
            wait();             // wait for the first thread's data to arrive
            send_data_togpu();  // send prepared commands from the buffer to the graphics card
            swapbuffers();
        }

    Two buffers would be used, so one buffer can be filled with GPU commands while the other is being consumed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages, especially compared to a simpler solution (e.g. single-threaded with triple buffering enabled)?
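
    For illustration, a minimal sketch of the double-buffered handoff described above (C++11, one mutex plus one condition variable; names like RenderCommand and the two loop functions are illustrative assumptions, not from any particular engine, and a real version would also need a shutdown path):

        #include <condition_variable>
        #include <mutex>
        #include <vector>

        struct RenderCommand { int dummy; }; // stand-in for whatever one draw needs

        std::vector<RenderCommand> buffers[2]; // the two command buffers
        int fillIndex = 0;       // buffer the update thread writes into
        bool dataReady = false;  // a filled buffer is waiting for the render thread
        std::mutex m;
        std::condition_variable cv;

        void updateThreadLoop() {
            for (;;) {
                buffers[fillIndex].clear();
                // update(); render(): fill buffers[fillIndex]; no GPU calls here
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [] { return !dataReady; }); // last frame consumed?
                fillIndex ^= 1;    // hand over the filled buffer, write the other next
                dataReady = true;
                lock.unlock();
                cv.notify_one();   // pulse()
            }
        }

        void renderThreadLoop() {
            std::vector<RenderCommand> toDraw;
            for (;;) {
                {
                    std::unique_lock<std::mutex> lock(m);
                    cv.wait(lock, [] { return dataReady; }); // wait()
                    toDraw.swap(buffers[fillIndex ^ 1]);     // grab the filled buffer
                    dataReady = false;
                }
                cv.notify_one();
                // send_data_togpu(toDraw); swapbuffers();  (GL context lives here)
            }
        }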

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements?

    EDIT: I thought caching the query search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs each location might have: for example, storing the IDs of the entities in the result set on the client, or storing those IDs in the user's session on the web server, or in a temporary table in the database. I'm not looking for a single solution, as different scenarios may result in different approaches (and such a question would be more suited for stackoverflow.com rather than here), but rather a design comparison between the possible approaches.
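
    As a concrete anchor for that comparison, the simplest variant to describe (freeze the ordered result IDs once, then page by slicing) might look like the sketch below. At 8 bytes per ID, even 500,000 hits is roughly 4 MB per cached search, which is exactly the memory-versus-recomputation tradeoff being weighed; the names (ResultSnapshot, page) are illustrative only:

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        using Id = std::uint64_t;

        struct ResultSnapshot {
            std::vector<Id> ids; // ordered result IDs, frozen at search time
        };

        // Serve page N by slicing the frozen ID list; only the twenty entities
        // on the requested page are then fetched from storage by ID.
        std::vector<Id> page(const ResultSnapshot& snap,
                             std::size_t pageNo, std::size_t pageSize) {
            const std::size_t begin = pageNo * pageSize;
            if (begin >= snap.ids.size()) return {};
            const std::size_t end = std::min(begin + pageSize, snap.ids.size());
            return std::vector<Id>(snap.ids.begin() + begin, snap.ids.begin() + end);
        }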

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage:

    - Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however.
    - Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division... not sure how performant this is in shaders, though. Not as intuitive to read the values, but possibly quite a bit faster than dividing by a non-PoT value.
    - Use a texture as a lookup table. I've heard that this is really fast.

    Or whatever solutions others can offer to achieve the same result -- minimal vertex data with sensible scaling.
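
    One point worth anchoring with code: a division by a compile-time constant is typically folded into a multiply by its reciprocal, so the 10:1 option costs a single MUL per vertex, with no division or bit shift needed. A minimal sketch (desktop GL 3.3 style; assumes an extension loader such as GLEW is already initialized, and attribute location 0 is illustrative):

        #include <GL/glew.h> // assumes GLEW (or similar) is initialized

        // Decimetre shorts in, metres out: the rescale is a constant multiply,
        // which the GLSL compiler folds into a single MUL/MAD.
        const char* kTerrainVS = R"(
            #version 330
            layout(location = 0) in vec3 positionDm; // shorts arrive as floats
            uniform mat4 mvp;
            const float DM_TO_M = 0.1;
            void main() {
                gl_Position = mvp * vec4(positionDm * DM_TO_M, 1.0);
            }
        )";

        void setupPositionAttrib(GLuint vbo) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // 3 GL_SHORTs per vertex, unnormalized: the shader sees e.g. 123.0
            glVertexAttribPointer(0, 3, GL_SHORT, GL_FALSE,
                                  3 * sizeof(short), nullptr);
            glEnableVertexAttribArray(0);
        }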

    Read the article

  • What's a good host for an active vBulletin site?

    - by Kyle
    I've been switching hosts, using a VPS each time, and I'm just not sure I'm finding the right VPSes. I've used a VPS from burst.net & rubyringtech and I just feel like it's slowly killing my site because of the slow speed. I really don't know if it's the network or the VPS itself, but I really wish to fix this. When I top into the VPS at peak times it shows this:

        top - 03:18:56 up 16:33, 1 user, load average: 1.33, 1.40, 1.33
        Tasks: 30 total, 1 running, 29 sleeping, 0 stopped, 0 zombie
        Cpu(s): 27.2%us, 13.6%sy, 0.0%ni, 59.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 1048576k total, 679712k used, 368864k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

    And pages take at least a good 2-3 minutes to load, with only 50-60 members on the forum. I had a shared hosting account before and the forum was lightning fast... Is a VPS a bad idea? :\ What should I do to fix this? I'm running lighttpd with xcache, and the latest MySQL + PHP versions. The server is an Intel i7 2600 w/ 1gb uplink (I think the 1gb uplink is a lie, because I've tested the network and the highest download speed I've seen was 20mb/s from a code.google page). All in all, I've seen people talking about Linode. Should I try them? I honestly don't need a dedicated server yet; it's only 50-70 members online. What should I do? I really want a VPS because I enjoy root access. Does anyone have any suggestions?

    Read the article

  • Amazon EC2 vs Dedicated server at Hetzner, what's the use for EC2?

    - by C-Blu
    After searching the web I still can't find the reason to use EC2. What's the point of scaling with EC2? If you expect a huge burst in traffic, they say. OK, but what if you already have a couple of sites with good traffic and, for example, a medium reserved EC2 instance is not enough? You are paying $36.60 (medium reserved for 1 year) in EU (Ireland) + traffic + optional expenses for databases and S3 if you use them. Of course, at some point, while you are under $56.6-$66.1, you can optimize your hosting costs with Amazon EC2. But if at some point you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time before you get massive traffic. (Am I wrong?)

        CPU: i7-2600 Quadcore (3.4-3.8 GHz)
        RAM: 16 GB
        HDD: 2x3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS
        Traffic: 10 TiB per month included

    This is what you get from Hetzner for $56 (-19% VAT), or $66 for EU residents. Please tell me, what's the reason to use Amazon? Which load won't a server from Hetzner take that Amazon Auto Scaling will? Is the maintenance of a dedicated server vs EC2 still the same? Or won't a hardware failure at Amazon ruin your EBS storage? I'm still not at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether Amazon's infrastructure is better than the pure performance of Hetzner's hardware.

    Read the article

  • 11.10 system crashes for no apparent reason [closed]

    - by varanoid
    I'm relatively new to Linux, and while I know a fair amount about computers and programming, it certainly isn't my specialty. I have a dual boot with Windows 7, and it had been working very well for me until recently. The computer will freeze at random. The last time, I was smart enough to keep the System Monitor open when it happened, and it looks like at the time of freezing "bash" and a bunch of other processes that I don't recognize had flooded the memory. So it looks like my memory is getting overloaded and this is what is crashing the system, but I honestly have no idea what could be doing it. Generally I have a bunch of programs running, but they don't take much RAM or CPU: Transmission, LibreOffice Writer, Firefox, Empathy, and Banshee. Sometimes I also have Text Editor and Terminal open, but it crashes regardless. When it crashes, it seems that all of the programs keep working, but things like windows, the taskbar, and operations like Alt+Tab stop working properly or at all. Sometimes the mouse and keyboard freeze and I have to power off manually. Other than that I don't know what the problem is. The only irregularity I've experienced is that I can't download "Debian package management system", even though other updates download fine.

    Read the article

  • An online version of ClearTrace

    - by Bill Graziano
    When I visit clients for the first time and conduct a performance review, I introduce them to ClearTrace. It's still the best way I know to identify exactly which queries are consuming the most resources. The downside is that it has to be downloaded and needs a database created to store the results. I finally decided it would be easier if I could just upload a trace immediately. You can find the online version of ClearTrace at TraceTune.com. It provides a simple way to upload a trace file and see exactly which stored procedures or SQL statements consume the most CPU and disk. This is still a work in progress as I try to determine exactly which features from ClearTrace are important. I've also limited the file upload to 10MB in this beta release. That might not sound like much, but I get over 20,000 events using this stored procedure to generate the trace. If you're looking for something to do on a Friday, I'd suggest a little performance tuning. Generating 10MB of trace data doesn't take long at all, and in a short time you'll see exactly which SQL statements you need to tune first.

    Read the article

  • How to optimize a box2d simulation in action game?

    - by nathan
    I'm working on an action game and I use Box2D for physics. The game uses a tiled map. I have different types of body: static ones for tiles, and dynamic ones for the player and enemies. I tested my game with ~150 bodies and I get a constant 60 FPS on my computer, but not on my mobile (Android). The FPS drops as the number of bodies increases. After profiling the Android application, I saw that World.step took around 8ms of CPU time to execute. A few things to note:

    - Not all of the world is visible on screen; I use a scrolling system.
    - Enemies are constantly moving toward the player, so there is always a force applied to their bodies.
    - Enemies need to collide with each other.
    - Enemies collide with tiles.

    I also know that I can activate/deactivate or sleep/wake bodies. Considering that only some of the enemies are visible on screen at a time, are there any optimizations I can make to reduce the execution time of the Box2D simulation? I found someone trying an optimization based on the distance of enemies from the player (link), but it seems he just deactivates far bodies (in my case, I could deactivate bodies that are not visible). But my enemies need to move even when they are not visible on screen, and applying forces will not work on inactive bodies. Should I play with sleeping bodies here? Also, enemies are composed of two fixtures and are constantly colliding with each other and with tiles, but I never need to get notified about that. Is there anything I can do to optimize this kind of scenario? Finally, am I wrong to try to run the simulation at 60 FPS on mobile, and should I instead make it run at 30 FPS?
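
    For illustration, here is a minimal sketch of the deactivate-and-fake pattern mentioned above, written against the classic Box2D 2.3-style C++ API (the AS3 port mirrors it closely; in Box2D 2.4 SetActive/IsActive became SetEnabled/IsEnabled). The Enemy struct, ACTIVE_RADIUS and the march speed are illustrative assumptions. Far enemies leave the broadphase entirely and are moved by cheap straight-line kinematics; when they come back in range they rejoin the real simulation:

        #include <Box2D/Box2D.h>
        #include <vector>

        struct Enemy {
            b2Body* body;     // created with two fixtures, as in the question
            b2Vec2 proxyPos;  // cheap stand-in position while inactive
        };

        const float ACTIVE_RADIUS = 30.0f; // metres around the player, tune to taste

        void updateEnemies(std::vector<Enemy>& enemies, const b2Vec2& playerPos, float dt) {
            for (Enemy& e : enemies) {
                b2Vec2 pos = e.body->IsActive() ? e.body->GetPosition() : e.proxyPos;
                float dist = (pos - playerPos).Length();
                if (dist > ACTIVE_RADIUS) {
                    if (e.body->IsActive()) {
                        e.proxyPos = e.body->GetPosition();
                        e.body->SetActive(false);      // removed from broadphase: ~free
                    }
                    b2Vec2 dir = playerPos - e.proxyPos;
                    dir.Normalize();
                    e.proxyPos += dt * 2.0f * dir;     // straight-line march, no collisions
                } else if (!e.body->IsActive()) {
                    e.body->SetTransform(e.proxyPos, 0.0f);
                    e.body->SetActive(true);           // rejoin the real simulation
                }
            }
        }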

    Read the article

  • Ubuntu live CD: black screen and blinking cursor

    - by IFasel
    I'm trying to install Ubuntu 12.04 on my computer. I can get to the purple screen on the live CD, but if I then choose "Installing Ubuntu", I get a black screen with a blinking cursor (and nothing else happens). My PC: Acer Aspire M3920, CPU i5-2300, 8 GB RAM, NVIDIA GT 405. What I already tried:

    - 12.04 and the 13.04 daily build
    - a live USB and a live DVD
    - the boot options nomodeset and acpi=off

    I googled a lot and it seems that it could be a graphics card problem. Do you know any other boot options that I could try?

    UPDATE: This is not a duplicate: I've tried all the common boot options (nomodeset, noacpi...) and they don't change anything. With the option "no splash" (instead of "quiet splash"), I can see what happens before the forever-blinking cursor:

        [sdg] no caching mode present
        [sdg] assuming drive cache: write through
        ata8.00: exception Emask 0x52 ... frozen
        ata8: SError: { RecovData RecovComm UnrecovData ... }
        ata8.00: failed command: IDENTIFY PACKET DEVICE ...
        ata8.00: status: { DRDY }
        ata8: hard resetting link

    Does somebody know what this means? N.B.: astonishingly, Puppy Linux boots fine (but Debian, Fedora and Ubuntu do not).

    SOLUTION: In fact, it was not a graphics card problem. I had to disconnect the DVD drive and connect it to another free SATA connector (I don't really understand why Ubuntu had trouble with this connector and Windows 7 did not). After that, everything worked fine.

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of stuff like this... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ...which talks about how to go about reducing draw calls. What I can't seem to understand is how it actually saves draw calls, given that the logic would be something like this:

        Without chunks:
            foreach voxel in myvoxels
                DrawIfVisible()

        With chunks:
            foreach chunk in mychunks
                DrawIfVisible()
                // which then does:
                foreach voxel in chunk
                    DrawIfVisible()

    So surely you've saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, effectively saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me... the fewer of those, the faster the application.

    EDIT: In case it makes any difference, I will probably be using XNA (DX, not OpenGL) for my engine, so don't consider the choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
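
    For contrast, chunked engines of the kind the link describes usually pair chunks with meshing: all of a chunk's visible voxel faces are baked into one vertex buffer whenever the chunk changes, so the per-frame loop issues one draw call per visible chunk, not per voxel. A minimal sketch (raw OpenGL for concreteness; the same idea maps to XNA vertex buffers, and Chunk/drawWorld are illustrative names):

        #include <GL/glew.h> // VBOs need GL 1.5+, so assume an extension loader
        #include <vector>

        // One pre-baked mesh per chunk: rebuilding happens only when voxels
        // change; the per-frame cost is one visibility test plus one draw call
        // per chunk, no matter how many voxels the chunk holds.
        struct Chunk {
            GLuint vbo = 0;        // all visible voxel faces, baked once
            GLsizei vertexCount = 0;
            bool visible = false;  // one frustum test for the whole chunk
        };

        void drawWorld(const std::vector<Chunk>& chunks) {
            glEnableClientState(GL_VERTEX_ARRAY);
            for (const Chunk& c : chunks) {
                if (!c.visible || c.vertexCount == 0) continue;
                glBindBuffer(GL_ARRAY_BUFFER, c.vbo);
                glVertexPointer(3, GL_FLOAT, 0, nullptr);
                glDrawArrays(GL_TRIANGLES, 0, c.vertexCount); // one call per chunk
            }
        }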

    Read the article

  • Login Screen hangs after entering password

    - by Ravi
    When I enter my password in the login box, nothing happens; it just gets stuck. It was running fine earlier. I installed gnome-panel and cairo-dock, after which I logged out, logged back in, and chose the GNOME Classic session. I also added a PPA from webupd8.org to install some themes (link). Then I opened Ubuntu Tweak, and when I changed the theme to Evolve the whole laptop stopped responding. The keyboard and mouse were unresponsive, so I was forced to shut down manually. Now whenever it starts, I cannot log into Ubuntu. I hear the fan spinning very fast when the laptop freezes (probably the CPU is running at 100%). Please help me. How can I log back into Ubuntu? I am using Ubuntu 12.04 and have installed all the latest updates.

    EDIT: I also want to know how I can delete a folder from the file system using the live CD.

    Read the article

  • Why do VMs need to be "stack machines" or "register machines" etc.?

    - by Prog
    (This is an extremely newbie-ish question.) I've been studying a little about virtual machines. It turns out a lot of them are designed very similarly to physical or theoretical computers. I read that the JVM, for example, is a 'stack machine'. What that means (and correct me if I'm wrong) is that it stores all of its 'temporary memory' on a stack, and its opcodes operate on this stack. For example, the source code 2 + 3 will be translated to bytecode similar to:

        push 2
        push 3
        add

    My question is this: JVMs are probably written in C/C++ and such. If so, why doesn't the JVM just execute the C code 2 + 3? I mean, why does it need a stack, or in other VMs 'registers', like in a physical computer? The underlying physical CPU takes care of all of this. Why don't VM writers simply execute the interpreted bytecode with 'usual' instructions in the language the VM is programmed in? Why do VMs need to emulate hardware, when the actual hardware already does this for us? Again, very newbie-ish questions. Thanks for your help.
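
    One way to see the answer is to sketch the smallest possible bytecode interpreter. By the time the VM runs, "2 + 3" no longer exists as source text, only as opcodes arriving one at a time, so intermediate results of earlier opcodes have to be parked somewhere until a later opcode consumes them; the stack is that parking place. A hypothetical three-opcode machine in C++ (all names illustrative):

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        enum Op : std::uint8_t { PUSH, ADD, PRINT };

        void run(const std::vector<std::uint8_t>& code) {
            std::vector<int> stack; // the VM's "temporary memory"
            for (std::size_t pc = 0; pc < code.size(); ++pc) {
                switch (code[pc]) {
                case PUSH: stack.push_back(code[++pc]); break; // operand lives in the bytecode
                case ADD: { // "2 + 3" was compiled long before this C++ ever ran,
                            // so there is no C expression here to execute
                    int b = stack.back(); stack.pop_back();
                    int a = stack.back(); stack.pop_back();
                    stack.push_back(a + b);
                    break;
                }
                case PRINT: std::printf("%d\n", stack.back()); break;
                }
            }
        }

        int main() {
            run({PUSH, 2, PUSH, 3, ADD, PRINT}); // prints 5
        }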

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up question to my original question. I'm thinking of going with generating diffs and storing those diffs in a 'History' table in the database. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new versions and generate this patch. The patch can be used to regenerate the document at a specific point in time. My dilemma is how to store this data. Should I:

    (a) Insert a new database record for every patch?
    (b) Store the patches in a JavaScript array and store that array in the History table, so there is only one History record per document, containing an array of all its patches?

    Concerns:

    (a) Too many database records generated; it will be slow and CPU-intensive to query.
    (b) Only one record; if that record is somehow corrupted/deleted, the entire revision history is gone.

    I'm looking for suggestions and concerns with either approach.
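
    A common middle path between (a) and (b), sketched below, is one row per revision but with a full snapshot every Nth save instead of a patch: reconstruction then applies at most N-1 patches, and a corrupted patch only loses history back to the previous snapshot. This is a sketch with a stand-in applyPatch passed in as a parameter; wiring in diff-match-patch's real patch-apply call and the storage schema is left open:

        #include <cstddef>
        #include <string>
        #include <vector>

        struct Revision {
            bool isSnapshot;  // full text, or a diff-match-patch style patch
            std::string data; // snapshot text or serialized patch
        };

        // applyPatch is a stand-in for the library's patch-apply call.
        std::string reconstruct(const std::vector<Revision>& history,
                                std::size_t version,
                                std::string (*applyPatch)(const std::string&,
                                                          const std::string&)) {
            std::size_t start = version;
            while (!history[start].isSnapshot) --start; // nearest snapshot at or before
            std::string doc = history[start].data;
            for (std::size_t i = start + 1; i <= version; ++i)
                doc = applyPatch(doc, history[i].data);
            return doc;
        }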

    Read the article

  • WiFi/LAN not working after installing Ubuntu 13.10

    - by user183025
    I can't connect to my WiFi after installing Ubuntu 13.10. I tried re-installing it three times (thinking that I had done something wrong during the installation), but I still can't connect. It just tells me that I'm offline and can't connect to my WiFi. I also tried connecting to the internet using a LAN cable, with the same result. I tried Google, but it seems there's no solution to this yet... Does anybody know how to solve this? Thanks! FYI:

        H/W path    Device    Class       Description
        ======================================================
                              system      RC530/RC730 (To be filled by O.E.M.)
        /0                    bus         RC530/RC730
        /0/0                  memory      64KiB BIOS
        /0/4                  processor   Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz
        /0/4/5                memory      32KiB L1 cache
        /0/41                 memory      4GiB System Memory
        /0/41/0               memory      DIMM [empty]
        /0/41/1               memory      4GiB SODIMM DDR3 Synchronous 1333 MHz (0.8 ns)
        /0/100                bridge      2nd Generation Core Processor Family DRAM Controller

        02:00.0 Network controller: Intel Corporation Centrino Wireless-N 130 (rev 34)
                Subsystem: Intel Corporation Centrino Wireless-N 130 BGN
                Flags: bus master, fast devsel, latency 0, IRQ 44
                Memory at f7200000 (64-bit, non-prefetchable) [size=8K]
                Capabilities: <access denied>
                Kernel driver in use: iwlwifi
                Kernel modules: iwlwifi

        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
                Subsystem: Samsung Electronics Co Ltd Device c0c1
                Flags: bus master, fast devsel, latency 0, IRQ 41
                I/O ports at b000 [size=256]
                Memory at f2104000 (64-bit, prefetchable) [size=4K]
                Memory at f2100000 (64-bit, prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: r8169
                Kernel modules: r8169

    Read the article

  • Higher Performance With Spritesheets Than With Rotating Using C# and XNA 4.0?

    - by Manuel Maier
    I would like to know what the performance difference is between using multiple sprites in one file (a sprite sheet) to draw a game character that can face in 4 directions, and using one sprite per file but rotating that character to my needs. I am aware that the sprite-sheet method restricts the character to predefined directions, whereas the rotation method gives the character the freedom of "looking everywhere". Here's an example of what I am doing.

    Single-sprite method: assuming I have a 64x64 texture that points north, I do the following if I want it to point east:

        spriteBatch.Draw(
            _sampleTexture,
            new Rectangle(200, 100, 64, 64),
            null,
            Color.White,
            (float)(Math.PI / 2),
            Vector2.Zero,
            SpriteEffects.None,
            0);

    Multiple-sprite method: now I have a sprite sheet (128x128) where the top-left 64x64 section contains a sprite pointing north, the top-right 64x64 section points east, and so forth. To make it point east, I do the following:

        spriteBatch.Draw(
            _sampleSpritesheet,
            new Rectangle(400, 100, 64, 64),
            new Rectangle(64, 0, 64, 64),
            Color.White);

    So which of these methods uses less CPU time, and what are the pros and cons? Is .NET/XNA optimizing this in any way (e.g. noticing that the same call was made last frame and reusing an already rendered/rotated image that's still in memory)?

    Read the article

  • Less graphics power all of a sudden (Intel HD 3000)

    - by queueoverflow
    I have an Intel Sandy Bridge i5 with HD 3000 graphics. I used to be able to play Urban Terror and Nexuiz comfortably at 85 and 60 frames per second until mid/end of October 2012, the former even on a full HD display at that frame rate. Now I get around 30 to 45 FPS on the smaller laptop screen and around 20 to 30 on the external monitor. Did something happen to Kubuntu 12.04 so that it has less graphics performance than before?

    UPDATE: I looked into the System Monitor and could not detect anything being at its maximum. The four CPU cores were pretty much bored, and maybe 2 GB of the 8 GB RAM were filled. I also ran intel_gpu_top and did not notice anything at its limit. See the output.

    UPDATE (after kernel bisecting): I did a kernel bisect and tried 3.2.0-23, 3.2.0-27, 3.2.0-29 and 3.2.0-30, and all had full graphics power. Interestingly, I then had full power when I just booted back into the regular 3.2.0-32 kernel. This does not make sense to me…

    Read the article

  • Login screen hangs after entering password in Ubuntu 12.04

    - by Ravi
    When I enter my password in the login box, nothing happens; it just gets stuck there. It was running fine earlier. I installed the packages gnome-panel and cairo-dock, then logged out and selected the GNOME Classic session. Then I added a PPA from webupd8.org to install some themes (link) and opened Ubuntu Tweak. When I changed the theme to Evolve, the whole laptop stopped responding. Neither my keyboard nor my mouse was working, so I was forced to shut down manually. Now, after restarting, I cannot log into Ubuntu. I hear my laptop fan spinning loudly when the laptop gets stuck (the CPU is probably at 100% when it freezes). Please help me. How can I log back into Ubuntu? I am using Ubuntu 12.04 and have installed all the latest updates.

    Read the article

  • Kubuntu never gets to the login screen

    - by searchfgold6789
    I am not sure that Xorg is using the right device... The system in question is an A4-1400 CPU with built-in Radeon graphics. I added a PCIe Radeon HD 5750 and changed the BIOS so it would use that card automatically. Now I can see output from the TV plugged into the HDMI port and can boot into recovery mode, but I cannot get to a desktop: X doesn't start; it just freezes at the Kubuntu splash screen. I have no idea how to get X to use the proper graphics card. I pasted the Xorg.log at http://paste.ubuntu.com/6261137/, and the Xorg.conf generated by aticonfig --initial is at http://paste.ubuntu.com/6261161/. Also, see the lspci and lshw -c display outputs. I can provide more information, but something tells me X is not using the right graphics card, and I've no idea where to begin fixing it... I faced a similar issue before with another distribution, but found no information anywhere on how to make X use a specific graphics card. Or maybe it's not that simple.

    Read the article

  • Simple (and fast) dice physics

    - by Markus von Broady
    I'm programming a throw of 5 dice in ActionScript 3 + AwayPhysics (a Bullet physics port). I had a lot of fun tweaking frictions, masses etc., and in the end I got the best results with more physics ticks per frame. Currently I use 10 ticks per frame (1/60 s) and it's OK, though I see a further improvement with 20 ticks. Even though it's only 5 cubes (dice) in a box (or a floor with 3 walls, really), I can't simulate 20 ticks per frame and keep the FPS at 60 on a middle-aged PC. That's why I decided to precompute the animation frames, finishing around 1700 ticks in 2 seconds. The Flash player freezes for these 2 seconds, and I'm afraid this will become 5 seconds or more if I simulate multi-threading and compute the frames in the background of other heavy processing and CPU drawing (the dice are only one part of this game). Because I want both players to see the dice roll the same way, I can't compute the physics whenever resources are free and build a buffer of at least one throw of each type (where the type is the number of dice thrown). I'm afraid players will see a "preparing dices........." message too often and for too long. I think the only solution to this problem is replacing the physics engine with something simpler, or writing my own. Do you have any formulas for cube-cube and cube-wall collision detection, and for calculating how the angular and linear velocities should change after a collision occurs?
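
    For reference, the textbook impulse-based response that simple rigid-body engines use (a sketch of the standard formulation, not AwayPhysics internals): detect cube-wall contact by testing the cube's 8 vertices against the wall plane's signed distance, and cube-cube contact with the separating axis test (3 + 3 face normals plus 9 edge-edge cross products). At a contact with normal $\mathbf{n}$ (pointing from body 2 to body 1), relative contact-point velocity $\mathbf{v}_{rel} = \mathbf{v}_1 - \mathbf{v}_2$, restitution $e$, masses $m_i$, inertia tensors $I_i$, and contact offsets $\mathbf{r}_i$ from each centre of mass, the scalar impulse is

        j = \frac{-(1+e)\,(\mathbf{v}_{rel} \cdot \mathbf{n})}
                 {\frac{1}{m_1} + \frac{1}{m_2}
                  + \mathbf{n} \cdot \left[ (I_1^{-1}(\mathbf{r}_1 \times \mathbf{n})) \times \mathbf{r}_1
                  + (I_2^{-1}(\mathbf{r}_2 \times \mathbf{n})) \times \mathbf{r}_2 \right]}

    and the velocities update as $\mathbf{v}_i' = \mathbf{v}_i \pm (j/m_i)\,\mathbf{n}$ and $\boldsymbol{\omega}_i' = \boldsymbol{\omega}_i \pm j\,I_i^{-1}(\mathbf{r}_i \times \mathbf{n})$, with plus for body 1 and minus for body 2 (a wall is the limit $m_2 \to \infty$, so its terms drop out).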

    Read the article

  • Sprite batching in OpenGL

    - by Roy T.
    I've got a Java-based game with an OpenGL rendering front end that draws a large number of sprites every frame (during testing it peaked at 700). Right now the game is completely unoptimized: there is no spatial partitioning (so a sprite is drawn even if it isn't on screen), and every sprite is drawn separately like this:

        graphics.glPushMatrix();
        {
            graphics.glTranslated(x, y, 0.0);
            graphics.glRotated(degrees, 0, 0, 1);

            graphics.glBegin(GL2.GL_QUADS);
            graphics.glTexCoord2f(1.0f, 0.0f);
            graphics.glVertex2d(half_size, half_size); // upper right
            // same for upper left, lower left, lower right
            graphics.glEnd();
        }
        graphics.glPopMatrix();

    Currently the game runs at about 25 FPS and is CPU-bound. I would like to improve performance by adding spatial partitioning (which I know how to do) and sprite batching. Not drawing sprites that aren't on screen will help a lot; however, since players can zoom out, it won't help enough, hence the need for batching. But sprite batching in OpenGL is a bit of a mystery to me. I usually work with XNA, where a few classes that do this are built in, but in OpenGL I don't know what to do. As for further optimization, the game I'm working on has a few interesting characteristics: a lot of sprites share the same texture, and all the sprites are square. Maybe these characteristics will help determine an efficient batching technique?
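
    For illustration, a minimal batching sketch (written in C-style fixed-function OpenGL for concreteness; the JOGL calls used in the question mirror these one-to-one, and all names here are illustrative). Instead of one glBegin/glEnd per sprite, each quad is rotated and translated on the CPU into a shared array, and one glDrawArrays is issued per texture:

        #include <GL/gl.h>
        #include <cmath>
        #include <vector>

        struct Vertex { float x, y, u, v; };

        std::vector<Vertex> batch; // group sprites by texture before flushing

        void queueSprite(float x, float y, float half, float radians) {
            const float c = std::cos(radians), s = std::sin(radians);
            const float us[4] = {0, 1, 1, 0}, vs[4] = {0, 0, 1, 1};
            const float cx[4] = {-half, half, half, -half};
            const float cy[4] = {-half, -half, half, half};
            for (int i = 0; i < 4; ++i)  // rotate + translate on the CPU
                batch.push_back({x + cx[i] * c - cy[i] * s,
                                 y + cx[i] * s + cy[i] * c, us[i], vs[i]});
        }

        void flush(GLuint texture) {
            if (batch.empty()) return;
            glBindTexture(GL_TEXTURE_2D, texture);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].x);
            glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].u);
            glDrawArrays(GL_QUADS, 0, (GLsizei)batch.size()); // one call for all sprites
            batch.clear();
        }

    Since many sprites share the same texture, sorting the queue by texture keeps the number of flushes (and thus draw calls) close to the number of distinct textures rather than the number of sprites.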

    Read the article

  • After upgrading from 10.04 to 12.04, cannot boot with Linux 3.2.0-24-generic-pae

    - by Ricardo
    After upgrading from 10.04 to 12.04, I cannot boot with Linux 3.2.0-24-generic-pae: the process freezes at a Xubuntu initialization screen (I had Qimo installed). If I try recovery mode (with the same kernel version), booting freezes after this message:

        Begin: Mounting root file system.

    If I choose "Previous Linux versions" in the GRUB menu, I can boot using Linux 2.6.32-41-generic-pae. But once logged in, some things don't work (apt-get update fails, Update Manager fails, the HUD menu does not provide suggestions...) (to be honest, I have no idea whether this is part of the bigger issue). Reading through apparently similar problems on Ask Ubuntu, I decided to follow some advice: I got Boot-Repair and ran it. The problem remains, and I got this report. I also ran, as root in a terminal,

        sudo update-initramfs -u

    and this is what I got:

        update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic-pae
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab
        /tmp/mkinitramfs_EIDlHy/scripts/classmate-bottom/45xconfig: 9: .: Can't open /scripts/casper-functions

    What else? My PC has an Intel® Core™ i7 CPU 870 @ 2.93GHz × 8, a GeForce 8400 GS/PCIe/SSE2 graphics card, and 7.8 GiB of memory. I have two questions: Is this a bug in the newest kernel that I should report? And is there anything I can do apart from a fresh install?

    Read the article

  • Efficient algorithm for Virtual Machine (VM) consolidation in the cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci. Moving (migrating) a VM from one PM to another has an associated cost that depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of N and the migration cost by migrating some VMs, i.e. to minimize the number of running PMs while not degrading the service level through excessive migrations.

    My approach: the brute-force approach is to choose the least-loaded PM and try to fit its VMs onto other PMs with the First Fit Decreasing algorithm; alternatively, we can select victim and target PMs based on their load levels and shut down victims where possible by moving their VMs to targets. I tried this greedy approach on data from Baadal (the IIT-D cloud), but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation, but was unable to understand much of it. I used these links:

    http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf
    http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf

    Would anyone please clarify the solution or suggest new approaches/resources for better performance? I am basically searching for the algorithms, not the physical optimizations, and I know that many commercial organizations provide such solutions; I just want to know more about the underlying algorithms. Thanks in advance.
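
    For concreteness, here is the greedy evacuation step described above as a minimal C++ sketch (all names are illustrative; capacities are plain (ram, cpu) pairs). It tries to empty one victim PM by First Fit Decreasing, ordering by RAM since migration cost depends on RAM, and stages the attempt against tentative capacities so a failed attempt changes nothing:

        #include <algorithm>
        #include <cstddef>
        #include <utility>
        #include <vector>

        struct VM { double ram, cpu; };
        struct PM { double ramFree, cpuFree; std::vector<VM> vms; };

        bool tryEvacuate(PM& victim, std::vector<PM*>& targets) {
            std::vector<VM> vms = victim.vms;
            std::sort(vms.begin(), vms.end(),           // largest RAM first
                      [](const VM& a, const VM& b) { return a.ram > b.ram; });

            std::vector<std::pair<double, double>> free; // tentative (ram, cpu)
            for (const PM* t : targets) free.push_back({t->ramFree, t->cpuFree});

            std::vector<std::size_t> placement; // chosen target per VM, in order
            for (const VM& vm : vms) {
                std::size_t host = targets.size();
                for (std::size_t i = 0; i < targets.size(); ++i) {
                    if (free[i].first >= vm.ram && free[i].second >= vm.cpu) {
                        host = i;
                        break; // first fit
                    }
                }
                if (host == targets.size()) return false; // victim can't be emptied
                free[host].first -= vm.ram;
                free[host].second -= vm.cpu;
                placement.push_back(host);
            }
            for (std::size_t k = 0; k < vms.size(); ++k) { // commit the migrations
                PM* t = targets[placement[k]];
                t->ramFree -= vms[k].ram;
                t->cpuFree -= vms[k].cpu;
                t->vms.push_back(vms[k]);
            }
            victim.vms.clear();
            return true; // the victim PM can now be powered off
        }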

    Read the article

  • Ubuntu audio mysteriously stopped working (12.04)

    - by Laika
    Well, I've been a user of Ubuntu 12.04 LTS since April now, and it's been a very pleasant experience. I'm a big fan of electronic music, and I tend to have tracks playing in the background while I do things on my laptop, either on YouTube or in Clementine, my default music player. All had worked very well until now. A couple of days ago my entire PC started to lag really badly; almost everything was unusable. I opened up the System Monitor via the terminal to find a process called "pulseaudio" using nearly 1GB of RAM and over 80% of my CPU. I needed to get some important work done, so I killed the process without thinking. Once again today, pulseaudio decided to lag the hell out of my PC, and I killed it again. Nothing seemed to happen immediately, but once I opened up YouTube, all the audio on videos stuttered a lot while the videos played smoothly. I restarted Firefox to find that the audio was now not working at all, with both headphones and speakers, and with the volume up quite a bit (it's not muted, I've checked that!). A little research later, I've discovered that pulseaudio plays an important part in Ubuntu's audio. Even after restarting my PC, the audio still doesn't work in any application or with any output. The pulseaudio process refuses to start up again. So, can you help me out here? What can I do to fix my problem, and why was pulseaudio doing this in the first place?

    Read the article

  • Ubuntu 11.10 very slow compared to Windows

    - by Patrick
    I'm new to Ubuntu/Linux. Since my PC is very old and not very fast with Windows 7, I decided to give Ubuntu a try, so I downloaded and installed Ubuntu 11.10 today. When I first started it, I had a bad 800x600 resolution and it was very slow and annoying, so I installed a driver for my graphics card and now everything looks very nice (1280x1024). But I think it's still far slower than Windows 7. I tried to run in Ubuntu like a few people suggested on the forum, but if I log in I get a black screen saying something like "this video mode cannot be displayed". I get that same screen when booting Ubuntu, by the way, but after about 15 seconds it disappears and Ubuntu just starts. I also installed other drivers for my graphics card, but everything stayed the same. I noticed that, e.g., when I open Firefox or System Settings it takes about 5 seconds to open (while Windows 7 takes under 1 second to start, e.g., Chrome), and when I do this my CPU usage goes to 100% for a short time.

    Computer specs:

        Memory: 2GB RAM
        Processor: Intel Pentium 4 2.8GHz
        Graphics card: Nvidia GeForce 6800 400MHz

    I read on various forums that 11.04 works flawlessly on many PCs where 11.10 is very slow. Should I install 11.04, or could anybody help me with this problem?

    Read the article

  • How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics?

    - by PeterDC
    Background: I'm a 3D artist (as a hobby) and have recently started using Ubuntu 12.04 LTS in a dual boot with Windows 7. It's running on a fairly new 64-bit Toshiba laptop with an nVidia GeForce GT 540M GPU (graphics card). It also, however, has Intel integrated graphics (which I suspect Ubuntu has been using). So, when I render my 3D scenes to images on Windows, I am able to choose between using my CPU or my nVidia GPU (faster). From the 3D application, I can set the GPU to use either CUDA or OpenCL. In Ubuntu, there's no GPU option. After doing (too much?) research on the issues with Linux and nVidia's Optimus technology, I am slightly more enlightened, but a lot more confused. I don't care one bit about the Optimus technology, as battery life is not by any means an issue for me. Here's my question: what can I do to be able to use CUDA-utilizing programs (such as Blender) on my nVidia GPU in Ubuntu? Will I need the nVidia drivers? (I have heard they don't play nicely with Optimus setups on Linux.) Is there at least a way to use OpenCL on my GPU in Ubuntu?

    Read the article
