Search Results

Search found 5954 results on 239 pages for 'cpu cores'.

Page 125/239

  • Design a Distributed System

    - by Bonton255
    I am preparing for an interview on Distributed Systems. I have gone through a lot of text and understand the basics of the area. However, I need some examples of discussions on designing a distributed system for a given scenario. For example, if I were to design a distributed system to determine whether a number N is prime or not, what would the design of the system be, and what would be the impact of network latency, CPU performance, node failure, addition of nodes, time synchronization, etc.? If you could present your in-depth thoughts on this example, or point me to a similar discussion, that would be really helpful.
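    One way to make the example concrete (a toy sketch only, not a recommended production design): split trial division of N across a few worker nodes and aggregate their verdicts. The hostnames node1..node4 and passwordless ssh are assumptions for illustration; a real design would also dispatch the work in parallel, tolerate node failures, and bound the effect of network latency.

      #!/bin/bash
      # Toy decomposition sketch: is N prime? Split trial division across
      # hypothetical worker nodes reachable via passwordless ssh.
      N=1000003
      HOSTS=(node1 node2 node3 node4)        # made-up placeholder hostnames
      LIMIT=$(echo "sqrt($N)" | bc)
      CHUNK=$(( (LIMIT - 1) / ${#HOSTS[@]} + 1 ))
      VERDICT=prime

      for i in "${!HOSTS[@]}"; do
        LO=$(( 2 + i * CHUNK ))
        HI=$(( LO + CHUNK - 1 ))
        # Each worker scans its own divisor range and prints "composite" on a hit.
        OUT=$(ssh "${HOSTS[$i]}" "for ((d=$LO; d<=$HI; d++)); do (( $N % d == 0 )) && { echo composite; break; }; done")
        [ "$OUT" = "composite" ] && VERDICT=composite
      done
      echo "$N is $VERDICT"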

    Read the article

  • Lens showing only music after zeitgeist removal

    - by chris
    I can't get anything to show up in the Dash (lens?) other than music (no applications, no files). This began when I removed zeitgeist. I've uninstalled and reinstalled it, but it is still not working. I've also installed unity-place-files and unity-place-applications as suggested elsewhere. Among the processes that are running I don't see zeitgeist (the original reason I wiped it out was that it was sucking up CPU). Ubuntu 12.04. Thanks in advance.
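    A possible first check (a hedged suggestion, not a confirmed fix): make sure the zeitgeist packages are actually back in place and that the daemon can be restarted, since the Dash lenses rely on it for application and file results. The package and command names below are the standard Ubuntu 12.04 ones.

      # Reinstall the zeitgeist engine and the Unity lenses that depend on it
      sudo apt-get install --reinstall zeitgeist zeitgeist-core zeitgeist-datahub \
          unity-lens-applications unity-lens-files

      # Restart the daemon and check that it stays up
      zeitgeist-daemon --replace &
      pgrep -l zeitgeist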

    Read the article

  • Investigate disk writes further to find out which process writes to my SSD

    - by zuba
    I am trying to minimize disk writes to my new SSD system drive. I'm stuck with this iostat output:

    ~ > iostat -d 10 /dev/sdb
    Linux 2.6.32-44-generic (Pluto)  13.11.2012  _i686_  (2 CPU)

    Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
    sdb     8,60      212,67      119,45  21010156  11800488

    Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
    sdb     3,00        0,00       40,00         0       400

    Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
    sdb     1,70        0,00       18,40         0       184

    Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
    sdb     1,20        0,00       28,80         0       288

    Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
    sdb     2,20        0,00       32,80         0       328

    Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
    sdb     1,20        0,00       23,20         0       232

    Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
    sdb     3,40       19,20       42,40       192       424

    As I can see, there are writes to sdb. How can I find out which process is doing them? I know about iotop, but it doesn't show which filesystem is being accessed.
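    Two hedged suggestions for pinning down the writer, using only standard tools (iotop is packaged in Ubuntu; the block_dump sysfs switch is a kernel feature available on this kernel):

      # Show only processes that are actually doing I/O, with accumulated totals
      sudo iotop -o -a

      # Alternatively, have the kernel log every block write and the process doing it
      echo 1 | sudo tee /proc/sys/vm/block_dump
      dmesg | tail                                 # look for "WRITE block ... on sdb" lines
      echo 0 | sudo tee /proc/sys/vm/block_dump    # turn the logging off again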

    Read the article

  • Scrambled screen on 12.04 with Radeon HD 7670M/2GB when scrolling the page

    - by Mihkel
    I have Ubuntu 12.04 LTS 64-bit and I have installed the proprietary drivers for my Radeon HD 7670M with 2 GB of memory. But if I scroll a page or do anything like moving a window, I get a blurred (or rather scrambled) screen for a second, and if I try to take a PrtScn screenshot of it, it goes back to normal. I have tried other drivers and that does not solve my problem. And I do not want to switch to 32-bit Ubuntu, because I have 6 GB of RAM and would lose much of it. Also, if it helps, my processor is an Intel® Core™ i5-3210M CPU @ 2.50GHz × 4.

    Read the article

  • wacom bamboo connect CTL470 "no tablet detected..."

    - by LAS
    Wacom Bamboo Connect CTL470 - "no tablet detected ..." I downloaded and attempted to install the drivers and software. I am relatively new to Ubuntu; I downloaded, extracted, and ran the installer in a terminal, but could not successfully install the drivers and software for this device. All of this took up a good deal of space on the drive, and manual compilation failed. I need some help. How do I install what is needed to use this device? Or please direct me to a suitable (relative beginner) how-to. I have downloaded several packages, but the install fails in Software Center and Synaptic. System: HP a1220n, Intel Pentium 4 CPU 2.93 GHz, 32-bit, Ubuntu 11.10
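    A hedged starting point rather than a guaranteed fix: on Ubuntu 11.10 the Wacom X driver is already packaged, so installing it from the archive and then checking whether the tablet is detected is usually easier than compiling from source.

      # Install the packaged Wacom X driver instead of building it by hand
      sudo apt-get install xserver-xorg-input-wacom

      # After replugging the tablet, check that it is seen by the kernel and by X
      lsusb | grep -i wacom
      xsetwacom --list devices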

    Read the article

  • How to make a bash script run with a latency (i.e. wait 1 sec at each iteration)?

    - by user2413
    I have this bash script:

      for (( i = 1 ; i <= 160 ; i++ )); do
        qsub myccomputations"${i}".pbs
      done

    Basically, I would prefer if there were a 1-second delay between iterations. The reason is that at each iteration the script sends the program file myccomputations"${i}".pbs to a core node for solving. Solving in this instance involves the use of pseudo-random numbers, and I suspect the RNG I use (R's) uses CPU time as its seed, because as things are now I get repeating pseudo-random numbers (at a rate of approximately 1 out of 100). So how do you ask bash to do this?

      for (( i = 1 ; i <= 160 ; i++ )); do
        wait 1 sec
        qsub myccomputations"${i}".pbs
      done
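    A minimal sketch of what is being asked for, assuming the standard coreutils sleep command (a one-second pause spaces out the submissions, though it does not by itself guarantee distinct RNG seeds):

      for (( i = 1 ; i <= 160 ; i++ )); do
        qsub myccomputations"${i}".pbs
        sleep 1    # pause one second between submissions
      done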

    Read the article

  • Uncontrolled Fan and Crash

    - by RobotbeatsHuman
    I don't have the sensors needed to run lm-sensors properly. The computer will turn on, but shortly thereafter all the fans in it speed way up. It stays like this for a few minutes and then the computer shuts off. I tried resetting the BIOS. I went to try installing a BIOS update, but it won't stay on long enough for me to do that or to do a clean install. Could this be the motherboard dying? It's mainly the CPU fan that ends up going to maximum after a few minutes. I checked the PSU. It's a Dell Inspiron 580; if you need more system specs just let me know.

    Read the article

  • FAQ: Creating a new LDOM domain

    - by Owen Allen
    I got a question about creating LDOM domains: "I have a Server Pool set up, and I need to create a secondary LDom domain on a machine in the pool. When I click on the machine, though, the 'create logical domain' command is grayed out. The machine still has available CPU threads and free RAM. What's going on?" This one has an easy answer. In a Server Pool, the Create Logical Domain action is under the pool's actions, rather than the individual machine's actions. This is because the Server Pool decides where to put the new domain based on the Server Pool's placement policy. So, in this case, you need to select the Server Pool in the Assets section, and then create the new domain from there.

    Read the article

  • Using Clojure instead of Python for scalability (multi core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, ease of use aside, I shouldn't be coding in Python anymore but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does that make sense? Can't I make really efficient use of all cores using a more imperative language like Python, rather than a Lisp dialect or another functional language? It seems that all of its benefits come from using immutable data; can't I do just that in Python and get all the benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But after a while I found myself struggling too much to do some simple things. I think some things are more imperative in nature, which makes it difficult to model them in a functional way, I guess. The thing is: is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't think that using semaphores, locks or other similar concurrency mechanisms is a good alternative to Clojure's 'automatic' parallelization.

    Read the article

  • Dimming the backlight is irreversible on a Samsung Q210 notebook, what do I do?

    - by user27304
    I'm new to the community, although I have been using Ubuntu since 2010. I have a Samsung Q210 notebook. Specs:

    Intel® Core™2 Duo CPU P8400 @ 2.26GHz × 2
    4 GB RAM
    Nvidia 9200M GS (although System Information in Ubuntu doesn't report it)
    194 GB HD
    OS: Ubuntu 11.10, kernel 3.0.0-12-generic-pae

    Although Samsung seems to be infamous for problems with Ubuntu, after upgrading to Oneiric the Fn brightness buttons are finally recognized. The only problem is that after dimming the backlight a fixed number of steps (3 or 4, I dare not count now because that would mean rebooting, because I can't see anything), the display goes completely dark and using the Fn buttons to brighten the backlight does not work anymore (before reaching that threshold, going brighter after dimming works). Now what do I do? File a bug report? If not, what then? If yes, how? Not sure... I guess I should ask here first. Thanks in advance for answering.

    Read the article

  • Ubuntu won't boot and it is stuck on the loading screen

    - by Jordan March
    I had just installed it as a dual boot two days ago, and everything was fine. I was installing some programs (I think it was PlayOnLinux) and I don't think the install was 100% done when the battery died. Since then it won't boot into Ubuntu; it just stays at the loading screen. I did make separate partitions for boot, root, home and swap. Can anyone help me get it back up and running again? Even if I have to reinstall it, I just don't want to go back through installing all those apps again. I'm running Ubuntu 12.10 64-bit on an Acer Aspire 5750, Core i3 CPU, 4 GB RAM.

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques for controlling the FPS that I have seen until now use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when the rendering started, but then I have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx

    I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum:

      using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
          if( null == _ColorFrame ) return;

          BitmapToDisplay.Lock();
          _ColorFrame.CopyConvertedFrameDataToIntPtr(
              BitmapToDisplay.BackBuffer,
              Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ),
              ColorImageFormat.Bgra );
          BitmapToDisplay.AddDirtyRect(
              new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) );
          BitmapToDisplay.Unlock();
      }

    With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case.

    Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all?

    The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way.

    The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage. That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the following code:

      private byte ClipToByte( int p_ValueToClip ) {
          return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
              ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
      }

      private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) {
          if( null == e.FrameReference ) return;

          // If you do not dispose of the frame, you never get another one...
          using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
              if( null == _ColorFrame ) return;

              byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
              byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
              _ColorFrame.CopyRawFrameDataToArray( _InputImage );

              Task.Factory.StartNew( () => {
                  ParallelOptions _ParallelOptions = new ParallelOptions();
                  _ParallelOptions.MaxDegreeOfParallelism = 4;

                  Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                      // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                      int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                      int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                      int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                      int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                      byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                      byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                      byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                      _OutputImage[( _Index << 3 ) + 0] = _B;
                      _OutputImage[( _Index << 3 ) + 1] = _G;
                      _OutputImage[( _Index << 3 ) + 2] = _R;
                      _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                      _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                      _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                      _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                      _OutputImage[( _Index << 3 ) + 4] = _B;
                      _OutputImage[( _Index << 3 ) + 5] = _G;
                      _OutputImage[( _Index << 3 ) + 6] = _R;
                      _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                  } );

                  Application.Current.Dispatcher.Invoke( () => {
                      BitmapToDisplay.WritePixels(
                          new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                          _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
                  } );
              } );
          }
      }

    This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem: there is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was OK with this for cases where silky smooth video performance REALLY mattered. This code looked like this:

      private byte ClipToByte( int p_ValueToClip ) {
          return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
              ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
      }

      void CompositionTarget_Rendering( object sender, EventArgs e ) {
          using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) {
              if( null == _ColorFrame ) return;

              byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
              byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
              _ColorFrame.CopyRawFrameDataToArray( _InputImage );

              ParallelOptions _ParallelOptions = new ParallelOptions();
              _ParallelOptions.MaxDegreeOfParallelism = 4;

              Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                  // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                  int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                  int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                  int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                  int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                  byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                  byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                  byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                  _OutputImage[( _Index << 3 ) + 0] = _B;
                  _OutputImage[( _Index << 3 ) + 1] = _G;
                  _OutputImage[( _Index << 3 ) + 2] = _R;
                  _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                  _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                  _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                  _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                  _OutputImage[( _Index << 3 ) + 4] = _B;
                  _OutputImage[( _Index << 3 ) + 5] = _G;
                  _OutputImage[( _Index << 3 ) + 6] = _R;
                  _OutputImage[( _Index << 3 ) + 7] = 0xFF;
              } );

              BitmapToDisplay.WritePixels(
                  new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                  _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
          }
      }

    Read the article

  • How can I make KDE faster in Ubuntu 12.04? It's very slow

    - by Rizwan Rifan
    I installed the kubuntu-desktop package in Ubuntu 12.04 LTS, but the problem is that KDE responds very slowly. If I click an application's icon to run it, it appears after 10 seconds, and sometimes does not appear at all. It hangs all the time. The cursor is almost impossible to follow because of the lag. I have read on the Internet that Unity uses more memory and CPU than KDE, but on my PC Unity runs smoothly and KDE does not. So what should I do to make KDE as fast, responsive and smooth as Unity? My specifications are as follows:

    RAM: 1.5 GB (DDR2)
    Processor: 3 GHz dual core
    Graphics card: Intel HD Graphics with 256 MB memory

    Read the article

  • File system layout for multiple build targets

    - by Yttrill
    I am seeking some ideas for how to build and install software parameterized by several things, including target OS, target platform CPU details, debugging variant, etc. Some parts of the install are shared, such as documentation and many platform-independent files; others are not, such as 64-bit and 32-bit libraries when these are separate rather than combined in a multi-arch library. On big networked platforms one often has multiple computers sharing some large server space, so there is actually cause to have even Windows and Unix binaries on the same disk. My product has already fixed an install philosophy of $INSTALL_ROOT/genericname/version/ so that multiple versions can coexist. The question is: how do I manage the layout of all the other stuff?
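    One hypothetical layout that fits the stated $INSTALL_ROOT/genericname/version/ convention (the directory names below are illustrative, not a standard): keep shared, platform-independent material at the version root and push everything target-specific under an OS/CPU/variant-named subtree.

      $INSTALL_ROOT/genericname/1.2.3/
          share/                        # docs, headers, scripts, other platform-independent files
          linux-x86_64/release/lib/     # one subtree per OS/CPU/variant combination
          linux-x86_64/debug/lib/
          linux-i686/release/lib/
          windows-x86_64/release/bin/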

    Read the article

  • Trouble with 12.10 lag

    - by Brennan
    Well, basically, lately I have been having lag problems with 12.10. I will post my specs below, but before the update to 12.10 it said that I had Intel graphics; now it says I have Gallium. My specs:

    Memory: 3.9 GiB
    Processor: Pentium(R) Dual-Core CPU E5500 @ 2.80GHz × 2
    Graphics: Gallium 0.4 on llvmpipe (LLVM 3.2, 128 bits) (used to say Intel graphics)
    OS type: 32-bit
    Disk: 486.1 GB

    The output of the command sudo apt-cache check is this:

    E: Invalid operation check

    The output of the command sudo lspci -nnk | grep -A5 VGA is this:

    00:02.0 VGA compatible controller [0300]: Intel Corporation 4 Series Chipset Integrated Graphics Controller [8086:2e32] (rev 03)
            Subsystem: ASUSTeK Computer Inc. Device [1043:836d]
            Kernel driver in use: i915
    00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 01)
            Subsystem: ASUSTeK Computer Inc. Device [1043:8445]
            Kernel driver in use: snd_hda_intel
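    Since the lspci output shows the i915 kernel driver loaded while the desktop reports the llvmpipe software rasterizer, a reasonable (but unconfirmed) first step is to make sure the Mesa/Intel 3D userspace stack is intact; the package names below are the standard 12.10 ones.

      # Reinstall the Intel X driver and the Mesa DRI libraries
      sudo apt-get install --reinstall xserver-xorg-video-intel libgl1-mesa-dri libgl1-mesa-glx

      # After a reboot, check which renderer is in use (glxinfo is in the mesa-utils package)
      glxinfo | grep "OpenGL renderer"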

    Read the article

  • Ikoula offers 1000 new Flex'Servers dedicated virtual servers, free for one month for the TechDays

    Ikoula offers 1000 new Flex'Servers dedicated virtual servers, free for one month on the occasion of the TechDays. Ikoula had already launched a promotion on its Flex'Server offer by giving away 500 free servers. Today, the company is repeating the operation for the TechDays and relaunching its offer, which now includes new resources at a preferential price. Four configurations are available: from ½ to 4 CPUs, from 256 MB to 2 GB of RAM, and from 10 to 80 GB of disk space. For the TechDays, 1,000 Flex'Servers are being offered free for one month. After the first month, these dedicated virtual servers are billed from €5.99 (excl. tax) per month...

    Read the article

  • Critical Patch Update for October 2012 Now Available

    - by Steven Chan (Oracle Development)
    The Critical Patch Update (CPU) for October 2012 was released on October 16, 2012. Oracle strongly recommends applying the patches as soon as possible. The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. Supported products that are not listed in the "Supported Products and Components Affected" section of the advisory do not require new patches to be applied. Also, it is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches, as this is where you can find important pertinent information. The Critical Patch Update Advisory is available at the following location: Oracle Technology Network. The next four Critical Patch Update release dates are: January 15, 2013; April 16, 2013; July 16, 2013; and October 15, 2013. E-Business Suite Releases 11i and 12 reference: Oracle E-Business Suite Releases 11i and 12 Critical Patch Update Knowledge Document (October 2012) (Note 1486535.1)

    Read the article

  • Microsoft Developers Development Laptops [closed]

    - by FidEliO
    Possible Duplicate: What should I be focusing on when building a development PC? I am a Microsoft developer working on SharePoint and ASP.NET. I am trying to buy a new laptop, since the one that I have is an old one. From my point of view, Microsoft development tools are becoming more and more resource-consuming (I can't find a good reason for it, though). So I thought I would go for a Lenovo U260 i7. I do not know exactly whether it is going to meet my requirements, so that is why I wanted to ask Microsoft developers specifically about the CPU, RAM and storage specifications. Thanks in advance.

    Read the article

  • Clock drift even though NTPD running

    - by droffo
    I'm having a problem with the clock drifting on my PC. I'm running Ubuntu 10.10 on a somewhat crusty IBM e-server (1.5 GB RAM, 2.4 GHz CPU). ntpd is running (started at run level 2) and servers are defined:

    server 1.us.pool.ntp.org
    server 2.us.pool.ntp.org
    server 3.us.pool.ntp.org
    server time.nrc.ca
    server ntp1.cmc.ec.gc.ca
    server ntp2.cmc.ec.gc.ca
    server wuarchive.wustl.edu
    server clock.psu.edu

    Looking at the log file, it would seem that the ntp daemon is running, but the system clock never seems to be set. If I manually set the time from a Casio "atomic" watch, the date/time displayed by the Clock applet drifts out of sync over time. Looking at the log file (below), it would seem the ntp daemon started OK and is running, so I am totally flummoxed right now :-( Here's a copy of my ntp.log file.
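    A quick way to see whether ntpd is actually synchronizing (a hedged suggestion using the standard ntp tools): query the peer list; if the daemon is talking to its servers, one entry should be marked with a leading * and the offset column shows how far off the clock is.

      ntpq -p

      # If the peer list shows nothing reachable, check that UDP port 123 is not blocked
      # and compare against a server directly (query only, does not set the clock):
      ntpdate -q 1.us.pool.ntp.org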

    Read the article

  • Blank Processes (?) in Natty Narwhal

    - by A Hylian Human
    I've noticed that there are seemingly blank processes (no process name, no cmdline info, only an ID), which also appear to be driving my CPU crazy. My fans are going pretty much at full speed and I have no idea what to do. Restarting does not help. Whenever I try to kill the process IDs, nothing happens; it's like new blank processes are continuously being created. I am really surprised that I am able to write up this question without Firefox lagging like crazy (and trust me, it's not Firefox causing the issue, as far as I can tell).
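    A few hedged ways to get more information about a PID that shows up with no name (1234 below is a placeholder for one of the blank process IDs):

      # Sort by CPU to find the offenders, showing parent PIDs as well
      ps -eo pid,ppid,stat,pcpu,comm --sort=-pcpu | head -20

      # Inspect what a specific PID really is; kernel threads have no exe link
      sudo ls -l /proc/1234/exe /proc/1234/cwd
      sudo head -5 /proc/1234/status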

    Read the article

  • Grid based collision - How many cells?

    - by Fibericon
    The game I'm creating is a bullet hell game, so there can be quite a few objects on the screen at any given time. It probably maxes out at about 40 enemies and 200 or so bullets. That being said, I'm splitting up the playing field into a grid for my collision checking. Right now, it's only 8 cells. How many would be optimal? I'm worried that if I use too many, I'll be wasting CPU power. My main concern is processing power, to make the game run smoothly. RAM is not a big concern for me.

    Read the article

  • Ubuntu 12.10 live-usb won't boot

    - by user109175
    I own an Aleutia "Tango" low-power PC. It uses an Intel Atom CPU N2800 @ 1.86GHz processor. I know there are Linux issues with the "Cedar Trail" graphics on this hardware, but I can happily run Ubuntu 12.04 using the VESA graphics driver. I wanted to try out Ubuntu 12.10, so I created a live USB under 12.04, but I am unable to get it to boot. It gets as far as the GRUB screen, but after that the monitor shuts down. I have tried the F6 "nomodeset" option, but that doesn't make any difference. Does anyone have any knowledge of this problem? Thanks in advance.

    Read the article

  • Alternative to Firefox's Tab Groups feature for Chromium/Chrome

    - by Halkinn
    Firefox has been crashing all the time on my Lubuntu 12.04 machine since version 12. I don't know why; I am running it on a Pentium IV desktop, so it might be a CPU shortage. However, I use the same set of extensions and configuration in Firefox on Windows, and there it rarely crashes, runs smoothly, and can handle many more open tabs before any freeze actually happens. Chromium is working better so far on Lubuntu, but I really miss Firefox's Tab Groups feature, which is great for grouping and organizing tabs; it is a real boost to my productivity. Are you aware of any similar add-on for Chrome/Chromium? I've searched around the Chrome Web Store but no luck so far.

    Read the article

  • Beginner: How to Make Explorer Always Show the Full Path in Windows 8

    - by Taylor Gibb
    In older versions of Windows the title bar used to display your current location in the file system. In Windows 8 this is not the default behavior; however, you can enable it if you wish to. Display the Full Path in the Windows Explorer Title Bar: press the Windows + E keyboard combination to open Windows Explorer and then switch over to the View tab. On the right-hand side click on Options and then select Change folder and search options from the drop-down. When the Folder Options dialog opens, switch over to the View options. Here you will need to tick the "Display the full path in the title bar" check box. That's all there is to it.

    Read the article
