Search Results

Search found 5954 results on 239 pages for 'cpu cores'.


  • Firefox 4: beta 12 released, with improved Flash support and hardware acceleration

    Firefox 4: beta 12 released. Improved Flash support and hardware acceleration. Update from 28/02/11. The twelfth, and apparently final, beta of Firefox 4 was released this weekend. It fixes 7,000 bugs and improves the playback of (Flash) videos. The integration of hardware acceleration (assigning specific computation tasks to the GPU rather than the CPU) has also been reworked, all of which makes the browser more stable. Unfortunately, it does not yet include the "miracle" patches that cut its startup time in half (read elsew...

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs, with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13-byte keys and a 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690 (2.9 GHz, 2 sockets, 32 threads, 193 GB RAM) and the other 3 were Xeon E5-2680 (2.7 GHz, 2 sockets, 32 threads, 193 GB RAM). There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670 (2.93 GHz, 2 sockets, 24 threads, 96 GB RAM). Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, each maintains the same number of collisions it encounters itself, but does not increase the number of collisions on any single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user equally likely to want to modify a single global key, that application has no real hope of horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared-key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records, with the number of shards times the replication factor in parentheses. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes, and "Threads" is the number of threads per process, with the total number of threads in parentheses (90 threads per YCSB process for a total of 360 threads in the first row). The client processes were distributed across 10 client machines.

    Shards | KVS Size (records) | Clients | Threads (total) | Mixed Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
    2      | 400m (2x3)         | 4       | 90 (360)        | 302,152                            | 0.76/1/3                     | 3.08/8/35
    4      | 800m (4x3)         | 8       | 90 (720)        | 558,569                            | 0.79/1/4                     | 3.82/16/45
    8      | 1600m (8x3)        | 16      | 90 (1440)       | 1,028,868                          | 0.85/2/5                     | 4.29/21/51
    10     | 2000m (10x3)       | 20      | 90 (1800)       | 1,244,550                          | 0.88/2/6                     | 4.47/23/53
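    To illustrate the second modification, here is a minimal hypothetical sketch (in C#, matching the code elsewhere on this page; YCSB itself is Java, and this mirrors the idea rather than the actual patch) of confining the hot keys to one client process by prefixing every key with the client's id, so that added clients add load without concentrating more collisions on any single global key:

        using System;

        class ScopedKeyChooser {
            readonly int clientId;              // identifies this YCSB-style client process
            readonly Random rng = new Random();

            public ScopedKeyChooser( int clientId ) { this.clientId = clientId; }

            // Crude Zipf-like skew: draws cluster heavily around low ids, but only
            // within this client's own key space, so hotspots never span clients.
            public string NextKey( long recordsPerClient ) {
                double u = rng.NextDouble();
                long hot = (long)( recordsPerClient * Math.Pow( u, 4 ) ); // skewed toward 0
                return "user" + clientId + "_" + hot.ToString( "D12" );   // fixed width keeps key size constant
            }
        }

    Each process draws its skewed keys only from its own prefix, which is why adding processes scales the load without deepening any single hotspot.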

    Read the article

  • pwmconfig: "There are no pwm-capable sensor modules installed"

    - by Sman789
    I'm trying to reduce my fan speed with fancontrol and pwmconfig because, despite the temperatures being the same, the fans are much louder on Linux (Ubuntu GNOME 14.04) than on Windows. I've followed the instructions in the first answer here, but when running pwmconfig I get: "There are no pwm-capable sensor modules installed". I know that my system has working thermal sensors, because Psensor has no trouble telling me my CPU and GPU temperatures. I would appreciate any help in reducing my fan speed to the level it runs at under Windows (which uses the ASUS AI Suite 3 software that came with the Z87-A motherboard, if that's relevant).

    Read the article

  • Clock drift even though NTPD running

    - by droffo
    I'm having a problem with the clock drifting on my PC. I'm running Ubuntu 10.10 on a somewhat crusty IBM e-server (1.5 GB RAM, 2.4 GHz CPU). ntpd is running (started at run level 2) and servers are defined:

        server 1.us.pool.ntp.org
        server 2.us.pool.ntp.org
        server 3.us.pool.ntp.org
        server time.nrc.ca
        server ntp1.cmc.ec.gc.ca
        server ntp2.cmc.ec.gc.ca
        server wuarchive.wustl.edu
        server clock.psu.edu

    Looking at the log file, it would seem that the ntp daemon is running, but the system clock never seems to be set. If I manually set the time from a Casio "atomic" watch, the date/time displayed by the Clock applet drifts out of sync over time. Looking at the log file (below), it would seem the ntp daemon started OK and is running, so I am totally flummoxed right now :-( Here's a copy of my ntp.log file.
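    A quick way to confirm whether ntpd is actually synchronizing (a general diagnostic, not something taken from this report) is to query its peer list:

        ntpq -p

    A '*' in the first column marks the peer currently selected for synchronization; if no row ever earns one, ntpd is reachable but never syncing. Adding iburst to each server line (e.g. "server 1.us.pool.ntp.org iburst") also speeds up the initial synchronization considerably.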

    Read the article

  • Dimming the backlight is irreversible on a Samsung Q210 notebook, what do I do?

    - by user27304
    I'm new to the community, although I have been using Ubuntu since 2010. I have a Samsung Q210 notebook. Specs: Intel® Core™2 Duo CPU P8400 @ 2.26GHz × 2, 4 GB RAM, Nvidia 9200M GS (although System Information in Ubuntu doesn't detect it), 194 GB HD. OS: Ubuntu 11.10, kernel 3.0.0-12-generic-pae. Although Samsung seems to be infamous for problems with Ubuntu, after upgrading to Oneiric the Fn brightness buttons are finally recognized. The only problem is that after dimming the backlight a fixed number of steps (3 or 4; I dare not count now, because that would mean rebooting, since I can't see anything), the display goes completely dark, and using the Fn buttons to brighten the backlight no longer works (before reaching that threshold, brightening after dimming works). Now what do I do? File a bug report? If not, what then? If yes, how? Not sure... guess I should ask here first. Thanks for answering in advance.
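    One workaround often suggested for backlight trouble on laptops of this era (an assumption worth testing before filing a bug, not a confirmed fix for the Q210) is to hand backlight control to the vendor driver via a kernel parameter in /etc/default/grub:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=vendor"

    followed by sudo update-grub and a reboot. If the backlight then behaves, the eventual bug report can note that acpi_backlight=vendor works around the issue.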

    Read the article

  • Does the latest version of Ubuntu (12.04) support Unity 3D?

    - by Douglas Combs
    I just installed the latest version of Ubuntu (12.04), 64-bit. I am using a Radeon HD 7750 video card. I think I have the Catalyst driver installed correctly, but when I go to System Settings and look at the details, it shows my graphics as VESA:01. Does this mean I didn't install my driver correctly? System specs: MB: ASUS P7P55-M; CPU: Intel i5 quad core; MEM: 4GB DDR3; VC: HIS Radeon HD 7750 (1GB DDR5). Thanks for the help.
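    For what it's worth, a quick way to check whether Catalyst is actually driving the card (a sketch assuming the fglrx packages from the 12.04 repositories; details vary with a driver downloaded from AMD) is:

        fglrxinfo                                  # names AMD/ATI as the OpenGL vendor when Catalyst is active
        sudo apt-get install fglrx fglrx-amdcccle  # the packaged Catalyst driver and its control center

    If the details pane still reports VESA afterwards, the proprietary driver never loaded.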

    Read the article

  • Uncontrolled Fan and Crash

    - by RobotbeatsHuman
    I don't have the sensors needed to properly run lm-sensors. The computer will turn on, but shortly thereafter all the fans in it speed way up. It stays like this for a few minutes and then the computer shuts off. I tried resetting the BIOS. I went to try installing a BIOS update, but it won't stay on long enough for me to do that or a clean install. Could this be the motherboard dying? It's mainly the CPU fan that ends up going to max after a few minutes. I checked the PSU. It's a Dell Inspiron 580. If you need more system specs, just let me know.

    Read the article

  • Blank Processes (?) in Natty Narwhal

    - by A Hylian Human
    I've noticed that there are seemingly blank processes (no process name, no cmdline info, only an ID), which also appear to cause my CPU to run like crazy. My fans are going pretty much full speed and I have no idea what to do. Restarting does not help. Whenever I try to kill the process IDs, nothing happens; it's like new blank processes are continuously being created. I am really surprised that I am able to write up this question without Firefox lagging like crazy (and trust me, it's not Firefox causing the issue, as far as I can tell).
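    A general way to pin down what spawns such processes (standard procps usage, not taken from this report) is to list the top CPU consumers together with their parent process IDs:

        ps axo pid,ppid,stat,%cpu,comm --sort=-%cpu | head

    Even when the command name is blank, the ppid column shows which parent keeps creating them, and that parent is usually the process worth investigating or killing.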

    Read the article

  • Design a Distributed System

    - by Bonton255
    I am preparing for an interview on distributed systems. I have gone through a lot of text and understand the basics of the area. However, I need some examples of discussions on designing a distributed system given a scenario. For example, if I were to design a distributed system to calculate whether a number N is prime or not, what would the design of the system be, and what would be the impact of network latency, CPU performance, node failure, addition of nodes, time synchronization, etc.? If you could present your in-depth thoughts on this example, or point me to some similar discussion, that would be really helpful.
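    As a starting point for that example, here is a minimal sketch (in C#, matching the code elsewhere on this page; Parallel.For stands in for real nodes, so this is a hypothetical single-process model, not an actual distributed implementation) of partitioning the divisor range 2..√N across workers, which is the part the latency/failure/membership questions would then be asked about:

        using System;
        using System.Threading.Tasks;

        class DistributedPrimality {
            // Each "node" tests one interleaved slice of candidate divisors;
            // any divisor found proves N composite. In a real system each slice
            // would be a work unit handed to a remote node, and a failed node's
            // slice would simply be reassigned (the work units are idempotent).
            static bool IsPrime( long n, int nodeCount ) {
                if( n < 2 ) return false;
                long limit = (long)Math.Sqrt( n );
                bool composite = false;
                Parallel.For( 0, nodeCount, node => {
                    for( long d = 2 + node; d <= limit && !composite; d += nodeCount )
                        if( n % d == 0 ) composite = true; // in a real design: message the coordinator
                } );
                return !composite;
            }

            static void Main() {
                Console.WriteLine( IsPrime( 1000000007L, 8 ) ); // True: 10^9 + 7 is prime
            }
        }

    The sketch makes the discussion points concrete: network latency matters per work-unit handoff, a node failure costs one reassignable slice, and adding nodes just shrinks each slice.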

    Read the article

  • Loud fans despite cool system under Linux (but not Windows)

    - by Sman789
    My new desktop computer runs almost silently under Windows, but the fans seem to run at a constantly high speed under Linux. Psensor shows that the GPU (with the Nvidia drivers) is at thirty-something degrees and the CPU is about the same, so it's not just down to Linux somehow being more processor-intensive. I've read that the BIOS controls the fans under Linux, which makes sense given the high fan speeds when in the BIOS as well. It's under Windows, when the ASUS AI Suite 3 software seems to take control, that the system runs more quietly and only speeds the fans up when required. So is there a Linux app which offers similar dynamic control of the fans, or a setting hidden somewhere in the ASUS BIOS which allows the same regardless of the OS? EDIT: I've tried using lm-sensors and fancontrol, but pwmconfig tells me "There are no pwm-capable sensor modules installed". This is after the sensors-detect command does find an 'Intel digital thermal sensor', and despite the sensors working fine in apps like psensor. Help getting this to work would likely solve the problem.
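    For context, the "no pwm-capable sensor modules" message usually means only the CPU's thermal sensor is loaded and the motherboard's Super I/O chip (the part that actually drives the fan headers) is not. On ASUS boards of this era that chip is commonly a Nuvoton part, so a sketch of the usual sequence (the module name is an educated guess for this board family, not a confirmed fact) is:

        sudo sensors-detect        # answer yes to the Super I/O probes
        sudo modprobe nct6775      # Nuvoton Super I/O driver common on ASUS Z87 boards
        sudo pwmconfig             # should now find pwm-capable outputs

    Once pwmconfig finds the PWM outputs, it can generate the /etc/fancontrol configuration that provides the dynamic control being asked about.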

    Read the article

  • Microsoft Developers Development Laptops [closed]

    - by FidEliO
    Possible Duplicate: What should I be focusing on when building a development PC? I am a Microsoft developer working on SharePoint and ASP.NET. I am trying to buy a new laptop, since the one that I have is an old one. From my point of view, Microsoft development tools are becoming more and more resource-consuming (I don't find a suitable reason for it, though). So I thought I would go for a Lenovo U260 i7. I do not know exactly whether it will meet my requirements, which is why I wanted to ask Microsoft developers specifically about the specification of CPU, RAM, and storage disk. Thanks in advance.

    Read the article

  • Ikoula offers 1,000 new Flex'Servers dedicated virtual servers, free for one month for the TechDays

    Ikoula offers 1,000 new Flex'Servers dedicated virtual servers, free for one month for the TechDays. Ikoula had already launched a promotion on its Flex'Server offering with 500 free servers. Today, the company repeats the operation for the TechDays and relaunches its offer, which now includes new resources at preferential prices. Four configurations are available: from ½ to 4 CPUs, from 256 MB to 2 GB of RAM, and from 10 to 80 GB of disk space. For the TechDays, 1,000 Flex'Servers are offered free for one month. After the first month, these dedicated virtual servers are billed from €5.99 (excl. VAT) per mo...

    Read the article

  • How can I make KDE faster in Ubuntu 12.04. It's very slow

    - by Rizwan Rifan
    I installed the kubuntu-desktop package in Ubuntu 12.04 LTS, but the problem is that KDE responds very slowly. If I click on an application's icon to run it, the application appears after 10 seconds, and sometimes does not appear at all. It hangs all the time. The cursor is almost impossible to follow because of the lag. I have read on the Internet that Unity uses more memory and CPU than KDE, but on my PC Unity runs smoothly and KDE does not. So what should I do to make KDE as fast, responsive and smooth as Unity? My specifications are as follows: RAM: 1.5 GB (DDR2); Processor: 3 GHz dual core; Graphics card: Intel HD Graphics with 256 MB memory.
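    On hardware this light, the usual first lever (standard KDE 4 configuration; whether it cures this particular install is an assumption) is to turn desktop compositing off, either at runtime with Alt+Shift+F12 or persistently:

        kwriteconfig --file kwinrc --group Compositing --key Enabled false

    With compositing disabled, KWin stops routing every frame through the graphics stack, which on weak integrated graphics is often what makes KDE feel sluggish.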

    Read the article

  • What is a safe ulimit ceiling?

    - by Kaustubh P
    This is the output of ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    This is a 64-bit install, and I would like to increase the max open files limit from 1024 to something more generous, such as 5000. Will that cause any problems? Will it cause instability? Thanks.
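    For reference, the usual way to raise the limit persistently (a sketch assuming a PAM-based login; the values are just the ones asked about, and "youruser" is a placeholder) is a pair of lines in /etc/security/limits.conf:

        youruser  soft  nofile  5000
        youruser  hard  nofile  5000

    which takes effect at the next login; within a running shell, ulimit -n 5000 works immediately, but only up to the current hard limit.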

    Read the article

  • Using Clojure instead of Python for scalability (multi core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, apart from ease of use, I shouldn't be coding in Python anymore, but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does that make sense? Can't I make really efficient use of all cores using a more imperative language like Python, rather than a Lisp dialect or another functional language? It seems that all of its benefits come from using immutable data; can't I do just that in Python and have all the same benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But after a while I found myself struggling too much to do some simple things. I think some things are more imperative in their nature, which makes it difficult to model them in a functional way, I guess. The thing is: is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't think that semaphores, locks or other similar concurrency mechanisms are good alternatives to Clojure's 'automatic' parallelization.

    Read the article

  • FAQ: Creating a new LDOM domain

    - by Owen Allen
    I got a question about creating LDOM domains: "I have a Server Pool set up, and I need to create a secondary LDom domain on a machine in the pool. When I click on the machine, though, the 'create logical domain' command is grayed out. The machine still has available CPU threads and free RAM. What's going on?" This one has an easy answer. In a Server Pool, the Create Logical Domain action is under the pool's actions, rather than the individual machine's actions. This is because the Server Pool decides where to put the new domain based on the Server Pool's placement policy. So, in this case, you need to select the Server Pool in the Assets section, and then create the new domain from there.

    Read the article

  • Trouble with 12.10 lag

    - by Brennan
    Well, basically, I have lately been having lag problems with 12.10. I will post my specs below; before the update to 12.10, it said that I had Intel graphics. Now it says I have Gallium. My specs:

        Memory: 3.9 GiB
        Processor: Pentium(R) Dual-Core CPU E5500 @ 2.80GHz × 2
        Graphics: Gallium 0.4 on llvmpipe (LLVM 3.2, 128 bits) (used to say Intel graphics)
        OS type: 32-bit
        Disk: 486.1 GB

    The output of the command sudo apt-cache check is: E: Invalid operation check. The output of the command sudo lspci -nnk | grep -A5 VGA is:

        00:02.0 VGA compatible controller [0300]: Intel Corporation 4 Series Chipset Integrated Graphics Controller [8086:2e32] (rev 03)
                Subsystem: ASUSTeK Computer Inc. Device [1043:836d]
                Kernel driver in use: i915
        00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 01)
                Subsystem: ASUSTeK Computer Inc. Device [1043:8445]
                Kernel driver in use: snd_hda_intel
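    As background: "Gallium 0.4 on llvmpipe" means rendering has fallen back to software on the CPU even though the i915 kernel driver is loaded, which by itself explains the lag. A common first step (a sketch with the stock 12.10 package names; the intended command was presumably apt-get check, since apt-cache has no check operation) is:

        sudo apt-get check
        sudo apt-get install --reinstall xserver-xorg-video-intel libgl1-mesa-dri

    followed by a reboot, so the X intel driver and the Mesa DRI drivers are restored and llvmpipe is no longer the renderer.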

    Read the article

  • Java is very slow on my laptop

    - by Ryan McClure
    I have the 1.6.0_30 JRE on my 11.10 install. I have 3 GB of RAM and an Intel Core2 Duo CPU T6600 @ 2.20GHz × 2. Whenever I use Java to play a game, it runs at about 4-5 FPS. When I used Windows, I found that I could get around 40 FPS. I'm not too terribly worried about this, but are there settings that I can tweak that I don't know about? If not, why is it that the JRE can't do as much on Ubuntu as it can on Windows? Also, this may be related, but I'm not too sure: my fan runs very fast when running a Java application. Is there a correlation?
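    One setting worth knowing about (a real Java 2D switch, though whether it helps depends on the game and on working OpenGL drivers; the jar name is a placeholder): the JRE's OpenGL rendering pipeline is off by default on Linux and can be enabled per launch:

        java -Dsun.java2d.opengl=true -jar game.jar

    When the pipeline engages, Java 2D drawing is GPU-accelerated instead of software-rendered, which is one common reason the same game is dramatically faster on Windows, where the Direct3D pipeline is on by default.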

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx

    I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum:

        using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
            if( null == _ColorFrame ) return;

            BitmapToDisplay.Lock();
            _ColorFrame.CopyConvertedFrameDataToIntPtr(
                BitmapToDisplay.BackBuffer,
                Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ),
                ColorImageFormat.Bgra );
            BitmapToDisplay.AddDirtyRect(
                new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) );
            BitmapToDisplay.Unlock();
        }

    With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case.

    Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080, which produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all?

    The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater: I very quickly brought and maintained all 8 cores up to about 97% usage.

    That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the following code:

        private byte ClipToByte( int p_ValueToClip ) {
            return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
                ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
        }

        private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) {
            if( null == e.FrameReference ) return;

            // If you do not dispose of the frame, you never get another one...
            using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
                if( null == _ColorFrame ) return;

                byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
                byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
                _ColorFrame.CopyRawFrameDataToArray( _InputImage );

                Task.Factory.StartNew( () => {
                    ParallelOptions _ParallelOptions = new ParallelOptions();
                    _ParallelOptions.MaxDegreeOfParallelism = 4;

                    Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                        // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                        int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                        int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                        int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                        int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                        byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                        byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                        byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                        _OutputImage[( _Index << 3 ) + 0] = _B;
                        _OutputImage[( _Index << 3 ) + 1] = _G;
                        _OutputImage[( _Index << 3 ) + 2] = _R;
                        _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                        _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                        _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                        _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                        _OutputImage[( _Index << 3 ) + 4] = _B;
                        _OutputImage[( _Index << 3 ) + 5] = _G;
                        _OutputImage[( _Index << 3 ) + 6] = _R;
                        _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                    } );

                    Application.Current.Dispatcher.Invoke( () => {
                        BitmapToDisplay.WritePixels(
                            new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                            _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
                    } );
                } );
            }
        }

    This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem: there is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8 MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was OK with this for cases where silky smooth video performance REALLY mattered. This code looked like this:

        private byte ClipToByte( int p_ValueToClip ) {
            return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
                ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
        }

        void CompositionTarget_Rendering( object sender, EventArgs e ) {
            using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) {
                if( null == _ColorFrame ) return;

                byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
                byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
                _ColorFrame.CopyRawFrameDataToArray( _InputImage );

                ParallelOptions _ParallelOptions = new ParallelOptions();
                _ParallelOptions.MaxDegreeOfParallelism = 4;

                Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                    // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                    int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                    int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                    int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                    int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                    byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                    byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                    byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                    _OutputImage[( _Index << 3 ) + 0] = _B;
                    _OutputImage[( _Index << 3 ) + 1] = _G;
                    _OutputImage[( _Index << 3 ) + 2] = _R;
                    _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                    _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                    _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                    _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                    _OutputImage[( _Index << 3 ) + 4] = _B;
                    _OutputImage[( _Index << 3 ) + 5] = _G;
                    _OutputImage[( _Index << 3 ) + 6] = _R;
                    _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                } );

                BitmapToDisplay.WritePixels(
                    new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                    _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
            }
        }

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After I have played for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU at each rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big collection of hardware right now to do extensive tests. For all I know, the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques I have seen so far for controlling the FPS use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms about when the rendering started, but I have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.
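    To make the CPU-side timing concrete, here is a minimal sketch of the frame-limiting idea the question describes (written in C#, matching the code elsewhere on this page, with a stopwatch standing in for glutGet(GLUT_ELAPSED_TIME); OpenGL itself exposes no "set FPS" call, so the rate is bounded either by vsync/swap interval or by a loop like this):

        using System;
        using System.Diagnostics;
        using System.Threading;

        class FrameLimiter {
            static void Main() {
                const double targetMs = 1000.0 / 60.0;      // 60 FPS budget = ~16.7 ms per frame
                var clock = Stopwatch.StartNew();
                for( int frame = 0; frame < 300; frame++ ) {
                    double start = clock.Elapsed.TotalMilliseconds;
                    // ... render the frame here ...
                    double spent = clock.Elapsed.TotalMilliseconds - start;
                    if( spent < targetMs )
                        Thread.Sleep( (int)( targetMs - spent ) ); // burn the rest of the frame budget
                }
                Console.WriteLine( $"300 frames in {clock.Elapsed.TotalSeconds:F1} s" );
            }
        }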

    Read the article

  • Moving large amounts of data between shared hosts

    - by Bryan M.
    I recently acquired a client who is a photographer and was interested in moving web hosts, since his current host had threatened to throw him off due to CPU spiking. The migration went fairly easily, with about 350 MB of website and media files. Then I discovered about 60 GB of client galleries he had failed to mention. I am unable to move this much data myself, since I'm capping out at about 20 kb/s on the FTP connection. Has anyone encountered a situation where they needed to migrate this much data between cheap hosts? Should we contact the hosting companies about this (he is moving from WestHost to Media Temple)?
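    When both hosts allow SSH, the standard move (a sketch; the hostname and paths are placeholders) is a direct server-to-server pull that never touches the slow home connection:

        rsync -avz --progress user@old-host.example.com:/home/user/galleries/ /home/user/galleries/

    run from a shell on the new host; rsync can also be restarted safely after an interruption, which matters over 60 GB.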

    Read the article

  • Ubuntu 12.10 live-usb won't boot

    - by user109175
    I own an Aleutia "Tango" low-power PC. It uses an Intel Atom CPU N2800 @ 1.86GHz processor. I know there are Linux issues with the "Cedar Trail" graphics on this hardware, but I can happily run Ubuntu 12.04 using the VESA graphics driver. I wanted to try out Ubuntu 12.10, so I created a live USB under 12.04, but I am unable to get it to boot. It gets as far as the GRUB screen, but after that the monitor shuts down. I have tried the F6 "nomodeset" option, but that doesn't make any difference. Does anyone have any knowledge of this problem? Thanks in advance.

    Read the article

  • Critical Patch Update for October 2012 Now Available

    - by Steven Chan (Oracle Development)
    The Critical Patch Update (CPU) for October 2012 was released on October 16, 2012. Oracle strongly recommends applying the patches as soon as possible. The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. Supported products that are not listed in the "Supported Products and Components Affected" section of the advisory do not require new patches to be applied. Also, it is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches, as this is where you can find important pertinent information. The Critical Patch Update Advisory is available on the Oracle Technology Network. The next four Critical Patch Update release dates are:

        January 15, 2013
        April 16, 2013
        July 16, 2013
        October 15, 2013

    E-Business Suite Releases 11i and 12 reference: Oracle E-Business Suite Releases 11i and 12 Critical Patch Update Knowledge Document (October 2012) (Note 1486535.1).

    Read the article

  • Graphic issue on intel945 chipset

    - by peeyush tiwari
    I used Intrepid (8.10) and my graphics used to work fine with Compiz effects and all (at a higher resolution than 1024x768). Now I have upgraded to Precise (12.04). I use GNOME Classic (with Compiz effects) as my desktop, but the Compiz effects seem not to work; only Unity 2D works. When I ran lshw -c video it gave:

        *-display UNCLAIMED
             description: VGA compatible controller
             product: 82945G/GZ Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 02
             width: 32 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list
             configuration: latency=0
             resources: memory:fea80000-feafffff ioport:dc00(size=8) memory:e0000000-efffffff memory:fea40000-fea7ffff

    Sysinfo shows: Display resolution 1024x768 pixels; OpenGL renderer Gallium 0.4 on llvmpipe (LLVM 0x300); X11 vendor The X.Org Foundation. System Settings shows: Memory 993.3 MiB; Processor Intel® Pentium(R) 4 CPU 2.40GHz; Graphics VESA: Intel(r) 82945G Chipset Family Graphics; OS type 32-bit. glxgears output comes to around 100 fps, which used to be around 900 fps in Intrepid.

    Read the article

  • What is the ideal laptop for creative coding applications?

    - by Jason
    Hi, I am a creative coder using C++ (Cinder and openFrameworks). I am looking to upgrade from my MacBook, which slowed down to about 3 fps this morning. My project involves particle systems and fluids reacting to audio-analysis data and computer-vision data in real time. SD or HD? No biggie. I have asked many people what computer I need. Ideally, I want a MacBook Pro, but is that enough power? I've been told that I need a desktop for what I am doing, though I'd rather stay portable. I've been told that I should go PC/Linux to get the most power, but I'd rather stay on a Mac. I've been told that RAM is more of a bottleneck than processor speed. I've been told that the graphics card is more important than the CPU, and that code optimizations such as using trees over lists, proper threading, and sending tasks to the GPU make a bigger difference than the hardware! What's true? What do I need? Any suggestions are greatly appreciated.

    Read the article
