Search Results

Search found 13249 results on 530 pages for 'performance tuning'.

  • Why does Chrome video performance substantially degrade after waking from suspend in 10.10?

    - by Grant Heaslip
    Note: For some more details, some of which may not be true given what I've figured out, see this post. When I first boot my computer, video performance (both native H.264 HTML5 in YouTube and Vimeo, and in Flash) in Chrome is perfectly reasonable. CPU usage stays low, everything works correctly, and the video is silky-smooth. But for whatever reason, if I suspend my computer then wake it up, video performance plummets. Full-screen HTML5 video is choppy at best, and full-screen Flash video basically brings my computer to its knees (I'm talking less than a frame a second, and a 5-second lead time to leave full-screen after hitting Esc). Restarting Chrome doesn't fix this — I need to completely restart my machine before performance goes back to normal. Video performance in other applications, such as Movie Player, doesn't seem to be affected at all by the suspend cycle — it's only Chrome. I'm using a Lenovo X201 with an Intel GMA HD graphics chipset and Intel components all around (I don't need any proprietary drivers). This didn't happen in 10.04, and I haven't changed anything that I think would have caused this to happen. It's possible that a Chrome release could have caused this, but it seems less likely than a regression between 10.04 and 10.10. Any ideas? EDIT: In response to Georg's comment, logging in and out doesn't fix it. Restarting Compiz or switching to Metacity (at least by using "compiz/metacity --replace & disown" — am I doing it right?) doesn't help (actually, it seemed to help somewhat with Flash once, but I haven't been able to reproduce this). I'm not sure about GDM — when I use "sudo restart gdm" I get kicked back to the Linux shell (?), which I have no idea how to get out of. Also, I want to make very clear that this isn't just a case of Flash sucking (it does, but that's beside the point). I'm seeing the same general problem with HTML5 videos, and Flash is performing better on my Nexus One than it does on my Core i5 laptop. There's something screwy going on with Chrome and/or 10.10.

    Read the article

  • Performance Tuning with Traces

    - by Tara Kizer
    This past Saturday, I presented "Performance Tuning with Traces" at SQL Saturday #47 in Phoenix, Arizona.  You can download my slide deck and supporting files here. This is the same presentation that I did in September at SQL Saturday #55 in San Diego; however, this time I focused less on my custom server-side trace tool and more on the steps that I take to troubleshoot a production performance problem, which often include server-side tracing.  If any of my blog readers attended the presentation, I'd love to hear your feedback.  I'm specifically interested in hearing constructive criticism.  Speaking in front of people is not something that comes naturally to me.  I plan on presenting in the future, so feedback on how I can do a better job would be very helpful.  My number one problem is I talk too fast!

    Read the article

  • JavaOne 2012 LAD Session: The Future of JVM Performance Tuning

    - by Ricardo Ferreira
    Hi folks. This year, alongside Oracle OpenWorld Latin America, another edition of JavaOne Latin America took place, the most important Java event for the developer community. I would like to share with you the slides that I used in my session. The session was "The Future of JVM Performance Tuning", and the idea was to share some knowledge about the performance enhancements that Oracle has implemented in HotSpot, especially those related to GC ("Garbage Collection") and SDP ("Sockets Direct Protocol"). I hope you enjoy the content :)
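    As a rough illustration of the kind of tuning the session touches on (the flags are real HotSpot/JDK 7 options, but the application name and config path below are placeholders, not details from the talk), enabling the G1 collector and pointing the JVM at an SDP configuration file looks roughly like this:

        # Hypothetical launch command: G1 garbage collector plus SDP support
        # (SDP requires an InfiniBand-capable platform and JDK 7 or later)
        java -XX:+UseG1GC \
             -Dcom.sun.sdp.conf=/etc/myapp/sdp.conf \
             -Djava.net.preferIPv4Stack=true \
             -jar myapp.jar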

    Read the article

  • performance wise htaccess

    - by purpler
    Here's my htaccess template; I wonder if anything could be added to increase website performance.

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
        BrowserMatch MSIE ie
        Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
        </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
        Header append Vary User-Agent
        </IfModule>

        #Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
        ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
        ExpiresDefault A31622400
        Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
        <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" >
        SetOutputFilter DEFLATE
        </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
        Order Allow,Deny
        Deny from all
        </FilesMatch>

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately, none of these resources are unlimited and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it may affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and later releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies getting the proper number of threads and utilizing them. A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that allows you to manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline for or prioritize a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. A default work manager is provided and should be sufficient for most applications. However, there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. However, a wrong work manager configuration can lead to a performance penalty when you were expecting an improvement.
    We can define/configure work managers at:
    •    Domain Level: config.xml
    •    Application Level: weblogic-application.xml
    •    Component Level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application use weblogic.xml)
    We can use the following predefined rules/constraints to manage the work:
    •    Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    •    Response Time Request Class: Specifies a response time goal in milliseconds.
    •    Context Request Class: Assigns request classes to requests based on context information.
    •    Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    •    Max Threads Constraint: Limits the number of concurrent threads executing requests.
    •    Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.
    Let's create a work manager for our application for a long-running piece of work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones where your long-running application is deployed) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck and allow them to finish.
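    For illustration, an application-level work manager of the kind described might be declared in weblogic-application.xml roughly as in the sketch below. The constraint name, the thread count and the exact schema/namespace are assumptions that depend on your WebLogic release; only the work manager name comes from the article.

        <!-- Hypothetical sketch: an application-scoped work manager with a
             max-threads constraint and stuck-thread detection ignored -->
        <weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
            <work-manager>
                <name>MyWorkManager</name>
                <max-threads-constraint>
                    <name>MyMaxThreadsConstraint</name>
                    <count>10</count>
                </max-threads-constraint>
                <ignore-stuck-threads>true</ignore-stuck-threads>
            </work-manager>
        </weblogic-application>

    A web application would then reference the work manager by name, typically through the wl-dispatch-policy element in its weblogic.xml.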

    Read the article

  • Improving terminal server performance for a specific app

    - by Matt
    We have a Windows 2003 terminal server running 2X application load balancing that is hosting a client's application accessed by around 50 users. Each user has their own database. The database is a file-based database. The application is developed in Delphi, so I think the database may be BDE based. As you can imagine, there is probably quite a lot of disk I/O. Here are some of the perfmon counters: Logged in users (average) 20 - 25; CPU utilization (average) 80 - 100%; Disk queue length (average) 1.6; % Disk time (average) 111; Page faults/sec (average) 1400. The application takes on average about a minute to load up. As usual, the budget is tight. Are there basic Windows performance tuning tips that people can recommend to improve things before we fork out on more RAM etc.? The server is a 2.8GHz Xeon with 3GB of RAM.

    Read the article

  • Getting Optimal Performance from Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Performance tuning and optimization in E-Business Suite environments can involve many different components and diagnostic tools.  Samer Barakat, Senior Architect in our Applications Performance group, held an OpenWorld 2013 session that covered:
    •    Performance triage, analysis and diagnostic tools
    •    Optimizing the E-Business Suite application tier, including Concurrent Manager
    •    Optimizing the E-Business Suite database tier
    •    Optimizing the E-Business Suite on Real Application Clusters (RAC)
    •    E-Business Suite on engineered systems, including Exadata and Exalogic
    •    Optimizing E-Business Suite data management, including archiving and purging
    The Applications Performance group works with the world's largest E-Business Suite customers to isolate and resolve performance bottlenecks. This team has helped tune the E-Business Suite environments of some of the world's largest companies to handle staggering amounts of transactional volume in multi-terabyte databases.  This group also publishes our official Oracle Apps benchmarks, white papers, and performance metrics. This is an essential set of tips and techniques that all EBS sysadmins and DBAs can use to improve the performance of their environments: Getting Optimal Performance from Oracle E-Business Suite (PDF, 1.7 MB) OpenWorld 2013 presentations are only available for approximately six months -- until ~March 2014.  Download this one while it's still available. Related Articles: E-Business Suite Technology Sessions at OpenWorld 2013 OAUG/Collaborate Recap: Best Practices for E-Business Suite Performance Tuning

    Read the article

  • Bad disk performance on HP DL360 with Smart Array P400i RAID controller

    - by sarge
    I have an HP DL360 server with 4x 146GB SAS disks and a Smart Array P400i RAID controller with 256MB cache. The disks are in RAID 5 (3 disks + 1 hot spare). The server is running VMware ESX 3i. The disk write performance is really bad. Here are some numbers: ns1:~# hdparm -tT /dev/sda /dev/sda: Timing cached reads: 3364 MB in 2.00 seconds = 1685.69 MB/sec Timing buffered disk reads: 18 MB in 3.79 seconds = 4.75 MB/sec ns1:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync" 125000+0 records in 125000+0 records out 1024000000 bytes (1.0 GB) copied, 282.307 s, 3.6 MB/s real 4m52.003s user 0m2.160s sys 3m10.796s Compared to another server, those numbers are terrible: Dell R200, 2x 500GB SATA disks, PERC RAID controller (disks are mirrored). web4:~# hdparm -tT /dev/sda /dev/sda: Timing cached reads: 6584 MB in 2.00 seconds = 3297.79 MB/sec Timing buffered disk reads: 316 MB in 3.02 seconds = 104.79 MB/sec web4:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync" 125000+0 records in 125000+0 records out 1024000000 bytes (1.0 GB) copied, 35.2919 s, 29.0 MB/s real 0m36.570s user 0m0.476s sys 0m32.298s The server isn't very loaded and the VMware Infrastructure Client performance monitor is showing 550KBps average read and 1208KBps average write for the last 30 minutes (highest write rate: 6.6MBps). This has been a problem from the start. Any ideas?

    Read the article

  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of which you may have noticed yourself) that sometimes complete in different time frames:
    •    POST
    •    OS installation
    •    Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
    •    Software installation or other tasks (such as benchmarks) within a general purpose OS (Windows, Linux, etc)
    I can imagine this is caused by the fact that a machine is built from many components that have to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS complexity and multiple concurrently running processes have some additional effect as well. However, I'm wondering if this hardware imperfection and overhead is really high enough to be humanly noticeable? Maybe there are other factors that are as influential or even more so? So, in short - why? To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.

    Read the article

  • Improving Performance of RDP Over LAN

    - by Jared Brown
    Architecture: A deployment of 6 new HP thin clients (Windows XP Embedded) with TCP/IP access to several new HP servers (Windows 2003 Server). Each thin client is connected over fiber optic to a Gigabit Cisco switch, which the servers are connected to. There are 10/100 Ethernet to fiber converter boxes on each end of the fiber cables. Problem: Noticeable lag over RDP while using the Unigraphics CAD package. 3D models take .5 to 1 second to respond to mouse actions. Other Details: Network throughput on each thin client's RDP session is 7288 kbps. RDP connection settings - color setting: 15k, all themes, etc. turned off. Local and remote system performance stats are well within norms (CPU, Memory, and Network). Question: Are there newer versions of terminal services or RDP I can use on my existing OSes? Are there compression algorithms, etc. that are well suited for a high-bandwidth LAN? Are there valid alternatives that will yield higher performance (i.e. UltraVNC with drivers installed)? Are there TCP/IP tuning options I can exploit?

    Read the article

  • Low CPU performance with low usage and clock - Windows 8.1

    - by Daniele
    I recently deleted everything from my PC and reinstalled Windows 8.1 from scratch. When I first booted into Windows everything was extremely slow, though the CPU usage was very low (about 1%). After installing some drivers the problem seemed to be solved and I was able to use my PC normally. Today I installed a game and I noticed a strange behavior: the game was playable, but the performance worsened more and more over time. This is the situation BEFORE opening the game (normal): This is AFTER some minutes inside the game (low CPU usage and clock): Some information about my system: PC: Sony Vaio S13 (SVS13A1C5E) OS: Windows 8.1 CPU: Intel Core i7-3520M 2.90GHz GPU(1): Intel HD Graphics 4000 GPU(2): NVIDIA GeForce GT 640M LE I tried searching for new drivers and other solutions but nothing worked, and I don't know what the cause is. I did not check the temperatures, but the fans are not running fast and the PC does not look overheated. Update: Max CPU Temp: 66°C, Max GPU Temp: 61°C. The strange thing is that the GPU load is 99% (GPU-Z) and the fan is almost silent. Update 2: I had trouble with the Sony Vaio software; I can't get the FN keys and the STAMINA/SPEED switch to work (it is a physical switch to enable/disable the Nvidia card and change the power profile). I'm saying this because I remember that before reinstalling Windows there was an option in the Vaio Control Center (now it is not there anymore) that allowed me to choose from something like "priority to performance (ventilation)" or "priority to silence". The current behavior looks like "priority to silence", but I can't get the STAMINA/SPEED switch to work and so I don't see similar options in the Vaio Control Center. I don't know if the problem is related to this.

    Read the article

  • Linux Kernel Packet Forwarding Performance

    - by Bob Somers
    I've been using a Linux box as a router for some time now. Nothing too fancy, just enabling forwarding in the kernel, turning on masquerading, and setting up iptables to poke a few holes in the firewall. Recently a friend of mine pointed out a performance problem. Single TCP connections seem to experience very poor performance. You have to open multiple parallel TCP connections to get decent speed. For example, I have a 10 Mbit internet connection. When I download a file from a known-fast source using something like the DownThemAll! extension for Firefox (which opens multiple parallel TCP connections) I can get it to max out my downstream bandwidth at around 1 MB/s. However, when I download the same file using the built-in download manager in Firefox (uses only a single TCP connection) it starts fast and the speed tanks until it tops out around 100 KB/s to 350 KB/s. I've checked the internal network and it doesn't seem to have any problems. Everything goes through a 100 Mbit switch. I've also run iperf both internally (from the router to my desktop) and externally (from my desktop to a Linux box I own out on the net) and haven't seen any problems. It tops out around 1 MB/s like it should. Speedtest.net also reports 10 Mbits speeds. The load on the Linux machine is around 0.00, 0.00, 0.00 all the time, and it's got plenty of free RAM. It's an older laptop with a Pentium M 1.6 GHz processor and 1 GB of RAM. The internal network is connected to the built in Intel NIC and the cable modem is connected to a Netgear FA511 32-bit PCMCIA network card. I think the problem is with the packet forwarding in the router, but I honestly am not sure where the problem could be. Is there anything that would substantially slow down a single TCP stream?
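    For reference, the router setup described above boils down to something like the following (interface names are placeholders; the actual rules on the box may of course differ):

        # Hypothetical recreation of the described setup: forwarding enabled,
        # outbound traffic masqueraded, established replies allowed back in
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT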

    Read the article

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access. I've got FTP access so I can upload PHP scripts, but I can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also unfortunately a Windows server (and we've been a Linux shop for over a decade now). So, if I want to evaluate MySQL & disk speed performance through PHP on a generic server, what is the best way to do this? There are already tools like: https://github.com/raphaelm/php-benchmark or https://github.com/InfinitySoft/php-benchmark But I'm surprised there isn't something that someone has already set up & configured to just run through and do some basic testing of a server's responsiveness. Every time we evaluate a new server environment it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before Github, when there was a handy place to post scraps of code like this. Originally posted in http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php but it was recommended that I re-post it here.

    Read the article

  • IIS High use & Server Performance issues

    - by HaydnWVN
    Have an SBS2011 running Exchange, a database app and a few other things serving 5 users (3 low use, 1 high). The server was never specced for the database app so it isn't as powerful as I'd like... Only 12GB RAM. We have increasingly found performance problems with this server; last week it was so bad I couldn't even connect remotely. To free up some available RAM I have (over the past month or so): Restricted the Exchange Message Store to 1GB with (so far) no ill effects. Restricted the SQL databases (including SBSMonitoring and Sharepoint/##SSEE (which isn't used)). Now I am finding that IIS worker threads are using up the available memory and I have (so far) been unable to track down much useful information about restricting them. This server is not 'serving' anything web-based apart from OWA, which I am finding people using because Outlook is so slow (again related to the server's performance). I am aware that Exchange on SBS2011 is designed to use up available resources (and concede when other applications request them). But it is not doing so (or anywhere near fast enough) for our needs. Opening the database application (using Postgres) takes 5+ minutes from client machines and regularly times out or crashes due to this. After a reboot (before the SQL/Exchange/IIS databases are very large/totally cached) we get the performance we need and expect. Previously a reboot once a month was enough... Then once a week... Now they have taken to rebooting it almost daily!

    Read the article

  • Very slow write performance on Debian 6.0 (AMD64) with DMCRYPT/LVM/RAID1

    - by jdelic
    I'm seeing very strange performance characteristics on one of my servers. This server is running a simple two-disk software-RAID1 setup with LVM spanning /dev/md0. One of the logical volumes, /dev/vg0/secure, is encrypted using dmcrypt with LUKS and mounted with the sync and noatime flags. Writing to that volume is incredibly slow at 1.8 MB/s and the CPU usage stays near 0%. There are 8 crypto/1-8 processes running (it's an Intel quad-core CPU). I hope that someone on serverfault has seen this before :-(. uname -a 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux Interestingly, when I read from the device I get good performance numbers: reading without encryption: $ dd if=/dev/vg0/secure of=/dev/null bs=64k count=100000 100000+0 records in 100000+0 records out 6553600000 bytes (6.6 GB) copied, 68.8951 s, 95.1 MB/s reading with encryption: $ dd if=/dev/mapper/secure of=/dev/null bs=64k count=100000 100000+0 records in 100000+0 records out 6553600000 bytes (6.6 GB) copied, 69.7116 s, 94.0 MB/s However, when I try to write to the device: $ dd if=/dev/zero of=./test bs=64k 8809+0 records in 8809+0 records out 577306624 bytes (577 MB) copied, 321.861 s, 1.8 MB/s Also, when I read I see CPU usage; when I write, the CPU stays at almost 0% usage. Here is the output of cryptsetup luksDump: LUKS header information for /dev/vg0/secure Version: 1 Cipher name: aes Cipher mode: cbc-essiv:sha256 Hash spec: sha1 Payload offset: 2056 MK bits: 256 MK digest: dd 62 b9 a5 bf 6c ec 23 36 22 92 4c 39 f8 d6 5d c1 3a b7 37 MK salt: cc 2e b3 d9 fb e3 86 a1 bb ab eb 9d 65 df b3 dd d9 6b f4 49 de 8f 85 7d 3b 1c 90 83 5d b2 87 e2 MK iterations: 44500 UUID: a7c9af61-d9f0-4d3f-b422-dddf16250c33 Key Slot 0: ENABLED Iterations: 178282 Salt: 60 24 cb be 5c 51 9f b4 85 64 3d f8 07 22 54 d4 1a 5f 4c bc 4b 82 76 48 d8 a2 d2 6a ee 13 d7 5d Key material offset: 8 AF stripes: 4000 Key Slot 1: DISABLED Key Slot 2: DISABLED Key Slot 3: DISABLED Key Slot 4: DISABLED Key Slot 5: DISABLED Key Slot 6: DISABLED Key Slot 7: DISABLED
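    For context, a volume like the one described is typically created and mounted with commands along these lines. This is a hedged sketch of the setup only, not the poster's exact commands; the filesystem type and mount point are assumptions.

        # Hypothetical sketch of the described stack: LUKS on an LVM volume,
        # mounted synchronously with atime updates disabled
        cryptsetup luksFormat /dev/vg0/secure
        cryptsetup luksOpen /dev/vg0/secure secure
        mkfs.ext3 /dev/mapper/secure
        mount -o sync,noatime /dev/mapper/secure /mnt/secure

    Note that the sync mount option shown here forces every write through synchronously, which is one variable worth keeping in mind when comparing the read and write dd figures above.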

    Read the article

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double-duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into one which is a dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit) and a third, which is the only current VM (already VirtualBox), which is the thin-client host. My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have as compared to using shared folders from the VM host and basically just have the whole drive served as a shared folder to the VM which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario and I certainly wouldn't want a drive to be filled with just a single file which is 1.5 TB (disk image). To add understanding of context, but not to get additional advice, I want to virtualize these machines because I intend to regularly use the snapshot capabilities of VirtualBox for the system disks (which will be virtual drives) of the VMs and I have some physical space/power needs to address (as I mentioned, this is at home).
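    For what it's worth, the raw-disk option being weighed here is usually wired up with VBoxManage along these lines. The VM name, controller name, device and paths below are placeholders for illustration, not details from the question.

        # Hypothetical sketch: expose a physical disk to the guest as a raw VMDK...
        VBoxManage internalcommands createrawvmdk \
            -filename /home/vbox/disks/data-raw.vmdk -rawdisk /dev/sdb
        VBoxManage storageattach "fileserver" --storagectl "SATA" \
            --port 1 --device 0 --type hdd --medium /home/vbox/disks/data-raw.vmdk
        # ...versus serving the same data to the guest as a shared folder
        VBoxManage sharedfolder add "fileserver" --name data --hostpath /srv/data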

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMWare ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this server, which seems to have caused it to run really slowly without any obvious resource issues. Examples of the slow performance include: The time taken to bring up the password prompt over ssh takes a few seconds when it was previously instantaneous. The time taken to unzip a zip file which was previously a few seconds now takes around 30 seconds. The time taken to compile vmware tools has increased by a similar factor. Neither the VMWare console nor the monitoring commands report any issues with high CPU or memory usage, but something is obviously slowing the server down somehow. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom Edit As per your questions I’ve looked at some of the performance indicators on both the VM host and the VM guest you indicated. Firstly I tried reserving the full amount of memory (3gb) for this VM – no other machines on this server have any memory reservation. The swap in rate and swap out rate for the VM host and guest are now both zero. Balloon memory on the guest is zero and on the host is 3.5gb (total memory on the host is 12gb). The swap rate for the guest is also zero. Swap used by the host is 200mb on average. Compression and decompression rates for the host and guest are zero. Command aborts for the host are zero. Read latency is very low – maximum 10ms, average 0.8ms. Write latency is higher – a few spikes to 170ms but mostly around 25ms – is this bad? Queue command latency is zero. Physical disk read latency averages 5ms but is often 10ms. Physical disk write latency averages 15ms but is often 20ms. I hope this helps - let me know if you need any more information.

    Read the article

  • graphics performance better on battery?

    - by Scott Beeson
    Anyone have any idea why my laptop would perform (considerably) better while on battery than while plugged in? It's a Dell Latitude E6420 with Windows 8 Pro. I tried mirroring all the settings in the selected power plan from "On battery" to "Plugged In" and that didn't help. I then just restored the defaults for all power plans (balanced and high performance). I'm still seeing the same results. The best example where it is most noticeable (don't laugh) is Sim City Social in Chrome. I'm probably seeing a performance increase of 5x on battery versus plugged in. This is easily reproducible too. I'm very confused. Could it be caused by dust? The laptop isn't that old and there is no visible dust. I'm not going to take it apart to check the insides as it's a corporate laptop. Could it be overheating? Battery Sim City Social: 68 degrees max Civ V: 77 degrees max Charger Sim City Social: 68 Civ V: did not test See answer below... I'm retarded

    Read the article

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in performance metrics when I run Apache Bench (ab) against my local machine versus production hosted on an Amazon medium instance. The same number of concurrent requests (5) and the same total number of requests (111) were run against both. Amazon has better memory than my local machine, but there are 2 CPUs in my local machine vs 1 CPU in m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KBps. How can I improve the performance? I do not know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost: Server Hostname: localhost Server Port: 9999 Document Path: / Document Length: 7631 bytes Concurrency Level: 5 Time taken for tests: 1.424 seconds Complete requests: 111 Failed requests: 102 (Connect: 0, Receive: 0, Length: 102, Exceptions: 0) Write errors: 0 Total transferred: 860808 bytes HTML transferred: 847155 bytes Requests per second: 77.95 [#/sec] (mean) Time per request: 64.148 [ms] (mean) Time per request: 12.830 [ms] (mean, across all concurrent requests) Transfer rate: 590.30 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.5 0 1 Processing: 14 63 99.9 43 562 Waiting: 14 60 96.7 39 560 Total: 14 63 99.9 43 563 And this is production: Document Path: / Document Length: 7783 bytes Concurrency Level: 5 Time taken for tests: 33.883 seconds Complete requests: 111 Failed requests: 0 Write errors: 0 Total transferred: 877566 bytes HTML transferred: 863913 bytes Requests per second: 3.28 [#/sec] (mean) Time per request: 1526.258 [ms] (mean) Time per request: 305.252 [ms] (mean, across all concurrent requests) Transfer rate: 25.29 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 290 297 14.0 293 413 Processing: 897 1178 63.4 1176 1391 Waiting: 296 606 135.6 588 1171 Total: 1191 1475 66.0 1471 1684
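    For reference, figures like these come from an ab invocation along the lines of the sketch below; the URL is a placeholder, while 111 requests at concurrency 5 matches the numbers quoted.

        # Hypothetical benchmark command matching the quoted settings:
        # 111 total requests, 5 at a time
        ab -n 111 -c 5 http://localhost:9999/

    In ab's connection-time table, Connect is the time to establish the TCP connection, Waiting is the time from sending the request to receiving the first byte of the response, Processing is the whole time from connection established to last byte received, and Total is Connect plus Processing.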

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method / formula, etc that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar it isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic.
    Assumptions: I have 'some' initial number of files. This number would be arbitrary but large. Say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor.
    Put another way: As time progresses this store will grow. I want to have the best balance of current performance and performance as my needs increase. Say I double or triple my storage. I need to be able to address both current needs and projected future growth. I need to both plan ahead and not sacrifice too much of current performance.
    What I've come up with: I'm already thinking about using a hash split every so many characters to split things out across multiple directories and keeping the trees even, very similar to what is outlined in the comments on the question above. It also avoids duplicate files, which would be critical over time. I'm sure that the initial folder structure would be different based on what I've outlined, and depending on the initial scale. As far as I can figure there isn't a one size fits all solution here. It would be horrendously time intensive to work something out experimentally.
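    As a sketch of the hash-split idea only (the store path, hash choice and two-character split width are assumptions, not a recommendation from the question, and the shell commands are just a platform-agnostic illustration of the scheme):

        # Hypothetical sketch: derive a 2-level directory from a hash of the file name,
        # giving 256 x 256 = 65,536 leaf folders for an even spread
        name="invoice-2012-08-17.pdf"
        h=$(printf '%s' "$name" | md5sum | cut -c1-4)
        dir="/store/${h:0:2}/${h:2:2}"
        mkdir -p "$dir" && mv "$name" "$dir/"

    Widening the split (say three characters per level) multiplies the number of leaf folders and keeps per-folder file counts lower as the store grows, at the cost of a deeper tree.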

    Read the article

  • CloudFlare dashboards empty, or performance issues

    - by Katafalkas
    I wanted to test CloudFlare performance, so I set my image gallery domain on it and started testing. I added Page Rules for caching and chose Security: Essentially Off. I checked NS check tools and they say that my domain name is propagated with CloudFlare. For testing purposes I created a link that loads 200 images from that server, and was using the loads.in website to determine how much faster it is. After trying a few regions, I noticed that there was no improvement in loading speed. So I looked at the dashboards, and they were empty. I am not sure if I am doing something wrong, or made some error in my setup, or if it takes a few days to start caching or working properly, but at the moment - after a day of testing - the dashboards are empty. Also the NS check tools say that all name servers are propagated to CloudFlare and working fine. So I assume I got bad performance because it is simply not working. I sent a letter to the CloudFlare support team, but did not get any straight answer. So essentially my question is: does anyone have any experience with CloudFlare? How long does it take for it to start caching static content to the CDN? Or is there simply something I am doing wrong?
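    For reference, the nameserver propagation mentioned above can also be spot-checked from a shell; the domain below is a placeholder, and with the setup working the answer should list CloudFlare nameservers.

        # Hypothetical check of the authoritative nameservers for the domain
        dig NS example-gallery.com +short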

    Read the article
