Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 101/572 | < Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful: writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display); when I try to cancel the operation with Control+C it hangs for half a minute before it is really cancelled. The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel the values only drop by about 10 MB/s, so I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details: Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux, mdadm - v2.6.7.1. Does anyone have a suggestion on how to debug this further?
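
    For reference, mke2fs can be told about the RAID geometry so it aligns its metadata to the stripes; a minimal sketch, assuming the array is /dev/md0 with a 64 KiB chunk and a 4 KiB block size (check mke2fs(8) for the exact extended-option names on your version):

        # stride       = chunk / block        = 64 KiB / 4 KiB = 16
        # stripe-width = stride * data disks  = 16 * (4 - 1)   = 48
        mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md0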

    Read the article

  • How important is dual-gigabit lan for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.
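
    For what it's worth, a rough sketch of the Ubuntu NIC bonding mentioned above, assuming the ifenslave package is installed and the two ports are eth0/eth1 (addresses and mode are placeholders; check /usr/share/doc/ifenslave for the exact stanza your release expects):

        # /etc/network/interfaces -- hypothetical bond0 stanza
        auto bond0
        iface bond0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            bond-mode balance-alb    # or 802.3ad if the switch supports LACP
            bond-miimon 100
            bond-slaves eth0 eth1

    Note that most bonding modes won't speed up a single client's transfer; the benefit shows up when several clients hit the NAS at once.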

    Read the article

  • Should I partition my main table with 2 millions rows?

    - by domribaut
    Hi, I am a developer and need some DBA advice. We are starting to get performance problems with an MSSQL 2005 database. The visible effect of the incidents is mainly CPU hogging on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of the issues is surely in some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure. The database is about 60 GB in one file. The main table (Order) has 2.1 million rows and 215 columns (none of them huge). We have an integer as the PK, so it should be OK to define a partition function. Will we gain anything with partitioning? Will partitioned indexes buy us something? Here are some more facts about the database and the table:

        database_name  database_size  unallocated space
        My_base        57173.06 MB    79.74 MB

        reserved         data             index_size      unused
        29 444 808 KB    26 577 320 KB    2 845 232 KB    22 256 KB

        name   rows       reserved        data            index_size      unused
        Order  2 097 626  4 403 832 KB    2 756 064 KB    1 646 080 KB    1688 KB

    Thanks for any advice. Dom

    Read the article

  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years and enjoying perfectly adequate performance for my testing/development/r&d environments. I'm a software developer so my hardware knowledge is basic however I built the rig using: •Gigabyte GA-X58A-UD3R Intel X58 (Socket 1366) DDR3 Motherboard •Intel Core i7 960 3.20GHz (Bloomfield) (Socket LGA1366) •24GB triple channel RAM The host OS is running on an OCZ SSD and all the VMs are running on a 2TB Marvell SATA3 RAID 0 array consisting of 2 Western Digital Caviar Black 7,200rpm drives. I have tested the speed of the 2TB drive and appear to be getting less than 3Mbs but it can adequately run a 4 VM farm including a DC, (SQL) database and IIS application servers. I recently upgraded the SSD on which the host runs to a 256GB OCZ Vertex 4 and took the opportunity to upgrade to Windows Server 2012 and installed the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (and converted it to .vhdx) plus I have tried creating a brand new Windows Server 2008 R2 VM but both are running extremely slowly and I can see nothing obvious using the host and guest Task Manager/Resource Monitor tools. In both cases the VM has 8GB RAM (fixed), 4 CPUs, fixed size HD (not expanding) and is using an external virtual network running on a separate NIC to the host. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check/configure and my next option will be to re-install the host OS but before I do I would very much appreciate any advice from any experts out there. Thanks

    Read the article

  • How to find out what is causing a slow down of the application on this server?

    - by Jan P.
    This is not the typical Server Fault question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks. Situation: We have this web application that uses Zend Framework, so it runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching. The application has a very unique usage and load pattern. It is a mobile web application where every full hour a cronjob looks through the database for users that have some information waiting or an action to do, and sends this information to an (external) notification server, which pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens. Problem: In the last few weeks usage of the application really started to grow. In the last few days we encountered very high load and a doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests, it just gets slower and slower and often takes 20 minutes to recover - until the same thing starts again at the full hour. We have extensive monitoring in place (New Relic, collectd) but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in: can you help me figure out what's wrong and maybe how to fix it? Additional information: The server is a 16 core Intel Xeon (8 cores with hyperthreading, I think) with 12GB RAM running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is version 5.3.2-1ubuntu4.11. If any configuration information would help analyze the problem, just comment and I will add it. Graphs: info (phpinfo(), apc status, memcache status); collectd (Processes, CPU, Apache, Load, MySQL, Vmem, Disk); New Relic (Application performance, Server overview, Processes, Network, Disks). (Sorry the graphs are GIFs and not the same time period, but I think the most important info is in there.)
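
    One low-tech way to catch the hourly spike is to snapshot system and MySQL state from cron just before and after the full hour; a rough sketch (the script path, log location and credentials file are made-up placeholders):

        #!/bin/bash
        # /usr/local/bin/hourly-snapshot.sh -- run from cron at 59,0,1,2 * * * *
        out=/var/log/perf-snapshots/$(date +%F-%H%M).log
        mkdir -p "$(dirname "$out")"
        {
            date
            uptime
            vmstat 1 5                       # run queue, swapping, CPU split
            iostat -x 1 5                    # per-device utilisation and await
            mysqladmin --defaults-file=/root/.my.cnf processlist status
            ps aux --sort=-%cpu | head -20   # biggest CPU consumers right now
        } >> "$out" 2>&1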

    Read the article

  • Windows is very slow with my new SSD

    - by Maksym H.
    I have an HP ProBook 4520s laptop with a Core i5 M480 @ 2.67 GHz, 4 GB RAM, a 640 GB HDD and a Radeon HD 6370M 1 GB video card. It would seem like a good stack for work, right? But my HDD crashed after about a year of carrying the laptop around every day. After buying my new SSD (Patriot Torqx II 128 GB SATA II) and installing Windows 8 from scratch, it was amazingly fast. But I had only installed Windows updates, and already the speed became the same as with my old HDD; after installing the other software I need for work it gets so slow that my old, lower-spec PC really works better than my awesome laptop. I checked that TRIM and AHCI mode are turned on. So why is that? I asked Patriot Memory support for help; they suggested sending them ATTO test results, which I did. Here is the response: "Thank you very much for the attached results. Looking at the results, I can see that your SSD speed is a lot lower than it should be. Can you tell me your system specs?" While waiting for them to check my email, I replaced Windows 8 with Windows 7 and it was again perfect, but the story repeats: it becomes slower and slower after every new piece of software I install. Check out some screenshots (sorry for the screenshot with the Russian Task Manager; I hope you will recognize the parameters from your English or other-language Task Manager). The main issue is that something keeps loading the disk to 100% and the response time jumps to around 1000-3000 ms. Why am I asking about Windows? Because I tried to install Linux Mint (x86) and it just flies - great performance regardless of how many programs I have installed. Only Windows (7 or 8) has this problem. So guys, I'd appreciate any ideas about how to fix this, and maybe an answer to the main question: why is it so? Thanks!

    Read the article

  • Windows 8 x64 with VMWare Workstation or inside ESXi

    - by Dommer
    I need to run several virtual machines on a Core i7-920 box with 12 GB of RAM and a 256 GB SSD to host the VMs. It also has a HighPoint RocketRAID 2720SGL RAID controller with a 12 TB RAID 5 array. I want one of my VMs to run Windows 8 x64, to have access to the RAID array as a native disk (not as networked drives - it needs to run at full speed), and to be able to send files quickly across the network. Initially I thought I'd try to do this using ESXi 5, but I have been unable to find any working RAID drivers for the RR2720SGL and it is not on the HCL for ESXi 5. In light of this, I have installed Windows 8 x64 on the hardware and am thinking of installing VMware Workstation and running my VMs inside there. I guess my questions are these: How does VMware Workstation 9 perform compared to ESXi 5 in the real world? Presumably installing Win 8 as the host OS will give me much better performance for that Win 8 machine than Win 8 running under ESXi? I should stick with Windows 8 x64 as the host OS, right? If I install a domain controller VM inside my Win 8 box and join the Win 8 machine to that domain, am I insane (I would guess the Win 8 machine wouldn't see the domain controller until it finished starting everything up, but I don't think that matters)? Finally, is it feasible to give a metric like "Win 8 under ESXi runs x% as fast as Win 8 installed on bare metal", and if so, what is the likely value of x - 25%, 50%, 75%?

    Read the article

  • Windows 7 starts getting sluggish over a few days

    - by munrobasher
    The other developer and I are running Windows 7 Enterprise 64-bit with 8 GB RAM on different Gigabyte motherboards with quad-core Intel CPUs. Most of the time, it runs like a dream. We use VMware Workstation a lot (hence the 8 GB) and that works well. Except... now and then, after the PCs have been on for a few days, the whole system starts getting really sluggish doing certain tasks. The other developer's system is far worse than mine, with it taking up to a minute to launch IE. Today, mine has gone sluggish but nowhere near as bad. For example, normally when I click on a new tab in IE, it's instant. Today, there's an obvious delay. Right-clicking in this window to trigger iSpell is normally instant; right now it takes about five seconds. I've got Resource Monitor open on my second monitor, and when I did that right-click there was no obvious peak in CPU, disk or memory. A reboot does fix it, so it does sound like a resource issue, but I haven't a clue what might be to blame. The two computers have similarities (same spec) but also differences (like motherboard, RAM and CPU models). So I guess the question is: any pointers on diagnosing why a PC is sluggish? What could cause such a right-click slowdown in IE, for example? It sounds like such a simple operation. NOTE: whilst typing this message alone, it was fine performance-wise. I can click around the page no problem, but right-click is still noticeably slow. Will reboot over lunch... Cheers, Rob.

    Read the article

  • Best SSD tweaks for Windows 7

    - by Nick Berardi
    I have seen many articles about tweaking an SSD, but many of them seem outdated or too broad (read: general tweaks for all of Windows XP, Vista and Windows 7). And I know that Windows 7 has been specifically tuned for SSDs by the Windows team, so I don't want to do something that was written with Windows XP in mind and end up circumventing something the Windows team has specifically designed into Windows 7. So my question is: what are the best SSD tweaks for Windows 7 that you have found to get the most performance out of your drive? I hope to build a comprehensive list in the answers below so there won't be so much disinformation in the forums about what to do and what not to do. Here are a few that I see posted on the forums a lot, and some questions to get the discussion started: disabling Superfetch - yes or no; disabling the page file or limiting it to a really small size such as 500 MB; disabling indexing - yes or no; disabling defragmenting - yes or no. What are your thoughts - do you have any tweaks that have worked for you? When providing an answer please do your best to back it up with a reason and possibly some documentation from MSDN, TechNet, or another credible source.

    Read the article

  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SaaS application specially customized for multiple clients. One customer in particular is reporting sporadic performance issues from various locations on their network, in particular when UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application, but our application has MANY clients and all features are working fine for all other clients. Interestingly enough, DOWNLOADS (i.e. just accessing the website, or downloading documents) work fine. A speed test shows that they should get 1.2 Mbps up, so a 3 MB file should take 20 seconds to upload. It takes 60+ seconds on their network, and sometimes even small files take OVER 10 minutes to upload or they time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SaaS applications they use allow them to upload just fine. Both IT teams are working together to resolve this issue. What kind of data can I request from the client to begin ruling things out? It seems like we need to somehow measure the LATENCY of the networks involved, or even at the switch level we need to understand whether packets are getting dropped somewhere and why. Where should I start? Any help is appreciated; I'll provide more info upon request.
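
    To separate the application from the network, it may help to have the client measure a raw upload to the same server; a minimal sketch, assuming curl is available on the client and https://example.com/upload is a placeholder endpoint (iperf, if you can run it on both ends, gives a pure TCP number):

        # create a 3 MB test file and time the upload -- endpoint URL is hypothetical
        dd if=/dev/urandom of=test3mb.bin bs=1M count=3
        curl -sS -o /dev/null -T test3mb.bin https://example.com/upload \
             -w 'upload: %{speed_upload} bytes/s, total: %{time_total}s\n'

        # raw TCP throughput towards the server (default iperf port 5001)
        iperf -c server.example.com -t 20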

    Read the article

  • (Win7) Gets stuck with ~1% CPU. Especially with multithreading

    - by meow
    Windows 7 32-bit, up to date, Intel i7 860. (For some reason the company runs 32-bit Windows everywhere.) I have tried to update all motherboard drivers etc. as far as possible. I have a performance issue with this machine which appears in connection with multithreading (or so I think). As an example (and where I most often see it, but it appears with other programs as well): ProteoWizard is a file conversion tool for mass spectrometry files. I can add a list of files and it will attempt to process up to 8 files in parallel (quad core x 2 threads/core). If I choose 1 to 6 files, I start the process and it goes straight through. If I have >= 7 files in the queue, conversion goes to ~20%, then gets stuck for 15 seconds, then continues again, always in "chunks" of a few percent before getting stuck again. During the time the process is stuck, CPU is at 1%. RAM is not the limit; it is maybe at 70% or so and not going up. I don't get the same problem on other, even slower machines. The computer also gets stuck at 1% CPU doing nothing on other occasions, but with multithreading it is most frequent. Where should I look for the problem?

    Read the article

  • TortoiseSVN client slows Explorer to a crawl in Windows XP running in Parallels

    - by Cory Larson
    I thought I'd make my first SuperUser question relatively simple, though it's the kind of question that may not get many responses as I'm not directly involved with the issue. A colleague does his development in Windows XP running in Parallels on his Mac. We've just migrated our VSS repository to SVN, and we've gone with TortoiseSVN as our client of choice with the AnkhSVN plugin for Visual Studio. On his XP instance, after installing TortoiseSVN, browsing through folders using Explorer is extremely slow; it takes about 15-30 seconds before the contents of the next folder display. It's slowest when opening My Computer. Once he reaches a folder that contains the working content of an SVN project, Explorer behaves quickly again as expected. It seems that TortoiseSVN may be spending a lot of time searching subfolders so it can do its icon-overlay thing, but that's just a guess. I've used TortoiseSVN for years on both XP and Vista on far less powerful machines without any Explorer issues, so I'm attributing the slowness to it being run in a VM, though that may not be the actual cause. So has anyone encountered similar performance issues, and/or know of a fix? Keep in mind that any requests to make changes to his configuration will need to be communicated, so my response time might be slow. Thanks everyone!

    Read the article

  • What may the reason of slowness be (see details in message body)?

    - by Ivan
    I've got a really weird situation I'm battling to solve: a performance problem which looks just like an idle wait loop in the code (while it probably isn't one). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the services (except the OpenVPN server used to provide secure access for clients) deployed in separate VirtualBox (vboxheadless) machines: one for the company e-mail server, one for the web server and one for the accounting/CRM server (Firebird plus a proprietary app server working with Delphi-made clients). CPU load (as "top" reports) is almost always near zero. Host system RAM is close to 100% usage but not overloaded (very little swapping occurs, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of the guests' RAM is used. iostat usually shows near-zero %util. Network bandwidth seems to be underused. But the accounting/CRM client (a Win32 Delphi application run on WinXP machines) works hellishly slowly against this server (and works much better against an in-LAN Windows server). I just can't imagine what can make it slow when there are so many CPU, RAM, HDD and bandwidth resources available on the clients and the server, even at their busiest moments. When I say bandwidth is underused, I don't just mean that the clients and the server are connected to the Internet with bigger pipes than they actually use (which would still leave a chance of some bottleneck on the route between them); I have also tested the bandwidth between the clients and the server by copying files between them.
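
    Since CPU, RAM and disk all look idle, the round-trip time between the Delphi clients and the server is worth measuring explicitly; a chatty client/server protocol pays the RTT once per round trip, so even a modest latency multiplies quickly. A rough sketch (hostname and the number of round trips are placeholders):

        # average round-trip time from a client site to the server
        ping -c 50 server.example.com | tail -2

        # back-of-the-envelope: 2000 round trips per screen at 40 ms RTT is ~80 s of waiting
        echo '2000 * 0.040' | bc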

    Read the article

  • Upgrade or replace?

    - by Felix
    My current PC is about four years old, although I have made upgrades to it throughout its existence. The current specs are: (old) Intel Pentium D 2.80Ghz (32K L1 / 2M L2), Gigabyte 945GCMX-S2 motherboard (old) 2.5GB DDR2 (slot0: 512MB @ 533Mhz; slot1: 2GB @ 667Mhz) (new) HIS Radeon HD 4670 - I think this is limited by the motherboard not supporting PCIe 2.0 (?) (old) WD Caviar 160GB - pretty slow (new) WD Caviar Black 640GB (if any more specs are relevant, let me know and I'll add them) Now, on to my question. I've been having performance issues lately, both in video games and in intensive applications. A couple of examples: Android application development (running Eclipse and the Android emulator) is painfully slow (on Linux). I only realized this when, at my new job as an Android dev, both tools are MUCH quicker. (I'm not sure what CPU I have there) The guys at my new job got me NFS Hot Pursuit, in which I barely get like 5-10FPS, even with graphics options turned all the way down My guess is that the bottleneck in my system is my CPU, so I'm thinking of upgrading to a Quad Core i5 + new motherboard + 4GB DDR3 (or more, 'cause I know you'll all jump and say 8GB minimum). Now: Is that a good idea? Is my CPU really a bottleneck, or is the whole system too old and I should replace it? I run Windows 7 on the old, 160GB HDD (which is on IDE, by the way). Could this slow down games as well? Should I get a new drive for Windows if I want to play new games? I know nothing about power supplies. Could that be a problem / will it be a problem if I upgrade to an i5? How come DiRT2 works on full graphics settings (pretty amazing graphics by the way) and NFS Hot Pursuit pulls only 5-10FPS?

    Read the article

  • How can I optimize my Ajax calls to deliver in 60 ms?

    - by Quintin Par
    I am building autocomplete functionality for my site and the Google Instant results are my benchmark. When I look at Google, the 50-60 ms response times baffle me; they look insane. In comparison, here's how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow start and initcwnd fixed. My site is also behind CloudFlare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time down to 60 ms? What more should I be doing to achieve Google-level performance? Edit: People, you seemed to be angry that I made a comparison to Google and that the question is very generic. Sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, given that my server response time is just a fraction of a millisecond? Assume the results are served from Nginx and Varnish with a cache hit. Here are some answers I would give myself, assuming the response sizes remain more or less the same: ensure results are HTTP compressed; ensure SPDY if you are on HTTPS; ensure you have initcwnd set to 10 and disable slow start on Linux machines; etc. I don't think I'll end up at Google's 60 ms, but your collective expertise can easily help shave off 100 ms, and that's a big win.
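
    A minimal sketch of the Linux-side knobs from the list above (gateway, interface and URL are placeholders; initcwnd/initrwnd need a reasonably recent kernel and iproute2):

        # raise the initial congestion window on the default route
        ip route show default
        ip route change default via 203.0.113.1 dev eth0 initcwnd 10 initrwnd 10

        # keep the congestion window from collapsing between keep-alive requests
        sysctl -w net.ipv4.tcp_slow_start_after_idle=0

        # verify the autocomplete responses are compressed, and time them from a client
        curl -sS -o /dev/null -H 'Accept-Encoding: gzip' \
             -w 'size: %{size_download} bytes, time: %{time_total}s\n' \
             'https://example.com/autocomplete?q=test'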

    Read the article

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \
           && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
           -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
           fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk I/O. Here's my CPU usage graph: the cleanup runs are the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute marks. At 15:00 I removed the 39-minute entry from cron, so a cleanup job twice the size now runs half as often (you can see the peaks get twice as wide and half as frequent). The corresponding graphs for I/O time and disk operations tell the same story. At the peak, when there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk I/O for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
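
    For reference, the edit described above, that is the same cron entry with the fuser check dropped so find deletes expired session files directly, would look roughly like this (untested sketch, shown wrapped with backslashes for readability; cron wants the whole entry on one line):

        # /etc/cron.d/php5 -- same schedule, no per-file fuser invocation
        09,39 * * * *  root  [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] \
            && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
               -cmin +$(/usr/lib/php5/maxlifetime) -delete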

    Read the article

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10 GB files from one drive to another, copying a similar file over the network, merging Hyper-V snapshots, compressing large files), performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfers if that gives me more responsiveness. System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8 GB RAM, two SATA drives - 320 GB (ST3320418AS) and 1 TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken. Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it - is there any way to manage I/O priorities on Windows?
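
    For comparison, the Linux approach mentioned at the end would be roughly this (idle I/O class, so the copy only uses bandwidth nobody else wants; paths and PID are placeholders):

        # start the copy in the idle I/O scheduling class
        ionice -c3 cp /data/big.vhd /backup/

        # or demote an already-running process by PID
        ionice -c3 -p 12345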

    Read the article

  • Ubuntu 12.04 VirtualBox on powerful W7 quite slow

    - by wnstnsmth
    I own a ThinkPad T420s with 8 GB RAM, a 160 GB SSD and a quite fast i7 processor - summa summarum a very fast computer that works perfectly. Now, I am not very impressed by the performance of my Ubuntu 12.04 virtual machine running on VirtualBox 4.1.18. I assume that virtual machines are always a bit slower than the host system; still, I think it should be more performant given the resources I give it: 4096 MB RAM; 1 CPU without CPU limitation (I would like to give it more, but then it does not seem to work - I am not experienced in this, so maybe somebody could give me advice on this too); activated PAE/NX, VT-x/AMD-V and nested paging; 96 MB graphics memory (no 2D or 3D acceleration); ~14 GB disk space, of which about 7 GB are currently used. Maybe I misconfigured something; could you give me a hint please? Thanks! Edit: What I mean by slow is that, for example, switching tabs in the browser (whether FF or Chrome) happens with about a 0.5 s delay, as does switching application windows and/or double-clicking applications in the dock to get all open windows; opening Aptana takes about a minute, whereas opening something like Photoshop on the host system takes 5 seconds.
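
    On the multi-CPU point: VirtualBox only presents more than one virtual CPU to a guest when the I/O APIC is enabled for that VM, so a sketch of what that looks like from the command line (the VM name is a placeholder; run this while the VM is powered off):

        VBoxManage modifyvm "Ubuntu 12.04" --ioapic on --cpus 2
        VBoxManage showvminfo "Ubuntu 12.04"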

    Read the article

  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies who make BIOS' either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: Why does my new computer run slower than my old computer? Old Computer: Intel Core 2 Duo CPU @ 3.0 Ghz (stock) 4GB OCZ DDR2 800 RAM Wolfdale E8400 mb nVidia GeForce 8600 GT New Computer: Intel Core i7 920 @ ~3.2 Ghz 6 GB OCZ DDR3 1066 RAM EVGA x58 SLI LE motherboard nVidia GeForce GTX 275 Vista x64 Home Premium on both. "Run slower" is defined as: - poorer FPS performance in the same games, applications - takes longer to start up - general desktop usage (checking email, opening up files, running exe's) is noticeably slower At first I thought I must've not set something up in the BIOS or something. But I have no idea how to set anything in the bios except for "Dummy O.C.", which brought me to ~3.2 Ghz. But beyond that I have no idea. I've been reading stuff about "ram timing" and voltages and the like but I really have no idea about that stuff. I'm a psychologist who has a basic understanding in building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my older one? I'm sorry if this is a bad question, or not appropriate to SO. I'm just pretty frustrated now and you all have helped me in the past so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • My system is always disk-bound (the disk light is always on). Why is this?

    - by Scoobie
    I have been given a laptop by the good folks at my company on which to do my work (Java development). I usually use eclipse as my primary development platform. The laptop is a Dell D830 and runs Windows 7 - 32 bit. Although the processor supports a 64 bit instruction-set, licensing limits me to running the 32 bit OS. The HDD is a WD1600BEVT (Western Digital). I have noticed that my disk is always very slow. Windows start up is usually pretty quick, however as soon as I log on, my disk light stays on and usually, the laptop takes about 4 minutes (after logging in -- immediately upon getting the prompt to press Ctrl + Alt + Del to log in) before it's usable. Questions: Is this expected behavior? What can I do to examine the disk and determine the cause of the problem? What can I do to improve my disk's performance? Any optimizations you may be able to suggest? Other Questions: Some have suggested running Process Monitor (from sysinternals), but how would i get the log since start up? Instead of trying to fix this myself, should I simply push this onto the system administrator? Thanks all.

    Read the article

  • Comparing 128MB GeForce 8600GT and 512MB Radeon X1650

    - by Synetech inc.
    Hi, I'm trying to determine which is the better of these two video cards: one is a 128 MB Nvidia GeForce 8600GT card while the other is a 512 MB ATI Radeon X1650 card. Both cards are the upper-level mid-range versions of their respective series. On the one hand, the ATI has substantially more VRAM, but the Nvidia supports D3D 10 and SM 4.0 as opposed to the D3D 9.0c/SM 3.0 that the ATI supports. Also, I have always heard better things about Nvidia cards compared to ATI cards. I'm trying to find some advice on which one is better, but I can't find any actual comparisons for these specific cards (the comparisons I can find are only for similar ones like the X1650 Pro or the 8600GT PCI-E), so I figure that what I need to know is whether the extra VRAM is that important. Looking at the ATI table and the Nvidia table seems to indicate that the Nvidia is better, but then again, the Nvidia table also says that the GeForce 8600GT is a PCI-E card with at least 256 MB, even though the card in question is an AGP with 128 MB. (:-?) (It looks like the ATI card is not supported in Windows 7 while the Nvidia card is, which I suppose is also a factor, though not quite as immediately relevant as performance.) Any ideas? Thanks a lot.

    Read the article

  • MySQL performance

    - by kapil.israni
    Hi, I have this LAMP application with about 900k rows in MySQL and I am having some performance issues. Background: apart from the LAMP stack, there's also a Java process (multi-threaded) that runs in its own JVM, so together with LAMP and Java they form the complete solution. The Java process is responsible for inserts/updates and a few selects as well. These inserts/updates are usually in bulk/batch, anywhere between 5 and 150 rows. The PHP front-end code only does SELECTs. Issue: the PHP SELECT queries become very slow when the Java process is running. When the Java process is stopped, SELECTs perform alright. I mean the performance difference is huge. When the Java process is running, any action performed on the PHP front end results in 80% or more CPU usage for the mysqld process. Any help would be appreciated. MySQL is running with default parameters and settings. Software stack: Apache 2.2.x, MySQL 5.1.37-1ubuntu5, PHP 5.2.10, Java 1.6.0_15, OS Ubuntu 9.10 (karmic).
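
    A quick way to see what the SELECTs are waiting on while the Java batches run is to capture the process list and InnoDB status during a slow spell; a minimal sketch (credentials are placeholders). It is also worth checking whether the hot tables are MyISAM, since bulk writes there take table-level locks that stall concurrent reads.

        # what is every connection doing right now, and what is InnoDB waiting on?
        mysql -u root -p -e 'SHOW FULL PROCESSLIST\G'
        mysql -u root -p -e 'SHOW ENGINE INNODB STATUS\G'

        # log statements slower than one second for later analysis (MySQL 5.1)
        mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"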

    Read the article

  • jQuery performance

    - by jAndy
    Hi folks, imagine you have to do a lot of DOM manipulation (in my case, it's a kind of dynamic list). Look at this example: var $buffer = $('<ul/>', { 'class': '.custom-example', 'css': { 'position': 'absolute', 'top': '500px' } }); $.each(pages[pindex], function(i, v){ $buffer.append(v); }); $buffer.insertAfter($root); "pages" is an array which holds LI elements as jQuery objects, and "$root" is a UL element. What happens after this code is that both ULs are animated (scrolling), and finally, within the callback of animate, this code is executed: $root.detach(); $root = $buffer; $root.css('top', '0px'); $buffer = null; This works very well; the only thing I'm pi**ed off about is the performance. I cache all the DOM elements I lay a hand on. Without looking too deep into jQuery's source code, is there a chance that my performance issues originate there? Does jQuery use DocumentFragments to append things? If you create a new DOM element with var new = $('<div/>'), it is only stored in memory at this point, isn't it?

    Read the article

  • WF performance with new 20,000 persisted workflow instances each month

    - by Nikola Stjelja
    Windows Workflow Foundation has a problem: it is slow when persisting WF instances. I'm planning a project whose business layer will be based on WF-exposed WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish. What I was led to believe is that, given WF's slowness at persistence, my problem would be unattainable for performance reasons. I have the following questions: Is this true? Will my performance be poor with that load (given WF persistence speed limitations)? How can I solve the problem? We currently have two possible solutions: 1. Each new business process request (e.g. "give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database. 2. Have only a small number of workflow instances up at any given time, with no persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, and having that workflow handle every business process request instance in the system that is currently at that step (e.g. I'm submitting my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow will handle every case simultaneously). I'm very interested in a solution for this problem. If you want to discuss it, please feel free to mail me at [email protected]

    Read the article
