Search Results

Search found 30217 results on 1209 pages for 'website performance'.

Page 137/1209 | < Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >

  • Defragment / Performance Monitor without Task Scheduler

    - by mjaggard
    My organisation has a policy of disabling Task Scheduler on all servers and workstations (don't ask, I tried once to wrestle the pig). I need to collect performance stats using Data Collector Sets in Windows 7 or Windows 2008, but the Performance Monitor interface requires Task Scheduler to be running. Is this possible? I'm not trying to schedule anything, except the collection of WMI information every 15 seconds, and I doubt it hands that task off to Task Scheduler. Is there any way to trick it into thinking Task Scheduler is running? If not, is there any way to temporarily override the group policy to allow Task Scheduler to run? I've found that most group policy can be overridden in this way by an Administrator by editing the registry. In the same vein, I want to defragment a hard disk on one of my workstations, but I can't get it to start because of the dependency on Task Scheduler - is it possible to overcome this?
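
    A minimal sketch of the registry override mentioned above, assuming the policy only disables the service itself. The Task Scheduler service is named Schedule, and its Start value controls startup (2 = automatic, 4 = disabled); note that group policy may re-apply the setting on its next refresh, so this is a temporary override at best:

        # Hypothetical sketch: re-enable the Task Scheduler service by editing
        # the registry directly. Requires Administrator rights.
        import winreg

        SCHEDULE_KEY = r"SYSTEM\CurrentControlSet\Services\Schedule"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SCHEDULE_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
            # 2 = automatic start; group policy typically sets this to 4 (disabled).
            winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 2)

        # A reboot (or 'net start Schedule' from an elevated prompt) is still
        # needed before Performance Monitor will see the service as available.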

    Read the article

  • optimizing file share performance on Win2k8?

    - by Kirk Marple
    We have a case where we're accessing a RAID array (drive E:) on a Windows Server 2008 SP2 x86 box. (Recently installed, nothing other than SQL Server 2005 on the server.) In one scenario, when accessing it directly (E:\folder\file.xxx) we get 45MBps throughput to a video file. If we access the same file on the same array, but through a UNC path (\\server\folder\file.xxx), we get about 23MBps throughput with the exact same test. Obviously the second test is going through more layers of the stack, but that's a major performance hit. What tuning should we be looking at to bring the UNC path closer in performance to the direct-access case? Thanks, Kirk (Corrected: it is CIFS, not SMB, but I generalized the title to 'file share'.) (Additional info: this happens during the read of a single file, not across multiple connections. The file is on the local machine, but exposed via a file share, so the client and file server are the same Windows 2008 server.)
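
    A minimal sketch to reproduce the comparison from the same box, assuming both paths below resolve to the same file; it times sequential 64KB reads, roughly what a single-stream copy does (run on a cold cache, or with a file larger than RAM, so you measure the disk and the share rather than the cache):

        # Hypothetical read-throughput benchmark: direct path vs UNC path to the same file.
        import time

        PATHS = [r"E:\folder\file.xxx",        # direct access
                 r"\\server\folder\file.xxx"]  # same file via the CIFS share

        CHUNK = 64 * 1024  # 64KB sequential reads, similar to a single-stream copy

        for path in PATHS:
            total = 0
            start = time.time()
            with open(path, "rb") as f:
                while True:
                    data = f.read(CHUNK)
                    if not data:
                        break
                    total += len(data)
            elapsed = time.time() - start
            print(f"{path}: {total / elapsed / 1024 / 1024:.1f} MB/s")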

    Read the article

  • Drupal + Zen - not proper layout/no graphics

    - by Luma
    Hello, I installed Drupal (first using Fantastico, then again from scratch) and Drupal works great, except that when I set up the Zen theme (following the instructions on the website and in readme-first.txt) the theme does not show up properly. It has no graphics except for the tiny Drupal logo way at the bottom; the rest is text. I have uploaded the theme to the /public_html/sites/all/themes folder, and it does show up under themes in the administer section, so I selected enabled and default. Permissions on this folder are 755. Any ideas? I checked the Drupal forums and someone is having the same problem, but no one replied (this was in April), so I figured I would have a better shot on this awesome website. Thanks. Luma

    Read the article

  • Need advice on choosing AWS EC2

    - by Mayank
    I'm planning to host a website where, in the first phase, I would target 30,000 users. It is in PHP and runs on an Apache server. I'm assuming 8,000 users can be online in the worst-case scenario, and 1,000 of them will be uploading photographs. A photograph will be resized to around 1MB at the client side, and one HTTP request uploads only one photograph. My plan:

    - 2 Small EC2 instances to run Apache httpd
    - 2 Small EC2 instances for the DB (PostgreSQL): one to write data, the other its read replica
    - EBS volumes for the DBs
    - Lastly, Amazon S3 for the uploaded photographs

    My questions: Is a Small EC2 instance more than what I require - i.e., should I go for Micro instead? Is 8,000 simultaneous users the right number to size against for a new website? Or should I go for the Small instance so it can handle spikes?
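
    A back-of-envelope sketch of the upload burst described above (the upload duration is an assumption, not a measurement):

        # Hypothetical capacity estimate for the photo-upload scenario above.
        concurrent_uploaders = 1000  # stated worst case
        photo_size_mb = 1.0          # resized client-side
        upload_seconds = 10.0        # assumption: time a 1MB upload takes on a slow link

        # Aggregate inbound bandwidth if all 1,000 uploads overlap:
        inbound_mbps = concurrent_uploaders * photo_size_mb * 8 / upload_seconds
        print(f"peak inbound: ~{inbound_mbps:.0f} Mbit/s")  # ~800 Mbit/s

        # Requests per second the Apache tier must sustain during the burst:
        rps = concurrent_uploaders / upload_seconds
        print(f"upload requests/s: ~{rps:.0f}")             # ~100 req/s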

    Read the article

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM) then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read. So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM", or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

    - Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Whatever - it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?
    - Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague, and I find it hard to believe, given that MS has provided the option to disable the pagefile.
    - Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.

    Read the article

  • Performing an Audit for SQL Server 2008

    - by Nai
    Hi all, do you guys have any good step-by-step links for performing a SQL Server 2008 performance audit? I know Brad McGehee has written extensively on this for SQL Server 2005 over at http://www.sql-server-performance.com, but are there any such articles for SQL Server 2008? Thanks!
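
    Much of a 2005-era audit carries over, since the dynamic management views work the same way in 2008. A minimal sketch of pulling the top wait statistics, a common first step in such an audit (the connection string and driver name are assumptions for your environment):

        # Hypothetical sketch: query top wait statistics from SQL Server 2008 DMVs.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;DATABASE=master;Trusted_Connection=yes"
        )
        cursor = conn.cursor()
        cursor.execute("""
            SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
            FROM sys.dm_os_wait_stats
            ORDER BY wait_time_ms DESC
        """)
        for row in cursor.fetchall():
            print(row.wait_type, row.wait_time_ms)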

    Read the article

  • How can ASP.NET's "Request Wait Time" be 0 when "Requests Queued" is consistently in the hundreds?

    - by ondrej
    I'm curious why Performance Monitor claims I always have a few hundred ASP.NET 3.5 requests "queued". The "Requests Queued" counter under "ASP.NET v2.0.50727" is hovering in the few-hundred range, despite the fact that "Request Wait Time" is consistently 0. If each and every request never waits even a fraction of a millisecond, how could any request be in the queue? The "ASP.NET Apps v2.0.50727" counters for "Requests In Application Queue" and "Request Wait Time" are also always 0.
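
    One way to double-check what Performance Monitor shows is to log the counters side by side from the command line using typeperf, which ships with Windows; a small sketch (the __Total__ instance and the sampling settings are assumptions):

        # Hypothetical sketch: sample the ASP.NET counters named above every 5 seconds.
        import subprocess

        counters = [
            r"\ASP.NET v2.0.50727\Requests Queued",
            r"\ASP.NET v2.0.50727\Request Wait Time",
            r"\ASP.NET Apps v2.0.50727(__Total__)\Requests In Application Queue",
        ]

        # -si = sample interval (seconds), -sc = number of samples to collect
        subprocess.run(["typeperf", "-si", "5", "-sc", "12"] + counters, check=True)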

    Read the article

  • increase performance of virtual machines on the 27" iMac

    - by evan
    I'm using a 27" iMac (i7, 8GB RAM) at work and normally run two or three virtual machines at the same time, which hurts the performance of each virtual machine. I've learned on these forums the best way to increase virtual machine performance (aside from RAM) is to have them running on a separate hard drive from the one the OS is on. Of course with the iMac you can only have one hard drive and not even an SAS or solid state drive (well you could probably take it apart and put one in yourself but I wouldn't be permitted to do that). That being said, do you think it would help to run one or more virtual machines from a firewire external drive (or a usb 2.0)? Thanks for your input!

    Read the article

  • How can I receive more traffic? My VPS fails!!!

    - by Vic
    I have a web site - a photo gallery with about 400 photos. The site runs Gallery 3 with MySQL, hosted on a VPS from myhosting.com (CPU 1792 MHz, 2048 MB RAM). Everything seems to be OK, but there is one big problem: once traffic reaches ~20 people online, the website starts loading really, really slowly. Actually, the website can't be loaded at all for about 30-60 sec. What should I do? Buy more RAM/CPU on the same VPS? Move to a dedicated server, or does myhosting.com just suck? What do you recommend?

    Read the article

  • MD3200i Slow Performance and Queue Depth

    - by Caleb_S
    Read performance on our SAN is slow under certain workloads. When we compare this to some local storage, we find the local storage performing 2x as fast. The SAN performs well with a high queue depth, and poorly with a low queue depth; the local storage, however, performs well with a low queue depth. I'd like to know why this occurs and what the specific limiting factor is in this situation.

    MD3200i iSCSI SAN ($15,000): 6 x 600GB 15k SAS RAID5, and 6 x 2TB 7.2k NLS RAID5

    XCOPY /J benchmark (slow):
    - 15k array: 71MB/s (queue depth 1)
    - 7.2k array: 71MB/s (queue depth 1)

    Robocopy /MT:32 benchmark (fast):
    - 15k array: 171MB/s (queue depth ~12)
    - 7.2k array: 128MB/s (queue depth ~12)

    Read performance on a local controller is fast under the workload the SAN is slow at.

    HighPoint 2230 RAID controller ($600): 4 x 1TB 7.2k SATA RAID5

    XCOPY /J benchmark:
    - 7.2k array: 145MB/s (queue depth 1) (appears to max out the SATA bus)
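
    A small client-side sketch of the queue-depth effect described above: keeping several reads in flight hides the per-request round-trip latency that iSCSI adds, which is why the /MT:32 run is so much faster (the path is a placeholder; use a file larger than RAM so the OS cache doesn't dominate the numbers):

        # Hypothetical sketch: measure read throughput at different queue depths by
        # keeping N reads in flight against one large test file.
        import os, time
        from concurrent.futures import ThreadPoolExecutor

        PATH = r"E:\testfile.bin"  # placeholder: large file on the array under test
        CHUNK = 1024 * 1024        # 1MB reads

        def read_chunk(offset):
            with open(PATH, "rb") as f:
                f.seek(offset)
                return len(f.read(CHUNK))

        size = os.path.getsize(PATH)
        offsets = range(0, size, CHUNK)

        for depth in (1, 4, 12, 32):
            start = time.time()
            with ThreadPoolExecutor(max_workers=depth) as pool:
                total = sum(pool.map(read_chunk, offsets))
            mbps = total / (time.time() - start) / 1024 / 1024
            print(f"queue depth {depth}: {mbps:.0f} MB/s")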

    Read the article

  • Tools to monitor guest OS performance in vSphere

    - by Quick Joe Smith
    I am looking for some tool or way to retrieve performance data from guest VMs running under vSphere 4.1. I am currently interested in the 4 basic metrics: CPU (%), memory (%), disk availability (%) and network utilisation (Kb/s). The issue I have is that all of vSphere's performance data is from an ESXi host perspective (active, shared, consumed, overhead, swapped, etc.), which is far removed from the data seen from the VM's own perspective. For instance, I have a Windows server VM idling, using around 410MB (~25% of its allocated 2GB) as reported by Task Manager, and this is the value I'm after. vSphere's metrics seem unable to arrive at this figure by any reliable and repeatable means. Is anyone aware of tools that can obtain this kind of data? The simpler, the better.
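
    Since only the guest OS sees the view you're after, one common approach is a small in-guest collector that reports the four metrics; a minimal sketch using the psutil library (the 15-second cadence and console output are assumptions - in practice you'd ship these values to a monitoring system, and on Windows you'd point disk_usage at the right drive letter):

        # Hypothetical in-guest collector for the four metrics named above.
        import time
        import psutil  # pip install psutil

        prev = psutil.net_io_counters()
        while True:
            cpu_pct = psutil.cpu_percent(interval=15)             # CPU (%) over the window
            mem_pct = psutil.virtual_memory().percent             # memory (%)
            disk_free_pct = 100 - psutil.disk_usage("/").percent  # disk availability (%)
            cur = psutil.net_io_counters()
            # Network counters are cumulative; diff two samples to get a rate.
            kbs = (cur.bytes_sent + cur.bytes_recv
                   - prev.bytes_sent - prev.bytes_recv) / 15 / 1024
            prev = cur
            print(f"cpu={cpu_pct}% mem={mem_pct}% "
                  f"disk_free={disk_free_pct:.1f}% net={kbs:.1f}KB/s")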

    Read the article

  • Cassandra on heterogeneous servers

    - by happy-coding
    I am currently running 4 Cassandra nodes in an Apache Cassandra cluster, each with the following hardware: AMD Athlon 64 X2 6000+, 8G RAM, 750G hard disk. The cluster shows not-so-good write performance and really bad read performance, sometimes with timeouts. I was wondering if it makes sense to add 2 nodes with different hardware (8 CPUs and more RAM) to improve this, or does a Cassandra cluster work best with the same hardware in every node? Thanks & best regards

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning for a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. That website will use TYPO3 as its CMS (a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal - just use an absolute URL like http://static.example.com/js/main.js and be done with it. But: that website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image: first, the original image, like products/some.jpg, is uploaded to the static file server and is therefore not on the same server as the PHP application which tries to create the thumbnail; second, TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory on the same server as the main application - the static file server is in that case basically useless, since all thumbnails will be requested from the main application's server. So, my question is: how can I overcome these shortcomings? Is it possible to "symlink" some directories to another server? So, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), the products folder actually "points" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?
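
    One workaround that sidesteps the file-system question entirely, sketched below: fetch the original from the static host, create the thumbnail on the application server, and push the result back, so all images end up served from static.example.com. This is only a sketch under stated assumptions - the host, credentials, paths and sizes are placeholders, and it assumes Pillow plus FTP access to the static server:

        # Hypothetical sketch: fetch an original from the static server, build the
        # thumbnail locally, and upload it back so it is served from the static host.
        import io
        from ftplib import FTP
        from urllib.request import urlopen
        from PIL import Image  # pip install Pillow

        STATIC = "static.example.com"  # placeholder static file server

        def push_thumbnail(image_path, thumb_name, size=(200, 200)):
            # Fetch the original over HTTP from the static server...
            with urlopen(f"http://{STATIC}/{image_path}") as resp:
                img = Image.open(io.BytesIO(resp.read()))
            # ...resize it locally on the application server (keeps aspect ratio)...
            img.thumbnail(size)
            buf = io.BytesIO()
            img.save(buf, format="JPEG")
            buf.seek(0)
            # ...and push the result back so it is served from the static host.
            ftp = FTP(STATIC)
            ftp.login("user", "password")  # placeholder credentials
            ftp.storbinary(f"STOR thumbnails/{thumb_name}", buf)
            ftp.quit()

        push_thumbnail("products/some.jpg", "some_200x200.jpg")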

    Read the article

  • TCP/IP performance tuning under KVM/Qemu

    - by vpetersson
    With more and more companies switching to public cloud services, I'm curious what you guys think about TCP/IP tuning in the cloud. Is it worth bothering with? Given that you don't have access to the host server, you're somewhat limited, I presume. Let's say, for the sake of argument, that you're running three MongoDB servers in a replica set on FreeBSD or Linux that all sync over an internal network. I'd also be curious if anyone has actual performance benchmarks to back up their arguments. I benchmarked the various network drivers available for KVM/Qemu here, but I'm curious what the gurus here suggest for tuning further. I started playing around a bit with the tuning recommendations suggested over here, but interestingly enough I saw a decrease in performance rather than an increase - though perhaps I didn't fully understand the tweaks. Update: I did a few more benchmarks and posted the result here. Unfortunately, the result wasn't really what I expected.
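
    For reference, the kind of settings such tuning guides usually adjust on a Linux guest; a sketch that applies one commonly suggested set of TCP buffer sizes while recording the old values (the exact numbers are assumptions - as noted above, benchmark before and after, since they can make things worse):

        # Hypothetical sketch: apply commonly suggested TCP buffer tunings via
        # sysctl, printing the old values so they can be restored. Needs root.
        import subprocess

        TUNINGS = {
            "net.core.rmem_max": "16777216",
            "net.core.wmem_max": "16777216",
            "net.ipv4.tcp_rmem": "4096 87380 16777216",
            "net.ipv4.tcp_wmem": "4096 65536 16777216",
        }

        for key, value in TUNINGS.items():
            old = subprocess.run(["sysctl", "-n", key],
                                 capture_output=True, text=True).stdout.strip()
            print(f"{key}: {old} -> {value}")
            subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)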

    Read the article

  • a load balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache benchmark tool to send lots of requests to the servers (once directly against one of the web servers, and once through the load balancer) with the command "ab -n 1000000 -c 500 http://IP/index.html", but the test results show better performance for the single server without the load balancer. Can anyone tell me if I'm going wrong on something?

    Read the article

  • php registration form - limit emails [closed]

    - by Daniel
    I want to restrict which email addresses can register on my website. For example, I might only want people with Gmail accounts to be able to register. This is what I have so far to check whether an address is valid (cleaned up to use preg_match, since the old eregi function is deprecated):

        /* Check if valid email address */
        $regex = '/^[_+a-z0-9-]+(\.[_+a-z0-9-]+)*'
               . '@[a-z0-9-]+(\.[a-z0-9-]+)*'
               . '\.[a-z]{2,}$/i';
        if (!preg_match($regex, $subemail)) {
            $form->setError($field, "* Email invalid");
        }
        $subemail = stripslashes($subemail);
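
    The actual Gmail-only restriction is then just a check on the domain part of the address; a minimal, hedged sketch of the idea (the allow-list contents are an assumption, and the same split-on-@ check ports directly to PHP):

        # Hypothetical sketch of the domain allow-list check described above.
        ALLOWED_DOMAINS = {"gmail.com"}  # assumption: extend as needed

        def domain_allowed(email: str) -> bool:
            # Take everything after the last '@' and compare case-insensitively.
            domain = email.rsplit("@", 1)[-1].lower()
            return domain in ALLOWED_DOMAINS

        print(domain_allowed("someone@gmail.com"))    # True
        print(domain_allowed("someone@example.com"))  # False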

    Read the article

  • iSCSI SAN RAID 10 Performance -- Poor Read, Good Write

    - by Litzner
    I have an EqualLogic PS4000 SAN unit with the latest firmware, set up in RAID 10. I have 3 2TB volumes on the SAN, shared out via iSCSI on 2 Ethernet ports on two different subnets. I have moved a test server over to this newly set-up SAN, and my testing is showing a problem: I am getting dismal read performance in everything except a test with a queue depth of 32 (see attached image). Write performance seems to be right about where it should be. I have tried MPIO on and off; on was slightly better, but not much.

    Read the article

  • Addons which actually make Firefox run faster?

    - by Zombies
    I would like to know of addons which actually enhance Firefox's performance, whether intentionally or as a side effect. I find that Firefox tends to have major performance issues with certain websites. These websites tend to have a fair amount of JavaScript and CSS, and probably a large DOM tree, which may even be growing dynamically through JavaScript. The worst offenders are those with heavy, non-performant or excessive JavaScript, heavy Facebook integration, and too many advertisements.

    Read the article

  • What is the best way to secure MySQL data on a laptop *without* whole-disk-encryption?

    - by GJ
    I need to have the MySQL data on my laptop stored in an encrypted state, so that if the laptop is lost or stolen it will be extremely difficult to recover the data without the password. I don't wish to use whole-disk encryption, due to the performance impact it would have on other disk-intensive programs. What would be the ideal solution for me, balancing security and performance? Thanks!

    Read the article

  • web spidering/crawling, can i do it or just search engines?

    - by bboyreason
    I already had a question answered about web scraping with wget, but as I read a little more, I realize I may be looking for a web-crawling program - particularly because web crawlers are able to get specific data like links or, in my case, products. All of the products on my site follow the naming convention website.com/uniqueAlphaNumericID.html. As far as I know, no dynamic content generation is being used, and there is only one page per item in the above format. Should I just be thinking about something like wget -qO- website.com | grep '\.html' (the -qO- makes wget write the page to stdout for grep, which my original wget website.com | grep *.html would not do), or should I be looking into spiders/crawlers?
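
    If a simple one-page scrape is enough, a tiny crawler sketch for that naming convention might look like this (website.com comes from the question; the assumption that IDs are purely alphanumeric is baked into the regex, so adjust it if they aren't):

        # Hypothetical sketch: fetch the landing page and pull out product links
        # that match the site's uniqueAlphaNumericID.html convention.
        import re
        from urllib.request import urlopen

        BASE = "http://website.com"  # placeholder from the question

        html = urlopen(BASE).read().decode("utf-8", errors="replace")

        # Assumption: product IDs are purely alphanumeric, one page per product.
        product_links = sorted(set(re.findall(r'href="(/?[A-Za-z0-9]+\.html)"', html)))

        for link in product_links:
            print(f"{BASE}/{link.lstrip('/')}")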

    Read the article

  • ubuntu 12.04 - keep getting "Server not found" for some websites

    - by android developer
    Ever since last week, I've noticed that many websites cannot be accessed, and it doesn't matter whether I use Firefox or Chromium as the web browser. An example of such a website is: http://tutorials-android.blogspot.co.il/2011/05/layout-animation-in-android.html. All I get is a "Server not found" error page. Sometimes, after a few refreshes, it works just fine. I've checked on a Windows machine connected to the exact same LAN, and the website loads just fine there. I've also checked the /etc/hosts file, and it doesn't contain anything suspicious. What is going on? How can I fix it?
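
    "Server not found" in both browsers usually points at intermittent DNS resolution failures rather than at the browsers themselves; a quick sketch to test that hypothesis from the Ubuntu machine (hostname taken from the example above, 50 one-second attempts as an arbitrary choice):

        # Hypothetical sketch: repeatedly resolve the failing hostname to see
        # whether DNS lookups fail intermittently on this machine.
        import socket
        import time

        HOST = "tutorials-android.blogspot.co.il"

        failures = 0
        for i in range(50):
            try:
                socket.getaddrinfo(HOST, 80)
            except socket.gaierror as e:
                failures += 1
                print(f"attempt {i}: lookup failed ({e})")
            time.sleep(1)

        print(f"{failures}/50 lookups failed")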

    Read the article

  • Creating a server of an xp computer?

    - by Sweppi
    Hey, I want to create a server for my website, since what I'm going to have on the website is far more than I can store at my web host. So I have this XP computer I'd like to use as a server. I did have a guide for this, but unfortunately the link broke. What I understood was that I'd need a Windows 2000 disc for this to work. Do you have any good tutorials/guides on how I can turn my XP computer into a server? The computer is running Windows XP Home Edition, Service Pack 2. Thank you in advance!

    Read the article
