Search Results

Search found 6670 results on 267 pages for 'speed dial'.

  • World Wide Publishing Service very slow to restart on IIS? Why?

    - by StacMan
    Every now and then, we look to restart our IIS server by restarting the "WWW Publishing Service". On most systems this would usually only take a minute or two, but here it can often take up to 10 minutes to stop the service and restart it. Does anyone know a way to work out what is taking so much time to stop the service? After reading up around the net I've learned this may be due to locked resources used by users and/or lower-level IIS cached items being recycled, but I can't seem to work out where I can validate whether this is true or not. Also, if anyone knows how to fix or speed this up, that would be excellent. We have a large codebase which contains over 280 aspx pages across the site. Our main domain contains about 100 aspx pages whilst the subdomains contain 15 or 20 each. Some specs: the code is written in C# and runs on the .NET Framework 2.0; the server is Windows Web Server 2008 with IIS7; the DB is running SQL Server 2008 Standard.
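    A rough way to see where the time goes, assuming IIS7's appcmd tool is available, is to watch the worker processes while the service is stopping and, if they are what lingers, shorten the per-app-pool shutdown time limit. This is only a sketch; the pool name below is a placeholder, and the timeout value is an example:

        rem Show the service state and PID while it is stopping
        sc queryex W3SVC

        rem List the IIS worker processes that are still alive
        %windir%\system32\inetsrv\appcmd list wp

        rem Optionally shorten how long IIS waits for a worker process to shut down
        rem ("DefaultAppPool" is a placeholder for your application pool name)
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /processModel.shutdownTimeLimit:00:00:30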

    Read the article

  • How to set WAN side buffers for WRT54GL running Tomato Firmware

    - by Vickash
    I've recently set up a machine running m0n0wall to try and fight buffer bloat and do some traffic shaping. It was more convenient (geographically speaking) to connect the cable modem directly to my old WRT54GL, then pass everything to the m0n0wall machine and have that do the real routing work. It took a bit of work, but it's working pretty well. I have a cable connection. I have m0n0wall set up to utilize only 90% of the specified speed of my subscription, which is fine. But I've noticed that at certain times of the day (possibly when my true bandwidth drops below that 90%), there's more latency if the connection is used heavily, and traffic shaping doesn't seem to work as well. I suspect this is caused by the buffers on the WRT54GL still being unnecessarily large. If the connection is working as expected, they won't get filled, but in times of reduced bandwidth they would. Does anyone know the command I need to execute, on the WRT54GL running Tomato Firmware, to reduce the buffers on the WAN interface to the minimum size possible?
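    As a sketch rather than a confirmed Tomato recipe: the transmit queue length of the WAN interface can usually be lowered from the router's shell (telnet/SSH in). The interface name is an assumption here - on many WRT54GL/Tomato setups the WAN side is vlan1, or ppp0 for PPPoE - so check with ifconfig first, and note that whether the router's busybox build accepts txqueuelen is also an assumption:

        # List interfaces and their current txqueuelen values
        ifconfig

        # Shrink the transmit queue on the WAN interface (vlan1 is an assumption)
        ifconfig vlan1 txqueuelen 2

        # Equivalent where iproute2 is present
        ip link set dev vlan1 txqueuelen 2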

    Read the article

  • Ultrium 3 tape drive shoe-shining, 3Mb/s: and it's not the cable

    - by mowsala
    I have an HP 960 Ultrium 3 tape drive. Since I got it (second hand, £90) I've been experiencing shoe-shining. Writing with tar in Linux, I average about 3Mb/s write speed. I've tried replacing both the SCSI card and the cable now, and neither made any difference at all. A curious observation I have made is that the write rate is not consistent. Sometimes it will write for over a minute without shoe-shining, but more often just a few seconds. I've also tried several tapes, different source drives, and even writing from Windows Backup, to no avail.
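    Shoe-shining usually means the incoming data stream can't keep up with the drive's minimum streaming rate, so one common workaround is to put a large memory buffer between tar and the tape and only start writing once it is mostly full. A sketch, assuming mbuffer is installed and the drive is /dev/st0; the buffer and block sizes are just starting points:

        # Buffer ~1GB in RAM, start writing once the buffer is 90% full,
        # and write to the tape in 256k blocks
        tar -cf - /data | mbuffer -m 1G -P 90 -s 256k -o /dev/st0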

    Read the article

  • linux to linux, 10TB transfer?

    - by lostincode
    I've looked at all the previous similar questions, but the answers seemed to be all over the place and no one was moving a lot of data (100GB != 10TB). I've got about 10TB that I need to move from one RAID to another, over a gigabit network, XFS file systems on both ends. My biggest concern is having the transfer die midway and not being able to resume easily. Speed would be nice, but making sure the transfer completes is much more important. Normally I'd just tar & netcat, but the RAID I'm moving from has been super flaky as of late and I need to be able to recover and resume if it drops mid-process. Should I be looking at rsync?
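    rsync is a reasonable fit here precisely because a re-run only transfers what is missing. A minimal sketch with made-up paths and hostname, pushing over SSH:

        # -a preserves permissions/ownership/times, -H keeps hard links,
        # --partial keeps partially transferred files so a re-run can pick them up
        rsync -aH --partial --progress /mnt/oldraid/ user@desthost:/mnt/newraid/

        # If the flaky source drops mid-copy, just run the same command again;
        # files already on the destination are skipped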

    Read the article

  • Options to optimize Lotus Notes 8.5.x

    - by Jakub
    Has anyone found actually useful optimization methods for the bulky, fat, Eclipse-based giant of a nuisance that is Lotus Notes 8.5? I want it to be fast and not eat up system resources like crazy while I run it ALL day (as it is my company's corporate mail / calendar / scheduling solution). I've tried various hacks for the JVM heap size (if I recall correctly); none really brought a performance improvement. I have a dual-core CPU, if that helps (I tried going the route of optimizing Java for two cores in hopes it would work, but saw no speed improvement). Notes is just sooo bloated. Does anyone have any suggestions to optimize/mod this thing so it is more responsive and less of a resource hog? Note: I don't want to switch to the web version or the standard stripped-down versions; I am aware of those, I just cannot, since we don't run those internally at the company.

    Read the article

  • How to quickly empty a very full recycle bin?

    - by Pekka
    I have deleted about half a million files from a folder, and didn't think to press Shift in order to delete them completely straight away. Now they're clogging my recycle bin, and Windows claims it will take 4 hours to empty it - it claims to do about 68 files per second. Is there some magic or an alternative method that can speed this up? Bounty - I'm starting a bounty. The files are still in my bin, as there was no pressing need to get rid of them and this way, I can try out the suggestions presented. I am, however, looking for a way that does not include hard-deleting the contents of the RECYCLER folder - I'm sure that would work, but it feels a bit unclean to me.

    Read the article

  • pfSense + DDoS Protection

    - by Jeremy
    I run a gaming community on a colo with a 100Mbps port. I want to buy a very cheap $35 server with the same 100Mbps port and run pfSense on it as a hardware firewall. I'm dealing with a bunch of 14-year-old kids who have access to botnets, so something like this can become a bit necessary. My overall question: is putting pfSense on a cheap server with an identical datacenter/port speed actually worth it to block DDoS attacks? A bit more detail, since I assume you will ask: the attacks we receive are normally around 1Gbps. We currently run CentOS using the CSF firewall, and even with a software firewall we block 500Mbps UDP floods, or just generic attacks, pretty easily. Thanks, - Necro

    Read the article

  • New graphics card (GTX 760) slowing down entire PC

    - by Cayetano Gonçalves
    My new graphics card is making my PC totally unusable. It boots up really slowly, and when the Windows screen comes on, the mouse lags really far behind. Nothing opens at a normal speed. However, when I put in my 5-year-old graphics card, it all works fine. I'm currently using a Foxconn Renaissance LGA 1366 Intel X58 ATX motherboard, an Intel Core i7-950 Bloomfield 3.06GHz LGA 1366 130W processor, and an EVGA SuperNOVA 850G2 80PLUS Gold Certified ATX12V/EPS12V 850W power supply. I know it can't be the power supply, because I just bought it today to try to fix the problem. I've also installed the newest version of the BIOS available for my motherboard. I've also seen extreme variations in CPU usage while the new graphics card is in, whereas with the old graphics card installed it is much calmer. Any thoughts?

    Read the article

  • How to Set Linux Bonding Interface to Gigabit

    - by Kyle Brandt
    I have enabled Linux active-backup mode bonding. Each interface is a gigabit interface, but the bond interface seems to end up at 100 Megabit:

        bonding: bond0: Warning: failed to get speed and duplex from eth1, assumed to be 100Mb/sec and Full. ...
        bnx2: eth0 NIC Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON ...
        bonding: bond0: backup interface eth1 is now up

    ethtool apparently can't provide info on the bond:

        sudo ethtool bond0
        Settings for bond0:
        No data available

    So does this mean I am operating at 100 or 1000 Megabit (my guess is 1000)? If it is only 100, what options in the ifcfg scripts or the modprobe bonding options do I need to set to make it 1000?
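    A quick way to see what the bond has actually negotiated (a sketch, using the interface names from the question) is to query the physical slaves directly and read the bonding status file; the bond device itself won't report a speed to ethtool:

        # Per-slave link speed/duplex as seen by the bonding driver
        cat /proc/net/bonding/bond0

        # Ask the physical NICs directly
        sudo ethtool eth0
        sudo ethtool eth1

        # Typical module options for active-backup with MII link monitoring,
        # e.g. in /etc/modprobe.d/bonding.conf:
        #   options bonding mode=active-backup miimon=100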

    Read the article

  • 4096 and 8192 block size reads slower than writes when using an LSI 9361-8i in RAID 10?

    - by Min Hong Tan
    Is it possible for reads at 1024 and 2048 byte block sizes to be faster than at 4096 and 8192? I'm using an LSI 9361-8i in RAID 10 with 8 x Kingston E50 250GB drives. Results:

        1024 = Write: 2,251 MB/s  Read: 2,625 MB/s
        2048 = Write: 2,141 MB/s  Read: 3,672 MB/s
        4096 = Write: 2,147 MB/s  Read: 231 MB/s
        8192 = Write: 2,147 MB/s  Read: 442 MB/s

    How is that possible? Below are the readings from when I simply wanted to test the RAID 10 functionality and do a disaster test by pulling out one of the 250GB hard disks; the results are different:

        1024 = Write: 825 MB/s  Read: 1,139 MB/s
        2048 = Write: 797 MB/s  Read: 1,312 MB/s
        4096 = Write: 911 MB/s  Read: 1,342 MB/s
        8192 = Write: 786 MB/s  Read: 1,204 MB/s

    Now the 4096 and 8192 block results are different from before. Can anyone explain whether this is normal, or do I need to do some tuning/configuration? Will it affect my host Linux performance?
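    One way to cross-check whether the 4k/8k read numbers are real or a benchmarking artifact is to measure with dd using direct I/O, which bypasses the page cache so the controller rather than RAM is what gets timed. A sketch with an assumed mount point and test file size:

        # Write a 4GiB test file in 4k blocks, bypassing the page cache
        dd if=/dev/zero of=/mnt/raid10/testfile bs=4k count=1048576 oflag=direct

        # Read it back in 4k blocks, again bypassing the cache
        dd if=/mnt/raid10/testfile of=/dev/null bs=4k iflag=direct

        # Repeat with bs=1k, 2k and 8k and compare against the controller's numbers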

    Read the article

  • Memcached - doesn't seem to be working

    - by Trev
    My local.xml:

        <session_save><![CDATA[files]]></session_save>
        <cache>
            <backend>memcached</backend>
            <prefix>MAGE_</prefix>
            <memcached>
                <servers>
                    <server>
                        <host><![CDATA[127.0.0.1]]></host>
                        <port><![CDATA[11211]]></port>
                        <persistent><![CDATA[1]]></persistent>
                    </server>
                </servers>
            </memcached>
        </cache>

    /var/cache is still filling up. memcached is running:

        memcache 2685 0.0 0.3 351888 26152 ? Sl 08:07 0:19 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1

    How do I know it's working? I notice no speed increases.
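    One way to confirm whether the application is actually talking to memcached (a sketch, assuming netcat is installed) is to watch the cache counters while reloading a page; if get_hits, get_misses and bytes stay at zero, the backend isn't being used at all:

        # Dump memcached's counters; look at get_hits, get_misses, curr_items, bytes
        echo stats | nc -w 1 127.0.0.1 11211

        # Reload a page in the browser, run the command again, and check whether the counters moved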

    Read the article

  • Why use multiple partitions on a RHEL server?

    - by Jakobud
    I'm about to reformat and reinstall CentOS onto an old server. The server runs on a modest 30-node small business network and has a variety of responsibilities including MySQL, a Samba share, DHCPd and SVN/Trac. The old sysadmin had this server set up with almost a dozen different partitions for various things. I'm trying to understand what the advantages of multiple partitions are as opposed to just one filesystem at /. Speed? Flexibility? Security? It seems like if you misjudge the necessary size for any given partition and it ends up filling up too fast, it requires a sysadmin to go in and expand the partition, etc. Seems like it would be easier if everything was just one flat / filesystem. But I'm sure there are some advantages I'm not aware of. The server is currently running a handful of HDDs raided to ~2TB (RAID 0).
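    The size-misjudgment concern is much smaller if the partitions sit on LVM rather than raw disk partitions: growing a logical volume and its filesystem is a two-command job that can be done online for ext3/ext4. A sketch with hypothetical volume group and LV names:

        # Give /var another 20GB from free space in the volume group
        lvextend -L +20G /dev/vg_system/lv_var

        # Grow the ext3/ext4 filesystem into the new space (works while mounted)
        resize2fs /dev/vg_system/lv_var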

    Read the article

  • Performance difference between MacBook Pro (2.8 GHz) vs Air (1.7 GHz)?

    - by jonathanconway
    I'm comparing these two Apple laptops:

        MacBook Pro (13", 2011 model):
        - 2.8GHz dual-core Intel Core i7 processor with 4MB shared L3 cache
        - 4GB (two 2GB SO-DIMMs) of 1333MHz DDR3 SDRAM
        - AMD Radeon HD 6770M graphics processor with 1GB of GDDR5 memory on the 2.4GHz configuration

        MacBook Air (13", 2011 model):
        - 1.7GHz dual-core Intel Core i5 with 3MB shared L3 cache
        - 4GB of 1333MHz DDR3 onboard memory
        - Intel HD Graphics 3000 processor with 384MB of DDR3 SDRAM shared with main memory

    There's definitely a gap between them in terms of CPU speed and graphics, but what practical difference would this make on a day-to-day basis? On the one hand, I love the sleek, thin appearance of the Air. On the other hand, I don't want a machine that's going to be dog-slow when doing tasks such as running virtual machines, dual-booting to Windows and running multiple instances of Visual Studio, and maybe some light gaming. Is there going to be a major difference that makes the MacBook Pro a more attractive purchase?

    Read the article

  • GIMP Slow Startup

    - by muntoo
    Is there any way to speed up GIMP's startup time on Windows Vista Home Premium 32-bit with a 1.6GHz dual-core Intel processor? On XP [different computer] it loads in less than 3 seconds. On Vista it takes 20 seconds:

        2 seconds (other - fonts, brushes, etc.)
        18 seconds (extension-script-fu)

    It just freezes at extension-script-fu. Looking at Process Explorer (or Task Manager, whatever), I see that it's not taking any CPU. EDIT: it does seem to be taking 50% of the CPU. It gets stuck for about 18 seconds, then starts working again, and the actual GIMP program pops up [...finally]. I have the latest stable version running (I think). I tried it with XP SP2 compatibility mode and/or Run As Administrator, but that didn't help. EDIT: One way would be to disable script-fu. Does anyone know how to disable it at startup? (NOTE: Just wanted to point out that the title and the tags are the same. :D )

    Read the article

  • apache2 and php slow first load on Ubuntu VPS - something like mysqltuner but for apache?

    - by talkingnews
    Ubuntu 10.10 64-bit VPS, 512MB dedicated RAM. MySQL is tuned so that mysqltuner is completely happy. Used RAM never goes above 350MB out of the 493 available. Load never exceeds 1.04 or so. httpd.conf is tuned as per all the guides for a VPS with that much memory - number of prefork servers, spares, etc. But the FIRST load of a site after not having visited it for a while takes ages.

        First load:  Parse Time: 3.576 - Number of Queries: 50 - Query Time: 0.019723195953369
        Reload:      Parse Time: 0.096 - Number of Queries: 39 - Query Time: 0.0066126374511719

    Subsequent reloads stay at this speed. htop shows two items as soon as I load that page for the first time:

        php-cgi
        /usr/sbin/apache2 -k start

    I'm using suPHP but I've tried fast-cgi and cgi. Stuck now; a weekend of tweaking has brought me nothing. Advice appreciated.
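    To pin down where the first-hit time actually goes (a sketch; the hostname is a placeholder), curl's timing variables split a request into DNS, connect, time-to-first-byte and total. If time_starttransfer dominates on the cold hit but not the warm one, the delay is in Apache/PHP spinning up rather than in the network or MySQL:

        # First (cold) hit
        curl -s -o /dev/null -w 'dns %{time_namelookup}s connect %{time_connect}s ttfb %{time_starttransfer}s total %{time_total}s\n' http://example.com/

        # Immediate second (warm) hit for comparison
        curl -s -o /dev/null -w 'dns %{time_namelookup}s connect %{time_connect}s ttfb %{time_starttransfer}s total %{time_total}s\n' http://example.com/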

    Read the article

  • What would be the total expenses of running an in-home web server?

    - by techaddict
    I have several hosting accounts with companies and plan to keep them. However, I would like to try my hand at creating my own home web server, both for fun and for the learning experience. I would like to know the expenses involved, including:

        - electricity costs (greater than a light bulb?)
        - internet costs (will I have to upgrade my internet, or is a 3-5Mbps upload speed fine for a web server with a medium amount of traffic? Would I have to get a separate internet connection?)
        - other unknown expenses

    Consider that I will configure the web server myself, so that is not an expense. Also consider that I already have a computer (a year-old Dell laptop, 15R) to dedicate as the web server.
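    For the electricity part, a rough back-of-the-envelope calculation, using assumed figures (a laptop idling as a server drawing around 30W on average, power at about $0.12/kWh; substitute your own numbers):

        # kWh per year at a 30W average draw
        echo "30 * 24 * 365 / 1000" | bc -l     # ~262.8 kWh/year

        # Cost per year at $0.12 per kWh
        echo "262.8 * 0.12" | bc -l             # ~$31.5/year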

    Read the article

  • Can I put a laptop Core i7 CPU in a desktop?

    - by Weezy
    One of the most important things to me in a CPU is a good mix between speed and heat. For example, five years ago I bought a Core 2 Duo 6300 (max TDP 65W): I put a big heatsink on the CPU, no fans (I do hate moving parts and noise) and it worked like a charm, and very silently, for five years (and it still works, but five years later I wouldn't mind a faster CPU, a faster memory controller and more memory). I consider a max TDP of 130W unacceptable (like some high-end Core i7s have), for several reasons. So I was wondering: can I build a desktop and put a Core i7 CPU meant for laptops in it? For example, I was thinking about the Core i7 740QM (max TDP 45W [!]). Are these compatible with desktop Core i7 motherboards? (For example, NewEgg says the "CPU socket type" for the Core i7 740QM is PGA988; I'm not too sure what that is.)

    Read the article

  • Anytime I upload something, my internet slows down extremely. What can I do?

    - by Earlz
    Title says it all. For a bit more info though: basically, I have Time Warner cable internet. My speeds hold at a stable 2Mbit/s upload and 20Mbit/s download with average ping times around 30ms. This crazy thing happens, though, when I upload anything. I went to upload a 200MB file to my server today through sftp and my internet completely choked up. I ran a speed test during this upload and got a ping time of around 800ms, a download speed of 0.2Mbit/s and an upload speed of 0.3Mbit/s. Note, I wasn't downloading anything during this time either; it is just straight upload. What is it that causes this phenomenon? My router is OpenBSD. Is there anything I could set up to fix this problem (by queues or some such), or is this a problem with cable internet?
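    Queueing on the OpenBSD box is the usual approach to this symptom: cap outbound traffic just under the real upload rate and give empty ACKs their own high-priority queue, so downloads keep flowing while an upload is running. A minimal pf.conf sketch of the classic priq/ACK-priority setup; the interface name and rate are assumptions, not a tuned configuration:

        # /etc/pf.conf (fragment) - em0 and 1800Kb are placeholders
        ext_if="em0"

        altq on $ext_if priq bandwidth 1800Kb queue { q_ack, q_dfl }
        queue q_ack priority 7
        queue q_dfl priority 1 priq(default)

        # Bulk TCP goes to q_dfl, empty ACKs are lifted into q_ack
        pass out on $ext_if proto tcp from any to any queue (q_dfl, q_ack)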

    Read the article

  • Upgrading my home network to Gigabit Ethernet and Wireless-N turns out slower than before

    - by Raheel Khan
    My home network has three desktops, three laptops and some NAS drives. All desktops and NAS drives support Gigabit LAN and all laptops support Wireless-N. I was running a 100BaseT switch, though. I recently purchased a Gigabit Ethernet switch and a Wireless-N ADSL modem-router. After upgrading, I noticed that wireless file transfer speeds from laptop to NAS and vice versa became terribly slow - possibly even slower than before the upgrade. The transfer speeds from desktop to NAS (wired) have improved, though. As an example, copying a 50GB file from laptop to NAS was estimated at 15 hours! Is there something I can do to improve this? Also, should I consider buying a dedicated wireless access point for speed rather than using the wireless modem-router?
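    Before blaming the new gear, it helps to separate the wireless leg from the NAS itself: iperf measures raw TCP throughput with no disks involved. A sketch, assuming iperf can be installed on both ends (if the NAS can't run it, test laptop to desktop instead); the IP address is a placeholder:

        # On the NAS or a wired desktop
        iperf -s

        # On the laptop, over wireless (30-second test)
        iperf -c 192.168.1.10 -t 30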

    Read the article

  • Processor speeds on my machine don't live up to manufacturer hype

    - by atch
    Why am I not seeing the promised speed claims of processor manufacturers on my computer? Producers of processors claim that their product can perform so many thousands (or millions) of operations per second. And yet on my machine (4GB RAM, 3.5GHz), a typical program (Word, Visual Studio etc.) takes at least 10 seconds to start. I've formatted my hard drive and ticked all the necessary boxes to optimize my machine, and yet I'm not seeing the promised speeds. Say it takes Outlook ten seconds to load. How many millions of operations does it really go through in order to start up?

    Read the article

  • Can Acrobat 11 be made to do OCR using multiple CPU cores?

    - by tarcman.
    OCR processing takes time, and using multiple CPU cores would speed it up. Acrobat 10 was not a multithreaded application. How about Acrobat 11? Does 11 by default do OCR using multiple CPU cores (if available)? If not, are there any workarounds, e.g. scripting, to make Acrobat 11 do OCR using multiple CPU cores? Either through Acrobat's built-in scripting language, or using external scripts that launch and direct multiple single-threaded instances of Acrobat to work in parallel on parts of the processing job. Note: This question is not too localized (not limited to a specific moment in time) because (1) Adobe does not release new major Acrobat versions very often (Acrobat 10 was released two years ago) and (2) Adobe Acrobat is a widely used application.

    Read the article

  • What are the minimum hardware requirements for the latest version of Android Jelly Bean OS?

    - by Stom
    I searched around, and there's no information that points exactly to suggested or minimum hardware specifications for this. I want to install a newer version of Android on an older ZTE-X500 MetroPCS smartphone. However, I'd like to know about backwards compatibility with regard to running a newer, more featureful OS on hardware that is limited compared to today's smartphones, such as the Galaxy S4. I still wish to do this, though. If Jelly Bean is too demanding, I will set up Honeycomb, get a modified Honeycomb ROM, or tweak the source to my preferences. However, nothing outlines the specifics of the "system requirements" suggested for optimum performance, such as RAM, processor speed, processor features, and/or any other requirements, like DMA, video circuitry, or sound and other special hardware. Please point me to a source that mentions this, and please do not link me to any PDF files. Thank you. PS: I'm a computer programmer.

    Read the article

  • Should I install Windows 7 on a 3-year-old PC?

    - by Jitendra vyas
    This is my PC configuration; should I upgrade from Windows XP to Windows 7? Currently I'm using Windows XP SP3 32-bit. Will I get the same, better, or worse performance if I install Windows 7 on this system? Or would sticking with XP be better?

        Memory (RAM): 1472 MB DDR RAM (not DDR2)
        CPU Info: AMD Sempron(tm) Processor 2500+
        CPU Speed: 1398.7 MHz
        Sound card: Vinyl AC'97 Audio (WAVE)
        Display Adapters: VIA/S3G UniChrome Pro IGP | NetMeeting driver | RDPDD Chained DD
        Network Adapters: Bluetooth Device (Personal Area Network) | WAN (PPP/SLIP) Interface
        Hard Disks: 300 GB SATA HDD
        Manufacturer: Phoenix Technologies, LTD
        Product Make: MS-7142
        AC Power Status: OnLine
        BIOS Info: AT/AT COMPATIBLE | 01/18/06 | VIAK8M - 42302e31
        Motherboard: MICRO-STAR INTERNATIONAL CO., LTD MS-7142
        Modem: ZTE USB Modem FFFE CDMA

    Read the article

  • Connect two subnets without router

    - by Shcheklein
    I've got two Comcast routers, each with a different subnet. Each subnet contains 5 static IPs. Two questions:

        1. Are there any problems if both routers and machines from both subnets are connected to one switch? Security issues don't matter here; I need to know if there are performance or other problems.
        2. Is it possible to make machines from different subnets see each other if they are all connected to one switch? Some static routing, added ARP records, or something else?

    I just want to avoid configuring a second Ethernet adapter, a third router, or anything like that. And I need to connect these subnets via a high-speed local network.
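    If the hosts are Linux, one hedged sketch for question 2 is an on-link route on machines in each subnet, so traffic for the other subnet is ARPed for directly on the shared switch instead of being sent to the Comcast gateway. The addresses below are placeholders for the two /29s, both sides need their route, and Windows hosts would need the equivalent route add command:

        # On a host in subnet A (198.51.100.0/29), reach subnet B directly on the LAN
        ip route add 203.0.113.0/29 dev eth0

        # On a host in subnet B, the mirror route back
        ip route add 198.51.100.0/29 dev eth0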

    Read the article

  • Slow Network Performance with Windows Server 2008 SP1

    - by Axeva
    I recently installed Service Pack 1 for Windows Server 2008. Since that time, network performance has been awful. Both Windows 7 and Mac Snow Leopard clients have seen miserable speeds when trying to read or write to the server. This is the exact update: Windows Server 2008 R2 Service Pack 1 x64 Edition (KB976932) It's a very simple file server setup. No Domain or Active Directory. Essentially just shared folders. It's Windows Web Server that I'm running. Are there any settings I can tweak? Should I roll back the update (doesn't seem wise)? Update: I've turned off the Power Management for the Network Adapter. That may help. If it doesn't have to be powered on at the start of a request, it should speed things up. Or so I would assume.
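    One setting worth checking alongside the NIC power management is TCP receive-window auto-tuning, a common suspect when file-share transfers to a 2008-era server suddenly become slow. A sketch; treat it as a test and revert with =normal if it makes no difference:

        rem Show the current global TCP parameters
        netsh interface tcp show global

        rem Temporarily disable receive window auto-tuning to test
        netsh interface tcp set global autotuninglevel=disabled

        rem Revert if it doesn't help
        netsh interface tcp set global autotuninglevel=normal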

    Read the article
