Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.

Page 364/925 | < Previous Page | 360 361 362 363 364 365 366 367 368 369 370 371  | Next Page >

  • What could cause a huge packet loss in Ubuntu 9.10, for both wired and wireless?

    - by xzenox
    I was previously using 9.04 without problems (in fact, I am posting this from my old 9.04 live CD). I tested the following install steps in a VirtualBox VM prior to following the same ones to upgrade my laptop: download/burn the Ubuntu minimal CD (the 12MB one), install Ubuntu minimal, then sudo apt-get update, sudo apt-get upgrade, sudo apt-get install ubuntu-desktop ubuntu-standard. In the VM this worked fine and I found myself with a working 9.10 Ubuntu; the network worked fine and I was able to test my backups and Dropbox without a hitch (the host was 9.04). When I followed the same steps on my laptop, everything worked up to the point where 9.10 was installed and running. As far as I can tell, everything besides eth0/wireless works. For some reason, I am unable to access the internet. Ping reports that over 99% of packets get lost (over an hour or so of pinging). This means, for example, that if I try hard enough I can load a webpage, but only at the cost of much patience... This happens both for a wired and a wireless connection to my WRT310N (updated with the latest firmware). At first I thought it could be related to the IPv6 issues people have been experiencing, but even after disabling IPv6 at the kernel level (through GRUB), I still get the issue. I do not think this is related to DNS issues or the like, since even when I ping my ISP's gateway IP, I see the same amount of packet loss; no DNS resolution should be required there. Access to my router works fine with no packet loss. I've tried different MTU values but to no avail. Note that this issue affects every web-enabled application: Firefox, ping, Synaptic, etc. The same hardware/router combo works with 9.04 but not with 9.10. In fact, when I ran sudo apt-get install ubuntu-desktop ubuntu-standard after 9.10 minimal was installed, it downloaded over 400MB of packages without a hitch, so my guess is that one of the packages in ubuntu-desktop or ubuntu-standard is causing havoc. Thoughts?
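
    A minimal diagnostic sketch of the checks described above (the interface name eth0 and the gateway address 192.168.1.1 are placeholders for your own values):

      # sustained pings to the router and then to the ISP gateway, to compare loss
      ping -c 100 192.168.1.1
      # negotiated link speed/duplex on the wired NIC, plus interface error counters
      sudo ethtool eth0
      ifconfig eth0 | grep -i errors
      # driver complaints around the time of the loss
      dmesg | grep -iE 'eth0|wlan0'
      # temporarily lower the MTU to rule out fragmentation problems
      sudo ifconfig eth0 mtu 1400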

    Read the article

  • Intermittent CNAME forwarding

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. Since I have a dynamic IP, I use no-ip to make sure I have a working domain name at all times. I also have a domain I have bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my no-ip domain. At all times, I can connect to my website through the no-ip domain without issue. For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of today, however, the GoDaddy domain only works for about 10 minutes at a time. I get server not found errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have attempted to run tests using a couple of online DNS check websites, but have not gotten any errors at any time. I also contacted GoDaddy support but they had no issues connecting to the website, and therefore did not see any issues. I would like advice on how I could debug/resolve this issue. Since the problem appeared without me changing anything on my end, I hope it will resolve itself, but knowing the cause in case it happens again would be preferable. EDIT: I changed the configuration in GoDaddy to create an A (Host) that points at my current IP. This works fine, so I can access the site through the GoDaddy domain without the preceding www. I am currently waiting for a new CNAME record to propagate that points the www subdomain at the main host, rather than my no-ip domain.
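
    A quick way to tell whether the CNAME record itself is misbehaving is to compare what a public resolver returns with what the domain's authoritative nameservers return (the domain names below are placeholders):

      # what a public resolver currently hands out for the www name
      dig @8.8.8.8 www.example.com +noall +answer
      # list the authoritative nameservers for the zone
      dig example.com NS +short
      # query one of those nameservers directly, bypassing all caching
      dig @ns1.some-registrar.example www.example.com CNAME +noall +answer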

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    I currently have 2 cPanel web servers that are little 1RU dual-CPU quad-core Xeons. They have a lot of resources for processing and handling web requests, and never exceed 10% CPU usage. They also have plenty of RAM. The problem, though, is that they both have RAID 1 160GB SAS hard disk drives that are 75% full and growing by the day. I didn't think the disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are: - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went :) - Any recommendations on networking? Is gigabit ethernet enough? Is TCP/IP going to be a noticeable performance problem? Has anyone used a TOE key? - Has anyone benchmarked or had any performance issues with SAN over NAS? Any help greatly appreciated. Scott
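
    For reference, a minimal sketch of what an NFS-backed /home could look like on each web server (the hostname, export path and mount options are assumptions to adapt and test, not a recommendation for your workload):

      # /etc/fstab entry on the cPanel "processing" servers
      storage01:/export/home  /home  nfs  rw,hard,intr,rsize=32768,wsize=32768,noatime  0  0

      # quick manual test before committing it to fstab
      mount -t nfs -o rw,hard,intr storage01:/export/home /mnt/home-test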

    Read the article

  • SQL transaction log backups conflicting with full backups?

    - by BradC
    On our SQL servers (2000, 2005, and 2008), we run full backups once a day in the evening and transaction log backups every 2 hours. We haven't really worried about these two processes conflicting, but lately we've run into some of the following issues: On one server, the trans log backup occasionally blocks the full backup and must be manually stopped before the full backup can complete. We sometimes end up with a massively sized trans log backup file (sometimes larger than the full backup!) that seems to occur at the same time the full backup is running. I found a reference that indicates these are "not allowed" to run at the same time, whatever that means: SQL 2000 Books Online and SQL 2005 Books Online. I'm not sure whether that means the server will simply prevent them from running simultaneously, or if we ought to be explicitly stopping the log backups while the full backups are running. So are there known conflicts/issues between these? Does the answer differ between SQL versions? Should I have the trans log backup job check to see if the full backup is running before it executes? (and how do I do that...?)
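
    A sketch of one way the log-backup job could defer to a running full backup on the 2005/2008 instances (sys.dm_exec_requests does not exist on SQL 2000; the database name and backup path are placeholders):

      sqlcmd -E -Q "IF EXISTS (SELECT 1 FROM sys.dm_exec_requests WHERE command = 'BACKUP DATABASE') PRINT 'Full backup in progress - skipping log backup' ELSE BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_log.trn'"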

    Read the article

  • Permissions on Mac for iTunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive, with my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location in my home directory. I don't want to mess with an external drive, as my Mac's internal drive is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same gob of music. I don't want 4 copies of the library, just one that all libraries reference. So, what I want to do is: 1 - move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions so all four users can read and write to it. 2 - get all four accounts to reference this library instead of the external or local home locations. I am hoping I can just check the box to keep the library organized in my account (which is the admin account) and let iTunes move it all, then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, for example by making the files accessible to my account only, or by removing write permissions, or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and leave me with mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming? Thanks for help.
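
    A sketch of the ACL route that usually avoids the ownership 'gotcha' when iTunes reorganizes files (the path and the use of the staff group are assumptions; every account needs to belong to whatever group you pick):

      # grant the group an inheritable allow ACL on the shared folder and everything below it
      sudo chmod -R +a "group:staff allow delete,chown,list,search,add_file,add_subdirectory,delete_child,file_inherit,directory_inherit" /Users/music
      # verify the ACL entries
      ls -le /Users/music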

    Read the article

  • Optimum configuration of McAfee for Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately this is non-negotiable. On the two types of servers I'm responsible for, SQL & web, we have noticed major performance issues with the corporate standard setup: max scan time 45 sec; one policy for all processes; scan ALL files on write, read and open for backup; heuristics: find unknown programs, trojans and macros; detect unwanted programs; exclusions: EVT, LDF, LOG, MDF, VMD, Windows file protection. This of course still causes major slowdowns: IIS .NET recompiles are slow (especially with SharePoint), as are SQL backups and restores, SQL Analysis Services, Integration Services and the temp data they produce. I have looked from time to time for some best practices on setting up McAfee for SQL, SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as little as possible? Has anyone run across white papers on these setups? Obviously some of this is case by case, but there must be some best practices out there somewhere.

    Read the article

  • nginx connection pool race condition?

    - by wlf
    I have a shared hosting server with high traffic. I have a lightweight Apache mod_proxy front end for static content that from time to time has a "504 proxy error" problem proxying to apache/mod_php. The error log says: error reading status line from remote server 127.0.0.1:8080 Error reading from remote server returned by / This is what the Apache documentation says about it: proxy-initial-not-pooled If this variable is set no pooled connection will be reused if the client connection is an initial connection. This avoids the "proxy: error reading status line from remote server" error message caused by the race condition that the backend server closed the pooled connection after the connection check by the proxy and before data sent by the proxy reached the backend. It has to be kept in mind that setting this variable downgrades performance, especially with HTTP/1.0 clients. I am really concerned about this downgrade in performance, so I started to look at nginx immediately. I am new to nginx and time is crucial right now; I can't afford to waste days studying it just to find out it has the same race condition issue. Is nginx affected by this connection pool race condition? Thanks
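
    For reference, the workaround quoted above is applied on the mod_proxy front end as an environment variable, roughly like this (the backend address is a placeholder):

      SetEnv proxy-initial-not-pooled 1
      ProxyPass / http://127.0.0.1:8080/
      ProxyPassReverse / http://127.0.0.1:8080/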

    Read the article

  • Google chrome not accepting any security certificates

    - by Jerry
    I've recently developed a problem with Google Chrome that's really annoying. I'm using Firefox at the moment with no problems whatsoever and it's the same with IE, so it's safe to say this problem is specific to Chrome. The problem is that it's not accepting security certificates from certain sites. I suppose the best place to start would be google itself. I can't search. The google search page will load but when I type some search term into the search box and hit 'search' I get the message: "You attempted to reach www.google.com, but the server presented an invalid certificate. You cannot proceed because the website operator has requested heightened security for this domain." No matter what the search term is, this is the result. Also when I try to log in to facebook - same message. Youtube works and many other sites that I know present security certs so I'm baffled. I've searched and there are other people who have had similar issues but I can't find a solution anywhere. The most common answer I'm picking up for this is to "check your system time" but I can safely say that it's not my system time. If anyone knows what is going on, I'd very much appreciate being informed. It's not super urgent as I can use Firefox to access those places Chrome won't, but it IS super annoying because I can usually sort out issues like this in no time.
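
    One browser-independent check is to look at the certificate that is actually being presented; if the issuer turns out to be an antivirus, proxy or other local product rather than a public CA, that points at SSL interception on the machine. A sketch, assuming OpenSSL is available and using one of the affected hosts as an example:

      openssl s_client -connect www.google.com:443 -servername www.google.com < /dev/null 2> /dev/null | openssl x509 -noout -subject -issuer -dates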

    Read the article

  • Please advise on VPS config choice (with SQL Server Express)

    - by tjeuten
    Hi all, I might be interested in getting a VPS hosting plan for some small personal sites and .NET projects. I was thinking of the Softsys Bronze Plan, as my current shared hosting plan is with them too. The stuff I want to host has grown beyond the capabilities of a shared hosting plan, and I also want more control over the IIS/ASP.NET configuration; that's why I'm considering VPS. The main config details would be: Hyper-V, 30 GB of diskspace, 1 GB of RAM. More info here: http://www.softsyshosting.com/Windows-VPS-HyperV.aspx Does anyone have experience with this plan (or something similar from another host), and could you maybe answer these couple of questions: Bronze has a total diskspace of 30GB. Is the OS part of this quota or not? If so, how much does a base configuration with Windows 2008 take up in diskspace? Would you advise Windows 2008 R2 or the standard release? Or would you advise using Windows 2003 with this config? I'm planning on running a SQL Server Express install too. Would 1 GB of RAM be enough for both Windows 2008 (R2) and SQL Express? The database load will not be very high (a couple of thousand records returned each day). The DB will most likely be far away from the 4GB limit; that's why I'd go for SQL Express instead of paying extra licensing costs for a SQL Web install. But I'm more concerned about performance. Would you recommend Softsys as a VPS host? I've been with them for one year on my shared hosting plan, and have no complaints so far. Also, as I have no VPS experience, what are the pitfalls I need to be aware of, mainly in terms of performance, but maybe in other areas too? Mathieu

    Read the article

  • Sysinternals Handle does not accept the -c parameter

    - by Alex
    I am trying to close a handle to a locked file in Windows, using the Sysinternals Handle utility (http://technet.microsoft.com/en-us/sysinternals/bb896655). First I search for open handles: handle.exe "C:\Temp" It gives me the following:

      Far.exe pid: 1144 type: File 2E8: C:\Temp
      Far.exe pid: 1144 type: File 3A8: C:\Temp

    Next I run handle.exe with the -c parameter. However, whichever number I enter, it does not do anything. I have tried 1144, 2E8, 3A8 and 1144 in hex (478), as the software help says it accepts the PID in hexadecimal. Whatever I enter, it just prints the following:

      Handle v3.46
      Copyright (C) 1997-2011 Mark Russinovich
      Sysinternals - www.sysinternals.com

      usage: handle [[-a [-l]] [-u] | [-c <handle> [-y]] | [-s]] [-p <process>|<pid>] [name]
        -a     Dump all handle information.
        -l     Just show pagefile-backed section handles.
        -c     Closes the specified handle (interpreted as a hexadecimal number).
               You must specify the process by its PID.
               WARNING: Closing handles can cause application or system instability.
        -y     Don't prompt for close handle confirmation.
        -s     Print count of each type of handle open.
        -u     Show the owning user name when searching for handles.
        -p     Dump handles belonging to process (partial name accepted).
        name   Search for handles to objects with <name> (fragment accepted).
      No arguments will dump all file references.

    What am I doing wrong?
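
    Going by the usage text above, -c takes the handle value (in hex) while the owning process is supplied separately with -p, so closing the two handles listed would look something like this, from an elevated prompt:

      handle.exe -c 2E8 -p 1144
      handle.exe -c 3A8 -p 1144 -y

    (the -y on the second line skips the confirmation prompt, as described in the usage text).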

    Read the article

  • Best way to 'harden' embedded ext4 file server against unexpected loss of power?

    - by Jeremy Friesner
    Hi all, First, a little background: my company makes an audio streaming device that is a headless, rack-mounted Linux box with a couple of SSDs attached. Each SSD is formatted with ext4. The users can connect to the system using Samba/CIFS to upload new audio files or access existing ones. There is also custom software for streaming out audio over the network. This is all fine. The only problem is that the users are audio people, not computer people, and see the system as a 'black box', not as a computer. Which means that at the end of the day, they aren't going to ssh in to the box and enter "/sbin/shutdown -h"; they are just going to cut power to the rack and leave, and expect things to still work properly the next day. Since ext4 has journalling, journal checksumming, etc, this mostly works. The only time it doesn't work is when someone uploads a new file via Samba and then cuts power to the system before the uploaded data has been fully flushed to the disk. In that case, they come in the next day and find that their new file has been truncated or is missing entirely, and are unhappy. My question is, what is the best way to avoid this problem? Is there a way to get smbd to call "sync" at the end of every upload? (Performance on uploads isn't so important, since they only happen occasionally). Or is there a way to tell ext4 to automatically flush within a few seconds of any change to a file? (Again, performance can be sacrificed for safety here) Should I set a particular write-ordering mode, activate barriers, etc?
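
    A sketch of the two knobs asked about above, with the share name, paths and device as assumptions: in /etc/samba/smb.conf, make smbd sync on every write,

      [audio]
         path = /mnt/ssd0/audio
         strict sync = yes
         sync always = yes

    and in /etc/fstab, keep write barriers on and shorten the journal commit interval for the SSD:

      /dev/sda1  /mnt/ssd0  ext4  defaults,barrier=1,commit=1  0  2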

    Read the article

  • SATA HDD stuttering on reads (LED on)

    - by jdehaan
    I hope I can get some new ideas to solve this issue. What I have: AMD Phenom II X4 905e Box, Socket AM3; Gigabyte GA-MA790XTA-UD4, AMD 790X, AM3 ATX; WD Caviar GreenPower 1.5TB SATA II; 4GB Kit OCZ DDR3 PC3-10666 Platinum Low-Voltage CL7; HIS HD 4670 iSilence4 DDR3 1024MB Native HDMI Dual-DVI, PCI-Express. Behavior: Everything works normally except the HDD accesses. The drive seems to get stuck reading data (the HDD LED driven by the mainboard stays on for a while), then recovers and goes on. There is sometimes a delay of more than 30s; often it is a matter of 5-10s. This does not seem related to the workload of the disk. What I tried so far: Used a different SATA port: I switched from SATA3 to SATA2, then to the SATA ports driven by the CPU - no difference. Turned AHCI off and ran the SATA in IDE mode - same behavior; by now I felt the HDD itself had problems. Deactivated energy saving settings (BeQuiet, ...) and disabled 3 CPUs (msconfig), suspecting some race condition or trouble with changing frequencies - same trouble. With Windows 7 Professional 64-bit, I installed the hotfix http://support.microsoft.com/?scid=kb%3Ben-us%3B976418&x=8&y=9 with no success. The problem description matches what I experienced, so I felt lucky, but was quite disappointed when it did not help. Burned the DLG 5.04f DataLifeGuard CD from WD to check the drive: all tests passed, to my surprise! Installed Ubuntu (i386 flavour) on the same drive, still in IDE mode: exactly the same behavior, so the OS does not matter. What I did not try yet: http://www.gigabyte.eu/Support/Motherboard/BIOS_Model.aspx?ProductID=3263 I have an F2 BIOS and there is an F3a beta - did someone have similar issues that were solved by this update? It is a beta so I am a bit afraid of upgrading, and I could find no release notes at all on the net - not really serious work; maybe they had issues translating from Chinese??? Also not yet tried: using another HDD. Does someone have other hints or a similar issue? Maybe it is neither the HDD nor the mainboard/BIOS?
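
    Since the Ubuntu install shows the same symptom, a few quick checks from there might narrow it down (the device name /dev/sda is an assumption):

      # SMART attributes and error log - look for pending/reallocated sectors and UDMA CRC errors
      sudo smartctl -a /dev/sda
      # kernel messages around the time of a stutter - link resets, NCQ complaints, etc.
      dmesg | grep -iE 'ata[0-9]|reset|ncq'
      # query the drive's power management level; aggressive idle behaviour on GreenPower drives can show up as pauses
      sudo hdparm -B /dev/sda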

    Read the article

  • Possible DNS issue after a reinstall of Windows Server 2000 (get off my lawn)

    - by cop1152
    I just replaced a drive on a Win2000 Server that replicates AD and issues out DHCP at one of our offices. I successfully joined it to the domain, set up a range of IPs, etc., but am still having issues. I cannot RDC to it by name or IP. I can ping it, browse to it with Windows Explorer, and remote to it with some other software, but not RDC. The other issue is this: users are unable to authenticate against it. They receive the message 'username or password incorrect' (or something like that). Changes made on the main domain controller seem to take forever to trickle down. The most significant entry in the DNS Server log is Event ID 7062: The DNS Server Encountered a Packet Addressed to Itself. At least, I think it's significant. The Directory Services log shows numerous Event ID 1265 entries: The attempt to establish a replication link with parameters failed with the following status: The DSA operation is unable to proceed because of a DNS lookup failure. Does this make any sense to anyone? I feel like it's something very simple that I am overlooking. Thanks in advance.
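
    If the Windows 2000 Support Tools are installed on the rebuilt box, these give a quick picture of DNS, replication and DC health (the last command uses a placeholder name):

      dcdiag /v
      netdiag /test:dns
      repadmin /showreps
      nslookup <name of the main domain controller>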

    Read the article

  • Problem hosting server behind personal router

    - by Venkatesh Hodavdekar
    I recently bought the domain name lucidcontraptions.com and want to host the website from home. I have a D-Link router in which I have set up my personal virtual server correctly. My application server is Apache 2.2. The server works perfectly with the following settings: External IP: 207.172.xx.xx. Public port: 8888 Internal IP: 192.168.xx.xx. Private port: 80 If I go to 207.172.xx.xx:8888/ the server works perfectly and my Apache page shows up without any issues, both from inside the intranet as well as outside. This setting would not work out for me as I am not allowed port numbers in my DNS management. Now when I tweak the settings to the following: External IP: 207.172.xx.xx. Public port: 80 Internal IP: 192.168.xx.xx. Private port: 80 If I go to 207.172.xx.xx/ the server works perfectly and my Apache page shows up without any issues, BUT ONLY FROM INSIDE THE INTRANET. This page does not show up for people outside the intranet.
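
    One way to separate an ISP block on inbound port 80 from a router/NAT quirk is to run the test from somewhere genuinely outside the network (the domain below is the one mentioned above; the 8888 test only applies if that forward is still configured):

      curl -I http://lucidcontraptions.com/
      curl -I http://lucidcontraptions.com:8888/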

    Read the article

  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indices). There are several schemata: one for the (read-only, irregular changes/updates) 'core model' and one for every user (about 20 people). The users can access the core and store data in their own schema, so everything is located in one database. The server runs CentOS and PostgreSQL 8.4, is used for scientific studies, exploration etc., and is running quite well. These days an upgrade of the DB storage hard disks arrives - all with the same performance as the old ones. I'm looking for the best way to distribute the data on these disks. It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure if this is really worth the effort. It seems to be a much better idea to move the WAL files (the pg_xlog directory) to their own partition. http://www.postgresql.org/docs/8.4/static/wal-internals.html What are your opinions? Are there any tablespace- or partitioning-related performance documentations / benchmarks?
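
    A sketch of the pg_xlog move (paths assume a stock CentOS layout with the data directory at /var/lib/pgsql/data and a new disk mounted at /disk2; stop the cluster first and have a backup):

      service postgresql stop
      mv /var/lib/pgsql/data/pg_xlog /disk2/pg_xlog
      ln -s /disk2/pg_xlog /var/lib/pgsql/data/pg_xlog
      service postgresql start

    Separating the core model from the user schemata would instead be a tablespace, e.g. (names and paths are placeholders):

      # the target directory must already exist and be owned by the postgres user
      psql -U postgres -c "CREATE TABLESPACE userspace LOCATION '/disk3/pg_userspace'"
      psql -U postgres -d mydb -c "ALTER TABLE someschema.big_table SET TABLESPACE userspace"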

    Read the article

  • Format & Fresh Install Mac OS X Snow Leopard on Mac mini.

    - by sagar
    Hello Every one. I have purchased a DVD of Snow Leopard (Mac OS X 10.6.2) I purchased a Mac mini with Leopard (Mac OS X 10.5.7) I tried to install Mac OS X 10.6.2 Everything went perfectly. System was installed successfully. But the problem that I faced is as follows. System was installed but my older data remained as it is. (means installation didn't format every thing - means installation was done on upgrade basis.) Now, my system works with very low speed. Previous performance of Mac mini was double as compare to current upgrade version. Now - my question are as follows. Does an upgrade installation causes the performance issues in Mac OS X? Or is Snow Leopard too demanding for the Mac mini? ( 2 Ghz Intel Core 2 Duo, 1GB RAM - is this configuration OK for Snow Leopard? ) Does a fresh install work better than an upgrade?

    Read the article

  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications ranging from Joomla, Drupal, and some legacy (read: PHP4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers, and issues like fluctuating loads or particularly high load expectations are not important. Now, my question: are there any concerns I should know about when using Apache w/ MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64MB)? I have not tested the various applications extensively and I have only recently learned how to install PHP and utilize it via FastCGI (rather than mod_php, which in this case seemed impossible, considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern? I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox) which has 2GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):

    Apache:
      Server version: Apache/2.2.14 (Ubuntu)
      Server built: Apr 13 2010 20:22:19
      Server's Module Magic Number: 20051115:23
      Server loaded: APR 1.3.8, APR-Util 1.3.9
      Compiled using: APR 1.3.8, APR-Util 1.3.9
      Architecture: 64-bit
      Server MPM: Worker
        threaded: yes (fixed thread count)
        forked: yes (variable process count)

    Worker:
      <IfModule mpm_worker_module>
        StartServers 2
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadLimit 64
        ThreadsPerChild 25
        MaxClients 400
        MaxRequestsPerChild 2000
      </IfModule>

    PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):
      --enable-bcmath \
      --enable-calendar \
      --enable-exif \
      --enable-ftp \
      --enable-mbstring \
      --enable-pcntl \
      --enable-soap \
      --enable-sockets \
      --enable-sqlite-utf8 \
      --enable-wddx \
      --enable-zip \
      --enable-fastcgi \
      --with-zlib \
      --with-gettext \

    Apache php-fastcgi-setup.conf:
      FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
      FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
      FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9
      ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/

    Read the article

  • Creating basic, redundant gigE or IB storage network for Xen?

    - by StaringSkyward
    With only a modest budget, I want to move my 4 Xen servers over to network storage, either NFS or iSCSI, which will be determined based on how well it performs when we test it (we need good throughput and it must continue to work through link and switch failure tests). We may add another couple of Xen servers at some point when this is done. I don't know much about the design and operation of storage networks, so I would really appreciate some hints from those with experience. The budget is around $3,800 excluding the storage appliance. I am currently thinking these are my options to remain on budget: 1) Go for used InfiniBand hardware and aim for 10Gb performance. 2) Stick with gigabit ethernet and buy some new switches (Cisco or ProCurve) to create a storage-only ethernet LAN. Upgrade to 10GbE later, but try to use hardware capable of it where possible to reduce upgrade costs. I have seen used, warrantied InfiniBand switches at reasonable prices (presumably because big companies are converging on 10Gb ethernet?) and the promise of cheap 10Gb is attractive. I know nothing about IB, so here come the questions: Can I buy 2 x switches and have multiple HBAs in my Xen and storage nodes to get redundancy and increased performance without complexity or expensive management software costs? If so, can you point me to some examples? Do NFS and iSCSI work just the same regardless? Is IB a sensible choice, or could/should I use ethernet or FC on the same budget? I'm keen not to get boxed into a corner for future upgrades, however. For the storage I am likely to build a storage server using NexentaStor, with the intention that I can later add more disks, SSDs and another server to provide a failover option at the storage level. An HP LeftHand starter SAN is also under consideration. Thanks in advance.

    Read the article

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS/Apps&Data/Bulk data), all of which should be mirrored. Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or if Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID1 are rather different than RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1 (for example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in-flight, as long as there's no data corruption). Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring -- not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.
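
    As far as I know Windows 7 setup itself won't create the mirror during installation; the usual sequence is to install onto the first disk, then add the mirror afterwards from an elevated prompt with diskpart. A sketch (the disk and volume identifiers are assumptions; check them with list disk / list volume first, and repeat the select volume / add pair for each of the three volumes):

      diskpart
      DISKPART> select disk 0
      DISKPART> convert dynamic
      DISKPART> select disk 1
      DISKPART> convert dynamic
      DISKPART> select volume C
      DISKPART> add disk=1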

    Read the article

  • Hard drive benchmark shows very slow write speed

    - by John
    I recently started to have issues with my laptop being very slow. I ran a hard drive benchmarking tool (by ATTO) that showed that the write speed was very very slow on my boot drive. I ran the same benchmark on my usb drive and it was 650 times faster than my boot drive when it came to writing. Reading is very fast/normal on both. I swapped out an identical drive and ran the same benchmark. This time the drive showed proper write speed. Thinking that I had a hard drive going bad I cloned the old one onto the new one. I managed to clone the problem too. Anyone have any ideas on what in WinXP SP3 might be causing the write issues? I am on a corporate network and we have commercial anti-virus software installed. (AVG I think) I regularly run defraggler and have about 40 gig free on a 100 gig drive. The machine has 4 gigs of memory. Any ideas? TIA J

    Read the article

  • Apache 2.4, mod_proxy_fcgi not honouring .htaccess, workaround needed

    - by user229874
    I am using Apache 2.4.7 with mod_proxy_fcgi for the purpose of passing PHP through to php-fpm (this will be used for a shared hosting environment). The .htaccess works fine for non-PHP files, but once it hits the rewrite rule that proxies the PHP requests, the .htaccess is ignored. I know why it is happening. The question is: how do I work around it? How do I force Apache to treat the request for a PHP file as a request for a local file, and then proxy it through? I have spent substantial time researching this problem, and the following "answers" were given as solutions: 1) "use apache configuration instead of .htaccess" - a valid solution, but not for a shared hosting environment (I am not going to give shared hosting customers access to the Apache configuration ;)). 2) "don't use .htaccess, as it has performance/security/other issues" - well, how else would shared hosting customers control access/URL rewriting on their site? Besides, if .htaccess were not a requirement I would simply use nginx. 3) "put the rewrite rule for the proxy inside of " - this is incorrect, and it does not work. This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
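
    One direction that keeps .htaccess in play is to drop the RewriteRule entirely and hand PHP to the backend with a per-directory handler; as far as I know this needs a slightly newer 2.4 than 2.4.7 (SetHandler with a proxy: target arrived in 2.4.10), and the FPM address is an assumption:

      <FilesMatch "\.php$">
          SetHandler "proxy:fcgi://127.0.0.1:9000"
      </FilesMatch>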

    Read the article

  • Value of Itanium over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, currently on SUSE (potentially migrating to RedHat). Our database is approximately 100GB and performs adequately on our current hardware (x86_64) with approximately 6GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are: Are there substantial benefits to looking at Itanium hardware, and are there any drawbacks? Is there a point where Itanium starts to scale out better? What are the long-term support options for Itanium? Given the dominance of x86, would it be safer long term to stick with x86? On average, what would be the performance benefit of implementing an Oracle database on Itanium over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first? If anyone can point me towards some solid documentation comparing the two platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Read the article

  • How to maintain authentication details/passwords in a 50-person company

    - by sabya
    What process do you follow to maintain authentication details like login IDs and passwords? There will definitely be some shared passwords, so the target is to minimize the impact when someone leaves the company. By "shared password" I mean an account that is shared among multiple people in the company. The issues that the process should address are: - Affected areas. Quickly find the resources to which the leaving user had access. - Forgotten passwords. What happens if a user forgets authentication details? How does he get them? I think he shouldn't ask a teammate - I mean, no verbal communication. - Dependencies of a resource. Suppose I am changing the password for a mail account which is used by some automated scripts to send mails. Here, the scripts depend on the mail account, so changing the password of the mail account means we have to change the password in the script too. So, how do I find all the dependencies of a resource? I'd prefer a process which addresses these issues, but you can also recommend products which are open source and not hosted. I have gone through PassPack, but they don't solve #4. There is a similar question here, but that does not exactly answer my question.

    Read the article

  • Cannot click send button in Outlook (+ Exchange) for unknown addresses

    - by Graphain
    Hi, I have a very unusual problem. I have Outlook 2010 connected to Exchange 2010. This can send emails perfectly to known addresses (that is, addresses in the address book or ones that have been sent to previously). However, if I put in an address that is unknown, I cannot actually click the Send button in Outlook. (it simply does nothing). Corresponding to this I get errors in the Event Log for each Send click stating "The connection to Microsoft Exchange is unavailable. Outlook must be online or connected to complete this action.". However, Outlook shows as connected the whole time, pings do not break, and I have no reason to suspect it has lost connection. To further complicate matters, Outlook is fine on all other PCs, and this was all perfect until I installed BitDefender on the PC in question and the Exchange Server. Outlook was still fine on these other PCs while BitDefender was installed, but I have removed it from the PC in question and the Server just in case (no success). Summary: Outlook encounters Exchange connectivity issues when sending to unknown (new) email addresses that prevent the Send button actually working at all. This is isolated to one machine and occurred after installation of AV/Firewall software which has since been thoroughly removed. If you have any potential solutions I'd love to hear them, as I will be resorting to reformatting the PC in question, and probably removing Exchange because I'm sick of its issues if I cannot resolve this soon. Big thanks for any help.

    Read the article
