Search Results

Search found 5747 results on 230 pages for 'backup'.


  • External Hard Drive Won't Mount - MAC OSX

    - by dtj
    I have a Western Digital hard drive that's about 4 or 5 years old. It's 500 GB, USB. I use it to back up my Mac every so often. I had it partitioned: one side for full backups, and the other side for general storage of music, installers, etc. I decided to get rid of the partitioning today and dump all the data, so I opened Disk Utility and hit 'Erase'. It started thinking and then Disk Utility crashed. After the crash, the hard drive won't mount; Disk Utility still sees the drive, but not the individual volume within it. I tried booting up DiskWarrior and had no luck there either. It lists the drive as an "unknown drive". When I hit Rebuild, it goes through all its steps and then stops because of this error: The drive "unknown" is severely damaged and DiskWarrior is unable to determine its case sensitivity. What can I do at this point? There isn't any physical damage to the drive; it's never been dropped or anything.
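
    Since the goal was to wipe the drive anyway, one hedged option is to force a fresh partition map from Terminal rather than through the crashed Disk Utility GUI. The disk2 identifier below is only an assumption (confirm with diskutil list first), and the second command erases everything on that disk:

        diskutil list                                  # find the drive's identifier; disk2 below is an assumption
        diskutil eraseDisk JHFS+ Backup GPT disk2      # writes a fresh GPT partition map and one empty volume named "Backup"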

    Read the article

  • How to stop Excel 2003 from loading a Gazillion files

    - by Gary M. Mugford
    One of my soon-to-be-ex-friends got an Excel file from another friend of his and decided to click on it. It started opening all kinds of files from within Excel. Over 200 and still counting when he called me. I told him to go to task manager, which showed a LOT of files in the applications tab, but only one Excel.exe in the processes tab. Closing it down there closed down Excel. I then CrossLooped in to see if I could give him a helping hand. Each time Excel was re-opened, the mass influx of files started. They were all kinds of files: PDFs, DOCs, JPGs, even some spreadsheets. It looked like the end of solitaire, with multiple windows opening (XP) and the counter on the lone Excel button on the task bar counting off the files. I did the task manager exit routine and went looking for temp files. I CrapCleaned out the system. Made sure I went through the files created in the last hour and deleted anything with a temp anywhere in it. I also deleted the crappy infected/corrupted file from its place on the desktop (yeah, I know, I yelled for 15 minutes on THAT subject). Despite a delousing, the restart of Excel, which complained of a deactivated add-in, would start the cascading windows, whether I answered yes or no to that question. Yes, it knew it had a serious crash, but why would it just keep on trying to load the bad file, even when I got rid of it? But here's the real question. WHERE was it loading from? I went through the backup folder and NOTHING was there! So what's the process for starting Excel WITHOUT it trying to do a crash recovery? Sort of makes me feel stupid at times. Thanks for any light you can shed on this issue. GM

    Read the article

  • How to register an agent with launchd

    - by Konrad Rudolph
    I’m unable to schedule a periodic launch with launchctl/launchd on OS X (Leopard). Basically, I’m unable to find a step-by-step list of instructions on the web and the intuitive approach doesn’t work. The sync.plist file:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>net.madrat.utils.sync</string>
            <key>Program</key>
            <string>rsync</string>
            <key>ProgramArguments</key>
            <array>
                <string>-ar</string>
                <string>/path/to/folder/</string>
                <string>/path/to/backup/</string>
            </array>
            <key>StartInterval</key>
            <integer>7200</integer>
        </dict>
        </plist>

    I’ve put this script inside the path ~/Library/LaunchAgents. Next, I’ve registered the script using

        launchctl load ~/Library/LaunchAgents/sync.plist

    Finally, to test that it works, I started the job:

        launchctl start net.madrat.utils.sync

    Nothing happened. Manually executing the rsync command in the terminal yields the expected result. I’m fairly sure that the job was registered correctly because if I try to start a non-existing job, I get an error message (which I didn’t get in the above command). What did I do wrong?
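
    A hedged guess at a usual culprit rather than a confirmed diagnosis: per launchd.plist(5), when Program is present it is the executable and ProgramArguments becomes the full argv, so the job above runs rsync with argv[0] set to "-ar" and no real options, which quietly copies nothing. One sketch of a fix is to drop the Program key and make the absolute path (e.g. /usr/bin/rsync) the first element of ProgramArguments, then reload and check the job's exit status:

        launchctl unload ~/Library/LaunchAgents/sync.plist
        launchctl load ~/Library/LaunchAgents/sync.plist
        launchctl list | grep net.madrat.utils.sync    # a non-zero value in the status column hints at argv/path trouble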

    Read the article

  • Does any Certificate Authority support both SAN and wildcards?

    - by nicholas a. evans
    My basic quandary is that wildcard certificates don't support subdomains of subdomains, nor do they help with alternate domain names. Basically, if my CN is example.com, I want a Subject Alternative Name field that looks roughly like so: DNS:example.com, DNS:*.example.com, DNS:*.beta.example.com, DNS:example.net, DNS:*.example.net, DNS:*.beta.example.net. Using a self-signed cert, I verified that the browsers will work just fine with this. Unfortunately, none of the Certificate Authorities that I looked into (Thawte, GoDaddy, Verisign, Digicert) seemed to support both wildcard certs and Subject Alternative Name (sometimes referred to as "Multiple Domain UCC"). I even called up GoDaddy tech support to confirm. Is there a CA (trusted by 99% of browsers) that supports wildcards for the Subject Alternative Name? One little restriction: I'm saddled with Amazon EC2's single Elastic IP per instance limitation. Here are what I see as my backup plans:
    1. Set up three extra EC2 instances, each configured for a different IP address and cert, and nginx reverse proxy from them into the app server(s). Downsides: introduces latency(?), and even the cheapest EC2 instance isn't that cheap.
    2. Instead of dedicated reverse proxy instances, set up the four or more almost identical EC2 app servers, with nginx using the port to determine which cert to deliver, and use haproxy to distribute the traffic amongst themselves. Downsides: complicated to configure and manage? I'm not using the cheapest EC2 instance type for my app servers, so if I don't need 4+ app servers for the load, it raises the cost.
    3. Set up an external server (outside of EC2) that doesn't have EC2's Elastic IP address restrictions, set up all of the alternate IP addresses and certificates on that server, and nginx reverse proxy from that server into the EC2 app servers. Downsides: extra IP addresses are almost free (still need to pay for the server, of course), but they don't come with the robust "elasticity" that Amazon's Elastic IPs provide, and there's even more latency than in the first scenario.
    Are these approaches crazy or reasonable? Do you have another one to suggest?
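
    The CA-policy question can only be answered by the CAs themselves, but for reference, a rough sketch of the CSR side of such a request is below. The file names, key size and the process-substitution trick are assumptions, and the CA still has to be willing to sign wildcard SAN entries:

        openssl genrsa -out example.key 2048
        openssl req -new -sha256 -key example.key -subj "/CN=example.com" \
          -reqexts SAN \
          -config <(cat /etc/ssl/openssl.cnf \
            <(printf "\n[SAN]\nsubjectAltName=DNS:example.com,DNS:*.example.com,DNS:*.beta.example.com,DNS:example.net,DNS:*.example.net,DNS:*.beta.example.net")) \
          -out example.csr
        openssl req -in example.csr -noout -text | grep -A1 "Subject Alternative Name"   # sanity-check the SAN list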

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works." When selecting None (zero KB) for Specify the amount of disk space that website websites that you haven't yet visited can use to store information on your computer, and checking Never ask again then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choose Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs. The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue. So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys, removes the trail from the settings panels. However, the actual Local Shared Ojects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all others browsers as well). I don't know if that also deletes the trail of website domain names. Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash to write to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac: cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer" rm -r sys/* chmod u-w sys cd "$HOME/Library/Preferences/Macromedia/Flash Player" # preserve the randomly named subfolders (only preserving the latest would suffice; see below) rm -r \#SharedObjects/*/* chmod -R u-w \#SharedObjects I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X): When blocking the sys folder (even when leaving the #SharedObjects folder writable) then YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. 
This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again. When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites. When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name. For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling Allow third-party Flash content to store data on your computer might have the very same result. Any experience anyone? (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
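
    For the "clean it periodically" route on a Mac, a minimal sketch of just the removal step is below (the paths are the ones quoted above; whether to run it from a personal LaunchAgent or cron entry, and whether an active browser tolerates it, are exactly the open questions in this post):

        FLASH="$HOME/Library/Preferences/Macromedia/Flash Player"
        rm -rf "$FLASH/macromedia.com/support/flashplayer/sys/"*        # the domain-name trail shown in the settings panel
        rm -rf "$FLASH/#SharedObjects/"*/*                              # the Local Shared Objects themselves
        # the randomly named subfolder under #SharedObjects is deliberately left in place, per the test results above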

    Read the article

  • How to diagnose disk errors when disk appears to be ok?

    - by Kylotan
    I have a six-month-old 1TB Seagate drive formatted into two NTFS partitions, and the disk appeared to be failing with Windows dropping down from UDMA to PIO mode, reporting Delayed Write Errors, and hanging Explorer when browsing directories. My initial suspicion was that the disk was dying. However, on further examination it appears that Ubuntu, which doesn't write to the volume frequently like Windows does, was able to read the disk properly and retrieve all the data intact, saving me from having to use an older backup. Finally, running the SeaTools DOS diagnostic reported that the disk has no problems, i.e. no SMART errors and no bad sectors, apparently. This, in combination with the relative youth of the disk, suggests that something else is broken. The cable? The PSU? The integrated disk controller? But what would be a good way to diagnose the problem without risking damaging the data? I intend to extract the disk and try it in an external eSATA enclosure and see if the write errors cease, but in the event of the disk appearing to be fine, I would like to be able to confirm which part of the hardware is actually broken here in order to know just what needs replacing. Are there any good ways to go about this?
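
    A hedged, read-only starting point from the Ubuntu side, which exercises the whole path (cable, controller, drive) without writing to the data; the /dev/sdb device name is an assumption:

        sudo smartctl -t long /dev/sdb               # start the drive's own extended self-test, check back later with -a
        sudo smartctl -l error -l selftest /dev/sdb  # SMART error and self-test logs; rising UDMA CRC error counts tend to point at the cable
        sudo badblocks -sv /dev/sdb                  # read-only surface scan of the whole disk
        dmesg | grep -iE "ata|error"                 # controller or link resets show up here rather than in SMART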

    Read the article

  • How should I fix problems with file permissions while restoring from Time Machine?

    - by Andrew Grimm
    While restoring files from a Time Machine backup, I got the error message "The operation can’t be completed because you don’t have permission to access some of the items." because of problems with files in one folder. What's the safest way to deal with this? The folder in question has permissions like:

        Andrew-Grimms-MacBook-Pro:kmer agrimm$ pwd
        /Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer
        Andrew-Grimms-MacBook-Pro:kmer agrimm$ ls -ltra
        total 6156896
        drwxrwxrwx@ 19 agrimm staff 680 18 Jan 2008 Saccharomyces_cerevisiae
        -r--------@ 1 agrimm staff 60221852 4 Aug 2009 hs_ref_GRCh37_chrY.fa
        -r--------@ 1 agrimm staff 157488804 4 Aug 2009 hs_ref_GRCh37_chrX.fa
        (snip a few files)
        -r--------@ 1 agrimm staff 676063 27 Oct 2009 NC_001143.fna
        -rw-r--r--@ 1 agrimm staff 6148 23 Mar 2010 .DS_Store
        drwxr-xr-x@ 3 agrimm staff 1530 23 Mar 2010 .
        drwxr-xr-x@ 30 agrimm staff 1054 20 Nov 14:43 ..

    Is it ok to do sudo chmod, or is there a safer approach? Background: files within the original folder on my computer also had weird permissions - I suspect I may have used sudo to copy some files from a thumbdrive onto my computer.
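
    One hedged, non-destructive approach is to leave the permissions inside Backups.backupdb alone and instead copy as root, then take ownership of the restored copy. The destination path and the agrimm:staff owner below are taken from the listing above and are only examples:

        BK="/Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer"
        sudo cp -Rp "$BK" ~/ruby/kmer-restored       # copy with root privileges, preserving what it can
        sudo chown -R agrimm:staff ~/ruby/kmer-restored
        chmod -R u+rw ~/ruby/kmer-restored           # make the restored copy readable/writable by the owner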

    Read the article

  • SAS vs Near-line SAS vs SATA

    - by David
    I'm unsure about the differences in these storage interfaces. My Dell servers all have SAS RAID controllers in them and they seem to be cross-compatible to an extent. The Ultra-320 SCSI RAID controllers in my old servers were simple enough: One type of interface (SCA) with special drives with special controllers, humming at 10-15K RPM. But these SAS/SATA drives seem like the drives I have in my desktop, only more expensive. Also my old SCSI controllers have their own battery backup and DDR buffer - neither of these things are present on the SAS controllers. What's up with that? "Enterprise" SATA drives are compatible with my SAS RAID controller, but I'd like to know what advantage SAS drives have over SATA drives as they seem to have similar specs (but one is a lot cheaper). Also, how do SSDs fit into this? I remember when RAID controllers required HDDs to spin at the same rate (as if the controller card supplanted the controller in the drive) - so how does that work out now? And what's the deal with Near-line SATA? I apologise about the rambling tone in this message, it's 5am and I haven't slept much.

    Read the article

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed and put onto an existing production system a new application, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, there's a mirror of the SAN being taken, usually constantly, at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which exists on a drive that contributes the largest portion of the problem! In fact, I think tempdb has been contributing the largest portion of the issues all along, regardless of my application! My question therefore is whether the tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already? I'm wondering whether it's a best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question - is it better to rely on SQL Server's built-in database backup tools (DB in full-recovery mode with full/differential and transaction log backups) or, as is the case with our application, SQL Server is in simple recovery mode and never backed up, since the SAN is mirrored and backed up? Many thanks
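
    Moving tempdb onto a volume that is excluded from the block-level mirror is one commonly suggested way out. A rough sketch only, assuming a non-replicated volume mounted as T: and the default logical file names tempdev/templog; the change takes effect after the next SQL Server service restart:

        sqlcmd -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');"
        sqlcmd -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');"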

    Read the article

  • Web Deploy 3.0 Installation Fails

    - by jkarpilo
    I am having difficulty installing Microsoft Web Deploy 3.0 to a Windows Server 2008 R2 box. I have tried installing with both the Web Platform Installer and the MSI package but installation fails while trying to execute the MSI custom action ExecuteRegisterUIModuleCA. This server is a VM and a member of a farm but shared config is disabled while I'm installing. Here's the point at which it fails in the MSI log (starting at line 1875): MSI (s) (80:FC) [15:29:01:358]: Executing op: ActionStart(Name=IISBeginTransactionCA,,) MSI (s) (80:FC) [15:29:01:374]: Executing op: CustomActionSchedule(Action=IISBeginTransactionCA,ActionType=3073,Source=BinaryData,Target=IISBeginTransactionCA,) MSI (s) (80:A8) [15:29:01:374]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI6C6A.tmp, Entrypoint: IISBeginTransactionCA MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISRollbackTransactionCA,,) MSI (s) (80:FC) [15:29:01:436]: Executing op: CustomActionSchedule(Action=IISRollbackTransactionCA,ActionType=3329,Source=BinaryData,Target=IISRollbackTransactionCA,) MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISCommitTransactionCA,,) MSI (s) (80:FC) [15:29:01:436]: Executing op: CustomActionSchedule(Action=IISCommitTransactionCA,ActionType=3585,Source=BinaryData,Target=IISCommitTransactionCA,) MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISExecuteCA,,) MSI (s) (80:FC) [15:29:01:452]: Executing op: CustomActionSchedule(Action=IISExecuteCA,ActionType=3073,Source=BinaryData,Target=IISExecuteCA,CustomActionData=1^3^21^WebDeployment_Current^154^Microsoft.Web.Deployment.UI.PackagingModuleProvider, Microsoft.Web.Deployment.UI.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35^1^1^0^^1^3^28^DelegationManagement_Current^171^Microsoft.Web.Management.Delegation.DelegationModuleProvider, Microsoft.Web.Management.Delegation.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35^1^1^0^^1^7^38^system.webServer/management/delegation^4^Deny^16^MachineToWebRoot^0^^3^yes^1^7^31^system.webServer/wdeploy/backup^4^Deny^20^MachineToApplication^0^^2^no^) MSI (s) (80:84) [15:29:01:452]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI6CB9.tmp, Entrypoint: IISExecuteCA 1: IISCA IISExecuteCA : Begin CA Setup 1: IISCA IISExecuteCA : CA 'ExecuteRegisterUIModuleCA' completed with return code hr=0x8007000d 1: IISCA IISExecuteCA : CA 'IISExecuteCA' completed with return code hr=0x8007000d 1: IISCA IISExecuteCA : End CA Setup CustomAction IISExecuteCA returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) Action ended 15:29:05: InstallFinalize. Return value 3. I can't seem to find any information regarding this particular issue; can someone help point me in the right direction?

    Read the article

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB. => ctrl all show config detail Smart Array P410i in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: 5001438006FD9A50 Cache Serial Number: PAAVP9VYFB8Y RAID 6 (ADG) Status: Disabled Controller Status: OK Chassis Slot: Hardware Revision: Rev C Firmware Version: 3.66 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 3 secs Surface Scan Mode: Idle Queue Depth: Automatic Monitor and Performance Delay: 60 min Elevator Sort: Enabled Degraded Performance Optimization: Disabled Inconsistency Repair Policy: Disabled Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 15 secs Cache Board Present: True Cache Status: OK Accelerator Ratio: 25% Read / 75% Write Drive Write Cache: Enabled Total Cache Size: 512 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True Array: A Interface Type: SAS Unused Space: 412476 MB Status: OK Logical Drive: 1 Size: 72.0 GB Fault Tolerance: RAID 1+0 Heads: 255 Sectors Per Track: 32 Cylinders: 18504 Strip Size: 256 KB Status: OK Array Accelerator: Enabled Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3 Disk Name: /dev/cciss/c0d0 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB OS Status: LOCKED Logical Drive Label: AE438D6A5001438006FD9A50BE0A Mirror Group 0: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK) Mirror Group 1: physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK) SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 Device Number: 250 Firmware Version: RevC WWID: 5001438006FD9A5F Vendor ID: PMCSIERA Model: SRC 8x6G

    Read the article

  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.
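
    One observation on the data already in the post, before any nginx/php/mysql tuning: the /proc/mdstat excerpt shows both RAID1 arrays running degraded ([2/1] with [_U] and [U_]), so a single disk is carrying all the IO, and the surviving member may itself be struggling during the freezes. A hedged set of first checks, with device names taken from the post:

        cat /proc/mdstat                # [2/2] [UU] is healthy; [2/1] means a member is missing from the mirror
        sudo mdadm --detail /dev/md1    # which member dropped out, and the array state
        sudo mdadm --detail /dev/md3
        sudo smartctl -a /dev/sda       # health of the disk still in service
        dmesg | grep -iE "ata|md|error" # kernel-level IO errors that top/htop/iotop won't show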

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes, and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs (such as zip), by contrast, take the files at face value and see no permission problems, but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
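
    The "could not copy extended attributes ... Operation not permitted" wording suggests ACLs or extended attributes rather than plain Unix permission bits, which chmod/chown alone won't touch. A hedged sketch for inspecting and working around them on OS X (paths are the ones from the error above; it won't bring back files that are genuinely zero bytes):

        ls -le@ .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca   # -e shows ACLs, -@ shows extended attributes
        sudo chmod -RN .git                                              # strip ACL entries recursively
        sudo cp -X -Rp .git /eraseme/blah/.git                           # -X copies without extended attributes/resource forks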

    Read the article

  • Xen hypervisor 4.1 Kernel Panic on Ubuntu 12.04

    - by rkmax
    I have a fresh Ubuntu 12.04.1 amd64 server install, following this guide. I used the LVM option across the whole disk and made two LVs: /dev/mapper/vg-root mounted at / (80GB) and vg-swap as swap (4GB). I then installed Xen with apt-get install xen-hypervisor-4.1-amd64, configured /etc/default/grub as in the guide, and added GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=768M". After all this I ran update-grub and rebooted. But when I try to boot with Xen 4.1-amd64, I always get a kernel panic with the message "Domain-0 allocation is too small for kernel image". My questions are: what is this error about, and where can I grow this allocation to avoid it? The grub.cfg menuentry:

        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod part_gpt
            insmod ext2
            set root='(hd0,gpt2)'
            search --no-floppy --fs-uuid --set=root 3541e241-7f39-4ebe-8d99-c5306294c266
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /xen-4.1-amd64.gz placeholder dom0_mem=768M
            echo 'Loading Linux 3.2.0-29-generic ...'
            module /vmlinuz-3.2.0-29-generic placeholder root=/dev/mapper/backup--xen-root ro rootdelay=180
            echo 'Loading initial ramdisk ...'
            module /initrd.img-3.2.0-29-generic
        }

    Note: I've followed this guide too.
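
    A hedged guess only: that panic is Xen reporting that the memory reserved for dom0 is too small to hold the decompressed kernel plus initramfs, so the usual first experiment is simply to give dom0 more memory and regenerate GRUB. The 2048M value below is an example, not a known-good figure:

        sudo sed -i 's/dom0_mem=768M/dom0_mem=2048M,max:2048M/' /etc/default/grub   # or edit GRUB_CMDLINE_XEN_DEFAULT by hand
        sudo update-grub
        sudo reboot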

    Read the article

  • Accidentally overwrote system.dbf - What now?

    - by Filip Ekberg
    I accidentally overwrote system.dbf at /usr/lib/oracle/xe/oradata/XE/system.dbf. Well, I did not actually do it accidentally; I overwrote it because of other failures in the database. When I try running the following:

        SQL> shutdown
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup
        ORACLE instance started.
        Total System Global Area 289406976 bytes
        Fixed Size 1258488 bytes
        Variable Size 92277768 bytes
        Database Buffers 192937984 bytes
        Redo Buffers 2932736 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    Now I want to try to recover the database, because starting it mounted or normally surely doesn't work.

        SQL> recover database using backup controlfile;
        ORA-00283: recovery session canceled due to errors
        ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf'
        ORA-01122: database file 1 failed verification check
        ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf'
        ORA-01206: file is not part of this database - wrong database id

    How do I solve this? Is it even possible? My "real" problem was that I ran /etc/init.d/oracle-xe configure and it overwrote my old configuration and probably removed passwords and such, so my tables were gone; however, I found the mytablespace.dbf, so I hope that it is possible to recover? Please shed some light on this.

    Read the article

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputation, I won't mention the names. But I'll just use: Business I worked for previously - ABC Web Dev Hosting company they used - XYZ Hosting I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their client's data - including ABC Web Dev's. ABC Web Dev was able to recover some of their customer's websites, after pulling them from their local development computers and putting them up on another hosting provider. They ended up losing a lot of clients because of it and their reputation ruined. I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace but, although they are a great company, according to wikipedia they still have had downtime in their past. I thought it might be a good idea to try to run two providers at once, to ensure that if anything happened in one the websites would still be live because of the other. I know the websites would have to be pulling from one server at all times, but if there's a way to redirect requests to the second server if the first one is down that would solve my issue. As a note, we will have a staging environment setup locally which will allow for quick recovery if a provider did have any issues, however I'd like to avoid any downtime at all if possible. So my questions are: Has anyone tried running two providers simultaneously? Would this be considered good practice or am I going too far? Is there really any way to run two simultaneously where one server acts as a backup?

    Read the article

  • D-LINK 2450U DSL router: Port forwarding forwards to the modem itself, not the specified IP

    - by axk
    I found a similar question but it has no satisfactory answers. I have a D-LINK 2540U DSL router. It has a basic port forwarding (under DNS - Virtual Servers) configuration in the administration panel where you specify: external port range, protocol, internal port range, server IP address, and it is supposed to forward that port to that IP address. When I first set it up for a RealVNC connection it worked fine, just as I expected. Then I added a DynDNS configuration entry in the router's 'Dynamic DNS' section and added an additional SSH (22) forwarding rule. The SSH forwarding also worked fine (now with the dynamic hostname, but I suppose it doesn't make any difference as far as SSH is concerned). Then I removed the SSH rule, and after that the VNC forwarding stopped working, with the VNC client failing to connect (I have tried to connect with telnet and it also failed to connect, so it wasn't a VNC problem). After adding a rule for port 80 it turned out it would forward on port 80, though not to the specified server IP but to the modem itself. At least that is what it looks like, because it gives me the administration panel when I connect to my external IP (both using a browser and plain telnet, in which case I can see that it is mini_httpd sitting on the port, which is obviously the modem's administration panel). Has anybody encountered a similar problem with port forwarding? I have tried to do a reset through the administration panel and to restore a backup of the settings made before I started playing with port forwarding, but it didn't help. Should I do a 'hard' reset with the button on the modem? Is it any different from the administration panel's reset (Restore default)?

    Read the article

  • Network monitoring tools with API features

    - by Kev
    We use ks-soft's Advanced Host Monitor package to monitor around 2000 items on our network. We think it's great, the chap that supports it is fantastic, and the product is fast, stable and mature, but I feel that as we grow as a company it's beginning to show some friction points in the area of integration with our back office admin systems. One of the things we'd like to do is be able to add new tests to whatever monitoring tool we use via an API. For example, when orders for servers come from our retail interface, the server gets built automatically, and as part of the automated build process we'd like to automatically add new tests to the network monitoring systems. HostMonitor has some support for this via a feature called HM Script, but we're starting to encounter some speedbumps: we can't add new operators/users, and we can't define new "Action Profiles" (the actions to be taken when a test goes good or bad). What we love about HostMonitor, though, are the Action Profiles. For example, if a Windows IIS box goes bad, our action profile for a bad test does something like:
    1. Check host again (one time)
    2. Wait another 30 seconds, then test again
    3. Try restarting the app pool on the remote machine (up to two times)
    4. Send an email to ops about the restart failure
    5. Try restarting IIS on the remote machine (up to four times)
    6. Page the duty admin (up to 5 times - stops after the duty admin ACKs the alert)
    7. Page the backup duty admin (5 times - stops after the duty admin ACKs the alert)
    I'm starting to look around at other network monitoring tools, and I'm looking for: a comprehensive API to add/remove/control tests, test "action profiles" and operators (not just plugins - we need control and admin interfaces), and the ability to have quite detailed action/escalation profiles (and define these via an API). I've looked at Nagios and Icinga but I can't seem to glean from their documentation whether we could have these features or not, or if we could, how much work would be involved to implement/customise them. Can anyone provide any advice, guidance or experiences?

    Read the article

  • Error while detecting start profile for instance with ID _u

    - by Techboy
    I am upgrading a SAP Java instance and am getting the error shown in the log below. There are not backup profile files in the profiles directory. The string _u does not exist in the default, start or instance profile for instance SCS01. Please can you suggest what the issue might be? Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.initializexAbstractPhaseType.javax758x [Thread[main,5,main]]: Phase PREPARE/INIT/DETERMINE_PROFILES has been started. Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.initializexAbstractPhaseType.javax759x [Thread[main,5,main]]: Phase type is com.sap.sdt.j2ee.phases.PhaseTypeDetermineProfiles. Mar 13, 2010 12:09:01 PM [Info]: ...ap.sdt.j2ee.phases.PhaseTypeDetermineProfiles.checkVariablesxPhaseTypeDetermineProfiles.javax284x [Thread[main,5,main]]: All 4 required variables exist in variable handler. Mar 13, 2010 12:09:01 PM [Info]: ....j2ee.tools.sysinfo.AbstractInfoController.updateProfileVariablexAbstractInfoController.javax302x [Thread[main,5,main]]: Parameter /J2EE/StandardSystem/DefaultProfilePath has been detected. Parameter value is \server1\SAPMNT\PI7\SYS\profile\DEFAULT.PFL. Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.j2ee.tools.sysinfo.ProfileDetector.detectInstancexProfileDetector.javax340x [Thread[main,5,main]]: Instance SCS01 with profile \server1\SAPMNT\PI7\SYS\profile\PI7_SCS01_server1, start profile \server1\SAPMNT\PI7\SYS\profile\START_SCS01_server1, and host server1 has been detected. Mar 13, 2010 12:09:01 PM [Error]: com.sap.sdt.ucp.phases.AbstractPhaseType.doExecutexAbstractPhaseType.javax862x [Thread[main,5,main]]: Exception has occurred during the execution of the phase. Mar 13, 2010 12:09:01 PM [Error]: com.sap.sdt.j2ee.tools.sysinfo.ProfileDetector.detectInstancexProfileDetector.javax297x [Thread[main,5,main]]: Error while detecting start profile for instance with ID _u. Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax905x [Thread[main,5,main]]: Phase PREPARE/INIT/DETERMINE_PROFILES has been completed. Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax906x [Thread[main,5,main]]: Start time: 2010/03/13 12:09:01. Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax908x [Thread[main,5,main]]: End time: 2010/03/13 12:09:01. Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax909x [Thread[main,5,main]]: Duration: 0:00:00.078. Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax910x [Thread[main,5,main]]: Phase status is error.

    Read the article

  • ESXi 4.0 Guests Locking up

    - by Brendan Sherwin
    I installed ESXi 4.0 on an HP ProLiant G5 with a 64-bit Xeon processor and took advantage of the free license, as I work for a public school. I created two instances of Server 2003 from scratch, one to be the DC, DHCP, the other to be a file server and DNS/DHCP backup. I had both guests up and running fine, set up my user accounts, transferred the data, etc etc. Once I joined a client machine to the domain, I would find that both of my Windows guests would lock up. Sometimes it would be for five or so minutes, once it was overnight. The "locked up" state means that, as far as I could tell, all services were stopped; DHCP no longer handed out IPs, DNS stopped working, I couldn't RDP into the server. The ESXi host, my HP server, was still running fine. vSphere was working, and I could look at the performance of the individual guests. I would try powering off the guests from inside vSphere, and they would start powering off, but get stuck at 95%, and stay that way, sometimes only for 10 minutes, others for hours. Several times I had to restart ESXi from its console in order to restart my machines. Now, can anyone tell me what is happening, and how I can fix it, or take steps to prevent it? I hired a consultant to come take a look at it, someone whose experience and knowledge I trust, and he told me he had never seen anything like this ever before. He spoke to a friend of his who is VM certified, and he also said he had never heard of this issue. Thanks for your replies, and I'll do my best to respond ASAP. Currently, the server is powered off, and I've reinstituted my nine-year-old Server 2000 boxes, and I'm considering installing ESXi 3.5. Does anyone know whether a guest created in 4.0 will work in 3.5? I'd really like to avoid having to rebuild those accounts! I know 4.0 works on this server, as I have another server in another school with the same exact hardware running 4.0 fine. Brendan

    Read the article

  • How do I easily repair a single unreadable block on a Linux disk?

    - by Nelson
    My Linux system has started throwing SMART errors in the syslog. I tracked it down and believe the problem is a single block on the disk. How do I go about easily getting the disk to reallocate that one block? I'd like to know what file got destroyed in the process. (I'm aware that if one block fails on a disk others are likely to follow; I have a good ongoing backup and just want to try to keep this disk working.) Searching the web leads to the Bad block HOWTO, which describes a manual process on an unmounted disk. It seems complicated and error-prone. Is there a tool to automate this process in Linux? My only other option is the manufacturer's diagnostic tool, but I presume that'll clobber the bad block without any reporting on what got destroyed. Worst case, it might be filesystem metadata. The disk in question is the primary system partition. Using ext3fs and LVM. Here's the error log from syslog and the relevant bit from smartctl. smartd[5226]: Device: /dev/hda, 1 Currently unreadable (pending) sectors Error 1 occurred at disk power-on lifetime: 17449 hours (727 days + 1 hours) ... Error: UNC at LBA = 0x00d39eee = 13868782 There's a full smartctl dump on pastebin.
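
    A hedged sketch of the usual "force a reallocation" sequence with hdparm, assuming the SMART log's LBA 13868782 really is the offending sector. Mapping that LBA back to a filename goes through the partition and LVM offsets and debugfs as in the Bad block HOWTO, and the write step destroys whatever was stored in that sector:

        sudo hdparm --read-sector 13868782 /dev/hda      # should fail or stall if this really is the pending sector
        # ... identify the affected file first: translate the LBA through the partition start and LVM extent
        #     offset, then use debugfs icheck/ncheck on the ext3 volume ...
        sudo hdparm --write-sector 13868782 --yes-i-know-what-i-am-doing /dev/hda   # zeroes the sector, prompting the drive to reallocate it
        sudo smartctl -a /dev/hda | grep -i -e reallocated -e pending               # the pending count should drop afterwards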

    Read the article

  • Can't boot flash drive on GIGABYTE motherboard

    - by Deltik
    Situation When I try to boot from my flash drive, my GIGABYTE 970A-UD3 motherboard returns this: Loading Operating System ... Boot error All other motherboards I've tried support booting from that flash drive (and a backup flash drive). The operating systems I tried on both flash drives were created with usb-creator-gtk (Ubuntu USB Startup Disk Creator). I know that the motherboard understands that there is an operating system on the flash drives because when I erase them, it complains in an ALL CAPS RAGE that there isn't an operating system, which is correct. How can I boot a flash drive that's bootable from other motherboards on this motherboard? Qualification This question is not a duplicate of this one because directly writing to the flash drive as an ISO 9660 (dd if=operating_system.iso of=/dev/sdb) still does not have the motherboard recognize the operating system. This question should be a duplicate of this one because I provide more information not provided by that poster. This forum thread has broken links and does not have a solution to my problem. Nobody knows what's going on in this forum thread.

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the GigeNET cloud); the database is ~10 GB (~9 million posts, ~60 queries per second), and lately MySQL has been grinding the disk like there's no tomorrow (according to iotop) and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated. Specs: Debian Lenny 64bit, ~12Ghz (6x2GHz) CPU, 7520gb RAM, 160gb disk. Kernel: 2.6.32-4-amd64. mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)). Other software: vBulletin 3.8.4, memcached 1.2.2, PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27), lighttpd/1.4.28 (ssl) - a light and fast webserver. PHP and vBulletin are configured to use memcached. MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96

    Other: from the cloud's IO chart, we're averaging 100mb/s read.

        > vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
        r b swpd free buff cache si so bi bo in cs us sy id wa
        9 0 73140 36336 8968 1859160 0 0 42 15 3 2 6 1 89 5

        > /etc/init.d/mysql status
        Threads: 49 Questions: 252139 Slow queries: 164 Opens: 53573 Flush tables: 1 Open tables: 337 Queries per second avg: 61.302.

    (moved from Super User)
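
    A few hedged diagnostics to narrow down where the IO is going before reaching for replication; nothing here changes the server, and the log path is the one already enabled in the config above:

        mysqldumpslow -s t /var/log/mysql/mysql-slow.log | head -20   # worst offenders by total time, from the existing slow log
        mysql -e "SHOW GLOBAL STATUS LIKE 'Key_read%';"               # Key_reads close to Key_read_requests suggests key_buffer is too small for ~10 GB of MyISAM
        mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';" # on-disk temp tables are a classic source of disk grinding
        iotop -oPa                                                    # accumulate per-process IO to confirm mysqld really is the top writer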

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command: [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx It returns this error: xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256) snap-eeb66393 xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256) This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find things referring to this situation exactly. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!
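
    A hedged reading of the error: xfs_freeze freezes a mounted filesystem and expects the mount point, not the device node, so passing /dev/xvda1 fails before the ext4 question even comes up. Assuming /dev/xvda1 is mounted at /, a quick manual test and the corresponding flag change would look like this (the freeze blocks all writes until -u, so keep it brief):

        sudo xfs_freeze -f /
        sudo xfs_freeze -u /
        # then pass the mount point to the snapshot tool instead of the device:
        #   ec2-consistent-snapshot ... --freeze-filesystem / vol-xxxxxx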

    Read the article
