Search Results

Search found 15172 results on 607 pages for 'array intersect'.

Page 324/607

  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts. I also have about 90-100GB of deduplicated data on the array.

    I've had two incidents where performance tanked suddenly, leaving the VM guests and the Nexenta SSH/web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me on the cache failure; however, the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

        pool: vol1
       state: ONLINE
        scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
      config:

              NAME                        STATE     READ WRITE CKSUM
              vol1                        ONLINE       0     0     0
                mirror-0                  ONLINE       0     0     0
                  c8t5000C50031B94409d0   ONLINE       0     0     0
                  c9t5000C50031BBFE25d0   ONLINE       0     0     0
                mirror-1                  ONLINE       0     0     0
                  c10t5000C50031D158FDd0  ONLINE       0     0     0
                  c11t5000C5002C823045d0  ONLINE       0     0     0
                mirror-2                  ONLINE       0     0     0
                  c12t5000C50031D91AD1d0  ONLINE       0     0     0
                  c2t5000C50031D911B9d0   ONLINE       0     0     0
                mirror-3                  ONLINE       0     0     0
                  c13t5000C50031BC293Dd0  ONLINE       0     0     0
                  c14t5000C50031BD208Dd0  ONLINE       0     0     0
                mirror-4                  ONLINE       0     0     0
                  c15t5000C50031BBF6F5d0  ONLINE       0     0     0
                  c16t5000C50031D8CFADd0  ONLINE       0     0     0
                mirror-5                  ONLINE       0     0     0
                  c17t5000C50031BC0E01d0  ONLINE       0     0     0
                  c18t5000C5002C7CCE41d0  ONLINE       0     0     0
              logs
                c19t0d0                   ONLINE       0     0     0
              cache
                c6t5001517959467B45d0     FAULTED      2   542     0  too many errors
              spares
                c7t5000C50031CB43D9d0     AVAIL

      errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendations to RAID L2ARC. Removing the bad SSD entirely from the server got me back up and running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well).

    Edit - What's the current best-choice SSD for L2ARC cache applications these days?
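
    Worth noting as a sketch (using the faulted device name from the status output above, and not verified on NexentaStor specifically): because cache vdevs hold no pool data, a failed L2ARC device can normally be detached from a live pool and a replacement added later, without touching the mirrors.

        # Drop the faulted cache device from the pool
        zpool remove vol1 c6t5001517959467B45d0
        zpool status vol1                      # the cache section should now be empty
        # After installing a replacement SSD (hypothetical device name):
        zpool add vol1 cache c6tNEWSSDd0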

    Read the article

  • Use an environment variable in a launchd script

    - by sirlancelot
    I'm curious if it's possible to specify an environment variable in the ProgramArguments portion of a launchd script on Mac OS X Leopard.

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
        <key>Label</key>
        <string>me.mpietz.MountDevRoot</string>
        <key>ProgramArguments</key>
        <array>
          <string>/bin/sh</string>
          <string>$HOME/bin/attach-devroot.sh</string>
          <!-- Instead of using...
          <string>/Users/mpietz/bin/attach-devroot.sh</string -->
        </array>
        <key>RunAtLoad</key>
        <true/>
      </dict>
      </plist>
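
    launchd does not perform shell expansion on ProgramArguments, so the literal string "$HOME/bin/attach-devroot.sh" is handed to /bin/sh as a script path. A minimal sketch of the usual workaround (assuming the same label and script location as above) is to let a shell do the expansion via sh -c:

        <key>ProgramArguments</key>
        <array>
          <string>/bin/sh</string>
          <string>-c</string>
          <!-- the shell, not launchd, expands $HOME at run time -->
          <string>"$HOME"/bin/attach-devroot.sh</string>
        </array>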

    Read the article

  • Increasing SQL Server / Sage performance with SSD? (Dell PE T410)

    - by Anthony
    I have a client who wants better performance from their Sage (Accpac & CRM) server (v5.5, soon to be v7). It's running on 1 of 2 Hyper-V VMs (Svr2008) on a Dell PE T410 server with 24GB of RAM (1333MHz) and dual quad-core CPUs, and both VMs (only their C: drives) are on a single RAID5 array. All clients connect via 1Gb ethernet. The 2nd VM is SBS2008 with 9GB RAM (all SBS dbs & company data are on a separate RAID5 array), leaving 3GB RAM for the Svr2008 hypervisor.

    I've given the Sage/SQL Server VM all the RAM I can (12GB) and set SQL Server's RAM cache to ~8GB (it never exceeds ~7.5GB, e.g. the entire db can now be cached in RAM), and that has helped significantly. Upgrading the hypervisor to Svr2012 is an obvious step, but probably not a dramatic improvement?

    What about an SSD for this Sage/SQL Server VM (VM = 100GB, <10GB for the actual live DB)? Can SSDs be put into the SAS hot-swap bays? Or will I have to use the mobo SATA (3Gbps?) ports, or a PCI-E SSD card? Should SSDs be RAIDed for this situation? Or does the higher reliability of SSDs offset the need for RAID 1/5/10? (I have nightly full disk backups.)

    New territory for me, would appreciate some feedback. Thanks, Anthony.

    Read the article

  • I flashed my DS4700 with a 7 series firmware, now my DS4300 cannot read the disks I moved to that lo

    - by Daniel Hoeving
    In preparation for adding a number of 1Tb SATA disks to our DS4700, I flashed the controller firmware from a 6 series (which only supports up to 2Tb logical drives) to a 7 series (which supports larger than 2Tb logical drives). Attached to this DS4700 was an EXP710 expansion drawer that we had planned to migrate out to our co-location to alleviate the storage issues we were having there. Unfortunately these two projects were planned in isolation from one another, so I was at the time unaware of the issue that this would cause.

    Prior to migrating the drawer I was reading the "IBM TotalStorage DS4000 EXP700 and EXP710 Storage Expansion Enclosures Installation, User's, and Maintenance Guide" and discovered this:

      Controller firmware 6.xx or earlier has a different metadata (DACstore) data structure than controller firmware 7.xx.xx.xx. Metadata consists of the array and logical drive configuration data. These two metadata data structures are not interchangeable. When powered up and in Optimal state, the storage subsystem with controller firmware level 7.xx.xx.xx can convert the metadata from the drives configured in storage subsystems with controller firmware level 6.xx or earlier to controller firmware level 7.xx.xx.xx metadata data structure. However, the storage subsystem with controller firmware level 6.xx or earlier cannot read the metadata from the drives configured in storage subsystems with controller firmware level 7.xx.xx.xx or later.

    I had assumed that if I deleted the logical drives and array information on the EXP710 prior to migrating it to the DS4300 (6.60.22 firmware) this would satisfy the above; unfortunately I was wrong. So my question is: a) is it possible to restore the DACstore information to its factory settings, b) what tool(s) would I use to accomplish this, or c) is this a lost cause?

    Daniel.

    Read the article

  • OS X 10.6 Apply ipfw rules at startup

    - by Michael Irey
    I have a couple of firewall rules I would like to apply at startup. I have followed the instructions from http://images.apple.com/support/security/guides/docs/SnowLeopard_Security_Config_v10.6.pdf on page 192. However, the rules do not get applied at startup. I am running 10.6.8, NON Server Edition.

    I can, however, run the following, which applies the rules correctly:

      sudo ipfw /etc/ipfw.conf

    which results in:

      00100 fwd 127.0.0.1,8080 tcp from any to any dst-port 80 in
      00200 fwd 127.0.0.1,8443 tcp from any to any dst-port 443 in
      65535 allow ip from any to any

    Here is my /etc/ipfw.conf:

      # To get real 80 and 443 while loading vagrant vbox
      add fwd localhost,8080 tcp from any to any 80 in
      add fwd localhost,8443 tcp from any to any 443 in

    Here is my /Library/LaunchDaemons/ipfw.plist:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
        <key>Label</key>
        <string>ipfw</string>
        <key>Program</key>
        <string>/sbin/ipfw</string>
        <key>ProgramArguments</key>
        <array>
          <string>/sbin/ipfw</string>
          <string>/etc/ipfw.conf</string>
        </array>
        <key>RunAtLoad</key>
        <true />
      </dict>
      </plist>

    The permissions of all the files seem to be appropriate:

      -rw-rw-r--  1 root  wheel  151 Oct 11 14:11 /etc/ipfw.conf
      -rw-rw-r--  1 root  wheel  438 Oct 11 14:09 /Library/LaunchDaemons/ipfw.plist

    Any thoughts or ideas on what could be wrong would be very helpful!
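
    A minimal debugging sketch for this kind of problem (paths taken from the question; nothing here is a verified fix): confirm that launchd loads the job at all, and what ipfw's exit status was, before assuming the rules were applied and then lost.

        # Load the job by hand and see whether launchd accepts it
        sudo launchctl load -w /Library/LaunchDaemons/ipfw.plist
        sudo launchctl list | grep ipfw     # a non-zero last-exit code means ipfw ran and failed

        # What actually made it into the kernel
        sudo ipfw list

        # Any launchd/ipfw messages from the last boot
        grep -i ipfw /var/log/system.log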

    Read the article

  • apache-user & root access

    - by ahmedshaikhm
    I want to develop a few scripts in PHP that will invoke the following commands using the exec() function:

      service network restart
      crontab -u root /xyz/abc/fjs/crontab

    etc. The issue is that Apache executes the script as the apache user (I am on CentOS 5); regardless of adding apache into wheel, or the good, the bad and the ugly of group assignment, it does not run the commands mentioned above. Following are my configurations.

    My /etc/sudoers:

      root    ALL=(ALL)       ALL
      apache  ALL=(ALL)       NOPASSWD: ALL
      %wheel  ALL=(ALL)       ALL
      %wheel  ALL=(ALL)       NOPASSWD: ALL

    As I've tried a couple of combinations of sudoers & httpd.conf, the recent httpd.conf looks something like this:

      User apache
      Group wheel

    My PHP script:

      exec("service network start", $a);
      print_r($a);
      exec("sudo -u root service network start", $a);
      print_r($a);

    Output:

      Array
      (
          [0] => Bringing up loopback interface:  [FAILED]
          [1] => Bringing up interface eth0:  [FAILED]
          [2] => Bringing up interface eth0_1:  [FAILED]
          [3] => Bringing up interface eth1:  [FAILED]
      )
      Array
      (
          [0] => Bringing up loopback interface:  [FAILED]
          [1] => Bringing up interface eth0:  [FAILED]
          [2] => Bringing up interface eth0_1:  [FAILED]
          [3] => Bringing up interface eth1:  [FAILED]
      )

    Without any surprise, when I invoke the network restart via SSH, using a user similar to apache, the command executes successfully. It's all about accessing such commands via the HTTP protocol. I am sure cPanel/Plesk-style software uses something like sudoers, and what I am trying to do is basically possible. But I need your help to understand which piece I am missing. Thanks a lot!
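
    One detail that commonly bites this exact setup on CentOS 5 is the default "Defaults requiretty" line in /etc/sudoers: sudo then refuses to run for processes without a controlling terminal, which includes Apache. A hedged sketch (the command whitelist is illustrative, not a tested policy, and is a much safer shape than NOPASSWD: ALL for a web-facing user):

        # /etc/sudoers (edit with visudo) -- sketch only
        Defaults:apache !requiretty
        apache ALL=(root) NOPASSWD: /sbin/service network restart, /usr/bin/crontab -u root /xyz/abc/fjs/crontab

    The PHP side would then call sudo explicitly, e.g. exec("/usr/bin/sudo /sbin/service network restart 2>&1", $out, $rc); and inspect $rc and $out for the real error.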

    Read the article

  • Kerberos service on win2k dc will not start following disk failure

    - by iwilson68
    Hi, I have a win2k (mixed mode) domain with 4 DCs. One of these also acts as an Exchange 2000 server, which uses 2 logical volumes from an MSA 2000 array. AD etc. is stored on local drives.

    We experienced a problem last week when the RAID array fell back to a redundant controller, and this temporarily meant that the two logical drives were not visible to the server for around 5 minutes and a couple of reboots. The log records these events as:

      Event Type:     Warning
      Event Source:   Disk
      Event Category: None
      Event ID:       51
      Date:           06/11/2009
      Time:           11:46:23
      User:           N/A
      Computer:       server1
      Description:    An error was detected on device \Device\Harddisk1\DR1 during a paging operation.

    Following these problems, the server's "Kerberos Key Distribution Center" service refuses to start with "error 31: a device attached to the system is not functioning". All other automatic-start services (including Net Logon) are running and there are no DNS issues etc. All devices are also functioning, but the two logical MSA disks are now numbered in the Windows Disk Management MMC as 2 and 4, and I suspect that they may have previously been identified as disks 1 & 2 and perhaps Windows still sees this as an ongoing failure?? Replication has not been affected, but obviously there are many audit failures in the security log relating to users and workstations, presumably linked to the Kerberos issue.

    Attempting to manually start the Kerberos service generates the following in the System log:

      Event Type:     Error
      Event Source:   Service Control Manager
      Event Category: None
      Event ID:       7023
      Date:           09/11/2009
      Time:           09:46:55
      User:           N/A
      Computer:       Server1
      Description:    The Kerberos Key Distribution Center service terminated with the following error:
                      A device attached to the system is not functioning.

    DCDIAG passes all tests except "Advertising" and "Services", which I believe relate directly to the failure of Kerberos only. Any advice would be appreciated.

    Read the article

  • restoring a failed SBS 2000 box...

    - by Brad Pears
    Hi there, I read a post where you had mentioned you have had a PERC card blow up on you in an SBS box... I've got a similar situation where one of my RAID drives failed and then the power supply failed before I could replace the drive... I then replaced the power supply and the failed drive and reconfigured the RAID array.

    I had a recent full backup of my Win2k SBS's C: drive stored on my Symantec Backup Exec server, so I installed Win2k Server on the C: partition and then, once I had that up and running, installed the Backup Exec agent so as to do a restore of the entire C: drive including system state. This all worked just fine, until I had to reboot. I received an "incorrect drive configuration" error and then it hangs. I figure that likely makes sense because I think my RAID array is configured slightly differently now, in that the partitions may be sized ever so slightly differently than they were before.

    Is there a way I can just restore from my backup BUT maybe exclude some of the registry and hidden boot files it wants to restore, so that it is booting with the configuration now active on that machine - not the pre-blow-up configuration files? I also read a post that indicated you might have to install the exact same service pack etc. before attempting a restore, but that does not make sense to me, being as the entire C: drive contents are going to be overwritten by the restore anyway? The basic OS install is just to be able to get the Backup Exec agent installed. I can't understand why one would need to install the exact same SP level.

    Can you shed some light on what I might be able to do to get this thing up and running? Thanks, Brad

    Read the article

  • Can't install SB750 RAID drivers in Windows 7 for two additional storage drives

    - by jf46
    Mobo: ASUS M4A79XTD, 790X/SB750 OS: Windows 7 x64 I currently have SATA 1-4 set to RAID and SATA 5-6 set to IDE. I have an SSD connected to SATA5 with Windows 7 installed on it, and that works fine. I also have configured a RAID 1 array of two 1TB HDDs, connected to SATA 1 and 2. These don't show up in Windows, and I'm having trouble getting the RAID driver installed. I even tried booting from the Windows DVD and repairing or installing Windows, but when I navigated to the relevant .sys files on my motherboard's driver CD, Windows setup told me that the files in question weren't relevant to my hardware. To be clear: I'm not trying to install Windows on a RAID. I have Windows installed on a separate disk on a separate SATA controller. I just want to get the SB750 RAID drivers installed so that the Windows disk utility can see my RAID 1 array, which is composed of two other disks. Do I need to wipe my SSD and reinstall Windows to get the RAID driver installed? That seems kind of ridiculous, and given what I described above, I'm not even sure it would work. Any help or guidance would be appreciated - thanks! Edit: Also, I've copied the driver files from the mobo CD into system32 and rebooted, no luck. Then I changed HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorV\Start from 3 to 0, and that didn't work either.

    Read the article

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4GB of memory; all recent updates are installed.

    Problem: when I copy a large file from one folder to another, I get about 10MB/s average speed. When I read this file from a network share via a 1Gbps connection, I get about 25-30MB/s. Both numbers seem low to me, but specifically I'm very frustrated with the low write speed.

    There is no antivirus and no Hyper-V; it's just a file server - and when I do my tests nobody else reads from or writes to it (we have only 4 people in a team, so I'm sure). Not sure if it matters, but there is only 1 logical disk "C" with all available space (1400 GB). I'm not an admin at all, so I have no idea where to look and what other information to provide.

    I did run Performance Monitor with "% idle time", "avg bytes read", "avg bytes write" - here is the screenshot. I'm not sure why there are such obvious spikes. Any idea? Please let me know if you need me to provide more information - what counters should I check, etc. I'm very eager to get this solved. Thank you.

    UPDATE: we have another Windows 2008 R2 SP1 server with 2 RAID1 arrays - one is disk C (where Windows is installed), the other is disk E. It is running Hyper-V and does not have antivirus. I noticed the following behavior when I copy a large file (a few GBs):

      C -> C: about 50MB/sec
      C -> E: about 55MB/sec
      E -> E: 8MB/sec!!!
      E -> C: 8MB/sec!!!

    What could cause this?? The E drive is a RAID1 array of the same Seagate Barracuda 1TB drives.

    Read the article

  • Logstash agent doesn't run as a daemon on MAC OS X 10.9.1

    - by user329324
    I need to run the logstash agent as a daemon on a Mac OS X system; whenever the system boots up it should run:

      /usr/local/logstash/bin/logstash agent -f /usr/local/etc/cvlog.conf

    From a terminal the program works successfully, but as a daemon it doesn't start. My com.bcd.logstash.plist:

      <plist version="1.0">
      <dict>
        <key>Label</key>
        <string>com.bcd.logstash</string>
        <key>KeepAlive</key>
        <dict>
          <key>SuccessfulExit</key>
          </false>
        </dict>
        <key>ProgramArguments</key>
        <array>
          <string>/usr/local/logstash/bin/logstash</string>
          <string>agent</string>
          <string>-f</string>
          <string>/usr/local/etc/cvlog.conf</string>
        </array>
        <key>RunAtLoad</key>
        </true>
      </dict>
      </plist>

    I start it with:

      launchctl load /Library/LaunchDaemons/com.bcd.logstash.plist

    Syslog error messages:

      com.apple.launchd[1] (com.bcd.logstash[pid]): Exited with code: 1
      com.apple.launchd[1] (com.bcd.logstash[pid]): Exited with code: 143

    What's wrong with my plist?
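
    One thing that stands out in the plist exactly as quoted: </false> and </true> are closing tags with no opening tags. Plist booleans are the self-closing empty elements <false/> and <true/>, so the two affected keys would look like the fragment below (a sketch of just those keys; the 1/143 exit codes can also come from the logstash process itself, e.g. a PATH or JAVA_HOME that exists in your terminal session but not in launchd's environment - that part is an assumption).

        <key>KeepAlive</key>
        <dict>
          <key>SuccessfulExit</key>
          <false/>   <!-- self-closing empty element, not a closing tag -->
        </dict>
        <key>RunAtLoad</key>
        <true/>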

    Read the article

  • Windows 7 - User profile corrupted on standby/hibernate

    - by Dogbert
    I have a friend who uses Windows 7 for her home PC. She has a RAID1 array that is using up-to-date Intel Matrix Storage drivers, and the entire array is backed up to a separate internal SATA HDD via Acronis True Image every night. Over the weekends, she lets her machine go into suspend after 4 hours of inactivity, and then later into hibernation after 6 hours of inactivity. Her Acronis backup system does nightly incremental backups and full backups every Saturday night. She also has AVG Free Antivirus installed, which does full scans every Monday.

    So far, on two occasions, on Sunday morning, her user profile is corrupt. I couldn't find any solution that allowed me to repair her profile, so I end up having to recreate her profile (as suggested by the MS Knowledge Base, http://windows.microsoft.com/en-us/windows7/fix-a-corrupted-user-profile - yep, no solution for fixing it, just clobber the whole thing), then copy over data, recreate her Outlook profile, reconfigure all third-party applications, etc. It's a real nightmare, and takes 4 hours each time.

    Are there any suggestions on how to resolve this profile corruption? This was happening even before the RAID/Acronis solution was in place, but I thought I'd provide as much information as possible.

    Read the article

  • Add static route through DHCP

    - by MathieuK
    I'm trying to get an OS X Lion Server to provide a static route to its clients (all OS X Lion) over DHCP. I can't get the client to actually apply the static route.

    So far, I've managed to get the DHCP server (bootpd) to serve DHCP option 33 (static_route) in its DHCP offers by editing /etc/bootpd.plist and adding something like:

      <key>dhcp_option_33</key>
      <data>[some base64 goes here]</data>

    ... and restarting the DHCP service.

    On the client, I've managed to get the client to request the option by adding option 33 to the DHCPRequestedParameterList key:

      <key>DHCPRequestedParameterList</key>
      <array>
        ... keys snipped for brevity ...
        <integer>33</integer>
      </array>

    ... and rebooting the client. This makes the client request the static_route option from the DHCP server (I can see the proper output in ipconfig getpacket en0), but it doesn't actually apply the rule. Has anyone ever succeeded in applying static_route options on OS X clients through DHCP?
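
    For reference, the payload of DHCP option 33 (the classful static-route option from RFC 2132) is a sequence of 8-byte pairs - 4 bytes of destination network followed by 4 bytes of router address - and bootpd's <data> element wants that raw byte string base64-encoded. A sketch of generating it in a shell (the route 192.168.2.0 via 192.168.1.1 is a made-up example):

        # destination 192.168.2.0 followed by router 192.168.1.1, as raw bytes, base64-encoded
        printf '\xc0\xa8\x02\x00\xc0\xa8\x01\x01' | base64
        # -> wKgCAMCoAQE=   (this is what goes inside <data>...</data> for dhcp_option_33)

    It may also be worth testing option 121 (the classless static-route option from RFC 3442), since many modern DHCP clients prefer it over option 33; whether the Lion client honours either is exactly what the question is asking, so that part is untested here.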

    Read the article

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn) (php-fpm) on CentOS 5.8 x64. The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs.

    In the php-fpm log:

      WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start

    In the nginx log:

      ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream

    From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source for example is one of the scripts that works perfectly on my other machines, but causes PHP to crash here. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it.

    +++ UPDATE +++

    Now I have been doing some debugging in one of the scripts that is playing up. If you go to line 808 (http://pastebin.com/gSnzRtXb) it runs the curl_exec() command. When that is run, it crashes. If I put echo 'test'; exit; just above that line, it echoes correctly; if I do it below that line, PHP crashes. Which means it's that line 808 which causes the crash.

    So I made a very simple script to do some testing (http://pastebin.com/Rshnyhcm), which also uses curl_exec, but that runs just fine. So I started to dig deeper into that query from the Facebook script to see what values the $opts array contains from line 806. Output of that array is: http://pastebin.com/Cq9ffd3R

    What the problem is, I still have no clue :(

    Read the article

  • Dell R320 RAID 10 with CacheCade

    - by Geekman
    I'm looking for a higher-performance build for our 1RU Dell R320 servers, in terms of IOPS. Right now I'm fairly settled on: 4 x 600 GB 3.5" 15K RPM SAS RAID 1+0 array This should give good performance, but if possible, I want to also add an SSD Cache into the mix, but I'm not sure if there's enough room? According to the tech-specs, there's only up to 4 total 3.5" drive bays available. Is there any way to fit at least a single SSD drive along-side the 4x3.5" drives? I was hoping there's a special spot to put the cache SSD drive (though from memory, I doubt there'd be room). Or am I right in thinking that the cache drives are simply drives plugged in "normally" just as any other drive, but are nominated as CacheCade drives in the PERC controller? Are there any options for having the 4x600GB RAID 10 array, and the SSD cache drive, too? Based on the tech-specs (with up to 8x2.5" drives), maybe I need to use 2.5" SAS drives, leaving another 4 bays spare, plenty of room for the SSD cache drive. Has anyone achieved this using 3.5" drives, somehow?

    Read the article

  • How to create a partition when growing RAID 5 with mdadm

    - by hometoast
    I have 4 drives: 2x640GB and 2x1TB. My array is made up of four 640GB partitions, one at the beginning of each drive. I want to replace both 640GB drives with 1TB drives. I understand I need to:

      1) fail a disk
      2) replace it with the new one
      3) partition it
      4) add the disk to the array

    My question is: when I create the new partition on the new 1TB drive, do I create a 1TB "Raid Auto Detect" partition? Or do I create another 640GB partition and grow it later? Or perhaps the same question could be worded: after I replace the drives, how do I grow the 640GB raid partitions to fill the rest of the 1TB drives?

    fdisk info:

      Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xe3d0900f

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1               1       77825   625129281   fd  Linux raid autodetect
      /dev/sdb2           77826      121601   351630720   83  Linux

      Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xc0b23adf

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1               1       77825   625129281   fd  Linux raid autodetect
      /dev/sdc2           77826      121601   351630720   83  Linux

      Disk /dev/sdd: 640.1 GB, 640135028736 bytes
      255 heads, 63 sectors/track, 77825 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x582c8b94

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdd1               1       77825   625129281   fd  Linux raid autodetect

      Disk /dev/sde: 640.1 GB, 640135028736 bytes
      255 heads, 63 sectors/track, 77825 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xbc33313a

         Device Boot      Start         End      Blocks   Id  System
      /dev/sde1               1       77825   625129281   fd  Linux raid autodetect

      Disk /dev/md0: 1920.4 GB, 1920396951552 bytes
      2 heads, 4 sectors/track, 468846912 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Disk identifier: 0x00000000
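
    For what it's worth, here is a sketch of the usual replace-then-grow sequence (device names taken from the fdisk output above; the final filesystem resize assumes ext3/ext4 and is only illustrative - adjust for whatever actually sits on /dev/md0):

        # Replace one 640GB member at a time
        mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
        # partition the new 1TB disk with a single full-size partition of type fd, then:
        mdadm /dev/md0 --add /dev/sdd1
        # wait for the rebuild to finish (watch /proc/mdstat), then repeat for /dev/sde1

        # Once every member partition is 1TB-sized, let the array claim the extra space,
        # then grow the filesystem on top of it
        mdadm --grow /dev/md0 --size=max
        resize2fs /dev/md0

    Note from the fdisk output that /dev/sdb1 and /dev/sdc1 are also only 640GB, so --size=max will only gain space once those members are enlarged as well (and /dev/sdb2 and /dev/sdc2 currently occupy the rest of those disks).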

    Read the article

  • Problems with bash script, mysql inserts and launchd

    - by Armands
    I am developing an automated system which consists of 3 parts: MySQL, bash and launchd. A bash script takes folders of work-related stuff, zips and archives them, and puts info about them into a database that is located on a local MAMP server. Everything works as expected when I run the script from the terminal, but when I use launchd to run it automatically, it finishes without errors and yet does not put the values into the database. I've tried to make logs of the returned messages, but the logs end up being empty, as if the command had run the way it was supposed to. Any help would be appreciated!

    .plist contents:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
        <key>Label</key>
        <string>com.adevo.ari.zip</string>
        <key>ProgramArguments</key>
        <array>
          <string>/Volumes/Archive-Plus/B-ARCHIVE-PLUS/ZZ_UTILITY_FOLDER/Compress.sh</string>
        </array>
        <key>Nice</key>
        <integer>1</integer>
        <key>StartInterval</key>
        <integer>120</integer>
        <key>RunAtLoad</key>
        <true/>
      </dict>
      </plist>

    I made this .plist file just by searching the web.
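
    A common cause of "works in Terminal, silently does nothing under launchd" is the environment: launchd jobs get a minimal PATH, so a script that calls mysql by bare name (or relies on MAMP's binaries) can fail even while every other command in the script succeeds. Two things worth trying, as a sketch (the MAMP binary path and the log file locations are assumptions, not taken from the question):

        <key>EnvironmentVariables</key>
        <dict>
          <key>PATH</key>
          <string>/usr/bin:/bin:/usr/sbin:/sbin:/Applications/MAMP/Library/bin</string>
        </dict>
        <key>StandardOutPath</key>
        <string>/tmp/com.adevo.ari.zip.out</string>
        <key>StandardErrorPath</key>
        <string>/tmp/com.adevo.ari.zip.err</string>

    Capturing stdout/stderr this way usually reveals the actual error (e.g. "mysql: command not found" or a socket path problem) that hand-rolled logging inside the script can miss.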

    Read the article

  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs. Therefore I assumed this means the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!).

    The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example they talk about "bulk jobs" instead of "array jobs". There is no reference to variables I rely on, like SGE_TASK_ID etc. Instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also, qsub behaves differently; apparently it is not possible to specify the command line parameters with bash-script comments etc.

    There is documentation for the cluster system, but it does not say what the DRM middleware actually is - it refers to the entire DRM system simply as "qsub". I tried:

      qsub --version
      qsub: 1.2 2010/8/17

    I am not sure what I am actually running when I invoke qsub on that cluster! My question is: how can I find out if I am running Grid Engine or Torque (or whatever it is), and which version?
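
    A quick way to tell the two apart is to poke at commands and environment variables that exist in only one of them; a sketch (harmless to run either way, and the PBS_ variables plus the "bulk jobs" wording already hint strongly at Torque/PBS):

        pbsnodes -a              # Torque/PBS only: lists compute nodes
        qmgr -c 'print server'   # Torque/PBS only: prints the server configuration and version
        qconf -sql               # Grid Engine only: lists queues
        echo $SGE_ROOT           # set on Grid Engine installations
        man qsub | head -20      # the man page header names the product (PBS/Torque vs Grid Engine)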

    Read the article

  • Create a partition table on a hardware RAID1 drive with [c]fdisk

    - by Lev Levitsky
    My question is: is there a reason for this not to work?

    Details: I have two 500 GB drives, and my motherboard has RAID support, so I created a RAID1 array and booted from a Linux live medium. I then listed the disks and, apart from the obvious /dev/sda, /dev/sdb, etc., there was /dev/md126 which, I figured, was the mirrored "virtual" drive. Its size was 475 GB; I had seen that the size of the array would be smaller than 500 GB when I was creating it, so no surprise there.

    I did cfdisk /dev/md126, created the necessary partitions and chose write. It's been about half an hour now, I think. It doesn't seem like it's ever going to finish. The only thing about cfdisk in dmesg is that it's "blocked for more than 120 seconds". Doing fdisk -l /dev/md126 in another terminal, I see all three partitions I created and a note that "Partition 1 does not start on a physical sector boundary". The table is lost after a reboot, though. I tried to partition /dev/sda individually, and it worked; the table was written in about a second. The "not on a physical sector boundary" message is there, too.

    EDIT: I tried fdisk on /dev/sda; then there were no messages about sector boundaries. After a reboot, I am able to use mkfs on /dev/md126p1, etc. fdisk shows that /dev/md126 has the same partitions as /dev/sda (but /dev/sdb doesn't have any). But at some point ("writing superblock and filesystem accounting information") mkfs is also blocked. Using it on sda1 results in a "partition is used by the system" error. What can be the problem?

    EDIT 2: I booted a freshly updated system from a pendrive and was able to create a partition table and filesystems on /dev/md126 without any apparent problems. Was it an issue with the support of the hardware? My MB is an Asus P9X79.
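
    Tangentially, the "Partition 1 does not start on a physical sector boundary" warning means the partitions created by cfdisk are not aligned to the drives' 4K physical sectors. Whether or not that explains the hang, aligned partitions are cheap to create; a sketch with parted (the partition sizes are placeholders):

        # Create an aligned GPT label and partitions on the RAID1 device
        parted -a optimal /dev/md126 -- mklabel gpt
        parted -a optimal /dev/md126 -- mkpart primary ext4 1MiB 50%
        parted -a optimal /dev/md126 -- mkpart primary ext4 50% 100%
        parted /dev/md126 -- print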

    Read the article

  • RAID-Z inaccessible after putting one disk offline

    - by varesa
    I have installed FreeNAS on a test server, with 3x 1TB drives. They are set up in raidz. I tried to offline one of the disks (from the FreeNAS web UI), and the array became degraded, as I think it should. The problem is with the array becoming inaccessible after that. I thought a raid like that should be able to run fine with one of the disks missing.

    At least very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also ssh'd into the FreeNAS server, and tried just executing ls /mnt/raid (/mnt/raid/ being the mount point). The whole terminal froze, not accepting ^C or anything.

      # zpool status -v
        pool: raid
       state: DEGRADED
      status: One or more devices are faulted in response to IO failures.
      action: Make sure the affected devices are connected, then run 'zpool clear'.
         see: http://www.sun.com/msg/ZFS-8000-HC
       scrub: none requested
      config:

              NAME                                            STATE     READ WRITE CKSUM
              raid                                            DEGRADED     1    30     0
                raidz1                                        DEGRADED     4    56     0
                  gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    60     0
                  gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    63     0
                  gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE      0     0     0

      errors: Permanent errors have been detected in the following files:

              /mnt/raid/
              raid/iscsivol:<0x0>
              raid/iscsivol:<0x1>

    Have I understood the workings of a raidz wrong, or is there something else going on? It would not be nice to have the same thing happen on a production system...
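
    One detail in the output worth underlining: the pool is not just degraded - the ZFS-8000-HC condition plus the non-zero READ/WRITE counters on the two remaining members mean the pool has suspended I/O because of errors on the disks that are still online, which is exactly what a frozen ls and a vanished iSCSI share look like. A sketch of what is usually tried next (device name taken from the output above):

        # Re-insert / reconnect the offlined disk, then bring it back into the pool
        zpool online raid gptid/ca208205-08e1-11e2-9ba6-001b212a83ea
        # Clear the error counters so the pool resumes I/O, then re-check
        zpool clear raid
        zpool status -v raid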

    Read the article

  • HP DL185 - very slow disk read speed

    - by fistameeny
    Hi, I have an HP DL185 G6 server (12-disk model) with the following spec:

      - Quad Core Xeon 2.27GHz
      - 6GB RAM
      - HP P212 RAID controller with battery backup
      - 2 x 128GB 15K SAS 3.5" (RAID-1 for the operating system)
      - 4 x 750GB 7.5K SAS 3.5" (RAID-5 for the data, 2TB usable space)

    The operating system is Ubuntu Server 9.10. Both drives have been formatted as EXT4. We are finding that the read speed of the RAID-5 array is poor. Disk test results below:

      sudo hdparm -tT /dev/cciss/c0d1p1

      /dev/cciss/c0d1p1:
       Timing cached reads:   15284 MB in  2.00 seconds = 7650.18 MB/sec
       Timing buffered disk reads:   74 MB in  3.02 seconds =  24.53 MB/sec

    For info, the RAID-1 array performs as follows:

      sudo hdparm -tT /dev/cciss/c0d0p1

      /dev/cciss/c0d0p1:
       Timing cached reads:   15652 MB in  2.00 seconds = 7834.26 MB/sec
       Timing buffered disk reads:  492 MB in  3.01 seconds = 163.46 MB/sec

    We thought this was because with no battery, read/write cache is disabled. We have bought and installed the battery backup and have used the HP bootable CD to change the cache settings to 50% read / 50% write, and checked that cache is enabled on the drives and the controller. Is there something I'm missing?
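
    If it is not already installed, HP's hpacucli utility can confirm from inside the OS what the Smart Array cache is actually doing, rather than relying on the offline CD. A sketch (the slot number and logical drive number are assumptions that need adapting to this controller):

        # Show controller, cache and logical drive detail
        hpacucli ctrl all show config detail

        # Typical knobs on a P212 with the battery-backed cache installed (sketch values)
        hpacucli ctrl slot=1 modify cacheratio=25/75
        hpacucli ctrl slot=1 logicaldrive 2 modify arrayaccelerator=enable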

    Read the article

  • How ZFS handles online replacement in a RAID-Z (theoretical)

    - by Kevin
    This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three-disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity.

    Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced out of preventive maintenance. Some admins might just yank C and install D, which is a little more organized as devices need not change IDs - however this does leave the array degraded temporarily, and so for this example suppose we install D without offlining or removing C. Solaris docs indicate that we can replace a disk without first offlining it, using a command such as:

      zpool replace pool C D

    This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor". (I don't know the actual terminology used in the internal implementation.)

    Suppose now that midway through the resilvering, disk A fails. In theory, this should be recoverable, as above the cursor B and D contain sufficient parity and below the cursor B and C contain sufficient parity. However, whether or not this is actually recoverable depends upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't state in certain terms). If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing A and B below the cursor, then we're toast.

    Some experimenting could answer this question, but I was hoping maybe someone on here already knows which way ZFS handles this situation. Thank you in advance for any insight!

    Read the article

  • How to properly remove disk from PERC 6/i RAID controller ?

    - by Stefano Borini
    I have a Dell T710, coming with PERC 6/i RAID controller. The current raid has 2x500 GB hard drives (with the OS), and 6x1000 GB hard drives (in RAID-6, currently empty). I would like to take one 1000 GB disk physically out to keep as an immediate spare in case of a crash, and configure the remaining 5x1000 GB in a single VD RAID-6. This is all nice and clean and works, until I realized that the display on the machine reports the lack of the 8th disk as an error. It's marked as error, but appears to be a warning, since the machine is fully functional. My question is: what is the best way to keep one disk as a spare out of the array? should I disassemble the disk from the cradle and insert the empty cradle in the array ? Or should I just silence the error in the display in some way (how?). I know that what I am doing sounds pretty strange, but here is academia and having a spare disk available could take weeks. Better to have one ready in my drawer for any emergency.

    Read the article

  • Tools for displaying a multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.] There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of in a cube instead.) Is there a way to display this "cube" interactively, ideally on a webpage? Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants. The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't need sophisticated graphics necessarily, just the ability to select from cross-sections of variables. But I have no experience with (say, for displaying on a webpage) what web gadgets exist, so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.) [I can imagine how it ought to work for higher dimensions. For four-dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.] So has this been written? Is this easy to write? Where ought one to look for such a thing?

    Read the article
