Search Results

Search found 8523 results on 341 pages for 'sun storage 7000 scriptin'.

  • How to use jaxb_commons plugins from maven

    - by user243155
    I'm trying to use a JAXB plugin to insert an interface into a choice element, generating the classes from Maven. The problem is that I can't seem to figure out how to do so from Maven: the repository isn't clear from the documentation, and the only example (below) doesn't work. Maven seems to ignore the plugin (it reports no error about not finding it), or the plugin doesn't have all the add-ons currently listed in the project documentation:

        <plugin>
          <groupId>org.jvnet.jaxb2.maven2</groupId>
          <artifactId>maven-jaxb2-plugin</artifactId>
          <version>0.6.1</version>
          <executions>
            <execution>
              <goals>
                <goal>generate</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <generatePackage>br.com.wonder.nfe.xml</generatePackage>
            <args>
              <arg>-Xifins</arg>
            </args>
            <plugins>
              <plugin>
                <groupId>org.jvnet.jaxb2_commons</groupId>
                <artifactId>basic</artifactId>
                <version>0.4.1.5</version>
              </plugin>
            </plugins>
          </configuration>
        </plugin>

    I have these in the root pom:

        <pluginRepositories>
          <pluginRepository>
            <id>maven2-repository.dev.java.net</id>
            <url>http://download.java.net/maven/2</url>
          </pluginRepository>
          <pluginRepository>
            <id>maven-repository.dev.java.net</id>
            <name>Java.net Maven 1 Repository (legacy)</name>
            <url>http://download.java.net/maven/1</url>
            <layout>legacy</layout>
          </pluginRepository>
        </pluginRepositories>

    Running that gives:

        Error while setting CmdLine options '[-Xifins, -episode, /home/administrador/JavaApp/wnfe3/wnfe-ejb/target/generated-sources/xjc/META-INF/sun-jaxb.episode]'!
        Embedded error: unrecognized parameter -Xifins
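
    A quick way to see whether the jaxb2_commons extension actually made it onto XJC's classpath is to run the generation with Maven's debug output and look for the artifact; this is a generic diagnostic, not a fix specific to this plugin:

        # debug output lists each plugin's resolved dependencies and the XJC arguments
        mvn -X generate-sources | grep -i jaxb2_commons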

  • This code appears to achieve the return of a null reference in C++

    - by Chuck
    Hi folks, my C++ knowledge is somewhat piecemeal. I was reworking some code at work. I changed a function to return a reference to a type. Inside, I look up an object based on an identifier passed in, then return a reference to the object if found. Of course I ran into the issue of what to return if I don't find the object, and in looking around the web, many people claim that returning a "null reference" in C++ is impossible. Based on this advice, I tried the trick of returning a success/fail boolean and making the object reference an out parameter. However, I ran into the roadblock of needing to initialize the references I would pass as actual parameters, and of course there is no way to do this. I retreated to the usual approach of just returning a pointer. I asked a colleague about it. He uses the following trick quite often, which is accepted by both a recent version of the Sun compiler and by gcc:

        MyType& someFunc(int id)
        {
            // successful case here:
            // ...
            // fail case:
            return *static_cast<MyType*>(0);
        }

        // Use:
        ...
        MyType& mt = someFunc(myIdNum);
        if (&mt) // test for "null reference"
        {
            // whatever
        }
        ...

    I have been maintaining this code base for a while, but I find that I don't have as much time to look up the small details about the language as I would like. I've been digging through my reference book but the answer to this one eludes me. Now, I had a C++ course a few years ago, and therein we emphasized that in C++ everything is types, so I try to keep that in mind when thinking things through. Deconstructing the expression "*static_cast<MyType*>(0)", it indeed seems to me that we take a literal zero, cast it to a pointer to MyType (which makes it a null pointer), and then apply the dereferencing operator in the context of assigning to a reference type (the return type), which should give me a reference to the same object pointed to by the pointer. This sure looks like returning a null reference to me. Any advice in explaining why this works (or why it shouldn't) would be greatly appreciated. Thanks, Chuck
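
    As a present-day aside (the tooling postdates this post): dereferencing a null pointer, even just to bind a reference, is undefined behavior, which is why the trick only appears to work. A minimal way to see a toolchain call it out, assuming the snippet above is saved as nullref.cpp with a main() that hits the fail path:

        # UBSan reports the null dereference / invalid reference binding at run time
        g++ -fsanitize=undefined -g nullref.cpp -o nullref && ./nullref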

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi. Background: I have a set of Java background workers I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04 LTS server (a Media Temple (ve) instance). They're both running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250MB, while on the server they use more than 2.7GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus, kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine does not seem to ever use memory for buffer or cache (according to htop), which seems a bit strange, but perhaps normal for a server? (edited) Some info:

        (server)
        root@devel:/app/axir/target# uname -a
        Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux

        (local)
        wiktor@beastie:~$ uname -a
        Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux

    (edited) Comparing ps output (ps -eo "ppid,pid,cmd,rss,sz,vsz"):

        PPID    PID  CMD                          RSS      SZ      VSZ
        (local)
        1588   1615  java -cp axir-distribution.  25484  234382   937528
        1615   1631  java -cp /home/wiktor/Code/  83472  163059   652236
        1615   1657  java -cp /home/wiktor/Code/  70624   89135   356540
        1615   1658  java -cp /home/wiktor/Code/  37652   77625   310500
        1615   1669  java -cp /home/wiktor/Code/  38096   77733   310932
        1615   1675  java -cp /home/wiktor/Code/  37420   61395   245580
        1615   1684  java -cp /home/wiktor/Code/  38000   77736   310944
        1615   1703  java -cp /home/wiktor/Code/  39180   78060   312240
        1615   1712  java -cp /home/wiktor/Code/  38488   93882   375528
        1615   1719  java -cp /home/wiktor/Code/  38312   77874   311496
        1615   1726  java -cp /home/wiktor/Code/  38656   77958   311832
        1615   1727  java -cp /home/wiktor/Code/  78016   89429   357716
        (server)
        22522  23560  java -cp axir-distribution.  24860  285196  1140784
        23560  23585  java -cp /app/axir/target/a 100764  161629   646516
        23560  23667  java -cp /app/axir/target/a  72408   92682   370728
        23560  23670  java -cp /app/axir/target/a  39948   97671   390684
        23560  23674  java -cp /app/axir/target/a  40140   81586   326344
        23560  23739  java -cp /app/axir/target/a  39688   81542   326168

    They look very similar. In fact, the question now is: why, if I add up the virtual memory usage on the server (3.2GB), does it more closely reflect the 2.4GB of memory used (according to free), yet locally the virtual memory used adds up to a much more substantial 4.7GB but only ~250MB is actually used? It seems that perhaps memory isn't being shared as aggressively (if that's even possible). Thank you for your help, Wiktor
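
    Two things seem worth checking here. First, the server kernel version (2.6.18-028stab...) looks like an OpenVZ/Virtuozzo container kernel, which Media Temple's (ve) line ran on; containers of that kind account memory differently (roughly against committed/virtual size rather than resident pages), which would explain both the missing buffer/cache and the apparently inflated usage. Second, comparing the actual resident totals directly is a quick sanity check; a minimal sketch:

        # sum resident set size (KiB) of all java processes
        ps -C java -o rss= | awk '{ sum += $1 } END { printf "%.1f MiB\n", sum / 1024 }'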

  • Error in connecting Eclipse to SQL Server

    - by user3721900
    This is the syntax error:

        Jun 10, 2014 5:15:51 PM org.apache.catalina.core.AprLifecycleListener init
        INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files (x86)\Java\jre7\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:/Program Files (x86)/Java/jre7/bin/client;C:/Program Files (x86)/Java/jre7/bin;C:/Program Files (x86)/Java/jre7/lib/i386;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Windows Live\Shared;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microchip\MPLAB C32 Suite\bin;C:\Program Files\Java\jdk1.7.0_25\bin;C:\Program Files (x86)\Java\jdk1.7.0_03\bin;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files (x86)\Google\google_appengine\;C:\Users\Patrick\Desktop\2013-2014 2nd Sem Files\Eclipsee\eclipse;;.
        Jun 10, 2014 5:15:51 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
        WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:B2B' did not find a matching property.
        Jun 10, 2014 5:15:51 PM org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["http-bio-8080"]
        Jun 10, 2014 5:15:51 PM org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["ajp-bio-8009"]
        Jun 10, 2014 5:15:51 PM org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 544 ms
        Jun 10, 2014 5:15:51 PM org.apache.catalina.core.StandardService startInternal
        INFO: Starting service Catalina
        Jun 10, 2014 5:15:51 PM org.apache.catalina.core.StandardEngine startInternal
        INFO: Starting Servlet Engine: Apache Tomcat/7.0.42
        Jun 10, 2014 5:15:52 PM org.apache.coyote.AbstractProtocol start
        INFO: Starting ProtocolHandler ["http-bio-8080"]
        Jun 10, 2014 5:15:52 PM org.apache.coyote.AbstractProtocol start
        INFO: Starting ProtocolHandler ["ajp-bio-8009"]
        Jun 10, 2014 5:15:52 PM org.apache.catalina.startup.Catalina start
        INFO: Server startup in 374 ms
        com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near '`'.
    This is my code:

        package b2b.fishermall;

        public class ConnectionString extends SqlStringCommands {
            public String getDriver() {
                return "com.microsoft.sqlserver.jdbc.SQLServerDriver";
            }
            public String getURL() {
                return "jdbc:sqlserver://localhost:1433;databaseName=B2B;integratedSecurity=true;";
            }
            public String getUsername() {
                return "";
            }
            public String getDbPassword() {
                return "";
            }
        }
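
    The Tomcat lines above are routine startup INFO; the actual failure is the SQLServerException on the last line. A backtick is MySQL's identifier quoting, so this usually means a query written for MySQL is being sent to SQL Server (which quotes identifiers with [brackets] or "double quotes"). Connectivity itself can be checked outside Eclipse with the sqlcmd tool that ships with SQL Server:

        sqlcmd -S localhost,1433 -E -Q "SELECT 1"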

  • Postfix - am I sending spam?

    - by olrehm
    Today I received around 30 messages within 5 minutes telling me that some mail I sent could not be delivered, mostly to *.ru email addresses which I did not send any mail to. I have my own webserver (postfix/dovecot) set up using this guide (http://workaround.org/ispmail/lenny), adjusted a little bit for Ubuntu. I tested whether I am an open relay, which I am apparently not. Now there are two possible reasons for the above-mentioned emails: either I am sending out spam, or somebody wants me to think that, correct? How can I check this?

    I selected one particular address that I supposedly sent spam to. Then I searched my mail.log for this entry. I found two blocks that record that somebody from the server connected to my server and delivered some message to two different users. I cannot find an entry reporting that anyone from my server sent an email to that server. Does this mean it's just some mail to scare me, or could it still have been sent by me in the first place? Here is one such block from the log (I replaced some confidential stuff):

        Jun 26 23:23:28 mycustomernumber postfix/smtpd[29970]: connect from mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: 044991528995: client=mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber postfix/cleanup[29974]: 044991528995: message-id=<[email protected]>
        Jun 26 23:23:29 mycustomernumber postfix/qmgr[3369]: 044991528995: from=<>, size=2198, nrcpt=1 (queue active)
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) ESMTP::10024 /var/lib/amavis/tmp/amavis-20110626T223137-28598: <> -> <[email protected]> SIZE=2198 Received: from mycustomernumber.stratoserver.net ([127.0.0.1]) by localhost (rehmsen.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP for <[email protected]>; Sun, 26 Jun 2011 23:23:29 +0200 (CEST)
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) Checking: YakjkrdFq6A8 [195.144.251.97] <> -> <[email protected]>
        Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: disconnect from mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) lookup_sql_field(id) (WARN: no such field in the SQL table), "[email protected]" result=undef
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: connect from localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: 0A1FA1528A21: client=localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber postfix/cleanup[29974]: 0A1FA1528A21: message-id=<[email protected]>
        Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: from=<>, size=2841, nrcpt=1 (queue active)
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: disconnect from localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) FWD via SMTP: <> -> <[email protected]>,BODY=7BIT 250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21
        Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) Passed CLEAN, [195.144.251.97] [195.144.251.97] <> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: YakjkrdFq6A8, Hits: 2.249, size: 2197, queued_as: 0A1FA1528A21, 2882 ms
        Jun 26 23:23:32 mycustomernumber postfix/smtp[29975]: 044991528995: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=3.3, delays=0.39/0.01/0.01/2.9, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21)
        Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 044991528995: removed
        Jun 26 23:23:33 mycustomernumber postfix/smtp[29980]: 0A1FA1528A21: to=<[email protected]>, orig_to=<[email protected]>, relay=mx3.hotmail.com[65.54.188.110]:25, delay=1.2, delays=0.15/0.02/0.51/0.55, dsn=2.0.0, status=sent (250 <[email protected]> Queued mail for delivery)
        Jun 26 23:23:33 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: removed
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection rate 1/60s for (smtp:195.144.251.97) at Jun 26 23:23:28
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection count 1 for (smtp:195.144.251.97) at Jun 26 23:23:28
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max cache size 1 at Jun 26 23:23:28

    I can provide more info if you tell me what you need to know. Thank you for your help!
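
    For what it's worth, both queue IDs in the block above carry an empty envelope sender (from=<>), which is the signature of a bounce message being delivered to this server and then forwarded onward, not of mail originating here; that pattern is classic backscatter from spam forged with someone else's domain. Two quick checks that nothing is actually leaving locally (the log path assumes a Debian/Ubuntu-style Postfix setup like the one in the guide):

        # anything sitting in the outbound queue?
        postqueue -p

        # outbound deliveries that did not go through the local amavis hop
        grep 'status=sent' /var/log/mail.log | grep -v 'relay=127.0.0.1'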

  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup

    I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overclocking), which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.), 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profile to silent.

    Issue

    Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was only ~30°C. Before the upgrade the CPU temperature was ~50°C with the same BIOS settings (see the following heading for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C, and it also seems odd that the CPU temperature is lower than the chassis temperature.

    Temperatures

    When I've checked temperatures the room temperature has been ~23°C. I haven't changed the placement of the computer nor the hardware or cooling setup between BIOS versions.

    BIOS version 0501: BIOS monitor shows CPU ~50°C, chassis ~33°C. I haven't got any temperature measures from lm-sensors or the like for version 0501, because I only discovered the issue after upgrading to version 0606, and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load version 0501).

    BIOS version 0606: BIOS monitor shows CPU ~30°C, chassis ~33°C. lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15):

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:      +32.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:      +35.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0002
        Adapter: ISA adapter
        Core 2:      +29.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0003
        Adapter: ISA adapter
        Core 3:      +36.0°C  (high = +80.0°C, crit = +98.0°C)

    The BIOS monitor temperatures were checked directly after the lm-sensors temperatures were checked.

    BIOS versions 0706, 0801, 1101 and 3203: I get the same kind of temperatures, both in the BIOS monitor and with lm-sensors, in BIOS versions 0706, 0801, 1101 and 3203 as in 0606.

    Information from Asus

    The 0606 changelog mentions nothing explicitly about CPU temperature (though item 3, as indicated by sidran32, might affect temperatures):

        P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002
        1. Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release
        2. Improve DRAM compatibility
        3. Improve System stability
        4. Improve compatibility with some Raid card model
        5. Increase IGD share memory size to 512MB

    However, the following FAQ might give a hint:

        FAQ: I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal?
        Solution: That is normal as BIOS does not send idle command to the CPU, making most of the power saving features useless. You should be getting similar reading if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS.

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010... I have been browsing my brains out to find a simple backup strategy for our single Windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don't need anything exotic. I only need a "daily" backup (losing a day's worth of data is not catastrophic).

    We have created and saved an EBS-backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is put offline during the first 10 – 15 minutes of creating the image – not ideal. The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again it's pretty simple, and the server remains usable during the snapshot generation. The apparent Catch-22 is that you can't simply launch a new instance directly from a snapshot.

    I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted, and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command line approach, this is an 010 level course.)

    From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console):

    1. For your backups, simply snapshot the system volume (/dev/sda1) as needed.
    2. If you lose your running instance, do the following:
       a. Create a new volume from your last snapshot backup.
       b. Launch another instance of your starting AMI (must be EBS-backed).
       c. Stop this instance.
       d. Detach the existing system volume from the new stopped instance and discard.
       e. Attach the newly created volume as system volume (/dev/sda1) to the stopped instance.
       f. Re-start the new instance.

    I have tested this out a couple of times and it seems to work for me. Question: Is there anything wrong with this approach?

  • vSphere Client vCenter Template Customization Specification Using Windows Sysprep Unattended Answer XML File

    - by Brian
    I'm trying to set up a vSphere Client vCenter v5.0.0 Build 455964 Template Customization Specification using a Windows Sysprep unattended answer XML file for Win2008R2. However, I didn't know how Sysprep worked before attempting this, so it was a time-consuming nightmare (even after reviewing VMware vSphere ESXi 5's documentation)! I think I've figured out what I'm supposed to be doing, but it's still not working. The biggest problem at this point is that vSphere Client vCenter Customization Specification IP address information is not sticking when I load a Sysprep XML file with just 1 basic setting! This can only be a bug. Here is the process I'm using:

    PROCESS for Windows - vSphere Client

    1. Install Windows OS
    2. Install VM Tools
    3. Customize Windows (GPOs can be used to do this after deployment)
    4. Install Applications (GPOs can be used to do this after deployment too)
    5. Shut down the VM
    6. Convert the VM to a template
    7. Create a custom Windows Sysprep XML answer file with desired customizations
    8. View > Management > Customization Specifications Manager
    9. Create a "New" Specification for "Target Virtual Machine OS":
       - select Windows
       - check "Use Custom Sysprep Answer File" (ADDS: Custom Sysprep File. KEEPS: Network (IP), Operating System Options (SID, Sysprep /generalize). REPLACES: Registration Information of Owner Name & Organization, Computer Name, Windows License (Key), Administrator Password, Time Zone, Run Once, Workgroup or Domain)
       - name it as "VMwareCS-OS####R#x32/64w/Sysprep-TEST" (CS = Customization Specification)
       - set Description as "Created YYYY/MM/DD by FLast" > NEXT
       - import a Sysprep answer file from a secure location > NEXT
       - Custom settings > NEXT
       - click the "..." box to the right of "Use DHCP"
       - set "Use the following IP settings:": for "IP Address" fill out the first 2 octets, set appropriate values for the other 2-3 fields, set DNS server addresses > OK > NEXT
       - check "Generate New Security ID (SID)" (ALWAYS, as the template is likely a domain-member computer so it can be updated occasionally) > NEXT
       - Finish
    10. View > Inventory > VMs and Templates
    11. Right-click the previously completed template > Deploy Virtual Machine from this Template
    12. Provide the new OS name (max 15 chars), select the inventory location > NEXT
    13. Select Host/Cluster (wait for validation to succeed) > NEXT
    14. Select Resource Pool (wait for validation to succeed) > NEXT
    15. Select Storage location > NEXT
    16. Check "Power on this virtual machine after creation", select "Customize using an existing customization specification", select the desired specification, select "Use the Customization Wizard to temporarily adjust the specification before deployment" > NEXT > NEXT
    17. Custom settings? > NEXT
    18. Check "Generate New Security ID (SID)" > NEXT
    19. Finish > Finish.

    I know a community member named "brian" (http://serverfault.com/users/25904/brian) has worked with this scenario before, but I couldn't figure out how to contact him directly, so Brian, if you see this message, could you provide some information to help? Thanks, Brian

  • Splitting a raidctl mirror safely

    - by milkfilk
    I have a Sun T5220 server with the onboard LSI card and two disks that were in a RAID 1 mirror. The data is not important right now, but we had a failed disk and are trying to understand how to do this for real if we had to recover from a failure. The initial situation looked like this:

        # raidctl -l c1t0d0
        Volume                  Size    Stripe  Status   Cache  RAID
                Sub                     Size                    Level
                        Disk
        ----------------------------------------------------------------
        c1t0d0                  136.6G  N/A     DEGRADED OFF    RAID1
                        0.1.0   136.6G          GOOD
                        N/A     136.6G          FAILED

    Green light on the 0.0.0 disk. Find / lights up the 0.1.0 disk. So I know I have a bad drive and which one it is. Server still boots, obviously. First, we tried putting a new disk in. This disk came from an unknown source. format would not see it, cfgadm -al would not see it, so raidctl -l would not see it. I figure it's bad. We tried another disk from another spare server:

        # raidctl -c c1t1d0 c1t0d0    (where t1 is my good disk - 0.1.0)
        Disk has occupied space.

    The different syntax options don't change anything:

        # raidctl -C "0.1.0 0.0.0" -r 1 1
        Disk has occupied space.
        # raidctl -C "0.1.0 0.0.0" 1
        Disk has occupied space.

    Ok. Maybe this is because the disk from the spare server had a RAID 1 on it already. Aha, I can see another volume in raidctl:

        # raidctl -l
        Controller: 1
        Volume:c1t1d0     (this is my server's root mirror)
        Volume:c1t132d0   (this is the foreign root mirror)
        Disk: 0.0.0
        Disk: 0.1.0
        ...

    No problem. I don't care about the data, I'll just delete the foreign mirror:

        # raidctl -d c1t132d0
        (warning about data deletion, but it works)

    At this point, /usr/bin/ binaries freak out. By that I mean: ls -l /usr/bin/which shows 1.4k, but cat /usr/bin/which gives me a newline. Great, I just blew away the binaries (i.e. binaries in memory still work)? I bounce the box. It all comes back fine. WTF. Anyway, back to recreating my mirror:

        # raidctl -l
        Controller: 1
        Volume:c1t1d0     (this is my server's root mirror)
        Disk: 0.0.0
        Disk: 0.1.0
        ...

    Man says that you can delete a mirror and it will split it. Ok, I'll delete the root mirror:

        # raidctl -d c1t0d0
        Array in use.    (this might not be the exact error)

    I googled this and found, of course, that you can't do this (even with -f) while booted off the mirror. Ok. I booted cdrom -s and deleted the volume. Now I have one disk that has a type of "LSI-Logical-Volume" on c1t1d0 (where my data is) and a brand new "Hitachi 146GB" on c1t0d0 (what I'm trying to mirror to):

        (booted off the CD)
        # raidctl -c c1t1d0 c1t0d0    (man says it's source destination for mirroring)
        Illegal Array Layout.
        # raidctl -C "0.1.0 0.0.0" -r 1 1    (alt syntax per man)
        Illegal Array Layout.
        # raidctl -C "0.1.0 0.0.0" 1    (assumes raid1, no help)
        Illegal Array Layout.

    Same size disks, same manufacturer, but I did delete the volume instead of throwing in a blank disk and waiting for it to resync. Maybe this was a critical error. I tried selecting the type in format for my good disk to be a plain 146GB disk, but it resets the partition table, which I'm pretty sure would wipe the data (bad if this was production). Am I boned? Anyone have experience with breaking and resyncing a mirror? There's nothing on Google about "Illegal Array Layout", so here's my contrib to the search gods.

  • Linux HA cluster w/Xen, Heartbeat, Pacemaker. domU does not failover to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on is "dead", the Xen domU running on it is not restarted on the second node.

    The cluster is set up with two nodes, each running OpenSuSE 11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes, one for root and the other for swap. I am using IPMI cards for STONITH devices, and a dedicated ethernet link for heartbeat communications. The ha.cf file is as follows:

        keepalive 1
        deadtime 10
        warntime 5
        udpport 694
        ucast eth1
        auto_failback off
        node dhcp-166
        node stage
        use_logd yes
        crm yes

    My resources look as follows:

        crm(live)configure# show
        node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166
        node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage
        primitive Xen-Util ocf:heartbeat:Xen \
                meta target-role="Started" \
                operations $id="Xen-Util-operations" \
                op start interval="0" timeout="60" start-delay="0" \
                op stop interval="0" timeout="120" \
                params xmfile="/etc/xen/vm/xen-util"
        primitive my-stonith stonith:external/ipmi \
                params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \
                op monitor interval="2m" timeout="60s"
        primitive my-stonith2 stonith:external/ipmi \
                params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \
                op monitor interval="2m" timeout="60s"
        property $id="cib-bootstrap-options" \
                dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
                cluster-infrastructure="Heartbeat"

    The Xen domU config file is as follows:

        name = "xen-util"
        bootloader = "/usr/lib/xen/boot/domUloader.py"
        #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen"
        bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
        memory = 4096
        disk = [ 'phy:vg_xen/xen-util-root,xvda1,w',
                 'phy:vg_xen/xen-util-swap,xvda2,w', ]
        root = "/dev/xvda1"
        vif = [ 'mac=00:16:3e:42:42:06' ]
        #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ]
        extra = ""

    Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It seems to want to try: an "xm list" will show it for a few seconds, and if you "xm console xen-util" it will give a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". Now, when node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running, and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine. No problems. So I know it works in that respect. Any ideas? Thanks!
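
    When a failover stalls like this, it helps to see what the CRM itself decided versus what Xen did; a one-shot status dump including fail counts (standard Pacemaker tooling) is a reasonable first look:

        # print cluster status once, with per-resource fail counts
        crm_mon -1 -f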

  • Benchmarking hosting providers IO with Bonnie

    - by Derek Organ
    Ok, because of a bunch of projects I'm working on, I have access to dedicated servers at 3 hosting providers. As an experiment, and for educational purposes, I decided to see if I could benchmark how good the IO is with each. A bit of research led me to Bonnie++, so I installed it on each server and ran this simple command:

        /usr/sbin/bonnie -d /tmp/foo

    The 3 machines at the different hosting providers are all dedicated machines; one is a VPS, and the other two are on some cloud platform, e.g. VMware / Xen, using some kind of clustered SAN for storage. This might be a naive thing to do, but here are the results I found.

    HOST A

        Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                            -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
        Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
        xxxxxxxxxxxxxxxx 1G 45081  88 56244  14 19167   4 20965  40 67110   6  67.2   0
                            ------Sequential Create------ --------Random Create--------
                            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                         16 15264  28 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
        xxxxxxxx,1G,45081,88,56244,14,19167,4,20965,40,67110,6,67.2,0,16,15264,28,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

    HOST B

        Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                            -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
        Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
        xxxxxxxxxxxx     4G 43070  91 64510  15 19092   0 29276  47 39169   0 448.2   0
                            ------Sequential Create------ --------Random Create--------
                            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                         16 24799  52 +++++ +++ +++++ +++ 25443  54 +++++ +++ +++++ +++
        xxxxxxx,4G,43070,91,64510,15,19092,0,29276,47,39169,0,448.2,0,16,24799,52,+++++,+++,+++++,+++,25443,54,+++++,+++,+++++,+++

    HOST C

        Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                            -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
        Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
        xxxxxxxxxxxxx 1536M 15598  22 85698  13 258969  20 16194  22 723655  21 +++++ +++
                            ------Sequential Create------ --------Random Create--------
                            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                         16 14142  22 +++++ +++ 18621  22 13544  22 +++++ +++ 17363  21
        xxxxxxxx,1536M,15598,22,85698,13,258969,20,16194,22,723655,21,+++++,+++,16,14142,22,+++++,+++,18621,22,13544,22,+++++,+++,17363,21

    Ok, so first off, what is the best way to read the figures, and are there any issues with really comparing these numbers? Is this in any way a true representation of IO speed? If not, is there any way for me to test that? Note: these 3 machines are using either Ubuntu or Debian (I presume that doesn't really matter).
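
    One caveat before comparing: bonnie++ only defeats the page cache when the test file set is at least twice the machine's RAM, and HOST C's figures (a 723 MB/s block read on a 1536M test) look cache-inflated. A fairer run sizes the file set explicitly; a sketch, assuming roughly 4 GB of RAM on the box:

        # -d target dir, -s total file size in MiB (about 2x RAM), -u user to run as
        bonnie++ -d /tmp/foo -s 8192 -u nobody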

  • eth0 and eth1 both assigned same IP on boot

    - by Banjer
    I have a physical SLES 11 SP2 server on a Sun Fire x4140 that is giving me problems with networking upon reboot. The NICs are onboard. The networking appears successful during boot, but network services such as nfs fail hard. This is because eth0 and eth1 both receive the same configuration and are both ifup-ed. Once everything times out and I'm at the console, ifconfig shows that eth0 and eth1 are UP and running with the same IP. Attempting to ping anything in that subnet fails. Restarting the network service fixes the issue. eth0 is the correct NIC that should be configured as primary, per the MAC address.

    Question: What's causing eth1 to be brought up with the same config as eth0? I do not have a config script set up for eth1:

        banjer@harp:~> ls -la /etc/sysconfig/network/
        total 104
        drwxr-xr-x 6 root root  4096 Jun 11 12:21 .
        drwxr-xr-x 6 root root  4096 Apr 10 09:46 ..
        -rw-r--r-- 1 root root 13916 Apr 10 09:32 config
        -rw-r--r-- 1 root root  9952 Apr 10 09:36 dhcp
        -rw------- 1 root root   180 Jun 11 12:21 ifcfg-eth0
        -rw------- 1 root root   180 Jun 11 12:21 ifcfg-eth3
        -rw------- 1 root root   172 Feb  1 08:32 ifcfg-lo
        -rw-r--r-- 1 root root 29333 Feb  1 08:32 ifcfg.template
        drwxr-xr-x 2 root root  4096 Apr 10 09:32 if-down.d
        -rw-r--r-- 1 root root   239 Feb  1 08:32 ifroute-lo
        drwxr-xr-x 2 root root  4096 Apr 10 09:33 if-up.d
        drwx------ 2 root root  4096 May  5  2010 providers
        -rw-r--r-- 1 root root    25 Nov 16  2010 routes
        drwxr-xr-x 2 root root  4096 Apr 10 09:36 scripts

    On a side note, eth3 is also configured with an IP in a different subnet, but this has not posed any problems. FYI, the kernel module being used is forcedeth.

        banjer@harp:~> sudo cat /etc/sysconfig/network/ifcfg-eth0
        BOOTPROTO='static'
        BROADCAST=''
        ETHTOOL_OPTIONS=''
        IPADDR='172.21.64.25/20'
        MTU=''
        NAME='MCP55 Ethernet'
        NETWORK=''
        REMOTE_IPADDR=''
        STARTMODE='auto'
        USERCONTROL='no'
        ONBOOT="yes"

    Here's eth3 in case you need to see it:

        banjer@harp:~> sudo cat /etc/sysconfig/network/ifcfg-eth3
        BOOTPROTO='static'
        BROADCAST=''
        ETHTOOL_OPTIONS=''
        IPADDR='172.11.200.4/24'
        MTU=''
        NAME='MCP55 Ethernet'
        NETWORK=''
        REMOTE_IPADDR=''
        STARTMODE='auto'
        USERCONTROL='no'
        ONBOOT="yes"

    Perhaps it is something related to udev? 70-persistent-net.rules looks OK to me, but I may not understand it completely:

        banjer@harp:~> cat /etc/udev/rules.d/70-persistent-net.rules
        # This file was automatically generated by the /lib/udev/write_net_rules
        # program, run by the persistent-net-generator.rules rules file.
        #
        # You can modify it, as long as you keep each rule on a single
        # line, and change only the value of the NAME= key.

        # PCI device 0x10de:0x0373 (forcedeth)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

        # PCI device 0x10de:0x0373 (forcedeth)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4a", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

        # PCI device 0x10de:0x0373 (forcedeth)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4b", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

        # PCI device 0x10de:0x0373 (forcedeth)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4d", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"

        # PCI device 0x1077:0x3032 (qla3xxx)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:c1:dd:0e:34:6c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth4"

    Any other thoughts on what would cause this?
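
    Since udev pins names to MAC addresses here, it's worth capturing which MAC each interface actually holds while the box is in the broken state; if forcedeth brings the ports up in a different order early in boot, a rename collision could leave two interfaces carrying eth0's config. A quick check at the console:

        # compare kernel names against MACs (should match 70-persistent-net.rules)
        ifconfig -a | grep HWaddr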

  • kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is:

    - VMware ESXi environment
    - cPanel installed
    - CentOS release 6.5 (Final)
    - 4 CPUs
    - 2G RAM
    - 2x VM disks, 100G each
    - LVM system

    These were my previous storage settings (the server was working fine at this time):

        # df -h
        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/mapper/vg_test01-lv_root   95G  1.4G   88G   2% /
        tmpfs                          939M     0  939M   0% /dev/shm
        /dev/sdb1                       99G  188M   94G   1% /tmp
        /dev/sda1                      485M   54M  407M  12% /boot

    My web developer asked me to merge the /tmp and / disks, so this is what I did:

    1. Deleted the /dev/sdb1 partition using fdisk
    2. Created a new partition as LVM on /dev/sdb1 using fdisk
    3. Created a new physical volume: pvcreate /dev/sdb1
    4. Extended the volume group: vgextend vg_test01 /dev/sdb1
    5. Extended the logical volume: lvextend -l +100%FREE /dev/vg_test01/lv_root
    6. Resized the filesystem: resize2fs /dev/vg_test01/lv_root

    This is the new configuration:

        # df -h
        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/mapper/vg_test01-lv_root  213G  105G   97G  52% /
        tmpfs                          939M     0  939M   0% /dev/shm
        /dev/sda1                      485M   54M  407M  12% /boot
        /usr/tmpDSK                    4.0G  145M  3.6G   4% /tmp

    Since I have the new settings, my web server has been throwing kernel panics quite often (around every 2 days). The message says:

        INFO: task <taskName>:<pid> blocked for more than 120 seconds.

    The affected processes that I can see from the console are: mysqld, queueprocd, httpd, suphp, vmtoolsd, loop0 and auditd. The only way I can fix this is by resetting (cold reboot) the VM. I don't think it is a hardware issue, as sar is not showing any bottleneck:

        Linux 2.6.32-431.3.1.el6.x86_64 (test01)   08/22/2014   _x86_64_   (4 CPU)

        12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
        12:10:01 AM     all     26.86      0.01      0.98      0.57      0.00     71.57
        12:20:01 AM     all      1.78      0.02      1.03      0.08      0.00     97.09
        12:30:01 AM     all     26.34      0.02      0.85      0.05      0.00     72.74
        12:40:01 AM     all     27.12      0.01      1.11      1.22      0.00     70.54
        12:50:01 AM     all      1.59      0.02      0.94      0.13      0.00     97.32
        01:00:01 AM     all     26.10      0.01      0.77      0.04      0.00     73.07
        01:10:01 AM     all     27.51      0.01      1.16      0.14      0.00     71.18
        01:20:01 AM     all      1.80      0.07      1.06      0.08      0.00     96.99
        01:30:01 AM     all     26.19      0.01      0.78      0.05      0.00     72.96
        01:40:01 AM     all     26.62      0.02      0.87      0.05      0.00     72.45
        01:50:02 AM     all      1.35      0.01      0.87      0.02      0.00     97.75
        02:00:01 AM     all     26.11      0.02      0.69      0.02      0.00     73.17
        02:10:01 AM     all     26.73      0.02      0.89      0.14      0.00     72.21
        02:20:01 AM     all      1.45      0.01      0.92      0.04      0.00     97.58
        02:30:01 AM     all     26.59      0.01      1.06      0.03      0.00     72.31
        02:40:01 AM     all     26.27      0.01      0.72      0.05      0.00     72.95
        02:50:01 AM     all      0.86      0.01      0.50      0.09      0.00     98.53
        03:00:01 AM     all     25.61      0.02      0.39      0.03      0.00     73.96
        03:10:01 AM     all     26.30      0.08      0.66      0.14      0.00     72.82
        03:20:01 AM     all      0.81      0.01      0.51      0.04      0.00     98.63
        03:30:02 AM     all     26.15      0.02      0.53      0.07      0.00     73.24
        03:40:01 AM     all     26.06      0.01      0.47      0.04      0.00     73.42
        03:50:01 AM     all      0.96      0.02      0.51      0.03      0.00     98.48
        Average:        all     17.69      0.02      0.79      0.14      0.00     81.36

        06:58:14 AM LINUX RESTART

        07:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
        07:10:01 AM     all      1.04      0.02      0.57      0.95      0.00     97.42
        07:20:02 AM     all      0.66      0.01      0.39      0.06      0.00     98.87
        07:30:01 AM     all     25.71      0.01      0.45      0.16      0.00     73.67
        07:40:01 AM     all     25.88      0.01      0.35      0.08      0.00     73.68
        07:50:01 AM     all      1.13      0.02      0.55      0.11      0.00     98.19

    As you can see, the server became unresponsive at 03:50 AM and I had to reset the VM at 06:58 AM to bring the website up again. I would appreciate any help/assistance to fix this issue. Thank you very much.
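
    The "blocked for more than 120 seconds" lines are the kernel's hung-task warning: processes stuck in uninterruptible sleep, almost always waiting on storage, which points at I/O latency rather than CPU (consistent with the clean sar figures, which in any case only sample every ten minutes). On virtualized guests a common, easily reversible mitigation is to shrink the write-back cache so flushes start earlier and stall for less time; a sketch with illustrative rather than tuned values (RHEL 6 defaults are 10/20):

        # flush dirty pages sooner to shorten I/O stalls
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=10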

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements:

    - The network of machines are Amazon EC2 instances running Ubuntu.
    - We access the machines with SSH.
    - Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform).
    - The system will not necessarily have a constant number of machines; we may have to permanently or temporarily alter the number of machines running. This is the reason why I'm looking into centralized authentication/storage.
    - The implementation of this effect should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious, for the sake of security.

    I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to an old article by Symantec. That article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trend away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to store user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references to them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible.

    Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... the Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token).

    Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As the bold text pointed out, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
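
    If LDAP does win out, the everyday smoke test is that NSS on a client resolves a directory-backed account once /etc/nsswitch.conf points at it; the account name here is illustrative:

        # should print the LDAP-backed passwd entry, proving NSS + LDAP are wired up
        getent passwd someuser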

  • How to diagnose repeated "Starting up database '<dbname>'"

    - by Richard Slater
    I have a SQL 2008 server which is predominantly used as a development server. In the last two weeks it has been having occasional "fits"; I have isolated the cause of these fits as CHECKDB being run almost continuously, with the following information logged to the Windows Event Log (Source: MSSQLSERVER, Category: Server):

        Event: 1073758961, Message: Starting up database 'DBName1'.
        Event: 1073758961, Message: Starting up database 'DBName2'.
        Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required.
        Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required.

    This is repeated every 1-2 seconds until SQL Server is restarted or the offending databases are detached. I initially thought that it was a problem with the databases, so I took a backup and restored them to a SQL Express instance; all of the data is intact, and CHECKDB runs without problem. The two databases that were causing a problem last week were not being used, so I took full backups of them and detached the databases, which resolved the problem. However, at 0100 GMT this morning two other, totally unrelated databases started showing the same problems. There is nothing in the event log to suggest that something happened to the server, such as a restart; there are no messages about processes crashing or issues being detected with the storage controller. Speaking to the owner of the company, this computer has suffered from "gremlins" in the past; however, advice was taken and the motherboard was replaced and the computer rebuilt. Memory and processor are the same.

    Stats:

        O/S: Windows 2008 Standard Build 6002
        CPU: 2x Pentium Dual-Core E5200 @ 2.5GHz
        RAM: 2GB
        SQL: 2008 Standard 10.0.2531

    Edit: someone posted then deleted a comment about AutoClose; it was turned on on the affected databases. It seems that best practice is to disable it, so I have done that with the following:

        EXECUTE sp_MSforeachdb 'IF (''?'' NOT IN (''master'', ''tempdb'', ''msdb'', ''model''))
            EXECUTE (''ALTER DATABASE [?] SET AUTO_CLOSE OFF WITH NO_WAIT'')'

    I won't know if the problem recurs for some time, so I am still open to further answers.
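
    To verify the change took everywhere (the setting is exposed per database in SQL Server 2008's catalog views):

        sqlcmd -E -Q "SELECT name FROM sys.databases WHERE is_auto_close_on = 1"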

  • Diagnosing Solaris 8 server memory and swap space usage

    - by datSilencer
    Hello everyone. Essentially, my question is related to memory allocation for Solaris virtual machines. I am running a couple of old Sun ONE 6 Java web servers on two Solaris 8 virtual machines. I see that there's a reasonable amount of swap space being used, but I'm not exactly sure if this could indicate a need to add more RAM to these machines. At service peak hours (mornings usually), the response time of the web application these servers host jumps up to at most 11 seconds (somewhat detrimental for a relatively simple web page loading action). Average response time at non-peak times is about 5 seconds. What would you be able to infer about the RAM usage for these machines from the output below? Is this information reasonably sufficient, or would I need to run some other commands to rule out server memory starvation?

    Finally, since there is a Java application at the core of the setup, I've also thought about:

    1. Tracing the heap's object allocation to detect potential memory leaks.
    2. Doing some performance profiling to see if this is instead related to networking delays. I mention this since the application talks with a single Oracle database, but I would doubt this to be the case since they're pretty close from a network segmentation perspective.

    I appreciate any kind of insight and feedback you could provide. Thanks for your time and help.

    Server 1:

        40 processes: 38 sleeping, 1 zombie, 1 on cpu
        CPU states: 99.1% idle, 0.4% user, 0.4% kernel, 0.0% iowait, 0.0% swap
        Memory: 2048M real, 295M free, 865M swap in use, 3788M swap free

          PID USERNAME THR PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
        12676 webservd 112  29   10  616M  242M sleep  103:37  0.48% webservd
        18317 root       1  59    0   23M   19M sleep   67:24  0.08% perl
         9479 support    1  59    0 6696K 2448K cpu/1    0:11  0.05% top
         8012 root      10  59    0   34M  704K sleep   80:54  0.04% java
         1881 root      33  29   10  110M   13M sleep   33:03  0.02% webservd
         7808 root       1  59    0   83M   67M sleep    7:59  0.00% perl
         1461 root      20  59    0 5328K 1392K sleep    6:49  0.00% syslogd
         1691 root       2  59    0   27M  680K sleep    4:22  0.00% webservd
        24386 root       1  59    0   15M   11M sleep    2:50  0.00% perl
        23259 root       1  59    0   11M 4240K sleep    2:42  0.00% perl
        24718 root       1  59    0   11M 5464K sleep    2:29  0.00% perl
        22810 root       1  59    0   19M   11M sleep    2:21  0.00% perl
        24451 root       1  53    2   11M 3800K sleep    2:18  0.00% perl
        18501 root       1  56    1   11M 3960K sleep    2:18  0.00% perl
        14450 root       1  56    1   15M 6920K sleep    1:49  0.00% perl

    Server 2:

        42 processes: 40 sleeping, 1 zombie, 1 on cpu
        CPU states: 98.8% idle, 0.4% user, 0.8% kernel, 0.0% iowait, 0.0% swap
        Memory: 1024M real, 31M free, 554M swap in use, 3696M swap free

          PID USERNAME THR PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
         5607 webservd  74  29   10  284M  173M sleep   20:14  0.21% webservd
        15919 support    1  59    0 4056K 2520K cpu/1    0:08  0.09% top
        13138 root      10  59    0   34M 1952K sleep  210:51  0.08% java
        13753 root       1  59    0   22M   12M sleep  170:15  0.07% perl
        22979 root      33  29   10  112M 7864K sleep   85:07  0.04% webservd
        22930 root       1  59    0 3424K 1552K sleep   17:47  0.01% xntpd
        22978 root       2  59    0   27M 2296K sleep   10:49  0.00% webservd
        13571 root       1  59    0 9400K 5112K sleep    5:52  0.00% perl
         5606 root       2  29   10   29M 9056K sleep    0:36  0.00% webservd
        15910 support    1  59    0 9128K 2616K sleep    0:00  0.00% sshd
        13106 root       1  59    0   82M 3520K sleep    7:47  0.00% perl
        13547 root       1  59    0   12M 5528K sleep    6:38  0.00% perl
        13518 root       1  59    0 9336K 3792K sleep    6:24  0.00% perl
        13399 root       1  56    1 8072K 3616K sleep    5:18  0.00% perl
        13557 root       1  53    2 8248K 3624K sleep    5:12  0.00% perl
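
    On Solaris, swap-in-use by itself doesn't prove starvation; the stronger signal is the page scanner. If the sr (scan rate) column in vmstat stays above zero under steady load, the pager is actively reclaiming memory and adding RAM (or shrinking the Java heaps) is likely to help; if sr sits at zero, the swap usage is mostly stale pages and the slowdowns are probably elsewhere. A minimal look:

        # watch the sr column over about a minute, then summarize swap
        vmstat 5 12
        swap -s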

  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting SMART, shows no serial number, and doesn't list the size correctly. All fdisk tells me is "Unable to read /dev/sda" (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu...

    Specs:

    - Asus M4N68T-M LE V2 (BIOS 0702, most recent)
    - OCZ Vertex 2 SSD 60 GB
    - AMD Athlon II X4 640
    - Patriot PSD34G13332 4GB DDR3 RAM (two banks)

    EDIT: I installed a second drive, installed Ubuntu on that and booted; it recognised the SSD just fine. I'm now trying to apt-get upgrade the live environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD and then boot from the SSD).

    EDIT2: Ok, so that doesn't work. The install detects the SSD; however, it cannot format it.

    EDIT3: After a fresh boot I can read out SMART data and even perform a read benchmark, but if I try to format it, or do a write benchmark, it'll crap out, and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it will stop working when I do. I will try to run repeated read benchmarks to see if that has any effect.

    EDIT4: I'm running several read benchmarks on the drive right now; they give results that are to be expected from an SSD. If the read benchmarks don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command.

    EDIT5: Parted Magic did recognise the drive, and hdparm -I could even tell me the drive was in a frozen state. I power-cycled it (just pull out the plug from the SSD and plug it back in) and it wasn't frozen anymore. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognised, and hdparm didn't want to talk to it, saying: HDIO_DRIVE_CMD(identify) failed: Invalid exchange.

    EDIT6: For some reason, if I enable one of the RAID controllers (the one the SSD is connected to, obviously), Ubuntu will let me format it, mount it and write to it. The installer also recognises it. However, if the RAID controller is enabled but no array is defined, the motherboard can't boot from it :(
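
    The frozen state found in EDIT5 is worth checking from any live session before blaming the distribution, since a frozen drive rejects writes and security commands in just this way; the same hdparm used above does it:

        # 'frozen' vs 'not frozen' appears in the Security section of the identify data
        sudo hdparm -I /dev/sda | grep -i frozen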

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer. The first one is a 74GB Western Digital 10K RPM Raptor. The second one is a 1TB Seagate Barracuda (couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage. With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. And although it is nice to have that quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5GB of space on the Raptor, I can always just temporarily move a disc image there.

    So I am figuring that, having games installed like Borderlands and Mass Effect, as well as having large files such as Linux distro DVD disc images in My Documents, I should probably be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to designate the install directory of any program installation to the alternative install directory, which I can't ever seem to get to work right with Steam.

    With that in mind, is there a way, not too drastic, that would let me just change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda, but that would defeat the purpose of the Raptor except for running disc images off of. I have also heard a bit about being able to use symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported. An example a friend mentioned was something about how if you have a symlink in Windows on a small hard drive pointing to a large hard drive, and the contents the symlink points to are larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without the issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?
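
    On the symlink worry: NTFS junctions (and, since Vista, real symbolic links) do work across volumes, and free space is reported per volume, so a junction pointing at a bigger drive does not by itself fill the small one. A hedged sketch of relocating a single program directory from an elevated command prompt; the paths are illustrative, and moving an installed application this way is not guaranteed to be safe for every program:

        rem move the directory to the big drive, then leave a junction at the old path
        robocopy "C:\Program Files\SomeApp" "D:\Apps\SomeApp" /E /MOVE
        mklink /J "C:\Program Files\SomeApp" "D:\Apps\SomeApp"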

  • Windows 7 x64 "upgrade" repair fails

    - by Polynomial
    I've been running into issues with Windows Update, which I can't seem to fix. The hotfixes don't work, nor does the Windows Update readiness tool, or the manual SP1 upgrade. I get various esoteric errors which nobody seems to have a fix for. It looks like some of the update cache is corrupt and digital signatures seem to be broken on some packages / Windows Update components. Long story short, I have discovered the only option is to do a repair operation on the OS, to repair everything. It's so corrupt that only a complete replacement will fix it. According to various sources (including MSKB) one can perform a repair by running an in-place upgrade.

    I've got the Windows 7 Ultimate retail disc, which I've inserted into my machine. I ran setup.exe and went through in the following order:

    1. Install now
    2. Go online to get latest updates (I've also tried not getting updates)
    3. Wait for updates to be downloaded
    4. Select Windows 7 Ultimate (x64 architecture) and click next
    5. Accept the T&Cs, click next
    6. Click Upgrade

    At this point it spends a minute on the "checking compatibility" screen, after which I get the following error:

        The following issues are preventing Windows from upgrading. Cancel the upgrade, complete each task, and then restart the upgrade to continue.
        You can't upgrade 64-bit Windows to a 32-bit version of Windows. To upgrade, obtain a 64-bit version of the installation disc, or go online to see how to install Windows 7 and keep your files and settings.
        32-bit Windows cannot be upgraded to a 64-bit version of Windows. To upgrade, obtain a 32-bit version of the Windows installation disc.

    It also mentions a warning about potential conflicts with a storage driver and VS2010, but that doesn't seem to be the blocking issue. My currently installed version of Windows is Ultimate 64-bit (absolutely sure of this) and the disc is definitely an x86 / x64 combined Ultimate retail disc. There seem to be a few people who have run into this (e.g. this question), but I've not seen any answers. I've checked the event viewer, but can't spot anything in there that's related. Any idea how I can get this working?

    P.S: Just to pre-empt the inevitable "are you suuuuuuuuuuuuure it's x64 Ultimate?" questions:

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, and therefore generic, with standard tools, not bespoke per location. The data is many hundreds of files per day.

    I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known MAC address directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to potentially push up the files:

    - Would I need to manage hundreds of access keys and have each server customized to include this? (Doable, but key management becomes nightmarish?)
    - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only to any client that was running the script? (I could deal with a flood of data if someone got into a system?)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key, as if one machine is breached then, potentially, a malicious hack could get access to the filestore key and start deleting for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here, so perhaps it can be summarised thus: in a perfect world I just want to have one of our techs install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream or feasible? TIA, Aitch
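
    For the transfer step itself, the shape of each box's job is simple regardless of how the key-management question gets answered; a minimal sketch with s3cmd (the bucket name is illustrative, and credentials are assumed to be already configured on the host):

        # upload the hour's harvest under a per-host prefix
        s3cmd put /var/data/sensors/*.dat s3://example-sensor-bucket/$(hostname)/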

    Read the article

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible, and I have learned that having two drives can considerably increase the perceived performance of your machine. In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed? E.g. can I buy two drives, go into the device manager and "raid-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the Wikipedia page on RAID but it is just way too many trees and not enough forest to help me build a faster PC:

        RAID-0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows 7 PC that I am going to order next week?
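    On the "can I raid-0 them together" point: Windows 7 can stripe in software, which Disk Management calls a "striped volume", so no special controller is strictly required for data disks. A hedged diskpart sketch, assuming two blank non-system disks numbered 1 and 2 (disk numbers and the drive letter are placeholders):

        diskpart
        rem striped volumes require dynamic disks
        select disk 1
        convert dynamic
        select disk 2
        convert dynamic
        rem interleave data across both disks -- the RAID-0 layout quoted above
        create volume stripe disk=1,2
        format fs=ntfs quick
        assign letter=S

    One caveat worth knowing before ordering: the volume Windows boots from generally cannot be a software stripe, so if the goal is a faster OS drive, the shop would instead enable RAID-0 in the motherboard firmware (or on a controller card) before installing Windows.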

    Read the article

  • Turn off email notification from abrt (Automatic Bug Reporting Tool)

    - by Banjer
    I'm configuring CentOS 6.2 and have seen a few "[abrt] full crash report" emails. I understand that abrt is useful for creating crash dumps and whatnot, so I don't want to disable the service; I just would like to stop getting the crash report emails. I probably have to add something to the config file in /etc/abrt/abrt.conf, but I can't seem to find anything in my searches. Any idea? Thanks. Edit: Here is my abrt.conf, which is rather simple:

        [root@myhost~]# cat /etc/abrt/abrt.conf
        # Enable this if you want abrtd to auto-unpack crashdump tarballs which appear
        # in this directory (for example, uploaded via ftp, scp etc).
        # Note: you must ensure that whatever directory you specify here exists
        # and is writable for abrtd. abrtd will not create it automatically.
        #
        #WatchCrashdumpArchiveDir = /var/spool/abrt-upload

        # Max size for crash storage [MiB] or 0 for unlimited
        #
        MaxCrashReportsSize = 1000

        # Specify where you want to store coredumps and all files which are needed for
        # reporting. (default:/var/spool/abrt)
        #
        #DumpLocation = /var/spool/abrt

    And a listing of /etc/abrt:

        [root@myhost~]# ls -la /etc/abrt
        total 32
        drwxr-xr-x.  3 root root  4096 Apr 13 06:14 .
        drwxr-xr-x. 97 root root 12288 Apr 13 03:50 ..
        -rw-r--r--.  1 root root   527 Dec 13 22:50 abrt-action-save-package-data.conf
        -rw-r--r--.  1 root root   572 Dec 13 22:50 abrt.conf
        -rw-r--r--.  1 root root   175 Dec 13 22:50 gpg_keys
        drwxr-xr-x.  2 root root  4096 Apr 13 06:13 plugins

        [root@myhost~]# ls -la /etc/abrt/plugins/
        total 12
        drwxr-xr-x. 2 root root 4096 Apr 13 06:13 .
        drwxr-xr-x. 3 root root 4096 Apr 13 06:14 ..
        -rw-r--r--. 1 root root  278 Dec 13 22:50 CCpp.conf

    All of those conf files are only a few lines long, and none of them mention mail, email, or notifications.
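    Not an authoritative answer, but a hedged pointer at where the mail hook usually lives on this release: CentOS 6 ships abrt 2.x, where reporting steps are wired up through libreport event files rather than abrt.conf itself, which would explain why nothing under /etc/abrt mentions mail. A sketch, assuming that layout (verify the paths on your box first):

        # look for a mailx-based report event
        grep -rl mailx /etc/libreport/events.d/ 2>/dev/null
        # if /etc/libreport/events.d/mailx_event.conf turns up, commenting out its
        # EVENT= lines disables the email step while leaving abrtd itself running
        sed -i.bak 's/^EVENT=/#EVENT=/' /etc/libreport/events.d/mailx_event.conf
        service abrtd restart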

    Read the article

  • TomCat starts, but does not load properly

    - by user37136
    Hey guys, I've been working on this for a day now and still don't know what's wrong. I am essentially building a second environment for our web and app server. I got Apache to load up just fine, but Tomcat is proving to be difficult. It appears to start and load just fine, but when it comes to loading our application it just gets stuck for 2-5 minutes and then shuts down. Here is the log on the original machine, where it works fine:

        2010-02-12 11:52:40,506 INFO Web application servlet context is initializing...
        2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobType=[{1,Undefined}, {100,Completion}, {200,Plugging}, {300,R+M}, {400,Workover}, {500,Swab - tubing}, {600,Swab - fluid}]
        2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobTaskType=[{1,Undefined}, {100,Rod part}, {200,Tubing leak}, {300,Pump change}, {400,Stripping job}, {500,Long stroke}, {600,A/L optimization}]
        2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_wellType=[{1,Undefined}, {100,Rod pump}, {200,ESP}, {300,Injector}, {400,PC pump}, {500,Co-Rod}, {600,Flowing}, {700,Storage}]
        2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_assetType=[{1,Rig}, {100,Disabled rig}]
        2010-02-12 11:52:40,542 DEBUG Servlet context attribute added: select_state=[{AL,Alabama}, {AK,Alaska}, {AZ,Arizona}, {AR,Arkansas}, {CA,California}, {CO,Colorado}, {CT,Connecticut}, {DE,Delaware}, {FL,Florida}, {GA,Georgia}, {HI,Hawaii}, {ID,Idaho}, {IL,Illinois}, {IN,Indiana}, {IA,Iowa}, {KS,Kansas}, {KY,Kentucky}, {LA,Louisiana}, {ME,Maine}, {MD,Maryland}, {MA,Massachusetts}, {MI,Michigan}, {MN,Minnesota}, {MS,Mississippi}, {MO,Missouri}, {MT,Montana}, {NE,Nebraska}, {NV,Nevada}, {NH,New Hampshire}, {NJ,New Jersey}, {NM,New Mexico}, {NY,New York}, {NC,North Carolina}, {ND,North Dakota}, {OH,Ohio}, {OK,Oklahoma}, {OR,Oregon}, {PA,Pennsylvania}, {RI,Rhode Island}, {SC,South Carolina}, {SD,South Dakota}, {TN,Tennessee}, {TX,Texas}, {UT,Utah}, {VT,Vermont}, {VA,Virginia}, {WA,Washington}, {WV,West Virginia}, {WI,Wisconsin}, {WY,Wyoming}, {ACO,Atlantic Coast Offshore}, {FOAK,Federal Offshore Alaska}, {NGOM,Northern Gulf of Mexico}, {PCO,Pacific Coastal Offshore}]
        2010-02-12 11:52:40,542 INFO KeyviewContextMonitor.contextInitialized: Loaded drop-down lists:com/key/portal/web/common/lists.properties
        2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.SERVLET_MAPPING=*.do
        2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.ACTION_SERVLET=org.apache.struts.action.ActionServlet@155d578
        2010-02-12 11:52:41,939 DEBUG Servlet context attribute added: org.apache.struts.action.MODULE=org.apache.struts.config.impl.ModuleConfigImpl@e08e9d
        2010-02-12 11:52:41,962 DEBUG Servlet context attribute added: org.apache.struts.action.FORM_BEANS=org.apache.struts.action.ActionFormBeans@b31c3c
        2010-02-12 11:52:41,967 DEBUG Servlet context attribute added: org.apache.struts.action.FORWARDS=org.apache.struts.action.ActionForwards@102c646
        2010-02-12 11:52:41,973 DEBUG Servlet context attribute added: org.apache.struts.action.MAPPINGS=org.apache.struts.action.ActionMappings@127276a
        2010-02-12 11:52:41,974 DEBUG Servlet context attribute added: org.apache.struts.action.MESSAGE=org.apache.struts.util.PropertyMessageResources@18cae13
        2010-02-12 11:52:41,984 DEBUG Servlet context attribute added: org.apache.struts.action.PLUG_INS=[Lorg.apache.struts.action.PlugIn;@f875ae
        2010-02-12 11:52:46,816 INFO Sucessfully loaded application properties com/key/core/properties/application

    On my second environment, it never executes that last line. I start Tomcat with the exact same script:

        #!/bin/ksh
        export JAVA_HOME=/app/java
        export CATALINA_HOME=/app/tomcat
        export CATALINA_BASE=/app/keyview/appserver
        CATALINA_OPTS=" -Xms128m -Xmx800m -Dapplication.props=com/key/core/properties/application -Dlog4j.configuration=com/key/core/log/log4j.xml -Djava.awt.headless=true -Dlog4j.debug"
        export CATALINA_OPTS
        ${CATALINA_HOME}/bin/startup.sh

    I bolded the lines that I think are in error. Thanks
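    A hedged debugging step rather than a fix: when Tomcat sits silently for minutes during webapp startup, a JVM thread dump usually names whatever it is blocked on (a database connection attempt, a DNS lookup, a lock). A sketch, assuming a standard catalina layout and, for jstack, a JDK on the box:

        # find the Tomcat JVM's pid; the [o] keeps grep from matching itself
        PID=$(ps -ef | grep '[o]rg.apache.catalina.startup.Bootstrap' | awk '{print $2}')
        # SIGQUIT makes the JVM print a full thread dump without killing it
        kill -3 "$PID"
        tail -n 200 /app/keyview/appserver/logs/catalina.out
        # alternatively, with a JDK installed:
        jstack "$PID" > /tmp/tomcat-threads.txt

    Comparing two dumps taken a minute apart on the second environment should show whether the startup thread is stuck waiting on something the original machine can reach but this one cannot.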

    Read the article

  • Mongodb: why is my mongo server using two PID's?

    - by Lucas
    I started my mongod with the following command:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data
        2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpath=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance
        2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1
        2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
        2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
        2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc
        2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/node/nodetest2/data" } }
        2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/journal
        2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recovery needed
        2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017

    It appears to be working, as I can execute mongo and access the server. However, here are the processes running mongo:

        [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo
        root      6540  0.0  0.2  33424  1664 pts/3   S+   08:52   0:00 sudo mongod --dbpath /home/lucas/node/nodetest2/data
        root      6541  0.6  8.6 522140 52512 pts/3   Sl+  08:52   0:00 mongod --dbpath /home/lucas/node/nodetest2/data
        lucas     6554  0.0  0.1   7836   876 pts/4   S+   08:52   0:00 grep mongo

    As you can see, there are two PIDs for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep, of course). How did my command spawn two PIDs, and should I be concerned? Any suggestions or tips would be great.

    Additional info: I may have other issues that might suggest a cause. I tried running mongod with --fork --logpath /home/lucas..., but it did not work. More information below:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/
        about to fork child process, waiting until server is ready for connections.
        forked process: 6578
        ERROR: child process failed, exited with error number 1

        [lucas@ecoinstance]~/node/nodetest2$ ls -l data/
        total 163852
        drwxr-xr-x 2 mongodb nogroup     4096 Jun  7 08:54 journal
        -rw------- 1 mongodb nogroup 67108864 Jun  7 08:52 local.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 08:52 local.ns
        -rwxr-xr-x 1 mongodb nogroup        0 Jun  7 08:54 mongod.lock
        -rw------- 1 mongodb nogroup 67108864 Jun  7 02:08 nodetest1.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 02:08 nodetest1.ns

    Also, my db path folder is not in its original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder. This was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.
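    On the two-PID question, the ps output above already carries the answer: PID 6540 is the sudo wrapper itself and PID 6541 is the mongod child it spawned, which is normal and nothing to be concerned about. A quick check that shows the parent/child link:

        # the ppid of the mongod process should match sudo's pid (6540 above)
        ps -o pid,ppid,user,args -C mongod

    As for the --fork failure, a hedged guess consistent with the output above: --logpath expects a log file, and the command passed a bare directory ("/home/lucas/node/nodetest2/data/"), which would make the forked child exit with error number 1. An assumed-correct variant:

        sudo mongod --dbpath /home/lucas/node/nodetest2/data \
                    --fork --logpath /home/lucas/node/nodetest2/data/mongod.log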

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well.

    Pros:

    - One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and the file backup.
    - It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format).
    - The backups were normal NTFS filesystems; no special tool was needed to read them.
    - It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup, which apparently simply starts failing when the backup media fills).
    - It copied all file attributes including security.

    Cons:

    - It doesn't deal well with junction points, symbolic links, and hard links.
    - It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files.
    - It didn't have open-file handling, except for registry hives.
    - I guess that the file-by-file archive and replacement could leave mismatched sets of files if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup plus all intermediate incremental backups to restore. But I don't see this as much of a problem: you'd really only have a boot failure if you had a mixture of pre- and post-service-pack files, and I can run a full image backup using another tool before applying a service pack.

    Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and can also run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would also be considered. (It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently.)
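    Not a complete replacement, but a hedged sketch of how far Windows 7's built-in tooling gets: robocopy covers the browsable-plain-NTFS-mirror half of the requirement, including security descriptors and files locked by the running OS, though not the archived-old-versions half. Paths are placeholders:

        rem mirror the tree (/MIR), copying data, attributes, timestamps, NTFS
        rem security, owner and auditing info (/COPYALL), in backup mode so files
        rem locked by the running OS can still be read (/B), skipping junction
        rem points (/XJ) to avoid the link problems noted above
        robocopy C:\ D:\Mirror /MIR /COPYALL /B /XJ /R:1 /W:1 /LOG:D:\mirror.log

    Run from Task Scheduler as SYSTEM on a schedule (or on an event trigger), it approximates the run-as-a-service behaviour; the versioning and popup-warning wishes would still need a dedicated tool.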

    Read the article

< Previous Page | 315 316 317 318 319 320 321 322 323 324 325 326  | Next Page >