Search Results

Search found 2728 results on 110 pages for 'volume'.

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (kernel 3.0.0-12-server x86_64):

        => ctrl all show config

        Smart Array P410i in Slot 0 (Embedded)    (sn: removed)
           array A (SAS, Unused Space: 1335535 MB)
              logicaldrive 1 (279.4 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)

    Initially there were two 300 GB disks that were replaced by 1 TB disks, and I now have to extend the logical volume to use that extra space. However, when trying to do so I get the following warning:

        => ctrl slot=0 ld 1 modify size=max

        Warning: Extension may not be supported on certain operating systems.
                 Performing extension on these operating systems can cause data
                 to become inaccessible. See ACU documentation for details.
        Continue? (y/n)

    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a GParted live disk. Here's the config of the RAID controller in use:

        => ctrl all show detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: removed
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Hardware Revision: Rev C
           Firmware Version: 5.12
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: False
           Drive Write Cache: Disabled
           SATA NCQ Supported: True

    And the partition table:

        Number  Start   End    Size    Type      File system     Flags
         1      1049kB  274GB  274GB   primary   ext4            boot
         2      274GB   300GB  25.8GB  extended
         5      274GB   300GB  25.8GB  logical   linux-swap(v1)
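
    For reference, a hedged sketch of the usual sequence after answering yes, assuming the logical drive extension completes cleanly; the hpacucli line is the one from the question, while the offline filesystem steps are assumptions to check against the ACU documentation:

        # extend the logical drive into the unused space (as in the question)
        hpacucli ctrl slot=0 ld 1 modify size=max
        # the extended/swap partition sits at the end of the old 300 GB layout, so
        # from the GParted live disk it has to be moved or recreated before the
        # ext4 partition can grow; afterwards, still offline (device names are
        # hypothetical):
        e2fsck -f /dev/sda1          # required before an offline resize
        resize2fs /dev/sda1          # grow ext4 to fill the enlarged partition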

  • Netbook thinks it is a desktop

    - by Narcolapser
    Question: Are there packages for download that can make my netbook understand that it is a netbook, not a desktop? Info: I'm running an Acer Aspire One with Ubuntu Desktop 9.10. I tried Ubuntu Netbook Remix first, but it has graphics issues with the Aspire One, so I changed to Ubuntu Desktop. It was the only distro (after Debian, CentOS, Fedora, and Knoppix all failed me) that I managed to get working. The only thing is that it is having issues doing things that a netbook/laptop should be doing. Most notably, it will run its battery dead if I close the screen and throw it into my backpack; it seems to just stay fully on and runs itself to death. Also, it will sometimes lock up if I close the screen and come back to it 10 or 20 minutes later. It also won't retain volume settings or screen brightness when I reboot, plus a couple of other things that I can't quite put my finger on but that just seem amiss. Like I said, essentially my netbook thinks it is a desktop. How can I fix this? ~N
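
    For reference, a hedged sketch of the usual first checks on a 9.10-era GNOME install; the package and GConf key names are assumptions from memory of that era, not a confirmed fix:

        # does the kernel even see the lid switch?
        cat /proc/acpi/button/lid/*/state
        # laptop power-saving defaults
        sudo apt-get install laptop-mode-tools
        # ask gnome-power-manager to suspend on lid close (key paths are assumptions)
        gconftool-2 --type string --set /apps/gnome-power-manager/buttons/lid_battery suspend
        gconftool-2 --type string --set /apps/gnome-power-manager/buttons/lid_ac suspend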

  • Unable to write DVD-R (blank DVDs)

    - by FrozenKing
    I have a problem with my DVD drive: it can read CDs/DVDs and can write CDs and all CD/DVD-RWs, but it cannot write DVDs. The drive model is a Samsung SH-S203B; I also have a log file created by Nero Burning ROM 11. Actually, the fact is that no blank DVDs are recognized in my drive; only previously written DVDs can be read! Is this a problem with the OS, should I try cleaning the drive, or is my 4-year-old DVD drive starting to fail, since it is showing this type of symptom?

        OS = WinXP
        AV = KIS 2012
        DVD Drive = Samsung SH-S203B (also tried latest firmware and downgrade versions)

        IA32 Nero Version: 11.2.4.100  Internal Version: 11,2,4,100
        Recorder: <TSSTcorp CDDVDW SH-S203B> Version: SB04 - HA 1 TA 0 - 11.2.4.100
          Adapter driver: <Serial ATA> HA 1
          Drive buffer : 2048kB
          Bus Type : via Inquiry data
        CD-ROM: <TSSTcorp CDDVDW SH-S203B> Version: SB04 - HA 1 TA 0 - 11.2.4.100
          Adapter driver: <Serial ATA> HA 1

        18:58:10 #37 SPTI -1511 File SCSIPassThrough.cpp, Line 224
          CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
          CDB Data:   0x28 00 00 00 00 00 00 00 10 00
          Sense Key:  0x04 (KEY_HARDWARE_ERROR)
          Sense Code: 0x3E
          Sense Qual: 0x02
          Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
          Buffer x08047340: Len x8000

        18:58:10 #38 SectorVerify 20 File Cdrdrv.cpp, Line 12057
          Read errors from sector 0 to 14 <Padding>

        18:58:19 #39 SPTI -1511 File SCSIPassThrough.cpp, Line 224
          CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
          CDB Data:   0x28 00 00 00 00 10 00 00 10 00
          Sense Key:  0x04 (KEY_HARDWARE_ERROR)
          Sense Code: 0x3E
          Sense Qual: 0x02
          Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
          Buffer x08047340: Len x8000

        18:58:19 #40 SectorVerify 21 File Cdrdrv.cpp, Line 12057
          Read error at sector 15 <Virtual Multisession Info>

        18:58:19 #41 SectorVerify 20 File Cdrdrv.cpp, Line 12057
          Read errors from sector 16 to 18 <Volume Structure Descriptor Sequence>

        18:58:28 #42 SPTI -1511 File SCSIPassThrough.cpp, Line 224
          CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
          CDB Data:   0x28 00 00 00 00 20 00 00 10 00
          Sense Key:  0x04 (KEY_HARDWARE_ERROR)
          Sense Code: 0x3E
          Sense Qual: 0x02
          Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
          Buffer x08047340: Len x8000

  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

        diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

        #DiskShadow script file
        set verbose on
        #delete shadows all
        set context persistent
        writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
        set metadata C:\Backup_Scripts\shadowmetadata.cab
        begin backup
        add volume C: alias SH1
        create
        expose %SH1% P:
        exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
        end backup
        delete shadows exposed P:
        exit
        #End of script

    And this is exchangeserverbackupscript1.cmd:

        robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes due to many, many log files for each day going back to 2009. Is there any way -- I've Googled and haven't found anything -- to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, end backup should cause Exchange to "flush the transaction logs for that storage group" but only "if a successful backup of a storage group occurred", which leaves my question as: what constitutes a "successful backup", and why is what I'm doing not it?
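
    For reference, one hedged way to see whether the Exchange writer actually considered the backup successful: the ESE engine logs truncation (or "no truncation") events in the Application log after a full backup completes. The provider name and the relevance of these events are assumptions from memory, so verify against your server:

        rem look for ESE log-truncation events right after "end backup" runs
        wevtutil qe Application /q:"*[System[Provider[@Name='ESE']]]" /c:20 /rd:true /f:text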

  • SSRS2008R2 report times out, but the underlying query executes in the Management Studio

    - by Matthew Belk
    A customer of mine recently moved servers and the new server has SQL2008R2. His old server was SQL2005. The new server has substantially better CPU, RAM, and disk performance than the old, but several reports time out while executing. When I run the underlying query in the SQL Management Studio, the query executes in sub-second time. The exact error message returned via the Report Manager UI is:

        An error occurred within the report server database. This may be due to a
        connection failure, timeout or low disk condition within the database.
        (rsReportServerDatabaseError)
        Timeout expired. The timeout period elapsed prior to completion of the
        operation or the server is not responding.

    It must be noted that this database is not just analytical; it's also fairly transactional, although the transaction volume is not exceptionally high. What can I do to improve the performance of the SSRS query engine? Are there settings in the data source I can adjust, or in the SSRS config files?
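
    For reference, a hedged pointer at one timeout knob worth checking, since the error names the report server database rather than the report query itself. The path is the SQL 2008 R2 default-instance location and the element name is an assumption to verify against the instance's actual config file:

        rem the report server's own database query timeout lives in RSReportServer.config
        notepad "C:\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportServer\RSReportServer.config"
        rem look for <DatabaseQueryTimeout>120</DatabaseQueryTimeout> and raise it;
        rem per-report execution timeouts are set in Report Manager under Site Settings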

  • Cannot log in to iSCSI target - hangs after sending login details

    - by Frank
    I have an iSCSI target volume to which I am trying to connect from a CentOS Linux server. Everything works fine, but it gets stuck at login. Here are the steps I am performing:

        [root@neon ~]# iscsiadm -m node -l
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session20
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session21
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session22
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session23
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session30
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session31
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session78
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session79
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session80
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session81
        Logging in to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple)

    After this step it gets stuck, waits for some time, and then gives this output:

        Logging in to [iface: iface1, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple)
        iscsiadm: Could not login to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260].

    My iscsi.conf is this:

        node.startup = automatic
        node.session.timeo.replacement_timeout = 15    # default 120; RedHat recommended
        node.conn[0].timeo.login_timeout = 15
        node.conn[0].timeo.logout_timeout = 15
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 5
        node.session.err_timeo.abort_timeout = 15
        node.session.err_timeo.lu_reset_timeout = 20
        node.session.initial_login_retry_max = 8       # default 8; Dell recommended
        node.session.cmds_max = 1024                   # default 128; Equallogic recommended
        node.session.queue_depth = 32                  # default 32; Equallogic recommended
        node.session.iscsi.InitialR2T = No
        node.session.iscsi.ImmediateData = Yes
        node.session.iscsi.FirstBurstLength = 262144
        node.session.iscsi.MaxBurstLength = 16776192
        node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
        discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
        node.conn[0].iscsi.HeaderDigest = None
        node.session.iscsi.FastAbort = Yes

    Also, in access control, I have given full access to any IP, any CHAP user, and a fixed iSCSI initiator name. With the same access level, all other volumes on the rest of the servers are working, except this one.
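
    For reference, a hedged debugging sketch for open-iscsi on CentOS; the commands are standard, but whether the stale sessions hinted at by the "could not find session info" lines are the actual cause is an assumption:

        iscsiadm -m session -P 3                      # inspect the half-dead sessions
        service iscsi stop && service iscsid stop     # clear initiator state
        service iscsid start && service iscsi start
        # then log in to just this one target
        iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105 -p 10.10.1.1:3260 --login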

  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as fileserver. Since the datacenter is using HP kit and an EVA 4400 as SAN, we cannot have the Celerra there, as EMC support for Celerra does not cover non-EMC arrays. I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me. We are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to the Celerra. What are the other options? We need something as reliable as the Celerra with as many options as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also, we would need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we need only a subset of folders to be replicated. We used CA XOsoft for Windows servers in the past, and the Celerra has a native Celerra replication option. Thank you very much in advance, and please ask if I missed any details!

  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical/24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will be reading from only one drive at a time, and the nature of the data is such that if (when) a drive dies I can restore the data to a new disk from tape. While it would be nice to be able to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions:

    - Is the cheap drive/enclosure solution the best one for this situation?
    - Given the nature of the app and the way the data is used, does RAID make sense? If so, which one?
    - For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise?
    - For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive?
    - If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12? (See the back-of-envelope sketch below.)
    - Are there any Windows 7 volume/drive/capacity limits I should be aware of?

    Thanks
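
    A hedged back-of-envelope on the bandwidth question, using nominal interface figures (real-world numbers run lower):

        USB 3.0:  ~5 Gbit/s ≈ 500 MB/s raw, commonly ~400 MB/s in practice
        eSATA (SATA II): ~3 Gbit/s ≈ 300 MB/s
        One 7200 rpm 2 TB disk, sequential: on the order of 100-130 MB/s
        12 drives read concurrently over one USB 3.0 link: ~400/12 ≈ 33 MB/s each
        Reading one drive at a time, as described, a single shared link is not
        the bottleneck; the disk itself is.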

  • ESX hosts lose connectivity with iSCSI SAN LUNs

    - by Themist
    I've been experiencing this issue for a couple of months now: my ESX hosts lose connectivity with my iSCSI SAN VMFS volumes. As a result, the ESX hosts enter a nonresponsive mode, the associated VMs disconnect, and the only remedy is to reboot the host. This issue happens randomly. I have escalated this issue with VMware but haven't had any solution yet. I see no errors on my switches, and there are no hardware issues either. My SAN infrastructure is solid, and there are 2 paths for every VMFS volume. Did anybody else experience a similar issue?

    Edit: Here are some more details. The iSCSI SAN software is DataCore SANmelody 2.0.4.2 running on 2 HP ProLiant G5 servers. The storage attached to each of the servers is an HP MSA70, and all the iSCSI SAN volumes presented to my 4 ESX hosts are mirrored. I have two iSCSI switches, HP ProCurve 1800G-24, that are trunked together. My SANmelody servers use NC360T NICs; I team two NICs and have one cable connecting to each iSCSI switch. Each ESX server uses two NICs for the iSCSI network as well.
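
    For reference, a hedged sketch of first-line checks from an ESX host when paths drop; vmkping and the vmkernel log exist on classic ESX, but exact paths vary by version and their relevance to this outage is an assumption:

        # can the VMkernel iSCSI interface still reach the SAN portal? (IP is a placeholder)
        vmkping 10.0.0.10
        # look for iSCSI/SCSI path events around the time a host went nonresponsive
        grep -i iscsi /var/log/vmkernel | tail -50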

  • Speakers silent, headphones work in Ubuntu 9.04

    - by CarlF
    I'm running Ubuntu 9.04. It worked fine for months; then I rebooted yesterday after weeks of continuous operation, and now audio won't play through the speakers. The USB headset works fine, but the Conexant audio (CX20549) does not. Weirdly, it thinks it's playing: pavumeter shows appropriate levels and the volume looks OK in alsamixer, but there is no sound. I did find this page: http://www.eugeneteplitsky.com/fixing-silent-pulseaudio-in-ubuntu-9-04/ Unfortunately, the advice there doesn't help me. For one thing, the syntax of the alsa-base.conf file is apparently not actually documented anywhere. For another, my chipset isn't listed in the kernel.org docs he links to!

    EDIT: Would upgrading to 9.10 help? Is there a major change in the audio subsystem between 9.04 and 9.10? Any suggestions?

    EDIT 2: This is stranger than I thought. Sound works normally in Xine but is silent in Audacity, VLC and mplayer. What the?
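
    For reference, a hedged example of the alsa-base.conf line syntax the page above assumes; the option mechanism is standard modprobe configuration, but the model value is a placeholder to look up in the kernel's HD-Audio-Models.txt for the CX20549 ("Venice") codec:

        # /etc/modprobe.d/alsa-base.conf -- pass a model quirk to the HDA driver
        options snd-hda-intel model=laptop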

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option. However, for multiple users, this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course, permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain. So here's my list of concerns:

    - If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad.
    - If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable.
    - If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose.
    - If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access?

    Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but the question of where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
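
    For reference, a minimal sketch of the unlock step itself, assuming the passphrase sits in root's keychain under a hypothetical item name; this inherits exactly the physical-access weakness listed above, so it illustrates the mechanics rather than solving the key-storage problem:

        # pull the passphrase from root's keychain (item name and keychain path are hypothetical)
        PASS=$(security find-generic-password -w -s extdisk-passphrase /var/root/Library/Keychains/login.keychain)
        # unlock the CoreStorage logical volume (find the UUID via: diskutil coreStorage list)
        diskutil coreStorage unlockVolume <lvUUID> -passphrase "$PASS"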

  • Feasibility of Windows Server 2008 DFS replication over WAN link

    - by CesarGon
    We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point-to-point line. We have a Windows Server 2008 R2 domain controller on each side of the link. Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB and will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope that this will ensure that any user is directed to the closest (and fastest) server when requesting a file from the DFS folders. My concern is whether a 100-Mbps WAN link is good enough to guarantee DFS replication. I've no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't crash often), and our data transfer tests show that a 5 MB/sec transfer rate is easily achieved; this is approximately 40% of the nominal bandwidth. I am also concerned about the latency: how long will users need to wait to see a change on one side of the link after it has been made on the other side? My questions are:

    - Is this link between networks a reliable infrastructure on which to set up DFS replication?
    - What latency times would be typical (seconds, minutes, hours, days)?
    - Would you recommend that we go for DFS in this scenario, or is there a better alternative? (A back-of-envelope sizing sketch follows below.)

    Many thanks.
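
    A hedged back-of-envelope using the question's own numbers; DFS-R compression and off-hours scheduling would change these in practice:

        Initial sync of 2 TB at the measured 5 MB/s:
            2,000,000 MB / 5 MB/s = 400,000 s ≈ 111 hours ≈ 4.6 days of link time
        Steady state at 20% annual growth:
            ~400 GB/year ≈ 1.1 GB/day of new data ≈ under 4 minutes/day at 5 MB/s
        So the link is roughly sized for ongoing changes; the initial replication
        (often pre-seeded locally instead) is the expensive part.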

  • Terrible noises from subwoofer of ACER Aspire 6930 with Realtek sound chip

    - by OneWorld
    After approximately 5-15 minutes of listening to music, my subwoofer begins to make terrible noises; it's just "coughing". That began after I'd had this computer for 6 months. I have now found that I can temporarily fix the problem by "restarting" the audio stream of the application that plays music, for example by reloading the last.fm page (which reloads the Flash file). Another way to reset the audio playback is switching the speaker configuration shown below in the screenshot. According to many posts on the internet, like http://www.tomshardware.co.uk/forum/52918-20-acer-aspire-6935g-speaker-problem :

    - ACER support isn't any help
    - Exchanging hardware doesn't fix the problem
    - Even the later models have this problem

    Turning off the volume of the subwoofer is not an option for me. I still have warranty (I bought an extension of one year). I have already tried about 15 versions of the Realtek driver with no success. I am not sure, but MAYBE the problem did not occur on the original Windows Vista that was shipped with this computer. However, I removed the original Windows for good reasons. What do you suggest? Did anyone fix this problem, maybe by writing a script which resets the audio streams every 5 minutes? Shall I take the effort to deal with ACER support until they give me another model? (I won't have a computer for quite a while then, and will spend money on telephone hotlines at 1.30 EUR/min...) Here is additional info, if it helps:

    - Windows 7 64-bit (original was Windows Vista Home Premium 32-bit)
    - All specs
    - Audio driver version:
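
    For what it's worth, a hedged sketch of the "reset every few minutes" workaround the question floats: restarting the Windows Audio service tears down and recreates the stream, at the cost of a brief playback gap. The service and scheduler commands are standard Windows 7; whether this actually silences the subwoofer is an assumption:

        rem audioreset.cmd -- crude stopgap, not a fix
        net stop Audiosrv
        net start Audiosrv

        rem run it every 5 minutes (from an elevated prompt)
        schtasks /create /tn AudioReset /tr C:\audioreset.cmd /sc minute /mo 5 /ru SYSTEM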

  • Find out when a system went down?

    - by Clinton Blackmore
    I have a Mac OS X 10.5 server, with a RAID set in it, that went down due to a power outage on Thursday, and the machine is not happily booting right now*. Is it possible to find out when the machine went down while not booted off the internal drive? (I'm booted off an external drive, waiting for the RAID sets to initialize.) Normally I'd run last, but the man page doesn't indicate that I can run it against a different startup volume. It looks possible to parse /var/log/utmpx, but I don't think it'd be worthwhile to try to do that from scratch for this one-off problem.

    * I'm still trying to figure out why it isn't happy, and may ask a follow-up question. Right now I can see that UserNotificationCenter crashed repeatedly early Thursday morning, and that securityd, mdworker, and ARDAgent crash shortly after startup [I think -- I want to verify when the box went up and down]. The login window does not come up right (I think it is crashing or unable to cope with a dead securityd). The box is supposed to go down when the UPS tells it power is out; at the moment, I'm wondering if it went down and turned back on multiple times! I sure hope not.
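
    For reference, a hedged alternative to parsing utmpx: read the downed system's own syslog from the external boot. "/Volumes/Internal" is a hypothetical mount point for the internal volume, and the rotated-log naming is the usual 10.5 convention:

        grep -i shutdown "/Volumes/Internal/private/var/log/system.log"
        # Thursday's entries may already be in a compressed rotation
        bzcat "/Volumes/Internal/private/var/log/system.log.0.bz2" | grep -i shutdown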

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid, and the data is by no means critical; this is a learning experience first, time saver second. I set up a 100 GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large ISO file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed in this screen: http://imgur.com/1xlcu.jpg that my experimental target tank/iSCSI still has a lot of data in it. Assuming my ISOs are still in this pool, how would I go about recovering them? While writing this I used GetDataBack for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.
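
    For reference, a hedged first move before running any more recovery passes: snapshot and clone the zvol so nothing else writes over the old NTFS structures. The dataset name comes from the screenshot; whether the second format already overwrote the old MFT is the open question:

        # freeze the current state of the LU, then work on a clone of it
        zfs snapshot tank/iSCSI@pre-recovery
        zfs clone tank/iSCSI@pre-recovery tank/recovery
        # point the recovery tool at the clone (re-exported over iSCSI), not the original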

  • Drobo-like Linux file server - how do I do it?

    - by John Hunt
    I've been pondering for a long time how I can set up a server which operates much like the Drobo storage thing. The reason I don't actually want a Drobo is that I've heard scare stories, plus I'd like to do this on the cheap. So ideally I'm looking for something like LVM, so I can create a logical volume that spans many hard disks of varying sizes... obviously that only offers redundancy if I put the LV on a RAID array (as far as I know). I have, however, been reading about technologies such as Microsoft's Drive Extender, which duplicates files at the filesystem level and makes sure that the mirrored files are on a different physical disk. Does anyone know of, or can anyone recommend, a filesystem or method like this? It'll hopefully make much better use of the space available than RAID ever could. Performance isn't an issue; I'd just really like to make the most of the hard disks I have lying around whilst having a bit of redundancy in case a disk dies. I understand full well that this is no replacement for a backup, but I'll only be storing files of medium importance and using the NAS itself as a backup of my main PC and other systems. Thanks in advance! I'm hoping ZFS or btrfs or something can do something clever for me :)
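
    For reference, a minimal LVM sketch of the "one volume across mixed-size disks" half of the idea; as the question already notes, this gives no redundancy on its own (device names are placeholders):

        pvcreate /dev/sdb /dev/sdc /dev/sdd          # whole disks of any size
        vgcreate storage /dev/sdb /dev/sdc /dev/sdd  # pool them into one volume group
        lvcreate -l 100%FREE -n bigvol storage       # one LV spanning all of them
        mkfs.ext4 /dev/storage/bigvol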

  • Exchange backup verification shows no files

    - by Olaf
    [SBS 2003 SP2] If I read the Exchange log, it shows that the backup contains no files, just folders, though the total size seems to be OK. If I try to restore, the folders are empty... And the 14 files that were backed up disappeared in the verification log?! Other backups on the same medium turned out to be fine. Any idea what's wrong here? This is my log:

        Backup Status
        Operation: Backup
        Active backup destination: File
        Media name: "testbackup.bkf created 2-6-2010 at 11:25"
        Volume shadow copy creation: Attempt 1.
        Backup of "SERVER1\Microsoft Information Store\First Storage Group"
        Backup set #1 on media #1
        Backup description: "Set created 2-6-2010 at 11:25"
        Media name: "testbackup.bkf created 2-6-2010 at 11:25"
        Backup Type: Normal
        Backup started on 2-6-2010 at 11:26.
        Backup completed on 2-6-2010 at 12:21.
        Directories: 4
        Files: 14
        Bytes: 26.842.932.104
        Time: 55 minutes and 38 seconds

        Verify Status
        Operation: Verify After Backup
        Active backup destination: File
        Active backup destination: \backup\Server1\Backup Files\testbackup.bkf
        Verify of "SERVER1\Microsoft Information Store\First Storage Group"
        Backup set #1 on media #1
        Backup description: "Set created 2-6-2010 at 11:25"
        Verify started on 2-6-2010 at 12:21.
        Verify completed on 2-6-2010 at 12:47.
        Directories: 4
        Files: 0
        Different: 0
        Bytes: 26.842.932.104
        Time: 25 minutes and 46 seconds

  • LVM and cloning HDs

    - by jcea
    Using Linux, I have several backup levels. One of them is a periodic sector-by-sector copy (using dd) of my laptop hard disk to an external USB disk. Yes, I have other backups too, like remote rsync. This approach (the disk dd) is OK when cloning a HDD with no LVM volumes, since I can plug in the external disk anytime and mount the partitions simply by mounting /dev/sdb* instead of /dev/sda*. Trivial and handy. Today I moved ALL of my hard disk (including /boot) to LVM. Everything works fine. I will stress it for a couple of days, and then I will do a sector-by-sector copy to my external hard disk. Now I have a problem, I guess. If in the future I plug in the external USB HDD to recover any file, the OS will detect a duplicate LVM configuration, with the same name and the same UUID. Even after a vgrename (which LVM would be renamed, the internal HDD's or the external HDD's?), the cloned UUID will not change. Is there any command to change the name and UUID? Ideally I would clone the HDD and then change the LVM group name and its UUID, but I don't know how to do it. Another related issue: in the past I have booted my laptop using the external disk, using the BIOS boot menu and changing GRUB entries manually to boot from /dev/sdb instead of /dev/sda. But now my current GRUB configuration boots directly from an LVM logical volume, something like set root='(LVM-root)' in my grub.cfg. So... what is going to happen with duplicated volumes? Any suggestion? I guess I could repartition my external hard disk and change my backup strategy from dd to rsync, but this disk has Windows installed too, and I would really like to have a physical "real" copy.
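
    For reference, a hedged sketch of the rename-and-reuuid step: vgimportclone, which ships with newer lvm2 packages, renames a cloned VG and regenerates its PV/VG UUIDs in one go (availability on a given distro is worth checking; the device name is a placeholder):

        # run against the *external* clone's PV, with the internal VG still active
        vgimportclone --basevgname backupvg /dev/sdb2
        # the manual route is roughly: vgrename, then pvchange --uuid and vgchange --uuid
        # on the clone's devices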

  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates, and I installed Nginx by first adding

        deb http://nginx.org/packages/ubuntu/ lucid nginx
        deb-src http://nginx.org/packages/ubuntu/ lucid nginx

    to the /etc/apt/sources.list file, just as was suggested by the official wiki, and then running

        sudo apt-get update
        sudo apt-get install nginx

    which installed Nginx with all the standard modules. But now I think I could make good use of one or two of the optional Nginx modules, like the gzip precompression module or some security-related one. So far, I see two ways of adding an optional module to Nginx: one is compiling and installing from the source code, and the other is described in this article. So, which of the ways should I choose so that automatic updates still run for and apply to Nginx and its optional modules? Or should I create a cron job with a command/script specific to Nginx instead of using the unattended-upgrades utility? Can I choose between volume updates and security-only updates to be automatically applied to the standard and optional modules? And finally, is there a possibility to automatically update Nginx's modules on the fly (without any connections having been dropped), like the documentation suggests is possible with

        sudo kill -USR2 $( cat /run/nginx.pid )

    P.S. Actually, I'm not certain whether the unattended-upgrades utility would automatically update the standard modules in the first place; not enough time has passed since Nginx was installed to say for sure.
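
    For reference, a hedged way to check the P.S. directly: unattended-upgrades only acts on repositories whitelisted in its Allowed-Origins list, so the nginx.org repo has to appear there. The exact origin syntax varies between unattended-upgrades versions, so treat the second command as a pointer rather than a recipe:

        # what Origin/Suite does the installed nginx package come from?
        apt-cache policy nginx
        # is that origin whitelisted for automatic upgrades?
        grep -n -A6 "Allowed-Origins" /etc/apt/apt.conf.d/50unattended-upgrades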

  • Recommendations for handling Directory Harvesting spam on Exchange 2003

    - by Aaron Alton
    Our Exchange server is getting slammed with anywhere between 450,000 and 700,000 spam messages per day. We receive about 1,700 legitimate messages in the same time frame. Roughly 75% of the spam is directory harvesting. We currently have GFI MailEssentials installed. To its credit, it's doing a very good job, but the sheer volume of spam we're receiving, and the number of connections our Exchange server is making, is preventing legitimate email from being delivered in a timely manner. GFI is set up to check for directory harvesting at the SMTP level, which I presume intercepts the mail before it hits the Exchange services or goes through SMSE. This "module" is ordered at the top of the list, so (hopefully) dealing with the harvesting is consuming a minimum amount of server resources and bandwidth. My question is: is there anything I can do to prevent our Exchange server's connection pool from being eaten up by these spam hosts? We had to limit the number of concurrent connections being made by Exchange, because it was consuming all of our bandwidth. Thanks in advance.
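
    For reference, a hedged sketch of SMTP tarpitting, which slows harvesting bots down at the connection level on Exchange 2003 SP2 once recipient filtering is enabled (described in Microsoft KB 899492); the registry path and value are from memory of that KB, so verify before applying:

        rem delay responses to invalid-recipient probes by 10 seconds
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\SMTPSVC\Parameters" /v TarpitTime /t REG_DWORD /d 10
        rem then restart the SMTP service for the change to take effect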

  • Distortion problem with Creative audio equalizer

    - by e-t172
    Hi, I have a problem with the Creative Console EQ, and I don't know whether it's fixable (is the EQ software or hardware on these cards?). Basically, I get enormous distortion with certain sounds in the 30-125 Hz range. When this happens I hear a sort of "frrzzzz" (sorry, I'm French and don't really know the correct English word for it) on top of the original sound. I have a Sound Blaster Audigy SE. I'm using the Daniel_K drivers on Windows 7 Professional x64. All the effects are disabled except the EQ. Steps to reproduce:

    1. Put the card in 24-bit/96 kHz mode. The problem is also present at 16-bit/48 kHz, but seems to be less audible.
    2. In the Creative Console, use the EQ settings shown in the screenshot.
    3. Play this sound at a reasonably high volume. You should hear distortion on the two "booms", especially the second one.
    4. Disable the Creative EQ.
    5. Play the sound in an application with an integrated EQ (e.g. foobar2000, ffdshow) using the same EQ parameters. There is no distortion.

    Conclusion: the Creative EQ is broken. Is anyone having the same problem? I'm also interested in results with other Creative cards, or even other brands' soundcards with a similar EQ feature.

  • running chkdsk /F on a large mounted NTFS image file gets BSOD (Windows Vista)

    - by Citizentools
    Using ddrescue, I created ISO files from the C: and D: drives of my Windows XP laptop's hard disk (after the laptop stopped booting and chkdsk etc. wouldn't fix it). I was able to mount the 60 GB D.iso file using OSFMount, and successfully recreated the D: drive on another laptop. The C.iso image is more problematic. ddrescue left about 3 MB of 85 GB total unrecovered after multiple passes (no big worries about this), and I'm able to mount it with OSFMount on a Windows Vista laptop. However, when I run chkdsk /F /V on the mounted drive (which was mounted as H:), I consistently get a blue screen (BSOD). chkdsk makes it through the first three passes, including index fixing and security descriptor fixes, without errors, but triggers a BSOD when it attempts to fix the volume records or bitmap. If I attempt to fix the drive by clicking on Properties > Tools > Error checking > Check Now > Automatically fix file system errors, I get an alert box reading "Windows was unable to complete the disk check." I'd try a tool other than OSFMount, but it's the only thing I've found so far that will mount large ISO files, and it has worked for me up to now in this process.

    [Update 2011-11-13 18:41 EST] Just ran the same process using the original Windows XP laptop, with a different internal drive, and chkdsk worked like a champ. So the question is still interesting, but decidedly less urgent.
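
    For reference, a hedged alternative route on a Linux box, since the image came from ddrescue anyway; ntfsfix is far weaker than chkdsk, so this is a way to poke at the image without OSFMount rather than a substitute repair, and it assumes C.iso is a partition image, not a whole-disk image:

        # attach the image to a loop device and try ntfs-3g's basic consistency fix
        losetup -f --show C.iso      # prints the device, e.g. /dev/loop0
        ntfsfix /dev/loop0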

  • Controller Error: Do I need to worry?

    - by Kryten
    I have an HP Pavilion dv5224ea laptop with Windows 7 on it. Recently I discovered an error in Event Viewer: "The driver detected a controller error on \Device\Ide\IdePort1." More details:

        Provider: atapi
        EventID: 11 (Qualifiers: 49156)
        Level: 2
        Keywords: 0x80000000000000
        TimeCreated: 2010-03-07T12:43:07.090197600Z
        EventRecordID: 30198
        Channel: System
        Computer: Alistair-Win7
        EventData: \Device\Ide\IdePort1
        0000100001000000000000000B0004C002000000850100C00000000000000000000000000000000000000000000000000000000004100000

        Binary data, in words:
        0000: 00100000 00000001 00000000 C004000B
        0008: 00000002 C0000185 00000000 00000000
        0010: 00000000 00000000 00000000 00000000
        0018: 00000000 00001004

        In bytes:
        0000: 00 00 10 00 01 00 00 00 ........
        0008: 00 00 00 00 0B 00 04 C0 .......À
        0010: 02 00 00 00 85 01 00 C0 ......À
        0018: 00 00 00 00 00 00 00 00 ........
        0020: 00 00 00 00 00 00 00 00 ........
        0028: 00 00 00 00 00 00 00 00 ........
        0030: 00 00 00 00 04 10 00 00 ........

    Event Viewer is recording A LOT of these errors (sometimes 13, one after the other!). Do I need to worry? What does this error mean? What device could \Device\Ide\IdePort1 be? What is an ATAPI error? Do I need to reinstall Windows? I generally find this occurs when I try to back up my machine (using Windows Backup) or when using a program that uses Volume Shadow Copy. I have run sfc: no problems. There are no device errors in Device Manager. I have also run vssadmin list writers: no problems. What's going on??? Would it be a good idea to reinstall Windows 7?
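
    For reference, a hedged first check: atapi event 11 on an IDE port usually implicates the drive or its cabling rather than Windows itself, and a SMART readout is cheap evidence either way. smartmontools is a third-party install and the device name is an assumption:

        rem look especially at Reallocated_Sector_Ct and UDMA_CRC_Error_Count
        smartctl -a /dev/sda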

  • What is the sysadmin's dream network printer? 6-8k pg/mo. Xerox, OkiData, Lexmark and HP are all fail

    - by Jacob
    How do I find out which printer brand and/or type doesn't suck? This information is hard to find, and manufacturers' websites won't reveal any issues with certain printers. After 10 years of dealing with network shared printers, I can't say that I have been impressed with any of the printer brands I've seen. Brother's little laser MFPs have been close to ideal for low volume, but that is it, period. OkiData, Lexmark, HP, Xerox solid ink printers: they all sucked in one way or another. Currently I'm looking to replace a Xerox ColorQube 8570 because it fails to print on a regular basis. Sometimes it doesn't even boot VxWorks fully; it just hangs at 2% or whatever. I've used Xerox 8860 MFPs and they sucked just as bad. I won't talk about inkjets here; that's most likely not what I'm looking for. We currently spend about $4k on paper and ink per year for this printer, at up to 6-8k pages per month, letter, mostly black and white, low color usage. I want the printer to feed paper correctly, not crash and burn when a PDF isn't to its taste (my favorite Xerox problem here), and to have decent drivers for Windows and OS X. Print quality is not of the utmost importance, but paper does get sent to customers.

  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, testing, etc. I use a handful of components to help my migrations. For my unattend.xml I like:

    - Windows-International-Core-WinPE -- good for setting locales in the preboot environment (en-US for US English speakers). Keeps me from having to set these on the initial image boot.
    - Windows-Setup_neutral -- I like the WindowsDeploymentServices -> ImageSelection setting, especially if I'm only pushing a single image. This keeps me from having to select it each time.

    My OOBE_Unattend.xml is really useful, and I barely have to touch anything during this part of the installation:

    - Windows-Shell-Setup_neutral -- lets me put in a ProductKey for my MAK volume license (very useful and time saving). I can also set the TimeZone for the installation.
    - Windows-UnattendedJoin_neutral -- I couldn't live without this component. It joins the machine to my domain before logging in as a domain administrator. I would hate to not have this ability. (A hedged sketch of this component follows below.)
    - Windows-International-Core -- again, this component really speeds up the OOBE process. I configure my locales and time zone so I don't have to do it by hand when the machine enters OOBE.
    - Windows-Shell-Setup -- allows you to configure an autologon when the new machine is finished. I like to log on as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. Also, the OOBE component under here lets me skip the EULA, hide wireless setup, and set my default NetworkLocation. All of this makes the entire OOBE totally automated.

    What are some other good components I am missing, as far as helping me get these images pushed and configured as quickly as possible?
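
    For reference, a hedged sketch of the UnattendedJoin component mentioned above, as it would sit in an answer file; the element layout is from memory of the Windows 7-era WAIK schema and the values are placeholders, so verify in Windows SIM before using:

        <!-- domain join with a low-privilege join account (hypothetical names) -->
        <component name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
          <Identification>
            <Credentials>
              <Domain>CORP.EXAMPLE</Domain>
              <Username>wds-join</Username>
              <Password>********</Password>
            </Credentials>
            <JoinDomain>CORP.EXAMPLE</JoinDomain>
            <MachineObjectOU>OU=Workstations,DC=corp,DC=example</MachineObjectOU>
          </Identification>
        </component>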
