Search Results

Search found 8937 results on 358 pages for 'disk defragmenting'.


  • Linux RAID: Replacing Failed Drive...permanently

    - by user137519
    Okay, odd question here. I have a server with RAID 5. A drive failed in a physically odd way: the machine boots and the drive is seen by the BIOS, but no partition can be seen on it consistently (it comes and goes). With 2 out of 3 drives working, I made a new spare disk and added it, and the RAID 5 rebuilt clean. All appears well, but when I reboot it keeps trying to use the 2nd drive, which doesn't give any partition data, so of course the RAID 5 gets 2 out of 3 again. The status of my drives is as follows: /dev/sda2: good; /dev/sdb2: bad (the drive has a physical problem, so no partition data); /dev/sdc2: good; /dev/sdd2: good. Every time I reboot, mdadm seems to keep trying to use /dev/sdb, which has the physical failure (although it spins and is detected). /dev/sdd is the new drive I created. I added /dev/sdd to the RAID and it rebuilds the array, but this isn't remembered across a reboot, so it keeps listing /dev/sda and /dev/sdc and doesn't use the perfectly good /dev/sdd until I re-add it manually. I've tried removing the dead drive with the mdadm tool, but as it cannot see the /dev/sdb partitions it will not fail or remove it (it says the partition doesn't exist). The /etc/mdadm.conf was generated automatically on the original OS install and only lists: DEVICE partitions MAILADDR root ARRAY /dev/md2 super-minor=2 ARRAY /dev/md0 super-minor=0 ARRAY /dev/md1 super-minor=1 (basically just the arrays to assemble on boot). I need to remove this semi-dead drive (/dev/sdb), but I'd prefer to know why this is happening before I do. Any ideas or suggestions? I suppose I could attempt to clone/replace /dev/sdb (the partitions on the drive show up, then disappear shortly after), but given the "Cheshire cat" behaviour of its partitions this seems risky to me, and as I have a working spare it seems unnecessary. Thanks in advance for your insight.
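
    For reference, a minimal sketch of the mdadm steps that would normally make such a replacement stick across reboots, assuming the affected array is /dev/md2 and the member names from the question (the exact devices and partitions are illustrative, not a prescription for this particular box):

    # Mark the flaky member failed and pull it out of the array
    mdadm --manage /dev/md2 --fail /dev/sdb2
    mdadm --manage /dev/md2 --remove /dev/sdb2
    # If mdadm can no longer see the partition at all, drop members that have vanished
    mdadm /dev/md2 --remove detached
    # Add the replacement member so it becomes a permanent part of the array
    mdadm --manage /dev/md2 --add /dev/sdd2
    # Append the current layout to mdadm.conf so assembly at boot prefers these members
    mdadm --detail --scan >> /etc/mdadm.conf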

    Read the article

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough) and I am only looking to encrypt a small number of files. The files I wish to encrypt are mostly password lists and SSH keys for servers. It is impossible to keep track of the ever-changing passwords (and SSH keys) for my servers, so I need to keep a list of them. Obviously this list needs to be secure, and also portable (I work from multiple locations). At the moment, I use a 10MB encrypted disk image on my Mac (standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure and I am very happy using it. However, the data is not very portable. I would like to be able to access it from other machines (especially ones running Linux), and I am aware that there are quite a few issues trying to mount an encrypted .dmg on Linux. An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution as it requires gpg to be installed on every system. So, what other solutions exist, and which ones would you consider to be the most elegant? Ty
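
    A minimal sketch of the tar + gpg approach mentioned above (the file names are placeholders):

    # Pack the sensitive files and encrypt the archive symmetrically with AES-256
    tar czf - passwords.txt ssh-keys/ | gpg --symmetric --cipher-algo AES256 -o secrets.tar.gz.gpg
    # Decrypt and unpack on any machine that has gpg installed
    gpg --decrypt secrets.tar.gz.gpg | tar xzf -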

    Read the article

  • FDE / SSD - partition and leave some unencrypted?

    - by Web Design Hero
    Just bought a used beast of a desktop PC. The system drive is set up as RAID 0 of Intel 510 SSDs, 128 GB each. I will probably not have too many programs beyond Office, and maybe Adobe CS if I spring for it; I will be keeping big data on a regular HDD. My question is about setting up TrueCrypt with my configuration. I have not previously done full disk encryption, but I feel that it's probably a good idea. I have done some speed tests using file containers on the HDD and the SSD with TrueCrypt. While there is a huge hit with the SSDs and TrueCrypt, it still outperforms the HDD on its own by a good margin, so I think I will be okay for my needs with TrueCrypt. I have seen in a few places that they recommend partitioning the drive and leaving some of the SSD outside TrueCrypt; does this really make a difference? If so, how much should I leave? Will there be any issue in the RAID 0 configuration? I am not really concerned about the wear-levelling issue; I would rather lose data and be secure. But since I don't necessarily need all that space, I would like to optimize my setup for security and speed.

    Read the article

  • How to access files on a USB-connected NTFS disk removed from a Win7 notebook?

    - by yosh m
    My daughter seems to have fried the motherboard in her Lenovo notebook. The disk seems to be fine. I removed the disk and used a universal disk-to-USB kit to attach it to another computer. The disk is recognized fine and I can peruse it in Windows Explorer. The problem is that the files she would like to recover are located in places that Windows refuses to let me access. When I try, for example, to enter the directory "Documents and Settings" it gives me an "Access is denied" error. Same thing when I try to go into the various user directories and other locations. I thought I would try creating a Ghost image and retrieving the files from that, but Ghost seems to croak when I try to run it; apparently it doesn't like accessing the disk via a USB connection (even though I've told it to install the drivers for USB). Any other ideas about how to get to the files I need, either through Windows or perhaps some other OS that I could boot from a CD that can read an NTFS disk? Thanks, Yosh
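
    A sketch of the usual way to take ownership of those folders and grant yourself access, run from an elevated command prompt on the machine the drive is attached to (the drive letter E: and the folder name are assumptions):

    rem Take ownership of the folder tree recursively, answering yes to prompts
    takeown /f "E:\Documents and Settings" /r /d y
    rem Grant the current user full control so the files can be copied off
    icacls "E:\Documents and Settings" /grant %USERNAME%:F /t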

    Read the article

  • Using AMDU to extract datafiles from a diskgroup that cannot be mounted

    - by Liu Maclean
    AMDU is a utility that ships with Oracle ASM; its full name is the ASM Metadata Dump Utility (AMDU). Its main uses are: 1. dumping the metadata of ASM disks; 2. extracting files stored in ASM out to ordinary OS files, without requiring the diskgroup to be mounted; 3. printing metadata blocks, either C-struct style or as a hex dump. This post looks at the second use: extracting files from an ASM diskgroup with AMDU even when the diskgroup cannot be mounted, which makes it a very handy recovery tool. AMDU is shipped starting with 11g, but it can also be used against 10g ASM. Consider a scenario where the database's SPFILE, CONTROLFILE and DATAFILEs all live in an ASM diskgroup, and an ASM ORA-600 error prevents the diskgroup from mounting; AMDU can then extract those files directly from the ASM disks.

    Example 1: extracting the SPFILE, CONTROLFILE and DATAFILEs.

    Assuming we already have the SPFILE (or a PFILE built from it), we can read the control_files parameter from it:

    SQL> show parameter control_files

    NAME           TYPE    VALUE
    -------------- ------- ------------------------------
    control_files  string  +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955

    The control_files parameter gives the controlfile's path inside ASM; in +DATA/prodb/controlfile/current.260.794687955, 260 is the controlfile's file number within the +DATA diskgroup. We also need the ASM disk discovery path, which can be read from the ASM instance's SPFILE as the asm_diskstring parameter.

    [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zip
    Archive: amdu_X86-64.zip
    inflating: libskgxp11.so
    inflating: amdu
    inflating: libnnz11.so
    inflating: libclntsh.so.11.1
    [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./
    [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.260
    amdu_2009_10_10_20_19_17/
    AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0006: '/dev/asm-disk10'
    AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0003: '/dev/asm-disk5'
    AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0002: '/dev/asm-disk6'
    [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/
    [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls
    DATA_260.f report.txt
    [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -l
    total 9548
    -rw-r--r-- 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f
    -rw-r--r-- 1 oracle oinstall    9441 Oct 10 20:19 report.txt

    The extracted controlfile is DATA_260.f; point the instance at it and the database can be brought to mount:

    SQL> alter system set control_files='/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f' scope=spfile;

    System altered.

    SQL> startup force mount;
    ORACLE instance started.

    Total System Global Area 1870647296 bytes
    Fixed Size                  2229424 bytes
    Variable Size             452987728 bytes
    Database Buffers         1409286144 bytes
    Redo Buffers                6144000 bytes
    Database mounted.

    SQL> select name from v$datafile;

    NAME
    --------------------------------------------------------------------------------
    +DATA/prodb/datafile/system.256.794687873
    +DATA/prodb/datafile/sysaux.257.794687875
    +DATA/prodb/datafile/undotbs1.258.794687875
    +DATA/prodb/datafile/users.259.794687875
    +DATA/prodb/datafile/example.265.794687995
    +DATA/prodb/datafile/mactbs.267.794688457

    6 rows selected.

    With the database mounted, v$datafile lists the datafile names, and each name again carries the file's diskgroup file number; running ./amdu -diskstring '/dev/asm*' -extract for each of them pulls the datafiles out in the same way:

    [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.256
    amdu_2009_10_10_20_22_21/
    AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0006: '/dev/asm-disk10'
    AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0003: '/dev/asm-disk5'
    AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0002: '/dev/asm-disk6'
    [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/
    [oracle@mlab2 amdu_2009_10_10_20_22_21]$ ls
    DATA_256.f report.txt
    [oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f

    DBVERIFY: Release 11.2.0.3.0 - Production on Sat Oct 10 20:23:12 2009
    Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

    DBVERIFY - Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f
    DBVERIFY - Verification complete

    Total Pages Examined         : 90880
    Total Pages Processed (Data) : 59817
    Total Pages Failing   (Data) : 0
    Total Pages Processed (Index): 12609
    Total Pages Failing   (Index): 0
    Total Pages Processed (Other): 3637
    Total Pages Processed (Seg)  : 1
    Total Pages Failing   (Seg)  : 0
    Total Pages Empty            : 14817
    Total Pages Marked Corrupt   : 0
    Total Pages Influx           : 0
    Total Pages Encrypted        : 0
    Highest block SCN            : 1125305 (0.1125305)

    Read the article

  • Bad motherboard / controller / HDs?

    - by quidpro
    On a leased server, I am running into some timing issues with an application that requires precise timing. The server is a dual Xeon E5410 on a Supermicro X7DVL-3 motherboard under CentOS 5.5 x64. The application is timer-sensitive and keeps detecting drift, whether under load or at idle, but especially under load. I did some investigating with atop and dd and found some mind-blowing numbers. Mind you, I am no Linux guru, but something sure seems out of whack. I ran:

    dd bs=4096 if=/dev/zero of=/bigtestfile

    to generate disk activity. Regardless of whether I wrote to sda or sdb, the DSK value in atop would go over 100%, at one point peaking at 1700%.

    DSK | sdb | busy 675% | read 0 | write 110 | avio 78 ms |

    Here are the smartctl outputs:

    # smartctl -A /dev/sda
    smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
    Home page is http://smartmontools.sourceforge.net/

    === START OF READ SMART DATA SECTION ===
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000b 200   200   051    Pre-fail Always  -           0
      3 Spin_Up_Time            0x0007 165   165   021    Pre-fail Always  -           2750
      4 Start_Stop_Count        0x0032 100   100   040    Old_age  Always  -           21
      5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
      7 Seek_Error_Rate         0x000a 200   200   051    Old_age  Always  -           0
      9 Power_On_Hours          0x0032 065   065   000    Old_age  Always  -           25831
     10 Spin_Retry_Count        0x0012 100   253   051    Old_age  Always  -           0
     11 Calibration_Retry_Count 0x0012 100   253   051    Old_age  Always  -           0
     12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           21
    194 Temperature_Celsius     0x0022 116   093   000    Old_age  Always  -           27
    196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
    197 Current_Pending_Sector  0x0012 200   200   000    Old_age  Always  -           0
    198 Offline_Uncorrectable   0x0012 200   200   000    Old_age  Always  -           0
    199 UDMA_CRC_Error_Count    0x000a 200   253   000    Old_age  Always  -           0
    200 Multi_Zone_Error_Rate   0x0008 200   200   051    Old_age  Offline -           0

    # smartctl -A /dev/sdb
    smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
    Home page is http://smartmontools.sourceforge.net/

    === START OF READ SMART DATA SECTION ===
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f 200   200   051    Pre-fail Always  -           0
      3 Spin_Up_Time            0x0003 180   180   021    Pre-fail Always  -           3958
      4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           22
      5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
      7 Seek_Error_Rate         0x000f 200   200   051    Pre-fail Always  -           0
      9 Power_On_Hours          0x0032 068   068   000    Old_age  Always  -           24087
     10 Spin_Retry_Count        0x0013 100   253   051    Pre-fail Always  -           0
     11 Calibration_Retry_Count 0x0013 100   253   051    Pre-fail Always  -           0
     12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           21
    194 Temperature_Celsius     0x0022 122   096   000    Old_age  Always  -           25
    196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
    197 Current_Pending_Sector  0x0012 200   200   000    Old_age  Always  -           0
    198 Offline_Uncorrectable   0x0010 200   200   000    Old_age  Offline -           0
    199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
    200 Multi_Zone_Error_Rate   0x0009 200   200   051    Pre-fail Offline -           0

    Any idea what's wrong here? Bad motherboard?
It would seem rare for both drives to be going bad at once (smartctl says they PASS), so that leaves the mobo as the culprit in my eyes.
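
    One way to separate page-cache effects from real device latency is to bypass the cache on the test write and watch per-device service times while it runs; a sketch, with the file name and block count as placeholders:

    # Write directly to the device-backed file, bypassing the page cache
    dd if=/dev/zero of=/bigtestfile bs=4096 count=262144 oflag=direct
    # In another shell, watch per-device utilisation and await while the write runs
    iostat -x 1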

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 Server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge a shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console. These two servers don't serve a large number of users (one serves no more than 6 users, and the other serves around 20 users); they are in the same domain, but on different physical hardware (and at different sites). They don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections. This is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003, both of which were physical installs however). Today, when one of the servers crashed, I noticed an entry in the event log which looked similar to the following:

    Log Name: Application
    Source: ESENT
    Date: 27/11/2012 10:25:55
    Event ID: 533
    Task Category: General
    Level: Warning
    Keywords: Classic
    User: N/A
    Computer: HAL-FS-01.example.com
    Description: DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db: A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.

    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to be written to disk before the server was powered off, for the reason mentioned in the message itself). I found the above message by searching back further in the event log. Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this and managed to resolve the problem?
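
    For digging that warning back out after a crash, one option is to query the Application log for ESENT event 533 directly rather than scrolling through it; a sketch (the log name and XPath filter follow the event shown above, the output flags are a matter of taste):

    wevtutil qe Application /q:"*[System[Provider[@Name='ESENT'] and (EventID=533)]]" /f:text /rd:true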

    Read the article

  • Converting an EC2 AMI to vmdk image

    - by Reed G. Law
    I've come quite close to getting Amazon Linux to boot inside VirtualBox, thanks to this answer and these websites. A quick overview of the steps I've taken:

    1. Launch an EC2 instance with the Amazon Linux 2011.09 64-bit AMI.
    2. dd the contents of the EBS volume over ssh to a local image file.
    3. Mount the image file as a loopback device and then to a local mount point.
    4. Create a new empty disk image file, partition it with an offset for a bootloader, and create an ext4 filesystem.
    5. Mount the new image's partition and copy everything over from the EC2 image.
    6. Install grub (using Ubuntu's grub-legacy-ec2 package, not grub2).
    7. Convert the image file to vmdk using qemu-img.
    8. Create a new VirtualBox VM with the vmdk.

    Now the VM boots, grub loads, and the kernel is found. But it fails when it tries to mount the root device:

    dracut Warning: No root device "block:/dev/xvda1" found
    dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
    dracut Warning: Signal caught!
    dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
    Kernel panic - not syncing: Attempted to kill init!
    Pid: 1, comm: init Not tainted 2.6.35.14-107.1.39.amzn1.x86_64 #1

    I have tried changing /boot/grub/menu.lst to find the root device by label and by UUID, but nothing works. I'm guessing the Xen kernel is not compatible with VirtualBox. The reasoning behind all this effort is to make a Vagrant box that is as close as possible to the production environment, so deploys can be tested locally. I know it's cheap to do test runs on EC2, but poor connectivity often ruins the experience. Plus it would be really nice to have a virtual machine with the production environment so that co-workers don't have to install everything under the sun just to get up and running with app development. If I were to try running a different kernel, what kernel could I get that is as close as possible to Amazon Linux 2011.09?
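
    A compressed sketch of the imaging and conversion steps described above (the host name, device and file names are placeholders):

    # Pull a raw image of the EBS root volume over ssh
    ssh ec2-user@my-instance 'sudo dd if=/dev/xvda1 bs=1M' > amazon-linux.img
    # Loopback-mount it so files can be copied onto the new, partitioned image
    sudo mount -o loop amazon-linux.img /mnt/ec2
    # Once the new disk image is populated and grub is installed, convert it for VirtualBox
    qemu-img convert -f raw -O vmdk newdisk.img newdisk.vmdk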

    Read the article

  • Use Drive Mirroring for Instant Backup in Windows 7

    - by Trevor Bekolay
    Even with the best backup solution, a hard drive crash means you'll lose a few hours of work. By enabling drive mirroring in Windows 7, you'll always have an up-to-date copy of your data.

    Windows 7's mirroring (which is only available in Professional, Enterprise, and Ultimate editions) is a software implementation of RAID 1, which means that two or more disks are holding the exact same data. The files are constantly kept in sync, so that if one of the disks fails, you won't lose any data. Note that mirroring is not technically a backup solution, because if you accidentally delete a file, it's gone from both hard disks (though you may be able to recover the file). As an additional caveat, having mirrored disks requires changing them to "dynamic disks," which can only be read within modern versions of Windows (you may have problems working with a dynamic disk in other operating systems or in older versions of Windows). See this Wikipedia page for more information.

    You will need at least one empty disk to set up disk mirroring. We'll show you how to mirror an existing disk (of equal or lesser size) without losing any data on the mirrored drive, and how to set up two empty disks as mirrored copies from the get-go.

    Mirroring an Existing Drive

    Click on the start button and type partitions in the search box. Click on the Create and format hard disk partitions entry that shows up. Alternatively, if you've disabled the search box, press Win+R to open the Run window and type in: diskmgmt.msc

    The Disk Management window will appear. We've got a small disk, labeled OldData, that we want to mirror in a second disk of the same size.

    Note: The disk that you will use to mirror the existing disk must be unallocated. If it is not, then right-click on it and select Delete Volume… to mark it as unallocated. This will destroy any data on that drive.

    Right-click on the existing disk that you want to mirror. Select Add Mirror…. Select the disk that you want to use to mirror the existing disk's data and press Add Mirror. You will be warned that this process will change the existing disk from basic to dynamic. Note that this process will not delete any data on the disk!

    The new disk will be marked as a mirror, and it will start copying data from the existing drive to the new one. Eventually the drives will be synced up (it can take a while), and any data added to the E: drive will exist on both physical hard drives.

    Setting Up Two New Drives as Mirrored

    If you have two new equal-sized drives, you can format them to be mirrored copies of each other from the get-go. Open the Disk Management window as described above. Make sure that the drives are unallocated. If they're not, and you don't need the data on either of them, right-click and select Delete volume….

    Right-click on one of the unallocated drives and select New Mirrored Volume…. A wizard will pop up. Click Next. Click on the drives you want to hold the mirrored data and click Add. Note that you can add any number of drives. Click Next. Assign it a drive letter that makes sense, and then click Next. You're limited to using the NTFS file system for mirrored drives, so enter a volume label, enable compression if you want, and then click Next. Click Finish to start formatting the drives. You will be warned that the new drives will be converted to dynamic disks.

    And that's it! You now have two mirrored drives. Any files added to E: will reside on both physical disks, in case something happens to one of them.
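
    For those who prefer the command line, a rough diskpart equivalent of creating a new mirrored volume (run from an elevated prompt; the disk numbers, label and drive letter here are assumptions for illustration):

    diskpart
    list disk
    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    create volume mirror disk=1,2
    format fs=ntfs quick label=Mirror
    assign letter=E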
    Conclusion: While the switch from basic to dynamic disks can be a problem for people who dual-boot into another operating system, setting up drive mirroring is an easy way to make sure that your data can be recovered in case of a hard drive crash. Of course, even with drive mirroring, we advocate regular backups to external drives or online backup services.

    Read the article

  • Disk Search / Sort Algorithm

    - by AlgoMan
    Given a range of numbers, say 1 to 10,000, with input arriving in random order. Constraint: at any point only 1000 numbers can be loaded into memory. Assumption: the numbers are unique. I propose the following efficient "When-Required-Sort" algorithm. We write the numbers into files, each designated to hold a particular range of numbers. For example, File1 will hold 0-999, File2 will hold 1000-1999, and so on, each in random order. If a particular number, say 2535, is being searched for, then we know that the number is in File3 (a binary search over the ranges finds the file). File3 is then loaded into memory and sorted, say with quicksort (optimized to switch to insertion sort when the array size is small), and then we search for the number in this sorted array using binary search. When the search is done we write the sorted file back, so in the long run all the numbers will end up sorted. Please comment on this proposal.

    Read the article

  • StringTemplate: Loading a Template from disk?

    - by Amitabh
    I am using StringTemplate in C# and the following code to load a template from a subdirectory of my application:

    StringTemplateGroup group = new StringTemplateGroup("myGroup", "/tmp");
    StringTemplate query = group.GetInstanceOf("Sample");
    query.SetAttribute("column", "name");
    Console.WriteLine(query);

    I have a template file Sample.st in the tmp directory of my application. I am getting the following error:

    Unhandled Exception: System.ArgumentException: Can't find template Sample.st; group hierarchy is [myGroup]

    Does anyone know what is wrong here?

    Read the article

  • Give back full control to a user on a disk from another computer

    - by Foghorn
    I have my friend's hard drive mounted externally. After messing with the permissions with TAKEOWN so I could fix some viruses, I have full control over their drive. The problem is that it's now stuck in an "autochk not found" reboot sequence. I think the problem is that the boot sector is invisible to the drive now. So my question is: how can I use icacls to give back full ownership when the user I am giving it to is not on my machine? I ran the TAKEOWN command from my Windows 7 laptop; their machine is Windows XP Professional with three partitions, and I only altered the one that has the boot sector. Here are the permissions that icacls shows (where my computer is %System%, my username is ME, and the drive is E:\):

    C:\Users\ME> icacls E:\*
    E:\$RECYCLE.BIN %System%\ME:(OI)(CI)(F)
                    Mandatory Label\Low Mandatory Level:(OI)(CI)(IO)(NW)
    E:\ALLDATAW %System%\ME:(I)(OI)(CI)(F)
    E:\alrt_200.data %System%\ME:(OI)(CI)(F)
    E:\AUTOEXEC.BAT %System%\ME:(OI)(CI)(F)
    E:\AZ Commercial %System%\ME:(I)(OI)(CI)(F)
    E:\boot.ini %System%\ME:(OI)(CI)(F)
    E:\Config.Msi %System%\ME:(I)(OI)(CI)(F)
    E:\CONFIG.SYS %System%\ME:(OI)(CI)(F)
    E:\Documents and Settings %System%\ME:(I)(OI)(CI)(F)
    E:\IO.SYS %System%\ME:(OI)(CI)(F)
    E:\Mitchell1 %System%\ME:(I)(OI)(CI)(F)
    E:\MSDOS.SYS %System%\ME:(OI)(CI)(F)
    E:\MSOCache %System%\ME:(I)(OI)(CI)(F)
    E:\NTDClient.log %System%\ME:(OI)(CI)(F)
    E:\NTDETECT.COM %System%\ME:(OI)(CI)(F)
    E:\ntldr %System%\ME:(OI)(CI)(F)
    E:\pagefile.sys %System%\ME:(OI)(CI)(F)
    E:\Program Files %System%\ME:(I)(OI)(CI)(F)
    E:\RECYCLER %System%\ME:(I)(OI)(CI)(F)
    E:\RHDSetup.log %System%\ME:(OI)(CI)(F)
    E:\System Volume Information %System%\ME:(I)(OI)(CI)(F)
    E:\WINDOWS %System%\ME:(I)(OI)(CI)(F)
    Successfully processed 22 files; Failed processing 0 files
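
    A sketch of how ownership can be handed back even though the account only exists on the other machine: icacls accepts a raw SID (prefixed with *) in place of an account name, so the original owner's SID can be used directly. The SID below is a placeholder; the real one can be read from an untouched file's owner or from the profile list in the drive's registry hive.

    rem Hand ownership of the tree back to the original account, identified by SID
    icacls E:\* /setowner *S-1-5-21-1111111111-2222222222-3333333333-1000 /t /c
    rem Put the ACLs back to what the folders inherit, removing the added grants
    icacls E:\* /reset /t /c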

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) and priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once of a PHP file with a var_export'd array of data in it? My tests are inconclusive, as my development box (PHP 5.3 on XAMPP) keeps throwing errors installing any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place; however, this is not the question. Because a page script can have up to 50/60 helper files included, I have a feeling that if the site was under pressure it would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for the script's execution into one single file. This is achievable and works well; however, it has the side effect of dramatically increasing memory usage even though technically the same code is being used. What are your thoughts and opinions on this?

    Read the article

  • How do you find the disk size of a Postgres table and its indexes

    - by mmrobins
    I'm coming to Postgres from Oracle and looking for a way to find the table and index size in terms of bytes/MB/GB/etc, or even better the size for all tables. In Oracle I had a nasty long query that looked at user_lobs and user_segments to give back an answer. I assume in Postgres there's something I can use in the information_schema tables, but I'm not seeing where. Thanks in advance.
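
    For reference, a sketch using Postgres's built-in size functions (the database and table names are placeholders; pg_total_relation_size counts the table plus its indexes and TOAST data, while pg_relation_size counts the heap alone):

    psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('my_table'));"
    psql -d mydb -c "SELECT pg_size_pretty(pg_relation_size('my_table'));"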

    Read the article

  • Abort a slow flush to disk after write?

    - by Therealstubot
    Is there a way to abort a Python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disk? I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size on the write, but it appears that Linux caches up a bunch of data early on and writes it out to the USB device slowly. If at some point during the write my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking; several seconds, up to about 10 seconds typically. I find that the app is hanging in the close() method, I assume waiting for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the Python docs for an answer but have found nothing.

    Read the article

  • Removing multiple files from a Git repo that have already been deleted from disk

    - by Codebeef
    I have a Git repo that I have deleted four files from using rm (not git rm), and my git status looks like this:

    # deleted: file1.txt
    # deleted: file2.txt
    # deleted: file3.txt
    # deleted: file4.txt

    How do I remove these files from Git without having to manually go through and add each file like this: git rm file1 file2 file3 file4 Ideally, I'm looking for something that works in the same way that git add . does, if that's possible.
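
    A couple of standard ways to stage all of those deletions in one go (a sketch; both act on everything Git currently sees as deleted in the working tree):

    # Stage every modification and deletion already made in the working tree
    git add -u
    # Or feed the list of deleted paths straight into git rm
    git ls-files --deleted -z | xargs -0 git rm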

    Read the article

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most-used data. Here are the two approaches we have thought of so far:

    1. Using memcache on all major queries and leaving it alone to do what it does best.
    2. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add more nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Or should we instead focus on utilizing memcache and simply upgrade the memory as the load increases with the number of users? Thanks a lot!

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on sysadmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center (literally across the street). This meant doing much more ourselves: things like networking, storage and monitoring.

    As part of the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks and DRBD. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system, and recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary looking entry:

    Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
    Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
    Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
    Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
    Nov 15 06:49:45 umbilo smartd[2827]: Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
    Nov 15 06:49:45 umbilo smartd[2827]: # 1 Short offline Completed: read failure 90% 6576 3421766910
    Nov 15 06:49:45 umbilo smartd[2827]: # 2 Short offline Completed: read failure 90% 6087 3421766910
    Nov 15 06:49:45 umbilo smartd[2827]: # 3 Short offline Completed: read failure 10% 5901 656821791
    Nov 15 06:49:45 umbilo smartd[2827]: # 4 Short offline Completed: read failure 90% 5818 651637856
    Nov 15 06:49:45 umbilo smartd[2827]:

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is.
But we also see that disk 8's SMART read errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that:

    1. Disk 8 is experiencing an odd hardware fault that results in intermittent long operation times.
    2. Somehow this fault condition on the disk is locking up the entire array.

    Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array. The question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
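
    For ongoing checks on this kind of setup, SMART data for the individual drives behind a 3ware controller can be queried per port; a sketch (the controller device /dev/twa0 matches the syslog above, but the port numbers are assumptions that depend on the card's layout):

    # Read full SMART data for the physical disk behind port 8 of the 3ware controller
    smartctl -a -d 3ware,8 /dev/twa0
    # Kick off a long self-test on that disk
    smartctl -t long -d 3ware,8 /dev/twa0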

    Read the article

  • Referencing file on disk from NSManagedObject

    - by Kamchatka
    Hello, what would be the best way to name a file associated with an NSManagedObject? The NSManagedObject will hold the URL to this file, but I need to create a unique filename for it. Is there some kind of auto-increment id that I could use? Should I use mktemp (even though it's not a temporary file), or try to convert the NSManagedObjectID to a filename? With the latter I fear there will be special characters which might cause problems. What would you suggest?

    Read the article

  • Saving a PNG image using NSData writeToFile produces corrupted data on the iPhone disk

    - by jAmi
    I have a number of images (PNG, GIF and JPG) in my application's resource bundle. I want some of these images to be saved into my Documents directory, so I use:

    imgPath = [documentsDirectoryPath stringByAppendingPathComponent:@"myImage.png"];
    if (![fileMgr fileExistsAtPath:imgPath]) {
        [[fileMgr contentsAtPath:[[NSBundle mainBundle] pathForResource:@"myImage" ofType:@"png"]] writeToFile:imgPath atomically:NO];
    }

    This saves an image file at my desired path, but the file has an extra 300 bytes (of maybe junk data) in it, which results in a corrupted image... Am I doing something wrong here? It works in the simulator, but on the real device the image has the extra 300 bytes. Also, a GIF image gets copied nicely and works, but the problem occurs for PNG images.

    Read the article

  • Has anybody used the WB B-tree library?

    - by Chris B
    I stumbled across the WB on-disk B-tree library: http://people.csail.mit.edu/jaffer/WB It seems like it could be useful for my purposes (swapping data to disk during very large statistical calculations that do not fit in memory), but I was wondering how stable it is. Reading the manual, it seems worryingly 'researchy': there are sections labelled [NOT IMPLEMENTED] etc. But maybe the manual is just out of date. So, is this library usable? Am I better off looking at Tokyo Cabinet, MemcacheDB, etc.? By the way, I am working in Java.

    Read the article

  • ArrayList can't compare objects after they are loaded from disk

    - by Zka
    To make it easy, let's say I have an ArrayList allBooks containing objects of the "books" class, and an ArrayList someBooks containing some, but not all, of the books. The contains() method worked fine when I wanted to see if a book from one ArrayList was also contained in the other. The problem is that this stops working once I save both ArrayLists to a .bin file and load them back after the program restarts. Doing the same test as before, contains() returns false even though the compared objects are the same (they have the same info inside). I solved it by overriding the equals method and it works fine, but I want to know why this happened.

    Read the article
