Search Results

Search found 2834 results on 114 pages for 'filesystem corruption'.

Page 21 of 114

  • Setting up boost filesystem

    - by brit
    I've been having serious problems trying to set up Boost. I must have installed and uninstalled the libraries a dozen times. In my most recent attempt, I followed these instructions:

    1. Download the zip and unzip it.
    2. Get the prebuilt jam executable and unzip it.
    3. Put that directory in your path (edit Path via Control Panel... System... Advanced... Environment Variables).
    4. Open the Visual Studio command prompt and browse to the Boost directory.
    5. Run: bjam "-sTOOLS=vc-8_0" install

    The main reason I'm trying to use Boost is boost/filesystem, yet nothing is working. I know this question has been asked before, but that only resulted in more errors. Please help!
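    A minimal smoke test makes it quick to tell whether an install attempt actually left the headers and compiled libraries where the compiler can see them. This is a sketch against the Boost.Filesystem API of that era; the install paths in the build line are assumptions to adjust to wherever bjam put things:

        // boost_fs_test.cpp -- minimal Boost.Filesystem smoke test.
        // Build from the Visual Studio command prompt, adjusting paths to your install:
        //   cl /EHsc boost_fs_test.cpp /I"C:\Boost\include" /link /LIBPATH:"C:\Boost\lib"
        // (MSVC auto-links the matching boost_filesystem .lib when the LIBPATH is correct.)
        #include <boost/filesystem.hpp>
        #include <iostream>

        namespace fs = boost::filesystem;

        int main()
        {
            fs::path p("boost_test_dir");
            fs::create_directory(p);   // exercises the compiled library, not just the headers
            std::cout << "created: " << fs::exists(p) << '\n';
            fs::remove(p);
            return 0;
        }

    If this compiles but fails to link, the libraries were not built or installed; if the include fails, the headers are not on the include path.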

    Read the article

  • Download and write .tar.gz files without corruption.

    - by arbales
    I've tried numerous ways of downloading files, specifically .zip and .tar.gz, with Ruby and writing them to disk. I've found that the file appears to be the same as the reference (in size), but the archives refuse to extract. What I'm attempting now is:

        def download_request(url, filePath:path, progressIndicator:progressBar)
          file = File.open(path, "w+")  # NB: "w+" is a text-mode open; binary archives
                                        # are generally safer written with "wb"
          begin
            Net::HTTP.get_response URI.parse(url) do |response|
              if response['Location'] != nil
                puts 'Direct to: ' + response['Location']
                # NB: this recursive call reopens (and re-truncates) the same path
                # while the outer handle is still open
                return download_request(response['Location'], filePath:path, progressIndicator:progressBar)
              end
              # some stuff
              response.read_body do |segment|
                file.write(segment)
                # some progress stuff.
              end
            end
          ensure
            file.close
          end
        end

        download_request("http://github.com/jashkenas/coffee-script/tarball/master", filePath:"tarball.tar.gz", progressIndicator:nil)

    Thanks!

    Read the article

  • How is fseek() implemented in the filesystem?

    - by pajton
    This is not a pure programming question; however, it impacts the performance of programs using fseek(), hence it is important to know how it works. (A little disclaimer so that it doesn't get closed.) I am wondering how efficient it is to insert data in the middle of a file. Suppose I have a file with 1 MB of data and I insert something at the 512 KB offset. How efficient would that be compared to appending my data at the end of the file? Just to make the example complete, let's say I want to insert 16 KB of data. I understand the answer varies depending on the filesystem; however, I assume that the techniques used in common filesystems are quite similar, and I just want to get the right notion of it.
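    For a concrete sense of why mid-file insertion is expensive on conventional filesystems (ext3, NTFS and friends expose no insert primitive at all), here is a sketch in C stdio of what "insert 16 KB at the 512 KB offset" actually requires -- the file must be opened in update mode ("rb+"), and error handling is omitted:

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        // "Insert" len bytes at offset: since the OS has no insert call, the
        // entire tail after the offset must be read and rewritten.
        void insert_at(std::FILE* f, long offset, const char* data, std::size_t len)
        {
            std::fseek(f, 0, SEEK_END);
            long end = std::ftell(f);

            std::vector<char> tail(end - offset);   // everything past the insertion point
            std::fseek(f, offset, SEEK_SET);
            std::fread(tail.data(), 1, tail.size(), f);

            std::fseek(f, offset, SEEK_SET);        // overwrite with the new data...
            std::fwrite(data, 1, len, f);
            std::fwrite(tail.data(), 1, tail.size(), f);   // ...then re-append the old tail
        }

    So the cost is proportional to the amount of data after the offset (512 KB here), while an append costs only the 16 KB written -- which is why the answer is largely the same across common filesystems.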

    Read the article

  • Copying an IO stream results in corruption.

    - by StackedCrooked
    I have a small Mongrel webserver that sends the stdout of a process to an HTTP response:

        response.start(200) do |head,out|
          head["Content-Type"] = "text/html"
          status = POpen4::popen4(command) do |stdout, stderr, stdin, pid|
            stdin.close()
            FileUtils.copy_stream(stdout, out)
            FileUtils.copy_stream(stderr, out)
            puts "Sent response."
          end
        end

    This works well most of the time, but sometimes characters get duplicated. For example, this is what I get from the "man ls" command:

        LS(1)                        User Commands                        LS(1)

        NNAAMMEE
               ls - list directory contents

        SSYYNNOOPPSSIISS
               llss [_O_P_T_I_O_N]... [_F_I_L_E]...

        DDEESSCCRRIIPPTTIIOONN
               List information about the FILEs (the current directory by default).
               Sort entries alphabetically if none of --ccffttuuvvSSUUXX nor ----ssoorrtt.

               Mandatory arguments to long options are mandatory for short options

    For some mysterious reason, capital letters get duplicated. Can anyone explain what's going on?
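    The doubled capitals (and the _O_P_T_I_O_N-style underlining) are the classic signature of man(1) overstrike formatting: bold is emitted as "N", backspace, "N", and underline as "_", backspace, letter. If the backspace characters are dropped somewhere between the pipe and the browser, every bold letter shows up twice. Whether that is what is happening in this particular pipeline is an assumption, but here is a sketch of a filter that resolves overstrikes (roughly what col -b does):

        #include <iostream>
        #include <string>

        // Collapse man-style overstrike sequences: for "X\bY", keep only Y.
        int main()
        {
            std::string out;
            char c;
            while (std::cin.get(c)) {
                if (c == '\b' && !out.empty())
                    out.pop_back();      // drop the character that was struck over
                else
                    out.push_back(c);
            }
            std::cout << out;
            return 0;
        }

    Piping the process output through a filter like this (or through col -b) before copying it into the response would strip the overstrikes instead of letting them render as duplicated letters.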

    Read the article

  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    Sharepoint 2007 MOSS with AD imported users. All servers are 2008.

    UPDATE: More details from testing. This Sharepoint is in an AD child domain (clients.mycompany.local), which is subordinate to the root of the AD tree (mycompany.local). The user is in the parent tree (as are half of the other functional users). I have elevated the user's rights to Domain. Looking at the logs, it seems that the Sharepoint server is trying to authenticate them by querying the DC for the clients domain (which is the way it normally works, and still works for all existing identically configured users). I think if I could force it to authenticate up to the top domain DC then it would be OK?

    I have around 50 users. Over the past 2 months, a handful of the users have suddenly become unable to log in to Sharepoint. When they log in, they either get a blank screen or they are re-prompted. These users are using accounts that have been used for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, exchange, LDAP).

    In the most recent case, last night I was forced to delete a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in Sharepoint Shared Services. I could not find a way to delete him from the Sharepoint user list, but I reran the import after recreating his account (renamed too, just to be sure, to "smithj"). At first this did not work; the user could still access all other resources but Sharepoint. Then, some 30 minutes later, it inexplicably started working. This morning, the user changed passwords, which immediately broke the login on Sharepoint again.

    Logs (by request from matt b):

        Office SharePoint Server
        Date: 4/13/2010 2:00:00 PM
        Event ID: 7888
        Task Category: Office Server General
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: nb-portal-01.clients.netboundary.local
        Description: A runtime exception was detected. Details follow.
        Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
        Technical Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
           at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex)
           at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML)
           at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed)
           at Microsoft.SharePoint.SPField.Update()
           at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn)
           at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull()
           at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch()
           at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)

        Log Name: Application
        Source: Office SharePoint Server
        Date: 4/13/2010 2:00:00 PM
        Event ID: 5553
        Task Category: User Profiles
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: nb-portal-01.clients.netboundary.local
        Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).

    Read the article

  • Dell PowerEdge R720 - Corrupted RAID

    - by BT643
    Apologies in advance for the lengthy question. We have a Dell PowerEdge R720 server with:

    - 2 x 136GB SAS drives in RAID 1 for the OS (Ubuntu Server 12.04)
    - 6 x 3TB SATA drives in RAID 5 for data

    A few days ago we were getting errors when trying to access files on the large RAID 5 partition. We rebooted the server and got a message about the RAID controller having found a foreign config. We've had this before, and just needed to use Dell's RAID configuration utility to import the foreign config on the RAID. Last time this worked, but this time it started doing a disk check, then we got this:

        FSCK has returned the following:
        "/dev/sdb1 inode 364738 has a bad extended attribute block 7
        /dev/sdb1 unexpected inconsistency run fsck manually (i.e. without -a or -p options)
        MOUNTALL fsck /ourdatapartition [1019] terminated with status 4
        MOUNTALL filesystem has errors /ourdatapartition
        Errors were found while checking the disk drive for /ourdatapartition
        Press F to fix errors, I to Ignore or M for Manual Recovery"

    We pressed F to try and fix the errors, but it eventually errored with:

        Inode 275841084, i_blocks is 167080, should be 0.  Fix? yes
        Inode 275841141 has an invalid extend node (blk 2206761006, lblk 0)  Clear? yes
        Inode 275841141, i_blocks is 227872, should be 0.  Fix? yes
        Inode 275842303 has an invalid extend node (blk 2206760975, lblk 0)  Clear? yes
        ....
        Error storing directory block information (inode=275906766, block=0, num=2699516178): Memory allocation failed
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        e2fsck: aborted
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        mountall: fsck /ourdatapartition [1286] terminated with status 9
        mountall: Unrecoverable fsck error: /ourdatapartition

    We noticed one of the drive lights was not lit at all, and thought this drive may have failed and be the problem. We replaced the drive with a spare and tried "F" to repair it again, but we keep getting the same error as above. In the RAID configuration utility, all drives show as "online" and "optimal". We do have this data on another replicated server, so we're not worried about "recovering" anything; we just want to get the system back online ASAP. The server has 64 or 32GB memory (can't remember off the top of my head), but either way, with a 14TB RAID, I think it may still not be enough. Thanks.

    EDIT: I checked the memory usage while fsck was running, as suggested, and after 2 or 3 minutes it was using up nearly all of the server's memory. When it failed after 5 minutes or so with the error in my post, the memory immediately freed up again.
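    One avenue for the "Memory allocation failed" abort on a filesystem this large: e2fsck can be told to keep its scratch tables on disk instead of in RAM via /etc/e2fsck.conf (documented in e2fsck.conf(5)). This is a sketch -- the directory path is an example and must exist and be writable before fsck runs:

        [scratch_files]
        directory = /var/cache/e2fsck

    It makes the check considerably slower, but it can keep a multi-terabyte e2fsck within the memory the server actually has.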

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command:

        [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx

    It returns this error:

        xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported
        ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256) snap-eeb66393
        xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument
        ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256)

    This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find things referring to this situation exactly. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions not to use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients. Because clients need to change that one member, they can't map the shared memory region as read-only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed. I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate mapped write-only page, while keeping the structs in read-only mapped pages. This will work -- the OS will force my application to crash if I try to write to the pointed-to data -- but indirect storage can be undesirable when trying to write lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically. Is there any way to mark smaller areas of memory such that writing them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86, and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
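    The page-granularity protection discussed above can be demonstrated directly with mmap: keep the structs in pages mapped PROT_READ and the reference counts in a separate writable region, so a stray write to a struct faults immediately (SIGSEGV) instead of silently corrupting it. A self-contained POSIX sketch -- anonymous mappings stand in for the real shared-memory segment, and the sizes and names are illustrative assumptions:

        #include <sys/mman.h>
        #include <atomic>
        #include <cstddef>
        #include <cstdio>

        struct Record { int payload[15]; };   // read-only to clients

        int main()
        {
            const std::size_t n = 1024;
            // Read-only view of the records (in the real system this would be the
            // shared segment, mapped without PROT_WRITE).
            void* ro = mmap(nullptr, n * sizeof(Record), PROT_READ,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            // Separate writable region holding one refcount per record; the
            // zero-filled pages give each counter an initial value of 0 (fine for
            // a sketch -- real code would placement-new the atomics).
            void* rw = mmap(nullptr, n * sizeof(std::atomic<int>),
                            PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            const Record* records = static_cast<const Record*>(ro);
            auto* refs = static_cast<std::atomic<int>*>(rw);

            refs[42].fetch_add(1);   // allowed: refcounts live in writable pages
            // const_cast<Record*>(&records[42])->payload[0] = 1;  // would SIGSEGV
            std::printf("ref[42] = %d\n", refs[42].load());

            munmap(rw, n * sizeof(std::atomic<int>));
            munmap(ro, n * sizeof(Record));
            return 0;
        }

    The cost is exactly the indirection the question notes: the count no longer sits inside the struct, so any algorithm that relied on updating the count and a neighbouring field in a single atomic operation would need rethinking.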

    Read the article

  • Why did my flash drive become "read only" and (how) can I fix it?

    - by Bob
    I have a brand new flash drive (one week old) that has become marked as read-only by Windows, Kubuntu and a bootable partitioner. Why did this happen? Is it fixable? If it is, how can I fix this?

    The problem

    Firstly, this drive is new. It's certainly not been used enough to die from normal wear and tear, though I would not discount defective components. The drive itself has somehow become locked in a read-only state. Diskpart reports:

        Generic Flash Disk USB Device
        Disk ID: 33FA33FA
        Type   : USB
        Status : Online
        Path   : 0
        Target : 0
        LUN ID : 0
        Location Path : UNAVAILABLE
        Current Read-only State : Yes
        Read-only  : No
        Boot Disk  : No
        Pagefile Disk  : No
        Hibernation File Disk  : No
        Crashdump Disk  : No
        Clustered Disk  : No

    What really confuses me is "Current Read-only State : Yes" alongside "Read-only : No".

    Attempted solutions

    So far, I've tried:

    Formatting it in Windows Disk management (the format options are greyed out when right-clicking).

    DiskPart clean (CLEAN - Clear the configuration information, or all information, off the disk):

        DISKPART> clean
        DiskPart has encountered an error: The media is write protected.
        See the System Event Log for more information.

    There was nothing in the event log.

    Windows command-line format:

        >format G:
        Insert new disk for drive G:
        and press ENTER when ready...
        The type of the file system is FAT32.
        Verifying 7740M
        Cannot format. This volume is write protected.

    Windows chkdsk: see below for details.

    Kubuntu fsck (through VirtualBox USB passthrough): see below for details.

    Acronis True Image to format, to convert to GPT, to destroy and rebuild the MBR, basically anything: failed (could not write to the MBR).

    Details (and a nice story)

    Background: This was a brand new, generic, 8GB flash drive I wanted to create a multiboot flash drive with. It came formatted as FAT32, though oddly a little larger than most 8 GIGAbyte flash drives I've come across. Approximately 127MB was listed as "used" by Windows; I never discovered why. The end usable space was about what I normally expect from an 8GB drive (approx 7.4 GIBIbytes). I had thrown quite a few Linux distros on, along with a copy of Hiren's. They would all boot perfectly. They were put on with YUMI. When I tried to put the Knoppix DVD on, YUMI added an odd video option to its boot command which caused Knoppix to boot with a black screen on X. ttys 1 through 6 still worked as text-only interfaces. A few days later, I took some time to take that odd video option off, making the boot command match the one that comes with Knoppix. On the attempt to boot, Knoppix reported some form of LZMA corruption.

    Leading up to the current issue: I was thinking the Knoppix files may have been corrupted somehow, so I tried reloading them. The drive was nearly full (45MB free), so I deleted a generic ISO that also was not booting. That went fine. I then went through YUMI to 'uninstall' Knoppix, i.e. delete files and remove it from the menus. The files went first, then the menus were cleared successfully. However, the free space was stuck at about 700MB, same as it was before removing Knoppix. In the old Knoppix folder, there was a 0-byte file named KNOPPIX that could not be deleted. I tried reinserting the drive to delete this file - without safely removing, if that made a difference (hey, first time for everything). Running the standard Windows chkdsk scan without /r or /f reported errors found. Running with /r just got it stuck.

    I decided to give fsck a shot, so I loaded up my Kubuntu VM and attached the drive to it with VirtualBox's USB 2.0 passthrough. I umounted it (/dev/sda1) and ran fsck. It reported "There are differences between boot sector and its backup"; I chose "No action". It told me the FATs differ and asked me to select either the first or second FAT; whichever I selected, I got a notice of "Free cluster summary wrong". If I chose "Correct", it gave a list of incorrect file names. To try to fix something, at least, I ran it with the -p option. Halfway through fixing the files, the VM froze - I ended its process about ten minutes later.

    Cause? My next attempt was to use YUMI, again, to rebuild the whole drive. I used YUMI's built-in reformat (to FAT32) option and installed a Kubuntu ISO (700MB). The format was successful; however, the extract-and-copy of Kubuntu (which YUMI uses a 7zip binary for) froze at about 60% done. After waiting for about fifteen minutes (longer than the 3.5GB Knoppix ISO took last time), I pulled the drive out. The drive at this point was already formatted, SYSLINUX already installed, just waiting on the unpacking of an ISO and the modifying of the boot menus. Plugging it back in, it came up as normal - however, any write action would fail. Disk management reported it as read-only. On reconnect, it would come up as normal, but a write operation would cause it to go read-only again. After a few attempts, it started coming up as read-only on insertion.

    Attempts to fix: This is when I ran through the attempts listed above, to try and reformat it in case of a faulty format. However, the inability to do so even from a bootable disk indicated something more serious is wrong. chkdsk now reports nothing is wrong, and fsck still reports MBR inconsistencies, but now always chooses the first FAT automatically after telling me the FATs differ. It still does the same "Free cluster summary wrong" afterwards. I cannot run with -p anymore because the drive is now marked as read-only. It also managed to corrupt my VM's disk somehow on the first attempt (yes, I'm sure I chose sda, which is mapped to a 7.4GB drive - I triple-checked). Thank god for snapshots?

    I'm just about out of ideas. To my inexperienced mind it looks like something in the drive's firmware set it to read-only "permanently" somehow - is there any way to reset this? I don't particularly care about keeping data, considering I've reformatted it twice. Also, fixes that keep me in Windows are better; it reduces the risk of me accidentally nuking my main hard drive.

    Update 1: I pulled apart the drive out of curiosity. There are no obvious write-protect switches. There is an IC on the other side, ALCOR branded, labelled AU6989HL, if that matters. If there appears to be no way to fix this, I'll probably pull out the (glued-down) card and put it in a card reader to check whether it's the card or the controller that died.

    Update 2: I've pulled the card off; Windows detects the drive as a card reader now. The contacts on the card don't appear to be used, and there are several rows of holes on the card itself. Putting it into the card reader only detects about 30MB total, RAW. It's probably either the reader incorrectly reporting the card as faulty (as if a real SD card's write protect was switched on) or a bad contact somewhere. If nothing else, I have a spare 8GB Micro SD card now... as soon as I figure out how to format it as 8GB.

    Read the article

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works. I wonder if anyone can suggest what would be involved in customizing such root filesystem images such that they could be used with other virtualization software, such as VMware, Xen or as an Amazon EC2 instance? Two particular concerns are:

    1. If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way), do there exist tools to convert between the different formats?
    2. I assume that in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary?

    Many thanks for any suggestions...

    Read the article

  • ASP.NET Send Image Attachment With Email Without Saving To Filesystem

    - by KGO
    I'm trying to create a form that will send an email with an attached image and am running into some problems. The form I am creating is rather large, so I have created a small test form for the purpose of this question. The email will send and the attachment will exist on the email, but the image is corrupt or something, as it is not viewable. Also, I do not want to save the image to the filesystem. You may think it is convoluted to take the image file from the fileupload to a stream, but this is due to the fact that the real form I am working on will allow multiple files to be added through a single fileupload and will be saved in session; thus the images will not be coming from the fileupload control directly on submit.

    File: TestAttachSend.aspx

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="TestAttachSend.aspx.cs" Inherits="TestAttachSend" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <h1>Send Email with Image Attachment</h1>
                Email Address TO: <asp:TextBox ID="txtEmail" runat="server"></asp:TextBox><br />
                Attach JPEG Image: <asp:FileUpload ID="fuImage" runat="server" /><br />
                <br />
                <asp:Button ID="btnSend" runat="server" Text="Send" onclick="btnSend_Click" /><br />
                <br />
                <asp:label ID="lblSent" runat="server" text="Image Sent!" Visible="false" EnableViewState="false"></asp:label>
            </div>
            </form>
        </body>
        </html>

    File: TestAttachSend.aspx.cs

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Net.Mail;
        using System.IO;

        public partial class TestAttachSend : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
            }

            protected void btnSend_Click(object sender, EventArgs e)
            {
                if (fuImage.HasFile && fuImage.PostedFile.ContentType == System.Net.Mime.MediaTypeNames.Image.Jpeg)
                {
                    SmtpClient emailClient = new SmtpClient();
                    MailMessage EmailMsg = new MailMessage();
                    EmailMsg.To.Add(txtEmail.Text.Trim());
                    EmailMsg.From = new MailAddress(txtEmail.Text.Trim());
                    EmailMsg.Subject = "Attached Image";
                    EmailMsg.Body = "Image is attached!";
                    MemoryStream imgStream = new MemoryStream();
                    System.Drawing.Image img = System.Drawing.Image.FromStream(fuImage.PostedFile.InputStream);
                    string filename = fuImage.PostedFile.FileName;
                    img.Save(imgStream, System.Drawing.Imaging.ImageFormat.Jpeg);
                    // NB: Save() leaves imgStream positioned at the end of the stream;
                    // resetting it (imgStream.Position = 0;) before attaching is a
                    // likely fix for the unviewable attachment.
                    EmailMsg.Attachments.Add(new Attachment(imgStream, filename, System.Net.Mime.MediaTypeNames.Image.Jpeg));
                    emailClient.Send(EmailMsg);
                    lblSent.Visible = true;
                }
            }
        }

    Read the article

  • Error 0x6ba (RPC server is unavailable) when running sfc /scannow on Windows XP in Safe Mode

    - by leeand00
    I think that my mup.sys file is corrupt. I received the following error when trying to access a network share that was located on my Windows 7 box from my Windows XP box:

        No network provider accepted the given network path.

    After reading this, I attempted to follow the directions by rebooting my computer into safe mode. After I run "sfc /scannow", I receive the following error message:

        The specific error code is 0x000006ba [The RPC server is unavailable].

    When I go into Services, it says that the Remote Procedure Call (RPC) service is running but that the Remote Procedure Call (RPC) Locator is not. When I try to start the RPC Locator, it gives me an error:

        Error 1084: This service cannot be started in Safe Mode

    What can I do about this, if it can't find the Remote Procedure Call service in safe mode?

    Read the article

  • Running sfc /scannow provides the error The specific error code is 0x000006ba [The RPC server is unavailable]

    - by leeand00
    I think that my mup.sys file is corrupted. I received the following error when trying to access a network share that was located on my Windows 7 box from my Windows XP box:

        No network provider accepted the given network path.

    After reading this, I attempted to follow the directions by booting my computer into safe mode. After I run "sfc /scannow", I receive the following error message:

        The specific error code is 0x000006ba [The RPC server is unavailable].

    Additionally, when I go into Services, it says that the Remote Procedure Call (RPC) service is running but that the Remote Procedure Call (RPC) Locator is not. When I try to start the RPC Locator, it gives me an error:

        Error 1084: This service cannot be started in Safe Mode

    So what can I do about this, exactly, if it can't find the Remote Procedure Call service in safe mode?

    Read the article

  • FTP corrupting subsequent AIFF files from old Mac to new Mac

    - by eighteyes
    This is a bizarre one. I am trying to transfer files from an old Mac to a new Mac, but the old one has no USB, no FireWire, just Ethernet. So I hooked it up to the network, downloaded Fetch and started an FTP server. The first file transfers fine, but all the subsequent ones are corrupted. I can reconnect and transfer a new file and it works, but every file afterwards is corrupted, somehow. I tried forcing binary transfer, but that didn't help. Any ideas? This is from Mac OS 8.6 to OS X 10.7.5.

    Read the article

  • CHKDSK: What option DOES NOT delete files and turn them into .chk files?

    - by CHKDSKuser
    I had a recent power outage while using my computer, with a 1TB hard drive being directly accessed as the power went out. When the power came back on and I rebooted my computer, one of my 1TB hard drives would not register with WinXP SP3, and showed a Total Space of 0 and an Available Space of 0. The file system (NTFS) also did not register; every entry for the drive was either blank or zeroed. My assumption is that the file tables were damaged/corrupted because the drive was being directly accessed when the power went out. After doing some research, I ran CHKDSK with whatever default options it runs with (I'm not sure what they are, as I didn't see them displayed). Upon completion of CHKDSK, the drive registered with WinXP as a 1TB hard drive, with an accurately-reflected amount of available space. But CHKDSK also deleted about 16GB of files from their original directories and changed them all into sequentially-named *.chk files. My question is: how can CHKDSK be run in a situation like mine, where the file tables need to be restored, without having it delete any files from their original directories, even if they may be damaged/corrupt? I'd simply like to be able to run CHKDSK and have it restore the file tables and repair bad-sector damage, as it did, but not have it do anything else, such as delete files and convert them to .chk files. Any ideas? Or is there a CHKDSK alternative that can perform the same functions without the file deletions?

    Read the article

  • VMFS recovery - how do you proceed?

    - by ToreTrygg
    The more I look into ESX, the more often I have to handle cases where the partition table of a disk with a VMFS volume gets corrupted. The reasons for this can be:

    - idiot user
    - failed update
    - power failure
    - ...

    I guess you guys must already have something like a usual procedure for working through these cases. I am especially interested in a straightforward, fast way to find out whether the VMFS volume itself is corrupted beyond repair. So far I use very time-consuming scans with TestDisk and similar tools. Do you have better/faster ways?

    Read the article

  • System Information (msinfo32.exe) Can't Collect Information

    - by ptanne
    I have Windows XP Pro, service pack 1, IE 6 and 32GB of free space, 75GB total. I have had nothing but trouble after trying to install service pack 2, even though I used System Restore. The installation was incomplete and my computer has never been the same. I attempted to install SP2 four or five times, and SP3 once, always with the same result. I've tried reinstalling XP Pro, but that didn't fix the problem. My XP Pro disk now has a scratch on it and refuses to work. Dell would not replace it, stating that my computer was out of warranty. I'm currently trying Reimage, which is supposed to return a computer to the original configuration and replace missing or damaged files. Believe it or not, Ripley, it stops in the middle of the operation and, so far, the Reimage techs haven't been able to figure out why. Among the many problems that I still have: System Information can't collect information. The Help and Support sections that display system info also don't work. Is there some way that I can fix this? I can't afford to throw my computer away, yet. Thank you for listening, Pam Galvin

    Read the article

  • mysql.proc has gone corrupt. How can I fix it?

    - by Metalcoder
    I have a server running Debian 5.0 and MySQL. Suddenly, MySQL stopped working, and after many attempts to fix it, I decided to reinstall it. I installed MySQL 5.1.63, and when started it goes into safe mode. After some poking around, when I executed mysql_upgrade as root, it complained:

        ...
        Running 'mysql_fix_privilege_tables'...
        ERROR 1548 (HY000) at line 1111: Cannot load from mysql.proc. The table is probably corrupted
        ERROR 1064 (42000) at line 1112: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'sqlstate 'HY000' set message_text='Unexpected content found in the performance_s' at line 1
        ERROR 1548 (HY000) at line 1125: Cannot load from mysql.proc. The table is probably corrupted
        FATAL ERROR: Upgrade failed

    I checked the mysql.proc table, and its comment column was slightly different from my backup:

        -- My backup says:
        `comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
        -- But it was:
        `comment` text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,

    So I restored my mysql database backup, and now they all match, but mysql_upgrade still triggers the same errors. I also tried to check and repair the mysql.proc table, but got no success.

    Read the article

  • Firebird 2.5 Database Corrupt

    - by BrendanH
    We have an issue where a database hangs the server when:

    - a backup is performed (it hangs on a specific table)
    - selecting * or count(1) from a specific table
    - viewing data that is related to the table (FKs, etc.)

    We could browse the table to a certain point (using IBExpert); however, after about 2900 records the machine just spikes and hangs. Performing a gfix -m does not work, and the validation reports back "Record level errors = 4" no matter how many times we run gfix -m, -v, etc. The firebird.log file reports these types of messages:

        Relation has 91631 orphan backversions (9214273 in use) in table BINS (137)   {which is apparently just a warning}
        Unable to complete network request to host "MHPLZA1". Error reading data from the connection.
        INET/inet_error: read errno = 10054
        SERVER/process_packet: broken port, server exiting
        Shutting down the server with 1 active connection(s) to 1 database(s), 0 active service(s)   {if we leave the backup to run while hanging, it eventually logs this error message}

    The setup: the table in question has about 7000 records. The Firebird version is 2.5 Classic Server x64. The OS is Windows Server 2008. This is a virtual machine (VMware) running on a massive server. (Does anyone have issues with VMs and Firebird?) We have the same setup running fine on other servers (however, they are not virtual machines). Is there any way to pinpoint the issue and/or the cause?

    Read the article

  • Immutable hard links on ext3/4?

    - by shovas
    In my research on file versioning at the fs level, snapshotting, and related ideas, I took a look at hard links and exactly what they are and how they behave. Using rsync, you can get a pretty slick poor man's snapshotting system up and running on file systems that don't natively support it. But can you get immutable hard links on ext3/4, or any other filesystem for that matter? My definition of an immutable hard link is: a hard link which, when changed in one location, becomes a regular copy and is no longer a hard link. I would like this because it would let snapshots link against the source data itself instead of a copy of the data (in the case of the rsync snapshotting technique). I have gigabytes of data that can't be duplicated due to space restrictions, but I have enough room if I can intelligently snapshot individual changed files with the rest linked to the source, not a copy. Given all that, is there some other technique, feature or technology I'm really looking for?

    Read the article

  • unreadable corrupted ntfs partition - lost clusters reported

    - by Eduardo Martinez
    Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250GB Samsung SATA disk (connected via USB on an XP SP3 machine). Unfortunately, PM is unable to fix them. PM shows the drive as being NTFS, detects used space OK and also the drive name, but the PM browser (right click on partition, browse...) won't show anything (as if the disk was empty). Windows Explorer is not even picking up the drive name and reports 'the file or directory is corrupted and unreadable'.

    PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least).

    Also tried:

    - photorec-6.11.3: it actually started to extract files but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options).
    - Find and Mount: the intellectual scan went well, and the only partition on the disk was detected. I then tried to mount it as p: but got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected.' Find and Mount allows you to create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this will keep the extracted files/folder structure intact?

    I'm starting to think the disk is pretty screwed and my chances to recover this data are slim. Please, someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored my OS X system today by copying it over from a backup. Most things seem to be working, but every single Git repo gives pretty much the same error:

        fatal: object 03b45161eb27228914e690e032ca8009358e9588 is corrupted

    I have tried chowning, doing everything as sudo or root... I have no idea what to try next. This would be a normal Git question except that it's happening on many repos. Ideas? Note: I'm using git 1.7.0.3, and I was probably using 1.7.0 before. Edit: Tried with 1.7.0.2 and it made no difference. Edit: Even when copying any of the repos, I get this strange message:

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/Pickers/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, differently, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files? Edit: This question comes from StackOverflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.

    Read the article

  • IIS can't serve files with accents and spaces in file names

    - by pho3nix
    My server recently stopped serving files with accents and spaces in the filename. An example:

        http://jf-monteabraao.pt/UserFiles/File/OP%C3%87%C3%95ES%20DO%20PLANO%20-%20OR%C3%87AMENTO%202010.pdf

    I haven't installed anything except UrlScan, but looking through urlscan.ini I don't find any reference to a rule like this. Does anyone have an idea what's happening?
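    One documented UrlScan setting worth checking is AllowHighBitCharacters in the [Options] section of urlscan.ini: when it is set to 0, UrlScan rejects any URL containing bytes outside the ASCII range, which is exactly what accented characters such as Ç and Õ decode to. Whether this is the culprit here is an assumption, but the relevant line looks like:

        [Options]
        AllowHighBitCharacters=1    ; 0 rejects URLs containing non-ASCII bytes

    UrlScan reads its configuration at startup, so a change takes effect after IIS is restarted.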

    Read the article

  • Windows Server 2008 repair logonui.exe

    - by Josh R
    I'm fairly certain some data on my server got corrupted, and from a dialog box that popped up I think one of the corrupted files is logonui.exe. I think this because, while the server gets through the BIOS and the Windows loading splash screen, when it finishes those it goes to a black screen with just a mouse cursor that I can move around. Most of the services come back up, though: for example, DNS, DHCP, DFS and the print service all load while it's sitting on the black screen. Is there anything I can do to repair the logon screen? Maybe copy a file off another copy of Server 2008, get something off the original media (which I have), or bring the system back to a previous restore point? Thanks! I'm in the weeds right now...

    Read the article
