Search Results

Search found 2834 results on 114 pages for 'filesystem corruption'.

Page 17/114 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • "cannot receive new filesystem stream: invalid backup stream" error when unpacking flash archive on solaris 10

    - by Bovril
    I've searched around but I'm having no luck with some peculiar behavior with a flash archive. I'm using HP Server Automation 9.14 to deploy the OS. I'm creating a Solaris 10 flash archive to serve as a snapshot default build in our environment. I create the flash archive with
      # flar create -c -S -n g8-solaris10-u10 g8-solaris10-u10.flar
    It seems to create the file without any problems (exit status 0). When deploying to a new system (same hardware), it extracts to a point and then bails. The last error in the log I can see is
      Extracted 2047.00 MB ( 82% of 2488.98 MB archive)
      ERROR: Could not read file (172.27.118.100:/media/opsware/sunos/flar/g8-solaris10-u10.flar
      ERROR: Errors occurred during the extraction of flash archive. The file /tmp/flash_errors contains the list of errors encountered
      ERROR: Could not extract Flash archive
      ERROR: Flash installation failed
    The error log contained the following message:
      cannot receive new filesystem stream: invalid backup stream
    A previous version of this flash archive (1.8 GB) worked OK, so I suspect size may be a factor. The source system (the one the flash archive is an image of) is an HP BL460c Gen8; some more info is below.
    OS version info:
      # uname -a
      SunOS testhostname 5.10 Generic_147441-01 i86pc i386 i86pc
      # who -r
      . run-level 3 Oct 15 08:15 3 0 S
    Disks:
      # echo | format
      Searching for disks...done
      AVAILABLE DISK SELECTIONS:
      0. c0t0d0 <DEFAULT cyl 17841 alt 2 hd 255 sec 63> /pci@0,0/pci8086,3c06@2,2/pci103c,3355@0/sd@0,0
      Specify disk (enter its number):
    Zpools:
      # zpool list
      NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
      rpool 136G 24.6G 111G 18% ONLINE -
    Zones:
      # zoneadm list -cv
      ID NAME STATUS PATH BRAND IP
      0 global running / native shared
    The file size of 2047 MB seems suspiciously close to 2048, which is concerning. Any help would be greatly appreciated. Thanks
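    The 82% / 2047 MB point at which the extraction dies suggests a 2 GiB limit somewhere in the transfer path (for example an NFS mount or client that cannot read past 2 GiB) rather than a fault in the archive itself. A quick sanity check, assuming the flar tooling is available on the source system (paths here are illustrative):
      # confirm the archive really is ~2.5 GB on disk and not truncated at a 2 GiB boundary
      ls -l g8-solaris10-u10.flar
      # make sure the identification section and the full file list can be read end to end
      flar info g8-solaris10-u10.flar
      flar info -l g8-solaris10-u10.flar > /dev/null && echo "archive readable to the end"
    If those pass on the source host but fail when run against the NFS-mounted copy (172.27.118.100:/media/opsware/...), the problem is the share rather than the archive.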

    Read the article

  • Protocol to mount fat32 network filesystem on Linux with ability to lock files (not advisory locks)

    - by nagul
    I have a FAT32 filesystem sitting on a NAS storage device (NSLU2) that I need to mount on my Ubuntu system. I've tried Samba and NFS mounts, but neither seems to support proper locking. More specifically, I am unable to save files to the mounted drive through GnuCash, KeePassX etc., which makes the share fairly useless. Is there a protocol that allows me to achieve this? Note that the NAS storage device is running a Linux OS, so I can run pretty much any protocol that has a Linux implementation. The only option I'm not looking for is to reformat the partition to ext3, which I'm not able to do due to other constraints. Alternatively, has anyone managed proper locking of a FAT32 system over the network using Samba? Or is advisory locking the best you get with a network-mounted FAT32 file system? I've thought of trying sshfs, but I've not found any indication that this will solve my problem. Edit: Okay, maybe I can reformat the drive, but to any file system except ext3. The "unslung" NSLU2 doesn't like more than one ext3 drive, and I already have one attached. So any solution that involves reformatting the drive to NTFS, HFS etc. is fine, as long as I can mount it on Linux and lock files.

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed [migrated]

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot up at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:
      Filesystem check or mount failed.
      A maintenance shell will now be started.
      CONTROL-D will terminate this shell and continue booting after re-trying filesystems.
      Any further errors will be ignored
      root@ubuntu-computername:~#
    I have run fsck -f and everything appears correct: no errors are reported and it passes all 5 checks. If I run fdisk -l I get the following information:
      Disk /dev/sda: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 4096 bytes / 4096 bytes
      Disk identifier: 0x00010824
      Device Boot Start End Blocks Id System
      /dev/sda1 * 2048 608456703 304227328 83 Linux
      /dev/sda2 608458750 625141759 8341505 5 Extended
      Partition 2 does not start on physical sector boundary.
      /dev/sda5 608458752 625141759 8341504 82 Linux swap / Solaris
      Disk /dev/sdb: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x0fb4b7e8
      Device Boot Start End Blocks Id System
      /dev/sdb1 8192 625139711 312565760 7 HPFS/NTFS/exFAT
    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?
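    One thing worth checking from the maintenance shell, assuming the upgrade was interrupted while it was rewriting boot configuration: compare the UUIDs the kernel actually sees with what /etc/fstab expects, since a stale or mangled fstab entry produces exactly this "check or mount failed" message even when fsck is clean.
      # from the maintenance shell
      blkid /dev/sda1 /dev/sda5
      cat /etc/fstab
      # if an entry no longer matches, remount root read-write and correct it
      mount -o remount,rw /
      nano /etc/fstab
    If the fstab entries are fine, finishing the interrupted upgrade from a recovery or live environment (chroot in, then dpkg --configure -a) would be the next thing to try before giving up on the install.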

    Read the article

  • Problems mounting HPUX LVM+VXFS filesystem on Linux

    - by golimar
    I have a physical disk from a HPUX system that I need to access from a Debian Linux for ia64 system. From the hpux-lvm-tools project I have the tools to access the HPUX LVMs (Linux LVM has a different format) and I also have the freevxfs driver. I know beforehand that the disk has three partitions, and that the biggest one contains LVM volumes, and some of those are VxFS filesystems. I can see the partitions: # cat /proc/partitions major minor #blocks name 8 32 143374744 sdc 8 33 512000 sdc1 8 34 142452736 sdc2 8 35 409600 sdc3 It finds a VG in one of the disk partitions: # ./vgscan_hpux On /dev/sdc2 - vg1328874723 # ./pvdisplay_hpux /dev/sdc2 PV General Information ---------------------- VG Creation Time Fri Feb 10 12:52:03 2012 Physical Volume ID 1766760336 1328874723 Volume Group ID 1766760336 1328874723 Physical Volumes in VG 1766760336 1328874723 VG Actication Mode 0 - LOCAL PE Size 64 MBs Lvol sizes ---------- lvol1 - 8 Extents - 512 MBs lvol2 - 192 Extents - 12288 MBs lvol3 - 16 Extents - 1024 MBs ... lvol21 - 13 Extents - 832 MBs lvol22 - 224 Extents - 14336 MBs lvol23 - 16 Extents - 1024 MBs Then I activate that VG and some new devices appear in my system: # ./pvactivate_hpux /dev/sdc2 VG vg1328874723 Activated succesfully with 23 lvols. # # ll /dev/mapper/ total 0 crw------- 1 root root 10, 59 Nov 26 16:08 control lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol1 -> ../dm-0 lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol10 -> ../dm-9 ... lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol8 -> ../dm-7 lrwxrwxrwx 1 root root 7 Nov 26 16:38 vg1328874723-lvol9 -> ../dm-8 But: # mount /dev/mapper/vg1328874723-lvol18 /mnt/tmp mount: you must specify the filesystem type # mount -t vxfs /dev/mapper/vg1328874723-lvol18 /mnt/tmp mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1328874723-lvol18, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so # lsmod |grep vxfs freevxfs 23905 0 I also tried to identify the raw data with the file command and it just says 'data': # file -s /dev/mapper/vg1328874723-lvol18 /dev/mapper/vg1328874723-lvol18: symbolic link to `../dm-17' # file -s /dev/dm-17 /dev/dm-17: data # Any clues?
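    Two hedged suggestions, since freevxfs is quite limited: it only understands the older VxFS disk layouts, so if these HP-UX filesystems were created with a newer layout version the driver will simply not recognize them (which would match "data" from file and the mount error). You can at least look for the VxFS magic number by hand; the offset and byte order below are assumptions, so treat this as a probe rather than a definitive test:
      # VxFS magic is 0xa501fcf5; the superblock normally sits a few KB into the volume
      dd if=/dev/mapper/vg1328874723-lvol18 bs=1k count=16 2>/dev/null | od -A d -t x4 | grep -i 'a501fcf5\|f5fc01a5'
    If the magic never appears, the lvol may not be a plain VxFS filesystem at all (it could be swap, a raw database volume, or striped/mirrored in a way the Linux-side tools don't reassemble). If it does appear but the mount still fails, reading the data back on an HP-UX box is probably the more realistic route.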

    Read the article

  • Filesystem fragmentation at the level of a set of files

    - by trismarck
    The file is stored in blocks by the file system. The block is the smallest amount of data the file system can assign to store a file. The classical definition of a fragmented file is that the file is stored in blocks that are 'scattered' (physically non-contiguous) around the hard drive. What I want to ask about is a second type of fragmentation I've come up with. Let's suppose we install a program. This program has a great many files. When the program starts, it always loads the contents of those files sequentially. Now, even if the hard disk is defragmented, there is still a possibility that the files (but not the blocks making up each file) will be scattered on the disk, and thus the program launch time will be longer. Actually, this time could be longer even after defragmenting the disk, as the defragmentation process not only glues fragmented files together but also moves some files around to optimize free space chunks. The questions: is the type of fragmentation I mentioned relevant for the file system? Is it possible to remedy this kind of fragmentation, and if yes, how would you do it? Also, I'm not sure whether this question belongs on Super User or on Server Fault (as I guess filesystem fragmentation matters more in a server environment).

    Read the article

  • Autounmounting USB keys with FAT filesystem on Linux (RHEL5)

    - by niXar
    For security reasons, I have two workstations in front of me, and I can only transfer data between them through a USB key. As you can imagine, this quickly gets tiresome, but the most annoying part is having to unmount the keys before removing them. Not unmounting them results in missing files most of the time, even if I remove them a while after having last written to them. Now, since they're only used for transferring smallish files, and each is basically written once and read once, I don't need the fancy caching infrastructure that makes a clean unmount a necessary step. And since the data is always a copy of something I have at hand, I don't care if the filesystem croaks from time to time. But anyway the system doesn't need to force that on me; it could simply make sure everything is committed within a second and work synchronously. Then when I remove the key, nothing is lost. Is there a way to do this? I would appreciate any other tips on handling this situation. Edit: it appears the situation has changed between RHEL5 and Fedora up to F11 on one hand, and F12 on the other. The latter uses DeviceKit-disks, and I haven't quite figured out how to do this there. The method provided below using gconf does not work anymore.
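    Assuming a reasonably recent RHEL5 kernel, the FAT driver's flush mount option (plus sync if you want to be strict) gets close to this: writes are pushed out almost immediately, so pulling the key without unmounting loses little or nothing. A sketch of an fstab entry (the device name and mount point are illustrative):
      # /etc/fstab - 'flush' tells the vfat driver to write data out as soon as it can;
      # add 'sync' for fully synchronous writes at the cost of speed and flash wear
      /dev/sdb1   /media/usbkey   vfat   user,flush,noatime   0 0
      # one-off equivalent:
      mount -o flush /dev/sdb1 /media/usbkey
    This only narrows the window; it is still safer to unmount when you can.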

    Read the article

  • "Can't find root filesystem / error mounting /dev/root" when booting to new kernel

    - by salparadise
    I am trying to upgrade my kernel from 2.6.18-274 to 2.6.39 for some wireless card drivers. When I boot into the new kernel I get the "Can't find root filesystem / error mounting /dev/root" error. Googling led me to this page: http://fedoraproject.org/wiki/Common_kernel_problems#Can.27t_find_root_filesystem_.2F_error_mounting_.2Fdev.2Froot From what I am reading, it seems to be an issue with a driver for my SATA controller or HD, but I can't find what option I need to add to the kernel. Doing a diff from the old initrd to the new one gives me the following:
      root-> diff /tmp/kafter /tmp/kbefore
      6a7,8
      > lib/dm-message.ko
      > lib/dm-region_hash.ko
      8a11
      > lib/dm-raid45.ko
      13d15
      < lib/dm-region-hash.ko
      16a19
      > lib/dm-mem-cache.ko
    Do I need any of those? I'm not sure whether I would need dm-raid45.ko, as I am not running a RAID. I have the same SATA and IDE options configured for both kernels, so I'm not sure what else to look for; any help is appreciated. Additionally, here is the hardware info:
      00:1f.2 IDE interface: Intel Corporation 82801FB/FW (ICH6/ICH6W) SATA Controller (rev 03) (prog-if 8f [Master SecP SecO PriP PriO])
      Subsystem: Hewlett-Packard Company Unknown device 3006
      Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 233
      I/O ports at 1818 [size=8]
      I/O ports at 1830 [size=4]
      I/O ports at 1820 [size=8]
      I/O ports at 1834 [size=4]
      I/O ports at 14f0 [size=16]
      Capabilities: [70] Power Management version 2
      root-> smartctl -a /dev/sda
      ...
      === START OF INFORMATION SECTION ===
      Device Model: WDC WD5000AADS-00S9B0
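    A hedged suggestion: the ICH6 SATA controller is normally driven by ata_piix (with sd_mod providing the disk device), so the usual culprit is an initrd for the new kernel that doesn't include those modules, or a self-built 2.6.39 where they weren't enabled. Something along these lines (paths and the kernel version string are illustrative):
      # make sure the new initrd actually carries the SATA and disk drivers
      mkinitrd --with=ata_piix --with=sd_mod -f /boot/initrd-2.6.39.img 2.6.39
      # or, if 2.6.39 was built from source, confirm the options before rebuilding:
      grep -E 'CONFIG_ATA_PIIX|CONFIG_SATA_AHCI|CONFIG_BLK_DEV_SD' .config
    If those are modules (=m) they must end up in the initrd; building them in (=y) sidesteps the initrd question entirely.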

    Read the article

  • IIS7 can't read web.config on shared Mac filesystem

    - by RobG
    I'm running a virtualized Windows Server 2008 in VirtualBox on my Mac; I just finished setting it up today. On it, I have SQL Server 2008, IIS and ColdFusion 9. I want to serve websites from my Mac filesystem (for development purposes). So I created a new website in IIS and pointed it at the appropriate path using a UNC path, \\vboxsvr\rob\Sites\testsite, which contains the ColdFusion code and a web.config file. When I attempt to modify the file at all, or view the site in a web browser, I get an error:
      HTTP 500.19 - Internal Server Error
      The requested page cannot be accessed because the related configuration data for the page is invalid.
    I did some Googling and found several similar problems, but nothing exactly like what I have. The closest one seemed to indicate permissions. So I recreated the site and set it up to allow the Administrator (in Windows) to access the files. That didn't help. I can read/modify the files just fine from within Windows, but IIS itself can't seem to do it. What do I need to do to fix this? Thanks!

    Read the article

  • LDAP groups not applying to filesystem permissions

    - by BeepDog
    The system is Arch Linux, and I'm using nss-pam-ldapd (0.8.13-4) to connect to LDAP. I've got my users and some groups in LDAP:
      [root@kain tmp]# getent group
      <local groups snipped>
      dkowis:*:10000:
      mp3s:*:15000:rkowis,dkowis
      music:*:15002:rkowis,dkowis
      video:*:15003:transmission,rkowis,dkowis,sickbeard
      software:*:15004:rkowis,dkowis
      pictures:*:15005:rkowis,dkowis
      budget:*:15006:rkowis,dkowis
      rkowis:*:10001:
    And I have some directories that are setgid video so that the video group sticks, and they're configured g=rwx so that members of the video group can write to them:
      [root@kain video]# ls -ld /srv/video
      drwxrwxr-x 8 root video 208 Oct 19 20:49 /srv/video
    However, members of that group, say dkowis, cannot write into that directory:
      [root@kain video]# groups dkowis
      mp3s music video software pictures dkowis
    (The total number of groups that dkowis is in is about 7; I redacted a few here.)
      [dkowis@kain wat]$ cd /srv/video
      [dkowis@kain video]$ touch something
      touch: cannot touch 'something': Permission denied
      [dkowis@kain video]$ groups
      dkowis mp3s music video software pictures
    I'm at a loss as to why my groups show up in getent group but my filesystem permissions are not being respected. I've tried making a new directory in /tmp, setting its group permissions to rwx, and then trying to write a file in there; it doesn't work. The only time it does work is if I open it wide up by allowing o=rwx. That's obviously not what I want, and I'm not able to figure out what my missing piece is. Thanks in advance.
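    One thing to rule out, since it causes exactly this symptom: supplementary groups are attached to a session at login, so a shell opened before the LDAP groups were added (or before nslcd was restarted) will not carry them even though getent now shows them. A quick check (commands are generic, not specific to this box):
      id dkowis        # what NSS/LDAP resolves right now
      groups           # what the *current* session was actually given at login
      # start a brand-new session and retry:
      su - dkowis -c 'touch /srv/video/something'
    If the fresh session works, it was the stale-login problem; if it still fails, the next suspects are nscd/nsswitch caching or PAM not consulting nslcd for that login path.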

    Read the article

  • Kernel panic error

    - by cioby23
    We have a dedicated server with software RAID1, and one of the disks failed recently. The disk was replaced, but after rebuilding the array and rebooting, the server freezes with a kernel panic message:
      No filesystem could mount root, tried: reiserfs ext3 ext2 cramfs msdos vfat iso9660 romfs fuseblk xfs
      Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,1)
    The filesystem on both disks is ext4. It seems the kernel can't load ext4 support. Is there any way to add ext4 support, or do I need to compile a new kernel? The interesting point is that before the disk replacement all was fine. The kernel is a stock kernel, bzImage-2.6.34.6-xxxx-grs-ipv6-64, from our provider OVH. Kind regards,
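    A hedged reading of the panic: the "tried: reiserfs ext3 ext2 ..." list contains no ext4, so whichever kernel the replacement or rescue procedure left configured in the bootloader was built without ext4. Since OVH stock kernels are typically monolithic with no initrd, ext4 has to be compiled in rather than supplied as a module. If you rebuild, the relevant options look roughly like this (kernel version string is illustrative):
      # check the failing kernel first, if a config file is available
      grep CONFIG_EXT4 /boot/config-2.6.34.6-xxxx-grs-ipv6-64
      # when building:
      #   CONFIG_EXT4_FS=y
      #   CONFIG_MD_RAID1=y      (the root is md RAID1 - unknown-block(9,1) is /dev/md1)
      make oldconfig && make bzImage
    The simpler path may be to grab a newer OVH-provided kernel image that already includes ext4 and point the bootloader at it.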

    Read the article

  • IE attachEvent on object tag causes memory corruption

    - by larswa
    I have an ActiveX control within an embedded IE8 HTML page that has the following event: MessageReceived([in] BSTR srcWindowId, [in] BSTR json). On Windows the event is registered with OCX.attachEvent("MessageReceived", onMessageReceivedFunc). The following code fires the event in the HTML page:
      HRESULT Fire_MessageReceived(BSTR id, BSTR json)
      {
          CComVariant varResult;
          T* pT = static_cast<T*>(this);
          int nConnectionIndex;
          CComVariant* pvars = new CComVariant[2];
          int nConnections = m_vec.GetSize();
          for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++)
          {
              pT->Lock();
              CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex);
              pT->Unlock();
              IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p);
              if (pDispatch != NULL)
              {
                  VariantClear(&varResult);
                  pvars[1] = id;
                  pvars[0] = json;
                  DISPPARAMS disp = { pvars, NULL, 2, 0 };
                  pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD, &disp, &varResult, NULL, NULL);
              }
          }
          delete[] pvars; // -> Memory Corruption here!
          return varResult.scode;
      }
    After I enabled gflags.exe with Application Verifier, the following strange behaviour occurs: after the Invoke() that executes the JavaScript callback, the BSTR from pvars[1] is copied to pvars[0] for some unknown reason!? The delete[] of pvars then causes a double free of the same string, which ends in a heap corruption. Does anybody have an idea what's going on here? Is this an IE bug, or is there a trick within the OCX implementation that I'm missing? If I use the <script for> tag like:
      <script for="OCX" event="MessageReceived(id, json)" language="JavaScript" type="text/javascript">
          window.onMessageReceivedFunc(windowId, json);
      </script>
    ... the strange copy operation does not occur. The following code also seems to be OK, due to the fact that the caller of Fire_MessageReceived() is responsible for freeing the BSTRs:
      HRESULT Fire_MessageReceived(BSTR srcWindowId, BSTR json)
      {
          CComVariant varResult;
          T* pT = static_cast<T*>(this);
          int nConnectionIndex;
          VARIANT pvars[2];
          int nConnections = m_vec.GetSize();
          for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++)
          {
              pT->Lock();
              CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex);
              pT->Unlock();
              IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p);
              if (pDispatch != NULL)
              {
                  VariantClear(&varResult);
                  pvars[1].vt = VT_BSTR;
                  pvars[1].bstrVal = srcWindowId;
                  pvars[0].vt = VT_BSTR;
                  pvars[0].bstrVal = json;
                  DISPPARAMS disp = { pvars, NULL, 2, 0 };
                  pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD, &disp, &varResult, NULL, NULL);
              }
          }
          delete[] pvars;
          return varResult.scode;
      }
    Thanks!
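    A hedged sketch of a workaround, not an explanation of the IE-side copy: building the DISPPARAMS from stack-scoped CComVariants removes the manual new[]/delete[] entirely, so even if Invoke (or script marshalling) overwrites one of the slots, each CComVariant still frees exactly the BSTR copy it owns and no double free can occur:
      HRESULT Fire_MessageReceived(BSTR id, BSTR json)
      {
          CComVariant varResult;
          T* pT = static_cast<T*>(this);
          int nConnections = m_vec.GetSize();
          for (int i = 0; i < nConnections; i++)
          {
              pT->Lock();
              CComPtr<IUnknown> sp = m_vec.GetAt(i);
              pT->Unlock();
              if (IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p))
              {
                  CComVariant args[2];
                  args[1] = id;    // CComVariant takes its own copy of the BSTR
                  args[0] = json;
                  DISPPARAMS disp = { args, NULL, 2, 0 };
                  varResult.Clear();
                  pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD,
                                    &disp, &varResult, NULL, NULL);
                  // args[] are destroyed (and their BSTRs freed) at the end of this block
              }
          }
          return varResult.scode;
      }
    If the stray pvars[1] -> pvars[0] copy still shows up under Application Verifier with this version, that would point more firmly at the script engine mutating the by-value arguments, which it is allowed to do.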

    Read the article

  • FileSystem.GetFiles() + UnauthorizedAccessException error?

    - by OverTheRainbow
    Hello, It seems like FileSystem.GetFiles() is unable to recover from the UnauthorizedAccessException exception that .Net triggers when trying to access an off-limit directory. In this case, does it mean this class/method isn't useful when scanning a whole drive and I should use some other solution (in which case: Which one?)? Here's some code to show the issue: Private Sub bgrLongProcess_DoWork(ByVal sender As System.Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles bgrLongProcess.DoWork Dim drive As DriveInfo Dim filelist As Collections.ObjectModel.ReadOnlyCollection(Of String) Dim filepath As String 'Scan all fixed-drives for MyFiles.* For Each drive In DriveInfo.GetDrives() If drive.DriveType = DriveType.Fixed Then Try 'How to handle "Access to the path 'C:\System Volume Information' is denied." error? filelist = My.Computer.FileSystem.GetFiles(drive.ToString, FileIO.SearchOption.SearchAllSubDirectories, "MyFiles.*") For Each filepath In filelist DataGridView1.Rows.Add(filepath.ToString, "temp") 'Trigger ProgressChanged() event bgrLongProcess.ReportProgress(0, filepath) Next filepath Catch Ex As UnauthorizedAccessException 'How to ignore this directory and move on? End Try End If Next drive End Sub Thank you. Edit: What about using a Try/Catch just to have GetFiles() fill the array, ignore the exception and just resume? Private Sub bgrLongProcess_DoWork(ByVal sender As System.Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles bgrLongProcess.DoWork 'Do lengthy stuff here Dim filelist As Collections.ObjectModel.ReadOnlyCollection(Of String) Dim filepath As String filelist = Nothing Try filelist = My.Computer.FileSystem.GetFiles("C:\", FileIO.SearchOption.SearchAllSubDirectories, "MyFiles.*") Catch ex As UnauthorizedAccessException 'How to just ignore this off-limit directory and resume searching? End Try 'Object reference not set to an instance of an object For Each filepath In filelist bgrLongProcess.ReportProgress(0, filepath) Next filepath End Sub
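    The usual way around this is to walk the tree yourself, so that a single protected directory (like "System Volume Information") only skips that branch instead of aborting the whole scan. A rough sketch, assuming System.IO and a List(Of String); method and variable names here are illustrative, not from the original project:
      Private Sub FindFiles(ByVal root As String, ByVal pattern As String, ByVal results As List(Of String))
          Try
              For Each foundFile As String In IO.Directory.GetFiles(root, pattern)
                  results.Add(foundFile)
              Next
              For Each subDir As String In IO.Directory.GetDirectories(root)
                  FindFiles(subDir, pattern, results) ' recurse one directory at a time
              Next
          Catch ex As UnauthorizedAccessException
              ' no permission here - ignore this directory and keep scanning the rest
          End Try
      End Sub
    Because the Try/Catch sits inside the recursion, an access-denied directory costs you only its own subtree, which is exactly the "ignore and move on" behaviour the flat GetFiles call cannot give you.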

    Read the article

  • FDs not closed in FUSE filesystem

    - by cor
    Hi, I have a problem while implementing a FUSE filesystem in Python. For now I just have a proxy filesystem, exactly like a mount --bind would be. But any file created, opened, or read on my filesystem is not released (the corresponding FD is not closed). Here is an example:
      yume% ./ProxyFs.py `pwd`/test
      yume% cd test
      yume% ls
      mdr
      yume% echo test > test
      yume% ls
      mdr test
      yume% ps auxwww | grep python
      cor 22822 0.0 0.0 43596 4696 ? Ssl 12:57 0:00 python ./ProxyFs.py /home/cor/esl/proxyfs/test
      cor 22873 0.0 0.0 6352 812 pts/1 S+ 12:58 0:00 grep python
      yume% ls -l /proc/22822/fd
      total 0
      lrwx------ 1 cor cor 64 2010-05-27 12:58 0 -> /dev/null
      lrwx------ 1 cor cor 64 2010-05-27 12:58 1 -> /dev/null
      lrwx------ 1 cor cor 64 2010-05-27 12:58 2 -> /dev/null
      lrwx------ 1 cor cor 64 2010-05-27 12:58 3 -> /dev/fuse
      l-wx------ 1 cor cor 64 2010-05-27 12:58 4 -> /home/cor/test/test
      yume%
    Does anyone have a solution to actually close the FDs of the files I use in my FS? I'm pretty sure it's a mistake in the implementation of the open, read and write hooks, but I'm stuck... Let me know if you need more details! Thanks a lot, Cor
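    In the fuse-python bindings the kernel's RELEASE request maps to a release() hook; if the file-handling code never implements it (or never calls os.close), every open leaks a descriptor until unmount, which is exactly what /proc/<pid>/fd is showing. A minimal sketch of the file-class style API (the class name and the real_root variable are illustrative):
      import os

      class ProxyFile(object):
          def __init__(self, path, flags, *mode):
              self.fd = os.open(real_root + path, flags, *mode)

          def read(self, length, offset):
              os.lseek(self.fd, offset, os.SEEK_SET)
              return os.read(self.fd, length)

          def write(self, buf, offset):
              os.lseek(self.fd, offset, os.SEEK_SET)
              return os.write(self.fd, buf)

          def flush(self):
              pass

          def release(self, flags):
              os.close(self.fd)   # without this, the descriptor stays open until unmount
    If the filesystem is written in the flat-hook style instead (open/read/write defined directly on the Fuse subclass), the same applies: define release(self, path, flags) there and close whatever descriptor open() created.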

    Read the article

  • EC2 Filesystem / Files stored on the wrong partition after launching new instance from AMI

    - by Philip Isaacs
    Today I set up a new EC2 instance from an AMI I created from an older EC2 instance. When I launched the new instance I took the AMI that was on a small instance and launched it as a medium instance. From what I can tell this is pretty standard stuff. But here's the strange part. According to AWS these are the differences:
      Small Instance (Default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
      Medium Instance: 3.75 GB of memory, 2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units each), 410 GB of local instance storage, 32-bit or 64-bit platform
    Okay, now here's where I'm having an issue. When I log into the new, bigger instance it still reports having only 1.7 GB of RAM. The other strange part is that all my old partitions are still there in the same configuration. I see a new, larger partition /mnt which is essentially empty.
      Filesystem Size Used Avail Use% Mounted on
      /dev/sda1 7.9G 5.9G 1.6G 79% /
      none 846M 120K 846M 1% /dev
      none 879M 0 879M 0% /dev/shm
      none 879M 76K 878M 1% /var/run
      none 879M 0 879M 0% /var/lock
      none 879M 0 879M 0% /lib/init/rw
      /dev/sda2 335G 195M 318G 1% /mnt
      /dev/sdf 16G 9.9G 5.1G 67% /var2
    This EC2 instance is a web server and I was serving files off the /var2 directory, but for some reason the instance is storing everything on /. Okay, here's what I'd like to do: move all my website files to /mnt and have the web server point to that. Any suggestions? If it helps, here is my current mount output as well:
      root@myserver:/var# mount -l
      /dev/sda1 on / type ext3 (rw) [cloudimg-rootfs]
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      none on /sys type sysfs (rw,noexec,nosuid,nodev)
      none on /sys/kernel/debug type debugfs (rw)
      none on /sys/kernel/security type securityfs (rw)
      none on /dev type devtmpfs (rw,mode=0755)
      none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
      none on /dev/shm type tmpfs (rw,nosuid,nodev)
      none on /var/run type tmpfs (rw,nosuid,mode=0755)
      none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
      none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
      /dev/sda2 on /mnt type ext3 (rw)
      /dev/sdf on /var2 type ext4 (rw,noatime)
    I hope this question makes sense. Basically I want my old files on this new partition. Thanks in advance.
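    Two hedged notes before moving anything: /mnt here is the instance-store (ephemeral) volume that comes with the larger instance type, so anything copied there is lost if the instance is stopped or the underlying host fails, whereas /var2 (/dev/sdf) looks like an EBS volume that survives. If /mnt is still the right place, the move itself is straightforward:
      rsync -a /var2/ /mnt/var2/           # copy the web content across
      mount --bind /mnt/var2 /var2         # keep the old path working for the web server
      # or repoint the web server's document root at /mnt/var2 and skip the bind mount
    As for the memory reading: it's worth confirming in the AWS console that the instance really launched as the medium type and checking uname -m, since a 32-bit kernel without PAE may not see the full 3.75 GB.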

    Read the article

  • .NET ExcelLibrary Export Problems - .XLS corruption

    - by hamlin11
    The library is located here: http://code.google.com/p/excellibrary/ I'm using some basic code to create an .XLS file. When I open the file in Excel 2007, I get an error dialog (screenshot not included here); when I click Yes, I get a second dialog; and just for fun, the XML error details are not very helpful either (also not included). Here's the code that I'm using to generate the Excel file:
      Dim ds As New DataSet
      Dim dt1 As New DataTable("table 1")
      dt1.Columns.Add("column A", GetType(String))
      dt1.Columns.Add("column B", GetType(String))
      dt1.Rows.Add("test 1", "Test 2")
      dt1.Rows.Add("test 3", "Test 4")
      ds.Tables.Add(dt1)
      ExcelLibrary.DataSetHelper.CreateWorkbook("c:/temp/test1.xls", ds)
    Note: I added a reference to the DLL provided by the project download page and have an "Imports ExcelLibrary.Office.Excel" to link up with it. Any ideas on what the corruption is and/or how to fix it? Thanks

    Read the article

  • Mixed-mode C++/CLI crashing: heap corruption in atexit (static destructor registration)

    - by thaimin
    I am working on deploying a program and the codebase is a mixture of C++/CLI and C#. The C++/CLI comes in all flavors: native, mixed (/clr), and safe (/clr:safe). In my development environment I create a DLL of all the C++/CLI code and reference that from the C# code (EXE). This method works flawlessly. For my releases that I want to release a single executable (simply stating that "why not just have a DLL and EXE separate?" is not acceptable). So far I have succeeded in compiling the EXE with all the different sources. However, when I run it I get the "XXXX has stopped working" dialog with options to Check online, Close and Debug. The problem details are as follows: Problem Event Name: APPCRASH Fault Module Name: StackHash_8d25 Fault Module Version: 6.1.7600.16559 Fault Module Timestamp: 4ba9b29c Exception Code: c0000374 Exception Offset: 000cdc9b OS Version: 6.1.7600.2.0.0.256.48 Locale ID: 1033 Additional Information 1: 8d25 Additional Information 2: 8d25552d834e8c143c43cf1d7f83abb8 Additional Information 3: 7450 Additional Information 4: 74509ce510cd821216ce477edd86119c If I debug and send it to Visual Studio, it reports: Unhandled exception at 0x77d2dc9b in XXX.exe: A heap has been corrupted Choosing break results in it stopping at ntdll.dll!77d2dc9b() with no additional information. If I tell Visual Studio to continue, the program starts up fine and seems to work without incident, probably since a debugger is now attached. What do you make of this? How do I avoid this heap corruption? The program seems to work fine except for this. My abridged compilation script is as follows (I have omitted my error checking for brevity): @set TARGET=x86 @set TARGETX=x86 @set OUT=%TARGETX% @call "%VS90COMNTOOLS%\..\..\VC\vcvarsall.bat" %TARGET% @set WIMGAPI=C:\Program Files\Windows AIK\SDKs\WIMGAPI\%TARGET% set CL=/Zi /nologo /W4 /O2 /GS /EHa /MD /MP /D NDEBUG /D _UNICODE /D UNICODE /D INTEGRATED /Fd%OUT%\ /Fo%OUT%\ set INCLUDE=%WIMGAPI%;%INCLUDE% set LINK=/nologo /LTCG /CLRIMAGETYPE:IJW /MANIFEST:NO /MACHINE:%TARGETX% /SUBSYSTEM:WINDOWS,6.0 /OPT:REF /OPT:ICF /DEFAULTLIB:msvcmrt.lib set LIB=%WIMGAPI%;%LIB% set CSC=/nologo /w:4 /d:INTEGRATED /o+ /target:module :: Compiling resources omitted @set CL_NATIVE=/c /FI"stdafx-native.h" @set CL_MIXED=/c /clr /LN /FI"stdafx-mixed.h" @set CL_PURE=/c /clr:safe /LN /GL /FI"stdafx-pure.h" @set NATIVE=... @set MIXED=... @set PURE=... cl %CL_NATIVE% %NATIVE% cl %CL_MIXED% %MIXED% cl %CL_PURE% %PURE% link /LTCG /NOASSEMBLY /DLL /OUT:%OUT%\core.netmodule %OUT%\*.obj csc %CSC% /addmodule:%OUT%\core.netmodule /out:%OUT%\GUI.netmodule /recurse:*.cs link /FIXED /ENTRY:GUI.Program.Main /OUT:%OUT%\XXX.exe ^ /ASSEMBLYRESOURCE:%OUT%\core.resources,XXX.resources,PRIVATE /ASSEMBLYRESOURCE:%OUT%\GUI.resources,GUI.resources,PRIVATE ^ /ASSEMBLYMODULE:%OUT%\core.netmodule %OUT%\gui.res %OUT%\*.obj %OUT%\GUI.netmodule Update 1 Upon compiling this with debug symbols and trying again, I do in fact get more information. 
    The call stack is:
      msvcr90d.dll!_msize_dbg(void * pUserData, int nBlockUse) Line 1511 + 0x30 bytes
      msvcr90d.dll!_dllonexit_nolock(int (void)* func, void (void)* * * pbegin, void (void)* * * pend) Line 295 + 0xd bytes
      msvcr90d.dll!__dllonexit(int (void)* func, void (void)* * * pbegin, void (void)* * * pend) Line 273 + 0x11 bytes
      XXX.exe!_onexit(int (void)* func) Line 110 + 0x1b bytes
      XXX.exe!atexit(void (void)* func) Line 127 + 0x9 bytes
      XXX.exe!`dynamic initializer for 'Bytes::Null''() Line 7 + 0xa bytes
      mscorwks.dll!6cbd1b5c() [Frames below may be incorrect and/or missing, no symbols loaded for mscorwks.dll]
      ...
    The line of my code that 'causes' this (the dynamic initializer for Bytes::Null) is:
      Bytes Bytes::Null;
    In the header that is declared as:
      class Bytes {
      public:
          static Bytes Null;
      }
    I also tried doing a global extern in the header like so:
      extern Bytes Null; // header
      Bytes Null; // cpp file
    which failed in the same way. It seems that the CRT atexit function is responsible, being inadvertently required due to the static initializer.
    Fix: As Ben Voigt pointed out, the use of any CRT functions (including native static initializers) requires proper initialization of the CRT (which happens in mainCRTStartup, WinMainCRTStartup, or _DllMainCRTStartup). I have added a mixed C++/CLI file that has a C++ main or WinMain:
      using namespace System;
      [STAThread] // required if using STA COM objects (such as drag-n-drop or file dialogs)
      int main() { // or "int __stdcall WinMain(void*, void*, wchar_t**, int)" for GUI applications
          array<String^> ^args_orig = Environment::GetCommandLineArgs();
          int l = args_orig->Length - 1; // required to remove first argument (program name)
          array<String^> ^args = gcnew array<String^>(l);
          if (l > 0) Array::Copy(args_orig, 1, args, 0, l);
          return XXX::CUI::Program::Main(args);
          // return XXX::GUI::Program::Main(args);
      }
    After doing this, the program now gets a little further, but still has issues (which will be addressed elsewhere): when the program is solely in C# it works fine, as does calling C++/CLI methods, getting C++/CLI properties, and creating managed C++/CLI objects; events added by C# into the C++/CLI code never fire (even though they should); and one other weird error is an InvalidCastException saying it can't cast from X to X (where X is the same as X...). However, since the heap corruption is fixed (by getting the CRT initialized), the question is done.

    Read the article

  • Heap corruption detected error when attempting to free pointer

    - by AndyGeek
    Hi, I'm pretty new to C++ and have run into a problem which I have not been able to solve. I'm trying to convert a System::String to a wchar_t pointer that I can keep for longer than the scope of the function. Once I'm finished with it, I want to clean it up properly. Here is my code: static wchar_t* g_msg; int TestConvert() { pin_ptr<const wchar_t> wchptr = PtrToStringChars("Test"); g_msg = (wchar_t*)realloc(g_msg, wcslen(wchptr) + 1); wcscpy(g_msg, wchptr); free (g_msg); // Will be called from a different method } When the free is called, I'm getting "HEAP CORRUPTION DETECTED: after Normal block (#137) at 0x02198F90." Why would I be getting this error? Andrew L
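    For what it's worth, the crash is consistent with the realloc size being computed in characters rather than bytes: wcslen() counts wide characters, but realloc takes a byte count, so the buffer ends up roughly half the required size and wcscpy writes past it. A hedged fix along these lines, keeping the original global and realloc approach:
      size_t chars = wcslen(wchptr) + 1;                                        // include the terminator
      g_msg = static_cast<wchar_t*>(realloc(g_msg, chars * sizeof(wchar_t)));   // bytes, not characters
      if (g_msg != NULL)
          wcscpy(g_msg, wchptr);
    After that, freeing g_msg from another method is fine, as long as nothing frees it twice or uses it after the free.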

    Read the article

  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties: What we need HA a scalable storage backend offline/disconnected operation on the client to account for network outages cross-platform access client-side access from certainly Windows (probably XP upwards), possibly Linux backend integrates with AD/LDAP (permission management (user/group management, ...)) should work reasonably well over slow WAN-links Another problem is that we don't really know all possible use cases here, if people need to be able to have concurrent access to shared files or if they will only be accessing their own files, so a possible solution needs to account for concurrent access and how conflict management would look in this case from a user's point of view. This two years old blog posts sums up the impression that I have been getting during the last couple of days of research, that there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions but that there is none that supports disconnected operation nicely and natively, but I am hoping that we have missed an obvious solution. What we have tried OpenAFS We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week but there are several problems with it: it's a real pain to set up there are no official RHEL/CentOS packages the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, this is an absolute no-go Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8, the current client for the 1.7 does but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7. Suffice to say, we couldn't get it working and the whole setup has been so unstable and complicated to setup that it's just not an option for production. Samba + Unison Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation we went for a simpler idea that would sync files against a Samba server using Unison. This has the following advantages: Samba integrates with ADs; it's a pain but can be done. Samba solves the problem of remotely accessing the storage from Windows but introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need a HA Samba setup on top of that to maintain HA which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before and I could not silently failover between servers. Even when online, you are working with local files which will result in more conflicts than would be necessary if a local cache were only touched when disconnected It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well or even at all. Samba + "Offline Files" After that we became a little desparate and gave Windows "offline files" a chance. 
We figured that having something that is inbuilt into the OS would reduce administrative efforts, helps blaming someone else when it's not working properly and should just work since people have been using this for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with (silent! there is only a tiny notification in Windows explorer in the statusbar, which doesn't even open Sync Center if you click on it!) undeletable files on the server (!) and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file, everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "offline files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all which would mean those files become unavailable if the connection drop In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions. Summary Unless there is another fault-tolerant DFS that supports Windows natively I assume that stacking a HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?

    Read the article

  • mysqldump triggering repair of MySQL tables

    - by Rhodri
    I have an automated backup of a 6-gigabyte MySQL database running every two hours. I also have a script which checks every minute whether any MySQL tables need repair. Increasingly I'm seeing tables having to be repaired during the backup process, with the returned message:
      Auto-increment value: 0 is smaller than max used value: xx
    Is this being caused by corruption? Are the two scripts conflicting? Any ideas?

    Read the article

  • Can a hard poweroff / outage / crash corrupt VMware snapshots?

    - by basic6
    Assume a host system is running virtual machines (in VMware Workstation) and all their data is on reliable storage (so no data corruption due to HDD failure). If that host crashes (kernel panic) while a VM is running, files on the virtual filesystem could be corrupted. But there's a snapshot of the VM, taken before the crash. Is it safe to assume that, after reverting to the snapshot, the VM will be back in a clean state, or is there any way that this snapshot could have been corrupted by the crash?

    Read the article

  • segmentation fault on Unix - possible stack corruption

    - by bob
    Hello, I'm looking at a core from a process running on Unix. Usually I can work my way around and root through the backtrace to try to identify a memory issue. In this case, I'm not sure how to proceed. Firstly, the backtrace only gives 3 frames where I would expect a lot more. For those frames, all the function parameters presented appear to be completely invalid. They are not what I would expect. Some pointer parameters have the following associated with them:
      Cannot access memory at address
    Would this suggest some kind of complete stack corruption? I ran the process with libumem and all the buffers were reported as being clean. umem_status reported nothing either. So basically I'm stumped. What are the likely causes? What should I look for in the code, since libumem appears to have reported no errors? Any suggestions on how I can debug further? Any extra features in mdb I should consider? Thank you.
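    A few mdb invocations that sometimes recover more than the default backtrace (Solaris mdb and a libumem-linked core are assumed; dcmd availability varies by release):
      mdb core
      > ::status          # what actually killed the process
      > $C                # stack walk via frame pointers - can get past frames other walkers lose
      > ::stack
      > ::umem_verify     # re-check every umem cache in the core for corruption
      > ::findleaks       # only meaningful if UMEM_DEBUG was set when the core was produced
    A three-frame trace with garbage arguments usually means the stack itself was overwritten (for example a buffer overflow of a local array or a bad function pointer), which libumem would not catch since it only guards heap allocations; that distinction may be the missing piece.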

    Read the article

  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will be generating a lot of Base64-encoded audio and image data. This data will be served via HTTP in the form of XML, with the Base64 data inline. These files will most likely exceed 20 MB. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up but is largely unnecessary, because it is likely that this data will be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data I want to see what the consensus is. As of now, I am leaning toward storing the files on the filesystem for efficiency reasons, but if it is feasible to store them in a database it would be much easier to manage the data.

    Read the article

  • SQL Compact Edition database corruption

    - by jdv
    Hi, our product uses MS SQL Server Compact Edition on a Windows machine (laptop). It's basically a metadata index for files we have on the filesystem. Recently we have seen databases getting corrupted. This happens when the machine is very busy moving files around and has to do a tiny bit of database work at the same time. I was somewhat shocked that this was possible at all. It was my expectation that the database would stay coherent whatever the circumstances. Of course we are doing something wrong. Things we have checked so far are: use of only one DB connection per thread; specifying the maximum size when opening the database; the database is accessed by only one application, a .NET-based Windows service. Are there other gotchas?

    Read the article

  • Detecting metadata-only read requests in a Windows filesystem

    - by HyLian
    Hello, I'm developing a kind of filesystem driver. All of the read requests that Windows makes to my filesystem go through the driver implementation. I would like to distinguish between "normal" read requests and those that only want the metadata from the file (Windows reads the first 4 KB of the file and then stops reading). Does Windows mark these metadata reads in some way? It would be very useful to be able to treat the two kinds of operation differently. In a typical CreateFile call we have the AccessMode, ShareMode, CreationDisposition and FlagsAndAttributes parameters (each being a DWORD), and I'm not sure whether it's possible to extract some clue about the requested operation from them. Thanks for reading :)

    Read the article

  • Problem using delete[] (Heap corruption) when implementing operator+= (C++)

    - by Darel
    I've been trying to figure this out for hours now, and I'm at my wit's end. I would surely appreciate it if someone could tell me when I'm doing wrong. I have written a simple class to emulate basic functionality of strings. The class's members include a character pointer data (which points to a dynamically created char array) and an integer strSize (which holds the length of the string, sans terminator.) Since I'm using new and delete, I've implemented the copy constructor and destructor. My problem occurs when I try to implement the operator+=. The LHS object builds the new string correctly - I can even print it using cout - but the problem comes when I try to deallocate the data pointer in the destructor: I get a "Heap Corruption Detected after normal block" at the memory address pointed to by the data array the destructor is trying to deallocate. Here's my complete class and test program: #include <iostream> using namespace std; // Class to emulate string class Str { public: // Default constructor Str(): data(0), strSize(0) { } // Constructor from string literal Str(const char* cp) { data = new char[strlen(cp) + 1]; char *p = data; const char* q = cp; while (*q) *p++ = *q++; *p = '\0'; strSize = strlen(cp); } Str& operator+=(const Str& rhs) { // create new dynamic memory to hold concatenated string char* str = new char[strSize + rhs.strSize + 1]; char* p = str; // new data char* i = data; // old data const char* q = rhs.data; // data to append // append old string to new string in new dynamic memory while (*p++ = *i++) ; p--; while (*p++ = *q++) ; *p = '\0'; // assign new values to data and strSize delete[] data; data = str; strSize += rhs.strSize; return *this; } // Copy constructor Str(const Str& s) { data = new char[s.strSize + 1]; char *p = data; char *q = s.data; while (*q) *p++ = *q++; *p = '\0'; strSize = s.strSize; } // destructor ~Str() { delete[] data; } const char& operator[](int i) const { return data[i]; } int size() const { return strSize; } private: char *data; int strSize; }; ostream& operator<<(ostream& os, const Str& s) { for (int i = 0; i != s.size(); ++i) os << s[i]; return os; } // Test constructor, copy constructor, and += operator int main() { Str s = "hello"; // destructor for s works ok Str x = s; // destructor for x works ok s += "world!"; // destructor for s gives error cout << s << endl; cout << x << endl; return 0; }
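    For reference, the heap checker is most likely objecting to a one-byte overrun in operator+= rather than to the delete[] itself: both copy loops already write a terminating '\0' (the second loop copies rhs's terminator), so the extra *p = '\0' afterwards lands one byte past the end of the new allocation, and the debug heap reports it when that buffer is later freed by the destructor. A hedged sketch of a corrected version:
      Str& operator+=(const Str& rhs)
      {
          char* str = new char[strSize + rhs.strSize + 1];
          char* p = str;
          for (const char* i = data; i != 0 && *i; ++i)      // copy the old characters, no '\0' yet
              *p++ = *i;
          for (const char* q = rhs.data; q != 0 && *q; ++q)  // append the right-hand side
              *p++ = *q;
          *p = '\0';                                         // single terminator, still inside the buffer
          delete[] data;
          data = str;
          strSize += rhs.strSize;
          return *this;
      }
    The buffer sizing is unchanged; only the terminator handling differs, which keeps every write within the strSize + rhs.strSize + 1 bytes that were allocated.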

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >