Search Results

Search found 68654 results on 2747 pages for 'file corruption'.

  • Finding the underlying cause of Windows 7 account corruption

    - by Carl Jokl
    I have been having trouble with my sister's computer, which I built. It is running Windows 7 Ultimate x64. The user accounts keep becoming corrupted. The problem first manifests as Windows reporting that the profile failed to load properly and signing the user in with a temporary profile. Eventually the account will not allow login at all, with an error message along the lines of the authentication service failing the login. I have found information about this problem and how to fix it: something corrupts the account profile, and backing up and recreating the accounts fixes it. I have been able to get logins working again, but over a period of usually about a week it happens again; bit by bit the accounts corrupt and then it is back to square one. I am frustrated because I don't know the underlying cause, i.e. what is corrupting the accounts in the first place. At the moment I am just treating the symptoms. I was hoping someone with more experience of this problem might be able to help me find the root cause. Some articles suggest that Norton Internet Security, which is installed, is a big culprit of this problem, so I could try uninstalling it and see if that helps. The one thing that is different about this computer from any other I have built is that it has both a hard drive and a solid state drive. The documents and settings, i.e. the Users directory, are stored on the hard drive. This was done following an article I found on the Internet about moving the user account data onto a separate drive in Windows 7. Moving the user accounts is more of a pain under Windows 7, and this solution involved creating a low-level file system link from the boot drive (solid state) to the hard drive: the computer behaves just as if it is accessing the Users folder from the boot drive, but the data is actually stored on the hard drive. This may have nothing to do with the cause of the problem, but since the symptom is user account corruption it is a possibility I have not been able to rule out. Any help would be appreciated, as I would be glad to see the back of this problem.
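    For reference (not part of the original question): the "low level file system link" described above is normally an NTFS junction, created roughly as below from an elevated prompt while no profile on C: is in use. This is a sketch with assumed drive letters, not the poster's exact commands:

        rem Mirror the profiles onto the hard drive, then replace C:\Users with a junction
        robocopy C:\Users D:\Users /MIR /COPYALL /XJ
        rmdir /S /Q C:\Users
        mklink /J C:\Users D:\Users

    If the junction were the culprit, a test account whose profile physically lives on the boot drive should survive longer than the relocated ones, which would be one way to rule the relocation in or out.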

    Read the article

  • yum update failed

    - by Nemanja Djuric
    I have a problem doing yum update on my OpenVZ VPS; I get this error message:

        (56/69): glibc-devel-2.5-81.el5_8.7.x86_64.rpm              | 2.4 MB  00:00
        (57/69): libstdc++-devel-4.1.2-52.el5_8.1.x86_64.rpm        | 2.8 MB  00:00
        (58/69): binutils-2.17.50.0.6-20.el5_8.3.x86_64.rpm         | 2.9 MB  00:00
        (59/69): cpp-4.1.2-52.el5_8.1.x86_64.rpm                    | 2.9 MB  00:00
        (60/69): device-mapper-multipath-0.4.7-48.el5_8.1.x86_64    | 3.0 MB  00:00
        (61/69): mysql-5.1.58-jason.1.x86_64.rpm                    | 3.5 MB  00:03
        (62/69): coreutils-5.97-34.el5_8.1.x86_64.rpm               | 3.6 MB  00:00
        (63/69): gcc-c++-4.1.2-52.el5_8.1.x86_64.rpm                | 3.8 MB  00:00
        (64/69): glibc-2.5-81.el5_8.7.x86_64.rpm                    | 4.8 MB  00:01
        (65/69): gcc-4.1.2-52.el5_8.1.x86_64.rpm                    | 5.3 MB  00:01
        (66/69): glibc-2.5-81.el5_8.7.i686.rpm                      | 5.4 MB  00:01
        (67/69): python-libs-2.4.3-46.el5_8.2.x86_64.rpm            | 5.9 MB  00:01
        (68/69): mysql-server-5.1.58-jason.1.x86_64.rpm             | 13 MB   00:07
        (69/69): glibc-common-2.5-81.el5_8.7.x86_64.rpm             | 16 MB   00:03
        --------------------------------------------------------------------------------
        Total                                            2.4 MB/s | 106 MB  00:44
        Running rpm_check_debug
        Running Transaction Test
        Finished Transaction Test
        Transaction Check Error:
          file /etc/my.cnf from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
          file /usr/bin/mysqlaccess from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
          [... the same conflict is reported against mysql-5.0.77-4.el5_6.6.i386 for the man pages (my_print_defaults.1.gz, mysql.1.gz, mysql_config.1.gz, mysql_find_rows.1.gz, mysql_waitpid.1.gz, mysqlaccess.1.gz, mysqladmin.1.gz, mysqldump.1.gz, mysqlshow.1.gz), the charset files (Index.xml, cp1250.xml, cp1251.xml) and every locale errmsg.sys under /usr/share/mysql/ (czech through ukrainian) ...]
          file /etc/my.cnf from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
          file /usr/bin/mysql_find_rows from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
          file /usr/bin/mysqlaccess from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
          [... the same list of man page, charset and locale errmsg.sys conflicts is repeated against mysql-5.0.95-1.el5_7.1.i386 ...]
        Error Summary

    Thank you for your help. Best regards, Nemanja
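    For what it's worth, the log itself names the collision: a third-party 64-bit mysql-5.1.58 build conflicts with the distribution's 32-bit mysql-5.0.x packages that are still installed. A hedged sketch of the usual way out, assuming nothing on the VPS needs the 32-bit client libraries:

        # list the installed mysql packages with their architectures
        rpm -qa --qf '%{NAME}-%{VERSION}.%{ARCH}\n' | grep -i mysql

        # remove the conflicting 32-bit packages, then retry
        yum remove mysql.i386
        yum update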

    Read the article

  • File corruption

    - by firesoul
    Recently, almost every file I try to copy to my HDDs gets corrupted. When writing to the disk, some portion of the file is replaced by random bytes (checked using diff tools). There's no problem reading existing files. I'm pretty sure it's not the HDD, because I've tried another HDD (a brand new 1TB WD) and the problem remains. I have an AMD Athlon 64 X2 CPU, an ASUS M2NE motherboard, 2 WD SATA II HDDs (320GB + 1TB) and 3GB of Corsair RAM. My OS is Windows 7 (64-bit). Now, what could be the cause of this? If it's the motherboard, can it be worked around by using a PCI SATA card?

    Read the article

  • Mysqld not starting due to apparent db corruption

    - by pitosalas
    I am very new to administering MySQL and, unluckily for me, something has clobbered the database. There are many error messages in the log and I am not sure how to safely proceed. Can you give some tips? Here's the log:

        110107 15:07:15  mysqld started
        110107 15:07:15  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        110107 15:07:15  InnoDB: Starting log scan based on checkpoint at
        InnoDB: log sequence number 35 515914826.
        InnoDB: Doing recovery: scanned up to log sequence number 35 515915839
        InnoDB: 1 transaction(s) which must be rolled back or cleaned up
        InnoDB: in total 1 row operations to undo
        InnoDB: Trx id counter is 0 1697553664
        110107 15:07:15  InnoDB: Starting an apply batch of log records to the database...
        InnoDB: Progress in percents: 26 27 28 [...] 98 99
        InnoDB: Apply batch completed
        InnoDB: Starting rollback of uncommitted transactions
        InnoDB: Rolling back trx with id 0 1697553198, 1 rows to undo
        InnoDB: Error: trying to access page number 3522914176 in space 0,
        InnoDB: space name ./ibdata1,
        InnoDB: which is outside the tablespace bounds.
        InnoDB: Byte offset 0, len 16384, i/o type 10
        110107 15:07:15  InnoDB: Assertion failure in thread 3086403264 in file fil0fil.c line 3922
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
        InnoDB: about forcing recovery.
        mysqld got signal 11;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.
        key_buffer_size=0
        read_buffer_size=131072
        max_used_connections=0
        max_connections=100
        threads_connected=0
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 217599 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.
        thd=(nil)
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        Cannot determine thread, fp=0xbffc55ac, backtrace may not be correct.
        Stack range sanity check OK, backtrace follows:
        0x8139eec 0x83721d5 0x833d897 0x833db71 0x832aa38 0x835f025 0x835f7a3
        0x830a77e 0x8326b57 0x831c825 0x8317b8d 0x82a9e66 0x8315732 0x834fc9a
        0x828d7c3 0x81c29dd 0x81b5620 0x813d9fe 0x40fdf3 0x80d5ff1
        New value of fp=(nil) failed sanity check, terminating stack trace!
        Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html
        and follow instructions on how to resolve the stack trace.
        Resolved stack trace is much more helpful in diagnosing the problem,
        so please do resolve it
        The manual page at http://www.mysql.com/doc/en/Crashing.html contains
        information that should help you find out what is causing the crash.
        110107 15:07:15  mysqld ended
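    Not an answer from the thread, but the forcing-recovery link in the log points at the standard procedure: take a full copy of the data directory first, then start mysqld with a minimal innodb_force_recovery level so the data can be dumped. A sketch of the my.cnf fragment:

        # back up the whole datadir before trying this; raise the level (1-6)
        # one step at a time, only as far as needed to get mysqldump working
        [mysqld]
        innodb_force_recovery = 1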

    Read the article

  • file corruption on read/write 2.6.32-22-server (happens across many kernels)

    - by Jonathan
    Hi guys, I'm having an issue where after the server has been up for a period of time (a few days to a week) it will start reading corrupt data. For instance, when I run sha1sum on a file after a fresh boot the checksum stays the same, but after a while I start to get segfaults, and from then on every read of that file produces a different sha1sum. I've checked S.M.A.R.T. with long tests and I've run an extended memtest86+ (12 passes). My lspci is as follows:

        00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
        00:01.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (int gfx)
        00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
        00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3)
        00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode]
        00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
        00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
        00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3c)
        00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller
        00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller
        00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
        00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        01:05.0 VGA compatible controller: ATI Technologies Inc Radeon HD 3300 Graphics
        01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller
        02:00.0 Ethernet controller: Atheros Communications Atheros AR8121/AR8113/AR8114 PCI-E Ethernet Controller (rev b0)
        03:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. Device 3403

    I could really use some help with this; do you have any idea what could cause it? It's really frustrating, as it seems to trigger entirely at random and will not go away until I reboot. I also use KVM for virtualization and MD for software RAID on this server, and the processor is a Phenom II X4 965. I don't believe it's the software RAID, however, as this also affects files hosted on non-RAID partitions.

    Read the article

  • Spurious alleged file corruption on Windows 7

    - by Johannes Rössel
    Recently my laptop sometimes warns about corrupted files on the hard drive (a Samsung SSD PB22-JS3 TM). So far this has only happened when updating (or checking out) an SVN repository with either TortoiseSVN or the command-line Subversion client. The fun thing is that the corrupted file has always been a .svn directory (although the directory entry may contain files in that directory too, if they're small enough, which should be the case with SVN). However, when I look into the warned-about directory I notice nothing strange or unusual and get no more warnings about it, and another attempt at updating the working copy works (SVN stops updating once that error occurs; TortoiseSVN even shows an appropriate error message). Well, mostly; sometimes it does it again, albeit with a different directory. Since the laptop is only a few months old I doubt the SSD is failing already; five months of normal usage shouldn't be too surprising. Also, it has (so far) occurred only with SVN updates on a large repository. Maybe that's too many writes in a short time and some part between the software and the hardware doesn't quite keep up, but I don't know enough about this to make an informed guess. Does anyone know what's up here? ETA: I've run chkdsk (it seems to schedule itself anyway when this happens) and it didn't find anything out of the ordinary.

    Read the article

  • Folder/File permission transfer between alike file structure

    - by Tyler Benson
    So my company has recently upgraded to a new SAN, but the person who copied all the data over must have done a drag-and-drop or basic copy to move everything; apparently Xcopy is not something he cared to use. So now I am left with the task of duplicating all the permissions. The structure has changed a bit (more files/folders have been added) but has for the most part stayed the same. I'm looking for suggestions to help automate this process. Can I use XCopy to transfer ONLY permissions from one tree to another? Would I just ignore any folders/permissions that don't line up correctly? Thanks a ton in advance, Tyler
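    One possible approach, since only the ACLs need to move: walk the old tree and, for every path that still exists on the new SAN, replay that directory's access rules onto the copy. The sketch below is an illustration with made-up share names, not a tested tool; robocopy with /SEC (or /SECFIX for files that already exist at the destination) is the command-line equivalent.

        using System;
        using System.IO;
        using System.Security.AccessControl;

        class AclSync
        {
            static void Main()
            {
                const string oldRoot = @"\\oldsan\data";   // hypothetical source root
                const string newRoot = @"\\newsan\data";   // hypothetical target root

                foreach (var oldDir in Directory.EnumerateDirectories(oldRoot, "*", SearchOption.AllDirectories))
                {
                    var newDir = oldDir.Replace(oldRoot, newRoot);
                    if (!Directory.Exists(newDir))
                        continue; // structure diverged here; skip it, as the question suggests

                    // read only the access rules (not owner/audit info) as SDDL and replay them
                    var sddl = new DirectoryInfo(oldDir)
                        .GetAccessControl(AccessControlSections.Access)
                        .GetSecurityDescriptorSddlForm(AccessControlSections.Access);

                    var security = new DirectorySecurity();
                    security.SetSecurityDescriptorSddlForm(sddl);
                    new DirectoryInfo(newDir).SetAccessControl(security);
                }
            }
        }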

    Read the article

  • Does using msysgit lead to repository corruption?

    - by randomusing
    While stumbling through the chromium code documentation, I came across this post: http://code.google.com/p/chromium/wiki/UsingGit#Windows If you are using msysgit, you are asking for trouble. Using both msysgit (including TortoiseGit) and cygwin's version of git is a path to lead to repository corruption so it's safer to stick with the cygwin's version. So if you still have msysgit in your PATH, you are on your own. Does this really happen? What causes the corruption?

    Read the article

  • C# File Exception: cannot access the file because it is being used by another process

    - by Lirik
    I'm trying to download a file from the web and save it locally, but I get an exception: "The process cannot access the file 'blah' because it is being used by another process." This is my code:

        File.Create("data.csv"); // create the file
        request = (HttpWebRequest)WebRequest.CreateDefault(new Uri(url));
        request.Timeout = 30000;
        response = (HttpWebResponse)request.GetResponse();
        using (Stream file = File.OpenWrite("data.csv"), // <-- Exception here
                      input = response.GetResponseStream())
        {
            // Save the file using Jon Skeet's CopyStream method
            CopyStream(input, file);
        }

    I've seen numerous other questions with the same exception, but none of them seem to apply here. Any help?
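    A likely cause, reading the snippet rather than the thread: File.Create does not just create the file, it returns an open FileStream, and that undisposed handle is what blocks the later File.OpenWrite. A minimal sketch of the fix under that assumption, reusing the question's response and CopyStream:

        // either dispose the stream File.Create hands back...
        File.Create("data.csv").Dispose();

        // ...or drop the pre-create entirely; File.OpenWrite creates the file on demand
        using (Stream file = File.OpenWrite("data.csv"),
                      input = response.GetResponseStream())
        {
            CopyStream(input, file); // same helper as in the question
        }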

    Read the article

  • How to Customize the File Open/Save Dialog Box in Windows

    - by Lori Kaufman
    Generally, there are two kinds of Open/Save dialog boxes in Windows. One kind looks like Windows Explorer, with the tree on the left containing Favorites, Libraries, Computer, etc. The other kind contains a vertical toolbar called the Places Bar. The Windows Explorer-style Open/Save dialog box can be customized by adding your own folders to the Favorites list. You can then click the arrows to the left of the main items, except the Favorites, to collapse them, leaving only the list of default and custom Favorites. The Places Bar is located along the left side of the File Open/Save dialog box and contains buttons providing access to frequently used folders. The default buttons on the Places Bar are links to Recent Places, Desktop, Libraries, Computer, and Network. However, you can change these to links to custom folders of your choice. We will show you how to customize the Places Bar using the registry, and using a free tool in case you are not comfortable making changes in the registry.
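    As a taste of the registry route for the legacy dialog (treat the exact values as something to verify before relying on them): the classic tweak for the comdlg32 Places Bar lives under a Placesbar policy key, e.g.

        Windows Registry Editor Version 5.00

        ; hypothetical example: point the first two Places Bar buttons at custom folders
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar]
        "Place0"="C:\\Projects"
        "Place1"="D:\\Scans"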

    Read the article

  • Opening a file opens the folder the file is in, not the file itself

    - by Pepe Lebuntu
    Whenever I try to open a file (such as an .odt or .doc) from, say, the Dash or the Firefox downloads list, Ubuntu 11.10 opens Nautilus at the folder where the file is, rather than going to the application and loading the file straight away. In previous releases, when I clicked on a downloaded file it went straight to LibreOffice, which was fine. This adds a superfluous step to the process. How do I associate the correct extensions?

    Read the article

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues but every once in a while we have an issue of attempting to process a file that is not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check to see if a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
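    scp itself does not take a Windows-style lock, but while the transfer is running the receiving SSH service keeps the file open for writing, so the Windows job can often detect an in-flight file by asking for exclusive access before processing it. A minimal sketch of that probe (names are illustrative):

        using System;
        using System.IO;

        static class TransferProbe
        {
            // true when the file can be opened exclusively, i.e. nothing
            // (such as the SSH daemon writing the upload) still has it open
            public static bool LooksComplete(string path)
            {
                try
                {
                    using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                    {
                        return true;
                    }
                }
                catch (IOException)
                {
                    return false; // still locked, likely mid-transfer
                }
            }
        }

    A more robust convention, if the sender can be changed, is to upload to a temporary name and rename when done, since the rename only happens after the transfer finishes and is cheap to test for.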

    Read the article

  • PHP File Downloading Questions

    - by nsearle
    Hey all! I am currently running into some problems with users downloading a file stored on my server. I have code set up to automatically download a file once the user hits the download button. It works for all files, but when the size gets larger than 30 MB it has issues. Is there a limit on user downloads? Also, I have supplied my example code and am wondering if there is a better practice than using the PHP function file_get_contents. Thank you all for the help!

        $path = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/';
        $filename = 'filename.zip';
        $filesize = filesize($path . $filename);
        @header("Content-type: application/zip");
        @header("Content-Disposition: attachment; filename=$filename");
        @header("Content-Length: $filesize");
        echo file_get_contents($path . $filename);

    Read the article

  • Using Multiple File Handles for Single File

    - by Ryan Rosario
    I have an O(n^2) operation that requires me to read line i from a file and then compare line i to every line in the file. This repeats for all i. I wrote the following code to do this with two file handles, but it does not yield the result I am looking for. I imagine this is a simple error on my part.

        IN1 = open("myfile.dat", "r")
        IN2 = open("myfile.dat", "r")
        for line1 in IN1:
            for line2 in IN2:
                print line1.strip(), line2.strip()
        IN1.close()
        IN2.close()

    The result:

        Hello Hello
        Hello World
        Hello This
        Hello is
        Hello an
        Hello Example
        Hello of
        Hello Using
        Hello Two
        Hello File
        Hello Pointers
        Hello to
        Hello Read
        Hello One
        Hello File

    The output should contain 15^2 lines.

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people use Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binary, how to upload it into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks in Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    Then in the backend controller we retrieve the whole content of the file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, say a 6GB HD movie, then after uploading for a few minutes the request fails (the original post shows the resulting error page) and the upload is terminated. In ASP.NET there is a request length limitation: the maximum request length is defined in the web.config file, and it must be less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period; from my tests the timeout looks like 2-3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, a simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above, we will upload the large file in chunks. This gives us some benefits:
    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of the file from the 512th byte to the 768th byte and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload.
    We will use the File interface to represent the file the user selected in the browser and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10MB file with 512KB chunks, we can read it in 512KB blobs by using File.slice in a loop.

    Assume we have a web page as below. The user can select a file, an input box specifies the block size in KB, and a button starts the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have a JavaScript function to upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object; if any of them is "undefined" the condition result will be "false", which means your browser doesn't support these premium features and it's time for you to get your browser updated. FormData is another new feature we are going to use later: it can generate a temporary form for us, and we will use this interface to create a form with the chunk and its associated metadata when we invoke the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man plz update to a modern browser before you try this cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces with its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into the blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; on the server side, each received chunk is uploaded as a block into Blob Storage, and finally they are committed with the index list through BlockBlob.PutBlockList. But since all of these invocations are ajax calls, they are not synchronized, so we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library and find its documentation on its project page; I will not explain this library too much in this post. We put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js, which controls the execution sequence, in series or in parallel. Hence we define an array and push the chunk-upload functions into it.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...

                // define the function array and push all chunk upload operations into this array
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    In the code below I use the File.slice method to read each chunk based on the start and end byte index we calculated previously, and construct a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData, then post this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into this array
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js, and once all of them have executed successfully we invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into this array
                ... ...
                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of our logic is:
    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read each chunk from the file and upload its content to the backend service through ajax.
    - Execute the functions defined in the previous step with async.js.
    - Finally, commit the chunks by invoking the backend service, which puts them into Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieved the file name, index and the chunk content from the Request.Form object, which were passed from the client side.
    Then I used the Windows Azure SDK to create a blob container (in this case named "test") and a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with the index. In Blob Storage each block must have an index (ID) associated with it, so that at the end we can put all blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form, as well as the chunk (block) ID list as a string; I split it into a list and invoked the BlockBlob.PutBlockList method. After that our blob is shown in the container and is ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need. (The original post illustrates the whole upload process with a diagram.) Below is the full client-side JavaScript code.
        <script type="text/javascript" src="~/Scripts/async.js"></script>
        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                    // assert the browser supports html5
                    if (window.File && window.Blob && window.FormData) {
                        alert("Your browser is awesome, let's rock!");
                    }
                    else {
                        alert("Oh man plz update to a modern browser before you try this cool stuff out.");
                        return;
                    }

                    // start to upload each file in chunks
                    var files = $("#upload_files")[0].files;
                    for (var i = 0; i < files.length; i++) {
                        var file = files[i];
                        var fileSize = file.size;
                        var fileName = file.name;

                        // calculate the start and end byte index for each block (chunk)
                        // with the index, file name and index list for future use
                        var blockSizeInKB = $("#block_size").val();
                        var blockSize = blockSizeInKB * 1024;
                        var blocks = [];
                        var offset = 0;
                        var index = 0;
                        var list = "";
                        while (offset < fileSize) {
                            var start = offset;
                            var end = Math.min(offset + blockSize, fileSize);

                            blocks.push({
                                name: fileName,
                                index: index,
                                start: start,
                                end: end
                            });
                            list += index + ",";

                            offset = end;
                            index++;
                        }

                        // define the function array and push all chunk upload operations into this array
                        var putBlocks = [];
                        blocks.forEach(function (block) {
                            putBlocks.push(function (callback) {
                                // load a blob based on the start and end index of each chunk
                                var blob = file.slice(block.start, block.end);
                                // put the file name, index and blob into a temporary form
                                var fd = new FormData();
                                fd.append("name", block.name);
                                fd.append("index", block.index);
                                fd.append("file", blob);
                                // post the form to the backend service (asp.net mvc controller action)
                                $.ajax({
                                    url: "/Home/UploadInFormData",
                                    data: fd,
                                    processData: false,
                                    contentType: "multipart/form-data",
                                    type: "POST",
                                    success: function (result) {
                                        if (!result.success) {
                                            alert(result.error);
                                        }
                                        callback(null, block.index);
                                    }
                                });
                            });
                        });

                        // invoke the functions one by one
                        // then invoke the commit ajax call to put blocks into blob in azure storage
                        async.series(putBlocks, function (error, result) {
                            var data = {
                                name: fileName,
                                list: list
                            };
                            $.post("/Home/Commit", data, function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                else {
                                    alert("done!");
                                }
                            });
                        });
                    }
                });
            });
        </script>

    And below is the full ASP.NET MVC controller code.
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    If we select a file in the browser, we will see our application upload the chunks, in the size we specified, to the server through background ajax calls and then commit all chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solved the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout, but it might introduce a performance problem, since the chunks are uploaded in sequence. In order to improve upload performance we can modify our client-side code a bit to make the upload operations run in parallel. The good news is that the async.js library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks used async.series, which means all functions are executed in sequence. Now we change this code to async.parallel, which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into this array
                ... ...
                // invoke the functions in parallel
                // then invoke the commit ajax call to put blocks into blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way all chunks are uploaded to the server side at the same time to maximize bandwidth usage. This works if the file is not very large and the chunk size is not very small. But for a large file it might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limitation. The code below sets the concurrency limitation to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into this array
                ... ...
                // invoke the functions with a concurrency limit of 4
                // then invoke the commit ajax call to put blocks into blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
    - File.slice: read part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limitation. We also demonstrated how to use the async.js JavaScript library to help us control the asynchronous calls and the parallel limitation.

    Regarding the chunk size and the parallel limitation value there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side's (Windows Azure Cloud Service virtual machine's) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) on one small web role (1 core CPU, 1.75GB memory, 100Mbps bandwidth).
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- There is no significant performance improvement between parallel with 4 threads and 8 threads.
- Transferring binary chunks performs better than base64 strings.
- In all cases, a chunk size of 1MB - 2MB gives the best performance.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. When a lost write occurs, an I/O subsystem acknowledges the completion of the block write even though the write I/O did not occur in the persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures and volume manager errors. The demo simulates exactly such a data block lost write. When a primary database lost write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate that a primary lost write has occurred, preventing the corruption from spreading to the standby database.

    Links:
    - MOS note 1302539.1, "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"
    - Demo
    - MAA Best Practices

    Rene Kundersma
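    As a hedged aside, not part of the demo itself: the standby-side detection shown there depends on the DB_LOST_WRITE_PROTECT initialization parameter, which makes the database record buffer cache block reads in the redo log so the standby can compare block versions. A minimal sketch of enabling it, assuming SYSDBA sessions and an spfile on both primary and standby:

    -- run on the primary and on the physical standby
    -- TYPICAL covers read-write tablespaces; FULL also covers read-only
    -- tablespaces at the cost of additional redo
    ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE = BOTH;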

    Read the article

  • PHP IIS problems downloading file says it is corrupt

    - by Matt
    Hi, I am running PHP on IIS 6 with MSSQL. I have uploaded a file to my webserver through a PHP script. Upon checking the file on the server, the file is fine and not corrupt. However, when I then follow a link on my website to download the file, it says the file is corrupt. I know the file isn't corrupt, as I can view it perfectly if I look at it on the server. It seems like this is a common problem, as a similar problem was posted here: http://www.bigresource.com/Tracker/Track-php-1pAakBhT/ Any help would be much appreciated. Thanks, M

    My download code is as follows:

    $filesize = $rows->filesize;
    $filepath = $rows->filepath;
    header("Content-Disposition: attachment; filename=$filename");
    header("Content-length: $filesize");
    header("Content-type: application/pdf");
    header("Cache-control: must-revalidate");
    header("Content-Description: PHP Generated Data");
    readfile($filepath);
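    A common culprit for this symptom, offered as a hedged suggestion rather than a confirmed diagnosis: stray output (whitespace before the opening <?php tag, a UTF-8 BOM, or a PHP notice) gets prepended to the binary stream, or the script keeps emitting output after readfile(), and either one corrupts the downloaded file. A defensive sketch using the same $filename, $filesize and $filepath variables:

    <?php
    // sketch only: discard anything already buffered so no stray bytes
    // precede the PDF, then stop the script right after the file is sent
    if (ob_get_level()) {
        ob_end_clean();
    }
    header("Content-Disposition: attachment; filename=\"$filename\"");
    header("Content-Length: $filesize");
    header("Content-Type: application/pdf");
    header("Cache-Control: must-revalidate");
    readfile($filepath);
    exit;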

    Read the article

  • help! corrupt file recovery

    - by TheBumpper
    My supervisor's computer crashed last night, and I'm trying to help him out. He wrote an R script, but when he tried to open it, it was empty. Yet the file is 7.9KB, so I think it should not be empty... Anyway, when I tried to open it, gedit gave this error: "The file you opened has some invalid characters. If you continue editing this file you could corrupt this document. You can also choose another character encoding and try again.", with the option to choose another character encoding. The content looked like this (with a red background):

    \00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\

    My question is: is there a way to restore the file? I hope someone has a brilliant idea.

    Read the article

  • Why am I getting 'Heap Corruption'?

    - by fneep
    Please don't crucify me for this one. I decided it might be good to use a char* because the string I intended to build was of a known size. I am also aware that if timeinfo->tm_hour returns something other than 2 digits, things are going to go badly wrong. That said, when this function returns, Visual Studio goes ape at me about HEAP CORRUPTION. What's going wrong? (Also, should I just use a stringbuilder?)

    void cLogger::_writelogmessage(std::string Message)
    {
        time_t rawtime;
        struct tm* timeinfo = 0;

        time(&rawtime);
        timeinfo = localtime(&rawtime);

        char* MessageBuffer = new char[Message.length()+11];
        char* msgptr = MessageBuffer;

        _itoa(timeinfo->tm_hour, msgptr, 10);
        msgptr += 2;
        strcpy(msgptr, "::");
        msgptr += 2;

        _itoa(timeinfo->tm_min, msgptr, 10);
        msgptr += 2;
        strcpy(msgptr, "::");
        msgptr += 2;

        _itoa(timeinfo->tm_sec, msgptr, 10);
        msgptr += 2;
        strcpy(msgptr, " ");
        msgptr += 1;

        strcpy(msgptr, Message.c_str());

        _file << MessageBuffer;

        delete[] MessageBuffer;
    }
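    For reference, the crash is consistent with a one-byte heap overflow: the prefix "HH::MM::SS " occupies 11 characters, so the final strcpy writes the message starting at offset 11 and puts its terminating null at index 11 + Message.length(), one past the end of a buffer allocated with new char[Message.length() + 11]. Allocating Message.length() + 12 bytes would cure the overflow. A simpler sketch, assuming _file is a std::ofstream and that <ctime> and <iomanip> are included, lets the stream do the formatting and also fixes the one-digit hour/minute/second case:

    void cLogger::_writelogmessage(const std::string& Message)
    {
        // the stream owns the buffer, so there is nothing to overflow;
        // setfill/setw pad one-digit values to two digits
        time_t rawtime = time(0);
        struct tm* timeinfo = localtime(&rawtime);

        _file << std::setfill('0')
              << std::setw(2) << timeinfo->tm_hour << "::"
              << std::setw(2) << timeinfo->tm_min << "::"
              << std::setw(2) << timeinfo->tm_sec << " "
              << Message;
    }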

    Read the article

  • GNOME session not starting after filesystem corruption

    - by user3215
    I'm running Ubuntu 9.10 desktop edition. Suddenly today /home became corrupted and I was prompted to run fsck manually. I ran fsck -y /home and rebooted the system. The system booted, but I got no GUI interface (GNOME session), just a black screen with a user prompt instead. Any tricks to start my system normally? Any help is greatly appreciated.

    EDIT 1: The errors were similar to the following (maybe with some mistakes, as I had to type it manually):

    machine1 login: root
    password:
    at login Sun Jan 16 15:30:46 IST 2011 on tty1
    EXT3-fs error (device sda1): ext3_lookup: deleted inode referenced
    aborting journal on device sda1
    Remounting filesystem read-only
    root@machine1:~# startx
    mktemp: failed to create file via template `/tmp/serverauth.xxxxxxxxxxx: Read-only file system
    /usr/bin/startx: line 157: cannot create temp file for here-document: Read-only file system
    xauth: error in locking authority file /root/.Xauthority
    /usr/bin/startx: line 173: cannot create temp file for here-document: Read-only file system
    xauth: error in locking authority file /root/.Xauthority
    /usr/bin/startx: line 173: cannot create temp file for here-document: Read-only file system
    X: cannot stat /tmp/.x11-unix (No such file or directory), aborting
    giving up.
    xinit: No such file or directory (errno 2): unable to connect to xserver
    xinit: No such process (errno 3): Server error
    xauth: error in locking authority file /root/.Xauthority
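    Judging from the transcript, the read-only remount happened because the root filesystem on sda1 itself hit ext3 errors, not just /home, which is why startx cannot create its temp files. A hedged suggestion (the device name is taken from the error lines; the recovery details are assumptions): check that filesystem while it is not mounted read-write, for example from a live CD or the recovery console, then reboot.

    # sketch only: run from a live CD or recovery shell so /dev/sda1 is not
    # mounted read-write while being repaired
    sudo fsck -y /dev/sda1
    sudo reboot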

    Read the article

  • PHP File Upload second file does not upload, first file does without error

    - by Curtis
    So I have a script I have been using and it generally works well with multiple files... When I upload a very large file in a multiple file upload, only the first file is uploaded. I am not seeing any errors as to why. I figure this is related to a timeout setting but cannot figure it out - any ideas? I have the following set in my htaccess file:

    php_value post_max_size 1024M
    php_value upload_max_filesize 1024M
    php_value memory_limit 600M
    php_value output_buffering on
    php_value max_execution_time 259200
    php_value max_input_time 259200
    php_value session.cookie_lifetime 0
    php_value session.gc_maxlifetime 259200
    php_value default_socket_timeout 259200

    Read the article

  • scp No such file or directory

    - by Joe
    I've a confusing question for which superuser doesn't seem to have a good answer, and neither does Google. I'm trying to scp a file from a remote server to my local machine. The command is this:

    scp user@server:/path/to/source/file.gz /path/to/destination

    The error I get is:

    scp: /path/to/source/file.gz: No such file or directory

    user is my username on the server. The command syntax appears fine to me. ssh works fine and I can cd to the file, and it doesn't seem to be an access control issue. Thanks.

    Edit: Thank you John, I spotted the issue. ls returned this:

    -r--r--r-- 1 nobody users 168967171 Mar 10 2009 /path/to/source/file.gz

    So the file was on a read-only file system, and user was able to read it but not scp it. I just copied the file to a different directory, chowned it, and it worked fine. It would be good if someone could explain why this is the case though.

    Read the article

  • SharePoint Database security corruption

    - by H(at)Ni
    Hello, One time I faced an issue where my customer was getting an HTTP 500 internal server error while trying to access any SharePoint site. The problem appeared once he had moved back and forth with inheriting/breaking inheritance of permissions over different levels in the site collection. "Security corruption in database" sounds very tough for a customer running a production portal whose latest backup would cost him around 3 weeks of valuable data. However, the solution tends not to be that hard: there's an stsadm command that helps us detect the corruption and even delete the orphaned items causing it. Follow these steps:

    a. stsadm -o databaserepair -url http://SITEURL -databasename DBNAME
       This returned some orphaned items.

    b. stsadm -o databaserepair -url http://SITEURL -databasename DBNAME -deletecorruption
       This removed the orphaned items.

    Cheers,

    Read the article
