Search Results

Search found 70026 results on 2802 pages for 'file recovery'.


  • "Windows cannot find" file when opening Excel spreadsheet

    - by DanH
    For all of my Excel spreadsheets, when I attempt to open them (by double-clicking in Explorer) I get the message "Windows cannot find C:...". The files are there, and are valid zip files as seen by 7-Zip. There are no apparent lock files in the directories. I did just install Norton 360 over the weekend (replacing Kaspersky), but the Norton log shows no events related to Excel. However, while installing Norton I did reboot with some Excel files open. Presumably something is hosed in my Excel configuration, but I don't know what.

    Update (before actually posting) -- I found an article that suggested turning off the Advanced option "Ignore other applications that use DDE", then doing excel.exe /unregister followed by excel.exe /register. I tried this, but I suspect that the two Excel calls were ignored (Excel opened, but no obvious change). With that option off the spreadsheets load OK, but not with it on. And, curiously, spreadsheets load OK with the option on or off if I open Excel first and then open the spreadsheet in it. Does anyone have any idea what effect leaving that option off will have?

    Update 2 -- I tried running the "repair" option. It said it corrected a couple of config things (without saying what they were), but I still get a failure if I double-click an Excel file with the "Ignore other applications..." option checked.

    Update 3 -- I managed to fix this problem, but failed at the time to come back and say what I did, and now I can't remember for sure. But I think it had something to do with "Options"/"Save" and some of the values there. Something to do with AutoRecover, perhaps. (Possibly there was a file in recovery and I had to specify "Disable AutoRecover for this workbook" to let bring-up get past it. Or perhaps the AutoRecover file location was hosed.) Anyway, if it happens to someone else, and you find the fix, post it below and I'll mark it answered.

  • Setup.exe called from a batch file crashes with error 0x0000006

    - by Alex
    We're going to be installing some new software on pretty much all of our computers and I'm trying to set up a GPO to do it. We're running a Windows Server 2008 R2 domain controller and all of our machines are Windows 7. The GPO calls the following script, which sits on a network share on our file server. The script itself calls an executable that sits on another network share on another server. The executable immediately crashes with an error 0x0000006. The event log just says this:

    Windows cannot access the file for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Setup.exe because of this error.

    Here's the script (which is stored on \\WIN2K8R2-F-01\Remote Applications):

        @ECHO OFF
        IF DEFINED ProgramFiles(x86) (
            ECHO DEBUG: 64-bit platform
            SET _path="C:\Program Files (x86)\Canam"
        ) ELSE (
            ECHO DEBUG: 32-bit platform
            SET _path="C:\Program Files\Canam"
        )
        IF NOT EXIST %_path% (
            ECHO DEBUG: Folder does not exist
            PUSHD \\WIN2K8R2-PSA-01\PSA Data\Client
            START "" "Setup.exe" "/q"
            POPD
        ) ELSE (
            ECHO DEBUG: Folder exists
        )

    Running the script manually as administrator also results in the same error. Setting up a shortcut with the same target and parameters works perfectly. Manually calling the executable also works. Not sure if it matters, but the installer is based on dotNETInstaller. I don't know what version, though. I'd appreciate any suggestions on fixing this. Thanks in advance!

    UPDATE: I highly doubt this matters, but the network share that the script is hosted on is a shared drive, while the network share the script references for the executable is a shared folder. Also, both shares have Domain Computers listed with full access on the sharing and security tabs. And PUSHD works without wrapping the path in quotes.

  • How to change inode change time of a file?

    - by Emerald214
    I tried to use touch -d "2011-09-15 16:50" test.txt, but it just modifies the last access time and last modified time:

        Access: 2011-09-15 16:50:00.000000000 +0700
        Modify: 2011-09-15 16:50:00.000000000 +0700
        Change: 2011-11-15 16:56:55.620124149 +0700

    How do I change the last change time? I want to do this because my crontab uses filectime($file) to get the last changed time, so I need to create a file that looks two months old to test something.
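    For what it's worth, here is a minimal Python sketch (the file name is illustrative) of why touch-style calls cannot move ctime backwards: setting atime/mtime is itself an inode change, so the kernel bumps ctime to "now". Changing ctime generally means temporarily setting the system clock before touching the file, or editing the inode offline with debugfs on an ext filesystem.

        import os
        import time

        path = "test.txt"                    # hypothetical test file
        open(path, "a").close()

        two_months = 60 * 24 * 3600          # roughly two months, in seconds
        past = time.time() - two_months

        os.utime(path, (past, past))         # sets atime and mtime only

        st = os.stat(path)
        print("atime:", time.ctime(st.st_atime))   # two months ago
        print("mtime:", time.ctime(st.st_mtime))   # two months ago
        print("ctime:", time.ctime(st.st_ctime))   # still "now": the utime call itself changed the inode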

  • Move data from other user accounts into my user account

    - by user118136
    I had problems with my compiz settings and ended up making multiple accounts. Now I want to transfer the data from all of the deleted users into my current account. Some of it I cannot copy because I do not have read permission. If I run "sudo nautilus" in a terminal I get permission to read, but the copied data is then only accessible to the superuser, and I have to change the permissions for each file and each folder. How can I copy the data without superuser rights, OR how can I change the permissions for a selected folder and all files and folders inside it?
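    As a rough sketch of the second option, assuming the data has already been copied (via sudo nautilus) into a folder such as /home/youruser/recovered: the whole tree can be handed back to the normal account in one pass. The user name and path below are placeholders; this is the Python equivalent of "sudo chown -R youruser:youruser /home/youruser/recovered" and must itself be run with sudo.

        import os
        import shutil

        target_user = "youruser"                  # placeholder: your normal account
        target_group = "youruser"
        root = "/home/youruser/recovered"         # placeholder: where the copied data ended up

        # Give every directory and file back to the normal account.
        for dirpath, dirnames, filenames in os.walk(root):
            shutil.chown(dirpath, user=target_user, group=target_group)
            for name in filenames:
                shutil.chown(os.path.join(dirpath, name), user=target_user, group=target_group)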

  • How to prevent unison synchronizing a file while the file is still uploading

    - by user134600
    I use CentOS 5.8 Final. I am running unison from cron with the line below:

        */1 * * * * /usr/bin/unison > /dev/null 2>&1

    and the default profile looks like this:

        root = /var/www
        root = ssh://web02.example.com//var/www
        auto=true
        batch=true
        confirmbigdel=true
        fastcheck=true
        group=true
        owner=true
        prefer=newer
        silent=true
        times=true

    So the www folder is synchronized every minute. My problems are:

    I upload a file bigger than 10 MB to www from a client as user1, where user1 owns the www folder. While the file is still uploading, unison runs in that minute and suddenly the uploaded file's owner is changed to root:root.

    When I edit a file in the www folder and save it while unison is running, the file owner is changed to root:root where it should be user1:user1.

    Does anyone know about this problem?

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows is prompting about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc). Or maybe I've just never encountered this before; I'm really not sure.

    I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you!

    Samba config:

        [global]
        workgroup = WORKGROUP
        server string = %h server
        include = /etc/samba/dhcp.conf
        dns proxy = no
        log level = 2
        syslog = 2
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog only = yes
        panic action = /usr/share/samba/panic-action %d
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = no
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        guest account = nobody
        load printers = no
        disable spoolss = yes
        printing = bsd
        printcap name = /dev/null
        unix extensions = yes
        wide links = no
        create mask = 0777
        directory mask = 0777
        use sendfile = no
        null passwords = no
        local master = yes
        time server = yes
        wins support = yes
        ea support = yes
        store dos attributes = yes

    Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.

  • linux + change/edit file without affecting file date

    - by yael
    I want to edit a file on my Linux box, for example:

        ls -ltr /etc/some_file
        -rw-r--r-- 1 root root 188 Jul 1 2010 sysstat

        echo "Server101_IP=187.0.98.4" >> /etc/some_file

    I expect to still get the following date afterwards:

        ls -ltr /etc/some_file
        -rw-r--r-- 1 root root 188 Jul 1 2010 sysstat

    The date & time of this file must not change! I just want to edit the file, but I am wondering how to change it without affecting its date & time - is that possible?
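    A minimal Python sketch of the usual workaround (the shell equivalent is touch with a saved reference via -r): record the timestamps, make the edit, then put them back. The modification date shown by ls -l is preserved; note that the inode change time (ctime) will still move, and the example path needs appropriate privileges.

        import os

        path = "/etc/some_file"

        st = os.stat(path)                            # remember the current timestamps

        with open(path, "a") as f:                    # append the new line
            f.write("Server101_IP=187.0.98.4\n")

        os.utime(path, (st.st_atime, st.st_mtime))    # restore atime and mtime so ls -ltr shows the old date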

  • File corruption when copying different file on raid 1

    - by Stephan
    I have a RAID 1 configuration of 2 1TB drives on a Fedora 12 box. Most of what is stored there are video files that are numerical labeled. The problem I'm having is that I had one of the video files get corrupted. I copied a replacement from a backup and replaced the bad file and now it works fine. However, after doing this the next numbered file goes from 350MB to 200KB and all but about .5 second of video disappears. If I then replace that file it happens to the next one down the line. Ex: Replace corrupt file 1.avi and file 2.avi shrinks to 200KB. Replace now corrupted 2.avi and it works but 3.avi gets screwed up. I have run SMART tests on the drives and they report fine. Does anyone have any tests I can run to try to figure out what is going on? EDIT: It is a two disk software RAID 1 with an ext4 filesystem

  • unable to copy file to folder, permission denied without explanation

    - by ValekHalfHeart
    Recently Norton Internet Security deleted ml.exe (an assembler I use to program in masm32) off of my computer, thinking that one of the programs I had written with it was a virus (it was most certainly not). Fortunately, I had a copy of ml.exe backed up on an external hard drive, and tried to copy it over to my computer. The old ml.exe was located in C:\masm32\bin, so I tried to copy the new one to that location. After disabling Norton (which had opened the folder and was preventing me from accessing it), I am still unable to copy the new file to C:\masm32\bin. When I tried, Windows announced that I would need Administrator permission to copy the file. Since I'm an admin, I figured this wouldn't be a problem, although it was unexpected, as I have never had to provide administrator permission to access this folder before. However, instead of prompting me to enter my password, Windows simply refuses to copy the file. I repeat, I was not asked to provide a password. It simply says that I do not have permission. Does anyone know what's happening and how to fix it? Is Norton still causing problems, or is it something else?

  • Server Recovery from Denial of Service

    - by JMC
    I'm looking at a server that might be misconfigured to handle Denial of Service. The database was knocked offline during the attack, and failed to restart itself when the attack subsided.

    Details of the attack: the attacker either intentionally or unintentionally sent thousands of search queries using the application's search query URL within a couple of seconds. It looks like the server was overwhelmed, and it caused the database to log the messages shown in mysql.log below.

    Server specs: 1.5GB of dedicated memory.

    Are there any obvious mis-configurations here that I'm missing?

    mysql.log:

        121118 20:28:54 mysqld_safe Number of processes running now: 0
        121118 20:28:54 mysqld_safe mysqld restarted
        121118 20:28:55 [Warning] option 'slow_query_log': boolean value '/var/log/mysqld.slow.log' wasn't recognized. Set to OFF.
        121118 20:28:55 [Note] Plugin 'FEDERATED' is disabled.
        121118 20:28:55 InnoDB: The InnoDB memory heap is disabled
        121118 20:28:55 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121118 20:28:55 InnoDB: Compressed tables use zlib 1.2.3
        121118 20:28:55 InnoDB: Using Linux native AIO
        121118 20:28:55 InnoDB: Initializing buffer pool, size = 512.0M
        InnoDB: mmap(549453824 bytes) failed; errno 12
        121118 20:28:55 InnoDB: Completed initialization of buffer pool
        121118 20:28:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121118 20:28:55 [ERROR] Plugin 'InnoDB' init function returned error.
        121118 20:28:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121118 20:28:55 [ERROR] Unknown/unsupported storage engine: InnoDB
        121118 20:28:55 [ERROR] Aborting

    ulimit -a:

        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        scheduling priority (-e) 0
        file size (blocks, -f) unlimited
        pending signals (-i) 13089
        max locked memory (kbytes, -l) 64
        max memory size (kbytes, -m) unlimited
        open files (-n) 1024
        pipe size (512 bytes, -p) 8
        POSIX message queues (bytes, -q) 819200
        real-time priority (-r) 0
        stack size (kbytes, -s) 8192
        cpu time (seconds, -t) unlimited
        max user processes (-u) 1024
        virtual memory (kbytes, -v) unlimited
        file locks (-x) unlimited

    httpd.conf:

        StartServers 10
        MinSpareServers 8
        MaxSpareServers 12
        ServerLimit 256
        MaxClients 256
        MaxRequestsPerChild 4000

    my.cnf:

        innodb_buffer_pool_size=512M
        # Increase Innodb Thread Concurrency = 2 * [numberofCPUs] + 2
        innodb_thread_concurrency=4
        # Set Table Cache
        table_cache=512
        # Set Query Cache_Size
        query_cache_size=64M
        query_cache_limit=2M
        # A sort buffer is used for optimizing sorting
        sort_buffer_size=8M
        # Log slow queries
        slow_query_log=/var/log/mysqld.slow.log
        long_query_time=2
        # performance_tweak
        join_buffer_size=2M

    php.ini:

        memory_limit = 128M
        post_max_size = 8M
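    One thing worth checking is the raw memory arithmetic. A rough back-of-the-envelope sketch follows, assuming around 25 MB of resident memory per Apache/mod_php child; that per-child figure is an assumption, while the other numbers come from the configs above.

        # Rough worst-case memory budget for the box described above.
        total_ram_mb       = 1536    # "1.5GB of dedicated memory"
        innodb_buffer_mb   = 512     # innodb_buffer_pool_size=512M
        apache_max_clients = 256     # MaxClients 256
        apache_child_mb    = 25      # assumed size of one Apache/mod_php child

        apache_worst_case_mb = apache_max_clients * apache_child_mb
        print("Apache worst case: %d MB" % apache_worst_case_mb)                     # 6400 MB
        print("Headroom for MySQL: %d MB" % (total_ram_mb - apache_worst_case_mb))   # negative

    Under a flood of search requests Apache alone can grow to several times the machine's RAM, so when mysqld_safe tries to restart mysqld there is nothing left and the 512 MB buffer-pool allocation fails with errno 12, which matches the mmap failure in the log.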

  • External hard drive issue

    - by Blind Fish
    I am running Ubuntu 12.04 and I have a 500GB external hard drive, formatted in ext4, which I have used for about a year to store batches of data that I am sifting through. About once a week I move a chunk of data from the external drive to my internal drive for processing. As I do that I delete the data from the external drive and empty the trash, thereby clearing up space on the external and also preventing myself from grabbing stuff that I've already sorted through. The vast majority of this is text files, but there are a few .jpegs and .mp4s thrown in, if that matters. Anyway, this has been working without a hitch for nearly a year.

    So today I plug in the drive and I have an odd issue. I was able to move folders from the external over to my internal drive with no problem, but when I tried to delete those files I was unable to do so. I kept getting the "unable to send file to trash, do you wish to delete permanently?" message. I clicked yes / delete all, but no files were actually deleted. It just sat there. Even worse, while the system was trying in vain to delete those files I was unable to move more files over. In short, I was stuck. I canceled the operation, unmounted and remounted the drive, and then I had an even bigger problem.

    I have a spreadsheet of all the different folders on there (exactly 818) and yet when I opened the drive in Nautilus, it was only finding 512 folders. So I had 306 missing folders, but my free space was unchanged. Immediately I think that the drive may be corrupted. I went into Disk Utility and ran the check disk option. It said that the drive was NOT clean, but offered no remedy or option to repair. I went back into the drive in Nautilus and once again attempted to delete the files I had already moved. Same issue. I clicked on Details and it said that it was unable to create a trashing file I/O error. I've looked, and it's not a permissions issue. The drive and all folders that are in it all belong to me, and they are all read / write. I've started running badblocks on the drive, but it looks like that is going to take several hours to complete. Any ideas?

  • File sharing problem on Windows Server 2003 x64

    - by O. Askari
    Hi, We have a customer that hosts our .NET application server on Windows Server 2003 x64. The problem is, its file sharing gets totally disabled after about 10-30 minutes. The only way to re-enable it is to restart the server but the same thing happens again after each restart. This server contains SQL Server 2005 Enterprise, .NET Framework 3.5 and our .NET based application server. We haven't had such a problem with any other customer before so we asked them to prepare another server to deploy our application on it. We installed our application server on the new machine and let SQL Server remain on the old one. Unfortunately the same problem happened to the new machine too. Now the old machine works only as database server and the new one works as application server but both of them have the same file sharing problem. File sharing on both machines doesn't get disabled on the same time but it eventually happens to both of them. I wonder why is this happening and how to find the reason to this problem. Any suggestion or solution is much appreciated.

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32bit on a mobo with 24GB RAM. Of those 24GB, 20GB are assigned as a RAMDISK via ASRock XFastRAM. This RAMDISK has the drive letter X assigned to it. On X:\ I'm storing the temporary files folder, as well as pagefile.sys. Pagefile.sys has 6GB of size. The X:\ has usually around 14GB free space, so the temporary files are negligible, it's mostly the browsers which are storing their caches on there.

    Now my issue is that Firefox is crashing a lot on me, no error message pops up, but I know that this is because it's out of memory. I could kind of live with that, but now that I switched from using Eclipse to Android Studio, I know that I'm in trouble, because Java isn't capable of allocating, and Android Studio, together with the Java instances it launches, is quite a memory hog.

    So I tried to figure out what's wrong, and apparently Windows isn't swapping out memory onto the paging file. While my applications are crashing (firefox) / not starting (java vm's), the paging file is only using constantly around 15% of its size (checked with the performance monitor). 15% equals to 1GB aprox.

    I know that the correct solution would be to switch to 64 bit Windows, but I had to use the 32 bit version because of driver issues which I had about two years ago, and I guess that I'll have them again if I reformat and install the 64 bit version. Also, the machine is running quite stable, the only issue is the memory, so I'd like to use it as it is (as the apps are installed and configured).

    Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1GB, I'd just like it to swap out some seldomly used stuff, like GoogleCrashHandler.exe and stuff like that in order to have "more physical memory avaliable". Is that possible?

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver.

    Our situation: we have a web server accessed through sftp by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan:

    All people who need to upload files will have separate users. But all of those users will belong to two groups: uploaders, and webserver. Apache will belong to the group webserver.

    Directories
    Permission: 771
    Owner: user:uploaders
    Explanation: to access files in the folder, everybody needs to have execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.

    Files within the web-root
    Permission: 664
    Owner: user:uploaders
    Explanation: they will be uploaded and changed by different users, so both owner and group need to have w+r permissions. The webserver needs to only read files, so r permission only.

    Upload-directories
    Permission: 771
    Owner: user:webserver
    Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group owner to webserver, thus giving Apache sufficient privileges (and all uploaders also belong to this group and will have the same permissions), while safeguarding from "others" writing to this folder.

    Uploaded files
    Permission: 664
    Owner: user:webserver
    Explanation: after uploading, Apache might need to delete files, but this is no problem because they have w+r permission on the folder. So no need to make this file any more accessible than r access for the group.

    Being not an expert on file permissions, my question is whether or not this is the best possible policy for our situation? Any suggestions welcome.
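    Not an authoritative implementation, just a minimal sketch of applying the plan above in one pass, assuming the web root is /var/www, the upload directories are known, and the uploaders/webserver groups already exist (paths are placeholders; run as root). It mirrors what find + chgrp + chmod would do:

        import os
        import shutil

        WEB_ROOT = "/var/www"                          # placeholder web root
        UPLOAD_DIRS = {"/var/www/uploads"}             # placeholder upload directories

        for dirpath, dirnames, filenames in os.walk(WEB_ROOT):
            # Upload directories (and anything inside them) get group webserver,
            # everything else gets group uploaders, per the plan above.
            in_upload = any(dirpath == u or dirpath.startswith(u + os.sep) for u in UPLOAD_DIRS)
            group = "webserver" if in_upload else "uploaders"

            shutil.chown(dirpath, group=group)
            os.chmod(dirpath, 0o771)                   # directories: rwxrwx--x

            for name in filenames:
                path = os.path.join(dirpath, name)
                shutil.chown(path, group=group)
                os.chmod(path, 0o664)                  # files: rw-rw-r--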

  • WUBI restore using Windows system repair disc?

    - by davidp21747
    Can I use a Windows 7 system restore disc and system image disc (of Windows drive C) to restore WUBI if my computer crashes? I use Deja Dup for backups, and I have a CD of the WUBI .iso installation file. I don't want to run Ubuntu in a separate partition, because I find it convenient to run Ubuntu as a WUBI install so that I can use Windows drivers etc. Thanks for any advice. I have looked for an answer, but can't find one that answers this particular question.

  • Input multiple file names in windows open file dialog box

    - by goodiet
    Windows 7 allows you to select multiple files to open at once by using the Ctrl or Shift key. The "File name" input field at the bottom of the dialog box then auto-populates with something like:

        "aaa.txt" "bbb.txt" "ccc.txt" "ddd.txt"

    I have 14,000 files in a folder and I only need a range of files (approx 500). When I use the Shift key to select a range of files, the "File name" field auto-populates all 500 file names. However, Windows cuts me off at the 260th character when I try to paste a pre-generated string into the "File name" field. Is there a way to bypass the 260 character limit so it will accept my entire string with 500 file names?
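    One way around the dialog's character limit, sketched below under the assumption that the wanted files form a contiguous range in the folder's sort order (all paths and names are placeholders): stage just that range in a scratch folder, then point the Open dialog at the scratch folder and press Ctrl+A.

        import os
        import shutil

        SRC = r"C:\data\big_folder"                   # placeholder: folder with the 14,000 files
        DST = r"C:\data\subset"                       # placeholder: scratch folder for the range
        FIRST, LAST = "aaa0100.txt", "aaa0600.txt"    # placeholder: first and last names of the range

        os.makedirs(DST, exist_ok=True)

        for name in sorted(os.listdir(SRC)):
            if FIRST <= name <= LAST:                 # lexicographic range, like the Explorer sort order
                shutil.copy2(os.path.join(SRC, name), os.path.join(DST, name))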

  • Text File Cannot Be Read by Batch File

    - by Typowarrior
    I have a problem where a TXT file created by one batch file can't be read by another one; the output just shows "ECHO is off." Here is the first script that needs to be run:

        echo.
        wmic /output:huhu.txt Path CIM_DataFile WHERE Name='C:\\Users\\uJaNbaTus\\Desktop\\HyperTerminal.exe' Get Version
        echo.

    Then I create another .bat file with this code and run it:

        echo.
        setlocal ENABLEDELAYEDEXPANSION
        set revision=
        for /f "delims=" %%a in (huhu.txt) do (
            set line=%%a
            if "x!line:~0,8!"=="xVersion " (
                set revision=!line:~8!
            )
        )
        echo !revision!
        echo.
        endlocal

    When I run this .bat file, the result shows "ECHO is off." BTW, if I create another file using Notepad and use it in place of huhu.txt, I don't get any error and the output comes from the txt file.

  • Online FTP or file sharing service [on hold]

    - by Frede
    We need to share large files with clients, e.g. clients upload a large file, we modify it and later make it available for download. Up until now we've used FTP, but this has a number of drawbacks: a lot of management of files and setting up accounts etc. We are therefore considering online alternatives. Requirements:

    - Cheap 8-)
    - Easy to use, ideally just requiring a web browser, but also possible for power users to connect e.g. via FTPS/SFTP
    - No registration required for users to upload/download files. We ourselves of course need to be able to log in and view uploaded files and upload new files.
    - No per-user fee
    - High bandwidth. As files may be GBs in size, both upload and download speed cannot be too slow
    - Secure. Encryption during upload/download. No way for users to access other people's uploaded files; once a user has uploaded a file, they (or anyone else besides us) should not be able to access it. To download files, users get a link with a password; ideally the link expires after a set time.
    - No software installation

    We do NOT need any sync features, backup, versioning etc. Just a quick, easy, secure way for us to share files with our clients. Services like JustCloud, DriveHQ etc seem bloated and "too much" for what we need. What other alternatives exist? Thanks!

  • /dev/null file became regular file

    - by user197719
    On our production server, /dev/null suddenly became a regular file. Because of this the sshd service stopped and we are not able to log in to the server. We also tried the steps below to turn it back into a character device file:

        rm -rf /dev/null
        mknod /dev/null c 1 3

    But as soon as we run the rm command, /dev/null is re-created as a regular file before mknod can run. We can't figure out how this is happening and which component is creating this file. So until we solve this issue we are unable to re-create /dev/null as a character device file.
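    As a rough diagnostic sketch (not a fix): the usual cause is that some process redirects output to /dev/null the instant it disappears, which re-creates it as a regular file. One crude way to narrow down when that happens is to poll its type and log the moment it flips; the audit subsystem (e.g. auditctl -w /dev/null -p wa) can then identify the exact process.

        import os
        import stat
        import time

        # Poll /dev/null and report the moment it stops being a character device,
        # to help correlate with cron jobs or other scheduled tasks.
        while True:
            mode = os.stat("/dev/null").st_mode
            if not stat.S_ISCHR(mode):
                print(time.strftime("%Y-%m-%d %H:%M:%S"), "/dev/null is no longer a character device")
                break
            time.sleep(1)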

  • Windows Batch Script to Replace Environment Variables in a File

    - by skb
    Hi. I want to write a batch file that will take the contents of a file, and replace any environment variable references inside the file with the actual environment variable values. Is this possible? Basically, if a file had this:

        %PROGRAM FILES%\Microsoft SQL Server\

    then I would want the file contents to become:

        C:\Program Files\Microsoft SQL Server\

    after the batch script ran. This is just one example, but I want ALL environment variables to be expanded. Thanks in advance for any help!
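    The question asks for a batch solution, but as a sketch of the underlying idea (file names here are only illustrative), Python's os.path.expandvars does this kind of substitution directly; on Windows it also understands %NAME% references:

        import os

        src = "template.txt"        # hypothetical input file containing %VAR% references
        dst = "expanded.txt"        # output file with the variables substituted

        with open(src) as fin, open(dst, "w") as fout:
            for line in fin:
                # On Windows, os.path.expandvars replaces %NAME% (and $NAME) with the value
                # from the environment; unknown names are left untouched.
                fout.write(os.path.expandvars(line))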

  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    Ok, that title is going to be a little bit confusing. Let me try to explain it a little bit better. I am building a logging program. The program will have 3 main states:

    1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
    2. Write to a buffer file, ignoring the time (record all data).
    3. Rename the entire buffer file, and start a new one with the past 10 minutes of data (and change state to 1).

    Now, the use case is this. I have been experiencing some network bottlenecks from time to time in our network, so I want to build a system to record TCP traffic when it detects the bottleneck (detection via Nagios). However, by the time it detects the bottlenecking, most of the useful data has already been transmitted.

    So, what I'd like is to have a daemon that runs something like dumpcap all the time. In normal mode, it'll only keep the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). But when Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file.

    Now, the problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones if in mode 1, but that seems a bit dirty to me (especially when it comes to figuring out when the alert happened in the file). Ideally, the file that was saved should be such that the alert is always at the 10:00 mark in the file. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point.

    Any ideas? Should I just do a rotating file system and combine them into 1 at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly so that there is no need for any post-processing?

    Thanks. Oh, and the language doesn't matter as much at this stage (I'm leaning towards Python, but have no objection to any other language. It's less of an issue than the overall design)...
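    For what it's worth, here is a minimal Python sketch of the chunked variant described above (the one the question calls "a bit dirty"), using one file per minute so the retained window is always roughly the last ten minutes. The spool directory and chunk size are arbitrary choices, and with this approach the alert lands near the end of the saved window rather than exactly at the 10:00 mark.

        import os
        import time

        KEEP_CHUNKS = 10                         # ten one-minute chunks of history in normal mode
        SPOOL = "/var/spool/capturebuf"          # hypothetical working directory

        os.makedirs(SPOOL, exist_ok=True)

        def chunk_path(ts):
            # One chunk file per minute, named so lexicographic order equals time order.
            return os.path.join(SPOOL, time.strftime("chunk-%Y%m%d-%H%M", time.localtime(ts)))

        def write_record(record, keep_all=False):
            """Append a record to the current minute's chunk; prune old chunks unless keep_all."""
            with open(chunk_path(time.time()), "a") as f:
                f.write(record + "\n")
            if not keep_all:                     # normal (round-robin) mode
                chunks = sorted(os.listdir(SPOOL))
                for old in chunks[:-KEEP_CHUNKS]:
                    os.remove(os.path.join(SPOOL, old))

        def flush_to(save_path):
            """On recovery, concatenate all chunks, oldest first, into one saved file."""
            with open(save_path, "w") as out:
                for name in sorted(os.listdir(SPOOL)):
                    with open(os.path.join(SPOOL, name)) as f:
                        out.write(f.read())
                    os.remove(os.path.join(SPOOL, name))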

  • \n not working in my fwrite()

    - by brett
    Not sure what could be the problem. I'm dumping data from an array $theArray into theFile.txt, each array item on a separate line.

        $file = fopen("theFile.txt", "w");
        foreach ($theArray as $arrayItem){
            fwrite($file, $arrayItem . '\n');
        }
        fclose($file);

    Problem is when I open theFile.txt, I see the \n being outputted literally. Also if I try to programmatically read the file line by line (just in case lines are there), it shows them as 1 line meaning \n are really not having their desired effect.

  • Android ACTION_SEND Attached File

    - by Sean
    When you attach a file to an e-mail using the ACTION_SEND intent (with the extra EXTRA_STREAM) does the e-mail app copy that attached file to its own location? My app creates a file and attaches it to an email, but this can happen many times and I would like to be able to delete this file when it is no longer needed (so it doesn't flood the user's storage with junk data). Is the file safe to delete after the e-mail intent has started?
