Search Results

Search found 68654 results on 2747 pages for 'file corruption'.

Page 27 of 2747

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, running the Samba 3.5.6 service. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows prompts that some property data will be lost in the transfer. I have never seen this before when transferring to Samba boxes I have built myself (as opposed to this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question (Date Taken? Exposure? Flash Fired? etc.) instead of permanently losing whatever they contain. Or maybe I've just never encountered this before; I'm really not sure. I tried adding "ea support = yes" and "store dos attributes = yes" to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0), as Samba requires. Any ideas would be greatly appreciated. Thank you!

    Samba config:

        [global]
        workgroup = WORKGROUP
        server string = %h server
        include = /etc/samba/dhcp.conf
        dns proxy = no
        log level = 2
        syslog = 2
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog only = yes
        panic action = /usr/share/samba/panic-action %d
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = no
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        guest account = nobody
        load printers = no
        disable spoolss = yes
        printing = bsd
        printcap name = /dev/null
        unix extensions = yes
        wide links = no
        create mask = 0777
        directory mask = 0777
        use sendfile = no
        null passwords = no
        local master = yes
        time server = yes
        wins support = yes
        ea support = yes
        store dos attributes = yes

    Note: I found this related question, but it explains the loss as a result of transferring from NTFS to FAT32.
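    A quick way to check whether extended attributes actually survive on the NAS side is to exercise them directly on the ext4 mount. Below is a minimal sketch, assuming Python 3 on the Linux NAS; the share path is hypothetical.

        # Minimal sketch: confirm user xattrs round-trip on the ext4 share.
        # Assumes Python 3 on the Linux NAS; the path below is hypothetical.
        import os

        path = "/export/share/xattr-test.txt"
        with open(path, "w") as f:
            f.write("xattr test\n")

        os.setxattr(path, "user.test", b"hello")
        print(os.getxattr(path, "user.test"))   # expect b'hello' if user_xattr works
        print(os.listxattr(path))               # any user.* attributes Samba stored show up here

    If the xattr round-trip works but Windows still warns, the properties in question may be NTFS alternate data streams, which a stock Samba share typically does not store unless a streams VFS module (such as vfs_streams_xattr) is enabled.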

    Read the article

  • Linux: change/edit a file without affecting its date

    - by yael
    I want to edit a file on my Linux box, for example:

        ls -ltr /etc/some_file
        -rw-r--r-- 1 root root 188 Jul  1  2010 sysstat

        echo "Server101_IP=187.0.98.4" >> /etc/some_file

    Afterwards I expect to see the same date:

        ls -ltr /etc/some_file
        -rw-r--r-- 1 root root 188 Jul  1  2010 sysstat

    In other words, the date & time of the file must not change! I just want to edit the file, and I am wondering how to change it without affecting its date & time - is it possible?
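    A common way to get this effect is to record the file's timestamps before the edit and put them back afterwards. Here is a minimal sketch, assuming Python 3 on the Linux box; note that the inode change time (ctime) will still be updated, since it cannot be set from user space. The same idea works from the shell with touch -r or touch -d.

        # Record atime/mtime, append a line, then restore the old timestamps.
        import os

        path = "/etc/some_file"
        st = os.stat(path)                          # remember the current timestamps

        with open(path, "a") as f:
            f.write("Server101_IP=187.0.98.4\n")

        os.utime(path, (st.st_atime, st.st_mtime))  # restore atime and mtime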

    Read the article

  • unable to copy file to folder, permission denied without explanation

    - by ValekHalfHeart
    Recently Norton Internet Security deleted ml.exe (an assembler I use to program in MASM32) off my computer, thinking that one of the programs I had written with it was a virus (it most certainly was not). Fortunately, I had a copy of ml.exe backed up on an external hard drive, and tried to copy it over to my computer. The old ml.exe was located in C:\masm32\bin, so I tried to copy the new one to that location. After disabling Norton (which had opened the folder and was preventing me from accessing it), I am still unable to copy the new file to C:\masm32\bin. When I try, Windows announces that I need Administrator permission to copy the file. Since I'm an admin, I figured this wouldn't be a problem, although it was unexpected, as I have never had to provide administrator permission to access this folder before. However, instead of prompting me to enter my password, Windows simply refuses to copy the file - I repeat, I was not asked to provide a password. It simply says that I do not have permission. Does anyone know what's happening and how to fix it? Is Norton still causing problems, or is it something else?

    Read the article

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up

    I've been a programmer for quite some time now but I'm still a bit fuzzy on deep, internal stuff. Now, I am well aware that it's not a good idea to either:

    - kill -9 a process (bad)
    - spontaneously pull the power plug on a running computer or server (worse)

    However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out.

    My Concern

    Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt - who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through Disk Utility. It will report no problems, but I'm still concerned. What if some configuration file got screwed up? Or worse, what if a binary file somewhere is corrupt? Or a script file? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or what if valuable data is already lost?

    My Hope

    My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is that I've had to repair some MySQL tables, but I don't seem to have lost any data. But if my worries are not unfounded, and real damage could happen in either scenario 1 or 2, then my hope is that there is a way to detect it and protect against it.

    My Question(s)

    Could it be that modern operating systems are designed to ensure that nothing is lost in these scenarios? Could it be that modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it? I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to disk, so any very recent data that was supposed to be written (say, a few seconds before the power pull) might be lost. But what about beyond that? Can this very issue of a few seconds of data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage?

    What Would Help Me Most

    - Detailed descriptions of what goes on internally when you either kill -9 a process or pull the power on the whole system (it seems instant, but can someone slow it down for me?).
    - Explanations of all the things that could go wrong in these scenarios, along with (rough, of course) probabilities (i.e., this is very unlikely, but this is likely).
    - Descriptions of measures in place in modern hardware, operating systems, and software to prevent damage or corruption when these scenarios occur (to comfort me).
    - Instructions for what to do after a kill -9 or a power pull, beyond "verify the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive.
    - Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated.

    Thanks so much!
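    On the unflushed-data point: the usual defence at the application level is to flush and fsync anything you cannot afford to lose. A minimal sketch in Python (any language with an fsync wrapper works the same way):

        import os

        with open("important.log", "a") as f:
            f.write("state I cannot afford to lose\n")
            f.flush()             # push Python's buffer into the kernel
            os.fsync(f.fileno())  # ask the kernel to push the data to the disk

        # After fsync returns, a kill -9 can no longer lose this write; a power
        # pull additionally depends on the drive honouring the flush (write
        # caches and barriers), which is a hardware/OS property, not a code one.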

    Read the article

  • File sharing problem on Windows Server 2003 x64

    - by O. Askari
    Hi. We have a customer that hosts our .NET application server on Windows Server 2003 x64. The problem is that its file sharing gets completely disabled after about 10-30 minutes. The only way to re-enable it is to restart the server, but the same thing happens again after each restart. This server runs SQL Server 2005 Enterprise, .NET Framework 3.5 and our .NET-based application server. We hadn't had such a problem with any other customer before, so we asked them to prepare another server to deploy our application on. We installed our application server on the new machine and let SQL Server remain on the old one. Unfortunately the same problem happened on the new machine too. Now the old machine works only as the database server and the new one works as the application server, but both of them have the same file-sharing problem. File sharing on the two machines doesn't get disabled at the same time, but it eventually happens on both. I wonder why this is happening and how to find the cause of the problem. Any suggestion or solution is much appreciated.

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a motherboard with 24 GB of RAM. Of those 24 GB, 20 GB are assigned as a RAM disk via ASRock XFastRAM, with drive letter X: assigned to it. On X:\ I'm storing the temporary files folder as well as pagefile.sys. Pagefile.sys is 6 GB in size. X:\ usually has around 14 GB free, so the temporary files are negligible; it's mostly the browsers storing their caches on there.

    Now my issue is that Firefox is crashing a lot on me. No error message pops up, but I know it is because it runs out of memory. I could live with that, but now that I have switched from Eclipse to Android Studio, I know I'm in trouble, because Java isn't able to allocate memory, and Android Studio, together with the Java instances it launches, is quite a memory hog.

    So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out to the paging file. While my applications are crashing (Firefox) or not starting (Java VMs), the paging file constantly sits at only around 15% usage (checked with the Performance Monitor). 15% equals roughly 1 GB.

    I know that the correct solution would be to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues I had about two years ago, and I guess I'll have them again if I reformat and install the 64-bit version. Also, the machine is running quite stably; the only issue is the memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes requires more than 1 GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe and the like, in order to have more physical memory available. Is that possible?

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our web server. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan:

    All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.

    Directories
    - Permission: 771
    - Owner: user:uploaders
    - Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.

    Files within the web root
    - Permission: 664
    - Owner: user:uploaders
    - Explanation: they will be uploaded and changed by different users, so both owner and group need r+w permission. The web server only needs to read files, so r permission only.

    Upload directories
    - Permission: 771
    - Owner: user:webserver
    - Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.

    Uploaded files
    - Permission: 664
    - Owner: user:webserver
    - Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there is no need to make the file itself any more accessible than r access for the group.

    Being not an expert on file permissions, my question is whether or not this is the best possible policy for our situation? Any suggestions welcome.
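    For what it's worth, the whole scheme can be applied (or re-applied after a bulk upload) with a short script. This is only a sketch, assuming Python 3 on the server; the path and group name mirror the plan above and are otherwise placeholders.

        import os
        import shutil

        WEB_ROOT = "/var/www/site"   # placeholder path
        GROUP = "uploaders"          # per the plan; "webserver" for upload dirs
        DIR_MODE, FILE_MODE = 0o771, 0o664

        for dirpath, dirnames, filenames in os.walk(WEB_ROOT):
            shutil.chown(dirpath, group=GROUP)
            os.chmod(dirpath, DIR_MODE)
            for name in filenames:
                path = os.path.join(dirpath, name)
                shutil.chown(path, group=GROUP)
                os.chmod(path, FILE_MODE)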

    Read the article

  • Input multiple file names in windows open file dialog box

    - by goodiet
    Windows 7 allows you to select multiple files to open at once by using the Ctrl or Shift key. The "File Name" input field at the bottom of the dialog box then auto-populates with something like: "aaa.txt" "bbb.txt" "ccc.txt" "ddd.txt". I have 14,000 files in a folder and I only need a range of them (approx. 500). When I use the Shift key to select a range of files, the "File Name" field auto-populates with all 500 file names. However, Windows cuts me off at the 260th character when I try to paste a pre-generated string into the "File Name" field. Is there a way to bypass the 260-character limit so that it accepts my entire string of 500 file names?
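    If the dialog's limit can't be raised, one workaround is to sidestep it and copy the alphabetical range of files out to a smaller folder first, then open that folder normally. A minimal sketch, assuming Python 3; the folder names and range boundaries are made up.

        import os
        import shutil

        src = r"C:\data\big_folder"   # folder with the 14,000 files
        dst = r"C:\data\selection"    # scratch folder for the ~500 you need
        os.makedirs(dst, exist_ok=True)

        for name in sorted(os.listdir(src)):
            if "file_0100.txt" <= name <= "file_0599.txt":   # the shift-select range
                shutil.copy2(os.path.join(src, name), dst)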

    Read the article

  • Text file cannot be read by batch file

    - by Typowarrior
    I have a problem where a TXT file created by one batch file can't be read by another; the result just comes out as ECHO is off. Here is the first piece of code that needs to run:

        echo.
        wmic /output:huhu.txt Path CIM_DataFile WHERE Name='C:\\Users\\uJaNbaTus\\Desktop\\HyperTerminal.exe' Get Version
        echo.

    Then I create another .bat file with this code and run it:

        echo.
        setlocal ENABLEDELAYEDEXPANSION
        set revision=
        for /f "delims=" %%a in (huhu.txt) do (
          set line=%%a
          if "x!line:~0,8!"=="xVersion " (
            set revision=!line:~8!
          )
        )
        echo !revision!
        echo.
        endlocal

    When I run this .bat file, the result shows ECHO is off. By the way, if I create another file using Notepad and substitute it for huhu.txt, I don't get any error and the output comes from the text file.
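    One thing worth ruling out: wmic's /output file is normally written as UTF-16 text, which a plain for /f loop often fails to parse, whereas a file typed into Notepad is ANSI - that would match the symptoms here. As a cross-check (this is an assumption, not a confirmed diagnosis), the file can be read with an explicit encoding, e.g. in Python 3:

        # Read huhu.txt assuming it is the UTF-16 text wmic usually produces,
        # and print each line so the real content and encoding become visible.
        with open("huhu.txt", encoding="utf-16") as f:
            for line in f:
                print(repr(line))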

    Read the article

  • Online FTP or file sharing service [on hold]

    - by Frede
    We need to share large files with clients; e.g. clients upload a large file, we modify it and later make it available for download. Up until now we've used FTP, but this has a number of drawbacks: a lot of management of files, setting up accounts, etc. We are therefore considering online alternatives. Requirements:

    - Cheap 8-)
    - Easy to use, ideally just requiring a web browser, but also possible for power users to connect e.g. via FTPS/SFTP
    - No registration required for users to upload/download files. We ourselves of course need to be able to log in, view uploaded files and upload new files.
    - No per-user fee
    - High bandwidth. As files may be GBs in size, neither upload nor download speed can be too slow
    - Secure. Encryption during upload/download. No way for users to access uploaded files: once a user has uploaded a file, they (or anyone else besides us) should not be able to access it. To download files, users get a link with a password; ideally the link expires after a set time.
    - No software installation

    We do NOT need any sync features, backup, versioning, etc. Just a quick, easy, secure way for us to share files with our clients. Services like JustCloud, DriveHQ etc. seem bloated and "too much" for what we need. What other alternatives exist? Thanks!

    Read the article

  • /dev/null file became regular file

    - by user197719
    On our production server, /dev/null suddenly became a regular file; because of this the sshd service stopped and we are not able to log in to the server. We tried the steps below to turn it back into a character device file:

        rm -rf /dev/null
        mknod /dev/null c 1 3

    But as soon as we run the rm command, /dev/null is re-created as a regular file before mknod can run. We can't figure out how this is happening or which component is creating the file. So until we solve this issue we are unable to recreate /dev/null as a character device file.
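    As a stopgap (it does not answer which component keeps recreating the file), the remove-and-recreate can be done from a single process to shrink the race window. A minimal sketch, assuming Python 3 running as root on the server:

        import os
        import stat

        path = "/dev/null"
        if os.path.exists(path):
            os.remove(path)
        # character device, major 1, minor 3, mode rw-rw-rw-
        os.mknod(path, mode=stat.S_IFCHR | 0o666, device=os.makedev(1, 3))

    Finding the actual culprit usually needs a file watch (for example via the audit subsystem) on /dev/null, so the offending process is named the next time it touches the file.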

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, Yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it and imported the SQL file and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done it before without issue, however this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as default for everything.) For example with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but then when I try to describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (The MySQL Administrator lists the databases, but does not even list the tables, it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard-drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function since it complains that the table does not exist, and I can’t dump them because again, it complains that the tables don’t exist. Like I’ve said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex-editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.

    Read the article

  • Windows Batch Script to Replace Environment Variables in a File

    - by skb
    Hi. I want to write a batch file that will take the contents of a file and replace any environment variable references inside the file with the actual environment variable values. Is this possible? Basically, if a file had this:

        %PROGRAM FILES%\Microsoft SQL Server\

    then I would want the file contents to become:

        C:\Program Files\Microsoft SQL Server\

    after the batch script ran. This is just one example, but I want ALL environment variables to be expanded. Thanks in advance for any help!
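    For comparison, the same expansion is a one-liner outside of batch. A minimal sketch in Python 3, where os.path.expandvars understands %VAR% references on Windows and leaves unknown variables untouched (the file names here are placeholders):

        import os

        with open("template.txt") as src, open("expanded.txt", "w") as dst:
            for line in src:
                dst.write(os.path.expandvars(line))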

    Read the article

  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    OK, that title is going to be a little bit confusing. Let me try to explain it a little better. I am building a logging program. The program will have three main states:

    1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
    2. Write to a buffer file, ignoring the time (record all data).
    3. Rename the entire buffer file, and start a new one with the past 10 minutes of data (and change state to 1).

    Now, the use case is this. I have been experiencing some network bottlenecks from time to time in our network, so I want to build a system to record TCP traffic when it detects the bottleneck (detection via Nagios). However, by the time it detects the bottlenecking, most of the useful data has already been transmitted.

    So what I'd like is to have a daemon that runs something like dumpcap all the time. In normal mode, it'll only keep the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). But when Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file.

    Now, the problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones when in mode 1, but that seems a bit dirty to me (especially when it comes to figuring out when the alert happened in the file). Ideally, the file that was saved should be such that the alert is always at the 10:00 mark in the file. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point.

    Any ideas? Should I just do a rotating file scheme and combine the files into one at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly so that there is no need for any post-processing? Thanks. Oh, and the language doesn't matter as much at this stage (I'm leaning towards Python, but have no objection to any other language; it's less of an issue than the overall design)...
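    One way to keep the round-robin clean is to rotate small fixed-length chunk files and only prune them while no alert is active; on recovery the surviving chunks are concatenated into the save file, so the alert lands a predictable distance from the start. A rough sketch in Python - the chunk length, chunk count and paths are arbitrary choices for illustration, and records stands in for whatever dumpcap-like source feeds the logger:

        import os
        from collections import deque

        CHUNK_SECONDS = 60      # 10 one-minute chunks ~= the 10-minute window
        MAX_CHUNKS = 10
        SPOOL = "spool"

        def capture(records, alert_active):
            """records: iterable of (timestamp, bytes); alert_active: callable -> bool."""
            os.makedirs(SPOOL, exist_ok=True)
            chunks = deque()
            current, started = None, 0.0
            for ts, data in records:
                if current is None or ts - started >= CHUNK_SECONDS:
                    if current:
                        current.close()
                    started = ts
                    path = os.path.join(SPOOL, "chunk-%d.bin" % int(ts))
                    current = open(path, "wb")
                    chunks.append(path)
                    # normal mode: drop chunks that fell out of the 10-minute window
                    while not alert_active() and len(chunks) > MAX_CHUNKS:
                        os.remove(chunks.popleft())
                current.write(data)
            if current:
                current.close()
            return list(chunks)   # on recovery, concatenate these into the save file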

    Read the article

  • \n not working in my fwrite()

    - by brett
    Not sure what the problem could be. I'm dumping data from an array $theArray into theFile.txt, each array item on a separate line.

        $file = fopen("theFile.txt", "w");
        foreach ($theArray as $arrayItem) {
            fwrite($file, $arrayItem . '\n');
        }
        fclose($file);

    The problem is that when I open theFile.txt, I see the \n being output literally. Also, if I try to programmatically read the file line by line (just in case the lines are there), it shows them as one line, meaning the \n are really not having their desired effect.

    Read the article

  • Android ACTION_SEND Attached File

    - by Sean
    When you attach a file to an e-mail using the ACTION_SEND intent (with the EXTRA_STREAM extra), does the e-mail app copy that attached file to its own location? My app creates a file and attaches it to an e-mail, but this can happen many times, and I would like to be able to delete the file when it is no longer needed (so it doesn't flood the user's storage with junk data). Is the file safe to delete after the e-mail intent has started?

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly... a lot of junk gets appended to the output. How do I fix this?

        // wmfParser.cpp : Defines the entry point for the console application.
        //
        #include "stdafx.h"
        #include "wmfParser.h"
        #include <cstring>

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #endif

        // The one and only application object
        CWinApp theApp;

        using namespace std;

        int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
        {
            int nRetCode = 0;

            // initialize MFC and print an error on failure
            if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
            {
                // TODO: change error code to suit your needs
                _tprintf(_T("Fatal Error: MFC initialization failed\n"));
                nRetCode = 1;
            }
            else
            {
                // TODO: code your application's behavior here.
                CFile file;
                CFileException exp;
                if (!file.Open(_T("c:\\sample.txt"), CFile::modeRead, &exp))
                {
                    exp.ReportError();
                    cout << '\n';
                    cout << "Aborting...";
                    system("pause");
                    return 0;
                }

                ULONGLONG dwLength = file.GetLength();
                cout << "Length of file to read = " << dwLength << '\n';

                /*
                BYTE* buffer;
                buffer = (BYTE*)calloc(dwLength, sizeof(BYTE));
                file.Read(buffer, 25);
                char* str = (char*)buffer;
                cout << "length of string : " << strlen(str) << '\n';
                cout << "string from file: " << str << '\n';
                */

                char str[100];
                file.Read(str, sizeof(str));
                cout << "Data : " << str << '\n';

                file.Close();
                cout << "File was closed\n";
                //AfxMessageBox(_T("This is a test message box"));
                system("pause");
            }

            return nRetCode;
        }

    Read the article

  • How do I get the file size of a large (> 4 GB) file?

    - by endeavormac
    How can I get the size of a file in C when the file is larger than 4 GB? ftell returns a 4-byte signed long, limiting it to 2 GB. stat has a field of type off_t, which is also 4 bytes (not sure about the sign), so at most it can tell me the size of a 4 GB file. What if the file is larger than 4 GB?

    Read the article

  • Visual C#: opening my own file extension

    - by ecross
    Hello everybody. First: I'm Dutch, so sorry if my English is not so good. I have made my own file type (.ddd) and I made a simple program to open this file type, but when I click on a .ddd file (on my desktop) only my program opens; the file is not automatically opened inside my program. How do I make my program open the file directly when it starts?

    Read the article

  • How can the second application read a file only when the first application is not modifying it?

    - by soField
    I have two applications; the first is a bash script and the second is a Java program. One of them periodically deletes and recreates a specific file (the first); the other also periodically reads this file and processes it in its own logic (the second). How can we make sure the second application reads the file only when the first application is not modifying it? My aim is to force the second app to read the file only when its content has been fully written. How can I achieve this goal?
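    The usual pattern for this is to have the writer build the file under a temporary name and rename it into place once it is complete; on the same filesystem the rename is atomic, so the reader only ever sees an old complete file or a new complete file. A minimal sketch in Python for illustration (the same idea works from bash with mv, and in Java with Files.move and ATOMIC_MOVE):

        import os

        def publish(path, data):
            tmp = path + ".tmp"
            with open(tmp, "w") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # make sure the bytes are on disk first
            os.replace(tmp, path)      # atomic on POSIX; readers never see a partial file

        publish("shared.dat", "complete record\n")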

    Read the article

  • uWSGI log file...permission denied to read file

    - by bkev
    I have a server running Django/Nginx/uWSGI with uWSGI in emperor mode, and the error log for it (the vassal-level error log, not the emperor-level log) shows a permissions error every time it spawns a new worker, like so:

        Tue Jun 26 19:34:55 2012 - Respawned uWSGI worker 2 (new pid: 9334)
        Error opening file for reading: Permission denied

    The problem is, I don't know which file it's having trouble opening; it's not the log file, obviously, since I'm looking at it and uWSGI is writing to it without issue. Is there any way to find out? I'm running the apt-get version of uWSGI 1.0.3-debian through Upstart on Ubuntu 12.04. The site is working successfully, aside from what seems like a memory leak... hence my looking at the log file.

    My Upstart conf file:

        description "uWSGI"
        start on runlevel [2345]
        stop on runlevel [06]
        respawn

        env UWSGI=/usr/bin/uwsgi
        env LOGTO=/var/log/uwsgi/emperor.log

        exec $UWSGI \
          --master \
          --emperor /etc/uwsgi/vassals \
          --die-on-term \
          --auto-procname \
          --no-orphans \
          --logto $LOGTO \
          --logdate

    My vassal ini file:

        [uwsgi]
        # Variables
        base = /srv/env/mysiteenv

        # Generic Config
        uid = uwsgi
        gid = uwsgi
        socket = 127.0.0.1:5050
        master = true
        processes = 2
        reload-on-as = 128
        harakiri = 60
        harakiri-verbose = true
        auto-procname = true
        plugins = http,python
        cache = 2000
        home = %(base)
        pythonpath = %(base)/mysite
        module = wsgi
        logto = /srv/log/mysite/uwsgi_error.log
        logdate = true

    Read the article

  • Trying to attach a database file but can't browse the folder which contains the file

    - by Chadworthington
    I am trying to attach database files (*.mdf, *.ldf) that I placed in the same folder as all my other SQL Server databases. I begin the attach by attempting to browse to the folder which contains the db files as well as all of my active database files: I select "Attach Database" and click the "Add" button to add a database to the list of databases to attach. When I do so, I get this error:

        TITLE: Locate Database Files - BESI-CHAD
        ------------------------------
        D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA
        Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists.
        If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box.
        ------------------------------
        BUTTONS: OK
        ------------------------------

    The path is correct and, as I mentioned, it contains all of my other database files, so I wouldn't think that permissions should be an issue, but here is what I see for that folder (screenshot not shown). Any idea why I cannot browse to that folder and attach the db files that I have placed there?

    Read the article
