Search Results

Search found 70026 results on 2802 pages for 'file recovery'.

  • Saving a file in a CSV type in Excel always removes the BOM

    - by rickp
    I've been trying to find a reasonable solution/explanation (unsuccessfully) for why Excel defaults to removing the BOM when saving a file as the CSV type. Please forgive me if you find this a duplicate of this question. That question handles reading CSV files with non-ASCII encoding, but it doesn't cover saving the file back out (which is where the biggest issue lies). Here is my current situation (which I gather is common among localized software dealing with Unicode characters and the CSV format):

    1. We export data to a CSV format using UTF-16LE, ensuring the BOM is set (0xFFFE). We validate after the file is generated with a hex editor to ensure it was set correctly.
    2. We open the file in Excel (for this example we're exporting Japanese characters) and witness that Excel loads the file with the correct encoding.
    3. Attempting to save this file prompts you with a warning message indicating that the file may contain features that may not be compatible with Unicode encoding, but asks if you'd like to save anyway.
    4. If you use the Save As dialog, it will immediately ask you to save the file as "Unicode Text" rather than CSV. If you select the "CSV" extension and save the file, it removes the BOM (obviously along with all the Japanese characters).

    Why does this happen? Is there a solution to this problem, or is this a known 'bug'/limitation of Excel? Additionally (as a side issue), it appears that Excel, when loading UTF-16LE encoded CSV files, only uses TAB delimiters. Again, is this another known 'bug'/limitation of Excel?
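
    For what it's worth, here is a minimal sketch of the export step described above (the file name and sample row are assumptions, not the original export code):

    using System.IO;
    using System.Text;

    class ExportDemo
    {
        static void Main()
        {
            // UTF-16LE encoding that emits a BOM (bytes 0xFF 0xFE on disk)
            UnicodeEncoding utf16le = new UnicodeEncoding(false, true);
            // Tab-delimited row, since Excel expects tabs for UTF-16 text (per the question)
            File.WriteAllText("export.csv", "名前\t住所\r\n", utf16le);
        }
    }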

    Read the article

  • xcopy Not Suppressing File/Directory Query

    - by Daniel Bingham
    Hey folks, I'm attempting to use xcopy to copy a file from one machine to another on our network as part of a Java program. I'm calling xcopy like this:

    xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /e /y /i

    Because I'm calling it from within Java, I need all the prompts to be suppressed. For the most part, /i and /y have done exactly that. However, for this one file /i fails and I get the file-or-directory prompt. The result is that it hangs the entire program. I've also tried calling it with /s /t /q appended to the existing options, to no avail. Why isn't /i working to suppress the File or Directory prompt? Is there an order I need to call the options in? Is there something else I need to do? EDIT: I should mention, the file is a text file with a single line of text. It does not have an extension. It looks like this: FILE-NAME
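
    One workaround that is often suggested for this prompt (a hedged, untested sketch; it assumes an English locale, where F answers "file"):

    rem Feed the answer to the file-or-directory prompt so xcopy never blocks
    echo F | xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /e /y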

    Read the article

  • Information on the BMPP File Extension/Format

    - by Angel Brighteyes
    I am looking for information on the file type BMPp. Namely, I need an application that can create this file type, preferably open source or free. Wikipedia says in its BMP File Format article that 'BMPp' is a "type code", which is the "mechanism used by pre-OSX Macs ... to denote a file's format" (look in the little info-box of general information under "Type code"). Continuing my research, I found an old 2009 archived mailing list post, "Re: Incorrect png file type 'PNG'", that talks about something related to another problem a developer is having. In the response the author talks about there being variant file types, and lists BMPp as being linked to an old version of GraphicConverter. The company Lemkesoft sells GraphicConverter, which I am not willing to purchase. I can't imagine that the only program in existence that makes a BMPp file is that program. There has got to be another way to make that file type, other than creating a BMP file and just renaming it to BMPp (unless of course it is really that easy)? This is the first time I've run into this file format, and it took a bit of digging on Google, Bing, and Wikipedia to find the information that I've posted here. Any further help would be appreciated.

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates. A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We are thinking of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
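
    A rough sketch of the kind of timing demo described above (the schema, file path, and connection string are assumptions, not real infrastructure):

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;
    using System.IO;

    class FlatFileVsSql
    {
        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            // Flat file: a full scan is the only way to find one record
            string hit = null;
            foreach (string line in File.ReadLines(@"\\server\share\records.csv"))
            {
                if (line.StartsWith("4242,")) { hit = line; break; }
            }
            sw.Stop();
            Console.WriteLine("Flat file scan: {0} ms ({1})", sw.ElapsedMilliseconds,
                hit == null ? "no hit" : "hit");

            sw.Restart();
            // SQL Server: an indexed seek, concurrency-safe and transactional
            using (SqlConnection con = new SqlConnection("Server=dbserver;Database=Demo;Integrated Security=true"))
            using (SqlCommand cmd = new SqlCommand("SELECT * FROM Records WHERE Id = @id", con))
            {
                cmd.Parameters.AddWithValue("@id", 4242);
                con.Open();
                using (SqlDataReader r = cmd.ExecuteReader()) { r.Read(); }
            }
            sw.Stop();
            Console.WriteLine("Indexed SQL lookup: {0} ms", sw.ElapsedMilliseconds);
        }
    }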

    Read the article

  • File upload fails when user is authenticated. Using IIS7 Integrated mode.

    - by Nikkelmann
    These are the user identities my website tells me it uses:

    Logged on: NT AUTHORITY\NETWORK SERVICE (cannot write any files at all)
    Not logged on: WSW32\IUSR_77 (can write files to any folder)

    I have an ASP.NET 4.0 website on a shared-hosting IIS7 web server running in Integrated mode with 32-bit application support enabled, and MSSQL 2008. Using Classic mode is not an option, since I need to secure some static files and I use Routing. In my web.config file I have set the following:

    <system.webServer>
        <modules runAllManagedModulesForAllRequests="true" />
    </system.webServer>

    My hosting company says that impersonation is enabled by default at the machine level, so this is not something I can change. I asked their support and they referred me to this article: http://www.codinghub.net/2010/08/differences-between-integrated-mode-and.html citing this part:

    "Different windows identity in Forms authentication: When Forms Authentication is used by an application and anonymous access is allowed, the Integrated mode identity differs from the Classic mode identity in the following ways:

    * ServerVariables["LOGON_USER"] is filled.
    * Request.LogonUserIdentity uses the credentials of the [NT AUTHORITY\NETWORK SERVICE] account instead of the [NT AUTHORITY\INTERNET USER] account.

    This behavior occurs because authentication is performed in a single stage in Integrated mode. Conversely, in Classic mode, authentication occurs first with IIS 7.0 using anonymous access, and then with ASP.NET using Forms authentication. Thus, the result of the authentication is always a single user: the Forms authentication user. AUTH_USER/LOGON_USER returns this same user because the Forms authentication user credentials are synchronized between IIS 7.0 and ASP.NET. A side effect is that LOGON_USER, HttpRequest.LogonUserIdentity, and impersonation no longer can access the Anonymous user credentials that IIS 7.0 would have authenticated by using Classic mode."

    How do I set up my website so that it uses the proper identity with the proper permissions? I've looked high and low for any answers regarding this specific problem, but found nil so far... I hope you can help!

    Read the article

  • Linux: Find all symlinks of a given 'original' file? (reverse 'readlink')

    - by sdaau
    Hi all, consider the following command line snippet:

    $ cd /tmp/
    $ mkdir dirA
    $ mkdir dirB
    $ echo "the contents of the 'original' file" > orig.file
    $ ls -la orig.file
    -rw-r--r-- 1 $USER $USER 36 2010-12-26 00:57 orig.file
    # create symlinks in dirA and dirB that point to /tmp/orig.file:
    $ ln -s $(pwd)/orig.file $(pwd)/dirA/
    $ ln -s $(pwd)/orig.file $(pwd)/dirB/lorig.file
    $ ls -la dirA/ dirB/
    dirA/:
    total 44
    drwxr-xr-x  2 $USER $USER  4096 2010-12-26 00:57 .
    drwxrwxrwt 20 root  root  36864 2010-12-26 00:57 ..
    lrwxrwxrwx  1 $USER $USER    14 2010-12-26 00:57 orig.file -> /tmp/orig.file
    dirB/:
    total 44
    drwxr-xr-x  2 $USER $USER  4096 2010-12-26 00:58 .
    drwxrwxrwt 20 root  root  36864 2010-12-26 00:57 ..
    lrwxrwxrwx  1 $USER $USER    14 2010-12-26 00:58 lorig.file -> /tmp/orig.file

    At this point, I can use readlink to see what is the 'original' (well, I guess the usual term here is either 'target' or 'source', but those in my mind can be opposite concepts as well, so I'll just call it 'original') file of the symlinks, i.e.:

    $ readlink -f dirA/orig.file
    /tmp/orig.file
    $ readlink -f dirB/lorig.file
    /tmp/orig.file

    However, what I'd like to know is: is there a command I could run on the 'original' file to find all the symlinks that point to it? In other words, something like (pseudo):

    $ getsymlinks /tmp/orig.file
    /tmp/dirA/orig.file
    /tmp/dirB/lorig.file

    Thanks in advance for any comments, cheers!
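
    One possible approach, for what it's worth (a sketch using GNU find, not from the original post):

    # match symlinks whose literal target is this path
    $ find / -lname '/tmp/orig.file' 2>/dev/null
    # or follow links and compare against the original file
    # (note: this also prints the original file itself)
    $ find -L / -samefile /tmp/orig.file 2>/dev/null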

    Read the article

  • No Preview Images in File Open Dialogs on Windows 7

    I've been updating some file uploader code in my photo album today, and while I was working with the uploader I noticed that the File Open dialog that Silverlight uses to handle the file selection didn't ever allow me to see an image preview for image files. It sure would be nice if I could preview the images I'm about to upload before selecting them from a list. Here's what my list looked like: This is the Medium Icon view, but regardless of the views available, including Content view, only icons are...

    Read the article

  • HTG Explains: What “Everything Is a File” Means on Linux

    - by Chris Hoffman
    One of the defining features of Linux and other UNIX-like operating systems is that “everything is a file.” This is an oversimplification, but understanding what it means will help you understand how Linux works. Many things on Linux appear in your file system, but they aren’t actually files. They’re special files that represent hardware devices, system information, and other things — including a random number generator. These special files may be located in pseudo or virtual file systems such as /dev, which contains special files that represent devices, and /proc, which contains special files that represent system and process information.
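
    A quick way to see this for yourself (a shell sketch; these paths are standard on most Linux systems):

    $ ls -l /dev/null /dev/urandom          # character devices ("c" in the mode bits), not regular files
    $ head -c 8 /dev/urandom | od -An -tx1  # "reading the file" yields random bytes
    $ cat /proc/version                     # "reading the file" yields kernel information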

    Read the article

  • Exception: Cannot start your application. The workgroup information file is missing or opened exclusively

    - by Jeev
    We were getting this error when trying to connect to a password-protected Access file. This is what the connection string looked like:

    string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=<path to your access file>;User Id=;Password=password";

    To fix the issue, this is what we did:

    string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=<path to your access file>;Jet OLEDB:Database Password=password";

    We removed the User Id and changed Password to Jet OLEDB:Database Password. Hope this helps someone.

    Read the article

  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can't insert into specific columns: if, for example, there are five columns in a table, you must have five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table where you are expecting ID, Status and CreatedDate to be filled in automatically, so your text file only has FirstName and LastName values, as below:

    Dinesh,Asanka
    Saman,Liyanage
    Ruwan,Silva
    Susantha,Bathige
    Jude,Peires
    Sanjeewa,Jayawickrama

    If you use bulk insert against this table as follows, you will get an error:

    Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID).

    To avoid this, you will need to create a view with the columns you are expecting to fill and use bulk insert against it. If you check the table now, you will see the table populated with the values from the text file plus the default values.
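
    A sketch of the view approach described above (the table, view, and file names are assumptions):

    CREATE TABLE dbo.Person (
        ID INT IDENTITY(1,1) PRIMARY KEY,
        FirstName VARCHAR(50),
        LastName VARCHAR(50),
        Status TINYINT DEFAULT 1,
        CreatedDate DATETIME DEFAULT GETDATE()
    );
    GO
    -- The view exposes only the columns present in the text file
    CREATE VIEW dbo.vw_PersonImport AS
        SELECT FirstName, LastName FROM dbo.Person;
    GO
    -- Bulk insert against the view, so the remaining columns take their defaults
    BULK INSERT dbo.vw_PersonImport
    FROM 'C:\import\names.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');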

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
    A couple of days ago, I received the following email, and I found it very interesting, so I feel like sharing it with all of you. Note: please read the whole email before providing your suggestions.

    "Hi Pinal,

    If you can share these details on your blog, it will help many. We understand the value of the master database and we take its regular backup (every day at midnight). Yesterday we noticed that our master database log file had grown very large. This is the very first time that we have encountered such an issue. The master database is in simple recovery mode, so we assumed that it would never grow big; however, we now have a big log file. We ran the following command:

    USE [master]
    GO
    DBCC SHRINKFILE (N'mastlog', 0, TRUNCATEONLY)
    GO

    We know this command will break the chain of LSNs, but as per our understanding it should not matter, as we are in the simple recovery model. After running this, the log file became very small. Just to be cautious, we took a full backup of the master database right away. We totally understand that this is not the normal practice, so if you are going to tell us the same, we are aware of it. However, here is the question for you: what operation in the master database would have caused our log file to grow so large?

    Thanks, [name and company name removed as per request]"

    Here was my response to them:

    "Hi [name removed],

    It is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice. Regarding your question of what could have caused your master database log to grow large, let me try to guess what could have happened. Do you have any user tables in the master database? If yes, this is not recommended and NOT a good practice. If you have user tables in the master database and you are doing any long operation on them (maybe lots of inserts, updates, deletes, or rebuilding them), then it can cause this situation. You have made me curious about your scenario; do revert back.

    Kind Regards, Pinal"

    Within a few minutes I received a reply:

    "That was it, Pinal. We had one of the maintenance task log tables created in the master database, which had many long transactions during the night. We moved it to a newly created database named ‘maintenance’, and we will keep you updated."

    I was very glad to receive the email. I do not suggest that any user table should be created in the master database. It should be left alone, free of user objects. Now here is the question for you: can you think of any other reason for master log file growth?

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • File Upload multiple files in asp.net (similar to gmail)

    - by superstar
    Hi guys, I need suggestions with regards to multiple file upload using the FileUpload control in ASP.NET (along with C#). I have a FileUpload control, so I click the 'Browse' button, and when I select a file from the select-file dialog, I want the file to be shown as a link below the FileUpload control (somewhat similar to Gmail). This file should be shown in such a way that it can be deleted if I want to. I should also be able to upload another file from the FileUpload control. All these files should be uploaded to a location when I use a button click event in the end. I think I have made myself clear. Any suggestions are really helpful. Thanks.

    Read the article

  • What units does the ntp drift file use?

    - by arielf
    When the ntpd daemon is running, the file /var/lib/ntp/ntp.drift gets updated periodically. Example:

    17:20 hostname 118 ~> ls -l /var/lib/ntp/ntp.drift
    -rw-r--r-- 1 ntp ntp 7 May 20 16:46 /var/lib/ntp/ntp.drift
    # So it looks like it was last updated ~34 minutes ago

    The file has one number in it. For example, looking at 4 virtual hosts, I find these values, respectively:

    -22.086
    -10.214
    -13.669
    6.045

    I assume these are seconds per day(?), but I'm not sure. man ntpd mentions a different drift file, /etc/ntp.drift, which doesn't seem to exist. The man page doesn't explain what units are being used for the drift. Questions:

    1. Is /etc/ntp.drift actually /var/lib/ntp/ntp.drift on Ubuntu?
    2. What units is the drift expressed in?

    Thanks!

    Read the article

  • Display folder sizes in file manager

    - by wim
    In the Nautilus (or Nemo) file manager, the "Size" column shows the file size for files and the number of items contained in a folder for subdirectories. The number of items is not that important for me; it would be more useful if I could make this column show the total size contained under the directory. I had an extension on Windows called Folder Size which shows what I mean: I think it involved a service which ran in the background monitoring filesystem modifications in order to make sure the column was kept up to date. I am interested to know if there is any similar extension for Nautilus; I would also be open to switching to another file manager to get this functionality. I am aware of the Disk Usage Analyser in Ubuntu, but what I'm looking for is a solution with file manager integration.

    Read the article

  • How to open a .pdf file in a panel or iframe using ASP.NET C#

    - by rahul
    I am trying to open a .pdf file on a button click. I want to open the .pdf file in a panel or an iframe. With the following code I can only open the .pdf file in a separate window or in save-as mode:

    string filepath = Server.MapPath("News.pdf");
    FileInfo file = new FileInfo(filepath);
    if (file.Exists)
    {
        Response.ClearContent();
        Response.AddHeader("Content-Disposition", "inline; filename=" + file.Name);
        Response.AddHeader("Content-Length", file.Length.ToString());
        Response.ContentType = ReturnExtension(file.Extension.ToLower());
        Response.TransmitFile(file.FullName);
        Response.End();
    }

    How do I target an iframe with the line below?

    Response.AddHeader("Content-Disposition", "inline; filename=" + file.Name);
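
    One way this is commonly handled (a hedged sketch; the page name ShowPdf.aspx is an assumption): keep the streaming code above in a separate page and point the iframe's src at it, rather than trying to redirect the response into the iframe from a header:

    <!-- the hosting page: the iframe requests the page that streams the PDF inline -->
    <iframe src="ShowPdf.aspx?file=News.pdf" width="100%" height="600"></iframe>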

    Read the article

  • Running a .bash file in Eclipse

    - by Anne Ambe
    I know this is really an Eclipse issue, but I can't seem to log in on their forum. I am running Eclipse Juno for some C/C++ development. However, I wrote a .bash script that initiates the entire program. As an input argument to this script, I have a configuration file which is one directory lower than the .bash file. In a terminal I just do:

    ./startenb.bash ./CONF/ANNE

    and it runs just fine. How can I configure the external tools in Eclipse to take this file path as an input argument? Any help or old thread vaguely addressing this issue is highly welcome.

    Read the article

  • .NET Impersonate and file upload issues

    - by Jagd
    I have a webpage that allows a user to upload a file to a network share. When I run the webpage locally (within VS 2008) and try to upload the file, it works! However, when I deploy the website to the webserver and try to upload the file through the webpage, it doesn't work! The error being returned to me on the webserver says "Access to the path '\\05prd1\emp\test.txt' is denied." So, obviously, this is a permissions issue. The network share is configured to allow full access both to me (NT authentication) and to the NETWORK SERVICE (which is .NET's default account and what we have set in our IIS application pool as the default user for this website). I have tried this with and without impersonation on the webserver and neither way works, yet both ways work on my local machine (in other words, with and without impersonation works on my local machine). The code that does the file upload is below. Please note that the code below includes impersonation, but like I said above, I've tried it with and without impersonation and it's made no difference.

    if (fuCourses.PostedFile != null && fuCourses.PostedFile.ContentLength > 0)
    {
        System.Security.Principal.WindowsImpersonationContext impCtx;
        impCtx = ((System.Security.Principal.WindowsIdentity)User.Identity).Impersonate();
        try
        {
            lblMsg.Visible = true;
            // The courses file to be uploaded
            HttpPostedFile file = fuCourses.PostedFile;
            string fName = file.FileName;
            string uploadPath = "\\\\05prd1\\emp\\";
            // Get the file name
            if (fName.Contains("\\"))
            {
                fName = fName.Substring(fName.LastIndexOf("\\") + 1);
            }
            // Delete the courses file if it is already on \\05prd1\emp
            FileInfo fi = new FileInfo(uploadPath + fName);
            if (fi != null && fi.Exists)
            {
                fi.Delete();
            }
            // Open a new file stream on \\05prd1\emp and read bytes into it from the file upload
            FileStream fs = File.Create(uploadPath + fName, file.ContentLength);
            using (Stream stream = file.InputStream)
            {
                byte[] b = new byte[4096];
                int read;
                while ((read = stream.Read(b, 0, b.Length)) > 0)
                {
                    fs.Write(b, 0, read);
                }
            }
            fs.Close();
            lblMsg.Text = "File Successfully Uploaded";
            lblMsg.ForeColor = System.Drawing.Color.Green;
        }
        catch (Exception ex)
        {
            lblMsg.Text = ex.Message;
            lblMsg.ForeColor = System.Drawing.Color.Red;
        }
        finally
        {
            impCtx.Undo();
        }
    }

    Any help on this would be very appreciated!

    Read the article

  • How to call a JavaScript function from one frame to another in Chrome/Webkit with file protocol

    - by bambax
    I have developed an application that has a list of items in one frame; when one clicks on an item, it does something in another frame (loads an image). This used to work fine in all browsers, including Chrome 3; now it still works fine in Firefox, but in recent versions of Chrome (I believe since 4) it throws this error:

    Unsafe JavaScript attempt to access frame with URL (...) from frame with URL (...). Domains, protocols and ports must match.

    This is obviously a security "feature", but is it possible to get around it? Here is a simple test:

    index.html:

    <html>
      <frameset cols="50%,50%">
        <frame src="left.html" name="left"/>
        <frame src="right.html" name="right"/>
      </frameset>
    </html>

    left.html:

    <html>
      <body>
        <a href="javascript:parent.right.test('hello');">click me</a>
      </body>
    </html>

    right.html:

    <html>
      <body>
        <script>
          function test(msg) { alert(msg); }
        </script>
      </body>
    </html>

    The above works in Firefox 3.6 and Chrome 3, but in Chrome 5 it throws the above error... Edit: added the @cols attribute to the frameset element. In fact it works in Chrome if and only if the pages are served over the http protocol (and from the same domain), but my problem is when the pages are local and served from the file:// protocol. Then it works in Firefox (all versions) and Chrome 3 but not Chrome 5. (I don't have Chrome 4, so I'm not sure about that specific version, and I don't know if it's even possible to download a specific Chrome version; but for Chrome 5 I'm very sure it doesn't work.)
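
    For what it's worth, a workaround often suggested for local testing (an assumption that it fits this case, and test-only, since it weakens the same-origin sandbox) is to launch Chrome with access between local files enabled:

    chrome --allow-file-access-from-files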

    Read the article

  • set installer background / not able to run early_command in custom preseed file (precise)

    - by user73093
    I have a custom preseed file for a Precise live CD (which is loaded on boot; I checked syslog for that). My initial problem is that when booting in install mode (the default behavior for a live CD), Ubiquity runs X with a default wallpaper which is hardcoded to /usr/share/backgrounds/warty-final-ubuntu.png in the Ubiquity code. So my idea was to run early_command (https://help.ubuntu.com/12.04/installation-guide/i386/preseed-advanced.html) to copy my custom wallpaper over /usr/share/backgrounds/warty-final-ubuntu.png, assuming my custom wallpaper already resides on the rootfs in /usr/share/backgrounds. But it seems the early_command never runs (and I'm sure the preseed file is taken into account). Here is what I have added to my preseed file:

    d-i preseed/early_command string cp /usr/share/backgrounds/mywallpaper-defaults.jpg /usr/share/backgrounds/warty-final-ubuntu.png

    Even this one is never run:

    d-i preseed/early_command string /usr/bin/touch /tmp/testearly

    Thanks for helping!

    Read the article

  • minidlna and Samsung TV: "File format doesn't support" error

    - by Vasya
    I have a PC with Xubuntu 12.04 and a Samsung 27A950 TV, and I want to use my PC as a DLNA server. But when I try to play a file of any format on my TV, I get the error: "File format doesn't support". I tried to open *.avi, *.mkv and *.jpg files, with the same result. In the error log I can see:

    [2013/06/27 12:34:03] upnphttp.c:1907: error: Error opening /home/family/media/file1.mkv
    [2013/06/27 12:34:06] upnphttp.c:1907: error: Error opening /home/family/media/file2.avi

    So, I have no idea why it doesn't work. Any suggestions? miniDLNA version: 1.24.1-stedy. The config file is the default one, with only a few directories added. With the version from the default Ubuntu repository it was the same. Some time ago I tried MediaTomb, with the same error.

    Read the article

  • Recovering Excel file

    - by Kristin Rousselo
    I have AutoRecover on in my Excel program and I save my work several times a day. Tonight, I turned on the computer and found it had shut down for no apparent reason. When I pulled up Excel, it only showed one AutoRecover file, from 8 am this morning, even though I worked on and added to my Excel sheet for several hours this afternoon. Is there any way to recover the work I did this afternoon?

    Read the article

  • How to commit a file conversion?

    - by l0b0
    Say you've committed a file of type foo in your favorite VCS:

    $ vcs add data.foo
    $ vcs commit -m "My data"

    After publishing you realize there's a better data format, bar. To convert, you can use one of these solutions:

    $ vcs mv data.foo data.bar
    $ vcs commit -m "Preparing to use format bar"
    $ foo2bar --output data.bar data.bar
    $ vcs commit -m "Actual foo to bar conversion"

    or

    $ foo2bar --output data.foo data.foo
    $ vcs commit -m "Converted data to format bar"
    $ vcs mv data.foo data.bar
    $ vcs commit -m "Renamed to fit data type"

    or

    $ foo2bar --output data.bar data.foo
    $ vcs rm data.foo
    $ vcs add data.bar
    $ vcs commit -m "Converted data to format bar"

    In the first two cases the conversion is not an atomic operation and the file extension is "lying" in the first commit. In the last case the conversion will not be detected as a move operation, so as far as I can tell it'll be difficult to trace the file history across the commit. Although I'd instinctively prefer the last solution, I can't help thinking that tracing history should be given very high priority in version control. What is the best thing to do here?
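
    As an aside (assuming Git as the VCS; other systems differ): Git detects renames heuristically from content similarity, so whether the history can be traced across the conversion with the command below depends on how much foo2bar changes the content:

    $ git log --follow data.bar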

    Read the article

  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test:

    public static void main(String[] files) {
        for (String file : files) {
            File f = new File(file);
            if (f.exists()) {
                System.err.println(file + " exists");
                continue;
            }
            try {
                // find out the current time; I would hope to assume that the last-modified
                // time on the file will definitely be later than this
                System.out.println("-----------------------------------------");
                long time = System.currentTimeMillis();
                // create the file
                System.out.println("Creating " + file + " at " + time);
                f.createNewFile();
                // let's see what the timestamp actually is (I've only seen it < time)
                System.out.println(file + " was last modified at: " + f.lastModified());
                // well, ok, what if I explicitly set it to time?
                f.setLastModified(time);
                System.out.println("Updated modified time on " + file + " to " + time + " with actual " + f.lastModified());
            } catch (IOException e) {
                System.err.println("Unable to create file");
            }
        }
    }

    And here's what I get for output:

    -----------------------------------------
    Creating test.7 at 1272324597956
    test.7 was last modified at: 1272324597000
    Updated modified time on test.7 to 1272324597956 with actual 1272324597000
    -----------------------------------------
    Creating test.8 at 1272324597957
    test.8 was last modified at: 1272324597000
    Updated modified time on test.8 to 1272324597957 with actual 1272324597000
    -----------------------------------------
    Creating test.9 at 1272324597957
    test.9 was last modified at: 1272324597000
    Updated modified time on test.9 to 1272324597957 with actual 1272324597000

    The result is a race condition:

    1. JPoller records time of last check as xyz...123
    2. File created at xyz...456
    3. File last-modified timestamp actually reads xyz...000
    4. JPoller looks for new/updated files with timestamp greater than xyz...123
    5. JPoller ignores the newly added file because xyz...000 is less than xyz...123
    6. I pull my hair out for a while

    I tried digging into the code, but both lastModified() and createNewFile() eventually resolve to native calls, so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds? NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).
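
    Since the timestamps in the output above are all truncated to whole seconds, one defensive pattern (a sketch of the idea, not JPoller's actual code) is to round the poller's reference time down to the filesystem's resolution before comparing:

    import java.io.File;

    class PollSketch {
        public static void main(String[] args) {
            File f = new File("test.7");                          // hypothetical file being polled
            long lastCheck = System.currentTimeMillis();          // poller's reference time
            long lastCheckTruncated = (lastCheck / 1000) * 1000;  // assume 1 s mtime resolution
            boolean maybeNew = f.lastModified() >= lastCheckTruncated;
            System.out.println(maybeNew);
        }
    }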

    Read the article

  • `power/persist` file not found in USB device sysfs directory

    - by intuited
    The file /usr/share/doc/linux-doc/usb/persist.txt.gz mentions that the USB-persist capability can be enabled for a given USB device by writing 1 to the file persist in that device's directory, /sys/bus/usb/devices/$device/power. This is said (if I understood correctly) to allow mountings of volumes on the drive to persist across power loss during suspend. However, I've discovered that the device I'd like to enable this facility for (a USB hard drive) does not have such a file in its corresponding directory, and that attempts to create it are rebuffed. Is there perhaps a kernel module that needs to be loaded for this to work? Do I need to build a custom kernel? I'm running Ubuntu 10.10.
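
    For reference, the write described in the documentation looks like this (a sketch; <device> is a placeholder for the device's sysfs name, and it presupposes the persist file exists):

    $ echo 1 | sudo tee /sys/bus/usb/devices/<device>/power/persist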

    Read the article
