Search Results

Search found 70026 results on 2802 pages for 'file recovery'.


  • File extensions and MIME Types in .NET

    - by Marc Climent
    I want to get a MIME Content-Type from a given extension (preferably without accessing the physical file). I have seen some questions about this, and the methods described can be summarized as:

        1. Use registry information.
        2. Use urlmon.dll's FindMimeFromData.
        3. Use IIS information.
        4. Roll your own MIME mapping function. Based on this table, for example.

    I've been using no. 1 for some time, but I realized that the information provided by the registry is not consistent and depends on the software installed on the machine. Some extensions, like .zip, often don't have a Content-Type specified. Solution no. 2 forces me to have the file on disk in order to read the first bytes, which is slow but may give good results. The third method is based on Directory Services and all that stuff, which is something I don't like much because I have to add COM references, and I'm not sure it's consistent between IIS6 and IIS7; I also don't know the performance of this method. Finally, I didn't want to maintain my own table, but in the end it seems the best option if I want decent performance and consistent results across platforms (even Mono). Do you think there's a better option than using my own table, or is one of the other described methods better? What's your experience?

    Read the article

  • File Format DOS/Unix/MAC code sample

    - by mac
    I have written the following method to determine whether the file in question is formatted with DOS, Mac, or Unix line endings. I see at least one obvious issue: I am hoping that I will get the EOL on the first run, say within the first 1000 bytes. This may or may not happen. I ask you to review this and suggest improvements which will lead to hardening the code and making it more generic. THANK YOU. The method is called as:

        new FileFormat().discover(fileName, 0, 1000);

    and then:

        public void discover(String fileName, int offset, int depth) throws IOException {
            BufferedInputStream in = new BufferedInputStream(new FileInputStream(fileName));
            FileReader a = new FileReader(new File(fileName));
            byte[] bytes = new byte[(int) depth];
            in.read(bytes, offset, depth);
            a.close();
            in.close();
            int thisByte;
            int nextByte;
            boolean isDos = false;
            boolean isUnix = false;
            boolean isMac = false;
            for (int i = 0; i < (bytes.length - 1); i++) {
                thisByte = bytes[i];
                nextByte = bytes[i + 1];
                if (thisByte == 10 && nextByte != 13) {
                    isDos = true;
                    break;
                } else if (thisByte == 13) {
                    isUnix = true;
                    break;
                } else if (thisByte == 10) {
                    isMac = true;
                    break;
                }
            }
            if (!(isDos || isMac || isUnix)) {
                discover(fileName, offset + depth, depth + 1000);
            } else {
                // do something clever
            }
        }
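    As a point of comparison only, here is a minimal sketch of the conventional classification (CR followed by LF = DOS/Windows, lone LF = Unix, lone CR = classic Mac OS), streaming bytes until the first line break instead of guessing a buffer size up front. The class and method names are illustrative, not part of the code above:

        import java.io.BufferedInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class LineEndingSniffer {
            public enum Format { DOS, UNIX, MAC, UNKNOWN }

            // Reads until the first line break and classifies it.
            public static Format detect(String fileName) throws IOException {
                try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(fileName))) {
                    int current;
                    while ((current = in.read()) != -1) {
                        if (current == '\r') {
                            // CR followed by LF is DOS/Windows; a lone CR is classic Mac.
                            return in.read() == '\n' ? Format.DOS : Format.MAC;
                        }
                        if (current == '\n') {
                            return Format.UNIX;   // lone LF
                        }
                    }
                    return Format.UNKNOWN;        // no line break anywhere in the file
                }
            }
        }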

    Read the article

  • Websphere logs report {0} File not found, but application continues to work without issues

    - by Eric
    A WebSphere 6.1 server is running a Struts application that seems to be working fine. In the logs, however, I'm seeing the following error message, which is being continually emailed to the support staff:

        com.ibm.ws.webcontainer.webapp.WebAppErrorReport: SRVE0190E: File not found: {0}
            at com.ibm.ws.webcontainer.webapp.WebAppDispatcherContext.sendError(WebAppDispatcherContext.java:536)
            at com.ibm.ws.webcontainer.srt.SRTServletResponse.sendError(SRTServletResponse.java:930)
            at com.ibm.ws.webcontainer.extension.DefaultExtensionProcessor.handleRequest(DefaultExtensionProcessor.java:524)
            at com.ibm.ws.wswebcontainer.extension.DefaultExtensionProcessor.handleRequest(DefaultExtensionProcessor.java:111)
            at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3129)
            at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:238)
            at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:811)
            at com.ibm.ws.wswebcontainer.WebContainer.handleRequest(WebContainer.java:1433)
            at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:93)

    I can narrow the issue down to a single Action and JSP, which are too big to show here, but here's the action definition in struts-config.xml:

        <action path="/HappyDefaultThing"
                name="HappyDefaultThingActionForm"
                type="com.foo.webadministration.action.HappyDefaultThingAction"
                validate="true"
                input="/WaAssignDefaultHappyThing.jsp"
                scope="session">
            <forward name="success" path="/WaAssignDefaultHappyThing.jsp"/>
            <forward name="failure" path="/WaAssignDefaultHappyThing.jsp"/>
        </action>

    As far as I can see, nothing is missing and everything necessary is being found, but the logs say "File not found: {0}". What is "{0}"? The stack trace only shows IBM's code, which I can't see the source of and therefore can't trace. Is this a bug in the WebSphere code? I'd appreciate any help.

    Read the article

  • Using fgets to read strings from file in C

    - by Ivan
    I am trying to read strings from a file that has each string on a new line, but I think it reads a newline character once instead of a string and I don't know why. If I'm going about reading strings the wrong way, please correct me.

        i = 0;
        F1 = fopen("alg.txt", "r");
        F2 = fopen("tul.txt", "w");
        if (!feof(F1)) {
            do {    /* start scanning file */
                fgets(inimene[i].Enimi, 20, F1);
                fgets(inimene[i].Pnimi, 20, F1);
                fgets(inimene[i].Kood, 12, F1);
                printf("i=%d\nEnimi=%s\nPnimi=%s\nKaad=%s", i, inimene[i].Enimi, inimene[i].Pnimi, inimene[i].Kood);
                i++;
            } while (!feof(F1));
        }    /* finish getting structs */

    The printf is there to let me see what was read into what, and here is the result:

        i=0
        Enimi=peter
        Pnimi=pupkin
        Kood=223456iatb
        i=1
        Enimi=
        Pnimi=masha
        Kaad=gubkina
        i=2
        Enimi=234567iasb
        Pnimi=sasha
        Kood=dudkina

    As you can see, after the first struct is read there is a blank (a newline?) once, and then everything is shifted. I suppose I could read a dummy string to absorb that extra blank and then nothing would be shifted, but that doesn't help me understand the problem and avoid it in the future.

    Read the article

  • Append data to file in iPhone app

    - by zp26
    I have a problem. I want to create an XML-like file. I have written code for this, but the file does not append the information; it gets rewritten, and only the last NSData is saved. Can you help me? This is my code:

        -(void)salvataggioInXML:(NSString*)name:(float)x:(float)y:(float)z{
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectoryPath = [paths objectAtIndex:0];
            NSString *filePath = [documentsDirectoryPath stringByAppendingPathComponent:@"filePosizioni.xml"];
            NSFileHandle *myHandle = [NSFileHandle fileHandleForUpdatingAtPath:filePath];
            [myHandle seekToEndOfFile];

            NSString *tagName = [NSString stringWithFormat:@"<name>%@<name>", name];
            NSString *tagX = [NSString stringWithFormat:@"<x>%f<x>", name];
            NSString *tagY = [NSString stringWithFormat:@"<y>%f<y>", name];
            NSString *tagZ = [NSString stringWithFormat:@"<z>%f<z>", name];

            NSData* dataName = [tagName dataUsingEncoding: NSASCIIStringEncoding];
            NSData* dataX = [tagX dataUsingEncoding: NSASCIIStringEncoding];
            NSData* dataY = [tagY dataUsingEncoding: NSASCIIStringEncoding];
            NSData* dataZ = [tagZ dataUsingEncoding: NSASCIIStringEncoding];

            if ([dataName writeToFile:filePath atomically:YES]) NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataX writeToFile:filePath atomically:YES]) NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataY writeToFile:filePath atomically:YES]) NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataZ writeToFile:filePath atomically:YES]) NSLog(@"writeok");
            [myHandle seekToEndOfFile];

            NSLog(@"zp26 %@", filePath);
        }

    Read the article

  • How to load an HTML file into an included PHP file from another PHP?

    - by Peter NGM
    I have 2 PHP files and 1 HTML file. I want to include file2.php in file1.php. file1.php is:

        <html>
        <head>...</head>
        <body>
        ...
        <?php include("file2.php"); ?>
        ...
        </body>
        </html>

    I want to load an HTML file in file2.php. file2.php is:

        <?php
        $doc = new DOMDocument();
        $doc->loadHTMLFile("sample.html");
        echo $doc->saveHTML();
        ?>

    and sample.html is:

        ...
        <b>Hello World!</b>
        ...

    My problem is this error:

        Warning: DOMDocument::loadHTMLFile() [domdocument.loadhtmlfile]: I/O warning : failed to load external entity "sample.html" in C:\xampp\htdocs\project\file2.php on line 3

    Please help me solve this problem.

    Read the article

  • max file upload size change in web.config

    - by Christopher Johnson
    Using .NET MVC 3 and trying to increase the allowable file upload size. This is what I've added to web.config:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false"/>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" preCondition="managedHandler" />
            <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" preCondition="managedHandler" />
            <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" preCondition="managedHandler" />
          </modules>
          <handlers>
            <add name="Elmah" path="elmah.axd" verb="POST,GET,HEAD" type="Elmah.ErrorLogPageFactory, Elmah" preCondition="integratedMode" />
          </handlers>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="104857600"/>
            </requestFiltering>
          </security>

    (Ignore the Elmah stuff.) It's still not allowing file sizes larger than 50 MB, and this should allow up to 100 MB, no? Any ideas?
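    One thing worth checking (an assumption here, since the rest of the web.config isn't shown): maxAllowedContentLength only raises the IIS request-filtering limit and is specified in bytes, while ASP.NET enforces its own limit separately through httpRuntime maxRequestLength, which is specified in kilobytes and defaults to 4096 KB; the smaller of the two limits wins. A sketch of the matching 100 MB setting in system.web:

        <system.web>
          <!-- value is in kilobytes: 102400 KB = 100 MB -->
          <httpRuntime maxRequestLength="102400" />
        </system.web>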

    Read the article

  • find and replace values in a flat-file using PHP

    - by peirix
    I'd think there was a question on this already, but I can't find one. Maybe the solution is too easy... Anyway, I have a flat file and want to let the user change the values based on a name. I've already sorted out creating new name+value pairs using the fopen('a') mode, using jQuery to send the AJAX call with newValue and newName. But say the content looks like this:

        host|http:www.stackoverflow.com
        folder|/questions/
        folder2|/users/

    Now I want to change the folder value, so I'll send in folder as oldName and /tags/ as newValue. What's the best way to overwrite the value? The order in the list doesn't matter, and the name will always be on the left, followed by a | (pipe), the value, and then a newline. My first thought was to read the list, store it in an array, search all the [0]'s for oldName, then change the [1] that belongs to it, and then write it back to the file. But I feel there is a better way around this. Any ideas? Maybe regex?

    Read the article

  • Atomic int writes on file

    - by Waneck
    Hello! I'm writing an application that will have to handle many concurrent accesses, both by threads and by processes, so mutexes and locks should be kept to a minimum. To that end, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the info it has updated is changed to refer to the new location. So I only need to implement a small locking scheme to change this one int so that it refers to the new address. What is the best way to do it? I was thinking about putting a flag before the address: when it's set, readers spin until it's released. But I'm afraid that isn't atomic at all, is it? E.g. a reader reads the flag while it is unset, and at the same time a writer sets the flag and changes the value of the int; the reader may read an inconsistent value! I'm looking for locking techniques, but all I find is either thread-level locking or locking an entire file, not individual fields. Is it not possible to do this? How do append-only databases handle this? Thanks! Cauê
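    Not the lock-free flag scheme asked about, but a sketch of the narrowest locking the post describes - locking only the bytes of the head pointer rather than the whole file - using OS advisory record locks through Java's FileChannel, which do work across cooperating processes. The fixed offset of the pointer, and Java as the language, are assumptions of this sketch:

        import java.io.RandomAccessFile;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;

        public class HeadPointer {
            private static final long HEAD_OFFSET = 0;   // assumed fixed location of the int

            // Replace the head pointer under an exclusive lock on just those 4 bytes.
            public static void updateHead(RandomAccessFile file, int newAddress) throws Exception {
                FileChannel channel = file.getChannel();
                try (FileLock lock = channel.lock(HEAD_OFFSET, 4, false)) {
                    file.seek(HEAD_OFFSET);
                    file.writeInt(newAddress);
                }
            }

            // Read the head pointer under a shared lock (readers don't block each other).
            public static int readHead(RandomAccessFile file) throws Exception {
                FileChannel channel = file.getChannel();
                try (FileLock lock = channel.lock(HEAD_OFFSET, 4, true)) {
                    file.seek(HEAD_OFFSET);
                    return file.readInt();
                }
            }
        }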

    Read the article

  • MalformedURLException with file URI

    - by Paul Reiners
    While executing the following code:

        doc = builder.parse(file);

    where doc is an instance of org.w3c.dom.Document and builder is an instance of javax.xml.parsers.DocumentBuilder, I'm getting the following exception:

        Exception in thread "main" java.net.MalformedURLException: unknown protocol: c
            at java.net.URL.<init>(Unknown Source)
            at java.net.URL.<init>(Unknown Source)
            at java.net.URL.<init>(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
            at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
            at com.acme.ItemToThetaValues.createFiles(ItemToThetaValues.java:47)

    It's choking on this line of the file:

        <!DOCTYPE questestinterop SYSTEM "C:\Program Files\Acme\parsers\acme_full.dtd">

    I am not getting this error on my machine, while a user is getting it on his machine. We are both using version 6 of the Sun JRE. This error also occurs when he uses double backslashes in the path instead of single backslashes, and when he uses forward slashes instead of backslashes. First of all, is the XML correct? Is the path expressed correctly? Second of all, why is this error occurring on one computer but not on another?
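    This doesn't explain the difference between the two machines, but if validation against that DTD isn't actually needed, one common workaround is to tell the parser not to fetch the external DTD named in the DOCTYPE at all. A sketch, assuming the parser in use honours the standard Xerces feature:

        import java.io.File;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class ParseWithoutDtd {
            public static Document parse(File file) throws Exception {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                // Ask Xerces not to load the external DTD referenced by the DOCTYPE.
                factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
                DocumentBuilder builder = factory.newDocumentBuilder();
                return builder.parse(file);
            }
        }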

    Read the article

  • Looking for a very simple file-based CMS

    - by nfm
    I'm building a site for a friend for free, and am trying to work out a good way for her to be able to easily make updates. I haven't used any CMSs before. I was browsing the web today looking at some, and they all seem way too complicated for what I'm after. Basically, all I want is a really simple CMS that pulls together HTML snippets in particular subdirectories, wraps them in header/footer HTML, and inserts them into a template page in the appropriate section. I'm imagining a site layout something like this:

        /
        /index.php
        /blog_template.php
        /news_template.php
        /blog/
        /blog/header.php
        /blog/footer.php
        /blog/my-first-blog.html
        /blog/blogs-rule.html
        /blog/...

    Say index.php contains div#blog. PHP would wrap each /blog/*.html file in /blog/header.php and /blog/footer.php, and insert them into div#blog as div#blog([0-9]*). I haven't been able to find anything this basic, and am one step away from throwing something together myself, but I'm a bit short on time at the moment and figured I'd post here first. Has anyone come across something like this? I don't want any DB, extensions, user accounts, installation, config, updates... just a simple file-based solution. Thanks :) Forgot to mention - it needs to be FOSS and run on Linux!

    Read the article

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files, and I repeatedly remove the top line from each of these files. This pushes the limits of the Java heap. I would like a more scalable method of doing this without losing speed to a bunch of constructor calls. One solution is to only open files when they are needed, then read the first line and then delete it, but I am afraid that this would be significantly slower. So, using the Java libraries, what is the most efficient method of doing this?

    Edit: For external sort, the usual method is to break a large file up into several chunk files and sort each of the chunks. Then treat the sorted files like buffers: pop the top item from each file, and the smallest of all of those is the global minimum. Continue until all items are consumed. http://en.wikipedia.org/wiki/External_sorting

    My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient, because using many BufferedReader objects takes up too much space.
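    One common shape for the merge phase described above is to wrap each chunk reader in a small cursor object that caches its current line (the "peek") and to keep the cursors in a PriorityQueue, so only one line per chunk is held on the heap at a time. A rough sketch, assuming the chunk files are already sorted line by line; class and method names are illustrative:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.util.List;
        import java.util.PriorityQueue;

        public class KWayMerge {
            // Holds one reader plus its current line (the "peek").
            static class ChunkCursor implements Comparable<ChunkCursor> {
                final BufferedReader reader;
                String current;

                ChunkCursor(BufferedReader reader) throws IOException {
                    this.reader = reader;
                    this.current = reader.readLine();   // prime the peek
                }

                boolean advance() throws IOException {  // the "pop": move to the next line
                    current = reader.readLine();
                    return current != null;
                }

                public int compareTo(ChunkCursor other) {
                    return current.compareTo(other.current);
                }
            }

            public static void merge(List<String> chunkPaths, String outPath) throws IOException {
                PriorityQueue<ChunkCursor> heap = new PriorityQueue<>();
                for (String path : chunkPaths) {
                    ChunkCursor cursor = new ChunkCursor(new BufferedReader(new FileReader(path)));
                    if (cursor.current != null) heap.add(cursor);
                }
                try (BufferedWriter out = new BufferedWriter(new FileWriter(outPath))) {
                    while (!heap.isEmpty()) {
                        ChunkCursor smallest = heap.poll();   // global minimum across all chunks
                        out.write(smallest.current);
                        out.newLine();
                        if (smallest.advance()) heap.add(smallest);
                        else smallest.reader.close();
                    }
                }
            }
        }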

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online called parkplace, but it's in Ruby, it's old, and it isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

        - Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
        - Centralized database server for the USERDB (maybe PostgreSQL?)
        - Logging of all connections, bandwidth used - well, pretty much everything - to a centralized server (maybe PostgreSQL again?)
        - Easy server-side configuration (config file(s) stored on the servers)
        - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
        - Fast
        - High uptime
        - Low memory usage
        - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
        - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance.

    Read the article

  • Read/Write/Find/Replace huge csv file

    - by notapipe
    I have a huge (4.5 GB) CSV file. I need to perform basic cut, paste and replace operations on some columns. The data is pretty well organized; the only problem is I cannot work with it in Excel because of the size (2000 rows, 550000 columns). Here is some part of the data:

        ID,Affection,Sex,DRB1_1,DRB1_2,SENum,SEStatus,AntiCCP,RFUW,rs3094315,rs12562034,rs3934834,rs9442372,rs3737728
        D0024949,0,F,0101,0401,SS,yes,?,?,A_A,A_A,G_G,G_G
        D0024302,0,F,0101,7,SN,yes,?,?,A_A,G_G,A_G,?_?
        D0023151,0,F,0101,11,SN,yes,?,?,A_A,G_G,G_G,G_G

    I need to:

        - remove the 4th, 5th, 6th, 7th, 8th and 9th columns;
        - find every _ character from column 10 onwards and replace it with a space ( ) character;
        - replace every ? with zero (0);
        - replace every comma with a tab;
        - remove the first row (the one with the column names);
        - replace every 0 with 1, every 1 with 2 and every ? with 0 in the 2nd column;
        - replace F with 2, M with 1 and ? with 0 in the 3rd column;

    so that in the resulting file the output reads:

        D0024949 1 2 A A A A G G G G
        D0024302 1 2 A A G G A G 0 0
        D0023151 1 2 A A G G G G G G

    (Both input and output should read one line per row, with no extra blank rows.) Is there a memory-efficient way of doing that with Java (and I need code to do that), or a usable tool for playing with this large data so that I can easily apply Excel-like functionality?
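    Since every transformation listed above is per-line, the file never needs to be held in memory; it can be streamed through a BufferedReader/BufferedWriter pair one row at a time. A rough sketch under that assumption - it applies the rules above literally, ignores CSV quoting, and uses illustrative class and method names:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class CsvTransform {
            public static void transform(String inPath, String outPath) throws IOException {
                try (BufferedReader in = new BufferedReader(new FileReader(inPath));
                     BufferedWriter out = new BufferedWriter(new FileWriter(outPath))) {
                    String line = in.readLine();               // skip the header row
                    while ((line = in.readLine()) != null) {
                        String[] cols = line.split(",", -1);
                        List<String> kept = new ArrayList<>();
                        kept.add(cols[0]);                                  // ID unchanged
                        kept.add(recode(cols[1], "0", "1", "1", "2"));      // Affection: 0->1, 1->2, ?->0
                        kept.add(recode(cols[2], "F", "2", "M", "1"));      // Sex: F->2, M->1, ?->0
                        for (int i = 9; i < cols.length; i++) {             // drop columns 4-9, keep 10+
                            kept.add(cols[i].replace('_', ' ').replace("?", "0"));
                        }
                        out.write(String.join("\t", kept));                 // commas become tabs
                        out.newLine();
                    }
                }
            }

            // Maps a to aTo, b to bTo, and anything else (including ?) to "0".
            private static String recode(String value, String a, String aTo, String b, String bTo) {
                if (value.equals(a)) return aTo;
                if (value.equals(b)) return bTo;
                return "0";
            }
        }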

    Read the article

  • Looking for the log file on iPhone

    - by zp26
    Hi, I have a problem. I have the code below to save data to a file. I build my app on the device and run it. The result variable is TRUE, but I can't find the file on my iPhone device. Can you help me? Thanks, and sorry for my English. XP

        -(void)saveXML:(NSString*)name:(float)x:(float)y:(float)z{
            NSMutableData *data = [NSMutableData data];
            NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data];
            [archiver setOutputFormat:NSPropertyListXMLFormat_v1_0];
            [archiver encodeFloat:x forKey:@"x"];
            [archiver encodeFloat:y forKey:@"y"];
            [archiver encodeFloat:z forKey:@"z"];
            [archiver encodeObject:name forKey:@"name"];
            [archiver finishEncoding];

            NSString* filePath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:@"XML Position"];
            BOOL result = [data writeToFile:filePath atomically:YES];
            if (result)
                [self updateTextView:@"success"];

            [archiver release];
        }

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2 GB text file with 5 columns delimited by tabs. A row is considered a duplicate only if 4 out of its 5 columns match. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such de-duping? This de-duping code will be thrown away, so I'm looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudocode (roughly):

        Iterate over the rows
          i = current_row_no.
          Iterate over row no. i+1 to last_row
            if (col1 matches      // find duplicate
                && col2 matches
                && col3 matches
                && col4 matches) {
              col5List.set(i, get col5);   // aggregate
            }

    Duplicate example: A and B are duplicates, given

        A = (1,1,1,1,1), B = (1,1,1,1,2), C = (2,1,1,1,1)

    and the output would be

        A = (1,1,1,1,1+2)
        C = (2,1,1,1,1)

    (notice that B has been kicked out).
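    One way to avoid the quadratic pairwise scan in the pseudocode above is a single pass that keys a map on the first four columns and aggregates the fifth column into it. A rough sketch, assuming the distinct keys of a 2 GB file fit in memory and that aggregation means joining the col5 values with "+", as in the example; names are illustrative:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class DedupAggregate {
            public static void dedup(String inPath, String outPath) throws IOException {
                // Key = first four columns joined by tabs; value = aggregated fifth column.
                Map<String, String> rows = new LinkedHashMap<>();
                try (BufferedReader in = new BufferedReader(new FileReader(inPath))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] cols = line.split("\t", -1);
                        String key = cols[0] + "\t" + cols[1] + "\t" + cols[2] + "\t" + cols[3];
                        // First occurrence keeps its col5; later duplicates append theirs.
                        rows.merge(key, cols[4], (oldVal, newVal) -> oldVal + "+" + newVal);
                    }
                }
                try (BufferedWriter out = new BufferedWriter(new FileWriter(outPath))) {
                    for (Map.Entry<String, String> e : rows.entrySet()) {
                        out.write(e.getKey() + "\t" + e.getValue());
                        out.newLine();
                    }
                }
            }
        }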

    Read the article

  • how to write binary copy of structure array to file

    - by cerr
    I would like to write a binary image of a structure array to a binary file. I have tried this so far:

        #include <stdio.h>
        #include <string.h>

        #define NUM 256

        const char *fname = "binary.bin";

        typedef struct foo_s {
            int intA;
            int intB;
            char string[20];
        } foo_t;

        void main(void)
        {
            foo_t bar[NUM];
            bar[0].intA = 10;
            bar[0].intB = 999;
            strcpy(bar[0].string, "Hello World!");
            Save(bar);
            printf("%s written succesfully!\n", fname);
        }

        int Save(foo_t* pData)
        {
            FILE *pFile;
            int ptr = 0;
            int itr = 0;

            pFile = fopen(fname, "w");
            if (pFile == NULL) {
                printf("couldn't open %s\n", fname);
                return;
            }
            for (itr = 0; itr < NUM; itr++) {
                for (ptr = 0; ptr < sizeof(foo_t); ptr++) {
                    fputc((unsigned char)*((&pData[itr])+ptr), pFile);
                }
                fclose(pFile);
            }
        }

    but the compiler is saying

        aggregate value used where an integer was expected
        fputc((unsigned char)*((&pData[itr])+ptr), pFile);

    and I don't quite understand why. What am I doing wrong? Thanks!

    Read the article

  • stdio data from write not making it into a file

    - by user1551209
    I'm having a problem using stdio commands for manipulating data in a file. In short, when I write data into the file, write returns an int indicating that it was successful, but when I read it back out I only get the old data. Here's a stripped-down version of the code:

        fd = open(filename, O_RDWR|O_APPEND);
        struct dE *cDE = malloc(sizeof(struct dE));

        //Read present data
        printf("\nreading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

        printf("\nwriting new values\n");
        //Change the values locally
        cDE->key = //something new
        cDE->data = //something new

        //Write them back
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("WriteStatus <%d>\n", write(fd, cDE, deSize));

        //Re-read to make sure that it got written back
        printf("\nre-reading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

    Furthermore, here's the dE struct in case you're wondering:

        struct dE {
            int key;
            char data[DataSize];
        };

    This prints:

        reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>

        writing new values
        SeekStatus <1072>
        WriteStatus <32>

        re-reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>

    Read the article

  • Python: Parsing a colon delimited file with various counts of fields

    - by Mark
    I'm trying to parse a few files with the following format in 'clientname'.txt:

        hostname:comp1
        time: Fri Jan 28 20:00:02 GMT 2011
        ip:xxx.xxx.xx.xx
        fs:good:45
        memory:bad:78
        swap:good:34
        Mail:good

    Each section is delimited by a colon, but where lines 0, 2 and 6 have 2 fields, lines 1 and 3-5 have 3 or more fields. (A big issue I've had trouble with is the time: line, since 20:00:02 is really a time and not 3 separate fields.) I have several files like this that I need to parse, and there are many more lines in some of these files with multiple fields.

        ...
        for i in clients:
            if os.path.isfile(rpt_path + i + rpt_ext):
                # if the rpt exists then do this
                rpt = rpt_path + i + rpt_ext
                l_count = 0
                for line in open(rpt, "r"):
                    s_line = line.rstrip()
                    part = s_line.split(':')
                    print part
                    l_count = l_count + 1
            else:
                # else break
                break

    First I'm checking whether the file exists; if it does, then I open the file and parse it (eventually). As of now I'm just printing the output (print part) to make sure it's parsing right. Honestly, the only trouble I'm having at this point is the time: field. How can I treat that line differently from all the others? The time field is ALWAYS the 2nd line in all of my report files.

    Read the article

  • I cut-to-move my DCIM folder to the external SD card when an automatic Android OS update popped up before I could choose the target - cannot recover 200+ photos

    - by ZeroG
    I was downloading my Exhibit II's DCIM camera folder (with months of photos inside) to its external SD card, in order to transfer them to my laptop. In my overconfidence, I hurriedly chose cut-to-move (rather than copy-to-move) when KABOOM! - an automatic Android OS update popped up before I could choose the target!!! I figured everything was in cache and calmly tried to go through with the update. But that was not a typically seamless event. It showed the downloading icon, but - since I had rooted the phone - it brought up the command line and recovery sequence. But neither Android nor I had yet downloaded any alternate custom ROM files to the internal SD to update from! So were they trying to make me unroot my phone by giving me some bogus update on the fly, or just giving me a hard time by handing me an unrooted ROM that I'd have to figure out how to root again? Yes, I know there was that blurb about overwriting a file of the same name, but I was trying to shake the stubborn update being forced on my phone at this precarious moment. I thought I had frozen or turned off all those auto-updates previously. Anyway, phones are small and fingers are big (sigh)... I tried to reboot into safe mode, but the resulting photo folder was partially overwritten (200 files had names but zero bytes in them). I thought maybe it was still hung in cache or deposited somewhere else, but I have searched everywhere with file managers. Since I did not have Titanium backing up the camera, photo folder or gallery, I cannot recover the 200+ photos. Dumb. You can understand my dilemma, as I am involved in the arts, and although these are just camera-phone photos, most of them were historic or aesthetic, at least as to subject matter. Photo ops don't reoccur. I have tried a couple of recovery apps from the market, like Search Duplicates & Recover, to no avail. I was only able to salvage the stuff I'd sent out in messages. I've got several decades in computers, and this is such a miserable beginner's piece of bad luck I can't believe it happened to me. They were precious photos! Yes, I have turned on Titanium since, and yes, I even tried USB-to-laptop recoveries. Being on a MacBook Pro, I'm trying androidfiletransfer.dmg, but I'd have to upgrade to Peach Sunrise to get above Android 3.0 for that app to recognize the phone via USB, and the programmer says installation zeroes your data, so that pretty much toasts any secret hidden places where these photos may have been deposited. I don't want to do that and am still trying to find them. They certainly didn't make it to my external SD card. If any of you techies out there know anything, please help, and thanks. Despite decades of being in computing, unfamiliar and ever-changing hardware or software can humble even the most seasoned veterans.

    Read the article

  • MySQL Server - Got error -1 from storage engine

    - by Bobby
    I am currently trying to restore a MySQL table from its .ibd file. I have been following the instructions in the MySQL reference manual on how to use DISCARD and IMPORT TABLESPACE to replace the .ibd file. Discarding the tablespace returns no error and the file is deleted; however, importing the replacement .ibd file yields a "Got error -1 from storage engine" error. There doesn't seem to be much information about what exactly an error -1 is. Does anybody have any further insight as to why the import tablespace isn't working?

    Read the article

  • Recover data from SD card

    - by Paul Tarjan
    I have a 2 GB Kingston microSD card which is about 3 years old. I put it in a reader today in my Windows Vista computer, wrote a 32 MB file onto it, safely removed it, and then tried to read it elsewhere. Nothing. Putting it back in Vista, it now says:

        You need to format the disk in drive F: before you can use it.

    What should I do? I have access to many computers and OSes if your recommendations need that. I would be very sad if I lost all the contents of the card. Most of the data is backed up, but there are a few things that aren't. :( Doing a

        # dd if=/dev/sdg of=~/tmp/sd.bin

    gives me a 2 GB file, and grepping the file it seems like lots of my data is still there. How can I put it back together?

    Read the article

  • Recovered video files won't play

    - by BioGeek
    I have an SD card with pictures and video which malfunctioned. I was able to recover the files with PhotoRec. The pictures are OK, but when I try to open the video files (*.mov extension) I get the following errors in the following programs:

        Windows Media Player: "Windows Media Player encountered a problem while playing the file"
        QuickTime: "Error -2048: Couldn't open the file because it is not a file that QuickTime understands"
        VLC: shows the first frame of the video, and the sound is just white noise

    The file sizes look correct, so I presume the data is still in there. Is there any way to fix these recovered video files?

    Read the article

  • How do I force an exchange database to become "active"?

    - by makerofthings7
    We had a catastrophic failure where all that remains is a single .edb file. No backups. No log files. The database that remains is on the "passive" copy; the "active" copy is missing, but the server is active. The Exchange console reports that the .edb file needs to be reseeded, but there is no source to reseed from. How do I make the "invalid" database file (missing its logs) valid? How do I make Exchange recognize this as a valid database to use as a primary?

    Read the article

  • Where will the image be saved? [closed]

    - by Dummy Derp
        import java.awt.Dimension;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import java.io.File;
        ...
        public void captureScreen(String fileName) throws Exception {
            Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
            Rectangle screenRectangle = new Rectangle(screenSize);
            Robot robot = new Robot();
            BufferedImage image = robot.createScreenCapture(screenRectangle);
            ImageIO.write(image, "png", new File(fileName));
        }
        ...

    Going over this code I found on the internet, I understood everything except the part where the file is created. In what format should the file name be? Should it be "C:/myFolder/myImage.png" or just "myImage.png", and where will it be saved? Here is what the docs say:

        File
        public File(String pathname)
        Creates a new File instance by converting the given pathname string into an abstract pathname. If the given string is the empty string, then the result is the empty abstract pathname.
        Parameters:
            pathname - A pathname string
        Throws:
            NullPointerException - If the pathname argument is null
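    For what it's worth: a relative name such as "myImage.png" is resolved against the JVM's working directory (the user.dir system property, normally the directory the program was launched from), while an absolute path such as "C:/myFolder/myImage.png" is written exactly there. A tiny sketch to see where a relative name would end up (the file name here is just an example):

        import java.io.File;

        public class WhereDoesItGo {
            public static void main(String[] args) {
                File f = new File("myImage.png");          // relative: resolved against user.dir
                System.out.println(f.getAbsolutePath());   // prints the full path the PNG would get
                System.out.println(System.getProperty("user.dir"));
            }
        }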

    Read the article
