Search Results

Search found 78189 results on 3128 pages for 'file management'.


  • Using fgets to read strings from file in C

    - by Ivan
    I am trying to read strings from a file that has one string per line, but I think one of the reads picks up a newline character instead of a string, and I don't know why. If I'm going about reading strings the wrong way, please correct me.

        i = 0;
        F1 = fopen("alg.txt", "r");
        F2 = fopen("tul.txt", "w");
        if (!feof(F1)) {
            do { /* start scanning file */
                fgets(inimene[i].Enimi, 20, F1);
                fgets(inimene[i].Pnimi, 20, F1);
                fgets(inimene[i].Kood, 12, F1);
                printf("i=%d\nEnimi=%s\nPnimi=%s\nKaad=%s", i, inimene[i].Enimi, inimene[i].Pnimi, inimene[i].Kood);
                i++;
            } while (!feof(F1));
        } /* finish getting structs */

    The printf is there to let me see what was read into what, and here is the result:

        i=0
        Enimi=peter
        Pnimi=pupkin
        Kood=223456iatb
        i=1
        Enimi=
        Pnimi=masha
        Kaad=gubkina
        i=2
        Enimi=234567iasb
        Pnimi=sasha
        Kood=dudkina

    As you can see, after the first struct is read there is a blank (a newline?) once, and then everything is shifted. I suppose I could read a dummy string to absorb that extra blank so that nothing would be shifted, but that doesn't help me understand the problem and avoid it in the future.


  • Append data to file in iPhone app

    - by zp26
    I have a problem. I want to create an XML-like file. I have written code for this, but the file does not get appended to; it is rewritten, and only the last NSData is saved. Can you help me? This is my code:

        -(void)salvataggioInXML:(NSString*)name:(float)x:(float)y:(float)z {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectoryPath = [paths objectAtIndex:0];
            NSString *filePath = [documentsDirectoryPath stringByAppendingPathComponent:@"filePosizioni.xml"];
            NSFileHandle *myHandle = [NSFileHandle fileHandleForUpdatingAtPath:filePath];
            [myHandle seekToEndOfFile];

            NSString *tagName = [NSString stringWithFormat:@"<name>%@<name>", name];
            NSString *tagX = [NSString stringWithFormat:@"<x>%f<x>", name];
            NSString *tagY = [NSString stringWithFormat:@"<y>%f<y>", name];
            NSString *tagZ = [NSString stringWithFormat:@"<z>%f<z>", name];

            NSData *dataName = [tagName dataUsingEncoding:NSASCIIStringEncoding];
            NSData *dataX = [tagX dataUsingEncoding:NSASCIIStringEncoding];
            NSData *dataY = [tagY dataUsingEncoding:NSASCIIStringEncoding];
            NSData *dataZ = [tagZ dataUsingEncoding:NSASCIIStringEncoding];

            if ([dataName writeToFile:filePath atomically:YES]) NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataX writeToFile:filePath atomically:YES]) NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataY writeToFile:filePath atomically:YES]) NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataZ writeToFile:filePath atomically:YES]) NSLog(@"writeok");
            [myHandle seekToEndOfFile];

            NSLog(@"zp26 %@", filePath);
        }


  • How to load an HTML file into an included PHP file from another PHP?

    - by Peter NGM
    I have 2 PHP files and 1 HTML file. I want to include file2.php in file1.php. file1.php is:

        <html>
        <head>...</head>
        <body>
        ...
        <?php include("file2.php"); ?>
        ...
        </body>
        </html>

    I want to load an HTML file in file2.php. file2.php is:

        <?php
        $doc = new DOMDocument();
        $doc->loadHTMLFile("sample.html");
        echo $doc->saveHTML();
        ?>

    and sample.html is:

        ... <b>Hello World!</b> ...

    My problem is this error:

        Warning: DOMDocument::loadHTMLFile() [domdocument.loadhtmlfile]: I/O warning : failed to load external entity "sample.html" in C:\xampp\htdocs\project\file2.php on line 3

    Please help me to solve this problem.


  • max file upload size change in web.config

    - by Christopher Johnson
    I'm using .NET MVC 3 and trying to increase the allowable file upload size. This is what I've added to web.config:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false"/>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" preCondition="managedHandler" />
            <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" preCondition="managedHandler" />
            <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" preCondition="managedHandler" />
          </modules>
          <handlers>
            <add name="Elmah" path="elmah.axd" verb="POST,GET,HEAD" type="Elmah.ErrorLogPageFactory, Elmah" preCondition="integratedMode" />
          </handlers>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="104857600"/>
            </requestFiltering>
          </security>

    (Ignore the Elmah stuff.) It's still not allowing file sizes larger than 50MB, and this should allow up to 100MB, no? Any ideas?


  • find and replace values in a flat-file using PHP

    - by peirix
    I'd think there would be a question on this already, but I can't find one. Maybe the solution is too easy... Anyway, I have a flat file and want to let the user change the values based on a name. I've already sorted out creating new name+value pairs using the fopen('a') mode, using jQuery to send the AJAX call with newValue and newName. But say the content looks like this:

        host|http:www.stackoverflow.com
        folder|/questions/
        folder2|/users/

    Now I want to change the folder value. So I'll send in folder as oldName and /tags/ as newValue. What's the best way to overwrite the value? The order in the list doesn't matter, and the name will always be on the left, followed by a | (pipe), the value, and then a newline. My first thought was to read the list, store it in an array, search all the [0]'s for oldName, then change the [1] that belongs to it, and then write it back to a file. But I feel there is a better way around this? Any ideas? Maybe regex?


  • Atomic int writes on file

    - by Waneck
    Hello! I'm writing an application that will have to be able to handle many concurrent accesses, both by threads and by processes, so no mutexes or locks should be applied to this. To keep the use of locks to a minimum, I'm designing the file to be "append-only", so all data is first appended to disk, and then the address pointing to the info it has updated is changed to refer to the new one. So I will need to implement a small locking scheme only to change this one int so that it refers to the new address. What is the best way to do it? I was thinking about maybe putting a flag before the address, so that when it's set, readers use a spin lock until it's released. But I'm afraid that isn't at all atomic, is it? E.g. a reader reads the flag while it is unset; at the same time a writer sets the flag and changes the value of the int; the reader may read an inconsistent value! I'm looking for locking techniques, but all I find is either thread locking techniques or locking an entire file, not individual fields. Is it not possible to do this? How do append-only databases handle this? Thanks! Cauê
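
    The flag-then-value scheme described above is essentially a seqlock. As a very rough, in-process illustration only (a sketch under the assumption of a single writer; coordinating separate processes would need something like a memory-mapped file or OS shared memory, which this does not cover, and for one lone int an atomic variable is already enough on its own):

        import java.util.concurrent.atomic.AtomicInteger;
        import java.util.concurrent.atomic.AtomicLong;

        // Single-writer seqlock sketch: the version is bumped to an odd number
        // before the pointer is updated and back to an even number afterwards,
        // so readers can tell they observed a half-finished update and retry.
        class HeadPointer {
            private final AtomicLong version = new AtomicLong(0);
            private final AtomicInteger address = new AtomicInteger(0);

            void publish(int newAddress) {      // called by the single writer
                version.incrementAndGet();      // now odd: update in progress
                address.set(newAddress);
                version.incrementAndGet();      // even again: update visible
            }

            int read() {                        // called by any number of readers
                while (true) {
                    long before = version.get();
                    int value = address.get();
                    if (before % 2 == 0 && version.get() == before) {
                        return value;           // no writer interfered with this read
                    }
                    // a write was in flight; retry
                }
            }
        }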


  • MalformedURLException with file URI

    - by Paul Reiners
    While executing the following code:

        doc = builder.parse(file);

    where doc is an instance of org.w3c.dom.Document and builder is an instance of javax.xml.parsers.DocumentBuilder, I'm getting the following exception:

        Exception in thread "main" java.net.MalformedURLException: unknown protocol: c
            at java.net.URL.<init>(Unknown Source)
            at java.net.URL.<init>(Unknown Source)
            at java.net.URL.<init>(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
            at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
            at com.acme.ItemToThetaValues.createFiles(ItemToThetaValues.java:47)

    It's choking on this line of the file:

        <!DOCTYPE questestinterop SYSTEM "C:\Program Files\Acme\parsers\acme_full.dtd">

    I am not getting this error on my machine, while a user is getting it on his machine. We are both using version 6 of the Sun JRE. The error also occurs when he uses double backslashes in the path instead of single backslashes, and when he uses forward slashes instead of backslashes. First of all, is the XML correct? Is the path expressed correctly? Second, why is this error occurring on one computer but not on another?
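
    For context, the SYSTEM identifier in a DOCTYPE is resolved as a URI, and a bare Windows path such as C:\Program Files\... can end up looking to java.net.URL like a URL whose protocol is "c" (exactly what the exception reports). As a hedged sketch only, and assuming the DTD is not actually needed to parse the document, one way to sidestep the resolution entirely is to ask the Xerces-based JDK parser not to load external DTDs; another option, not shown here, is to install an EntityResolver that rewrites the identifier into a proper file: URI via File.toURI():

        import java.io.File;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        class DtdFreeParse {
            // Parse an XML file without fetching the external DTD named in its DOCTYPE.
            static Document parse(File file) throws Exception {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                // Skipping the DTD avoids resolving its SYSTEM identifier as a URL at all.
                factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
                DocumentBuilder builder = factory.newDocumentBuilder();
                return builder.parse(file);
            }
        }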


  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files, and I repeatedly remove the top line from each of these files. This pushes the limits of the Java heap. I would like a more scalable method of doing this without losing speed to a bunch of constructor calls. One solution is to only open files when they are needed, then read the first line and then delete it, but I am afraid that this will be significantly slower. So, using the Java libraries, what is the most efficient way of doing this?

    --Edit--

    For an external sort, the usual method is to break a large file up into several chunk files and sort each of the chunks. Then treat the sorted files like buffers: pop the top item from each file, and the smallest of all of those is the global minimum. Continue until all items are consumed. http://en.wikipedia.org/wiki/External_sorting

    My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient, because using many BufferedReader objects takes up too much space.
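
    A minimal sketch of that peek/pop-per-chunk shape, assuming plain text chunk files with one record per line (the names here are made up, and the per-file memory cost is essentially just the reader's buffer, which can be shrunk further by passing a small size to the BufferedReader constructor):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.List;
        import java.util.PriorityQueue;

        // One peekable cursor per sorted chunk; only the current line and a small
        // read buffer are held in memory for each file, never the whole chunk.
        class ChunkCursor implements AutoCloseable {
            private final BufferedReader reader;
            private String head;                        // current "top" line, or null at EOF

            ChunkCursor(Path chunk) throws IOException {
                this.reader = Files.newBufferedReader(chunk);
                this.head = reader.readLine();
            }

            String peek() { return head; }

            String pop() throws IOException {           // return the top line and advance
                String current = head;
                head = reader.readLine();
                return current;
            }

            public void close() throws IOException { reader.close(); }
        }

        class Merger {
            // k-way merge: repeatedly emit the smallest head among all cursors.
            static void merge(List<Path> chunks, Appendable out) throws IOException {
                PriorityQueue<ChunkCursor> heap =
                        new PriorityQueue<>((a, b) -> a.peek().compareTo(b.peek()));
                for (Path chunk : chunks) {
                    ChunkCursor c = new ChunkCursor(chunk);
                    if (c.peek() != null) heap.add(c);   // skip empty chunks
                }
                while (!heap.isEmpty()) {
                    ChunkCursor smallest = heap.poll();  // cursor holding the global minimum
                    out.append(smallest.pop()).append('\n');
                    if (smallest.peek() != null) heap.add(smallest);
                    else smallest.close();
                }
            }
        }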


  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called parkplace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

        - Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
        - Centralized database server for the USERDB (maybe PostgreSQL?)
        - Logging of all connections, bandwidth used, and pretty much everything else, to a centralized server (maybe PostgreSQL again?)
        - Easy server-side configuration (config file(s) stored on the servers)
        - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
        - Fast
        - High uptime
        - Low memory usage
        - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
        - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance


  • Read/Write/Find/Replace huge csv file

    - by notapipe
    I have a huge (4.5 GB) csv file. I need to perform basic cut-and-paste and replace operations on some columns. The data is pretty well organized; the only problem is I cannot work on it with Excel because of the size (2000 rows, 550000 columns). Here is some part of the data:

        ID,Affection,Sex,DRB1_1,DRB1_2,SENum,SEStatus,AntiCCP,RFUW,rs3094315,rs12562034,rs3934834,rs9442372,rs3737728
        D0024949,0,F,0101,0401,SS,yes,?,?,A_A,A_A,G_G,G_G
        D0024302,0,F,0101,7,SN,yes,?,?,A_A,G_G,A_G,?_?
        D0023151,0,F,0101,11,SN,yes,?,?,A_A,G_G,G_G,G_G

    I need to:

        - remove the 4th, 5th, 6th, 7th, 8th and 9th columns;
        - find every _ character from column 10 onwards and replace it with a space ( ) character;
        - replace every ? with zero (0);
        - replace every comma with a tab;
        - remove the first row (the one that has the column names);
        - in the 2nd column, replace every 0 with 1, every 1 with 2 and every ? with 0;
        - in the 3rd column, replace F with 2, M with 1 and ? with 0;

    so that in the resulting file the output reads:

        D0024949 1 2 A A A A G G G G
        D0024302 1 2 A A G G A G 0 0
        D0023151 1 2 A A G G G G G G

    (Both input and output should read one line per row, no extra blank rows.) Is there a memory-efficient way of doing that with Java (and I need code to do it), or a usable tool for working with this large data so that I can easily apply Excel-like functionality?
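
    A rough, hedged sketch of the streaming approach in Java (hypothetical file names, and the rules applied exactly as listed above, so adjust the column indices or replacements if the real layout differs); it reads and writes one row at a time, so memory use stays around the size of a single line:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class CsvRewrite {
            public static void main(String[] args) throws IOException {
                try (BufferedReader in = Files.newBufferedReader(Paths.get("input.csv"));
                     BufferedWriter out = Files.newBufferedWriter(Paths.get("output.txt"))) {
                    String line = in.readLine();               // skip the header row
                    while ((line = in.readLine()) != null) {
                        String[] col = line.split(",", -1);
                        StringBuilder sb = new StringBuilder(line.length());
                        for (int i = 0; i < col.length; i++) {
                            if (i >= 3 && i <= 8) continue;    // drop the 4th to 9th columns
                            String v = col[i];
                            if (i == 1) {                      // Affection: 0->1, 1->2, ?->0
                                v = v.equals("0") ? "1" : v.equals("1") ? "2" : "0";
                            } else if (i == 2) {               // Sex: F->2, M->1, ?->0
                                v = v.equals("F") ? "2" : v.equals("M") ? "1" : "0";
                            } else if (i >= 9) {               // genotype columns from column 10 on
                                v = v.replace('_', ' ').replace('?', '0');
                            }
                            if (sb.length() > 0) sb.append('\t');
                            sb.append(v);
                        }
                        out.write(sb.toString());
                        out.newLine();
                    }
                }
            }
        }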


  • Looking for the log file on iPhone

    - by zp26
    Hi, I have a problem. I have the code below for save a data on file. I build my app on the device, and run. The result variable is TRUE, but i don't find the file on the my iPhone device. Can you help me? Thank and sorry for my english XP -(void)saveXML:(NSString*)name:(float)x:(float)y:(float)z{ NSMutableData *data = [NSMutableData data]; NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data]; [archiver setOutputFormat:NSPropertyListXMLFormat_v1_0]; [archiver encodeFloat:x forKey:@"x"]; [archiver encodeFloat:y forKey:@"y"]; [archiver encodeFloat:z forKey:@"z"]; [archiver encodeObject:name forKey:@"name"]; [archiver finishEncoding]; NSString* filePath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:@"XML Position"]; BOOL result = [data writeToFile:filePath atomically:YES]; if(result) [self updateTextView:@"success"]; [archiver release]; }

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of its 5 columns match another row. Right now, I am deduping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such deduping? This deduping code will be thrown away afterwards, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudocode (roughly):

        Iterate over the rows
            i = current_row_no.
            Iterate over row no. i+1 to last_row
                if (col1 matches        // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches)
                {
                    col5List.set(i, get col5);   // aggregate
                }

    Duplicate example: A and B will be duplicates, with A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1), and the output would be A=(1,1,1,1,1+2), C=(2,1,1,1,1) [notice that B has been kicked out].
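
    The quadratic pairwise comparison is what costs the 20 hours; a single pass that keys a map on the first four columns turns it into one scan per file. A hedged sketch (hypothetical file names, tab-separated input assumed, and column 5 simply concatenated with a "+" as in the example above, so swap in whatever aggregation is really needed; it also assumes the distinct keys fit in memory, otherwise sort the file by the key first and aggregate adjacent rows instead):

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class Dedup {
            public static void main(String[] args) throws IOException {
                // One map entry per distinct 4-column key, kept in first-seen order.
                Map<String, String[]> rows = new LinkedHashMap<>();
                try (BufferedReader in = Files.newBufferedReader(Paths.get("input.tsv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] col = line.split("\t", -1);
                        String key = col[0] + "\t" + col[1] + "\t" + col[2] + "\t" + col[3];
                        String[] existing = rows.get(key);
                        if (existing == null) {
                            rows.put(key, col);                       // first row with this key
                        } else {
                            existing[4] = existing[4] + "+" + col[4]; // aggregate column 5
                        }
                    }
                }
                try (BufferedWriter out = Files.newBufferedWriter(Paths.get("output.tsv"))) {
                    for (String[] col : rows.values()) {
                        out.write(String.join("\t", col));
                        out.newLine();
                    }
                }
            }
        }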


  • how to write binary copy of structure array to file

    - by cerr
    I would like to write a binary image of a structure array to a binary file. I have tried this so far:

        #include <stdio.h>
        #include <string.h>

        #define NUM 256

        const char *fname = "binary.bin";

        typedef struct foo_s {
            int intA;
            int intB;
            char string[20];
        } foo_t;

        void main(void)
        {
            foo_t bar[NUM];
            bar[0].intA = 10;
            bar[0].intB = 999;
            strcpy(bar[0].string, "Hello World!");
            Save(bar);
            printf("%s written succesfully!\n", fname);
        }

        int Save(foo_t *pData)
        {
            FILE *pFile;
            int ptr = 0;
            int itr = 0;
            pFile = fopen(fname, "w");
            if (pFile == NULL) {
                printf("couldn't open %s\n", fname);
                return;
            }
            for (itr = 0; itr < NUM; itr++) {
                for (ptr = 0; ptr < sizeof(foo_t); ptr++) {
                    fputc((unsigned char)*((&pData[itr]) + ptr), pFile);
                }
                fclose(pFile);
            }
        }

    but the compiler is saying

        aggregate value used where an integer was expected
            fputc((unsigned char)*((&pData[itr])+ptr), pFile);

    and I don't quite understand why. What am I doing wrong? Thanks!


  • stdio data from write not making it into a file

    - by user1551209
    I'm having a problem using stdio commands to manipulate data in a file. In short, when I write data into a file, write returns an int indicating that it was successful, but when I read it back out I only get the old data. Here's a stripped-down version of the code:

        fd = open(filename, O_RDWR|O_APPEND);

        struct dE *cDE = malloc(sizeof(struct dE));

        //Read present data
        printf("\nreading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

        printf("\nwriting new values\n");
        //Change the values locally
        cDE->key = //something new
        cDE->data = //something new

        //Write them back
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("WriteStatus <%d>\n", write(fd, cDE, deSize));

        //Re-read to make sure that it got written back
        printf("\nre-reading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

    Furthermore, here's the dE struct in case you're wondering:

        struct dE {
            int key;
            char data[DataSize];
        };

    This prints:

        reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>

        writing new values
        SeekStatus <1072>
        WriteStatus <32>

        re-reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>


  • Python: Parsing a colon delimited file with various counts of fields

    - by Mark
    I'm trying to parse a few files with the following format in 'clientname'.txt:

        hostname:comp1
        time: Fri Jan 28 20:00:02 GMT 2011
        ip:xxx.xxx.xx.xx
        fs:good:45
        memory:bad:78
        swap:good:34
        Mail:good

    Each section is delimited by a ':', but where lines 0, 2 and 6 have 2 fields, lines 1 and 3-5 have 3 or more fields. (A big issue I've had trouble with is the time: line, since 20:00:02 is really a time and not 3 separate fields.) I have several files like this that I need to parse, and some of them have many more lines with multiple fields.

        ...
        for i in clients:
            if os.path.isfile(rpt_path + i + rpt_ext):
                # if the rpt exists then do this
                rpt = rpt_path + i + rpt_ext
                l_count = 0
                for line in open(rpt, "r"):
                    s_line = line.rstrip()
                    part = s_line.split(':')
                    print part
                    l_count = l_count + 1
            else:
                # else break
                break

    First I'm checking if the file exists; if it does, then I open the file and parse it (eventually). As of now I'm just printing the output (print part) to make sure it's parsing right. Honestly, the only trouble I'm having at this point is the time: field. How can I treat that line differently from all the others? The time field is ALWAYS the 2nd line in all of my report files.


  • Hour-long shutdown duration "shutting down hyper-v virtual machine management service"

    - by icelava
    I have a Windows 2008 R2 server that is a Hyper-V host (Dell PowerEdge T300). Today, for the first time, I encountered an odd situation: I lost connection with one of the guest machines, but logging on physically it seems the guest OS is still running, just no longer contactable via the network. I tried to shut down the guest machine (Windows XP), but it would not shut down, getting stuck on a "Not responding" dialog box that cannot be dismissed. I used the Hyper-V management console to reset the machine, and it could not get out of the resetting state. I tried to save another Windows 2003 guest machine, and it would not progress beyond its Saving state (0%). The other running Windows 2003 guest was stuck at the logon dialog.

    My first suspicion was that one of this week's Windows update patches (10 Nov 2011) may have something to do with it, as the host was still pending a system restart. Well, since I could not do anything with Hyper-V, I proceeded with the Windows Update restart, and now it has been stuck for half an hour at "Shutting down Hyper-V Virtual Machine Management service".

    Prior to restarting I did not observe any hard disk errors reported in the system event log, so I doubt it is a disk-related condition. Shall I force a hard reboot?

    UPDATE: Ok, so I left it hanging for over an hour while attending to other matters, and thankfully the host restarted cleanly. I can operate the guest machines fine now. Phew. Hyper-V must have been crawling for some reason. The VMs have been observed to become slow in the past when the host has been up for a long duration (two weeks to a month), but never this slow. Would love to know what types of performance monitoring items I can observe to give a hint as to why this can happen.

    UPDATE 2012-02-13: In the months since, Hyper-V has stalled into this state another two times. It appears seemingly at random and without any error event logs to hint at what is causing it to enter this "drunkard" state, just a Hyper-V management service timeout:

        Log Name:      System
        Source:        Service Control Manager
        Date:          13/2/2012 9:16:48 AM
        Event ID:      7043
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      elune
        Description:   The Hyper-V Virtual Machine Management service did not shut down properly after receiving a preshutdown control.
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" />
            <EventID Qualifiers="49152">7043</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8080000000000000</Keywords>
            <TimeCreated SystemTime="2012-02-13T01:16:48.882901900Z" />
            <EventRecordID>567844</EventRecordID>
            <Correlation />
            <Execution ProcessID="764" ThreadID="8484" />
            <Channel>System</Channel>
            <Computer>elune</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="param1">Hyper-V Virtual Machine Management</Data>
          </EventData>
        </Event>

    The only means out of it is to restart the system.


  • Where will the image be saved? [closed]

    - by Dummy Derp
        import java.awt.Dimension;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import java.io.File;
        ...
        public void captureScreen(String fileName) throws Exception {
            Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
            Rectangle screenRectangle = new Rectangle(screenSize);
            Robot robot = new Robot();
            BufferedImage image = robot.createScreenCapture(screenRectangle);
            ImageIO.write(image, "png", new File(fileName));
        }
        ...

    Going over this code I found on the internet, I understood everything except the part where the file is created. In what format should the file name be? Should it be "C:/myFolder/myImage.png" or just "myImage.png", and where will it be saved? Here is what the docs say:

        File

        public File(String pathname)

        Creates a new File instance by converting the given pathname string into an abstract
        pathname. If the given string is the empty string, then the result is the empty
        abstract pathname.

        Parameters:
            pathname - A pathname string
        Throws:
            NullPointerException - If the pathname argument is null
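
    On the question of where it ends up: a relative name like "myImage.png" is resolved against the JVM's current working directory (the user.dir system property, normally whatever directory the program was launched from), while an absolute path like "C:/myFolder/myImage.png" is written exactly there, provided the folder exists. A tiny sketch to see where a given name would land:

        import java.io.File;

        public class WhereDoesItGo {
            public static void main(String[] args) {
                // The working directory the JVM was started in.
                System.out.println("user.dir = " + System.getProperty("user.dir"));

                // A relative name resolves against that directory...
                System.out.println(new File("myImage.png").getAbsolutePath());

                // ...while an absolute path is used as-is.
                System.out.println(new File("C:/myFolder/myImage.png").getAbsolutePath());
            }
        }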


  • Programmatically convert *.odt file to MS Word *.doc file using an OpenOffice.org basic macro

    - by Chen Levy
    I am trying to build a reStructuredText to MS Word document tool-chain, so I will be able to keep only the rst sources in version control. So far I have rst2odt.py to convert reStructuredText to OpenOffice.org Writer format. Next I want to use the most recent OpenOffice.org (currently 3.1), which does a pretty decent job of generating a Word 97/2000/XP document, so I wrote the macro:

        sub ConvertToWord(file as string)
            rem ----------------------------------------------------------------------
            rem define variables
            dim document   as object
            dim dispatcher as object
            rem ----------------------------------------------------------------------
            rem get access to the document
            document   = ThisComponent.CurrentController.Frame
            dispatcher = createUnoService("com.sun.star.frame.DispatchHelper")

            rem ----------------------------------------------------------------------
            dim odf(1) as new com.sun.star.beans.PropertyValue
            odf(0).Name = "URL"
            odf(0).Value = "file://" + file + ".odt"
            odf(1).Name = "FilterName"
            odf(1).Value = "MS Word 97"
            dispatcher.executeDispatch(document, ".uno:Open", "", 0, odf())

            rem ----------------------------------------------------------------------
            dim doc(1) as new com.sun.star.beans.PropertyValue
            doc(0).Name = "URL"
            doc(0).Value = "file://" + file + ".doc"
            doc(1).Name = "FilterName"
            doc(1).Value = "MS Word 97"
            dispatcher.executeDispatch(document, ".uno:SaveAs", "", 0, doc())
        end sub

    But when I execute it:

        soffice "macro:///Standard.Module1.ConvertToWord(/path/to/odt_file_wo_ext)"

    I get a "BASIC runtime error. Property or method not found." message on the line:

        document = ThisComponent.CurrentController.Frame

    And when I comment out that line, the above invocation completes without error, but does nothing. I guess I need to somehow set the value of document to a newly created instance, but I don't know how to do it. Or am I going at it in a completely backward way?

    P.S. I will consider JODConverter as a fallback, because I try to minimize my dependencies.


  • allow file download to all types of files

    - by Avinash
    Hi, I have allowed my users to upload any type of file. My problem is: how can I force the user to download any type of file, rather than view it? PDF, JPG and text files are directly viewable in the browser, so I want every type of file to be downloaded instead of viewed. Running on PHP. Thanks, Avinash


  • How to quickly determine whether a file is an image file using iPhone/iPad SDK

    - by Josh Bleecher Snyder
    I have a (potentially largish) file on disk, and I want to determine quickly whether UIImage will be able to load it. I don't necessarily trust the file extension to be reliable; I need to look at the actual data. I can (of course) load it into a UIImage, but that's relatively slow and rather memory-intensive. I'd rather just peek at the first chunk of the file and make a decision. What's the fastest, most efficient way to go about this that is still fairly reliable? (Ideally it'd be an Apple-provided API, but I didn't turn one up in my searches.) A 99.9% solution is good enough; I'm willing to accept false positives in rare cases, such as when an image file has been truncated.
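
    Not an Apple API, but for what the "peek at the first chunk" idea looks like in practice: the common image formats announce themselves in their first few bytes, so reading a dozen bytes and comparing against the well-known magic numbers is usually enough. A platform-neutral sketch of that check (shown in Java purely as an illustration of the byte signatures, not as iPhone SDK code):

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;

        // Sniff the leading magic bytes of a file to guess whether it is an image.
        final class ImageSniffer {
            static boolean looksLikeImage(Path file) throws IOException {
                byte[] head = new byte[12];
                try (InputStream in = Files.newInputStream(file)) {
                    if (in.read(head) < 4) return false;     // too short to match anything
                }
                return isPng(head) || isJpeg(head) || isGif(head) || isBmp(head);
            }

            private static boolean isPng(byte[] h)  { return (h[0] & 0xFF) == 0x89 && h[1] == 'P' && h[2] == 'N' && h[3] == 'G'; }
            private static boolean isJpeg(byte[] h) { return (h[0] & 0xFF) == 0xFF && (h[1] & 0xFF) == 0xD8 && (h[2] & 0xFF) == 0xFF; }
            private static boolean isGif(byte[] h)  { return h[0] == 'G' && h[1] == 'I' && h[2] == 'F' && h[3] == '8'; }
            private static boolean isBmp(byte[] h)  { return h[0] == 'B' && h[1] == 'M'; }
        }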


  • Send MIME-encoded file as e-mail in C#

    - by Matthias
    Hi folks, my problem is that I have a MIME-encoded file containing all the relevant mail information (subject, from, to, ...) and I want to send it over a defined SMTP server via C#. I've looked at the MailMessage class and searched for a solution, but I couldn't find anything fitting. Are you able to help me? Thanks, Matthias


  • Java FileInputStream ObjectInputStream reaches end of file EOF

    - by user69514
    I am trying to read the number of lines in a binary file using readObject, but I get an IOException (EOF). Am I doing this the right way?

        FileInputStream istream = new FileInputStream(fileName);
        ObjectInputStream ois = new ObjectInputStream(istream);

        /** calculate number of items **/
        int line_count = 0;
        while ((String) ois.readObject() != null) {
            line_count++;
        }
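
    One thing worth knowing here: ObjectInputStream.readObject does not return null at the end of the stream; it throws EOFException once the data runs out, and it only works at all if the file was written with ObjectOutputStream in the first place (for a plain text file, counting lines with a BufferedReader is the usual route instead). A hedged sketch that counts items until that exception, assuming the file really does hold serialized objects:

        import java.io.EOFException;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.ObjectInputStream;

        public class ObjectCounter {
            static int countObjects(String fileName) throws IOException, ClassNotFoundException {
                int count = 0;
                try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(fileName))) {
                    while (true) {
                        ois.readObject();       // we only care that another object exists
                        count++;
                    }
                } catch (EOFException endOfStream) {
                    // Normal termination: no more objects in the stream.
                }
                return count;
            }
        }

    Writing the item count (or an explicit null sentinel) into the file when it is produced avoids leaning on the exception for control flow.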

