Search Results

Search found 3861 results on 155 pages for 'evented io'.

Page 28/155 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Writing a plist

    - by iOS-Newbie
    I am trying to test out writing a dictionary to a plist. The following code does not report any errors, but I cannot find any trace of the file I supposedly wrote. Here is the code snippet:

        NSDictionary *myDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
            @"First letter of the alphabet", @"A",
            @"Second letter of the alphabet", @"B",
            @"Third letter of the alphabet", @"C",
            nil];

    I can see the dictionary contents displayed properly with either of these calls:

        NSLog(@"Here is my partial dictionary %@", myDictionary);
        for (NSString *key in myDictionary)
            NSLog(@"here it is again %@ %@", key, [myDictionary objectForKey:key]);

    The following code displays the "succeeded" message every time the program is run, even when the atomically: argument is changed to NO so that no temporary file is written:

        if ([myDictionary writeToFile: @"myDictionary" atomically:YES] == NO)
            NSLog(@"write to file failed");
        else
            NSLog(@"write to file succeeded");

    However, when I search my current directory, or even my entire Mac, I cannot find any file called "myDictionary.plist" or any file containing the string "myDictionary". Isn't the path @"myDictionary" supposed to refer to a file in the current directory, i.e. where the class executable resides?
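
    A bare relative path such as @"myDictionary" is resolved against the process's current working directory, which for an app running in the simulator is not the project folder, and no ".plist" extension is added for you. A minimal sketch of writing into the app's Documents directory instead (the file name myDictionary.plist is only illustrative):

        // Assumes myDictionary is the NSDictionary built above.
        NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *path = [[dirs objectAtIndex:0] stringByAppendingPathComponent:@"myDictionary.plist"];
        if ([myDictionary writeToFile:path atomically:YES])
            NSLog(@"wrote plist to %@", path);   // log the full path so the file is easy to find
        else
            NSLog(@"write to %@ failed", path);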

    Read the article

  • .NET make a copy of an embedded file resource to the local drive

    - by Matt H.
    Hi, I'm new to the realm of working with files in .NET. I'm creating a WPF application in VB.NET on the 3.5 Framework. (If you provide an example in C#, that's perfectly fine.) In my project I have a template for an MS Access database. My desired behavior is that when the user clicks File -> New, they can create a new copy of this template, give it a filename, and save it to a local directory. The database already has the tables and some starting data needed to interface with my application (a user-friendly data editor). I'm thinking the approach is to include this "template.accdb" file as a resource in the project and write it out to a file somehow at runtime? Any guidance will be very, very appreciated. Thanks!
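
    One common approach (sketched below in C#, as invited) is to mark template.accdb with the Build Action "Embedded Resource" and stream it out at runtime. The manifest resource name "MyApp.Resources.template.accdb" is a placeholder; the real name can be listed with Assembly.GetManifestResourceNames(). Stream.CopyTo is not available on .NET 3.5, hence the manual buffer loop:

        using System.IO;
        using System.Reflection;

        static void ExportTemplate(string destinationPath)
        {
            Assembly assembly = Assembly.GetExecutingAssembly();
            // Hypothetical resource name: default namespace + folder + file name.
            using (Stream source = assembly.GetManifestResourceStream("MyApp.Resources.template.accdb"))
            {
                if (source == null)
                    throw new FileNotFoundException("Embedded resource not found.");
                using (FileStream target = File.Create(destinationPath))
                {
                    byte[] buffer = new byte[81920];
                    int read;
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                        target.Write(buffer, 0, read);   // copy the template byte-for-byte
                }
            }
        }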

    Read the article

  • Python: How to write data to a file in a specific format?

    - by sasha
    i have an array called MAC1_Val: MAC1_Val array([ 1.00000000e+00, -1.00000000e+01, -2.06306600e+02, 2.22635749e+02, 1.00000000e+00, 1.00000000e+01, 1.00000000e+01, -2.06306600e+02, 2.22635749e+02, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.00000000e+00, -1.08892735e+01, 1.88607749e+01, 1.03153300e+01, -1.78666757e+01, 3.33333333e-07, -3.33333333e-07, -4.21637021e-05, 4.21637021e-05, 9.98844400e-01, -1.73973001e-03, 1.20938900e-03, 1.87742948e-03, -3.33333333e-03, 6.66666667e-03, -3.33333333e-03, -2.64911064e-01, -2.60959501e+01, 2.81614422e+01, 3.33333333e-03, -6.66666667e-03, 3.33333333e-03, 0.00000000e+00, 0.00000000e+00]) and i want to write in file (.txt) values in specific format like this: 1.000000e+00 -1.000000e+01 -2.063066e+02 2.226357e+02 1.000000e+00 1.000000e+01 ....... note that are 6 digits behind floating point any suggestions how to do this? thanks in advance!
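
    If MAC1_Val is a NumPy array (the dump above looks like NumPy's repr), numpy.savetxt with a %.6e format string produces exactly this kind of output. A minimal sketch, with stand-in data and an illustrative file name:

        import numpy as np

        MAC1_Val = np.array([1.0, -10.0, -206.3066, 222.635749])  # stand-in values

        # Space-separated scientific notation with 6 digits after the decimal point,
        # e.g. 1.000000e+00 -1.000000e+01 -2.063066e+02 2.226357e+02
        np.savetxt("mac1_val.txt", MAC1_Val.reshape(1, -1), fmt="%.6e", delimiter=" ")

        # Or one value per line:
        np.savetxt("mac1_val_column.txt", MAC1_Val, fmt="%.6e")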

    Read the article

  • Read from a file in Eclipse

    - by Buzkie
    I'm trying to read from a text file to input data to my Java program. However, Eclipse continuously gives me a "Source not found" error no matter where I put the file. I've made an additional source folder in the project directory; the file in question is in both it and the project's bin folder, and it still can't be found. I even put a copy of it on my desktop and tried pointing Eclipse there when it asked me to browse for the source lookup path. No matter what I do it can't find the file. Here's my code in case it's pertinent:

        System.out.println(System.getProperty("user.dir"));
        File file = new File("file.txt");
        Scanner scanner = new Scanner(file);

    In addition, it says the user directory is the project directory, and there is a copy there too. I have no clue what to do. Thanks, Alex

    After attempting the suggestion below and refreshing again, I was greeted by a host of errors:

        FileNotFoundException(Throwable).<init>(String) line: 195
        FileNotFoundException(Exception).<init>(String) line: not available
        FileNotFoundException(IOException).<init>(String) line: not available
        FileNotFoundException.<init>(String) line: not available
        URLClassPath$JarLoader.getJarFile(URL) line: not available
        URLClassPath$JarLoader.access$600(URLClassPath$JarLoader, URL) line: not available
        URLClassPath$JarLoader$1.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        URLClassPath$JarLoader.ensureOpen() line: not available
        URLClassPath$JarLoader.<init>(URL, URLStreamHandler, HashMap) line: not available
        URLClassPath$3.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        URLClassPath.getLoader(URL) line: not available
        URLClassPath.getLoader(int) line: not available
        URLClassPath.access$000(URLClassPath, int) line: not available
        URLClassPath$2.next() line: not available
        URLClassPath$2.hasMoreElements() line: not available
        ClassLoader$2.hasMoreElements() line: not available
        CompoundEnumeration<E>.next() line: not available
        CompoundEnumeration<E>.hasMoreElements() line: not available
        ServiceLoader$LazyIterator.hasNext() line: not available
        ServiceLoader$1.hasNext() line: not available
        LocaleServiceProviderPool$1.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        LocaleServiceProviderPool.<init>(Class<LocaleServiceProvider>) line: not available
        LocaleServiceProviderPool.getPool(Class<LocaleServiceProvider>) line: not available
        NumberFormat.getInstance(Locale, int) line: not available
        NumberFormat.getNumberInstance(Locale) line: not available
        Scanner.useLocale(Locale) line: not available
        Scanner.<init>(Readable, Pattern) line: not available
        Scanner.<init>(ReadableByteChannel) line: not available
        Scanner.<init>(File) line: not available

    Code used:

        System.out.println(System.getProperty("user.dir"));
        File file = new File(System.getProperty("user.dir") + "/file.txt");
        Scanner scanner = new Scanner(file);
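
    The "Source not found" page is usually only the Eclipse debugger saying it has no JDK source attached for the frames of the exception being thrown; the underlying problem is a plain FileNotFoundException. For a default Eclipse run configuration the working directory is the project root, so a relative "file.txt" must sit directly in the project folder (not in src/ or bin/). A minimal sketch that also prints where the file is expected:

        import java.io.File;
        import java.util.Scanner;

        public class ReadDemo {
            public static void main(String[] args) throws Exception {
                File file = new File("file.txt");
                // Shows the absolute path being tried, so the file can be put there.
                System.out.println("Looking for: " + file.getAbsolutePath());

                Scanner scanner = new Scanner(file);
                while (scanner.hasNextLine()) {
                    System.out.println(scanner.nextLine());
                }
                scanner.close();
            }
        }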

    Read the article

  • Data files from development machine to iOS device

    - by StoneBreaker
    My app has created a bunch of data files as development has progressed through the simulator. Their location is obtained by this function:

        NSString *pathInDocumentDirectory(NSString *fileName) {
            NSArray *documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentDirectory = [documentDirectories objectAtIndex: 0];
            return [documentDirectory stringByAppendingPathComponent: fileName];
        }

    The files are now required on the device, as testing of the app is moving from the simulator to actual devices. How do I transfer the data files from my current working environment to the devices?
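
    One common route is to add the generated files to the Xcode project as bundle resources and copy them into the Documents directory on first launch (iTunes File Sharing, enabled via the UIFileSharingEnabled Info.plist key, is another way to push files onto a device). A minimal sketch, where "data.plist" stands in for one of the generated files and pathInDocumentDirectory is the function above:

        // Copy a bundled resource into Documents if it is not already there.
        NSString *dest = pathInDocumentDirectory(@"data.plist");   // placeholder file name
        if (![[NSFileManager defaultManager] fileExistsAtPath:dest]) {
            NSString *src = [[NSBundle mainBundle] pathForResource:@"data" ofType:@"plist"];
            NSError *error = nil;
            [[NSFileManager defaultManager] copyItemAtPath:src toPath:dest error:&error];
            if (error) NSLog(@"copy failed: %@", error);
        }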

    Read the article

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (of varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of the k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. These are ~100 x ~100 matrices, on average. Using single precision floats, it works out to 400 GB overall. I need to 1) generate the cell array or pieces of it without trying to place the whole thing in memory and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops.

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end;  %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var);  %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2,max(factor(k(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad, because a linear index could be used. Does anyone know/see a better way?
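
    One way to avoid eval() and still keep the whole 400 GB out of memory is to write each row of the virtual k-by-k cell array to its own MAT-file named by its index, so any H{i,j} can be reloaded on demand. A minimal sketch, assuming a single row of comparisons fits in memory and compare() is the existing function:

        % One MAT-file per row of comparisons; no eval() needed.
        for i = 1:k
            row = cell(1, k);                          % row i of the virtual k-by-k cell array
            for j = i+1:k
                row{j} = single(compare(data{i}, data{j}));
            end
            save(sprintf('cmp_row_%04d.mat', i), 'row', '-v7.3');
            clear row
        end

        % Later, to fetch the comparison of matrices i and j (with i < j):
        S = load(sprintf('cmp_row_%04d.mat', i), 'row');
        result = S.row{j};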

    Read the article

  • Execute an Application On The Server Using PHP (With safe_mode enabled)

    - by Nathan Campos
    I have an application on my server called leaf.exe that takes two required arguments, inputfile and outputfile, so a call looks like this example:

        pnote.exe input.pnt output.txt

    The executable is at exec/, the input file is at upload/ and the output file goes to compiled/. I need PHP to run the application like that, so I want to know: how could I do this on a server that has exec() disabled, when I can't turn it on because I don't have the privileges to do so? And how could I echo the output of the program?

    Read the article

  • C++ File manipulation problem

    - by Carlucho
    I am trying to open a file which normally has content. For the purpose of testing I would like to initialize the program without the files being available/existing, in which case the program should create empty ones, but I am having issues implementing it. This is my original code:

        void loadFiles() {
            fstream city;
            city.open("city.txt", ios::in);
            fstream latitude;
            latitude.open("lat.txt", ios::in);
            fstream longitude;
            longitude.open("lon.txt", ios::in);
            while(!city.eof()){
                city >> cityName;
                latitude >> lat;
                longitude >> lon;
                t.add(cityName, lat, lon);
            }
            city.close();
            latitude.close();
            longitude.close();
        }

    I have tried everything I can think of: ofstream, ifstream, adding ios::out and all its variations. Could anybody explain to me what to do in order to fix the problem? Thanks!
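
    Opening a missing file with ios::in simply fails; an empty file can be created first by opening it for output. A minimal sketch of a helper that does this (separately, looping on !city.eof() will process one junk record after the last line; looping on the extractions themselves avoids that):

        #include <fstream>

        // Make sure a file exists before it is opened read-only.
        static void ensureExists(const char* name)
        {
            std::ifstream probe(name);
            if (!probe) {                      // missing (or unreadable)
                std::ofstream create(name);    // opening for output creates an empty file
            }
        }

        // Usage at the top of loadFiles():
        //   ensureExists("city.txt");
        //   ensureExists("lat.txt");
        //   ensureExists("lon.txt");
        // Then read with:
        //   while (city >> cityName && latitude >> lat && longitude >> lon)
        //       t.add(cityName, lat, lon);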

    Read the article

  • searching within a compressed sorted fixed width file

    - by user275455
    Assume I have a regular compressed fixed-width file that is sorted on one of the fields. Given that I know the length of the records, I can use lseek to implement a binary search for records whose field matches a given value, without having to read the entire file. Now the difficulty is that the file is gzipped. Is it possible to do this without completely inflating the file? If not with gzip, is there any compression that supports this kind of behavior?

    Read the article

  • Get license file from a folder in C# project

    - by daft
    I have a license file that I need to access at runtime in order to create PDF files. After I have created the in-memory PDF, I need to call a method on that PDF object to set the license, like this:

        pdf.SetLicense("pathToLicenseFileHere");

    The license file is located in the same project as the .cs file that creates the PDF, but is in a separate folder. I cannot get this simple thing to behave correctly, which makes me a bit sad, since it really shouldn't be that hard. :( I try to set the path like this:

        string path = @"\Resources\File.lic";

    But it just isn't working out for me.
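
    A path beginning with a backslash, such as @"\Resources\File.lic", is resolved against the root of the current drive rather than the project. A common fix is to mark File.lic as "Copy to Output Directory" so it lands in a Resources folder next to the executable, and then build the path from the application's base directory. A minimal sketch:

        using System;
        using System.IO;

        // Resolve the license file relative to wherever the application is running from.
        string licensePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                          Path.Combine("Resources", "File.lic"));
        pdf.SetLicense(licensePath);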

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2 GB text file with 5 columns delimited by tabs. A row is considered a duplicate only if 4 out of its 5 columns match. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such de-duping? This de-duping will be throw-away code, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudo code (roughly):

        Iterate over the rows
          i = current_row_no
          Iterate over row no. i+1 to last_row
            if (col1 matches      // find duplicate
                && col2 matches
                && col3 matches
                && col4 matches) {
              col5List.set(i, get col5);   // aggregate
            }

    Duplicate example: A and B are duplicates.
        A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1)
    and the output would be
        A=(1,1,1,1,1+2)
        C=(2,1,1,1,1)
    [notice that B has been kicked out]
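
    The pairwise scan in the pseudo code is quadratic in the number of rows, which is where the 20+ hours go. A single pass that keys a hash map on columns 1-4 is linear. A minimal sketch, assuming the fifth column is numeric, "aggregate" means summing it (as the A/B example suggests), and the set of unique keys fits in memory:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.PrintWriter;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class Dedup {
            public static void main(String[] args) throws Exception {
                // Preserves first-seen order of the unique rows.
                Map<String, Double> agg = new LinkedHashMap<String, Double>();
                BufferedReader in = new BufferedReader(new FileReader(args[0]));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] cols = line.split("\t", -1);
                    String key = cols[0] + "\t" + cols[1] + "\t" + cols[2] + "\t" + cols[3];
                    double v = Double.parseDouble(cols[4]);
                    Double prev = agg.get(key);
                    agg.put(key, prev == null ? v : prev + v);   // aggregate column 5
                }
                in.close();
                PrintWriter out = new PrintWriter(args[1]);
                for (Map.Entry<String, Double> e : agg.entrySet()) {
                    out.println(e.getKey() + "\t" + e.getValue());
                }
                out.close();
            }
        }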

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read( void* data_out, __int64 start_position, __int64 size_bytes ) const
        {
            if( !m_open ) {
                // return error
            } else {
                seekPosition( start_position );
                DWORD bytes_read;
                BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ), &bytes_read, NULL );
                if( size_bytes != bytes_read || result != TRUE ) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition( __int64 position ) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG( position );
            SetFilePointerEx( m_file, target, NULL, FILE_BEGIN );
        }

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple of questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
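
    One cheap experiment is to tell the cache manager about the access pattern when the handle is opened; with mostly scattered 4K reads, FILE_FLAG_RANDOM_ACCESS is the usual hint (FILE_FLAG_SEQUENTIAL_SCAN if reads were mostly in order). A minimal sketch of how the handle stored in m_file might be opened; the file name is illustrative:

        #include <windows.h>

        HANDLE h = CreateFileA(
            "large.bin",                 // placeholder path
            GENERIC_READ,
            FILE_SHARE_READ,
            NULL,
            OPEN_EXISTING,
            // Mostly scattered 4K reads: hint the cache manager accordingly.
            FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
            NULL);

    Beyond that, batching neighbouring 4K requests into fewer, larger ReadFile calls usually helps more than any flag, and simple wall-clock timing around each ReadFile call is enough profiling to see whether it does.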

    Read the article

  • Improve performance writing 10 million records to text file using windows service

    - by user1039583
    I'm fetching more than 10 million records from a database and writing them to a text file. It takes hours to complete this operation. Is there any option to use TPL features here? It would be great if someone could get me started implementing this with the TPL.

        using (FileStream fStream = new FileStream("d:\\file.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite))
        {
            BufferedStream bStream = new BufferedStream(fStream);
            TextWriter writer = new StreamWriter(bStream);
            for (int i = 0; i < 100000000; i++)
            {
                writer.WriteLine(i);
            }
            bStream.Flush();
            writer.Flush(); // empty buffer
            fStream.Flush();
        }
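
    Writing a text file is mostly I/O-bound, so the TPL mainly helps by overlapping the database fetch and record formatting with the disk writes rather than by writing in parallel. A minimal producer/consumer sketch using BlockingCollection (requires .NET 4); the counting loop stands in for the database reader:

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class Writer
        {
            static void Main()
            {
                var lines = new BlockingCollection<string>(boundedCapacity: 100000);

                // Producer: stand-in for the database fetch/formatting work.
                Task producer = Task.Factory.StartNew(() =>
                {
                    for (int i = 0; i < 10000000; i++)
                        lines.Add(i.ToString());
                    lines.CompleteAdding();
                });

                // Consumer: single writer thread with a large buffer.
                using (var writer = new StreamWriter(@"d:\file.txt", false,
                                                     System.Text.Encoding.ASCII, 1 << 20))
                {
                    foreach (string line in lines.GetConsumingEnumerable())
                        writer.WriteLine(line);
                }

                producer.Wait();
            }
        }

    If the disk itself is the bottleneck, no amount of parallelism will shorten the run; in that case a bigger writer buffer and less per-record formatting work are the main levers.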

    Read the article

  • Is using scanf() in C++ programs faster than using cin?

    - by zeroDivisible
    Hello, I don't know if this is true, but when I was reading the FAQ on one of the problem-providing sites, I found something that caught my attention:

        Check your input/output methods. In C++, using cin and cout is too slow. Use these, and you will guarantee not being able to solve any problem with a decent amount of input or output. Use printf and scanf instead.

    Can someone please clarify this? Is using scanf() in C++ programs really faster than using cin? If yes, is it good practice to use it in C++ programs? I thought it was C-specific, though I am just learning C++...
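
    The usual explanation is that C++ streams are synchronized with C stdio and cin is tied to cout by default; turning both off typically brings cin/cout close to scanf/printf in speed. A minimal sketch (after sync_with_stdio(false), do not mix scanf/printf with cin/cout on the same streams):

        #include <iostream>

        int main()
        {
            std::ios_base::sync_with_stdio(false);  // stop syncing iostreams with C stdio
            std::cin.tie(NULL);                     // don't flush cout before every read

            long long n, sum = 0;
            while (std::cin >> n)
                sum += n;
            std::cout << sum << '\n';
            return 0;
        }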

    Read the article

  • File Operations in Android NDK

    - by EnderX
    I am using the Android NDK to make an application primarily in C for performance reasons, but it appears that file operations such as fopen do not work correctly in Android. Whenever I try to use these functions, the application crashes. How do I create/write to a file with the Android NDK?
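
    fopen itself is available in the NDK; what usually fails is the path, since native code can only write where the app has permission. A common pattern is to pass a writable directory (for example Context.getFilesDir().getAbsolutePath()) from Java into the native side and open files under it; the app-private files directory needs no extra permissions. A minimal sketch in C; the Java class and method names are illustrative:

        #include <stdio.h>
        #include <jni.h>

        // Called from Java as NativeIO.writeLog(getFilesDir().getAbsolutePath()).
        JNIEXPORT jint JNICALL
        Java_com_example_app_NativeIO_writeLog(JNIEnv *env, jclass clazz, jstring dir)
        {
            const char *cdir = (*env)->GetStringUTFChars(env, dir, NULL);
            char path[512];
            snprintf(path, sizeof(path), "%s/native_log.txt", cdir);
            (*env)->ReleaseStringUTFChars(env, dir, cdir);

            FILE *f = fopen(path, "w");   // writable: inside the app's files dir
            if (f == NULL)
                return -1;
            fputs("hello from native code\n", f);
            fclose(f);
            return 0;
        }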

    Read the article

  • Program crashes after trying to use a recently created file. C#

    - by Jason T.
    So here is my code:

        if (!File.Exists(pathName))
        {
            File.Create(pathName);
        }
        StreamWriter outputFile = new StreamWriter(pathName, true);

    Whenever I run the program for the first time, the file at that path gets created. However, once I get to the StreamWriter line my program crashes, because it says my file is in use by another process. Is there something I'm missing between the File.Create and the StreamWriter statements?
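
    File.Create returns an open FileStream, and that still-open handle is what makes the later StreamWriter constructor fail. Either dispose the stream File.Create returns, or drop the File.Create call entirely, since the StreamWriter(path, append) constructor creates the file when it does not exist. A minimal sketch of both options:

        using System.IO;

        // Option 1: let StreamWriter create the file; no File.Create needed.
        using (StreamWriter outputFile = new StreamWriter(pathName, true))
        {
            outputFile.WriteLine("first line");
        }

        // Option 2: keep the explicit creation step, but close the handle it returns.
        if (!File.Exists(pathName))
        {
            File.Create(pathName).Dispose();
        }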

    Read the article

  • Read a file with 2048 bytes

    - by Suresh S
    I have a file which has only one line. The file has no special encoding; it is a simple text file with a single line. For every 2048 bytes in the line, there are 13 records of 151 bytes each (13 * 151 bytes = 1963 bytes of records, plus 85 bytes of empty space), and similarly for the next 2048 bytes. What is the best file I/O to use? I am thinking of reading 2048 bytes at a time from the file and storing them in an array:

        while (offset < fileLength && (numRead = in.read(recordChunks, offset, alength)) >= 0) {
        }

    How can I get exactly 2048 bytes at a time from the read statement? I am getting an IndexOutOfBoundsException.
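
    The offset argument of InputStream.read(byte[], off, len) is an offset into the array, not into the file, so letting it grow past the 2048-byte buffer is what triggers the IndexOutOfBoundsException. DataInputStream.readFully fills a fixed 2048-byte buffer on every iteration instead. A minimal sketch (it assumes the file length is an exact multiple of 2048 bytes; a trailing partial block would need separate handling):

        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.FileInputStream;

        public class BlockReader {
            public static void main(String[] args) throws Exception {
                byte[] block = new byte[2048];
                DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
                try {
                    while (true) {
                        in.readFully(block);   // exactly 2048 bytes, or EOFException at the end
                        for (int r = 0; r + 151 <= 13 * 151; r += 151) {
                            String record = new String(block, r, 151, "US-ASCII");
                            // process record ...
                        }
                        // bytes 1963..2047 of each block are padding and are skipped
                    }
                } catch (EOFException done) {
                    // reached end of file
                } finally {
                    in.close();
                }
            }
        }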

    Read the article
