Search Results

Search found 16243 results on 650 pages for 'io language'.

Page 99/650

  • What is the fastest way to write hundreds of files to disk using C#?

    - by Ehsan
    My program writes hundreds of files to disk, each received from an external source (the network). Each file is a small document that I currently store under a GUID filename in a specific folder, but creating, writing, and closing hundreds of files is a lengthy process. Is there a better way to store this many files on disk? I've come up with a solution, but I don't know if it is the best one: I create two files, one acting as an allocation table and the other as a single huge file storing the content of all my documents. But reading from that file could be a nightmare; maybe a memory-mapped file technique could help. Could working with 30 GB or more create a problem?
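    A minimal C# sketch of the index-plus-blob idea described above (the class, member names, and file layout are illustrative, not from the question): each document is appended to one large file and its GUID, offset, and length are recorded in a small index, so only two file handles stay open.

        using System;
        using System.IO;

        class BlobStore
        {
            private readonly FileStream blob;     // one large content file
            private readonly StreamWriter index;  // "allocation table": GUID,offset,length

            public BlobStore(string blobPath, string indexPath)
            {
                blob = new FileStream(blobPath, FileMode.Append, FileAccess.Write);
                index = new StreamWriter(indexPath, true);
            }

            public void Append(Guid id, byte[] document)
            {
                long offset = blob.Position;                  // where this document starts
                blob.Write(document, 0, document.Length);
                index.WriteLine("{0},{1},{2}", id, offset, document.Length);
            }

            public void Close()
            {
                blob.Close();
                index.Close();
            }
        }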


  • What would happen if the same file were read and appended to at the same time (Python)?

    - by Shane
    I'm writing a script that uses two separate threads: one does a file-reading operation and the other does appending, and both threads run fairly frequently. My question is: if one thread happens to read the file while the other is in the middle of appending a string such as "This is a test", what would happen? I know that if you are appending a smaller-than-buffer string, then no matter how frequently you read the file from the other thread, you will never see an incomplete line such as "This i". The OS would either append "This is a test" and then serve the read, or serve the read and then append "This is a test"; it would never do: append "This i" - read from the file - append "s a test". But if "This is a test" is big enough (assume it's a bigger-than-buffer string), the OS can't do the append in one operation, so the append is split in two: first append "This i", then append "s a test". In that situation, if I happen to read the file in the middle of the whole append, could I get this ordering: append "This i" - read from the file - append "s a test"? In other words, might I read a file that contains an incomplete string?
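    If the answer matters for correctness, a lock sidesteps the question entirely. A minimal sketch (the file name is hypothetical) that serializes the two operations so a reader can never observe a half-written line:

        import threading

        lock = threading.Lock()
        path = "shared.log"  # hypothetical file

        def append_line(text):
            with lock:
                with open(path, "a") as f:
                    f.write(text + "\n")

        def read_all():
            with lock:
                with open(path, "r") as f:
                    return f.readlines()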


  • basic file input using C

    - by user1781966
    So I'm working on learning how to do file I/O, but the book I'm using is terrible at teaching how to receive input from a file. Below is their example of how to receive input from a file, but it doesn't work. I have copied it word for word, and it should loop through a list of names until it reaches the end of the file (or so the book says), but it doesn't. In fact, if I leave the while loop in there, it doesn't print anything.

        #include <stdio.h>
        #include <conio.h>

        int main()
        {
            char name[10];
            FILE *pRead;
            pRead = fopen("test.txt", "r");
            if (pRead == NULL)
            {
                printf("file cannot be opened");
            }
            else
                printf("contents of test.txt");
            fscanf(pRead, "%s", name);
            while (!feof(pRead))
            {
                printf("%s\n", name);
                fscanf(pRead, "%s", name);
            }
            getch();
        }

    Even online, every beginner's tutorial I see does some variation of this, but I can't seem to get it to work even a little bit.
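    For comparison, a sketch of the same program (still assuming a test.txt in the working directory) with the two most likely culprits addressed: it returns as soon as fopen fails instead of continuing with a NULL handle, and it drives the loop off the return value of fscanf rather than feof:

        #include <stdio.h>

        int main(void)
        {
            char name[10];
            FILE *pRead = fopen("test.txt", "r");

            if (pRead == NULL) {
                printf("file cannot be opened\n");
                return 1;                          /* don't keep using a NULL handle */
            }

            printf("contents of test.txt:\n");
            while (fscanf(pRead, "%9s", name) == 1) {   /* stops cleanly at end of file */
                printf("%s\n", name);
            }

            fclose(pRead);
            return 0;
        }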


  • How do I stop SocketIOClient from reconnecting on Android?

    - by erginduran
    My problem is reconnection. I call SocketIOClient.connect(..) in a background service. I stop the service when the internet connection goes down and restart it when the connection comes back. How do I turn off this reconnection? I don't want SocketIOClient to reconnect on its own. This is my code:

        ConnectCallback mConnectCallback = new ConnectCallback() {
            @Override
            public void onConnectCompleted(Exception ex, SocketIOClient client) {
                if (ex != null) {
                    ex.printStackTrace();
                    return;
                }
                client.setReconnectCallback(new ReconnectCallback() {
                    @Override
                    public void onReconnect() {
                        // TODO Auto-generated method stub
                    }
                });
                client.setDisconnectCallback(new DisconnectCallback() {
                    @Override
                    public void onDisconnect(Exception arg0) {
                        // TODO Auto-generated method stub
                    }
                });
                client.setErrorCallback(new ErrorCallback() {
                    @Override
                    public void onError(String arg0) {
                        // TODO Auto-generated method stub
                    }
                });
                client.on("event", new EventCallback() {
                    @Override
                    public void onEvent(JSONArray jsonArray, Acknowledge acknowledge) {
                        // bla bla
                    }
                });
                ScreenChat.mClient = client;
            }
        };


  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem processing a largish file in Python. All I'm doing is:

        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close

    This takes around 10m25s just to open the file, read the lines, and increment the counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried uncompressing the file and dealing with it using open instead of gzip.open, but that made very little difference to the overall time.
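    One hedged thing to try, mirroring what the Perl version already does: let an external zcat do the decompression and have Python iterate over the pipe, which sidesteps the historically slow pure-Python line iteration in the gzip module. A minimal sketch:

        import subprocess

        p = subprocess.Popen(['zcat', pathToLog], stdout=subprocess.PIPE)
        counter = 0
        for line in p.stdout:          # reads the decompressed stream line by line
            counter += 1
            if counter % 1000000 == 0:
                print counter
        p.stdout.close()
        p.wait()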


  • Program crashes after trying to use a recently created file. C#

    - by Jason T.
    So here is my code:

        if (!File.Exists(pathName))
        {
            File.Create(pathName);
        }
        StreamWriter outputFile = new StreamWriter(pathName, true);

    Whenever I run the program for the first time, the path and file get created. However, once I get to the StreamWriter line, my program crashes because it says my file is in use by another process. Is there something I'm missing between the File.Create and the StreamWriter statements?
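    A likely cause, sketched below: File.Create returns an open FileStream, and that handle is still open when the StreamWriter tries to open the same path. Either dispose the returned stream, or drop the explicit create entirely, since the StreamWriter append constructor creates a missing file. (The method below is illustrative only.)

        using System.IO;

        class Example
        {
            static void Write(string pathName)
            {
                // Option 1: dispose the stream that File.Create hands back.
                if (!File.Exists(pathName))
                {
                    using (File.Create(pathName)) { }   // closes the handle immediately
                }

                // Option 2: let StreamWriter create the file itself.
                using (StreamWriter outputFile = new StreamWriter(pathName, true))
                {
                    outputFile.WriteLine("content");
                }
            }
        }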


  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and reads a large number of disk files; the total load can be more than 3 GB. A custom memory manager uses memory-mapped files to read this much data. The files are mapped into the process address space only when needed, and with this the process memory stays well under control. But what we observe is that, with memory mapping, the system cache keeps growing until it occupies all the available physical memory, which slows down the entire system. My question is: how do I prevent the system cache from hogging physical memory? I tried removing file buffering (by using FILE_FLAG_NO_BUFFERING), but then the read operations take a considerable amount of time and application performance suffers. How do I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the Windows XP caching behavior; any good links explaining it would also be helpful.


  • Reading a simple Avro file from HDFS

    - by John Galt... who
    I am trying to do a simple read of an Avro file stored in HDFS. I found out how to read it when it is on the local file system:

        FileReader reader = DataFileReader.openReader(new File(filename), new GenericDatumReader());
        for (GenericRecord datum : fileReader) {
            String value = datum.get(1).toString();
            System.out.println("value = " + value);
        }
        reader.close();

    My file is in HDFS, however. I cannot give openReader a Path or an FSDataInputStream. How can I simply read an Avro file in HDFS? EDIT: I got this to work by creating a custom class (SeekableHadoopInput) that implements SeekableInput. I "stole" this from "Ganglion" on github. Still, it seems like there should be a Hadoop/Avro integration path for this. Thanks
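    For what it's worth, a sketch of the integration path hinted at in the edit, assuming the avro-mapred artifact is on the classpath: its FsInput class implements SeekableInput over a Hadoop Path, which is essentially what the custom SeekableHadoopInput does. The HDFS path below is hypothetical.

        import org.apache.avro.file.DataFileReader;
        import org.apache.avro.generic.GenericDatumReader;
        import org.apache.avro.generic.GenericRecord;
        import org.apache.avro.mapred.FsInput;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;

        public class HdfsAvroRead {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                Path path = new Path("hdfs:///user/example/foo.avro");   // hypothetical path
                DataFileReader<GenericRecord> reader = new DataFileReader<GenericRecord>(
                        new FsInput(path, conf), new GenericDatumReader<GenericRecord>());
                for (GenericRecord datum : reader) {
                    System.out.println(datum);
                }
                reader.close();
            }
        }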


  • Calling a class in Java after editing the file used as the source for a table

    - by user2892290
    I'm currently working on a project; I'll try to describe it first. I save data into a text file that I use as the source for a browser of that data. The browser is based on a table that contains the data. I have to rewrite the source file every time I delete or edit data, and that's where the problem comes in. After deleting or editing data I call a method to create the table again, but the table is never created. Could this be caused by editing the file and calling the method right after that? If I restart my app, the table is created successfully with the right data. Note that I don't get any error message. This is the method I use for loading data from the source file:

        try (BufferedReader input1 = new BufferedReader(new FileReader("./src/data.src"))) {
            int lines = 0;
            while (input1.read() != -1) {
                if (!(input1.readLine()).equals("")) {
                    lines++;
                }
            }
            input1.close();
            if (lines == 0) {
                JOptionPane.showMessageDialog(null, "No data to load, create a note first!");
                new Writer().build(frame);
            } else {
                try (BufferedReader input = new BufferedReader(new FileReader("./src/data.src"))) {
                    Game[] g = new Game[lines];
                    String currentLine;
                    String[] help;
                    int counter = 0;
                    while (lines > 0) {
                        currentLine = input.readLine();
                        help = currentLine.split("#");
                        g[counter] = new Game(help[0], help[1], help[2], help[3], help[4],
                                help[5], help[6], help[7], help[8], help[9]);
                        counter++;
                        lines--;
                    }
                    input.close();
                    final JButton bButton = new backButton().create(frame, mPanel);
                    build(g, frame, bButton);
                    mPanel.add(panel);
                    mPanel.add(panel2);
                    mPanel.add(searchPanel);
                    mPanel.add(bButton);
                    bButton.addActionListener(new ActionListener() {
                        @Override
                        public void actionPerformed(ActionEvent e) {
                            frame.setCursor(Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR));
                            panel.removeAll();
                            frame.setCursor(Cursor.getDefaultCursor());
                        }
                    });
                    mPanel.setPreferredSize(new Dimension(1000, 750));
                    panel.setBorder(new EmptyBorder(10, 10, 10, 10));
                    frame.setLayout(new FlowLayout());
                    frame.add(mPanel);
                    frame.pack();
                    JMenuBar menuBar = new Menu().create(frame, mPanel);
                    frame.setJMenuBar(menuBar);
                    frame.setVisible(true);
                    Rectangle rec = GraphicsEnvironment.getLocalGraphicsEnvironment().getMaximumWindowBounds();
                    int width = (int) rec.getWidth();
                    int height = (int) rec.getHeight();
                    frame.setBounds(1, 3, width, height);
                    frame.addComponentListener(new ComponentAdapter() {
                        @Override
                        public void componentMoved(ComponentEvent e) {
                            frame.setLocation(1, 3);
                        }
                    });

    And this is the method I use for creating the table:

        String[][] tableData = new String[g.length][9];
        for (int i = 0; i < tableData.length; i++) {
            tableData[i][0] = g[i].getChampion();
            tableData[i][1] = g[i].getRole();
            tableData[i][2] = g[i].getEnemy();
            tableData[i][3] = g[i].getDifficulty();
            tableData[i][4] = g[i].getResult();
            tableData[i][5] = g[i].getScore();
            tableData[i][6] = g[i].getGameType();
            tableData[i][7] = g[i].getPoints();
            tableData[i][8] = g[i].getLeague();
        }
        final JLabel searchLabel = new JLabel("Search for champion played.");
        final JButton searchButton = new JButton("Search");
        final JTextField searchText = new JTextField(20);
        frame.setTitle("LoL Notepad - reading your notes");
        JTable table = new JTable(tableData, columnNames);
        final JScrollPane scrollPane = new JScrollPane(table);
        scrollPane.setPreferredSize(new Dimension(980, 500));
        panel2.setPreferredSize(new Dimension(1000, 550));
        panel2.setVisible(false);
        panel2.setBorder(new EmptyBorder(10, 10, 10, 10));
        panel3.setVisible(false);
        panel.setLayout(new FlowLayout());
        panel.add(scrollPane);
        searchPanel.add(searchLabel);
        searchPanel.add(searchText);
        searchPanel.add(searchButton);
        searchButton.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                try {
                    frame.setCursor(Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR));
                    search(g, searchText.getText(), frame, bButton);
                    frame.setCursor(Cursor.getDefaultCursor());
                } catch (IOException ex) {
                    Logger.getLogger(Reader.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        });
        table.addMouseListener(new MouseAdapter() {
            @Override
            public void mousePressed(MouseEvent e) {
                if (e.getClickCount() == 1) {
                    JTable target = (JTable) e.getSource();
                    panel.setVisible(false);
                    searchPanel.setVisible(false);
                    bButton.setVisible(false);
                    int row = target.getSelectedRow();
                    specific(row, g, frame, bButton);
                }
            }
        });
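    One thing worth checking, sketched below under the assumption that `panel` is the already-visible container the scroll pane ends up in: when a component is added to a container that is already showing, Swing keeps the old layout until the container is revalidated and repainted, so a freshly built table can silently never appear. This is an illustrative fragment using the javax.swing types from the code above, not a drop-in fix.

        void refreshTable(JPanel panel, String[][] tableData, String[] columnNames) {
            panel.removeAll();
            panel.add(new JScrollPane(new JTable(tableData, columnNames)));
            panel.revalidate();   // make the layout manager pick up the new component
            panel.repaint();
        }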


  • [NSData dataWithContentsOfFile:path] doesn't work

    - by Felics
    Hello, I have the following code to read a binary file:

        NSString* file = [NSString stringWithUTF8String:fileName];
        NSString* filePath = resource
            ? [[NSBundle mainBundle] pathForResource:file ofType:nil]
            : [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:file];
        NSData* fileData = [NSData dataWithContentsOfFile:filePath];

    Here "fileName" and "resource" are parameters of the load function; "resource" indicates whether the file is located in the application bundle or in Documents. Sometimes this code works well and sometimes it doesn't. As far as I can tell the problem is random: I can run the code 10 times in a row and it works fine, and then it gives me nil data without any modification. Does anybody know what the problem could be? Could it be related to the file extension or file name? Thank you. PS: I use this code on the iPhone Simulator and the file exists in the application bundle.


  • Unable to write to a text file

    - by chrissygormley
    Hello, I am running some tests and need to write to a file. When I run the tests, open(file, 'r+') does not write to the file. The test script is below:

        class GetDetailsIP(TestGet):
            def runTest(self):
                self.category = ['PTZ']
                try:
                    # This runs and returns a value
                    result = self.client.service.Get(self.category)
                    mylogfile = open("test.txt", "r+")
                    print >>mylogfile, result
                    result = ("".join(mylogfile.readlines()[2]))
                    result = str(result.split(':')[1].lstrip("//").split("/")[0])
                    mylogfile.close()
                except suds.WebFault, e:
                    assert False
                except Exception, e:
                    pass
                finally:
                    if 'result' in locals():
                        self.assertEquals(result, self.camera_ip)
                    else:
                        assert False

    When this test runs, no value is written to the text file, yet a value is returned in the variable result. I have also tried mylogfile.write(result). If the file does not exist, it claims the file does not exist and doesn't create one. Could this be a permission problem where Python is not allowed to create a file? I have made sure that all other reads of this file are closed, so the file should not be locked. Can anyone offer any suggestion why this is happening? Thanks
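    Two behaviours of "r+" are worth ruling out, sketched below on the same test.txt: the mode fails rather than creating the file when it doesn't exist (whereas "a+" creates it), and after writing, the file position sits at the end of what was just written, so a readlines() call immediately afterwards returns nothing unless you flush and seek back to the start.

        mylogfile = open("test.txt", "a+")   # creates the file if it is missing
        print >>mylogfile, result
        mylogfile.flush()
        mylogfile.seek(0)                    # rewind before reading back
        lines = mylogfile.readlines()
        mylogfile.close()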


  • How to create custom filenames in C?

    - by eSKay
    Please see this piece of code:

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>

        int main()
        {
            int i = 0;
            FILE *fp;
            for (i = 0; i < 100; i++)
            {
                fp = fopen("/*what should go here??*/", "w");
                // I need to create files with names: file0.txt, file1.txt, file2.txt etc
                // i.e. file{i}.txt
            }
        }
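    A minimal sketch of one common approach: format each name into a buffer with snprintf and pass that buffer to fopen (closing each file as you go):

        #include <stdio.h>

        int main(void)
        {
            char name[32];
            int i;
            for (i = 0; i < 100; i++) {
                snprintf(name, sizeof name, "file%d.txt", i);   /* file0.txt, file1.txt, ... */
                FILE *fp = fopen(name, "w");
                if (fp != NULL) {
                    fclose(fp);
                }
            }
            return 0;
        }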


  • read chars from a file - c#

    - by Saskaaa
    How do I read an array of numbers from a file? I mean, how do I read chars from a file? Sorry for the bad English. Update: yes, I can :) The file is just "1 2 3 4 5 6 7 8" and so on. I just do not know how to read the characters from the file.
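    A small C# sketch of one way to do it, assuming a whitespace-separated file called numbers.txt (the file name is hypothetical): read the whole file, split it on whitespace, and parse each token.

        using System;
        using System.IO;

        class ReadNumbers
        {
            static void Main()
            {
                string text = File.ReadAllText("numbers.txt");
                string[] tokens = text.Split(new[] { ' ', '\t', '\r', '\n' },
                                             StringSplitOptions.RemoveEmptyEntries);
                int[] numbers = Array.ConvertAll(tokens, int.Parse);
                foreach (int n in numbers)
                {
                    Console.WriteLine(n);
                }
            }
        }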


  • Optimizing memory usage and changing file contents with PHP

    - by errata
    In a function like this:

        function download($file_source, $file_target) {
            $rh = fopen($file_source, 'rb');
            $wh = fopen($file_target, 'wb');
            if (!$rh || !$wh) {
                return false;
            }
            while (!feof($rh)) {
                if (fwrite($wh, fread($rh, 1024)) === FALSE) {
                    return false;
                }
            }
            fclose($rh);
            fclose($wh);
            return true;
        }

    what is the best way to rewrite the last few bytes of a file with my custom string? Thanks!
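    A hedged sketch of overwriting the tail in place (the function name is illustrative), assuming the target file already exists and is at least as long as the replacement string: open it in "r+b", seek back from the end, and write.

        function overwrite_tail($path, $replacement) {
            $fh = fopen($path, 'r+b');
            if (!$fh) {
                return false;
            }
            // position the pointer strlen($replacement) bytes before the end of the file
            fseek($fh, -strlen($replacement), SEEK_END);
            $ok = fwrite($fh, $replacement) !== false;
            fclose($fh);
            return $ok;
        }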


  • File Operations in Android NDK

    - by EnderX
    I am using the Android NDK to make an application primarily in C for performance reasons, but it appears that file operations such as fopen do not work correctly in Android. Whenever I try to use these functions, the application crashes. How do I create/write to a file with the Android NDK?
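    For what it's worth, fopen itself is available from the NDK's C library; crashes of this kind are often a matter of writing to a path the process has no permission for. A hedged C sketch, assuming the app's internal files directory (for example the value of Context.getFilesDir()) is passed down from Java; the function and file names are illustrative:

        #include <stdio.h>

        /* files_dir: e.g. "/data/data/<package>/files", passed in via JNI */
        int write_note(const char *files_dir)
        {
            char path[512];
            snprintf(path, sizeof path, "%s/out.txt", files_dir);

            FILE *fp = fopen(path, "w");
            if (fp == NULL) {
                return -1;               /* bad path or no permission */
            }
            fprintf(fp, "hello from native code\n");
            fclose(fp);
            return 0;
        }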


  • using wild card when listing directories in python

    - by user248237
    How can I use wildcards like '*' when getting a list of files inside a directory in Python? For example, I want something like os.listdir('foo/*bar*/*.txt'), which would return a list of all the files ending in .txt in directories that have bar in their name inside the foo parent directory. How can I do this? Thanks.
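    A minimal sketch using the glob module, which expands shell-style wildcards (os.listdir itself does not):

        import glob

        matches = glob.glob('foo/*bar*/*.txt')   # every .txt in foo/<anything with bar>/
        for path in matches:
            print path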


  • Python f.write() at beginning of file?

    - by kristus
    I'm doing it like this now, but I want it to write at the beginning of the file instead:

        f = open('out.txt', 'a') # or 'w'?
        f.write("string 1")
        f.write("string 2")
        f.write("string 3")
        f.close()

    so that the contents of out.txt will be:

        string 3
        string 2
        string 1

    and not (like this code does):

        string 1
        string 2
        string 3
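    A file can't be prepended to in place, so one sketch of getting that ordering is to read back whatever is already in out.txt and write the new string in front of it:

        def prepend(path, text):
            try:
                with open(path, 'r') as f:
                    old = f.read()
            except IOError:              # file does not exist yet
                old = ''
            with open(path, 'w') as f:
                f.write(text + old)

        prepend('out.txt', "string 1")
        prepend('out.txt', "string 2")
        prepend('out.txt', "string 3")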


  • Is there an easier way to reconcile a list of files and a directory with subfolders/files to find ch

    - by rwmnau
    I have a SQL Server table with a list of files (path + filename), and a folder with multiple layers and files in each layer. I'm looking for a way to reconcile the two without having to process the list twice. Currently, I'm doing this:

        For Each f As FileInfo In FileListFromDatabase
            If f.Exists is False, mark it as deleted in the database
        Next
        For Each f As FileInfo In RecursiveListOfFilesOnDisk
            If Not FileExistsInDatabase, then add it
        Next

    Is there a better way to do this? I'd like to avoid converting all the matching files (which most of them will be) to FileInfo objects twice. Since I'm a T-SQL developer first, I'm picturing something like an OUTER JOIN of the two lists where they don't match. Something LINQ-ish?
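    A hedged, LINQ-flavoured C# sketch of that set-difference idea (the method and variable names are illustrative): put the path strings from each side into a HashSet and take the two differences, so the folder is only walked once and no FileInfo objects are built for the matches.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class Reconcile
        {
            static void Run(List<string> dbPaths, string rootFolder)
            {
                var diskPaths = new HashSet<string>(
                    Directory.EnumerateFiles(rootFolder, "*", SearchOption.AllDirectories),
                    StringComparer.OrdinalIgnoreCase);

                // in the database but no longer on disk -> mark as deleted
                var missing = dbPaths.Where(p => !diskPaths.Contains(p)).ToList();

                // on disk but not in the database -> add
                var dbSet = new HashSet<string>(dbPaths, StringComparer.OrdinalIgnoreCase);
                var added = diskPaths.Where(p => !dbSet.Contains(p)).ToList();

                Console.WriteLine("{0} to delete, {1} to add", missing.Count, added.Count);
            }
        }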


  • What is a good way of coding a file-processing program that accepts multi-source data in Java?

    - by jjepsuomi
    I'm making a data processing system which currently uses CSV data as its input and output form. In the future I might want to add support for, for example, database, XML, and other typed input and output forms. How should I design my program so that it is easy to add support for new types of data sources? Should I simply make, for example, an abstract data class (which would contain the basic file processing methods) and then inherit from this class for the database, XML, etc. cases? Hope my question is clear =) In other words, my question is: "How do I design a file processing system which can easily be extended to accept input data from different sources (database, XML, Excel, etc.)?"
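    A small Java sketch of the shape this usually takes (all names here are illustrative): define a narrow source interface and give each input kind its own implementation, so the processing code never changes when a new source is added.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.ArrayList;
        import java.util.List;

        interface RecordSource {
            List<String[]> readRecords() throws Exception;
        }

        class CsvRecordSource implements RecordSource {
            private final String path;

            CsvRecordSource(String path) {
                this.path = path;
            }

            @Override
            public List<String[]> readRecords() throws Exception {
                List<String[]> rows = new ArrayList<String[]>();
                BufferedReader in = new BufferedReader(new FileReader(path));
                String line;
                while ((line = in.readLine()) != null) {
                    rows.add(line.split(","));
                }
                in.close();
                return rows;
            }
        }

        // Later: class DatabaseRecordSource implements RecordSource { ... } and
        // class XmlRecordSource implements RecordSource { ... } -- the processing
        // pipeline only ever depends on RecordSource.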


  • How do I open a file in such a way that if the file doesn't exist it will be created and opened automatically?

    - by snakile
    Here's how I open a file for writing:

        if (fopen_s(&f, fileName, "w+") != 0)
        {
            printf("Open file failed\n");
            return;
        }
        fprintf_s(f, "content");

    If the file doesn't exist, the open operation fails. What's the right way to fopen if I want the file to be created automatically when it doesn't already exist? EDIT: If the file does exist, I would like fprintf to overwrite the file, not append to it.
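    For reference, a short sketch of the mode semantics (the output file name is hypothetical): with fopen/fopen_s, "w" and "w+" create the file when it is missing and truncate it when it exists, which matches the create-automatically-and-overwrite requirement, while "r+" is the mode that fails on a missing file.

        #include <stdio.h>

        int main(void)
        {
            FILE *f = NULL;
            /* "w+": create if missing, truncate if present, allow read and write */
            if (fopen_s(&f, "output.txt", "w+") != 0 || f == NULL) {
                printf("Open file failed\n");
                return 1;
            }
            fprintf_s(f, "content");
            fclose(f);
            return 0;
        }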


  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want to either load one and ignore the other according to some order that I've sorted the directories by (perhaps manually, perhaps by date of creation), or else load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, then when I go to load foo.txt, I search for it in each potential directory? This sounds like it would be very inefficient, if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: What if I want some of the directories to be archives? ie. considering both data/foo/ and data/bar.zip to both be valid, and pull foobar.txt from either one without caring.
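    One common shape for this, sketched below with boost::filesystem (the class and member names are illustrative): walk the search directories once, highest priority first, and build a name-to-path index, so each later lookup is a single map find rather than a scan of every directory. Archives such as data/bar.zip would need their own listing step, but could feed the same index.

        #include <boost/filesystem.hpp>
        #include <cstddef>
        #include <map>
        #include <string>
        #include <vector>

        namespace fs = boost::filesystem;

        class SearchIndex {
        public:
            // dirs are given highest-priority first; the first hit for a name wins,
            // because std::map::insert keeps the existing entry on duplicate keys.
            explicit SearchIndex(const std::vector<fs::path>& dirs) {
                for (std::size_t i = 0; i < dirs.size(); ++i) {
                    if (!fs::is_directory(dirs[i])) continue;
                    fs::recursive_directory_iterator it(dirs[i]), end;
                    for (; it != end; ++it) {
                        if (fs::is_regular_file(it->path())) {
                            index_.insert(std::make_pair(it->path().filename().string(), it->path()));
                        }
                    }
                }
            }

            // returns an empty path when the file is unknown
            fs::path find(const std::string& name) const {
                std::map<std::string, fs::path>::const_iterator it = index_.find(name);
                return it == index_.end() ? fs::path() : it->second;
            }

        private:
            std::map<std::string, fs::path> index_;
        };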


  • Reference table values in a war against magic numbers

    - by Alex N.
    This question has bugged me for years now and I still can't seem to find a good solution. I work in PHP and Java, but this sounds like it may be language-agnostic :) Say we have a standard status reference table that holds status ids for some kind of entity. Further, let's assume the table will have just 5 values and will remain like this for a long time, maybe edited occasionally with the addition of a new status. When you fetch a row and need to see what status it is, you have two options (as I see it at least): use the straight ID values (magic numbers, that is) or use a named constant. The latter seems much cleaner; the question, though, is where those named constants should live. In a model class? In the class that uses this particular constant? Somewhere else?
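    In Java, one usual home for these values is an enum that mirrors the reference table, kept next to the entity it describes, so code compares against Status.ACTIVE instead of a bare 2. A sketch (the names and ids are purely illustrative):

        public enum Status {
            NEW(1), ACTIVE(2), SUSPENDED(3), CLOSED(4), ARCHIVED(5);

            private final int id;

            Status(int id) {
                this.id = id;
            }

            public int getId() {
                return id;
            }

            // map a value fetched from the reference table back to the constant
            public static Status fromId(int id) {
                for (Status s : values()) {
                    if (s.id == id) {
                        return s;
                    }
                }
                throw new IllegalArgumentException("Unknown status id: " + id);
            }
        }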


  • write cache and write sequence order

    - by excanoe
    OK, here I have a somewhat weird question. Let's say we have a binary file (.log) and a sequence of write operations, for example log1, log2, log3, each with some block size n (raw data). Question: can I be sure that the log1, log2, and log3 records are written to the ONE file in the correct order, even if there are several cache levels (disk hardware and OS level)? Update: I'm particularly interested in what happens to the record order (not the records themselves) if there is a software or hardware failure (a reboot or some other reason). Update: some percentage of write failures is acceptable, but the main question is: will the write order stay correct?
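    If ordering across a crash has to be enforced rather than assumed, the usual technique is to flush each record through the library and OS caches before issuing the next one. A hedged C sketch of that pattern on POSIX (the function name is illustrative, and the trade-off is that every record now costs a disk round-trip):

        #include <stdio.h>
        #include <unistd.h>   /* fsync */

        /* write one record and make sure it is on disk before the caller continues */
        int write_record(FILE *log, const void *data, size_t len)
        {
            if (fwrite(data, 1, len, log) != len) {
                return -1;
            }
            if (fflush(log) != 0) {          /* push it out of the stdio buffer */
                return -1;
            }
            return fsync(fileno(log));       /* push it out of the OS cache */
        }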


  • threading in Python taking up too much CPU

    - by KevinShaffer
    I wrote a chat program with a GUI running under Tkinter. To check when new messages have arrived, I create a new thread, so that Tkinter keeps doing its thing without locking up while the new thread grabs what I need and updates the Tkinter window. This, however, becomes a huge CPU hog, and my guess is that it has something to do with the thread being started and never really released when the function is done. Here's the relevant code (it's ugly and not optimized at the moment, but it gets the job done, and by itself it does not use much processing power: when I run it unthreaded, it doesn't take much CPU, but it locks up Tkinter). Note: this is inside a class, hence the extra indentation.

        def interim(self):
            threading.Thread(target=self.readLog).start()
            self.after(5000, self.interim)

        def readLog(self):
            print 'reading'
            try:
                length = len(str(self.readNumber))
                f = open('chatlog' + str(myport), 'r')
                temp = f.readline().replace('\n', '')
                while (temp[:length] != str(self.readNumber)) or temp[0] == '<':
                    temp = f.readline().replace('\n', '')
                while temp:
                    if temp[0] != '<':
                        self.updateChat(temp[length:])
                        self.readNumber += 1
                    else:
                        self.updateChat(temp)
                    temp = f.readline().replace('\n', '')
                f.close()

    Is there a way to better manage the threading so I don't consume 100% of the CPU very quickly?
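    A hedged sketch of one way to reduce the work: remember the file offset with f.tell() and seek back to it on the next pass, so each 5-second poll only touches lines added since last time instead of rescanning the whole log from the top. The logPos attribute is illustrative, and the readNumber/'<' filtering is left out here for brevity.

        def readLog(self):
            try:
                f = open('chatlog' + str(myport), 'r')
                f.seek(getattr(self, 'logPos', 0))   # continue where the last pass stopped
                for line in f.readlines():           # only the lines added since then
                    line = line.rstrip('\n')
                    if line:
                        self.updateChat(line)
                self.logPos = f.tell()               # remember the offset for next time
                f.close()
            except IOError:
                pass                                 # log file not present yet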

