Search Results

Search found 6654 results on 267 pages for 'socket io'.

  • fopen() fails to open stream: permission denied, yet permissions should be valid

    - by about blank
    So, I have this error:

        Warning: fopen(/path/to/test-in.txt) [function.fopen]: failed to open stream: Permission denied

    Performing ls -l in the directory where test-in.txt lives produces the following output:

        -rw-r--r-- 1 $USER $USER 1921 Sep 6 20:09 test-in.txt
        -rw-r--r-- 1 $USER $USER    0 Sep 6 20:08 test-out.txt

    To get past this, I ran:

        chgrp -R www-data /path/to/php/webroot
        chmod g+rw /path/to/php/webroot

    Yet I still get this error when I run my PHP 5 script to open the file. Why is this happening? I've tried this with LAMP as well as with Cherokee through CGI, so the web server itself can't be the cause. Is there a solution of some sort?

    Edit: I'll also add that I'm just developing via localhost right now.

    Update - the PHP fopen() line:

        $fullpath = $this->fileRoot . $this->fileInData['fileName'];
        $file_ptr = fopen( $fullpath, 'r+' );

    I should also mention that I'd like to stick with Cherokee if possible. What's the deal with setting file permissions for Apache/Cherokee?
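
    A common gotcha with this setup: read/write bits on the file are not enough. The web server user (www-data here) also needs execute permission on every directory along the path, and fopen() with 'r+' needs both read and write on the file itself. A small diagnostic sketch, in Python for brevity, that prints the ownership and mode of the whole chain (illustrative, not from the question):

        import grp
        import os
        import pwd

        def explain_access(path):
            """Print mode, owner and group for path and every parent directory.

            The server user needs x on each directory and r+w on the file
            for fopen($path, 'r+') to succeed."""
            p = os.path.abspath(path)
            chain = [p]
            while p != os.sep:
                p = os.path.dirname(p)
                chain.append(p)
            for q in reversed(chain):
                st = os.stat(q)
                print("%03o %s:%s %s" % (st.st_mode & 0o777,
                                         pwd.getpwuid(st.st_uid).pw_name,
                                         grp.getgrgid(st.st_gid).gr_name, q))

        # explain_access("/path/to/test-in.txt")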

  • What would happen if the same file were read and appended to at the same time (Python)?

    - by Shane
    I'm writing a script with two separate threads, one doing a file-reading operation and the other doing appending; both threads run fairly frequently. My question is: if one thread happens to read the file while the other is in the middle of appending a string such as "This is a test", what would happen? I know that if you are appending a smaller-than-buffer string, then no matter how frequently you read the file from the other thread, you will never see an incomplete line such as "This i" in what you read. I mean the OS would do either:

        append "This is a test" - read from the file

    or:

        read from the file - append "This is a test"

    and this would never happen:

        append "This i" - read from the file - append "s a test"

    But if "This is a test" is big enough (assume it's a bigger-than-buffer string), the OS can't do the append in one operation, so the job would be divided into two: first append "This i", then append "s a test". In that situation, if I happen to read the file in the middle of the whole operation, would I get:

        append "This i" - read from the file - append "s a test"

    which means I might read a file that includes an incomplete string?
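
    A minimal sketch of the usual way to sidestep the question entirely: serialize the two operations with a lock shared by both threads, so a reader can never observe a half-finished append regardless of buffer sizes. (The file name and line-based framing are illustrative assumptions, not from the question.)

        import threading

        io_lock = threading.Lock()

        def append_line(path, line):
            with io_lock:                    # no read can interleave with this
                with open(path, "a") as f:
                    f.write(line + "\n")

        def read_lines(path):
            with io_lock:                    # no append can interleave with this
                with open(path) as f:
                    return f.readlines()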

  • Writing strings to files in Python

    - by Leif Andersen
    I'm getting the following error when trying to write a string to a file in Python:

        Traceback (most recent call last):
          File "export_off.py", line 264, in execute
            save_off(self.properties.path, context)
          File "export_off.py", line 244, in save_off
            primary.write(file)
          File "export_off.py", line 181, in write
            variable.write(file)
          File "export_off.py", line 118, in write
            file.write(self.value)
        TypeError: must be bytes or buffer, not str

    I basically have a string class, which contains a string:

        class _off_str(object):
            __slots__ = 'value'

            def __init__(self, val=""):
                self.value = val

            def get_size(self):
                return SZ_SHORT

            def write(self, file):
                file.write(self.value)

            def __str__(self):
                return str(self.value)

    Furthermore, I'm calling that class like this:

        def write(self, file):
            for variable in self.variables:
                variable.write(file)

    I have no idea what is going on. I've seen other Python programs write strings to files, so why can't this one? Thank you very much for your help.
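
    That traceback is what Python 3 raises when a str is written to a file opened in binary mode ('wb'), which is a reasonable guess at the cause here. A minimal sketch of the two usual fixes (file names are illustrative):

        value = "OFF"

        # Fix 1: keep the file binary and encode the string first.
        with open("demo.off", "wb") as f:
            # f.write(value)                  # TypeError: must be bytes, not str
            f.write(value.encode("ascii"))    # encode str -> bytes

        # Fix 2: open the file in text mode and write str directly.
        with open("demo.txt", "w") as f:
            f.write(value)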

  • Reading in data from a file into an array

    - by Sam
    If I have an options file along the lines of this:

        size = 4
        data = 1100010100110010

    and I have a 2D size * size array that I want to populate with the values in data, what's the best way of doing it? To clarify, for the example above I'd want an array like this:

        int[4][4] array = {{1,1,0,0},
                           {0,1,0,1},
                           {0,0,1,1},
                           {0,0,1,0}};

    (Not real code, but you get the idea.) Size can really be any number, though. I'm thinking I'd have to read in the size, malloc an array, and then maybe read the data into a string, loop through each char in it, cast each one to an int and stick it in the appropriate index? But I really have no idea how to go about it, and I've been searching for a while with no luck. Any help would be cool! :)
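
    That read-size-then-slice plan is sound. A language-agnostic sketch of the parsing logic, written in Python for brevity (in C the same steps would be fscanf/malloc plus a doubly indexed loop):

        def parse_options(path):
            """Parse 'key = value' lines, then fold data into a size*size grid."""
            opts = {}
            with open(path) as f:
                for line in f:
                    key, _, value = line.partition("=")
                    opts[key.strip()] = value.strip()
            n = int(opts["size"])
            data = opts["data"]
            assert len(data) == n * n, "data must hold exactly size*size digits"
            # Row r, column c comes from flat index r*n + c.
            return [[int(data[r * n + c]) for c in range(n)] for r in range(n)]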

  • Accessing a file (for writing) from a JBoss Web Service

    - by Andreas Grech
    Let's say I have this structure in my Java web application:

        TheProject
        -- [Web Pages]
        -- -- abc.txt
        -- -- index.jsp
        -- [Source Packages]
        -- -- [wservices]
        -- -- -- WS.java

    WS.java is my web service, which is situated in the wservices package. Now from this service, I need to access the abc.txt file and write to it. These are my URLs:

        http://127.0.0.1:8080/TheProject/WS      <- the web service
        http://127.0.0.1:8080/TheProject/abc.txt <- the file I want to access

    To read the file, I tried getResourceAsStream and was successful in reading from it. But now I also want to write to this file, and I tried such a method but failed. Is there a way I can get access to the abc.txt file from WS.java and be able to successfully read from and write to it?

  • In OCaml, how can I create an out_channel which writes to a string/buffer instead of a file on disk

    - by Tianyi Cui
    I have a function of type in_channel -> out_channel -> unit which writes something to an out_channel. Now I'd like to get its output as a string. Creating a temporary file to write to and read back seems ugly, so how can I do that? Are there other ways to create an out_channel besides the Pervasives.open_out family? Actually, this function implements a REPL. What I really need is to test it programmatically, so I'd like to first wrap it into a function of type string -> string. For creating the in_channel, it seems I can use Scanf.Scanning.from_string, but I don't know how to create the out_channel parameter.

  • threading in Python taking up too much CPU

    - by KevinShaffer
    I wrote a chat program and have a GUI running using Tkinter. To check when new messages have arrived, I create a new thread, so that Tkinter keeps doing its thing without locking up while the new thread grabs what I need and updates the Tkinter window. This however becomes a huge CPU hog, and my guess is that it somehow has to do with the fact that the thread is started and never really released when the function is done. Here's the relevant code (it's ugly and not optimized at the moment, but it gets the job done, and by itself it does not use too much processing power: when I run it un-threaded it doesn't take much CPU, but it locks up Tkinter). Note: this is inside of a class, hence the extra indentation.

        def interim(self):
            threading.Thread(target=self.readLog).start()
            self.after(5000, self.interim)

        def readLog(self):
            print 'reading'
            try:
                length = len(str(self.readNumber))
                f = open('chatlog' + str(myport), 'r')
                temp = f.readline().replace('\n', '')
                while (temp[:length] != str(self.readNumber)) or temp[0] == '<':
                    temp = f.readline().replace('\n', '')
                while temp:
                    if temp[0] != '<':
                        self.updateChat(temp[length:])
                        self.readNumber += 1
                    else:
                        self.updateChat(temp)
                    temp = f.readline().replace('\n', '')
                f.close()

    Is there a way to better manage the threading so I don't consume 100% of the CPU very quickly?
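
    A sketch of one likely fix, assuming the chat log is append-only: remember the byte offset reached on the previous poll and seek straight to it, so each 5-second tick reads only the new lines instead of rescanning the file from the top. (Class and method names are illustrative, not from the original program.)

        import os

        class LogPoller(object):
            def __init__(self, path):
                self.path = path
                self.offset = 0          # how far we have read so far

            def poll(self):
                """Return only the lines appended since the last call."""
                if not os.path.exists(self.path):
                    return []
                with open(self.path, 'r') as f:
                    f.seek(self.offset)
                    new_lines = f.readlines()
                    self.offset = f.tell()
                return [line.rstrip('\n') for line in new_lines]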

  • [C] Read line from file without knowing the line length.

    - by ryyst
    Hi, I want to read in a file line by line, without knowing the line length beforehand. Here's what I have so far:

        int ch = getc(file);
        int length = 0;
        char buffer[4095];

        while (ch != '\n' && ch != EOF) {
            ch = getc(file);
            buffer[length] = ch;
            length++;
        }

        printf("Line length: %d characters.", length);

    I can now figure out the line length, but only for lines that are shorter than 4095 characters. Is there a better way to do this (I already used fgets() but was told it wasn't the best way)? --Ry

  • Likelihood of IOError during print vs. write

    - by jkasnicki
    I recently encountered an IOError writing to a file on NFS. There wasn't a disk-space or permission issue, so I assume this was just a network hiccup. The obvious solution is to wrap the write in a try-except, but I was curious whether the implementations of print and write in Python make either of the following more or less likely to raise IOError:

        f_print = open('print.txt', 'w')
        print >>f_print, 'test_print'
        f_print.close()

    vs.

        f_write = open('write.txt', 'w')
        f_write.write('test_write\n')
        f_write.close()

    (If it matters, this is specifically Python 2.4 on Linux.)
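
    For what it's worth, in CPython 2 both forms write through the same buffered stdio stream underneath, so neither should be meaningfully more likely to raise IOError; note also that with buffering, a network hiccup may not surface until flush or close. A retry wrapper in the spirit of the question, sketched for Python 2.4 (hence try/finally instead of with; names and limits are illustrative):

        import time

        def write_with_retry(path, data, attempts=3, delay=1.0):
            """Write data to path, retrying on transient IOErrors."""
            for attempt in range(attempts):
                try:
                    f = open(path, 'a')
                    try:
                        f.write(data)
                    finally:
                        f.close()
                    return
                except IOError:
                    if attempt == attempts - 1:
                        raise          # out of retries; let the caller see it
                    time.sleep(delay)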

  • stdio data from write not making it into a file

    - by user1551209
    I'm having a problem using stdio-level commands to manipulate data in a file. In short, when I write data into the file, write returns an int indicating that it was successful, but when I read the data back out I only get the old values. Here's a stripped-down version of the code:

        fd = open(filename, O_RDWR|O_APPEND);
        struct dE *cDE = malloc(sizeof(struct dE));

        /* Read present data */
        printf("\nreading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

        printf("\nwriting new values\n");
        /* Change the values locally */
        cDE->key = /* something new */;
        /* cDE->data = something new */

        /* Write them back */
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("WriteStatus <%d>\n", write(fd, cDE, deSize));

        /* Re-read to make sure that it got written back */
        printf("\nre-reading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

    Furthermore, here's the dE struct in case you're wondering:

        struct dE {
            int key;
            char data[DataSize];
        };

    This prints:

        reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>

        writing new values
        SeekStatus <1072>
        WriteStatus <32>

        re-reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>
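
    The O_APPEND flag in the open() call is the likely culprit: POSIX defines every write() on an O_APPEND descriptor to land at end-of-file regardless of where lseek() positioned it, so the "successful" 32-byte write goes past the old record and the re-read still sees stale data. A small Python sketch of the same system calls shows the effect (file name is illustrative):

        import os

        path = "demo.bin"
        with open(path, "wb") as f:
            f.write(b"AAAABBBB")             # 8 bytes of existing data

        fd = os.open(path, os.O_RDWR | os.O_APPEND)
        os.lseek(fd, 0, os.SEEK_SET)         # seek to offset 0...
        os.write(fd, b"XXXX")                # ...but O_APPEND writes at EOF
        os.close(fd)

        print(open(path, "rb").read())       # b'AAAABBBBXXXX', not b'XXXXBBBB'

    Dropping O_APPEND (plain O_RDWR) would let the lseek/write pair update the record in place.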

  • Optimizing memory usage and changing file contents with PHP

    - by errata
    In a function like this:

        function download($file_source, $file_target) {
            $rh = fopen($file_source, 'rb');
            $wh = fopen($file_target, 'wb');
            if (!$rh || !$wh) {
                return false;
            }
            while (!feof($rh)) {
                if (fwrite($wh, fread($rh, 1024)) === FALSE) {
                    return false;
                }
            }
            fclose($rh);
            fclose($wh);
            return true;
        }

    what is the best way to rewrite the last few bytes of a file with my custom string? Thanks!
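
    The general pattern, assuming the file has already been written: reopen it in update mode, seek backwards from the end, and overwrite in place (in PHP that would be fopen with 'r+b' plus fseek relative to SEEK_END). Sketched in Python for brevity:

        import os

        def overwrite_tail(path, replacement):
            """Replace the last len(replacement) bytes of path in place."""
            with open(path, "r+b") as f:          # update mode: no truncation
                f.seek(-len(replacement), os.SEEK_END)
                f.write(replacement)

        # overwrite_tail("target.bin", b"CUSTOM")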

  • Is there an easier way to reconcile a list of files and a directory with subfolders/files to find ch

    - by rwmnau
    I have a SQL Server table with a list of files (path + filename), and a folder with multiple layers and files in each layer. I'm looking for a way to reconcile the two without having to process the list twice. Currently, I'm doing this:

        For Each f As FileInfo In FileListFromDatabase
            If f.Exists Is False, mark it as deleted in the database
        Next

        For Each f As FileInfo In RecursiveListOfFilesOnDisk
            If Not FileExistsInDatabase, then add it
        Next

    Is there a better way to do this? I'd like to avoid converting all the matching files (which most will be) to FileInfo objects twice. Since I'm a T-SQL developer first, I'm picturing something like an OUTER JOIN of the two lists where they don't match. Something LINQ-ish?
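
    The in-memory equivalent of that OUTER JOIN is a pair of set differences over plain path strings, which avoids building FileInfo objects for the matching majority entirely. A sketch of the idea in Python (path normalization rules are an assumption; in .NET a HashSet(Of String) with a case-insensitive comparer plays the same role):

        import os

        def reconcile(db_paths, disk_root):
            norm = lambda p: os.path.normcase(os.path.normpath(p))
            db = {norm(p) for p in db_paths}
            disk = {
                norm(os.path.join(folder, name))
                for folder, _dirs, files in os.walk(disk_root)
                for name in files
            }
            deleted = db - disk      # in the database, gone from disk
            added = disk - db        # on disk, missing from the database
            return deleted, added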

  • Reading a simple Avro file from HDFS

    - by John Galt... who
    I am trying to do a simple read of an Avro file stored in HDFS. I found out how to read it when it is on the local file system:

        FileReader reader = DataFileReader.openReader(new File(filename), new GenericDatumReader());
        for (GenericRecord datum : reader) {
            String value = datum.get(1).toString();
            System.out.println("value = " + value);
        }
        reader.close();

    My file is in HDFS, however. I cannot give the openReader a Path or an FSDataInputStream. How can I simply read an Avro file in HDFS?

    EDIT: I got this to work by creating a custom class (SeekableHadoopInput) that implements SeekableInput. I "stole" this from "Ganglion" on github. Still, it seems like there would be a Hadoop/Avro integration path for this. Thanks

  • What is a good way to code a file processing program that accepts multi-source data in Java?

    - by jjepsuomi
    I'm making a data processing system which currently uses CSV data as its input and output format. In the future I might want to add support for, say, database-, XML-, etc. typed input and output forms. How should I design my program so that it would be easy to add support for new types of data sources? Should I simply make, for example, an abstract data class (which would contain the basic file processing methods) and then inherit from this class for the database, XML, etc. cases? Hope my question is clear =) In other words, my question is: "How do I design a file processing system which can easily be updated to accept input data from different sources (database, XML, Excel, etc.)?"
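
    The abstract-base-class route sketched in the question is the textbook answer: define one interface that every source implements, and keep the processing pipeline ignorant of where records come from. A minimal illustration in Python (class and method names are invented for the example):

        from abc import ABC, abstractmethod
        import csv
        import xml.etree.ElementTree as ET

        class DataSource(ABC):
            """Common interface every input format implements."""
            @abstractmethod
            def read_records(self):
                """Yield one record (a dict of field -> value) at a time."""

        class CsvSource(DataSource):
            def __init__(self, path):
                self.path = path
            def read_records(self):
                with open(self.path, newline="") as f:
                    for row in csv.DictReader(f):
                        yield row

        class XmlSource(DataSource):
            def __init__(self, path):
                self.path = path
            def read_records(self):
                for elem in ET.parse(self.path).getroot():
                    yield {child.tag: child.text for child in elem}

        def process(source):
            # Adding a database or Excel source later means writing one new
            # subclass; nothing here changes.
            for record in source.read_records():
                print(record)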

  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want to either load one and ignore the other, according to some order that I've sorted the directories by (perhaps manually, perhaps by date of creation), or else load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, and then, when I go to load foo.txt, search for it in each potential directory? This sounds like it would be very inefficient if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: what if I want some of the directories to be archives? I.e., considering both data/foo/ and data/bar.zip to be valid, and pulling foobar.txt from either one without caring.
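
    The scheme described (enumerate and sort the directories once, then probe them in priority order per file) is how most asset systems do it, and it is cheaper than it sounds: the per-file cost is one stat per directory until the first hit, and a one-time index can remove even that. A sketch in Python (the project uses C++ with boost::filesystem, where directory_iterator plays the role of listdir here):

        import os

        class SearchPath(object):
            def __init__(self, root):
                # Sort however priority is defined; alphabetical here.
                self.dirs = sorted(
                    os.path.join(root, d)
                    for d in os.listdir(root)
                    if os.path.isdir(os.path.join(root, d))
                )
                # One-time index: the highest-priority directory wins per name.
                self.index = {}
                for d in reversed(self.dirs):        # lowest priority first...
                    for name in os.listdir(d):
                        self.index[name] = os.path.join(d, name)  # ...overwritten

            def resolve(self, name):
                """Return the winning path for a logical file name, or None."""
                return self.index.get(name)

        # SearchPath("data").resolve("foo.txt")

    Archives fit the same model: treat each .zip as one more "directory" whose listing comes from the archive's table of contents.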

  • .NET: best way to move many files to and from various directories?

    - by Dan
    I've created a program that moves files to and from various directories. An issue I've come across is when you're trying to move a file and some other program is still using it: you get an error. Leaving the file there isn't an option, so I can only think of trying to move it over and over again. This slows the entire program down, though, so I create a new thread, let it deal with the problem file, and move on to the next. The bigger problem is when there are too many of these problem files: the program ends up with so many threads trying to move them that it crashes with some kernel.dll error. Here's a sample of the code I use to move the files:

        Public Sub MoveIt()
            Try
                File.Move(_FileName, _CopyToFileName)
            Catch ex As Exception
                Threading.Thread.Sleep(5000)
                MoveIt()
            End Try
        End Sub

    As you can see, I try to move the file, and if it errors, I wait and move it again, over and over. I've tried using FileInfo as well, but that crashes WAY sooner than just using the File object. So has anyone found a foolproof way of moving files without it ever erroring? Note: it takes a lot of files to make it crash. It'll be fine over the weekend, but by the end of the day on Monday, it's done.
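
    Two compounding problems are visible in that sample: every retry recurses (so a file that stays locked grows the call stack forever), and every problem file parks its own thread. A common reshaping, sketched in Python, is one worker that drains a retry queue with a bounded attempt count, so stuck files cost a queue slot instead of a thread (names and limits are illustrative):

        import shutil
        import time
        from collections import deque

        def move_all(jobs, max_attempts=10, delay=5.0):
            """jobs: iterable of (src, dest) pairs. Returns files given up on."""
            queue = deque((src, dest, 0) for src, dest in jobs)
            failed = []
            while queue:
                src, dest, attempts = queue.popleft()
                try:
                    shutil.move(src, dest)
                except OSError:
                    if attempts + 1 < max_attempts:
                        time.sleep(delay)                 # let the other app finish
                        queue.append((src, dest, attempts + 1))
                    else:
                        failed.append(src)                # report instead of hanging
            return failed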

  • Need a JProgressBar to measure progress when copying directories and files

    - by user1815823
    I have the code below to copy directories and files, but I'm not sure where to measure the progress. Can someone help as to where I can measure how much has been copied, so I can show it in the JProgressBar?

        public static void copy(File src, File dest) throws IOException {
            if (src.isDirectory()) {
                if (!dest.exists()) { // check whether the destination directory exists
                    dest.mkdir();
                    System.out.println("Directory copied from " + src + " to " + dest);
                }
                String files[] = src.list();
                for (String file : files) {
                    File srcFile = new File(src, file);
                    File destFile = new File(dest, file);
                    copy(srcFile, destFile);
                }
            } else {
                InputStream in = new FileInputStream(src);
                OutputStream out = new FileOutputStream(dest);
                byte[] buffer = new byte[1024];
                int length;
                while ((length = in.read(buffer)) > 0) {
                    out.write(buffer, 0, length);
                }
                in.close();
                out.close();
                System.out.println("File copied from " + src + " to " + dest);
            }
        }
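
    The natural place to measure is the byte-copy loop: total the bytes in a first pass over the tree, then report after each chunk written. A sketch of that shape in Python (the on_progress callback stands in for updating the JProgressBar, which in Swing must happen on the event thread, e.g. via SwingUtilities.invokeLater):

        import os

        def total_bytes(src):
            if os.path.isfile(src):
                return os.path.getsize(src)
            return sum(os.path.getsize(os.path.join(root, name))
                       for root, _dirs, files in os.walk(src)
                       for name in files)

        def copy_with_progress(src, dest, on_progress, _state=None):
            """on_progress(copied, total) fires after every chunk written."""
            if _state is None:
                _state = {"copied": 0, "total": total_bytes(src)}
            if os.path.isdir(src):
                if not os.path.exists(dest):
                    os.mkdir(dest)
                for name in os.listdir(src):
                    copy_with_progress(os.path.join(src, name),
                                       os.path.join(dest, name),
                                       on_progress, _state)
            else:
                with open(src, "rb") as fin, open(dest, "wb") as fout:
                    while True:
                        chunk = fin.read(64 * 1024)
                        if not chunk:
                            break
                        fout.write(chunk)
                        _state["copied"] += len(chunk)
                        on_progress(_state["copied"], _state["total"])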

  • Relative path issue with a .NET Windows service?

    - by Amitabh
    I have a Windows service which is trying to access an XML file from the application directory.

        Windows service installed directory: C:\Services\MyService\MyService.exe
        Path of the XML file:                C:\Services\MyService\MyService.xml

    I am trying to access the file using the following code:

        using (FileStream stream = new FileStream("MyService.xml", FileMode.Open, FileAccess.Read))
        {
            // Read file
        }

    I get the following error:

        Can not find file : C:\WINDOWS\system\MyService.xml

    My service is running with the local system account and I don't want to use an absolute path.
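
    The error message shows why: services are launched with their current working directory set to the Windows system folder, so a bare relative file name resolves there. The usual fix is to anchor the path to the executable's own location instead of the working directory; in .NET that base is typically AppDomain.CurrentDomain.BaseDirectory. The same idea, sketched in Python:

        import os
        import sys

        def config_path(filename):
            # Directory containing the running program, not the cwd.
            base = os.path.dirname(os.path.abspath(sys.argv[0]))
            return os.path.join(base, filename)

        # open(config_path("MyService.xml"))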

  • Preventing threads from writing to the same file

    - by EpsilonVector
    I'm implementing an FTP-like protocol in the Linux 2.4 kernel (homework), and I was under the impression that if a file is open for writing, any subsequent attempt to open it by another thread should fail. Then I actually tried it and discovered that the second open goes through. How do I prevent this from happening?

    PS: I'm using open() to open the file.
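
    That impression matches Windows share modes, but Unix open() simply doesn't work that way: any number of descriptors may be open for writing at once, and exclusion has to be requested explicitly through locking. A userspace sketch of the advisory-lock pattern (Python's fcntl.flock; inside the kernel, the analogous step would be taking a lock in the open path):

        import fcntl

        def open_exclusive(path, mode="a"):
            """Open path and take an exclusive advisory lock, failing fast if
            another cooperating process/thread already holds it."""
            f = open(path, mode)
            try:
                fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except IOError:
                f.close()
                raise RuntimeError("%s is already locked" % path)
            return f

    Note that advisory locks only exclude other users who also take the lock, which is why they are called advisory.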

  • File writing needs to be optimised for heavy traffic, part 2

    - by Clayton Leung
    For anyone interested in where I'm coming from, you can refer to part 1 ("write file need to optimised for heavy traffic"), but it is not necessary. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code runs without error. I need to optimize it, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is single-threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Questions: Any suggestion to make this even faster? I will try varying the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what happens to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer? Or is C# smart enough to figure it out? Thanks for any advice.
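
    On the second question: because the broker API is single-threaded, nothing arrives while emptyBuffer runs; the flush happens inside the tick callback, so the real cost is that ticks queue up behind the disk write. The classic decoupling is to swap the full buffer for a fresh one and hand it to a dedicated writer thread, so the callback never touches the disk. A sketch of that shape in Python (names are illustrative; in C# the same roles would be played by a queue plus a background writer, and keeping a single writer thread preserves append order):

        import queue
        import threading

        class BufferedWriter(object):
            def __init__(self, path, limit=8192):
                self.path, self.limit = path, limit
                self.front, self.size = [], 0     # buffer currently filling
                self.full = queue.Queue()         # full buffers awaiting disk
                threading.Thread(target=self._writer, daemon=True).start()

            def add(self, record):                # called from the tick callback
                self.front.append(record)
                self.size += len(record)
                if self.size >= self.limit:
                    self.full.put(self.front)     # hand off the full buffer...
                    self.front, self.size = [], 0 # ...and keep filling a new one

            def _writer(self):
                while True:
                    records = self.full.get()     # blocks until a buffer arrives
                    with open(self.path, "a") as f:
                        f.writelines(records)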

  • Need data on disk drive management by OS: getting base I/O unit size, “sync” option, Direct Memory A

    - by Richard T
    Hello all, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are:

    1. I/O size: the database engine's and the disk's native I/O sizes should either match, or the database's native I/O size should be a multiple of the disk's.
    2. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it.
    3. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it.

    I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.

  • Calling a class in Java after editing the file used as the source for a table

    - by user2892290
    I'm currently working on a project; I'll try to describe it first. I save data into a text file that I use as the source for a browser of that data. The browser is based on a table that contains the data. I have to rewrite the source file every time I delete or edit data. That's where the problem comes in: after deleting or editing data I call a method to create the table again, but the table never gets created. Could this be caused by editing the file and calling the method right after that? If I restart my app, the table is successfully created with the right data. Note that I don't get any error message. This is the method I use for loading data from the source file:

        try (BufferedReader input1 = new BufferedReader(new FileReader("./src/data.src"))) {
            int lines = 0;
            while (input1.read() != -1) {
                if (!(input1.readLine()).equals("")) {
                    lines++;
                }
            }
            input1.close();
            if (lines == 0) {
                JOptionPane.showMessageDialog(null, "No data to load, create a note first!");
                new Writer().build(frame);
            } else {
                try (BufferedReader input = new BufferedReader(new FileReader("./src/data.src"))) {
                    Game[] g = new Game[lines];
                    String currentLine;
                    String[] help;
                    int counter = 0;
                    while (lines > 0) {
                        currentLine = input.readLine();
                        help = currentLine.split("#");
                        g[counter] = new Game(help[0], help[1], help[2], help[3], help[4],
                                              help[5], help[6], help[7], help[8], help[9]);
                        counter++;
                        lines--;
                    }
                    input.close();
                    final JButton bButton = new backButton().create(frame, mPanel);
                    build(g, frame, bButton);
                    mPanel.add(panel);
                    mPanel.add(panel2);
                    mPanel.add(searchPanel);
                    mPanel.add(bButton);
                    bButton.addActionListener(new ActionListener() {
                        @Override
                        public void actionPerformed(ActionEvent e) {
                            frame.setCursor(Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR));
                            panel.removeAll();
                            frame.setCursor(Cursor.getDefaultCursor());
                        }
                    });
                    mPanel.setPreferredSize(new Dimension(1000, 750));
                    panel.setBorder(new EmptyBorder(10, 10, 10, 10));
                    frame.setLayout(new FlowLayout());
                    frame.add(mPanel);
                    frame.pack();
                    JMenuBar menuBar = new Menu().create(frame, mPanel);
                    frame.setJMenuBar(menuBar);
                    frame.setVisible(true);
                    Rectangle rec = GraphicsEnvironment.getLocalGraphicsEnvironment().getMaximumWindowBounds();
                    int width = (int) rec.getWidth();
                    int height = (int) rec.getHeight();
                    frame.setBounds(1, 3, width, height);
                    frame.addComponentListener(new ComponentAdapter() {
                        @Override
                        public void componentMoved(ComponentEvent e) {
                            frame.setLocation(1, 3);
                        }
                    });

    And this is the method I use for creating the table:

        String[][] tableData = new String[g.length][9];
        for (int i = 0; i < tableData.length; i++) {
            tableData[i][0] = g[i].getChampion();
            tableData[i][1] = g[i].getRole();
            tableData[i][2] = g[i].getEnemy();
            tableData[i][3] = g[i].getDifficulty();
            tableData[i][4] = g[i].getResult();
            tableData[i][5] = g[i].getScore();
            tableData[i][6] = g[i].getGameType();
            tableData[i][7] = g[i].getPoints();
            tableData[i][8] = g[i].getLeague();
        }
        final JLabel searchLabel = new JLabel("Search for champion played.");
        final JButton searchButton = new JButton("Search");
        final JTextField searchText = new JTextField(20);
        frame.setTitle("LoL Notepad - reading your notes");
        JTable table = new JTable(tableData, columnNames);
        final JScrollPane scrollPane = new JScrollPane(table);
        scrollPane.setPreferredSize(new Dimension(980, 500));
        panel2.setPreferredSize(new Dimension(1000, 550));
        panel2.setVisible(false);
        panel2.setBorder(new EmptyBorder(10, 10, 10, 10));
        panel3.setVisible(false);
        panel.setLayout(new FlowLayout());
        panel.add(scrollPane);
        searchPanel.add(searchLabel);
        searchPanel.add(searchText);
        searchPanel.add(searchButton);
        searchButton.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                try {
                    frame.setCursor(Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR));
                    search(g, searchText.getText(), frame, bButton);
                    frame.setCursor(Cursor.getDefaultCursor());
                } catch (IOException ex) {
                    Logger.getLogger(Reader.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        });
        table.addMouseListener(new MouseAdapter() {
            @Override
            public void mousePressed(MouseEvent e) {
                if (e.getClickCount() == 1) {
                    JTable target = (JTable) e.getSource();
                    panel.setVisible(false);
                    searchPanel.setVisible(false);
                    bButton.setVisible(false);
                    int row = target.getSelectedRow();
                    specific(row, g, frame, bButton);
                }
            }
        });

  • What is the fastest way to write hundreds of files to disk using C#?

    - by Ehsan
    My program needs to write hundreds of files to disk, received from external resources (the network). Each file is a simple document that I currently store under a GUID name in a specific folder, but creating hundreds of files, writing them, and closing them is a lengthy process. Is there a better way to store this quantity of files to disk? I've come up with a solution, but I don't know if it is the best: first, I create two files, one of which acts like an allocation table and the second of which is a huge file storing all the content of my documents. But reading from this file would be a nightmare; maybe a memory-mapped file technique could help. Could working with 30 GB or more create a problem?
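
    The two-file layout being proposed is essentially a pack file plus an index, and reading stops being a nightmare once the index records each document's offset and length: retrieval is then one seek and one read, and file size (30 GB and beyond) is no obstacle with 64-bit offsets. A sketch of the idea in Python, assuming documents are immutable once written (the JSON index is an illustrative choice):

        import json
        import os

        class PackFile(object):
            def __init__(self, data_path, index_path):
                self.data_path, self.index_path = data_path, index_path
                self.index = {}                       # doc_id -> [offset, length]
                if os.path.exists(index_path):
                    with open(index_path) as f:
                        self.index = json.load(f)

            def append(self, doc_id, payload):
                with open(self.data_path, "ab") as f:
                    f.seek(0, os.SEEK_END)            # be explicit about position
                    offset = f.tell()
                    f.write(payload)
                self.index[doc_id] = [offset, len(payload)]
                with open(self.index_path, "w") as f:
                    json.dump(self.index, f)

            def read(self, doc_id):
                offset, length = self.index[doc_id]
                with open(self.data_path, "rb") as f:
                    f.seek(offset)                    # one seek + one read
                    return f.read(length)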
