Search Results

Search found 6654 results on 267 pages for 'socket io'.

Page 53 of 267

  • shell_exec() Doesn't Show The Output

    - by Nathan Campos
    I'm building a PHP site that calls shell_exec() like this:

        $file = "upload/" . $_FILES["file"]["name"];
        $output = shell_exec("leaf $file");
        echo "<pre>$output</pre>";

    Here leaf is a program located in the same directory as my script, but when I run the script on the server I get no output at all. What is wrong?

    Read the article

  • Second Thread Holding Up Entire Program in C# Windows Form Application

    - by Brandon
    In my Windows Forms application, I'm trying to test whether the user can access a remote machine's shared folder. The way I'm doing this (and I'm sure there are better ways, but I don't know of them) is to check for the existence of a specific directory on the remote machine (I'm doing this because of firewall and other security restrictions in my organization). If the user has rights to the shared folder, the check returns almost instantly, but if they don't, it hangs forever.

    To work around this, I moved the check into another thread and wait only 1000 milliseconds before deciding that the user can't reach the share. However, when I do this, the program still hangs as if the check were running on the main thread. What is making it hang, and how do I fix it? I would have thought that running the check in a separate thread would let me simply leave that thread to finish on its own in the background. Here is my code:

        bool canHitInstallPath = false;
        Thread thread = new Thread(new ThreadStart(() =>
        {
            canHitInstallPath = Directory.Exists(compInfo.InstallPath);
        }));
        thread.Start();
        thread.Join(1000);
        if (canHitInstallPath == false)
        {
            throw new Exception("Cannot hit folder: " + compInfo.InstallPath);
        }

    Read the article

  • Online Image Slideshow Question. File Access Problems.

    - by msandbot
    Hi, I have a Flash .swf file that I embed on my webpage. On my server I have the .swf file and multiple image folders, and I would like to load every file in one of those folders into the Flash slideshow. How should I go about doing this? I tried using AIR, but it doesn't work on my system as an application, so I doubt it will work online. Eventually I plan on making a menu where you can select different folders to display, and since the folders contain different numbers of images, a foreach-style loop would be ideal. Keeping a txt file with the number of images is also possible if there's a way to read that in, but I would prefer the more dynamic approach. I am working towards using PHP for the website, if that helps find a solution. Thanks, -Mike. Also, my slideshow currently works great online, but I have to hardcode the number of files.

    Read the article

  • java.util.zip - ZipInputStream vs. ZipFile

    - by lucho
    Hello, community! I have some general questions regarding the java.util.zip library. What we basically do is an import and an export of many small components. Previously these components were imported and exported using a single big file, e.g.:

        <component-type-a id="1"/>
        <component-type-a id="2"/>
        <component-type-a id="N"/>
        <component-type-b id="1"/>
        <component-type-b id="2"/>
        <component-type-b id="N"/>

    Please note that the order of the components during import is relevant. Now every component should occupy its own file, which should be externally versioned, QA-ed, bla, bla. We decided that the output of our export should be a zip file (with all these files in it) and the input of our import should be a similar zip file. We do not want to explode the zip in our system, and we do not want to open a separate stream for each of the small files. My current questions:

    Q1. Does ZipInputStream guarantee that the zip entries (the little files) will be read in the same order in which they were inserted by our export, which uses ZipOutputStream? I assume reading is something like:

        ZipInputStream zis = new ZipInputStream(new BufferedInputStream(fis));
        ZipEntry entry;
        while ((entry = zis.getNextEntry()) != null) {
            // read from zis until no more data is available
        }

    I know that the central zip directory is put at the end of the zip file, but nevertheless the file entries inside are stored in sequential order. I also know that relying on the order is an ugly idea, but I just want to have all the facts in mind.

    Q2. If I use ZipFile (which I prefer), what is the performance impact of calling getInputStream() hundreds of times? Will it be much slower than the ZipInputStream solution? The zip is opened only once, and ZipFile is backed by RandomAccessFile - is this correct? I assume reading is something like:

        ZipFile zipfile = new ZipFile(argv[0]);
        Enumeration e = zipfile.entries(); // TODO: assure the order of the entries
        while (e.hasMoreElements()) {
            entry = (ZipEntry) e.nextElement();
            is = zipfile.getInputStream(entry);
        }

    Q3. Are the input streams retrieved from the same ZipFile thread-safe (e.g. may I read different entries in different threads simultaneously)? Any performance penalties? Thanks for your answers!
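
    For reference, a minimal sketch of both reading styles (class and method names here are just illustrative, and the try-with-resources form assumes Java 7+). ZipInputStream walks the local entries in the order ZipOutputStream wrote them, while ZipFile goes through the central directory:

        import java.io.*;
        import java.util.Enumeration;
        import java.util.zip.*;

        public class ZipReadSketch {
            // Streaming read: entries come back in the physical order in which
            // ZipOutputStream wrote them (the local headers are sequential).
            static void readSequentially(File zip) throws IOException {
                try (ZipInputStream zis = new ZipInputStream(
                        new BufferedInputStream(new FileInputStream(zip)))) {
                    ZipEntry entry;
                    byte[] buf = new byte[8192];
                    while ((entry = zis.getNextEntry()) != null) {
                        int n;
                        while ((n = zis.read(buf)) != -1) {
                            // process buf[0..n) for this entry
                        }
                        zis.closeEntry();
                    }
                }
            }

            // Random access through the central directory: one open ZipFile,
            // one short-lived InputStream per entry.
            static void readViaZipFile(File zip) throws IOException {
                try (ZipFile zf = new ZipFile(zip)) {
                    Enumeration<? extends ZipEntry> e = zf.entries();
                    while (e.hasMoreElements()) {
                        ZipEntry entry = e.nextElement();
                        try (InputStream is = zf.getInputStream(entry)) {
                            // process this entry's stream
                        }
                    }
                }
            }
        }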

    Read the article

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS-DOS, with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable-length record format much like CSV that I've successfully implemented using the C standard library file I/O functions.

    When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record; EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well; however, I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record.

    Most of the time records will grow in size, because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made to an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads into memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records is, and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area.

    The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file, writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users will end up with a file exceeding 2MB in size.

    I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?

    Read the article

  • Is there a Base64Stream for .NET? Where?

    - by Cheeso
    If I want to produce a Base64-encoded output, how would I do that in .NET? I know that since .NET 2.0, there is the ICryptoTransform interface, and the ToBase64Transform() and FromBase64Transform() implementations of that interface. But those classes are embedded into the System.Security namespace, and require the use of a TransformBlock, TransformFinalBlock, and so on. Is there an easier way to base64 encode a stream of data in .NET?

    Read the article

  • Very weird C file-handling anomaly

    - by KáGé
    Hello, I've got a very weird issue that I can't figure out in my school project, which is the simulation of a simple filesystem in a human-readable text file. Unfortunately I don't yet have enough time to translate the comments in my code or make it less gibberish, so if you are bothered by that, you don't have to help, I understand. See the code HERE. Now in drive.h, at line 574 is this part:

        i = getline();
        #ifdef DEBUG
        printf("Free space in all found at %d.\n\n", i);
        if(drive.disk != NULL){
            printf("Disk OK\n\n");
        }
        #endif
        //write in data
        state = seekline(i);

    Before this it finds a place for the allocation database entry in the ALL sector (see the "image files" in the mounts folder; this issue was tested on mount_30.efs-dbf), then gets the line with i = getline() just fine (getline is in lglobal.h, line 39), but after that any file manipulation (in this case seekline's fseek, or, if I comment that out, the first fprintf after it) crashes the program straight away. I think the file somehow gets corrupted (though the Disk OK message appears) but I can't figure out how. I've tried putting i = getline(); into a comment, but it didn't make any difference. I've also asked at local programming forums, but they didn't really help either. The last few lines of the output before it crashes:

        Dir written. (drive.h line 562)
        Seekline entered: 268 (called at drive.h line 564)
        Getline entered. (called at drive.h line 574)
        Line got: 268.
        Free space in all found at 268. (drive.h line 576)
        Seekline entered: 268 (called at drive.h line 582, note that this exact call ran successfully fewer than 20 lines earlier. This one should set the pointer to the beginning of the line it is currently in)

    After this it crashes. Does anyone have any idea what causes this and how I could fix it? Thank you.

    Read the article

  • How can I find file system concurrency issues?

    - by krosenvold
    I have an application running on Linux, and I find myself wanting Windows (!). The problem is that every 1000 runs or so I hit concurrency problems that are consistent with concurrent reading/writing of files. I am fairly sure this behavior would be prevented by file locking under Windows, but I don't have a sufficiently fast Windows box to check. There is simply too much file access (too much data) to expect strace to work reliably - the sheer volume of output is likely to change the problem too. It also happens on different files every time. Ideally I would like to change/reconfigure the Linux file system to be more restrictive (as in fail-fast) with respect to concurrent access. Are there any tools or settings I can use to achieve this?

    Read the article

  • Determine which process (b)locks a file, programmatically (under Windows >= XP)

    - by fred-hh
    How can I programmatically determine, from a process P, which other process P' holds a lock on a file that prevents P from recreating that file? I know there are tools that do this, but how do they achieve it? (Context: a batch program that runs overnight fails because of a locked file. Running an admin tool the next day may be too late to get useful information, so it would be nice if the batch program itself could determine the culprit.) EDIT: Added complexity: the file resides on a DFS, and P' might not run on the same machine as P (but maybe it does). But a solution that works locally would be a good beginning.

    Read the article

  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in PHP designed to extract MySQL records out of a text file. A particular line might begin with a string identifying which table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash and the fields (columns) are separated by commas. For the sake of simplicity, let's assume that we have a table representing people in our database, with fields First Name, Last Name, and Occupation. Thus, one line of the file might be as follows:

        [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..."

    Where the ellipsis (...) could be additional people. One straightforward approach might be to use fgets() to extract a line from the file, and use preg_match() to extract the table name, records, and fields from that line. However, let's suppose that we have an awful lot of Star Wars characters to track. So many, in fact, that this line ends up being 200,000+ characters/bytes long. In such a case, taking the above approach to extract the database information seems a bit inefficient: you have to first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches. Is there a way, similar to the Java Scanner class's next(String pattern) method when the Scanner is constructed from a file, that allows you to match patterns in-line while scanning through the file? The idea is that you don't have to scan the same text twice (once to read it from the file into a string, and again to match patterns) or store the text redundantly in memory (in both the file-line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.
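
    For reference, the Java mechanism the question alludes to looks roughly like this (a sketch; the delimiter pattern and file name are made up, and the table-header tokens such as [People] = would still need their own handling):

        import java.io.File;
        import java.io.IOException;
        import java.util.Scanner;

        public class RecordScanner {
            public static void main(String[] args) throws IOException {
                // Split the stream on backslashes and quotes; each token is then
                // either a table prefix like [People] =  or one record like
                // Han,Solo,Smuggler - without ever loading the whole line.
                Scanner sc = new Scanner(new File("records.txt"));
                sc.useDelimiter("[\\\\\"]+");
                while (sc.hasNext()) {
                    String token = sc.next();
                    String[] fields = token.split(",");
                    // insert fields into the corresponding table...
                }
                sc.close();
            }
        }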

    Read the article

  • Delete temp file during finally vs delete output file during catch

    - by Russell
    This is in Java 6. I've seen more than once that people create a temp file, do something with it, then rename it to the output file. Everything is wrapped in a try-finally block, where the temp file is deleted in finally in case something goes wrong in between:

        try {
            //do something with tempFile
            //do something with tempFile
            //do something with tempFile
            tempFile.renameTo(outputFile);
        } finally {
            if (tempFile.exists())
                tempFile.delete();
        }

    I was wondering what the benefits of doing that are, instead of working on the output file directly and deleting it in case of an exception:

        try {
            //do something with outputFile
            //do something with outputFile
            //do something with outputFile
        } catch (Exception e) {
            if (outputFile.exists())
                outputFile.delete();
        }

    My guess is that deleting temp files in finally benefits me when the try block can throw many kinds of exceptions. Is my guess right? What else?
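
    For what it's worth, the big benefit is that the real output file never exists in a half-written state: readers only ever see either the old file or the complete new one, and the finally clause also runs for throwables that a catch (Exception e) would miss. A rough Java 6-style sketch of the pattern (class, method and variable names are made up):

        import java.io.*;

        public class SafeSave {
            // Write data to outputFile via a temp file so readers never see a partial file.
            static void save(File outputFile, byte[] data) throws IOException {
                File tempFile = File.createTempFile("save", ".tmp", outputFile.getParentFile());
                boolean renamed = false;
                try {
                    FileOutputStream out = new FileOutputStream(tempFile);
                    try {
                        out.write(data);                       // write everything to the temp file first
                    } finally {
                        out.close();
                    }
                    renamed = tempFile.renameTo(outputFile);   // publish the finished file
                    if (!renamed) {
                        throw new IOException("rename failed: " + tempFile + " -> " + outputFile);
                    }
                } finally {
                    if (!renamed) {
                        tempFile.delete();                     // clean up on any failure path
                    }
                }
            }
        }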

    Read the article

  • Write physical table to csv file

    - by urema
    Hi, I was wondering if anyone knows how to write an actual table/grid to a CSV file... I don't mean the content of the table/grid, I mean the actual grid lines etc., headers, axes... Thanks greatly in advance. U.

    Read the article

  • Need a way to determine if a file is done being written to.

    - by Khorkrak
    The situation I'm in is this - there's a process that's writing to a file, and sometimes the file is rather large, say 400-500MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see the file there, but it might not be finished being written. Plus this needs to be done remotely - as in on the same internal LAN but not on the same computer; typically the process that wants to know when the file writing is done is running on a Linux box, while the process writing the file, and the file itself, are on a Windows box. No, Samba isn't an option. XML-RPC communication to a service on that Windows box is an option, as is using SNMP to check, if that's viable.

    Ideally:
        - Works on either Linux or Windows - meaning the solution is OS independent.
        - Works for any type of file.

    Good enough:
        - Works just on Windows, but can be done through some library or whatever that can be accessed with Python.
        - Works only for PDF files.

    The current best idea is to periodically open the file in question from some process on the Windows box and look at the last bytes, checking for the PDF end tag and accounting for the EOL differences, because the file may have been created on Linux or Windows.
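
    If the "check the tail for the PDF end tag" route is taken, the logic is small in any language; a rough Java sketch of the idea (the asker would likely do this from Python on the Windows box; the 1024-byte window, file handling and method names are made up, and the check also waits for the size to stop changing between polls):

        import java.io.File;
        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class PdfDoneCheck {
            // True if the size has stopped changing since the previous poll and the
            // tail of the file already contains the PDF end-of-file marker "%%EOF".
            static boolean looksFinished(File f, long previousSize) throws IOException {
                if (f.length() != previousSize) {
                    return false;                          // still growing
                }
                RandomAccessFile raf = new RandomAccessFile(f, "r");
                try {
                    long tailStart = Math.max(0, raf.length() - 1024);
                    byte[] tail = new byte[(int) (raf.length() - tailStart)];
                    raf.seek(tailStart);
                    raf.readFully(tail);
                    // Search the raw bytes, so \r, \n or \r\n after the marker all pass.
                    return new String(tail, "ISO-8859-1").contains("%%EOF");
                } finally {
                    raf.close();
                }
            }
        }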

    Read the article

  • Android USB Host Communication

    - by Kip Russell
    I'm working on a project that utilizes the USB host capabilities in Android 3.2. I'm suffering from a deplorable lack of knowledge and talent regarding USB/serial communication in general, and I'm also unable to find any good example code for what I need to do. I need to read from a USB communication device. For example, when I connect via PuTTY (on my PC) I enter:

        >GO

    And the device starts spewing out data for me: pitch/roll/temp/checksum, e.g.

        $R1.217P-0.986T26.3*60
        $R1.217P-0.986T26.3*60
        $R1.217P-0.987T26.3*61
        $R1.217P-0.986T26.3*60
        $R1.217P-0.985T26.3*63

    I can send the initial 'GO' command from the Android device, at which time I receive an echo of 'GO'. Then nothing else on any subsequent reads. How can I: 1) send the 'GO' command, and 2) read in the stream of data that results? The USB device I'm working with has the following interfaces (endpoints):

        Device Class: Communication Device (0x2)
        Interfaces:
          Interface #0 Class: Communication Device (0x2)
            Endpoint #0
              Direction: Inbound (0x80)
              Type: Interrupt (0x3)
              Poll Interval: 255
              Max Packet Size: 32
              Attributes: 000000011
          Interface #1 Class: Communication Device Class (CDC) (0xa)
            Endpoint #0
              Address: 129
              Number: 1
              Direction: Inbound (0x80)
              Type: Bulk (0x2)
              Poll Interval: 0
              Max Packet Size: 32
              Attributes: 000000010
            Endpoint #1
              Address: 2
              Number: 2
              Direction: Outbound (0x0)
              Type: Bulk (0x2)
              Poll Interval: 0
              Max Packet Size: 32
              Attributes: 000000010

    I'm able to deal with permissions, connect to the device, find the correct interface and assign the endpoints. I'm just having trouble figuring out which technique to use to send the initial command and read the ensuing data. I've tried different combinations of bulkTransfer and controlTransfer with no luck. Thanks. I'm using interface #1 as seen below:

        public AcmDevice(UsbDeviceConnection usbDeviceConnection, UsbInterface usbInterface) {
            Preconditions.checkState(usbDeviceConnection.claimInterface(usbInterface, true));
            this.usbDeviceConnection = usbDeviceConnection;
            UsbEndpoint epOut = null;
            UsbEndpoint epIn = null;
            // look for our bulk endpoints
            for (int i = 0; i < usbInterface.getEndpointCount(); i++) {
                UsbEndpoint ep = usbInterface.getEndpoint(i);
                Log.d(TAG, "EP " + i + ": " + ep.getType());
                if (ep.getType() == UsbConstants.USB_ENDPOINT_XFER_BULK) {
                    if (ep.getDirection() == UsbConstants.USB_DIR_OUT) {
                        epOut = ep;
                    } else if (ep.getDirection() == UsbConstants.USB_DIR_IN) {
                        epIn = ep;
                    }
                }
            }
            if (epOut == null || epIn == null) {
                throw new IllegalArgumentException("Not all endpoints found.");
            }
            AcmReader acmReader = new AcmReader(usbDeviceConnection, epIn);
            AcmWriter acmWriter = new AcmWriter(usbDeviceConnection, epOut);
            reader = new BufferedReader(acmReader);
            writer = new BufferedWriter(acmWriter);
        }
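
    In case it helps, here is a rough Java sketch of one way to kick off the stream using the bulk endpoints claimed from interface #1 above. The control-transfer values assume a standard CDC ACM device (many of them stay silent until DTR is asserted), the "\r\n" line terminator after GO is a guess, and the class, method and buffer names are made up:

        import android.hardware.usb.UsbDeviceConnection;
        import android.hardware.usb.UsbEndpoint;

        class StreamStarter {
            private static final int TIMEOUT_MS = 1000;

            // conn, epIn and epOut are the connection and bulk endpoints already
            // claimed from interface #1 in the AcmDevice constructor above.
            // Run this loop on a background thread, not on the UI thread.
            void start(UsbDeviceConnection conn, UsbEndpoint epIn, UsbEndpoint epOut) {
                // CDC ACM class request SET_CONTROL_LINE_STATE (0x22),
                // value 0x3 = DTR | RTS, index = communication interface number (0 here).
                conn.controlTransfer(0x21, 0x22, 0x03, 0, null, 0, TIMEOUT_MS);

                byte[] cmd = "GO\r\n".getBytes();           // terminator is a guess
                conn.bulkTransfer(epOut, cmd, cmd.length, TIMEOUT_MS);

                byte[] buf = new byte[32];                   // endpoint max packet size
                StringBuilder line = new StringBuilder();
                while (true) {
                    int n = conn.bulkTransfer(epIn, buf, buf.length, TIMEOUT_MS);
                    if (n <= 0) continue;                    // timeout or error; retry
                    for (int i = 0; i < n; i++) {
                        char c = (char) buf[i];
                        if (c == '\n') {
                            // one full record, e.g. $R1.217P-0.986T26.3*60
                            handleRecord(line.toString().trim());
                            line.setLength(0);
                        } else {
                            line.append(c);
                        }
                    }
                }
            }

            void handleRecord(String record) {
                // parse pitch/roll/temp/checksum here
            }
        }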

    Read the article

  • What's the correct way to do a "catch all" error check on an fstream output operation?

    - by Truncheon
    What's the correct way to check for a general error when sending data to an fstream?

    UPDATE: My main concern regards some things I've been hearing about a delay between output and any data being physically written to the hard disk. My assumption was that the command save_file_obj << save_str would only send data to some kind of buffer, and that the following check if (save_file_obj.bad()) would be of no use in determining whether there was an OS or hardware problem. I just wanted to know the definitive "catch all" way to send a string to a file and check that it was actually written to the disk, before carrying out any following actions such as closing the program. I have the following code:

        int Saver::output()
        {
            save_file_handle.open(file_name.c_str());
            if (save_file_handle.is_open())
            {
                save_file_handle << save_str.c_str();
                if (save_file_handle.bad())
                {
                    x_message("Error - failed to save file");
                    return 0;
                }
                save_file_handle.close();
                if (save_file_handle.bad())
                {
                    x_message("Error - failed to save file");
                    return 0;
                }
                return 1;
            }
            else
            {
                x_message("Error - couldn't open save file");
                return 0;
            }
        }

    Read the article

  • Writing the output of IEnumerable<XElement> to an XML File

    - by Googler
    Hi folks, this is my code to read two XML files and merge their elements. I want to write the output to an XML file. This is my code:

        IEnumerable<XElement> list0 = doc.Descendants(node1).Concat(doc2.Descendants(node2));
        foreach (XElement el in list0)
            Console.WriteLine(el);

    Instead of writing to the console, I need to write it to an XML file. The output is also in XML format. How do I achieve this? Can anyone please give me a code sample or method for this?

    Read the article

  • Why does DataInputStream not support integers?

    - by Jason
    I need to read in a list of numbers from a file, none of which are larger than 32767. Originally I was going to use the Scanner class to pull in the data, then I read about DataInputStream. This would work well for me, except that according to the API it supports all primitive variables EXCEPT ints! Listed are longs, shorts, bytes, chars, booleans, etc., but no ints. I have no need for double precision from the incoming data. Is this a deliberate or unintentional oversight?
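
    For what it's worth, DataInputStream does define readInt() and readShort(); they read big-endian binary values rather than text, so for a plain-text list of numbers Scanner (or BufferedReader plus Integer.parseInt) is usually the better fit. A minimal sketch of both (file names are made up):

        import java.io.*;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Scanner;

        public class ReadNumbers {
            // For a plain-text file of numbers, Scanner is the simpler fit.
            static List<Integer> readText(File f) throws IOException {
                List<Integer> values = new ArrayList<Integer>();
                Scanner sc = new Scanner(f);
                while (sc.hasNextInt()) {
                    values.add(sc.nextInt());
                }
                sc.close();
                return values;
            }

            // DataInputStream.readInt()/readShort() read big-endian *binary*
            // values, not text digits.
            static int readBinaryInt(File f) throws IOException {
                DataInputStream in = new DataInputStream(new FileInputStream(f));
                try {
                    return in.readInt();
                } finally {
                    in.close();
                }
            }
        }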

    Read the article

  • How do I turn an array of bytes back into a file and open it automatically with C#?

    - by Ace Grace
    Hi, I am writing some code to add file attachments to an application I am building. I have Add and Remove working, but I don't know where to start with implementing Open. I have an array of bytes (from a table field) and I don't know how to make it open automatically. E.g. if I have an array of bytes which is a PDF, how do I get my app, using C#, to automatically open Acrobat or whatever application is currently assigned to that extension?

    Read the article

  • Write a file in UTF-8 using FileWriter (Java)?

    - by user1280970
    I have the following code; however, I want it to write a UTF-8 file to handle foreign characters. Is there a way of doing this? Is there some parameter I need to pass? I would really appreciate your help with this. Thanks.

        try {
            BufferedReader reader = new BufferedReader(new FileReader("C:/Users/Jess/My Documents/actresses.list"));
            writer = new BufferedWriter(new FileWriter("C:/Users/Jess/My Documents/actressesFormatted.csv"));
            while ((line = reader.readLine()) != null) {
                // If the line starts with a tab then we just want to add a movie
                // using the current actor's name.
                if (line.length() == 0)
                    continue;
                else if (line.charAt(0) == '\t') {
                    readMovieLine2(0, line, surname.toString(), forename.toString());
                }
                // Else we've reached a new actor
                else {
                    readActorName(line);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
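
    FileWriter and FileReader always use the platform default encoding; the usual fix is to wrap the byte streams yourself so you can name the charset. A minimal sketch (file names shortened, UTF-8 assumed for the input; the real .list file may use another encoding):

        import java.io.*;

        public class Utf8Copy {
            public static void main(String[] args) throws IOException {
                BufferedReader reader = new BufferedReader(new InputStreamReader(
                        new FileInputStream("actresses.list"), "UTF-8"));
                BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(
                        new FileOutputStream("actressesFormatted.csv"), "UTF-8"));
                String line;
                while ((line = reader.readLine()) != null) {
                    writer.write(line);   // the per-line processing from above would go here
                    writer.newLine();
                }
                writer.close();
                reader.close();
            }
        }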

    Read the article

  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

        - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance
        - Missing operations such as truncating or extending a file (a la POSIX ftruncate)
        - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not
        - Inconsistency between platform implementations; e.g. behavior of seeking past the end of a file, or usage of failbit/badbit on errors
        - Don't need all the formatting facilities of stream or possibly even the buffering of streambuf
        - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation. Does anyone have any recommendations for a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        class my_istream
        {
        public:
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream
        {
        public:
            boost_istream(const T& device) : m_device(device)
            {
            }
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way)
            {
                return boost::iostreams::seek(m_device, off, way);
            }
            virtual std::streamsize read(char* s, std::streamsize n)
            {
                return boost::iostreams::read(m_device, s, n);
            }
            virtual void close()
            {
                boost::iostreams::close(m_device);
            }
        private:
            T m_device;
        };

    Read the article

  • How do I make my encryption algorithm encrypt more than 128 bits?

    - by Ranhiru
    OK, now I have coded an implementation of AES-128 :) It is working fine. It takes in 128 bits, encrypts them and returns 128 bits. So how do I enhance my function so that it can handle more than 128 bits? How do I make the encryption algorithm handle larger strings? Can the same algorithm be used to encrypt files? :) The function definition is:

        public byte[] Cipher(byte[] input) { }
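
    The standard answer is a block cipher mode of operation (CBC, CTR, ...) plus padding, which chains 16-byte blocks so plaintext of any length - including file contents - can be processed; mainstream crypto libraries do this for you. The asker's snippet is C#, but purely as an illustration of the idea, here is a sketch in Java using the JDK's built-in AES rather than a hand-rolled cipher:

        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.IvParameterSpec;

        public class AesCbcDemo {
            public static void main(String[] args) throws Exception {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(128);
                SecretKey key = kg.generateKey();

                byte[] iv = new byte[16];                 // one fresh IV per message
                new SecureRandom().nextBytes(iv);

                // CBC chains 16-byte blocks; PKCS5 padding handles the final
                // partial block, so the plaintext can be any length.
                Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
                enc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                byte[] ciphertext = enc.doFinal("any length of data, not just 16 bytes".getBytes("UTF-8"));

                Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
                dec.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
                byte[] plaintext = dec.doFinal(ciphertext);
                System.out.println(new String(plaintext, "UTF-8"));
            }
        }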

    Read the article

  • create a sparse BufferedImage in java

    - by elgcom
    I have to create an image with a very large resolution, but the image is relatively "sparse": only some areas in the image need to be drawn. For example, with the following code:

        /* this takes 5GB of memory */
        final BufferedImage img = new BufferedImage(
            36000, 36000, BufferedImage.TYPE_INT_ARGB);

        /* draw something */
        Graphics g = img.getGraphics();
        g.drawImage(....);

        /* output as PNG */
        final File out = new File("out.png");
        ImageIO.write(img, "png", out);

    The PNG image I create at the end is ONLY about 200~300 MB. The question is how can I avoid creating a 5GB BufferedImage at the beginning? I do need an image with large dimensions, but with very sparse color information. Is there any Stream for BufferedImage so that it will not take so much memory?
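
    One partial mitigation (not a true streaming encoder): if the drawing only needs a handful of colours, an indexed raster shrinks the in-memory image dramatically - roughly 324 MB at 2 bits per pixel for 36000x36000, versus about 5 GB for TYPE_INT_ARGB. A rough sketch (the palette and the drawing are made up; a row-by-row PNG encoder would be needed to avoid the full raster entirely):

        import java.awt.*;
        import java.awt.image.*;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class SparsePng {
            public static void main(String[] args) throws Exception {
                // 4-colour palette: background, red, green, blue (2 bits per pixel).
                byte[] reds   = {0, (byte) 255, 0,          0};
                byte[] greens = {0, 0,          (byte) 255, 0};
                byte[] blues  = {0, 0,          0,          (byte) 255};
                IndexColorModel palette = new IndexColorModel(2, 4, reds, greens, blues);

                BufferedImage img = new BufferedImage(36000, 36000,
                        BufferedImage.TYPE_BYTE_BINARY, palette);  // ~324 MB raster

                Graphics2D g = img.createGraphics();
                g.setColor(Color.RED);
                g.fillRect(1000, 1000, 500, 500);   // draw only the sparse regions
                g.dispose();

                ImageIO.write(img, "png", new File("out.png"));
            }
        }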

    Read the article

  • Intercept windows open file

    - by HyLian
    Hello, I'm trying to make a small program that can intercept the opening of a file. The idea is that when a user double-clicks on a file in a given folder, Windows would notify the software, which would then process that request and return the file's data to Windows. Another solution might be to monitor Open messages and force Windows to wait while the program prepares the contents of the file. One application of this concept could be to handle decryption of a file in a way that is transparent to the user. In this context, the encrypted file would be on the disk, and when the user opens it (by double-clicking it or with some application such as Notepad), the background process would intercept that open event, decrypt the file and hand the contents of that file to the asking application. It's a slightly strange concept; it's like the "man in the middle" idea from networking, but with files instead of network packets. Thanks for reading.

    Read the article

  • Create a Stream without having a physical file to create from.

    - by jhorton
    I need to create a zip file containing documents that exist on the server. I am using the .NET Package class to do so, and to create a new Package (which is the zip file) I have to have either a path to a physical file or a stream. I am trying not to create an actual file to serve as the zip file; instead I just want a stream that exists in memory or something like that. My question is how you instantiate a new Stream (e.g. FileStream, MemoryStream, etc.) without having a physical file to instantiate it from.

    Read the article
