Search Results

Search found 299 results on 12 pages for 'buffering'.

Page 3/12 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • CGI Buffering issue

    - by Punit
    I have server-side C-based CGI code:

        cgiFormFileSize("UPDATEFILE", &size);  /* UPDATEFILE = file being uploaded */
        cgiFormFileName("UPDATEFILE", file_name, 1024);
        cgiFormFileContentType("UPDATEFILE", mime_type, 1024);
        buffer = malloc(sizeof(char) * size);
        if (cgiFormFileOpen("UPDATEFILE", &file) != cgiFormSuccess) {
            exit(1);
        }
        output = fopen("/tmp/cgi.tar.gz", "w+");
        printf("The size of file is: %d bytes", size);
        inc = size / (1024 * 100);
        while (cgiFormFileRead(file, b, sizeof(b), &got_count) == cgiFormSuccess) {
            fwrite(b, sizeof(char), got_count, output);
            i++;
            if (i == inc && j <= 100) {
                inc_pb = j;  /* j is the progress bar increment value */
                i = 0;
                j++;
            }
        }
        cgiFormFileClose(file);
        retval = system("mkdir /tmp/update-tmp;\
                         cd /tmp/update-tmp;\
                         tar -xzf ../cgi.tar.gz;\
                         bash -c /tmp/update-tmp/update.sh");

    However, this doesn't work the way it reads. Instead of printing 1, 2, ..., 100 to progress_bar.txt one by one, it prints them all in ONE GO; it seems the output is buffered first and only then written to the file. fflush() also didn't work. Any clue/suggestion would be really appreciated.
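
    One thing worth checking (a sketch, not from the post, since the code that actually writes progress_bar.txt isn't shown): stdio streams to files are fully buffered by default, so the progress stream can be made unbuffered with setvbuf(); fflush() only helps if it is called on the same FILE* that backs progress_bar.txt, after every update.

        #include <stdio.h>

        int main(void) {
            FILE *pb = fopen("/tmp/progress_bar.txt", "w");  /* assumed path */
            if (pb == NULL)
                return 1;
            setvbuf(pb, NULL, _IONBF, 0);  /* unbuffered: each fprintf hits the file */
            for (int j = 1; j <= 100; j++) {
                rewind(pb);                /* keep only the latest value in the file */
                fprintf(pb, "%d\n", j);
            }
            fclose(pb);
            return 0;
        }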

    Read the article

  • SocketAsyncEventArgs and buffering while messages are in parts

    - by Rob
    We have a C# socket server with roughly 200-500 active connections, each one constantly sending messages to our server. About 70% of the time the messages are handled fine (in the correct order, etc.), but in the other 30% of cases we get jumbled-up messages and things get screwed up. We should note that some clients send data in Unicode and others in ASCII; that's handled as well. Messages sent to the server are variable-length strings ending in a char 3 (ETX); it's the char 3 that we break on, other than that we keep receiving data. Could anyone shed any light on our ProcessReceive code and see what could possibly be causing us issues, and how we can solve this small issue (here's hoping it's a small issue!)? Code below:
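
    A common cause of exactly this symptom is keeping partially received text in state shared between connections, or assuming each receive callback delivers exactly one whole message. A minimal sketch of per-connection framing on the ETX delimiter (class and member names are invented, not from the poster's code):

        using System.Collections.Generic;
        using System.Text;

        // One instance per accepted socket: partial messages from
        // different clients can then never interleave.
        class MessageFramer
        {
            private readonly StringBuilder _pending = new StringBuilder();

            // Feed each decoded chunk from ProcessReceive; yields only
            // complete messages (terminated by char 3 / ETX).
            public IEnumerable<string> Append(string chunk)
            {
                _pending.Append(chunk);
                int etx;
                while ((etx = IndexOfEtx()) >= 0)
                {
                    string message = _pending.ToString(0, etx);
                    _pending.Remove(0, etx + 1);  // drop the message and its delimiter
                    yield return message;
                }
            }

            private int IndexOfEtx()
            {
                for (int i = 0; i < _pending.Length; i++)
                    if (_pending[i] == '\u0003') return i;
                return -1;
            }
        }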

    Read the article

  • Force line-buffering of stdout when piping to tee

    - by houbysoft
    Usually, stdout is line-buffered. In other words, as long as your printf argument ends with a newline, you can expect the line to be printed instantly. This does not appear to hold when using a pipe to redirect to tee. I have a C++ program, a, that outputs strings, always \n-terminated, to stdout. When it is run by itself (./a), everything prints correctly and at the right time, as expected. However, if I pipe it to tee (./a | tee output.txt), it doesn't print anything until it quits, which defeats the purpose of using tee. I know that I could fix it by adding a fflush(stdout) after each printing operation in the C++ program. But is there a cleaner, easier way? Is there a command I can run, for example, that would force stdout to be line-buffered, even when using a pipe?
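
    Two approaches commonly fix this. Outside the program, GNU coreutils' stdbuf can force line buffering onto the child: stdbuf -oL ./a | tee output.txt. Inside the program, a single setvbuf() call at the top of main() replaces the per-call fflush() noise -- a minimal sketch:

        #include <cstdio>

        int main() {
            // Line-buffer stdout explicitly, even when it is a pipe;
            // every \n-terminated printf then reaches tee immediately.
            std::setvbuf(stdout, nullptr, _IOLBF, 0);
            std::printf("visible in output.txt right away\n");
            return 0;
        }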

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bounded N, e.g. 36), while throwing out newline characters?

    I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go Slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        >>> import cStringIO
        >>> example_file = cStringIO.StringIO("""\
        ... header
        ... CAGTcag
        ... TFgcACF
        ... """)
        >>> for read in parse(example_file):
        ...     print read
        ...
        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance of the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks, everyone!
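
    For reference, a minimal sketch of the slurp-and-slide approach the conclusion settles on (assuming each chromosome file fits comfortably in memory, which holds at ~20 million characters):

        def parse_whole(fileobj, size=8):
            # Read everything at once, drop the header line and the
            # newlines, then slide a window of length `size`.
            body = fileobj.read().split('\n', 1)[1]
            seq = body.replace('\n', '').upper()
            for i in xrange(len(seq) - size + 1):
                yield seq[i:i + size]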

    Read the article

  • Why isn't my log4net appender buffering?

    - by Eric
    I've created a custom log4net appender. It descends from log4net.Appender.SmtpAppender, which descends from log4net.Appender.BufferingAppenderSkeleton. I programmatically set up the following parameters in its constructor:

        this.Lossy = false;   // don't drop any messages
        this.BufferSize = 3;  // buffer up to 3 messages
        this.Threshold = log4net.Core.Level.Error;  // append messages of Error or higher
        this.Evaluator = new log4net.Core.LevelEvaluator(Level.Off);  // don't flush the buffer for any message, regardless of level

    I expected this would buffer 3 events of level Error or higher and deliver those events when the buffer is filled. However, I'm finding that the events are not buffered at all; instead, SendBuffer() is called immediately every time an error is logged. Is there a mistake in my configuration? Thanks

    Read the article

  • BufferedReader no longer buffering after a while?

    - by BobTurbo
    Sorry I can't post code but I have a BufferedReader with 50000000 bytes set as the buffer size. It works as you would expect for half an hour: the HDD light flashes every two minutes or so as the big chunk of data is read in, then goes quiet again while the CPU processes it. But after about half an hour (this is a very big file), the HDD starts thrashing as if it is reading one byte at a time. It is still in the same loop, and I think I checked free RAM to rule out swapping (heap size is default). Probably won't get any helpful answers, but worth a try.

    OK, I have changed the heap size to 768mb and still nothing. There is plenty of free memory and java.exe is only using about 300mb. Now I have profiled it: the heap stays at about 200MB, well below what is available, and the CPU stays at 50%, yet the HDD starts thrashing like crazy. I have... no idea. I am going to rewrite the whole thing in C#; that is my solution. Here is the code (it is just a throw-away script, not pretty):

        // a..g below are precompiled java.util.regex Patterns whose
        // declarations the poster omitted
        BufferedReader s = null;
        HashMap<String, Integer> allWords = new HashMap<String, Integer>();
        HashSet<String> pageWords = new HashSet<String>();
        long[] pageCount = new long[78592];
        long pages = 0;

        Scanner wordFile = new Scanner(new BufferedReader(new FileReader("allWords.txt")));
        while (wordFile.hasNext()) {
            allWords.put(wordFile.next(), Integer.parseInt(wordFile.next()));
        }

        s = new BufferedReader(new FileReader("wikipedia/enwiki-latest-pages-articles.xml"), 50000000);
        StringBuilder words = new StringBuilder();
        String nextLine = null;
        while ((nextLine = s.readLine()) != null) {
            if (a.matcher(nextLine).matches()) {
                continue;
            } else if (b.matcher(nextLine).matches()) {
                continue;
            } else if (c.matcher(nextLine).matches()) {
                continue;
            } else if (d.matcher(nextLine).matches()) {
                nextLine = s.readLine();
                if (e.matcher(nextLine).matches()) {
                    if (f.matcher(s.readLine()).matches()) {
                        pageWords.addAll(Arrays.asList(words.toString().toLowerCase().split("[^a-zA-Z]")));
                        words.setLength(0);
                        pages++;
                        for (String word : pageWords) {
                            if (allWords.containsKey(word)) {
                                pageCount[allWords.get(word)]++;
                            } else if (!word.isEmpty() && allWords.containsKey(word.substring(0, word.length() - 1))) {
                                pageCount[allWords.get(word.substring(0, word.length() - 1))]++;
                            }
                        }
                        pageWords.clear();
                    }
                }
            } else if (g.matcher(nextLine).matches()) {
                continue;
            }
            words.append(nextLine);
            words.append(" ");
        }

    Read the article

  • What is faster: multiple `send`s or using buffering?

    - by dauerbaustelle
    I'm playing around with sockets in C/Python and I wonder what is the most efficient way to send headers from a Python dictionary to the client socket. My ideas:

    - Use a send call for every header. Pros: no memory allocation needed. Cons: many send calls -- probably error prone; error management should be rather complicated.
    - Use a buffer. Pros: one send call, error checking a lot easier. Cons: need a buffer :-) malloc/realloc should be rather slow, and using a (too) big buffer to avoid realloc calls wastes memory.

    Any tips for me? Thanks :-)
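
    A middle ground worth sketching (not from the post): build the whole header block in user space with the language's own string machinery, then hand it to the kernel in a single sendall(). In Python there is no manual malloc/realloc to worry about, and one call gives one place to handle errors:

        def send_headers(sock, headers):
            # headers: dict of name -> value. Join everything first,
            # then issue one sendall() instead of one send() per header.
            blob = "".join("%s: %s\r\n" % (k, v) for k, v in headers.items())
            sock.sendall(blob + "\r\n")  # trailing blank line ends the headers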

    Read the article

  • What about buffering FileInputStream?

    - by Pregzt
    I have a piece of code that reads a hell of a lot (hundreds of thousands) of relatively small files (a couple of KB each) from the local file system in a loop. For each file a java.io.FileInputStream is created to read the content. The process is very slow and takes ages. Do you think that wrapping the FIS into a java.io.BufferedInputStream would make a significant difference?
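
    A sketch of the wrapping in question (path and sizes are placeholders). One caveat worth stating: BufferedInputStream pays off when a file is consumed through many small read() calls; if each small file is slurped with one large read, the dominant cost is the per-file open/close, which no amount of buffering removes.

        import java.io.BufferedInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class SmallFileReader {
            static int countBytes(String path) throws IOException {
                // One syscall fills the 8 KB buffer; the single-byte
                // reads below are then served from memory.
                try (InputStream in = new BufferedInputStream(new FileInputStream(path), 8192)) {
                    int count = 0;
                    while (in.read() != -1) {
                        count++;
                    }
                    return count;
                }
            }
        }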

    Read the article

  • Is there a way to set up a Linux pipe to non-buffering or line-buffering?

    - by ern0
    My program is controlling an external application on Linux, passing input commands via a pipe to the external application's stdin, and reading output results via a pipe from the external application's stdout. The problem is that writes to pipes are buffered by block, not by line, and therefore delays occur before my app receives data output by the external application. The external application cannot be altered to add explicit fflush() calls. When I set the external application to /bin/cat -n (it echoes back the input, with line numbers added), it works correctly; cat, it seems, flushes after each line. The only way to force the external application to flush is to send it the exit command; as it receives the command, it flushes, and all the answers appear on stdout just before exiting. I'm pretty sure that Unix pipes are an appropriate solution for this kind of interprocess communication (pseudo server-client), but maybe I'm wrong. (I've just copied some text from a similar question: Force another program's standard output to be unbuffered using Python)
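
    The usual workaround when the child can't be modified: connect its stdout to a pseudo-terminal instead of a pipe, so its C library believes it is talking to a console and line-buffers accordingly. A minimal C sketch (the binary name is hypothetical; on glibc, link with -lutil):

        #include <pty.h>    /* forkpty() */
        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            int master;
            pid_t pid = forkpty(&master, NULL, NULL, NULL);
            if (pid == 0) {  /* child: stdio now sees a TTY, so it line-buffers */
                execlp("./external_app", "./external_app", (char *)NULL);
                _exit(127);
            }
            char buf[4096];
            ssize_t n;
            while ((n = read(master, buf, sizeof buf)) > 0)
                write(STDOUT_FILENO, buf, (size_t)n);  /* lines arrive as printed */
            return 0;
        }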

    Read the article

  • Java Applet Buffering images

    - by Dan
    OK so here's my code: http://www.so.pastebin.com/Qca4ERmy I am trying to use buffers so the applet won't flicker upon redraw() but it seems I am having trouble. The applet still flickers.... Help? Thank you.
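
    Without seeing the pastebin, the classic cure for applet flicker is double buffering: draw the scene into an offscreen Image and override update(), which by default clears the visible surface before every paint(). A sketch:

        import java.applet.Applet;
        import java.awt.Graphics;
        import java.awt.Image;

        public class BufferedApplet extends Applet {
            private Image offscreen;

            @Override
            public void update(Graphics g) {
                paint(g);  // skip AWT's background clear, the usual flicker source
            }

            @Override
            public void paint(Graphics g) {
                if (offscreen == null)
                    offscreen = createImage(getWidth(), getHeight());
                Graphics og = offscreen.getGraphics();
                og.clearRect(0, 0, getWidth(), getHeight());
                // ... draw the whole scene into og here ...
                og.dispose();
                g.drawImage(offscreen, 0, 0, this);  // one flicker-free blit
            }
        }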

    Read the article

  • Why save output until the end?

    - by user509006
    Very quick question about programming practices here: I've always used echo() to output HTML code to the user as soon as it was generated, and used ob_start() at the same time to be able to output headers later in the code. Recently, I was made aware that this is bad programming practice and I should be saving HTML output until the end. Is there a reason for this? What is it, and why isn't output buffering a good alternative? Thanks!

    Read the article

  • How can two threads access a common array of buffers with minimal blocking ? (c#)

    - by Jelly Amma
    Hello, I'm working on an image processing application where I have two threads on top of my main thread:

    1. CameraThread, which captures images from the webcam and writes them into a buffer
    2. ImageProcessingThread, which takes the latest image from that buffer for filtering

    The reason why this is multithreaded is that speed is critical, and I need CameraThread to keep grabbing pictures and making the latest capture ready to pick up by ImageProcessingThread while it's still processing the previous image. My problem is about finding a fast and thread-safe way to access that common buffer, and I've figured that, ideally, it should be a triple buffer (image[3]), so that if ImageProcessingThread is slow, CameraThread can keep on writing on the two other images, and vice versa. What sort of locking mechanism would be the most appropriate for this to be thread-safe? I looked at the lock statement, but it seems like it would make a thread block-wait for another one to finish, and that would be against the point of triple buffering. Thanks in advance for any idea or advice. J.
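
    A lock-free sketch of the "latest frame wins" core of this design (a one-slot mailbox; a full triple buffer adds a third image so allocations are recycled, but the atomic-exchange mechanics are the same). Neither thread ever blocks on the other:

        using System.Threading;

        // One pending slot between producer and consumer; names invented.
        class LatestFrameMailbox<T> where T : class
        {
            private T _pending;  // most recent completed frame, or null

            // CameraThread: publish a finished frame; never blocks.
            public void Publish(T frame) =>
                Interlocked.Exchange(ref _pending, frame);

            // ImageProcessingThread: take the newest frame, or null if
            // nothing new arrived since the last call; never blocks.
            public T TakeLatest() =>
                Interlocked.Exchange(ref _pending, null);
        }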

    Read the article

  • How can you buffer two online videos alternately while playing one of them?

    - by Rajats1234
    On my website, I want my user to be able to launch video 2 at any point in the middle of video 1 without waiting or refreshing the window. How can I buffer the two videos such that I buffer video 1 enough to let it start playing, and then buffer just enough of video 2 that if the user launches it at any point, he does not have to wait to view its first few seconds, and then I can come back and buffer the rest of video 1? If this is not possible, I could also look at buffering video 1 and 2 in parallel while playing only video 1. Thanks!

    Read the article

  • stream output of a command continuously run in php

    - by chandan kharbanda
    I am running this simple command using exec:

        { ./a.out < in; } &> output.txt

    and I want to output what's coming into output.txt through an AJAX request, continuously. I also want a way to turn off my output buffering. I tried flush() and the ob_* functions but didn't succeed. In my /etc/php5/apache2/php.ini the value of output_buffering is 4096; when I change it to Off and restart my apache2 server, it fails to restart. P.S. I am using a LAMP server.
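
    A sketch of the endpoint the AJAX request could stream from (the path and the idle limit are illustrative): unwind PHP's userland buffers once, then echo new lines from output.txt as they appear. Note that mod_deflate or zlib.output_compression can still hold output back even after flush().

        <?php
        while (ob_get_level() > 0) {
            ob_end_flush();          // unwind any ob_start() buffers
        }
        ob_implicit_flush(true);     // flush automatically after each output

        $fh = fopen('/path/to/output.txt', 'r');  // illustrative path
        $idle = 0;
        while ($idle < 300) {        // give up after ~5 idle minutes
            $line = fgets($fh);
            if ($line === false) {
                fseek($fh, 0, SEEK_CUR);  // clear EOF so new data is seen
                sleep(1);
                $idle++;
                continue;
            }
            $idle = 0;
            echo $line;
            flush();                 // push the line through Apache now
        }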

    Read the article

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody. In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third one. This works pretty slowly on big files, and as far as I can see there is no other choice (the iteration has to be done), so I'm trying to optimize the file loading. I can use some amount of RAM for buffering. I mean, instead of reading 4 bytes from both files every time, I can read something like 100Mb once, work with that buffer until there is no element left in it, and then refill the buffer. But I guess ifstream is already doing that; will it give me more performance, and is there any reason to do it myself? If fstream does buffer, maybe I can change the size of that buffer?

    Added: my current code looks like this (pseudocode):

        // this is done in a loop
        int i1 = input1.read_integer();
        int i2 = input2.read_integer();
        if (!input1.eof() && !input2.eof()) {
            if (i1 < i2) {
                output.write(i1);
                input2.seek_back(sizeof(int));
            } else {
                input1.seek_back(sizeof(int));
                output.write(i2);
            }
        } else {
            if (input1.eof())
                output.write(i2);
            else if (input2.eof())
                output.write(i1);
        }

    What I don't like here:

    - seek_back: I have to seek back to the previous position, as there is no way to peek 4 bytes
    - too much reading from the file
    - if one of the streams is at EOF, it still continues to check that stream instead of putting the contents of the other stream directly into the output; this is not a big issue, though, because the chunk sizes are almost always equal.

    Can you suggest an improvement for that? Thanks.
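
    On the buffer-size question specifically: the stream's buffer can be enlarged through its streambuf, with the portability caveat that pubsetbuf() must be called before the file is opened. Whether a buffer much larger than the default helps depends on the standard library and the OS readahead, so it is worth measuring. A sketch:

        #include <fstream>
        #include <vector>

        int main() {
            std::vector<char> buf(100 * 1024 * 1024);  // ~100Mb, as in the question
            std::ifstream input;
            // Install the buffer first, then open; fewer, larger reads hit the OS.
            input.rdbuf()->pubsetbuf(buf.data(), static_cast<std::streamsize>(buf.size()));
            input.open("input1.bin", std::ios::binary);  // illustrative file name
            // ... merge loop reads as before ...
        }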

    Read the article

  • Creating a shim Stream

    - by spender
    A decompression API that I am using has the following API: Decode(Stream inStream,Stream outStream) I'd like to create a wrapper around this API, such that I can create my own Stream class which offers up the decoded data. Stream decodedStream=new BlaDecodeStream(inStream); So that I can than use this stream as a parameter to the XmlReader constructor in the same way one might use the System.IO.Compression.GZipStream. As far as I can tell, the only other option is set outStream stream to a MemoryStream or to a FileStream and go in two hops. The files I am dealing with are enormous, so neither of these options are particularly attractive. Before I go reinventing the wheel, is there any prior art that I might be able to draw from, or something in the BCL I might have missed? The CircularStream implementation here would go some of the way to helping, but I'm really looking for something similar that would block (as opposed to over/underrun) when the Stream's internal buffer is 'empty' when reading from it and block when the internal buffer is full when writing to it. In this way it could serve as parameter outStream and simultaneously (i.e. from another thread) could be read from by the XmlReader.
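
    One way to get the blocking semantics described: a bounded producer/consumer queue of byte chunks wrapped in a Stream, with Decode() writing on one thread and XmlReader reading on another. The bounded capacity supplies the back-pressure (writers block when full, readers block when empty). A sketch with invented names, minimally implementing the Stream contract:

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading;

        class BlockingPipeStream : Stream
        {
            private readonly BlockingCollection<byte[]> _chunks =
                new BlockingCollection<byte[]>(boundedCapacity: 64);
            private byte[] _current;
            private int _offset;

            public override void Write(byte[] buffer, int offset, int count)
            {
                var copy = new byte[count];
                Buffer.BlockCopy(buffer, offset, copy, 0, count);
                _chunks.Add(copy);  // blocks while 64 chunks are already queued
            }

            // The writer thread calls this after Decode() returns.
            public void CompleteWriting() => _chunks.CompleteAdding();

            public override int Read(byte[] buffer, int offset, int count)
            {
                if (_current == null || _offset == _current.Length)
                {
                    // Blocks until a chunk arrives; false means writing completed.
                    if (!_chunks.TryTake(out _current, Timeout.Infinite)) return 0;
                    _offset = 0;
                }
                int n = Math.Min(count, _current.Length - _offset);
                Buffer.BlockCopy(_current, _offset, buffer, offset, n);
                _offset += n;
                return n;
            }

            public override bool CanRead => true;
            public override bool CanWrite => true;
            public override bool CanSeek => false;
            public override void Flush() { }
            public override long Length => throw new NotSupportedException();
            public override long Position
            {
                get => throw new NotSupportedException();
                set => throw new NotSupportedException();
            }
            public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
            public override void SetLength(long value) => throw new NotSupportedException();
        }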

    Read the article

  • Is there a way to make PHP progressively output as the script executes?

    - by Iain Fraser
    So I'm writing a disposable script for my own personal single use and I want to be able to see how the process is going. Basically I'm processing a couple of thousand media releases and sending them to our new CMS. So I don't hammer the CMS, I'm making the script sleep for a couple of seconds after every 5 requests. I would like, as the script executes, to be able to see my echoes telling me the script is going to sleep or that the last transaction with the webservice was successful. Is this possible in PHP? Thanks for your help! Iain
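
    It is possible as long as PHP's buffers are flushed while the loop runs; a sketch (the variable and function names are invented, the batch-of-5-then-sleep rhythm is from the description):

        <?php
        @ini_set('zlib.output_compression', 0);  // compression would hold echoes back
        while (ob_get_level() > 0) {
            ob_end_flush();                      // unwind any ob_start() buffers
        }

        foreach (array_chunk($releases, 5) as $i => $batch) {  // $releases: the media releases
            send_to_cms($batch);                 // hypothetical webservice call
            echo "Batch $i sent, sleeping...\n";
            flush();                             // make the echo visible immediately
            sleep(2);                            // don't hammer the CMS
        }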

    Read the article

  • Flash computeSpectrum() unsynchronized with audio

    - by sold
    I am using Flash's (CS4, AS3) SoundMixer.computeSpectrum to visualize a DFT of what is supposed to be, according to the docs, whatever is currently being played. However, there is a considerable delay between the audio and the visualization (the audio comes later). It seems that computeSpectrum captures whatever is on its way to the buffer, not to the speakers. Any cure for this?

    Read the article

  • insert in the head tag after wp_head

    - by Umair
    Using WordPress 2.9.2. I am loading my CSS files for plugins as they are called, which of course is after my header has been loaded. I want to insert the calls to those CSS files in the HEAD tag of the page, which I could do if there were a hook that adds lines to the head after wp_head() has been called. Help!
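
    Nothing fires into the document head after wp_head() has already printed, so the usual route is to register the output before the theme calls it. A sketch with invented function names (add_action, wp_enqueue_style and plugins_url all exist in WordPress 2.9):

        <?php
        // Option 1: print the <link> tags from inside wp_head itself.
        function myplugin_print_css() {
            echo '<link rel="stylesheet" type="text/css" href="'
               . plugins_url('css/myplugin.css', __FILE__) . '" />' . "\n";
        }
        add_action('wp_head', 'myplugin_print_css');

        // Option 2: let WordPress manage the stylesheet (preferred).
        function myplugin_enqueue_css() {
            wp_enqueue_style('myplugin', plugins_url('css/myplugin.css', __FILE__));
        }
        add_action('init', 'myplugin_enqueue_css');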

    Read the article

  • bufferTime on OSMF

    - by Ronnie
    I am having an issue with OSMF. I have an MP4 that is 40MB, being progressively loaded, and the video won't begin playing until it has fully loaded. I am testing this on a web server. Any idea what's going on, or what I am not doing?

        var mps:MediaPlayerSprite = new MediaPlayerSprite();
        mps.x = 159;
        mps.y = 53;
        mps.width = 512;
        mps.height = 288;
        mps.resource = new URLResource("resources/video/3.4.1.mp4");
        addChild(mps);

    I've even tried adding mps.mediaPlayer.bufferTime = 1; but no luck. I've even tried 0.
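
    One cause that fits these symptoms exactly (an assumption, since the file itself isn't shown): the MP4's moov metadata atom was written at the end of the file, and no progressive player can start playback until it has that atom, i.e. until the whole file has downloaded. No bufferTime value helps with that. Re-muxing so the moov atom comes first is a one-liner with the ffmpeg CLI:

        ffmpeg -i 3.4.1.mp4 -c copy -movflags +faststart 3.4.1-fixed.mp4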

    Read the article

  • emacs/Python: running python-shell in line buffered vs. block buffered mode

    - by Begbie00
    Hi all - In a related question and answer here, someone hypothesized that python-shell within emacs(23.2) was block-buffered instead of line-buffered. The recommended fix was to add sys.stdout.flush() to the spot in my script where I want stdio to flush its contents to the python-shell. Is there someway to trick python-shell (running in emacs 23.2 on Windows, not Linux) into either a) thinking it's attached to a TTY or b) using line-buffered instead of block-buffered mode? I don't see why I'd be able to do this in IDLE but not emacs. I'd rather customize emacs than add sys.stdout.flush() throughout my scripts. Call me lazy :-). Thanks, Mike
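
    One avenue that avoids editing the scripts: have CPython itself stop buffering, instead of fighting the mode's process plumbing. The interpreter honors the PYTHONUNBUFFERED environment variable (equivalent to running python -u), so a sketch for ~/.emacs would be:

        ;; Any Python started from this Emacs now runs unbuffered, no
        ;; matter which python mode drives the inferior shell.
        (setenv "PYTHONUNBUFFERED" "1")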

    Read the article

  • How to buffer stdout in memory and write it from a dedicated thread

    - by NickB
    I have a C application with many worker threads. It is essential that these do not block so where the worker threads need to write to a file on disk, I have them write to a circular buffer in memory, and then have a dedicated thread for writing that buffer to disk. The worker threads do not block any more. The dedicated thread can safely block while writing to disk without affecting the worker threads (it does not hold a lock while writing to disk). My memory buffer is tuned to be sufficiently large that the writer thread can keep up. This all works great. My question is, how do I implement something similar for stdout? I could macro printf() to write into a memory buffer, but I don't have control over all the code that might write to stdout (some of it is in third-party libraries). Thoughts? NickB
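
    Since third-party code can't be re-pointed at a wrapper macro, the usual trick is to swap the file descriptor underneath it: replace fd 1 with the write end of a pipe, and let the dedicated writer thread drain the pipe into the existing circular buffer. A sketch, where ring_buffer_write() stands in for the buffer code already described; the workers can still block if the pipe's kernel buffer fills, so the drainer must keep up.

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        /* the existing circular-buffer writer (hypothetical name) */
        extern void ring_buffer_write(const char *data, size_t len);

        static void *drain_stdout(void *arg) {
            int fd = *(int *)arg;
            char buf[4096];
            ssize_t n;
            while ((n = read(fd, buf, sizeof buf)) > 0)
                ring_buffer_write(buf, (size_t)n);
            return NULL;
        }

        int redirect_stdout_to_ring(void) {
            static int fds[2];
            pthread_t t;
            if (pipe(fds) != 0)
                return -1;
            setvbuf(stdout, NULL, _IOLBF, 0);  /* keep stdio from batching into blocks */
            dup2(fds[1], STDOUT_FILENO);       /* all stdout, printf and libraries alike, enters the pipe */
            return pthread_create(&t, NULL, drain_stdout, &fds[0]);
        }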

    Read the article

  • Buffer size: N*sizeof(type) or sizeof(var)? C++

    - by flyout
    I am just starting with C++ and I've been following different examples to learn from them, and I see that buffer size is set in different ways, for example:

        char buffer[255];
        StringCchPrintf(buffer, sizeof(buffer), TEXT("%s"), X);

    versus

        char buffer[255];
        StringCchPrintf(buffer, 255 * sizeof(char), TEXT("%s"), X);

    Which one is the correct way to use it? I've seen the same thing with other functions like InternetReadFile, ZeroMemory and MultiByteToWideChar.
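
    One point worth knowing before choosing: StringCchPrintf's second parameter counts characters, not bytes. For a char buffer the two happen to coincide (sizeof(char) is 1), but in a Unicode build -- which the TEXT() macro hints at -- the buffer becomes TCHAR[255] and sizeof(buffer) is off by a factor of sizeof(TCHAR). The element-count idiom is right in both builds; a sketch (MSVC ships the same idea as _countof):

        #include <cstddef>

        // Element count of a fixed-size array, independent of character width.
        template <typename T, std::size_t N>
        std::size_t countof(T (&)[N]) { return N; }

        // TCHAR buffer[255];
        // StringCchPrintf(buffer, countof(buffer), TEXT("%s"), x);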

    Read the article

  • how to use double buffering in awt? [on hold]

    - by Ishanth
        import java.awt.event.*;
        import java.awt.*;

        class circle1 extends Frame implements KeyListener {
            public int a = 300;
            public int b = 70;
            public int pacx = 360;
            public int pacy = 270;

            public circle1() {
                setTitle("circle");
                addKeyListener(this);
                repaint();
            }

            public void paint(Graphics g) {
                g.fillArc(a, b, 60, 60, pacx, pacy);
            }

            public void keyPressed(KeyEvent e) {
                int key = e.getKeyCode();
                System.out.println(key);
                if (key == 38) {             // up arrow
                    b = b - 5;               // move pacman up
                    pacx = 135; pacy = 270;  // pacman mouth upside
                    if (b == 75 && a >= 20 || b == 75 && a <= 945) {
                        b = b + 5;
                    } else {
                        repaint();
                    }
                } else if (key == 40) {      // down arrow
                    b = b + 5;               // move pacman downside
                    pacx = 315; pacy = 270;  // pacman mouth down
                    if (b == 645 && a >= 20 || b == 645 && a <= 940) {
                        b = b - 5;
                    } else {
                        repaint();
                    }
                } else if (key == 37) {      // left arrow
                    a = a - 5;               // move pacman leftside
                    pacx = 227; pacy = 270;  // pacman mouth left
                    if (a == 15 && b >= 75 || a == 15 && b <= 640) {
                        a = a + 5;
                    } else {
                        repaint();
                    }
                } else if (key == 39) {      // right arrow
                    a = a + 5;               // move pacman rightside
                    pacx = 42; pacy = 270;   // pacman mouth right
                    if (a == 945 && a >= 80 || a == 945 && b <= 640) {
                        a = a - 5;
                    } else {
                        repaint();
                    }
                }
            }

            public void keyReleased(KeyEvent e) {}
            public void keyTyped(KeyEvent e) {}

            public static void main(String args[]) {
                circle1 c = new circle1();
                c.setVisible(true);
                c.setSize(400, 400);
            }
        }
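
    A sketch of double buffering applied to this class (one new field and one new method; everything else stays as posted): overriding update() stops AWT from clearing the visible frame before each repaint, and the scene is drawn offscreen first, then blitted in one step.

        private Image backBuffer;

        @Override
        public void update(Graphics g) {
            if (backBuffer == null) {
                backBuffer = createImage(getWidth(), getHeight());
            }
            Graphics bg = backBuffer.getGraphics();
            bg.clearRect(0, 0, getWidth(), getHeight());  // clear offscreen, not onscreen
            paint(bg);                                    // existing paint() draws the arc
            bg.dispose();
            g.drawImage(backBuffer, 0, 0, this);          // single flicker-free blit
        }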

    Read the article

  • how to create a progress bar using php output buffering and jquery?

    - by avien
    How do I create a progress bar using PHP output buffering and jQuery? I've been searching for this for weeks; I am desperate to learn it and it would be a very big help for me. Could someone here share some code? I know this will need a script. I have tried several pieces of code for my client-side script:

        var request = new XMLHttpRequest();
        request.addEventListener("progress", updateProgress, false);

        function updateProgress(e) {
            var percent = (e.loaded / e.total) * 100;
            /* update the width of the progress bar */
        }

    As for my server side, I don't know how. Actually, I want to use this for a MySQL query, so that I can see the progress of my query; the other thing is I don't know how to use output buffering. Somebody help me on this, please.

    Read the article
