Search Results

Search found 11825 results on 473 pages for 'live stream'.


  • Running multiple image manipulations in parallel causing OutOfMemory exception

    - by Tom
    I am working on a site where I need to be able to split an image of around 4000x6000 into 4 parts (amongst many other tasks) and I need this to be as quick as possible for multiple users. My current code for doing this is: var bitmaps = new RenderTargetBitmap[elements.Length]; using (var stream = blobService.Stream(key)) { BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.StreamSource = stream; bi.EndInit(); for (var i = 0; i < elements.Length; i++) { var element = elements[i]; TransformGroup transformGroup = new TransformGroup(); TranslateTransform translateTransform = new TranslateTransform(); translateTransform.X = -element.Left; translateTransform.Y = -element.Top; transformGroup.Children.Add(translateTransform); DrawingVisual vis = new DrawingVisual(); DrawingContext cont = vis.RenderOpen(); cont.PushTransform(transformGroup); cont.DrawImage(bi, new Rect(new Size(bi.PixelWidth, bi.PixelHeight))); cont.Close(); RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default); rtb.Render(vis); bitmaps[i] = rtb; } } for (var i = 0; i < bitmaps.Length; i++) { using (MemoryStream ms = new MemoryStream()) { PngBitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(bitmaps[i])); encoder.Save(ms); var regionKey = WebPath.Variant(key, elements[i].Id); saveBlobService.Save("image/png", regionKey, ms); } } I am running multiple threads which take jobs off a queue. I am finding that if this part of the code is hit by 4 threads at once I get an OutOfMemory exception. I can stop this happening by wrapping all the code above in a lock(obj) but this isn't ideal. I have tried wrapping just the first using block (where the file is read from disk and split) but I still get the out of memory exceptions (this part of the code executes quite quickly). Is this normal considering the amount of memory this should be taking up? Are there any optimisations I could make? Can I increase the memory available? UPDATE: My new code, as per Moozhe's help: public static void GenerateRegions(this IBlobService blobService, string key, Element[] elements) { using (var stream = blobService.Stream(key)) { foreach (var element in elements) { stream.Position = 0; BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.SourceRect = new Int32Rect(element.Left, element.Top, element.Width, element.Height); bi.StreamSource = stream; bi.EndInit(); DrawingVisual vis = new DrawingVisual(); DrawingContext cont = vis.RenderOpen(); cont.DrawImage(bi, new Rect(new Size(element.Width, element.Height))); cont.Close(); RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default); rtb.Render(vis); using (MemoryStream ms = new MemoryStream()) { PngBitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(rtb)); encoder.Save(ms); var regionKey = WebPath.Variant(key, element.Id); blobService.Save("image/png", regionKey, ms); } } } }
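
    A rough sketch of one alternative to the full lock(obj): cap how many splits can hold decoded bitmaps at once with a SemaphoreSlim, decode with OnLoad caching, and Freeze() the bitmaps so they can be shared across threads. The IBlobService, Element and WebPath names are taken from the question; the limit of 2 concurrent decodes is an assumption to tune.

        using System.IO;
        using System.Threading;
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;

        public static class RegionGenerator
        {
            // Hypothetical throttle: at most 2 split jobs hold decoded bitmaps at any one time.
            private static readonly SemaphoreSlim DecodeGate = new SemaphoreSlim(2);

            public static void GenerateRegionsThrottled(IBlobService blobService, string key, Element[] elements)
            {
                DecodeGate.Wait();   // bounded concurrency instead of a single global lock
                try
                {
                    using (var stream = blobService.Stream(key))
                    {
                        foreach (var element in elements)
                        {
                            stream.Position = 0;
                            var bi = new BitmapImage();
                            bi.BeginInit();
                            bi.CacheOption = BitmapCacheOption.OnLoad;   // decode now, release the stream
                            bi.SourceRect = new Int32Rect(element.Left, element.Top, element.Width, element.Height);
                            bi.StreamSource = stream;
                            bi.EndInit();
                            bi.Freeze();                                  // immutable, safe across threads

                            var vis = new DrawingVisual();
                            using (DrawingContext cont = vis.RenderOpen())
                            {
                                cont.DrawImage(bi, new Rect(new Size(element.Width, element.Height)));
                            }
                            var rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default);
                            rtb.Render(vis);
                            rtb.Freeze();

                            using (var ms = new MemoryStream())
                            {
                                var encoder = new PngBitmapEncoder();
                                encoder.Frames.Add(BitmapFrame.Create(rtb));
                                encoder.Save(ms);
                                ms.Position = 0;
                                blobService.Save("image/png", WebPath.Variant(key, element.Id), ms);
                            }
                        }
                    }
                }
                finally
                {
                    DecodeGate.Release();
                }
            }
        }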

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I'd built a WSSv3 application which uploads files in small chunks; when every data piece arrives, I temporarily keep it in a SQL 2005 image data type field for performance reasons**. The problem comes when the upload ends; I need to move the data from my SQL Server to the SharePoint Document Library through the WSSv3 object model. Right now, I can think of two approaches: SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException and SPFile file = folder.Files.Add("filename", new byte[]{ }); using(Stream stream = file.OpenBinaryStream()) { // ... init vars and stuff ... while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0) { stream.Write(buffer, 0, (int)bytes); // Timeout issues } file.SaveBinary(stream); } Is there any other way to complete this task successfully? ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice performance degradation as the file grows (100 MB).
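
    A minimal sketch of the SQL-to-stream half, independent of the WSS object model; the destination could be the stream returned by SPFile.OpenBinaryStream() or any other writable stream. The table and column names are placeholders, and CommandBehavior.SequentialAccess is what keeps ADO.NET from buffering the whole column:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        static class BlobCopy
        {
            // Stream an image/varbinary column into a writable Stream in fixed-size chunks,
            // so the whole file never has to live in memory at once.
            public static void CopyBlobToStream(string connectionString, Guid uploadId, Stream destination)
            {
                const int BUFFER_SIZE = 64 * 1024;   // 64 KB chunks (assumption)
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT Data FROM UploadChunks WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", uploadId);
                    conn.Open();
                    // SequentialAccess reads the column as a stream instead of loading it fully.
                    using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                    {
                        if (!reader.Read()) return;
                        var buffer = new byte[BUFFER_SIZE];
                        long offset = 0;
                        long bytes;
                        while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
                        {
                            destination.Write(buffer, 0, (int)bytes);
                            offset += bytes;
                        }
                        destination.Flush();
                    }
                }
            }
        }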

    Read the article

  • Serialize WPF component using XamlWriter without default constructor

    - by mizipzor
    I've found out that you can serialize a WPF component, in my example a FixedDocument, using the XamlWriter and a MemoryStream: FixedDocument doc = GetDocument(); MemoryStream stream = new MemoryStream(); XamlWriter.Save(doc, stream); And then to get it back: stream.Seek(0, SeekOrigin.Begin); FixedDocument result = (FixedDocument)XamlReader.Load(stream); return result; However, now I need to be able to serialize a DocumentPage as well, which lacks a default constructor, so the XamlReader.Load call throws an exception. Is there a way to serialize a WPF component without a default constructor?

    Read the article

  • Android RTSP coding problem

    - by NetApex
    I have Googled my butt off trying to find whether there is a surefire way to make RTSP work. I have a radio station that I listen to that streams via RTSP. Of course, by default Android doesn't want to play it. If I pop the URL into yourmuze.fm and create a station there, it lets me stream it to my phone. After checking how it works, I come to find that it streams to the phone via RTSP! So obviously there is something amiss. What makes one stream work and one not? This is the stream I am attempting: rtsp://wms2.christiannetcast.com/yes-fm It is an audio stream, so I would be thrilled with most people's problem of "it only does audio and not video." When yourmuze.fm streams, DDMS states it brings up MovieView to play the audio, if that helps at all.

    Read the article

  • How do I judge when the NetworkStream finishes when using the .NET TcpClient to communicate

    - by Hwasin
    I try to use stream.DataAvailable to judge whether it is finished, but sometimes the value is false and then after a little while it is true again, so I have to set a counter and judge the end by the symbol '>' like this: int connectCounter = 0; while (connectCounter < 1200) { if (stream.DataAvailable) { while (stream.DataAvailable) { byte[] buffer = new byte[bufferSize]; int flag = stream.Read(buffer, 0, buffer.Length); string strReadXML_t = System.Text.Encoding.Default.GetString(buffer); strReadXML = strReadXML + strReadXML_t.Replace("\0", string.Empty); } if (strReadXML.Substring(strReadXML.Length - 1, 1).Equals(">")) { break; } } Thread.Sleep(100); connectCounter++; } Is there any good method to deal with it? Thank you!
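
    If the sender can be changed, a common alternative to polling DataAvailable is to frame each message with a 4-byte length prefix, so the reader knows exactly how much to read. A minimal sketch (assuming UTF-8 payloads; if the protocol cannot change, reading until the document's closing tag as above is the fallback):

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        static class Framing
        {
            // The sender writes a 4-byte length before the XML payload, so the receiver
            // knows when the message is complete without watching DataAvailable.
            public static string ReadFramedMessage(NetworkStream stream)
            {
                byte[] lengthBytes = ReadExactly(stream, 4);
                int length = BitConverter.ToInt32(lengthBytes, 0);
                byte[] payload = ReadExactly(stream, length);
                return Encoding.UTF8.GetString(payload);
            }

            static byte[] ReadExactly(Stream stream, int count)
            {
                var buffer = new byte[count];
                int read = 0;
                while (read < count)
                {
                    int n = stream.Read(buffer, read, count - read);
                    if (n == 0) throw new EndOfStreamException("Connection closed before the full message arrived.");
                    read += n;
                }
                return buffer;
            }
        }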

    Read the article

  • Encode audio to aac with libavcodec

    - by ryan
    I'm using libavcodec (latest git as of 3/3/10) to encode raw pcm to aac (libfaac support enabled). I do this by calling avcodec_encode_audio repeatedly with codec_context->frame_size samples each time. The first four calls return successfully, but the fifth call never returns. When I use gdb to break, the stack is corrupt. If I use Audacity to export the pcm data to a .wav file, then I can use command-line ffmpeg to convert to aac without any issues, so I'm sure it's something I'm doing wrong. I've written a small test program that duplicates my problem. It reads the test data from a file, which is available here: http://birdie.protoven.com/audio.pcm (~2 seconds of signed 16 bit LE pcm) I can make it all work if I use FAAC directly, but the code would be a little cleaner if I could just use libavcodec, as I'm also encoding video, and writing both to an mp4. ffmpeg version info: FFmpeg version git-c280040, Copyright (c) 2000-2010 the FFmpeg developers built on Mar 3 2010 15:40:46 with gcc 4.4.1 configuration: --enable-libfaac --enable-gpl --enable-nonfree --enable-version3 --enable-postproc --enable-pthreads --enable-debug=3 --enable-shared libavutil 50.10. 0 / 50.10. 0 libavcodec 52.55. 0 / 52.55. 0 libavformat 52.54. 0 / 52.54. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0.10. 0 / 0.10. 0 libpostproc 51. 2. 0 / 51. 2. 0 Is there something I'm not setting, or setting incorrectly in my codec context, maybe? Any help is greatly appreciated! Here is my test code: #include <stdio.h> #include <libavcodec/avcodec.h> void EncodeTest(int sampleRate, int channels, int audioBitrate, uint8_t *audioData, size_t audioSize) { AVCodecContext *audioCodec; AVCodec *codec; uint8_t *buf; int bufSize, frameBytes; avcodec_register_all(); //Set up audio encoder codec = avcodec_find_encoder(CODEC_ID_AAC); if (codec == NULL) return; audioCodec = avcodec_alloc_context(); audioCodec->bit_rate = audioBitrate; audioCodec->sample_fmt = SAMPLE_FMT_S16; audioCodec->sample_rate = sampleRate; audioCodec->channels = channels; audioCodec->profile = FF_PROFILE_AAC_MAIN; audioCodec->time_base = (AVRational){1, sampleRate}; audioCodec->codec_type = CODEC_TYPE_AUDIO; if (avcodec_open(audioCodec, codec) < 0) return; bufSize = FF_MIN_BUFFER_SIZE * 10; buf = (uint8_t *)malloc(bufSize); if (buf == NULL) return; frameBytes = audioCodec->frame_size * audioCodec->channels * 2; while (audioSize >= frameBytes) { int packetSize; packetSize = avcodec_encode_audio(audioCodec, buf, bufSize, (short *)audioData); printf("encoder returned %d bytes of data\n", packetSize); audioData += frameBytes; audioSize -= frameBytes; } } int main() { FILE *stream = fopen("audio.pcm", "rb"); size_t size; uint8_t *buf; if (stream == NULL) { printf("Unable to open file\n"); return 1; } fseek(stream, 0, SEEK_END); size = ftell(stream); fseek(stream, 0, SEEK_SET); buf = (uint8_t *)malloc(size); fread(buf, sizeof(uint8_t), size, stream); fclose(stream); EncodeTest(32000, 2, 448000, buf, size); }

    Read the article

  • Using MOA to classify new examples?

    - by Sam Zetoloth
    I'm trying to use the java machine learning library MOA to train on a training data stream, then predict classes for a test data stream. The first part works fine, using (for example) java -cp .:moa.jar:weka.jar -javaagent:sizeofag.jar moa.DoTask "LearnModel -l MajorityClass -s (ArffFileStream -f atrain.arff -c -1) -O amodel.moa" But then I cannot figure out how to use the trained model (amodel.moa) on another stream (atest.arff) to predict the classes. Has anyone done this before?

    Read the article

  • How to extract only the first 2 bytes of an NSString in Objective-C for iPhone programming

    - by suse
    Hello, 1) How do I read the data from a read stream in Objective-C? The code below gives me how many bytes are read from the stream, but how do I know what data was read from the stream? CFIndex cf = CFReadStreamRead(Stream, buffer, length); 2) How do I extract only the first 2 bytes of an NSString in Objective-C in iPhone programming? For example, if this is the string NSString *str = @"017MacApp"; the 1st byte has 0 in it, and the 2nd byte has 17 in it. How do I extract 0 and 17 into a byte array? I know that the code below would convert the byte array back to an int value: ((b[0] & 0xFF) << 8) + (b[1] & 0xFF); but how do I put 0 into b[0] and 17 into b[1]? Please help me solve this.

    Read the article

  • migrating an embedded jetty server from v6 to v7

    - by Ceilingfish
    Hi chaps, I have an embedded Jetty server which I use in unit tests; it looks like this: public class UnitTestWebservices extends AbstractHandler { private Server server; private Map<Route,String> data = new HashMap<Route,String>(); public UnitTestWebservices(int port) throws Exception { server = new Server(port); server.setHandler(this); server.start(); } public void handle(String url, HttpServletRequest request, HttpServletResponse response, int arg3) throws IOException, ServletException { final Route route = Route.valueOf(request.getMethod(), url); final String content = data.get(route); if(content != null) { final ServletOutputStream stream = response.getOutputStream(); stream.print(content); stream.flush(); stream.close(); } else { response.sendError(HttpServletResponse.SC_NOT_FOUND); } } .... } That's written using version 6.1.24 of Jetty. I tried switching over to use Jetty 7.1.1.v20100517, and updated that code to this: public class UnitTestWebservices extends AbstractHandler { private Server server; private Map<Route,String> data = new HashMap<Route,String>(); public UnitTestWebservices(int port) throws Exception { server = new Server(port); server.setHandler(this); server.start(); } public void handle(String url, Request request, HttpServletRequest servletRequest, HttpServletResponse response) throws IOException, ServletException { final Route route = Route.valueOf(request.getMethod(), url); final String content = data.get(route); request.setHandled(true); response.setContentType("application/json"); if(content != null) { response.setStatus(HttpServletResponse.SC_OK); final Writer stream = response.getWriter(); stream.append(content); } else { response.sendError(HttpServletResponse.SC_NOT_FOUND); } } } But whenever I tried to make a request to the server, it would hang indefinitely. Has anyone experienced anything similar? It also printed this into the log: log4j:WARN No appenders could be found for logger (org.eclipse.jetty.util.log). log4j:WARN Please initialize the log4j system properly. org.eclipse.jetty.server.Server@670655dd STOPPED +-UnitTestWebservices@50ef5502 started

    Read the article

  • no matching function for call to 'getline'

    - by WTP
    I have a class called parser: class parser { const std::istream& stream; public: parser(const std::istream& stream_) : stream(stream_) {} ~parser() {} void parse(); }; In parser::parse I want to loop over each line, so I use std::getline: std::getline(stream, line) The compiler gives me this error, however: src/parser.cc:10:7: error: no matching function for call to 'getline' std::getline(stream, line); ^~~~~~~~~~~~ But the first argument to std::getline is of type std::istream&, right? What could I be doing wrong?

    Read the article

  • LZMA for Delphi

    - by SaCi
    I got an LZMA library from the 7-zip site, but it didn't work. I'm not using files, just streams. And for some reason the library from the 7-zip site just writes the header to the stream but does not compress the stream. Has anyone faced the same problem? Have an example? Know of another LZMA library for Delphi? Thanks

    Read the article

  • problem using base64 encoder and InputStreamReader

    - by karoberts
    I have some CLOB columns in a database that I need to put Base64 encoded binary files in. These files can be large, so I need to stream them; I can't read the whole thing in at once. I'm using org.apache.commons.codec.binary.Base64InputStream to do the encoding, and I'm running into a problem. My code is essentially this: FileInputStream fis = new FileInputStream(file); Base64InputStream b64is = new Base64InputStream(fis, true, -1, null); InputStreamReader reader = new InputStreamReader(b64is); preparedStatement.setCharacterStream(1, reader); When I run the above code, I get one of these during the execution of the update: java.io.IOException: Underlying input stream returned zero bytes. It is thrown deep in the InputStreamReader code. Why would this not work? It seems to me like the reader would attempt to read from the base 64 stream, which would read from the file stream, and everything should be happy.

    Read the article

  • Slowdowns when reading from an urlconnection's inputstream (even with byte[] and buffers)

    - by user342677
    Ok so after spending two days trying to figure out the problem, and reading about dizillion articles, i finally decided to man up and ask to for some advice(my first time here). Now to the issue at hand - I am writing a program which will parse api data from a game, namely battle logs. There will be A LOT of entries in the database(20+ million) and so the parsing speed for each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0. (see source code if using chrome, it doesnt display the page right). It has 1000 hit entries, followed by a little battle info(lastpage will have <1000 obviously). On average, a page contains 175000 characters, UTF-8 encoding, xml format(v 1.0). Program will run locally on a good PC, memory is virtually unlimited(so that creating byte[250000] is quite ok). The format never changes, which is quite convenient. Now, I started off as usual: //global vars,class declaration skipped public WebObject(String url_string, int connection_timeout, int read_timeout, boolean redirects_allowed, String user_agent) throws java.net.MalformedURLException, java.io.IOException { // Open a URL connection java.net.URL url = new java.net.URL(url_string); java.net.URLConnection uconn = url.openConnection(); if (!(uconn instanceof java.net.HttpURLConnection)) { throw new java.lang.IllegalArgumentException("URL protocol must be HTTP"); } conn = (java.net.HttpURLConnection) uconn; conn.setConnectTimeout(connection_timeout); conn.setReadTimeout(read_timeout); conn.setInstanceFollowRedirects(redirects_allowed); conn.setRequestProperty("User-agent", user_agent); } public void executeConnection() throws IOException { try { is = conn.getInputStream(); //global var l = conn.getContentLength(); //global var } catch (Exception e) { //handling code skipped } } //getContentStream and getLength methods which just return'is' and 'l' are skipped Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long ,and what doesnt. The call to this method takes only 200ms on avg public InputStream getWebPageAsStream(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 " + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)"); wobj.executeConnection(); l = wobj.getContentLength(); // global variable return wobj.getContentStream(); //returns 'is' stream } 200ms is quite expected from a network operation, and i am fine with it. BUT when i parse the inputStream in any way(read it into string/use java XML parser/read it into another ByteArrayStream) the process takes over 1000ms! 
for example, this code takes 1000ms IF i pass the stream i got('is') above from getContentStream() directly to this method: public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(is); doc.getDocumentElement().normalize(); return doc; } this code too, takes around 920ms IF the initial InputStream 'is' is passed in(dont read into the code itself - it just extracts the data i need by directly counting the characters, which can be done thanks to the rigid api feed format): public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException { // Point A BufferedReader br = new BufferedReader(new InputStreamReader(is)); LinkedList ll = new LinkedList(); String str = br.readLine(); while (str != null) { ll.add(str); str = br.readLine(); } if (((String) ll.get(1)).indexOf("error") != -1) { return new parsedBattlePage(null, null, true, -1); } //Point B Iterator it = ll.iterator(); it.next(); it.next(); it.next(); it.next(); String[][] hits_arr = new String[1000][4]; String t_str = (String) it.next(); String tmp = null; int j = 0; for (int i = 0; t_str.indexOf("time") != -1; i++) { hits_arr[i][0] = t_str.substring(12, t_str.length() - 11); tmp = (String) it.next(); hits_arr[i][1] = tmp.substring(14, tmp.length() - 9); tmp = (String) it.next(); hits_arr[i][2] = tmp.substring(15, tmp.length() - 10); tmp = (String) it.next(); hits_arr[i][3] = tmp.substring(18, tmp.length() - 13); it.next(); it.next(); t_str = (String) it.next(); j++; } String[] b_info_arr = new String[9]; int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13}; for (int i = 0; i < space_nums.length; i++) { tmp = (String) it.next(); b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1); } //Point C return new parsedBattlePage(hits_arr, b_info_arr, false, j); } I have tried replacing the default BufferedReader with BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000); This didnt change much. My second try was to replace the code between A and B with: Iterator it = IOUtils.lineIterator(is, "UTF-8"); Same result, except this time A-B was 0ms, and B-C was 1000ms, so then every call to it.next() must have been consuming some significant time.(IOUtils is from apache-commons-io library). And here is the culprit - the time taken to parse the stream to string, be it by an iterator or BufferedReader in ALL cases was about 1000ms, while the rest of the code took 0ms(e.g. irrelevant). This means that parsing the stream to LinkedList, or iterating over it, for some reason was eating up a lot of my system resources. question was - why? Is it just the way java is made...no...thats just stupid, so I did another experiment. In my main method I added after the getWebPageAsStream(): //Point A ba = new byte[l]; // 'l' comes from wobj.getContentLength above bytesRead = is.read(ba); //'is' is our URLConnection original InputStream offset = bytesRead; while (bytesRead != -1) { bytesRead = is.read(ba, offset - 1, l - offset); offset += bytesRead; } //Point B InputStream is2 = new ByteArrayInputStream(ba); //Now just working with 'is2' - the "copied" stream The InputStream-byte[] conversion took again 1000ms - this is the way many ppl suggested to read an InputStream, and stil it is slow. 
And guess what - the 2 parser methods above (convertToXML() and convertBattlePagetoXMLWithoutDOM(), when passed 'is2' instead of 'is' took, in all 4 cases, under 50ms to complete. I read a suggestion that the stream waits for connection to close before unblocking, so i tried using HttpComponentsClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. e.g. this code: public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpParams p = new BasicHttpParams(); HttpConnectionParams.setSocketBufferSize(p, 250000); HttpConnectionParams.setStaleCheckingEnabled(p, false); HttpConnectionParams.setConnectionTimeout(p, 5000); httpget.setParams(p); HttpResponse response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); l = (int) entity.getContentLength(); return entity.getContent(); } took even longer to process(50ms more for just the network) and the stream parsing times remained the same. Obviously it can be instantiated so as to not create HttpClient and properties every time(faster network time), but the stream issue wont be affected by that. So we come to the center problem - why does the initial URLConnection InputStream(or HttpClient InputStream) take so long to process, while any stream of same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I cant see any good reasong why it is processed so slowly compared to when a same stream is just created from a byte[]. Considering I have to parse million of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY WAY too long. Any ideas? P.S. Please ask in any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the perfomance is ok ~ 200ms/1000entries, prb could be optimized with more cache but I didnt look into it much.

    Read the article

  • c# regex split and extract multiple parts from a string

    - by nLL
    Hi, I am trying to extract some parts of the "Video:" line from the text below. Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) - 14.93 (1000/67) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\a.3gp': Metadata: major_brand : 3gp5 minor_version : 0 compatible_brands: 3gp5isom Duration: 00:00:45.82, start: 0.000000, bitrate: 357 kb/s Stream #0.0(und): Video: mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], 344 kb/s, 14.93 fps, 14.93 tbr, 90k tbn, 30k tbc Stream #0.1(und): Audio: aac, 16000 Hz, mono, s16, 11 kb/s Stream #0.2(und): Data: mp4s / 0x7334706D, 0 kb/s Stream #0.3(und): Data: mp4s / 0x7334706D, 0 kb/s This is output from the ffmpeg command line, where I can get the Video: part with private string ExtractVideoFormat(string rawInfo) { string v = string.Empty; Regex re = new Regex("[V|v]ideo:.*", RegexOptions.Compiled); Match m = re.Match(rawInfo); if (m.Success) { v = m.Value; } return v; } and the result is mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], 344 kb What I am trying to do is to somehow split that line and get mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], and 344 kb assigned to different string objects instead of a single one.
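
    A rough sketch of one way to do the second step: capture everything after "Video:" in a group, then split the captured text on ", ". This assumes the bracketed [PAR ... DAR ...] part never contains a comma, which holds for the sample output above:

        using System.Text.RegularExpressions;

        static class VideoLineParser
        {
            // Returns e.g. "mpeg4", "yuv420p", "352x276 [PAR 1:1 DAR 88:69]", "344 kb/s", ...
            public static string[] ExtractVideoParts(string rawInfo)
            {
                Match m = Regex.Match(rawInfo, @"[Vv]ideo:\s*(?<details>.*)");
                if (!m.Success) return new string[0];

                string[] parts = m.Groups["details"].Value
                    .Split(new[] { ", " }, System.StringSplitOptions.RemoveEmptyEntries);
                for (int i = 0; i < parts.Length; i++)
                    parts[i] = parts[i].Trim();
                return parts;
            }
        }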

    Read the article

  • OutOfMemoryException when I read FileStream 500 MB

    - by Alhambra Eidos
    Hi all, I'm using FileStream to read a big file (500 MB) and I get an OutOfMemoryException. Are there any solutions for it? My code is: using (var fs3 = new FileStream(filePath2, FileMode.Open, FileAccess.Read)) { byte[] b2 = ReadFully(fs3, 1024); } public static byte[] ReadFully(Stream stream, int initialLength) { // If we've been passed an unhelpful initial length, just // use 32K. if (initialLength < 1) { initialLength = 32768; } byte[] buffer = new byte[initialLength]; int read = 0; int chunk; while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0) { read += chunk; // If we've reached the end of our buffer, check to see if there's // any more information if (read == buffer.Length) { int nextByte = stream.ReadByte(); // End of stream? If so, we're done if (nextByte == -1) { return buffer; } // Nope. Resize the buffer, put in the byte we've just // read, and continue byte[] newBuffer = new byte[buffer.Length * 2]; Array.Copy(buffer, newBuffer, buffer.Length); newBuffer[read] = (byte)nextByte; buffer = newBuffer; read++; } } // Buffer is now too big. Shrink it. byte[] ret = new byte[read]; Array.Copy(buffer, ret, read); return ret; } Thanks in advance,
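
    If the goal is to process the file rather than hold all 500 MB in one array, a chunked loop avoids both the single huge allocation and the doubling copies in ReadFully. A minimal sketch (the output stream and the 80 KB buffer size are assumptions; replace the Write call with whatever per-chunk work is needed):

        using System.IO;

        static class LargeFileProcessor
        {
            // Process a large file in fixed-size chunks instead of materializing the
            // whole file in one byte[].
            public static void ProcessLargeFile(string filePath, Stream output)
            {
                var buffer = new byte[81920];   // 80 KB, roughly what Stream.CopyTo uses
                using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    int read;
                    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);   // work on buffer[0..read) here
                    }
                }
            }
        }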

    Read the article

  • Rx IObservable buffering to smooth out bursts of events

    - by Dan
    I have an Observable sequence that produces events in rapid bursts (i.e. five events one right after another, then a long delay, then another quick burst of events, etc.). I want to smooth out these bursts by inserting a short delay between events. Imagine the following diagram as an example: Raw: --oooo--------------ooooo-----oo----------------ooo| Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o| My current approach is to generate a metronome-like timer via Observable.Interval() that signals when it's ok to pull another event from the raw stream. The problem is that I can't figure out how to then combine that timer with my raw unbuffered observable sequence. IObservable.Zip() is close to doing what I want, but it only works so long as the raw stream is producing events faster than the timer. As soon as there is a significant lull in the raw stream, the timer builds up a series of unwanted events that then immediately pair up with the next burst of events from the raw stream. Ideally, I want an IObservable extension method with the following function signature that produces the behavior I've outlined above. Now, come to my rescue StackOverflow :) public static IObservable<T> Buffered(this IObservable<T> src, TimeSpan minDelay) PS. I'm brand new to Rx, so my apologies if this is a trivially simple question... 1. Simple yet flawed approach Here's my initial naive and simplistic solution that has quite a few problems: public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay) { Queue<T> q = new Queue<T>(); source.Subscribe(x => q.Enqueue(x)); return Observable.Interval(minDelay).Where(_ => q.Count > 0).Select(_ => q.Dequeue()); } The first obvious problem with this is that the IDisposable returned by the inner subscription to the raw source is lost and therefore the subscription can't be terminated. Calling Dispose on the IDisposable returned by this method kills the timer, but not the underlying raw event feed that is now needlessly filling the queue with nobody left to pull events from the queue. The second problem is that there's no way for exceptions or end-of-stream notifications to be propagated through from the raw event stream to the buffered stream - they are simply ignored when subscribing to the raw source. And last but not least, now I've got code that wakes up periodically regardless of whether there is actually any work to do, which I'd prefer to avoid in this wonderful new reactive world. 2. Way overly complex approach To solve the problems encountered in my initial simplistic approach, I wrote a much more complicated function that behaves much like IObservable.Delay() (I used .NET Reflector to read that code and used it as the basis of my function). Unfortunately, a lot of the boilerplate logic such as AnonymousObservable is not publicly accessible outside the system.reactive code, so I had to copy and paste a lot of code. This solution appears to work, but given its complexity, I'm less confident that it's bug free. I just can't believe that there isn't a way to accomplish this using some combination of the standard Reactive extensions. I hate feeling like I'm needlessly reinventing the wheel, and the pattern I'm trying to build seems like a fairly standard one.
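
    For reference, one way this kind of pacing is often expressed with stock operators, sketched under the assumption that Select, Concat, Delay and Empty are available in the Rx build being used (namespaces differ between early drops and current releases): emit each item immediately, then hold its slot for minDelay before letting the next one through.

        using System;
        using System.Reactive.Linq;

        public static class PacingExtensions
        {
            // Each item is followed by an empty sequence that completes after minDelay.
            // Concat subscribes to the next inner sequence only after the previous one
            // completes, so bursts get spread out while quiet periods add no latency.
            public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay)
            {
                return source
                    .Select(x => Observable.Return(x)
                                           .Concat(Observable.Empty<T>().Delay(minDelay)))
                    .Concat();
            }
        }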

    Read the article

  • Write wave files to memory in Java

    - by Cliff
    I'm trying to figure out why my servlet code creates wave files with improper headers. I use: AudioSystem.write( new AudioInputStream( new ByteArrayInputStream(memoryBytes), new AudioFormat(22000, 16, 1, true,false), memoryBytes.length ), AudioFileFormat.Type.WAVE, servletOutputStream ); taking a byte array from memory containing raw PCM samples and a servlet output stream that gets returned to the client. As a result I get a normal wave file but with zeros in the chunk size fields. Is the API broken? I would think that the size could be filled in using the size passed in the audio input stream. But now, after typing this out, I'm thinking it's not making this info available to the outer write() method on AudioSystem. It seems like the AudioSystem.write call needs a size parameter unless it is able to pull the size from the stream... which wouldn't work with an arbitrarily sized stream. Does anyone know how to make this example work?

    Read the article

  • Reading and writing XML over an SslStream

    - by Mark
    I want to read and write XML data over an SslStream. The data (written and read) consists of objects serialized by an XmlSerializer. I have tried the following (left some details out for clarity!): TcpClient tcpClient = new TcpClient(server, port); SslStream sslStream = new SslStream(tcpClient.GetStream(), true, new RemoteCertificateValidationCallback(ValidateServerCertificate), null); sslStream.AuthenticateAsClient(server); XmlReader xmlReader = XmlReader.Create(sslStream,readerSettings); XmlWriter xmlWriter = XmlWriter.Create(sslStream,writerSettings); myClass c = new myClass (); XmlSerializer serializer = new XmlSerializer(typeof(myClass)); serializer.Serialize(xmlWriter, c); myClass c2 = (myClass )serializer.Deserialize(xmlReader); First of all, it appears that writing to the stream succeeds. But the XmlSerializer throws an error because of invalid XML. It appears that the first character read from the stream is '00' or a null-character. (I have googled this problem, and see a million people with the same 'problem', but no viable solution.) I can work around this 'issue' by using a StreamReader, reading everything into a string and then using that string as input for another stream that the serializer can use. (Very dirty, but it works.) The second problem is that when I try to use the same SslStream to write a second request, I do not get a response from the Reader. (The server DOES send a valid XML response though!) So the serializer.Serialize(xmlWriter, c3) works, but reading from the stream yields no results. I have tried several different classes that implement Stream. (StreamReader, XmlTextReader, etc.) Does anyone have an idea of how reading and writing XML data to and from an SslStream is supposed to work? Thanks in advance!
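
    One commonly suggested workaround, sketched here rather than taken from a specific library: serialize each document to a MemoryStream first, send it with a 4-byte length prefix, and deserialize from a bounded buffer, so the XmlSerializer never reads the SslStream directly and message boundaries are explicit. The WriteMessage/ReadMessage helpers below are illustrative names, not an existing API:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        static class XmlMessaging
        {
            public static void WriteMessage<T>(Stream transport, T message)
            {
                var serializer = new XmlSerializer(typeof(T));
                using (var ms = new MemoryStream())
                {
                    serializer.Serialize(ms, message);
                    byte[] length = BitConverter.GetBytes((int)ms.Length);
                    transport.Write(length, 0, length.Length);
                    transport.Write(ms.GetBuffer(), 0, (int)ms.Length);
                    transport.Flush();
                }
            }

            public static T ReadMessage<T>(Stream transport)
            {
                byte[] lengthBytes = ReadExactly(transport, 4);
                byte[] body = ReadExactly(transport, BitConverter.ToInt32(lengthBytes, 0));
                var serializer = new XmlSerializer(typeof(T));
                using (var ms = new MemoryStream(body))
                {
                    return (T)serializer.Deserialize(ms);
                }
            }

            // Read exactly count bytes from the transport, blocking until they arrive.
            static byte[] ReadExactly(Stream stream, int count)
            {
                var buffer = new byte[count];
                for (int read = 0; read < count; )
                {
                    int n = stream.Read(buffer, read, count - read);
                    if (n == 0) throw new EndOfStreamException();
                    read += n;
                }
                return buffer;
            }
        }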

    Read the article

  • How to share memory buffer across sessions in Django?

    - by afriza
    I want to have one party (or more) send a stream of data via HTTP request(s). Other parties will be able to receive the same stream of data in almost real-time. The data stream should be accessible across sessions (according to an access control list). How can I do this in Django? If possible I would like to avoid database access and use an in-memory buffer (along with some synchronization mechanism).

    Read the article

  • Why is my GetNextChar() in my DecoderFallbackBuffer Specialization Repeatedly Getting Called?

    - by Canoehead
    I need to produce my own DecoderFallback and DecoderFallbackBuffer classes to implement some custom stream decoding. I have found that the stream reader making use of it is calling GetNextChar() repeatedly even when my specialized DecoderFallbackBuffer.Remaining property returns 0 to indicate that there are no more characters to return. The end result is that the stream reader gets into an infinite loop. Why is this happening?
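
    For what it's worth, the decoder does not stop at Remaining == 0; it keeps calling GetNextChar() until it returns U+0000. A minimal sketch of a fallback buffer that honors that contract (the single '?' replacement is an arbitrary choice):

        using System.Text;

        public sealed class ReplacementFallbackBuffer : DecoderFallbackBuffer
        {
            private const char Replacement = '?';   // assumption: one '?' per bad byte sequence
            private int remaining;

            public override bool Fallback(byte[] bytesUnknown, int index)
            {
                remaining = 1;      // we will emit exactly one replacement character
                return true;        // true = "I have fallback characters to supply"
            }

            public override char GetNextChar()
            {
                if (remaining <= 0)
                    return char.MinValue;   // U+0000 tells the decoder the fallback is finished
                remaining--;
                return Replacement;
            }

            public override bool MovePrevious()
            {
                if (remaining >= 1) return false;   // nothing has been consumed yet
                remaining++;
                return true;
            }

            public override int Remaining
            {
                get { return remaining; }
            }

            public override void Reset()
            {
                remaining = 0;
            }
        }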

    Read the article

  • delivery mechanism, Rational ClearCase

    - by kadaba
    Hi All, We came up with a stream structure for the Rational ClearCase UCM model. We recently migrated the code base into the new setup. We had three different code bases, i.e. three physical code bases. The migration was done this way: we moved the production code first and created a baseline, then the UAT code and created a baseline, and then the development code and created a baseline. As of now the integration stream has the latest baseline, that is, the development baseline. Now we have two other streams, for prd and uat, from which the releases will be done in the respective environments. I have my dev stream now. I create an activity and make some changes. Now I need to promote these changes into the UAT environment. If I deliver the changes to the integration stream, the merge is done, but on a development baseline. I do not want to rebase it to uat, as many development apps will get rebased into the uat, which is not desired. How do I achieve promoting changes to the uat environment (uat stream)? Kindly advise.

    Read the article

  • Reading HTTP server push streams with Python

    - by Sam
    I'm playing around trying to write a client for a site which provides data as an HTTP stream (aka HTTP server push). However, urllib2.urlopen() grabs the stream in its current state and then closes the connection. I tried skipping urllib2 and using httplib directly, but this seems to have the same behaviour. Is there a way to get the stream to stay open, so it can be checked each program loop for new contents, rather than waiting for the whole thing to be redownloaded every few seconds, introducing lag?

    Read the article

  • difference between DataContract attribute and Serializable attribute in .net

    - by samar
    I am trying to create a deep clone of an object using the following method. public static T DeepClone<T>(this T target) { using (MemoryStream stream = new MemoryStream()) { BinaryFormatter formatter = new BinaryFormatter(); formatter.Serialize(stream, target); stream.Position = 0; return (T)formatter.Deserialize(stream); } } This method requires an object which is serializable, i.e. an object of a class that has the "Serializable" attribute on it. I have a class which has the "DataContract" attribute on it, but the method is not working with this attribute. I think "DataContract" is also a type of serialization attribute, but maybe different from "Serializable". Can anyone please explain the difference between the two? Also, please let me know if it is possible to create a deep clone of an object with just one attribute which does the work of both the "DataContract" and "Serializable" attributes, or maybe a different way of creating a deep clone? Please help!
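
    A minimal sketch of the same deep-clone trick adapted for [DataContract] types, swapping the BinaryFormatter for a DataContractSerializer (note it only copies members marked [DataMember]):

        using System.IO;
        using System.Runtime.Serialization;

        public static class DeepCloneExtensions
        {
            // Round-trip the object through a DataContractSerializer, which understands
            // [DataContract]/[DataMember] instead of [Serializable].
            public static T DeepCloneDataContract<T>(this T target)
            {
                using (var stream = new MemoryStream())
                {
                    var serializer = new DataContractSerializer(typeof(T));
                    serializer.WriteObject(stream, target);
                    stream.Position = 0;
                    return (T)serializer.ReadObject(stream);
                }
            }
        }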

    Read the article

  • Java IO (javase 6)- Help me understand the effects of my sample use of Streams and Writers...

    - by Daddy Warbox
    BufferedWriter out = new BufferedWriter( new OutputStreamWriter( new BufferedOutputStream( new FileOutputStream("out.txt") ) ) ); So let me see if I understand this: A byte output stream is opened for file "out.txt". It is then fed to a buffered output stream to make file operations faster. The buffered stream is fed to an output stream writer to bridge from bytes to characters. Finally, this writer is fed to a buffered writer... which adds another layer of buffering? Hmm...

    Read the article
