Search Results

Search found 3136 results on 126 pages for 'buffer overrun'.

Page 98/126 | < Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >

  • ASP.NET Error Message: Unable to validate data

    - by Amitabh
     We have an ASP.NET WebForms page which contains a GridView inside an UpdatePanel and refreshes every minute. Every minute we get the following error in the Event log. Error Message: Unable to validate data. Stack Trace: at System.Web.Configuration.MachineKeySection.GetDecodedData(Byte[] buf, Byte[] modifier, Int32 start, Int32 length, Int32& dataLength) at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString). We have tried the following. Adding a static machine key in the Web.Config. (Did not work.) Disabling the ViewState MAC in the Web.Config using the following entry. (Did not work.) <pages buffer="true" enableViewStateMac="false"> Is there something else that might cause this?
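
     A static key would normally go under system.web; a minimal sketch of such an entry, with placeholder values only (real keys must be generated, and must be identical on every server in a web farm for ViewState validation to succeed):

       <system.web>
         <!-- Placeholders: replace both keys with generated hex values. -->
         <machineKey validationKey="[generated validation key]"
                     decryptionKey="[generated decryption key]"
                     validation="SHA1" decryption="AES" />
       </system.web>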

    Read the article

  • Increase the TCP receive window for a specific socket

    - by rursw1
     Hi, how do I increase the TCP receive window for a specific socket? I know how to do so for all sockets by setting the registry key TcpWindowSize, but how do I do that for a specific one? According to MSFT's documents, the way is calling the Windows Sockets function setsockopt, which sets the receive window on a per-socket basis. But the setsockopt documentation says about SO_RCVBUF: Specifies the total per-socket buffer space reserved for receives. This is unrelated to SO_MAX_MSG_SIZE and does not necessarily correspond to the size of the TCP receive window. So is it possible? How? Thanks.

    Read the article

  • Write to pipe deadlocking program

    - by avs3323
     Hi, I am having a problem in my program that uses pipes. I am using pipes along with fork/exec to send data to another process. What I have is something like this: //pipes are created up here if(fork() == 0) //child process { ... execlp(...); } else { ... fprintf(stderr, "Writing to pipe now\n"); write(pipe, buffer, BUFFER_SIZE); fprintf(stderr, "Wrote to pipe!"); ... } This works fine for most messages, but when the message is very large, the write into the pipe deadlocks. I think the pipe might be full, but I do not know how to clear it. I tried using fsync but that didn't work. Can anyone help me?

    Read the article

  • Using .dll methods to load data from file in C# code

    - by Espinas.iss
     I want to use these methods in C#: * int LibRaw::open_datastream(LibRaw_abstract_datastream *stream) * int LibRaw::open_file(const char *rawfile) * int LibRaw::open_buffer(void *buffer, size_t bufsize) * int LibRaw::unpack(void) * int LibRaw::unpack_thumb(void) that are stored in libraw.dll. These functions load data from a file, one after another... I've been reading about P/Invoke but I'm not sure how to invoke them. Can anyone show me an example of how to use all of these functions together in C# to load a file (a raw image stored in a folder), or just how to P/Invoke one of them? Thanks!
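
     C++ member functions such as LibRaw::open_file cannot be P/Invoked directly; the usual route is a flat C export, either from LibRaw's own C API if the build provides one, or from a small wrapper DLL. A minimal C# sketch, assuming hypothetical export names and signatures that must be checked against the actual libraw.dll export table (e.g. with dumpbin /exports) before use:

       using System;
       using System.Runtime.InteropServices;

       static class LibRawNative
       {
           // Hypothetical flat C exports; verify the real names and signatures first.
           [DllImport("libraw.dll", CallingConvention = CallingConvention.Cdecl)]
           public static extern IntPtr libraw_init(uint flags);

           [DllImport("libraw.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
           public static extern int libraw_open_file(IntPtr handle, string fileName);

           [DllImport("libraw.dll", CallingConvention = CallingConvention.Cdecl)]
           public static extern int libraw_unpack(IntPtr handle);

           [DllImport("libraw.dll", CallingConvention = CallingConvention.Cdecl)]
           public static extern void libraw_close(IntPtr handle);
       }

       // Usage sketch (path is a placeholder):
       // IntPtr raw = LibRawNative.libraw_init(0);
       // if (LibRawNative.libraw_open_file(raw, @"C:\photos\image.nef") == 0)
       //     LibRawNative.libraw_unpack(raw);
       // LibRawNative.libraw_close(raw);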

    Read the article

  • What strategies are efficient to handle concurrent reads on heterogeneous multi-core architectures?

    - by fabrizioM
     I am tackling the challenge of using both the capabilities of an 8-core machine and a high-end GPU (Tesla 10). I have one big input file, one thread for each core, and one for the GPU handling. The GPU thread, to be efficient, needs a large number of lines from the input, while the CPU thread needs only one line to proceed (storing multiple lines in a temp buffer was slower). The file doesn't need to be read sequentially. I am using boost. My strategy is to have a mutex on the input stream and have each thread lock and unlock it. This is not optimal because the GPU thread should have a higher precedence when locking the mutex, being the fastest and the most demanding one. I can come up with different solutions, but before rushing into an implementation I would like to have some guidelines. What approach do you use / recommend?

    Read the article

  • Texture repeats even with GL_CLAMP_TO_EDGE set [FIXED]

    - by Lliane
     Hi, I'm trying to put a translucent texture on a face which uses points 1 to 4 (don't mind the numbers) on the following screenshot. Sadly, as you can see, the texture repeats itself in both dimensions. I tried to switch the TEXTURE_WRAP_S from REPEAT to CLAMP_TO_EDGE but it doesn't change anything. Texture loading code is here: gl.glBindTexture(gl.GL_TEXTURE_2D, mTexture.get(4)); gl.glActiveTexture(4); gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR); gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE); gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGBA, shadowbmp.width, shadowbmp.height, 0, gl.GL_RGBA, gl.GL_UNSIGNED_SHORT_4_4_4_4, shadowbmp.buffer); Texture coordinates are the following: float shadow_bot_text[] = { 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f }; Thanks

    Read the article

  • ASP.net web page still displaying cached versions

    - by user279521
     My web page is still displaying a previously cached version of the page. I have this in the Page_Load event: Response.Clear(); Response.Buffer = true; Response.ExpiresAbsolute = DateTime.Now.AddDays(-1d); Response.Expires = -1; Response.CacheControl = "no-cache"; Response.Cache.SetCacheability(HttpCacheability.NoCache); I have this in the Page_Init: protected void Page_Init(object Sender, EventArgs e) { Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.Cache.SetExpires(DateTime.Now.AddDays(-1)); } Any idea what I might be missing?
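
     For what it's worth, a minimal sketch of the header combination that usually stops both browser and proxy caching in WebForms; SetNoStore and SetValidUntilExpires are the calls most often missing:

       protected void Page_Load(object sender, EventArgs e)
       {
           // Sketch: disable client, proxy and server-side output caching.
           Response.Cache.SetCacheability(HttpCacheability.NoCache);
           Response.Cache.SetNoStore();
           Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
           Response.Cache.SetValidUntilExpires(false);
           Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
       }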

    Read the article

  • In what circumstances can large pages produce a speedup?

    - by timday
     Modern x86 CPUs have the ability to support larger page sizes than the legacy 4K (i.e. 2MB or 4MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory". Which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in some performance improvement. Anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably as the number of TLB records needing copying is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.

    Read the article

  • FtpWebResponse and StreamReader - specifying an offset

    - by AJ
    Hi, I am using the FtpWebRequest / FtpWebResponse objects in C# to download files from a server - so far, so good. I create a StreamReader object from the response stream and use a StreamWriter to create a local file. Now, the file I am reading happens to be in a very simple 'archive' format - there is a small TOC at the start of the file followed by the actual file data. I can therefore read the TOC and get a file offset and size of the data I want to download. My question is: Supposing the offset is 1024. I would use StreamReader.Read(buffer, 1024, length), but will .NET and the FTP protocol actually allow me to skip bytes 0-1023, or does the reader still go through the (relatively) slow process of downloading and discarding the bytes I don't need? This may make the difference between whether I want to use a single archive file, or a TOC file with the data files stored separately. As a bit of a secondary question, would my mileage vary using the Http classes instead of Ftp? Cheers, Adam
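
     For what it's worth, FtpWebRequest exposes a ContentOffset property that is sent to the server as an FTP REST (restart) command, so a server that supports it starts the transfer at that byte instead of the client downloading and discarding the prefix (note that the offset argument of Read(buffer, offset, count) indexes into the destination buffer, not into the stream). A minimal sketch, with a placeholder URL:

       // Sketch: start the download at byte 1024, i.e. just past the TOC.
       FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://example.com/archive.dat");
       request.Method = WebRequestMethods.Ftp.DownloadFile;
       request.ContentOffset = 1024;   // issued as a REST command before the transfer
       using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
       using (Stream stream = response.GetResponseStream())
       using (StreamReader reader = new StreamReader(stream))
       {
           string data = reader.ReadToEnd();   // begins at offset 1024 of the remote file
       }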

    Read the article

  • check whether fgets would block

    - by lv
     Hi, I was just wondering whether in C it is possible to peek into the input buffer or perform similar trickery to know whether a call to fgets would block at a later time. Java allows you to do something like that by calling BufferedReader.ready(); this way I can implement console input something like this: while (on && in.ready()) { line = in.readLine(); /* do something with line */ if (!in.ready()) Thread.sleep(100); } This allows an external thread to gracefully shut down the input loop by setting on to false. I'd like to do a similar implementation in C without resorting to non-portable tricks. I already know I can make a "timed out fgets" under Unix by resorting to signals or (better, even though it requires taking care of buffering) reimplementing it on top of recv/select, but I'd prefer something that would work on Windows too. TIA

    Read the article

  • Improve performance writing 10 million records to text file using windows service

    - by user1039583
     I'm fetching more than 10 million records from a database and writing them to a text file. It takes hours to complete this operation. Is there any option to use TPL features here? It would be great if someone could get me started implementing this with the TPL. using (FileStream fStream = new FileStream("d:\\file.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite)) { BufferedStream bStream = new BufferedStream(fStream); TextWriter writer = new StreamWriter(bStream); for (int i = 0; i < 100000000; i++) { writer.WriteLine(i); } bStream.Flush(); writer.Flush(); // empty buffer; fStream.Flush(); }
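
     A job like this is usually disk-bound, so the TPL mostly helps by overlapping record formatting with the writing; a minimal producer/consumer sketch using a BlockingCollection (the file path comes from the snippet above, and the counter is only a stand-in for fetching and formatting the 10 million database records):

       using System;
       using System.Collections.Concurrent;
       using System.IO;
       using System.Threading.Tasks;

       class Program
       {
           static void Main()
           {
               // Bounded queue keeps memory flat if the producer outruns the disk.
               var lines = new BlockingCollection<string>(100000);

               Task producer = Task.Factory.StartNew(() =>
               {
                   for (int i = 0; i < 10000000; i++)
                       lines.Add(i.ToString());   // stand-in for fetching/formatting a record
                   lines.CompleteAdding();
               });

               Task consumer = Task.Factory.StartNew(() =>
               {
                   using (var writer = new StreamWriter(@"d:\file.txt"))
                   {
                       foreach (string line in lines.GetConsumingEnumerable())
                           writer.WriteLine(line);
                   }
               });

               Task.WaitAll(producer, consumer);
           }
       }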

    Read the article

  • Android image scaling to support multiple resolutions

    - by tyuo9980
     I've coded my game for 320x480 and I figure that the easiest way to support multiple resolutions is to scale the end image. What are your thoughts on this? Would it be CPU-efficient to do it this way? I have all my images placed in the mdpi folder; I'll have everything drawn unscaled onto a buffer, then scale it to fit the screen. All the user inputs will be scaled as well. I have these 2 questions: - How do you draw a bitmap without Android automatically scaling it? - How do you scale a bitmap?

    Read the article

  • C++: Avoid .cpp files with only an empty (de)constructor

    - by Martijn Courteaux
     Hi, when I have a header file like this: #ifndef GAMEVIEW_H_ #define GAMEVIEW_H_ #include <SDL/SDL.h> class GameView { public: GameView(); virtual ~GameView(); virtual void Update() = 0; virtual void Render(SDL_Surface* buffer) = 0; }; #endif /* GAMEVIEW_H_ */ I need to create a .cpp file like this: #include "GameView.h" GameView::~GameView() { } GameView::GameView() { } This is a bit stupid. Just a .cpp file for an empty constructor and destructor. I want to implement those methods directly in the header file. That is much cleaner. How do I do this?

    Read the article

  • How do I bind a key to "the function represented by the following key sequence"?

    - by katrielalex
     I'm just starting to learn Emacs (woohoo!) and I've been mucking around in my .emacs quite happily. Unfortunately, I don't know Lisp yet, so I'm having issues with the basics. I've already remapped a few keys until I fix my muscle memory: (global-set-key (kbd "<f9>") 'recompile) That's fine. But how can I tell a key to 'simulate pressing several keys'? For instance, I don't know, make <f1> do the same as C-u 2 C-x } (widen the window by two chars). One way is to look up that C-x } calls enlarge-window-horizontally, and do some sort of lambda thing. This is of course the neat and elegant way (how do you do this?). But surely there's a way to define <f1> to send the keystrokes C-u 2 C-x }?

    Read the article

  • Buffering db inserts in multithreaded program

    - by Winter
     I have a system which breaks a large task into small tasks using about 30 threads at a time. As each individual thread finishes, it persists its calculated results to the database. What I want to achieve is to have each thread pass its results to a new persistence class that will perform a type of double buffering and data persistence while running in its own thread. For example, after 100 threads have moved their data to the buffer, the persistence class swaps the buffers and persists all 100 entries to the database. This would allow utilization of prepared statements and thus cut way down on the I/O between the program and the database. Is there a pattern or good example of this type of multithreaded double buffering?
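
     For what it's worth, a minimal sketch of the swap-and-flush idea in C# (all names, the batch size and the flush delegate are illustrative; the flush delegate is where the batched prepared-statement insert would go):

       using System;
       using System.Collections.Generic;
       using System.Threading;

       class ResultBuffer<T>
       {
           private readonly object _gate = new object();
           private readonly int _batchSize;
           private readonly Action<List<T>> _flush;   // e.g. one batched prepared statement
           private List<T> _active = new List<T>();

           public ResultBuffer(int batchSize, Action<List<T>> flush)
           {
               _batchSize = batchSize;
               _flush = flush;
           }

           public void Add(T item)
           {
               List<T> full = null;
               lock (_gate)
               {
                   _active.Add(item);
                   if (_active.Count >= _batchSize)
                   {
                       full = _active;              // swap buffers under the lock...
                       _active = new List<T>();
                   }
               }
               if (full != null)
                   ThreadPool.QueueUserWorkItem(_ => _flush(full));   // ...flush outside it
           }
       }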

    Read the article

  • FFmpeg + iPhone - Interesting (incorrect?) video encoding results

    - by jtrim
     I'm encoding some video on the iPhone by running the PNG image data through swscale to get YUV420P data, then encoding that frame using the MSMPEG4V1 codec. In the API docs, avcodec_encode_video should return the number of bytes used from the output buffer by that encode operation. There are 234,000 bytes going into the encoder, but the result returned by avcodec_encode_video is simply "4". The result is exactly the same over 24 frames. Something seems fishy here... any insight? Here's a pastebin link to the code: http://pastebin.com/ht94FWva (sorry for the link away from SO, I just didn't want to have the code duplicated in several places) EDIT: Also, I've set up a custom log callback for ffmpeg to use and I have the log level set to "Verbose" (libavutil/log.h), so libavcodec should be logging any goofs to the console, but avcodec is quiet throughout the whole operation. (note: I did test to make sure my log callback was working)

    Read the article

  • How to run shell script with live feedback from PHP?

    - by Highway of Life
    How would I execute a shell script from PHP while giving constant/live feedback to the browser? I understand from the system function documentation: The system() call also tries to automatically flush the web server's output buffer after each line of output if PHP is running as a server module. I'm not clear on what they mean by running it as a 'server module'. I attempted to run the script in the cgi-bin, but either I'm doing it wrong, or that's not what they mean. Example PHP code: <?php system('/var/lib/script_test.sh'); Example shell code: #!/bin/bash echo "Start..." for i in {1..10} do echo "$i..." sleep 1 done echo "Done."

    Read the article

  • C++'s unordered_map / hash_map / Google's dense_hash - how to input binary data (buf+len) and insert

    - by shlomif
     Hi all, I have two questions about Google's dense_hash_map, which can be used instead of the more standard unordered_map or hash_map: How do I use an arbitrary binary data memory segment as a key? I want a buffer+length pair, which may still contain some NUL (\0) characters. I can see how I would use a NUL-terminated char * string, but that's not what I want. How do I implement an operation where I check whether a key exists and, if not, insert it, and if it does exist, return a pointer to the existing key and let me know which of the two actually happened? I'd appreciate it if anyone can shed any light on this subject. Regards, -- Shlomi Fish

    Read the article

  • Begin Viewing Query Results Before Query Ends

    - by Frank Developer
     OK, so say I have a table with 500K rows, and then I run an ad-hoc query with no supporting index, which requires a full table scan. I would like to immediately view the first rows returned while the full table scan continues. Then I want to scroll through the next results. In the meantime, I would like to display the progress of the table scan, for example: "SEARCHING.. FOUND 23 OF 500,000 ROWS SO FAR". If I scroll too far ahead, I want to display a message like: "REACHED LAST ROW IN LOOK-AHEAD BUFFER.. QUERY HAS NOT COMPLETED". Can this be done? Maybe like: spawn/exec, declare scroll cursor, open, fetch, etc.?

    Read the article

  • Horizontally Flip a One Bit Bitmap Line

    - by roygbiv
     I'm looking for an algorithm to flip a 1-bit bitmap line horizontally. Remember these lines are DWORD aligned! I'm currently decoding an RLE stream to an 8 bit-per-pixel buffer, then re-encoding to a 1-bit line; however, I would like to try to keep it all in the 1-bit space in an effort to increase its speed. Profiling indicates that this portion of the program is relatively slow compared to the rest. Example line (Before Flip): FF FF FF FF 77 AE F0 00 Example line (After Flip): F7 5E EF FF FF FF F0 00
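
     One common way to stay in the 1-bit space is a 256-entry reverse-bits lookup table: reverse the order of the bytes in the row and the order of the bits inside each byte. A C# sketch, assuming the pixel width is a multiple of 8; for other widths (as in the example above, where the trailing bits are padding) the flipped row also has to be shifted right by the leftover bit count, and the DWORD padding bytes are excluded from widthBytes:

       // Sketch: flip a 1-bpp scanline in place using a bit-reversal table.
       static readonly byte[] Rev = BuildReverseTable();

       static byte[] BuildReverseTable()
       {
           var table = new byte[256];
           for (int v = 0; v < 256; v++)
           {
               int r = 0;
               for (int b = 0; b < 8; b++)
                   if ((v & (1 << b)) != 0)
                       r |= 0x80 >> b;
               table[v] = (byte)r;
           }
           return table;
       }

       // widthBytes covers pixel data only, not the DWORD padding at the end of the row.
       static void FlipRow(byte[] row, int widthBytes)
       {
           for (int i = 0, j = widthBytes - 1; i <= j; i++, j--)
           {
               byte a = Rev[row[i]];
               byte b = Rev[row[j]];
               row[i] = b;
               row[j] = a;
           }
       }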

    Read the article

  • XML Parsing Error: no element found

    - by Anilkumar
     XML Parsing Error: no element found. Line Number 1, Column 1. I want to download an Excel file. I wrote the data in an HTML table format and appended it to the response. The code is the following: Response.Buffer = false; Response.Expires = 0; Response.Clear(); Response.ContentType = "application/vnd.ms-excel"; Response.AddHeader("Content-Disposition", "attachment; filename=" + FileName); Response.Write(str); Response.End();

    Read the article

  • C# Serial Communications - Received Data Lost

    - by Jim Fell
     Hello. Received data in my C# application is getting lost because the collector array is overwritten rather than appended to. private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e) { try { pUartData_c = serialPort1.ReadExisting().ToCharArray(); bUartDataReady_c = true; } catch ( System.Exception ex ) { MessageBox.Show(ex.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } } In this example pUartData_c is overwritten every time new data is received. On some systems this is not a problem because the data comes in quickly enough. However, on other systems the data in the receive buffer is not complete. How can I append received data to pUartData_c, rather than overwrite it? I am using Microsoft Visual C# 2008 Express Edition. Thanks.
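
     A minimal sketch of accumulating instead of replacing, using a StringBuilder in place of the char array (field names are illustrative; the lock matters because DataReceived fires on a worker thread while the rest of the program reads the collected data):

       private readonly StringBuilder _rxData = new StringBuilder();   // replaces pUartData_c
       private readonly object _rxLock = new object();

       private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
       {
           string chunk = serialPort1.ReadExisting();
           lock (_rxLock)
           {
               _rxData.Append(chunk);   // append, never overwrite
           }
           bUartDataReady_c = true;
       }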

    Read the article

  • Handling of data truncation (short reads/writes) in FUSE

    - by Vi
     I expect any good program should do all its reads and writes in a loop until all data is written/read, without relying on write writing everything in one call (even with regular files). Am I right? I implemented a simple FUSE filesystem which only allows reading and writing with small buffers, very often returning that fewer bytes were written than are in the buffer (using -o direct_io). Some programs work, some do not (notably mountlo). Are they buggy, or should programs not expect truncated writes and reads from regular files? In general, are seekable file descriptors expected to truncate data like sockets and pipes?

    Read the article

  • Performing regex on a stream

    - by takoi
     I have some large text files on which I'm going to perform consecutive matching (just capturing, not replacing). I'm thinking it's not such a good idea to keep the whole file in memory, but rather to use a Reader. What I know about the input is that if there's a match, it's not going to span more than 5 lines. So my idea was to have some sort of buffer which just keeps these 5 lines, or so, do the first search, and continue. But it has to "know" where the regex match ended for this to work, e.g. if the match ends at line 2, it should start the next search from there. Is it possible to do something like this in an efficient way?
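
     One way to make the sliding window work without double-reporting is to lean on the 5-line bound itself: any match that starts in the oldest line of a full 5-line window is completely contained in that window, so it can be emitted just before that line slides out. A sketch in C# under that assumption (the question doesn't state a language):

       using System.Collections.Generic;
       using System.IO;
       using System.Text.RegularExpressions;

       static class StreamingRegex
       {
           public static IEnumerable<string> StreamMatches(TextReader reader, Regex pattern, int span = 5)
           {
               var window = new List<string>();
               string line;
               while ((line = reader.ReadLine()) != null)
               {
                   window.Add(line);
                   if (window.Count == span)
                   {
                       string text = string.Join("\n", window);
                       int firstLineEnd = window[0].Length;
                       foreach (Match m in pattern.Matches(text))
                           if (m.Index <= firstLineEnd)      // match starts in the oldest line
                               yield return m.Value;
                       window.RemoveAt(0);                   // slide the window by one line
                   }
               }
               // Flush: report matches starting in the lines still buffered at end of input.
               if (window.Count > 0)
                   foreach (Match m in pattern.Matches(string.Join("\n", window)))
                       yield return m.Value;
           }
       }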

    Read the article

  • How do I store the image's pixels in matrix form?

    - by Rajendra Bhole
     Hi, I am developing an application in which I want to store the pixelated image in matrix format. The code is as follows. struct pixel { //unsigned char r, g, b,a; Byte r, g, b; int count; }; (NSInteger) processImage1: (UIImage*) image { // Allocate a buffer big enough to hold all the pixels struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel)); if (pixels != nil) { // Create a new bitmap CGContextRef context = CGBitmapContextCreate( (void*) pixels, image.size.width, image.size.height, 8, image.size.width * 4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast ); NSLog(@"1=%d, 2=%d, 3=%d", CGImageGetBitsPerComponent(image), CGImageGetBitsPerPixel(image),CGImageGetBytesPerRow(image)); if (context != NULL) { // Draw the image in the bitmap CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage); NSUInteger numberOfPixels = image.size.width * image.size.height; I am confused about how to initialize the 2-D matrix in which to store the pixel data.

    Read the article
