Search Results

Search found 4759 results on 191 pages for 'depth buffer'.

Page 11/191 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it or how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.
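
    A quick way to narrow this down (a hedged debugging sketch, not a guaranteed fix; it assumes the GLEW/GLFW setup above and that LoadShaders returns a program ID): check the GL error queue and the program's link status right after setup. In particular, glewInit() on a core-profile context is known to leave a spurious GL_INVALID_ENUM behind, so drain the queue once before trusting later checks.

    ```cpp
    #include <GL/glew.h>
    #include <cstdio>

    // Print and clear everything currently in the GL error queue.
    void drainGlErrors(const char* where) {
        for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
            std::fprintf(stderr, "GL error 0x%04x at %s\n", err, where);
    }

    // A program that failed to link makes glUseProgram raise GL_INVALID_OPERATION,
    // and nothing gets drawn.
    bool programLinked(GLuint program) {
        GLint status = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &status);
        if (status != GL_TRUE) {
            char log[1024];
            glGetProgramInfoLog(program, sizeof log, nullptr, log);
            std::fprintf(stderr, "shader link failed: %s\n", log);
        }
        return status == GL_TRUE;
    }
    ```

    Calling drainGlErrors("init") right after glewInit(), programLinked(programID) after LoadShaders, and drainGlErrors("frame") once per loop iteration usually points at the culprit quickly.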

    Read the article

  • Dual Frame Buffer on Ubuntu 12.04 Intel HD Graphics 4600 i7-4770

    - by user3692512
    I have 2 monitors connected to the PC, one via HDMI and one via DVI, driven by Intel integrated graphics (HD 4600). As far as I understand, both monitors are attached to the same framebuffer, /dev/fb0. How can I detach them and create 2 frame buffers at startup, so that I can write directly to the second monitor through /dev/fb1 without disturbing /dev/fb0, and the X server can keep running normally on it?
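
    For reference, once the driver does expose a second device, writing to it directly is the easy part. A minimal sketch (assuming /dev/fb1 exists and permissions allow opening it; the grey fill is just a visible test pattern):

    ```cpp
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        int fd = open("/dev/fb1", O_RDWR);   // second framebuffer, if the driver exposes one
        if (fd < 0) { perror("open /dev/fb1"); return 1; }

        fb_var_screeninfo vinfo{};
        fb_fix_screeninfo finfo{};
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   // resolution and bits per pixel
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   // line length in bytes

        size_t size = static_cast<size_t>(finfo.line_length) * vinfo.yres;
        auto* fb = static_cast<uint8_t*>(
            mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        for (size_t i = 0; i < size; ++i) fb[i] = 0x80;   // fill the screen with mid-grey

        munmap(fb, size);
        close(fd);
        return 0;
    }
    ```

    Getting the Intel driver to expose a separate /dev/fb1 in the first place is the harder part, and is a matter of X/KMS configuration rather than code.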

    Read the article

  • Trending Buffer Pool Performance Using DMV sys.dm_os_performance_counters

    I saw you posted a tip on capturing SQL-based PerfMon counters using sys.dm_os_performance_counters. What queries can I run against those stored results that would allow me to examine memory usage on my SQL Server instance?

    Read the article

  • How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer using OpenGL ES 1.1 on Android, Java. Afterwards I want to overlay this texture full-screen over my game. In theory, this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer: fb = new int[1]; depthRb = new int[1]; renderTex = new int[1]; gl11ep.glGenFramebuffersOES(1, fb, 0); gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer gl.glGenTextures(1, renderTex, 0);// generate texture gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); texBuffer = ByteBuffer.allocateDirect(buf.length*4).order(ByteOrder.nativeOrder()).asIntBuffer(); gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer); gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]); gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH); Before I draw, I do this: gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]); gl.glClearColor(0f, 0f, 0f, 0f); // specify texture as color attachment gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0); // attach render buffer as depth buffer gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]); I set texW = 1024 and texH = 512, and render this texture fullscreen with a lightmask (size 200x200) placed at (0, 0) and at (texW/2, texH/2). You can see that it seems like the coordinate system doesn't start at (0,0), as that light overlaps the screen, and the images are not drawn as squares (my lightcone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
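
    A detail worth checking (a hedged guess rather than a confirmed diagnosis): while the 1024x512 FBO is bound, the viewport and the projection built from it still describe the screen unless they are reset, which produces exactly this kind of offset and stretching. Sketched below in C-style GL for brevity; glBindFramebuffer stands in for the gl11ep.glBindFramebufferOES call above, and screenW/screenH are assumptions:

    ```cpp
    #include <GL/glew.h>   // provides the framebuffer entry points used below

    // Hypothetical helpers: keep the viewport in sync with the surface being drawn to.
    void beginOffscreenPass(GLuint framebuffer, GLsizei texW, GLsizei texH) {
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
        glViewport(0, 0, texW, texH);      // coordinates now span the texture, not the screen
    }

    void endOffscreenPass(GLsizei screenW, GLsizei screenH) {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);     // back to the window surface
        glViewport(0, 0, screenW, screenH);       // restore before the fullscreen overlay
    }
    ```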

    Read the article

  • SQL SERVER - Data Pages in Buffer Pool - Data Stored in Memory Cache

    This will drop all the clean buffers so we will be able to start again from there. Now, run the following script and check the execution plan of the query. Have you ever wondered what types of data are there in your cache? During SQL Server Trainings, I am usually asked if there is any [...]

    Read the article

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an i/o intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read in a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this? The only thing I could think of was rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ascii), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort. I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, but is generally much less. Is there a better way of doing this? As well, my current code doesn't even work. When I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code: FILE *fp; unsigned char *buff; unsigned char *realbuff; FILE *inputFiles[NUM_INPUT_FILES]; buff = (unsigned char *) malloc(2048); realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE); fp = fopen("postings0.txt", "r"); if(fp) { fread(buff, 1, 2048, fp); /*for(int i=0; i <30; i++) cout << buff[i] <<endl;*/ int y=0; int recordcounter = 0; //cout << buff; for(int i=0;i <100; i++) { if(buff[i] != char(10)) { realbuff[y] = buff[i]; y++; recordcounter++; } else { if(recordcounter < RECORD_SIZE) for(int j=recordcounter; j < RECORD_SIZE;j++) { realbuff[y] = char(0); y++; } recordcounter = 0; } } cout << realbuff <<endl; cout << buff; } else cout << "sorry"; Thank you very much, bsg
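
    If the goal is simply to sort each chunk of lines, one alternative sketch (not the author's approach, and it trades the raw qsort call for std::sort) avoids fixed-size padding entirely by keeping the variable-length records as strings:

    ```cpp
    #include <algorithm>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::ifstream in("postings0.txt");        // file name from the question
        if (!in) { std::cerr << "sorry\n"; return 1; }

        std::vector<std::string> records;
        std::string line;
        while (records.size() < 1000 && std::getline(in, line))   // one chunk at a time
            records.push_back(line);

        // Variable-length records sort fine as strings; no padding to RECORD_SIZE,
        // and no risk of a record longer than 30 bytes being truncated.
        std::sort(records.begin(), records.end());

        for (const std::string& r : records)
            std::cout << r << '\n';
        return 0;
    }
    ```

    This also sidesteps the printing problem: printing a std::string shows the whole record, whereas printing a char buffer with cout stops at the first null character, which is why only the first record appeared.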

    Read the article

  • For buffer overflows, what is the stack address when using pthreads?

    - by t2k32316
    I'm taking a class in computer security and there is an extra credit assignment to insert executable code into a buffer overflow. I have the c source code for the target program I'm trying to manipulate, and I've gotten to the point where I can successfully overwrite the eip for the current function stack frame. However, I always get a Segmentation fault, because the address I supply is always wrong. The problem is that the current function is inside a pthread, and therefore, the address of the stack seems to always change between different runs of the program. Is there any method for finding the stack address within a pthread (or for estimating the stack address within a pthread)? (note: pthread_create's 2nd argument is null, so we're not manually assigning a stack address)
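
    On Linux/glibc there is a direct way to ask for a thread's stack placement instead of guessing. A hedged sketch (glibc-specific calls; compile with -pthread):

    ```cpp
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE   // for pthread_getattr_np on glibc
    #endif
    #include <pthread.h>
    #include <cstdio>

    void* worker(void*) {
        // Query this thread's stack base and size from the running thread itself.
        pthread_attr_t attr;
        void* stackAddr = nullptr;
        size_t stackSize = 0;
        pthread_getattr_np(pthread_self(), &attr);
        pthread_attr_getstack(&attr, &stackAddr, &stackSize);
        pthread_attr_destroy(&attr);

        int probe;   // a local variable's address gives a live stack address too
        std::printf("stack base %p, size %zu, current frame near %p\n",
                    stackAddr, stackSize, (void*)&probe);
        return nullptr;
    }

    int main() {
        pthread_t t;
        pthread_create(&t, nullptr, worker, nullptr);
        pthread_join(t, nullptr);
        return 0;
    }
    ```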

    Read the article

  • Is there a default buffer length for 'sprintf' method?

    - by Isuru
    Hi, I used the sprintf method to format data into a string which I want to write to a file, in a C++ console application using VS 2008. The input is a particular message, which has various variables and values (e.g. type 'int' and value '10', type string and value "abc", etc.). When I send two messages it works perfectly, but when I send more than two messages it gives a runtime error saying 0xC0000005: Access violation reading location 0xabababab. Why is this happening? Is it because the method 'sprintf' has a default buffer length? How can I overcome this problem?
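
    sprintf has no idea how large the destination buffer is; if the formatted text does not fit, it simply writes past the end, and reading 0xabababab later is consistent with walking into heap guard bytes afterwards. A bounded sketch using snprintf (VS 2008 predates std::snprintf, so sprintf_s or _snprintf play the same role there):

    ```cpp
    #include <cstdio>

    int main() {
        char buf[64];

        // snprintf never writes more than sizeof buf bytes (terminator included)
        // and returns how long the full text would have been, so truncation or an
        // undersized buffer is easy to detect instead of silently corrupting memory.
        int needed = std::snprintf(buf, sizeof buf, "type=%s value=%d", "int", 10);
        if (needed < 0 || needed >= static_cast<int>(sizeof buf))
            std::fprintf(stderr, "buffer too small: need %d bytes\n", needed + 1);
        else
            std::printf("%s\n", buf);
        return 0;
    }
    ```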

    Read the article

  • Shellcode for a simple stack overflow: Exploited program with shell terminates directly after execve

    - by henning
    Hi, I played around with buffer overflows on Linux (amd64) and tried exploiting a simple program, but it failed. I disabled the security features (address space layout randomization with sysctl -w kernel.randomize_va_space=0 and nx bit in the bios). It jumps to the stack and executes the shellcode, but it doesn't start a shell. The execve syscall succeeds but afterwards it just terminates. Any idea what's wrong? Running the shellcode standalone works just fine. Bonus question: Why do I need to set rax to zero before calling printf? (See comment in the code) Vulnerable file buffer.s: .data .fmtsp: .string "Stackpointer %p\n" .fmtjump: .string "Jump to %p\n" .text .global main main: push %rbp mov %rsp, %rbp sub $120, %rsp # calling printf without setting rax # to zero results in a segfault. why? xor %rax, %rax mov %rsp, %rsi mov $.fmtsp, %rdi call printf mov %rsp, %rdi call gets xor %rax, %rax mov $.fmtjump, %rdi mov 8(%rbp), %rsi call printf xor %rax, %rax leave ret shellcode.s .text .global main main: mov $0x68732f6e69622fff, %rbx shr $0x8, %rbx push %rbx mov %rsp, %rdi xor %rsi, %rsi xor %rdx, %rdx xor %rax, %rax add $0x3b, %rax syscall exploit.py shellcode = "\x48\xbb\xff\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xeb\x08\x53\x48\x89\xe7\x48\x31\xf6\x48\x31\xd2\x48\x31\xc0\x48\x83\xc0\x3b\x0f\x05" stackpointer = "\x7f\xff\xff\xff\xe3\x28" output = shellcode output += 'a' * (120 - len(shellcode)) # fill buffer output += 'b' * 8 # override stored base pointer output += ''.join(reversed(stackpointer)) print output Compiled with: $ gcc -o buffer buffer.s $ gcc -o shellcode shellcode.s Started with: $ python exploit.py | ./buffer Stackpointer 0x7fffffffe328 Jump to 0x7fffffffe328 Debugging with gdb: $ python exploit.py > exploit.txt (Note: corrected stackpointer address in exploit.py for gdb) $ gdb buffer (gdb) run < exploit.txt Starting program: /home/henning/bo/buffer < exploit.txt Stackpointer 0x7fffffffe308 Jump to 0x7fffffffe308 process 4185 is executing new program: /bin/dash Program exited normally.

    Read the article

  • Buffer Overflow errors when reading ConfigDelay and Manufacturer info from registry

    - by peter
    Hi All, This is a strange driver error which doesn't make a lot of sense to me. I am running an application developed in C# .NET which our company develops. I was monitoring the application using process monitor and noticed that it accesses the registry a lot. The output on Process Monitor looks like this, Operation Result Path RegQueryValue Success HKLM\System\CurrentControlSet\Enum\SWMUXBUS\SW_MODEM\7&6c4af30&0&5&0004\Driver RegQueryValue Success HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Properties RegQueryValue Success HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Default RegQueryValue Success HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\InactivityScale RegQueryValue Name Not Found HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\PowerDelay RegQueryValue Name Not Found HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\ConfigDelay RegQueryValue Buffer Overflow HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Manufacturer RegQueryValue Buffer Overflow HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Model RegQueryValue Name Not Found HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Version The app is reading this stuff every 5 seconds from the registry, so I would ask a few questions, 1) What is this stuff 2) Why is the app reading this stuff 3) Why is it saying 'Buffer Overflow' 4) Could this cause performance problems for my app? From what I can see the app does not explicitly read this stuff, so I think this relates to a driver on the machine (which is a netbook)
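
    The "Buffer Overflow" result in Process Monitor is not a security problem; it is how the tool displays a status meaning the caller's buffer was too small (ERROR_MORE_DATA at the Win32 level), after which the caller is expected to retry with the size the API reported. A sketch of that normal pattern (the key and value name here are illustrative, not the modem key from the trace; link against Advapi32):

    ```cpp
    #include <windows.h>
    #include <iostream>
    #include <vector>

    int main() {
        HKEY key;
        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                          L"HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\0",
                          0, KEY_QUERY_VALUE, &key) != ERROR_SUCCESS)
            return 1;

        DWORD type = 0, size = 4;               // deliberately too small
        std::vector<BYTE> data(size);
        LONG rc = RegQueryValueExW(key, L"ProcessorNameString", nullptr,
                                   &type, data.data(), &size);
        if (rc == ERROR_MORE_DATA) {            // shows up as "Buffer Overflow" in Process Monitor
            data.resize(size);                  // size now holds the required byte count
            rc = RegQueryValueExW(key, L"ProcessorNameString", nullptr,
                                  &type, data.data(), &size);
        }
        if (rc == ERROR_SUCCESS && type == REG_SZ)
            std::wcout << reinterpret_cast<const wchar_t*>(data.data()) << L"\n";
        RegCloseKey(key);
        return 0;
    }
    ```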

    Read the article

  • ADT-like polymorphism in Java (without altering class)

    - by ffriend
    In Haskell I can define the following data type: data Tree = Empty | Leaf Int | Node Tree Tree and then write a polymorphic function like this: depth :: Tree -> Int depth Empty = 0 depth (Leaf n) = 1 depth (Node l r) = 1 + max (depth l) (depth r) In Java I can emulate algebraic data types with interfaces: interface Tree {} class Empty implements Tree {} class Leaf implements Tree { int n; } class Node implements Tree { Tree l; Tree r; } But if I try to use Haskell-like polymorphism, I get an error: int depth(Empty node) { return 0; } int depth(Leaf node) { return 1; } int depth(Node node) { return 1 + Math.max(depth(node.l), depth(node.r)); // ERROR: Cannot resolve method 'depth(Tree)' } The correct way to overcome this is to put the method depth() in each class. But what if I don't want to put it there? For example, the method depth() may not be directly related to Tree, and adding it to the class would break business logic. Or, even worse, Tree may be defined in a 3rd-party library that I don't have access to. In this case, what is the simplest way to implement ADT-like polymorphism? Just in case, for the moment I'm using the following syntax, which is obviously ill-favored: int depth(Tree tree) { if (tree instanceof Empty) depth((Empty)tree) if (tree instanceof Leaf) depth((Leaf)tree); if (tree instanceof Node) depth((Node)tree); else throw new RuntimeException("Don't know how to find depth of " + tree.getClass()); }
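
    For comparison only (not a Java answer; in Java the usual routes are the visitor pattern or, in recent versions, sealed interfaces with pattern matching): the idea of a closed sum type plus a function that lives outside the classes, which is what the Haskell version does, can be written directly in C++17 with std::variant and std::visit.

    ```cpp
    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <type_traits>
    #include <variant>

    // A sum type equivalent to the Haskell Tree; the Node case is boxed so the
    // type can refer to itself.
    struct Empty {};
    struct Leaf { int n; };
    struct Node;
    using Tree = std::variant<Empty, Leaf, std::shared_ptr<Node>>;
    struct Node { Tree l, r; };

    // depth lives outside the classes, like the Haskell function.
    int depth(const Tree& t) {
        return std::visit([](const auto& v) -> int {
            using T = std::decay_t<decltype(v)>;
            if constexpr (std::is_same_v<T, Empty>) return 0;
            else if constexpr (std::is_same_v<T, Leaf>) return 1;
            else return 1 + std::max(depth(v->l), depth(v->r));
        }, t);
    }

    int main() {
        Tree t = std::make_shared<Node>(
            Node{Leaf{1}, std::make_shared<Node>(Node{Empty{}, Leaf{2}})});
        std::cout << depth(t) << "\n";   // prints 3
    }
    ```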

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast; they know the app they use so well, and work so mechanically, that they can "type ahead", i.e. continue typing, tabbing, and hitting Enter faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then when that next entry form appears, their buffered keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, this speed is desirable, since these people are really productive. I think this "type ahead" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or whether it is impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean keyboard type-ahead (typing faster than the next entry form can load), not Google-style suggest-as-you-type. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation: the user may type at whatever speed he or she desires, and if the receiving software is busy at the time, the input will be handled later. Often this means that keystrokes entered will not be displayed on the screen immediately. The programming technique for handling this is known as a keyboard buffer.

    Read the article

  • Writing to a D3DFMT_R32F render target clamps to 1

    - by Mike
    I'm currently implementing a picking system. I render some objects in a frame buffer, which has a render target, which has the D3DFMT_R32F format. For each mesh, I set an integer constant evaluator, which is its material index. My shader is simple: I output the position of each vertex, and for each pixel, I cast the material index in float, and assign this value to the Red channel: int ObjectIndex; float4x4 WvpXf : WorldViewProjection< string UIWidget = "None"; >; struct VS_INPUT { float3 Position : POSITION; }; struct VS_OUTPUT { float4 Position : POSITION; }; struct PS_OUTPUT { float4 Color : COLOR0; }; VS_OUTPUT VSMain( const VS_INPUT input ) { VS_OUTPUT output = (VS_OUTPUT)0; output.Position = mul( float4(input.Position, 1), WvpXf ); return output; } PS_OUTPUT PSMain( const VS_OUTPUT input, in float2 vpos : VPOS ) { PS_OUTPUT output = (PS_OUTPUT)0; output.Color.r = float( ObjectIndex ); output.Color.gba = 0.0f; return output; } technique Default { pass P0 { VertexShader = compile vs_3_0 VSMain(); PixelShader = compile ps_3_0 PSMain(); } } The problem I have, is that somehow, the values written in the render target are clamped between 0.0f and 1.0f. I've tried to change the rendertarget format, but I always get clamped values... I don't know what the root of the problem is. For information, I have a depth render target attached to the frame buffer. I disabled the blend in the render state the stencil is disabled Any ideas?

    Read the article

  • emacs: How to intelligently handle buffer-modified when setting text properties?

    - by Cheeso
    The documentation on Text Properties says: Since text properties are considered part of the contents of the buffer (or string), and can affect how a buffer looks on the screen, any change in buffer text properties marks the buffer as modified. First, I don't understand that policy. Can anyone explain? The text props are not actually saved in the file, when the buffer is saved. So why mark the buffer as modified? For me, buffer-modified indicates "some changes have not yet been saved." but understanding the policy is just for my own amusement. More importantly, is there an already-established way that, in code, I can change syntax text properties on the text in a buffer, while keeping the buffer-modified flag set to whatever it was, prior to those changes? I'm thinking of something like save-excursion. It would be pretty easy to write, but this seems like a common case and I'd like to use the standard function, if possible. For more on the scenario - I have a mode that does a full text scan and sets syntax-tabe properties on the text. After opening a buffer, the scan runs, but it results in a buffer with buffer-modified set to t . As always, thanks.

    Read the article

  • C# Sockets Buffer Overflow No Error

    - by Michael Covelli
    I have one thread that is receiving data over a socket like this: while (sock.Connected) { // Receive Data (Block if no data) recvn = sock.Receive(recvb, 0, rlen, SocketFlags.None, out serr); if (recvn <= 0 || sock == null || !sock.Connected) { OnError("Error In Receive, recvn <= 0 || sock == null || !sock.Connected"); return; } else if (serr != SocketError.Success) { OnError("Error In Receive, serr = " + serr); return; } // Copy Data Into Tokenizer tknz.Read(recvb, recvn); // Parse Data while (tknz.MoveToNext()) { try { ParseMessageAndRaiseEvents(tknz.Buffer(), tknz.Length); } catch (System.Exception ex) { string BadMessage = ByteArrayToStringClean(tknz.Buffer(), tknz.Length); string msg = string.Format("Exception in MDWrapper Parsing Message, Ex = {0}, Msg = {1}", ex.Message, BadMessage); OnError(msg); } } } And I kept seeing occasional errors in my parsing function indicating that the message wasn't valid. At first, I thought that my tokenizer class was broken. But after logging all the incoming bytes to the tokenizer, it turns out that the raw bytes in recvb weren't a valid message. I didn't think that corrupted data like this was possible with a tcp data stream. I figured it had to be some type of buffer overflow so I set sock.ReceiveBufferSize = 1024 * 1024 * 8; and the parsing error never, ever occurs in testing (it happens often enough to replicate if I don't change the ReceiveBufferSize). But my question is: why wasn't I seeing an exception or an error state or something if the socket's internal buffer was overflowing before I changed this buffer size?
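
    Independent of the buffer-size change, one pattern that removes this whole class of symptom is to treat the socket as a byte stream and only parse complete, length-prefixed frames out of an accumulation buffer, since a single Receive call can return part of a message or several messages at once. A language-neutral sketch (shown in C++; the 4-byte little-endian prefix and the helper names are illustrative, not the tokenizer from the code above):

    ```cpp
    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <string>
    #include <vector>

    // Extract complete length-prefixed messages from a growing receive buffer.
    // Anything left over is an incomplete message and stays buffered for the
    // next Receive call. Assumes a little-endian host for brevity.
    std::vector<std::string> extractMessages(std::vector<uint8_t>& pending) {
        std::vector<std::string> out;
        size_t offset = 0;
        while (pending.size() - offset >= 4) {
            uint32_t len;
            std::memcpy(&len, pending.data() + offset, 4);       // 4-byte length prefix
            if (pending.size() - offset - 4 < len) break;        // message not complete yet
            out.emplace_back(reinterpret_cast<const char*>(pending.data()) + offset + 4, len);
            offset += 4 + len;
        }
        pending.erase(pending.begin(), pending.begin() + offset);
        return out;
    }

    int main() {
        std::vector<uint8_t> pending;
        // Simulate two Receive calls that split one message at an arbitrary point.
        const uint8_t part1[] = {5, 0, 0, 0, 'h', 'e'};
        const uint8_t part2[] = {'l', 'l', 'o'};
        pending.insert(pending.end(), part1, part1 + sizeof part1);
        for (const auto& m : extractMessages(pending)) std::cout << m << "\n";  // nothing yet
        pending.insert(pending.end(), part2, part2 + sizeof part2);
        for (const auto& m : extractMessages(pending)) std::cout << m << "\n";  // prints "hello"
    }
    ```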

    Read the article

  • How can I write only to the stencil buffer in OpenGL ES 2.0?

    - by stephelton
    I'd like to write to the stencil buffer without incurring the cost of my expensive shaders. As I understand it, I write to the stencil buffer as a 'side effect' of rendering something. In this first pass where I write to the stencil buffer, I don't want to write anything to the color or depth buffer, and I definitely don't want to run through my lighting equations in my shaders. Do I need to create no-op shaders for this (and can I just discard fragments), or is there a better way to do this? As the title says, I'm using OpenGL ES 2.0. I haven't used the stencil buffer before, so if I seem to be misunderstanding something, feel free to be verbose.
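
    A sketch of how such a pass is usually set up (names are placeholders; ES 2.0 still requires some program to be bound, so a trivial pass-through shader stands in for the expensive ones, and color/depth writes are masked off so their cost is avoided):

    ```cpp
    #include <GLES2/gl2.h>

    // Write a shape into the stencil buffer only. "cheapProgram" is assumed to be
    // a minimal pass-through vertex shader plus a fragment shader that outputs a
    // constant; the draw call is a placeholder for your own geometry.
    void stencilOnlyPass(GLuint cheapProgram, GLsizei vertexCount) {
        glUseProgram(cheapProgram);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // no color writes
        glDepthMask(GL_FALSE);                                 // no depth writes

        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 1, 0xFF);                     // every fragment passes
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);             // and writes 1 into the stencil buffer

        glDrawArrays(GL_TRIANGLES, 0, vertexCount);

        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);       // restore for the real pass
        glDepthMask(GL_TRUE);
    }
    ```

    Discarding a fragment skips the stencil write as well, so masking the other buffers rather than using discard is the usual route.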

    Read the article

  • Problem reading hexadecimal buffer from C socket

    - by Olaseni
    I'm using the SDL_net sockets API to create a server and client. I can easily read a string buffer, but when I try to send hexadecimal data, recv gets the length, but I cannot seem to be able to read the buffer contents. IPaddress ip; TCPsocket server,client; int bufSize = 1024; char message[bufSize]; int len; server = SDLNet_TCP_Open(&ip); client = SDLNet_TCP_Accept(server); len = SDLNet_TCP_Recv(client,message,bufSize); Here's a snippet. The buffer length "len" is set (i.e. message length) but I can't get to the data contents in the message buffer. Some sample bind_transmitter PDU data was sent by a random client to the server at that port. I can't read the PDU (SMPP).
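
    One thing that trips this up with binary protocols such as SMPP: the PDU contains 0x00 bytes, so anything that treats the buffer as a C string (printf("%s"), strlen, string copies) stops at the first null even though the data is all there. Dumping by the returned length shows the real contents; a small sketch (the sample bytes are illustrative):

    ```cpp
    #include <cstdio>

    // Dump "len" bytes as hex; "len" is whatever SDLNet_TCP_Recv returned.
    void hexDump(const unsigned char* buf, int len) {
        for (int i = 0; i < len; ++i) {
            std::printf("%02x ", buf[i]);
            if ((i + 1) % 16 == 0) std::printf("\n");
        }
        std::printf("\n");
    }

    int main() {
        // Example: the first bytes of a bind_transmitter-style PDU header.
        const unsigned char sample[] = {0x00, 0x00, 0x00, 0x2f, 0x00, 0x00, 0x00, 0x02};
        hexDump(sample, sizeof sample);
        return 0;
    }
    ```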

    Read the article

  • Using fscanf with dynamically allocated buffer.

    - by ryyst
    Hi, I got the following code: char buffer[2047]; int charsRead; do { if(fscanf(file, "%2047[^\n]%n%*c", buffer, &charsRead) == 1) { // Do something } } while (charsRead == 2047); I wanted to convert this code to use dynamically allocated variables so that when calling this code often I won't get heavy memory leakage. Thus, I tried this: char *buffer = malloc(sizeof(char) * 2047); int *charsRead = malloc(sizeof(int)); do { if(fscanf(file, "%2047[^\n]%n%*c", *buffer, charsRead) == 1) { // Do something } } while (*charsRead == 2047); Unfortunately, this does not work. I always get “EXC_BAD_ACCESS” errors, just before the if-statement with the fscanf call. What am I doing wrong? Thanks for any help! -- Ry
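
    For what it's worth, the crash is consistent with the first fscanf argument: `*buffer` passes a single char where the %[ conversion expects a `char *`; the original `buffer` and `&charsRead` forms keep working when the storage is heap-allocated. A corrected sketch (also sized 2048, since %2047[^\n] stores up to 2047 characters plus the terminating null; the file name is illustrative):

    ```cpp
    #include <cstdio>
    #include <cstdlib>

    int main() {
        std::FILE* file = std::fopen("input.txt", "r");
        if (!file) return 1;

        char* buffer = static_cast<char*>(std::malloc(2048));
        int charsRead = 0;
        do {
            charsRead = 0;
            // Pass the pointer itself, not *buffer; &charsRead works the same
            // whether charsRead lives on the stack or on the heap.
            if (std::fscanf(file, "%2047[^\n]%n%*c", buffer, &charsRead) == 1) {
                // process buffer here
            }
        } while (charsRead == 2047);

        std::free(buffer);
        std::fclose(file);
        return 0;
    }
    ```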

    Read the article

  • Read a buffer of unknown size (Console input)

    - by Sanarothe
    Hi. I'm a little behind in my X86 Asm class, and the book is making me want to shoot myself in the face. The examples in the book are insufficient and, honestly, very frustrating because of their massive dependencies upon the author's link library, which I hate. I wanted to learn ASM, not how to call his freaking library, which calls more of his library. Anyway, I'm stuck on a lab that requires console input and output. So far, I've got this for my input: input PROC INVOKE ReadConsole, inputHandle, ADDR buffer, Buf - 2, ADDR bytesRead, 0 mov eax,OFFSET buffer Ret input EndP I need to use the input and output procedures multiple times, so I'm trying to make it abstract. I'm just not sure how to use the data that is set to eax here. My initial idea was to take that string array and manually crawl through it by adding 8 to the offset for each possible digit (Input is integer, and there's a little bit of processing) but this doesn't work out because I don't know how big the input actually is. So, how would you swap the string array into an integer that could be used? Full code: (Haven't done the integer logic or the instruction string output because I'm stuck here.) include c:/irvine/irvine32.inc .data inputHandle HANDLE ? outputHandle HANDLE ? buffer BYTE BufSize DUP(?),0,0 bytesRead DWORD ? str1 BYTE "Enter an integer:",0Dh, 0Ah str2 BYTE "Enter another integer:",0Dh, 0Ah str3 BYTE "The higher of the two integers is: " int1 WORD ? int2 WORD ? int3 WORD ? Buf = 80 .code main PROC call handle push str1 call output call input push str2 call output call input push str3 call output call input main EndP larger PROC Ret larger EndP output PROC INVOKE WriteConsole Ret output EndP handle PROC USES eax INVOKE GetStdHandle, STD_INPUT_HANDLE mov inputHandle,eax INVOKE GetStdHandle, STD_INPUT_HANDLE mov outputHandle,eax Ret handle EndP input PROC INVOKE ReadConsole, inputHandle, ADDR buffer, Buf - 2, ADDR bytesRead, 0 mov eax,OFFSET buffer Ret input EndP END main

    Read the article

  • Drawing RAW buffer to CGBitmapContext

    - by Raj
    Hi all, I have a raw image buffer in the RGB format. I need to draw it to a CGContext so that I get a new buffer in the ARGB format. I accomplish this in the following way: create a data provider out of the raw buffer using CGDataProviderCreateWithData and then create an image out of the data provider with the API CGImageCreate. I then write this image back to the CGBitmapContext using CGContextDrawImage. Instead of creating an intermediate image, is there any way of writing the buffer directly to the CGContext so that I can avoid the image creation phase? Thanks

    Read the article

  • Create big buffer on a pic18f with microchip c18 compiler

    - by acemtp
    Using the Microchip C18 compiler with a PIC18F, I want to create a "big" buffer of 3000 bytes in the program data space. If I put this in main() (on the stack): char tab[127]; I get this error: Error [1300] stack frame too large If I make it global, I get this error: Error - section '.udata_main.o' can not fit the section. Section '.udata_main.o' length=0x0000007f How do I create a big buffer? Is there a tutorial on how to manage big buffers on a PIC18F with C18?

    Read the article

  • Visualize the depth buffer

    - by Thanatos
    I'm attempting to visualize the depth buffer for debugging purposes, by drawing it on top of the actual rendering when a key is pressed. It's mostly working, but the resulting image appears to be zoomed in. (It's not just the original image in an odd grayscale.) Why is it not the same size as the color buffer? This is what I'm using to view the depth buffer: void get_gl_size(int &width, int &height) { int iv[4]; glGetIntegerv(GL_VIEWPORT, iv); width = iv[2]; height = iv[3]; } void visualize_depth_buffer() { int width, height; get_gl_size(width, height); float *data = new float[width * height]; glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, data); glDrawPixels(width, height, GL_LUMINANCE, GL_FLOAT, data); delete [] data; }
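
    One piece of state worth ruling out (a hedged suggestion; it may not be the cause here): glDrawPixels is affected by the current pixel zoom and raster position, so leftover values from other code will scale or offset the blit even when width and height are correct. A drop-in variant of the function above with that state pinned down (glWindowPos2i needs GL 1.4 or later):

    ```cpp
    void visualize_depth_buffer() {
        int width, height;
        get_gl_size(width, height);                 // helper from the question
        float* data = new float[width * height];
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, data);

        glPixelZoom(1.0f, 1.0f);   // any previous zoom would rescale the image
        glWindowPos2i(0, 0);       // raster position in window coordinates, ignoring the matrices
        glDrawPixels(width, height, GL_LUMINANCE, GL_FLOAT, data);
        delete[] data;
    }
    ```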

    Read the article

  • Rendering GDI components to a buffer or d3d texture

    - by Tim
    Hi, I'm trying to redirect the output of a GDI application to a buffer, preferably a d3d texture but I'll settle for a system memory buffer that I can then copy to a d3d texture. Specifically, I'm trying to get Google Chrome to render into a d3d buffer to be displayed in a d3d application. Are there any foolproof ways to do this or am I opening the mother of all worm-cans? Thanks, Tim.
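
    One common route to the "system memory buffer" fallback is to ask the window to paint into a memory DC backed by a DIB section, then upload those bytes to a D3D texture each frame. A hedged sketch (real Win32 calls, but whether Chrome actually paints through this path depends on its rendering mode, so treat it as a starting point):

    ```cpp
    #include <windows.h>
    #include <vector>

    // Capture a window's GDI output into a system-memory BGRA buffer.
    bool captureWindow(HWND hwnd, std::vector<BYTE>& pixels, int& width, int& height) {
        RECT rc;
        GetClientRect(hwnd, &rc);
        width = rc.right - rc.left;
        height = rc.bottom - rc.top;

        HDC windowDc = GetDC(hwnd);
        HDC memDc = CreateCompatibleDC(windowDc);

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth = width;
        bmi.bmiHeader.biHeight = -height;          // negative height: top-down rows
        bmi.bmiHeader.biPlanes = 1;
        bmi.bmiHeader.biBitCount = 32;             // BGRA, convenient for texture upload
        bmi.bmiHeader.biCompression = BI_RGB;

        void* bits = nullptr;
        HBITMAP dib = CreateDIBSection(memDc, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
        if (!dib) { DeleteDC(memDc); ReleaseDC(hwnd, windowDc); return false; }
        HGDIOBJ old = SelectObject(memDc, dib);

        // Ask the window to render itself into our memory DC.
        BOOL ok = PrintWindow(hwnd, memDc, PW_CLIENTONLY);
        if (ok) {
            pixels.assign(static_cast<BYTE*>(bits),
                          static_cast<BYTE*>(bits) + static_cast<size_t>(width) * height * 4);
        }

        SelectObject(memDc, old);
        DeleteObject(dib);
        DeleteDC(memDc);
        ReleaseDC(hwnd, windowDc);
        return ok == TRUE;
    }
    ```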

    Read the article
