Search Results

Search found 3117 results on 125 pages for 'z buffer'.

Page 2 of 125

  • who frees the setvbuf buffer?

    - by Evan Teran
    So I've been digging into how the stdio portion of libc is implemented and I've come across another question. Looking at man setvbuf I see the following: When the first I/O operation occurs on a file, malloc(3) is called, and a buffer is obtained. This makes sense: your program will have a malloc in it for I/O even if you never call malloc yourself. My gut reaction is that libc will clean up its own mess here, which I can only assume it does because valgrind reports no memory leaks (they could of course do something dirty and not allocate it via malloc directly... but we'll assume that it literally uses malloc for now). But you can specify your own buffer too... int main() { char *p = malloc(100); setvbuf(stdout, p, _IOFBF, 100); puts("hello world"); } Oh no, memory leak! valgrind confirms it. So it seems that whenever stdio allocates a buffer on its own, it will get freed automatically (at the latest on program exit, but perhaps on stream close). But if you specify the buffer explicitly, then you must clean it up yourself. There is a catch though. The man page also says this: You must make sure that the space that buf points to still exists by the time stream is closed, which also happens at program termination. For example, the following is invalid: Now this is getting interesting for the standard streams. How would one properly clean up a manually allocated buffer for them, since they are closed at program termination? I could imagine a "clean this up when I close" flag inside the FILE struct, but it gets hairy because, if I read this right, doing something like this: setvbuf(stdout, 0, _IOFBF, 100); printf("hello "); setvbuf(stdout, 0, _IOLBF, 100); printf("world\n"); would cause 2 allocations by the standard library because of this sentence: If the argument buf is NULL, only the mode is affected; a new buffer will be allocated on the next read or write operation.
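
    One way to make a caller-supplied buffer safe to free (a minimal sketch, an assumption about intent rather than anything from the post): close or flush the stream first, so the buffer outlives every I/O operation that might touch it.

        #include <cstdio>
        #include <cstdlib>

        int main() {
            char *p = static_cast<char *>(std::malloc(100));
            std::setvbuf(stdout, p, _IOFBF, 100);   // must happen before any I/O on stdout
            std::puts("hello world");
            std::fclose(stdout);                    // flushes and closes; stdio stops using p here
            std::free(p);                           // safe: the stream no longer references the buffer
            return 0;
        }

    Whether a given libc also tolerates freeing the buffer and letting exit-time cleanup close the stream is exactly the grey area the question is about, so the sketch sidesteps it by closing explicitly.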

  • [ebp + 6] instead of +8 in a JIT compiler

    - by David Titarenco
    I'm implementing a simplistic JIT compiler in a VM I'm writing for fun (mostly to learn more about language design) and I'm getting some weird behavior, maybe someone can tell me why. First I define a JIT "prototype" both for C and C++: #ifdef __cplusplus typedef void* (*_JIT_METHOD) (...); #else typedef (*_JIT_METHOD) (); #endif I have a compile() function that will compile stuff into ASM and stick it somewhere in memory: void* compile (void* something) { // grab some memory unsigned char* buffer = (unsigned char*) malloc (1024); // xor eax, eax // inc eax // inc eax // inc eax // ret -> eax should be 3 /* WORKS! buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0; buffer[3] = 0x67; buffer[4] = 0x40; buffer[5] = 0x67; buffer[6] = 0x40; buffer[7] = 0x67; buffer[8] = 0x40; buffer[9] = 0xC3; */ // xor eax, eax // mov eax, 9 // ret 4 -> eax should be 9 /* WORKS! buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0; buffer[3] = 0x67; buffer[4] = 0xB8; buffer[5] = 0x09; buffer[6] = 0x00; buffer[7] = 0x00; buffer[8] = 0x00; buffer[9] = 0xC3; */ // push ebp // mov ebp, esp // mov eax, [ebp + 6] ; wtf? shouldn't this be [ebp + 8]!? // mov esp, ebp // pop ebp // ret -> eax should be the first value sent to the function /* WORKS! */ buffer[0] = 0x66; buffer[1] = 0x55; buffer[2] = 0x66; buffer[3] = 0x89; buffer[4] = 0xE5; buffer[5] = 0x66; buffer[6] = 0x66; buffer[7] = 0x8B; buffer[8] = 0x45; buffer[9] = 0x06; buffer[10] = 0x66; buffer[11] = 0x89; buffer[12] = 0xEC; buffer[13] = 0x66; buffer[14] = 0x5D; buffer[15] = 0xC3; // mov eax, 5 // add eax, ecx // ret -> eax should be 50 /* WORKS! buffer[0] = 0x67; buffer[1] = 0xB8; buffer[2] = 0x05; buffer[3] = 0x00; buffer[4] = 0x00; buffer[5] = 0x00; buffer[6] = 0x66; buffer[7] = 0x01; buffer[8] = 0xC8; buffer[9] = 0xC3; */ return buffer; } And finally I have the main chunk of the program: void main (int argc, char **args) { DWORD oldProtect = (DWORD) NULL; int i = 667, j = 1, k = 5, l = 0; // generate some arbitrary function _JIT_METHOD someFunc = (_JIT_METHOD) compile(NULL); // windows only #if defined _WIN64 || defined _WIN32 // set memory permissions and flush CPU code cache VirtualProtect(someFunc,1024,PAGE_EXECUTE_READWRITE, &oldProtect); FlushInstructionCache(GetCurrentProcess(), someFunc, 1024); #endif // this asm just for some debugging/testing purposes __asm mov ecx, i // run compiled function (from wherever *someFunc is pointing to) l = (int)someFunc(i, k); // did it work? printf("result: %d", l); free (someFunc); _getch(); } As you can see, the compile() function has a couple of tests I ran to make sure I get expected results, and pretty much everything works but I have a question... On most tutorials or documentation resources, to get the first value of a function passed (in the case of ints) you do [ebp+8], the second [ebp+12] and so forth. For some reason, I have to do [ebp+6] then [ebp+10] and so forth. Could anyone tell me why?
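
    A hedged guess, not confirmed anywhere in the post: the 0x66 operand-size prefix turns push ebp into the 16-bit push bp, so only two bytes of saved frame pointer land on the stack and the first argument ends up at [ebp + 6] rather than [ebp + 8]. A minimal sketch of the same stub without any prefixes, assuming a 32-bit build and the same malloc'd buffer as in compile():

        #include <cstring>

        // Emits: push ebp; mov ebp, esp; mov eax, [ebp + 8]; mov esp, ebp; pop ebp; ret
        void emit_first_arg_stub(unsigned char *buffer) {
            const unsigned char stub[] = {
                0x55,               // push ebp
                0x89, 0xE5,         // mov ebp, esp
                0x8B, 0x45, 0x08,   // mov eax, [ebp + 8]   ; first dword argument
                0x89, 0xEC,         // mov esp, ebp
                0x5D,               // pop ebp
                0xC3                // ret
            };
            std::memcpy(buffer, stub, sizeof(stub));
        }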

  • Usage of current-buffer in emacs?

    - by Zubair
    I'm using emacs and I have written a script which uses "current-buffer". However, emacs doesn't recognise "current-buffer". When I try "M-x current-buffer" I get the response "No match". Any idea what I'm doing wrong?

  • function's return address is different from its supposed value (buffer overflow)

    - by ultrajohn
    Good day everyone! I’m trying to understand how buffer overflow works. I’m doing this for my project in a computer security course I’m taking. Right now, I’m in the process of determining the address of the function’s return address, which I’m supposed to change to perform a buffer overflow attack. I’ve written a simple program based on an example I read on the internet. What this program does is create an integer pointer that is made to point to the address of the function’s return address on the stack. To do this (granted I understand how a function’s variables get organized on the stack), I add 8 to the buffer variable’s address and set that as the value of ret. I’m not doing anything here that would change the address contained in the location of func’s return address. Here's the program: Output of the program when it gets executed: As you can see, I’m printing the addresses of the variables buffer and ret. I’ve added an additional statement printing the value of the ret variable (the supposed location of func’s return address, so this should print the address of the next instruction which will get executed after func returns). Here is the dump which shows the supposed address of the instruction to be executed after func returns. (Underlined in green) As you can see, that value is quite different from the value printed for the variable ret. My question is, why are they different (assuming, of course, that what I’ve done is all right)? Otherwise, what have I done wrong? Is my understanding of the program’s runtime stack wrong? Please help me understand this. My project is due next week and I’ve barely touched it yet. I’m sorry if I’m being demanding, I badly need your help.
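
    A sketch of the textbook layout this kind of exercise usually assumes (not the poster's actual program, which isn't shown): on 32-bit x86, with no alignment padding and no stack-protector canary, the saved EBP sits right above the local buffer and the return address right above that, so the slot is at buffer + sizeof(buffer) + 4. Modern compilers insert padding and canaries, which is one plausible reason the computed and observed values differ; compiling with something like -m32 -fno-stack-protector gets closer to the textbook picture.

        #include <cstdio>
        #include <cstdint>

        void func(void) {
            char buffer[8];
            // Assumed layout: [ buffer ][ saved EBP (4 bytes) ][ return address ]
            std::uintptr_t *ret = (std::uintptr_t *)(buffer + sizeof(buffer) + 4);
            std::printf("buffer = %p, ret slot = %p, *ret = %p\n",
                        (void *)buffer, (void *)ret, (void *)*ret);
        }

        int main() {
            func();
            return 0;
        }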

  • C# Application crashes with Buffer Overrun in deployed (.exe) version, but not in Visual Studio

    - by Ben
    Hi, I have a C# Windows Forms application that runs perfectly from within Visual Studio, but crashes when it's deployed and run from the .exe. It crashes with a Buffer Overrun error... and it's pretty clear that this error is not being thrown from within my code. Instead, Windows must be detecting some sort of buffer overrun and shutting down the application from the outside. I don't think there's one specific line of code that is causing it; it simply happens intermittently. Does anybody have any thoughts on what the possible causes of a Buffer Overrun error might be, and why it would only occur in the deployed application and not when run from within Visual Studio? Thanks in advance, Ben

  • Buffer management for socket application best practice

    - by Poni
    I have a Windows IOCP app. I understand that for an async I/O operation (on the network) the buffer must remain valid for the duration of the send/read operation. So for each connection I have one buffer for reading. For sending I use buffers into which I copy the data to be sent; when the send operation completes I release the buffer so it can be reused. So far that's fine and not a big issue. What remains unclear is: how do you guys do this? Another thing is that even with multiple buffers arranged this way, the receiver side can still be flooded with data (talking from experience). Even setting SO_RCVBUF to 25MB didn't help in my tests. So what should I do? Have a to-be-sent queue?
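
    Since the post ends on "Have a to-be-sent queue?", here is a minimal sketch of that idea (names and structure are illustrative assumptions, not from the post): each connection keeps a queue of outgoing buffers, only one send is outstanding at a time, and the completion handler kicks off the next one. Data that outruns the receiver then piles up visibly in the queue, where it can be measured or capped.

        #include <deque>
        #include <mutex>
        #include <utility>
        #include <vector>

        struct Connection {
            std::mutex lock;
            std::deque<std::vector<char>> pending;   // the to-be-sent queue
            bool send_in_progress = false;

            void queue_send(std::vector<char> data) {
                std::lock_guard<std::mutex> guard(lock);
                pending.push_back(std::move(data));
                if (!send_in_progress) {
                    send_in_progress = true;
                    // issue the async send (e.g. WSASend) for pending.front() here
                }
            }

            // Called from the IOCP completion thread when the current send finishes.
            void on_send_complete() {
                std::lock_guard<std::mutex> guard(lock);
                pending.pop_front();
                if (!pending.empty()) {
                    // issue the next async send for pending.front() here
                } else {
                    send_in_progress = false;
                }
            }
        };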

  • How much buffer does NetworkStream and TcpClient have?

    - by Earlz
    Hello, We are writing a TCP server and client program. How much space is there in the TcpClient buffer? Like, at what point will it begin to throw away data? We are trying to determine whether the TcpClient can be used in a blocking fashion or if it should go into its own background thread (so that the buffer cannot fill up).

  • Buffer size: N*sizeof(type) or sizeof(var)? C++

    - by flyout
    I am just starting with cpp and I've been following different examples to learn from them, and I see that buffer size is set in different ways, for example: char buffer[255]; StringCchPrintf(buffer, sizeof(buffer), TEXT("%s"), X); VS char buffer[255]; StringCchPrintf(buffer, 255*sizeof(char), TEXT("%s"), X); Which one is the correct way to use it? I've seen this in other functions like InternetReadFile, ZeroMemory and MultiByteToWideChar.
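
    A small sketch of why sizeof(buffer) is usually the safer spelling, with the one caveat that matters (this is general C/C++ behaviour, not specific to the Windows calls in the question): sizeof(buffer) tracks the declaration automatically, but only while buffer really is an array; once it is passed to a function it decays to a pointer and sizeof gives the pointer size instead.

        #include <cstdio>

        void takes_pointer(char *p) {
            // Here sizeof(p) is the size of a pointer (typically 4 or 8), not 255:
            // the array decayed to a pointer at the call site.
            std::printf("sizeof(p)      = %zu\n", sizeof(p));
        }

        int main() {
            char buffer[255];
            std::printf("sizeof(buffer) = %zu\n", sizeof(buffer));   // 255, follows the declaration
            takes_pointer(buffer);
            return 0;
        }

    One extra wrinkle for the APIs named in the question: StringCchPrintf wants a count of characters, so sizeof(buffer) / sizeof(buffer[0]) keeps working if the element type ever becomes wchar_t, whereas a raw sizeof(buffer) would then be a byte count.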

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: What is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512 byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs, I could go for a more memory hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)? Thanks in advance for any ideas, Adam

  • Need help with buffer overrun.

    - by Morinar
    I've got a buffer overrun I absolutely can't seem to figure out (in C). First of all, it only happens maybe 10% of the time or so. The data that it is pulling from the DB each time doesn't seem to be all that much different between executions... at least not different enough for me to find any discernible pattern as to when it happens. The exact message from Visual Studio is this: A buffer overrun has occurred in hub.exe which has corrupted the program's internal state. Press Break to debug the program or Continue to terminate the program. For more details please see Help topic 'How to debug Buffer Overrun Issues'. If I debug, I find that it breaks in __report_gsfailure(), which I'm pretty sure comes from the /GS flag on the compiler and also signifies that this is an overrun on the stack rather than the heap. I can also see the function this was thrown from as it was returning, but I can't see anything in there that would cause this behavior; the function has also existed for a long time (10+ years, albeit with some minor modifications) and as far as I know, this has never happened before. I'd post the code of the function, but it's decently long and references a lot of proprietary functions/variables/etc. I'm basically just looking for either some idea of what I should be looking for that I haven't, or perhaps some tools that may help. Unfortunately, nearly every tool I've found only helps with debugging overruns on the heap, and unless I'm mistaken, this is on the stack. Thanks in advance.
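
    A rough illustration of what the /GS machinery is reporting (a generic sketch, not the actual 10-year-old function): the compiler places a cookie between the locals and the return address and checks it when the function returns, so __report_gsfailure fires on exit from the function even though the bad write happened somewhere inside it or in something it inlined. Unbounded copies into fixed-size local arrays are the usual suspects.

        #include <cstring>

        // Classic stack overrun: inputs longer than 15 characters write past 'name',
        // clobber the /GS cookie above it, and the check on return reports the failure.
        void copy_name(const char *src) {
            char name[16];
            std::strcpy(name, src);   // no length check
        }

    An intermittent 10% failure rate would fit that pattern, since only the occasional oversized or unexpected input needs to reach the copy.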

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is documented to act like TCP_CORK. I have a wrapper function around send(): int SocketConnection_Write(SocketConnection *this, void *buf, int len) { errno = 0; int sent = send(this->fd, buf, len, MSG_NOSIGNAL); if (errno == EPIPE || errno == ENOTCONN) { throw(exc, &SocketConnection_NotConnectedException); } else if (errno == ECONNRESET) { throw(exc, &SocketConnection_ConnectionResetException); } else if (sent != len) { throw(exc, &SocketConnection_LengthMismatchException); } return sent; } Assuming I want to use the kernel buffer, I could go with TCP_CORK, enabling it whenever necessary and then disabling it to flush the buffer. But on the other hand, that costs an additional system call. Thus, using MSG_MORE seems more appropriate to me. I'd simply change the above send() line to: int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE); According to lwn.net, packets will be flushed automatically if they are large enough: If an application sets that option on a socket, the kernel will not send out short packets. Instead, it will wait until enough data has shown up to fill a maximum-size packet, then send it. When TCP_CORK is turned off, any remaining data will go out on the wire. But this section only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities: call send() with an empty buffer and without MSG_MORE set, or re-apply the TCP_CORK option as described on this page. Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected; obviously running the server through `strace` is not an option, so would the simplest way be to use `netcat` and then look at its `strace` output? Or will the kernel handle traffic transmitted over a loopback interface differently?
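
    A minimal sketch of the first possibility, under the assumption (taken from the send(2)/tcp(7) wording rather than verified here) that a trailing send() without MSG_MORE releases whatever the kernel has been holding back for the socket; an empty send should not even be needed if the last real chunk simply omits the flag.

        #include <sys/socket.h>
        #include <sys/types.h>

        // Send a header and body as one wire packet where possible: hold the header
        // back with MSG_MORE, then let the final chunk (sent without the flag) flush it.
        ssize_t send_message(int fd, const void *header, size_t header_len,
                             const void *body, size_t body_len) {
            ssize_t sent = send(fd, header, header_len, MSG_NOSIGNAL | MSG_MORE);
            if (sent < 0) return sent;
            return send(fd, body, body_len, MSG_NOSIGNAL);   // no MSG_MORE: pushes everything out
        }

    For checking the behaviour, capturing with tcpdump on the loopback interface still shows segment boundaries; loopback has a much larger MTU than a real NIC, so sizes differ, but whether two sends were coalesced remains visible.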

  • Android depth buffer issue: Advice for anyone experiencing problem

    - by Andrew Smith
    I've wasted around 30 hours this week writing and re-writing code, believing that I had misunderstood how the OpenGL depth buffer works. Everything I tried failed. I have now resolved my problem by finding what may be an error in the Android implementation of OpenGL. See this API entry: http://www.opengl.org/sdk/docs/man/xhtml/glClearDepth.xml void glClearDepth(GLclampd depth); Specifies the depth value used when the depth buffer is cleared. The initial value is 1. Android's implementation has two versions of this command: glClearDepthx, which takes an integer value, clamped 0-1; and glClearDepthf, which takes a floating point value, clamped 0-1. If you use glClearDepthf(1) then you get the results you would expect. If you use glClearDepthx(1), as I was doing, then you get different results. (Note that 1 is the default value, but calling the command with the argument 1 produces different results than not calling it at all.) Quite what is happening I do not know, but the depth buffer was being cleared to a value different from what I had specified.
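
    A possible explanation, not stated in the post: in OpenGL ES the x-suffixed entry points take GLfixed, a 16.16 fixed-point value, so glClearDepthx(1) requests a clear depth of 1/65536 (almost 0.0) rather than 1.0; the fixed-point encoding of 1.0 is 1 << 16. A tiny sketch under that assumption:

        #include <GLES/gl.h>   // OpenGL ES 1.x header (assumed build setup)

        void clear_depth_buffer_to_one() {
            glClearDepthx(1 << 16);      // GLfixed is 16.16 fixed point, so 1.0 == 0x10000
            // glClearDepthf(1.0f);      // float variant mentioned in the post; same effect
            glClear(GL_DEPTH_BUFFER_BIT);
        }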

  • leak in fgets when assigning to buffer

    - by monkeyking
    I'm having problems understanding why the following code leaks in one case, and not in the other. The difference is while(NULL!=fgets(buffer,length,file)) //doesn't leak while(NULL!=(buffer=fgets(buffer,length,file))) //leaks I thought it would be the same. Full code below. #include <stdio.h> #include <stdlib.h> #define LENS 10000 void no_leak(const char* argv){ char *buffer = (char *) malloc(LENS); FILE *fp=fopen(argv,"r"); while(NULL!=fgets(buffer,LENS,fp)){ fprintf(stderr,"%s",buffer); } fclose(fp); fprintf(stderr,"%s\n",buffer); free(buffer); } void with_leak(const char* argv){ char *buffer = (char *) malloc(LENS); FILE *fp=fopen(argv,"r"); while(NULL!=(buffer=fgets(buffer,LENS,fp))){ fprintf(stderr,"%s",buffer); } fclose(fp); fprintf(stderr,"%s\n",buffer); free(buffer); }
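
    A hedged reading of the difference: fgets returns NULL at end-of-file, so buffer = fgets(buffer, LENS, fp) overwrites the only copy of the malloc'd pointer with NULL on the final iteration; free(NULL) is then a no-op and the block is never released (the later fprintf of buffer also dereferences NULL). A sketch that keeps the allocation pointer separate from the return value:

        #include <cstdio>
        #include <cstdlib>

        void no_leak_separate_result(const char *path) {
            const int LENS = 10000;
            char *buffer = static_cast<char *>(std::malloc(LENS));
            std::FILE *fp = std::fopen(path, "r");
            if (fp != NULL) {
                char *line;                            // holds fgets' return value only
                while ((line = std::fgets(buffer, LENS, fp)) != NULL) {
                    std::fprintf(stderr, "%s", line);  // line aliases buffer while non-NULL
                }
                std::fclose(fp);
            }
            std::free(buffer);                         // buffer still holds the malloc'd address
        }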

  • Reading HttpURLConnection InputStream - manual buffer or BufferedInputStream?

    - by stormin986
    When reading the InputStream of an HttpURLConnection, is there any reason to use one of the following over the other? I've seen both used in examples. Manual Buffer: while ((length = inputStream.read(buffer)) > 0) { os.write(buffer, 0, length); } BufferedInputStream: is = http.getInputStream(); bis = new BufferedInputStream(is); ByteArrayBuffer baf = new ByteArrayBuffer(50); int current = 0; while ((current = bis.read()) != -1) { baf.append(current); } EDIT: I'm still new to HTTP in general, but one consideration that comes to mind is that if I am using a persistent HTTP connection, I can't just read until the input stream is empty, right? In that case, wouldn't I need to read the message length and just read the input stream for that length? And similarly, if NOT using a persistent connection, is the code I included 100% good to go in terms of reading the stream properly?

  • How do I get the size of the boost buffer

    - by Anonymous
    I am trying to make an asynchronous server in Visual Studio and I use boost::asio::async_read(m_socket, boost::asio::buffer(m_buffer), boost::bind(&tcp_connection::handle_read, shared_from_this(), boost::asio::placeholders::error)); to read incoming data into m_buffer, declared as boost::array<char, 256> m_buffer; but how do I get the size of the data that was read into m_buffer? size() didn't work, end() didn't work. Any help would be fantastic. Thanks in advance.
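
    A sketch of the usual way to learn how much was actually read (the class and member names are assumed to match the tcp_connection from the question): boost::asio::buffer(m_buffer) always describes the full 256 bytes of the boost::array, and the completion handler receives the real count if boost::asio::placeholders::bytes_transferred is bound as a second argument.

        // Handler with the extra byte-count parameter (fragment; assumes the
        // surrounding tcp_connection class from the question).
        void tcp_connection::handle_read(const boost::system::error_code& error,
                                         std::size_t bytes_transferred) {
            if (!error) {
                // m_buffer.size() is always 256; bytes_transferred is how much arrived.
                handle_payload(m_buffer.data(), bytes_transferred);   // hypothetical helper
            }
        }

        // Starting the read with both placeholders bound:
        boost::asio::async_read(m_socket,
            boost::asio::buffer(m_buffer),
            boost::bind(&tcp_connection::handle_read, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));

    Note that the composed async_read operation only completes once the whole 256-byte buffer is filled; m_socket.async_read_some(...) is the variant that completes with whatever arrives.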

  • [emacs] make ibuffer-visit-buffer behave like ido-switch-to-buffer?

    - by Stephen
    Is there a way to make ibuffer-visit-buffer behave like ido-switch-to-buffer (with raise-frame option)? If there is a window/frame containing the buffer I'd like emacs to take me there rather than opening the same buffer in the current window. I guess switch-to-buffer is remapped to ido-switch-to-buffer when ido-mode is turned on, so would doing something like that work in this case (remap ibuffer-visit-buffer to ido-switch-to-buffer)? Thanks

  • Problems with reading into buffer using boost::asio::async_read

    - by Max
    Good day. I have a Types.hpp file in my project. Within it I have: .... namespace RC { ..... ..... struct ViewSettings { .... }; ..... } In Server.cpp I'm including this Types.hpp file, and there I have: class Session { ..... RC::ViewSettings tmp; boost::asio::async_read(socket_, boost::asio::buffer(&tmp, sizeof(RC::ViewSettings)), boost::bind(&Session::Finish_Reading_Data, shared_from_this(), boost::asio::placeholders::error)); ..... } During compilation I get these errors: error C2825: 'F': must be a class or namespace when followed by '::' : see reference to class template instantiation 'boost::_bi::result_traits<R,F>' being compiled with [ R=boost::_bi::unspecified, F=void (__thiscall Session::* )(void) ] : see reference to class template instantiation 'boost::_bi::bind_t<R,F,L>' being compiled with [ R=boost::_bi::unspecified, F=void (__thiscall Session::* )(void), L=boost::_bi::list2<boost::_bi::value<boost::shared_ptr<Session>>,boost::arg<1>> ] error C2039: 'result_type' : is not a member of '`global namespace'' And code like this works properly: int w; boost::asio::async_read(socket_, boost::asio::buffer(&w, sizeof(int)), boost::bind(&Session::Handle_Read_Width, shared_from_this(), boost::asio::placeholders::error)); Please help. What's the problem here? Thanks in advance.
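
    One possible cause, inferred only from the F=void (__thiscall Session::* )(void) in the error text: Finish_Reading_Data seems to be declared with no parameters, while the bind expression supplies boost::asio::placeholders::error, so the bind cannot be formed. A sketch of the declaration that would match (names taken from the question, everything else assumed), plus the unchanged call:

        // Inside class Session:
        void Finish_Reading_Data(const boost::system::error_code& error);

        // The existing call then binds cleanly:
        boost::asio::async_read(socket_,
            boost::asio::buffer(&tmp, sizeof(RC::ViewSettings)),
            boost::bind(&Session::Finish_Reading_Data, shared_from_this(),
                        boost::asio::placeholders::error));

    Also worth noting (a lifetime assumption, not a diagnosis of the error): tmp must stay alive until the handler runs, so it should be a member of Session rather than a local, and reading raw bytes into it this way only works while ViewSettings stays trivially copyable.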

  • IndexOutOfRangeException when a stream is a multiple of the buffer size

    - by dnord
    I don't have a lot of experience with streams and buffers, but I'm having to use them for a project, and I'm stuck on an exception being thrown when the length of the stream I'm reading is a multiple of the buffer size I've chosen. Let me show you: My code starts by reading bufferSize (100, let's say) bytes from the stream: numberOfBytesRead = DataReader.GetBytes(0, index, output, 0, bufferSize); Then I loop through a while loop: while (numberOfBytesRead == bufferSize) { BufferWriter.Write(output); BufferWriter.Flush(); index += bufferSize; numberOfBytesRead = DataReader.GetBytes(0, index, output, 0, bufferSize); } ... and, once we get to a non-bufferSize read, we know we've hit the end of the stream and can move on. But if the bufferSize is 100, and the stream is 200 bytes, we'll read positions 0-99, then 100-199, and then the attempt to read 200-299 errors out. I'd like it to return 0, but it throws an error. What I'm doing to handle that is, well, a try-catch: catch (System.IndexOutOfRangeException) { numberOfBytesRead = 0; } ...which ends the loop and successfully finishes the thing, but we all know I don't want to control code flow with error handling. Is there a better (more standard?) way to handle stream reading when the stream length is unknown? This seems like a small wrinkle in a fairly reasonable strategy for reading streams, but I just don't know if I've got it wrong or what. The specifics of this (which I've cleaned up a little bit for posting) are a MySqlDataReader hitting a LARGEBLOB column. It works whenever the buffer is larger than the number of returned bytes, or when the number of returned bytes is not a multiple of bufferSize, because in those cases no IndexOutOfRangeException is thrown.

  • Create dynamic buffer SharpDX

    - by fedab
    I want to set up a buffer that is updated every frame but can't figure out what I have to do. The only working thing I have is this: mdexcription = new BufferDescription(Matrix.SizeInBytes * Matrices.Length, ResourceUsage.Dynamic, BindFlags.VertexBuffer, CpuAccessFlags.Write, ResourceOptionFlags.None, 0); instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription); vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0); DeviceContext.InputAssembler.SetVertexBuffers(1, vBB); Draw: //Change Matrices (Matrix[]) every frame... instanceBuffer.Dispose(); instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription); vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0); DeviceContext.InputAssembler.SetVertexBuffers(1, vBB); I guess calling Dispose() and creating a new buffer every frame is slow and it can be done much faster. I've read about DataStream but I do not know how to set it up properly. What steps do I have to take to set up a DataStream and achieve a fast every-frame update?

  • How to properly render a Frame Buffer to the BackBuffer in Stage3D / AGAL

    - by bigp
    After doing a render pass with RenderToTarget (RTT), how do you properly render that texture buffer to the screen while maintaining original scale / proportions so it doesn't stretch or lose quality? Can an AGAL VertexShader & FragmentShader be written so it's adaptable to any Texture size and Viewport dimensions? I find I'm getting some "blocky" effects in some of my first attempts at "ping-ponging" between two Texture buffers (to create trailing effects). Perhaps I'm not using the UVs correctly between the rendering-to-target and/or the backbuffer? Is there a simpler way just to "splash" the texture on the backbuffer, or is a Quad absolutely necessary (4 vertices, 2 triangles)? If it needs the Quad, should the Texture buffer be fully drawn (0.0 to 1.0 for vertical and horizontal UVs), or only a percentage of it should, like the example below? Texture Buffer U: 0.0 to viewport.width/texturebuffer.width; Texture Buffer V: 0.0 to viewport.height/texturebuffer.height; Thanks!
