Search Results

Search found 3136 results on 126 pages for 'buffer overrun'.

Page 4/126 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • about buffer overflow

    - by Abed
    Hello guys, I am new to the ethical hacking world, and one of the most important topics is the stack overflow. Anyway, I coded a vulnerable C program which has a char name[400] declaration, but when I run the program with 401 A's as input it doesn't overflow. The book I am following says it must overflow, and logic says so too, so what's wrong?
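
    A rough reconstruction of the kind of program the question describes is sketched below; the actual code isn't given, so the names and the use of strcpy are assumptions. Note that one byte past the end of a 400-byte array usually only clobbers compiler padding, and modern compilers also insert stack canaries unless told not to, which is why 401 A's often won't visibly "overflow".

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical vulnerable program: copies argv[1] into a 400-byte
           stack buffer with no bounds check. */
        void greet(const char *input)
        {
            char name[400];
            strcpy(name, input);            /* overflows if input > 399 chars */
            printf("Hello, %s\n", name);
        }

        int main(int argc, char **argv)
        {
            if (argc > 1)
                greet(argv[1]);
            return 0;
        }

    To reproduce the textbook crash you typically have to write well past the array (through the padding, the saved frame pointer, and the return address) and build with protections disabled, e.g. gcc -fno-stack-protector; exact flags depend on the toolchain.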

    Read the article

  • HTML5 video - Safari (Mac) - live HLS stream - seek in live buffer

    - by bsnote
    I have the following HTML: <head> </head> <body> <button type="button" onclick="onSeek();">Seek</button> <video id="video" src="http://<host>/somevideo.m3u8" autoplay controls> </video> <script src="http://code.jquery.com/jquery-1.7.2.min.js"></script> <script> function onSeek() { var video = $('#video')[0]; video.currentTime = -300; } </script> </body> When I open the page, the m3u8 playlist already has about 100 media segments (5 min). Since it's a live video, it starts playing from one of the last media segments and video.currentTime is set to 0. video.seekable.start(i) and video.seekable.end(i) show the seekable range, which is 0..currentTime. How can I seek to the previous media segments, say 5 minutes back? video.currentTime = -300 doesn't work; it resets currentTime to 0.

    Read the article

  • How to write to the OpenGL Depth Buffer

    - by Mikepote
    I'm trying to implement an old-school technique where a rendered background image AND preset depth information are used to occlude other objects in the scene. So for instance, if you have a picture of a room with some wires hanging from the ceiling in the foreground, these are given a shallow depth value in the depth map, and when rendered correctly, this allows the character to walk "behind" the wires but in front of other objects in the room. So far I've tried creating a depth texture using: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Image.GetWidth(), Image.GetHeight(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels); and then just binding it to a quad and rendering that over the screen, but it doesn't write the depth values from the texture. I've also tried: glDrawPixels(Image.GetWidth(), Image.GetHeight(), GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels); but this slows my framerate down to about 0.25 fps... I know that you can do this in a pixel shader by setting gl_FragDepth to a value from the texture, but I wanted to know if I could achieve this on hardware without pixel shader support.
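
    Without shaders, glDrawPixels with GL_DEPTH_COMPONENT is essentially the only way to write arbitrary per-pixel depth, so the usual optimisation is to keep the depth data resident in a pixel buffer object instead of re-sending it from client memory every frame. A sketch of that idea, assuming GL 1.4+ with ARB_pixel_buffer_object and that width, height and pixels are the obvious placeholders:

        // One-time setup: park the pre-baked depth map in a pixel unpack buffer.
        GLuint depthPbo;
        glGenBuffers(1, &depthPbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, depthPbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * sizeof(GLfloat), pixels, GL_STATIC_DRAW);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

        // Every frame: write only the depth buffer, leaving the colour image alone.
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, depthPbo);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_TRUE);
        glWindowPos2i(0, 0);                                            // raster position in window coordinates
        glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, 0);   // 0 = offset into the bound PBO
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    Using GL_FLOAT (or GL_UNSIGNED_INT) here assumes the depth map is stored that way; it also gives more than the 256 depth levels that GL_UNSIGNED_BYTE allows and may avoid a per-pixel format conversion in the driver.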

    Read the article

  • Vim: How to create autocomplete/chooser from entries in buffer?

    - by Doug Avery
    Found out today that if you press Ctrl-Opt-Cmd-V in TextMate, it produces a cute little "chooser" dialog in place that allows you to page through your recent clipboard entries. It looks very similar to how Ctrl-P and Ctrl-N work in Vim, except it shows whole lines instead of simple word objects. It seems like this would be doable in Vim: it already has a buffer full of entries (the YankRing buffer, for example), and it already has a chooser, but I can't think of where I'd look to start putting these two together. Any ideas? (I know that YankRing already does this by opening a new window full of buffer content, but I wonder if there's a way to do it without all the window switching/closing/etc.)

    Read the article

  • Sharing VBO with multiple objects and fixed size buffer data

    - by Mark Ingram
    I'm just messing around with OpenGL and getting some basic structures in place, and my first attempt resulted in each SceneObject class (which just contains vertex information right now) having its own VBO inside it. However, I've read that it might be better to share VBOs across multiple objects. Also, I read that you should avoid resizing a VBO (repeated calls to glBufferData with different size parameters), and instead choose a fixed size for a VBO and just use a range of the buffer. I don't think changing the size of the buffer data would happen too often, but surely it would be better to only allocate the data you need? Choosing an arbitrary value seems risky. I'm looking for some advice on working with individual objects in a scene and their associated buffer data.
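
    One common arrangement is sketched below under invented names (SharedVbo, upload and the float-only vertex format are assumptions, not anything from the post): allocate one fixed-capacity VBO up front with glBufferData(..., NULL, ...), then hand out byte ranges to objects and fill them with glBufferSubData, so the buffer itself is never re-specified.

        #include <GL/glew.h>
        #include <vector>

        struct SharedVbo {
            GLuint id = 0;
            GLsizeiptr capacity = 0;   // fixed size chosen once
            GLsizeiptr used = 0;       // simple bump allocator

            void create(GLsizeiptr bytes) {
                capacity = bytes;
                glGenBuffers(1, &id);
                glBindBuffer(GL_ARRAY_BUFFER, id);
                glBufferData(GL_ARRAY_BUFFER, capacity, nullptr, GL_STATIC_DRAW); // allocate, no data yet
            }

            // Copies an object's vertices into the next free range and returns its byte offset,
            // or -1 if the pool is full (the caller then starts a second pool).
            GLintptr upload(const std::vector<float>& verts) {
                GLsizeiptr bytes = (GLsizeiptr)(verts.size() * sizeof(float));
                if (used + bytes > capacity) return -1;
                GLintptr offset = used;
                glBindBuffer(GL_ARRAY_BUFFER, id);
                glBufferSubData(GL_ARRAY_BUFFER, offset, bytes, verts.data());
                used += bytes;
                return offset;
            }
        };

    Each SceneObject then only stores its byte offset and vertex count. Whether a fixed capacity is "risky" mostly comes down to whether the total vertex data can be bounded; if it can't, adding another pool is generally cheaper than repeatedly reallocating one buffer with glBufferData.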

    Read the article

  • Is there any way to programmatically flush the buffer in log4net

    - by Henrik Stenbæk
    Hi, I'm using log4net with AdoNetAppender. It seems that the AdoNetAppender has a Flush method. Is there any way I can call that from my code? I'm trying to create an admin page to view all the entries in the database log, and I would like to set up log4net with bufferSize=100 (or more); then I want the administrator to be able to click a button on the admin page to force log4net to write the buffered log entries to the database (without shutting down log4net). Is that possible?

    Read the article

  • Is there a buffer size attached to stdout?

    - by jameswelle
    I am trying to find some information on data limits related to stdout on Windows. I can't seem to find the information on MSDN. Is there a limit to how much data can be written to stdout? If so, what happens if the limit is reached? Is the data lost? If stdout is redirected (for example, by launching the process from .Net and using the ProcessStartInfo.RedirectStandardOutput property), does that have any effect on how much data can be written? As I read from the stdout stream in the calling process, does that affect the limitations? Are these limits related in any way to named pipes?

    Read the article

  • Getting the front buffer into a gfx mem surface (Dx9)

    - by lapin
    I'm using DirectX 9 to acquire the front buffer. There are a couple of ways I know of to get at the front buffer: GetRenderTargetData() and GetFrontBufferData(). The MSDN pages on both of these API calls state that the data is copied from device memory to system memory. I'd like to copy the front buffer surface directly to another graphics-memory surface, as I have other manipulations to perform on the acquired surface before returning it to system memory. I'm creating a D3DUSAGE_DYNAMIC texture (a gfx-mem texture) and calling GetFrontBufferData() to write the front buffer to my texture's surface 0. Is this valid? Will the operation remain in gfx memory, or will it need to move to system memory and then back to graphics memory? If that is the case, is what I'm trying to achieve possible?
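
    As far as the documented contract goes, GetFrontBufferData (like GetRenderTargetData) wants a system-memory destination surface in D3DFMT_A8R8G8B8, so the trip through system memory can't be avoided with that call. A sketch of the usual round trip (variable names are placeholders and error handling is omitted):

        // System-memory surface that GetFrontBufferData is allowed to fill.
        IDirect3DSurface9* sysmem = nullptr;
        device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                            D3DPOOL_SYSTEMMEM, &sysmem, nullptr);
        device->GetFrontBufferData(0, sysmem);                         // GPU -> system memory

        // Optional: push it back into a default-pool surface for further GPU-side work,
        // e.g. level 0 of a D3DPOOL_DEFAULT texture obtained via GetSurfaceLevel(0, ...).
        device->UpdateSurface(sysmem, nullptr, gpuSurface, nullptr);   // system memory -> GPU

    So the "stay on the GPU" path isn't available through GetFrontBufferData itself; if staying in video memory is a hard requirement, the realistic alternative is to render the scene to a texture render target and manipulate that copy instead of the front buffer.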

    Read the article

  • Write depth buffer to texture

    - by innochenti
    I need to read the depth buffer from the GPU and write it to a texture. How can this be done? Here is how the texture for the depth buffer is created:

        depthBufferDesc.Width = screenWidth;
        depthBufferDesc.Height = screenHeight;
        depthBufferDesc.MipLevels = 1;
        depthBufferDesc.ArraySize = 1;
        depthBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthBufferDesc.SampleDesc.Count = 1;
        depthBufferDesc.SampleDesc.Quality = 0;
        depthBufferDesc.Usage = D3D10_USAGE_DEFAULT;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL;
        depthBufferDesc.CPUAccessFlags = 0;
        depthBufferDesc.MiscFlags = 0;
        m_device->CreateTexture2D(&depthBufferDesc, NULL, m_depthStencilBuffer);

    Also, I've got another question: is it possible to bind the depth buffer texture as a sampler to the pixel shader?
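
    Both questions usually have the same answer: create the depth texture with a typeless format plus a shader-resource bind flag, then give the depth-stencil view and the shader-resource view their concrete formats. A sketch of the changes (the view variable names are made up, and the DSV has to be unbound before the SRV is sampled):

        depthBufferDesc.Format    = DXGI_FORMAT_R24G8_TYPELESS;                  // typeless storage
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;
        m_device->CreateTexture2D(&depthBufferDesc, NULL, &m_depthStencilBuffer);

        D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
        dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;                   // depth/stencil interpretation
        dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
        m_device->CreateDepthStencilView(m_depthStencilBuffer, &dsvDesc, &m_depthStencilView);

        D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
        srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;         // sample the 24 depth bits
        srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        m_device->CreateShaderResourceView(m_depthStencilBuffer, &srvDesc, &m_depthSRV);

    With the SRV bound to a sampler slot, the pixel shader reads normalized depth in the red channel.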

    Read the article

  • Drawing using Dynamic Array and Buffer Object

    - by user1905910
    I have a problem when creating the vertex array and the index array. I don't know what the problem with the code really is, but I guess it is something to do with the types of the arrays. Can someone please shed some light on this?

        #define GL_GLEXT_PROTOTYPES
        #include <GL/glut.h>
        #include <iostream>
        using namespace std;

        #define BUFFER_OFFSET(offset) ((GLfloat*) NULL + offset)

        const GLuint numDiv = 2;
        const GLuint numVerts = 9;
        GLuint VAO;

        void display(void)
        {
            enum vertex {VERTICES, INDICES, NUM_BUFFERS};
            GLuint * buffers = new GLuint[NUM_BUFFERS];
            GLfloat (*squareVerts)[2] = new GLfloat[numVerts][2];
            GLubyte * indices = new GLubyte[numDiv*numDiv*4];
            GLuint delta = 80/numDiv;

            for(GLuint i = 0; i < numVerts; i++) {
                squareVerts[i][1] = (i/(numDiv+1))*delta;
                squareVerts[i][0] = (i%(numDiv+1))*delta;
            }

            for(GLuint i=0; i < numDiv; i++){
                for(GLuint j=0; j < numDiv; j++){
                    // each iteration generates 4 points
                    #define NUM_VERT(ii,jj) ((ii)*(numDiv+1)+(jj))
                    #define INDICE(ii,jj) (4*((ii)*numDiv+(jj)))
                    indices[INDICE(i,j)] = NUM_VERT(i,j);
                    indices[INDICE(i,j)+1] = NUM_VERT(i,j+1);
                    indices[INDICE(i,j)+2] = NUM_VERT(i+1,j+1);
                    indices[INDICE(i,j)+3] = NUM_VERT(i+1,j);
                }
            }

            glGenVertexArrays(1, &VAO);
            glBindVertexArray(VAO);
            glGenBuffers(NUM_BUFFERS, buffers);

            glBindBuffer(GL_ARRAY_BUFFER, buffers[VERTICES]);
            glBufferData(GL_ARRAY_BUFFER, sizeof(squareVerts), squareVerts, GL_STATIC_DRAW);
            glVertexPointer(2, GL_FLOAT, 0, BUFFER_OFFSET(0));
            glEnableClientState(GL_VERTEX_ARRAY);

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[INDICES]);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1.0,1.0,1.0);
            glDrawElements(GL_POINTS, 16, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
            glutSwapBuffers();
        }

        void init()
        {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            gluOrtho2D((GLdouble) -1.0, (GLdouble) 90.0, (GLdouble) -1.0, (GLdouble) 90.0);
        }

        int main(int argv, char** argc)
        {
            glutInit(&argv, argc);
            glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
            glutInitWindowSize(500,500);
            glutInitWindowPosition(100,100);
            glutCreateWindow("myCode.cpp");
            init();
            glutDisplayFunc(display);
            glutMainLoop();
            return 0;
        }

    Edit: The problem here is that the drawing doesn't work at all. I don't get any error; it just doesn't display what I want to display. Even if I put the code that makes the vertices and loads them into the buffers in a different function, it doesn't work. I just tried to do this:

        void display(void)
        {
            glBindVertexArray(VAO);
            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1.0,1.0,1.0);
            glDrawElements(GL_POINTS, 16, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
            glutSwapBuffers();
        }

    and I placed the rest of the code in display in another function that is called at the start of the program. But the problem remains.
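
    One likely culprit in code like this (a guess, not a confirmed diagnosis): squareVerts and indices are heap allocations, so sizeof(squareVerts) and sizeof(indices) evaluate to the size of a pointer rather than the size of the arrays, and both glBufferData calls allocate only 4 or 8 bytes. Passing explicit byte counts sidesteps that:

        glBufferData(GL_ARRAY_BUFFER, numVerts * 2 * sizeof(GLfloat), squareVerts, GL_STATIC_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, numDiv * numDiv * 4 * sizeof(GLubyte), indices, GL_STATIC_DRAW);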

    Read the article

  • VIM Replace word with contents of paste buffer?

    - by plong
    I need to do a bunch of word replacements in a file and want to do it with a vi command, not an EX command such as :%s///g. I know that this is the typical way one replaces the word at the current cursor position: cw<text><esc> but is there a way to do this with the contents of the unnamed register as the replacement text and without overwriting the register?

    Read the article

  • @CodeStock 2012 Review: Rob Gillen ( @argodev ) - Anatomy of a Buffer Overflow Attack

    Anatomy of a Buffer Overflow Attack. Speaker: Rob Gillen. Twitter: @argodev. Blog: rob.gillenfamily.net. Honestly, this talk was over my head due to my lack of knowledge of low-level programming, and I think that most of the other attendees would agree. However, I did get the basic concepts that he was trying to get across. Fortunately, most high-level programming languages handle most of the low-level concerns regarding preventing buffer overflow attacks. What I got from this talk was to validate all input data from external sources.

    Read the article

  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: a widget can pass a rectangle to the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack and in the process decrements the values in the stencil buffer that it has previously incremented. The slightly simplified code is below:

        static void drawStencil(Rect& rect, unsigned int ref)
        {
            // Save previous values of the color and depth masks
            GLboolean colorMask[4];
            GLboolean depthMask;
            glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);
            glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask);

            // Turn off drawing
            glColorMask(0, 0, 0, 0);
            glDepthMask(0);

            // Draw vertices here
            ...

            // Turn everything back on
            glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]);
            glDepthMask(depthMask);

            // Only render pixels in areas where the stencil buffer value == ref
            glStencilFunc(GL_EQUAL, ref, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void pushScissor(Rect rect)
        {
            // increment things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
            s_scissorStack.push_back(rect);
            drawStencil(rect, states, s_ScissorStack.size());
        }

        void popScissor()
        {
            // undo what was done in the previous push,
            // decrement things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_DECR, GL_DECR);
            Rect rect = s_scissorStack.back();
            s_scissorStack.pop_back();
            drawStencil(rect, states, s_scissorStack.size());
        }

    And this is how it's being used by the widgets:

        if (m_clip) pushScissor(m_rect);
        drawInternal(target, states);
        for (auto child : m_children)
            target.draw(*child, states);
        if (m_clip) popScissor();

    This is the result of the above code: there are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is if the stencil values were not getting decremented in the pop, and when it comes time to render the button, since those pixels don't have the right stencil value, it doesn't draw over. But I can't figure out what's wrong with my code that would cause that to happen.
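
    For comparison, a minimal reference version of the stamping step used on both push and pop is sketched below (not a diagnosis of the code above, just the standard pattern with every relevant piece of state set explicitly; stampRect and drawRect are invented names). On push it would be called with op = GL_INCR and the stack depth before the push; on pop with op = GL_DECR and the depth of the level being removed.

        // Stamp a rectangle into the stencil buffer only: pixels currently at 'level'
        // are incremented (push) or decremented (pop); colour and depth stay untouched.
        static void stampRect(const Rect& rect, GLenum op, GLuint level)
        {
            glEnable(GL_STENCIL_TEST);
            glStencilMask(0xFF);                    // make sure stencil writes aren't masked off
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
            glDepthMask(GL_FALSE);

            glStencilFunc(GL_EQUAL, level, 0xFF);   // only touch pixels already at this depth
            glStencilOp(GL_KEEP, op, op);           // GL_INCR on push, GL_DECR on pop
            drawRect(rect);                         // plain quad covering the rectangle

            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
            glDepthMask(GL_TRUE);
        }

    Normal drawing after a push then uses glStencilFunc(GL_EQUAL, currentStackDepth, 0xFF) with glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP), and the matching pop restores the outer clip region. Comparing each state set here against the code above (in particular the stencil write mask and the exact moment the stack size is read) is usually how this kind of decrement bug shows up.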

    Read the article

  • sdl stencil buffer

    - by noddy
    I am trying to use the stencil buffer for rendering a reflection and am working with SDL and OpenGL. When I give the command SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8), I get a return value of 0 indicating success, but when I try to get the size allocated using SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &i), I get a value of 0 for my stencil buffer, due to which I am not getting the desired rendering. Can someone help me correct my mistake? Is there some other initialization also required? Thanks
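
    A common cause of exactly this symptom (offered as a guess, since the initialization code isn't shown) is ordering: SDL_GL_SetAttribute only affects GL contexts created after it is called, so it has to run before the window/context exists. A sketch for classic SDL 1.2 (SDL2 would use SDL_CreateWindow and SDL_GL_CreateContext instead):

        #include <SDL/SDL.h>

        int main(int, char**)
        {
            SDL_Init(SDL_INIT_VIDEO);

            // Request the stencil bits BEFORE the GL context is created.
            SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
            SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

            // SDL_SetVideoMode with SDL_OPENGL is what actually creates the context.
            SDL_Surface* screen = SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);

            int stencilBits = 0;
            SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &stencilBits);  // should now report 8 or more

            SDL_Quit();
            return (screen && stencilBits >= 8) ? 0 : 1;
        }

    If the order is already right, it may also be worth requesting SDL_GL_DEPTH_SIZE 24, since stencil bits are commonly packed together with a 24-bit depth buffer on many drivers.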

    Read the article

  • How can I set the buffer size for the underlying UDP socket? C#

    - by Jack
    Hi all. As we know, for UDP receives we use Socket.ReceiveFrom or UdpClient.Receive. Socket.ReceiveFrom accepts a byte array from you to put the UDP data in; UdpClient.Receive directly returns a byte array containing the data. My question is: how do I set the buffer size inside the Socket? I think the OS maintains its own buffer for received UDP data, right? For example, if a UDP packet is sent to my machine, the OS will put it in a buffer and wait for us to call Socket.ReceiveFrom or UdpClient.Receive, right? How can I change the size of that internal buffer? I have tried Socket.ReceiveBufferSize; it has no effect at all for UDP, and it is clearly stated that it is for the TCP window. Also, I have done a lot of experiments which prove that Socket.ReceiveBufferSize is NOT for UDP. Can anyone share some insight into the UDP internal buffer? Thanks

    Read the article

  • Memory is full with vertex buffer

    - by Christian Frantz
    I'm having a pretty strange problem that I didn't think I'd run into. I was finally able to store a 50x50 grid in one vertex buffer, in hopes of better performance. Before, I had each cube use an individual vertex buffer, and with 4 50x50 grids this slowed down my game tremendously, but it still ran. With 4 50x50 grids under my new code, that's only 4 vertex buffers, yet with those 4 vertex buffers I get a memory error, and when I load the game with 1 grid it takes forever to load, whereas my previous version started up right away. So I don't know if I'm storing chunks wrong or what, but it has me stumped.

        for (int x = 0; x < 50; x++)
        {
            for (int z = 0; z < 50; z++)
            {
                for (int y = 0; y <= map[x, z]; y++)
                {
                    SetUpVertices();
                    SetUpIndices();
                    cubes.Add(new Cube(device, new Vector3(x, map[x, z] - y, z), grass));
                }
            }
        }

        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionTexture), vertices.Count(), BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        indexBuffer = new IndexBuffer(device, typeof(short), indices.Count(), BufferUsage.WriteOnly);
        indexBuffer.SetData(indices.ToArray());

    That's how they're stored. The array I'm reading from is a byte array which defines the coordinates of my map. With my old version I used the same loading from an array, so that hasn't changed; the only difference is the one vertex buffer instead of 2500 for a 50x50 grid. cubes is just a normal list that holds all my cubes for the vertex buffer. Another thing that just came to mind would be my draw calls. If I'm setting an effect for each cube in my cube list, that's probably going to take a lot of memory. How can I avoid doing this? I need the foreach method to set my cubes to the right position.

        foreach (Cube block in cube.cubes)
        {
            effect.VertexColorEnabled = false;
            effect.TextureEnabled = true;
            Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
            Matrix scale = Matrix.CreateScale(1f);
            Matrix translate = Matrix.CreateTranslation(block.cubePosition);
            effect.World = center * scale * translate;
            effect.View = cam.view;
            effect.Projection = cam.proj;
            effect.FogEnabled = false;
            effect.FogColor = Color.CornflowerBlue.ToVector3();
            effect.FogStart = 1.0f;
            effect.FogEnd = 50.0f;
            cube.Draw(effect);
            noc++;
        }

    Read the article

  • How to Calculate TCP Socket Buffer Sizes for Data Guard Environments

    - by alejandro.vargas
    The MAA best practices contain an example of how to calculate the optimal TCP socket buffer sizes, which is quite important for very busy Data Guard environments. This document, Formula to Calculate TCP Socket Buffer Sizes.pdf, contains an example of using the instructions provided in the best practices document. In order to execute the calculation you need to know the bandwidth of your network interface, which will usually be 1Gb (in my example it is a 10Gb network), and the round-trip time (RTT), which is the time it takes for a packet to travel to the other end of the network and come back; in my example that was provided by the network administrator and was 3 ms (one millisecond being 1/1000 of a second).
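
    The core of the calculation is the bandwidth-delay product; the linked PDF then sizes the socket buffers as a multiple of it (a factor of 3 is the figure usually quoted in the MAA material, so treat the exact multiplier as coming from the document). With the numbers above:

        BDP = \text{bandwidth} \times RTT = 10\,\text{Gbit/s} \times 0.003\,\text{s} = 3 \times 10^{7}\,\text{bits} \approx 3.75\,\text{MB}

        \text{socket buffer size} \approx 3 \times BDP \approx 11.25\,\text{MB}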

    Read the article

  • What are the valid DepthBuffer Texture formats in DirectX 11? And which are also valid for a staging resource?

    - by sebf
    I am trying to read the contents of the depth buffer into main memory so that my CPU side code can do Some Stuff™ with it. I am attempting to do this by creating a staging resource which can be read by the CPU, which I will copy the contents of the depth buffer into before reading it. I keep encountering errors however, because of, I believe, incompatibilities between the resource format and the view formats. Threads like these lead me to believe it is possible in DX11 to access the depth buffer as a resource, and that I can create a resource with a typeless format and have it interpreted in the view as another, but I cannot get it to work. What are the valid formats for the resource to be used as the depth buffer? Which of these are also valid for a CPU accessible staging resource?
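
    For a typical 24/8 depth buffer, the format triplet that works is DXGI_FORMAT_R24G8_TYPELESS for the resource, DXGI_FORMAT_D24_UNORM_S8_UINT for the depth-stencil view, and DXGI_FORMAT_R24_UNORM_X8_TYPELESS for a shader-resource view (for 32-bit depth: R32_TYPELESS / D32_FLOAT / R32_FLOAT). The staging resource keeps the resource's own typeless format and needs no views at all. A sketch of the CPU read-back, assuming a non-multisampled depth texture and the obvious device/context variables:

        // Describe a CPU-readable twin of the depth resource.
        D3D11_TEXTURE2D_DESC desc;
        depthTexture->GetDesc(&desc);              // the R24G8_TYPELESS depth resource
        desc.Usage          = D3D11_USAGE_STAGING;
        desc.BindFlags      = 0;                   // staging resources can't be bound to the pipeline
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        desc.MiscFlags      = 0;

        ID3D11Texture2D* staging = nullptr;
        device->CreateTexture2D(&desc, nullptr, &staging);

        context->CopyResource(staging, depthTexture);     // GPU copy into the mappable resource

        D3D11_MAPPED_SUBRESOURCE mapped;
        context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
        // mapped.pData holds RowPitch-byte rows of 32-bit texels: 24 bits depth, 8 bits stencil.
        context->Unmap(staging, 0);
        staging->Release();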

    Read the article

  • Using multiple indexes with buffer objects in OpenTK

    - by Rushyo
    I've got multiple buffers in OpenGL holding data on positions, normals and texcoords. I also have an equal number of buffers holding distinct index data for each of those buffers. I quite like this format (individual indices for each buffer) utilised by COLLADA, since it strikes me as optimally efficient at accessing each buffer. I've set up pointers to the relevant data arrays using VertexPointer, NormalPointer, etc.; however, I have no way to assign pointers to the index buffers, since DrawElements appears to only look at one ElementArrayBuffer. Can I utilise multiple indices in some way, or will I be better off using a different technique which can support this? I'd prefer to keep the distinct indices if at all possible.
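
    glDrawElements only ever consumes a single index stream that indexes all enabled attribute arrays in lockstep, so the usual answer is to "weld" the COLLADA-style multi-index data into unique vertices once at load time and keep a single index list from then on. A C++-flavoured sketch of that algorithm (OpenTK would use the same logic; the interleaved Vertex layout is an assumption):

        #include <cstdint>
        #include <map>
        #include <tuple>
        #include <vector>

        struct Vertex { float px, py, pz, nx, ny, nz, u, v; };

        // Collapse per-attribute index lists into one vertex array + one index list.
        void weld(const std::vector<float>& pos, const std::vector<float>& nrm, const std::vector<float>& uv,
                  const std::vector<uint32_t>& posIdx, const std::vector<uint32_t>& nrmIdx,
                  const std::vector<uint32_t>& uvIdx,
                  std::vector<Vertex>& outVerts, std::vector<uint32_t>& outIdx)
        {
            std::map<std::tuple<uint32_t, uint32_t, uint32_t>, uint32_t> cache;
            for (size_t i = 0; i < posIdx.size(); ++i) {
                auto key = std::make_tuple(posIdx[i], nrmIdx[i], uvIdx[i]);
                auto it = cache.find(key);
                if (it == cache.end()) {                    // first time this combination appears
                    Vertex v = { pos[3*posIdx[i]], pos[3*posIdx[i]+1], pos[3*posIdx[i]+2],
                                 nrm[3*nrmIdx[i]], nrm[3*nrmIdx[i]+1], nrm[3*nrmIdx[i]+2],
                                 uv[2*uvIdx[i]],   uv[2*uvIdx[i]+1] };
                    it = cache.emplace(key, (uint32_t)outVerts.size()).first;
                    outVerts.push_back(v);
                }
                outIdx.push_back(it->second);               // reuse the welded vertex
            }
        }

    Index combinations that repeat deduplicate back to a single vertex, so the memory cost is usually modest compared with keeping separate index streams on the GPU, which core OpenGL simply doesn't support.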

    Read the article

  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows:

        struct GBuffer
        {
            float4 Depth    : DEPTH0;  // depth render target
            float4 Normal   : COLOR0;  // normal render target
            float4 Diffuse  : COLOR1;  // diffuse render target
            float4 Specular : COLOR2;  // specular render target
        };

    This works fine for flat surfaces, but I'm trying to implement relief mapping which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target - however, this has no impact on z culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?

    Read the article

  • OpenGL : Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer, in order to work on it (apply special filters) there. The result is then considered a "new texture", which is later displayed. This works fine, except when the texture contains some transparent/semi-transparent parts. My current guess is that, within the render buffer, the texture is "merged" with a kind of "grey background". In this case, it obviously impacts the R, G, B colour components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are "tainted" by the grey background.
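
    A frequent cause of this "grey background" effect (offered as a guess, since the setup code isn't shown): the off-screen target is cleared to an opaque colour, and the standard glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) applies the same factors to the alpha channel, which mangles the destination alpha. Clearing the target to transparent black and using separate RGB/alpha blend factors usually preserves a usable alpha channel:

        // While the FBO is bound as the render target:
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);          // transparent, not grey
        glClear(GL_COLOR_BUFFER_BIT);

        glEnable(GL_BLEND);
        // RGB blended as usual; destination alpha accumulated instead of replaced.
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                            GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);

    It is also worth confirming that the attachment backing the render buffer actually has an alpha channel (e.g. GL_RGBA8), since an RGB-only target silently discards alpha.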

    Read the article
