Search Results

Search found 3117 results on 125 pages for 'z buffer'.

Page 7/125

  • Windows XP SP3 TCP/IP No buffer space available

    - by Natalia
    I have the exact same problem as described here: Windows XP TCP/IP No buffer space available. On Windows XP Pro SP3, if one opens TCP/IP sockets in a loop (basically, listen on port 7000, listen on port 7001, etc.), then after approximately 649 open sockets one starts getting errors: No buffer space available (maximum connections reached?). I've tried to edit the registry as described at http://smallvoid.com/article/winnt-tcpip-max-limit.html: I set MaxUserPort = 65534 and MaxFreeTcbs = 2000, but it didn't help. What else can I do? I need 1000 server sockets. Here is the error stack: 05.04.2012 10:23:57 java.net.SocketException: No buffer space available (maximum connections reached?): listen at sun.nio.ch.ServerSocketChannelImpl.listen(Native Method) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:127) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52) at channelserver.NIOAppServer.initSelector(NIOAppServer.java:40) at channelserver.NIOAppServer.(NIOAppServer.java:27) at channelserver.NIOServer.main(NIOServer.java:433) at channelserver.NIOServer.main(NIOServer.java:438)

    Read the article

  • TCP Window Size vs Socket Buffer Size on Windows

    - by Patrick L
    I am new to Windows networking. When people talk about TCP tuning on the Windows platform, they always mention TCP Window Size. I am wondering whether Windows uses the concept of a "Socket Buffer Size". On Windows XP, the TCP window size is fixed; we can set it using the TCPWindowSize registry value. How about the socket buffer size? How can we set the socket buffer size on Windows? Can we set it to a value different from the TCP window size?
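
    Windows does expose per-socket buffer sizes separately from the TCP window: the SO_RCVBUF and SO_SNDBUF socket options, set with setsockopt. A minimal Winsock-style sketch (the 256 KB value and the already-created SOCKET s are only illustrative; how these buffers interact with the advertised TCP window is version-specific):

        #include <winsock2.h>

        // Enlarge one socket's receive and send buffers. These are per-socket
        // settings; the stack may round or clamp the requested values.
        void enlargeSocketBuffers(SOCKET s) {
            int size = 256 * 1024;
            setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                       reinterpret_cast<const char*>(&size), sizeof(size));
            setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       reinterpret_cast<const char*>(&size), sizeof(size));
        }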

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share to which several frontend servers are connected in order to make the files stored on it available for HTTP downloads. It looks like I have problems with the way Apache is serving the files; there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing with loading the whole requested file into memory at once and serving it to the client from memory. With this technique I need far fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could maybe use nginx for that, because the documentation says it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Does anyone have experience with large buffers for static file serving? Is there a better way to reduce disk seeks?
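
    A hedged starting point for the nginx side (the directive names are standard nginx configuration; the location path and sizes are only illustrative and need benchmarking against the NFS mount):

        location /downloads/ {
            sendfile        off;        # bypass sendfile so the output buffers below apply
            output_buffers  1 2m;       # read up to 2 MB from disk per buffer fill
            # directio 4m;              # alternative: O_DIRECT reads for files larger than 4 MB
        }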

    Read the article

  • Python - network buffer handling question...

    - by Patrick Moriarty
    Hi, I want to design a game server in python. The game will mostly just be passing small packets filled with ints, strings, and bytes stuffed into one message. As I'm using a different language to write the game, a normal packet would be sent like so: Writebyte(buffer, 5); // Delimit type of message Writestring(buffer, "Hello"); Sendmessage(buffer, socket); As you can see, it writes the bytes to the buffer, and sends the buffer. Is there any way to read something like this in python? I am aware of the struct module, and I've used it to pack things, but I've never used it to actually read something with mixed types stuck into one message. Thanks for the help.

    Read the article

  • .net real time stream processing - need a huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device, which is capable of sending 24 different voice streams at the same time. The device is connected via USB using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a 4.5 Mbit transfer). Basically the application consists of 3 threads: Main thread - GUI, control, etc. Bus reader - in this thread data is continuously read from the device and saved to a file buffer (there is no logic in this thread) Data interpreter - this thread reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files. The reason I used a file buffer is that I wanted to be sure I won't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow. For 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the use of the hard drive in this process will increase the speed a lot. The second problem is that the file buffer is really heavy for long records and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory in 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a circular buffer: one big array; the Bus Reader saves the data, the Data Interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.
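
    The single-producer/single-consumer circular buffer described above can be made lock-free, because only the writer advances the head and only the reader advances the tail. A minimal sketch of the idea, written in C++ for illustration rather than the asker's .NET code (capacity and names are placeholders):

        #include <atomic>
        #include <cstddef>
        #include <vector>

        // One writer thread (bus reader) and one reader thread (data interpreter).
        class RingBuffer {
        public:
            explicit RingBuffer(std::size_t capacity) : buf_(capacity), head_(0), tail_(0) {}

            // Returns false when there is not enough free space; the writer can then block or drop.
            bool push(const unsigned char* data, std::size_t n) {
                std::size_t head = head_.load(std::memory_order_relaxed);
                std::size_t tail = tail_.load(std::memory_order_acquire);
                if (buf_.size() - (head - tail) < n) return false;
                for (std::size_t i = 0; i < n; ++i)
                    buf_[(head + i) % buf_.size()] = data[i];
                head_.store(head + n, std::memory_order_release);    // publish the new data
                return true;
            }

            // Copies up to n bytes into out and returns how many bytes were read.
            std::size_t pop(unsigned char* out, std::size_t n) {
                std::size_t tail = tail_.load(std::memory_order_relaxed);
                std::size_t head = head_.load(std::memory_order_acquire);
                std::size_t avail = head - tail;
                if (n > avail) n = avail;
                for (std::size_t i = 0; i < n; ++i)
                    out[i] = buf_[(tail + i) % buf_.size()];
                tail_.store(tail + n, std::memory_order_release);    // free the space
                return n;
            }

        private:
            std::vector<unsigned char> buf_;
            std::atomic<std::size_t> head_, tail_;   // monotonically increasing byte counters
        };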

    Read the article

  • Winsock WSAAsyncSelect sending without an infinite buffer

    - by Xexr
    Hi, This is more of a design question than a specific code question; I'm sure I am missing the obvious, I just need another set of eyes. I am writing a multi-client server based on WSAAsyncSelect; each connection is made into an object of a connection class I have written, which contains associated settings, buffers, etc. My question concerns FD_WRITE. I understand how it operates: one FD_WRITE is sent immediately after a connection is established. Thereafter, you should send until WSAEWOULDBLOCK is received, at which point you store what is left to send in a buffer and wait to be told that it is OK to send again. This is where I have a problem: how large do I make this holding buffer within each connection's object? The amount of time until a new FD_WRITE is received is unknown, and I could be attempting to send a lot of stuff during this period, all the time adding to my outgoing buffer. If I make the buffer dynamic, memory usage could spiral out of control if, for whatever reason, I am unable to send() and reduce the buffer. So my question is: how do you generally handle this situation? Note I am not talking about the network buffer itself which Winsock uses, but one of my own creation used to "queue" up sends. Hope I explained that well enough, thanks all!
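
    One common way to handle this (a design sketch, not code from the post) is to cap each connection's outgoing queue at a high-water mark and treat a peer that pushes it past that mark as a slow or dead consumer, so memory stays bounded no matter how long the next FD_WRITE takes to arrive. The byte limit and class name below are illustrative:

        #include <cstddef>
        #include <string>

        // Per-connection application-level send queue with a hard cap.
        class SendQueue {
        public:
            explicit SendQueue(std::size_t maxBytes) : maxBytes_(maxBytes) {}

            // Returns false if queueing this data would exceed the cap; the caller can
            // then throttle its producer or drop the connection as a slow consumer.
            bool enqueue(const char* data, std::size_t n) {
                if (pending_.size() + n > maxBytes_) return false;
                pending_.append(data, n);
                return true;
            }

            bool empty() const { return pending_.empty(); }

            // On FD_WRITE: expose the queued bytes for send(), then drop however many
            // bytes send() reported as actually written.
            const char* data() const { return pending_.data(); }
            std::size_t size() const { return pending_.size(); }
            void consumed(std::size_t n) { pending_.erase(0, n); }

        private:
            std::string pending_;    // bytes still waiting for the next FD_WRITE
            std::size_t maxBytes_;   // high-water mark
        };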

    Read the article

  • How to change internal buffer size of DataInputStream

    - by Gaks
    I'm using this kind of code for my TCP/IP connection: sock = new Socket(host, port); sock.setKeepAlive(true); din = new DataInputStream(sock.getInputStream()); dout = new DataOutputStream(sock.getOutputStream()); Then, in a separate thread, I'm checking din.available() to see if there are some incoming packets to read. The problem is that if a packet bigger than 2048 bytes arrives, din.available() returns 2048 anyway, just as if there were a 2048-byte internal buffer. I can't read those 2048 bytes when I know it's not the full packet my application is waiting for. If I don't read it, however, it all gets stuck at 2048 bytes and I never receive more. Can I enlarge the buffer size of DataInputStream somehow? The socket receive buffer is 16384 as returned by sock.getReceiveBufferSize(), so it's not the socket limiting me to 2048 bytes. If there is no way to increase the DataInputStream buffer size, I guess the only way is to declare my own buffer and read everything from DataInputStream into that buffer? Regards

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are there in your cache? During SQL Server Trainings, I am usually asked if there is any way one can know how much data in a table is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and used in a query), is the data of the single table stored multiple times in the memory cache or only a single time? Here is a query you can run to figure out what kind of data is stored in the cache. USE AdventureWorks GO SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc FROM sys.dm_os_buffer_descriptors AS bd INNER JOIN ( SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc FROM ( SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id ,allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3) UNION ALL SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.partition_id AND au.type = 2 ) AS s_obj LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID ) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id WHERE database_id = DB_ID() GROUP BY name, index_id, IndexName, IndexTypeDesc ORDER BY cached_pages_count DESC; GO Now let us run the query above and observe its output. We can see in the above query that there are four columns. Cached_Pages_Count lists the pages cached in memory. BaseTableName lists the original base table from which data pages are cached. IndexName lists the name of the index from which pages are cached. IndexTypeDesc lists the type of index. Now, let us do one more experiment here. Please note that you should not run this test on a production server as it can severely reduce the performance of the database. DBCC DROPCLEANBUFFERS This will drop all the clean buffers and we will be able to start again from there. Now run the following script and check the execution plan for the same. USE AdventureWorks GO SELECT UnitPrice, ModifiedDate FROM Sales.SalesOrderDetail WHERE SalesOrderDetailID BETWEEN 1 AND 100 GO The execution plan shows the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It will give us the following output. It is clear from the result set that when more than one index is used, data pages related to both or all of the indexes are stored in the memory cache separately. Let me know what you think of this article. I had great pleasure while writing this article because I was able to write on this subject, which I like the most. In the next article, we will see exactly which data are cached and which are not, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV

    Read the article

  • how to double buffer in multiple classes with java

    - by kdavis8
    I am creating a Java 2D video game. I can load graphics just fine, but when it gets into double buffering I have issues. My source code package myPackage; import java.awt.Color; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Image; import java.awt.Toolkit; import java.awt.image.BufferStrategy; import java.awt.image.BufferedImage; import javax.swing.JFrame; public class GameView extends JFrame { private BufferedImage backbuffer; private Graphics2D g2d; public GameView() { setBounds(0, 0, 500, 500); setVisible(true); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); backbuffer = new BufferedImage(getHeight(), getWidth(), BufferedImage.TYPE_INT_BGR); g2d = backbuffer.createGraphics(); Toolkit tk = Toolkit.getDefaultToolkit(); Image img = tk.getImage(this.getClass().getResource("cage.png")); g2d.setColor(Color.red); //g2d.drawString("Hello",100,100); g2d.drawImage(img, 100, 100, this); repaint(); } public static void main(String args[]) { new GameView(); } public void paint(Graphics g) { g2d = (Graphics2D)g; g2d.drawImage(backbuffer, 0, 0, this); } }

    Read the article

  • D3D11 how to simulate multiple depth channels

    - by Nock
    Here's what I'd like to achieve: Rendering a first pass of objects in my scene, using standard depth comparison Rendering another pass of objects in the same scene, but with the following rules: A Pixel of the 2nd pass always override the first pass (no depth compare between them) Use Depth comparison between pixels written from the second pass. In English I want depth comparison made inside each pass but I always want the second pass pixels to override the first pass ones. Some things I've thought: I tried to think about using stencil to solve this, but I couldn't find a way. I know I could render into a separate target the second pass then composite the result into the first, but I'd like to avoid that. I could use two separate Depth Buffer, one dedicated to each pass. (I never tried, but I figure it's possible to switch the depth buffer in a Render Target "on the fly") Any idea of the best solution? Thanks
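
    One simple way to get exactly this behaviour (a sketch, not taken from the post) is to keep a single depth buffer but clear only its depth contents between the two passes: the second pass is then never occluded by the first, yet still depth-tests against its own pixels. Assuming an already-created context, render-target view and depth-stencil view, and hypothetical DrawFirstPass/DrawSecondPass helpers:

        // ctx, rtv and dsv are assumed to exist (ID3D11DeviceContext*,
        // ID3D11RenderTargetView*, ID3D11DepthStencilView*).
        ctx->OMSetRenderTargets(1, &rtv, dsv);
        DrawFirstPass(ctx);                                           // normal depth test within pass 1

        ctx->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);  // wipe depth, keep colour
        DrawSecondPass(ctx);                                          // pass 2 depth-tests only against itself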

    Read the article

  • Drawing multiple objects from one Vertex Buffer Object in OpenGL/OpenTK

    - by stoney78us
    I am experimenting with a drawing method using a VBO in OpenGL. People normally use one VBO to store one object's data array. I was trying to do quite the opposite, which is storing multiple objects' data in one VBO and then drawing it. There is a story behind why I want to do this: I sometimes want to group many objects into a single object. However, my code doesn't work. Following is my pseudo code: //Data double[] vertices = {line strip 1, line strip 2, line strip 3}; //series of vertices int linestrip1offset = index of the first vertex in line strip 1; int linestrip2offset = index of the first vertex in line strip 2; int linestrip3offset = index of the first vertex in line strip 3; int linestrip1VertexNum = number of vertices in linestrip 1; int linestrip2VertexNum = number of vertices in linestrip 2; int linestrip3VertexNum = number of vertices in linestrip 3; //Setting Up void init() { int[] vBO = new int[1]; GL.GenBuffer(1, vBO); GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]); GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(_vertices.Length * sizeof(double)), _vertices, BufferUsageHint.StaticDraw); GL.EnableClientState(Array.VertexArray); } //Drawing void draw() { GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]); GL.EnableClientState(ArrayCap.VertexArray); GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip1offset); //drawing first linestrip GL.DrawArrays(drawMode, linestrip1offset , linestrip1VertexNum ); GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip2offset); //drawing second linestrip GL.DrawArrays(drawMode, linestrip2offset , linestrip2VertexNum ); GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip3offset); //drawing third linestrip GL.DrawArrays(drawMode, linestrip3offset , linestrip3VertexNum ); GL.DisableClientState(ArrayCap.VertexArray); GL.BindBuffer(BufferTarget.ArrayBuffer, 0); } I don't know what I did wrong, but I think it should technically work, since we can tell OpenGL which part of the data in the vBO to draw.
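
    A likely culprit (a guess, since the post doesn't say what actually goes wrong) is that the strip offset is applied twice and in the wrong units: the last argument of VertexPointer is a byte offset into the VBO, while the second argument of DrawArrays is a first-vertex index. Setting the pointer once at offset zero and selecting each strip purely through DrawArrays is usually enough; sketched here in plain C-style OpenGL, with strip1First/strip1Count etc. as placeholder variables:

        // One VBO, several line strips: set the vertex pointer once for the whole
        // buffer, then pick each strip by its first-vertex index and vertex count.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_DOUBLE, 0, (void*)0);              // byte offset 0: whole buffer

        glDrawArrays(GL_LINE_STRIP, strip1First, strip1Count);   // offsets counted in vertices
        glDrawArrays(GL_LINE_STRIP, strip2First, strip2Count);
        glDrawArrays(GL_LINE_STRIP, strip3First, strip3Count);

        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);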

    Read the article

  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it or how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

    Read the article

  • Depth buffer values reset on shader change?

    - by bobobobo
    I have 2 different shaders, and when I change the shader (glUseProgram), it seems that the depth information is lost, because everything drawn with the 2nd shader appears completely on top of anything drawn by the first shader. If I switch the order of shader use/drawing, then it's the same (the last drawn object always appears on top of the first drawn object if there is a shader change between the 2 objects, even if the last drawn object is further away)

    Read the article

  • Dual Frame Buffer on Ubuntu 12.04 Intel HD Graphics 4600 i7-4770

    - by user3692512
    I have 2 monitors connected to the PC, one via HDMI, one via DVI, and Intel integrated HD 4600 graphics. As far as I understand, both monitors are currently attached to the same framebuffer, /dev/fb0. How can I detach them and create 2 framebuffers at startup, so that I can write directly to the second monitor through /dev/fb1 without disturbing /dev/fb0, where the X server can keep running normally?

    Read the article

  • Trending Buffer Pool Performance Using DMV sys.dm_os_performance_counters

    I'd seen you posted a tip on capturing SQL-based PerfMon counters using sys.dm_os_performance_counters. What queries can I run against those stored results that would allow me to examine memory usage on my SQL instance?

    Read the article

  • How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android, in Java. Afterwards I want to overlay this texture full-screen over my game. In theory, this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer: fb = new int[1]; depthRb = new int[1]; renderTex = new int[1]; gl11ep.glGenFramebuffersOES(1, fb, 0); gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer gl.glGenTextures(1, renderTex, 0);// generate texture gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); texBuffer = ByteBuffer.allocateDirect(buf.length*4).order(ByteOrder.nativeOrder()).asIntBuffer(); gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer); gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]); gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH); Before I draw, I do this: gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]); gl.glClearColor(0f, 0f, 0f, 0f); // specify texture as color attachment gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0); // attach render buffer as depth buffer gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]); I set texW = 1024 and texH = 512. I then render this texture fullscreen, with a lightmask (size 200x200) placed at (0, 0) and at (texW/2, texH/2). You can see that the coordinate system doesn't seem to start at (0,0), as that light overlaps the screen edge, and the images are not drawn as squares (my lightcone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
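
    A common cause of exactly these symptoms (a guess, since the post doesn't show this part) is that the viewport is never resized to match the offscreen target: it stays at the screen's dimensions, so drawing into the 1024x512 texture ends up scaled and shifted. In C-style GL ES 1.x calls, the usual fix is to reset the viewport whenever the framebuffer binding changes (screenWidth/screenHeight stand for the real surface size):

        // after binding the offscreen framebuffer
        glViewport(0, 0, texW, texH);                  // match the texture, not the screen
        // ... draw the offscreen pass ...

        // after binding framebuffer 0 (the screen) again
        glViewport(0, 0, screenWidth, screenHeight);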

    Read the article

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an i/o intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read in a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this? The only thing I could think of was rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ascii), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort. I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, but is generally much less. Is there a better way of doing this? As well, my current code doesn't even work. When I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code: FILE *fp; unsigned char *buff; unsigned char *realbuff; FILE *inputFiles[NUM_INPUT_FILES]; buff = (unsigned char *) malloc(2048); realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE); fp = fopen("postings0.txt", "r"); if(fp) { fread(buff, 1, 2048, fp); /*for(int i=0; i <30; i++) cout << buff[i] <<endl;*/ int y=0; int recordcounter = 0; //cout << buff; for(int i=0;i <100; i++) { if(buff[i] != char(10)) { realbuff[y] = buff[i]; y++; recordcounter++; } else { if(recordcounter < RECORD_SIZE) for(int j=recordcounter; j < RECORD_SIZE;j++) { realbuff[y] = char(0); y++; } recordcounter = 0; } } cout << realbuff <<endl; cout << buff; } else cout << "sorry"; Thank you very much, bsg

    Read the article

  • For buffer overflows, what is the stack address when using pthreads?

    - by t2k32316
    I'm taking a class in computer security and there is an extra credit assignment to insert executable code into a buffer overflow. I have the c source code for the target program I'm trying to manipulate, and I've gotten to the point where I can successfully overwrite the eip for the current function stack frame. However, I always get a Segmentation fault, because the address I supply is always wrong. The problem is that the current function is inside a pthread, and therefore, the address of the stack seems to always change between different runs of the program. Is there any method for finding the stack address within a pthread (or for estimating the stack address within a pthread)? (note: pthread_create's 2nd argument is null, so we're not manually assigning a stack address)
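
    Two glibc-specific options (a sketch, not part of the assignment code) are to pin the thread's stack to a known address with pthread_attr_setstack before calling pthread_create, or to ask the running thread where its stack actually is via pthread_getattr_np. The latter looks roughly like this (build with g++ -pthread; g++ defines the _GNU_SOURCE the extension needs):

        #include <pthread.h>
        #include <cstdio>

        // Print the current thread's stack region using the glibc extension pthread_getattr_np.
        static void* worker(void*) {
            pthread_attr_t attr;
            void* stackAddr = nullptr;
            size_t stackSize = 0;
            pthread_getattr_np(pthread_self(), &attr);
            pthread_attr_getstack(&attr, &stackAddr, &stackSize);
            std::printf("stack base %p, size %zu, top %p\n",
                        stackAddr, stackSize, static_cast<char*>(stackAddr) + stackSize);
            pthread_attr_destroy(&attr);
            return nullptr;
        }

        int main() {
            pthread_t t;
            pthread_create(&t, nullptr, worker, nullptr);   // default, unpinned stack
            pthread_join(t, nullptr);
            return 0;
        }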

    Read the article

  • Is there a default buffer length for 'sprintf' method?

    - by Isuru
    Hi, I used the sprintf method to format data into a string which I want to write to a file, in a C++ console application using VS 2008. The input is a particular message, which has various variables and values (e.g. type 'int' and value '10', type string and value "abc", etc.). When I send two messages it works perfectly, but when I send more than two messages it gives a runtime error saying 0xC0000005: Access violation reading location 0xabababab. Why is this happening? Is it because the 'sprintf' method has a default buffer length? How can I overcome this problem?
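
    sprintf itself has no notion of a buffer length: it writes as many characters as the format produces, and the caller's buffer simply has to be big enough (0xabababab is a fill pattern used by the Windows debug heap, which hints at reading past an allocation). A small sketch of the bounded snprintf variant, with illustrative values; on older MSVC compilers such as VS 2008 the equivalent calls are _snprintf_s or sprintf_s:

        #include <cstdio>

        int main() {
            char buf[64];                                      // caller-owned, fixed size
            int needed = std::snprintf(buf, sizeof(buf), "type=%s value=%d", "int", 10);
            if (needed < 0 || needed >= static_cast<int>(sizeof(buf))) {
                std::puts("output truncated: the buffer needs to be larger");
            } else {
                std::puts(buf);                                // formatted safely, no overrun
            }
            return 0;
        }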

    Read the article

  • Shellcode for a simple stack overflow: Exploited program with shell terminates directly after execve

    - by henning
    Hi, I played around with buffer overflows on Linux (amd64) and tried exploiting a simple program, but it failed. I disabled the security features (address space layout randomization with sysctl -w kernel.randomize_va_space=0 and nx bit in the bios). It jumps to the stack and executes the shellcode, but it doesn't start a shell. The execve syscall succeeds but afterwards it just terminates. Any idea what's wrong? Running the shellcode standalone works just fine. Bonus question: Why do I need to set rax to zero before calling printf? (See comment in the code) Vulnerable file buffer.s: .data .fmtsp: .string "Stackpointer %p\n" .fmtjump: .string "Jump to %p\n" .text .global main main: push %rbp mov %rsp, %rbp sub $120, %rsp # calling printf without setting rax # to zero results in a segfault. why? xor %rax, %rax mov %rsp, %rsi mov $.fmtsp, %rdi call printf mov %rsp, %rdi call gets xor %rax, %rax mov $.fmtjump, %rdi mov 8(%rbp), %rsi call printf xor %rax, %rax leave ret shellcode.s .text .global main main: mov $0x68732f6e69622fff, %rbx shr $0x8, %rbx push %rbx mov %rsp, %rdi xor %rsi, %rsi xor %rdx, %rdx xor %rax, %rax add $0x3b, %rax syscall exploit.py shellcode = "\x48\xbb\xff\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xeb\x08\x53\x48\x89\xe7\x48\x31\xf6\x48\x31\xd2\x48\x31\xc0\x48\x83\xc0\x3b\x0f\x05" stackpointer = "\x7f\xff\xff\xff\xe3\x28" output = shellcode output += 'a' * (120 - len(shellcode)) # fill buffer output += 'b' * 8 # override stored base pointer output += ''.join(reversed(stackpointer)) print output Compiled with: $ gcc -o buffer buffer.s $ gcc -o shellcode shellcode.s Started with: $ python exploit.py | ./buffer Stackpointer 0x7fffffffe328 Jump to 0x7fffffffe328 Debugging with gdb: $ python exploit.py > exploit.txt (Note: corrected stackpointer address in exploit.py for gdb) $ gdb buffer (gdb) run < exploit.txt Starting program: /home/henning/bo/buffer < exploit.txt Stackpointer 0x7fffffffe308 Jump to 0x7fffffffe308 process 4185 is executing new program: /bin/dash Program exited normally.

    Read the article

  • Buffer Overflow errors when reading ConfigDelay and Manufacturer info from registry

    - by peter
    Hi All, This is a strange driver error which doesn't make a lot of sense to me. I am running a C# .NET application which our company develops. I was monitoring the application using Process Monitor and noticed that it accesses the registry a lot. The output in Process Monitor looks like this: Operation Result Path RegQueryValue Success HKLM\System\CurrentControlSet\Enum\SWMUXBUS\SW_MODEM\7&6c4af30&0&5&0004\Driver RegQueryValue Success HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Properties RegQueryValue Success HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Default RegQueryValue Success HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\InactivityScale RegQueryValue Name Not Found HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\PowerDelay RegQueryValue Name Not Found HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\ConfigDelay RegQueryValue Buffer Overflow HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Manufacturer RegQueryValue Buffer Overflow HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Model RegQueryValue Name Not Found HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Version The app is reading this stuff from the registry every 5 seconds, so I have a few questions: 1) What is this stuff? 2) Why is the app reading it? 3) Why does it say 'Buffer Overflow'? 4) Could this cause performance problems for my app? From what I can see the app does not explicitly read this stuff, so I think this relates to a driver on the machine (which is a netbook)
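
    On question 3: Process Monitor typically reports 'Buffer Overflow' when a registry read is made with a buffer smaller than the stored value (RegQueryValueEx returning ERROR_MORE_DATA); it is a normal, recoverable result of the ask-again-with-a-bigger-buffer pattern rather than a fault. A hedged Win32 sketch of that pattern (the helper name and initial size are made up):

        #include <windows.h>
        #include <string>
        #include <vector>

        // Read a REG_SZ value, growing the buffer if the first attempt is too small.
        // The ERROR_MORE_DATA case is what Process Monitor tends to log as "Buffer Overflow".
        bool ReadRegistryString(HKEY key, const wchar_t* valueName, std::wstring& out) {
            std::vector<wchar_t> buffer(64);                    // deliberately small first guess
            DWORD size = static_cast<DWORD>(buffer.size() * sizeof(wchar_t));
            LSTATUS rc = RegQueryValueExW(key, valueName, nullptr, nullptr,
                                          reinterpret_cast<LPBYTE>(buffer.data()), &size);
            if (rc == ERROR_MORE_DATA) {                        // buffer too small: size now holds the needed bytes
                buffer.resize(size / sizeof(wchar_t) + 1, L'\0');
                rc = RegQueryValueExW(key, valueName, nullptr, nullptr,
                                      reinterpret_cast<LPBYTE>(buffer.data()), &size);
            }
            if (rc != ERROR_SUCCESS) return false;
            out.assign(buffer.data(), size / sizeof(wchar_t));
            while (!out.empty() && out.back() == L'\0') out.pop_back();   // strip the stored terminator
            return true;
        }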

    Read the article
