Search Results

Search found 3140 results on 126 pages for 'stencil buffer'.

  • Limiting TCP sends with a "to-be-sent" queue and other design issues.

    - by Poni
    Hello all! This question is the result of two other questions I've asked in the last few days. I'm creating a new question because I think it's related to the "next step" in my understanding of how to control the flow of my send/receive, something I didn't get a full answer to yet. The other related questions are: http://stackoverflow.com/questions/3028376/an-iocp-documentation-interpretation-question-buffer-ownership-ambiguity http://stackoverflow.com/questions/3028998/non-blocking-tcp-buffer-issues In summary, I'm using Windows I/O Completion Ports. I have several threads that process notifications from the completion port. I believe the question is platform-independent and would have the same answer as if doing the same thing on a *nix, *BSD, or Solaris system. So, I need to have my own flow control system. Fine. So I send and send and send, a lot. How do I know when to start queueing the sends, as the receiver side is limited to X amount? Let's take an example (closest thing to my question): the FTP protocol. I have two servers; one is on a 100Mb link and the other is on a 10Mb link. I order the 100Mb one to send a 1GB file to the other one (the 10Mb-linked one). It finishes with an average transfer rate of 1.25MB/s. How did the sender (the 100Mb-linked one) know when to hold off sending, so the slower one wouldn't be flooded? Another way to ask this: can I get a "hold-your-sendings" notification from the remote side? Is it built into TCP, or does the so-called "reliable network protocol" need me to do that myself? Again, I have a loop with many sends to a remote server, and at some point within that loop I'll have to determine whether I should queue that send or whether I can pass it on to the transport layer (TCP). How do I do that? What would you do? Of course, when I get a completion notification from IOCP that the send was done, I'll issue other pending sends; that's clear. Another design question related to this: since I'm using custom buffers with a send queue, and these buffers are freed for reuse (rather than released with the "delete" keyword) when a "send-done" notification arrives, I'll have to use mutual exclusion on that buffer pool. Using a mutex slows things down, so I've been thinking: why not have each thread keep its own buffer pool? Then getting the buffers required for a send operation would need no mutex, because the pool belongs to that thread only. The buffer pool would live in thread-local storage (TLS). No shared pool implies no lock needed, which implies faster operations, BUT it also implies more memory used by the app, because even if one thread has already allocated 1000 buffers, another thread that is sending right now and needs 1000 buffers will have to allocate its own. This is a long question and I hope no one got hurt (: Thank you all!
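
    For illustration, here is a minimal sketch (in C++) of the kind of application-level send queue this describes: the producer never posts more than a fixed number of outstanding sends, and the IOCP completion handler drains the queue. The class and the sendAsync() stub are made-up names standing in for posting an overlapped WSASend; this is a shape, not a definitive implementation.

        #include <cstddef>
        #include <deque>
        #include <mutex>
        #include <utility>
        #include <vector>

        // Sketch only: sendAsync() stands in for posting an overlapped WSASend;
        // onSendCompleted() would be called from an IOCP worker thread.
        class SendQueue {
        public:
            explicit SendQueue(std::size_t maxOutstanding) : maxOutstanding_(maxOutstanding) {}

            // Called by the producer loop instead of sending directly.
            void enqueue(std::vector<char> data) {
                std::lock_guard<std::mutex> lock(mutex_);
                pending_.push_back(std::move(data));
                pump();
            }

            // Called when IOCP reports that one send has completed.
            void onSendCompleted() {
                std::lock_guard<std::mutex> lock(mutex_);
                --outstanding_;
                pump();
            }

        private:
            void pump() {  // assumes mutex_ is held
                while (outstanding_ < maxOutstanding_ && !pending_.empty()) {
                    std::vector<char> next = std::move(pending_.front());
                    pending_.pop_front();
                    ++outstanding_;
                    sendAsync(next);  // hypothetical: post the overlapped send here
                }
            }

            void sendAsync(const std::vector<char>&) { /* WSASend would go here */ }

            std::mutex mutex_;
            std::deque<std::vector<char>> pending_;
            std::size_t outstanding_ = 0;
            std::size_t maxOutstanding_;
        };

    The cap on outstanding sends is what provides the back-pressure: TCP's own flow control slows completions down when the receiver's window fills, and the queue simply absorbs the difference instead of handing the socket an unbounded amount of buffered data.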

    Read the article

  • How to set ReceiveBufferSize for UDPClient? or Does it make sense to set? C#

    - by Jack
    Hello all. I am implementing a UDP data transfer tool and have several questions about UDP buffers. I am using UdpClient to do the UDP send/receive, and my broadband bandwidth is 150KB/s (bytes/s, not bps). I send a 500B datagram out to 27 hosts, and each of the 27 hosts sends back a 10KB datagram if it receives mine. So I should receive 27 responses, right? However, I only get 8-12 on average. I then tried reducing the size of the response down to 500B, and yes, I receive them all. My thought is that if all 27 hosts send back a 10KB response at almost the same time, the incoming traffic will be around 270KB/s, which exceeds my incoming bandwidth, so loss happens. Am I right? But even if the incoming traffic exceeds the bandwidth, isn't Windows supposed to put the datagrams in the buffer and hold them until I receive? I then suspect that maybe the ReceiveBufferSize of my UdpClient is too small? By default, it is 8092B?? I don't know whether I am right on these points. Please give me some help.
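
    A hedged sketch of the receive-buffer side, shown with POSIX sockets for illustration (the Winsock call is analogous, and the ReceiveBufferSize property mentioned in the title maps to this same socket option on the underlying socket): SO_RCVBUF is the kernel-level buffer that absorbs a burst of datagrams until the application reads them. The function name and buffer size are examples only.

        #include <cstdio>
        #include <sys/socket.h>
        #include <sys/types.h>

        // Sketch (POSIX sockets; the Winsock call is analogous): enlarge the
        // kernel receive buffer so a short burst of replies is less likely to
        // be dropped before the application reads them. 'sock' is assumed to
        // be an already-created UDP socket; 27 hosts * 10KB suggests something
        // well above the default is needed.
        bool enlargeReceiveBuffer(int sock, int bytes) {
            if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0) {
                std::perror("setsockopt(SO_RCVBUF)");
                return false;
            }
            return true;
        }

    Even with a bigger buffer, datagrams that arrive while the buffer is full are silently dropped; UDP has no flow control, so a larger buffer only widens the burst it can absorb.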

    Read the article

  • How can I eliminate latency in quicktime streamed video

    - by JJFeiler
    I'm prototyping a client that displays streaming video from a HaiVision Barracuda through a quicktime client. I've been unable to reduce the buffer size below 3.0 seconds... for this application, we need as low a latency as the network allows, and prefer video dropouts to delay. I'm doing the following: - (void)applicationDidFinishLaunching:(NSNotification *)aNotification { NSString *path = [[NSBundle mainBundle] pathForResource:@"haivision" ofType:@"sdp"]; NSError *error = nil; QTMovie *qtmovie = [QTMovie movieWithFile:path error:&error]; if( error != nil ) { NSLog(@"error: %@", [error localizedDescription]); } Movie movie = [qtmovie quickTimeMovie]; long trackCount = GetMovieTrackCount(movie); Track theTrack = GetMovieTrack(movie,1); Media theMedia = GetTrackMedia(theTrack); MediaHandler theMediaHandler = GetMediaHandler(theMedia); QTSMediaPresentationParams myPres; ComponentResult c = QTSMediaGetIndStreamInfo(theMediaHandler, 1,kQTSMediaPresentationInfo, &myPres); Fixed shortdelay = 1<<15; OSErr theErr = QTSPresSetInfo (myPres.presentationID, kQTSAllStreams, kQTSTargetBufferDurationInfo, &shortdelay ); NSLog(@"OSErr %d", theErr); [movieView setMovie:qtmovie]; [movieView play:self]; } I seem to be getting valid objects/structures all the way down to the QTSPres, though the ComponentResult and OSErr are both returning -50. The streaming video plays fine, but the buffer is still 3.0seconds. Any help/insight appreciated. J

    Read the article

  • Is there a less painful way to GetBytes for a buffer not starting at 0?

    - by Earlz
    I am having to deal with raw bytes in a project and I need to basically do something like this: byte[] ToBytes(){ byte[] buffer=new byte[somelength]; byte[] tmp=new byte[2]; tmp=BitConverter.GetBytes(SomeShort); buffer[0]=tmp[0]; buffer[1]=tmp[1]; tmp=BitConverter.GetBytes(SomeOtherShort); buffer[2]=tmp[0]; buffer[3]=tmp[1]; } I feel like this is so wrong, yet I can't find any better way of doing it. Is there an easier way?
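
    The question is C#-specific, but the underlying pattern is an offset-advancing writer, so the indices stop being hand-maintained. A rough sketch of that idea in C++ for illustration only (ByteWriter and the field values are made up, and there is no bounds checking):

        #include <cstddef>
        #include <cstdint>
        #include <cstring>
        #include <vector>

        // Sketch: a tiny offset-advancing writer, the same idea the question
        // reaches for with BitConverter plus manual index bookkeeping. put()
        // copies any trivially copyable value and advances the cursor.
        class ByteWriter {
        public:
            explicit ByteWriter(std::size_t size) : buffer_(size) {}

            template <typename T>
            void put(const T& value) {
                std::memcpy(buffer_.data() + offset_, &value, sizeof(T));
                offset_ += sizeof(T);
            }

            const std::vector<unsigned char>& bytes() const { return buffer_; }

        private:
            std::vector<unsigned char> buffer_;
            std::size_t offset_ = 0;
        };

        // Usage: two 16-bit fields land at offsets 0 and 2, like the
        // hand-written version in the question (values are just examples).
        int main() {
            ByteWriter w(4);
            w.put<std::uint16_t>(0x1234);
            w.put<std::uint16_t>(0x5678);
            return 0;
        }

    Whatever the language, any helper that remembers "where the next field goes" removes the buffer[0]/buffer[1] bookkeeping.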

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
    The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. Just in case you are not familiar with some of the internals of SQL Server and how the engine works, SQL Server only works with objects that are in memory (the buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache them. Caching takes place so that the pages can be reused, which prevents the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set. SELECT * FROM sys.dm_os_buffer_descriptors WHERE database_id = db_id('AdventureWorks2012') The first column returned from this result set is the database_id column, which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID for the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column, which identifies a unique allocation unit. An allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. From the result set you can see I have 3 distinct types of pages in my buffer pool: Index, Data, and IAM pages. Index pages are pages that are used to build the root and intermediate levels of a B-Tree. A Data page represents the actual leaf pages of a clustered index, which contain the actual data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks GAM (Global Allocation Map) pages, which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8K in size. The is_modified column signifies whether or not a page has been changed since it was read into memory, i.e. a dirty page. The numa_node column represents the Non-uniform Memory Access node for the buffer. Lastly, the read_microsec column tells you how many microseconds it took for a data page to be read (copied) into the buffer pool. This is a great DMV for when you are tracking down a memory issue or if you just want to have a look at what type of pages are currently in your buffer pool. For more information about this DMV, please see the Books Online link: http://msdn.microsoft.com/en-us/library/ms173442.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • How do I draw a scene with 2 nested frames

    - by Guido Granobles
    I have been trying for long time to figure out this: I have loaded a model from a directx file (I am using opengl and Java) the model have a hierarchical system of nested reference frames (there are not bones). There are just 2 frames, one of them is called x3ds_Torso and it has a child frame called x3ds_Arm_01. Each one of them has a mesh. The thing is that I can't draw the arm connected to the body. Sometimes the body is in the center of the screen and the arm is at the top. Sometimes they are both in the center. I know that I have to multiply the matrix transformation of every frame by its parent frame starting from the top to the bottom and after that I have to multiply every vertex of every mesh by its final transformation matrix. So I have this: public void calculeFinalMatrixPosition(Bone boneParent, Bone bone) { System.out.println("-->" + bone.name); if (boneParent != null) { bone.matrixCombined = bone.matrixTransform.multiply(boneParent.matrixCombined); } else { bone.matrixCombined = bone.matrixTransform; } bone.matrixFinal = bone.matrixCombined; for (Bone childBone : bone.boneChilds) { calculeFinalMatrixPosition(bone, childBone); } } Then I have to multiply every vertex of the mesh: public void transformVertex(Bone bone) { for (Iterator<Mesh> iterator = meshes.iterator(); iterator.hasNext();) { Mesh mesh = iterator.next(); if (mesh.boneName.equals(bone.name)) { float[] vertex = new float[4]; double[] newVertex = new double[3]; if (mesh.skinnedVertexBuffer == null) { mesh.skinnedVertexBuffer = new FloatDataBuffer( mesh.numVertices, 3); } mesh.vertexBuffer.buffer.rewind(); while (mesh.vertexBuffer.buffer.hasRemaining()) { vertex[0] = mesh.vertexBuffer.buffer.get(); vertex[1] = mesh.vertexBuffer.buffer.get(); vertex[2] = mesh.vertexBuffer.buffer.get(); vertex[3] = 1; newVertex = bone.matrixFinal.transpose().multiply(vertex); mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[0])); mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[1])); mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[2])); } mesh.vertexBuffer = new FloatDataBuffer( mesh.numVertices, 3); mesh.skinnedVertexBuffer.buffer.rewind(); mesh.vertexBuffer.buffer.put(mesh.skinnedVertexBuffer.buffer); } } for (Bone childBone : bone.boneChilds) { transformVertex(childBone); } } I know this is not the more efficient code but by now I just want to understand exactly how a hierarchical model is organized and how I can draw it on the screen. Thanks in advance for your help.

    Read the article

  • How do I make C++/wxWidgets code accessible to a wxPython app?

    - by Jon Cage
    I have a code library which is written in C++ and makes extensive use of the wxWidgets library. I'm now trying to wrap my library (currently using SWIG) so that it's callable from wxPython, but I've hit a wall: ------ Build started: Project: MyLibLib, Configuration: Release_SWIG_OutputForBin Win32 ------ Performing Custom Build Step In order to function correctly, please ensure the following environment variables are correctly set: PYTHON_INCLUDE: C:\Python26\include PYTHON_LIB: C:\Python26\libs\python26.lib d:\MyProject\Software\MyLib\trunk\MyLib>C:\swigwin-2.0.0\swig.exe -python d:\MyProject\Software\MyLib\trunk\MyLibLib\MyLib.i d:\MyProject\Software\MyLib\trunk\MyLib>if errorlevel 1 goto VCReportError d:\MyProject\Software\MyLib\trunk\MyLib>goto VCEnd Compiling... MyLib_wrap.c C:\wxWidgets-2.8.10\include\wx/wxchar.h(886) : warning C4273: '_snprintf' : inconsistent dll linkage c:\Program Files\Microsoft Visual Studio 9.0\VC\include\stdio.h(358) : see previous definition of '_snprintf' C:\wxWidgets-2.8.10\include\wx/buffer.h(127) : error C2061: syntax error : identifier 'wxCharBuffer' C:\wxWidgets-2.8.10\include\wx/buffer.h(127) : error C2059: syntax error : ';' C:\wxWidgets-2.8.10\include\wx/buffer.h(127) : error C2449: found '{' at file scope (missing function header?) C:\wxWidgets-2.8.10\include\wx/buffer.h(127) : error C2059: syntax error : '}' C:\wxWidgets-2.8.10\include\wx/buffer.h(127) : error C2059: syntax error : ')' C:\wxWidgets-2.8.10\include\wx/buffer.h(129) : error C2061: syntax error : identifier 'wxWritableCharBuffer' C:\wxWidgets-2.8.10\include\wx/buffer.h(129) : error C2059: syntax error : ';' C:\wxWidgets-2.8.10\include\wx/buffer.h(129) : error C2059: syntax error : ':' C:\wxWidgets-2.8.10\include\wx/buffer.h(134) : error C2061: syntax error : identifier 'wxWCharBuffer' C:\wxWidgets-2.8.10\include\wx/buffer.h(134) : error C2059: syntax error : ';' C:\wxWidgets-2.8.10\include\wx/buffer.h(134) : error C2449: found '{' at file scope (missing function header?) C:\wxWidgets-2.8.10\include\wx/buffer.h(134) : error C2059: syntax error : '}' C:\wxWidgets-2.8.10\include\wx/buffer.h(134) : fatal error C1004: unexpected end-of-file found Build log was saved at "file://d:\MyProject\Software\MyLib\trunk\MyLib\Release_SWIG_OutputForBin\BuildLog.htm" MyLibLib - 13 error(s), 1 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== Is there a particular way I should be going about this? I spent some time googling for similar errors, but got none which suggests I'm probably barking up the wrong tree here...? [Edit] Is a dll and ctypes the answer?

    Read the article

  • OutOfMemoryException when I read FileStream 500 MB

    - by Alhambra Eidos
    Hi all, I'm using FileStream to read a big file (500 MB) and I get an OutOfMemoryException. Any solutions for this? My code is: using (var fs3 = new FileStream(filePath2, FileMode.Open, FileAccess.Read)) { byte[] b2 = ReadFully(fs3, 1024); } public static byte[] ReadFully(Stream stream, int initialLength) { // If we've been passed an unhelpful initial length, just // use 32K. if (initialLength < 1) { initialLength = 32768; } byte[] buffer = new byte[initialLength]; int read = 0; int chunk; while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0) { read += chunk; // If we've reached the end of our buffer, check to see if there's // any more information if (read == buffer.Length) { int nextByte = stream.ReadByte(); // End of stream? If so, we're done if (nextByte == -1) { return buffer; } // Nope. Resize the buffer, put in the byte we've just // read, and continue byte[] newBuffer = new byte[buffer.Length * 2]; Array.Copy(buffer, newBuffer, buffer.Length); newBuffer[read] = (byte)nextByte; buffer = newBuffer; read++; } } // Buffer is now too big. Shrink it. byte[] ret = new byte[read]; Array.Copy(buffer, ret, read); return ret; } Thanks in advance,
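
    The posted code is C#; purely for comparison, here is the chunked-processing shape in C++ that avoids ever holding the whole 500 MB in one array. The file name, chunk size and the commented-out process() call are placeholders.

        #include <fstream>
        #include <iostream>
        #include <vector>

        // Sketch: instead of growing one giant in-memory array, read the file
        // in fixed-size chunks and hand each chunk to some processing step.
        int main() {
            std::ifstream in("bigfile.bin", std::ios::binary);
            if (!in) {
                std::cerr << "cannot open file\n";
                return 1;
            }
            std::vector<char> chunk(64 * 1024);  // 64 KB working buffer, reused
            while (in.read(chunk.data(), chunk.size()) || in.gcount() > 0) {
                std::streamsize got = in.gcount();
                // process(chunk.data(), got);  // hypothetical consumer of this chunk
                (void)got;
            }
            return 0;
        }

    Whether this applies depends on what the bytes are needed for: if the whole file genuinely must sit in memory at once, the question becomes one of address space (or memory-mapping) rather than read strategy.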

    Read the article

  • Printf in assembler doesn't print

    - by Gaim
    Hi there, I have got a homework to hack program using buffer overflow ( with disassambling, program was written in C++, I haven't got the source code ). I have already managed it but I have a problem. I have to print some message on the screen, so I found out address of printf function, pushed address of "HACKED" and address of "%s" on the stack ( in this order ) and called that function. Called code passed well but nothing had been printed. I have tried to simulate the environment like in other place in the program but there has to be something wrong. Do you have any idea what I am doing wrong that I have no output, please? Thanks a lot EDIT: This program is running on Windows XP SP3 32b, written in C++, Intel asm there is the "hack" code CPU Disasm Address Hex dump Command Comments 0012F9A3 90 NOP ;hack begins 0012F9A4 90 NOP 0012F9A5 90 NOP 0012F9A6 89E5 MOV EBP,ESP 0012F9A8 83EC 7F SUB ESP,7F ;creating a place for working data 0012F9AB 83EC 7F SUB ESP,7F 0012F9AE 31C0 XOR EAX,EAX 0012F9B0 50 PUSH EAX 0012F9B1 50 PUSH EAX 0012F9B2 50 PUSH EAX 0012F9B3 89E8 MOV EAX,EBP 0012F9B5 83E8 09 SUB EAX,9 0012F9B8 BA 1406EDFF MOV EDX,FFED0614 ;address to jump, it is negative because there mustn't be 00 bytes 0012F9BD F7DA NOT EDX 0012F9BF FFE2 JMP EDX ;I have to jump because there are some values overwritten by the program 0012F9C1 90 NOP 0012F9C2 0090 00000000 ADD BYTE PTR DS:[EAX],DL 0012F9C8 90 NOP 0012F9C9 90 NOP 0012F9CA 90 NOP 0012F9CB 90 NOP 0012F9CC 6C INS BYTE PTR ES:[EDI],DX ; I/O command 0012F9CD 65:6E OUTS DX,BYTE PTR GS:[ESI] ; I/O command 0012F9CF 67:74 68 JE SHORT 0012FA3A ; Superfluous address size prefix 0012F9D2 2069 73 AND BYTE PTR DS:[ECX+73],CH 0012F9D5 203439 AND BYTE PTR DS:[EDI+ECX],DH 0012F9D8 34 2C XOR AL,2C 0012F9DA 2066 69 AND BYTE PTR DS:[ESI+69],AH 0012F9DD 72 73 JB SHORT 0012FA52 0012F9DF 74 20 JE SHORT 0012FA01 0012F9E1 3120 XOR DWORD PTR DS:[EAX],ESP 0012F9E3 6C INS BYTE PTR ES:[EDI],DX ; I/O command 0012F9E4 696E 65 7300909 IMUL EBP,DWORD PTR DS:[ESI+65],-6F6FFF8D 0012F9EB 90 NOP 0012F9EC 90 NOP 0012F9ED 90 NOP 0012F9EE 31DB XOR EBX,EBX ; hack continues 0012F9F0 8818 MOV BYTE PTR DS:[EAX],BL ; writing 00 behind word "HACKED" 0012F9F2 83E8 06 SUB EAX,6 0012F9F5 50 PUSH EAX ; address of "HACKED" 0012F9F6 B8 3B8CBEFF MOV EAX,FFBE8C3B 0012F9FB F7D0 NOT EAX 0012F9FD 50 PUSH EAX ; address of "%s" 0012F9FE B8 FFE4BFFF MOV EAX,FFBFE4FF 0012FA03 F7D0 NOT EAX 0012FA05 FFD0 CALL EAX ;address of printf This code is really ugly because I am new in assembler and there mustn't be null bytes because of buffer-overflow bug

    Read the article

  • Java - is it possible to read a file line by line, stop, and then immediately start reading bytes where it left off?

    - by hatorade
    I'm having an issue trying to parse the ASCII part of a file and, once I hit the end tag, IMMEDIATELY start reading in the bytes from that point on. Everything I know of in Java to read off a line or a whole word creates a buffer, which ruins any chance of getting the bytes immediately following my stop point. Is the only way to do this to read in byte-by-byte, find newlines, reconstruct everything prior to the newline, check whether it's my end tag, and go from there?
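
    The question is about Java, where the usual line readers buffer ahead; purely as an illustration of the single-stream idea, here is the same shape in C++, where std::getline consumes exactly through the newline and the next read continues from that byte. The file name and end tag are hypothetical.

        #include <fstream>
        #include <iostream>
        #include <string>
        #include <vector>

        // Sketch: read ASCII lines until the end tag, then keep reading raw
        // bytes from exactly where the text stopped. Opening in binary mode
        // keeps byte positions exact; with Windows line endings each line
        // would carry a trailing '\r' that needs trimming before comparing.
        int main() {
            std::ifstream in("mixed.dat", std::ios::binary);
            std::string line;
            while (std::getline(in, line)) {
                if (line == "<END_OF_HEADER>") {  // hypothetical end tag
                    break;
                }
                // ...handle the ASCII header line...
            }
            // The stream position is now the first byte after the tag's
            // newline, so binary reads continue from there with nothing lost.
            std::vector<char> payload(1024);
            in.read(payload.data(), payload.size());
            std::cout << "read " << in.gcount() << " binary bytes after the tag\n";
            return 0;
        }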

    Read the article

  • problem with pasting image over lines in wx DC

    - by Moayyad Yaghi
    Hello. I have the same code as the wx Doodle pad in the wx demos, and I added a tool to paste images from the clipboard to the pad using the wx.DrawBitmap() function. But whenever I refresh the buffer and call the functions (drawSavedLines and drawSavedBitmap), it keeps putting the bitmap above the lines no matter what I do, even if I call draw bitmap first and then draw lines. Is there any way to put a line above the bitmap? Please let me know if I've missed anything. Thanks in advance.

    Read the article

  • How to tell GDB to flush the stdio of the program being debugged

    - by Yorkwar
    The stdio streams are usually buffered. When I hit a breakpoint and there's a printf before the breakpoint, the printed string may still be in the buffer and I cannot see it. I know I can flush stdio by adding some flushing code to the program. Without doing this, is there any way to tell GDB to flush the stdio of the program being debugged after GDB stops? That would be friendlier when debugging a program.

    Read the article

  • OpenGL ES: render but don't show on display

    - by Sponge
    I have written an object selection algorithm which picks objects by their color. I give every object a unique color and then I just have to use the glReadPixels method to check which object was selected. This works fine and is really fast, but the problem is that the frame is displayed on the screen with all the picking colors, so the screen flashes every time you select something. So my question is: how do I render everything into the correct buffer but not display it on the screen, to avoid these flashes?
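
    One common way to keep a picking pass off the screen is to render it into an offscreen framebuffer object and read the pixel back from there. A sketch using desktop GL names (on OpenGL ES the equivalent calls exist, some with an OES suffix); it assumes a current context and a loader such as GLEW, and drawSceneWithPickingColors() is a placeholder for the existing picking render.

        #include <GL/glew.h>  // assumes a loader and a current GL context

        // Sketch: render the color-ID picking pass into an offscreen
        // framebuffer object so the picking colors never reach the screen.
        GLuint createPickingTarget(int width, int height, GLuint* colorTexOut) {
            GLuint fbo = 0, colorTex = 0, depthRb = 0;

            glGenTextures(1, &colorTex);
            glBindTexture(GL_TEXTURE_2D, colorTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

            glGenRenderbuffers(1, &depthRb);
            glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, colorTex, 0);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                      GL_RENDERBUFFER, depthRb);
            // In real code, check glCheckFramebufferStatus(GL_FRAMEBUFFER) here.
            glBindFramebuffer(GL_FRAMEBUFFER, 0);

            *colorTexOut = colorTex;
            return fbo;
        }

        // At pick time: draw into the FBO, read the pixel, then unbind. The
        // visible framebuffer is never touched, so nothing flashes.
        void pick(GLuint fbo, int x, int y, unsigned char rgba[4]) {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            // drawSceneWithPickingColors();  // hypothetical: the existing picking pass
            glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    Since the default framebuffer is never drawn to during the pick, the normal frame can be rendered afterwards as usual.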

    Read the article

  • rTorrent, too low memory usage !?

    - by Claudiu
    I want to know from more experienced rTorrent users how to tweak .rtorrent.rc so that rTorrent will cache disk reads and writes (the same as uTorrent does). I have set max_memory_usage = 1GB but this amount is not used. I run 6 rTorrent instances on a quad-core, 8 GB RAM machine, and the total used memory reported by htop is only ~500MB. I need to use memory buffers because disk I/O activity is very high.

    Read the article

  • How to read the statistics in Media Player Classic?

    - by netvope
    I understand that the two numbers under bitrate are the average bitrate and the current bitrate of the stream. But what are the two numbers under buffers? I suppose the second one is the amount of data loaded in memory, but what is the first number? The amount of data decoded? Also, why are there a jitter and a sync offset? (For your reference, here streams 0-6 are the video, audio track 1, audio track 2, subtitle track 1 and subtitle track 2.)

    Read the article

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame. // Bind VAO glBindVertexArray(m_vao); // Update the vertex buffer with the new data (Copy data into the vertex buffer object) glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW); // Update the index buffer with the new data (Copy data into the index buffer object) glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW); glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0)); // Unbind VAO glBindVertexArray(0); What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size: std::array<VertexPosition, maxVertices> m_vertices; The index buffer has its elements set from an array of defined maximum size: std::array<unsigned short, maxIndices> indices = { 0 }; A running total is kept of the number of vertices and indices needed for each draw call numVertices numIndices Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example using the vertex buffer object glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW); m_vertices.data() = Entire array is stored numVertices * sizeof(VertexPosition) = Amount of data to read from the entire array Is this not the correct way to approach this? I do not wish to use std::vector if possible.
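
    One way to read this back, as a sketch: allocate both buffer objects once at their maximum size, then per frame upload only the used prefix with glBufferSubData and pass the running totals to glDrawElements, exactly as the question already does for the draw call. Types and sizes below echo the question; the function name and the assumption that the VBO/IBO are still bound to GL_ARRAY_BUFFER / GL_ELEMENT_ARRAY_BUFFER are mine.

        #include <GL/glew.h>  // assumes a loader and a current context, as in the question
        #include <array>
        #include <cstddef>

        struct VertexPosition { float x, y, z; };
        constexpr std::size_t maxVertices = 1024;
        constexpr std::size_t maxIndices  = 2048;

        // Per-frame path: replace only the prefix that is actually used and
        // draw only that many indices; stale data past the prefix is never read.
        void uploadAndDraw(GLuint vao,
                           const std::array<VertexPosition, maxVertices>& vertices,
                           const std::array<unsigned short, maxIndices>& indices,
                           GLsizei numVertices, GLsizei numIndices) {
            glBindVertexArray(vao);

            // One-time, full-size allocation (done at startup) would be:
            //   glBufferData(GL_ARRAY_BUFFER, maxVertices * sizeof(VertexPosition),
            //                nullptr, GL_DYNAMIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0,
                            numVertices * sizeof(VertexPosition), vertices.data());
            glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0,
                            numIndices * sizeof(unsigned short), indices.data());

            glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, nullptr);

            glBindVertexArray(0);
        }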

    Read the article

  • Problem when trying to use simple Shaders + VBOs

    - by Mr.Gando
    Hello I'm trying to convert the following functions to a VBO based function for learning purposes, it displays a static texture on screen. I'm using OpenGL ES 2.0 with shaders on the iPhone (should be almost the same than regular OpenGL in this case), this is what I got working: //Works! - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth { GLfloat coordinates[] = { 0, 1, 1, 1, 0, 0, 1, 0 }; GLfloat width = (GLfloat)_width * _maxS, height = (GLfloat)_height * _maxT; GLfloat vertices[] = { -width / 2 + point.x, -height / 2 + point.y, width / 2 + point.x, -height / 2 + point.y, -width / 2 + point.x, height / 2 + point.y, width / 2 + point.x, height / 2 + point.y, }; glBindTexture(GL_TEXTURE_2D, _name); //Attrib position and attrib_tex coord are handles for the shader attributes glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices); glEnableVertexAttribArray(ATTRIB_POSITION); glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates); glEnableVertexAttribArray(ATTRIB_TEXCOORD); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); } I tried to do this to convert to a VBO however I don't see anything displaying on-screen with this version: //Doesn't display anything - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth { GLfloat width = (GLfloat)_width * _maxS, height = (GLfloat)_height * _maxT; GLfloat position[] = { -width / 2 + point.x, -height / 2 + point.y, width / 2 + point.x, -height / 2 + point.y, -width / 2 + point.x, height / 2 + point.y, width / 2 + point.x, height / 2 + point.y, }; //Texture on-screen position ( each vertex is x,y in on-screen coords ) GLfloat coordinates[] = { 0, 1, 1, 1, 0, 0, 1, 0 }; // Texture coords from 0 to 1 glBindVertexArrayOES(vao); glGenVertexArraysOES(1, &vao); glGenBuffers(2, vbo); //Buffer 1 glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW); glEnableVertexAttribArray(ATTRIB_POSITION); glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, position); //Buffer 2 glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW); glEnableVertexAttribArray(ATTRIB_TEXCOORD); glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates); //Draw glBindVertexArrayOES(vao); glBindTexture(GL_TEXTURE_2D, _name); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); } In both cases I'm using this simple Vertex Shader //Vertex Shader attribute vec2 position;//Bound to ATTRIB_POSITION attribute vec4 color; attribute vec2 texcoord;//Bound to ATTRIB_TEXCOORD varying vec2 texcoordVarying; uniform mat4 mvp; void main() { //You CAN'T use transpose before in glUniformMatrix4fv so... here it goes. gl_Position = mvp * vec4(position.x, position.y, 0.0, 1.0); texcoordVarying = texcoord; } The gl_Position is equal to product of mvp * vec4 because I'm simulating glOrthof in 2D with that mvp And this Fragment Shader //Fragment Shader uniform sampler2D sampler; varying mediump vec2 texcoordVarying; void main() { gl_FragColor = texture2D(sampler, texcoordVarying); } I really need help with this, maybe my shaders are wrong for the second case ? thanks in advance.
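
    Without the full project it is hard to be definitive, but two details in the second version commonly produce a blank screen with this pattern: the VAO is bound before glGenVertexArraysOES has created it, and once a VBO is bound, the last argument of glVertexAttribPointer must be a byte offset into that buffer rather than the client-memory array. A sketch of the setup order, written with desktop GL names (the OES-suffixed calls on iOS behave the same); ATTRIB_POSITION and ATTRIB_TEXCOORD are the question's attribute locations:

        #include <GL/glew.h>  // desktop names; on iOS the same calls carry the OES suffix

        enum { ATTRIB_POSITION = 0, ATTRIB_TEXCOORD = 1 };

        // Sketch of VAO + VBO setup: generate before binding, and pass buffer
        // offsets (not client pointers) once a VBO is bound.
        void setupQuad(GLuint* vaoOut, GLuint vbo[2],
                       const GLfloat position[8], const GLfloat coordinates[8]) {
            glGenVertexArrays(1, vaoOut);   // generate first...
            glBindVertexArray(*vaoOut);     // ...then bind

            glGenBuffers(2, vbo);

            glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0,
                                  (const void*)0);  // offset into vbo[0], not 'position'

            glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0,
                                  (const void*)0);  // offset into vbo[1]

            glBindVertexArray(0);
        }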

    Read the article

  • how to exploit vulnerability in php

    - by Dr Deo
    I have never seen a buffer overflow exploit in live action. Suppose I have found a server that seems to have vulnerabilities. Where can I get proof-of-concept code, preferably in C/C++, to exploit the vulnerability? E.g. I found this vulnerability: Multiple directory traversal vulnerabilities in functions such as 'posix_access()', 'chdir()', 'ftok()' may allow a remote attacker to bypass 'safe_mode' restrictions (CVE-2008-2665 and CVE-2008-2666). How can I get proof-of-concept code for educational purposes? PS: I am a student and my only desire is to learn.

    Read the article

  • recv returns old data

    - by anon
    This loop is supposed to take data from a socket line by line and put it in a buffer. For some reason, when there is no new data to return, recv returns the last couple of lines it got. I was able to stop the bug by commenting out the first recv, but then I can't tell how long the next line will be. I know it's not a while(this->connected){ memset(buf, '\0', sizeof(buf)); recv(this->sock, buf, sizeof(buf), MSG_PEEK); //get length of next message ptr = strstr(buf, "\r\n"); if (ptr == NULL) continue; err = recv(this->sock, buf, (ptr-buf), NULL); //get next message printf("--%db\n%s\n", err, buf); tok[0] = strtok(buf, " "); for(i=1;tok[i-1]!=NULL;i++) tok[i] = strtok(NULL, " "); //do more stuff }
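
    For what it's worth, a sketch in C++ (POSIX sockets) of the peek-then-consume framing the loop seems to be aiming for: after finding the delimiter in the peeked data, the follow-up recv consumes the line plus the two delimiter bytes, so the same data is never returned twice. Error handling and partial-read loops are omitted; 'sock' is assumed to be a connected TCP socket.

        #include <cstring>
        #include <string>
        #include <sys/socket.h>
        #include <sys/types.h>

        // Sketch: peek at queued data, look for "\r\n", then consume exactly
        // the line *including* the delimiter.
        bool readLine(int sock, std::string& line) {
            char buf[4096];
            ssize_t peeked = recv(sock, buf, sizeof(buf) - 1, MSG_PEEK);
            if (peeked <= 0) return false;       // closed or error
            buf[peeked] = '\0';

            const char* end = std::strstr(buf, "\r\n");
            if (end == nullptr) return false;    // no complete line buffered yet

            size_t lineLen = static_cast<size_t>(end - buf);
            line.assign(buf, lineLen);

            // Consume the line plus the 2-byte delimiter; peeking left it queued.
            recv(sock, buf, lineLen + 2, 0);
            return true;
        }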

    Read the article

  • Storing Shell Output

    - by Emil Radoncik
    Hello everybody, I am trying to read the output of a shell command into a string buffer. The reading and appending of the values is OK except that only every second line of the shell output gets added. For example, if I have 10 rows of shell output, this code only stores rows 1, 3, 5, 7, 9. Can anyone point out why I am not able to catch every row with this code? Any suggestion or idea is welcome :) import java.io.*; public class Linux { public static void main(String args[]) { try { StringBuffer s = new StringBuffer(); Process p = Runtime.getRuntime().exec("cat /proc/cpuinfo"); BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream())); while (input.readLine() != null) { //System.out.println(line); s.append(input.readLine() + "\n"); } System.out.println(s.toString()); } catch (Exception err) { err.printStackTrace(); } } }

    Read the article

  • Right multi-object dependence design

    - by kenny
    I need some help with a correct design. I have a class called BufferManager. The push() method of this class reads data from a source and pushes it to a buffer repeatedly until there is no more data left in the source. There are also consumer threads that read data from this buffer as soon as new data arrives. There is an option to sort the data before it reaches the buffer. What I do right now is that BufferManager, instead of pushing data to the buffer, pushes it to another "sorting" buffer and starts a sorting thread. The SorterManager class reads the data, sorts it in files and push()es the sorted data into the buffer. There will be a bottleneck (I use merge sort with files) but this is something I can't avoid. This is a bad design, because both BufferManager and SorterManager push data to a buffer (that consumers read from). I think only BufferManager should do it. How can I design it?

    Read the article

  • How should I handle incomplete packet buffers?

    - by Benjamin Manns
    I am writing a client for a server that typically sends data as strings of 500 or fewer bytes. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes for all the client knows (on initialization or significant events). However, I would rather not have each client running with a 50 MB socket buffer (if that's even possible). Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially received data sets? For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0. I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. Also, the server could send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character. I would want that data set stored somewhere for later appending and processing. Also, I'm using asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive) if that matters.
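
    A sketch of the accumulation structure being asked about, written in C++ for illustration (the question itself is C#, where a growable buffer plays the same role): append whatever each receive delivered, peel off every complete null-terminated message, and keep the unterminated tail for the next call. The class name is made up.

        #include <cstddef>
        #include <string>
        #include <vector>

        // Sketch: reassemble '\0'-delimited messages from arbitrary-sized
        // receive chunks; partial data simply waits in 'pending_'.
        class MessageAssembler {
        public:
            // Feed raw bytes from one receive; returns the complete messages found.
            std::vector<std::string> feed(const char* data, std::size_t len) {
                pending_.append(data, len);
                std::vector<std::string> messages;
                std::size_t start = 0;
                for (;;) {
                    std::size_t nul = pending_.find('\0', start);
                    if (nul == std::string::npos) break;
                    messages.emplace_back(pending_, start, nul - start);
                    start = nul + 1;
                }
                pending_.erase(0, start);  // whatever remains is a partial message
                return messages;
            }

        private:
            std::string pending_;
        };

    Because the tail is kept between calls, the 200,000-byte case needs no special handling; it simply spans several receives before its terminating \0 arrives.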

    Read the article
