Search Results

Search found 3117 results on 125 pages for 'z buffer'.


  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu Server). We have been facing a high recv-q problem for quite some time. Every few hours the application hangs and stops reading data from the socket. In the thread dump, we found the stack trace below. "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000] java.lang.Thread.State: RUNNABLE at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:150) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read1(BufferedInputStream.java:275) at java.io.BufferedInputStream.read(BufferedInputStream.java:334) - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream) at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413) at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197) at org.smpp.Receiver.receiveAsync(Receiver.java:351) at org.smpp.ReceiverBase.process(ReceiverBase.java:96) at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199) at java.lang.Thread.run(Thread.java:722) We are not able to trace the exact reason behind this. Kindly help. We are using a 16-core machine, and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to check the recv-q. Recently the recv-q has been getting stuck: the receive buffer fills up at some point, the recv-q size does not decrease, and as a result we lose a lot of hits from the other side because our application stops accepting data.

    Read the article

  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I have a problem. I have two TCP apps connected to each other which use Winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst. The sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example: void some_function() { char cBuff[1024]; // filling cBuff with some data WSASend(...); // sending cBuff, non-blocking mode // filling cBuff with other data WSASend(...); // again, sending cBuff // ..... and so forth! } If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big sack of such buffers, how should I handle them, how can I avoid a performance penalty, etc.? And if I am to use such buffers, that means I should copy the data to be sent from the source buffer to the temporary one; thus, I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
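
    One way to honour the "buffer must outlive the send" rule is to give every overlapped send its own heap-owned context. Below is a minimal C++ sketch of that idea, not the poster's code: it assumes Winsock2, a connected SOCKET already associated with an I/O completion port, and a completion loop that eventually calls release_send_context(); the SendContext name is invented for illustration.

        // Give every overlapped WSASend its own heap-owned buffer and OVERLAPPED.
        #include <winsock2.h>
        #include <cstring>

        struct SendContext {              // hypothetical per-operation bookkeeping
            OVERLAPPED overlapped;        // must stay valid until the completion arrives
            WSABUF     wsabuf;
            char       data[1024];        // private copy of the payload
        };

        bool post_send(SOCKET s, const char* payload, int len) {
            if (len <= 0 || len > 1024) return false;   // payload must fit the private copy
            SendContext* ctx = new SendContext();
            std::memset(&ctx->overlapped, 0, sizeof(ctx->overlapped));
            std::memcpy(ctx->data, payload, len);
            ctx->wsabuf.buf = ctx->data;
            ctx->wsabuf.len = (ULONG)len;

            // With an OVERLAPPED, pass NULL for the bytes-sent pointer and let the
            // completion packet report the result.
            int rc = WSASend(s, &ctx->wsabuf, 1, NULL, 0, &ctx->overlapped, NULL);
            if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
                delete ctx;               // the send failed outright, reclaim the context
                return false;
            }
            return true;                  // ctx is freed when the IOCP completion is dequeued
        }

        // Called from the GetQueuedCompletionStatus() loop after the send completes.
        void release_send_context(SendContext* ctx) { delete ctx; }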

    Read the article

  • EVP_PKEY from char buffer in x509 (PKCS7)

    - by sid
    Hi All, I have a DER certificate from which I am retrieving the public key into an unsigned char buffer as follows; is this the right way of getting it? pStoredPublicKey = X509_get_pubkey(x509); if(pStoredPublicKey == NULL) { printf(": publicKey is NULL\n"); } if(pStoredPublicKey->type == EVP_PKEY_RSA) { RSA *x = pStoredPublicKey->pkey.rsa; bn = x->n; } else if(pStoredPublicKey->type == EVP_PKEY_DSA) { } else if(pStoredPublicKey->type == EVP_PKEY_EC) { } else { printf(" : Unknown publicKey\n"); } //extracts the bytes from the public key & converts them into an unsigned char buffer buf_len = (size_t) BN_num_bytes (bn); key = (unsigned char *)malloc (buf_len); n = BN_bn2bin (bn, (unsigned char *) key); for (i = 0; i < n; i++) { printf("%02x\n", (unsigned char) key[i]); } keyLen = EVP_PKEY_size(pStoredPublicKey); EVP_PKEY_free(pStoredPublicKey); And with this unsigned char buffer, how do I get back the EVP_PKEY for RSA? Or can I use the following? EVP_PKEY *d2i_PublicKey(int type, EVP_PKEY **a, unsigned char **pp, long length); int i2d_PublicKey(EVP_PKEY *a, unsigned char **pp);
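
    Since the code above keeps only the raw modulus bytes, one hedged illustration of a round trip is to serialize the whole public key with OpenSSL's DER helpers instead, which is the kind of buffer the d2i functions can later reconstruct. A minimal C++ sketch, assuming <openssl/x509.h> and abbreviated error handling:

        // Round-trip an EVP_PKEY through a byte buffer using DER encoding.
        #include <openssl/evp.h>
        #include <openssl/x509.h>

        unsigned char* pubkey_to_der(X509* cert, int* out_len) {
            EVP_PKEY* pkey = X509_get_pubkey(cert);
            if (pkey == NULL) return NULL;
            unsigned char* der = NULL;
            *out_len = i2d_PUBKEY(pkey, &der);   // OpenSSL allocates; free with OPENSSL_free()
            EVP_PKEY_free(pkey);
            return (*out_len > 0) ? der : NULL;
        }

        EVP_PKEY* pubkey_from_der(const unsigned char* der, long len) {
            const unsigned char* p = der;        // d2i advances this pointer
            return d2i_PUBKEY(NULL, &p, len);    // rebuilds the EVP_PKEY (RSA, DSA or EC)
        }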

    Read the article

  • Symbian: clear buffer of RSocket object

    - by Heinz
    Hi, I have to come back once again to sockets in Symbian. The code to set up a connection to a remote server looks as follows: TInetAddr serverAddr; TUint iPort=111; TRequestStatus iStatus; TSockXfrLength len; TInt res = iSocketSrv.Connect(); res = iSocket.Open(iSocketSrv,KAfInet,KSockStream, KProtocolInetTcp); res = iSocket.SetOpt(KSoTcpSendWinSize, KSolInetTcp, 0x10000); serverAddr.SetPort(iPort); serverAddr.SetAddress(INET_ADDR(11,11,179,154)); iSocket.Connect(serverAddr,iStatus); User::WaitForRequest(iStatus); Over the iSocket I receive packets of variable size. On very rare occasions such a packet is corrupted. What I would like to do then is clear all the data that is currently in the iSocket buffer and ready to be read. I have not seen any method of RSocket that allows me to clear the content of the buffer. Does anyone know how to do that? If possible, I would like to avoid using RecvOneOrMore() or a similar recv function to clear the buffer. Thanks

    Read the article

  • Java: how to avoid circular references when dumping object information with reflection?

    - by Tom
    I've modified an object dumping method to avoid circular references causing a StackOverflowError. This is what I ended up with: //returns all fields of the given object in a string public static String dumpFields(Object o, int callCount, ArrayList excludeList) { //add this object to the exclude list to avoid circular references in the future if (excludeList == null) excludeList = new ArrayList(); excludeList.add(o); callCount++; StringBuffer tabs = new StringBuffer(); for (int k = 0; k < callCount; k++) { tabs.append("\t"); } StringBuffer buffer = new StringBuffer(); Class oClass = o.getClass(); if (oClass.isArray()) { buffer.append("\n"); buffer.append(tabs.toString()); buffer.append("["); for (int i = 0; i < Array.getLength(o); i++) { if (i < 0) buffer.append(","); Object value = Array.get(o, i); if (value != null) { if (excludeList.contains(value)) { buffer.append("circular reference"); } else if (value.getClass().isPrimitive() || value.getClass() == java.lang.Long.class || value.getClass() == java.lang.String.class || value.getClass() == java.lang.Integer.class || value.getClass() == java.lang.Boolean.class) { buffer.append(value); } else { buffer.append(dumpFields(value, callCount, excludeList)); } } } buffer.append(tabs.toString()); buffer.append("]\n"); } else { buffer.append("\n"); buffer.append(tabs.toString()); buffer.append("{\n"); while (oClass != null) { Field[] fields = oClass.getDeclaredFields(); for (int i = 0; i < fields.length; i++) { if (fields[i] == null) continue; buffer.append(tabs.toString()); fields[i].setAccessible(true); buffer.append(fields[i].getName()); buffer.append("="); try { Object value = fields[i].get(o); if (value != null) { if (excludeList.contains(value)) { buffer.append("circular reference"); } else if ((value.getClass().isPrimitive()) || (value.getClass() == java.lang.Long.class) || (value.getClass() == java.lang.String.class) || (value.getClass() == java.lang.Integer.class) || (value.getClass() == java.lang.Boolean.class)) { buffer.append(value); } else { buffer.append(dumpFields(value, callCount, excludeList)); } } } catch (IllegalAccessException e) { System.out.println("IllegalAccessException: " + e.getMessage()); } buffer.append("\n"); } oClass = oClass.getSuperclass(); } buffer.append(tabs.toString()); buffer.append("}\n"); } return buffer.toString(); } The method is initially called like this: System.out.println(dumpFields(obj, 0, null)); So, basically I added an excludeList which contains all the previously checked objects. Now, if an object contains another object and that object links back to the original object, it should not follow that object further down the chain. However, my logic seems to have a flaw, as I still get stuck in an infinite loop. Does anyone know why this is happening?

    Read the article

  • How large should my recv buffer be when calling recv in the socket library

    - by Silmaril89
    Hi, I have a few questions about the socket library in C. Here is a snippet of code I'll refer to in my questions. char recv_buffer[3000]; recv(socket, recv_buffer, 3000, 0); First, how do I decide how big to make recv_buffer? I'm using 3000, but it's arbitrary. Second, what happens if recv() receives a packet bigger than my recv_buffer? Third, how can I know whether I have received the entire message without calling recv again and having it wait forever when there is nothing left to receive? And finally, is there a way I can make a buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out of space? Maybe using strcat to concatenate the latest recv() response to the buffer? I know it's a lot of questions in one, but I would greatly appreciate any responses.
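
    For the "how big" and "how do I avoid running out of space" questions, one common pattern is to read in fixed-size chunks and append to a growable container until the peer closes the connection (protocols that keep the connection open need a length prefix or delimiter instead). A small C++ sketch of that pattern, assuming a blocking POSIX socket:

        // Read a whole message into a growable buffer, one chunk at a time.
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <vector>

        std::vector<char> recv_all(int sock) {
            std::vector<char> message;
            char chunk[4096];                       // chunk size is arbitrary, like the 3000 above
            for (;;) {
                ssize_t n = recv(sock, chunk, sizeof(chunk), 0);
                if (n > 0)
                    message.insert(message.end(), chunk, chunk + n);   // grow as needed
                else
                    break;   // 0 = peer closed (message complete), -1 = error (check errno)
            }
            return message;
        }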

    Read the article

  • ORA-22835 using JPA (Buffer too small)

    - by Kenneth
    I am trying to persist an Entity with a @Lob annotated String field. The content of that field is bigger than the 40k buffer size limit. The first problem I had was related to the setString method used internally by the JPA implementation (Hibernate in my case) and the Oracle JDBC driver. This problem was solved by adding <property name="hibernate.connection.SetBigStringTryClob" value="true"/> to my persistence.xml file. Then the error changed to an ORA-22835 error (the buffer is too small). Is there any way for JPA to solve this problem without going to a low-level implementation? Any suggestions?

    Read the article

  • OpenGL Motion blur with the accumulation buffer in WxWidgets

    - by Klaus
    Hello, I'm trying to achieve a motion blur effect in my OpenGL application. I read about this solution somewhere, using the accumulation buffer: glAccum(GL_MULT, 0.90); glAccum(GL_ACCUM, 0.10); glAccum(GL_RETURN, 1.0); glFlush(); at the end of the render loop. But nothing happens... What am I missing? Additions after genpfault's answer: Indeed, I did not ask for an accumulation buffer when I initialized my context. So I tried to pass an array of attributes to the constructor of my wxGLCanvas, as described here: http://docs.wxwidgets.org/2.6/wx_wxglcanvas.html : int attribList[]={ WX_GL_RGBA , WX_GL_DOUBLEBUFFER , WX_GL_MIN_ACCUM_RED, WX_GL_MIN_ACCUM_GREEN, WX_GL_MIN_ACCUM_BLUE, 0} But all I get is a friendly segfault. Does someone understand how to use this? (There are no problems with int attribList[]={ WX_GL_RGBA , WX_GL_DOUBLEBUFFER , 0}.)
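
    The crash is consistent with the WX_GL_MIN_ACCUM_* attributes expecting a value to follow each token, the way other paired wxGLCanvas attributes do. A hedged C++ sketch of what the list might look like under that assumption (the requested bit depths here are guesses):

        // Attribute list for wxGLCanvas requesting accumulation buffer bits.
        #include <wx/glcanvas.h>

        int attribList[] = {
            WX_GL_RGBA,
            WX_GL_DOUBLEBUFFER,
            WX_GL_MIN_ACCUM_RED,   8,   // each MIN_ACCUM_* attribute takes a value
            WX_GL_MIN_ACCUM_GREEN, 8,
            WX_GL_MIN_ACCUM_BLUE,  8,
            WX_GL_MIN_ACCUM_ALPHA, 8,
            0                           // list terminator
        };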

    Read the article

  • how to flush the console buffer?

    - by DoronS
    Hi all, I have some code that runs repeatedly: printf("do you want to continue? Y/N: \n"); keepplaying = getchar(); The next time my code runs, it doesn't wait for input. I found out that the second getchar call uses '\n' as the character. I'm guessing this is due to some buffer stdio has, so it saves the last input, which was "Y\n" or "N\n". My question is, how do I flush the buffer before using getchar, so that getchar will wait for my answer?
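
    The usual fix is to consume the rest of the input line (including the trailing '\n') before the next prompt. A minimal sketch:

        // Discard everything left on the current input line so the next getchar() blocks again.
        #include <stdio.h>

        static void discard_line(void) {
            int c;
            while ((c = getchar()) != '\n' && c != EOF) {
                /* keep reading until end of line or end of input */
            }
        }

        int main(void) {
            printf("do you want to continue? Y/N: \n");
            int keepplaying = getchar();   // reads 'Y' or 'N'
            discard_line();                // eats the '\n' so the next prompt waits for input
            return (keepplaying == 'Y' || keepplaying == 'y') ? 0 : 1;
        }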

    Read the article

  • evaluating buffer in emacs python-mode on remote host

    - by Adrian
    Hello, I'm using emacs23 with tramp to modify python scripts on a remote host. I found that when I start the python shell within emacs it starts up python on the remote host. My problem is that when I then try to call python-send-buffer via C-c C-c it comes up with the error Traceback (most recent call last): File "", line 1, in ? ImportError: No module named emacs Traceback (most recent call last): File "", line 1, in ? NameError: name 'emacs' is not defined Now, I must admit that I don't really know what's going on here. Is there a way for me to configure emacs so that I can evaluate the buffer on the remote host? Many thanks.

    Read the article

  • snipMate only working on empty buffer?

    - by JesseBuesking
    I'm attempting to use snipMate with SQL files; however, it doesn't seem to work when editing an existing file. If I create a new empty buffer (no file; e.g. launch gvim from the start menu) and set the filetype to sql (:set ft=sql), it works. However, if I then try to open a SQL file (e.g. :e c:\blah.sql) and edit it, snipMate no longer works. What gives!? Setup: gvim, Vim 7.3, Windows 7, snipMate 0.84. Also, I do in fact have filetype plugin on in my .vimrc file. Edit: Apparently if I open an empty buffer, set the filetype to sql, then save to a file using :w c:\blah.sql, I now have a SQL file open AND snipMate continues to work. Edit: Here's a gist of my current .vimrc in case it helps: https://gist.github.com/3946877

    Read the article

  • buffer overflow with boost::program_options

    - by f4
    Hello, I have a problem using boost::program_options. This simple program, copy-pasted from Boost's documentation: #include <boost/program_options.hpp> int main( int argc, char** argv ) { namespace po = boost::program_options; po::options_description desc("Allowed options"); desc.add_options() ("help", "produce help message") ("compression", po::value<int>(), "set compression level") ; return 0; } fails with a buffer overflow. I have activated the "buffer security switch", and when I run it I get an "unknown exception (0xc0000409)" when I step over the line desc.add_options()... I use Visual Studio 2005 and Boost 1.43.0. By the way, it does run if I deactivate the switch, but I don't feel comfortable doing so... unless it's possible to deactivate it locally. So do you have a solution to this problem? EDIT: I found the problem; I was linking against libboost_program_options-vc80-mt.lib, which wasn't the right library.

    Read the article

  • How to signal a buffer full state between posix threads

    - by mikip
    Hi, I have two threads. The main thread, A, is responsible for message handling between a number of processes. When thread A gets a buffer-full message, it should inform thread B and pass a pointer to the buffer, which thread B will then process. When thread B has finished, it should inform thread A that it is done. How do I go about implementing this using POSIX threads in C on Linux? I have looked at condition variables; is this the way to go? I'm not experienced in multi-threaded programming and would like some advice on the best avenue to take. Thanks
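
    Condition variables are indeed a reasonable fit here. A minimal C++ sketch of the hand-off, with invented names and a single shared buffer pointer standing in for the real data:

        // Thread A hands a full buffer to thread B and waits for B to finish.
        #include <pthread.h>
        #include <stddef.h>

        static pthread_mutex_t lock         = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  buffer_ready = PTHREAD_COND_INITIALIZER;
        static pthread_cond_t  buffer_done  = PTHREAD_COND_INITIALIZER;
        static char*           full_buffer  = NULL;   // set by A, cleared by B

        void hand_off(char* buf) {                    // called by thread A
            pthread_mutex_lock(&lock);
            full_buffer = buf;
            pthread_cond_signal(&buffer_ready);
            while (full_buffer != NULL)               // wait until B reports completion
                pthread_cond_wait(&buffer_done, &lock);
            pthread_mutex_unlock(&lock);
        }

        void* worker(void* arg) {                     // thread B's loop
            for (;;) {
                pthread_mutex_lock(&lock);
                while (full_buffer == NULL)
                    pthread_cond_wait(&buffer_ready, &lock);
                char* buf = full_buffer;
                pthread_mutex_unlock(&lock);

                /* process(buf); */                   // real work happens outside the lock

                pthread_mutex_lock(&lock);
                full_buffer = NULL;
                pthread_cond_signal(&buffer_done);    // tell A we are finished
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }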

    Read the article

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. Now they have bought a new server: 4 * 6 cores = 24 CPU cores, 48 GB RAM, 2 RAID controllers with 256 MB cache and 8 SAS 15K HDDs on each, 64-bit OS. I'm wondering which would be the fastest configuration: 1) FB 2.5 SuperServer with a huge buffer of 8192 * 3,500,000 pages = 29 GB, or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

    Read the article

  • Simple Emacs keybindings

    - by User1
    I have three operations that I do all the time in Emacs: Create a new buffer and paste the clipboard. [C-S-n] Close the current buffer. [C-S-w] Switch to the last viewed buffer [C-TAB] I feel like a keyboard acrobat when doing the first two operations. I think it would be worth trying some custom keybindings and macros. A few questions about this customization: How would I make a macro for #1? Are these good keybindings? (I know this is a bit subjective, but they might be used by something popular that I don't use.) Has anyone found a Ctrl-Tab macro that will act like Alt-Tab in Linux/Windows? Specifically, I want it to have a stack of buffers according to the last viewed timestamp (most recent on top). I want to continue cycling through the stack until I let go of the Ctrl key. When the Ctrl key is released, I want the current buffer to get an updated position on the stack.

    Read the article

  • OpenGL index buffer object with additional data

    - by muksie
    I have a large set of lines, which I render from a vertex buffer object using glMultiDrawArrays(GL_LINE_STRIP, ...); This works perfectly well. Now I have lots of vertex pairs which I also have to visualize. Every pair consists of two vertices on two different lines, and the distance between the vertices is small. However, I would like to be able to draw a line between all vertex pairs with a distance less than a certain value. What I would like to have is something like a buffer object with the following structure: i1, j1, r1, i2, j2, r2, i3, j3, r3, ... where the i's and j's are indices pointing to vertices and the r's are the distances between those vertices. Thus every vertex pair is stored as an (i, j, r) tuple. Then I would like to have a (vertex) shader which draws only the vertex pairs with r < SOME_VALUE as a line. So my question is, what is the best way to achieve this?
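
    One straightforward option, sketched below rather than prescribed, is to keep the (i, j, r) tuples on the CPU, rebuild a plain GL_LINES index buffer from the pairs with r below the threshold whenever the threshold changes, and draw it with glDrawElements. This C++ sketch assumes an extension loader such as GLEW and that the line vertices already live in a bound VBO; PairIJR is a made-up struct:

        // Build an element buffer containing only the close vertex pairs.
        #include <GL/glew.h>
        #include <vector>

        struct PairIJR { GLuint i, j; float r; };     // hypothetical tuple layout

        GLsizei build_pair_index_buffer(const std::vector<PairIJR>& pairs,
                                        float max_distance, GLuint ibo) {
            std::vector<GLuint> indices;
            for (const PairIJR& p : pairs)
                if (p.r < max_distance) {             // keep only pairs closer than the threshold
                    indices.push_back(p.i);
                    indices.push_back(p.j);
                }
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                         indices.size() * sizeof(GLuint),
                         indices.data(), GL_DYNAMIC_DRAW);
            return (GLsizei)indices.size();
        }

        // Later, with the vertex VBO and attributes bound:
        //   glDrawElements(GL_LINES, index_count, GL_UNSIGNED_INT, 0);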

    Read the article

  • Socket receive buffer size

    - by Kanishka
    Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response whose size I am not sure of. IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"),20060); Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); server.Connect(ipep); String OutStr= "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50"; byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray(); int byteCount = server.Send(temp); byte[] bytes = new byte[255]; int res=0; res = server.Receive(bytes); return Encoding.UTF8.GetString(bytes);

    Read the article

  • Assigning unsigned char* buffer to a string

    - by CPPChase
    This question might have been asked before, but I couldn't find exactly what I need. My problem is that I have a buffer loaded with data downloaded from a web service. The buffer is an unsigned char* with no '\0' at the end. Then I have a Poco XML parser that needs a string. I tried assigning it to a string, but now I realize it could cause problems such as leaking. Here is the code: DOMParser::DOMParser(unsigned char* consatData, int consatDataSize, unsigned char* lagData, int lagDataSize) { Poco::XML::DOMParser parser; std::string consat; consat.assign((const char*) consatData, consatDataSize); pDoc = parser.parseString(consat); ParseConsat(); } The Poco XML parser does have a ParseMemory that needs a const char* and the size of the data, but for some reason it just gives me a segmentation fault. So I think it's safer to turn it into a string. Thanks in advance.
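
    For what it's worth, the assign() with an explicit length shown above is already a sound way to do this: std::string copies the bytes it is given, needs no trailing '\0', and the original buffer can be released afterwards by whoever allocated it. A tiny C++ sketch of that idea:

        // std::string makes its own copy, so the download buffer can be freed afterwards.
        #include <cstddef>
        #include <string>

        std::string to_string_copy(const unsigned char* data, std::size_t size) {
            return std::string(reinterpret_cast<const char*>(data), size);  // length-based, no '\0' needed
        }

        // Usage (buffer ownership stays with the caller):
        //   std::string consat = to_string_copy(consatData, consatDataSize);
        //   /* release consatData with whatever matches its allocation */
        //   pDoc = parser.parseString(consat);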

    Read the article

  • Using DynamicVertexBuffer in XNA 4.0

    - by Bevin
    I read about DynamicVertexBuffer and how it's supposed to be better for data that changes often. I have a world built up of cubes, and I need to store the cubes' vertices in this buffer to draw them to the screen. However, not all cubes have vertices (some are air, which is transparent), and not all faces of the cubes need to be drawn either (they are facing each other), so how do I keep track of which vertices are stored where in the buffer? Also, certain faces need to be drawn last, namely the ones with transparency in them (like glass or leaves), and these faces also need to be drawn in back-to-front order to not mess up the alpha blending. If all of these vertices are stored arbitrarily in this buffer, how do I know which vertices are where? Also, the number of vertices can change, but the DynamicVertexBuffer doesn't seem very dynamic to me, since I can't change its size at all. Do I have to recreate the buffer every time I need to add or remove faces?

    Read the article

  • Zsh: how to see all buffers?

    - by HH
    You can push things to the buffer with ^Q and pop them with ESC-g. Alt+x vi-set-buffer changes the buffer somehow. How can I see all the buffers? They are probably stored in some files I could look at.

    Read the article

  • Tie destruction of an object (sealed) to destruction of an unmanaged buffer

    - by testtestSO
    I'll explain my situation first: I'm interested in using the Bitmap constructor that takes scan0, stride and format, because I'm decoding tiled images and I'd like to choose my own stride so I can decode the tiles without caring about the bounds in the decoder part. Anyway, the problem is that the documentation says: The caller is responsible for allocating and freeing the block of memory specified by the scan0 parameter. However, the memory should not be released until the related Bitmap is released. I can't release the buffer easily, because the Bitmap is then passed to another class that will eventually destroy it, and I don't have control over that. Is there some way (hacky, I know) to tell the GC to also release my buffer when the Bitmap is destroyed? (Also, any alternative solution is welcome.)

    Read the article

  • OpenGL 3.0+ framebuffer to texture/images

    - by user827992
    I need a way to capture what is rendered on screen. I have read about glReadPixels, but it looks really slow. Can you suggest a more efficient or alternative way to copy what is rendered by OpenGL 3.0+ to local RAM, and in general to output it as an image or a data stream? How can I achieve the same goal with OpenGL ES 2.0? EDIT: I just forgot: with these OpenGL functions, how can I be sure that I'm actually reading a complete frame, meaning there is no overlap between two frames or any nasty side effect, and that I'm reading the frame that comes right after the previous one, so I do not lose frames?
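
    One commonly suggested alternative, sketched here rather than guaranteed to fit every setup, is to read into a pixel pack buffer object so glReadPixels returns without stalling and the copy is mapped later (desktop GL only; OpenGL ES 2.0 has no PBOs, they arrive in ES 3.0). The C++ sketch assumes an extension loader such as GLEW and a current GL 3.0+ context:

        // Asynchronous framebuffer readback through a pixel pack buffer (PBO).
        #include <GL/glew.h>
        #include <algorithm>
        #include <vector>

        std::vector<unsigned char> read_frame(int width, int height) {
            GLuint pbo = 0;
            glGenBuffers(1, &pbo);
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
            glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);

            // With a PBO bound, the last argument is an offset into the PBO, not a pointer.
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

            std::vector<unsigned char> pixels(width * height * 4);
            // Mapping right away still waits for the GPU; mapping a frame later hides the cost.
            void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
            if (src != NULL)
                std::copy((unsigned char*)src, (unsigned char*)src + pixels.size(), pixels.begin());
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
            glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
            glDeleteBuffers(1, &pbo);
            return pixels;
        }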

    Read the article

  • Use depth bias for shadows in deferred shading

    - by cubrman
    We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera, and the second one stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the ready shadow areas for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve this is to use slope-scale depth bias and depth offsets; however, as we are doing things in a deferred way, we cannot employ this algorithm. Any attempts to set a depth bias when capturing the light's view depth produced no results or unsatisfying ones. So here is my question: the MSDN article has a convoluted explanation of the slope scale: bias = (m × SlopeScaleDepthBias) + DepthBias where m is the maximum depth slope of the triangle being rendered, defined as: m = max( abs(delta z / delta x), abs(delta z / delta y) ) Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?
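
    The formula itself is only a few operations once the depth derivatives of the shadowed surface are known; in a GLSL fragment shader they would typically come from dFdx()/dFdy() of the light-space depth. The C++ fragment below is only an illustration of the arithmetic, with made-up parameter names and guessed tuning values, not a drop-in shader:

        // Slope-scale bias: the bias grows with the steepest depth slope of the surface.
        #include <algorithm>
        #include <cmath>

        float slope_scale_bias(float dzdx, float dzdy,
                               float slopeScaleDepthBias, float depthBias) {
            float m = std::max(std::fabs(dzdx), std::fabs(dzdy));   // maximum depth slope
            return m * slopeScaleDepthBias + depthBias;
        }

        // Shadow test sketch: a fragment is lit if its biased light-space depth is not
        // farther than the depth stored in the shadow map.
        //   float bias = slope_scale_bias(dzdx, dzdy, 2.0f, 0.0005f);   // tuning values are guesses
        //   bool lit = receiverLightDepth - bias <= shadowMapDepth;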

    Read the article

  • Renderbuffer to GLSL shader?

    - by Dan
    I have software that performs volume rendering through a raycasting approach. The actual raycasting shader writes the raycast volume depth, through gl_FragDepth, into a framebuffer object that I bind before calling the shader. The problem I have is that I would like to use this depth in another shader that I call later on. I figured out that the only way to do that is to bind the framebuffer once the raycasting has finished, read the depth map through something like glReadPixels(0, 0, m_winSize.x , m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); and write it to a 2D texture as usual with glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels), and then pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure that what I do is the proper way to do this. Is there any way to pass the framebuffer that I fill up in my raycasting shader to the other shader?
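
    One alternative worth sketching (not necessarily the only proper way) is to attach a depth texture to the raycasting FBO as its depth attachment, so the depth the first pass writes through gl_FragDepth can be sampled directly by the later shader, with no glReadPixels/glTexImage2D round trip through the CPU. The C++ sketch assumes an extension loader and an existing FBO:

        // Attach a depth texture to an FBO so a later pass can sample the written depth.
        #include <GL/glew.h>

        GLuint attach_depth_texture(GLuint fbo, int width, int height) {
            GLuint depthTex = 0;
            glGenTextures(1, &depthTex);
            glBindTexture(GL_TEXTURE_2D, depthTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                         GL_DEPTH_COMPONENT, GL_FLOAT, NULL);   // no CPU data needed
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                   GL_TEXTURE_2D, depthTex, 0);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return depthTex;   // bind this texture as a sampler for the second shader
        }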

    Read the article

  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using render to texture concept with glFramebufferTexture1D. I am drawing a cube on non-default FBO with all the vertices as -1,1 (maximum) in X Y Z direction. Now i am setting viewport to X while rendering on non default FBO. My background is blue with white color of cube. For default FBO, i have created 1-D texture and attached this texture to above FBO with color attachment. I am setting width of texture equal to width*height of above FBO view-port. Now, when i render this texture to on another cube, i can see continuous white color on start or end of each face of the cube. That means part of the face is white and rest is blue. I am not sure whether this behavior is correct or not. I expect all the texels should be white as i am using -1 and 1 coordinates for cube rendered on non-default FBO. code: #define WIDTH 3 #define HEIGHT 3 GLfloat vertices8[]={ 1.0f,1.0f,1.0f, -1.0f,1.0f,1.0f, -1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,//face 1 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 2 1.0f,1.0f,1.0f, 1.0f,-1.0f,1.0f, 1.0f,-1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 3 -1.0f,1.0f,1.0f, -1.0f,1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f,//face 4 1.0f,1.0f,1.0f, 1.0f,1.0f,-1.0f, -1.0f,1.0f,-1.0f, -1.0f,1.0f,1.0f,//face 5 -1.0f,-1.0f,1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f//face 6 }; GLfloat vertices[]= { 0.5f,0.5f,0.5f, -0.5f,0.5f,0.5f, -0.5f,-0.5f,0.5f, 0.5f,-0.5f,0.5f,//face 1 0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 2 0.5f,0.5f,0.5f, 0.5f,-0.5f,0.5f, 0.5f,-0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 3 -0.5f,0.5f,0.5f, -0.5f,0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f,//face 4 0.5f,0.5f,0.5f, 0.5f,0.5f,-0.5f, -0.5f,0.5f,-0.5f, -0.5f,0.5f,0.5f,//face 5 -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f, 0.5f,-0.5f,0.5f//face 6 }; GLuint indices[] = { 0, 2, 1, 0, 3, 2, 4, 5, 6, 4, 6, 7, 8, 9, 10, 8, 10, 11, 12, 15, 14, 12, 14, 13, 16, 17, 18, 16, 18, 19, 20, 23, 22, 20, 22, 21 }; GLfloat texcoord[] = { 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0 }; glGenTextures(1, &id1); glBindTexture(GL_TEXTURE_1D, id1); glGenFramebuffers(1, &Fboid); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT , 0, GL_RGBA, GL_UNSIGNED_BYTE,0); glBindFramebuffer(GL_FRAMEBUFFER, Fboid); glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_1D,id1,0); draw_cube(); glBindFramebuffer(GL_FRAMEBUFFER, 0); draw(); } draw_cube() { glViewport(0, 0, WIDTH, HEIGHT); glClearColor(0.0f, 0.0f, 0.5f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(temp.psId,"position")); glVertexAttribPointer(glGetAttribLocation(temp.psId,"position"), 3, GL_FLOAT, GL_FALSE, 0,vertices8); glDrawArrays (GL_TRIANGLE_FAN, 0, 24); } draw() { glClearColor(1.0f, 0.0f, 0.0f, 1.0f); glClearDepth(1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"tk_position")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"tk_position"), 3, GL_FLOAT, GL_FALSE, 0,vertices); nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, 
GL_FALSE, 0,vertices);")); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"inputtexcoord")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0,texcoord); glBindTexture(*target11, id1); glDrawElements ( GL_TRIANGLES, 36,GL_UNSIGNED_INT, indices ); When I change WIDTH=HEIGHT=2 and call glReadPixels with height and width equal to 4 in draw_cube(), I can see the first two pixels in white, the next two in blue (the glClearColor), the next two white, then blue, and so on. Now when I change the width parameter in glTexImage1D to 16, ideally I should see alternating patches of white and blue, right? But that's not the case here. Why?

    Read the article
