Search Results

Search found 4759 results on 191 pages for 'depth buffer'.

Page 15/191

  • Dumping a Linux console scrollback buffer?

    - by Gerald Combs
    We would like to save the output of a program run on a Linux console which spans many lines. Unfortunately it wasn't logged or run under screen, or any other way that lets us easily capture the output. The best method we've been able to come up with so far is:
    1. Log into the machine via a separate SSH session.
    2. In the console session, page to the top of the buffer.
    3. Repeat until the bottom of the buffer: in the SSH session, run "cat /dev/vcs >> screendump.txt", then page down one screen in the console session.
    4. Dump the final screen in the SSH session.
    Is there a better way? It seems like if the VC memory were contiguous and you knew where it was, you could use dd to pull the console text directly out of kernel memory and into a file.

    Read the article

  • Installing vim7 on Solaris Sparc 2.6 as non-root

    - by Tobbe
    I'm trying to install vim to $HOME/bin by compiling the sources. ./configure --prefix=$home/bin seems to work, but when running make I get: > make Starting make in the src directory. If there are problems, cd to the src directory and run make there cd src && make first gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/openwin/include -I/usr/sfw/include -I/usr/sfw/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -I/usr/openwin/include -o objects/buffer.o buffer.c In file included from buffer.c:28: vim.h:41: error: syntax error before ':' token In file included from os_unix.h:29, from vim.h:245, from buffer.c:28: /usr/include/sys/stat.h:251: error: syntax error before "blksize_t" /usr/include/sys/stat.h:255: error: syntax error before '}' token /usr/include/sys/stat.h:309: error: syntax error before "blksize_t" /usr/include/sys/stat.h:310: error: conflicting types for 'st_blocks' /usr/include/sys/stat.h:252: error: previous declaration of 'st_blocks' was here /usr/include/sys/stat.h:313: error: syntax error before '}' token In file included from /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:132, from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28: /usr/include/sys/siginfo.h:259: error: syntax error before "ctid_t" /usr/include/sys/siginfo.h:292: error: syntax error before '}' token /usr/include/sys/siginfo.h:294: error: syntax error before '}' token /usr/include/sys/siginfo.h:390: error: syntax error before "ctid_t" /usr/include/sys/siginfo.h:398: error: conflicting types for '__fault' /usr/include/sys/siginfo.h:267: error: previous declaration of '__fault' was here /usr/include/sys/siginfo.h:404: error: conflicting types for '__file' /usr/include/sys/siginfo.h:273: error: previous declaration of '__file' was here /usr/include/sys/siginfo.h:420: error: conflicting types for '__prof' /usr/include/sys/siginfo.h:287: error: previous declaration of '__prof' was here /usr/include/sys/siginfo.h:424: error: conflicting types for '__rctl' /usr/include/sys/siginfo.h:291: error: previous declaration of '__rctl' was here /usr/include/sys/siginfo.h:426: error: syntax error before '}' token /usr/include/sys/siginfo.h:428: error: syntax error before '}' token /usr/include/sys/siginfo.h:432: error: syntax error before "k_siginfo_t" /usr/include/sys/siginfo.h:437: error: syntax error before '}' token In file included from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28: /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:173: error: syntax error before "siginfo_t" In file included from os_unix.h:163, from vim.h:245, from buffer.c:28: /usr/include/signal.h:111: error: syntax error before "siginfo_t" /usr/include/signal.h:113: error: syntax error before "siginfo_t" buffer.c: In function `buflist_new': buffer.c:1502: error: storage size of 'st' isn't known buffer.c: In function `buflist_findname': buffer.c:1989: error: storage size of 'st' isn't known buffer.c: In function `setfname': buffer.c:2578: error: storage size of 'st' isn't known buffer.c: In function `otherfile_buf': buffer.c:2836: error: storage size of 'st' isn't known buffer.c: In function `buf_setino': buffer.c:2874: error: storage size of 'st' isn't known buffer.c: In function `buf_same_ino': buffer.c:2894: error: dereferencing pointer to incomplete type buffer.c:2895: error: dereferencing pointer to 
incomplete type *** Error code 1 make: Fatal error: Command failed for target `objects/buffer.o' Current working directory /home/xluntor/vim72/src *** Error code 1 make: Fatal error: Command failed for target `first' How do I fix the make errors? Or is there another way to install vim as non-root? Thanks in advance

    Read the article

  • Notepad++ setting to clear the undo buffer on save

    - by J Jones
    Hello, I've noticed that Notepad++ is clearing the undo buffer when I save a file. This just started happening when I updated the editor to version 5.6.8. (previous version was pretty old... 5.3.? perhaps...) I've seen one reference out there (look at last bullet of answer) that this might be a setting I can change. But for the life of me, I can't find it. Anyone familiar with this?

    Read the article

  • TeamViewer: How to disable color dithering for low-bit-depth screen settings

    - by gogowitsch
    I am using Terminal Services and TeamViewer a lot to access other computers, partly over slow networks. The problem described below is not affected by which of the two remote access services I am using. When accessing Windows 7 Professional machines, a great deal of text is hard to read as the background is dithered. Even for exactly the same colors, Windows 2003 does not seem to dither at all, but to choose the closest available color. I strongly prefer the latter, as I don't care for the exact colors, I just want to be able to read easily. I am not sure whether this is operating system-related. The programs on the remote systems do not allow me to change the color choices for the various backgrounds to anything sane. Is there a way to disable this color dithering using some target operating system setting that will do the trick for both Terminal Services and TeamViewer?

    Read the article

  • MySQL maintenance - how to clear the buffer?

    - by Dougal
    We have a server running our web app (PHP / MySQL) which is SLOW. My predecessor says that: "We use to do database maintenance, which use to clear the buffer, cached and unwanted variables." And I wonder what on earth he means with that statement? Does he mean a simple optimize of the tables? Or the query cache? I understand MySQL but don't really know what he is describing. I would appreciate any pointers. Thanks.

    Read the article

  • ext4 loopback device, Buffer I/O Error on reboot

    - by cvb
    I am trying to mount a loopback device on my ext4-formatted SSD drive. I get these errors when I reboot on Linux kernel 2.6.38.8: Buffer I/O error on device loop1, logical block 0. Here is what I do:
    dd if=/dev/zero of=/mnt/s/lodev bs=4096 count=250000
    mkfs.ext4 /mnt/s/lodev
    mount -n -o loop,rw /mnt/s/lodev /mnt/test
    The loopback mount is successful, but on reboot I get the errors mentioned above. Even mounting with 'sync' and 'data=writeback' does not help. I tried to losetup a device, but I see the same behavior. I also reformatted the base device, recreated the loopback device and mounted it as above, and I still see these errors. I do not see them when I format as vfat. I'd appreciate any suggestions on this problem.

    Read the article

  • OpenGL-ES Texture Mapping. Texture is reversed?

    - by Feet
    I am trying to get my head around Texture mapping, I thought I had it the other day after asking this. However, I am having some trouble with my texture coordinates being flipped from what I am expecting. I am loading my texture like so int[] textures = new int[1]; gl.glGenTextures(1, textures, 0); _textureID = textures[0]; gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureID); Bitmap bmp = BitmapFactory.decodeResource( _context.getResources(), R.drawable.die_1); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0); gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR); gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); bmp.recycle(); My cube is this float vertices[] = { // Front face -width, -height, depth, // 0 width, -height, depth, // 1 width, height, depth, // 2 -width, height, depth, // 3 // Back Face width, -height, -depth, // 4 -width, -height, -depth, // 5 -width, height, -depth, // 6 width, height, -depth, // 7 // Left face -width, -height, -depth, // 8 -width, -height, depth, // 9 -width, height, depth, // 10 -width, height, -depth, // 11 // Right face width, -height, depth, // 12 width, -height, -depth, // 13 width, height, -depth, // 14 width, height, depth, // 15 // Top face -width, height, depth, // 16 width, height, depth, // 17 width, height, -depth, // 18 -width, height, -depth, // 19 // Bottom face -width, -height, -depth, // 20 width, -height, -depth, // 21 width, -height, depth, // 22 -width, -height, depth, // 23 }; short indices[] = { // Front 0,1,2, 0,2,3, // Back 4,5,6, 4,6,7, // Left 8,9,10, 8,10,11, // Right 12,13,14, 12,14,15, // Top 16,17,18, 16,18,19, // Bottom 20,21,22, 20,22,23, }; float texCoords[] = { // Front face textured only, for simplicity 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f}; And it is drawn like so // Counter-clockwise winding. gl.glFrontFace(GL10.GL_CCW); // Enable face culling. gl.glEnable(GL10.GL_CULL_FACE); // What faces to remove with the face culling. gl.glCullFace(GL10.GL_BACK); // Enabled the vertices buffer for writing and to be used during // rendering. gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); // Specifies the location and data format of an array of vertex // coordinates to use when rendering. gl.glVertexPointer(3, GL10.GL_FLOAT, 0, verticesBuffer); if (normalsBuffer != null) { // Enabled the normal buffer for writing and to be used during rendering. gl.glEnableClientState(GL10.GL_NORMAL_ARRAY); // Specifies the location and data format of an array of normals to use when rendering. gl.glNormalPointer(GL10.GL_FLOAT, 0, normalsBuffer); } // Set flat color gl.glColor4f(rgba[0], rgba[1], rgba[2], rgba[3]); // Smooth color if ( colorBuffer != null ) { // Enable the color array buffer to be used during rendering. gl.glEnableClientState(GL10.GL_COLOR_ARRAY); // Point out the where the color buffer is. gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer); } // Use textures? if ( textureBuffer != null) { gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glEnable(GL10.GL_TEXTURE_2D); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); } // Translation and rotation before drawing gl.glTranslatef(x, y, z); gl.glRotatef(rx, 1, 0, 0); gl.glRotatef(ry, 0, 1, 0); gl.glRotatef(rz, 0, 0, 1); gl.glDrawElements(GL10.GL_TRIANGLES, numOfIndices, GL10.GL_UNSIGNED_SHORT, indicesBuffer); // Disable the vertices buffer. 
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); gl.glDisableClientState(GL10.GL_NORMAL_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); // Disable face culling. gl.glDisable(GL10.GL_CULL_FACE); However, my front face doesn't come out the way I expect. I should also add that I haven't got any normals set; are textures affected by normals? float texCoords[] = { // Front 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f } It seems as if the texture is being flipped, so the coordinates don't match up properly?

    Read the article

  • Performing a simple buffer overflow on Mac OS X 10.6

    - by REALFREE
    I'm trying to learn about stack-based overflows and wrote some simple code to exploit the stack. But somehow it doesn't work at all; it only shows "Abort trap" on my machine (Mac OS Leopard). I guess Mac OS treats the overflow differently and won't allow me to overwrite memory through C code. For example: strcpy(buffer, input) // let's say char buffer[6] but input is 7 bytes. On a Linux machine this code successfully overwrites the next stack frame, but it is prevented on Mac OS (Abort trap). Does anyone know how to perform a simple stack-based overflow on a Mac machine?
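
    For reference, here is a minimal sketch of the kind of test program involved (my own illustration, not the poster's code). On OS X the "Abort trap" usually comes from the compiler's stack protector and/or the fortified string functions, so for a classroom-style overflow those protections typically have to be switched off at compile time, e.g. with flags along the lines of -fno-stack-protector -D_FORTIFY_SOURCE=0 (illustrative, not a guarantee):

      #include <cstdio>
      #include <cstring>

      // Deliberately unsafe copy into a too-small stack buffer.
      static void vulnerable(const char *input) {
          char buffer[8];                 // 8 bytes on the stack
          std::strcpy(buffer, input);     // no bounds check: longer input overflows
          std::printf("copied: %s\n", buffer);
      }

      int main(int argc, char **argv) {
          if (argc > 1)
              vulnerable(argv[1]);
          return 0;
      }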

    Read the article

  • Redirecting exec output to a buffer or file

    - by devin
    I'm writing a C program where I fork(), exec(), and wait(). I'd like to take the output of the program I exec'ed and write it to a file or buffer. For example, if I exec ls, I want to write file1 file2 etc. to a buffer or file. I don't think there is a way to read its stdout directly, so does that mean I have to use a pipe? Is there a general procedure here that I haven't been able to find?
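
    The usual approach is indeed a pipe: create it before fork(), point the child's stdout at the write end with dup2(), exec, and read from the other end in the parent (or redirect straight to a file descriptor opened with open()). A minimal POSIX sketch, with error handling trimmed:

      #include <unistd.h>
      #include <sys/wait.h>
      #include <cstdio>

      int main() {
          int fds[2];
          if (pipe(fds) == -1) { perror("pipe"); return 1; }

          pid_t pid = fork();
          if (pid == 0) {                      // child
              dup2(fds[1], STDOUT_FILENO);     // child's stdout now writes into the pipe
              close(fds[0]);
              close(fds[1]);
              execlp("ls", "ls", (char *)NULL);
              _exit(127);                      // only reached if exec fails
          }

          close(fds[1]);                       // parent: read the child's output
          char buf[4096];
          ssize_t n;
          while ((n = read(fds[0], buf, sizeof buf)) > 0)
              fwrite(buf, 1, (size_t)n, stdout);   // or append to your own buffer/file
          close(fds[0]);
          waitpid(pid, NULL, 0);
          return 0;
      }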

    Read the article

  • Understanding z-buffer formats in DirectX

    - by numerical25
    A z-buffer is just an array that determines which object should be drawn in front of another. Each element in the array represents a pixel and holds a value from 0.0 to 1.0. My question is: if that is all a z-buffer does, then why are some buffers 24-bit, some 32-bit, and some 16-bit?
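
    One reason the bit count matters: more bits mean finer depth resolution, and with a perspective projection the representable depth values bunch up near the near plane, so a 16-bit buffer can show z-fighting at distances where a 24-bit one is still fine. A rough illustration (assuming a standard projection that maps depth to [0,1]; the near/far values are just an example):

      #include <cstdio>
      #include <cmath>

      // Eye-space distance corresponding to a normalized depth value d in [0,1],
      // for a perspective projection with near plane n and far plane f.
      static double eyeZ(double d, double n, double f) {
          return f * n / (f - d * (f - n));
      }

      int main() {
          const double n = 0.1, f = 1000.0;
          const int bitDepths[] = { 16, 24 };
          for (int bits : bitDepths) {
              double step = 1.0 / (std::pow(2.0, bits) - 1.0);  // one depth-buffer increment
              double zBeforeFar = eyeZ(1.0 - step, n, f);       // second-to-last representable depth
              std::printf("%2d-bit buffer: the last depth step near the far plane "
                          "covers about %.3f world units\n", bits, f - zBeforeFar);
          }
          return 0;
      }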

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly...a lot of junk gets appended to the output. How do I fix this? // wmfParser.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include "wmfParser.h" #include <cstring> #ifdef _DEBUG #define new DEBUG_NEW #endif // The one and only application object CWinApp theApp; using namespace std; int _tmain(int argc, TCHAR* argv[], TCHAR* envp[]) { int nRetCode = 0; // initialize MFC and print and error on failure if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0)) { // TODO: change error code to suit your needs _tprintf(_T("Fatal Error: MFC initialization failed\n")); nRetCode = 1; } else { // TODO: code your application's behavior here. CFile file; CFileException exp; if( !file.Open( _T("c:\\sample.txt"), CFile::modeRead, &exp ) ){ exp.ReportError(); cout<<'\n'; cout<<"Aborting..."; system("pause"); return 0; } ULONGLONG dwLength = file.GetLength(); cout<<"Length of file to read = " << dwLength << '\n'; /* BYTE* buffer; buffer=(BYTE*)calloc(dwLength, sizeof(BYTE)); file.Read(buffer, 25); char* str = (char*)buffer; cout<<"length of string : " << strlen(str) << '\n'; cout<<"string from file: " << str << '\n'; */ char str[100]; file.Read(str, sizeof(str)); cout << "Data : " << str <<'\n'; file.Close(); cout<<"File was closed\n"; //AfxMessageBox(_T("This is a test message box")); system("pause"); } return nRetCode; }
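
    The junk in the output comes from treating the buffer as a C string without a NUL terminator (and from reading only the first sizeof(str) bytes). A hedged sketch of just the reading part, sized from GetLength() and terminated before printing (MFC as in the question, error handling kept minimal):

      #include "stdafx.h"      // precompiled header from the question's MFC project
      #include <iostream>

      void DumpWholeFile(const TCHAR* path)
      {
          CFile file;
          CFileException exp;
          if (!file.Open(path, CFile::modeRead, &exp)) {
              exp.ReportError();
              return;
          }
          ULONGLONG dwLength = file.GetLength();
          char* buffer = new char[(size_t)dwLength + 1];
          UINT nRead = file.Read(buffer, (UINT)dwLength);  // CFile::Read returns the bytes actually read
          buffer[nRead] = '\0';                            // terminate before streaming it as a C string
          std::cout << "Data : " << buffer << '\n';
          delete[] buffer;
          file.Close();
      }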

    Read the article

  • Win32 C/C++ Load Image from memory buffer

    - by Bruno
    I want to load an image (.bmp) file in a Win32 application, but I do not want to use the standard LoadBitmap/LoadImage from the Windows API: I want it to load from a buffer that is already in memory. I can easily load a bitmap directly from a file and print it on the screen, but this issue is making me stuck :( What I'm looking for is a function that works like this: HBITMAP LoadBitmapFromBuffer(char* buffer, int width, int height); Thanks.
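
    If the buffer holds the bytes of a complete .bmp file (file header included), one way is to parse the headers yourself and hand the pixel data to CreateDIBitmap. A hedged sketch: the function name is the one the question asks for, but I've swapped the width/height parameters for an HDC, since the dimensions already live in the bitmap header.

      #include <windows.h>

      HBITMAP LoadBitmapFromBuffer(const char* buffer, HDC hdc)
      {
          const BITMAPFILEHEADER* bfh = reinterpret_cast<const BITMAPFILEHEADER*>(buffer);
          if (bfh->bfType != 0x4D42)               // 'BM' magic of a .bmp file
              return NULL;
          const BITMAPINFO* bmi =
              reinterpret_cast<const BITMAPINFO*>(buffer + sizeof(BITMAPFILEHEADER));
          const void* pixels = buffer + bfh->bfOffBits;   // pixel data offset stored in the header
          return CreateDIBitmap(hdc, &bmi->bmiHeader, CBM_INIT, pixels, bmi, DIB_RGB_COLORS);
      }

    Usage would be along the lines of HBITMAP hbm = LoadBitmapFromBuffer(fileBytes, GetDC(hwnd)); the resulting HBITMAP can then be selected into a memory DC and blitted as usual.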

    Read the article

  • OpenGL depth buffer on Android

    - by kayahr
    I'm currently learning OpenGL ES programming on Android (2.1). I started with the obligatory rotating cube. It's rotating fine, but I can't get the depth buffer to work. The polygons are always displayed in the order the GL commands render them. I do this during initialization of GL: gl.glClearColor(.5f, .5f, .5f, 1); gl.glShadeModel(GL10.GL_SMOOTH); gl.glClearDepthf(1f); gl.glEnable(GL10.GL_DEPTH_TEST); gl.glDepthFunc(GL10.GL_LEQUAL); gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); On surface-change I do this: gl.glViewport(0, 0, width, height); gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100f); When I enable backface culling, everything looks correct. But backface culling is only a speed optimization, so it should also work with just the depth buffer, shouldn't it? What is missing here?

    Read the article

  • What is wrong with this recursive Windows CMD script? It won't do Ackermann properly

    - by boost
    I've got this code that I'm trying to get to calculate the Ackermann function so that I can post it up on RosettaCode. It almost works. I thought maybe there'd be a few batch file wizards on StackOverflow. ::echo off set depth=0 :ack if %1==0 goto m0 if %2==0 goto n0 :else set /a n=%2-1 set /a depth+=1 call :ack %1 %n% set t=%errorlevel% set /a depth-=1 set /a m=%1-1 set /a depth+=1 call :ack %m% %t% set t=%errorlevel% set /a depth-=1 if %depth%==0 ( exit %t% ) else ( exit /b %t% ) :m0 set/a n=%2+1 if %depth%==0 ( exit %n% ) else ( exit /b %n% ) :n0 set /a m=%1-1 set /a depth+=1 call :ack %m% %2 set t=%errorlevel% set /a depth-=1 if %depth%==0 ( exit %t% ) else ( exit /b %t% ) I use this script to test it @echo off cmd/c ackermann.cmd %1 %2 echo Ackermann of %1 %2 is %errorlevel% A sample output, for Test 1 1, gives: >test 1 1 >set depth=0 >if 1 == 0 goto m0 >if 1 == 0 goto n0 >set /a n=1-1 >set /a depth+=1 >call :ack 1 0 >if 1 == 0 goto m0 >if 0 == 0 goto n0 >set /a m=1-1 >set /a depth+=1 >call :ack 0 0 >if 0 == 0 goto m0 >set/a n=0+1 >if 2 == 0 (exit 1 ) else (exit /b 1 ) >set t=1 >set /a depth-=1 >if 1 == 0 (exit 1 ) else (exit /b 1 ) >set t=1 >set /a depth-=1 >set /a m=1-1 >set /a depth+=1 >call :ack 0 1 >if 0 == 0 goto m0 >set/a n=1+1 >if 1 == 0 (exit 2 ) else (exit /b 2 ) >set t=2 >set /a depth-=1 >if 0 == 0 (exit 2 ) else (exit /b 2 ) Ackermann of 1 1 is 2
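
    As a side note, a small reference implementation in another language is handy for sanity-checking the batch script's numbers: the correct value of Ackermann(1, 1) is 3, not the 2 the trace above ends with, and Ackermann(2, 3) = 9, Ackermann(3, 3) = 61. A quick sketch:

      #include <cstdio>

      // Plain recursive Ackermann function, only intended as a reference
      // for small m and n (it blows up very quickly beyond m = 3).
      unsigned long ack(unsigned long m, unsigned long n)
      {
          if (m == 0) return n + 1;
          if (n == 0) return ack(m - 1, 1);
          return ack(m - 1, ack(m, n - 1));
      }

      int main()
      {
          for (unsigned long m = 0; m <= 3; ++m)
              for (unsigned long n = 0; n <= 3; ++n)
                  std::printf("ack(%lu,%lu) = %lu\n", m, n, ack(m, n));
          return 0;
      }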

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    Last week, after doing more research on the subject, I started wondering about something I have neglected to understand all these years: write-caching policy, which I have always left at its default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I erred somewhere:
    Write-through caching itself is not part of the write-caching policy per se. It means data is written to both the cache and the storage device, so if Windows needs that data again later it is retrieved from the cache rather than from the storage device. This only improves read performance, since there is no need to wait for the storage device to read the required data again. Because the data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of a power failure or system crash; only the data in the cache gets lost. This option seems to be enabled by default and is recommended for removable devices, with no need for the user to use "Safely Remove Hardware".
    Write-back caching is similar to the above, but the data is not written to the storage device immediately; it is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it carries risk if a power failure or system crash occurs: not only is data that was still to be written to the storage device lost, but it can also cause file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.
    Write-cache buffer flushing, I reckon, is similar to write-back caching but enables the immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know whether it also applies to an occasional system crash. This option seems to be complementary to write-back caching, reducing or potentially eliminating the risk of data loss or file system corruption.
    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and worth taking the risk of utilizing it by enabling write-back caching? I read somewhere that a storage device's cache is generally faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written to NAND flash later is kept in the cache for a while, and if data gets modified a lot before finally being written, holding it and releasing it periodically reduces the number of writes to the SSD, thereby reducing wear. Now, regarding write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know whether SSDs have their own onboard cache and whether it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do deeper research: I suspect the write-caching policy is the culprit of the SSD's freezing issue, assuming the data being released is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move to drivers, switch to AHCI later on, and finally disable DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.

    Read the article

  • [SOLVED] How to create an FBO with a stencil buffer in OpenGL ES 2.0?

    - by Alphones
    I need a stencil buffer on the 3GS to render a planar shadow; polygon offset doesn't work perfectly and still has a z-fighting problem. So I use the stencil buffer to make the shadow correct. It works on the win32 gles2 emulator, but not on the iPhone. After I added a post effect to the whole scene, the stencil buffer won't work even on the win32 gles2 emulator. I tried to attach a stencil buffer to the FBO, but the screen turns black. Here's my code: glGenRenderbuffers(1, &dbo); // depth buffer glBindRenderbuffer(GL_RENDERBUFFER, dbo); glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24_OES, widthGL, heightGL); glGenRenderbuffers(1, &sbo); // stencil buffer glBindRenderbuffer(GL_RENDERBUFFER, sbo); glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, widthGL, heightGL); glGenFramebuffers(1, &fbo); glBindFramebuffer(GL_FRAMEBUFFER, fbo); glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, dbo); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sbo); // this makes the whole screen black. The eglContext is created with STENCIL_SIZE=8, and it works without a RTT. I tried to change the RenderbufferStorage for both the depth buffer and the stencil buffer, but none of them works. Is there anything I have missed? Is the stencil buffer packed with the depth buffer? (I cannot find things like GL_DEPTH24_STENCIL8 ...)
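
    On the iPhone's GPU a separate GL_STENCIL_INDEX8 renderbuffer attachment is generally not supported; the usual route is the GL_OES_packed_depth_stencil extension, i.e. a single renderbuffer holding both depth and stencil, attached to both attachment points. A hedged sketch reusing fbo, tex, widthGL and heightGL from the code above (check for the extension in the GL_EXTENSIONS string before relying on it):

      GLuint dsbo;                               // one packed depth+stencil renderbuffer
      glGenRenderbuffers(1, &dsbo);
      glBindRenderbuffer(GL_RENDERBUFFER, dsbo);
      glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, widthGL, heightGL);

      glBindFramebuffer(GL_FRAMEBUFFER, fbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, dsbo);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, dsbo);

      if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
          // incomplete: fall back to a separate stencil buffer or render without stencil
      }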

    Read the article

  • PHP output buffer settings ignored by server

    - by Ecom Evolution
    I have been trying to flush the output of certain scripts to the browser on demand, but it does not work on our production server. For instance, I tried running the "Phoca Changing Collation tool" (find it on Google) and I don't see any output until the script finishes executing. I've tried immediately flushing the buffer in other scripts that work fine on any server but this one, using the following code: echo "something"; ob_flush(); flush(); Setting "ob_implicit_flush(1);" doesn't help either. The server is Apache 2.2.21 with PHP 5.2.17 running on Linux. You can see our php.ini file here if that will help: http://www.smallfiles.org/download/1123/php.ini.html This isn't the only problem we are having with the server ignoring in-script directives. The server also ignores timeout code such as ini_set('max_execution_time', 900*60); AND set_time_limit(86400); the script always times out at the php.ini default. It doesn't seem to matter whether the script is executed in IE or Firefox. I tried "ini_set('zlib.output_compression_level', 'Off');" and checked that it is "Off" in the php.ini file. The code "apache_setenv('no-gzip', 1);" causes a fatal error, so I tried uploading a .htaccess file with the "mod_gzip_on No" directive. Neither helps. I tried running Apache as fcgi and suphp, but got the same results. The server is NOT in safe mode. Pulling my hair out!

    Read the article

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in Linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu server). We have been facing a high recv-q problem for quite some time. Every few hours the application hangs and does not read data from the socket. In a thread dump, we found the stack trace below. "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000] java.lang.Thread.State: RUNNABLE at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:150) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read1(BufferedInputStream.java:275) at java.io.BufferedInputStream.read(BufferedInputStream.java:334) - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream) at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413) at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197) at org.smpp.Receiver.receiveAsync(Receiver.java:351) at org.smpp.ReceiverBase.process(ReceiverBase.java:96) at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199) at java.lang.Thread.run(Thread.java:722) We are not able to trace the exact reason behind this; kindly help. We are using a 16-core machine and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to find the recv-q. Recently we have been facing issues with the recv-q size getting stuck, wherein the receive buffer gets stuck at some point in time. The recv-q size does not decrease, and as a result we are losing a lot of hits from the other side; our application is not accepting any data.

    Read the article

  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I have a problem. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst. Then the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example: void some_function() { char cBuff[1024]; // filling cBuff with some data WSASend(...); // sending cBuff, non-blocking mode // filling cBuff with other data WSASend(...); // again, sending cBuff // ..... and so forth! } If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big sack of such buffers, how should I handle them, and how can I avoid a performance penalty, etc.? And if I am to use such buffers, that means I should copy the data to be sent from the source buffer to the temporary one; thus, I'd set SO_SNDBUF on each socket to zero so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
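
    That understanding is correct: each overlapped WSASend needs its own buffer (and its own OVERLAPPED structure) that stays alive until the completion is dequeued. The common pattern is a small per-operation context, heap-allocated per send or drawn from a pool; a hedged sketch with illustrative names, not a drop-in for the code above:

      #include <winsock2.h>
      #include <windows.h>
      #include <cstring>
      // link against ws2_32.lib

      struct SendContext {
          OVERLAPPED ov;      // must stay valid until the completion is dequeued
          WSABUF     wsaBuf;
          char       data[1024];
      };

      bool PostSend(SOCKET s, const char* payload, size_t len)
      {
          SendContext* ctx = new SendContext();
          if (len > sizeof ctx->data) { delete ctx; return false; }
          std::memset(&ctx->ov, 0, sizeof ctx->ov);
          std::memcpy(ctx->data, payload, len);   // copy: the caller's buffer may live on the stack
          ctx->wsaBuf.buf = ctx->data;
          ctx->wsaBuf.len = (ULONG)len;

          DWORD sent = 0;
          int rc = WSASend(s, &ctx->wsaBuf, 1, &sent, 0, &ctx->ov, NULL);
          if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
              delete ctx;                         // send failed outright
              return false;
          }
          return true;    // delete ctx when GetQueuedCompletionStatus returns &ctx->ov
      }

    When the completion arrives, CONTAINING_RECORD(pOverlapped, SendContext, ov) gets the context back so it can be freed or returned to a pool. Setting SO_SNDBUF to zero is a separate tuning decision and isn't required for this pattern to work.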

    Read the article

  • EVP_PKEY from char buffer in x509 (PKCS7)

    - by sid
    Hi all, I have a DER certificate from which I am retrieving the public key into an unsigned char buffer as follows; is this the right way of getting it? pStoredPublicKey = X509_get_pubkey(x509); if(pStoredPublicKey == NULL) { printf(": publicKey is NULL\n"); } if(pStoredPublicKey->type == EVP_PKEY_RSA) { RSA *x = pStoredPublicKey->pkey.rsa; bn = x->n; } else if(pStoredPublicKey->type == EVP_PKEY_DSA) { } else if(pStoredPublicKey->type == EVP_PKEY_EC) { } else { printf(" : Unkown publicKey\n"); } //extracts the bytes from public key & convert into unsigned char buffer buf_len = (size_t) BN_num_bytes (bn); key = (unsigned char *)malloc (buf_len); n = BN_bn2bin (bn, (unsigned char *) key); for (i = 0; i < n; i++) { printf("%02x\n", (unsigned char) key[i]); } keyLen = EVP_PKEY_size(pStoredPublicKey); EVP_PKEY_free(pStoredPublicKey); And with this unsigned char buffer, how do I get back the EVP_PKEY for RSA? Or can I use the following? EVP_PKEY *d2i_PublicKey(int type, EVP_PKEY **a, unsigned char **pp, long length); int i2d_PublicKey(EVP_PKEY *a, unsigned char **pp);
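
    Copying only the modulus out of the RSA key isn't enough to rebuild it later, because the public exponent is lost. If the goal is simply to serialize the public key into a byte buffer and get an EVP_PKEY back, the i2d_PublicKey/d2i_PublicKey pair quoted at the end is the usual way. A hedged sketch against the same OpenSSL generation the question's direct struct access implies (0.9.x/1.0.x):

      #include <openssl/evp.h>
      #include <openssl/x509.h>
      #include <openssl/crypto.h>

      // Encode: the caller frees the returned buffer with OPENSSL_free().
      unsigned char* encode_public_key(EVP_PKEY* pkey, int* out_len)
      {
          int len = i2d_PublicKey(pkey, NULL);       // first call just measures the length
          if (len <= 0) return NULL;
          unsigned char* buf = (unsigned char*)OPENSSL_malloc(len);
          unsigned char* p = buf;                    // i2d advances this pointer as it writes
          i2d_PublicKey(pkey, &p);
          *out_len = len;
          return buf;
      }

      // Decode: the type must match what was encoded (EVP_PKEY_RSA here, as in the question).
      EVP_PKEY* decode_public_key(unsigned char* buf, long len)
      {
          unsigned char* p = buf;                    // d2i advances this pointer as it reads
          return d2i_PublicKey(EVP_PKEY_RSA, NULL, &p, len);
      }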

    Read the article

  • Symbian: clear buffer of RSocket object

    - by Heinz
    Hi, I have to come back once again to sockets in Symbian. The code to set up a connection to a remote server looks as follows: TInetAddr serverAddr; TUint iPort=111; TRequestStatus iStatus; TSockXfrLength len; TInt res = iSocketSrv.Connect(); res = iSocket.Open(iSocketSrv,KAfInet,KSockStream, KProtocolInetTcp); res = iSocket.SetOpt(KSoTcpSendWinSize, KSolInetTcp, 0x10000); serverAddr.SetPort(iPort); serverAddr.SetAddress(INET_ADDR(11,11,179,154)); iSocket.Connect(serverAddr,iStatus); User::WaitForRequest(iStatus); Over the iSocket I receive packets of variable size. On very few occurrences it happens that such a packet is corrupted. What I would like to do then is clear all the data that is currently in the iSocket buffer and ready to be read. I have not seen any method of RSocket that allows me to clear the content of the buffer. Does anyone know how to do that? If possible, I would like to avoid using RecvOneOrMore() or a similar recv function to clear the buffer. Thanks

    Read the article

  • Minimax depth-first search game tree

    - by Arvind
    Hi, I want to build a game tree for the game of nine men's morris. I want to apply the minimax algorithm to the tree for node evaluation. Minimax uses DFS to evaluate nodes. So should I build the tree first up to a given depth and then apply minimax, or can building the tree and evaluating it happen together in a recursive minimax DFS? Thank you, Arvind
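
    They can happen together: minimax is itself a depth-first traversal, so there is no need to build and store the tree first. Each recursive call generates the successors of the current position, recurses into them, and discards them on the way back up. A hedged sketch in C++ with the nine men's morris specifics left as placeholder stubs:

      #include <algorithm>
      #include <limits>
      #include <vector>

      // Placeholder state, move generation and evaluation: replace these stubs
      // with the real nine men's morris board and rules.
      struct GameState {};
      static std::vector<int> legalMoves(const GameState&)       { return std::vector<int>(); }
      static GameState        applyMove(const GameState& s, int) { return s; }
      static int              evaluate(const GameState&)         { return 0; }

      // Depth-limited minimax: the "tree" is never stored anywhere; building it
      // and evaluating it are the same depth-first recursion.
      int minimax(const GameState& s, int depth, bool maximizing)
      {
          std::vector<int> moves = legalMoves(s);
          if (depth == 0 || moves.empty())
              return evaluate(s);                 // leaf: depth limit or terminal position

          int best = maximizing ? std::numeric_limits<int>::min()
                                : std::numeric_limits<int>::max();
          for (size_t i = 0; i < moves.size(); ++i) {
              int score = minimax(applyMove(s, moves[i]), depth - 1, !maximizing);
              best = maximizing ? std::max(best, score) : std::min(best, score);
          }
          return best;
      }

      int main()
      {
          GameState root;
          return minimax(root, 4, true) == 0 ? 0 : 1;   // depth 4, maximizing player to move
      }

    Alpha-beta pruning drops naturally into the same recursion later, since it is just this DFS with cutoffs.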

    Read the article
