Search Results

Search found 4759 results on 191 pages for 'depth buffer'.

Page 2 of 191

  • Open a buffer as a vertical split in VIM

    - by alfredodeza
    If you are editing a file in VIM and you need to open an existing buffer (e.g. from your buffer list, :buffers), how can you open it in a vertical split? I know that you can open it with a normal split like :sbuffer N, where N is the number of the buffer you want; however, that opens buffer N horizontally, not vertically. I'm also aware that you can change the window placement after opening and get a vertical split, like so: Ctrl-W H or Ctrl-W L, which moves the window all the way to the left or the right. It seems to me that if there is an sbuffer there should be a vsbuffer, but that doesn't exist (not that I am aware of). Also, please note that I am not looking for a plugin to solve this question: I know about a wealth of plugins that will allow you to do this. I am sure I might be missing something that is already there. EDIT: In the best spirit of collaboration, I have created a simple function with a mapping, in case someone else stumbles across this issue and does not want to install a plugin. Function:

        " Vertical Split Buffer Function
        function VerticalSplitBuffer(buffer)
            execute "vert belowright sb" a:buffer
        endfunction

    Mapping:

        " Vertical Split Buffer Mapping
        command -nargs=1 Vbuffer call VerticalSplitBuffer(<f-args>)

    This accomplishes the task of opening a buffer in a right split, so for buffer 1 you would call it like: :Vbuffer 1

  • Buffer Overflow (vs) Buffer OverRun (vs) Stack Overflow [closed]

    - by joe
    Possible Duplicate: What is the difference between a stack overflow and buffer overflow? What is the difference between a buffer overflow and a buffer overrun? What is the difference between a buffer overrun and a stack overflow? Please include code examples. I have looked at the terms on Wikipedia, but I am unable to relate them to programming in C, C++, or Java.
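
    A hedged sketch of the usual distinction (terminology varies by author, and "overflow" vs. "overrun" are often used interchangeably): a buffer overflow/overrun accesses memory past the end of a buffer, while a stack overflow exhausts the call stack itself.

        #include <cstring>

        // Buffer overflow / overrun: access past the end of a buffer.
        // Some authors reserve "overrun" for sequential access that runs
        // off the end, and "overflow" for any out-of-bounds write.
        void buffer_overflow() {
            char buf[8];
            // 26 bytes plus a NUL copied into an 8-byte buffer:
            // undefined behavior, the classic buffer overflow.
            std::strcpy(buf, "abcdefghijklmnopqrstuvwxyz");
        }

        // Stack overflow: the call stack itself is exhausted, e.g. by
        // unbounded recursion or oversized stack allocations.
        int stack_overflow(int n) {
            char frame[1024];                // each call eats stack space
            frame[0] = static_cast<char>(n);
            return frame[0] + stack_overflow(n + 1);  // no base case
        }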

  • Accessing memory buffer after fread()

    - by xiongtx
    I'm confused as to how fread() is used. Below is an example from cplusplus.com:

        /* fread example: read a complete file */
        #include <stdio.h>
        #include <stdlib.h>

        int main () {
          FILE * pFile;
          long lSize;
          char * buffer;
          size_t result;

          pFile = fopen ( "myfile.bin" , "rb" );
          if (pFile==NULL) {fputs ("File error",stderr); exit (1);}

          // obtain file size:
          fseek (pFile , 0 , SEEK_END);
          lSize = ftell (pFile);
          rewind (pFile);

          // allocate memory to contain the whole file:
          buffer = (char*) malloc (sizeof(char)*lSize);
          if (buffer == NULL) {fputs ("Memory error",stderr); exit (2);}

          // copy the file into the buffer:
          result = fread (buffer,1,lSize,pFile);
          if (result != lSize) {fputs ("Reading error",stderr); exit (3);}

          /* the whole file is now loaded in the memory buffer. */

          // terminate
          fclose (pFile);
          free (buffer);
          return 0;
        }

    Let's say that I don't use fclose() just yet. Can I now just treat buffer as an array and access elements like buffer[i]? Or do I have to do something else?
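
    For what it's worth, a sketch of the answer as I understand it: after a successful fread, the buffer is ordinary heap memory, so plain indexing works; its lifetime ends with free(), not with fclose().

        #include <cstdio>
        #include <cstdlib>

        int main() {
            std::FILE* f = std::fopen("myfile.bin", "rb");
            if (!f) return 1;
            std::fseek(f, 0, SEEK_END);
            long size = std::ftell(f);
            std::rewind(f);

            char* buffer = static_cast<char*>(std::malloc(size));
            if (!buffer) { std::fclose(f); return 2; }

            std::size_t got = std::fread(buffer, 1, size, f);
            std::fclose(f);  // closing the stream does NOT invalidate buffer

            // buffer is a plain array of bytes; index only what was read.
            for (std::size_t i = 0; i < got; ++i) {
                unsigned char byte = buffer[i];
                (void)byte;  // ... process byte ...
            }

            std::free(buffer);  // after this, buffer[i] is undefined
            return 0;
        }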

  • Depth is disabled - How to turn on?

    - by marc wellman
    In XNA 3.1, is there any way to disable depth in 3D worlds using DirectX models other than GraphicsDevice.RenderState.DepthBufferEnable = false;? The reason for my question: I have quite a huge program which offers a 3D world with a couple of 3D DirectX models inside. Depth was never an issue, since it always worked fine, but for a few days now, after some modifications, my models are all depth-translucent, i.e. depth buffering and/or culling seems to be disabled. Yet in my whole source code I never touch any of the options related to depth or culling, which means I never turn these settings on explicitly, nor turn them off anywhere. So I am searching for some other statement, maybe related to the GraphicsDevice, that implicitly turns depth off, but I can't find it. (Sorry that I don't post any source code, but I have too much of it and I simply don't know where to search.) UPDATE: These are a couple of simple objects seen with correct depth. These are the same objects in their current state.

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs, and its extent. Simply transforming all occluded vertices to the depths defined by the 'hull' is fairly effective, and has good performance, but it suffers from not preserving the features of the mesh and requires extensive culling to avoid false positives. I would like instead to generate, from the depth deviation map, a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need for complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set, however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but I do not know where to start. Can anyone suggest a suitable filter or algorithm for generating deformers? Or, to put it another way, for 'compressing' a depth map? (*Push, because it is fitting to a convex 'bulgy' humanoid, so transforms are likely to be 'spherical' from the POV of the surface.)
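
    One possible starting point, sketched under loose assumptions (the deviation map is a dense w*h grid of signed deviations, and a crude centroid-plus-extent sphere per patch is good enough): threshold the map, flood-fill each contiguous patch, and emit one spherical deformer per patch. None of this comes from the question; it is just one way to make the "match a sphere to each contiguous patch" idea concrete, not a known-good algorithm.

        #include <algorithm>
        #include <vector>

        struct SphereDeformer { float cx, cy, cz, radius, strength; };

        // dev: w*h deviation map (hull depth minus mesh depth, say).
        // Cells with deviation >= threshold are grouped into contiguous
        // patches by 4-way flood fill; each patch yields one deformer.
        std::vector<SphereDeformer> extractDeformers(
            const std::vector<float>& dev, int w, int h, float threshold)
        {
            std::vector<SphereDeformer> out;
            std::vector<bool> seen(dev.size(), false);
            for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int i = y * w + x;
                if (seen[i] || dev[i] < threshold) continue;
                std::vector<int> stack{i};
                seen[i] = true;
                double sx = 0, sy = 0;
                int count = 0, minX = x, maxX = x, minY = y, maxY = y;
                float peak = 0.0f;
                while (!stack.empty()) {
                    int c = stack.back(); stack.pop_back();
                    int px = c % w, py = c / w;
                    sx += px; sy += py; ++count;
                    peak = std::max(peak, dev[c]);
                    minX = std::min(minX, px); maxX = std::max(maxX, px);
                    minY = std::min(minY, py); maxY = std::max(maxY, py);
                    const int nb[4][2] = {{1,0},{-1,0},{0,1},{0,-1}};
                    for (const auto& d : nb) {
                        int nx = px + d[0], ny = py + d[1];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        int n = ny * w + nx;
                        if (!seen[n] && dev[n] >= threshold) {
                            seen[n] = true;
                            stack.push_back(n);
                        }
                    }
                }
                // Crude fit: centre on the patch centroid (map space),
                // radius from the patch extent plus the peak deviation,
                // centre sunk below the surface so the push is outward.
                SphereDeformer s;
                s.cx = float(sx / count);
                s.cy = float(sy / count);
                s.cz = -peak;
                s.radius = 0.5f * float(std::max(maxX - minX, maxY - minY)) + peak;
                s.strength = peak;
                out.push_back(s);
            }
            return out;
        }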

  • Java OpenGL color material darkens when depth test is disabled

    - by Jeff Storey
    I've been working with the depth buffer in OpenGL (JOGL) to ensure certain items are rendered in front of others by disabling the depth buffer (detailed in my previous question http://stackoverflow.com/questions/2516086/java-opengl-saving-depth-buffer). This works, except that when I set the color of an item drawn while the depth test is disabled, none of the material shininess shows up. The item is rendered as a darker version of the original color (it seems like no lighting effect is really applied to it). Is there a reason why this would be happening, and how might I prevent it? thanks, Jeff
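
    There isn't enough information here to be sure, and the exact cause in this scene is an assumption, but one common source of a flat, dark color under fixed-function lighting is setting colors with glColor while the material is left untouched. A sketch of the usual remedy, written as desktop C-style GL (JOGL exposes the same calls):

        #include <GL/gl.h>

        // With lighting enabled, glColor* does not feed the material
        // unless color-material tracking is on; without it the object
        // keeps whatever (possibly dark) material was last set.
        void setLitColor(float r, float g, float b) {
            glEnable(GL_COLOR_MATERIAL);
            glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
            glColor3f(r, g, b);

            // Specular/shininess are not tracked by the call above;
            // set them explicitly or highlights disappear.
            const GLfloat spec[4] = {1.0f, 1.0f, 1.0f, 1.0f};
            glMaterialfv(GL_FRONT, GL_SPECULAR, spec);
            glMaterialf(GL_FRONT, GL_SHININESS, 32.0f);
        }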

  • Depth First Search Basics

    - by cam
    I'm trying to improve my current algorithm for the 8 queens problem, and this is the first time I'm really dealing with algorithm design. I want to implement a depth-first search combined with a permutation of the different Y values, as described here: http://en.wikipedia.org/wiki/Eight_queens_puzzle#The_eight_queens_puzzle_as_an_exercise_in_algorithm_design I've implemented the permutation part to solve the problem, but I'm having a little trouble wrapping my mind around the depth-first search. It is described as a way of traversing a tree/graph, but does it generate the tree/graph itself? It seems this method would be more efficient only if the depth-first search generates the tree structure to be traversed, implementing some logic to generate only certain parts of the tree. So essentially I would have to create an algorithm that generates a pruned tree of lexicographic permutations. I know how to implement the pruning logic, but I'm just not sure how to tie it in with the permutation generator, since I've been using next_permutation. Are there any resources that could help me with the basics of depth-first searches, or with creating lexicographic permutations in tree form?
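
    A minimal sketch of the point that usually resolves this confusion: a DFS never materializes the tree. Each recursive call is a node, each candidate placement is a child edge, and pruning simply means not descending into children that already conflict, so only the "pruned tree" ever exists.

        #include <cstdlib>
        #include <iostream>
        #include <vector>

        // Depth-first search for n queens, one queen per row.
        void solve(std::vector<int>& cols, int row, int n, int& solutions) {
            if (row == n) { ++solutions; return; }   // leaf: full placement
            for (int c = 0; c < n; ++c) {            // children of this node
                bool ok = true;
                for (int r = 0; r < row; ++r) {
                    if (cols[r] == c || std::abs(cols[r] - c) == row - r) {
                        ok = false;                  // same column or diagonal
                        break;
                    }
                }
                if (!ok) continue;                   // prune: subtree skipped
                cols[row] = c;
                solve(cols, row + 1, n, solutions);  // descend depth-first
            }
        }

        int main() {
            int n = 8, solutions = 0;
            std::vector<int> cols(n);
            solve(cols, 0, n, solutions);
            std::cout << solutions << " solutions\n";  // 92 for n = 8
        }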

  • Loading Wavefront Data into VAO and Render It

    - by Jordan LaPrise
    I have successfully loaded a triangulated Wavefront (.obj) file into 6 vectors: the first 3 contain the positions, uv coords, and normals; the last 3 hold the indices stored for each of the faces. I have been looking into using VAOs and VBOs to render, and I'm not quite sure how to load and render the data. One of my biggest concerns is the fact that indexed rendering only allows you to have one array of indices, meaning I somehow have to make the first three vectors the same size. The only way I thought of doing this is to make 3 new vectors of equal size and load in the data for each face, but that would completely defeat the purpose of indexing. Any help would be appreciated. Thanks in advance, Jordan
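
    The usual compromise, sketched below with illustrative types: treat each position/uv/normal index triplet as a key, emit one interleaved vertex per unique triplet, and build the single index buffer over those. Only corners that genuinely differ get duplicated, so indexing still pays off.

        #include <cstdint>
        #include <map>
        #include <tuple>
        #include <vector>

        struct Vec2 { float x, y; };
        struct Vec3 { float x, y, z; };
        struct Vertex { Vec3 pos; Vec2 uv; Vec3 normal; };

        // corners: one (posIdx, uvIdx, normIdx) triplet per face corner,
        // exactly as parsed from the OBJ "f" lines.
        void buildSingleIndexBuffer(
            const std::vector<Vec3>& positions,
            const std::vector<Vec2>& uvs,
            const std::vector<Vec3>& normals,
            const std::vector<std::tuple<int, int, int>>& corners,
            std::vector<Vertex>& outVertices,
            std::vector<std::uint32_t>& outIndices)
        {
            // Each unique triplet becomes exactly one interleaved vertex.
            std::map<std::tuple<int, int, int>, std::uint32_t> cache;
            for (const auto& key : corners) {
                auto it = cache.find(key);
                if (it == cache.end()) {
                    Vertex v{positions[std::get<0>(key)],
                             uvs[std::get<1>(key)],
                             normals[std::get<2>(key)]};
                    auto idx = static_cast<std::uint32_t>(outVertices.size());
                    outVertices.push_back(v);
                    it = cache.emplace(key, idx).first;
                }
                outIndices.push_back(it->second);  // reuse the shared vertex
            }
        }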

  • Writing to a structured buffer with a compute shader (D3D11)

    - by Vertexwahn
    I have some problems writing to a structured buffer. First I create a structured buffer that is filled with float values from 0 to 99. Afterwards, a copy of the structured buffer is made into a CPU-accessible buffer, to print the contents of the structured buffer to the console. The output is as expected (the numbers 0 to 99 appear on the console). Afterwards I use a compute shader that should change the contents of the structured buffer:

        RWStructuredBuffer<float> Result : register( u0 );

        [numthreads(1, 1, 1)]
        void CS_main( uint3 GroupId : SV_GroupID )
        {
            Result[GroupId.x] = GroupId.x * 10;
        }

    But the compute shader does not change the contents of the structured buffer. The source code can be found here (main.cpp): https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/main.cpp?at=default FillCS.hlsl: https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/FillCS.hlsl?at=default
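
    Without stepping through the linked code it is hard to be sure, but two frequent causes are dispatching too few thread groups for a per-group indexing scheme like this one, and leaving the UAV unbound when Dispatch is issued. A sketch of the dispatch side; all three parameters are assumed to exist already and are not taken from the linked sources:

        #include <d3d11.h>

        void dispatchFill(ID3D11DeviceContext* context,
                          ID3D11ComputeShader* computeShader,
                          ID3D11UnorderedAccessView* resultUAV)
        {
            context->CSSetShader(computeShader, nullptr, 0);

            // Bind the UAV to slot 0 to match "register(u0)" in the shader.
            ID3D11UnorderedAccessView* uavs[] = { resultUAV };
            context->CSSetUnorderedAccessViews(0, 1, uavs, nullptr);

            // numthreads(1,1,1) with SV_GroupID.x indexing touches one
            // element per group: 100 elements need 100 groups, not one.
            context->Dispatch(100, 1, 1);

            // Unbind before CopyResource to a staging buffer for readback.
            ID3D11UnorderedAccessView* nullUAV[] = { nullptr };
            context->CSSetUnorderedAccessViews(0, 1, nullUAV, nullptr);
        }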

  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows:

        struct GBuffer
        {
            float4 Depth    : DEPTH0;  // depth render target
            float4 Normal   : COLOR0;  // normal render target
            float4 Diffuse  : COLOR1;  // diffuse render target
            float4 Specular : COLOR2;  // specular render target
        };

    This works fine for flat surfaces, but I'm trying to implement relief mapping, which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target; however, this has no impact on z culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?

  • Java HTTP Requests Buffer size

    - by behrk2
    Hello, I have an HTTP request dispatcher class that works most of the time, but I noticed that it "stalls" when receiving larger responses. After looking into the problem, I thought that perhaps I wasn't allocating enough bytes to the buffer. Before, I was doing:

        byte[] buffer = new byte[10000];

    After changing it to 20000, it seems to have stopped stalling:

        String contentType = connection.getHeaderField("Content-type");
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        InputStream responseData = connection.openInputStream();
        byte[] buffer = new byte[20000];
        int bytesRead = responseData.read(buffer);
        while (bytesRead > 0) {
            baos.write(buffer, 0, bytesRead);
            bytesRead = responseData.read(buffer);
        }
        baos.close();
        connection.close();

    Am I doing this right? Is there any way I can dynamically set the number of bytes for the buffer based on the size of the response? Thanks...

  • mysql to get depth of record, count parent and ancestor records

    - by Nate
    Hey All, Say I have a post table containing the fields post_id and parent_post_id. I want to return every record in the post table with a count of the "depth" of the post. By depth, I mean how many parent and ancestor records exist. Take this data for example...

        post_id    parent_post_id
        -------    --------------
        1          null
        2          1
        3          1
        4          2
        5          4

    The data represents this hierarchy...

        1
        |_ 2
        |  |_ 4
        |     |_ 5
        |_ 3

    The result of the query should be...

        post_id    depth
        -------    -----
        1          0
        2          1
        3          1
        4          2
        5          3

    Thanks in advance!

  • Intelligent "Subtraction" of one text logfile from another

    - by Vi
    Example: an application generates a large text log file A with many different messages. It generates a similarly large log file B when it does not function correctly. I want to see which messages in file B are essentially new, i.e. to filter out everything from A. A trivial prototype is:

        sort | uniq both files
        join the files
        sort | uniq -c
        grep -v "^2"

    This produces a symmetric difference, and is inconvenient. How can it be done better (including a non-symmetric difference, and preserving the order of messages in B)? The program should first analyse A and learn which messages are common, then analyse B, showing which messages need attention. Ideally it should automatically disregard things like timestamps, line numbers or other volatile things. Example. A:

        0:00:00.234 Received buffer 0x324234
        0:00:00.237 Processeed buffer 0x324234
        0:00:00.238 Send buffer 0x324255
        0:00:03.334 Received buffer 0x324255
        0:00:03.337 Processeed buffer 0x324255
        0:00:03.339 Send buffer 0x324255
        0:00:05.171 Received buffer 0x32421A
        0:00:05.173 Processeed buffer 0x32421A
        0:00:05.178 Send buffer 0x32421A

    B:

        0:00:00.134 Received buffer 0x324111
        0:00:00.137 Processeed buffer 0x324111
        0:00:00.138 Send buffer 0x324111
        0:00:03.334 Received buffer 0x324222
        0:00:03.337 Processeed buffer 0x324222
        0:00:03.338 Error processing buffer 0x324222
        0:00:03.339 Send buffer 0x3242222
        0:00:05.271 Received buffer 0x3242FA
        0:00:05.273 Processeed buffer 0x3242FA
        0:00:05.278 Send buffer 0x3242FA
        0:00:07.280 Send buffer 0x3242FA failed

    Result:

        0:00:03.338 Error processing buffer 0x324222
        0:00:07.280 Send buffer 0x3242FA failed

    One way of solving it could be something like this:

    1. Split each line into logical units:

        0:00:00.134 Received buffer 0x324111, 0:00:00.134, Received, buffer, 0x324111, 324111, Received buffer, \d:\d\d:\d\d\.\d\d\d, \d+:\d+:\d+.\d+, 0x[0-9A-F]{6}, ...

    It should find individual words, simple patterns in numbers, and common layouts (e.g. "some date, then text, then a number, then text, then end of line"), and also handle combinations of the above. As this is not an easy task, user assistance (adding regexes with explicit "disregard that", "make this the main factor", "don't split into parts", "consider as date/number", "take care of the order/quantity of such messages" rules) should be supported (but not required).

    2. Find recurring units and "categorize" lines, filtering out overly volatile things like timestamps, addresses or line numbers.

    3. Analyse the second file, finding things that have new logical units (one-time or recurring), or anything that would "amaze" a system which has got used to the first file.

    Example of doing a bit of this manually:

        $ cat A | head -n 1
        0:00:00.234 Received buffer 0x324234
        $ cat A | egrep -v "Received buffer" | head -n 1
        0:00:00.237 Processeed buffer 0x324234
        $ cat A | egrep -v "Received buffer|Processeed buffer" | head -n 1
        0:00:00.238 Send buffer 0x324255
        $ cat A | egrep -v "Received buffer|Processeed buffer|Send buffer" | head -n 1
        $ cat B | egrep -v "Received buffer|Processeed buffer|Send buffer"
        0:00:03.338 Error processing buffer 0x324222
        0:00:07.280 Send buffer 0x3242FA failed

    This is a boring thing (there are a lot of message types); also I could accidentally include some too-broad pattern. And it can't handle complicated things like interrelations between messages. I know that it is AI-related. Maybe there are already developed tools?
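
    A minimal sketch of the "learn A, then filter B" idea, with hand-written masking rules standing in for the learned ones (the three regexes are assumptions about what counts as volatile; real logs would need more):

        #include <fstream>
        #include <iostream>
        #include <regex>
        #include <string>
        #include <unordered_set>

        // Canonicalize a line: mask volatile fields (timestamps, hex ids,
        // bare numbers) so lines differing only in those compare equal.
        static std::string canonical(const std::string& line) {
            static const std::regex ts(R"(\d+:\d\d:\d\d\.\d+)");
            static const std::regex hex(R"(0x[0-9A-Fa-f]+)");
            static const std::regex num(R"(\b\d+\b)");
            std::string s = std::regex_replace(line, ts, "<TS>");
            s = std::regex_replace(s, hex, "<HEX>");
            return std::regex_replace(s, num, "<NUM>");
        }

        int main(int argc, char** argv) {
            if (argc != 3) { std::cerr << "usage: logdiff A B\n"; return 1; }

            // Learn the set of message templates occurring in A.
            std::unordered_set<std::string> known;
            std::ifstream a(argv[1]);
            for (std::string line; std::getline(a, line); )
                known.insert(canonical(line));

            // Print B's lines whose template never occurred in A, in order.
            std::ifstream b(argv[2]);
            for (std::string line; std::getline(b, line); )
                if (!known.count(canonical(line)))
                    std::cout << line << '\n';
        }

    On the sample logs above this prints exactly the two wanted lines, since the templates "Error processing buffer <HEX>" and "Send buffer <HEX> failed" never occur in A.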

  • OpenGL ES depth buffer

    - by Istvan
    Hi! I was wondering, can I deallocate the depth buffer in iPhone OpenGL ES to conserve memory, or does it stay until the application finishes? I only need depth testing at the beginning of the application.
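
    If the depth buffer is the renderbuffer created by the app itself in the usual iPhone FBO setup, it can be detached and deleted like any other renderbuffer once it is no longer needed. A sketch against OpenGL ES 1.1 with the OES framebuffer extension (the handle names are illustrative):

        #include <OpenGLES/ES1/gl.h>
        #include <OpenGLES/ES1/glext.h>

        // framebuffer / depthRenderbuffer: the handles created at setup.
        void releaseDepthBuffer(GLuint framebuffer, GLuint depthRenderbuffer) {
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
            glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                                         GL_DEPTH_ATTACHMENT_OES,
                                         GL_RENDERBUFFER_OES, 0);  // detach
            glDeleteRenderbuffersOES(1, &depthRenderbuffer);       // free it
            glDisable(GL_DEPTH_TEST);  // no depth testing from here on
        }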

  • How to determine the used size of device associated's buffer

    - by dubbaluga
    Hi, when mounting a device without the "sync" option, e.g. by invoking the following: mount -o async /dev/sdc1 /mnt a buffer is associated with the device to optimize (speed up) read/write operations. Is there a way to determine the size of this buffer? Another question that comes to mind: is it possible to find out how much of it is currently used? That would be interesting for estimating the time it would take to "sync" or "umount" slow devices, such as flash-based media. Thanks in advance for your answers, Rainer
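
    As far as I know, on Linux there is no fixed per-device buffer for this: async writes sit in the shared page cache, and the amount still waiting to reach storage shows up system-wide in the Dirty and Writeback lines of /proc/meminfo. A sketch that just extracts those two lines:

        #include <fstream>
        #include <iostream>
        #include <string>

        int main() {
            std::ifstream meminfo("/proc/meminfo");
            for (std::string line; std::getline(meminfo, line); ) {
                // e.g. "Dirty:      1234 kB" / "Writeback:     0 kB"
                if (line.compare(0, 6, "Dirty:") == 0 ||
                    line.compare(0, 10, "Writeback:") == 0)
                    std::cout << line << '\n';
            }
        }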

  • Switching to some emacs shell buffers moves the cursor to the beginning of the buffer

    - by yuvilio
    I run Emacs 24 with Prelude and a few shells that I invoke at the start, e.g. (shell "*shell*_spare"). When I switch to some of them (C-x b), my cursor lands at the beginning of the buffer, rather than where it last left off (typically the end of the buffer, after the last command I ran). The strange thing is that this does not happen for all of the shell buffers, which I set up in the same way but with different names. When I switch to those, the cursor is where it last left off. Any ideas how I can make the cursor always be where it last was, or at the bottom?

  • Illustration of buffer overflows for students (linux, C)

    - by osgx
    Hello My friend teaches first-year CS students. We want to show them buffer overflow exploitation, but modern distros are protected from simple buffer overflows:

        HOME=`perl -e "print 'A'x269"` one_widely_used_utility_is_here --help

    On Debian (blame it):

        Caught signal 11

    On a modern commercial Red Hat:

        *** buffer overflow detected ***: /usr/bin/one_widely_used_utility_is_here terminated
        ======= Backtrace: =========
        /lib/libc.so.6(__chk_fail+0x41)[0xc321c1]
        /lib/libc.so.6(__strcpy_chk+0x43)[0xc315e3]
        /usr/bin/one_widely_used_utility_is_here[0x805xxxc]
        /usr/bin/one_widely_used_utility_is_here[0x804xxxc]
        /lib/libc.so.6(__libc_start_main+0xdc)[0xb61e9c]
        /usr/bin/one_widely_used_utility_is_here[0x804xxx1]
        ======= Memory map: ========
        00336000-00341000 r-xp 00000000 08:02 2751047 /lib/libgcc_s-4.1.2-20080825.so.1
        00341000-00342000 rwxp 0000a000 08:02 2751047 /lib/libgcc_s-4.1.2-20080825.so.1
        008f3000-008f4000 r-xp 008f3000 00:00 0 [vdso]

    The same detector fires for more synthetic examples from the internet. How can we demonstrate buffer overflow with modern non-GPL distros (there is no Debian in the classes)? How can we DISABLE canary word checking in the stack? DISABLE the checked variants of strcpy/strcat? Write an example (in plain C) with a working buffer overrun?
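
    One approach, sketched here rather than prescribed: compile the demo program itself with the hardening switched off, instead of attacking a system utility. The gcc/g++ flags below are real; the randomize_va_space knob is the usual ASLR switch on Linux.

        // overflow_demo.cpp -- deliberately vulnerable, for teaching only.
        //
        // Build with protections off (gcc/g++):
        //   g++ -O0 -fno-stack-protector -U_FORTIFY_SOURCE overflow_demo.cpp -o demo
        //   -fno-stack-protector : no stack canary
        //   -U_FORTIFY_SOURCE    : no checked strcpy/strcat variants
        // Optionally disable ASLR system-wide (as root):
        //   echo 0 > /proc/sys/kernel/randomize_va_space
        #include <cstdio>
        #include <cstring>

        void victim(const char* input) {
            char buf[16];
            std::strcpy(buf, input);  // no bounds check: overruns on long input
            std::printf("copied: %s\n", buf);
        }

        int main(int argc, char** argv) {
            if (argc > 1) victim(argv[1]);  // try a 100-character argument
            return 0;
        }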

  • How to prevent buffer overflow in C/C++?

    - by alexpov
    Hello, I am using the following code to redirect stdout to a pipe, then read all the data from the pipe into a buffer. I have two problems. First problem: when I send a string (after redirection) bigger than the pipe's BUFF_SIZE, the program stops responding (deadlock or something). Second problem: when I try to read from the pipe before something was sent to stdout, I get the same response; the program stops responding (the _read call gets stuck). The issue is that I don't know the amount of data that will be sent to the pipe after the redirection. The first problem I don't know how to handle, and I'll be glad for help. The second problem I solved by a simple workaround: right after the redirection I print a space character to stdout. But I guess that this solution is not the correct one...

        #include <fcntl.h>
        #include <io.h>
        #include <iostream>

        #define READ 0
        #define WRITE 1
        #define BUFF_SIZE 5

        using namespace std;

        int main()
        {
            int stdout_pipe[2];
            int saved_stdout;

            saved_stdout = _dup(_fileno(stdout));            // save stdout

            if (_pipe(stdout_pipe, BUFF_SIZE, O_TEXT) != 0)  // make a pipe
            {
                exit(1);
            }
            fflush(stdout);

            if (_dup2(stdout_pipe[1], _fileno(stdout)) != 0) // redirect stdout to the pipe
            {
                exit(1);
            }
            ios::sync_with_stdio();
            setvbuf(stdout, NULL, _IONBF, 0);

            // anything sent to stdout goes now to the pipe
            //printf(" ");   // workaround for the second problem
            printf("123456"); // first problem

            char buffer[BUFF_SIZE] = {0};
            int nOutRead = 0;
            nOutRead = _read(stdout_pipe[READ], buffer, BUFF_SIZE); // second problem
            buffer[nOutRead] = '\0';

            // reconnect stdout
            if (_dup2(saved_stdout, _fileno(stdout)) != 0)
            {
                exit(1);
            }
            ios::sync_with_stdio();

            printf("buffer: %s\n", buffer);
        }

    Thanks, Alex
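
    Both hangs are the same underlying issue: a pipe read blocks until data arrives, and a pipe write blocks once the pipe is full with nobody reading. One common way out, sketched under the assumption that the Windows CRT _pipe/_read from the question are in use, is to drain the pipe from a second thread while the main thread writes:

        #include <io.h>
        #include <string>

        // Reads until EOF, i.e. until every descriptor referring to the
        // write end has been closed. Run this on its own thread so the
        // writer can never fill the pipe with no reader present.
        std::string drainPipe(int readFd) {
            std::string all;
            char chunk[512];
            int n;
            while ((n = _read(readFd, chunk, sizeof(chunk))) > 0)
                all.append(chunk, n);
            return all;
        }

    Usage would be roughly: start a std::thread running drainPipe before writing, write as much as you like, restore stdout with _dup2, _close the pipe's write end so the reader sees EOF, then join the thread and collect the string.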

  • BitBlting multiple images to buffer

    - by Anonymous
    So I've made a class which draws a transparent image to a buffer. The buffer is an HDC which has been filled with blackness. What I am trying to do is draw three images to this buffer, which means I am using this function three times. After that's done, I output it to the screen (by SRCCOPYing the buffer). But what I get to see is just the third image, and blackness.

        void draw_buffer(HDC buffer, int draw_x, int draw_y)
        {
            BitBlt(this->main, draw_x, draw_y, this->img_width, this->img_height,
                   this->image, this->mask_x, this->mask_y, SRCAND);
            BitBlt(this->main, draw_x, draw_y, this->img_width, this->img_height,
                   this->image, this->img_x, this->img_y, SRCPAINT);
            BitBlt(buffer, 0, 0, 800, 600, this->main, 0, 0, SRCCOPY);
        }

    At initiation, this->main becomes this:

        this->main = CreateCompatibleDC(GetDC(0));
        this->bitmap = CreateCompatibleBitmap(GetDC(0), 800, 600);
        SelectObject(this->main, this->bitmap);

    What is wrong with my code?

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question). I was just wondering if I was doing it right, or if there is a better way. I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front), so that the tiles overlapped properly to create the proper effect. After some reading, I quickly realised that this is the "old hat" 2D approach. This became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space), so it seemed logical to take advantage of the depth buffer. This meant that I didn't have to worry about rendering stuff in the correct order. However, I faced a problem: if you use GL_DEPTH_TEST and GL_BLEND together, objects are blended with the background before they are "sorted" by z order (meaning that you get a weird kind of overlap where the transparency should be). Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android):

        create() {
            // ...
            // some other code here
            // ...
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // ...
            // bind texture and create vertices
            // ...
        }

    So the question is: how do I solve the transparency overlap problem?
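
    For reference, the two standard fixes (the questioner posted their own answer separately, which may differ): either discard transparent texels with an alpha test so the depth buffer stays exact and no sorting is needed, or draw opaque geometry first and then blended geometry back to front with depth writes off. A fixed-function, C-style GL sketch of both:

        #include <GL/gl.h>

        // Option (a): cut-out sprites. Nearly transparent texels are
        // discarded, so the depth buffer only sees "solid" pixels and
        // tiles can be drawn in any order.
        void drawCutoutTiles() {
            glEnable(GL_DEPTH_TEST);
            glEnable(GL_ALPHA_TEST);
            glAlphaFunc(GL_GREATER, 0.5f);  // drop texels with alpha <= 0.5
            // ... draw tiles in any order ...
            glDisable(GL_ALPHA_TEST);
        }

        // Option (b): true translucency. Opaque pass writes depth; the
        // blended pass tests depth but does not write it, and is drawn
        // sorted back to front.
        void drawBlendedTiles() {
            glEnable(GL_DEPTH_TEST);
            // ... draw opaque tiles first ...
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);          // keep testing, stop writing
            // ... draw translucent tiles, far to near ...
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
        }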

  • python multithread "maximum recursion depth exceed"

    - by user293487
    I use Python multithreading to implement quicksort. Quicksort is implemented in a recursive function, and each thread calls it to sort its own array of numbers. If the array size is smaller (< 10,000), it runs OK. However, if the array size is larger, it shows "maximum recursion depth exceeded". So I used the setrecursionlimit() function to reset the recursion limit to 1500, but the program crashes outright...
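
    Raising the limit tends to just move the crash, since past a point the interpreter's own C stack overflows, especially with smaller per-thread stacks. The usual fix is to bound the depth instead, e.g. by managing subranges on an explicit stack. The idea, sketched here in C++ rather than Python:

        #include <utility>
        #include <vector>

        // Quicksort with the recursion replaced by an explicit stack of
        // subranges, so there is no call-stack depth to exhaust.
        void quicksort(std::vector<int>& a) {
            std::vector<std::pair<int, int>> ranges;
            if (!a.empty()) ranges.push_back({0, (int)a.size() - 1});
            while (!ranges.empty()) {
                auto [lo, hi] = ranges.back();
                ranges.pop_back();
                if (lo >= hi) continue;
                int pivot = a[lo + (hi - lo) / 2];
                int i = lo, j = hi;                // Hoare-style partition
                while (i <= j) {
                    while (a[i] < pivot) ++i;
                    while (a[j] > pivot) --j;
                    if (i <= j) std::swap(a[i++], a[j--]);
                }
                ranges.push_back({lo, j});         // "recurse" via the stack
                ranges.push_back({i, hi});
            }
        }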

  • Calculating depth and descendants of tree

    - by yuudachi
    Can you guys help me with the algorithms to do these things? I have preorder, inorder, and postorder implemented, and I am given the hint to traverse the tree with one of these orders. I am using dotty to label (or "visit") the nodes. Depth is the number of edges from the root to the bottom leaf, so every time I move down a level, I add 1 to the depth? Something like that? No idea about the algorithm for descendants: it asks for the number of nodes a specific node has under itself. These are normal trees, btw.
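
    A sketch of how a single postorder traversal can compute both at once: a node's height is the maximum of its children's heights plus one edge, and its descendant count is the sum over children of (1 + child's descendants). The types here are illustrative:

        #include <algorithm>
        #include <vector>

        struct Node {
            std::vector<Node*> children;  // a general (n-ary) tree
            int descendants = 0;          // filled in by the traversal
        };

        // Postorder: children are finished before their parent, so the
        // parent's answers can be assembled from theirs. Returns the
        // height (edges to the deepest leaf) of the subtree at n.
        int visit(Node* n) {
            int height = 0;
            n->descendants = 0;
            for (Node* c : n->children) {
                height = std::max(height, 1 + visit(c));  // one edge down
                n->descendants += 1 + c->descendants;     // child + subtree
            }
            return height;
        }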

  • OpenGLES iPhone Depth Blending problem

    - by user359103
    Hi, I am trying to make an OpenGL ES 2.0 cube application. The idea is to have a texture (with 75% alpha) applied to all 6 faces of the cube. This would mean that even when I rotate the cube, I would be able to see all 6 faces at any given frame. Now, I have enabled the depth test (my app needs it!!) and blending. The depth func is LEQUAL and the blend func is SRC_ALPHA, ONE_MINUS_SRC_ALPHA. The issue is that some cube faces don't show the underlying faces. I am not able to understand this, because the logic works fine for the other cube faces. Just for the record, I have disabled CULL_FACE. Thanks in advance. Regards, Puzzler
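
    This looks like the classic draw-order problem: with depth testing on, whichever face is rasterized first writes its depth, and a farther face drawn later is rejected instead of showing through. For a convex shape like a cube, a common fix that needs no CPU sorting is to draw it twice with face culling, back faces first. A hedged ES 2.0-style sketch (drawCube() stands in for the app's own draw call):

        #include <GLES2/gl2.h>

        void drawCube();  // assumed: issues the cube's draw call

        // Convex transparent object: render back faces, then front
        // faces, so each pixel blends in back-to-front order.
        void drawTransparentCube() {
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

            glEnable(GL_CULL_FACE);
            glCullFace(GL_FRONT);  // pass 1: keep only back-facing triangles
            drawCube();
            glCullFace(GL_BACK);   // pass 2: keep only front-facing triangles
            drawCube();
            glDisable(GL_CULL_FACE);
        }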
