Search Results

Search found 3117 results on 125 pages for 'z buffer'.

Page 10/125

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody, In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files and write the smaller one to the third file. This is quite slow on big files, and since I don't see any way around the iteration itself (it has to be done), I'm trying to optimize file loading. I can spare some amount of RAM for buffering: instead of reading 4 bytes from each file every time, I could read something like 100MB at once and work with that buffer, refilling it when it runs out. But I guess ifstream is already doing that - will a manual buffer give me more performance, and is there any reason to use one? If fstream does buffer, maybe I can change the size of that buffer? Added: My current code looks like this (pseudocode):

        // this is done in a loop
        int i1 = input1.read_integer();
        int i2 = input2.read_integer();
        if (!input1.eof() && !input2.eof()) {
            if (i1 < i2) {
                output.write(i1);
                input2.seek_back(sizeof(int));
            } else {
                output.write(i2);
                input1.seek_back(sizeof(int));
            }
        } else {
            if (input1.eof()) output.write(i2);
            else if (input2.eof()) output.write(i1);
        }

    What I don't like here is the seek_back - I have to seek back to the previous position because there is no way to peek 4 bytes ahead without reading too much from the file. Also, when one of the streams hits EOF, the loop still keeps checking that stream instead of copying the contents of the other stream straight to the output, but this is not a big issue, because the chunk sizes are almost always equal. Can you suggest an improvement for that? Thanks.
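
    On the buffering question: ifstream does read ahead into an internal buffer (typically only a few KB), and the standard library lets you supply a bigger one through rdbuf()->pubsetbuf(), though the effect is implementation-defined, so it is worth measuring. A minimal sketch, assuming the 1 MB size and file name are purely illustrative:

        #include <fstream>
        #include <vector>

        int main() {
            std::vector<char> buf(1 << 20);   // 1 MB user-supplied stream buffer
            std::ifstream in;
            // pubsetbuf must be called before open() to take effect with
            // common implementations (e.g. libstdc++).
            in.rdbuf()->pubsetbuf(buf.data(), static_cast<std::streamsize>(buf.size()));
            in.open("input1.bin", std::ios::binary);
            // ... read integers as before ...
        }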

    Read the article

  • Network Error: no buffer space available

    - by braindump
    After some time of running fine, one of our Windows XP SP3 machines stops opening some(!) new TCP/IP connections. PuTTY says "Network Error: no buffer space available", IE won't open any new connections, but e.g. network drive mappings still work, and even new ones can be established. netstat does not show more open connections than usual, and ping and DNS lookups work fine. Any hints?

    Read the article

  • ActiveSync / Exchange 2007 password expiration buffer on device

    - by Matt Hamende
    I'm trying to determine whether there is any buffer of time between when a password expires in AD and when users stop receiving email on their mobile devices. Our setup: Exchange 2007 ActiveSync; DCs are Server 2008 R2; primarily an Android shop, with maybe a few iOS devices. I've heard rumors of people still receiving email after their password expired / changed on the domain; just want to see if anyone else has ever heard of this. Did a bit more reading and read about the token cache in IIS 7.0 and its 15-minute lag time; I'd still like to hear any thoughts about this.

    Read the article

  • Linux buffer cache effect on IO writes?

    - by Patrick LeBoutillier
    I'm copying large files (3 x 30G) between 2 filesystems on a Linux server (kernel 2.6.37, 16 cores, 32G RAM) and I'm getting poor performance. I suspect that the usage of the buffer cache is killing the I/O performance. To try and narrow down the problem I used fio directly on the SAS disk to monitor the performance. Here is the output of 2 fio runs (the first with direct=1, the second with direct=0). Config:

        [test]
        rw=write
        blocksize=32k
        size=20G
        filename=/dev/sda
        # direct=1

    Run 1:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/205M /s] [0/6K iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4667
          write: io=20,480MB, bw=199MB/s, iops=6,381, runt=102698msec
            clat (usec): min=104, max=13,388, avg=152.06, stdev=72.43
            bw (KB/s) : min=192448, max=213824, per=100.01%, avg=204232.82, stdev=4084.67
          cpu          : usr=3.37%, sys=16.55%, ctx=655410, majf=0, minf=29
          IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 250=99.50%, 500=0.45%, 750=0.01%, 1000=0.01%
             lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%
        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=199MB/s, minb=204MB/s, maxb=204MB/s, mint=102698msec, maxt=102698msec
        Disk stats (read/write):
          sda: ios=0/655238, merge=0/0, ticks=0/79552, in_queue=78640, util=76.55%

    Run 2:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/0K /s] [0/0 iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4733
          write: io=20,480MB, bw=91,265KB/s, iops=2,852, runt=229786msec
            clat (usec): min=16, max=127K, avg=349.53, stdev=4694.98
            bw (KB/s) : min=56013, max=1390016, per=101.47%, avg=92607.31, stdev=167453.17
          cpu          : usr=0.41%, sys=6.93%, ctx=21128, majf=0, minf=33
          IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 20=5.53%, 50=93.89%, 100=0.02%, 250=0.01%, 500=0.01%
             lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.12%
             lat (msec): 100=0.38%, 250=0.04%
        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=91,265KB/s, minb=93,455KB/s, maxb=93,455KB/s, mint=229786msec, maxt=229786msec
        Disk stats (read/write):
          sda: ios=8/79811, merge=7/7721388, ticks=9/32418456, in_queue=32471983, util=98.98%

    I'm not knowledgeable enough with fio to interpret the results, but I don't expect the overall performance using the buffer cache to be 50% less than with O_DIRECT. Can someone help me interpret the fio output? Are there any kernel tunings that could fix/minimize the problem? Thanks a lot,
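
    For context on what the two runs compare: direct=1 makes fio open the device with O_DIRECT, bypassing the page cache, while direct=0 goes through the buffer cache (note the huge merge count in Run 2's disk stats: the kernel is merging the 32k writes into larger requests). A rough sketch of what a direct write looks like at the syscall level, with the device path and sizes mirroring the fio job and purely illustrative:

        #include <fcntl.h>      // open, O_DIRECT (needs _GNU_SOURCE in plain C on Linux)
        #include <stdlib.h>     // posix_memalign, free
        #include <unistd.h>     // write, close

        int main() {
            // O_DIRECT bypasses the page cache; it requires the buffer,
            // file offset and transfer size to be block-aligned.
            int fd = open("/dev/sda", O_WRONLY | O_DIRECT);  // needs root; illustrative
            if (fd < 0) return 1;
            void *buf = NULL;
            if (posix_memalign(&buf, 512, 32 * 1024) != 0) return 1;  // 32k, as in the fio job
            ssize_t n = write(fd, buf, 32 * 1024);  // one direct 32k write
            free(buf);
            close(fd);
            return (n < 0) ? 1 : 0;
        }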

    Read the article

  • socket() failed: No buffer space available while connecting to upstream

    - by alfish
    On my Ubuntu 10.04 VPS, I get regular 500 errors from nginx (0.7.??) + FastCGI serving a Drupal site, and when I trace the nginx error log I see plenty of these: socket() failed: No buffer space available) while connecting to upstream ... I have tried different config combinations but none fixed the problem. Currently I have 3 nginx workers, a keep-alive timeout of 15 seconds, and PHP_FCGI_CHILDREN=5 PHP_FCGI_MAX_REQUESTS=1000. I'd really appreciate a suggested solution to this annoying problem.

    Read the article

  • java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small

    - by jack
    Hi, I got an email from a user who sees the following error output when he's using our web site:

        java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
        ORA-06512: at "WEB_OWNER.SSFP_GET_WE_OBJ", line 300
        ORA-06512: at line 1
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:137)
            at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:315)
            at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:281)

    This error comes from Oracle webconnect, Oracle Application Server Containers for J2EE 10g (10.1.2.3.0). Any idea?

    Read the article

  • Buffer requests in nginx while a symlink switches on backend

    - by Quintin Par
    In a release deployment I would like client requests that come to nginx (in reverse proxy mode) to be buffered for possibly 1-2 seconds while a pdsh command is sent to switch symlinks on the backend servers to /var/www/html/current. After the switch is complete, I would want to release the buffered requests while avoiding a herd clash. Is this possible in nginx? Can someone help? Edit: The idea is not to lose requests, and from the nginx forums I've come to know that retries can sometimes result in CPU spins.

    Read the article

  • MCE7 how to get video from livetv buffer

    - by Vnuk
    I'm watching something on my MCE and I can jump back to see something again. How can I (if at all) extract a piece of that video, which is recorded on my drive somewhere? If I press record, it starts recording from now, and the past live buffer is lost. Is there a plugin that can do this?

    Read the article

  • Reverse Proxy that does not buffer uploads

    - by tsuraan
    From what I've seen of various reverse proxies (nginx, apache, varnish), they seem to buffer file uploads to disk before handing them off to the service they're proxying for. I need a reverse proxy that doesn't do this; I have a system that handles uploads itself, and buffering uploaded files to disk is not something that works for me. Does anybody know of a proxy server that can be configured to just pass traffic through to the proxied services without doing any buffering to disk?

    Read the article

  • Python: convert buffer type of SQLITE column into string

    - by Volatil3
    I am new to Python 2.6. I have been trying to fetch a datetime value, stored in yyyy-mm-dd hh:mm:ss format, back into my Python program. On checking the column type in Python I get the error: 'buffer' object has no attribute 'decode'. I want to use the strptime() function to split up the date data, but I can't find how to convert a buffer to a string. The following is a sample of my code (also available here):

        conn = sqlite3.connect("mrp.db.db", detect_types=sqlite3.PARSE_DECLTYPES)
        cursor = conn.cursor()
        qryT = """
            SELECT dateDefinitionTest FROM t
            WHERE IDproject = 4 AND IDstatus = 5
            ORDER BY priority, setDate DESC
        """
        rec = (4,4)
        cursor.execute(qryT, rec)
        resultsetTasks = cursor.fetchall()
        cursor.close()  # closing the resultset
        for item in resultsetTasks:
            taskDetails = {}
            _f = item[10].decode("utf-8")

    The exception I get is: 'buffer' object has no attribute 'decode'

    Read the article

  • Visualize the depth buffer

    - by Thanatos
    I'm attempting to visualize the depth buffer for debugging purposes, by drawing it on top of the actual rendering when a key is pressed. It's mostly working, but the resulting image appears to be zoomed in. (It's not just the original image, in an odd grayscale.) Why is it not the same size as the color buffer? This is what I'm using to view the depth buffer:

        void get_gl_size(int &width, int &height) {
            int iv[4];
            glGetIntegerv(GL_VIEWPORT, iv);
            width = iv[2];
            height = iv[3];
        }

        void visualize_depth_buffer() {
            int width, height;
            get_gl_size(width, height);
            float *data = new float[width * height];
            glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, data);
            glDrawPixels(width, height, GL_LUMINANCE, GL_FLOAT, data);
            delete [] data;
        }

    Read the article

  • realloc()ing memory for a buffer used in recv()

    - by Hristo
    I need to recv() data from a socket and store it into a buffer, but I need to make sure I get all of the data, so I have things in a loop. To make sure I don't run out of room in my buffer, I'm trying to use realloc to resize the memory allocated to the buffer. So far I have:

        // receive response
        int i = 0;
        int amntRecvd = 0;
        char *pageContentBuffer = (char*) malloc(4096 * sizeof(char));
        while ((amntRecvd = recv(proxySocketFD, pageContentBuffer + i, 4096, 0)) > 0) {
            i += amntRecvd;
            realloc(pageContentBuffer, 4096 + sizeof(pageContentBuffer));
        }

    However, this doesn't seem to be working properly, since Valgrind is complaining "valgrind: the 'impossible' happened:". Any advice as to how this should be done properly? Thanks, Hristo
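
    For reference, the posted loop has two bugs any fix needs to address: sizeof(pageContentBuffer) is the size of the pointer (4 or 8 bytes), not of the allocation, and realloc()'s return value is discarded, so the buffer never grows and, if realloc moved the block, the old pointer is left dangling. A corrected sketch of the same loop, assuming proxySocketFD is a connected socket as in the question:

        #include <stdlib.h>      // malloc, realloc
        #include <sys/socket.h>  // recv

        size_t capacity = 4096;                       // track the real allocation size
        size_t used = 0;
        char *buf = (char *) malloc(capacity);
        ssize_t n;
        while ((n = recv(proxySocketFD, buf + used, capacity - used, 0)) > 0) {
            used += (size_t) n;
            if (capacity - used < 4096) {             // grow before space runs out
                char *tmp = (char *) realloc(buf, capacity * 2);
                if (tmp == NULL) break;               // realloc failed; old buf still valid
                buf = tmp;                            // realloc may move the block
                capacity *= 2;
            }
        }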

    Read the article

  • Vim: Delete Buffer When Quitting Split Window

    - by Rafid K. Abdullah
    I have this very useful function in my .vimrc:

        function! MyGitDiff()
            !git cat-file blob HEAD:% > temp/compare.tmp
            diffthis
            belowright vertical new
            edit temp/compare.tmp
            diffthis
        endfunction

    What it does is basically open the file I am currently working on, from the repository, in a vertical split window, then compare with it. This is very handy, as I can easily compare changes against the original file. However, there is a problem. After finishing the compare, I remove the split window by typing :q. This however doesn't remove the buffer from the buffer list, and I can still see the compare.tmp file in the buffer list. This is annoying because whenever I make a new compare, I get this message:

        Warning: File "temp/compare.tmp" has changed since editing started.

    Is there any way to delete the file from the buffers as well as closing the vertical split window?

    Read the article

  • About data size filled in the buffer

    - by Bohan Lu
    I need low-latency audio in my project, and I know Android 2.3 supports OpenSL ES. I have read the documents and sample code, and I have decided to use the Android simple buffer queue for play and record. I am now trying to write a simple application to do the test. However, I have some questions about recording. If I stop the recorder while it is recording, how do I know the exact number of bytes filled in the last buffer if it is not filled up? In version 1.1, the callback function has some parameters about the buffer and its filled data, but there are no such parameters in version 1.0.1. Is there any way to get this information? Any suggestion would be greatly appreciated!

    Read the article

  • Vim, how to scroll to bottom of a named buffer

    - by Gavin Black
    I have a vim-script which splits output to a new window, using the following command:

        below split +view foo

    I've been trying to find a way, from an arbitrary buffer, to scroll to the bottom of foo, or a setting to keep it defaulted to showing the bottom lines of the buffer. I'm doing most of this inside of a python block of vim script. So I have something like:

        python << endpython
        import vim
        import time
        import thread
        import sys

        def myfunction(string, sleeptime, *args):
            outpWindow = vim.current.window
            while 1:
                outpWindow.buffer.append("BAR")
                # vim.command("SCROLL TO BOTTOM OF foo")
                time.sleep(sleeptime)  # sleep for a specified amount of time.

        vim.command('below split +view foo')
        thread.start_new_thread(myfunction, ("Thread No:1", 2))
        endpython

    and need to find something to put in for the vim.command("SCROLL TO BOTTOM of foo") line.

    Read the article

  • OpenGL Vertex Array/Buffer Objects

    - by sadanjon
    Question 1: Are vertex buffer objects created under a certain VAO deleted once that VAO is deleted? An example:

        glGenBuffers(1, &bufferObject);
        glGenVertexArrays(1, &VAO);
        glBindVertexArray(VAO);
        glBindBuffer(GL_ARRAY_BUFFER, bufferObject);
        glBufferData(GL_ARRAY_BUFFER, sizeof(someVertices), someVertices, GL_STATIC_DRAW);
        glEnableVertexAttribArray(positionAttrib);
        glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0, NULL);

    When later calling glDeleteVertexArrays(1, &VAO);, will bufferObject be deleted as well? The reason I'm asking is that I saw a few examples over the web that didn't delete those buffer objects. Question 2: What is the maximum amount of memory that I can allocate for buffer objects? It must be system dependent of course, but I can't seem to find an estimate for it. What happens when video RAM isn't big enough? How would I know?
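
    For what it's worth, a VAO stores references to buffer objects but does not own them: glDeleteVertexArrays() does not delete the attached buffers, so examples that skip glDeleteBuffers() leak them. A minimal cleanup sketch for the example above:

        // VAO and buffer have independent lifetimes; delete both explicitly.
        glDeleteVertexArrays(1, &VAO);
        glDeleteBuffers(1, &bufferObject);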

    Read the article

  • Scripting vim to Run Perltidy on a Buffer

    - by rjray
    At my current job, we have coding-style standards that are different from the ones I normally follow. Fortunately, we have a canned RC file for perltidy that I can apply to reformat files before I submit them to our review process. I have code for emacs that I use to run a command over a buffer and replace the buffer with the output, which I have adapted for this. But I sometimes alternate between emacs and vim, and would like to have the same capability there. I'm sure that this or something similar is simple and has been done and re-done many times over, but I've not had much luck finding any examples of vim script that seem to do what I need. Which is, in essence, to be able to hit a key combo (like Ctrl-F6, what I use in emacs) and have the buffer be reformatted in place by perltidy. While I'm a comfortable vim user, I'm completely clueless at writing this sort of thing for vim.

    Read the article

  • Allocate from buffer in C

    - by Grimless
    I am building a simple particle system and want to use a single array buffer of structs to manage my particles. That said, I can't find a C function that allows me to malloc() and free() from an arbitrary buffer. Here is some pseudocode to show my intent:

        Particle* particles = (Particle*) malloc( sizeof(Particle) * numParticles );
        Particle* firstParticle = <buffer_alloc>( particles );
        initialize_particle( firstParticle );
        // ... Some more stuff
        if (firstParticle->life < 0)
            <buffer_free>( firstParticle );
        // @ program's end
        free(particles);

    Where <buffer_alloc> and <buffer_free> are functions that allocate and free memory chunks from arbitrary pointers (possibly with additional metadata such as buffer length, etc.). Do such functions exist and/or is there a better way to do this? Thank you!
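
    There is no standard C function that sub-allocates from a user-supplied buffer, but since all particles are the same size, a tiny fixed-size pool with an intrusive free list does exactly this. A sketch, where pool_init/pool_alloc/pool_free are illustrative names rather than a library API:

        #include <stdlib.h>

        // Fixed-size pool: free slots are chained through their own storage.
        typedef struct Pool { void *free_list; } Pool;

        // Carve `count` slots of `slot_size` bytes out of `buffer`.
        // slot_size must be at least sizeof(void*) to hold the link.
        void pool_init(Pool *p, void *buffer, size_t slot_size, size_t count) {
            char *base = (char *) buffer;
            p->free_list = NULL;
            for (size_t i = 0; i < count; i++) {
                void *slot = base + i * slot_size;
                *(void **) slot = p->free_list;       // push slot onto the free list
                p->free_list = slot;
            }
        }

        void *pool_alloc(Pool *p) {                    // plays the role of <buffer_alloc>
            void *slot = p->free_list;
            if (slot) p->free_list = *(void **) slot;  // pop the head of the free list
            return slot;                               // NULL when the pool is exhausted
        }

        void pool_free(Pool *p, void *slot) {          // plays the role of <buffer_free>
            *(void **) slot = p->free_list;
            p->free_list = slot;
        }

    With the particle array: pool_init(&pool, particles, sizeof(Particle), numParticles), then pool_alloc(&pool) stands in for <buffer_alloc>(particles) and pool_free(&pool, ptr) for <buffer_free>(ptr).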

    Read the article

  • Early Z culling - Ogre

    - by teodron
    This question is concerned with how one can enable this "pixel filter" to work within an Ogre based app. Simply put, one can write two passes, the first without writing any colour values to the frame buffer:

        lighting off
        colour_write off
        shading flat

    The second pass is the one that employs heavy pixel shader computations, hence it would be really nice to get rid of those hidden surface patches and not process them pixel-wise. This approach works, except for one thing: objects with alpha, such as billboard trees, suffer in a peculiar way. From one side, they seem to capture the sky/background within their alpha region and ignore other trees/houses behind them, while viewed from the other side, they exhibit the desired behavior. To tackle the issue, I thought I could write a custom vertex shader in the first pass and offset the projected Z component of the vertex a little further away from its actual position, so that in the second pass the pixels of the objects closest to the camera would be recomputed correctly. This doesn't work at all: all surfaces are processed in the pixel shader and there is no performance gain. So, if anyone has done a similar trick with Ogre and alpha objects, kindly please help.

    Read the article

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question) - I was just wondering if I was doing it right, or if there is a better way. I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front), so that the tiles overlapped properly to create the proper effect. After some reading, I quickly realised that this is the "old hat" 2D approach. This became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space), so it seemed logical that I take advantage of the depth buffer. This meant that I didn't have to worry about rendering stuff in the correct order. However, I faced a problem: if you use GL_DEPTH_TEST and GL_BLEND together, it creates an effect where objects are blended with the background before they are "sorted" by z order (meaning that you get a weird kind of overlap where the transparency should be). Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android):

        create() {
            // ...
            // some other code here
            // ...
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // ...
            // bind texture and create vertices
            // ...
        }

    So the question is: How do I solve the transparency overlap problem?

    Read the article

  • OpenGL : sluggish performance in extracting texture from GPU

    - by Cyan
    I'm currently working on an algorithm which creates a texture within a render buffer. The operations are pretty complex, but for the GPU this is a simple task, done very quickly. The problem is that, after creating the texture, I would like to save it. This requires extracting it from GPU memory. For this operation, I'm using glGetTexImage(). It works, but the performance is sluggish. No, I mean even slower than that. For example, an 8MB texture (uncompressed) requires 3 seconds (yes, seconds) to be extracted. That's mind-puzzling. I'm almost wondering if my graphics card is connected by a serial link... Well, anyway, I've looked around and found some people complaining about the same thing, but no working solution so far. The most promising advice was to "extract data in the native format of the GPU", which I've tried and tried, but failed so far. Edit: by moving the call to glGetTexImage() to a different place, the speed has been a bit improved for the most dramatic samples: looking again at the 8MB texture, it now requires 500ms instead of 3sec. It's better, but still much too slow. Smaller texture sizes were not affected by the change (typical timing remained in the 60-80ms range). Using glFinish() didn't help either. Note that, if I call glFinish() (without glGetTexImage), I'm getting a fixed 16ms result, whatever the texture size or complexity. It really looks like the timing for a frame at 60fps. The timing is measured for the full rendering + saving sequence; the call to glGetTexImage() alone does not really matter. That being said, it is this call which changes the performance. And yes, of course, as stated at the beginning, the texture is "created into the GPU", hence the need to save it.
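
    One avenue that often comes up for this problem, sketched here under the assumption that pixel buffer objects are available (OpenGL 2.1+): bind a pixel pack buffer before calling glGetTexImage(), so the call can return immediately and the transfer proceeds asynchronously; the cost is paid only when the buffer is mapped. tex, width and height are assumed from the surrounding code:

        // Asynchronous readback through a pixel pack buffer (PBO).
        GLuint pbo;
        GLsizeiptr size = width * height * 4;    // RGBA8; adjust to the real format
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ);
        glBindTexture(GL_TEXTURE_2D, tex);
        // With a pack PBO bound, the pointer argument is an offset into the
        // PBO, and the call can return before the transfer completes.
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void *) 0);
        // ... do other work here, then map when the data is actually needed ...
        void *data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (data) {
            // save the pixels ...
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);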

    Read the article

  • Why are my scene's depth values not being written to my DepthStencilView?

    - by dotminic
    I'm rendering to a depth map in order to use it as a shader resource view, but when I sample the depth map in my shader, the red component has a value of 1 while all other channels have a value of 0. The Texture2D I use to create the DepthStencilView is bound with the D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE flags, the DepthStencilView has the DXGI_FORMAT_D32_FLOAT format, and the ShaderResourceView's view dimension is D3D11_SRV_DIMENSION_TEXTURE2D. I'm setting the depth map render target, then I'm drawing my scene, and once that is done, the back buffer render target and depth stencil are set on the output merger, and I'm using the depth map shader resource view as a texture in my shader, but the depth value in the red channel is constantly 1. I'm not getting any runtime errors from D3D, and no compile-time warnings or anything. I'm not sure what I'm missing here at all. I have the impression the depth value is always being set to 1. I have not set any depth/stencil states, and AFAICT depth writing is enabled by default. The geometry is being rendered correctly, so I'm pretty sure depth writing is enabled. The device is created with the appropriate debug flags:

        #if defined(DEBUG) || defined(_DEBUG)
            deviceFlags |= D3D11_CREATE_DEVICE_DEBUG | D3D11_RLDO_DETAIL;
        #endif

    This is how I create my depth map. I've omitted error checking for the sake of brevity:

        D3D11_TEXTURE2D_DESC td;
        td.Width = width;
        td.Height = height;
        td.MipLevels = 1;
        td.ArraySize = 1;
        td.Format = DXGI_FORMAT_R32_TYPELESS;
        td.SampleDesc.Count = 1;
        td.SampleDesc.Quality = 0;
        td.Usage = D3D11_USAGE_DEFAULT;
        td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
        td.CPUAccessFlags = 0;
        td.MiscFlags = 0;
        _device->CreateTexture2D(&td, 0, &this->_depthMap);

        D3D11_DEPTH_STENCIL_VIEW_DESC dsvd;
        ZeroMemory(&dsvd, sizeof(dsvd));
        dsvd.Format = DXGI_FORMAT_D32_FLOAT;
        dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
        dsvd.Texture2D.MipSlice = 0;
        _device->CreateDepthStencilView(this->_depthMap, &dsvd, &this->_dmapDSV);

        D3D11_SHADER_RESOURCE_VIEW_DESC srvd;
        srvd.Format = DXGI_FORMAT_R32_FLOAT;
        srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvd.Texture2D.MipLevels = td.MipLevels;
        srvd.Texture2D.MostDetailedMip = 0;
        _device->CreateShaderResourceView(this->_depthMap, &srvd, &this->_dmapSRV);

    Read the article

  • Why distant objects draw in front of close objects?

    - by cad
    I am rendering two cubes in space using XNA 4.0, and the layering of objects only works from certain angles. Here is what I see from the front angle (everything ok). Here is what I see from behind. This is my draw method; cubes are drawn by serverManager and serverManager1:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            switch (_gameStateFSM.State)
            {
                case GameFSMState.GameStateFSM.INTROSCREEN:
                    spriteBatch.Begin();
                    introscreen.Draw(spriteBatch);
                    spriteBatch.End();
                    break;
                case GameFSMState.GameStateFSM.GAME:
                    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
                    // Text
                    screenMessagesManager.Draw(spriteBatch, firstPersonCamera.cameraPosition, fpsHelper.framesPerSecond);
                    // Camera
                    firstPersonCamera.Draw();
                    // Servers
                    serverManager.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);
                    serverManager1.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);
                    // Room
                    //roomManager.Draw(GraphicsDevice, firstPersonCamera.viewMatrix);
                    spriteBatch.End();
                    break;
                case GameFSMState.GameStateFSM.EXITGAME:
                    break;
                default:
                    break;
            }
            base.Draw(gameTime);
            fpsHelper.IncrementFrameCounter();
        }

    serverManager and serverManager1 are instances of the same class ServerManager, which draws a cube. The draw method for ServerManager is:

        public void Draw(GraphicsDevice graphicsDevice, Matrix viewMatrix, Matrix projectionMatrix)
        {
            // Set the World matrix which defines the position of the cube
            cubeEffect.World = Matrix.CreateTranslation(modelPosition);
            // Set the View matrix which defines the camera and what it's looking at
            cubeEffect.View = viewMatrix;
            cubeEffect.Projection = projectionMatrix;
            // Enable textures on the Cube Effect. this is necessary to texture the model
            cubeEffect.TextureEnabled = true;
            cubeEffect.Texture = cubeTexture;
            // Enable some pretty lights
            cubeEffect.EnableDefaultLighting();
            // apply the effect and render the cube
            foreach (EffectPass pass in cubeEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                cubeToDraw.RenderToDevice(graphicsDevice);
            }
        }

    Obviously there is something I am doing wrong. Any hint of where to look? (Maybe z-buffer or occlusion tests?)

    Read the article
