Search Results

Search found 816 results on 33 pages for 'buffers'.

Page 14/33 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Swap implications in Linux and ways to increase it

    - by vimalnath
    I used the top command to print this on a Linux box:

        [root@localhost ~]# top
        top - 23:38:38 up 361 days, 12:16, 2 users, load average: 0.09, 0.06, 0.01
        Tasks: 129 total, 2 running, 126 sleeping, 1 stopped, 0 zombie
        Cpu(s): 0.0% us, 0.2% sy, 0.0% ni, 96.5% id, 3.4% wa, 0.0% hi, 0.0% si
        Mem:  2074712k total, 1996948k used, 77764k free, 16632k buffers
        Swap: 1052248k total, 1052248k used, 0k free, 331540k cached

    I am not sure what "Swap: ... 0k free" in the last line means. Is it normal behavior for a Linux box to have a value of 0? Thanks

    Read the article

  • How to measure TCP connection time in Linux

    - by Paul Draper
    I want to measure the overhead of creating a TCP connection. I know of many tools like hping and netperf, but they seem oriented toward measuring latency. I want to know how long the 3-way handshake takes, plus allocating any buffers, etc., and then how long closing the connection takes. So I want to open a real, legitimate TCP connection and then close it. Are there any tools that will do that and help me measure performance?
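
    If no existing tool does this, I suppose I could time it myself. A minimal POSIX/C++ sketch of what I have in mind (the address and port are placeholders):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <chrono>
        #include <cstdio>

        int main() {
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_port = htons(80);                            // placeholder port
            inet_pton(AF_INET, "93.184.216.34", &addr.sin_addr);  // placeholder address

            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) return 1;

            // connect() returns once the 3-way handshake has completed on the
            // client side, so the elapsed time covers SYN, SYN-ACK, and the
            // kernel's buffer setup for the socket.
            auto t0 = std::chrono::steady_clock::now();
            int rc = connect(fd, (sockaddr*)&addr, sizeof(addr));
            auto t1 = std::chrono::steady_clock::now();
            close(fd);

            if (rc == 0)
                std::printf("connect() took %lld us\n", (long long)
                    std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count());
            return rc == 0 ? 0 : 1;
        }

    But I'd prefer an established tool if one exists.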

    Read the article

  • Is there a way to "lock" the viewport in vim?

    - by breadjesus
    I recently started using Vim with NERDTree. The annoying thing is that when I close a buffer, NERDTree expands to fill the rest of the screen, and I have to open another file and reopen NERDTree to get back to the old layout. Is there a way to "lock" NERDTree in place? Ideally, closing a buffer would replace it with another buffer that's hidden, or open a new blank buffer if no other buffers are open. Thanks!

    Read the article

  • Pyglet: How to use second screen's vsync

    - by BaldDude
    Does anybody know if it's possible to use the vsync of the second monitor instead of the first one with pyglet? I have 2 monitors, one running at 60Hz and the other at 120Hz. I want to be able to put my application on whichever monitor I choose, and have the application use that monitor's rate to swap the buffers. This needs to be cross-platform. I found this information... pyglet.window But I was wondering if anybody knows a way... Thanks for your help.

    Read the article

  • Why does setting a geometry shader cause my sprites to vanish?

    - by ChaosDev
    My application has multiple screens with different tasks. Once I set a geometry shader on the device context for my custom terrain, it works and I get the desired results. But then, when I go back to the main menu, all sprites and text disappear. The sprites don't disappear when I use only pixel and vertex shaders. The sprites are drawn through D3D11, of course, with specified view and projection matrices as well as an input layout, vertex shader, and pixel shader. I'm trying DeviceContext->ClearState() but it does not help. Any ideas?

        void gGeometry::DrawIndexedWithCustomEffect(gVertexShader* vs, gPixelShader* ps, gGeometryShader* gs = nullptr)
        {
            unsigned int offset = 0;
            auto context = mp_D3D->mp_Context;

            // set topology
            context->IASetPrimitiveTopology(m_Topology);
            // set input layout
            context->IASetInputLayout(mp_inputLayout);
            // set vertex and index buffers
            context->IASetVertexBuffers(0, 1, &mp_VertexBuffer->mp_Buffer, &m_VertexStride, &offset);
            context->IASetIndexBuffer(mp_IndexBuffer->mp_Buffer, mp_IndexBuffer->m_DXGIFormat, 0);
            // send constant buffers to shaders
            context->VSSetConstantBuffers(0, vs->m_CBufferCount, vs->m_CRawBuffers.data());
            context->PSSetConstantBuffers(0, ps->m_CBufferCount, ps->m_CRawBuffers.data());
            if (gs != nullptr)
            {
                context->GSSetConstantBuffers(0, gs->m_CBufferCount, gs->m_CRawBuffers.data());
                context->GSSetShader(gs->mp_D3DGeomShader, 0, 0); // after this call all sprites disappear
            }
            // set shaders
            context->VSSetShader(vs->mp_D3DVertexShader, 0, 0);
            context->PSSetShader(ps->mp_D3DPixelShader, 0, 0);
            // draw
            context->DrawIndexed(m_indexCount, 0, 0);
        }

        // sprites
        void gSpriteDrawer::Draw(gTexture2D* texture, const RECT& dest, const RECT& source,
                                 const Matrix& spriteMatrix, const float& rotation, Vector2d& position,
                                 const Vector2d& origin, const Color& color)
        {
            VertexPositionColorTexture* verticesPtr;
            D3D11_MAPPED_SUBRESOURCE mappedResource;
            unsigned int TriangleVertexStride = sizeof(VertexPositionColorTexture);
            unsigned int offset = 0;
            float halfWidth = (float)dest.right / 2.0f;
            float halfHeight = (float)dest.bottom / 2.0f;
            float z = 0.1f;
            int w = texture->Width();
            int h = texture->Height();
            float tu = (float)source.right / w;
            float tv = (float)source.bottom / h;
            float hu = (float)source.left / w;
            float hv = (float)source.top / h;
            Vector2d t0 = Vector2d(hu + tu, hv);
            Vector2d t1 = Vector2d(hu + tu, hv + tv);
            Vector2d t2 = Vector2d(hu, hv + tv);
            Vector2d t3 = Vector2d(hu, hv + tv);
            Vector2d t4 = Vector2d(hu, hv);
            Vector2d t5 = Vector2d(hu + tu, hv);
            float ex = (dest.right / 2) + origin.x;
            float ey = (dest.bottom / 2) + origin.y;
            Vector4d v4Color = Vector4d(color.r, color.g, color.b, color.a);
            VertexPositionColorTexture vertices[] =
            {
                { Vector3d(dest.right - ex, -ey,              z), v4Color, t0 },
                { Vector3d(dest.right - ex, dest.bottom - ey, z), v4Color, t1 },
                { Vector3d(-ex,             dest.bottom - ey, z), v4Color, t2 },
                { Vector3d(-ex,             dest.bottom - ey, z), v4Color, t3 },
                { Vector3d(-ex,             -ey,              z), v4Color, t4 },
                { Vector3d(dest.right - ex, -ey,              z), v4Color, t5 },
            };
            auto mp_context = mp_D3D->mp_Context;
            // Lock the vertex buffer so it can be written to.
            mp_context->Map(mp_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
            // Get a pointer to the data in the vertex buffer.
            verticesPtr = (VertexPositionColorTexture*)mappedResource.pData;
            // Copy the data into the vertex buffer.
            memcpy(verticesPtr, (void*)vertices, sizeof(VertexPositionColorTexture) * 6);
            // Unlock the vertex buffer.
            mp_context->Unmap(mp_vertexBuffer, 0);
            // set vertex buffer
            mp_context->IASetVertexBuffers(0, 1, &mp_vertexBuffer, &TriangleVertexStride, &offset);
            // set texture
            mp_context->PSSetShaderResources(0, 1, &texture->mp_SRV);
            // set matrix to shader
            mp_context->UpdateSubresource(mp_matrixBuffer, 0, 0, &spriteMatrix, 0, 0);
            mp_context->VSSetConstantBuffers(0, 1, &mp_matrixBuffer);
            // draw sprite
            mp_context->Draw(6, 0);
        }
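
    One guess I'm considering (unconfirmed): the geometry shader simply stays bound on the context, so the later sprite and text draws, which set only vertex and pixel shaders, still run through the stale GS. Would explicitly unbinding it after the draw be the right fix? Something like:

        // after DrawIndexed, restore the default (no) geometry shader so that
        // subsequent sprite/text draws are unaffected
        context->DrawIndexed(m_indexCount, 0, 0);
        if (gs != nullptr)
            context->GSSetShader(nullptr, nullptr, 0);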

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

    - Use GL face culling (the OpenGL function, not the scene logic)
    - Send only one matrix to the GPU (the combined projection-model-view), reducing MVP calculations from per-vertex to once per model (as it should be)
    - Use interleaved vertices (sketched below)
    - Minimize GL calls where possible, batching where appropriate

    And possibly a few/many others. I am (out of curiosity) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and measured almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious where these optimisation 'tips' actually come into use. My CPU idles around 20-50% during rendering, so I assume I am GPU-bound. Note: I am looking into gDEBugger at the moment. Cross-posted at StackOverflow
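
    For reference, by "interleaved vertices" I mean the usual single-VBO layout, roughly as below (GL 3.x sketch; the Vertex struct, attribute locations, and uploadInterleaved name are just illustrative, and a bound VAO is assumed):

        #include <GL/glew.h>
        #include <cstddef>

        struct Vertex {
            float pos[3];
            float normal[3];
            float uv[2];
        };

        // One buffer, one stride: each vertex's attributes sit contiguously,
        // so the GPU fetches a whole vertex from one region of memory.
        void uploadInterleaved(const Vertex* vertices, GLsizei vertexCount)
        {
            GLuint vbo;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
                         vertices, GL_STATIC_DRAW);

            const GLsizei stride = sizeof(Vertex);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, pos));
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, normal));
            glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, uv));
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glEnableVertexAttribArray(2);
        }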

    Read the article

  • The clock hands of the buffer cache

    - by Tony Davis
    Over a leisurely beer at our local pub, the Waggon and Horses, Phil Factor was holding forth on the esoteric, but strangely poetic, language of SQL Server internals, riddled as it is with 'sleeping threads', 'stolen pages', and 'memory sweeps'. Generally, I remain immune to any twinge of interest in the bowels of SQL Server, reasoning that there are certain things that I don't and shouldn't need to know about SQL Server in order to use it successfully. Suddenly, however, my attention was grabbed by his mention of the 'clock hands of the buffer cache'. Back at the office, I succumbed to a moment of weakness and opened up Google. He wasn't lying.

    SQL Server maintains various memory buffers, or caches. For example, the plan cache stores recently-used execution plans. The data cache in the buffer pool stores frequently-used pages, ensuring that they may be read from memory rather than via expensive physical disk reads. These memory stores are classic LRU (Least Recently Used) buffers, meaning that, for example, the least recently used pages in the data cache become candidates for eviction (after first writing the page to disk if it has changed since being read into the cache). SQL Server clearly needs some mechanism to track which pages are candidates for being cleared out of a given cache when it is getting too large, and it is this mechanism that is somewhat more labyrinthine than I previously imagined.

    Each page that is loaded into the cache has a counter, a miniature "wristwatch", which records how recently it was last used. This wristwatch gets reset to "present time" each time the page gets used, and then, as the page 'ages', it ticks down towards zero, at which point the page can be removed from the cache. But what if SQL Server is suffering memory pressure and urgently needs to free up more space than is represented by zero-counter pages (or plans, etc.)? This is where our 'clock hands' come in.

    Each cache has associated with it a "memory clock". Like most conventional clocks, it has two hands: one "external" clock hand, and one "internal". Slava Oks is very particular in stressing that these names have "nothing to do with the equivalent types of memory pressure". He's right, but the names do, in that peculiar Microsoft tradition, seem designed to confuse. The hands do relate to memory pressure; the cache "eviction policy" is determined by both global and local memory pressures on SQL Server. The "external" clock hand responds to global memory pressure, in other words pressure on SQL Server to reduce the size of its memory caches as a whole. Global memory pressure (which, just to confuse things further, seems sometimes to be referred to as physical memory pressure) can be either external (from the OS) or internal (from the process itself, e.g. due to limited virtual address space). The internal clock hand responds to local memory pressure, in other words the need to reduce the size of a single, specific cache. So, for example, if a particular cache, such as the plan cache, reaches a defined "pressure limit", the internal clock hand will start to turn and a memory sweep will be performed on that cache in order to remove plans from the memory store.

    During each sweep of the hands, the usage counter on the cache entry is reduced in value, effectively moving its "last used" time further into the past (in effect, setting back the wristwatch on the page a couple of hours) and increasing the likelihood that it can be aged out of the cache.

    There is even a special Dynamic Management View, sys.dm_os_memory_cache_clock_hands, which allows you to interrogate the passage of the clock hands. Frequently turning hands equate to excessive memory pressure, which will lead to performance problems. Two hours later, I emerged from this rather frightening journey into the heart of SQL Server memory management, fascinated but still unsure if I'd learned anything that I'd put to any practical use. However, I certainly began to agree that there is something almost Tolkienian in the language of the deep recesses of SQL Server. Cheers, Tony.
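
    For the curious, the general algorithm is easy to sketch outside of SQL Server. The following toy C++ fragment is emphatically an illustration of the clock-sweep idea only, not SQL Server's actual code; the counter values and names are invented:

        #include <vector>

        // Toy clock-sweep eviction: each entry carries a usage counter (the
        // "wristwatch"); each sweep of the hand decrements it, and entries at
        // zero become eviction candidates (dirty ones are written back first).
        struct CacheEntry {
            int  usageCounter;  // reset when the entry is used
            bool dirty;         // changed since it was read into the cache?
        };

        void touch(CacheEntry& e) {
            e.usageCounter = 3; // "present time": invented starting value
        }

        void sweep(std::vector<CacheEntry>& cache) {
            for (auto& e : cache) {
                if (e.usageCounter > 0) {
                    --e.usageCounter;            // age the entry
                } else {
                    if (e.dirty) { /* write the page back to disk first */ }
                    /* the entry is now a candidate for eviction */
                }
            }
        }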

    Read the article

  • Can I use GLFW and GLEW together in the same code

    - by Brendan Webster
    I use the g++ compiler, which could be causing the main problem, but I'm using GLFW for window and input management, and I am using GLEW so that I can use OpenGL 3.x functionality. I loaded in models and then tried to make Vertex and Index buffers for the data, but it turned out that I kept getting segmentation faults in the program. I finally figured out that GLEW just wasn't working with GLFW included. Do they not work together? Also I've done the context creation through GLFW so that may be another factor in the problem.
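
    For what it's worth, the initialisation order I understand to be required is below (sketched with GLFW 3-style calls; with the older GLFW 2 API the same rule applies after glfwOpenWindow). If GLEW is initialised before the context exists, every GL 3.x entry point stays null and the first buffer call segfaults, which would match what I'm seeing:

        #include <GL/glew.h>    // GLEW must be included before other GL headers
        #include <GLFW/glfw3.h>

        int main()
        {
            if (!glfwInit()) return -1;
            GLFWwindow* window = glfwCreateWindow(800, 600, "demo", nullptr, nullptr);
            if (!window) { glfwTerminate(); return -1; }
            glfwMakeContextCurrent(window);   // the context must be current first

            glewExperimental = GL_TRUE;       // expose core-profile entry points
            if (glewInit() != GLEW_OK) { glfwTerminate(); return -1; }

            // ...safe to create vertex and index buffers from here on...
            glfwTerminate();
            return 0;
        }

    Is there more to it than this?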

    Read the article

  • REST or Non-REST on Internal Services

    - by tyndall
    I'm curious whether others have chosen to implement some services internally at their companies as non-REST (SOAP, Thrift, Protocol Buffers, etc.) as a way to auto-generate client libraries/wrappers. I'm on a two-year project. I will be writing maybe 40 services over that period with my team. 10% of those services definitely make sense as REST services, but the other 90% feel like they could be done in either REST or RPC style. Of these 90%, 100% will be .NET talking to .NET. When I think about all the effort for my devs to develop client "wrappers" for REST services, I cringe. WADL and RSDL don't seem to have enough mindshare. Thoughts? Any good discussions of this "internal service" issue online? If you have struggled with this, what general rules for determining REST or non-REST have you used?

    Read the article

  • How to properly render a Frame Buffer to the BackBuffer in Stage3D / AGAL

    - by bigp
    After doing a render pass with RenderToTarget (RTT), how do you properly render that texture buffer to the screen while maintaining original scale / proportions so it doesn't stretch or lose quality? Can an AGAL VertexShader & FragmentShader be written so it's adaptable to any Texture size and Viewport dimensions? I find I'm getting some "blocky" effects in some of my first attempts at "ping-ponging" between two Texture buffers (to create trailing effects). Perhaps I'm not using the UVs correctly between the rendering-to-target and/or the backbuffer? Is there a simpler way just to "splash" the texture on the backbuffer, or is a Quad absolutely necessary (4 vertices, 2 triangles)? If it needs the Quad, should the Texture buffer be fully drawn (0.0 to 1.0 for vertical and horizontal UVs), or only a percentage of it should, like the example below? Texture Buffer U: 0.0 to viewport.width/texturebuffer.width; Texture Buffer V: 0.0 to viewport.height/texturebuffer.height; Thanks!

    Read the article

  • Using raw vertex information for sprites rather than SpriteBatch in XNA

    - by The Communist Duck
    I have been wondering whether using SpriteBatch is the best option. Obviously for prototyping or small games it works well. However, I've been wanting to apply techniques such as shaders and lighting to my game. I know you can use shaders to some extent with SpriteSortMode.Immediate, but I'm not sure if you lose power using that. The other major thing is that you cannot store your vertex data in the graphics memory with buffers. In summary, is there an advantage of using VertexTextureNormal (or whatever they're called) structs for vertex data for 2D sprites, or should I stick with SpriteBatch, provided I wish to use shaders?

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for some books, articles, or tutorials about graphics architecture and graphics pipeline optimizations. It shouldn't be too old (2008 or newer); the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] (too old), [Real-Time Rendering, Akenine-Möller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient multifragment effects on graphics processing units, Louis Frederic Bavoil], and some internet discussions. But there is not much information there and I want to read more. It should contain something about application, driver, memory, and shader-unit communication and data transfers; about vertices and attributes; also the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers, or rasterization. It can also be about OpenGL (not about DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • Swap partition not recognized (The disk drive with UUID=... is not ready yet or not present)

    - by ladaghini
    I think I had an encrypted swap partition, because I chose to encrypt my home directory during the installation. I believe that's what the line with /dev/mapper/cryptswap1 ... in my /etc/fstab is all about. I did something to bork my swap, because on the next boot I got a message (paraphrased):

        The disk drive for /dev/mapper/cryptswap1 is not ready yet or not present.
        Wait to continue. Press S to skip or M to manually recover.

    (As a side note, pressing S or M seemed to do nothing different from just waiting.) Here's what I've tried:

    1. This tutorial on how to fix the swap partition not mounting. However, the mkswap command fails because the device is busy.

    2. So I booted from a live USB, ran GParted to reformat the swap partition (which showed up as an unknown fs type), and chrooted into the broken system to try that tutorial again. I also adjusted /etc/initramfs-tools/conf.d/resume and /etc/fstab to reflect the new UUID generated from formatting the partition as swap. That still didn't work; instead of /dev/mapper/cryptswap1 not present, "The disk drive with UUID=[swap partition's UUID] is not ready yet or not present..."

    3. So I decided to start afresh as though I had never created a swap partition in the first place. From the live USB, I deleted the swap partition altogether (which, again, showed up in GParted as an unknown fs type), removed the swap and cryptswap entries in /etc/fstab, and removed /etc/initramfs-tools/conf.d/resume and /etc/crypttab. At this point the main system shouldn't be considered broken, because there is no swap partition and no instructions to mount one, right? I didn't get any errors during startup. I then followed the same instructions to create and encrypt the swap partition, starting with creating a partition for the swap, though I think fdisk said a reboot was necessary to see the changes.

    I was confident the 3rd process above would work, but the problem still persists. Some relevant info (/dev/sda8 is the swap partition):

    /etc/fstab file:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda6 during installation
        UUID=4c11e82c-5fe9-49d5-92d9-cdaa6865c991 / ext4 errors=remount-ro 0 1
        # /boot was on /dev/sda5 during installation
        UUID=4031413e-e89f-49a9-b54c-e887286bb15e /boot ext4 defaults 0 2
        # /home was on /dev/sda7 during installation
        UUID=d5bbfc6f-482a-464e-9f26-fd213230ae84 /home ext4 defaults 0 2
        # swap was on /dev/sda8 during installation
        UUID=5da2c720-8787-4332-9317-7d96cf1e9b80 none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0

    output of sudo mount:

        /dev/sda6 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        /dev/sda5 on /boot type ext4 (rw)
        /dev/sda7 on /home type ext4 (rw)
        /home/undisclosed/.Private on /home/undisclosed type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=cbae1771abd34009,ecryptfs_fnek_sig=7cefe2f59aab8e58)
        gvfs-fuse-daemon on /home/undisclosed/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=undisclosed)

    output of sudo blkid (note that /dev/sda8 is missing):

        /dev/sda1: LABEL="SYSTEM" UUID="960490E80490CC9D" TYPE="ntfs"
        /dev/sda2: UUID="D4043140043126C0" TYPE="ntfs"
        /dev/sda3: LABEL="Shared" UUID="80F613E1F613D5EE" TYPE="ntfs"
        /dev/sda5: UUID="4031413e-e89f-49a9-b54c-e887286bb15e" TYPE="ext4"
        /dev/sda6: UUID="4c11e82c-5fe9-49d5-92d9-cdaa6865c991" TYPE="ext4"
        /dev/sda7: UUID="d5bbfc6f-482a-464e-9f26-fd213230ae84" TYPE="ext4"
        /dev/mapper/cryptswap1: UUID="41fa147a-3e2c-4e61-b29b-3f240fffbba0" TYPE="swap"

    output of sudo fdisk -l:

        Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xdec3fed2

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      409599      203776    7  HPFS/NTFS/exFAT
        /dev/sda2          409600   210135039   104862720    7  HPFS/NTFS/exFAT
        /dev/sda3       210135040   415422463   102643712    7  HPFS/NTFS/exFAT
        /dev/sda4       415424510   625141759   104858625    5  Extended
        /dev/sda5       415424512   415922175      248832   83  Linux
        /dev/sda6       415924224   515921919    49998848   83  Linux
        /dev/sda7       515923968   621389823    52732928   83  Linux
        /dev/sda8       621391872   625141759     1874944   82  Linux swap / Solaris

        Disk /dev/mapper/cryptswap1: 1919 MB, 1919942656 bytes
        255 heads, 63 sectors/track, 233 cylinders, total 3749888 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xaf5321b5

    /etc/initramfs-tools/conf.d/resume file:

        RESUME=UUID=5da2c720-8787-4332-9317-7d96cf1e9b80

    /etc/crypttab file:

        cryptswap1 /dev/sda8 /dev/urandom swap,cipher=aes-cbc-essiv:sha256

    output of sudo swapon -as:

        Filename                        Type            Size    Used    Priority
        /dev/mapper/cryptswap1          partition       1874940 0       -1

    output of sudo free -m:

                     total       used       free     shared    buffers     cached
        Mem:          1476       1296        179          0         35        671
        -/+ buffers/cache:        590        886
        Swap:         1830          0       1830

    So, how can this be fixed?

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short: Do triangle strips and tangent-space normal mapping go together? According to quite a lot of tutorials on bump mapping, it seems common practice to derive tangent-space matrices in a vertex program, transform the light direction vector(s) to tangent space, and then pass them on to a fragment program. However, if one were using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit at border edges and would thus require more than one normal to derive the tangent-space matrices that fragment programs interpolate between. Is there any reasonable way to avoid duplicate vertices in your buffer and still use tangent-space normal mapping? Which do you think is better: having normals and tangents encoded in the assets, and simply optimizing the geometry handling to alleviate the cost of duplicate vertices, or using triangle strips and computing normals/tangents entirely at run time? Thinking about it, the more reasonable answer seems to be the first one, but then why might my professor still be fussing about triangle strips when it seems so obvious?
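
    For context, the per-triangle tangent computation I mean is roughly the following (C++ sketch; Vec3 is a stand-in for the engine's own vector type, and degenerate UVs are not handled):

        struct Vec3 { float x, y, z; };

        // Tangent of one triangle from its positions p[0..2] and UVs (u, v),
        // via the usual solve of the edge / delta-UV system.
        Vec3 triangleTangent(const Vec3 p[3], const float u[3], const float v[3])
        {
            Vec3 e1 = { p[1].x - p[0].x, p[1].y - p[0].y, p[1].z - p[0].z };
            Vec3 e2 = { p[2].x - p[0].x, p[2].y - p[0].y, p[2].z - p[0].z };
            float du1 = u[1] - u[0], dv1 = v[1] - v[0];
            float du2 = u[2] - u[0], dv2 = v[2] - v[0];
            float r = 1.0f / (du1 * dv2 - du2 * dv1); // assumes non-degenerate UVs
            return { r * (dv2 * e1.x - dv1 * e2.x),
                     r * (dv2 * e1.y - dv1 * e2.y),
                     r * (dv2 * e1.z - dv1 * e2.z) };
        }

    A shared vertex then gets the average of its triangles' tangents, which is exactly what breaks at UV seams and forces duplicated vertices.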

    Read the article

  • What is the benefit of triple buffering?

    - by user782220
    I read everything written in a previous question. From what I understand, in double buffering the program must wait until the finished drawing is copied or swapped before starting the next drawing. In triple buffering the program has two back buffers and can immediately start drawing in the one that is not involved in such copying. But with triple buffering, if you're in a situation where you can take advantage of the third buffer, doesn't that suggest you are drawing frames faster than the monitor can refresh? In that case you don't actually get a higher frame rate. So what is the benefit of triple buffering?

    Read the article

  • Workaround the flip queue (AKA pre-rendered frames) in OpenGL?

    - by user41500
    It appears that some drivers implement a "flip queue" such that, even with vsync enabled, the first few calls to swap buffers return immediately (queuing those frames for later use). It is only after this queue is filled that buffer swaps will block to synchronize with vblank. This behavior is detrimental to my application. It creates latency. Does anyone know of a way to disable it or a workaround for dealing with it? The OpenGL Wiki on Swap Interval suggests a call to glFinish after the swap but I've had no such luck with that trick.
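
    The only other workaround I can think of is a fence-based variant of the same idea: after the swap, insert a sync object and block until the GPU has actually consumed the frame, so at most one frame can ever sit in the queue. A sketch (untested; requires GL 3.2+ or ARB_sync, and the platform-specific swap call is elided):

        // After swapping, fence the command stream and wait for the GPU to
        // reach the fence, preventing the driver from queuing further frames.
        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000ull); // wait up to 1 s
        glDeleteSync(fence);

    Does anyone know whether this reliably defeats the flip queue, or is the driver's behavior here just as unspecified as with glFinish?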

    Read the article

  • Frame timing for GLFW versus GLUT

    - by linello
    I need a library which ensures that the timing between frames is as constant as possible during a visual psychophysics experiment. This is usually done by synchronizing the refresh rate of the screen with the main loop. For example, if my monitor runs at 60Hz I would like to specify that frequency to my framework. For example, if my game loop is the following:

        void gameloop() {
            // do some computation
            printDeltaT();
            flipBuffers(); // pseudocode for the swap
        }

    I would like the printed time interval to be constant. Is this possible with GLFW?
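
    What I have so far is the obvious approach: request a swap interval of 1 and measure the delta around the blocking swap call. This is a GLFW 3-style sketch, and I'm aware the driver may override the swap interval:

        #include <GLFW/glfw3.h>
        #include <cstdio>

        int main()
        {
            if (!glfwInit()) return -1;
            GLFWwindow* win = glfwCreateWindow(640, 480, "timing", nullptr, nullptr);
            if (!win) { glfwTerminate(); return -1; }
            glfwMakeContextCurrent(win);
            glfwSwapInterval(1);              // one buffer swap per vertical retrace

            double last = glfwGetTime();
            while (!glfwWindowShouldClose(win)) {
                // ...do some computation and draw...
                glfwSwapBuffers(win);         // should block until the next vblank
                glfwPollEvents();
                double now = glfwGetTime();
                std::printf("deltaT = %.3f ms\n", (now - last) * 1000.0);
                last = now;
            }
            glfwTerminate();
            return 0;
        }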

    Read the article

  • Can I animate render targets or the swap chain?

    - by Eric F.
    I want to animate some synthetic video bits to fullscreen w/o tearing. Can I set up D3D 9/10/11 in exclusive mode, and have it present a series of buffers that I'm writing to? I know how to copy system memory bits into a texture, then draw that texture as a fullscreen quad, but it seems like overkill. Why should I use the triangle rasterizer when I want to do something so simple? All I want to do is set up a long (4-8 buffer) swapchain and set the bits of the back buffer that is about to be displayed. Or, I want to allocate 4-8 RenderTargets, and on each frame, copy the bits from system memory to the RenderTarget, then set it as the next thing to display. I've never seen or heard about anybody doing this, but it seems so dead simple!
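
    To be concrete, the kind of thing I imagine writing is roughly this sketch (my guess, not tested: it assumes a staging texture created with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_WRITE, matching the back buffer's size and format):

        // Copy system-memory bits into a staging texture, then blit it onto the
        // back buffer with CopyResource: no triangles, no rasterizer.
        #include <d3d11.h>
        #include <cstring>

        void presentFrame(IDXGISwapChain* swapChain, ID3D11DeviceContext* ctx,
                          ID3D11Texture2D* staging, const void* srcBits,
                          UINT rowBytes, UINT height)
        {
            D3D11_MAPPED_SUBRESOURCE mapped;
            if (SUCCEEDED(ctx->Map(staging, 0, D3D11_MAP_WRITE, 0, &mapped)))
            {
                const char* src = static_cast<const char*>(srcBits);
                char* dst = static_cast<char*>(mapped.pData);
                for (UINT y = 0; y < height; ++y)   // honour the driver's row pitch
                    std::memcpy(dst + y * mapped.RowPitch, src + y * rowBytes, rowBytes);
                ctx->Unmap(staging, 0);
            }
            ID3D11Texture2D* backBuffer = nullptr;
            swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
            ctx->CopyResource(backBuffer, staging); // GPU-side copy onto the back buffer
            backBuffer->Release();
            swapChain->Present(1, 0);               // vsynced present, no tearing
        }

    Is there some reason this route is a bad idea compared to the fullscreen quad?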

    Read the article

  • Recommended 2D Game Engine for prototyping

    - by Thomas Dufour
    What high-level game engine would you recommend for developing a 2D game prototype on Windows? (or Mac/Linux if you wish) The kind of things I mean by "high-level" include (but are definitely not limited to):

    - not having to manage low-level stuff like screen buffers and graphics contexts
    - having an API to draw geometric shapes
    - well, I was going to omit it, but I guess being based on an actual "high-level" language is a plus (automatic resource management and the existence of a reasonable set of data structures in the standard library come to mind)

    It seems to me that Flash is the proverbial elephant in the room for this query, but I'd very much like to see different answers based on all kinds of languages or SDKs.

    Read the article

  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows:

        struct GBuffer
        {
            float4 Depth    : DEPTH0;   // depth render target
            float4 Normal   : COLOR0;   // normal render target
            float4 Diffuse  : COLOR1;   // diffuse render target
            float4 Specular : COLOR2;   // specular render target
        };

    This works fine for flat surfaces, but I'm trying to implement relief mapping, which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target; however, this has no impact on z-culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?

    Read the article

  • What utility is like Ten Clips, providing an enumerated clipboard?

    - by Aaron Newton
    A very useful (Windows) utility I use is TenClips - http://www.paludour.net/TenClips.html It allows you to easily create enumerated clipboards/emacs-like buffers using ctrl + f1, ctrl + f2, ctrl + f3, etc.: copy to the clipboard in the first buffer, switch to the second buffer, copy without losing the first buffer, switch back to the first buffer and paste, switch to the second buffer and paste, and so forth. Does something like this exist for Ubuntu? The closest post I could find was "Looking for an application that saves clipboard history", which recommended Parcellite (http://parcellite.sourceforge.net/?page_id=2) - which keeps the history - but this is not quite what I'm after. If not, I might make this a pet project :D

    Read the article

  • What are some good, simple examples for queues?

    - by Michael Ekstrand
    I'm teaching CS2 (Java and data structures), and am having some difficulty coming up with good examples to use when teaching queues. The two major applications I use them for are multithreaded message passing (but MT programming is out of scope for the course), and BFS-style algorithms (and I won't be covering graphs until later in the term). I also want to avoid contrived examples. Most things that I think of, if I were actually going to solve them in a single-threaded fashion I would just use a list rather than a queue. I tend to only use queues when processing and discovery are interleaved (e.g. search), or in other special cases like length-limited buffers (e.g. maintaining last N items). To the extent practical, I am trying to teach my students good ways to actually do things in real programs, not just toys to show off a feature. Any suggestions of good, simple algorithms or applications of queues that I can use as examples but that require a minimum of other prior knowledge?
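
    For example, the "last N items" case I mentioned is about the simplest honest use of a queue I know; sketched here in C++ for brevity (the Java translation onto ArrayDeque is direct):

        #include <deque>
        #include <string>

        // Bounded history: keeps only the most recent `capacity` entries,
        // e.g. a "recently opened files" menu.
        class RecentItems {
            std::deque<std::string> items_;
            std::size_t capacity_;
        public:
            explicit RecentItems(std::size_t capacity) : capacity_(capacity) {}
            void add(const std::string& s) {
                items_.push_back(s);          // newest entry at the back
                if (items_.size() > capacity_)
                    items_.pop_front();       // evict the oldest from the front
            }
            const std::deque<std::string>& items() const { return items_; }
        };

    But that is still close to a toy; I'd like more examples of that calibre.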

    Read the article

  • Take a single snapshot from a webcam with a delay

    - by cedivad
    I use gst-launch-0.10 v4l2src num-buffers=1 ! jpegenc ! filesink location=$HOME/Desktop/test.jpg to take snapshots. It works well. However, in some lighting situations I need to drop some of the first frames the webcam outputs, so that the webcam's white balance doesn't leave me with an unviewable image. Do you know how I could do that? With the cheese GUI I can do it without any problem, but I need to automate this via the CLI. Many thanks.

    Read the article

  • SQL SERVER - Data Pages in Buffer Pool - Data Stored in Memory Cache

    This will drop all the clean buffers so we will be able to start again from there. Now, run the following script and check the execution plan of the query. Have you ever wondered what types of data are there in your cache? During SQL Server Trainings, I am usually asked if there is any [...]

    Read the article
