Search Results

Search found 955 results on 39 pages for 'gpu acceleration'.

Page 31/39 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • OpenCL or OpenGL – which one to use?

    - by Malte Schledjewski
    My problem involves a black-and-white image with a black area in the middle. I have never worked with OpenGL or OpenCL before, so I do not know which one to choose. I want to put some white circles over the area and check at the end whether the whole image is white. I will try many combinations, so I want to use the GPU because of its parallelism. Should I use OpenGL and create the circle as a texture and put it on top of the image, or should I write some OpenCL kernels which work on the pixels/entries in the matrix?
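
    A minimal CPU sketch of the computation being described (not part of the original question; the names are made up): stamp white discs into an 8-bit image, then test whether any black pixel remains. Either GPU route, an OpenCL kernel per pixel or OpenGL drawing circles into a framebuffer and reading it back, ends up parallelizing these same two loops.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Circle { int cx, cy, r; };

    // Paint a filled white disc into a single-channel image (0 = black, 255 = white).
    void stampCircle(std::vector<uint8_t>& img, int w, int h, Circle c) {
        for (int y = std::max(0, c.cy - c.r); y <= std::min(h - 1, c.cy + c.r); ++y)
            for (int x = std::max(0, c.cx - c.r); x <= std::min(w - 1, c.cx + c.r); ++x)
                if ((x - c.cx) * (x - c.cx) + (y - c.cy) * (y - c.cy) <= c.r * c.r)
                    img[y * w + x] = 255;
    }

    // The final coverage test: true only if no black pixel remains.
    bool allWhite(const std::vector<uint8_t>& img) {
        for (uint8_t p : img)
            if (p != 255) return false;
        return true;
    }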

    Read the article

  • Do GLSL geometry shaders work on the GMA X3100 under OS X?

    - by GameFreak
    I am trying to use a trivial geometry shader, but when run in Shader Builder on a laptop with a GMA X3100 it falls back and uses the software renderer. According to this document, the GMA X3100 does support EXT_geometry_shader4. The input is POINTS and the output is LINE_STRIP. What would be required to get it to run on the GPU (if possible)? uniform vec2 offset; void main() { gl_Position = gl_PositionIn[0]; EmitVertex(); gl_Position = gl_PositionIn[0] + vec4(offset.x,offset.y,0,0); EmitVertex(); EndPrimitive(); }

    Read the article

  • Updating a CUDA 4.0 project to CUDA 4.2

    - by aljndrrr
    I have a VS2010 project that was tested with CUDA 4.0. Today I installed CUDA 4.2 and I want to update this project. The problem is that when I try to run the project it asks me for cudart32_40_17.dll, but since this is CUDA 4.2, the only DLL in my folders (C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.2\bin) is cudart32_42_9.dll. I have already set the Build Customizations to CUDA 4.2 and it compiles without any problem; the only problem is that when I try to run it, the app asks me for the previous version of the DLL. Is there a way to specify that the project must use cudart32_42_9.dll?
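
    One run-time sanity check worth knowing about (a sketch, not part of the original question): cudaRuntimeGetVersion reports which CUDA runtime the process actually loads; if it still reports 4.0, some object file or dependency is presumably still linked against the old cudart.

    #include <cstdio>
    #include <cuda_runtime_api.h>

    int main() {
        int runtimeVersion = 0, driverVersion = 0;
        // Reported as major*1000 + minor*10, e.g. 4020 for CUDA 4.2.
        cudaRuntimeGetVersion(&runtimeVersion);
        cudaDriverGetVersion(&driverVersion);
        std::printf("CUDA runtime %d, driver %d\n", runtimeVersion, driverVersion);
        return 0;
    }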

    Read the article

  • TMS320C64x quick-start reference for programmers

    - by osgx
    Hello. Is there any quick-start guide for programmers for writing DSP-accelerated applications for the TMS320C64x? I have a program with a custom algorithm (not FFT or the usual filtering) and I want to accelerate it using a multi-DSP coprocessor. So, how should I modify the source to move computation from the main CPU to the DSPs? What limitations are there for DSP-running code? I have some experience with CUDA. In CUDA I have to mark every function as being host, device, or an entry point for the device (kernel). There are also functions to launch kernels and to upload/download data to/from the GPU, and there are some limitations for device code, described in the CUDA reference manual. I hope there is a similar interface and documentation for the DSP.

    Read the article

  • Java2D OpenGL Hardware Acceleration Doesn't Work

    - by Aaron
    It doesn't work with OpenGL, even with the simplest of programs. Here is what I am doing: java -Dsun.java2d.opengl=True -jar Java2Demo.jar (Java2Demo.jar is usually included with the JDK.) The text output is: OpenGL pipeline enabled for default config on screen 0. When I don't pass in the above VM argument, things work fine (but slowly). When I do pass in the above argument, nothing shows up... If I move the window around it captures whatever image it was on top of and jumbles it into nonsense. I'm running Windows XP Pro SP3 (Microsoft Windows XP [Version 5.1.2600]) (under Parallels on OS X 10.5.8). I used "Geeks3D GPU Caps Viewer", which tells me I have OpenGL version 2.0 NVIDIA-1.5.48. I have tried this with two versions of the JVM. First: java version "1.6.0_13" Java(TM) SE Runtime Environment (build 1.6.0_13-b03) Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode); and second: java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02) Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)

    Read the article

  • Matrix inversion in OpenCL

    - by buchtak
    Hi, I am trying to accelerate some computations using OpenCL, and part of the algorithm consists of inverting a matrix. Is there any open-source library or freely available code to compute the LU factorization (LAPACK dgetrf and dgetri) of a matrix, or general inversion, written in OpenCL or CUDA? The matrix is real and square but doesn't have any other special properties besides that. So far, I've managed to find only basic BLAS matrix-vector operation implementations on the GPU. The matrix is rather small, only about 60-100 rows and columns, so it could be computed faster on the CPU, but it's used roughly in the middle of the algorithm, so I would have to transfer it to the host, calculate the inverse, and then transfer the result back to the device, where it's then used in much larger computations.
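
    For a matrix of only 60-100 rows, a plain CPU inverse is often cheaper than the host-device round trip described above. A minimal Gauss-Jordan inversion with partial pivoting (a sketch, not part of the original question):

    #include <cmath>
    #include <utility>
    #include <vector>

    // In-place Gauss-Jordan inversion with partial pivoting. 'a' holds an n*n
    // row-major matrix on entry and its inverse on successful return.
    bool invertInPlace(std::vector<double>& a, int n) {
        std::vector<double> inv(n * n, 0.0);
        for (int i = 0; i < n; ++i) inv[i * n + i] = 1.0;      // start from the identity

        for (int col = 0; col < n; ++col) {
            // Partial pivoting: pick the largest remaining entry in this column.
            int pivot = col;
            for (int r = col + 1; r < n; ++r)
                if (std::fabs(a[r * n + col]) > std::fabs(a[pivot * n + col])) pivot = r;
            if (std::fabs(a[pivot * n + col]) < 1e-12) return false;   // (numerically) singular

            if (pivot != col)
                for (int c = 0; c < n; ++c) {
                    std::swap(a[col * n + c], a[pivot * n + c]);
                    std::swap(inv[col * n + c], inv[pivot * n + c]);
                }

            const double d = a[col * n + col];
            for (int c = 0; c < n; ++c) { a[col * n + c] /= d; inv[col * n + c] /= d; }

            // Eliminate this column from every other row.
            for (int r = 0; r < n; ++r) {
                if (r == col) continue;
                const double f = a[r * n + col];
                for (int c = 0; c < n; ++c) {
                    a[r * n + c]   -= f * a[col * n + c];
                    inv[r * n + c] -= f * inv[col * n + c];
                }
            }
        }
        a = std::move(inv);                                     // hand back the inverse
        return true;
    }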

    Read the article

  • Fastest sort of fixed length 6 int array

    - by kriss
    While answering another StackOverflow question (this one) I stumbled upon an interesting sub-problem. What is the fastest way to sort an array of 6 ints? As the question is very low level (it will be executed by a GPU): we can't assume libraries are available (and the call itself has its cost), only plain C; to avoid emptying the instruction pipeline (which has a very high cost) we should probably minimize branches, jumps, and every other kind of control-flow break (like those hidden behind sequence points in && or ||); room is constrained, and minimizing register and memory use is an issue, so ideally an in-place sort is probably best. Really this question is a kind of golf where the goal is not to minimize source length but execution speed. I call it 'Zening' code, as used in the title of the book Zen of Code Optimization by Michael Abrash and its sequels.
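
    The kind of solution this question usually converges on is a fixed sorting network: 12 compare-exchanges are enough for 6 elements, and each exchange can be written with min/max so it compiles to conditional moves rather than branches. A sketch (not part of the original question):

    static inline void sort6(int* d) {
    #define SWAP(i, j) do { int lo = d[i] < d[j] ? d[i] : d[j];  \
                            int hi = d[i] < d[j] ? d[j] : d[i];  \
                            d[i] = lo; d[j] = hi; } while (0)
        SWAP(1, 2); SWAP(4, 5);   // sort inside each half...
        SWAP(0, 2); SWAP(3, 5);
        SWAP(0, 1); SWAP(3, 4);   // ...now (0,1,2) and (3,4,5) are each sorted
        SWAP(1, 4); SWAP(0, 3);   // merge the two sorted triples
        SWAP(2, 5); SWAP(1, 3);
        SWAP(2, 4); SWAP(2, 3);
    #undef SWAP
    }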

    Read the article

  • OpenGL Shading Language backwards compatibility

    - by Luca
    I've noticed that my GLSL shaders do not compile when the GLSL version is lower than 130. What are the most critical elements for having a backward-compatible shader source? I don't need full backward compatibility, but I'd like to understand the main guidelines for having simple (forward-compatible) shaders running on GPUs with GLSL lower than 130. Of course the problem could be partly solved with the preprocessor: #if __VERSION__ < 130 #define VERTEX_IN attribute #else #define VERTEX_IN in #endif But there are probably many other issues that I'm not aware of. Thank you
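
    One host-side trick that complements the preprocessor approach (a sketch, not part of the original question; assumes an OpenGL 2.0+ context and a loader such as GLEW): glShaderSource accepts an array of strings, so a version-specific prologue can be prepended without editing the shader file itself.

    #include <GL/glew.h>

    // GLSL < 1.30 path: no #version line, so the compiler defaults to the legacy profile.
    static const GLchar* prologueLegacy = "#define VERTEX_IN attribute\n";
    // GLSL >= 1.30 path: #version must be the very first line of the concatenated source.
    static const GLchar* prologueModern = "#version 130\n#define VERTEX_IN in\n";

    void compileWithPrologue(GLuint shader, const GLchar* body, bool modernGLSL) {
        const GLchar* sources[2] = { modernGLSL ? prologueModern : prologueLegacy, body };
        glShaderSource(shader, 2, sources, nullptr);  // nullptr: strings are NUL-terminated
        glCompileShader(shader);
    }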

    Read the article

  • 3 or 4 monitors with Nvidia and Ubuntu

    - by Jason
    I saw that you are (were?) running 4 monitors with Ubuntu 8.10 and two Nvidia cards (http://stackoverflow.com/questions/27113/how-to-use-3-monitors). I was curious if you were doing this with Xinerama, a hacked-up TwinView config, multiple X screens, or some other method? Does it work with Compiz? I intend to run my Dell 30" in the middle with two 1280x1024 on the sides and continue to use one X screen, and run Compiz, on Ubuntu 9.04. Currently, I am using 2 monitors with TwinView and Compiz, which runs great. I just can't get the third monitor running (unless I enable it in its own X screen, and then enable Xinerama to allow windows to be dragged as if it were all one X screen, but this breaks Compiz, and I don't care much for having separate X screens). I am very interested in knowing how you set up 4 monitors with 2 GPUs. Thanks!

    Read the article

  • HLSL: Enforce Constant Register Limit at Compile Time

    - by Andrew Russell
    In HLSL, is there any way to limit the number of constant registers that the compiler uses? Specifically, if I have something like: float4 foobar[300]; In a vs_2_0 vertex shader, the compiler will merrily generate the effect with more than 256 constant registers. But a 2.0 vertex shader is only guaranteed to have access to 256 constant registers, so when I try to use the effect, it fails in an obscure and GPU-dependent way at runtime. I would much rather have it fail at compile time. This problem is especially annoying as the compiler itself allocates constant registers behind the scenes, on top of the ones I am asking for. I have to check the assembly to see if I'm over the limit. Ideally I'd like to do this in HLSL (I'm using the XNA content pipeline), but if there's a flag that can be passed to the compiler that would also be interesting.

    Read the article

  • HPC (mainly on Java)

    - by Insectatorious
    I'm looking for some way of using the number-crunching ability of a GPU (with Java perhaps?) in addition to using the multiple cores that the target machine has. I will be working on implementing (at present) the A* Algorithm but in the future I hope to replace it with a Genetic Algorithm of sorts. I've looked at Project Fortress but as I'm building my GUI in JavaFX, I'd prefer not to stray too far from a JVM. Of course, should no feasible solution be available, I will migrate to the easiest solution to implement.

    Read the article

  • Measuring device drivers' CPU/IO utilization caused by my program

    - by Lior Kogan
    Sometimes code can load device drivers to the point where the system is unresponsive. Lately I've optimized some WIN32/VC++ code which made the system almost unresponsive. The CPU usage, however, was very low. The reason was thousands of creations and destructions of GDI objects (pens, brushes, etc.). Once I refactored the code to create all objects only once, the system became responsive again. This leads me to the question: is there a way to measure the CPU/IO usage of device drivers (GPU/disk/etc.) for a given program / function / line of code?
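
    Not a driver-level profiler, but one concrete probe for exactly this kind of GDI churn (a sketch, not part of the original question): GetGuiResources reports how many GDI and USER handles a process currently owns, so polling it around a suspect code path makes object churn or leaks visible.

    #include <windows.h>
    #include <cstdio>

    // Print the current GDI/USER handle counts for this process.
    void reportGdiLoad(const char* tag) {
        DWORD gdi  = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
        DWORD user = GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS);
        std::printf("[%s] GDI handles: %lu, USER handles: %lu\n", tag, gdi, user);
    }

    // usage: reportGdiLoad("before paint"); /* suspect code */ reportGdiLoad("after paint");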

    Read the article

  • DirectX 9 HLSL vs. DirectX 10 HLSL: syntax the same?

    - by numerical25
    For the past month or so, I have been busting my behind trying to learn DirectX. So I've been mixing back and forth between DirectX 9 and 10. One of the major changes I've seen in the two is how to process vectors in the graphics card. One of the drastic changes I notice is how you get the GPU to recognize your structs. In DirectX 9, you define the Flexible Vertex Formats. Your typical setup would be like this: #define CUSTOMFVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE) In DirectX 10, I believe the equivalent is the input vertex description: D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0} }; I notice in DirectX 10 that it is more descriptive. Besides this, what are some of the drastic changes made, and is the HLSL syntax the same for both?

    Read the article

  • How do I use compiler intrinsic __fmul_?

    - by Eric Thoma
    I am writing a massively parallel GPU application. I have been optimizing it by hand. I got a 20% performance increase with __fdividef(x, y), and according to the CUDA C Programming Guide (section C.2.1), using the similar functions for multiplication and addition is also beneficial. The function is stated as "__fmul_[rn,rz,ru,rd](x,y)", whereas __fdividef(x,y) was not stated with brackets in its arguments. I was wondering, what are those brackets? If I run the simple code: int t = __fmul_(5,4); I get a compiler error about how __fmul_ is undefined. I have the CUDA runtime included, so I don't think it is a setup thing; rather it is something to do with those square brackets. How do I correctly use this function? Thank you.

    Read the article

  • Image/"most resembling pixel" search optimization?

    - by SigTerm
    The situation: Let's say I have an image A, say, 512x512 pixels, and image B, 5x5 or 7x7 pixels. Both images are 24bit rgb, and B have 1bit alpha mask (so each pixel is either completely transparent or completely solid). I need to find within image A a pixel which (with its' neighbors) most closely resembles image B, OR the pixel that probably most closely resembles image B. Resemblance is calculated as "distance" which is sum of "distances" between non-transparent B's pixels and A's pixels divided by number of non-transparent B's pixels. Here is a sample SDL code for explanation: struct Pixel{ unsigned char b, g, r, a; }; void fillPixel(int x, int y, SDL_Surface* dst, SDL_Surface* src, int dstMaskX, int dstMaskY){ Pixel& dstPix = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*x + dst->pitch*y)); int xMin = x + texWidth - searchWidth; int xMax = xMin + searchWidth*2; int yMin = y + texHeight - searchHeight; int yMax = yMin + searchHeight*2; int numFilled = 0; for (int curY = yMin; curY < yMax; curY++) for (int curX = xMin; curX < xMax; curX++){ Pixel& cur = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*(curX & texMaskX) + dst->pitch*(curY & texMaskY))); if (cur.a != 0) numFilled++; } if (numFilled == 0){ int srcX = rand() % src->w; int srcY = rand() % src->h; dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*srcX + src->pitch*srcY)); dstPix.a = 0xFF; return; } int storedSrcX = rand() % src->w; int storedSrcY = rand() % src->h; float lastDifference = 3.40282347e+37F; //unsigned char mask = for (int srcY = searchHeight; srcY < (src->h - searchHeight); srcY++) for (int srcX = searchWidth; srcX < (src->w - searchWidth); srcX++){ float curDifference = 0; int numPixels = 0; for (int tmpY = -searchHeight; tmpY < searchHeight; tmpY++) for(int tmpX = -searchWidth; tmpX < searchWidth; tmpX++){ Pixel& tmpSrc = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*(srcX+tmpX) + src->pitch*(srcY+tmpY))); Pixel& tmpDst = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*((x + dst->w + tmpX) & dstMaskX) + dst->pitch*((y + dst->h + tmpY) & dstMaskY))); if (tmpDst.a){ numPixels++; int dr = tmpSrc.r - tmpDst.r; int dg = tmpSrc.g - tmpDst.g; int db = tmpSrc.g - tmpDst.g; curDifference += dr*dr + dg*dg + db*db; } } if (numPixels) curDifference /= (float)numPixels; if (curDifference < lastDifference){ lastDifference = curDifference; storedSrcX = srcX; storedSrcY = srcY; } } dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*storedSrcX + src->pitch*storedSrcY)); dstPix.a = 0xFF; } This thing is supposed to be used for texture generation. Now, the question: The easiest way to do this is brute force search (which is used in example routine). But it is slow - even using GPU acceleration and dual core cpu won't make it much faster. It looks like I can't use modified binary search because of B's mask. So, how can I find desired pixel faster? Additional Info: It is allowed to use 2 cores, GPU acceleration, CUDA, and 1.5..2 gigabytes of RAM for the task. I would prefer to avoid some kind of lengthy preprocessing phase that will take 30 minutes to finish. Ideas?
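
    One alternative to the hand-rolled brute-force loop (a sketch, not part of the original question; assumes OpenCV 3.x or later is acceptable as a dependency): masked template matching. TM_SQDIFF with a mask sums squared differences over the opaque template pixels only, which is essentially the "distance" defined above, up to the constant normalization by the number of opaque pixels.

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // Return the top-left corner of the window in imageA that best matches templB,
    // considering only the pixels where maskB is non-zero.
    cv::Point bestMatch(const cv::Mat& imageA,      // e.g. 512x512, CV_8UC3
                        const cv::Mat& templB,      // e.g. 5x5 or 7x7, CV_8UC3
                        const cv::Mat& maskB) {     // same size as templB, CV_8U (0 or 255)
        cv::Mat score;
        cv::matchTemplate(imageA, templB, score, cv::TM_SQDIFF, maskB);
        double minVal; cv::Point minLoc;
        cv::minMaxLoc(score, &minVal, nullptr, &minLoc, nullptr);
        return minLoc;
    }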

    Read the article

  • Modifying an image with OpenGL?

    - by chmike
    I have a device to acquire X-ray images. Due to some technical constraints, the detector is made of heterogeneous pixel sizes and multiple tilted and partially overlapping tiles. The image is thus distorted. The detector geometry is known precisely. I need a function converting these distorted images into a flat image with a homogeneous pixel size. I have already done this on the CPU, but I would like to give OpenGL a try to use the GPU in a portable way. I have no experience with OpenGL programming, and most of the information I could find on the web was useless for this use case. How should I proceed? How do I do this? Images are 560x860 pixels and we have batches of 720 images to process. I'm on Ubuntu.
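
    Whatever API ends up being used, the core of the correction is a gather-and-interpolate per output pixel. A minimal CPU sketch (not part of the original question; mapFn is a hypothetical stand-in for the known detector geometry); an OpenGL version would bake the same mapping into the texture coordinates of a mesh or a fragment shader and let the GPU's texture unit do the bilinear part across the batch of 720 images.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Clamped bilinear lookup in a single-channel float image stored row-major.
    float bilinear(const std::vector<float>& src, int w, int h, float x, float y) {
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        float fx = x - x0, fy = y - y0;
        auto at = [&](int xi, int yi) {
            xi = std::min(std::max(xi, 0), w - 1);
            yi = std::min(std::max(yi, 0), h - 1);
            return src[yi * w + xi];
        };
        return (1 - fy) * ((1 - fx) * at(x0, y0)     + fx * at(x0 + 1, y0)) +
               fy       * ((1 - fx) * at(x0, y0 + 1) + fx * at(x0 + 1, y0 + 1));
    }

    // mapFn(outX, outY) -> detector (x, y), e.g. returning std::pair<float, float>;
    // it encodes the precisely known detector geometry.
    template <class MapFn>
    void undistort(const std::vector<float>& detector, int dw, int dh,
                   std::vector<float>& out, int ow, int oh, MapFn mapFn) {
        for (int y = 0; y < oh; ++y)
            for (int x = 0; x < ow; ++x) {
                auto m = mapFn(x, y);
                out[y * ow + x] = bilinear(detector, dw, dh, m.first, m.second);
            }
    }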

    Read the article

  • Can I program Nvidia's CUDA using only Python or do I have to learn C?

    - by Aquateenfan
    I guess the question speaks for itself. I'm interested in doing some serious computations but am not a programmer by trade. I can string enough Python together to get done what I want. But can I write a program in Python and have the GPU execute it using CUDA? Or do I have to use some mix of Python and C? The examples on Klöckner's "PyCUDA" webpage had a mix of both Python and C, so I'm not sure what the answer is. If anyone wants to chime in about OpenCL, feel free. I heard about this CUDA business only a couple of weeks ago and didn't know you could use your video cards like this. thx

    Read the article

  • WebGL: adding projection doesn't display object

    - by dazed3confused
    I am having a look at WebGL and trying to render a cube, but I am having a problem when I try to add projection into the vertex shader. I have added an attribute, but when I use it to multiply the modelview and position, it stops displaying the cube. I'm not sure why and was wondering if anyone could help? I've tried looking at a few examples but just can't get this to work. Vertex shader: attribute vec3 aVertexPosition; uniform mat4 uMVMatrix; uniform mat4 uPMatrix; void main(void) { gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0); //gl_Position = uMVMatrix * vec4(aVertexPosition, 1.0); } Fragment shader: #ifdef GL_ES precision highp float; // Not sure why this is required, need to google it #endif uniform vec4 uColor; void main() { gl_FragColor = uColor; } function init() { // Get a reference to our drawing surface canvas = document.getElementById("webglSurface"); gl = canvas.getContext("experimental-webgl"); /** Create our simple program **/ // Get our shaders var v = document.getElementById("vertexShader").firstChild.nodeValue; var f = document.getElementById("fragmentShader").firstChild.nodeValue; // Compile vertex shader var vs = gl.createShader(gl.VERTEX_SHADER); gl.shaderSource(vs, v); gl.compileShader(vs); // Compile fragment shader var fs = gl.createShader(gl.FRAGMENT_SHADER); gl.shaderSource(fs, f); gl.compileShader(fs); // Create program and attach shaders program = gl.createProgram(); gl.attachShader(program, vs); gl.attachShader(program, fs); gl.linkProgram(program); // Some debug code to check for shader compile errors and log them to console if (!gl.getShaderParameter(vs, gl.COMPILE_STATUS)) console.log(gl.getShaderInfoLog(vs)); if (!gl.getShaderParameter(fs, gl.COMPILE_STATUS)) console.log(gl.getShaderInfoLog(fs)); if (!gl.getProgramParameter(program, gl.LINK_STATUS)) console.log(gl.getProgramInfoLog(program)); /* Create some simple VBOs*/ // Vertices for a cube var vertices = new Float32Array([ -0.5, 0.5, 0.5, // 0 -0.5, -0.5, 0.5, // 1 0.5, 0.5, 0.5, // 2 0.5, -0.5, 0.5, // 3 -0.5, 0.5, -0.5, // 4 -0.5, -0.5, -0.5, // 5 -0.5, 0.5, -0.5, // 6 -0.5,-0.5, -0.5 // 7 ]); // Indices of the cube var indicies = new Int16Array([ 0, 1, 2, 1, 2, 3, // front 5, 4, 6, 5, 6, 7, // back 0, 1, 5, 0, 5, 4, // left 2, 3, 6, 6, 3, 7, // right 0, 4, 2, 4, 2, 6, // top 5, 3, 1, 5, 3, 7 // bottom ]); // create vertices object on the GPU vbo = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, vbo); gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW); // Create indicies object on the GPU ibo = gl.createBuffer(); gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo); gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indicies, gl.STATIC_DRAW); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.enable(gl.DEPTH_TEST); // Render scene every 33 milliseconds setInterval(render, 33); } var mvMatrix = mat4.create(); var pMatrix = mat4.create(); function render() { // Set our viewport and clear it before we render gl.viewport(0, 0, canvas.width, canvas.height); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); gl.useProgram(program); // Bind appropriate VBOs gl.bindBuffer(gl.ARRAY_BUFFER, vbo); gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo); // Set the color for the fragment shader program.uColor = gl.getUniformLocation(program, "uColor"); gl.uniform4fv(program.uColor, [0.3, 0.3, 0.3, 1.0]); // // code.google.com/p/glmatrix/wiki/Usage program.uPMatrix = gl.getUniformLocation(program, "uPMatrix"); program.uMVMatrix = gl.getUniformLocation(program, "uMVMatrix"); mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 1.0, 10.0, pMatrix); mat4.identity(mvMatrix); mat4.translate(mvMatrix, [0.0, -0.25, -1.0]); gl.uniformMatrix4fv(program.uPMatrix, false, pMatrix); gl.uniformMatrix4fv(program.uMVMatrix, false, mvMatrix); // Set the position for the vertex shader program.aVertexPosition = gl.getAttribLocation(program, "aVertexPosition"); gl.enableVertexAttribArray(program.aVertexPosition); gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 3*4, 0); // position // Render the Object gl.drawElements(gl.TRIANGLES, 36, gl.UNSIGNED_SHORT, 0); } Thanks in advance for any help

    Read the article

  • How does Photoshop (or drawing programs in general) blit?

    - by user146780
    I'm getting ready to make a drawing application for Windows. I'm just wondering: do drawing programs have a memory bitmap which they lock, then set each pixel, then blit? I don't understand how Photoshop can move entire layers without lag or flicker without using hardware acceleration. Also, in a program like Expression Design, I could have 200 shapes and move them around all at once with no lag. I'm really wondering how this can be done without GPU help. I don't think super-efficient algorithms alone could explain that. Thanks

    Read the article

  • How much market share does OpenGL ES 2.0 have on iPhone OS hardware (iPhone/iPod Touch)?

    - by Eonil
    I'm planning to make a game for the App Store, so I'm studying GLES. But the GLES 1.1 and 2.0 APIs differ in how they handle some features (and in their limitations). I don't have enough time to consider both of them, so I have to choose one. 2.0 is clearly better from a developer's point of view, but I'm worried about its market share. I hope most users have moved to the newer SGX-based hardware, but in fact, I don't know. Does anybody know where to find such hardware share data for iPhone OS devices (iPhone/iPod Touch, per GPU)? Please let me know.

    Read the article

  • C++11: thread_local or array of OpenCL 1.2 cl_kernel objects?

    - by user926918
    I need to run several C++11 threads (GCC 4.7.1) in parallel on the host. Each of them needs to use a device, say a GPU. As per the OpenCL 1.2 spec (p. 357): All OpenCL API calls are thread-safe except clSetKernelArg. clSetKernelArg is safe to call from any host thread, and is safe to call re-entrantly so long as concurrent calls operate on different cl_kernel objects. However, the behavior of the cl_kernel object is undefined if clSetKernelArg is called from multiple host threads on the same cl_kernel object at the same time. An elegant way would be to use thread_local cl_kernel objects, and the other way I can think of is to use an array of these objects such that the i-th thread uses the i-th object. As I have not implemented either of these before, I was wondering whether either of the two is good, or whether there are better ways of getting things done. TIA, S
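
    A minimal sketch of the thread_local route (not part of the original question; the kernel name and argument are made up). One caveat: GCC only accepts the C++11 thread_local keyword from 4.8 on, but on 4.7.1 the GNU __thread extension works here because cl_kernel is a plain handle (pointer) type.

    #include <CL/cl.h>

    // One cl_kernel per host thread: clSetKernelArg only races when two threads
    // touch the *same* cl_kernel object, so per-thread kernels side-step the issue.
    void enqueueFromThisThread(cl_program prog, cl_command_queue queue,
                               cl_mem buf, size_t globalSize) {
        thread_local cl_kernel k = nullptr;              // __thread cl_kernel k; on GCC 4.7
        if (!k) {
            cl_int err = CL_SUCCESS;
            k = clCreateKernel(prog, "my_kernel", &err); // hypothetical kernel name
            if (err != CL_SUCCESS) return;
            // In real code, release with clReleaseKernel at thread shutdown.
        }
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        clEnqueueNDRangeKernel(queue, k, 1, nullptr, &globalSize,
                               nullptr, 0, nullptr, nullptr);
    }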

    Read the article

  • OpenGL performance on rendering "virtual gallery" (textures)

    - by maticus
    I have a considerable number (120-240) of 640x480 images that will be displayed as textured flat surfaces (4-vertex polygons) in a 3D environment. About 30-50% of them will be visible in a given frame. It is possible for them to overlap. Nothing else will be present in the environment. The question is: will a modern and/or a few-years-old GPU (let's say a Radeon 9550) cope with that, and what frame rate can I expect? I aim for 20 FPS, but 30-40 would be nice. Would changing the resolution to 320x240 make that more likely? I do not have any previous experience with performance issues of 3D graphics on modern GPUs, and unfortunately I must make a design choice. I don't want to waste time on doing something that couldn't have worked :-)
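
    A quick back-of-envelope texture-memory estimate (not part of the original question; assumes uncompressed RGBA8 and no mipmaps) suggests VRAM, not fill rate, is the first constraint to check on a card of that era:

    #include <cstdio>

    int main() {
        const double perTexMB = 640.0 * 480.0 * 4.0 / (1024.0 * 1024.0);  // ~1.2 MB per RGBA8 texture
        const int counts[2] = { 120, 240 };
        for (int n : counts)
            std::printf("%d textures at 640x480: ~%.0f MB\n", n, n * perTexMB);
        // Dropping to 320x240 (or using DXT compression) cuts the footprint by 4x or more.
        std::printf("240 textures at 320x240: ~%.0f MB\n", 240 * perTexMB / 4.0);
        return 0;
    }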

    Read the article

  • Xubuntu 13.10 64bit - Slow and buggy "log out" process?

    - by MrKatSwordfish
    I'm a Windows convert who has done only a little bit of dabbling in Ubuntu in the past (back in Dapper Drake a few years back). A lot has changed since then, and I've been yearning to jump back into Linux again! So, having just bought a new SSD, I felt that this would be as good a time as any to set up a dual-boot system again. I've messed around with Ubuntu 13.10 a bit, and while Unity has its issues, I think that it still needs some time to develop. I looked into XFCE and liked it a lot, so I went with Xubuntu. I've installed Xubuntu, and for the most part it's running smoothly and is a pleasure to work with. The customization is great and the minimalistic look and feel is really nice! But here's my problem: whenever I select the "Log Out" option from either the application menu or the user profiles menu, my PC comes to a crawl, and the dialog box with all the options (shut down, restart, log out, etc.) takes maybe a minute or more to appear. I click the log out button, my PC is brought to a snail's pace, and I have to wait for what seems like an eternity for the logout options to appear! If I try to open something else (even a terminal window) while it's loading the logout options, that other program won't finish loading until the logout screen finally appears. Keep in mind, this is a pretty much vanilla install of Xubuntu 13.10 64bit, on a PC with an Intel i7, an SSD, 6 GB of DDR3 RAM, and a new AMD 7770 GPU (drivers haven't been installed yet, though). Everything else runs fast, most applications open near-instantly! It must be an issue with how the logout options screen initializes or something, but I'm not sure exactly how I can fix it. Edit - Extra Info: This problem is very consistent when using the "Log Out" buttons in Xubuntu. However, I've found that I'm able to reboot and shut down much more quickly by going through the "Switch User" screen, and using the reboot or shutdown buttons on that screen. I'm nearly certain that it has something to do with the little Log Out options screen that appears when I select Log Out from the menu, and not the actual process of shutting down. So what should I do? I really like XFCE so far, and I've never tried a non-Ubuntu-based distro before, but should I just switch to something else? Is there any known fix for this issue? Are there any workarounds for logging out/shutting down/rebooting via the terminal so that I can avoid this irritating bug? Is there any way that I can monitor the progress of the log out via the terminal, allowing me to see which parts are causing the slow-down? What is the best way to report this bug to someone?

    Read the article

  • WPF vs. WinForms - a Delphi programmer's perspective?

    - by Robert Oschler
    I have read most of the major threads on WPF vs. WinForms and I find myself stuck in the unfortunate ambivalence you can fall into when deciding between the tried-and-true previous tech (WinForms) and its successor (WPF). I am a veteran Delphi programmer of many years who is finally making the jump to C#. My fellow Delphi programmers out there will understand that I am excited to know that Anders Hejlsberg, of Delphi fame, was the architect behind C#. I have a strong addiction to Delphi's VCL custom components, especially those involved in making multi-step wizards and components that act as containers for child components. With that background, I am hoping that those of you who switched from Delphi to C# can help me with my WinForms vs. WPF decision for writing my initial applications. Note that I am very impatient when coding, and things like full-fledged auto-complete and proper debugger support can make or break a project for me, including being able to find readily available information on API features and calls and, even more so, workarounds for bugs. The SO threads and comments in the early 2009 date range give me great concern over WPF when it comes to potential frustrations that could mar my C# UI development coding. On the other hand, spending an inordinate amount of time learning an API tech that is, even if it is not abandoned, soon to be replaced (WinForms) is equally troubling, and I do find the GPU support in WPF tantalizing. Hence my ambivalence. Since I haven't learned either tech yet, I have a rare opportunity to get a fresh start and not have to face the big "unlearning" curve I've seen people mention in various threads when a WinForms programmer makes the move to WPF. On the other hand, if using WPF will just be too frustrating or have other major negative consequences for an impatient RAD developer like myself, then I'll just stick with WinForms until WPF reaches the same level of support and ease of use. To give you a concrete example of my psychology as a programmer, I used VB and subsequently Delphi to avoid altogether the very real pain of coding with MFC, a Windows UI library that many developers suffered through while developing early Windows apps. I have never regretted my luck in avoiding MFC. It would also be comforting to know whether Anders Hejlsberg had a hand in the architecture of WPF and/or WinForms, and whether there are any disparities in the creative vision and ease of use embodied in either code base. Finally, for the Delphi programmers again, let me know how much "IDE shock" I'm in for when using WPF as opposed to WinForms, especially when it comes to debugger support. Any job market comments updated for 2011 would be appreciated too. -- roschler

    Read the article
