Search Results

Search found 955 results on 39 pages for 'gpu acceleration'.

Page 27 of 39

  • random hard disk errors

    - by AugB
    For the past 2 years or so (4-year-old custom build) I've been getting random moments where everything stops responding (or takes a very long time to respond), followed by I/O and "hdd not detected" errors on restart. To fix it, all I usually need to do is unplug my SATA cables from the hdd and mobo and plug them back in again, and the problem disappears, at least for a little while (anywhere from a day to a few months). Sometimes even a startup repair does the job. I've done multiple reformats and have also run chkdsk more times than I can remember, and neither seems to help in the long run. Both drives seem to be exhibiting the same problem. Have both my hdds been "dying" for the past couple of years, even though they are fully functional besides these occasional hiccups? Does the issue lie elsewhere? All feedback is appreciated. System specs: Biostar TPower I45 mobo, 2x WD Caviar 640GB HDDs, Zalman 750W PSU, Radeon 5870 GPU, 2x2GB G.Skill DDR2 RAM, Win7 64.

    Read the article

  • I suspect that my HDD is causing hardlocks, as all other components have been replaced. How can I check this theory and solve the potential cause?

    - by user867814
    I have had this problem for quite a while now, through multiple Linux kernel versions and distributions, as well as replacement of all components aside from my main HDD - RAM, GPU (twice), motherboard, CPU, power supply. What happens is, at some point during the operation of the PC, it will hardlock - everything stops working, the external HDD is not shut down correctly and continues to spin until I unplug it and plug it back in, there are no system/kernel logs of any kind, and nothing else that would suggest a cause. Another reason for my suspicion is that the failures happen almost exclusively during HDD read/write activity - shutdown (it has happened on nearly 1/3 of shutdowns so far, though it's only been a few days), launching programs, and once during operation of apt. I hope the post is descriptive enough; if you need any additional info, ask (and tell me how to prepare/obtain it) and I will provide it. If I'm wrong, point me in the right direction. Thanks in advance.

    Read the article

  • Windows 7 - all USB devices go to sleep in idle mode

    - by dvdx
    A strange thing happened after a few updates to the system: an Intel Rapid Storage SSD firmware update, an Intel Ethernet adapter update, and an Intel GPU update. When the computer turns off the screen (after 5 min), an unknown time later all the USB devices stop working: sound card, mouse, keyboard, etc. I can't turn them back on, so I can't wake up the screen or do anything except turn the computer off and back on. I checked my power-save profile and all is OK there. In Device Manager, I changed the "allow USB to sleep" setting on all the hubs. How can I fix this?

    Read the article

  • Display issues on new OpenSUSE install

    - by user1319182
    I installed OpenSUSE 13.1 on my newly built PC, but the display is just horrible: the edges of the screen are missing - for example, I can't see the whole top part, I can barely read the date, and I see "ctivities" instead of "Activities" (however, when I take a screenshot everything seems to be fine, though the cursor doesn't appear); the characters are sometimes too big and sometimes too small; the cursor is huge; and there are many other strange things. I took a few pictures. I'm using an Intel integrated GPU (HD4400) and have made all the possible updates with YaST. Any idea how I can fix this? Thanks

    Read the article

  • Received a RAMPAGE IV FORMULA MOBO. Possible CPU Socket problems

    - by Tantan
    I recently received a Rampage IV Formula MOBO in a trade. From everything I read online this seems like a pretty nice piece of hardware. There's one thing I'm worried about, though: it appears as though the socket where the CPU goes might have some bent/missing pins. I'm not exactly a computer expert and I wanted to know if this could be any sort of problem. I would post a picture but I apparently don't have enough rep for that... Also, if I did want to move my current rig onto the new MOBO, would it even be worth it? My current computer specs: i3-3220 CPU, 8GB DDR3 RAM, GeForce GTX460 GPU, with a Biostar MOBO!

    Read the article

  • Why are browsers so heavy?

    - by Kaivosukeltaja
    Back in 1998 I had a computer with a 233MHz Pentium MMX CPU and a GFX card with no 3D acceleration. It was able to run games like Quake II at a decent FPS rate. My current computer has tons more performance and a mid-class GPU, yet struggles to reach 20 FPS when rendering a single model inside a skybox with WebGL. Even regular pages with lots of 2D CSS animations bring many modern computers to their metaphorical knees. As a web developer I understand there's a lot going on in a web page, but not what makes it that heavy. Modern browsers compile JavaScript to native machine code before running it, and rendering into a canvas element shouldn't trigger DOM rebuilds, so theoretically it should be a lot faster than it is. What am I missing here, and is it possible to avoid or minimize whatever is making browsers slow, in order to build more efficient websites?

    Read the article

  • Hardware advice for bitmap / OpenGL image processing server?

    - by pdizz
    I am trying to work out a build for a processing server to handle bitmap processing as well as OpenGL rendering for chroma-keying images and Photoshop automation. My searches here and on Google have turned up surprisingly few results, and seeing that there aren't tags for bitmap or image processing, I take it this is a specialized application. The bitmap processing is very CPU-intensive while the chroma-keying and Photoshop work is GPU-intensive. I doubt this is a case of over-optimization, as our company batches thousands of images a day (currently on individual workstations) and any saving in processing time and workstation downtime would be beneficial. Does anyone have any experience with this type of processing server? Are there any special considerations that would go into a build like this, or am I over-thinking it?

    Read the article

  • Best Linux distro for media server on older box

    - by fauxpride
    I have an older machine with these specs: CPU: AMD Athlon X2 @ 2.8 GHz, 2MB L2; RAM: 4 GB DDR2 @ 800 MHz; GPU: Asus 4890 TOP 1 GB. I want to turn the machine into a media server via XBMC (so good video and wireless peripheral driver support would also be a plus), but I also want to use it as an OpenVPN server so I can tunnel RDP to my other Windows machine on the network. I mostly want to use a Debian-based distro (for the convenience of apt) and right now my options are Ubuntu, Xubuntu or Mint. Which one do you think is more fitting? Thanks in advance.

    Read the article

  • Sharing a texture resource from DX11 to DX9 to WPF, need to wait for DeviceContext.Flush() to finish

    - by Rei Miyasaka
    I'm following these instructions on TheCodeProject for rendering from DirectX to WPF using D3DImage. The trouble is that I now have no swap chain to call Present() on -- which, according to the article, shouldn't be a problem, but my back buffer definitely wasn't being copied. An additional step that I have to take before I can copy the texture to WPF is to share it with a second D3D9Ex device, since D3DImage only works with DX9 (which is understandable, as WPF is built on DX9). To that end, I've modified some SlimDX code to work with DirectX 11. I tried calling DeviceContext.Flush() (the immediate one) at the end of each render cycle, which kind of works -- most of the time it'll show my renderings, but for maybe 3 or 4 out of 60 frames each second it'll draw my clear color instead. This makes sense -- Flush() is non-blocking; it doesn't wait for the GPU to do its thing the way SwapChain.Present() does. Any idea what the proper solution is? I have a feeling it has something to do with my texture parameters for the back buffer, but I don't know.
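    One possible way to make the copy deterministic is to block on a GPU event query instead of relying on Flush() alone. Below is a minimal sketch, assuming SlimDX wraps the native ID3D11 event-query API as Query/QueryDescription/QueryType.Event and exposes DeviceContext.End and DeviceContext.IsDataAvailable (treat those member names as assumptions, not confirmed API):

    using SlimDX.Direct3D11;

    static class GpuSync
    {
        // Blocks the CPU until the GPU has executed everything submitted so far,
        // so the shared texture is fully written before D3DImage copies it.
        public static void WaitForCompletion(Device device)
        {
            var context = device.ImmediateContext;
            using (var query = new Query(device, new QueryDescription { Type = QueryType.Event }))
            {
                context.End(query);   // the event is signaled once preceding commands finish
                context.Flush();      // make sure the command buffer is actually submitted
                while (!context.IsDataAvailable(query))   // assumed SlimDX wrapper for GetData on the event query
                {
                    // busy-wait; a Thread.Yield() here keeps the CPU a little cooler
                }
            }
        }
    }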

    Read the article

  • Direct2D off-screen rendering and hardware acceleration

    - by Goran
    I'm trying to use Direct2D to render images off-screen using WindowsAPICodePack. This is easily achieved using WicBitmapRenderTarget, but sadly it's not hardware accelerated. So I'm trying this route:
    - Create a Direct3D device
    - Create a Texture2D
    - Use the texture surface to create a render target using CreateDxgiSurfaceRenderTarget
    - Draw some shapes
    While this renders the image, it appears the GPU isn't being used at all while the CPU is used heavily. Am I doing something wrong? Is there a way to check whether hardware or software rendering is used? Code sample:

    var device = D3DDevice1.CreateDevice1(
        null,
        DriverType.Hardware,
        null,
        CreateDeviceOptions.SupportBgra,
        FeatureLevel.Ten
    );

    var txd = new Texture2DDescription();
    txd.Width = 256;
    txd.Height = 256;
    txd.MipLevels = 1;
    txd.ArraySize = 1;
    txd.Format = Format.B8G8R8A8UNorm; //DXGI_FORMAT_R32G32B32A32_FLOAT;
    txd.SampleDescription = new SampleDescription(1, 0);
    txd.Usage = Usage.Default;
    txd.BindingOptions = BindingOptions.RenderTarget | BindingOptions.ShaderResource;
    txd.MiscellaneousResourceOptions = MiscellaneousResourceOptions.None;
    txd.CpuAccessOptions = CpuAccessOptions.None;

    var tx = device.CreateTexture2D(txd);
    var srfc = tx.GraphicsSurface;

    var d2dFactory = D2DFactory.CreateFactory();
    var renderTargetProperties = new RenderTargetProperties
    {
        PixelFormat = new PixelFormat(Format.Unknown, AlphaMode.Premultiplied),
        DpiX = 96,
        DpiY = 96,
        RenderTargetType = RenderTargetType.Default,
    };

    using (var renderTarget = d2dFactory.CreateGraphicsSurfaceRenderTarget(srfc, renderTargetProperties))
    {
        renderTarget.BeginDraw();
        var clearColor = new ColorF(1f, 1f, 1f, 1f);
        renderTarget.Clear(clearColor);

        using (var strokeBrush = renderTarget.CreateSolidColorBrush(new ColorF(0.2f, 0.2f, 0.2f, 1f)))
        {
            for (var i = 0; i < 100000; i++)
            {
                renderTarget.DrawEllipse(new Ellipse(new Point2F(i, i), 10, 10), strokeBrush, 2);
            }
        }

        var hr = renderTarget.EndDraw();
    }
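    One way to test whether hardware rendering is actually in effect is to request it explicitly. This is a sketch reusing d2dFactory and srfc from the sample above, and it assumes the Code Pack's RenderTargetType enum mirrors the native D2D1_RENDER_TARGET_TYPE values (Default/Software/Hardware), so that asking for Hardware makes creation fail rather than silently fall back to software:

    // Same surface and factory as above; only the render target type changes.
    var hwProps = new RenderTargetProperties
    {
        PixelFormat = new PixelFormat(Format.Unknown, AlphaMode.Premultiplied),
        DpiX = 96,
        DpiY = 96,
        RenderTargetType = RenderTargetType.Hardware, // fail fast instead of falling back
    };

    using (var hwTarget = d2dFactory.CreateGraphicsSurfaceRenderTarget(srfc, hwProps))
    {
        // If this point is reached, Direct2D accepted a hardware target for the surface.
    }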

    Read the article

  • Doubt about texture waves in CG Ocean Shader

    - by Alexandre
    I'm new to graphics programming, and I'm having some trouble understanding the ocean shader described in "Effective Water Simulation from Physical Models" from GPU Gems. The source code associated with this article is here. My problem has been understanding the concept of texture waves. First of all, what is achieved by texture waves? I'm having a hard time trying to figure out their usefulness. In section 1.2.4, the article says that the waves summed into the texture have the same parametrization as the waves used for vertex positioning. Does that mean I can't use the texture provided by the source code if I change the parameters of the waves, or add more waves to the sum? And in section 1.4.1, it says that we can assume there is no rotation between texture space and world space if the texture coordinates for our normal map are implicit. What does it mean for the normal map to be "implicit"? And why do I need a rotation between texture and world space if the normal map is not implicit? I would be very grateful for any help on this.
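    For context, the chapter builds both the vertex displacement and the normal-map texture from a sum of directional sine waves; as far as I recall (treat the exact symbols below as a paraphrase rather than a quotation), each wave i is described by an amplitude, a direction, a frequency and a phase:

    H(x, y, t) = \sum_i A_i \, \sin\!\big( w_i \, \mathbf{D}_i \cdot (x, y) + \varphi_i \, t \big)

    The "parametrization" the article refers to is this set of per-wave constants (A_i, D_i, w_i, phi_i), shared between the geometric waves and the waves baked into the texture.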

    Read the article

  • BUILD apps that use C++ AMP

    - by Daniel Moth
    If you are a developer on the Microsoft platform, you are hopefully attending (live or virtually) the sessions of the BUILD conference, aka //build/ in Anaheim, CA. The conference sold out not long after it opened registration, and it achieved that without sharing *any* session details or a meaningful agenda up until after the keynote today – impressive! I am speaking at BUILD and hope you'll catch my talk at 9am on Friday (the last day of the conference) at Marriott Elite 2 Ballroom. Session details follow.
    802 - Taming GPU compute with C++ AMP
    Developers today inject parallelism into their compute-intensive applications in order to take advantage of multi-core CPU hardware. Beyond CPUs, however, compute accelerators such as general-purpose GPUs can provide orders of magnitude speed-ups for data parallel algorithms. How can you as a C++ developer fully utilize this heterogeneous hardware from your Visual Studio environment? How can you benefit from this tremendous performance boost in your Visual C++ solutions without sacrificing developer productivity? The answers will be presented in this session about C++ Accelerated Massive Parallelism.
    I'll be covering a lot of the material I've been recently blogging about on my blog that you are reading, which I have also indexed over on our team blog under the title: "C++ AMP in a nutshell". Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Problem loading shaders with SlimDX

    - by Levi
    I'm attempting to load an FX file in SlimDX. I've got this exact FX file loading and compiling fine with XNA 4.0, but I'm getting errors with SlimDX. Here's my code to load it:

    using SlimDX.Direct3D11;
    using SlimDX.D3DCompiler;

    public static Effect LoadFXShader(string path)
    {
        Effect shader;
        using (var bytecode = ShaderBytecode.CompileFromFile(path, null, "fx_2_0", ShaderFlags.None, EffectFlags.None))
            shader = new Effect(Devices.GPU.GraphicsDevice, bytecode);
        return shader;
    }

    Here's the shader:

    #define TEXTURE_TILE_SIZE 16

    struct VertexToPixel
    {
        float4 Position : POSITION;
        float2 TextureCoords : TEXCOORD1;
    };

    struct PixelToFrame
    {
        float4 Color : COLOR0;
    };

    //------- Constants --------
    float4x4 xView;
    float4x4 xProjection;
    float4x4 xWorld;
    float4x4 preViewProjection;
    //float random;

    //------- Texture Samplers --------
    Texture TextureAtlas;
    sampler TextureSampler = sampler_state
    {
        texture = <TextureAtlas>;
        magfilter = Point;
        minfilter = point;
        mipfilter = linear;
        AddressU = mirror;
        AddressV = mirror;
    };

    //------- Technique: Textured --------
    VertexToPixel TexturedVS(byte4 inPos : POSITION, float2 inTexCoords : TEXCOORD0)
    {
        inPos.w = 1;
        VertexToPixel Output = (VertexToPixel)0;

        float4x4 preViewProjection = mul(xView, xProjection);
        float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);

        Output.Position = mul(inPos, preWorldViewProjection);
        Output.TextureCoords = inTexCoords / TEXTURE_TILE_SIZE;

        return Output;
    }

    PixelToFrame TexturedPS(VertexToPixel PSIn)
    {
        PixelToFrame Output = (PixelToFrame)0;
        Output.Color = tex2D(TextureSampler, PSIn.TextureCoords);
        if (Output.Color.a != 1)
            clip(-1);
        return Output;
    }

    technique Textured
    {
        pass Pass0
        {
            VertexShader = compile vs_2_0 TexturedVS();
            PixelShader = compile ps_2_0 TexturedPS();
        }
    }

    Now this exact shader works fine in XNA, but in SlimDX I get the error:
    ChunkDefault.fx(28,27): error X3000: unrecognized identifier 'byte4'

    Read the article

  • Enable Compiz on Intel Core i5 (Nvidia GT330M) based laptop

    - by Eshwar
    Hi, I am trying to enable Compiz on my laptop via Desktop Effects, but it does not allow it. I modified the xorg.conf file as on the Compiz wiki, but still no luck. So can someone just tell me how to enable the Compiz desktop on an Intel i5 based system? This is an Arrandale processor, so it's got the graphics bit on the processor itself. My system also has a discrete graphics card (Nvidia GT330M - yup, it's one of those hybrid graphics combos - not Optimus). As far as I know the Nvidia GPU is not being used, since the Intel one is enabled and there is no BIOS route to disable it. The laptop is a Dell Vostro 3700 with BIOS version A10. I did lots of Google searches about Intel and Compiz, etc., but found not a single conclusive guide as to how to enable it, so my guess is it should work out of the box. But it doesn't. glxinfo gives me:
    name of display: :0.0
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Error: couldn't find RGB GLX visual or fbconfig
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    3 GLXFBConfigs:
        visual  x  bf lv rg d st colorbuffer ax dp st accumbuffer  ms  cav
      id dep cl sp  sz l  ci b ro  r  g  b  a bf th cl  r  g  b  a ns b eat
    ----------------------------------------------------------------------
    Segmentation fault
    lspci gives me:
    00:02.0 VGA compatible controller [0300]: Intel Corporation Core Processor Integrated Graphics Controller [8086:0046] (rev 18)
    01:00.0 VGA compatible controller [0300]: nVidia Corporation GT216 [GeForce GT 330M] [10de:0a29] (rev a2)

    Read the article

  • Machine Check Exception

    - by Karl Entwistle
    When trying to install ubuntu-12.04-desktop-amd64.iso from USB I get one of the following errors. http://en.wikipedia.org/wiki/Machine_Check_Exception states the error can occur due to:
    - poorly fitted heatsink/computer fans (the same problem can happen with excessive dust in the CPU fan)
    - an overloaded internal or external power supply (fixable by upgrading)
    So I tried the following:
    - Using rubbing alcohol to remove all the thermal paste from the CPU and heatsink; I then reseated the CPU after checking all the pins on the MOBO, and everything seems fine.
    - Booting without the GPU to see if it was the PSU being overstressed.
    - Removing all RAM apart from one stick and running Memtest86, which it passed
    - Using Ubuntu 10.04.4 Desktop 64 bit (different USB slots and USB sticks)
    - Using Ubuntu 12.04 Desktop 64 bit (different USB slots and USB sticks)
    - Resetting the BIOS using the Clear CMOS jumper
    - Removing all HD power cables and SATA cables
    - Updating the BIOS from F2 to F6
    My PC is using the following parts:
    - Gigabyte GA-Z77-DS3H (F6 BIOS)
    - Intel Core i7 3770K 3.5GHz Socket 1155
    - G-Skill 8GB (2x4GB) DDR3 1600MHz RipjawsX Memory Kit CL9 (9-9-9-24) 1.5V
    - Be Quiet Shadow Rock Pro
    - Be Quiet Pure Power 730W Modular PSU
    - Sapphire HD 6870 1GB GDDR5 DVI HDMI DisplayPort PCI-E Graphics Card
    Any ideas?

    Read the article

  • How to install Radeon 3670 HD graphics drivers for Ubuntu 10.04 64 bit with OpenGL 2.0 support?

    - by Daniel
    I've been having trouble getting graphics drivers to work that support OpenGL 2.0. I've had some luck with the Ubuntu drivers; however, these only support OpenGL 1.3. I thought I would document the methods that I have tried, both to see if anyone else has ideas and to save time for people with a similar problem.
    System details: Ubuntu 10.04 (Lucid) 64 bit, kernel Linux 2.6.32-44-generic, GNOME 2.30.2, ATI Mobility Radeon HD 3670.
    Attempted methods:
    1. Installing the proprietary drivers using the "Hardware Drivers" (Jockey) GUI. This GUI offers an "ATI/AMD proprietary FGLRX graphics driver"; however, any attempt to install it results in a "Sorry, installation of this driver failed" error. The log file is here. There is an Ask Ubuntu question that covers this scenario, and it notes that there is a known bug with Jockey.
    2. Installing the proprietary drivers manually. The answer to the question above linked to this wiki page, which gives instructions for installing Catalyst 12.6. This supported hardware list states that the 3670 is not supported in 12.6, and 12.4 must be used. This is somewhat confusing, as AMD's website suggests that the 12.6 driver should be installed for the 3670. There have been user reports that R600 (the GPU inside the 3670 card) doesn't work with 12.6, so I'm sticking with 12.4. I'm following these instructions to install the proprietary drivers on Lucid. I downloaded the 12.4 driver from the AMD website. Building the package worked fine, generating the fglrx, fglrx-dev, fglrx-amdcccle, and fglrx-modaliases deb packages successfully. However, when I try to install these using dpkg it gives me these errors. The make log referenced in the error is here.
    Ask Ubuntu references:
    - What is the correct way to install ATI Catalyst Video Drivers?
    - Cannot install ATI/AMD FGLRX restricted graphic drivers
    - Is my ATI graphics card supported in Ubuntu?

    Read the article

  • Keystone Correction using 3D-Points of Kinect

    - by philllies
    With XNA, I am displaying a simple rectangle which is projected onto the floor. The projector can be placed at an arbitrary position. Obviously, the projected rectangle gets distorted according to the projector's position and angle. A Kinect scans the floor looking for the four corners. Now my goal is to transform the original rectangle such that the projection is no longer distorted, by basically pre-warping the rectangle. My first approach was to do everything in 2D: first compute a perspective transformation (using OpenCV's warpPerspective()) from the scanned points to the internal rectangle's points and apply the inverse to the rectangle. This seemed to work but was too slow, as it couldn't be rendered on the GPU. The second approach was to do everything in 3D in order to use XNA's rendering features. First, I would display a plane, scan its corners with Kinect and map the received 3D points to the original plane. Theoretically, I could apply the inverse of the perspective transformation to the plane, as I did in the 2D approach. However, since XNA works with a view and a projection matrix, I can't just call a function such as warpPerspective() and get the desired result. I would need to compute the new parameters for the camera's view and projection matrices. Question: Is it possible to compute these parameters and split them into two matrices (view and projection)? If not, is there another approach I could use?
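    For reference (a summary added here, not from the original post): the 2D mapping that warpPerspective() applies is a planar homography, a 3x3 matrix H acting on homogeneous 2D points and determined up to scale by the four corner correspondences:

    s \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}, \qquad H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}

    H has eight degrees of freedom and each correspondence contributes two equations, so the four scanned corners determine it exactly; the question above amounts to asking whether this single 3x3 can be refactored into XNA's separate view and projection matrices.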

    Read the article

  • Shockwave Flash crashes with Chromium and Firefox

    - by Stephan
    Since updating to Ubuntu 13.10, Shockwave Flash does not work in Chromium or Firefox. Both show a "Shockwave Flash has crashed" dialog.
    Chromium 29.0.1547.65
    After loading a page with a Flash video, I get this warning on the console twice:
    NVIDIA: could not open the device file /dev/nvidia0 (Operation not permitted).
    When I try to play the video, it crashes and I receive these distorted error messages:
    (exe:14868): Gdk-WARNING **: XID collision, trouble ahead
    [xcb] Extra reply data still left in queue
    [xcb] This is most likely caused by a broken X extension library
    [xcb] Aborting, sorry about that.
    owser --type=plugin --plugin-path=/usr/lib/flashplugin-installer/libflashplayer.so --lang=de --channel=14560.18.20766867: ../../src/xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.
    Firefox 25.0
    With Firefox, I get these errors:
    ###!!! ABORT: Request 154.24: BadValue (integer parameter out of range for operation); 3 requests ago: file /build/buildd/firefox-25.0+build3/toolkit/xre/nsX11ErrorHandler.cpp, line 157
    WARNING: pipe error (110): Connection reset by peer: file /build/buildd/firefox-25.0+build3/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 437
    ###!!! [Parent][RPCChannel] Error: Channel error: cannot send/recv
    What I tried so far:
    - Reinstalling flashplugin-installer
    - Changing permissions of /dev/nvidia0
    It seems that Flash Aid is no longer available. GPU acceleration is working fine, e.g. for Portal. Does anyone know how to fix this?

    Read the article

  • deWitters game loop in libgdx (Android)

    - by jaysingh
    I am a beginner and I want a complete example in libgdx for Android of a fixed-timestep game loop: how to limit the frame rate to 50 or 60, and how to manage interpolation between game states, with simple example code, e.g. the deWiTTERS Game Loop:

    @Override
    public void render() {
        float deltaTime = Gdx.graphics.getDeltaTime();
        Update(deltaTime);
        Render(deltaTime);
    }

    libgdx comments:
    "There is a Gdx.graphics.setVsync() method (generic = backend-independent), but it is not present in 0.9.1, only in the Nightlies."
    "Relying on vsync for fixed time steps is a REALLY bad idea. It will break on almost all hardware out there. See LwjglApplicationConfiguration, there's a flag in there that lets you toggle gpu/software vsynching. Play around with it." (Mario)
    NOTE that none of these limit the framerate to a specific value... if you REALLY need to limit the framerate for some reason, you'll have to handle it yourself by returning from render calls if xxx ms haven't passed since the last render call.
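    A minimal sketch of the fixed-timestep-with-interpolation loop the deWiTTERS article recommends, written in C# to match the other code samples on this page (the state fields and Draw() are placeholders); the body of Frame() maps directly onto libgdx's render(), with Gdx.graphics.getDeltaTime() supplying deltaTime:

    // deWiTTERS-style loop: logic advances in fixed steps, rendering
    // interpolates between the previous and current logic states.
    class FixedStepLoop
    {
        const float Step = 1f / 50f;     // 50 logic updates per second
        const int MaxFrameSkip = 5;      // avoid spiraling when updates run long
        float accumulator;
        float previousX, currentX;       // placeholder game state: one moving value

        // Call once per rendered frame with that frame's delta time.
        public void Frame(float deltaTime)
        {
            accumulator += deltaTime;
            int loops = 0;
            while (accumulator >= Step && loops < MaxFrameSkip)
            {
                previousX = currentX;
                currentX += 100f * Step;  // placeholder logic update (100 units/s)
                accumulator -= Step;
                loops++;
            }

            // Fraction of a step left over = how far we are between states.
            float alpha = accumulator / Step;
            float renderX = previousX + (currentX - previousX) * alpha;
            Draw(renderX);
        }

        void Draw(float x) { /* draw sprites at the interpolated position */ }
    }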

    Read the article

  • No HDMI sound on Ubuntu 12.04 LTS

    - by bart
    I'm very new to Ubuntu/Linux. I installed Ubuntu 12.04 LTS on my Vista laptop (Nvidia GPU) in dual boot and, with some help from Google, I'm almost ready to go. The only thing that I can't figure out is how to play sound through my HDMI connection to my TV. The speakers of the laptop are OK; in the sound settings I can see HDMI, digital and speaker outputs. The top two won't play sound, so I googled around and found a bunch of things to try, but the only thing successful so far was to erase all the drivers from the audio settings - and they stayed gone. So finally I re-installed from scratch, but there is still no sound through HDMI. I've re-installed 6 times in the last 3 days now, so the only thing left for me is reaching out for help/advice about this. How can I play sound on my TV connected with HDMI? I will be glad to give more info if needed, because this is driving me crazy. Thanks for your help in advance! Bart.

    Read the article

  • Solaris: What comes next?

    - by alanc
    As you probably know by now, a few months ago, we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple weeks ago with a number of the Silicon Valley based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned what follows are rough ideas, and as I'll discuss later, none of them have any commitment, schedule, working code, or even plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here, you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups, and competitive OS'es. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas.
    Making Network Admins into Socially Networking Admins
    We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there's enough of those already, and Oracle even has its own Oracle Mix social network already. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see:
    Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today!
    To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certifications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would provide recruiters with a more detailed level of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system.
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail or skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, and not worrying about the effect on your business of a mission critical server going down.
More Z's for ZFS
Our suggested new feature for ZFS was inspired by the world's most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam needing more disk space than you have, and can't wait a month to get a purchase order through channels to buy more. Instead with the click of a button you could post to your group: Alan can't find any space in his server farm! Can you help? Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary use space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed.
Universal Single Sign On
One thing all the engineers agreed on was that we still had far too many "Single" sign ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to login using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would login to your Solaris account upon receipt of a cookie from their identity service.
Pinning notes to the cloud
We also all noted that we all have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary, some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem?
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only to users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated completely random, impossible to remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation.
Location service for Solaris servers
A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in today's massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million dollars of SPARC servers in this building, waiting for you to steal them.” so it's probably best it was rejected.]
Harnessing the power of the GPU for Security
Most modern OS'es make use of the widespread availability of high powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limitations such as missing eyes or fingers, not to mention the lost productivity when employees can't login due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have - pam_phrenology.so, a new PAM module that uses an array of USB-attached web cams (or just one if the user is willing to spin their chair during login) to take pictures of the user's head from all angles, create a 3D model and compare it to the one in the authentication database.
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days.
Reaction to these ideas
After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ending on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved over just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and we might have to try harder to get those to be accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.

    Read the article

  • Acer Aspire One D270 cannot set brightness

    - by Marko
    I hope you can help me figure out how to set the brightness on my netbook. The following problem has appeared since I installed Ubuntu 11.10 on my Acer: I am not able to adjust the brightness with the Fn keys, nor manually at "System Settings - Display". After searching with Google for a while, I found a way via the terminal to adjust it with the following command: "sudo setpci -s 00:02.0 f4.b=7f" (from 00-9f). That was a major breakthrough for me as I am still new to Linux. But still seeking a way to get the Fn keys for brightness to work, I kept searching until I found askubuntu.com. I read through various questions by other Acer users and tried their solutions, but unfortunately none worked out for me. From this thread: fn + arrow keys don't adjust actual brightness on an Acer Aspire 5740 - "sudo gedit /etc/X11/xorg.conf". This command did not work because the file was not found. I also used nano instead of gedit, but the file was empty (I think it just created the file since it did not exist). These commands, which I found, gave me a boot loop and I had to repair Ubuntu:
    sudo gedit /etc/default/grub
    Change the line GRUB_CMDLINE_LINUX="" into GRUB_CMDLINE_LINUX="acpi_osi=Linux"
    sudo update-grub
    From this post: Screen Brightness not adjustable for Acer Aspire S3 - I tried the solution from the last post, but it did not work either. Does anyone know what I could try? I would appreciate it if someone could help me out with this. Thanks in advance.
    Netbook specs: CPU: Intel Atom N2600; Memory: 2GB DDR3; Storage: 320 GB HD; GPU: Intel GMA 3600

    Read the article

  • iOS OpenGL transparency performance issue

    - by user346443
    I have built a game in Unity that uses OpenGL ES 1.1 for iOS. I have a nice constant frame rate of 30 until I place a semi-transparent texture over the top of my entire scene. I expect the drop in frames is due to the blending overhead of sorting the frame buffer. On the 4S and 3GS the frames stay at 30, but on the iPhone 4 the frame rate drops to 15-20 - probably due to the extra pixels in the Retina display compared to the 3GS, and the smaller CPU/GPU compared to the 4S. I would like to know if there is anything I can do to try to increase the frame rate when a transparent texture is rendered on top of the entire scene. Please note that the transparent texture overlay is a core part of the game and I can't disable anything else in the scene to speed things up. If it's guaranteed to make a difference, I guess I can switch to OpenGL ES 2.0 and write the shaders, but I would prefer not to as I need to target older devices. I should add that the depth buffer is disabled and I'm blending using SrcAlpha One. Any advice would be highly appreciated. Cheers

    Read the article

  • Best practice for setting Effect parameters in XNA

    - by hichaeretaqua
    I want to ask if there is a best practice for setting Effect parameters in XNA - or, in other words, what exactly happens when I call pass.Apply(). I can imagine multiple scenarios:
    - Each time Apply is called, all effect parameters are transferred to the GPU, and therefore it has no real influence how often I set a parameter.
    - Each time Apply is called, only the parameters that got reset are transferred, so caching Set operations that don't actually set a new value should be avoided.
    - Each time Apply is called, only the parameters that got changed are transferred, so caching Set operations is useless.
    - This whole question is pointless because none of the mentioned ways has any noteworthy impact on game performance.
    So the final question: is it useful to implement some caching of the Set operation, like this?

    private Matrix _world;
    public Matrix World
    {
        get { return _world; }
        set
        {
            if (value == _world)
                return;
            _effect.Parameters["xWorld"].SetValue(value);
            _world = value;
        }
    }

    Thanking you in anticipation.

    Read the article

  • Is HTML5/WebGL performance bad on low-end Android tablets and phones?

    - by Boris van Schooten
    I've developed a couple of WebGL games and am trying them out on Android. I found that they run very slowly on my tablet, however. For example, a game with 10 sprites or so runs at 5 fps. I tried Chrome and CocoonJS, but they are comparably slow. I also tried other games, and even games with only 5 or so moving sprites are this slow. This seems inconsistent with reports from others, such as this benchmark. Typically, when people talk about HTML5 game performance, they mention well-known and higher-end phones and tablets. While my 7" tablet is cheap (I believe it's a relabeled Allwinner tablet, apparently with the Mali 400 GPU), I found it generally has good gaming performance. All the games I tried run smoothly. I also developed an OpenGL ES 2 demo with 200 shaded 3D objects, and it ran at 50 fps. My suspicion is that many low-end and white-label devices may have unacceptable HTML5/WebGL support, which means there may be a large section of gamers you will not reach when you choose this as your platform. I've heard rumors about inconsistent performance of HTML5 and WebGL on different devices, but no clear picture emerges. I would like to hear whether any of you have had similar experiences with HTML5 or WebGL, or whether I can find information about the percentage of devices I can expect to have decent performance.

    Read the article
