Search Results

Search found 8373 results on 335 pages for 'hardware recommendation'.


  • Need Help Unable to Mount Location

    - by Don't ASk Ubun
    I am not able to start Windows and am using a DVD copy of Ubuntu to start up. I can see my 750 GB hard disk, but if I click it I get this error: Error mounting: mount exited with exit code 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error Failed to read NTFS $Bitmap: Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details. After googling for a while I think I need to do sudo apt-get install ntfsprogs, but when I try that I get: E: Package 'ntfsprogs' has no installation candidate. My problem is a lot like this thread.
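
    One way forward, sketched below under a couple of assumptions: the live session has network access, and /dev/sda1 stands in for the actual NTFS partition (check with sudo fdisk -l). On recent Ubuntu releases the ntfsprogs tools were merged into the ntfs-3g package, which is why ntfsprogs has no installation candidate.

        # ntfsprogs was folded into ntfs-3g, so install that package instead
        sudo apt-get update
        sudo apt-get install ntfs-3g

        # attempt a basic consistency fix on the NTFS partition (placeholder device name)
        sudo ntfsfix /dev/sda1

        # if that fails, try a read-only mount just to copy the data off
        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g -o ro /dev/sda1 /mnt/windows

    The repeated Input/output errors may also point to a failing disk, in which case the chkdsk route from Windows, or imaging the disk first (for example with ddrescue), is the safer path.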

    Read the article

  • Inside the Guts of a DSLR

    - by Jason Fitzpatrick
    It’s safe to assume that there is a lot more going on inside your modern DSLR than your grandfather’s Kodak Brownie, but just how much hardware is packed into the small casing of your average DSLR is quite surprising. Over at iFixit they’ve done a teardown of Nikon’s newest prosumer camera, the Nikon D600. The guts of the DSLR are absolutely bursting with hardware and flat-ribbon cable, as seen in the photo above. For a closer look at the individual parts and to see it further torn down, hit up the link below. Nikon D600 Teardown [iFixit via Extreme Tech]

    Read the article

  • Google Open-Sources Their Book Scanner

    - by Jason Fitzpatrick
    Google has released the hardware and software source for their high-speed, non-destructive book scanner. If you’re looking to scan a large volume of books, save yourself the design work and check out the Linear Book Scanner project. The design is pretty slick; the scanner uses vacuum pressure to automatically turn the pages as it works. Check out the video above to see a Google Tech Talk about the project and then hit up the link below to grab the hardware and software files. Linear Book Scanner [via Hack A Day]

    Read the article

  • How to retrieve data from a corrupted volume

    - by explorex
    Hi, my Ubuntu 10.10 just crashed (probably due to a hardware error; in the end I was getting errors like Unknown filesystem ..... grub> .. at the GRUB console before I could take any action) and I reinstalled the same version from a USB stick. I had Ubuntu installed on an ext4 file system, and I also have the same filesystem on a different partition of the same hard disk. When I try to access my previous filesystem, I get this error: Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda6, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so I had some important files in the previous volume; I don't know how to retrieve them. And what are the chances that I would get the same outcome (hardware error)? Please help me!
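
    A possible recovery sequence for a case like this, sketched under the assumption that the disk itself still responds; /dev/sda6 is taken from the error message above, so adjust it to the real partition:

        # 1. try a read-only mount first, so nothing is written to the damaged volume
        sudo mkdir -p /mnt/old
        sudo mount -o ro /dev/sda6 /mnt/old

        # 2. if the superblock is bad, inspect without changing anything, list the
        #    backup superblock locations, then repair from one of them
        sudo fsck.ext4 -n /dev/sda6          # dry run, makes no changes
        sudo mke2fs -n /dev/sda6             # -n only prints where backup superblocks live
        sudo fsck.ext4 -b 32768 /dev/sda6    # 32768 is a typical backup location; verify first

        # 3. as a last resort, carve files out with testdisk/photorec
        sudo apt-get install testdisk
        sudo photorec /dev/sda6

    If the crash really was a dying disk, it is safer to image the partition first (for example with ddrescue) and run these steps against the image rather than the original drive.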

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-29

    - by Bob Rhubart
    ORCLville: OOW 2012 - Crystal Ball Oracle ACE Director Floyd Teter cooks up some tongue-in-cheek predictions for news and announcements that might come out of Oracle OpenWorld 2012. What's your prediction? Oracle Optimized Solutions at Oracle OpenWorld 2012 | Oracle Hardware Hardware matters, too! The people behind the Oracle Hardware blog have put together a list of Oracle OpenWorld 2012 sessions focused on Oracle Optimized Solutions, "designed, pre-tested, tuned and fully documented architectures for optimal performance and availability." Just plug the session ID numbers into Schedule Builder and you're good to go. AIX Checklist for stable OBIEE deployment | Dick Dunbar "OBIEE is a complicated system with many moving parts and connection points," according to Oracle Business Intelligence escalation engineer Dick Dunbar. "The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators." Demo for OPN: Coherence Management with EM Cloud Control 12c Oracle Partner Network members can check out a new Coherence Management demo that showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics. "The demo flow showcases the key enhancements made in Enterprise Manager 12c release which includes new customizable performance summary, cache data management and configuration management," according to the WebLogic Partner Community EMEA blog. The Pragmatic Architect: To Boldly Go Where No One Has Gone Before | Frank Buschmann "Many architects have technical knowledge that's both impressive and sound, which is indeed an inevitable basis for design success," says Frank Buschmann. "Yet, a lot of software projects fail or suffer due to severe challenges in their architecture. The key to mastery is how architects approach design, what they value, and where they focus their attention and work." As retail dies, whom will be the winners? | Peter Evans-Greenwood "The problem for many retailers is that how consumers shop has changed but the retailers haven't adapted," says Peter Evans-Greenwood. "Their sole virtue was to be the last step in a supply chain delivering somebody else's products to the consumer. However, being the last step in the supply chain is no longer a virtue when consumers skip across channels and can reach around the globe, no longer dependant on or limited to what they can find locally." Thought for the Day "Brains require stimulation. If you're locked into a pattern of work, work, and more work, your brain soon habituates - the same way that it lets you stop hearing a clock ticking. So, if you want to be more effective at work, you must, paradoxically, be less single-minded in your devotion to work. Anything you do—anything—that stimulates new segments of your brain will make you a more effective programmer or analyst. I promise, with a money-back guarantee." — Gerald M. Weinberg Source: SoftwareQuotes.com

    Read the article

  • 0.00006103515625 GB of RAM. Is .NET MicroFramework part of Windows CE?

    - by Rocket Surgeon
    The .NET Micro Framework claims to work in 64 KB of RAM and has a list of compatible target vendors. At the same time, the same vendors who ship hardware and create Board Support Packages (vendors like Adeneo) keep releasing something named a Windows CE 7 BSP for the same hardware targets. Obviously an OS as heavy as WinCE needs more than 64 KB of RAM. So somehow the .NET Micro Framework is related to WinCE, but how? Is it part of the bigger OS, is it the base of it, or are the two mutually exclusive? Background: 0.00006103515625 GB of RAM is the same as 64 KB of RAM. I am looking for the possibility of using Microsoft development tools for a small target like the BeagleBone. http://www.adeneo-embedded.com/About-Us/News/Release-of-TI-BeagleBone Nice. Now... where is a Micro Framework port for the same BeagleBone? Is it inside the released pile?

    Read the article

  • Running C++ AMP kernels on the CPU

    - by Daniel Moth
    One of the FAQs we receive is whether C++ AMP can be used to target the CPU. For targeting multi-core we have a technology we released with VS2010 called PPL, which has had enhancements for VS 11 – that is what you should be using! FYI, it also has a Linux implementation via Intel's TBB which conforms to the same interface. When you choose to use C++ AMP, you choose to take advantage of massively parallel hardware, through accelerators like the GPU. Having said that, you can always use the accelerator class to check if you are running on a system where there is no hardware with a DirectX 11 driver, and decide what alternative code path you wish to follow. In fact, if you do nothing in code and the runtime does not find DX11 hardware to run your code on, it will choose the WARP accelerator, which will run your code on the CPU, taking advantage of multi-core and SSE2 (depending on the CPU capabilities, WARP also uses SSE3 and SSE 4.1 – it does not currently use AVX, and on such systems you hopefully have a DX11 GPU anyway). A few things to know about WARP: It is our fallback CPU solution, not intended as a primary target of C++ AMP. WARP stands for Windows Advanced Rasterization Platform and you can read old info on this MSDN page on WARP. What is new in the Windows 8 Developer Preview is that WARP now supports DirectCompute, which is what C++ AMP builds on. It is not currently clear if we will have a CPU fallback solution for non-Windows 8 platforms when we ship. When you create a WARP accelerator, its is_emulated property returns true. WARP does not currently support double precision. BTW, when we refer to WARP, we refer to the accelerator described above. If we use lower case "warp", that refers to a bunch of threads that run concurrently in lock step and share the same instruction. In the VS 11 Developer Preview, the size of a warp in our Ref emulator is 4 – Ref is another emulator that runs on the CPU, but it is extremely slow and not intended for production, just for debugging. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Will Apple abandon OpenCL?

    - by John
    I am developing OpenCL applications, amongst others for MacOS. The new MacBook Pro 13 inch comes with an Intel HD Graphics 3000 card, so it seems reasonable to assume all their mainstream computers like the MacBook and Mac mini will also come out with this Intel graphics card soon. OpenCL is not available for Intel graphics cards. Intel has a terrible reputation for graphics drivers, and Apple knowing this makes me wonder whether Apple is abandoning OpenCL again already. Especially considering OpenCL should run anywhere, not only on high-end systems. Developing applications only for the high-end Macs with dedicated graphics hardware, or for previous-generation hardware with the GeForce 320M, would not be a feasible option for me. Does anybody have any thoughts on this?

    Read the article

  • 50 Billion Served: Java Embedded on Devices

    - by Tori Wieldt
    It doesn't matter if it is 50 billion or 24 billion; suffice it to say that there will be MANY connected devices in the year 2020. With just 24 billion devices, they will outnumber humans six to one! So as a developer, you don't want to ignore this opportunity. What if you could use your Java skills and deploy an app to a fraction of these devices (don't be greedy, how about just, say, 118,000 of them)? Fareed Suliman, Java ME Product Manager, had lots of good news for Java developers in his presentation Modernizing the Explosion of Advanced Microcontrollers with Embedded Java at ARM TechCon in Santa Clara, CA last week. "A radical architecture shift is underway in this space, from proprietary to standards-based," he explained. He pointed out several advantages to using Embedded Java for devices: Java is a proven and open standard. Java provides connectivity, encryption, location, and web services APIs. You don't have to focus on and keep reinventing the plumbing below the JVM. Abstracting the software from the hardware allows you to repeat your app across many devices. Abstracting the software from the hardware also allows parallel development so you can get your app done more quickly. You already know Java (or you can hire lots of Java talent). Java is a full ecosystem, with Java Embedded plugins for IDEs like Eclipse and NetBeans. Java ME allows for in-field software upgrades. Suliman mentioned two ways developers can start using Java Embedded today: Java Embedded Suite 7.0 - Oracle Java Embedded Suite is a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to embedded devices such as gateways and concentrators. Oracle Java ME Embedded 3.2 - Oracle Java ME Embedded 3.2 is designed and optimized to meet the unique requirements of small embedded, low-power devices such as micro-controllers and other resource-constrained hardware without screens or user interfaces. Think tiny. Really tiny. And think big. Read more about Java Embedded at the Oracle Technology Network, and read The Java Source blog post Java Embedded Releases from September.

    Read the article

  • Uniquely identify a mobile device

    - by Sahil Malik
    SharePoint, WCF and Azure Trainings: more information. Sometimes you need to uniquely identify every device your app is installed on. This is important, for instance, where you have per-device licensing restrictions. For Windows 8 Store apps, you can use the ASHWID (Application Specific Hardware Identifier). The ASHWID differs from app to app and from device to device. Hardware changes to the device will cause the unique id to change. You can also distinguish minor changes from major changes, to build a custom level of tolerance for what is considered a change. For instance, ejecting a USB stick is a minor change. The code snippet in the full article shows you how to get the unique device id. Read full article ....

    Read the article

  • How do I get my Broadcom BCM4313 working correctly?

    - by Ataraxio Panzetta
    I've installed Ubuntu on an Asus 1015 netbook. Everything worked out of the box except for the wireless adapter, which I had to install with the Additional Drivers application. It apparently installed fine and connects to our wireless network, but it only works at a "funny" speed range that goes from 367 bytes to a peak of 3 KB in its best moments. I know for sure the problem is neither the network nor the hardware: network speeds are normal under Windows on this laptop and on other computers with Ubuntu as well. lspci says the card is a BCM4313 model, but the Additional Drivers manager says the package contains the Broadcom 802.11 Linux STA wireless driver for use with Broadcom's BCM4311-, BCM4312-, BCM4321-, and BCM4322-based hardware. It seems like it installed the wrong driver... Is there anything I can do? I'm not worried about having to compile the driver or anything like that, but I'm not sure where to start... any help or guidance will be very, very appreciated.
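
    A few hedged things to try from a terminal; the package and module names below are the usual Ubuntu ones for this chipset, and wlan0 is a placeholder, so verify the real names with lspci -k and iwconfig on the machine:

        # see which driver is actually bound to the Broadcom card (vendor id 14e4)
        lspci -vnn -d 14e4: | grep -A 3 Network
        sudo lshw -C network

        # option 1: keep the proprietary STA (wl) driver, which does cover the BCM4313
        # on current releases, but make sure the in-kernel drivers are not loaded too
        sudo apt-get install --reinstall bcmwl-kernel-source
        printf "blacklist b43\nblacklist brcmsmac\nblacklist bcma\n" | \
            sudo tee /etc/modprobe.d/blacklist-bcm43.conf
        sudo update-initramfs -u && sudo reboot

        # option 2: throughput this low is often just wireless power management
        sudo iwconfig wlan0 power off

    If the STA driver still misbehaves, removing bcmwl-kernel-source and letting the open brcmsmac driver claim the card is the other common route for a BCM4313.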

    Read the article

  • How to retrieve data from a corrupted volume

    - by explorex
    Hi, My Ubuntu 10.10 just crashed, probably due to hardware error (and in the end I was getting errors like Unknown filesystem ..... grub> .., and it went to the GRUB console before I could take any other action). I reinstalled the same version from a USB stick. I had Ubuntu installed with the ext4 file system and I also have the same filesystem in the same hard disk on a different drive. When I try to access my previous filesystem, I get errors: Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda6, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so I had some important files in the previous volume; I don't know how to retrieve them. And what are the chances that I would get the same outcome (hardware error)? Please help me!

    Read the article

  • What is the situation about OpenGL under Ubuntu Unity and Gnome3?

    - by user827992
    On a GNU/Linux distribution, Xorg is usually installed as the main graphical server. It operates with a client-server logic; a special window is designated as the desktop environment, and this special window handles all the eye-candy stuff like decorations, icons and effects. The problem is that the latest UIs rely heavily on hardware acceleration: Unity is an overlay on Compiz, and GNOME Shell also requires an active driver for the GPU to work well. The problem is: on the same OS I can find multiple implementations of OpenGL, so who is handling my OpenGL buffer? How is the OpenGL buffer managed compared to the other windows? How can I be sure that my OpenGL implementation is glued to the hardware and is not going through the client-server logic of Xorg? For example, I have tried the clutter library and I have only experienced problems under both Unity and GTK/GNOME, with no problems under other OSes.
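
    For the "which implementation is actually handling my buffer" part of the question, a quick check from a terminal is sketched below; glxinfo ships in the mesa-utils package on Ubuntu:

        sudo apt-get install mesa-utils

        # "direct rendering: Yes" means GL commands go straight to the driver,
        # not through the X server's indirect GLX protocol path
        glxinfo | grep "direct rendering"

        # shows which vendor/renderer/implementation is bound to the current display
        glxinfo | grep -E "OpenGL (vendor|renderer|version)"

    The renderer string makes it obvious whether you are on the hardware driver or on a software fallback such as llvmpipe.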

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of that category. The shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was related to response time, which could be considered legitimate at the time, because after all caches help to reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake I made was my perception of, or obsession with, how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented as Capacity (or Throughput), with the two important dimensions Speed (response time) and Volume (load): Capacity (TPS) = Volume (T) / Speed (S). To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For every system, the performance picture looks like this: in any traditional system, increasing the Volume (Transactions) will also increase the Speed (Response Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries.
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with faster CPUs and/or networking to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time that scales linearly, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from a programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee: - constant data object access time, independently of the number of objects and the Coherence cluster size - data object distribution by affinity for in-memory data grouping - in-place data processing for parallel execution. To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping Speed consistent. In the future I will keep adding new blog entries around this topic, with some sample code and experiences that I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!

    Read the article

  • How to make Pokémon White 3D effect?

    - by Pipo
    I just wondered how to create a 3D effect similar to Pokémon White/Black. It seems not to be polygon based, but created just with sprites. If the perspective changes, the sprites stay sharp and don't get blurred. How can I achieve this? Source: https://www.youtube.com/watch?v=fZEPUPYOnRc&feature=youtube_gdata_player Edit: Wow, two downvotes because I used a video instead of screenshots? Don't get me wrong, I thank you, because you want to help me, but the 3D effect can be better understood in motion. Anyway, here is a screenshot: http://wearearcade.com/wp-content/uploads/2011/03/pokemon-black-white-starter-town.jpg So, if this is a hardware limitation, how can I achieve this on different hardware, e.g. in an HTML5 game? Thank you.

    Read the article

  • Bring 2 GB Large Pages to Solaris 10

    - by Giri Mandalika
    Few facts:
    - 8 KB is the default page size on Oracle Solaris 10 and 11 as of this writing
    - Both hardware and software must have support for 2 GB large pages
    - SPARC T4 processors are capable of supporting 2 GB pages
    - Oracle Solaris 11 kernel has in-built support for 2 GB pages
    - Oracle Solaris 10 has no default support for 2 GB pages
    - Memory intensive 64-bit applications may benefit the most from using 2 GB pages
    Prerequisites:
    - OS: Oracle Solaris 10 8/11 (Update 10) or later
    - Hardware: Oracle servers with SPARC T4 processors, e.g., SPARC T4-1, T4-2 or T4-4, SPARC SuperCluster T4-4
    Steps to enable 2 GB large pages on Oracle Solaris 10:
    1. Install the latest kernel patch or ensure that 147440-04 or later was installed (check the patch download instructions)
    2. Add the following line to /etc/system and reboot: set max_uheap_lpsize=0x80000000
    3. Finally check the output of the following command when the system is back online: pagesize -a
    e.g.,
        % pagesize -a
        8192         <-- 8K
        65536        <-- 64K
        4194304      <-- 4M
        268435456    <-- 256M
        2147483648   <-- 2G
        % uname -a
        SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v
    Also See: Solaris 9 or later: More performance with Large Pages (MPSS); Large page support for instructions (text) in Solaris 10 1/06; Solaris: How To Disable Out Of The Box (OOB) Large Page Support?; Memory fragmentation / Large Pages on Solaris x86

    Read the article

  • Conscience and unconscience from an AI/Robotics POV

    - by Tim Huffam
    Just pondering the workings of the human mind, from an AI/robotics point of view (either of which I know little about)... If conscience is when you're thinking about it (processing it in real time), and unconscience is when you're not thinking about it (e.g. autonomous behaviour), would it be fair to say, then, that conscience is software and unconscience is hardware? Considering that human learning is attributed to the number of neural connections made, and repetition is the key (the more the connections, the better one understands the subject, until it becomes a 'known'), could this be likened to forming hard connections? E.g. maybe learning would progress from an MCU to FPGAs, thereby offloading realtime processing to the hardware (an FPGA or some such device)?

    Read the article

  • Google I/O 2011: Accelerated Android Rendering

    Google I/O 2011: Accelerated Android Rendering, presented by Romain Guy and Chet Haase. Android 3.0 introduced a new hardware accelerated 2D rendering pipeline. In this talk, you will be introduced to the overall graphics architecture of the Android platform and get acquainted with the various rendering APIs at your disposal. You will learn how to choose the one that best fits your application. This talk will also deliver tips and tricks on how to use the new hardware accelerated pipeline to its full potential. (Video by GoogleDevelopers, 48:58)

    Read the article

  • kernel mem parameter

    - by Ashfame
    As a last resort to my question, I have yet to try the mem parameter of the kernel to force it to use the specified amount of RAM. Short summary: I can only see 3.2 GB of RAM on a 64-bit OS and am not sure if it's a hardware limitation, so I want to try this, as I found a post on the Ubuntu Forums about it. My question is whether it's OK to play with my resident Ubuntu install or whether I should be using a live bootable USB. What values do I try (I have 6 GB with only 3.2 GB usable) and how do I keep it safe? I don't want to burn any of my hardware components at this point in time or make the system unbootable. Running Ubuntu 11.10 with kernel 3.0.0-13-generic.
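
    A hedged sketch of how the mem= parameter is usually tested; the 6G value mirrors the RAM size mentioned above, and a one-off test from the GRUB menu does not persist across reboots, so it is a low-risk way to start:

        # one-off test: at the GRUB menu press 'e', append mem=6G to the line that
        # starts with 'linux', then boot with Ctrl+X or F10

        # to make it permanent once it works, edit the default kernel command line:
        sudo nano /etc/default/grub
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=6G"
        sudo update-grub

        # verify how much memory the kernel actually sees after reboot
        free -m
        dmesg | grep -i memory | head

    Passing a boot parameter will not damage hardware; if the kernel still reports 3.2 GB afterwards, the limit is more likely memory remapping being disabled in the BIOS, or a chipset restriction, than anything mem= can override.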

    Read the article

  • Wireless Disabled (Network Manager)

    - by Peter Kihara
    I'm having issues with my wireless. My laptop dual-boots Windows 7 and Ubuntu 13.04. I upgraded to Windows 8, and after the first 2-3 reboots all was working well in Windows and Ubuntu; then my wireless in Ubuntu stopped working, saying "wifi is disabled by hardware switch". The hardware switch has no effect. I removed Network Manager and installed new firmware, and still nothing; the wifi was not working. In a moment of testing I installed wifi-radar, which can detect the wireless signals, and at one point it even connected, but Network Manager still says disabled. My laptop is an HP Pavilion dm4-2070us with a Centrino Wireless-N 1000. I have updated to 13.10 thinking it would fix it, but still nothing.
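
    A common first check for the "disabled by hardware switch" message, sketched below under the assumption that the block is a stale rfkill flag rather than a dead physical switch:

        # show every kill switch the kernel knows about (look for Soft/Hard blocked)
        rfkill list all

        # clear any soft blocks
        sudo rfkill unblock all

        # on HP laptops the hp-wmi module sometimes reports a phantom hardware switch;
        # unloading it is a common test, and blacklisting it makes the change stick
        sudo modprobe -r hp-wmi
        echo "blacklist hp-wmi" | sudo tee -a /etc/modprobe.d/blacklist.conf

    If rfkill keeps showing a hard block that never clears, the state is usually coming from the BIOS or the hp-wmi driver rather than from NetworkManager itself.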

    Read the article
