Search Results

Search found 7756 results on 311 pages for 'hardware acceleration'.


  • Microphone array support in Windows. Info on performance and compatible hardware?

    - by exinocactus
    It is officially claimed by Microsoft (Audio Device Technologies for Windows) that Windows Vista has integrated system-level support for microphone arrays, which improve sound capture by isolating a sound source in the target direction and rejecting ambient noise and reverberation. In more technical terms, it is an implementation of an adaptive beamformer. Theoretically, microphone arrays with 2-4 mics can substantially improve SNR under some conditions, such as a speaker in front of the laptop in a noisy environment (airport, cafe). Surprisingly, though, I find very little information about commercially available products supporting these new features. I mean products like portable USB microphone arrays, or laptops or flat screens with integrated mic arrays. I could only find info about two laptop models having a "noise cancelling digital array microphone": the Dell Latitude and the Eee PC 1008P-KR. Now my questions:

    - Do you have any experience with the Windows beamformer implementation, for instance in the above-mentioned laptops? How well does it work?
    - Are there any test results available on the net or in print (papers)?
    - Do you know about other microphone array hardware?
    - What could be the reason why mic array technology didn't gain success?
    - Is there mic array support in Windows 7?

    Read the article

  • Linux 'top' utility wildly inaccurate (more so for multi-CPU/core hardware)?

    - by amn
    Hi all. After using 'top' for a long time, albeit basically, I have grown to distrust its '% CPU' column reports. I have 8-core hardware (a quad-core Intel i7 920 with hyperthreading), and see some wild numbers when running a process that should not use more than 5% overall. top happily reports 50%, and I suspect it is not so. My question is: is it a known fact that it's inaccurate when several CPUs/cores are present? I used 'mpstat' from the 'sysstat' package, and its readings are much more conservative, certainly within my expectations. I did press '1' in 'top' to switch it to show all the cores and the us/sy/io stats, but the numbers are substantially higher than with 'mpstat'... I know that my expectations can be unfounded as well, but my gut feeling tells me 'top' is wrong! :-) The reason I need to know is that the process I am monitoring only guarantees quality of service with CPU usage "less than 80%" (however vague that sounds), and I need to know how much headroom I have left. It's a streaming server.
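
    For a side-by-side check, a typical comparison might look like this (a sketch; mpstat is from the sysstat package mentioned above, and the 1-second interval is illustrative):

        # Per-core utilization, refreshed every second
        mpstat -P ALL 1

        # One batch-mode snapshot of top for comparison
        top -b -n 1 | head -20

    One hedged observation: by default top runs in "Irix mode", where a multi-threaded process's %CPU is summed across cores, so a single process can legitimately show far more than one core's worth; pressing 'I' toggles "Solaris mode", which divides by the core count and should line up better with mpstat.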

    Read the article

  • RK4 Bouncing a Ball

    - by Jonathan Dickinson
    I am trying to wrap my head around RK4. I decided to do the most basic 'ball with gravity that bounces' simulation. I have implemented the following integrator given Glenn Fiedler's tutorial:

        /// <summary>
        /// Represents physics state.
        /// </summary>
        public struct State
        {
            // Also used internally as derivative.
            // S: Position
            // D: Velocity.
            /// <summary>
            /// Gets or sets the Position.
            /// </summary>
            public Vector2 X;

            // S: Velocity
            // D: Acceleration.
            /// <summary>
            /// Gets or sets the Velocity.
            /// </summary>
            public Vector2 V;
        }

        /// <summary>
        /// Calculates the force given the specified state.
        /// </summary>
        /// <param name="state">The state.</param>
        /// <param name="t">The time.</param>
        /// <param name="acceleration">The value that should be updated with the acceleration.</param>
        public delegate void EulerIntegrator(ref State state, float t, ref Vector2 acceleration);

        /// <summary>
        /// Represents the RK4 Integrator.
        /// </summary>
        public static class RK4
        {
            private const float OneSixth = 1.0f / 6.0f;

            private static void Evaluate(EulerIntegrator integrator, ref State initial, float t, float dt, ref State derivative, ref State output)
            {
                var state = new State();
                // These are a premature optimization. I like premature optimization.
                // So let's not concentrate on that.
                state.X.X = initial.X.X + derivative.X.X * dt;
                state.X.Y = initial.X.Y + derivative.X.Y * dt;
                state.V.X = initial.V.X + derivative.V.X * dt;
                state.V.Y = initial.V.Y + derivative.V.Y * dt;
                output = new State();
                output.X.X = state.V.X;
                output.X.Y = state.V.Y;
                integrator(ref state, t + dt, ref output.V);
            }

            /// <summary>
            /// Performs RK4 integration over the specified state.
            /// </summary>
            /// <param name="eulerIntegrator">The euler integrator.</param>
            /// <param name="state">The state.</param>
            /// <param name="t">The t.</param>
            /// <param name="dt">The dt.</param>
            public static void Integrate(EulerIntegrator eulerIntegrator, ref State state, float t, float dt)
            {
                var a = new State();
                var b = new State();
                var c = new State();
                var d = new State();
                Evaluate(eulerIntegrator, ref state, t, 0.0f, ref a, ref a);
                Evaluate(eulerIntegrator, ref state, t + dt * 0.5f, dt * 0.5f, ref a, ref b);
                Evaluate(eulerIntegrator, ref state, t + dt * 0.5f, dt * 0.5f, ref b, ref c);
                Evaluate(eulerIntegrator, ref state, t + dt, dt, ref c, ref d);
                a.X.X = OneSixth * (a.X.X + 2.0f * (b.X.X + c.X.X) + d.X.X);
                a.X.Y = OneSixth * (a.X.Y + 2.0f * (b.X.Y + c.X.Y) + d.X.Y);
                a.V.X = OneSixth * (a.V.X + 2.0f * (b.V.X + c.V.X) + d.V.X);
                a.V.Y = OneSixth * (a.V.Y + 2.0f * (b.V.Y + c.V.Y) + d.V.Y);
                state.X.X = state.X.X + a.X.X * dt;
                state.X.Y = state.X.Y + a.X.Y * dt;
                state.V.X = state.V.X + a.V.X * dt;
                state.V.Y = state.V.Y + a.V.Y * dt;
            }
        }

    After reading over the tutorial I noticed a few things that just seemed 'out' to me, notably how the entire simulation revolves around t at 0 and state at 0 - considering that we are working out a curve over the duration, it seems logical that RK4 wouldn't be able to handle this simple scenario. Nevertheless I forged on and wrote a very simple Euler integrator:

        static void Integrator(ref State state, float t, ref Vector2 acceleration)
        {
            if (state.X.Y > 100 && state.V.Y > 0)
            {
                // Bounce vertically.
                acceleration.Y = -state.V.Y * t;
            }
            else
            {
                acceleration.Y = 9.8f;
            }
        }

    I then ran the code against a simple fixed-time-step loop, and this is what I got:

        0.05 0.20 0.44 0.78 1.23 1.76 ...
        74.53 78.40 82.37 86.44 90.60 94.86 99.23 103.05 105.45 106.94 107.86
        108.42 108.76 108.96 109.08 109.15 109.19 109.21 109.23 109.23 109.24
        109.24 109.24 109.24 ... (109.24 repeated from here on)

    As I said, I was expecting it to break - however I am unsure of how to fix it. I am currently looking into keeping the previous state and time, and working from that - although at the same time I assume that will defeat the purpose of RK4. How would I get this simulation to print the expected results?
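
    The usual diagnosis for this (an editorial note, not the poster's code): RK4 samples the acceleration function at four points inside each step and assumes the force is smooth between them, so an acceleration that flips discontinuously at the bounce feeds the stages contradictory samples. A common fix is to let RK4 integrate only the smooth force (gravity) and resolve the bounce as a discrete event after each step; floorY and restitution below are illustrative values:

        // Smooth everywhere, so all four RK4 stage samples agree with each other.
        static void GravityOnly(ref State state, float t, ref Vector2 acceleration)
        {
            acceleration.Y = 9.8f;
        }

        static void Step(ref State state, float t, float dt)
        {
            RK4.Integrate(GravityOnly, ref state, t, dt);

            // Resolve the collision outside the integrator as a discrete event.
            const float floorY = 100f;
            const float restitution = 0.9f;
            if (state.X.Y > floorY && state.V.Y > 0)
            {
                state.X.Y = floorY;                   // push the ball back out of the floor
                state.V.Y = -state.V.Y * restitution; // reflect and damp the velocity
            }
        }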

    Read the article

  • How to set default xrandr settings?

    - by echo-flow
    I'm trying to enable dual monitors in Ubuntu. This is working fine, but every time I do it, desktop effects are disabled. I think I've found the reason why, though: https://wiki.ubuntu.com/X/Config/Multihead/

        As with the GNOME XRandR configuration method, setting Virtual to too large a value may result in a loss of hardware acceleration, and thus an inability to use Compiz and its desktop effects.

    When I use the GNOME monitor applet, or the Monitors configuration in the System menu, the default xrandr settings put the second monitor to the right of the first, and, as I found with this bug, for most monitors this creates a virtual desktop wider than the 2048-pixel maximum that hardware acceleration supports on my netbook hardware. So it seems that if I can modify xrandr's default settings so that it places the new desktop above or below (north or south of) the main LVDS display, then hardware acceleration, and therefore Compiz, will continue to work. Can anyone tell me the easiest way to achieve this?

    UPDATE: I have confirmed that multihead support with desktop effects and hardware acceleration works when I move the external monitor display north of the main LVDS display. Right now this involves the following process: plugging in the external monitor; starting the Monitors configuration menu (desktop effects are disabled automatically, and all of the windows on my workspaces are moved to the first workspace); repositioning the external display so that it is north of the LVDS display and clicking Apply; and then navigating to the Appearance menu and telling it to re-enable desktop effects. Is there a simpler way to do this?

    UPDATE 2: OK, so I thought that perhaps the GNOME Monitors configuration screen was trying to be clever and might be disabling desktop effects. So I just tried using the xrandr command-line client instead, as follows:

        xrandr --output VGA1 --above LVDS1

    When I do that, desktop effects are still disabled, and I need to manually re-enable them. This despite the fact that hardware acceleration works, and there is never a point where hardware acceleration stops working because the horizontal dimension of the virtual display is too large. So what program is trying to be clever and turning off desktop effects when it doesn't need to? And how do I make it stop? If there were a way to re-enable desktop effects from the command line, which I could then put into a script along with the proper xrandr invocation, I would accept that as a workaround.

    UPDATE 3: OK, here's my script to enable a second monitor with desktop effects. It might be evil, I'm not sure:

        second-monitor.sh

        xrandr --output VGA1 --above LVDS1
        sleep 3
        compiz --replace &

    The sleep statement might not be necessary. If there's a better way to do this, please let me know.

    UPDATE 4: This is a Dell Mini Inspiron 1012.
    Here are my system specifications:

        lspci -vv

        00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller
                Subsystem: Dell Device 041a
                Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
                Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0
                Interrupt: pin A routed to IRQ 29
                Region 0: Memory at f0b00000 (32-bit, non-prefetchable) [size=512K]
                Region 1: I/O ports at 18d0 [size=8]
                Region 2: Memory at d0000000 (32-bit, prefetchable) [size=256M]
                Region 3: Memory at f0900000 (32-bit, non-prefetchable) [size=1M]
                Capabilities: <access denied>
                Kernel driver in use: i915
                Kernel modules: i915

        00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller
                Subsystem: Dell Device 041a
                Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0
                Region 0: Memory at f0b80000 (32-bit, non-prefetchable) [size=512K]
                Capabilities: <access denied>

        lsmod | grep i915

        i915                  287458  2
        drm_kms_helper         29329  1 i915
        drm                   162409  3 i915,drm_kms_helper
        intel_agp              24375  2 i915
        i2c_algo_bit            5028  1 i915
        video                  17375  1 i915

    Read the article

  • VS 2010 SQL Update for SQL Statement

    - by Mike Tucker
    Please bear with me as I'm just beginning to learn this stuff. I have a VS 2010 Web project up and I'm trying to understand how I can make a custom UpdateCommand (because I chose to write my own SQL statement, I do not have the option for VS 2010 to auto-generate an update command for me). Problem is: I don't know what the UpdateCommand should look like. Here is my SELECT:

        SELECT * FROM dbo.MainAsset, dbo.Model, dbo.Hardware
        WHERE MainAsset.device = Hardware.DeviceID AND MainAsset.model = Model.DeviceID

    Which VS 2010 turns into:

        SELECT MainAsset.pk, MainAsset.img, MainAsset.device, MainAsset.model, MainAsset.os,
               MainAsset.asset, MainAsset.serial, MainAsset.inyear, MainAsset.expyear,
               MainAsset.site, MainAsset.room, MainAsset.teacher, MainAsset.FirstName,
               MainAsset.LastName, MainAsset.Notes, MainAsset.Dept, MainAsset.AccountingCode,
               Model.Model AS Hardware, Model.pk AS Model, Model.DeviceID,
               Hardware.Computer, Hardware.pk AS Expr3, Hardware.DeviceID AS Expr4
        FROM MainAsset
        INNER JOIN Hardware ON MainAsset.device = Hardware.DeviceID
        INNER JOIN Model ON MainAsset.model = Model.DeviceID

    How would I approach updating one column, say MainAsset.site, if that's changed in the GridView DDL? Any constructive help would be appreciated. Thank you.
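
    For what it's worth, a minimal sketch of what such an UpdateCommand could look like, assuming MainAsset.pk is the table's primary key (the @-parameter names are illustrative and would need matching UpdateParameters on the data source):

        -- Update only the changed column, keyed on the primary key
        UPDATE MainAsset
        SET site = @site
        WHERE pk = @pk

    Since only MainAsset columns are being edited, the UPDATE targets the base table alone; the joins from the SELECT don't need to be repeated in the UpdateCommand.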

    Read the article

  • 10 Reasons Why Java is the Top Embedded Platform

    - by Roger Brinkley
    With the release of Oracle Java ME Embedded 3.2 and the Oracle Java Embedded Suite, Java is now ready to fully move into the embedded developer space, what many have called the "Internet of Things". Here are 10 reasons why Java is the top embedded platform.

    1. Decouples software development from the hardware development cycle

    In a traditional design flow, development is typically split between hardware and software. This leads to complicated co-design and requires prototype hardware to be built. This parallel and interdependent hardware/software design process typically leads to two or more re-development phases. With Embedded Java, all specific work is carried out in software, with the (processor) hardware implementation fully decoupled. This eliminates, or at least reduces, the need for re-spins of software or hardware, and the original development efforts can be carried forward directly into product development and validation.

    2. Development and testing can be done (mostly) using standard desktop systems through emulation

    Because the software and hardware are decoupled, it becomes easier to test the software long before it reaches the hardware, through hardware emulation. Emulation is the ability of a program in an electronic device to imitate another program or device. In the past, Java tools like the Java ME SDK and the SunSPOT Solarium provided developers with emulation for a complete set of mobile telephones and SunSPOTs, often including network interaction or, in the case of SunSPOTs, radio communication. What emulation does is speed up the development cycle by refining the software development process without the need for hardware: the software is fixed, redefined, and refactored without the time and expense of hardware testing. With tools like the Java ME 3.2 SDK, Embedded Java applications can be quickly developed on Windows-based platforms. In the end, of course, developers should do a full set of testing on the hardware, as incompatibilities between emulators and hardware will exist, but the amount of time to do this should be significantly reduced.

    3. Highly productive language, APIs, runtime, and tools mean quick time to market

    Charles Nutter probably said it best when he tweeted, "Every time I see a piece of C code I need to port, my heart dies a little. Then I port it to 1/4 as much Java, and feel better." The Java environment is a very complex combination of a Java Virtual Machine, the Java language, and its robust APIs. Combine that with the Java ME SDK for small devices, or just NetBeans for the larger devices, and you have a development environment where development time is reduced significantly, meaning the product can be shipped sooner. Of course this is assuming that the engineers don't get slap-happy adding new features given the extra time they'll have.

    4. Create high-performance, portable, secure, robust, cross-platform applications easily

    The latest JIT compilers for the Oracle JVM approach the speed of C/C++ code, and in some memory-allocation-intensive circumstances exceed it. And specifically for embedded devices, both ME Embedded and SE Embedded have been optimized for smaller footprints. For portability, Java uses bytecode to make the language platform-independent. This creates a write-once-run-anywhere environment that allows you to develop on one platform and execute on others, and avoids platform vendor lock-in. For security, Java achieves protection by confining a Java program to the Java execution environment and not allowing it to access other parts of the computer. For robustness, the program must execute reliably across a variety of systems. Finally, Oracle Java ME Embedded is a cross-industry and cross-platform product, optimized in release 3.2 for chipsets based on ARM architectures. Similarly, Oracle Java SE Embedded works on a variety of ARM v5, v6, and v7, x86, and Power Architecture Linux platforms.

    5. Java isolates your apps from language and platform variations (e.g. C/C++, kernel, libc differences)

    This has been a key factor in Java from day one. Developers write to Java and don't have to worry about underlying differences in the platform; those variations are managed by the JVM. Gone are the C/C++ problems like memory corruption, stack overflows, and other such bugs which are extremely difficult to isolate. Of course this doesn't imply that you will be able to get away from native code completely; there could be some situations where you have to write native code in either assembler or C/C++, but those instances should be limited.

    6. Most popular embedded processors supported, allowing design flexibility

    Java SE Embedded is now available on ARM v5, v6, and v7, along with Linux on x86 and Power Architecture platforms. Java ME Embedded is available on systems based on ARM-architecture SOCs with low memory footprints, plus a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex-M3/4 (KEIL MCBSTM32F200 using the ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).

    7. Support for key embedded features (low footprint, power management, low latency, etc.)

    All embedded devices are by their very nature constrained in some way. Economics may dictate a device with less RAM and ROM; CPU needs can dictate a less powerful device. Power consumption is another major resource in some embedded devices, as connecting to a consistent power source is not always desirable or possible; other devices have to be constantly on. Often many of these systems are headless (in the embedded space it's almost always Halloween). For memory resources, Java ME Embedded can run in environments as small as 130KB RAM/350KB ROM for a minimal, customized configuration, up to 700KB RAM/1500KB ROM for the full, standard configuration. Java SE Embedded is designed for environments starting at 32MB RAM/39MB ROM. Key functionality of embedded devices such as auto-start and recovery and flexible networking is fully supported. And while Java SE Embedded has been optimized for mid-range to high-end embedded systems, Java ME Embedded is a Java runtime stack optimized for small embedded systems: it provides a robust and flexible application platform with dedicated embedded functionality for always-on, headless (no graphics/UI), connected devices.

    8. Leverage the huge Java developer ecosystem (expertise, existing code)

    There are over 9 million developers in the world that work on Java, and while not all of them work on embedded systems, their wealth of expertise in developing applications is immense. In short, getting a Java developer to work on an embedded system is pretty easy; you probably have a Java developer living in your subdivision. Then of course there is the wealth of existing code. The Java Embedded Community on Java.net is the central gathering place for embedded Java developers, and conferences like Embedded Java @ JavaOne and a variety of hardware vendor conferences like the Freescale Technology Forums offer an excellent opportunity for those interested in embedded systems.

    9. Easily create end-to-end solutions integrated with Java back-end services

    In the "Internet of Things", things aren't on an island doing a single task. For instance, an embedded drink dispenser doesn't just dispense a beverage; it could collect money from a credit card and also send information about current sales. Similarly, an embedded house power-monitoring system doesn't just manage the power usage in a house; it can also send that data back to the power company. In both cases it isn't about the individual thing, but about monitoring a collection of things. How much power did your block, subdivision, area of town, town, county, state, nation, world use? How many Dr Peppers were purchased from thing1, thing2, thingN? The point is that all this information can be collected and transferred securely (and believe me, that is a key issue that Java fully supports) to back-end services for further analysis. And what better back-end service exists than a Java back-end service? It's interesting to note that on larger embedded platforms that support the Java Embedded Suite, some of the analysis might be done on the embedded device itself, as JES has a GlassFish server and Java DB as part of the installation. The result is an end-to-end Java solution.

    10. Solutions from constrained devices to server-class systems

    Just take a look at some of the embedded Java systems that have already been developed and you'll see a vast range of solutions: the Livescribe pen, the Kindle, each and every Blu-ray player, Cisco's Advanced VoIP phone, the Kronos InTouch smart time clock, EnergyICT smart metering, EDF's automated meter management, Ricoh printers, and Stanford's automated car are just a few entries on a list of embedded Java implementations that continues to grow.

    Conclusion

    Now, if you're a Java developer you probably look at some of these 10 reasons and say "duh", but for embedded developers this should be an eye-opening list. And with the release of ME Embedded 3.2 and the Java Embedded Suite, the embedded developer's life is now a whole lot easier. For the Java developer, employment opportunities are about to increase. For both, it's a great time to start developing Java for the "Internet of Things".

    Read the article

  • Simple thruster-like behaviour when rotating a sprite

    - by ensamgud
    I'm prototyping some 2D game concepts with XNA and have added some basic keyboard input to control a triangle sprite. When I press key up the sprite accelerates in its current facing direction; when I release the key it brakes down. For rotation, when I press the left/right keys I rotate the sprite. Currently the sprite immediately changes direction when I rotate it. What I want is for it to keep moving in the same direction when I rotate, until I hit key up, adding thrust in whatever direction the sprite is pointing at. This would simulate thrusters on a classic space shooter like Asteroids. I'm adding an image to describe the behaviour I'm after and some code samples of how I'm doing things at the moment. This is my player struct, holding information about the sprite:

        public struct PlayerData
        {
            public Vector2 Position;    // where to draw the sprite
            public Vector2 Direction;   // travel direction of sprite
            public float Angle;         // rotation of sprite
            public float Velocity;
            public float Acceleration;
            public float Decelleration;
            public float RotationAcceleration;
            public float RotationDecceleration;
            public float TopSpeed;
            public float Scale;
        }

    This is how I'm currently handling thrusting/braking when pressing/releasing key up (simplified, removed some bounds checking etc.):

        player.Velocity += player.Acceleration * 0.1f;
        player.Velocity -= player.Acceleration * 0.1f;

    And when I rotate the sprite left and right:

        player.Angle -= player.RotationAcceleration * 0.1f;
        player.Angle += player.RotationAcceleration * 0.1f;

    This runs in the update loop, keeps the direction updated and updates the position:

        Vector2 up = new Vector2(0f, -1f);
        Matrix rotMatrix = Matrix.CreateRotationZ(player.Angle);
        player.Direction = Vector2.Transform(up, rotMatrix);
        player.Direction *= player.Velocity;
        player.Position += player.Direction;

    I am following along various beginner tutorials and haven't found any describing this, but I have tried some approaches on my own without success. Do I need to change my velocity and acceleration fields to Vectors instead of floats to accomplish this type of movement? I realise my Angle and the Direction vector are currently tied together, and I need to disconnect these somehow to be able to rotate freely without changing the direction of the movement, but I can't quite figure out how to do this while keeping the acceleration/deceleration functional. Would appreciate an explanation rather than pure code samples. Thanks.
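
    Editorially, the short answer to the question asked above is yes: velocity needs to become a Vector2 that persists between frames, with thrust added along the current facing only while the key is down. A minimal sketch under that assumption (VelocityVec, thrustKeyDown, and dt are illustrative names, not part of the original struct):

        // Facing is derived from Angle each frame; velocity is independent of it.
        Vector2 facing = Vector2.Transform(new Vector2(0f, -1f),
                                           Matrix.CreateRotationZ(player.Angle));

        if (thrustKeyDown)
            player.VelocityVec += facing * player.Acceleration * dt; // thrust along the nose

        // Clamp to top speed so thrust can't accelerate forever.
        if (player.VelocityVec.Length() > player.TopSpeed)
            player.VelocityVec = Vector2.Normalize(player.VelocityVec) * player.TopSpeed;

        player.Position += player.VelocityVec * dt; // rotating alone no longer changes course

    Braking then becomes scaling VelocityVec toward zero each frame rather than decrementing a scalar speed.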

    Read the article

  • Web App Server hardware question. Which configuration?

    - by JBeckton
    I am pricing some new servers and I am not sure which configuration to get. The server will be running some web applications for our company; some of them are ASP.NET sites and some are ColdFusion. The OS will be Windows Server 2008 Web or Standard Edition. Do I need 2 processors, or will a single quad core handle it? Xeon multi-core with Hyper-Threading or without? I am going 64-bit so I can go higher than 4 GB of RAM. I am shopping at Dell and there are so many options. I do not want to get too much hardware and not use half of it, because that would be a waste of money, and I do not want to get too little and have to ask for more money to upgrade it later.

    Read the article

  • Which hardware for using VS.NET 2008/2010 decently?

    - by stighy
    (Hope this is not OT.) Hi, I'm a little exasperated about running VS.NET 2008 on an Acer Aspire with an Intel T2350. I know this hardware is not the latest or the best you can find on the market, so I'm thinking of buying a new notebook. In your experience, which type of processor should I buy? I have found, here in Italy, Acer notebooks between 350-500 euros with a T4400 and 2-3 GB of RAM. Is that enough to have a good "working experience" with VS (by "good" I mean not waiting 10-20 seconds when I switch from ASP.NET design view to ASP.NET source code)? Any answer is appreciated.

    Read the article

  • What CPU hardware performance counter tool do you guys use?

    - by Hao Shen
    I just want to know the popular tools that I can use. Originally I used Perfmon2. However, I recently installed the latest Ubuntu 12 on my Ivy Bridge i7 machine, and there were compilation errors for pfmon. :< From http://comments.gmane.org/gmane.comp.linux.perfmon2.devel/3255 it seems that Perfmon2 will not work after kernel 2.6.30, and the suggestion there is to use the perf tool instead. I just want to confirm: is there any popular application-level performance counter monitoring tool that can be used with later kernel versions?
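
    For reference, a minimal invocation of the suggested perf tool for the common hardware-counter cases might look like this (a sketch; the generic event names shown are standard perf events, but availability varies by CPU):

        # Count hardware events for one program run
        perf stat -e cycles,instructions,cache-misses ./myapp

        # Attach to an already-running process (PID 1234) for 10 seconds
        perf stat -e cycles,instructions -p 1234 sleep 10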

    Read the article

  • Hardware for a home server running Windows Server 2008 R2 Hyper-V or Microsoft Hyper-V Server 2008 R2

    - by David Hayes
    Hi, I'm planning to build a server to do the following:

    - Act as a file server (videos, pictures, music)
    - Run Squeezebox Server
    - Run the Zune software to allow wireless syncing to Windows Phone 7

    I'd also like to aim for:

    - Low power usage (I'd settle for less than the 90-100 watts I'm using at the moment)
    - Flexibility: I might want to add a web server or SharePoint or...
    - Something I can learn/test on; work is mainly a Windows shop but I do have Linux experience too
    - A look at App-V (application virtualization)
    - A cost of less than $1000
    - Quiet would be nice but not essential (it'll be in the basement)

    I'm thinking of getting a TechNet subscription to get access to Windows Server 2008 R2 at a reasonable price ($199). So my plan was this:

    - Get a bunch of 2TB Caviar Green drives to RAID up (RAID 1 or 6, probably)
    - Get a quad-core CPU (Intel i5/i7, probably)
    - Install a hypervisor
    - Install W2K8 R2 Storage Server for a NAS
    - Install Windows 7 Pro to run Zune/Squeezebox
    - Install any other machines I want to play with

    Questions:

    - Can anyone see any issues with this or have any better ideas?
    - Do you think I'd need an i7 over an i5? Is 4 cores enough/too much?
    - Can anyone suggest a nice, reasonably priced case that will hold 6-8 drives and stay cool?
    - Should I wait for Sandy Bridge parts?

    Read the article

  • Need to setup an office network, suggest some hardware?

    - by Yegor
    We have 6 Windows workstations, spread out over a fairly large area. We need to share a DSL connection (upgrading to 100/100 Mbit fiber in a few months) with these machines over a 1 Gbit network. We also need WiFi to be available for laptop use, and we plan to add 2 rackmount servers for internal use as well. Can someone suggest a decent (preferably low-cost) setup that will let me achieve the things mentioned above?

    Read the article

  • How to update the hard disk device drivers for a ghosted hard drive image so it can run on different hardware: Ultra ATA > SATA

    - by rism
    I've ghosted a WinXP machine from one laptop with an Ultra ATA drive, and would like to set it up on another laptop as a multiboot option on another hard drive with a SATA interface. I can install the partition fine, but if I make it active and try to boot it, it blue-screens. The blue screen is so fast I can't even read it, other than to make out it's saying "something"; I'm guessing it's probably the hard drive, as it gets through POST fine. So basically I would like to boot into my Win7 OS, and then somehow manipulate the XP partition to use updated drivers for the new hard drive/laptop, so that I can then at least boot into the XP OS on the new machine and update all the other drivers in safe mode or whatever to get it to run. I assume someone is going to tell me to just do a fresh install, but that kind of defeats the purpose of ghosting at this point. There is a significant amount of personalisation and development setup on the XP machine that I would like to just transfer as is. As it stands I've invested minimal time in getting it to run - just a ghost and recovery and then a blue-screen boot or two - so it's still well worth it to me, time-wise, to try this way. Thanks.

    Read the article

  • How do I know which hardware components are compatible when building a computer?

    - by darkie15
    I am trying to set up my own new computer. I haven't done this before and hence need guidance from everyone here. I can google the steps to follow for setting up my machine, but I do not understand which components are compatible with each other. For example, there must be specific models of motherboard that are compatible with the Intel Core i5-2500K Sandy Bridge 3.3GHz. In general, if there is a website that can guide me on all compatible components, that would be great. If not, how would I be able to check compatibility?

    Read the article

  • How can I run the same Linux Installation on my hardware and in a virtual machine?

    - by LithMaster
    I've started some development that requires Linux (I'm currently on Ubuntu, but I may switch to Debian), but I still use Windows 7 for my day-to-day computing. I have already tried a dual-boot setup, but I've found that it is too cumbersome to switch between Linux and Windows. I'm wondering if it's possible to set up an installation of Linux (again, Ubuntu or Debian) on a partition of my hard drive that I can also run from Windows in a virtualized environment.
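
    Worth noting (a sketch, not a tested recipe; it assumes VirtualBox as the virtualization software and that the Linux installation lives on partition 2 of the first physical disk - adjust the names and numbers to match): VirtualBox can wrap a physical partition in a raw-disk VMDK, so the same installation can boot natively and inside a VM running under Windows:

        REM Run from an elevated prompt on the Windows 7 host. This creates a small
        REM wrapper .vmdk pointing at the physical partition - not a copy of it.
        VBoxManage internalcommands createrawvmdk ^
            -filename C:\VMs\linux-raw.vmdk ^
            -rawdisk \\.\PhysicalDrive0 -partitions 2

    The usual caveat applies: the VM presents different virtual hardware than the bare metal, so the Linux installation must tolerate both environments (Linux generally handles this far better than the ghosted-XP scenario discussed elsewhere on this page).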

    Read the article

  • Multiple servers acting like a single one with all the hardware?

    - by marc.riera
    Hello. By now I have 10 servers for HPC, power-computing oriented. My users need to launch several processes using qmake. The users are used to working with Ubuntu 9.10, and the software from the repositories is suitable for them. I've deployed Ubuntu 9.10 to all 10 servers (PXE rocks). For now we work with parallel-ssh and cluster-ssh, which allow us to launch the same process on all servers. With these tools the servers remain independent, but with the same software and the same launched command. Now we would like to go to the next step and see all the servers as a single one, with all the resources from the other 9 as if they were its own. The difference would be substantial in time to process and also in time to design the command to launch. Any advice on which software to use would be very useful. Thanks

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time - from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet, with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress...

    Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed-up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet.

    Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories.

    Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts - just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does.

    So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later.

    So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now:

        rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
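
    Worth noting as a possible workaround (an editorial sketch, not a tested recipe): GNU ddrescue is not actually limited to whole partitions - it accepts ordinary files as input and output, with a mapfile per file recording the bad areas. Under that assumption, a per-file loop gives roughly the rsync-like behaviour described above; the paths and retry count are illustrative:

        # First pass: copies what reads cleanly and logs bad areas in each mapfile.
        # Re-running the same loop retries only the error areas ("-r 3" = 3 retry passes).
        find /mnt/dying/photos -type f | while IFS= read -r f; do
            out="/mnt/rescue/${f#/mnt/dying/}"
            mkdir -p "$(dirname "$out")"
            ddrescue -r 3 "$f" "$out" "$out.map"
        done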

    Read the article

  • What hardware is at physical address 0x80000000 on powerpc New World Macintosh?

    - by tinkerer
    The Open Firmware device tree gives no clue what device might decode at physical address 0x80000000 to 0x8008200 on a G4 New World Macintosh. The MMU has three adjacent Virtual=Real translations for that block. They are the only address translations reported between the top of physical DRAM at 0x20000000 and the start of the PCI bridges at 0xf0000000. (A possible clue is that frame-buffer-addr is reported as 0x9c008000 by Open Firmware, and that is not in the reported translation table either.) I believe the architecture has been around since about 1999.

    Read the article

  • Why do manufacturers not show all hardware power usage?

    - by Drew
    I find it slightly more difficult to build a computer when I do not know how much power is needed for a component. When selecting a power supply for a computer, it is difficult to know how large a one to get. You don't want to go too large for cost and circuit reasons, but you don't want to go too low and not be able to properly use every component. For instance, a graphics card might say "Minimum of a 500 Watt power supply. (Minimum recommended power supply with +12 Volt current rating of 30 Amps.)" But it really needs 360W (12V * 30A). So why don't they just say "Uses 360W max and xxxW peak"? Processors, I have noticed, are good at reporting their power usage, but aside from processors and sometimes graphics cards, power usage is not easily found. What is the power consumed by Blu-ray/DVD drives? By the HDDs/SSDs? By the motherboard? Why are these questions not easily answered when building a machine?

    Read the article

  • What hardware combination is better: one with an i7-720QM processor and GeForce 310M graphics, or one with an i5-430M and GeForce GTS 360M?

    - by Mason
    I am looking for a new laptop, and the two I am deciding between are an Asus with an Intel® Core™ i5-430M processor and NVIDIA GeForce GTS 360M graphics, or a Toshiba with an Intel® Core™ i7-720QM processor and NVIDIA GeForce 310M graphics. I am looking for a computer to use for college and to play games on. I want to know which one I should get; they are both the same price.

    Read the article
