Search Results

Search found 30804 results on 1233 pages for 'hardware test'.

Page 20/1233 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion, the biggest adoption barrier to performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't so expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation: you can run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure Connect endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure Connect endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: open the Azure project, change the subscription and certificate, click publish and *BOOM*, in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents and VS 2012 agents.

    Introduction

    The slide show below should help you understand the high-level details of what we are trying to achieve: Leveraging Azure for Performance Testing (slides from Avanade).

    Scenario 1 – Running a Test Rig in Windows Azure

    To start off with the basics, in the first scenario I plan to discuss how to:

    - Automate deployment and configuration of Windows Azure worker roles for the test controller and test agent
    - Automate deployment and configuration of the SQL database on the test controller worker role
    - Scale test agents on demand
    - Create a web performance test and a simple load test
    - Manage test controllers right from Visual Studio on the on-premise developer workstation
    - View the results of the load test
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS 2010 and VS 2012 test rigs

    Scenario 2 – The Scaled-Out Test Rig and Sharing Data Using SQL Azure

    A scaled-out version of this implementation involves multiple test rigs running in the cloud. In this scenario I will show you how to sync the load test databases from these distributed test rigs into one SQL Azure database using SQL Data Sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store.

    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure SQL Data Sync
    - Test the SQL Azure load test result database created as a result of the sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS 2010 and VS 2012 test rigs

    The Ingredients

    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the following to try out the scenarios:

    1. Windows Azure subscription
    2. Windows Azure Storage – blob storage
    3. Windows Azure Compute – worker role
    4. SQL Azure database
    5. SQL Data Sync
    6. Windows Azure Connect – endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough

    To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts. The easiest and most efficient way is to pre-upload the required software into Windows Azure blob storage. SQL Express, the test controller and the test agent installers expose various switches we can take advantage of, including the quiet install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we create a virtual network among the machines, enabling two-way communication. All of the above can be done programmatically; let's see step by step how.

    Scenario 1

    Video Walkthrough – Leveraging Windows Azure for Performance Testing

    Scenario 2

    Work in progress, watch this space for more...

    Solution

    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS Preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.

    Conclusion

    Other posts and resources available here. Possibilities.... Endless!
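    The quiet-install step in the walkthrough lends itself to a small orchestration sketch. The following is a minimal illustration, not the author's actual solution: it assumes the installers have already been pulled down from blob storage to a local folder, and the installer file names and silent switches shown are assumptions that vary by product version, so check each installer's documentation.

        // Hypothetical sketch of the silent-install sequence a worker role startup task performs.
        import java.io.IOException;
        import java.util.List;

        public class TestRigSetup {
            // Assumed local paths and switches; adjust for the actual installers staged in blob storage.
            private static final List<List<String>> STEPS = List.of(
                List.of("C:\\installers\\SQLEXPR_x64_ENU.exe", "/Q", "/ACTION=Install",
                        "/FEATURES=SQLENGINE", "/INSTANCENAME=TESTRIG",
                        "/IACCEPTSQLSERVERLICENSETERMS"),
                List.of("C:\\installers\\testcontroller\\setup.exe", "/q"),  // quiet switch assumed
                List.of("C:\\installers\\testagent\\setup.exe", "/q"));      // quiet switch assumed

            public static void main(String[] args) throws IOException, InterruptedException {
                for (List<String> step : STEPS) {
                    // inheritIO() streams installer output into the role's log for diagnosis.
                    Process p = new ProcessBuilder(step).inheritIO().start();
                    int exit = p.waitFor();
                    if (exit != 0) {
                        throw new IllegalStateException(step.get(0) + " failed with exit code " + exit);
                    }
                }
                // After installation, the controller still has to be registered with the agents
                // and the load test database associated with the controller, as described above.
            }
        }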

    Read the article

  • How to diagnose and fix Kernel Panic Fatal Machine Check error?

    - by 0x4a6f4672
    I have got a new Samsung Series 7 laptop with a dual boot setup for Windows 8 and Ubuntu 12.10. A fine machine, comparable to a MacBook Pro. The Ubuntu installation was quite a hassle, but with the help of Boot Repair it finally seemed to work. Or so I thought. Windows 8 starts fine, but when I want to start Ubuntu, the following Machine Check Exception error regularly occurs, quite similar to this one:

        [Hardware Error] CPU 1: Machine Check Exception: 5 Bank 6
        [Hardware Error] RIP !inexact! 33 <00007fab2074598a>
        [Hardware Error] TSC 95b623464c ADDR fe400 MISC 3880000086
        .. [similar messages for CPU 2, 3 and 0] ..
        [Hardware Error] Machine Check: Processor context corrupt
        Kernel panic - not syncing: Fatal Machine Check
        Rebooting in 30 seconds

    Kernel panic does not sound good. Then it starts to reboot, and the second boot attempt often works. Is it a kernel or driver problem? The laptop has an Intel Core i7 processor. I already deactivated Hyper-Threading in the BIOS, but it does not seem to help :-( I also disabled the Execute Disable Bit (EDB) flag in the BIOS. EDB is an Intel hardware-based security feature that can help reduce system exposure to viruses and malicious code. Since I disabled it, the error occurs less frequently, but it still appears occasionally :-( It seems to be the same error as described here and here. Maybe a Samsung-specific kernel problem? A similar error also happens on a Samsung Series 9 ultrabook (which seems to be kernel bugs 49161 and 47121). On my Samsung Series 7, it still occurs, for instance, during booting on battery after "Checking battery state". Perhaps anyone else has an idea? These kernel panic errors are really annoying..

    Read the article

  • Kernel Panic Fatal Machine Check

    - by 0x4a6f4672
    I have got a new Samsung Series 7 laptop with a dual boot setup for Windows 8 and Ubuntu 12.10. The Ubuntu installation was quite a hassle, but with the help of Boot Repair it finally seemed to work. Or so I thought. Windows 8 starts fine, but when I want to start Ubuntu, the following error regularly occurs, quite similar to this one:

        [Hardware Error] CPU 1: Machine Check Exception: 5 Bank 6
        [Hardware Error] RIP !inexact! 33 <00007fab2074598a>
        [Hardware Error] TSC 95b623464c ADDR fe400 MISC 3880000086
        .. [similar messages for CPU 2, 3 and 0] ..
        [Hardware Error] Machine Check: Processor context corrupt
        Kernel panic - not syncing: Fatal Machine Check
        Rebooting in 30 seconds

    Kernel panic does not sound good. Then it starts to reboot, and the second boot attempt often works. Is it a kernel or driver problem? The laptop has an Intel Core i7 processor. I already deactivated Hyper-Threading in the BIOS, but it does not seem to help :-(

    Read the article

  • When should I decline to make a requested change?

    - by reuscam
    I work on the software side of a company that provides custom hardware with software running on top of it. Often the hardware is not engineered well. In those cases, I am usually asked first to troubleshoot the problem; of course the symptom is always "your software crashed" or something along those lines. Just recently we had another one of these incidents, where power on a USB line is not reliable and causes a USB device to fail. This causes a usability problem in one of our applications. I have been asked by upper management to handle this better: continually monitor the USB device, and if it disappears, reboot or try to reset it. Doing either of these is not guaranteed to fix anything. Ultimately, the real fix is to correct the reliability of the device on the hardware side. I could improve performance, but not to 100%, and of course I would be using my already limited time to bloat the code and add yet another device monitoring thread. So with all that said, how do I make a good decision about when to say that this needs to be a hardware fix, and only a hardware fix? Can I approach this quantitatively and come up with some sort of definitive yes/no test? I'm sure it's not that easy.
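    For context, the kind of workaround being requested looks roughly like the sketch below. This is a minimal illustration under stated assumptions: the device path and reset hook are hypothetical placeholders, and, as noted above, a software watchdog like this cannot guarantee recovery from a hardware power fault.

        import java.nio.file.Files;
        import java.nio.file.Path;

        // Hypothetical watchdog thread that polls for the device node and tries a reset when it vanishes.
        public class UsbWatchdog implements Runnable {
            private final Path deviceNode; // e.g. Path.of("/dev/ttyUSB0") on Linux; placeholder only

            public UsbWatchdog(Path deviceNode) {
                this.deviceNode = deviceNode;
            }

            @Override
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    if (!Files.exists(deviceNode)) {
                        resetDevice();
                    }
                    try {
                        Thread.sleep(1_000); // poll once a second; tune to the device's failure profile
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }

            // Driver-specific reset would go here; success is not guaranteed if the fault is electrical.
            private void resetDevice() {
                System.err.println("Device missing, attempting reset: " + deviceNode);
            }
        }

    Usage would be something like new Thread(new UsbWatchdog(Path.of("/dev/ttyUSB0"))).start(), which is exactly the extra monitoring thread the question is reluctant to add.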

    Read the article

  • Defining JUnit Test Cases Correctly

    - by Epitaph
    I am new to unit testing and therefore wanted to do some practical exercise to get familiar with the JUnit framework. I created a program that implements a String multiplier:

        public String multiply(String number1, String number2)

    In order to test the multiplier method, I created a test suite consisting of the following test cases (with all the needed integer parsing, etc.):

        @Test
        public class MultiplierTest {
            Multiplier multiplier = new Multiplier();
            // Test for 2 positive integers
            assertEquals("Result", 5, multiplier.multiply("5", "1"));
            // Test for 1 positive integer and 0
            assertEquals("Result", 0, multiplier.multiply("5", "0"));
            // Test for 1 positive and 1 negative integer
            assertEquals("Result", -1, multiplier.multiply("-1", "1"));
            // Test for 2 negative integers
            assertEquals("Result", 10, multiplier.multiply("-5", "-2"));
            // Test for 1 positive integer and 1 non number
            assertEquals("Result", , multiplier.multiply("x", "1"));
            // Test for 1 positive integer and 1 empty field
            assertEquals("Result", , multiplier.multiply("5", ""));
            // Test for 2 empty fields
            assertEquals("Result", , multiplier.multiply("", ""));

    In a similar fashion, I can create test cases involving boundary cases (considering the numbers are int values) or even imaginary values.

    1) But what should be the expected value for the last 3 test cases above? (A special number indicating error?)
    2) What additional test cases did I miss?
    3) Is the assertEquals() method enough for testing the multiplier method, or do I need other methods like assertTrue(), assertFalse(), assertSame(), etc.?
    4) Is this the RIGHT way to go about developing test cases? How am I "exactly" benefiting from this exercise?
    5) What should be the ideal way to test the multiplier method? I am pretty clueless here.

    If anyone can help answer these queries I'd greatly appreciate it. Thank you.

    Read the article

  • Software RAID to hardware RAID, can it be done?

    - by gtaylor85
    Can it be done... well? (For the record, I did not set this server up.) In my server there are 4 disks. 3 of them are in a software RAID 5, and 1 has the OS installed. I want to buy a RAID controller and 4 new HDs and set up a hardware RAID 5. If possible, I'd like to just image the current setup and use it to build the new one. My questions are: Can I image a 3-disk RAID 5 to a 4-disk RAID 5? Are there problems with this? What is considered best practice for the OS: to have it on a separate disk like it currently is, or to install it on the RAID 5? Thank you. I can clarify anything; I'm not sure what other info might be pertinent.

    Read the article

  • How to figure out if hardware is slower than it should be?

    - by paldepind
    I'm pretty sure that my computer is slower than it should be, and it has been that way since I got it. For instance, it lags a lot in Counter-Strike: Source. And yes, I've installed all the newest drivers. And it doesn't matter what OS I use: I've tried Windows, Linux and FreeBSD, and it's slow in all of them. So what could this be? Is there something wrong with the hardware? And if so, what could it be?

    Read the article

  • How do I configure hardware RAID in a PowerEdge 2850?

    - by Eric Fossum
    I just bought a Dell PowerEdge 2850 from Craigslist, and for the most part I'm happy with its $300 price tag, but I cannot figure out where to configure the embedded hardware RAID... I've seen online that you should hit <CTRL-M>, but my box never shows that prompt while booting. I have <CTRL-A> (I think) for an LSI Logic config, but that seems to just program SCSI settings and verify drives on my SCSI-A and SCSI-B channels. Anyone have a clue where this RAID config is?

    Read the article

  • Recommendation for hardware upgrade: thin clients? Or...?

    - by Alex C.
    I work for an animal shelter in Upstate New York. We have about 50 machines running XP Pro. They're connected to a Windows network with a domain. About half of these computers are used for nothing more than using two web-based apps -- one to keep track of our animals, the other to process credit cards. Having a full-blown desktop PC seems like overkill for this purpose. The PCs are three-to-five years old, and I'd like to come up with a plan to upgrade the hardware. Our donations are down (not surprising, given the economy), so cost is a big factor. Can people recommend some options? Some sort of thin client, maybe?

    Read the article

  • Partner WebCasts: EMEA Alliances and Channels Hardware Webinars, July 2012

    - by rituchhibber
    Dear partner, Oracle is pleased to invite you to the following webcasts, dedicated to our EMEA partner community and designed to provide you with important news on our SPARC and storage product portfolios. Please ensure you don't miss these unique learning opportunities!

    1. How to Make Money Selling SPARC!
    3pm CET (2pm UK), Tuesday, July 10, 2012
    The webcast will be hosted by Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant. Agenda: to bring our partners timely, valuable information focused on increasing their success in selling SPARC systems. The webcast will be focused on specific topics and will last approximately 30 minutes. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. REGISTER NOW

    2. Introduction to Oracle's New StorageTek SL150 Modular Tape Library
    3pm CET (2pm UK), Thursday, July 12, 2012
    This webcast will help you understand Oracle's new StorageTek SL150 modular tape library, the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast Cindy McCurley, from Tape Product Management, will introduce you to the latest addition to the Oracle tape storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product's features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. REGISTER NOW

    Delivery Format
    This FREE online LIVE eSeminar will be delivered over the Web and conference call. Note: please join the call 10 minutes before the scheduled start time. We look forward to your participation.

    Best regards,
    Giuseppe Facchetti, EMEA Partner Business Development Manager, Oracle Hardware Sales
    Sasan Moaveni, EMEA Storage Sales Manager, Oracle Hardware Sales

    Read the article

  • Oracle Key Vault - Hardware Security Module for TDE and more

    - by Heinz-Wilhelm Fabry (DBA Community)
    At the beginning of August, Oracle released a new product called Oracle Key Vault (OKV). It is a hardware security module (HSM): a piece of hardware for storing keys, passwords, and files that contain keys and passwords. Oracle database installations use the latter form of storage (passwords and keys in files) for Oracle Advanced Security Transparent Data Encryption (TDE) and for external password stores. In versions 10 and 11 of the database these files are called wallets; in version 12 they are called keystores. However, since database version 11.2 there has also been the option of using an HSM instead of wallets / keystores when deploying TDE. Since Oracle could not offer an HSM product of its own, enterprise customers turned to products from other vendors. With OKV this can now change. Depending on the threat scenario, deciding against wallets / keystores and in favor of an HSM can make good sense, because an HSM offers more security:

    - An operating system file can be stolen (copied) more easily than an HSM, which is usually built into a machine as a specially secured plug-in card or stands in a data center as a protected appliance.
    - An HSM, unlike a wallet / keystore, can be used across systems. This allows keys to be shared, which in turn perfectly supports, for example, the use of TDE on RAC installations.
    - An HSM can be used by multiple applications. This makes consolidating and managing passwords and keys easier.

    As an introduction to the new product, this tip shows how OKV can be used for TDE.
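    As an illustration of the mechanism mentioned above (available since 11.2), switching TDE from a file-based wallet to an HSM is driven by the wallet location in sqlnet.ora. The snippet below is a generic, hedged sketch of that pre-OKV HSM configuration, not OKV-specific setup; the exact OKV steps, including deploying the vendor's PKCS#11 library where the database can load it, are in the product documentation.

        # sqlnet.ora: point TDE at an HSM instead of a file-based wallet (illustrative only)
        ENCRYPTION_WALLET_LOCATION =
          (SOURCE =
            (METHOD = HSM))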

    Read the article

  • ATI Catalyst driver 12.8 is not using hardware acceleration on Precise

    - by Jack Wright
    I've been using Ubuntu and ATI Catalyst for years. On my clean install of Ubuntu 12.04 I've noticed that Catalyst 12.6, and then 12.8, are not actually using my HD 5750 GPU for hardware acceleration: high CPU usage, zero GPU load. Everything installed correctly with no hassles; fglrxinfo and vainfo are correct as per this HowTo for Precise. I have an Ubuntu 10.04 installation with Catalyst 12.6 on the same hardware which does use the GPU (low CPU usage, high GPU load) when transcoding video files or playing video content. The VA-API drivers are not installed on the 10.04 build; they are not mentioned in this HowTo for Lucid. fgl_glxgears frame rates on Precise are a fifth of the rates on Lucid.

        LUCID
        jw@Kworld:~$ fgl_glxgears
        Using GLX_SGIX_pbuffer
        16867 frames in 5.0 seconds = 3373.400 FPS
        12523 frames in 5.0 seconds = 2504.600 FPS
        13763 frames in 5.0 seconds = 2752.600 FPS

        PRECISE
        jw@NewWorld12:~$ fgl_glxgears
        Using GLX_SGIX_pbuffer
        12905 frames in 5.0 seconds = 2581.000 FPS
        3230 frames in 5.0 seconds = 646.000 FPS
        517 frames in 5.0 seconds = 103.400 FPS
        518 frames in 5.0 seconds = 103.600 FPS
        6489 frames in 5.0 seconds = 1297.800 FPS

    This is glxgears running in fullscreen. In Lucid (10.04) I can't see the gears, they are spinning so fast, but in Precise (12.04) they are really sluggish. Has anyone else noticed a problem like this? Cheers, Jack.

    Read the article

  • Threading models when talking to hardware devices

    - by Fuzz
    When writing an interface to hardware over a communication bus, communications timing can sometimes be critical to the operation of a device. As such, it is common for developers to spin up new threads to handle communications. It can also be a terrible idea to have a whole bunch of threads in your system; in the case that you have multiple hardware devices, you may have many, many threads that are outside the control of the main application. Certainly it can be common to have two threads per device, one for reading and one for writing. I am trying to determine the pros and cons of the two different models I can think of, and would love the help of the Programmers community. A minimal skeleton of each model is sketched below.

    1. Each device instance handles its own threads (or shares a thread for a communication device). A thread may exist for writing, and one for reading. Requested writes to a device from the API are buffered and worked on by the writer thread. The read thread exists in the case of blocking communications, and uses callbacks to pass read data to the application. Timing of communications can be handled by the communications thread.

    2. Devices aren't given their own threads. Instead, read and write requests are queued/buffered. The application then calls a "DoWork" function on the interface and allows all reads and writes to take place and fire their callbacks. Timing is handled by the application, and the driver can request to be called at a given specific frequency.

    Pros for item 1 include finer-grained control of timing at the communication level, at the expense of control over what's going on at the higher application level (which, for a real-time system, can be terrible). Pros for item 2 include better control over the timing of the entire system for the application, at the expense of allowing each driver to handle its own business. If anyone has experience with these scenarios, I'd love to hear some ideas on the approaches used.
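    To make the comparison concrete, here is a minimal skeleton of both models. It is an illustrative sketch only: the bus read/write calls are hypothetical stubs, and a real driver would need error handling, shutdown and flow control.

        import java.util.ArrayDeque;
        import java.util.Queue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.function.Consumer;

        // Model 1: the device owns its threads; timing lives in the communications layer.
        class ThreadedDevice {
            private final BlockingQueue<byte[]> writeQueue = new LinkedBlockingQueue<>();
            private final Consumer<byte[]> readCallback;

            ThreadedDevice(Consumer<byte[]> readCallback) {
                this.readCallback = readCallback;
                new Thread(this::writeLoop, "device-writer").start();
                new Thread(this::readLoop, "device-reader").start();
            }

            void write(byte[] frame) { writeQueue.add(frame); } // buffered for the writer thread

            private void writeLoop() {
                try {
                    while (true) busWrite(writeQueue.take()); // writer thread controls pacing
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }

            private void readLoop() {
                while (!Thread.currentThread().isInterrupted()) {
                    readCallback.accept(busBlockingRead()); // blocking read, handed back via callback
                }
            }

            private void busWrite(byte[] frame) { /* hypothetical bus access */ }
            private byte[] busBlockingRead() { return new byte[0]; /* hypothetical bus access */ }
        }

        // Model 2: no device threads; the application pumps the device at its chosen frequency.
        class PolledDevice {
            private final Queue<byte[]> pending = new ArrayDeque<>();
            private final Consumer<byte[]> readCallback;

            PolledDevice(Consumer<byte[]> readCallback) { this.readCallback = readCallback; }

            void write(byte[] frame) { pending.add(frame); } // queued until the next doWork()

            void doWork() { // called by the application; timing is the application's problem
                while (!pending.isEmpty()) busWrite(pending.poll());
                byte[] data = busNonBlockingRead();
                if (data != null) readCallback.accept(data);
            }

            private void busWrite(byte[] frame) { /* hypothetical bus access */ }
            private byte[] busNonBlockingRead() { return null; /* hypothetical bus access */ }
        }

    The trade-off in the question is visible in the skeletons: in ThreadedDevice the pacing sits next to the bus, while in PolledDevice everything waits for the application to call doWork().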

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn't because I didn't believe in the value of TDD; it was more a matter of not completely understanding how to incorporate "test first" into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren't exactly designed for testability, so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC, I quickly found myself fresh out of excuses, and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer.

    I have previously written about how I learned ASP.NET MVC. It was primarily hands-on learning, but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD, and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed. But then I got lucky – Jonathan McCracken contacted me and asked if I'd review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump.

    As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC, as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application, and then dedicates the better part of his book to testing first – first by explaining TDD and then coding a full-featured Getting Organized application inspired by David Allen's popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken's style – it's fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD.

    The book continues with the test-first theme, but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken's use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and the ASP.NET SQL Server Setup Wizard. McCracken's emphasis on real-world, pragmatic development is clearly demonstrated in every tool choice, straightforward code block and developer tip. Whether one is already familiar with the tools and tips or not, McCracken's thought process is easily understood and appreciated.

    The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.

    Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight? Well, no. I don't think any book can make that claim. If that were possible, I think book list prices would skyrocket! That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC. Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it's a great ASP.NET MVC primer – if you're new to ASP.NET MVC, it's just what you need to get started.

    Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development? Well, I used to think that learning TDD required a lot of practice and, if you're lucky enough, the guidance of a mentor or coach. I used to think that one couldn't learn TDD from a book alone. Well, I'm still no pro, but I'm testing first now, and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen. If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article

  • Test descriptions/names: say what the test is, or what it means when it fails?

    - by xenoterracide
    The API docs for Test::More::ok show ok($got eq $expected, $test_name); Right now, in one of my apps, I have $test_name describe what the test is testing. For example, in one of my tests I have set it to 'filename exists'. What I realized after a recent bug report is that the only time I ever see this message is when the test is failing, and if the test is failing, that means the file doesn't exist. In your opinion, should these $test_names say what the test means if it is successful? What it means if it failed? Or should they say something else? Please explain why.

    Read the article

  • test coverage reality

    - by iPhoneDeveloper
    I am NOT doing test-driven development, and I write my test classes after the actual code is written. In my current project I have test coverage (line coverage) of 70% for 3000 lines of Java code. (I am using JUnit, Mockito and Sonar for testing.) But I feel that I am not actually covering and catching 70% of the problems that can occur. So my question is: in theory, is it possible to have 100% line coverage that is nevertheless meaningless because of low-quality test code, so that a well-written 40% coverage is much better than a bad 100% coverage? Or can we always say that line coverage more or less gives the percentage of all covered issues?
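    As a concrete illustration of the concern, the hypothetical JUnit test below executes every line of the method under test, so a line coverage tool counts it as fully covered, yet it would pass even if multiply returned garbage. The Calculator class and its method are made-up names for the sketch.

        import static org.junit.Assert.assertNotNull;
        import org.junit.Test;

        public class CoverageIllusionTest {
            static class Calculator {
                int multiply(int a, int b) { return a * b; }
            }

            // Executes multiply() (line covered!) but never checks the result,
            // so this test cannot catch a wrong implementation.
            @Test
            public void executesEverythingVerifiesNothing() {
                Calculator calculator = new Calculator();
                calculator.multiply(5, 2); // result ignored
                assertNotNull(calculator); // always true
            }
        }

    This is why 40% coverage made of tests that assert real expected values can catch more problems than 100% coverage of this kind.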

    Read the article

  • Can you recommend a good test plan template?

    - by Ethel Evans
    Can you recommend a good test plan template for an agile testing team? I know there are templates for testing on the web and have already looked at some found by search engines, but I could really use something lightweight and something that has already been tried by skilled testers and is known to work well. Many templates I've seen give me the feeling that writing test documents is expected to be a third of the work that those testers are doing, but my team really prefers to use less documentation and more actual test writing. We use a wiki for documentation, so an approach that lends itself to living documents would be great. My hope is that using a more structured approach to test planning will increase the usefulness of my test plan while reducing the effort to create it by allowing me to think about the tests, and not the format and structure of the plan. My workplace does not have something already on hand, so whatever I start doing might be adopted by the company.

    Read the article

  • How do I know what hardware to buy to meet my needs?

    - by Darth Android
    While Stack Exchange does not permit shopping recommendations, it doesn't provide any general advice to consider when buying hardware. So, instead of just telling those that ask what to buy that it's not allowed, let's tell them how to figure out what they need. When looking to build a computer, how do I know what to buy?

    - How do I find out if a given CPU will be enough for a certain game or application that I want to run?
    - How do I find out if a given graphics card will be enough for a certain game or application?
    - What is important when looking at motherboards?
    - How much memory do I need?
    - How do I know how much wattage I need for a power supply?
    - What size case do I need?
    - What relevant standards do I need to read up on and be aware of? PCI, PCIe, SATA, USB 2.0, USB 3.0, etc...
    - What "gotchas" do I need to be on the lookout for?

    Please keep responses generation-agnostic to ensure they will be helpful to our future users. :)

    Read the article

  • PC power supply & normal range for voltages reported in BIOS hardware monitor?

    - by Chris W. Rea
    I'm trying to diagnose whether my computer has an ample power supply. Sometimes when I play a video-intensive game, both monitors lose the video signal, even though the computer remains on with sound playing. One theory I have is that the video card isn't getting sufficient power. I can't imagine it's overheating, because the machine is well ventilated and the video card isn't hot to the touch when this happens. Anyway, in my PC's BIOS there's a Hardware Monitor page, and among other voltages reported (such as CPU, DRAM, South Bridge, etc.) I can see the following values:

        3.3V    3.152V
        5V      4.944V
        12V     11.872V

    Are those the voltages used by peripherals? What voltage should I be referencing if I want to know what my video card (PCI Express) is consuming? What is the normal range of values reported for those? My values above appear to be under by approximately 4.5%, 1.1%, and 1.1% respectively. Is that cause for concern? How else should I be determining if my power supply is "right-sized" for my PC and video card, or am I perhaps barking up the wrong tree?
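    For reference, the deviations quoted above come straight from (nominal - measured) / nominal. A quick sketch of the arithmetic (the ±5% figure in the comment is the tolerance commonly cited for ATX rails, not something stated in the question):

        public class RailCheck {
            public static void main(String[] args) {
                // {nominal, measured} pairs from the BIOS hardware monitor above
                double[][] rails = { {3.3, 3.152}, {5.0, 4.944}, {12.0, 11.872} };
                for (double[] r : rails) {
                    double deviationPct = (r[0] - r[1]) / r[0] * 100.0;
                    // ATX designs are commonly held to about +/-5% on these rails
                    System.out.printf("%.1fV rail: measured %.3fV, %.2f%% low%n",
                            r[0], r[1], deviationPct);
                }
            }
        }

    Running it reproduces the question's figures: roughly 4.5%, 1.1% and 1.1% under nominal.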

    Read the article

  • Important hardware components to avoid bottlenecks/improve speed on a laptop?

    - by joelhaus
    Looking for a powerful general-use (including web development) laptop running Windows. Price points seem to be all over the place; many less powerful machines are priced much higher than machines with better specs. How does one navigate this market? Are there any unpublished/under-publicized specs or bottlenecks you look for? Understanding that hardware improves over time, is there an efficient ratio (or something similar, like the Windows Experience Index) that indicates how powerful a system is? Thanks in advance!

    P.S. Here is an example from a laptop released on September 17, 2010. Can anyone pick apart these specs? Is there missing information you would be looking for?

    OS: Win 7
    Display: 16.4" LED backlit
    Processor: Intel Core i7-740QM, 6MB L3 Cache
    RAM: 6GB DDR3 1333MHz (8GB max.)
    Graphics: NVIDIA GeForce GT 425M (1 GB of dedicated DDR3)
    HDD: 500GB 7200RPM SATA Hard Drive
    Removable Disc: Blu-ray with DVD±R/RW
    Misc: webcam/mic/speakers/bluetooth (via Sony Vaio VPC-F137FX/B)

    Read the article

  • What hardware would I need (approx) to run ESXi server?

    - by mr.b
    Hi, I am considering purchasing off-the-shelf commodity hardware in order to build a server that will host virtual machines using ESXi. The intended purpose for this server is NOT mission-critical tasks. It will have to run perhaps 20-50 Windows XP/Vista/7 virtual machines (in total, but closer to the 20 figure). Each guest would have 1-2 GB of RAM, and probably two to three times more disk space than the guest OS needs with a clean install and all updates applied (that would be around 6-8 GB for XP, and I believe closer to 10-15 for Windows 7). Those guests will act as a test ground for a new product that is network management software, so guests will idle most of their time once initially loaded, but if I give them some task to complete, they should be able to perform reasonably well. Now, from what I have learned, CPU is usually not much of an issue (6 cores would do it), and memory should not be lacking but doesn't have to be the sum of all guests because of overcommitment... That leads me to I/O, which, as it seems, is the bottleneck. Since I have very little experience with ESXi (and ESX, too), I'd like to ask:

    - How much memory could I save by overcommitment, and how does it affect performance?
    - Is a 6-core CPU enough to run the above described system?
    - Would it be possible to run the entire server off two (or even one) SSD drives (to host the system virtual disks), with a few additional HDDs (2-3) in RAID 0 as secondary storage?
    - I read somewhere that ESXi allows something like a "master image", essentially a virtual machine that is "deployed" many times, so that disk space can be saved by storing only the differences for each specific guest instead of copying around whole virtual disks. Is this true, and how can this help me?
    - Are there any other things I need to take into consideration when building this off-the-shelf solution?

    I should probably mention here that I'm fully aware of issues like SPOF regarding the power supply, RAID 0, etc., but since it's only a testing ground and not a production system, it's not so important to me. Thanks, B.

    Read the article

  • When buying hardware, what sites do you trust for information? [closed]

    - by Matt Dawdy
    I won't ask "what laptop should I buy" since the information is likely to change very quickly. However, I am about to buy a laptop, and I honestly don't know where to begin researching this based on my needs. I am hoping that asking about specific sites that do reviews/recommendations that this will still be on topic. I read the 6 guidelines for subjective questions and believe that this question scores favorably. I'm starting a new job in a few weeks, and they want to know what specific laptop to purchase. I'd like to get the most for their money and get a machine that will not need to be replaced in a year. When looking at a site like Dell, it's hard to get a full picture of the performance of a laptop. Does it work with a docking station, and if so, what kinds of video outs are on it? Will it work well when compiling several large projects in .Net? Has anyone had any issues with the machine getting flaky when dragging it from work to home and back all the time? etc. So, if people would enter in their preferred sites they use when researching hardware, and why they prefer that site (x is great for laptop comparisons, y is great for gaming machine reviews, etc) the I hope that this can be a question with valuable answers to others than just myself.

    Read the article

  • Is anyone else using OpenBSD as a router in the enterprise? What hardware are you running it on?

    - by Kamil Kisiel
    We have an OpenBSD router at each of our locations, currently running on generic "homebrew" PC hardware in a 4U server case. Due to reliability concerns and space considerations, we're looking at upgrading them to proper server-grade hardware with support, etc. These boxes serve as the routers, gateways, and firewalls at each site. At this point we're quite familiar with OpenBSD and PF, so we're hesitant to move away from them to something else such as dedicated Cisco hardware. I'm currently thinking of moving the systems to some HP DL-series 1U machines (model yet to be determined). I'm curious to hear whether other people use a setup like this in their business, or have migrated to or away from one.

    Read the article

  • XP Mode (Windows Virtual PC for Windows 7) no longer requires hardware virtualisation - hurrah!

    - by Liam Westley
    Windows Virtual PC (aka XP Mode)

    When XP Mode was released, it insisted on hardware virtualisation being present on your CPU and enabled in the BIOS. Given that Windows Virtual PC was based on an improved Virtual PC 2007, which provided hardware virtualisation as a user-selectable option, I did wonder why on earth Microsoft thought this was a good idea. Not only do many people not have a CPU with hardware virtualisation support, some manufacturers don't provide a BIOS option to enable this setting, especially on laptops - yes Sony, Toshiba and Acer, I'm looking at you.

    Dumb and dumber

    This issue became a double whammy: not only was Microsoft a bit dumb in not supporting Windows Virtual PC without hardware virtualisation, your hardware manufacturer was also dumb in not supporting the option in the BIOS.

    Microsoft update to Windows Virtual PC

    Belatedly, Microsoft has seen the problem with this hardware virtualisation requirement and has now released a new version of Windows Virtual PC that works without hardware virtualisation. This is really good news for those with older (or limited) CPUs and rubbish BIOS firmware.

    You can find details of how to download the new version of XP Mode here:
    http://blogs.msdn.com/virtual_pc_guy/archive/2010/03/18/windows-virtual-pc-no-hardware-virtualization-update-now-available-for-download.aspx

    There is also an explanation of why the hardware virtualisation requirement was in place for previous releases:
    http://blogs.msdn.com/virtual_pc_guy/archive/2010/03/18/windows-virtual-pc-now-without-the-need-for-hardware-virtualization.aspx

    Read the article
