Search Results

Search found 1204 results on 49 pages for 'slava fomin ii'.

Page 40/49

  • How to recover my invisible HD again?

    - by pattulus
    I've done this several times now, but this time something bad happened. What I did: I installed Windows 7 on a 32GB partition on the slot 2 HD in my MacPro. Windows 7 made a 105MB partition… I knew this beforehand, but what I didn't know was that this partition is now on my slot 4 HD. My home folder, my private videos and some other stuff are on this 1TB drive. What I found out so far: I'm currently logged in as another admin, since my OS partition as well as the two other HDs aren't harmed. Disk Utility: … only shows the 105MB NTFS partition on this 1TB volume. It isn't showing my old 1TB partition/ex-HD named "storehouse". Only the partition tab tells me that there is now 1TB of empty, unpartitioned free space. Data Rescue II: … shows the volume as it used to be, with its old name "storehouse". A quick scan and a thorough scan were both done in 1 second, which leads me to the conclusion that nothing was actually deleted (» hope!). Data Rescue doesn't even mention the damn "system reserved" partition. Drive Genius: … also shows the old partition and doesn't mention the new one. But looking at the info, it tells me under "content": FDisk_partition_scheme (instead of Apple_partition_scheme). Well, d'oh… Tech Tools: … doesn't show the volume; otherwise I might have been tempted to press rebuild/repair. What to do next?? I think the best approach is to buy another 1TB HD and let Disk Warrior clone my old one to it… just to be on the safe side. But what is the best thing to do after that?
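    For what it's worth, a minimal console sketch of how one might confirm the damaged partition scheme and take a raw image before attempting any repair. The device nodes /dev/disk2 (the damaged 1TB drive) and /dev/disk3 (a spare clone target of equal or larger size) are assumptions; verify them with diskutil list first:

      # List all disks and their partition schemes; identify the 1TB drive
      diskutil list
      # Confirm the scheme Drive Genius reported (FDisk_partition_scheme vs Apple/GUID)
      diskutil info /dev/disk2
      # Unmount (does not erase anything), then raw-clone the whole disk to the spare
      diskutil unmountDisk /dev/disk2
      sudo dd if=/dev/rdisk2 of=/dev/rdisk3 bs=1m conv=noerror,sync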

    Read the article

  • SSD as primary or secondary drive on a small Linux server?

    - by Alex Martelli
    I'm pensioning off my ten-year-old home server and replacing it with an Ubuntu 10.04 box. The two storage devices are a Western Digital Caviar Green 2.0TB HD and an Intel X25-M 34nm Gen 2 80GB SATA II 2.5-inch SSD (the box has 8GB RAM and an i5 750, if it matters). I don't care much about boot times (since I don't plan to reboot all that often;-); the main frequent, performance-demanding task will be (re)building large open source C or C++ software packages from source (as an open source contributor, I do that often). So, I thought I'd keep the SSD as the secondary drive and the HD as the primary one, using the SSD mostly for the files that can otherwise demand a lot of seeking (esp. in a parallel make). However, the friendly vendor (perhaps more experienced with Windows systems than with Linux ones) thinks the "normal" way to configure the machine would be with the SSD as the primary drive. I'm pretty rusty on configuring and tuning systems, so I thought I'd better double-check on SuperUser... thanks in advance for advice about this choice!
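    If it helps frame the answers, here is a sketch of what the "SSD as secondary" layout could look like, assuming the SSD is /dev/sdb and the build trees live under ~/build; the device name, mount point and paths are all assumptions:

      # /etc/fstab - HD remains the root/primary drive; SSD mounted for seek-heavy work
      /dev/sdb1  /mnt/ssd  ext4  noatime  0  2

      # point the source/build trees at the SSD so parallel makes hit it
      mkdir -p /mnt/ssd/build
      ln -s /mnt/ssd/build ~/build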

    Read the article

  • PCI TV Tuner not receiving signal

    - by C-dizzle
    I inherited an Angel II PCI Dual TV Tuner card, popped it into my computer's PCI slot, plugged in my coax cable, but am having trouble getting any type of signal to it. The computer is running Windows 7 Ultimate 32-bit. I know the card works because the computer recognizes it and installs the drivers, or at least I assume it works because of that. I'm trying to use Windows Media Center to watch/record TV. Here are a few things I have tried to get it working:
    - Uninstalled/reinstalled the newest drivers
    - Tried hooking up an antenna to the coax input instead of my cable
    - Instead of using a cable splitter, went directly from the wall output into the card with the coax cable
    - Tried using a different output on the splitter in case the out port was bad
    I haven't tried a different coax cable yet; it should be fine since it's pretty new. Since this is my first time setting up a TV tuner card, is there anything specific that I need to do with it? Is there any configuration that needs to be done on it? Do I need to have a digital receiver? I was getting pretty frustrated with it, so I wanted to turn here to the experts; I'm sure someone can help me figure it out.

    Read the article

  • CPU operating temperature ranges

    - by osij2is
    I have an AMD Phenom II 960T with 2 cores unlocked, for a total of 6 cores. I don't overclock at all. I have an Arctic Cooling ACALP64 heatsink/fan installed. I'm currently running ESXi 5.0, so I have to go into the BIOS to read the CPU temperatures, which at idle seem to be in the 71-74C range. To me this is pretty high, but I cannot find any official temperature ranges within which AMD says the CPU can work well. There are a lot of questions on Super User and numerous forums about CPU temperatures, but there seems to be no clear consensus on what the manufacturer's temperature ranges are for specific CPUs. I've tried searching through AMD's site to no avail. At this point I'd be willing to shut off the 2 extra cores if it keeps the heat down, but until I get some sort of tolerance or range for temperature, I have no idea whether the CPU is being damaged or not. Can anyone point to a direct source, article, or FAQ from AMD that specifically states their CPUs' temperature range? Or are CPU temperature ranges so varied that there's no possible baseline? Am I being too paranoid about this? To me, anything over 65C is a bit much, and if I'm in the low-to-mid 70s range with NO VMs running, what can I expect once I have several VMs running?

    Read the article

  • Ubuntu 12.04 partitioning an external drive without losing data

    - by Menelaos Perdikeas
    I have an Ubuntu 12.04 box with an external 1.5T disk (just for data). It is /dev/sdc1, seen below:

    $ df -T
    Filesystem     Type       1K-blocks       Used  Available Use% Mounted on
    /dev/sda1      ext4      1451144932   27722584 1350794536   3% /
    udev           devtmpfs     6199460          4    6199456   1% /dev
    tmpfs          tmpfs        2482692        988    2481704   1% /run
    none           tmpfs           5120          0       5120   0% /run/lock
    none           tmpfs        6206724        284    6206440   1% /run/shm
    /dev/sdc1      fuseblk   1465135100  172507664 1292627436  12% /media/Elements

    The thing is, I would like to implement this rsync-based backup strategy and I want to use my /dev/sdc1 external drive for that. Since the guide mentioned above recommends placing the backup directory in a separate partition, I want to repartition the /dev/sdc1 external hard disk but retain the existing data in a separate partition. E.g. split /dev/sdc1 into two partitions: (i) one to be used exclusively for the rsync-based backup, and (ii) the other for the existing miscellaneous data. How should I go about partitioning with minimal risk to my existing data, and what kind of filesystem do you recommend? I would prefer a console-based guide, but unfortunately all the material I found on the web is oriented towards partitioning the main (bootable) disk, not an external fuseblk filesystem used only for passive data.
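    Not an authoritative recipe, just a sketch of one console-based flow, under these assumptions: the drive really is NTFS (fuseblk usually means ntfs-3g), you shrink it to an illustrative 1.0T, and the freed space becomes /dev/sdc2. Sizes, device names and paths are placeholders, and a verified copy of the data elsewhere comes first:

      # 1. Shrink the NTFS filesystem (dry-run check first, then resize)
      sudo ntfsresize --info /dev/sdc1
      sudo ntfsresize --size 1000G /dev/sdc1
      # 2. Shrink partition 1 to match and add partition 2 in the freed space
      #    (fdisk/parted step not shown; get the boundaries exactly right)
      # 3. ext4 is a reasonable choice for the rsync target, since the linked
      #    strategy relies on hard links, which NTFS via ntfs-3g handles poorly
      sudo mkfs.ext4 -L backup /dev/sdc2
      # 4. Snapshot-style run in the spirit of the linked strategy:
      sudo rsync -a --delete --link-dest=/media/backup/daily.1 /home/ /media/backup/daily.0/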

    Read the article

  • SRM 4 test fails for some VMs with "Error: A specified parameter was not correct."

    - by Setesh
    Here is my architecture. For the protected site: 4 hosts running vSphere Enterprise Plus, each one with 2 FC HBAs connected to the switch fabric, connected to an EMC CX4-120; 1 vCenter; 1 SRM. For the recovery site: 2 hosts running vSphere 4; 1 vCenter; 1 SRM; 1 CX4-120. The first CX4-120 is connected to the second CX4-120 with iSCSI and MirrorView/Asynchronous. For the time being I synchronise 6 LUNs on an FC DAE and 2 on a S-ATA DAE. I have allocated 30% of the synchronised LUN capacity for snapshot use, but I have allocated it only on my S-ATA II DAE. That doesn't seem to be a problem; my snapshots are correctly active. The whole installation is new (hardware and software), installed in January with the latest files available for download. I have a strange problem, and it's random: sometimes when I run a test on my RP, some VMs show this error: Error: A specified parameter was not correct. I don't know where to look. Any help is appreciated... PS: I have checked all the VMs; no floppy disk or CD attached. PS2: There are several VMs with RDM and OCFS2 filesystems on them.

    Read the article

  • At what point does the performance gap between GPU & CPU become so great that the CPU is holding back a system?

    - by Matthew Galloway
    I know that, generally speaking, for gaming performance the GPU is the primary factor that holds back performance, with everything else such as RAM/motherboard/PSU/CPU being secondary in importance to the graphics card. But at some point the other components ARE going to be significant in holding back the whole system! For instance, nobody would be silly enough to play modern games with 512MB RAM and the very latest graphics card (such as an HD7970), as I bet the performance increase over such a system with only 512MB but a mid-range card would be non-existent! Thus it would be a "waste" for such a person to buy any high-end graphics card without first resolving the system's other problems. The same point applies to other components: if a system only had a Pentium II, a current high-end graphics card would be wasted on it! So my core question is: how do you determine the point at which spending on extra GPU power is completely "wasted" for your system? (Also, a slightly more nuanced question: at what point might the extra graphics power not be "wasted" but merely "sub-optimal" value for money, where the expenditure should instead be split between the graphics card and other components? Obviously a gamer shouldn't always just spend on upgrading the graphics card, but needs to balance it out.)

    Read the article

  • Motherboard Dying? AHCI Drive Init and boot loop intermittent failure

    - by Adam Heath
    My computer is now intermittently failing to boot up. For the last couple of days, when I turn it on it hangs on "AHCI Drive Init...", and when powered off and on again, it boots up fine. Today it did the same, but failed in a few other ways too, seemingly at random:
    - Hangs on "AHCI Drive Init..."
    - Boot loop (after "AHCI Drive Init..." appears for a split second, with no drives listed)
    - Black screen (after "AHCI Drive Init..." appears for a split second, a black screen with all fans still running)
    The interesting part is that the above is not affected by which drives are connected, or what they're connected to. I have tried both disks, each disk individually, and no disks (along with trying the primary and secondary SATA controllers); none of this has any effect on what happens. After about 20+ attempts with different combinations, it suddenly decided it would boot up into Windows, and I hadn't touched anything for about 2 cycles.
    Motherboard: Gigabyte GA-870A-USB3
    Processor: AMD Phenom II X6 1090T
    RAM: 8GB Corsair 1600
    Primary disk: Plextor 128GB SSD
    Secondary disk: Western Digital Black 1TB
    OS: Windows 8.1
    Is this my motherboard dying? Or could something else be the cause? Thanks!

    Read the article

  • Computer won't start after installing new video card

    - by Vercas
    So, 1 year and 340 days ago I bought a desktop computer. Since then, it has served me well. But lately I wanted an upgrade, so I bought a new video card. I did my research on compatibility, and it is okay. So I opened the case and cleaned up that... dust elemental living inside of it. I unscrewed the plastic thingie on the outside so I could unscrew the old video card. Because of the stupid arrangement of the ports, I had to unscrew the motherboard to unplug it. So I unscrewed it, removed the old card, put in the new one, moved the motherboard back, screwed it back in, screwed the video card onto the holder... thingie, and screwed the plastic thingie back in. Everything went smoothly; nothing had to be forced in or out. I connected the external power supply, closed the computer case, put the tower back in its place and all the cables back in. When I pressed the power button, the LED turned... some color I can't distinguish. It stayed that way for a second, and then it went off. I tried a bunch of things, including permuting the external power supply arrangement (1 connection, 2 connections and no connections), with no success. Here are some of the specifications:
    Motherboard manufacturer: ASRock
    Processor: AMD Athlon II X2 3.0 GHz
    RAM: 2 x 2GB (had only 1 initially, bought the second stick a bit later)
    OLD video card: AMD Radeon HD 5450
    NEW video card: Gigabyte nVidia GeForce GTX 650, 1GB GDDR5 128-bit PCI-E, Dual-link DVI-D x2 / HDMI / D-Sub
    Power supply: 450W, and all the requirements I managed to find on the internet are met (+12V 18A or something)
    More specific information is stored... on that computer. If required, I can open the case again and read the stickers to find more specific information. I can also provide photos if necessary. Any ideas? Suggestions? Something? :|

    Read the article

  • My new SSD is causing issues. How can I solve them?

    - by Allan
    Computer specs:
    1 TB hard disk
    120 GB Intel 520 SSD
    8 GB DDR3 RAM
    AMD Phenom II X4 955, 3.2GHz
    DFI LANParty DK FX7900 M3H3 motherboard
    ASUS ATI Radeon HD6970 2 GB
    I bought a new SSD (Intel 520, 120 GB) and wanted to use it as my system disc. I removed the other hard drive and installed the SSD with the newest firmware, and then Windows 7. I updated Windows 7 with no problems and then put back my old hard drive. I formatted that old hard drive, just to clean up at the same time... So at this stage everything was perfect: my new SSD was set as Master 0 Primary, the machine boots from it, and I have a 1 TB empty hard drive I can use for whatever I want. So far no errors at all. Now here is the problem: I installed a few games, and every time I tried to play, the computer would say "Windows must restart because the DCOM Server Process Launcher service terminated", or "Windows must now restart because the Plug and Play service terminated unexpectedly". Most commonly this error is caused by a rootkit virus; well, I have tried formatting my entire computer and running every antivirus I could find, so that shouldn't be it. I've also read somewhere that it might happen when there are hardware issues. That, on the other hand, would make sense, as I just put in a new SSD. I don't expect you to know this error; I haven't found anyone who knew it yet. Maybe you can guide me through what might have gone wrong when I put in the SSD? What have I checked regarding the SSD?
    - It displays the right name when the computer starts up
    - It has the newest firmware
    - Did a 'sfc /scannow', which told me everything was fine
    I don't know what to do from here. Everything seems to work great with the drive, yet when I start playing games my computer crashes.

    Read the article

  • Hyper-V and attaching physical disks

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:
    - Windows 7 Ultimate
    - 1TB boot drive (my smallest drive)
    - Windows dynamic spanned volume, containing 1x 1TB drive and 2x 2TB drives, totalling 5TB.
    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a homegroup, so I must use Windows 7. I know VHDs can only be like 127GB or something, so I obviously need to directly connect the disks to my Windows 7 machine. Here is my plan:
    - Server Core 2008 R2 (Hyper-V)
    - 1TB boot drive (storing the VHDs for the VMs' boot drives), possibly in a RAID 1 with my other 1TB drive
    - 5x 2TB drives (1x 2TB drive as hot spare), totalling 10TB, directly attached to a Windows 7 VM, so the homegroup can use this array.
    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is: with hardware RAID, will it really make that much of a difference?
    Server specs: Intel Core 2 Quad Q9550 2.83GHz, Asus Maximus II Formula (PCI-E x16), 8GB DDR2 RAM PC2-6400 (yes, I know it's a bit out of date)

    Read the article

  • System randomly freezes yet mouse still moves, SSD out of reallocatable sectors, should I replace it?

    - by user784446
    This problem has lasted for the past 48 hours. The first time it happened, a program I was running stopped responding, so I tried to end it from Task Manager. The processes at first were listed fine until hovered upon. Eventually, despite the mouse still being able to move, after a few persistent clicks the mouse finally stopped moving. The screen went blank shortly thereafter. The second time it occurred, items on the screen stopped responding: hovering over the taskbar or such wouldn't elicit a response. Sound would still play, however. Eventually the mouse became unresponsive and the system restarted itself. I suspect that it may be a problem with my SSD drive. After looking through some Google search results, I downloaded HDTunePro to determine whether there's a problem with the drive. Results returned a problem with the reallocated sector count. An error scan also revealed 48 bad sectors. Also, an attempt to back up the contents of the most important areas of the drive returned a few Explorer "Error: cannot read source from disk" errors. Should I ditch the drive and use another, or is there anything that can be done to repair it?
    SSD: OCZ Petrol 64GB
    CPU: AMD Athlon II X4 640
    RAM: Generic 3GB DDR2
    Motherboard: Gigabyte MA74GM-S2H
    OS: Windows 7 Ultimate x64
    Thanks!
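    As a cross-check on the HDTunePro result, one might read the raw SMART counters directly. A sketch assuming smartmontools is installed; /dev/sda is an assumed device name, and the device syntax differs on Windows builds of smartctl:

      # Overall health verdict plus the full SMART attribute table
      sudo smartctl -H -A /dev/sda
      # Attribute 5 (Reallocated_Sector_Ct) is the key one here: a non-zero,
      # growing raw value plus read errors generally means replace the drive
      sudo smartctl -l error /dev/sda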

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1TB SATA II 7200rpm drives with 32MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128MB/s read rate and the other a 150MB/s read rate, plus I guess various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128MB/s drives? Or does RAID not really care, and give you an essentially optimal solution (e.g. up to 278MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also, I couldn't find much info on how this differs between RAID setups, e.g. RAID 0 vs RAID 1, software vs hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?
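    For software RAID at least, the array simply takes the lowest common denominator of its members. A sketch with Linux mdadm (device names are assumptions) illustrating the two cases discussed above:

      # RAID 1 from two unmatched drives: capacity is bounded by the smaller member,
      # sustained writes by the slower one; reads can be balanced across both
      sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
      # RAID 0: stripes across both, so total throughput tends towards
      # 2 x (slower drive), not 128MB/s + 150MB/s
      sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1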

    Read the article

  • Kindle (client) for Mac--text search or highlighting/notes?

    - by doug
    Just so we're clear, I'm talking about the client/software version here (i.e. what you install on your Mac or PC), not the device. The Kindle client was recently released for the Mac. I downloaded it and bought a couple of Kindle-edition books to view in this client. Astonishingly, two features I consider to be more or less essential to any ebook reader are missing from the Kindle client; either that, or I can't find them: (i) text searching; and (ii) highlighting text. First, does anyone know how to access the search feature? I'm aware of the "Go To" button at the top middle of the reader window; the options in that menu when you click the button are "Cover", "Table of Contents", "Beginning" and "Location". "Location" requires that you type in an integer (but it doesn't correspond to a page number; e.g., typing "167" brought me to the table of contents), not a search term. Second, there's a button in the upper right-hand corner of the window, "Show Notes and Marks", yet I can't find any way to highlight text. The only kind of "note" or "mark" I have been able to record is to "bookmark" a page by clicking the "bookmark" button, also at the top of the window.

    Read the article

  • Installing a new Motherboard in a HP xw6200 case

    - by thing2k
    I have an HP xw6200 Workstation that is rather long in the tooth, and with 2 physical CPUs it is quite inefficient. So the plan was to upgrade the internals. Nothing special:
    - AMD Athlon II X4 640
    - ASUS M4A78KT-M LE (mATX)
    - 2x 2GB DDR3 1333MHz RAM
    That's 3p under my £150 budget. The issues:
    - The pin connector for the front panel isn't a good fit, but I can trim it to size.
    - The PSU has an 8-pin power connector; unsurprisingly, the new board has a 4-pin socket. The pins do line up, but I would have to cut the connector in half to fit.
    - Finally, due to the weight of the heatsinks, they are screwed directly into the case. It turns out that these screws also lock the motherboard in place, so to remove it, you remove the heatsinks, slide the motherboard across and lift it out.
    I tested the new board for fit, and while it slots in fine, it's not secure. There is nowhere to screw the board down; it is just held in place with plastic standoffs. The only idea I had was to wedge something between the side of the motherboard and part of the case. Any suggestions?

    Read the article

  • Strange boot problems on a 6-month-old setup

    - by Balefire
    I've already exhausted my knowledge on this one, so forgive me if this post is a bit long. I built a computer 6 months ago for my wife, and it worked fine until last week. Then it randomly shut down and would lock up on the boot screen while trying to boot. I cleared CMOS and it allowed me to do startup recovery, but it "failed to fix the issue", so I reinstalled Windows on the HD (moving the old install to windows_old). It worked, so I started installing drivers again, but when I restarted to finalize the installations it locked up again. This time, I took the hard drive and hooked it up to my computer, backed up all her files, and then formatted the hard drive before reinstalling it (again I had to clear CMOS to let me boot from disk). It installed Windows, I installed drivers, and it worked for a few hours but then died during startup again. So then I got a new HD, cleared CMOS, and installed clean again, with the same result as the time before: it worked for a few hours, installed Windows updates, then crashed on the 3rd or 4th time turning it on. I decided next to try reinstalling and then going online to see if there were any updates for the BIOS or drivers for the motherboard, but now I can't even get it to bring up the boot menu, so I'm left wondering: was it the motherboard, or is it the CPU, or the RAM? The problem was strangely intermittent, so I thought it had to be a software issue, since a hardware issue would ALWAYS fail to boot, right? But now it seems to be a hardware issue, because it's not bringing up anything. Any suggestions?
    System: Windows 7 64-bit
    Gigabyte 970A-DS3 motherboard
    AMD Phenom II X4 955 Deneb 3.2GHz quad-core processor
    GeForce GT 430 (Fermi) 1GB video card
    500W PSU
    2 x G.SKILL Ripjaws X Series 4GB 240-pin DDR3 1600 RAM

    Read the article

  • Booting Windows from a different partition than the system one

    - by szamil
    I have bought an SSD, but my laptop (Dell Precision M6300) refuses to use it as a target disk for Windows (AHCI on/off, BIOS up to date). Unfortunately I can't exchange the disk... But fortunately, I managed to install Windows using a USB disk enclosure. The problem is that when I put that disk in as my internal drive, it can't boot ("Disk read error", Three Finger Salute...). So I tried with Linux (openSUSE); I managed to install it as well, but when I tried to boot GRUB from the internal drive I got errors again. (Should I try GRUB2?) I figured out that I can boot into that internal hard drive's openSUSE system using a small USB drive with GRUB, a kernel and an image on it. So, I just run GRUB from the USB drive, it loads the necessary stuff from the USB drive and then continues from the internal drive. I want to do the same with Windows, but GRUB (rootnoverify and chainloader +1) does not boot the Windows install on my internal drive. The question: is there any chance to copy the critical Windows boot files onto the USB drive, to make it possible to boot from that USB drive but continue booting from the internal (different, in general) drive? The USB drive would become a system hardware key! ;-) Disk: Plextor M5S 128GB SATA III; the laptop has SATA II, but that's backwards compatible anyway, right?
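    For reference, a GRUB legacy stanza of the kind usually used to chainload Windows from a second BIOS disk; the drive remapping is often the missing piece, since Windows expects to boot from the first disk. Disk numbering here is an assumption (when booting from USB, the USB stick is typically hd0 and the internal disk hd1), so adjust as needed:

      title Windows (internal disk)
      # swap the BIOS drive order so Windows believes it boots from the first disk
      map (hd0) (hd1)
      map (hd1) (hd0)
      rootnoverify (hd1,0)
      chainloader +1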

    Read the article

  • Computer will freeze/lock up after doing relatively stressful things

    - by GrowingCode247
    I'll first start off by saying that the issue GENERALLY doesn't occur unless I'm doing something remotely stressful for my computer. This issue used to occur whenever it felt like it, but thankfully it has not occurred completely at random for a while now.
    My computer's specs:
    CPU: AMD Phenom II X4 960T
    GPU: GeForce GTX 760
    Memory: 16 GB RAM
    Resolution used: 1680x1050, 59Hz (strange number for a refresh rate?); this is the highest resolution for the monitor
    Nvidia driver version: 331.65
    OS: Microsoft Windows 7 Ultimate (64-bit)
    Sometimes I will be able to go 2-3 games (about an hour, depending), and sometimes it will go maybe one game (20-30 minutes), and then my computer will run sluggishly and leave me unable to do much of anything. I can sometimes interact with programs at a very basic level (maximizing, minimizing), and I usually cannot close them in any way, not even through Task Manager. The highest temperature my GPU reaches is 76C, with the average being around 73C. During the time the temperatures are around 73C, my GPU's RAM usage is anywhere between 1250-1300 (out of 2GB). My CPU's temperature never goes over 60C, thankfully. The PSU should be fine. It's very mildly dusty, but I feel as though that would not be causing this problem... I will clean it out as soon as everything else has been ruled out. Honestly, I have no clue how to test the PSU for problems; the same goes for my motherboard. I cannot really think of what else could be causing these freezes.
    Event Viewer details:
    EventID: 1 - VDS Basic Provider (I've no clue what this is)
    EventID: 3 - Kernel-EventTracing (again, lost)
    EventID: 8003 - bowser (this seems fishy)
    and the one critical event that I know others have been dealing with, as I've browsed some other responses on the web:
    EventID: 41 - Kernel-Power
    Any help solving this problem would be GREATLY appreciated.

    Read the article

  • Apache downloads PHP files instead of running their source

    - by Devils Child
    I have just recently upgraded PHP 5.3 to 5.4 on my Debian Squeeze server. Now, instead of executing PHP files, Apache just downloads them, which is really bad. When I try to follow these steps, I get "broken packages" upon installing the libapache2-mod-php5 package. Also, the answer tells me to add something to my httpd.conf, but it's empty. Question: how can I make Apache execute PHP files again, instead of just passing them through as a download?

    dpkg -l | grep php returns this:

    rc  libapache2-mod-php5  5.3.3-7+squeeze15  server-side, HTML-embedded scripting language (Apache 2 module)
    rc  php5-cli             5.3.3-7+squeeze15  command-line interpreter for the php5 scripting language
    ii  php5-common          5.4.15-1~dotdeb.2  Common files for packages built from the php5 source
    rc  php5-gd              5.3.3-7+squeeze15  GD module for php5
    rc  php5-mcrypt          5.3.3-7+squeeze15  MCrypt module for php5
    rc  php5-mysql           5.3.3-7+squeeze15  MySQL module for php5
    rc  php5-suhosin         0.9.32.1-1         advanced protection module for php5
    rc  phpmyadmin           4:3.3.7-7          MySQL web administration tool

    And apt-get install libapache2-mod-php5 produces this error:

    The following packages have unmet dependencies:
     libapache2-mod-php5 : Depends: libdb5.1 but it is not installable
                           Depends: libssl1.0.0 (>= 1.0.0) but it is not installable
                           Depends: libxml2 (>= 2.8.0) but 2.7.8.dfsg-2+squeeze7 is to be installed
                           Recommends: php5-cli but it is not going to be installed
    E: Broken packages
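    Reading the dpkg output above: php5-common is a Dotdeb 5.4 build, while the Apache module was removed ('rc' = removed, config remains), and the stock Squeeze libapache2-mod-php5 can no longer satisfy its dependencies against the newer packages. A hedged sketch of the usual way out, assuming the Dotdeb PHP 5.4 repository for Squeeze is what supplied php5-common (verify the suite name against Dotdeb's own instructions):

      # Make sure the same Dotdeb repo that provided php5-common 5.4 is configured
      echo "deb http://packages.dotdeb.org squeeze-php54 all" | sudo tee /etc/apt/sources.list.d/dotdeb-php54.list
      sudo apt-get update
      # Install the Apache module and CLI from that same source
      sudo apt-get install libapache2-mod-php5 php5-cli
      # Enable the module and restart Apache so .php is executed, not downloaded
      sudo a2enmod php5
      sudo service apache2 restart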

    Read the article

  • Computer randomly restarting, both in game and out of game

    - by eric
    First, my specs:
    AMD Phenom II X4 955 processor, 3.2GHz
    20GB DDR3 RAM
    4GB Nvidia GeForce GTX 770
    850W Corsair TX850W PSU
    Gigabyte UD3 motherboard
    Windows 7 Professional
    I recently upgraded my video card to the GTX 770 and upgraded my PSU to the 850W that's in it now. I did a reformat with the installation of the new GPU and PSU, started fresh, and only have a couple of programs installed (Diablo 3, Nvidia Control Panel, WoW, and Steam). All drivers are up to date and everything is hooked up correctly. The problem is that it will randomly shut down: no blue screen, it just turns itself straight off and reboots after a couple of seconds. Occasionally I will have to unplug the power cable from the PSU for a few minutes, then reconnect it, and it will start up. It seems pretty random: sometimes it does it when my PC is just sitting there on the home screen, sometimes it does it during games, and sometimes it doesn't do it for days at a time. I noticed the PSU felt hot, so I put an extra fan blowing straight onto both the PSU and GPU, and neither feels overly hot after it shuts down now. Could it just be a PSU problem? The PSU was taken from another machine, but it wasn't having this problem in that machine. I have seen a few articles online about the GTX 770 doing the same thing, but I haven't found any answers or solutions. Any help will be appreciated. I'm sure the 850W is enough to power my machine; I'm just stumped and have run out of ideas to fix it. I have even returned the video card for another, thinking it might have been an issue with that particular card, but I'm still getting the same problem.

    Read the article

  • Moving from WinXP to WinServer in VMware

    - by Alex
    I have a VMware machine for .NET application testing. Current setup:
    Host OS: Win7
    Guest OS: right now the guest OS is Win XP Pro x64, which runs great with just 1 gigabyte of RAM and 10 gigs of disk space.
    * This part can be skipped *
    As I said, there was a program that I needed to test, but unfortunately, by default, VMware installs crappy display drivers (called SVGA II) on XP machines, and there is NO way to upgrade them! This resulted in my program's error (the program used SlimDX (a DirectX wrapper) to do some stuff..). Eventually I found out that the display drivers most certainly were the problem. For example, a Windows 7 virtual machine uses SVGA 3D drivers, and I have NO problems running my SlimDX-based program there. Now, regarding Windows Server 2008: apparently, the WDDM driver is supported by WS2008, which means that I'll be able to install SVGA 3D and to test my DX apps.
    * end of skip *
    The questions are:
    Will WS2008 be as smooth with just 1 gig of RAM, just like Win XP was?
    Will 10 gigs of HDD be enough? Or does the server require more?
    Will I be able to install .NET 4 on WS2008?
    Are there any limitations that I need to be aware of as a .NET programmer?
    EDIT: I was hoping that WS2008 would be XP-based, not Vista-based/W7-based. In comparison, a W7 virtual machine with 2 gigs of RAM and 2 processor cores nearly kills my host OS, whereas WinXP runs extremely fast even with 1 core and 1 gig of RAM. That's the main reason why I want to try WS2008.

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So, let's get down and dirty and start setting up the Test Rig.

    What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

    Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect, and a very useful video explaining everything you wanted to know about Windows Azure Connect.

    Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles'. Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. (There is a very nice blog post discussing the difference between the 3 role types.) Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role; developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

    Virtual Machine Size   CPU Cores   Memory     Cost Per Hour
    Extra Small            Shared      768 MB     $0.04
    Small                  1           1.75 GB    $0.12
    Medium                 2           3.50 GB    $0.24
    Large                  4           7.00 GB    $0.48
    Extra Large            8           14.00 GB   $0.96

    You might want to review the Windows Azure Pricing FAQ.

    Let's get started building the Test Rig…

    Configuration   Machine Role                        Comments
    VM – 1          Domain Controller for Playpit.com   On Premise
    VM – 2          TFS, Test Controller                On Premise
    VM – 3          Test Agent                          Cloud

    In this blog post I assume that you already have the domain, Team Foundation Server and Test Controller installed and set up. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

    I. Let's start building VM – 3: The Test Agent
    Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure project using the Cloud template. Choose the Worker Role, for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
    We will only be making changes to the WindowsAzureProject; open ServiceDefinition.csdef. Ensure that the vmsize is Small (remember the cost chart above). Import the "Connect" module; I am importing it because I need to join the Worker role VM to the Playpit domain.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="WorkerRole1" vmsize="Small">
        <Imports>
          <Import moduleName="Diagnostics" />
          <Import moduleName="Connect"/>
        </Imports>
      </WorkerRole>
    </ServiceDefinition>

    Go to ServiceConfiguration.Cloud.cscfg and note that settings with the key 'Microsoft.WindowsAzure.Plugins.Connect.%%%%' have been added to the configuration file. This is because you decided to import the Connect module. See the config below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Let's go step by step and understand all the highlighted parameters and where you can find the values for them.

    osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The advantage of using osFamily="2" is that you get PowerShell 2.0 rather than PowerShell 1.0. In PowerShell 2.0 you can simply use "powershell -ExecutionPolicy Unrestricted ./myscript.ps1" and it will work, while in PowerShell 1.0 you will have to change the registry key by including the following in your command file before you can execute any PowerShell: "reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f". The other reason you might want to move to osFamily="2" is if you want IIS 7.5.

    ActivationToken – To enable communication between the on premise machine and the Windows Azure Worker role VM, both need to have the same token. Log on to the Windows Azure Management Portal, click on Connect, then click on Get Activation Token; this gives you the activation token. Copy it to the clipboard and paste it into the configuration file. Note – later in the blog I'll be showing you how to install Connect on the on premise machine.

    EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure worker role VM to the domain.

    DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I extracted this information from 'Server Manager' and 'Active Directory Users and Computers'. Also, I created a new domain OU, namely 'CloudInstances', so that all the cloud instances joined to my domain show up there; this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I'll be covering that when I come to certificates and encryption in the coming section.

    Now once you have filled all this information in, the configuration file should look something like the below:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Next we will enable the Remote Desktop module in the ServiceDefinition.csdef. We could make the changes manually or let a beautiful wizard help us make them; I prefer the second option. So right click on the Windows Azure project and choose Publish. Once you get the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'. As soon as you do that, you get another pop-up asking for the details of the user you will be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click it to get access to the certificate section.
    From the drop-down, select the option to create a new certificate. In the pop-up window, enter the friendly name for your certificate; in my case I entered 'WAC – Test Rig'. Click OK and a new certificate will be created for you. Click on the View button to see the certificate details. Do you see the Thumbprint? This is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate; we will need to import the certificate into the Windows Azure Management Portal later, so make sure you save it in a safe location. Click Finish and enter the details of the user you would like to create with permissions for remote desktop access; once you have entered the details on the 'Remote desktop configuration' screen, click OK. From the Publish Windows Azure wizard screen, press Cancel – Cancel because we don't want to publish the role just yet, and Yes because we want to save all the changes in the config file. Now if you go to the ServiceDefinition.csdef file, you will see that the RemoteAccess and RemoteForwarder modules have been imported for you.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="WorkerRole1" vmsize="Small">
        <Imports>
          <Import moduleName="Diagnostics" />
          <Import moduleName="Connect" />
          <Import moduleName="RemoteAccess" />
          <Import moduleName="RemoteForwarder" />
        </Imports>
      </WorkerRole>
    </ServiceDefinition>

    Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of "Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%" setting values added for you.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4aeADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWSAKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccgvzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaMCp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
          <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
        </ConfigurationSettings>
        <Certificates>
          <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
        </Certificates>
      </Role>
    </ServiceConfiguration>

    Okay, let's look at them one at a time.

    Enabled – Yes, we would like to enable Remote Access.

    AccountUsername – This is the user name you entered while you were on the publish Windows Azure role screen, as detailed above.

    AccountEncryptedPassword – Try and decode that: the certificate is used to encrypt the password you specified for the user account. Remember earlier I said either use the instructions or wait and I'll be showing you encryption. Now, the user account I am using for RDP has the same password as my domain password, so I can simply copy the value of AccountEncryptedPassword to DomainPassword as well.

    AccountExpiration – This is the expiration you specified in the wizard earlier; make sure your account does not expire today.

    RemoteForwarder – Check out the documentation; below is how I understand it. One role in an application that implements a remote desktop connection must import the RemoteForwarder module; the two modules work together to enable the remote desktop connections to role instances. If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

    Certificate – Remember the certificate thumbprint from the wizard: the on premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on premise machine.

    As I said earlier, in this blog post I'll be showing you the manual process, so I won't be scripting any startup tasks to install the test agent or register the test agent with the TFS Server; I'll be showing you all that cool stuff in the next blog post. That's because it's important to understand the manual side of it first; it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected to the play.pit.com domain, and have remote access enabled on it. Before we deploy the Test Agent VM, we need to set up Windows Azure Connect on the TFS Server.

    II. Windows Azure Connect: Setting up Connect on VM – 2, i.e. TFS & Test Controller
    Glad you made it so far. Now we need to enable communication between the on premise TFS/Test Controller and the cloud-hosted Test Agent.
    We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on premise machines. Let's have a look at how we can do this.

    Log on to VM – 2, running the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the virtual network screen; if not, you will be asked to complete the subscription first. Click on Install Local Endpoints at the top left of the panel and you get a URL with a token id appended to it. Remember the token I showed you earlier: in theory, the token you get here should match the token you added to the Test Agent config file. Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop-up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray. Right click the Azure Connect icon, choose Diagnostics, and refer to this link for the diagnostic detail terminology.

    NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled; you can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect (and follow the next step as well).

    Now go back to the Windows Azure Management Portal and, from Groups and Roles, create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy', you will notice that the disconnected status of the icon changes to ready for connection.

    III. Importing the Certificate into the Windows Azure Management Portal
    Before that, you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the portal, click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). Now you should be able to see the imported certificate; make sure the thumbprint of the certificate matches the one you inserted in the config files.

    IV. Publish the Windows Azure Worker Role aka Test Agent
    Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information.

    From the View menu, select the Windows Azure Activity Log window; you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of the new Worker Role. Once the deployment is complete, you should be able to RDP (go to the run prompt, type mstsc, and in the pop-up enter the machine name) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), then fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

    So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain, and that the Connect service is running. If yes, give yourself a pat on the back: you are 80% mission accomplished!

    Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig, then click Edit Group to edit the Test Rig group you created earlier. In the 'Connect to' section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and the test agents, and between test agents. Click Save. Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

    V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent with the Controller
    Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the iso and navigate to where you extracted it. In my case, I extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

    Once you have successfully installed the Test Agent software, you will need to configure it. Right click the Test Agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (UAC might block something otherwise). In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
    But for the blog post, we are using the most powerful user so that no policies or restrictions block you. Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding; in my experience, you may run into either a permission issue or a firewall blocking communication. And now the moment of truth! Go to VM – 2, open up Visual Studio, and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured there.

    VI. Creating and Running Load Tests on your brand new Azure-d Test Rig
    I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.
    Blog posts:
    - Part 1 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 2 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 3 – Performance Testing using Visual Studio 2010 Ultimate
    Videos:
    - Test Tools Configuration & Settings in Visual Studio
    - Why & How to Record Web Performance Tests in Visual Studio Ultimate
    - Goal Driven Load Testing using Visual Studio Ultimate
    Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure test rig: create a new test settings file, change the test execution method to 'Remote Execution', and select the test controller you configured the Worker Role Test Agent against, in our case VM – 2. So go on, fire off a test run and see the results of the test being executed on the Azure-d test rig.

    Review and what's next? A quick recap of the benefits of running the test rig in the cloud, and what I will be covering in the next blog post. And I would love to hear your feedback!
    Advantages:
    - Utilizing the power of Azure compute to run a heavy virtual user load.
    - Benefiting from Azure's flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent.
    - Most important, testing network latency (network latency and speed of connection are two different things; network latency is usually very hard to test). By placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes, not because of a slow connection but because the page has been requested from the other side of the globe.
    Next steps: The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software, and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article
