Search Results

Search found 17847 results on 714 pages for 'virtual disk'.

Page 605/714

  • SATA Blu-Ray drive not detected via USB adapter

    - by LTR
    I'm using a SATA Blu-ray drive (Lite-On iHOS104), connected via a USB adapter to my Windows 7 (32-bit) notebook. The first time I plugged the adapter in, it worked perfectly: the drive appeared in Explorer, and my player software was able to play an encrypted Blu-ray disc just fine. After the first reboot, it never worked again. Now it shows up as "removable media" with a size of 0 bytes, regardless of whether a disc is inserted or not. The drive does not show up under the green "eject USB device" icon in the taskbar. The adapter is powered using a separate power adapter. I have tried:
    - disconnecting all other USB devices
    - testing with a DVD instead of a Blu-ray disc
    - letting Windows search for updated drivers for the drive (there are none)
    - using the same drive and adapter on another Win7 (but x64) computer - they work perfectly
    - using other USB ports on the same notebook - tried them all
    - using a different SATA/USB adapter
    What else can I do to diagnose this problem?
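
    One further check worth capturing (a suggestion, not from the original post): compare what the Windows storage stack enumerates against what Explorer shows, from an elevated command prompt.

        diskpart
        DISKPART> list disk
        DISKPART> list volume

    A working optical drive appears under "list volume" with Type "DVD-ROM". If the drive is missing from both lists, the USB bridge is not presenting it to the storage stack at all, and no driver tweak at the Explorer level will help.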

    Read the article

  • SQL 2008 SP1 crashing almost daily

    - by matijake
    Hey, almost every day our new DB crashes. It is a virtual server residing on the same hardware as 5 other servers, two of them being identical MS SQL 2008 SP1 and two being Oracle 11g, so I can pretty much rule out hardware issues. The server has a dedicated local LUN, 4 vCPUs and 8GB memory with a 2GB Windows swap file. It runs 4 instances. The primary instance is limited to 5GB memory with parallelism set to 4, running on MS SQL 2008 SP1 on Windows Server 2008 Enterprise R2 x64. Only that primary instance is crashing. After it crashes nothing can connect to it; it's even impossible to shut it down through the service manager. What I found in the logs is:

    ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG\SQLDump0081.txt
    SqlDumpExceptionHandler: Process 4788 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.

    The whole log can be seen at: http://kabl.org/files/SQLDump0081.txt
    A second crash log, made a second later, is at: http://kabl.org/files/SQLDump0082.txt
    I have analyzed the mini crashdump with Microsoft tools, but with no promising results. If it can help, here it is: http://kabl.org/files/SQLDump0081.mdmp
    Any ideas are greatly welcome, since it is becoming quite a pain in the ass to restart the server almost every day :) Regards, -Matija
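
    For readers who want to open the .mdmp themselves, a minimal WinDbg session would look roughly like this (a sketch; the symbol-cache path is an assumption):

        windbg -z SQLDump0081.mdmp
        .sympath srv*C:\symbols*https://msdl.microsoft.com/download/symbols
        .reload
        !analyze -v

    !analyze -v summarizes the faulting module and the stack behind the c0000005 access violation, which is usually enough to tell a SQL Server bug from a bad driver or filter on the box.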

    Read the article

  • computer build for extreme tabbed browsing

    - by David Berger
    I'm interested in building or buying a task-specific computer for my brother. His requirements are ridiculously simple: the machine has to be able to wait in hundreds of web-based virtual waiting rooms at once and not crash. To be competitive, he needs to be able to enter the waiting rooms and auto-refresh them faster. My question is, what priority do I give the different specs? My initial surmise is this:
    - Connection speed (nothing to do with my build, but I kind of think this will be more beneficial than anything I build for him)
    - Memory size -- I don't usually see Firefox taking up more than a gig, even when heavily tabbed, but I think one gig for the operating system and two gigs for the browser are necessary.
    - Processor speed -- I think the processor will affect performance, but even something out of date will do what he needs.
    - Memory speed/RAM bus -- I doubt this will matter much; it seems just on this side of irrelevant.
    Everything else is a non-issue for him. Does this seem to stack up correctly? Also, since he's looking to stay on the cheaper side, and I might end up recommending a refurb to him, is there anything particularly egregious that Vista would do if it came pre-installed? If I build it myself, I'll give him Linux, but if I have it shipped to him, I'm not sure I could walk him through the install process for Linux. I probably could walk him through the process to upgrade to Windows 7, if it were somehow worth it.

    Read the article

  • Apache2 VirtualHost on Debian not working

    - by milo5b
    I am having some problems with my Apache2 configuration. I have already looked for documentation on the web (Apache's site, Debian's site, here on Server Fault, etc.), but nothing really helps. I have tried different configurations, but my current one is the following (/etc/apache2/sites-available/default):

    <VirtualHost *:80>
        ServerAdmin [email protected]
        ServerName mysite.dev
        ServerAlias mysite.dev
        DocumentRoot /var/www/mysite.dev/httpdocs/
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>

    <VirtualHost *:80>
        ServerAdmin [email protected]
        ServerName livesite.com
        ServerAlias www.livesite.com
        DocumentRoot /var/www/livesite.com/httpdocs/
        <Directory /var/www/livesite.com/httpdocs/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>

    mysite.dev is just an entry in the hosts file on my client machine, while livesite.com is an actual DNS record which resolves to the same IP as the one set in the hosts file for mysite.dev. The problem is that when I type mysite.dev in my browser, it automatically goes to livesite.com. I have tried having separate files in /etc/apache2/sites-enabled/ (/etc/apache2/sites-enabled/mysite.dev, /etc/apache2/sites-enabled/livesite.com) - with the related sites-available files, of course - but with the same results. I have had a peek at error.log and access.log but there's nothing I can see. My httpd.conf contains:

    AccessFileName .htaccess

    And I have no /etc/apache2/conf.d/virtual.conf file. Any help would be greatly appreciated - if I did not provide enough info please let me know and I will do my best to provide all necessary information. Thanks
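
    A quick way to see how Apache actually maps names to vhosts (a suggestion, assuming the stock Debian layout) is to dump the parsed virtual-host table:

        apache2ctl -S                  # lists each VirtualHost and which one is the default for *:80
        # On Apache 2.2, name-based vhosts also need this directive once (usually in ports.conf):
        #   NameVirtualHost *:80
        a2ensite mysite.dev            # enable a per-site file from sites-available
        /etc/init.d/apache2 reload

    If apache2ctl -S shows the port 80 vhosts without a name-based grouping, Apache is serving everything from a single catch-all vhost, which matches the "every name lands on livesite.com" symptom described above.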

    Read the article

  • Unable to write DVD-R (blank DVDs)

    - by FrozenKing
    I have a problem with my DVD drive: it can read CDs and DVDs and can write CDs and all CD/DVD-RW media, but it cannot write DVDs. The drive model is a Samsung SH-S203B; I also have a log file created by Nero Burning ROM 11. Actually, the fact is that no blank DVDs are recognized in my drive - only previously written DVDs can be read! Is this a problem with the OS, should I try cleaning the drive, or is my drive simply about to fail, given that it is 4 years old and showing these symptoms?

    OS = WinXP
    AV = KIS 2012
    DVD Drive = Samsung SH-S203B (also tried the latest firmware and downgrade versions)

    IA32 Nero Version: 11.2.4.100
    Internal Version: 11,2,4,100
    Recorder: <TSSTcorp CDDVDW SH-S203B> Version: SB04 - HA 1 TA 0 - 11.2.4.100
      Adapter driver: <Serial ATA> HA 1
      Drive buffer: 2048kB
      Bus Type: via Inquiry data
    CD-ROM: <TSSTcorp CDDVDW SH-S203B> Version: SB04 - HA 1 TA 0 - 11.2.4.100
      Adapter driver: <Serial ATA> HA 1

    18:58:10 #37 SPTI -1511 File SCSIPassThrough.cpp, Line 224
      CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
      CDB Data: 0x28 00 00 00 00 00 00 00 10 00
      Sense Key: 0x04 (KEY_HARDWARE_ERROR)
      Sense Code: 0x3E
      Sense Qual: 0x02
      Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
      Buffer x08047340: Len x8000

    18:58:10 #38 SectorVerify 20 File Cdrdrv.cpp, Line 12057
      Read errors from sector 0 to 14 <Padding>

    18:58:19 #39 SPTI -1511 File SCSIPassThrough.cpp, Line 224
      CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
      CDB Data: 0x28 00 00 00 00 10 00 00 10 00
      Sense Key: 0x04 (KEY_HARDWARE_ERROR)
      Sense Code: 0x3E
      Sense Qual: 0x02
      Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
      Buffer x08047340: Len x8000

    18:58:19 #40 SectorVerify 21 File Cdrdrv.cpp, Line 12057
      Read error at sector 15 <Virtual Multisession Info>

    18:58:19 #41 SectorVerify 20 File Cdrdrv.cpp, Line 12057
      Read errors from sector 16 to 18 <Volume Structure Descriptor Sequence>

    18:58:28 #42 SPTI -1511 File SCSIPassThrough.cpp, Line 224
      CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
      CDB Data: 0x28 00 00 00 00 20 00 00 10 00
      Sense Key: 0x04 (KEY_HARDWARE_ERROR)
      Sense Code: 0x3E
      Sense Qual: 0x02
      Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
      Buffer x08047340: Len x8000

    Read the article

  • How can I set up a local nameserver and modify DNS zones on it?

    - by Joe Hopfgartner
    This is a follow-up to this question. I am having an issue with a router that doesn't support hairpinning properly; see the link above for details. Now I want to set up a local DNS server that hosts in our LAN can use to resolve public hostnames (usual web browsing, etc.). Additionally, I want to modify certain zones. In our LAN we have some servers serving resources that are not available in our public DNS zone, and we always have to configure our local LMHosts files accordingly. For example, we have a staging installation with a new feature running on a local web server, and we cannot access it by IP directly because the website runs in a named virtual host container, so we have to configure the LMHosts file to point some domain to the local IP address. And now we have the hairpinning issue on top of that. So my question is: what software can I use? Will BIND do the job? I just need to insert some A entries into the zone - as easy as possible. We have local Linux/Ubuntu servers.
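
    Given the "as easy as possible" requirement, dnsmasq may be a lighter fit than BIND: it forwards everything else to an upstream resolver and lets you override single names, much like a shared LMHosts file. A minimal sketch (the domain, IP and upstream are placeholders):

        # /etc/dnsmasq.conf
        server=8.8.8.8                             # upstream resolver for normal web browsing
        address=/staging.example.com/192.168.0.42  # A-record override for the LAN

    BIND will of course also do the job; the dnsmasq route just avoids maintaining full zone files for a handful of A entries.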

    Read the article

  • Simple options for port forwarding to a different port?

    - by Nick
    I have three network printers at our local office, all of which listen on port 9100. None of them offer the option of changing the listening port. We have a single public static IP address, and access to our main network is through a Linksys WRT-54G. We need to be able to print to these printers from outside the office. The problem is, with the 54G I can only forward a port to the SAME port on a particular IP address. What I really need is a way to forward to an IP address and a DIFFERENT port. I need to do this:

    In port    Destination
    9100       192.168.1.1:9100
    9101       192.168.1.2:9100
    9102       192.168.1.3:9100

    So I'm looking for options. I could set up an old computer with two network cards and iptables, I suppose, but that seems like a lot of overhead for something relatively simple. Is there a way a virtual machine (read: one network card) could do the advanced port forwarding, where I forward all traffic to it and it forwards it on to the right printer? Or what about those mini Linux distros that replace the WRT-54G's firmware - do any of those support what I need "out of the box"? I have a spare WRT - could I make it an iptables router? Recommendations for mini distros? Or is there an off-the-shelf product that does this (cheap/local preferred)? Any advice/options appreciated. Thanks!
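
    For scale, the iptables variant mentioned above is only a couple of rules per printer, whether on an old PC, a VM, or a WRT-54G reflashed with DD-WRT/OpenWrt (the WAN interface name is an assumption):

        # rewrite incoming port 9101 to printer 2's port 9100
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 9101 \
                 -j DNAT --to-destination 192.168.1.2:9100
        # and allow the rewritten traffic through
        iptables -A FORWARD -p tcp -d 192.168.1.2 --dport 9100 -j ACCEPT

    Repeat the pair for 9100 -> 192.168.1.1 and 9102 -> 192.168.1.3.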

    Read the article

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, and the partition where the delete happened is ext3; the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. My first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files - combing through them is not an option. Can a kind soul please save me and suggest a way to:
    a) get the repo back, or
    b) get the files back, with filenames
    For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:
    :!hg rm %
    This complained that the file is in a subrepository, so I specified the following:
    :!hg rm % -R engine
    which complained that the file has modifications, and to use -f to force. And this is when, somehow, I made up the following command:
    :!rm -rf % -R engine
    Somehow, seeing "force" makes me do a rm -rf by reflex.
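
    One tool worth trying for requirement b) - a pointer added here, not something from the original post - is extundelete, which recovers deleted files from ext3/ext4 via the filesystem journal and usually preserves names and directory structure (device and path are placeholders):

        # work on the unmounted partition
        umount /dev/sda3
        extundelete /dev/sda3 --restore-directory project/.hg
        # recovered files land under ./RECOVERED_FILES/

    If the .hg directory comes back intact, "hg verify" will confirm whether the changesets survived.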

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method/formula, etc., that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the hash split and the number of nested folders? Please note that although similar, this isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic.

    Assumptions:
    - I have 'some' initial number of files. This number would be arbitrary but large - say 500k to 10m+.
    - I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor.

    Put another way: as time progresses this store will grow, and I want the best balance of current performance and performance as my needs increase - say I double or triple my storage. I need to be able to address both current needs and projected future growth, planning ahead without sacrificing too much current performance.

    What I've come up with: I'm already thinking about using a hash split every so many characters to spread things out across multiple directories, keeping the trees even - very similar to what is outlined in the comments to the question above. It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would differ based on what I've outlined, depending on the initial scale. As far as I can figure, there isn't a one-size-fits-all solution here, and working something out experimentally would be horrendously time-intensive.
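
    To make the hash-split scheme concrete, a minimal sketch of the layout being described (the paths and the two-level, two-character split are arbitrary choices):

        f=/incoming/report.pdf
        h=$(md5sum "$f" | cut -c1-4)         # first 4 hex chars of the content hash, e.g. "9f3a"
        dir="/store/${h:0:2}/${h:2:2}"       # -> /store/9f/3a
        mkdir -p "$dir" && mv "$f" "$dir/"   # identical content always maps to the same path

    A two-character hex split gives 256 branches per level, so two levels spread 10m files at roughly 150 files per leaf directory; adding a third level divides that by another 256 as the store grows.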

    Read the article

  • Network Misconfiguration when adding first host to new vSphere cluster

    - by dunxd
    I am building a new vSphere cluster from scratch. I have installed ESXi on the first host and built a vCenter server on a VM residing on that host (storage is on the local hard drive, although we have iSCSI targets which I can reach from the host). The cluster is configured for HA. When I try to add the host to the cluster, I get an error at the point where HA is configured ("Cannot complete the ..."). I have stripped the host's network configuration down to the most basic: a single NIC attached to a single vSwitch, running the VMkernel port on VLAN 8, which is our management VLAN. The vCenter server will have a network address on this VLAN, so I also set the initial Virtual Machine Port Group to this VLAN and connected the vCenter server's NIC to this port group. I understand I can't connect the vCenter server to the VMkernel port group, but shouldn't I be able to connect it to a port group in the same VLAN? If not, do I need to create a VLAN specifically for the VMkernel port group? I plan to set up another port group for vMotion with a dedicated and isolated VLAN (i.e. a VLAN that isn't routed), so that one wouldn't allow vCenter to communicate. Does anyone have any suggestions, or other ideas for what might be causing the problem? I've read through the documentation, but it isn't giving me any pointers, and the error message isn't helping me beyond telling me something is wrong with my network config.

    Read the article

  • Microsoft (Hotmail, Live, MSN, Outlook): unable to send emails, and no support received from Microsoft in the 3 months we have been asking

    - by bombastic
    OK, this is something unbelievable. We have a website; users sign up and receive links to confirm they signed up, BUT:
    1 - Microsoft blocked our IP (no one with a Microsoft email account can receive our emails)
    2 - We tried contacting Microsoft, submitting the detailed form about our problem
    3 - We posted 3 times in their community about our problem
    4 - We tweeted at them about our problem
    5 - We tried finding a telephone support number (the few there are aren't helping at all)
    Do you think we solved it? The answer is NO :/ We are still unable to send emails from our IP to Microsoft email accounts, and have been for 3 months. Our emails are fine - we checked all the email headers following Microsoft's guidelines, but it seems that is not enough. Checking our IP reputation, everything seems OK; indeed, we can send email easily to any other provider (Gmail, Yahoo, etc.). Do you know any other way to try to get help?

    FULL STACK ERROR FROM MICROSOFT:
    host mx1.hotmail.com[65.55.37.120] said: 550 SC-001 (COL0-MC4-F28) Unfortunately, messages from 94.23.***** weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL FROM command)

    We are running a Virtual Private Server (so no hosting site), using NGINX too.
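
    Two checks worth attaching to any delisting request (generic dig commands; the IP and domain below are placeholders for the redacted ones above):

        dig -x 94.23.0.1 +short          # the PTR record should resolve, and match the HELO name the MTA sends
        dig TXT example.com +short       # the SPF record should include the sending IP

    Block-list removal requests commonly stall when either of these is missing, so they are cheap to rule out first.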

    Read the article

  • Are there cloud network drives that let users lock files or mark them as "in use"?

    - by Brandon Craig Rhodes
    Having spent several hours reading about the features and limitations of services like DropBox and Jungle Disk and the hundreds of competitors they seem to have (as though everyone with an AWS account these days goes ahead and writes a file sharing application just for fun), I have yet to find one that would let a team of people at a small business collaborate without stepping all over each other's toes. At a small business there are often many small documents per project - estimates, contracts, project plans, budgets - and team members frequently have to open and edit them, with all sorts of problems happening if two people edit a file at once. Even if a sharing service is smart enough to keep both versions of the file, most small-business software (like word processors, spreadsheets, estimating software, or billing systems) has no way to compare - much less to merge! - the changes in two rival versions of a file that two people edited at the same time without each other's knowledge. So, my question: are there cloud-based file sharing solutions that not only provide a virtual network drive that people can access, but also let users lock files - even if it's not a real lock but just a flag or indicator - to prevent remote workers from editing the same file at once? Having one person wait for another to finish editing is a very, very small inconvenience compared to the hour or more it can take to compare two estimates by hand until you find and resolve the rival changes. Given this, I am surprised that almost none of the popular file sharing solutions seem to recognize this problem and provide a solution. Does anyone know of a service that does?

    Read the article

  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling. I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

    # Create ext4 fs on /dev/sda10 disk
    mkfs.ext4 /dev/sda10
    # Enable writeback mode. This mode will typically provide the best ext4 performance.
    tune2fs -o journal_data_writeback /dev/sda10
    # Delete has_journal option
    tune2fs -O ^has_journal /dev/sda10
    # Required fsck
    e2fsck -f /dev/sda10
    # Check fs options
    dumpe2fs /dev/sda10 | more

    For more performance, add the fstab options data=writeback,noatime,nodiratime, i.e.:

    /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstalling it all and choosing ext2 at partitioning time (I'd rather not, though, since I've configured quite a lot of stuff already)?

    Read the article

  • 13" MacBook Pro with Win 7 and External VGA gets 640x480

    - by Jim McKeeth
    I have a brand new 13" MacBook Pro - 2.26 GHz with the NVIDIA 9400M video card. I installed Windows 7 (final) in Boot Camp and booted into Windows 7. I installed all the drivers from the Apple disk and it was working great. Then I attached the external VGA adapter (from Apple) to connect to a projector, and it dropped down to 640x480 resolution. No matter what I did, it wouldn't let me change to a higher resolution while the external VGA was connected. Once the adapter is disconnected, it goes back to the normal resolution. If I boot into Snow Leopard, it works fine. I tried updating the NVIDIA drivers and it behaved exactly the same. Ultimately I want to get 1024x768 or better resolution when connected to an external display. If it isn't fixable, then I am curious whether anyone else has seen this, whether it is a known issue, and who to contact for support (Apple, Microsoft or NVIDIA?).
    Update: Just attaching the Mini-DVI to VGA adapter kicks it into 640x480; no projector is required. I tried forcing the display driver from Generic PnP Monitor to one that supports 1024x768 and that didn't work either.

    Read the article

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install from prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults), the installation continues and completes just fine. Below is my kickstart file:

    # Andy's super awesome VM kickstart file
    install
    url --url=http://mirrors.kernel.org/centos/6/os/x86_64
    lang en_US.UTF-8
    keyboard us
    text
    %include /tmp/network.ks
    rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
    firewall --service=ssh
    authconfig --enableshadow --passalgo=sha512
    selinux --disabled
    timezone America/Los_Angeles
    bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
    # The following is the partition information you requested
    # Note that any partitions you deleted are not expressed
    # here so unless you clear all partitions first, this is
    # not guaranteed to work
    zerombr
    clearpart --all --drives=vda --initlabel
    part /boot --fstype=ext4 --size=500
    part pv.253002 --grow --size=1
    volgroup vg1 --pesize=4096 pv.253002
    logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
    logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032
    repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
    repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
    repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
    repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

    %packages
    @core
    @server-policy
    puppet
    facter
    %end

    %pre --erroronfail
    #!/bin/bash
    for x in `cat /proc/cmdline`; do
      case $x in
        SERVERNAME*)
          eval $x
          echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
          ;;
      esac
    done
    %end

    %post
    puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
    %end

    reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

    virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart

    During the install, I can pull up a console on the second terminal and verify that the contents of /tmp/network.ks are:

    network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?

    Read the article

  • Linux/Unix in Windows

    - by Dmitriy Nagirnyak
    Hi, what would be the best way to get a full-blown Unix/Linux bash inside Windows? I don't mean a virtual machine, but rather just the terminal, with the NTFS drives mounted. This way I could use the power of Unix/Linux while still being on Windows. The things I want to be able to do from the terminal:
    - Package management (apt-get, as in Debian).
    - SSH.
    - File operations (including grep and similar).
    - Run a web server (Apache, nginx) for testing purposes.
    - Easy to use: start terminal - Linux is on; end terminal - Linux is shut down.
    - Would be nice to be able to copy-paste from Windows into the terminal and vice versa.
    This really feels like a separate OS, and I realize that a VM would probably be the best thing, but I guess it should be possible to have a lighter installation. NOTE: I cannot just use Linux because I still need to do development on Windows. Also, I am a Linux newbie - just getting started with it - so sorry if I'm asking something obvious/stupid. Thanks, Dmitriy.

    Read the article

  • Make isolinux 4.0.3 chainload itself

    - by chainloader
    I have a bootable iso which boots into isolinux 4.0.3 and I want to make it chainload itself (my actual goal is to chainload isolinux.bin v4.0.1-debian, which should start up the Ubuntu10.10 Live CD, but for now I just want to make it chainload itself). I can't get isolinux to chainload any isolinux.bin, no matter what version. It either freezes or shows a "checksum error" message. I'm using VMWare to test the iso. Things I have tried:

    Chainload self:
    .com32 /boot/isolinux/chain.c32 /boot/isolinux/isolinux-debug.bin
    this shows:
    Loading the boot file...
    Booting...
    ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
    isolinux: Starting up, DL = 9F
    isolinux: Loaded spec packet OK, drive = 9F
    isolinux: Main image LBA = 53F00100
    ...and the machine freezes.

    Then I've tried this (chainload GRUB4DOS 0.4.5b):
    chainloader /boot/isolinux/isolinux-debug.bin
    Result:
    Error 13: Invalid or unsupported executable format

    Next try (chainload GRUB4DOS 0.4.5b):
    chainloader --force /boot/isolinux/isolinux-debug.bin
    boot
    Result:
    ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
    isolinux: Starting up, DL = 9F
    isolinux: Loaded spec packet OK, drive = 9F
    isolinux: No boot info table, assuming single session disk...
    isolinux: Spec packet missing LBA information, trying to wing it...
    isolinux: Main image LBA = 00000686
    isolinux: Image checksum error, sorry...
    Boot failed: press a key to retry...

    I have tried other things, but all of them failed miserably. Any suggestions?

    Read the article

  • Windows Server 2008 backup VHDs - is it possible to mount/open them in Windows 7?

    - by Simon
    Hi all, is it possible to mount the VHD files created by the Windows Server 2008 backup utility onto a Windows 7 (release) client? Following an array failure, I was very worried that there was a problem with both backup sets on different USB drives, as attaching the VHD to a Win 7 box did not show the expected structure (instead they behaved like unformatted disk space). Subsequently, I've attached the backup drive to a 2008 R2 machine that I'd intended to be the replacement, and the backup set can be browsed without issue (seemingly). When the new disks arrive I'll go through the recovery process and see where we are, but it looks promising so far. Is it simply the case that you can't take server-created VHDs and mount them on desktop machines? (Rather than hyperventilating at the thought of years of lost photos and email, I'm now just mildly curious.)
    Edit: One thing that has confused things is that the backup utility on Win 7 is more restrictive about restoring from external devices than the equivalent on 2008 R2. With R2, I can restore files 'from another server' and browse to external storage; Win 7 only allows the backup to be located on a network share. Once my box of new disks arrives and I've got something to restore onto, I'll move the smaller of the backup VHDs onto network storage reachable by Win 7 and see if the VHD is readable. I haven't read up on the VHD process used by the backup app - I'm assuming it's a base VHD plus differencing files for incremental backups, and that the restore app understands this.
    Finally: in retrospect the question should have been, 'Can I restore a 2008 R2 backup set via a Win 7 client?' Thanks
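
    For reference, Windows 7 can attach a VHD natively from an elevated command prompt (the path below is a placeholder); whether it can make sense of a server-backup VHD is exactly what is in question here:

        diskpart
        DISKPART> select vdisk file="E:\WindowsImageBackup\myserver\Backup\backup.vhd"
        DISKPART> attach vdisk readonly

    Attaching read-only avoids modifying the backup set in case the restore application does depend on differencing files.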

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry, the title of this question is a little ambiguous, but I don't really have any idea where the issue lies - I'm seeking some clarification of the server error logs. Basically, I had a dedicated server running Windows 2003 and Plesk (v8, I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch: new hardware was put in, a new operating system (Win2008), a new Plesk installation (v9.5), new software (MSSQL etc.), then all data was ported over manually from the old C and D drives to restore all 30 client sites. It was hell! All has been OK for a couple of days, but about an hour ago - POP! - suddenly all sites went down, giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear: it can - and probably will - happen again. The guys on support gave me the following errors from the server log:

    The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

    The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code.

    The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code.

    The support guys are so ambiguous about this, and it scares me horribly. Can anyone positively identify the cause of this error, which led to all client websites going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...
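
    One thing the first error points at (a suggestion, not a confirmed diagnosis): IIS could not create its disk-cache subdirectory, so it is worth confirming that C:\inetpub\temp\appPools exists and is writable by the worker processes before the pools spin up:

        mkdir C:\inetpub\temp\appPools
        icacls C:\inetpub\temp\appPools /grant "IIS_IUSRS:(OI)(CI)(M)"
        net stop was /y && net start w3svc

    If the directory keeps disappearing or losing permissions, whatever is cleaning C:\inetpub\temp (an antivirus scan or a temp-cleanup job is a common culprit) is the thing to chase.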

    Read the article

  • CPU-adaptive compression

    - by liori
    Hello, let's assume I need to send some data from one computer to another over a pretty fast network - for example a standard 100Mbit connection (~10MB/s). My disk drives are standard HDDs, so their speed is somewhere between 30MB/s and 100MB/s. So I guess that compressing the data on the fly could help. But... I don't want to be limited by CPU. If I choose an algorithm that is intensive on CPU, the transfer will actually go slower than without compression. This is difficult with compressors like GZIP and BZIP2, because you usually set the compression strength once for the whole transfer, and my data streams are sometimes easy, sometimes hard to compress - this makes the process suboptimal, because sometimes I do not use the full CPU, and sometimes the bandwidth is underutilized. Is there a compression program that would adapt to the current CPU/bandwidth and hit the sweet spot so that the transfer is optimal? Ideally for Linux, but I am still curious about all solutions. I'd love to see something compatible with GZIP/BZIP2 decompressors, but this is not necessary. So I'd like to optimize total transfer time, not simply the number of bytes to send. Also, I don't need real-time decompression - real-time compression is enough; the destination host can process the data later in its spare time. I know this doesn't change much (compression is usually much more CPU-intensive than decompression), but if there's a solution that could exploit this fact, all the better. Each time I am transferring different data, and I really want to make these one-time transfers as quick as possible, so I won't benefit from making multiple transfers faster through stronger compression. Thanks,
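
    One tool that now implements exactly this idea is zstd: its --adapt flag raises or lowers the compression level as the output pipe backs up or drains. A sketch of a one-shot transfer (host and paths are placeholders; note it is not gzip/bzip2-compatible on the receiving end):

        tar cf - /data | zstd --adapt | ssh user@desthost 'zstd -d | tar xf - -C /restore'

    Since real-time decompression isn't required here, the receiving side could also just store the .zst stream and unpack it later in its spare time.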

    Read the article

  • During Vista Repair - No operating system is listed.

    - by Jack Marchetti
    After a Windows update, my brother's Gateway computer loads to "Step 3 of 3: 0%" and reboots. Safe Mode does not work. I placed a Vista DVD in the drive and rebooted. (Note: this is my Vista DVD, not the recovery/system disc that would come with the computer. Gateway does not ship CDs anymore - I believe they store the recovery image on a partition, but that partition has been wiped out.) I chose "Repair Your Computer" and I get a dialog box, but no operating system is listed. I'm then prompted to "Load Drivers". What drivers am I supposed to be loading here, and from where? I placed a CD in the drive to "load drivers", but I don't see my DVD drive listed - all I saw was X:/Sources along with several removable media slots that were empty. On another screen I tried Startup Repair, which didn't do anything. I attempted to use System Restore, but it doesn't detect the hard drive. I'm guessing that I'm missing some sort of SATA driver and that is why the hard disk is not being found. Any ideas on this?
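
    A quick way to test the missing-SATA-driver theory (generic steps, not from the original post): from the repair environment, press Shift+F10 to open a command prompt and ask the disk stack directly:

        diskpart
        DISKPART> list disk

    If the hard disk does not appear in the list, the repair environment genuinely lacks a controller driver (or the disk itself has failed); if it does appear, the problem is more likely the boot configuration than a driver.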

    Read the article

  • How to calculate RAM value as performance per dollar spent

    - by Stucko
    Hi, I'm trying to make decisions on buying a new PC. I have most specifications (processor/graphics card/hard disk) pinned down, except for RAM. I am wondering what the best RAM configuration is for the amount of money I'm spending. As the question of "best" is subjective, I'd like to know how I would calculate the value of the RAM sticks being sold.

    1. (sample) The value of the amount of memory:
       1) CORSAIR PC1333 D3 2GB = costs $80
       2) CORSAIR PC1333 D3 4GB = costs $190
       Would it be better to buy two of item 1) instead of one of item 2)? Although I would normally choose one of item 2), as the difference is only 190 - (80*2) = $30 and I would save a DIMM slot, what I need is the value per amount:
       1) 80 / 2 = $40 per 1GB
       2) 190 / 4 = $47.50 per 1GB

    2. The value of frequency:
       1) CORSAIR PC1333 4GB = costs $190
       2) CORSAIR PC1600C7 4GB = costs $325
       I'm not even sure of the denominator... $ per 1GHz of speed?

    3. The value of latency:
       1) CORSAIR CMP1600C8 8-8-8-24 2GB x3 (triple channel) = costs $589
       2) CORSAIR CMP1600C7D 7-7-7-20 2GB x3 (triple channel) = costs $880
       Again, I'm not even sure of the denominator.

    Just for your information, I'd like to get the best out of the money I'm going to spend on a 6-DIMM-slot i7 motherboard.
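
    One hedged way to fold capacity and frequency into a single denominator (a made-up normalization, not an established metric) is dollars per GB-MHz:

        value = price / (capacity_GB x frequency_MHz)
        1) 190 / (4 x 1333) = ~$0.036 per GB-MHz
        2) 325 / (4 x 1600) = ~$0.051 per GB-MHz

    Latency is easier to compare in absolute time than in cycles: first-word latency in ns = CAS x 2000 / data rate (MT/s), so CL8 at 1600MT/s is 10.0ns while CL7 is 8.75ns - about a 12% latency improvement for a 49% price premium in the example above.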

    Read the article

  • Debian - Secure system from current administrator

    - by netadmin
    Hello, I am the network and systems administrator in an organization of just under 500 users. We have a number of Windows servers, and that is certainly my area of expertise. We also have a very small handful of Debian servers. We are about to terminate the sysadmin of these Debian systems. Short of powering down the systems, I would like to know how I can ensure that the previous admin does not have control of them in the future, at least until we hire a replacement Linux sysadmin. I have physical/virtual-console access to each of the systems, so I can reboot them in various user modes. I just don't know what to do. Please assume that I do not currently have root access to all of these systems (an oversight on my part that I now recognize). I have some experience with Linux and use it on my desktop on a daily basis, but I must admit that I am a competent user of Linux, not a systems admin. I have no fear of the command line, however... Is there a list of steps that one should take to "secure" a system from somebody else? Again, I assure you that this is legit: I am re-taking control of my employer's systems, at the request of my employer. I hope not to have to shut the systems down permanently while still being reasonably certain that they are secure. Thanks for your time.
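
    Not a complete answer, but the usual first pass after regaining root via single-user mode looks something like this (the departing admin's username is a placeholder):

        passwd root                                    # set a new root password
        usermod -L olduser                             # lock the departing admin's account
        find / -name authorized_keys 2>/dev/null       # audit SSH keys, including any under /root
        grep -r olduser /etc/sudoers /etc/sudoers.d/   # find and remove his sudo grants
        crontab -l -u olduser; ls /etc/cron.*          # look for scheduled jobs he controls

    Rotating any shared service passwords he knew and checking /etc/passwd for unexpected UID-0 accounts rounds out the quick version; a full audit is a job for the replacement sysadmin.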

    Read the article

  • Monitoring multiple sites on a single server using OpsView

    - by Kev
    We have several web servers. On each of these servers there can be ~250 web sites. I need to add a HTTP check for each site on each server. Each site has a reserved host header that we know can always be resolved in the format of: w10000.hostchecks.mycompany.com w10020.hostchecks.mycompany.com w11992.hostchecks.mycompany.com ..and so on.. What I want is for there to be a master ping check on the web server's main IP address and then separate HTTP checks for each of the sites on the server. If the master ping test fails then I want the HTTP tests to cease until the master ping check goes OK. I had a stab at this and tried do the following: Create a parent host that does a ping check on the server's main ip address (e.g. server is named WEB0001). For each of the sites that reside on WEB0001: Create a separate Host with a Primary Hostname of wXXXXX.hostchecks.mycompany.com Make WEB0001 the parent host Add a monitor (HTTP check to a special url that is mapped into each site using a virtual directory: H- $HOSTADDRESS$ -u /__hostcheck/IsAlive.aspx -w 5 -c 10 -p 80 However I find that if I down the parent server (WEB0001) the http checks seem to continue. Am I going about this completely the wrong way?

    Read the article

  • Why can't I boot into Windows Recovery Environment to fix my HDD or salvage my data?

    - by Kevin
    I've been trying to get in to WindowsRE to salvage the files on my Sony Vaio laptop after it failed to load Vista (it finally, consistently displays "Error loading operating system" after months of such intermittent failures, usually rectified via restarts or utilizing Startup Repair or CHKDSK from WindowsRE) . The problem is, after successfully accessing it once after this failure (and many times before over the course of the laptop's life), I can no longer get it to load. During the last successful access (right after the failure), I ran startup repair, which itself failed and notified me that the boot sector was corrupt. I attempted to head in to Sony's proprietary recovery tools menu, which is accessible from WindowsRE when it is loaded from the recovery partition or recovery disk, however it hung. I have since been unable to access the recovery environment after restarting, using any of these methods: Access via the recovery partition (pressing F10 on boot) Access via recovery DVD (created using the same computer when it was healthy) Access via a Windows Vista installation DVD All three methods produce the same results: The computer acknowledges the boot attempt The computer successfully gets passed the "Windows is loading files" screen The computer successfully gets passed the Windows loading screen The computer then stalls at a black screen, while showing HDD activity (via indicator light). After a few minutes, the HDD activity ceases, and after a few more minutes, the over sized cursor that is utilized in WindowsRE appears on the black screen. The actual recovery environment, however, never appears, even after leaving the computer in such a state overnight. What is fustrating is that other bootable utilities, such as SeaTools for DOS and MemTest, boot up and run fine. In running perfectly normally, MemTest was able to produce a plethora of errors utilizing my RAM. I'm inclined to believe the RAM's faultiness may causing the WindowsRE booting to fail. Would this be a valid assumption? If I'm not mistaken, booting from external media utilizes the RAM, so such a reason is plausible, assuming my knowledge of bootloading is correct. Other than that, I can't figure out any reason why all the bootable utilities except WindowsRE run fine. Does anyone know what the problem is, or could be? Any solutions?

    Read the article
