Search Results

Search found 5747 results on 230 pages for 'backup'.

  • Router failover not detecting loss of link on the outside interface

    - by Matt
    Suppose I have two routers configured in a master/slave configuration. They look something like this (addresses are not the real ones):

        123.123.123.10 <===> [eth0] Router 1 (10.1.1.2) [eth1] ===> +----------+
                                                                    | 10.1.1.1 | ===> LAN
        172.123.123.10 <===> [eth0] Router 2 (10.1.1.3) [eth1] ===> +----------+

    10.1.1.1 is the default route for the network (10.1.1.0). What's slightly different in this config from others I've seen is that I don't have an external virtual IP. Also, the 10.1.1.x addresses are, in real life, public IPs (not the private ones shown here). This is more of a router setup than a firewall setup, so I'm not using NAT here.

    Now, the issue I'm having is that I can't see any way to configure UCARP or VRRP to monitor both eth0 and eth1 and fail over to the backup router should either of them go down. What I'm seeing is that if Router 1 is the master and I unplug eth0 on Router 1, it doesn't fail over to Router 2. It does, however, if I instead unplug eth1 of Router 1. In VRRP I see there is a cluster group, but it seems that for this to work you need to assign virtual IPs or VRRP instances to it, rather than actual interfaces. I hope my explanation is clear. How do I get around this?
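
    One way around this, assuming keepalived (a VRRP implementation) is an acceptable alternative to ucarp: its track_interface block fails the instance when any listed interface loses link, so both NICs can trigger failover even though the VRRP adverts only travel on one of them. A minimal sketch with made-up values:

        vrrp_instance VI_1 {
            state MASTER
            interface eth1              # VRRP adverts travel on the LAN side
            virtual_router_id 51
            priority 150                # the slave uses a lower value, e.g. 100
            advert_int 1
            track_interface {
                eth0                    # losing link here now also triggers failover
                eth1
            }
            virtual_ipaddress {
                10.1.1.1/24 dev eth1
            }
        }

    With ucarp alone, the usual workaround is an external health-check script that stops the master's ucarp process when eth0 loses carrier, since ucarp itself only watches a single interface.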

  • Datacenter IP Addressing and DNS Management

    - by user65248
    Hello everyone. We are setting up a small datacenter: about 300 amps of power and at most 50 racks. I mention these numbers so you can picture the size and requirements. I have studied networking, mostly Microsoft and Windows-based systems, but I can't work out how IP addressing and DNS management are handled in a datacenter, and unfortunately I have to set everything up by myself (we will have some staff to do some of the work). Now my questions.

    Datacenter IP addressing:

    - Suppose we have got a block of 200 IP addresses from our ISP. How can I manage this block of IP addresses? Is there any software out there to simplify this?
    - I heard that using a DHCP server in a datacenter is not recommended; that said, what would you say about the Microsoft DHCP server, considering of course that we need to have backup servers in case of failure?
    - How can I assign a block of IPs to a specific rack? I know it differs with different software and management tools, but I'm asking how it is normally done.
    - IP addresses are exposed to the whole network. What if a customer tries to use an IP address that is not assigned to their server or rack? How can I prevent this, or how can I track IP usage?

    DNS management:

    - I'm going to set up at least two servers as our DNS servers. I know nothing about datacenter DNS systems, but I have configured DNS servers on normal networks and also for webservers. What exactly needs to be done for DNS in a datacenter that is not done for normal networks?
    - How can I configure PTR records? Why can't I configure PTR records on my webserver-side DNS server, so that it has to be done on the datacenter's DNS servers? What is the difference in the datacenter's DNS servers that allows them to do so? I know the question is very simple, but I'm confused.
    - Is there any software out there that handles the whole thing, i.e. automatically adds records to the DNS and also manages IP addresses?

    Thanks in advance
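
    On the PTR question, the short answer is delegation: reverse DNS lives in the in-addr.arpa tree, and the reverse zone for an address block is delegated by the registry/ISP to whoever was allocated the block. That is why PTR records must be hosted on (or delegated to) the DNS servers of the party that owns the IPs - the datacenter - and cannot simply be declared on an arbitrary webserver-side DNS server. A sketch of a reverse zone, assuming BIND and hypothetical names, for a 123.123.123.0/24 block:

        ; zone "123.123.123.in-addr.arpa"
        $TTL 3600
        @    IN  SOA  ns1.example.net. hostmaster.example.net. (
                      2012010101 3600 900 604800 3600 )
             IN  NS   ns1.example.net.
             IN  NS   ns2.example.net.
        10   IN  PTR  customer1.example.net.   ; 123.123.123.10
        11   IN  PTR  customer2.example.net.   ; 123.123.123.11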

  • Accounting setup in FreeRADIUS with MikroTik and the "always" module

    - by Matt
    I have a FreeRADIUS setup that provides authentication for users on a wireless network. The access points are all MikroTik hardware and the users are connected 24/7. We've been using daloRADIUS with MySQL and FreeRADIUS 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present. So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA and was looking specifically at the following section:

        # Since our users may be connected for more than 24 hours at a time we
        # keep this in here, it will reset some attributes daily so that the
        # accounting packets work correctly
        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    However, that link references FreeRADIUS 1 and I can't find this in the radiusd.conf file for FreeRADIUS 2. What does it do, and could it be a reason I'm missing data?

    EDIT: I have found one issue. We have a backup FreeRADIUS server that is also receiving the accounting packets. Although the databases are replicating, it's only a master/slave configuration: if the slave receives accounting packets, it won't replicate them back to the master. Although I suspect this might explain it, the boss is not convinced because of the always module. Is there anything special I need to configure in the MikroTik APs or in FreeRADIUS 2 for clients connected 24/7?
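
    On the last question: the usual reason long-lived sessions lose accounting detail is that the NAS only sends Start and Stop records, so a missed Stop loses the whole session. RouterOS can send periodic Acct-Interim-Update packets instead; a sketch, with a hypothetical hotspot profile name and an interval you would tune:

        # RouterOS CLI -- enable interim accounting updates for PPP users
        /ppp aaa set use-radius=yes accounting=yes interim-update=5m
        # ...and for hotspot users, per server profile
        /ip hotspot profile set hsprof1 radius-interim-update=5m

    As for the "always" instances themselves: rlm_always still exists in FreeRADIUS 2, and as far as I can tell these instances just unconditionally return a status code (they are declared alongside the other modules rather than where the 1.x wiki shows them), so their absence from your radiusd.conf is a packaging difference and unlikely to be what's dropping accounting rows.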

  • Moving Farm to co-location hosting - network settings requirements

    - by Saariko
    I am moving my farm (two Dell R620s) to a co-location hosting service, and I am trying to figure out a secure way to set up my network. The requirements are:

    - Host 1 is the working host and runs ESXi 5.1, the vSphere appliance, and 4 client VMs (all Windows 2008 R2).
    - Host 2 also has ESXi 5.1 installed, plus a single VM with Veeam Backup & Replication 6.5 keeping a copy of Host 1's clients on Host 2's internal storage (this solution is due to a very small budget; in case of failure on Host 1 I can redirect the IPs).
    - Only 2 client VMs require a network address and access from the WAN; the ISP provides an IP range for them (with gateway and DNS).
    - I need a connection to the iDRACs from my office (with the option to create an SSL VPN tunnel).
    - I need a connection to the vSphere appliances.
    - I want to be able to RDP into the client VMs.

    The current configuration is that each host has its dedicated iDRAC NIC connected, and another NIC (NIC #1) connected with a static IP on 192.168.3.x. The iDRACs have static IPs from the same network range (192.168.3.x).

    My thoughts: on NIC #2 of both hosts I will connect a crossover cable, and I will give each client VM that needs internet access a secondary VM network with the IP assigned by the ISP, open only to the web...

    My questions:

    - Should I give (external) IPs to the machines that do NOT require WAN access? I can't see a way to RDP to them directly otherwise.
    - Should I use the crossover cable, or just plug NIC #2 into the switch?
    - Will this setup even work? What do I need to verify?
    - What virtual NICs and/or switches should I create on the hosts?
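
    For the last question, a common pattern is one vSwitch per physical role: a management vSwitch on NIC #1 (management network plus the port groups for internal-only VMs), a port group for the two public-facing VMs, and a dedicated vSwitch on NIC #2 for the crossover replication link. A sketch from the ESXi 5.x shell, with placeholder names:

        # dedicated vSwitch for the host-to-host replication link on NIC #2
        esxcli network vswitch standard add --vswitch-name=vSwitch-Repl
        esxcli network vswitch standard uplink add --vswitch-name=vSwitch-Repl --uplink-name=vmnic1
        esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-Repl --portgroup-name=Replication

    A crossover link like this keeps Veeam's replication traffic off the colo switch entirely, which is usually worth the extra cable.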

  • Set up a new domain controller over a temporary VPN, but now Windows delays startup?

    - by Kris Anderson
    I'm migrating servers from colo locations to Amazon VPC EC2 instances. If you haven't worked with Amazon VPC before, VPN is a pain in the arse! Anyway, I set up a new server that acts as the domain controller for our Amazon VPC. In order to migrate all the user accounts from our existing domain controllers, I manually connected to our colo VPN from the new Amazon EC2 machine. I was able to join the domain, and the new Amazon server became another domain controller on our network. So far so good.

    The problem I'm having is that when booting the EC2 domain controller (which is no longer connected to the VPN, so it can't communicate with the existing controllers), it takes a good 6-8 minutes before I can remote into the server, instead of the 1-2 minutes it should take. Also, during this time most of the services we run (like IIS) give 404 errors until the 6-8 minutes have passed. It's almost as if the domain controller attempts to reach the other domain controllers first and falls back to the one on the local machine after 6-8 minutes. I don't think that's what's happening, though, because Server 2008 R2 doesn't have primary and backup domain controllers; they're all equal as far as Windows is concerned. For my network adapter I have only one DNS server listed, 127.0.0.1, so it should be looking up the local domain controller and not the other domain controllers it connected to while the VPN was enabled.

    In the server logs I'm seeing these warnings pop up during a reboot:

        The winlogon notification subscriber is taking long time to handle the notification event (CreateSession).
        The winlogon notification subscriber took 409 second(s) to handle the notification event (CreateSession).

    Any ideas on what's happening here? I would try removing the existing domain controllers from the new Amazon EC2 machine, but I still need to connect over VPN a few times to migrate some data between the servers, and I don't want that change being replicated back to the other domain controllers in our colo locations.
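
    A quick way to confirm whether the slow boot is being spent waiting on the unreachable colo DCs - a sketch, run on the EC2 DC after a reboot, with the domain name as a placeholder:

        REM which DC did the locator actually pick?
        nltest /dsgetdc:example.com
        REM is the local DC advertising all of its roles?
        dcdiag /test:advertising /v
        REM sanity-check the 127.0.0.1 DNS configuration
        dcdiag /test:dns

    If nltest returns one of the colo DCs, or dcdiag stalls on its replication-related tests, the box really is trying to reach its partners at boot, which would fit the 6-8 minute delay.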

  • How can I set up Redmine => Active Directory authentication?

    - by Chris R
    First, I'm not an AD admin on site, but my manager has asked me to try to get my personal Redmine installation to integrate with Active Directory, in order to test-drive it for a larger-scale rollout. Our AD server is at host:port ims.example.com:389 and I have a user IMS/me. Right now I also have a user me in Redmine using local authentication. I have created an Active Directory LDAP authentication method in Redmine with the following parameters:

        Host:      ims.example.com
        Port:      389
        Base DN:   cn=Users,dc=ims,dc=example,dc=com
        On-the-fly user creation: yes
        Login:     sAMAccountName
        Firstname: givenName
        Lastname:  sN
        Email:     mail

    Testing this connection works just fine. I have, however, not successfully authenticated with it. I've created a backup admin user so that I can get back into the me account if I break things, and then I've tried changing me to use the Active Directory credentials. However, once I do, nothing works to log in. I have tried all of these login name options:

        me
        IMS/me
        IMS\me

    I've used my known domain password, but no joy. So, what setting do I have wrong, or what information do I need to acquire in order to make this work?
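
    One thing worth checking, as a guess from the settings above: Active Directory normally refuses anonymous searches, and Redmine first binds to the directory to find the user's DN before trying the user's own password. If the Account/Password fields of the auth mode are empty, the connection test can still pass while real logins fail. A sketch with a hypothetical service account:

        Account:  cn=redmine-bind,cn=Users,dc=ims,dc=example,dc=com
        Password: (the service account's password)
        Login attribute: sAMAccountName

    With that in place, the Redmine login name is the bare sAMAccountName (me), not IMS\me. Also, the Lastname attribute is conventionally written sn - LDAP attribute names are case-insensitive, so sN should still match, but it's worth normalizing while you're in there.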

  • What folders to encrypt with EFS on Windows 7 laptop?

    - by Joe Schmoe
    Since I've been using my laptop more as a laptop recently (carrying it around), I am now re-evaluating my strategy for protecting confidential information in case it is stolen. Keep in mind that my laptop is 6 years old (a Lenovo T61 with 8 GB of RAM and a 2 GHz dual-core CPU). It runs Windows 7 fine, but it is no speed demon, and it doesn't support the AES instruction set.

    I've been using a TrueCrypt volume, mounted on demand, for really important stuff like financial statements forever. Nothing else is encrypted. I just finished my evaluation of EFS and BitLocker and took a closer look at TrueCrypt again. I've come to the conclusion that boot-partition encryption via BitLocker or TrueCrypt is not worth the hassle. I may decide in the future to use BitLocker or TrueCrypt to encrypt one of the data volumes, but at this point I intend to use EFS to encrypt the parts of my hard drive that contain data I wouldn't want exposed.

    The purpose of this post is to get your feedback about which folders should be encrypted from a general point of view (of course everyone will have something specific in addition). Here is what I have thought of so far (I will update if I think of something else):

    1) AppData\Local\Microsoft\Outlook - Outlook files.
    2) AppData\Local\Thunderbird\Profiles and AppData\Roaming\Thunderbird\Profiles - Thunderbird profiles; not sure yet where exactly the data is stored.
    3) AppData\Roaming\Mozilla\Firefox\Profiles\djdsakdjh.default\bookmarkbackups - Firefox bookmark backups. Is there a separate location for the "main" Firefox bookmark file? I haven't figured it out yet.
    4) Bookmarks for Chrome (I don't know where its bookmarks are) and Internet Explorer ($Username\Favorites) - I don't really use them, but why not secure those as well.
    5) The Downloads\, My Documents\ and My Pictures\ folders. I don't think I need to encrypt, say, the latest service pack for Visual Studio, so I will probably create a subfolder called "Secure" in each of these folders and set it to "Encrypted". Anything sensitive I will save in that folder.

    Any other suggestions? Again, this is from the point of view of your "regular office user".
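
    For what it's worth, once you settle on the folder list it can be scripted with Windows' built-in cipher tool; a sketch, with example paths:

        :: mark a folder encrypted; /s recurses, and future files inherit EFS
        cipher /e /s:"%USERPROFILE%\Documents\Secure"
        :: back up the EFS certificate and private key -- without this, a
        :: reinstall or a lost profile makes every EFS file unreadable
        cipher /x "%USERPROFILE%\efs-cert-backup"

    Arguably the cipher /x step matters more than the folder choice: EFS ties the data to your user certificate, not to your password alone.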

  • Missing Data on VMware Virtual Disk

    - by Lachlan McDonald
    Evening all, I've got a considerable problem that I'm hoping to get some resolution on. I had two VMware 6.5 virtual machines, one running Ubuntu 9.10 and the other Ubuntu 10.04. I used 9.10 as a testing server, so I could install a LAMP environment to prepare some code. Over the months I took a number of snapshots of this VM just in case something went wrong, and did a full copy of the entire VM a month ago. I created the 10.04 VM when Lucid Lynx launched, so I could continue development on a fresh install. To get the files over, I simply added the 9.10 virtual disk to the 10.04 VM, grabbed some of the files I needed, and dismounted it.

    Unknown to me at the time, the changes to the 9.10 virtual disk meant that I could no longer boot it with the 9.10 VM; I'd always get the "The parent virtual disk has been modified since the child was created." error. I decided this was a good time to back up all the critical files, but now, whenever I open the 9.10 disk to get the data, it isn't in the same state as it was earlier. My question: is it possible that when I mount the virtual disk I'm not seeing the most recent snapshot, or, in my blundering, have I lost the virtual disk? Cheers
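
    If it helps while you wait for better answers: that exact error usually means the parent disk's content ID changed after the snapshot chain was created. Attaching the base disk to the 10.04 VM (and letting it write) would do precisely that - and it also means the 10.04 VM mounted the base disk without its snapshot deltas, which is why the data looks older than expected. The IDs live in the plain-text .vmdk descriptor files; as a last-resort sketch (file names hypothetical, back everything up first):

        # Ubuntu910-000001.vmdk  (the child/snapshot descriptor)
        CID=fffffffe
        parentCID=ffffffff              # must equal the CID= line in Ubuntu910.vmdk
        parentFileNameHint="Ubuntu910.vmdk"

    Editing the child's parentCID to match the parent's current CID can make the chain mountable again, but any blocks the 10.04 VM actually rewrote on the parent are gone from the snapshot's point of view.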

  • How is Apache still working?

    - by PJ
    Recently, I decided to set up a local development environment for my work projects. I'm a PHP developer, with just enough knowledge of Linux and Apache to break things mightily. To get the local environment looking like my work environment, I had to upgrade PHP. When I did, Apache wouldn't restart. I decided I wanted to start fresh (this is where things went wrong) and that I'd reinstall Apache and PHP using MacPorts. So I went through and tried to delete all the Apache files. Yup. I ran locate apache2 and deleted any folders that looked important. (I know, I know.) Then I ran /usr/libexec/locate.updatedb to make sure everything was up to date. I even restarted my machine, just to make sure.

    The issue is, http://localhost still works, as does an alias I set up, http://butler. Shouldn't they not work? Now that I'm this far in, are there any tips for how to completely remove Apache so I can start over? Worst case, I have a Time Machine backup, so I can always just restore that... Thanks in advance.
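
    Two things are probably conspiring here: deleting files doesn't stop a daemon that is already running (it keeps its open file handles), and OS X ships its own Apache (/usr/sbin/httpd, managed by launchd) separate from anything MacPorts installs. A quick way to see who is really answering on port 80 - a sketch:

        # which process is bound to port 80?
        sudo lsof -nP -iTCP:80 -sTCP:LISTEN
        # is launchd keeping the bundled Apache alive?
        sudo launchctl list | grep -iE 'apache|httpd'
        # which config file is that httpd actually using?
        /usr/sbin/httpd -V | grep SERVER_CONFIG_FILE

    Once you know which binary is serving localhost, you can disable it cleanly (for the bundled one, untick Web Sharing or unload it with launchctl) instead of deleting files out from under it.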

  • MySQL Cluster strange behaviour

    - by Champion
    Hi guys, I have a MySQL cluster spread across two servers, with a management node on each of them. It went down somehow. I ran the following commands to start the cluster.

    Start the management node on srv1:

        srv1: mysqlc/bin/ndb_mgmd --initial -f my_cluster/conf/config.ini --configdir=/home/mysql_cluster/my_cluster/conf

    Start the management node on srv2:

        srv2: mysqlc/bin/ndb_mgmd --initial -f my_cluster/conf/config.ini --configdir=/home/mysql_cluster/my_cluster/conf

    Start the ndbd node on srv1:

        srv1: mysqlc/bin/ndbd --initial -c localhost:1186

    Start the ndbd node on srv2:

        srv2: mysqlc/bin/ndbd --initial -c localhost:1186

    Start the mysqld server on srv1:

        srv1: mysqlc/bin/mysqld --defaults-file=my_cluster/conf/my.cnf --user=root &

    And here is the problem: the MySQL server is not loading the data. Only the database names are present. All the tables with ENGINE=ndbcluster are not being loaded; tables with ENGINE=myisam are loaded fine. Backup scripts helped me load the data back, but this way I can't use the cluster setup. A similar issue appeared when I started srv2. How can I resolve this issue?
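
    One detail that jumps out: per the MySQL Cluster documentation, ndbd --initial deletes the data node's on-disk cluster file system and recreates it - it is meant for a first-time start (or certain upgrades), not routine restarts. Starting both data nodes with --initial would by itself leave the NDB tables empty while MyISAM tables survive, which matches your symptom. A sketch of a safer restart sequence, checked from the management client:

        # what does the management server think is running?
        ndb_mgm -e show
        ndb_mgm -e "all status"
        # a normal data-node restart, without wiping its file system
        mysqlc/bin/ndbd -c localhost:1186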

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. I have the following server:

        Intel(R) Xeon(TM) MP CPU 3.16GHz
        # cat /proc/cpuinfo | grep proce | wc -l
        8

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

    RAID card:

        *-storage
             description: RAID bus controller
             product: MegaRAID
             vendor: LSI Logic / Symbios Logic
             physical id: 7
             bus info: pci@0000:13:07.0
             logical name: scsi2
             version: 01
             width: 32 bits
             clock: 66MHz
             capabilities: storage pm bus_master cap_list rom
             configuration: driver=megaraid latency=32
             resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10 x 148 GB SCSI U320 15k in RAID 5:

        /dev/sdb1  807G  674G  93G  88%  /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

        # ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        # ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        # ifconfig bond0
        bond0   Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
                inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
                inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
                UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
                RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
                TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:10000
                RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

    This server runs only nfs-kernel-server, on Debian 6:

        # uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    Here is what happens: once per day or two, the load average goes up - it can reach around 40. But if I restart nfs-kernel-server, everything is OK again, until the next day or a little later, when the load goes up again. The servers are connected to a D-Link DGS-1016D switch with gigabit ports. I have tried everything to find out what the problem is and why it's happening, but I still cannot resolve this issue. Any ideas on what is happening here?
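
    Since an nfsd restart clears it, it may be worth capturing a few low-risk data points while the load is high; a sketch (the thread-count file location is Debian's):

        # is the time going to the nfsd threads, and on which NFS operations?
        top -H -b -n1 | head -30
        nfsstat -s
        # how busy are the nfsd threads? (see the "th" line)
        cat /proc/net/rpc/nfsd
        # Debian: raise the nfsd thread count if all threads are pegged
        # (edit /etc/default/nfs-kernel-server, then restart the service)
        RPCNFSDCOUNT=16

    If the th counters show all threads busy whenever the load spikes, more threads - or finding the one chatty client with nfsstat and iftop - is the usual next step.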

  • Powershell Copy-Item fails silently

    - by R W
    I have a PowerShell 2.0 script running on Windows Server 2008 R2 64-bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhds to copy, then iterates over that list, copying them with Copy-Item and writing some logging info to a file as well. The files are copied to another server (Windows Server 2003 SP2), into a directory compressed with NTFS compression.

    One of the files isn't copied. It's relatively big, ~68 GB; the others are 20 GB or less. The weird thing is that during the copy process the file appears on the destination server, and the log file seems to indicate the file was copied, judging by the difference in the times of the log entries. I see no error messages in the log file and nothing in the event log of either machine. Here's the code that does the copy:

        Get-ChildItem $VMSource *.vhd -Recurse | ForEach-Object {
            $time = Get-Date -Format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) started"
            $fullname = $_.FullName
            Add-Content $logFileName "$time : Copying $fullname to $VMDestination"

            Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors
            foreach ($error in $errors) {
                if ($error.Exception -ne $null) {
                    Add-Content $logFileName "`tERROR COPYING FILE : $($error.Exception)"
                }
            }

            $time = Get-Date -Format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) finished"
        }

    I can only think there's some problem with copying a file that big to a compressed directory, maybe? Any ideas?
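
    Your instinct about the compressed directory is plausible: very large files on NTFS-compressed volumes are a known trouble spot (compression fragments the file heavily, and writes can eventually fail with a "file system limitation" error), and -ErrorAction SilentlyContinue means a mid-copy failure can still leave a partial file at the destination. As a cross-check, robocopy reports per-file failures loudly and can retry; a sketch using the same variables as your script, with a hypothetical log path:

        robocopy $VMSource $VMDestination *.vhd /Z /R:2 /W:10 /NP /LOG+:C:\logs\vhd-copy.log

    If the 68 GB file also fails under robocopy, try copying it to an uncompressed folder on the same 2003 box; if that succeeds, NTFS compression is your culprit.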

  • CIFS mounted drive setting "sticky bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through cifs that I simply cannot change the permissions on once mounted. Here is a breakdown of what's going on:

    - All files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine.
    - I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all use the same fstab options and the same credentials:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

    - If I run a chmod command - and I've tried a million different ways - it reports successfully changing the permissions. However, it doesn't.
    - The issue began after I upgraded from 8.04 to 8.10.

    Any idea why this may be happening on one machine? Since it started after an upgrade, I'm not sure what the best thing to do is. Any help you could give would be great! None of my automated backup scripts are working because of this!
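
    A guess worth testing: that -rwxrwSrwx pattern looks like the CIFS Unix extensions negotiating badly between 8.10's cifs module and the server. Disabling the extensions and pinning the modes client-side would look something like this (same fstab line with extra options; the mode values are examples):

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino,nounix,file_mode=0660,dir_mode=0770 0 0

    With nounix set, chmod against the mount becomes a no-op by design (the modes come from file_mode/dir_mode), but at least the permissions are predictable and the backup scripts can run.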

  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website has started timing out like crazy today. All of our clients are finding it unusable. The only error we can seem to trace down as a potential problem is this ASP error, from the Microsoft OLE DB Provider for ODBC Drivers:

        SQLAllocHandle on SQL_HANDLE_DBC failed

    I have no idea what it means or how to go about fixing it. Has anyone ever encountered this error before? Currently you can log in to our site, but once you go to do anything else, you find yourself logged out, or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably has to do with the Ajax pages not loading properly due to the logouts, so nothing displays to the user. Like I said, I'm at a loss. Anyone have any advice on this error?

    EDIT: I realize that this isn't necessarily a programming question, but we are a small startup company that just yesterday started talking about how we need to get a backup server running. Apparently we talked about it too late. We don't have a DBA, just two mid-level programmers trying their hardest to keep our clients happy. So please, if you have any assistance, give it, but please don't close my question right now.

    EDIT 2: Turns out we had something running on our server called "ServerMask" that makes our IIS server look like Apache to the outside world. Shutting it down fixed our issue. Still no idea why it was messing up, but it was the problem, apparently. Thanks to everyone who tried to help.

  • When pointing to new DNS servers is there any chance of E-mails being lost if the old E-mail hosting service is still up?

    - by LaserBeak
    I am changing webhosts and will be using the new host's mail servers instead of the old ones. I have created all the correctly named mailboxes on the new service, but I have not yet cut ties with the old webhost. My expectation is this: even if the new DNS values - which point to the new host's DNS servers and their SOA/zone file with the new MX records - have not yet propagated, and an e-mail is directed at the old host's mail servers as per the MX records in the zone the old hosting provider holds, the e-mail will still arrive in the mailbox on the old provider's mail servers. So I am just trying to confirm that I have this right, and that it's essentially impossible for me to lose an e-mail, since it will hit either the old host's mail servers or the new ones.

    Also, is it possible to configure the same e-mail account to check and collect mail from different mail servers by entering multiple POP3 addresses? And if I choose to keep the old webhost's mail hosting as a backup, by specifying its MX records with a lower priority in the zone hosted by the new webhost, is it possible to have incoming e-mails sent to both servers by the mail daemon, so that I have two copies? Or is my only option having the primary mail server somehow forward the e-mail to the old mail server?
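
    On the backup-MX idea, one clarification plus a sketch (hostnames made up): MX preference values rank servers, and a sending mail server delivers one copy to the most-preferred server that answers - it never delivers to both. So a lower-priority MX gives you a fallback, not a second copy; duplicating mail has to happen on the receiving side (forwarding rules, or polling both mailboxes over POP3, which most mail clients support as two separate accounts).

        example.com.    IN  MX  10  mail.newhost.example.    ; preferred (lower value wins)
        example.com.    IN  MX  20  mail.oldhost.example.    ; fallback, used only on failure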

  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will fill up as much space as you let it. So if I let it loose in my 8 TB zpool, it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much rather put everything on my server.
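
    Assuming the folder can be (or already is) its own ZFS filesystem rather than a plain directory, this is exactly what ZFS quotas are for; a sketch, with "tank" standing in for your pool name:

        # make the Time Machine target its own filesystem, then cap it
        zfs create tank/timemachine
        zfs set quota=1T tank/timemachine
        zfs get quota,used,available tank/timemachine    # verify

    If you snapshot that filesystem, note that quota counts snapshot space too; setting refquota=1T instead limits only the live data.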

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    The task is to move 10-15 websites running Linux to new servers hosted by Amazon. These sites currently run on dedicated servers. Some sites run WordPress, some have a custom CMS, and others might have RoR applications. Unfortunately, there is sparse documentation regarding each site and how its services and files depend on each other, which means there is a lot of detective work to be done. My goal is to properly document each site - what makes it work, and so on - so future admins have at least something to work with.

    Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files - db connections, Apache configs, etc. - then create a nice spreadsheet with these findings and migrate everything out to the new server.

    My question to Server Fault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.
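
    For the detective-work pass, a crude grep sweep over each downloaded site catches most of the usual suspects; the paths and patterns here are examples to adapt:

        # find files that look like DB/app configuration
        grep -rlE 'mysql_connect|DB_PASSWORD|database\.yml|settings\.php' /backups/site1/
        # WordPress keeps its credentials and table prefix in wp-config.php
        grep -E 'DB_(NAME|USER|PASSWORD|HOST)|table_prefix' /backups/site1/wp-config.php
        # and the server-side config that won't be in the docroot
        ls /etc/apache2/sites-enabled/ /etc/cron.d/ 2>/dev/null

    Cron jobs, Apache vhosts, and odds and ends in /etc tend to be exactly what undocumented sites quietly depend on, so they're worth a column in the spreadsheet.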

  • How have I locked myself out of my Ubuntu VPS?

    - by Sanoj
    I have an Ubuntu Server VPS (OpenVZ), and yesterday I installed php-fpm, but I guess something went wrong with the installation, because since then I cannot log in to my server over SSH with PuTTY or using WinSCP. The message I get when connecting is "Network error: Connection timed out". Immediately after the installation I was not able to use emacs either; I had to re-install it with apt-get install emacs. I have tried clearing the firewall and rebooting the server from my web-based control panel, but it doesn't help.

    The commands I used to install PHP-FPM came from "Installing PHP 5.3, Nginx And PHP-fpm On Ubuntu/Debian", and I guess it has something to do with these:

        cd /tmp
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkrb53_1.6.dfsg.4~beta1-5ubuntu2_i386.deb
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/i/icu/libicu38_3.8-6ubuntu0.2_i386.deb
        sudo dpkg -i *.deb
        sudo echo "deb http://php53.dotdeb.org stable all" >> /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install php5-cli php5-common php5-suhosin
        sudo apt-get install php5-fpm php5-cgi

    The websites hosted on my server work fine. Has anyone had the same experience, or does anyone know how this could happen? I guess I'll have to re-install Ubuntu Server from my control panel now, but I would like to avoid this situation in the future. Finally, I have backups of everything, so nothing is lost if I have to re-install the machine.
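
    Before reinstalling, it may be worth poking at the box from whatever console/serial access the VPS control panel offers (i.e. not over SSH); a quick sketch of checks:

        # is sshd even running and listening?
        netstat -tlnp | grep ':22'
        /etc/init.d/ssh status
        # did anything leave a firewall rule behind?
        iptables -L -n
        # did the forced downgrade of libkrb53 break sshd's dependencies?
        ldd /usr/sbin/sshd | grep 'not found'

    Force-installing that older libkrb53 with dpkg -i is a plausible culprit: OpenSSH on Ubuntu links against the Kerberos libraries, so a mismatched downgrade can kill sshd while leaving the web stack untouched.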

  • Recovering Windows Vista on an Acer

    - by Charlie Pigarelli
    My computer's history is very long, even though the machine is only 4 years old. A year ago I installed Windows 7 on this Acer M1610, which had Vista before; my technician left me two recovery discs for the Acer's Vista install before updating it to Windows 7. Then the computer had some trouble: the graphics card broke, and we decided to use another computer.

    Yesterday I had the great idea of fusing the two computers into one better one, so I moved the graphics card from the second computer into the Acer, and everything went well. But then the speed trouble it had before came back, so I decided to reinstall the very first Windows: Vista, back again. I booted the computer with those two recovery DVD-Rs my technician left me, and at the end of the process it asked me to insert "the backup cd number one or the system disk". I found two original Acer "Blank Recovery Disc" DVD-Rs and tried with those: rejected. I tried with empty DVD+/-Rs: rejected. I tried with CDs: rejected. I don't have any system disc with me, except for those two DVD-Rs from my technician.

    What am I supposed to do now? I even tried the fabled Alt+F9/F10 that should start the recovery without any disc, but nothing happened.

    PS: the installation cannot complete if I do not insert the right disc. (The recovery discs use Acer eRecovery Management as the recovery software.)

  • Will restoring a cPanel account delete irrelevant MySQL databases?

    - by user54625
    Long story short: I have many accounts in WHM, many domains. However, most of the MySQL databases for all the sites were created under one user, "admin" (let's say the account is admin.com). I recently moved the admin.com account to another web server, and now I need to move it back. I would like to restore this account from a backup made by that other web server. However, that backup has only 3 databases attached to the domain, while the server I'm moving it back to has those 3 databases (old copies that need to be replaced) PLUS all the other databases used by the other accounts on the server.

    My question: if I restore the admin.com account, will it delete all the other databases on the account, or will it only replace the 3 databases present in the backup? I can't take any chances of deleting all those other DBs. If it WOULD delete them, how would I go about restoring this account without losing the other data?

    An easy way to resolve this situation would be to move each database to its proper owning account, but alas, there doesn't seem to be a way to do that via WHM.
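
    Whatever the restore behaviour turns out to be, you can at least see exactly which databases the backup would touch before running it: cPanel account archives keep the SQL dumps under a mysql/ directory inside the tarball. A sketch, with a hypothetical archive name:

        # list only the database dumps contained in the account backup
        tar -tzf cpmove-admin.tar.gz | grep -E 'mysql/.*\.sql$'

    If only the 3 expected .sql files show up, those are the databases the restore will write; taking your own mysqldump of everything else first is cheap insurance either way.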

  • VMware and Windows Activation

    - by Peter M
    Yesterday I installed SlySoft's Virtual CloneDrive on my host system (XP Pro SP3) in order to mount an ISO for a software installation. This morning I fired up VMware and made a linked clone of an existing XP VM in order to do some software testing. This is the sort of thing I do all the time, and the base XP VM that I clone was activated a long, long time ago. The surprise today was that the newly cloned VM was no longer activated, with XP citing major changes in hardware as the reason. I repeated the test with a full clone of the base system and got the same message. I then started up my base VM, and it seemed to be activated - yet another VM (which I fully cloned from the base VM a long time ago) now started reporting that XP was not activated.

    At this point I guessed that Virtual CloneDrive might be the source of my hardware differences, so I uninstalled it and rebooted. After that I made a new linked clone and a new full clone of the base VM, and XP did not complain about not being activated. So I seem to be back where I was before installing Virtual CloneDrive, with the exception that the one particular XP VM continues to complain about activation (even though 4 or 5 other XP VMs are fat and happy).

    Now to my questions:

    1) Why would adding Virtual CloneDrive to the host system affect XP activation in the VMs? From their point of view I would have thought the environment had not changed, since I had not enabled any new hard drives in their systems. Or is adding a hard drive to the host system enough to upset XP activation?

    2) Since this event, one of my fully cloned VMs is still reporting that XP is not activated, even though I have removed Virtual CloneDrive. Is there any way to convince XP that it is on the same system as yesterday? Or are my only options to reactivate, or to restore the VM from a previous backup?
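
    For what it's worth, XP can report its activation state from inside each VM, which makes it easier to see which clones were actually invalidated; a sketch (this is the stock XP activation wizard):

        REM reports "Windows is already activated" if the hardware hash still matches
        %systemroot%\system32\oobe\msoobe.exe /a

    XP activation hashes roughly ten hardware properties, and optical/removable drives are among the inputs, so a new virtual drive becoming visible to a guest (or a changed disk serial in a clone) can plausibly tip the "substantial hardware change" threshold on its own.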

  • Programmer configuring a new network

    - by David Lively
    I'm in the process of expanding my home network from a couple of laptops on a wireless Verizon FiOS router to include:

    - a Linksys 24-port switch
    - a Cisco PIX 515
    - a Cisco 3640 router
    - one new development desktop, and three new machines to act as a DB server, a web server, and a backup system.

    My company is moving offices, and we've decommissioned some older hardware, which I was able to pick up for the cost of the labor to move it home from the office. The benefits of working with dedicated web and DB servers are very valuable to me. I know very little about network topology, other than that everything plugs into the switch, which then plugs into the cheap Verizon router. (Verizon provides a coax connection that the router must translate into Ethernet before I can use it with any of this equipment.)

    Questions:

    - What is the recommended topology for this equipment? Verizon router - PIX - 3640 - switch? Is the 3640 even necessary or desirable?
    - The Verizon router has one WAN port and 4 client ports, all 10/100. Is there any performance benefit at all to wiring multiple connections from the Verizon router to the switch, assuming I don't use the PIX?
    - Should I use the PIX? Software firewalls are a pain, and they seem silly if I have a device like this lying around.
    - Anything else I should know? Am I wasting my time with this?

    I also obtained a 7-foot rack, shelves, patch panels, a UPS, etc., which are all going into a conveniently air-conditioned closet.
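
    If the PIX does go between the Verizon router and the switch, the 6.x-era bootstrap is only a handful of lines; a sketch with placeholder addressing (the 3640 adds little here unless you need routing between multiple internal subnets or WAN links):

        ! PIX 6.3-style syntax; adjust interface names and addresses to taste
        nameif ethernet0 outside security0
        nameif ethernet1 inside security100
        ip address outside 192.168.1.2 255.255.255.0   ! faces the Verizon router
        ip address inside 10.0.0.1 255.255.255.0       ! faces the Linksys switch
        route outside 0.0.0.0 0.0.0.0 192.168.1.1 1
        ! PIX 6.x passes no inside-to-outside traffic without a NAT rule
        nat (inside) 1 0.0.0.0 0.0.0.0
        global (outside) 1 interface

    The trade-off: the PIX stays double-NATed behind the Verizon box (FiOS TV/MoCA features often require keeping that router), which is usually fine for a home lab.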

  • How can adding a server to a domain cause Remote Desktop to stop working?

    - by Adrian Grigore
    I have two dedicated Windows 2008 R2 servers which I am using for web hosting. Server A is a domain controller; Server B should simply be added to the domain controlled by Server A. So I RDP'd into Server B and changed the system settings so that Server B is part of that domain. I entered my domain admin credentials, was welcomed to the domain, and was asked to reboot the server. So far everything seemed to work smoothly.

    After rebooting, I could not open an RDP connection to Server B anymore:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network
        Make sure the remote computer is turned on and connected to the network, and that remote access is enabled.

    I restored an older backup of Server B and switched off the firewall before adding the server to my domain, but the problem recurred just the same. What could be the reason for this? The domain is brand new and I did not change any of the default settings. Could this be some kind of domain-wide default policy that shuts down RDP on any domain client? Or perhaps it has to do with the fact that Server B is virtual? Thanks for your help, Adrian
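
    One mechanism that fits these symptoms: after the join, Server B's network location flips to the Domain firewall profile, and that profile may not carry the Remote Desktop exception (or the firewall service re-enables itself for the new profile even though you switched it off beforehand). If you have any out-of-band access - a provider console, or PSExec from Server A - this is quick to check and undo; a sketch:

        REM is RDP disabled at the system level?
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections
        REM re-allow RDP through all firewall profiles
        netsh advfirewall firewall set rule group="remote desktop" new enable=Yes
        REM or, temporarily and for testing only:
        netsh advfirewall set allprofiles state off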

  • Windows 2000 Server: user can't log on

    - by Mike I
    I hope you can help me. I recently upgraded a workstation at my office (to a whole new machine) and ran into a pretty serious problem. Until 5:00 PM on Friday, I could access my mail on the Exchange 2000 server. When I shut the old workstation down and put in the new workstation, I tried to set up an account: I put the server name in the appropriate field, typed my username, and hit Check Names - and my username does not come up.

    So, to troubleshoot (it is also an SMB server), I tried to log on to my file share; my local credentials are the same as the server credentials for the user account. When I try to open the share, I just get the username/password prompt (I had never gotten that before, since the credentials are the same). Continuing to troubleshoot, I tried to log on to my account from another workstation; it still can't authenticate. Every other user can authenticate and load their shares and mailboxes.

    I have restored Exchange from the backup as of 3 days ago (Thursday), but the exact same issue is still there. I really do not understand what is wrong or what else I can do to troubleshoot. If anyone has some pointers for me, I will surely accept them. Thanks, Mike

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the VSS2SVN script. I've tested the generated dump file before, and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprits). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions, and continue loading the repository. It worked, but it took a lot of manual effort: start the load, hit the bad revision, split the dump file, continue loading the repo, and so on.

    This dump file is pretty large (~5 GB) and takes several hours to load. I think I know the answer to this, but: is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, back up, etc. the dump file and don't need any of that. I don't care about recovering the corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.
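
    As far as I know, svnadmin load in this era of Subversion has no skip-bad-revisions flag, but the split-and-continue dance can be automated up front. A rough, untested sketch: carve the dump into per-revision chunks, then load them one at a time, so a corrupt revision only aborts its own chunk (note that skipped revisions shift later revision numbers, which can break copyfrom references that point at them):

        # the dump header (format line + UUID) must prefix every chunk
        head -n 4 vss.dump > header.dmp
        # split at each revision record; -z drops empty pieces (GNU csplit)
        csplit -z -f rev- -b '%05d.dmp' vss.dump '/^Revision-number: /' '{*}'
        rm rev-00000.dmp   # this first piece is just the header again
        for f in rev-*.dmp; do
            cat header.dmp "$f" | svnadmin load /path/to/repo \
                || echo "$f failed, skipped" >> skipped.log
        done

    At the end, skipped.log tells you exactly which revisions were dropped, instead of you finding out one at a time over several hours.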
