Search Results

Search found 6523 results on 261 pages for 'route planning'.

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment, but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT personnel, and going around to manually set up Outlook and copy over PST contents is not an option I'm looking for. Some users have set Outlook to keep messages on the POP3 server for X number of days, others have not. Using a POP3 connector to transfer the mail over is not a viable option. Here is what I've done so far:
    - Created a transform for the Office 2003 administrative installation point
    - Created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encryption hotfix described in MSKB 2006508)
    - Tested both the transform and the PRF; both work
    - Created a test OU and GPO containing the Office 2003 installation with the transform applied; this also works
    My big question is: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF have been applied? Is this possible?
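    (A possible deployment route, sketched on the assumption that a logon script is acceptable: Outlook 2003 can be told to apply the PRF at startup with its /importprf switch. The path and share below are purely illustrative, and note that this only creates or changes the Exchange profile - moving the old PST contents into the mailbox still needs a separate import step.)

        REM hypothetical logon-script line - adjust paths to your environment
        "C:\Program Files\Microsoft Office\OFFICE11\OUTLOOK.EXE" /importprf \\fileserver\deploy\exchange.prf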

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine which is now acting as the master. I used the following procedure to create it:

        mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql

    I then imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (different location), so I would like to make sure that there is 100% data consistency between the servers. I've found this article, which says:

        The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database.

    Thank you
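    (For reference, a sketch of how a consistent all-database dump is commonly taken when the tables are InnoDB - the --single-transaction flag gives one snapshot across every schema without stopping the master; MyISAM tables would not be covered by it:)

        mysqldump --opt -Q --single-transaction --master-data=2 --all-databases > dump.sql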

  • When a server IP changes, do existing TCP (e.g. HTTP/MySQL) connections keep running?

    - by Luke Cousins
    We have some PHP-FPM servers, and when they need a database connection they connect to an HAProxy server, which selects a database server for them and the connection opens. When we then want to perform some maintenance on the HAProxy servers (such as config changes requiring an HAProxy restart), the process is as follows:
    1. Shut down Keepalived on the HAProxy server
    2. Wait for the floating IP to transfer to the backup HAProxy server (also running Keepalived)
    3. Wait until HAProxy stats reports just one connection (us checking how many connections there are)
    4. Restart HAProxy
    5. Restart Keepalived
    As step 2 occurs, what will happen to the open MySQL connections at that point? According to this TCP Sessions and IP Changes question, the connections will be dropped. Is this really the case? If so, what, if anything, can be done to prevent this from happening? Can the connection somehow be forced to use the main (non-floating) IP of the server? We also have a similar setup with two Nginx servers running Keepalived, and we were planning on doing the equivalent process there. If we do, the same question applies: what happens to the existing HTTP connections when the IP moves to the other server? I appreciate your help.
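    (One way to see what would be affected, sketched with a placeholder address: on the active HAProxy box, list the established connections terminating on the floating IP before failing it over, e.g. with ss. The 10.0.0.100 below is an assumption - substitute your floating address:)

        ss -tn state established dst 10.0.0.100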

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. These are the steps I was planning to perform:
    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters through IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.
    I chose Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup and restore app like Veeam Backup & Replication. Sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow that it registered only 1KB/s as opposed to the normal 1MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts becoming disconnected. I tried troubleshooting by referring to this - VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences. The moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to the vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.
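    (Once both DCs are back up in the new datacentre, replication health can be verified before anything else is changed - these are the standard AD diagnostic tools, run on either DC:)

        repadmin /replsummary
        repadmin /showrepl
        dcdiag /v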

  • Developing a high-performance and scalable Zend Framework website [on hold]

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, or maybe throw Backbone.js in there, plus a CDN for static content, a server for images and a scalable server for the requests and the rest. My questions are:
    - What do you think about my approach?
    - What solutions would you recommend for developing a high-performance, scalable application expected to have a lot of traffic using PHP (Zend Framework 2)? I would be interested in your approach.
    I should note that I'm a Zend developer; I've been working with Zend for over 3 years, which is why I'm choosing it.

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD, and I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine, which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task at hand. Sometimes I have to develop with Windows tools, but generally Linux is my preferred development environment. It would be ideal if I could run the other OS in a VM, so that either could run inside either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk-sharing requirements. However, it would be useful for the sandbox to be able to "mount" the codebase from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this all to work smoothly with all four environments running at once, without losing the SSD performance?
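    (A minimal sketch of the NFS route for the sandbox case, assuming the natively running OS is Linux, the code partition is mounted at /code and the VM reaches the host over a host-only network - the paths and subnet are placeholders:)

        # /etc/exports on the host
        /code  192.168.56.0/24(rw,sync,no_subtree_check)

        # apply the export on the host
        sudo exportfs -ra

        # inside the sandbox VM
        sudo mount -t nfs 192.168.56.1:/code /mnt/code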

  • Which revision control system for single user

    - by G. Bach
    I'm looking to set up a revision control system with me as the single user. I'd like to have access (read and write) protected using SSL, little overhead, and preferably a simple setup. I'm looking to do this on my own server, so I don't want the option of registering with some professional provider of such a service (I like having direct control over my data; also, I'd like to know how to set up something like that). As far as I'm aware, the kind of project I want to put under revision control doesn't really matter, but just for completeness' sake, I'm planning on using this for Java projects, some HTML/CSS/PHP stuff, and in the future possibly as a synchronizing tool for small databases (ignore that latter one if it doesn't fit the paradigm of revision control). My questions primarily arise from the fact that I've only ever used Subversion from Eclipse, so I don't have thorough knowledge of what's out there, what fits better for which needs, etc. So far I've heard of Subversion, Git and Mercurial, but I'm open to any system that's widely used and well supported. My server is running Ubuntu 11.10. Which system should I choose, what are the advantages of the respective systems, and if you know of any particularly useful ones, are there tutorials on setting up the system I should choose that you could recommend?
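    (If SSH-encrypted access is acceptable in place of SSL, a single-user Git setup needs nothing beyond sshd and Git itself - a sketch, with illustrative paths and hostnames:)

        # on the Ubuntu server
        mkdir -p /srv/git/myproject.git
        git init --bare /srv/git/myproject.git

        # on the client
        git clone ssh://user@myserver.example.com/srv/git/myproject.git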

  • Does using a hexacore CPU make sense?

    - by Exa
    I'm currently planning to upgrade my computer system and I want to exchange the CPU, board and RAM. I've already had a look at some hexacore CPUs from AMD and would like to know if it makes any sense to use such a CPU with six cores. Is there any software which really uses six cores? Especially in gaming? I'm using this PC mostly for gaming and from time to time for development. I know that on the dual-core system (2 x 3GHz) I currently use, Visual Studio creates two instances of the compiler, one for each core. Would there be six instances of the compiler on a hexacore system for super-fast compiling? Is there any software that uses six cores? Would running two applications make use of more cores? (For example, two cores for a game you're playing while two other cores are used for compiling at the same time.) I hope someone can point out the benefits of a hexacore system. The OS would be Windows 7 64-bit and I use the PC for gaming most of the time. (Crysis 2, CoD, stuff like that)
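    (Whether a build actually uses all six cores depends on the tool being asked to parallelise - both of the following explicitly request six parallel jobs; the solution name is just a placeholder:)

        make -j6                      (GNU make: run up to six compile jobs at once)
        msbuild MySolution.sln /m:6   (MSBuild / Visual Studio solutions: build up to six projects in parallel)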

  • How do I do a mail merge that includes images?

    - by Ian Ringrose
    I am trying to find out the practicalities of doing a mail merge when each “record” to be merged on includes some images. I need to print letters and envelopes. Both the letters and the envelopes have:
    - Fixed text
    - Fixed images
    - Text that comes from the mail merge record
    - Images that come from the mail merge record
    I don’t know if all images will be the same size for every record, so a bit of simple “on the fly” automatic formatting may be needed. I need to be able to repeat a single item if I get a problem (e.g. when folding the letter). What problems am I likely to have? Is Word 2007 up to this sort of mail merging, or should I be looking at a report-writing tool? How do I restart a print run after a printer jam etc.? What format should I store the “records” and their images in? E.g. can standard software cope with images that are stored in separate files named after the “CustomerId” that is in the “record”? (I can write custom software if needed, but would rather use standard “off-the-shelf” software for the printing; I am planning on custom software for the data creation, so I can output in whatever format is needed.)
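    (Word can pull a different image per record with a nested field code - a sketch assuming the data source has a column named ImagePath holding the full path to each record's image file; press Ctrl+F9 to insert each pair of field braces, and select all and press F9 after merging to refresh the pictures:)

        { INCLUDEPICTURE "{ MERGEFIELD ImagePath }" \d }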

  • G4 server running slow

    - by Abby Kach
    I have HP ProLiant ML 350 servers. We have 8 remote locations where users connect and log on to our server through DynDNS to access our company ERPs and conduct day-to-day work. The base of our company ERPs is Oracle, for which we have a separate server. Now the problem is that day by day the load on the server is increasing, the speed is getting slower and slower, and users are facing a lot of issues, so I am planning to implement a SonicWall VPN. I conducted a demo of SonicWall, but it was slower than the current speed over DynDNS. The configuration of my servers is as follows:
    - Linux: HP ProLiant 370, Intel Xeon 3.20 GHz, 150 GB (72 x 2), 3 GB RAM, SUSE
    - Omega: HP ProLiant 370, Intel Xeon 3.20 GHz, 300 GB (72.8 x 4) RAID 5, 4 GB RAM, Windows Server 2K3 Enterprise Edition
    - Storage Box: HP StorageWorks 1400, Intel Xeon 2.00 GHz, 4 TB (1 TB x 4) RAID 5, 2 GB RAM, Windows Server 2K8 Enterprise Edition
    - Domain & Terminal: HP ProLiant 350, Intel Xeon 3.20 GHz, 250 GB (72.8 x 3) RAID 5, 4 GB RAM, Windows Server 2K3 Enterprise Edition
    Can someone help me with how I can speed up my network at the remote locations and reduce the problems with speed etc.?

  • Sharing music on NAS with Zune and iPod?

    - by osij2is
    After being a long-time iPod owner, I'm switching to the new Zune with its subscription model. I haven't bought a Zune yet, but I'm planning on doing so within the next month or so. I have approximately 40GB worth of music and my girlfriend's iPod music library is around 30GB. I've been trying to figure out how to migrate all our music off of our laptops/desktops and centralize everything on my NAS. Sharing iPod music isn't too bad: sharing from one machine to all is fairly easy within the iTunes player. As far as storing all the music on a NAS goes, again, iPods aren't too bad and I imagine other systems aren't difficult. But I'm really new to the Zune and I'm beginning to run into some issues. My questions are: Is it possible to store all music from our iPods and Zune subscriptions and share music between the iPod/Zune within the same file share on my NAS? I'm sure it's possible to store music on a share, but I'm not sure how the iTunes player and the Zune software differ. Is there 3rd-party software, maybe something like DoubleTwist, that can sync from the NAS to multiple desktops/laptops? I've never used DoubleTwist, but it's something I found that looks close to being what I need. I've never quite done this myself, so I'm trying to find a solution that can: a) store music on a network share; b) sync between different devices (Zune/iPod) seamlessly.

  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this - I've recently managed to launch my website on EC2. As a next step, I want to learn how to scale the website. I have a general idea, but wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server which allows users to record messages and is also used for playing them back. Currently this is the architecture I'm planning to set up for initial scaling, deploying four small EC2 instances for the following purposes:
    - Instance-1: runs the MySQL database
    - Instance-2: runs the Red5 server
    - Instance-3 & Instance-4: these 2 instances will be used to deploy the website and will have Apache running on them. They will communicate with the MySQL server on Instance-1 and the Red5 server on Instance-2 using the internal IP addresses. As and when required, I will launch another instance of the same kind.
    - EBS: I will have an EBS volume of, say, 50 GB where all the MySQL data will be stored. Red5 will also use this EBS volume to store the video messages.
    - Load balancer: use the load balancer provided by Amazon to balance Instance-3 and Instance-4.
    This is what I have in mind. I could be way off, so please bear with me. Also, I have not taken into account the case of scaling the MySQL server, as I currently have no idea how that will be done and whether or not it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks

  • How to limit reverse SSH tunnelling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command from their web server at port 80 to our public server. The destination port (at the client side) of the reverse SSH tunnel is 80, and the source port (at the public server side) depends on the user. We are planning on maintaining a map of port addresses for each user. For example, client A would tunnel their web server at port 80 to our port 8000; client B from 80 to 8001; client C from 80 to 8002.
    Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
    Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
    Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver
    Basically, what we are trying to do is bind each user to a port and not allow them to tunnel to any other ports. If we were using the forward tunnelling feature of SSH with ssh -L, we could control which port may be tunnelled by using the permitopen=host:port configuration. However, there is no equivalent for reverse SSH tunnels. Is there a way of restricting reverse tunnelling ports per user?
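    (Older OpenSSH indeed has no reverse counterpart of permitopen, but OpenSSH 7.8 and later added one - a sketch of both forms, with placeholder key material and usernames:)

        # per-key, in the user's authorized_keys on the public server
        restrict,port-forwarding,permitlisten="8000" ssh-rsa AAAA... clienta

        # or per-user, in sshd_config
        Match User clienta
            AllowTcpForwarding remote
            PermitListen 8000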

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer. As a test, I have the app running successfully on Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing at this stage is at a very low scale. My question is this: has anyone had real-world experience running a high-scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost seem to be lower than running lots of EC2 instances? My assumption would be that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated calibration of 10,000 CCs as a single 1.2 GHz CPU seems on average to be much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

  • Upgrade to Q9550 or i7 920 on a budget?

    - by evan
    I'm planning to upgrade my computer and am torn between maxing out the system I have and investing in the X58 architecture. I'm currently using an E6600 Core 2 Duo with 4GB of RAM (800MHz) on an Asus PK5-E motherboard which I built two years ago. My original plan was that one day I'd upgrade the machine to 8GB (1066MHz, the max the PK5-E allows) and to the Core 2 Quad Q9550 to give the machine a good four years of life. However, that was before the i7 came out. I use my computer mainly for software development, which I do inside virtual machines, and the i7 seems ideal for that because it is no longer limited by the speed of the FSB. And when I looked into it, getting 8GB of DDR3 RAM isn't much more expensive than 8GB of DDR2, and the i7 920 is comparable in price to the Q9550, which doesn't make much sense to me. So the question is: is it worth swapping the motherboard out for around $250 and upgrading all three components, or should I use that money on an SSD or 10,000rpm drive for the existing system's OS/apps/virtual machine drive? Or just put the $250 towards a completely new machine in a year or two? Would the i7 really give that much of a boost compared to the Q9550 for what I'd be using it for? Thanks in advance for your input!

  • SFTP access without hassle

    - by enobayram
    I'm trying to provide access to a local folder for someone over the internet. After googling around a bit, I've come to the conclusion that SFTP is the safest thing to expose through the firewall to the chaotic and evil world of the Internet, and I'm planning to use openssh-server to this end. Even though I trust that OpenSSH will stop a random attacker, I'm not so sure about the security of my computer once someone is connected through SSH. In particular, even if I don't give that person's user account any privileges whatsoever, he might just be able to "su" to, say, "nobody". And since I was never worried about such things before, I might have given some moderate privileges to nobody at some point (not sudo rights, surely!). I would of course value your comments about giving privileges to nobody in the first place, but that's not the point, really. My aim is to give SFTP access to someone in such a sandboxed state that I shouldn't need to worry about such things (at least not more than I should have done before). Is this really possible? Am I speaking nonsense or worrying in vain?
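    (The stock OpenSSH way to get exactly that sandbox is a chrooted, SFTP-only account - a sketch for sshd_config, assuming a dedicated group called sftponly and a chroot tree under /srv/sftp; the chroot directory itself must be owned by root and not writable by the user, with a writable subdirectory inside for the actual files:)

        Subsystem sftp internal-sftp

        Match Group sftponly
            ChrootDirectory /srv/sftp/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no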

  • Windows 7 breaks even in safe mode

    - by delenda
    Hi, I have a Dell XPS M1730 with Windows 7 installed. I noticed last night that after a few hours of use, the fans kicked into full speed and I couldn't do anything without it taking forever. Minimising windows, opening Device Manager or even opening Process Explorer took minutes, and a game install I had just started took nearly 4 hours to complete. When procexp finally loaded, the refresh was so slow that it was mostly useless. From what I could gather, it was reporting 60% idle processes with procexp using nearly 40%. There were no hardware interrupts listed. When I rebooted, the problem went away for about 10 minutes and then the same thing happened. The issue persists in safe mode, and it still happens even after I removed the graphics drivers, which have been an issue in the past. Icons flash quite quickly on the desktop periodically and screen refresh is painfully slow. When booting now, the fans kick into full as soon as the Windows logon box comes up and it takes 10 minutes to bring the desktop up. Chkdsk reports nothing and the RAID check says that everything is fine. I'm thinking hardware failure, probably the HDD, but wanted some other opinions. I'm planning to try a Linux live CD to see if it works without using the hard disks. If anyone has any input, it would be greatly appreciated. Delenda
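    (From the planned Linux live CD, the disk's own health data can be read before anything else is reinstalled - this assumes smartmontools is on the live image and the suspect drive shows up as /dev/sda:)

        sudo smartctl -a /dev/sda          # full SMART attribute and error report
        sudo smartctl -t short /dev/sda    # start a short self-test; re-run -a afterwards for the result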

  • Linux as a router for public networks

    - by nixnotwin
    My ISP gave me a /30 network. Later, when I wanted more public IPs, I requested a /29 network. I was told to keep using my earlier /30 network on the interface facing the ISP, and the newly given /29 network should be used on the other interface, which connects to my NAT router and servers. This is what I got from the ISP:
    WAN IP: 179.xxx.4.128/30
    CUSTOMER IP: 179.xxx.4.130
    ISP GATEWAY IP: 179.xxx.4.129
    SUBNET: 255.255.255.252
    LAN IPs: 179.xxx.139.224/29
    GATEWAY IP: 179.xxx.139.225
    SUBNET: 255.255.255.248
    I have an Ubuntu PC which has two interfaces, so I am planning to do the following:
    - eth0 will be given 179.xxx.4.130/30, gateway 179.xxx.4.129
    - eth1 will be given 179.xxx.139.225/29
    And I will have the following in /etc/sysctl.conf:
    net.ipv4.ip_forward=1
    These will be the iptables rules:
    iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    My clients, which have the IPs 179.xxx.139.226/29 and 179.xxx.139.227/29, will be made to use 179.xxx.139.225/29 as their gateway. Will this configuration work for me? Any comments? If it works, what iptables rules can I use to have a bit of security? P.S. Both networks are non-private and there is no NATing.
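    (For "a bit of security", the usual pattern is to make the inbound FORWARD rule stateful instead of a blanket accept - a sketch, keeping eth0 towards the ISP and eth1 towards the /29; the 22/80/443 port list is only an example of services you might choose to expose:)

        # replies to outbound traffic, and anything initiated from the /29 side
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

        # new inbound connections: allow only chosen services, drop the rest
        iptables -A FORWARD -i eth0 -o eth1 -p tcp -m multiport --dports 22,80,443 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -j DROP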

  • iPhone all-day meeting request bug?

    - by RodH257
    We've come across a bit of a weird bug in the office. Our office is closing for a week or so over Christmas, and our admin staff sent a meeting request to everyone in the office to run from 5pm on the 23rd of December until we return to work at 8.30 on the 5th. However, there was some confusion: if you look at the image below, the same meeting request is showing up with different start times for different people. While most people get the 5pm start, for others it shows up as an all-day event! Staff have been planning their activities for the 23rd only to find out that their calendar is wrong and they are required to work. With some investigation, we noticed that all of the people with the incorrect time own iPhones or iPads. So perhaps they accepted the meeting on their phones and it has put the meeting in wrong? There are people with the correct time who have iPhones, but perhaps they are still on iOS 4, or perhaps they didn't accept the meeting request on their phone. Has anyone else come across this error at all? Is there a fix?

  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config for nginx; I just moved to nginx from Apache. Today, I got a lot of calls from a single IP:
    - the IP tried to access login pages with POST and GET methods
    - the IP tried to use nginx as a proxy (GET http:/...)
    - the IP searched the images, js and css folders
    - the IP tried to inject -d url_allow_fopen =1 and something similar
    Most of the calls ended with 404. I got approximately 50 requests from that IP in a second, so I updated my nginx config like this:
    http {
        limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
        ...
        server {
            ...
            location / {
                limit_req zone=app burst=50;
            }
    Will it avoid too many connections per second now? I have updated my fail2ban jail.local to support nginx, but I am confused by nginx-noscript.conf:
    [Definition]
    failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
    ignoreregex =
    I am serving PHP with nginx. I checked Apache's noscript.conf, which has the .php extension in it too. I tested the settings above before restarting fail2ban and got thousands of IPs matched; I removed php and nothing matched. Do I need .php| in nginx-noscript.conf? Does using mod_security and fail2ban together cause any problems? When I was searching today, I came to know that mod_security is available for nginx too, so I am planning to use it as well.
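    (Since PHP is served legitimately here, dropping \.php from the pattern is the usual fix; a sketch of the filter plus a matching jail entry - the log path is an assumption, point it at your real nginx access_log:)

        # /etc/fail2ban/filter.d/nginx-noscript.conf
        [Definition]
        failregex = ^<HOST> -.*GET.*(\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

        # /etc/fail2ban/jail.local
        [nginx-noscript]
        enabled  = true
        filter   = nginx-noscript
        port     = http,https
        logpath  = /var/log/nginx/access.log
        maxretry = 6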

  • Computer slow after installing 32GB RAM

    - by John Gilmore
    I'm currently running very large network simulations for my PhD research, for which I need lots of RAM. I have a Core i7 2600K processor with a Gigabyte GA-Z68AP-D3 motherboard, running Windows 7 Professional 64-bit. I bought the system with 8GB (2x4GB) of DDR3 1600MHz Corsair Vengeance RAM and the system ran like a dream. I'm planning to scale up my simulations, so I removed the 2x4GB RAM and installed 4x8GB DDR3 1600MHz Corsair Vengeance RAM. When I rebooted the system, boot time was much longer than usual (10 minutes just to get to the login screen). After logging in, the whole system was unresponsive. I tried playing some games (BioShock 2), but it was unplayable. I've not had this problem before, and I have an ATI Radeon HD 5850 graphics card, so that's not the problem. The only thing that's changed is the RAM. I've looked through the specifications of Windows, my motherboard and my CPU, and they all state that 32GB of RAM is supported. Does anyone have an idea of what's going on? Any help would be greatly appreciated.

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process" etc. I'm slightly confused by this, since the logs show that at the time the system has lots of free memory (around 26GB in one case) and is not particularly stressed in any other way. After noting a JVM crash with a similar error plus the added query "Out of swap space?", I dug a little deeper. It turns out that someone has configured our zone with a 2GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128GB of RAM as it needs. Our SAs are planning to cap this at 32GB when they get the chance. My current thinking is that whilst there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it's reserving the swap space). Is this thinking right, or is there some other reason that I get memory allocation errors with this large amount of memory free and seemingly undersized swap space?
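    (On Solaris 10 the reservation can be inspected, and more swap added without a reboot - a sketch assuming a ZFS root pool named rpool; the 16g size is only an example:)

        # current swap devices and overall reservation
        swap -l
        swap -s

        # add a second swap volume on a ZFS root
        zfs create -V 16g rpool/swap2
        swap -a /dev/zvol/dsk/rpool/swap2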

  • If I run two monitors from two different graphics cards, can I still have TwinView?

    - by rumtscho
    I am planning to get a second 2560x1440 monitor for home. The trouble is, I only have 1xDVI and 1xVGA on my graphics card (a 250 GT). I don't want to buy a new graphics card until the prices for the 500 series have stabilized, so probably not before summer (or will it happen earlier? I don't remember how it was for other series, and I couldn't find a long-term price history for video cards). The solution I had in mind is to get the 7600 GS from my old PC, which also has 1xDVI and 1xVGA, and run each monitor on a separate card. I have never done that, and I was wondering:
    1. Will I be able to run the monitors in TwinView then, or will I be stuck with separate X sessions?
    2. Are there other disadvantages compared to driving both monitors from a single card?
    (I am using the proprietary driver because I need Compiz.) As an aside, how do I find out whether the DVI port on the old graphics card is dual-link?
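    (For what it's worth, TwinView only spans outputs on a single GPU; with two different cards the NVIDIA proprietary driver normally needs one X screen per card joined with Xinerama, and enabling Xinerama disables the compositing that Compiz relies on. A rough xorg.conf sketch - the identifiers and BusIDs are placeholders, check yours with lspci:)

        Section "ServerLayout"
            Identifier "TwoCards"
            Screen     0 "Screen0"
            Screen     1 "Screen1" RightOf "Screen0"
            Option     "Xinerama" "on"
        EndSection

        Section "Device"
            Identifier "Card0"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "Card1"
            Driver     "nvidia"
            BusID      "PCI:2:0:0"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Card0"
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "Card1"
        EndSection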

  • Disaster Recovery Standby Server

    - by user64300
    Hi, I work for a small business with 25 users and 2 servers. 1 server is the DC, running Windows Server 2003/Exchange 2003. We want a reliable disaster recovery strategy for this server without having to spend a lot of money. We take regular backups, but I have been advised that only an identical server will allow them to be restored easily. I'm trying to come up with a solution that means we don't have to buy two servers at twice the cost every time we upgrade. I'm toying with the idea of upgrading our DC more frequently (say every 3 years) and then using the old server as the recovery server (temporarily - until we can source a replacement server). However, I won't know whether the backups will restore on the old server until I try it! We're planning to upgrade to Server 2008 R2 in the near future, so I'm hoping the backup tools will give me some success in restoring to different hardware (or perhaps I can use Hyper-V if not). So what I am wondering is whether it is a good idea to use old hardware as a disaster recovery strategy (provided we test it regularly, obviously!).

  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run daily via a cron job to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I'm uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may output. I would also like to add some type of fail-safe so that if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running a standard LAMP stack.
    #!/bin/sh
    #################################
    ### Set Vars
    #################################
    THEDATE=`date +%m%d%y%H%M`
    #################################
    ### Create Archives
    #################################
    tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
    gzip /root/backups/files/server_BAK_${THEDATE}.tar
    #################################
    ### Send Data to Remote Server
    #################################
    scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/
    #################################
    ### Remove Data from this Server
    #################################
    rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
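    (A sketch of how logging and a basic fail-safe could be bolted on: abort on the first failing command, capture everything to a dated log, and mail a notice if anything fails - the bash ERR trap, the mail command and the address are assumptions, adjust to what is actually installed:)

        #!/bin/bash
        set -e                                   # stop dead on the first failing command
        THEDATE=`date +%m%d%y%H%M`
        LOG=/root/backups/logs/server_BAK_${THEDATE}.log
        mkdir -p /root/backups/logs

        # everything the script prints (stdout and stderr) goes to the log
        exec > "$LOG" 2>&1

        # on any failure, send a short notice before exiting
        trap 'echo "Backup failed, see $LOG on `hostname`" | mail -s "Backup FAILED" admin@example.com' ERR

        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz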
