Search Results

Search found 147 results on 6 pages for 'offsite'.


  • Windows Server 2008 R2 Virtual Network Setup

    - by jpearl01
    Some background: I'm very much new to networking in general, and virtualization in particular. I'm trying to set up a series of VMs as we are transitioning to a thin client setup. I have been supplied a limited number of static IP addresses. The server is located in an offsite building which houses the network we use to connect to the internet, share folders etc. The setup I've been trying to go for is this: the host OS (Windows Server 2008 R2) is bound to one NIC using one of the static IPs (say Nic1 and IP 10.255.6.61). I've set up another external virtual network attached to another physical NIC, and a virtual private network attached to no NIC. There is one VM running the same OS as the host. This VM is connected to both the external virtual network (and uses another static IP, say Nic2 and IP 10.255.6.62) and also to the virtual private network (I gave it a random static IP, 192.168.88.1, subnet mask 255.255.255.0). This virtual private network is connected to all the other VMs. I'd like to share the internet connection with all the other VMs on the private virtual network, and so I installed the RRAS role on the server connected to Nic2 and selected the option to share the internet over the VPN. I've run through the RRAS wizard a few times, trying different configurations, but none of them seem to be letting the other VMs connect to the 'net. The VMs seem to connect to the virtual private network fine, they are assigned an IP address and everything, but no internet, and no rest of the network either. The other problem is that in general I connect to the VMs with RDP. Will that be possible with a setup like this? i.e. will the VMs show up as computers on the network? If not, what are my other options? Thanks! ~josh

    Read the article

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but not sure it answers my question. If it does, I'm curious why since the answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I seem to understand most everything I need, but I'm still a bit confused on how pools work, and how to configure them for my scenario. I will have 2x 2TB WD Red drives, which I plan on using in a mirrored set up (RAID1). This would be for backup, and I'd also like to do offsite backup to my CrashPlan account from this array. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but looking for options. FreeNAS OS will be running from a 16GB USB Flash drive. Would it be wise to use the 1.5TB as a backup-backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots. Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use since the 2TB array is plenty for me. Does a dataset basically mimic the idea of a partition or a network share? In other words, I would map \SERVER\Share to X: on my laptop? Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures. Would it have to be in its own pool? If I use jails apps, should they go in the backup RAID1, or in another place? Thank you!
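
    For what it's worth, the pool and dataset layout being discussed maps onto a handful of zpool/zfs commands (FreeNAS normally issues these for you through its GUI). Below is a minimal Python sketch, with hypothetical device and dataset names, of a mirrored pool plus separate datasets and a snapshot:

        # Minimal sketch; device names (ada1/ada2) and pool/dataset names are
        # hypothetical, and FreeNAS would normally do all of this via its GUI.
        import subprocess

        def sh(cmd):
            # Run a shell command and fail loudly if it errors.
            subprocess.run(cmd, shell=True, check=True)

        # Mirrored (RAID1-equivalent) pool from the two 2TB WD Reds.
        sh("zpool create tank mirror /dev/ada1 /dev/ada2")

        # Datasets act like named filesystems you can share, quota and snapshot
        # independently -- roughly the "partition or network share" idea above.
        sh("zfs create tank/backup")
        sh("zfs create tank/media")

        # Point-in-time snapshot of the backup dataset.
        sh("zfs snapshot tank/backup@nightly-2014-06-01")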

    Read the article

  • Did my registrar screw up or is this how name server propagation works?

    - by Brad
    So my company has a number of domains with a large registrar that shall go unnamed. We are making some changes to our DNS infrastructure and the first of those is we are moving our secondary DNS from one server on site to four servers offsite. So we updated the name servers for each domain at the registrar by removing the entry for the old secondary name server and adding the four new ones. I monitored the old secondary server for requests and when I saw no new requests had been made for 24 hours I shut it down. That was this morning. I assumed at this point everything was good. Unfortunately this was my mistake. I should have gone and made sure name servers at large were returning the correct NS records. So this afternoon we were performing maintenance on our primary DNS server and we shut it down. This is when I started getting alerts from our external monitoring. I checked and sure enough, the DNS server used there reported the only NS record for our primary domain was the primary name server. The new secondary servers were not listed and neither was the old secondary. Is it unreasonable of me to have assumed that because the update was from ns1.mydomain.com ns2.mydomain.com to ns1.mydomain.com ns1.backupdns.com ns2.backupdns.com ns3.backupdns.com ns4.backupdns.com in one step at the registrar that there should be no intermediate state where the only NS record was for ns1.mydomain.com? Going forward to be safe obviously I will always leave the old name servers alone until after I'm 100% sure the new ones have propagated and only then remove the old name servers from the registrar. However, I'd still like to know if my registrar screwed up or if my expectation was unreasonable.
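
    One way to avoid repeating this: before shutting down an old name server, poll a few outside resolvers for the NS set and only decommission once they all agree. A rough Python sketch using dnspython (the resolver IPs and expected NS set are examples, and the domain is the placeholder from the question):

        # Sketch only: asks a few public resolvers what NS set they currently
        # return. Requires dnspython >= 2.0 (older versions use .query()).
        import dns.resolver

        RESOLVERS = ["8.8.8.8", "1.1.1.1", "208.67.222.222"]
        EXPECTED = {"ns1.mydomain.com.", "ns1.backupdns.com.", "ns2.backupdns.com.",
                    "ns3.backupdns.com.", "ns4.backupdns.com."}

        for ip in RESOLVERS:
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [ip]
            seen = {str(rr.target) for rr in r.resolve("mydomain.com", "NS")}
            status = "OK" if seen == EXPECTED else "STILL PROPAGATING"
            print(f"{ip}: {status} -> {sorted(seen)}")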

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100G right now), plus our operational VMs and development, which round out to a bit under 1T. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system ala Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and full backup weekly. My question is if using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool? Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server? Or am I better off with something more traditional? (feel free to recommend free or commercial backup solutions you favor.)
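
    If the zfs send route wins out, the offsite leg can be as simple as streaming an incremental send into an object store. A hedged Python sketch using boto3 (pool, snapshot and bucket names are invented; restoring means downloading the object and piping it back through zfs receive, so test that path before relying on it):

        # Sketch, not a hardened backup tool: streams `zfs send -i` straight
        # into S3 as a single object. All names below are made up.
        import subprocess
        import boto3

        POOL = "tank/vms"
        PREV, CURR = f"{POOL}@2012-06-01", f"{POOL}@2012-06-02"
        BUCKET, KEY = "my-offsite-backups", "vms/2012-06-02.zfsinc"

        send = subprocess.Popen(["zfs", "send", "-i", PREV, CURR],
                                stdout=subprocess.PIPE)
        # upload_fileobj reads the pipe in chunks (multipart upload under the hood).
        boto3.client("s3").upload_fileobj(send.stdout, BUCKET, KEY)
        if send.wait() != 0:
            raise RuntimeError("zfs send failed; the object in S3 is incomplete")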

    Read the article

  • What's the proper way to prepare chroot to recover a broken Linux installation?

    - by ~quack
    This question relates to questions that are asked often. The procedure is frequently mentioned or linked to offsite, but is not often clearly and correctly stated. In an objective to concentrate useful information in one place, this question seeks to provide a clear, correct reference for this procedure. What are the proper steps to prepare a chroot environment for a recovery procedure? In many situations, repairing a broken Linux installation is best done from within the installation. But if the system won't boot, how do you fix it from within? Let's assume you manage to boot into an alternate system. Once there, you need to access your broken installation in order to fix it. Many recovery How-Tos recommend using chroot in order to run programs as if you are actually booted into the broken installation. What is the basic procedure? Are there accepted best-practices to follow? What variables need to be considered in order to adapt the basic preparation steps to a particular recovery task? As this is Community Wiki, feel free to edit this question to improve it as well.
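
    As a starting point for answers, the commonly recommended preparation is: mount the broken root filesystem, bind-mount the virtual filesystems into it, then chroot. A minimal Python sketch of those steps, assuming the broken root is /dev/sda1 (adjust for your layout) and that it is run as root from the rescue system:

        # Minimal sketch of the usual chroot preparation. Run as root.
        import os
        import subprocess

        def sh(*cmd):
            subprocess.run(cmd, check=True)

        TARGET = "/mnt"
        sh("mount", "/dev/sda1", TARGET)      # the broken installation's root
        # Bind the virtual filesystems so tools inside the chroot (grub, apt,
        # mkinitramfs, ...) can see devices and kernel state.
        for fs in ("/dev", "/dev/pts", "/proc", "/sys"):
            sh("mount", "--bind", fs, TARGET + fs)

        os.chroot(TARGET)
        os.chdir("/")
        subprocess.run(["/bin/bash"])         # now working "inside" the broken install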

    Read the article

  • Backing up my Windows Home Server to the Cloud…

    - by eddraper
    Ok, here's my scenario: Windows Home Server with a little over 3TB of storage. This includes many years of our home network's PC backups, music, videos, etcetera. I'd like to get a backup off-site, and the existing APIs and apps such as the CloudBerry Labs WHS Backup service make it easy. Now, all it's down to is the vendor and the cost of the actual storage. So, I thought I'd take a lazy Saturday morning, do some research on this and get the ball rolling. What I discovered stunned me… First off, the pricing for just about everything was loaded with complexity. I learned that it wasn't just about storage… it was about network usage, requests, sites, replication, and on and on. I really don't see this as rocket science. I have a disk image. I want to put it in the cloud. I'm not going to be using it but once daily for incremental backups. Sounds like a common scenario. Yes, if "things get real" and my server goes down, I will need to bring down a lot of data and utilize a fair amount of vendor infrastructure. However, this may never happen. Offsite storage is an insurance policy. The complexity of the cost structures, perhaps by design, creates an environment where it's incredibly hard to model bottom-line costs and compare vendors' all-up pricing. As it is a "lazy Saturday morning," I'm not in the mood for such antics, so I decided to simply fire up calc.exe and do a simple arithmetic model based on price per GB. I shuddered at the results. Certainly something was wrong… did I misplace a decimal point? Then I discovered CloudBerry's own calculator. Nope, I hadn't misplaced those decimals after all. Check it out (pricing based on 3174 GB):
    - Amazon S3: $398.00 per month, $4761 per year
    - Azure: $396.75 per month, $4761 per year
    - Google: $380.88 per month, $4570.56 per year
    Conclusion: Rampant crack smoking at vendors. Seriously. Out. Of. Their. Minds. Now, to Amazon's credit, vision, and outright common sense, they had one offering which directly addresses my scenario:
    - Amazon Glacier: $31.74 per month, $380.88 per year
    Hmmm… it's on the table. Let's see what it would cost to just buy some drives and an enclosure and cart them over to a friend's house.
    - 2 x 2TB drives from NewEgg.com: $199.99
    - Enclosure: $39.99
    - Total: $239.98
    - Carting drives back and forth to a friend's within walking distance: pain
    - Leaving the drive unplugged at the friend's: $0 for electricity
    - Possible data loss: no way I can come and go every day
    I think I'll think on this a bit more…
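
    For reference, the arithmetic behind those figures is just capacity times the per-GB rate. A small Python sketch (the rates are the approximate 2012-era list prices implied by the numbers above, not current pricing; S3's tiered pricing explains the small difference from the $398 figure):

        # Back-of-the-envelope cost model: 3174 GB at an assumed flat per-GB rate.
        GB = 3174
        for name, per_gb in [("Amazon S3", 0.125), ("Azure", 0.125),
                             ("Google", 0.12), ("Amazon Glacier", 0.01)]:
            monthly = GB * per_gb
            print(f"{name:15s} ${monthly:8.2f}/month  ${monthly * 12:9.2f}/year")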

    Read the article

  • Reorganizing development environment for single developer/small shop

    - by Matthew
    I have been developing for my company for approximately three years. We serve up a web portal using Microsoft .NET and MS SQL Server on DotNetNuke. I am going to leave my job full time at the end of April. I am leaving on good terms, and I really care about this company and the state of the web project. Because I haven't worked in a team environment in a long time, I have probably lost touch with what 'real' setups look like. When I leave, I predict the company will either find another developer to take over, or at least have developers work on a contractual basis. Because I have not worked with other developers, I am very concerned about leaving the company (and the developer they hire) with a jumbled mess. I'd like to believe I am a good developer and everything makes sense, but I have no way to tell. My question is, how do I set up the development environment so the company and the next developer will have little trouble getting started? What would you as a developer like in place before working on a project you've never worked on? Here's some relevant information: There is a development server onsite and a production server offsite in a data center. There is a server where backups and source code (SourceGear Vault) are stored. There is no formal documentation, but there are comments in the code. The company budget is tight, so free suggestions will help the most. I will be around after the end of April on a consulting basis, so I can answer simple questions, but I will not be available full time to train someone.

    Read the article

  • EPL (Eclipse Public Licence) for commercial usage

    - by code-gijoe
    Hi, I'm developing an application which requires a third-party framework that is under the Eclipse Public Licence (EPL). The application is a server-side commercial application which will be running on my servers. The EPL software is distributed as binaries (jar files). I'm only using the packages and am not making any contribution, i.e. not making any changes to the source. Under the EPL I believe I'm not a "Contributor", nor am I making a "Contribution". But if I want to make my software available to be installed at some offsite server, I'm having trouble with the REQUIREMENTS of the EPL, b.iv: "states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange". Does this mean that if I were to modify the source code of the 3rd-party framework for my own purposes I would need to distribute all of my source code? The EPL is supposed to be commercially friendly, but it doesn't seem that way to me. Thank you.

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:
    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes)
    - long-term, the process isn't sustainable. Each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way
    So, my question is, how should I be doing this properly? The requirements are:
    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad
    My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.
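
    One common pattern that fits these requirements is hard-linked incremental snapshots via rsync --link-dest, which tools like rsnapshot, rdiff-backup and duplicity automate in various ways. A hedged Python sketch of the raw idea (host names and paths are made up):

        # Each night gets its own dated directory on the offsite box; unchanged
        # files are hard-linked against the previous run, so only deltas cost
        # space and bandwidth.
        import datetime
        import subprocess

        SRC_DIRS = ["/etc", "/home", "/var/backups/mysql"]
        DEST = "backup.example.com:/backups/box1"
        today = datetime.date.today().isoformat()

        subprocess.run(
            ["rsync", "-a", "--delete",
             "--link-dest=../latest",          # relative to the destination dir
             *SRC_DIRS,
             f"{DEST}/{today}/"],
            check=True,
        )
        # Afterwards, on the destination: ln -sfn <today> /backups/box1/latest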

    Read the article

  • Ideal backup appliance for backup software like Bacula?

    - by Ricket
    I'm at a small company and we (the IT department of two) manage <100 client computers and a handful of servers. Currently we're using a company's appliance to handle backup; it does a small backup every night and a full backup every weekend, and a guy comes on Wednesday to take an offsite backup drive (and gives back last week's drive to swap with it). The backup is done only on the servers' hard drives, because our client computers and employees make sure not to store anything worthwhile on their own computers. So it's a pretty simple situation. Lately this system, mainly the appliance, has been having problems, so we are looking for an alternative. I'm researching other companies but also looking into what we might expect from trying to do this ourselves. There will undoubtedly be a large learning curve, but hey, that's what serverfault is for, right? :) So anyway I was looking at Bacula. Feature list sounds great, documentation is plentiful, but it's only software. So my question is, what is the ideal backup server to run the Bacula server software on? And not only the server but other related appliances. Our current backup appliance uses only hard drives, not tape drives. It has several plugged into it at one time, in hotswap bays on the front of the machine. I couldn't help but notice though, it's hardly more than Windows XP with hard drive bays, a PCI eSATA card (which connects to another appliance extension piece with 2 more bays), and their software. Since the company will take back their appliance if/when we cancel with them, where can I go to configure a server with these kinds of things? And should I consider switching to tape drives? What other concerns should I be thinking about when I pick out hardware for a backup server? Maybe I'm being naive, I'm sure Dell (and any other computer company) sells them in the small business section of their website, but I wanted to make sure that there's not some other more recommended place that other companies are getting their hardware from, and that I don't need anything special for Bacula.

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setup of the solution. I would prefer if maintenance of the system is somewhat low once it is setup, but I am willing to accept some tradeoffs. Existing equipment: Router - Netgear WNDR3700 (gigabit) Router - DLink Gamerlounge DGL-4300 (gigabit) Switch - 16 port Trendnet green switch (gigabit) Switch - 5 port Trendnet green (gigabit) Computer - i7-950 office computer (gigabit ethernet) Computer - Q6600 quad core media center, hooked up to TV, records shows (gigabit ethernet) Computer - Acer 1810T ultraportable laptop (gigabit and N ethernet) NAS - Intel SS4200-E (gigabit) External hard drive - 2TB WD Green drive (esata) All kinds of miscellaneous network connected TV, Bluray, Verizon network extender, HDhomerun TV tuners, etc. Requirements: -Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB. (Including offsite backup) -Central location for all user's files, while also keeping them secure from each other. -Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually) -Possibility to host files to friends and family easily Nice to have: -Backup of terabytes of movie backups Intriguing possibilities: -Capability to have users' Windows desktops and files look the same from all network computers I am not sure if the new Windows Home Server 2011 would fit into this well, if I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility that I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up off site via carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files. I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goal is high functionality, media capable, and robust backup of irreplaceable files.
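
    On the corruption question: RAID will faithfully mirror corrupted bits, so by itself it does not protect against this; an independent checksum manifest (or a checksumming filesystem such as ZFS or Btrfs) is what catches silent corruption before it propagates into backups. A small Python sketch with example paths:

        # Keep a checksum manifest of the irreplaceable files and re-verify it
        # before trusting any backup run. Paths are examples; needs Python 3.8+.
        import hashlib
        import json
        import pathlib

        PHOTOS = pathlib.Path("D:/FamilyPictures")
        MANIFEST = pathlib.Path("D:/FamilyPictures.manifest.json")

        def sha256(path, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while chunk := f.read(bufsize):
                    h.update(chunk)
            return h.hexdigest()

        current = {str(p): sha256(p) for p in PHOTOS.rglob("*") if p.is_file()}
        if MANIFEST.exists():
            old = json.loads(MANIFEST.read_text())
            changed = [p for p in old if p in current and old[p] != current[p]]
            print("files whose contents changed since the last check:", changed)
        MANIFEST.write_text(json.dumps(current, indent=2))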

    Read the article

  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the same exact SAN setup. The idea is to have those two SANs contain the same exact data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution, should a failure occur on either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes be instantly updated on the Colorado SAN as well. We also need there to be file locking so that there will be no conflicts or overwritten changes if users attempt to work on the same file. Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that, we would need to go with a third-party product like Peerlock. But I don't even know if DFS-R is an option. Can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at? There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.
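
    For a rough sense of the physics: light in fibre covers roughly 200 km per millisecond, so a 2000-mile path has a hard latency floor before any equipment or disk time is involved. A quick back-of-the-envelope calculation in Python:

        # Minimum latency over ~2000 miles of fibre, ignoring routing detours.
        miles = 2000
        km = miles * 1.609
        c_fiber_km_per_ms = 200          # light in fibre ~ 2/3 c ~ 200 km/ms
        one_way_ms = km / c_fiber_km_per_ms
        print(f"one-way: {one_way_ms:.1f} ms, round trip: {2 * one_way_ms:.1f} ms")
        # ~16 ms one way, ~32 ms per acknowledged write -- before any real-world
        # overhead. That floor is why true synchronous replication at this
        # distance tends to make interactive file access feel sluggish.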

    Read the article

  • Looking for a new backup solution to replace dying tape drive

    - by E3 Group
    We're running Windows Server 2003 SBS and another machine with Server 2003 Standard on it. The SBS server is about 7 years old, running pretty much 24/7 - an HP server of some description. We have an Ultrium 448 cycling LTO2 400GB tapes daily and incrementally backing up approximately 100GB worth of data (20GB C:\ and system state, 40GB Exchange, 40GB database for some crap marketing software) on BackupExec 10D. As of 5 months ago, the backups have been consistently failing with IO errors, bad reads and some write errors. When I say consistent, I mean every time, and we haven't had a proper backup for the entire 5 months - so if the server explodes tomorrow, 7 years' worth of data will just cease to exist. I've only just recently rejoined the company and am looking at rectifying the more concerning problems, so the first thing I did was try a backup to a USB 2.0 external drive. It was excruciatingly slow. In fact it was so slow it took 40 hours and it still wasn't finished. I ended up cancelling it and reconfiguring the selections again to reduce the file size. This, however, isn't a permanent solution. I concluded that the IO error was either from a faulty tape drive (which has a tape stuck in there right now and not coming out) or from a dying SCSI controller. Neither of them is good news and both are extremely expensive to fix. I'm operating on an extremely low budget so have been looking at outsourcing the backups. A company in Sydney (where I'm located) offers incremental online backups via a NAS. It costs almost double the price of a new tape drive but offers monthly repayments, which will let us get through times when cash flow is minimal. It seems like a sweet deal but it is still a little bit pricey. So I'm looking for a cheaper, yet reliable solution. Maybe some in-house NAS or something offsite? The idea is to avoid using tapes. Are there any recommendations for rectifying my current situation? Or are tapes the only way to go? I'm concerned that the server will die one day in the near future and I must be able to restore it to another server with different hardware.

    Read the article

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things. At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights. I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on. I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage that they be put on a file server so that others (with appropriate permissions) can use them and that the files are backed up properly. The result of this is that most users have large hard drives that are sitting mainly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library? What I envision is a client program that is pushed out to users' PCs that coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group to protect against a disaster in one building taking down everything else. Obviously you wouldn't point your database server here, but for simpler things I see several advantages:
    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - Should ultimately be cheaper, as you don't need to keep a separate set of disks.
    I can see a few downsides as well:
    - Occasional degradation of user PC performance, if the machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - Still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.
    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system. Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?
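
    To make the idea concrete, the coordinator described above mostly needs a chunk-to-hosts map with a replication factor and some awareness of location groups. A toy Python sketch of just that bookkeeping (all names are invented; this is the concept, not an existing product):

        # Toy placement bookkeeping: which client PCs hold which file chunks,
        # with copies spread across buildings so one disaster doesn't take
        # everything down.
        import random
        from collections import defaultdict

        REPLICAS = 3
        peers = {                          # host -> building/group
            "lab1-pc07": "north", "lab1-pc12": "north",
            "lab2-pc03": "south", "office-pc9": "south",
        }
        placement = defaultdict(set)       # (path, chunk_index) -> set of hosts

        def place(path, n_chunks):
            by_group = defaultdict(list)
            for host, group in peers.items():
                by_group[group].append(host)
            for i in range(n_chunks):
                # one copy per building first, then spread the rest anywhere
                picks = [random.choice(hosts) for hosts in by_group.values()]
                spares = [h for h in peers if h not in picks]
                random.shuffle(spares)
                placement[(path, i)] = set((picks + spares)[:REPLICAS])

        place("/shares/handbook.docx", n_chunks=4)
        for chunk, hosts in sorted(placement.items()):
            print(chunk, "->", sorted(hosts))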

    Read the article

  • Rob Blackwell on interoperability and Azure

    - by Eric Nelson
    At QCon in March we had a sample Azure application implemented in both Java and Ruby to demonstrate that the Windows Azure Platform is not just about .NET. The following is an interesting interview with Rob Blackwell, the R&D director of the partner who implemented the application. UK Interoperability Team Interviews Rob Blackwell, R&D Director at Active Web Solutions. Is Microsoft taking interoperability seriously? Yes. In the past, I think Microsoft has, quite rightly come in for criticism, but architects and developers should look at this again. The Interoperability Bridges site (http://www.interoperabilitybridges.com/ ) shows a wide range of projects that allow interoperability from Java, Ruby and PHP for example. The Windows Azure platform has been architected with interoperable APIs in mind. It's straightforward to access the various storage facilities from just about any language or platform. Azure compute is capable of running more than just C# applications! Why is interoperability important to you? My company provides consultancy and bespoke development services. We're a Microsoft Gold Partner, but we live in the real world where companies have a mix of technologies provided by a variety of vendors. When developing an enterprise software solution, you rarely have a completely blank canvas. We often see integration scenarios where we need to exchange data with legacy systems. It's not unusual to see modern Silverlight applications being built on top of Java or Mainframe based back ends. Could you give us some examples of where interoperability has been important for your projects? We developed an innovative Sea Safety system for the RNLI Lifeboats here in the UK. Commercial Fishing is one of the most dangerous professions and we helped developed the MOB Guardian System which uses satellite technology and man overboard devices to raise the alarm when a fisherman gets into trouble. The solution is implemented in .NET running on Windows, but without interoperable standards, it would have been impossible to communicate with the satellite gateway technology. For more information, please see the case study: http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000005892 More recently, we were asked to build a web site to accompany the QCon 2010 conference in London to help demonstrate and promote interoperability. We built the site using Java and Restlet and hosted it in Windows Azure Compute. The site accepts feedback from visitors and all the data is stored in Windows Azure Storage. We also ported the application to Ruby on Rails for demonstration purposes. Visitors to the stand were surprised that this was even possible. Why should Java developers be interested in Windows Azure? Windows Azure Storage consists of Blobs, Queues and Tables. The storage is scalable, durable, secure and cost-effective. Using the WindowsAzure4j library, it's easy to use, and takes just a few lines of code. If you are writing an application with large data storage requirements, or you want an offsite backup, it makes a lot of sense. Running Java applications in Azure Compute is straightforward with tools like the Tomcat Solution Accelerator (http://code.msdn.microsoft.com/winazuretomcat )and AzureRunMe (http://azurerunme.codeplex.com/ ). The Windows Azure AppFabric Service Bus can also be used to connect heterogeneous systems running on different networks and in different data centres. How can The Service Bus be considered an interoperability solution? 
I think that the Windows Azure AppFabric Service Bus is one of Microsoft’s best kept secrets. Think of it as “a globally scalable application plumbing kit in the sky”. If you have used Enterprise Service Buses before, you’ll be familiar with the concept. Applications can connect to the service bus to securely exchange data – these can be point to point or multicast links. With the AppFabric Service Bus, the applications can exist anywhere that has access to the Internet and the connections can traverse firewalls. This makes it easy to extend or scale your application or reach out to other networks and technologies. For example, let’s say you have a SQL Server database running on premises and you want to expose the data to a Java application running in the cloud. You could set up a point to point Service Bus connection and use JDBC. Traditionally this would have been difficult or impossible without punching holes in firewalls and compromising security. Rob Blackwell is R&D Director at Active Web Solutions, www.aws.net , a Microsoft Gold Partner specialising in leading edge software solutions. He is an occasional writer and conference speaker and blogs at www.robblackwell.org.uk Related Links: UK Azure Online Community – join today. UK Windows Azure Site Start working with Windows Azure

    Read the article

  • XNA Notes 004

    - by George Clingerman
    The XNA community has been crazy busy again. It always make me fee like such a slacker collecting all of these notes as I see the tremendous output from people all over the world and it’s incredible and humbling. There are some amazingly skilled people working with XNA. On another not, I’m going to take a minute to get on my soapbox and say, if you are developing ANYTHING and are not using some sort of source/revision control, START IMMEDIATELY. This applies to teams of one. Projects for fun. And “I back up my hard drive” or “I use dropbox!” does NOT count as using source control. You’ll be doing yourself a HUGE favor if you find one, learn to use it and integrate it into your everyday workflow. I personally use Subversion. It’s hosted offsite at xp.dev.com and I use TortoiseSVN as my front end to interface with the repository. It’s simple and easy to use and has saved me from myself so many time. Honestly, get setup with some type of source control immediately. If you don’t understand how, grab another developer that does and have them walk you through setup and the basics of using it. Ok, I’m done. On to the notes… The XNA Team Only 14 days left to Submit XNA GS 3.1 Games! http://blogs.msdn.com/b/xna/archive/2011/01/24/14-days-left-to-submit-xna-gs-3-1-games-on-app-hub.aspx Shawn Hargreaves shares some great information on Exception Handling best practices on the XNA forums http://forums.create.msdn.com/forums/p/73333/448556.aspx#448556 http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx XNA MVPs @CatalinZima gives us a peek at Chicken’s Can’t Fly http://www.amusedsloth.com/games/chickens-cant-fly/ Screen-space deformations in XNA for WP7 from Catalin Zima http://twitter.com/CatalinZima/statuses/30313083767357440 http://www.amusedsloth.com/2011/01/screen-space-deformations-in-xna-for-windows-phone-7/ XNA Developers Going to GDC? Don’t miss the XNA panel hosted by a plethora of well known XNA community names! http://forums.create.msdn.com/forums/p/73576/448842.aspx#448842 MasterBlud does an interview with @Xalterax http://twitter.com/MasterBlud/statuses/28510774812999680 http://www.xboxhornet.com/wordpress/?p=7102 Luke Schneider of Radiangames posts about The Radiangames Style http://radiangames.com/?p=532 Holmade Games had a “vote for the new playable character” poll going on for Hurdle Turtle this past week http://holmadegames.blogspot.com/2011/01/new-level-pack-vote-for-your-favorite.html IGF v0.1.0.0 release post mortem http://indiefreaks.com/2011/01/24/v0-1-0-0-release-post-mortem/ James an Super Dunner post Good Morning Gato #46 and a look at the Vampire Smile box art http://www.ska-studios.com/2011/01/21/good-morning-gato-46/ http://www.ska-studios.com/2011/01/20/vampire-smiles-digital-box-art/ Alfredo Di Napoli creates Cow Pong using XNA and F#! 
http://alfredodinapoli.wordpress.com/2011/01/25/cow-pong-a-simple-xna-game-in-f/ Xbox LIVE Indie Games Signed In Podcast posts Episode #61 http://www.signedinpodcast.com/?p=559 Gamergeddon posts the January 23rd edition of XBLIG Round Up http://www.gamergeddon.com/2011/01/23/xbox-indie-games-round-up-january-23rd/ Indie Asylum posts Antipole Review http://www.indieasylum.com/reviews/38-xblig/112-antipole.html 1UPOrPosion Reviews OSR Unhinged http://www.1uporpoison.com/xblig/osr-unhinged/ DarkstarMatryx review Warbirds at Work http://www.darkstarmatryx.com/?p=185 Review of Aban Hawkins and the 1000 Spikes http://www.armlessoctopus.com/2011/01/24/xbox-indie-review-aban-hawkins-the-1000-spikes/ XboxHornet reviews Corrupted http://www.xboxhornet.com/wordpress/?p=7123 XBLIG 2010: The Best And The Worst http://www.gamasutra.com/blogs/JamieMann/20110121/6840/ Xbox LIVE Arcade Sales Analysis - an interesting read for XBLIG developers wondering how they’re doing compared to arcade.. http://www.gamerbytes.com/2011/01/xbla_sales_analysis_dec_2010.php Best of Indies for January 25th http://www.thisisfakediy.co.uk/articles/games/best-of-the-indies-25th-january-2011 Decimation X3 appears as an arcade machine in the wild! http://twitter.com/mdoucette/statuses/29605562484260864 XNA Game Development Guiseppe De Francesco (@PinoEire) announced Torque X 4.0 CEV is now in RC phase! http://www.garagegames.com/community/blogs/view/20779 DrMistry of mstargames shares his struggle (and mistakes) with learning to use the Content Pipeline http://www.mstargames.co.uk/mistryblogmain/35-genblog/181-pontent-cipeline-more-like-it.html New Tutorial posted XNA 2D Basic Collision Detection with Rotation from Ioannis Panagopoulos http://www.progware.org/Blog/post/XNA-2D-Basic-Collision-Detection-with-Rotation.aspx Sgt. Conker roars to life! Doing a much better (and prettier) job of collecting XNA news from around the interwebs. http://www.sgtconker.com/ http://www.sgtconker.com/2011/01/dedication-for-captain-boki/ http://www.sgtconker.com/2011/01/screen-space-deformations-in-xna-for-windows-phone-7/ http://www.sgtconker.com/2011/01/xna-4-0-light-pre-pass-2/ http://www.sgtconker.com/2011/01/indiefreaks-game-framework-0-1-0-0-released/ Offering a little free publicity for XBLIGs http://forums.create.msdn.com/forums/p/73465/448321.aspx#448321 Ben Kane writes about building loot tables from Excel using the Content Pipeline http://benkane.wordpress.com/2011/01/23/building-loot-tables-from-excel-using-the-content-pipeline/ Good tips on attracting a game artist AND an offer to create your cover art for FREE http://forums.create.msdn.com/forums/t/72998.aspx If you’re an XBLIG developer keeping your eye on places to release on the PC, might want to be watching the IndieCity blog. Seems like these guys are well on their way to constructing something worth watching. http://www.indiecity.com/blog/ DVMGames spotted a new crowd-funding site for Indies http://twitter.com/DVMGames/statuses/29947274767372289 http://www.8bitfunding.com/ Transmute continues to make progress and there’s a nice dev blog to follow along here http://forgottenstarstudios.com/blog/

    Read the article

  • Backup Your Windows Home Server Off-Site with Asus Webstorage

    - by Mysticgeek
    Windows Home Server lets you back up machines on your network easily. But what about backing up the server data? Today we take a look at ASUS WebStorage for Windows Home Server, which provides you with secure off-site backup for WHS. To use the ASUS WebStorage service you'll need to sign up for a free account. It offers 1GB of free storage, then you can purchase an unlimited backup package for $39.99 for a one-year subscription. Note: they also offer online storage for individual PCs as well.
    Install ASUS WebStorage for WHS
    Browse to your shared folders on the server, open the Add-Ins folder and copy over the WHSConnectorSetup2.2.4.088.msi file (link below), then close out of the folder. Now launch Windows Home Server Console from one of the computers on your network, click Settings, then Add-ins. Under Available Add-ins click the Available tab and you'll see the ASUS WebStorage installer file we just copied over. Click the Install button. Installation kicks off and when it's complete, you'll need to close out of the console and reconnect.
    Using the ASUS WebStorage WHS Connector
    When you reconnect to WHS Console, scroll over to the ASUS WebStorage icon and click on Settings. Now log into your ASUS account… Now select the folders you want to back up to the WebStorage service. Select the radio button next to Enable to initialize the backup process… The backup process begins. You can change which folders are backed up simply by disabling the backup process, unchecking the folder(s), then enabling the backup again.
    ASUS WebStorage Site
    After you have files backed up to the ASUS site, log into your account and you're presented with an overview of the amount of storage you're using. It also shows what types of files are taking up certain amounts of space. You can browse through your backed-up files and folders. It allows you to share and sync backed-up data as well. Navigate to the file you want and you can easily download it by clicking on it, or share it out by clicking the share link below it. If you choose to share it, you're provided with a link to the file to send out to other users.
    Conclusion
    Users of Windows Home Server have been looking for an inexpensive cloud backup solution for quite some time. There are services such as JungleDisk, KeepVault, Wuala, etc. These services probably do a better job, but can start getting expensive once you start uploading GBs of data. Another disappointment of ASUS WebStorage is that you can only back up your WHS shares (from what we've been able to determine); it's an "all or nothing" type of thing. You cannot go in and select individual files and folders. The initial upload speeds can be a bit slow as well, although that might have something to do with limited upload speeds on the DSL connection we used to test it. Retrieving your data from the ASUS site is a breeze though, and all the data files are organized quite well. The WHS Add-in is very easy to install and use. If you're looking for an off-site solution to back up your WHS data, you can test out ASUS WebStorage for free with a 1GB limit. This is good for testing the service and it might be exactly what you're looking for. Other users may want a more advanced solution like KeepVault or CloudBerry, which is a front end for Amazon S3 storage.
    Download ASUS WebStorage WHS Addin
    Other WHS offsite backup solutions: CloudBerry, JungleDisk, KeepVault, Wuala

    Read the article

  • Adventures in Scrum: Lesson 1 – The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner that wanted to have the Scrum team broken up into smaller units so that less time was wasted on the Scrum Ceremonies! Their complaint was around the need in Scrum to have the entire "Team" (7+-2) involved in the sizing of the work during the "Sprint Planning Meeting". The standard flippant answer of all Scrum professionals, "Well that's not Scrum", does not get you any brownie points in these situations. The response could be "Well we are not doing Scrum then", which in turn leads to "We are doing Scrum… But, we have split the Scrum team into units of 2/3 so that they can concentrate on a specific area of work". While this may work, it is not Scrum and should not be called so… It is just a form of Agile. Don't get me wrong at this stage, there is nothing wrong with Agile, just don't call it Scrum. The reason that the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially Shippable software at the end of the first sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even those that are high performing teams. Remember, it is the product owner's money! We should NOT break up Scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum Ceremonies.
    "The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint." - Scrum Guide, Scrum.org
    The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain.
    "A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team." - Scrum Guide, Scrum.org
    This paragraph goes to the why of having the whole team at the meeting. The goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high performing teams, and this is what Scrum as a framework has been optimised to produce. I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum in the failures of the moment. As the man spending the money, I completely understand his perspective and I think that we should not have started Scrum on an internal project, but selected a customer that is open to the ideas and complications of Scrum. So, what should we have NOT done on our first Scrum project?
    - Should not have had 3 interns as the only on-site resource – This led to bad practices as the experienced guys were not there helping and correcting as they usually would.
    - Should not have had the only experienced guys offsite – With both the experienced technical guys in completely different time zones it was difficult to get time for questions. Helping the guys on site was just plain impossible.
    - Should not have used a part-time ScrumMaster – Although the ScrumMaster attended all of the Ceremonies, because they are only in 2 full days of the week it makes it difficult for the team to raise impediments as they go.
    - Should not have used a proxy product owner – This was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The "single wringable neck" needs to contain both the Money and the Vision as well as attending the required meetings.
    I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect then Adapt…
    Technorati Tags: Scrum, Sprint Planning, Sprint Retrospective, Scrum.org, Scrum Guide, Scrum Ceremonies, ScrumMaster, Product Owner
    Need Help? Professional Scrum Developer Training SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • Cocoa WebView won't render all images on OSX 10.8

    - by user2906962
    I'm currently developing an application for OS X, backwards compatible with OS X 10.6. At some point I create a WebView in which I load HTML content that I create dynamically. The HTML content is formed only of image links (<img src=) and text; there is no JavaScript or anything of that kind. All the images (there are only 5 PNG images) are stored locally and their size is 4 KB. The problem I have is that, the very first time I run the application, some images (those that are not on the visible side of the "scroll") are not shown unless I drag the window to another screen or reload the view controller that contains the WebView. In those cases the images appear on the "scroll" even if they are offsite. I've tried creating the WebView both with IB and programmatically, and I've used WebPreferences like Autosaves, AllowsAnimatedImages… I've tried using NSURLCache to load each image so that the WebView can access them more easily ... same result. Taking into account that my code is quite extensive, I'm gonna post only the bits that I think are relevant:
        NSString *finalHtml ... //contains the complete html
        CGRect screenRect = [self.fixedView bounds];
        CGRect webFrame = CGRectMake(0.0f, 0.0f, screenRect.size.width, screenRect.size.height);
        self.miwebView = [[WebView alloc] initWithFrame:webFrame];
        [self.miwebView setEditable:NO];
        [self.miwebView setUIDelegate:self];
        ...
        NSURLCache *URLCache = [[NSURLCache alloc] initWithMemoryCapacity:4 * 1024 * 1024 diskCapacity:20 * 1024 * 1024 diskPath:nil];
        [NSURLCache setSharedURLCache:URLCache];
        NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"line" ofType:@"png"];
        NSURL *resourceUrl = [NSURL URLWithString:imagePath];
        NSURLRequest *request = [NSURLRequest requestWithURL:resourceUrl cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:10.0f];
        [URLCache cachedResponseForRequest:request];
        ...
        [self.miwebView setResourceLoadDelegate:self];
        WebPreferences *webPref = [[WebPreferences alloc] init];
        [webPref setAutosaves:YES];
        [webPref setAllowsAnimatedImages:YES];
        [webPref setAllowsAnimatedImageLooping:YES];
        [self.miwebView setPreferences:webPref];
        NSString *pathResult = [[NSBundle mainBundle] bundlePath];
        NSURL *baseURLRes = [NSURL fileURLWithPath:pathResult];
        [[self.miwebView mainFrame] loadHTMLString:finalHtml baseURL:baseURLRes];
        [self.fixedView addSubview:self.miwebView];
    I should also mention that if an image is caught somewhere in between the visible and non-visible side of the "scroll", only the visible bit of the image is going to be rendered even if the page gets scrolled up ... so I think all this is some rendering issue ... I appreciate your help, thank you!

    Read the article

  • Publishing an Excel spreadsheet using Microsoft SBS 2008 to a web page that is viewable by mobile ph

    - by Dave Heath
    I am getting well out of my “superuser” depth here and would love some support. At work we have an Excel workbook (*.xls format circa Office 2003) which maintains our “engineers” timesheet. This handles what events we are doing across the year and how many “work units” it is. As far as a workbook goes, it is fairly simple with just a few =SUM(range) cells and some linked across sheets (12 sheets, one for each month) It is stored on a server, in a folder that provides “management” with full access and “engineers” with read-only access. The workbook itself is read-only for “engineers” and full access for “management”. I think these permissions are controlled through Active Directory. The workbook is protected with a password, assumingly to allow “management” to edit it even if they are working at a terminal logged in as an “engineer”. This protection prevents “engineers” from going to certain cells to see formulae and therefore editing them. The workbook has a macro which saves and closes it ten minutes after opening. This is to stop other “management” from being locked out by any one person who has logged in with editing privileges. I hope this is making sense to someone... :S Now then, we have Microsoft Small Business Server 2008. We have a shiny new web-based login for when we are offsite so we can get to Exchange webmail and our internal site (which uses Sharepoint 3.0). “Management” would like to be able to publish this timesheet automatically after changes (they don’t want to have to do anything different to what they are currently doing) so that using an iPhone “engineers” can check on it while out of the office. I am currently having a look at “Excel Services” for Office 2007 on TechNet but I am not sure if I am running down the right garden path at the moment. < EDIT This seems to suggest that I have to have Sharepoint Server 2007, with no mention of Sharepoint 3.0... ... "MOSS builds on WSS by adding both core features as well as end user web parts" - Wikipedia entry for Microsoft Office SharePoint Server (MOSS) this is not good news... "...and using the ASP.NET APIs, web parts can be written to extend the functionality of WSS." Wikipedia entry for Windows Sharepoint Services. Could this bring back what I need? Is this good news? Do I need to start learning ASP.NET? This link here implies that we need MOSS to do what I want and the bosses say we aint' getting it. http://serverfault.com/questions/20198/what-is-some-cool-things-you-can-do-with-sharepoint-2007/22128#22128 Back to the drawing board. < /EDIT Please could someone suggest some “further reading” for me to help point me in the right direction or to put me back on the right track. Many thanks. I will try to keep this up to date with how I get on.

    Read the article

  • Site-to-Site PPTP VPN connection between two Windows Server 2008 R2 servers

    - by steve_eyre
    We have two Windows Server 2008 R2 machines, one in our main office and one in a new office which we have just moved offsite. The main office has previously been handling client-to-server PPTP VPN connections. Now that we have moved our second server out of office, we want to set up a demand-dial or persistent VPN connection from the second server to the primary. Using a custom setting RRAS profile, we have successfully managed to set up a site-to-site VPN connection so that from the second server itself, it can access any of the devices in the main office and communicate back. However, any connected machines in the second office cannot use this connection, even when using the second server as gateway. The demand-dial interface is setup from the Second Server dialing into Main Server and a static route set up on RRAS for 192.168.0.0 with subnet mask 255.255.0.0 pointing down this network interface. The main office has the network of 192.168.0.0/16 (subnet mask 255.255.0.0). The second office has the network of 172.16.100.0/24 (subnet mask 255.255.255.0). What steps do we need to take to ensure traffic from the second office PCs going towards 192.168.x.x addresses use the VPN route? Many Thanks in advance for any help the community can offer. Debug Information Here is the route print output from the second server: =========================================================================== Interface List 23...........................Main Office 22...........................RAS (Dial In) Interface 16...e0 db 55 12 fa 02 ......Local Area Connection - Virtual Network 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 14...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 24...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.16.100.250 172.16.100.222 261 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 <MAIN OFFICE IP> 255.255.255.255 172.16.100.250 172.16.100.222 6 172.16.100.0 255.255.255.0 On-link 172.16.100.222 261 172.16.100.113 255.255.255.255 On-link 172.16.100.113 306 172.16.100.222 255.255.255.255 On-link 172.16.100.222 261 172.16.100.223 255.255.255.255 On-link 172.16.100.222 261 172.16.100.224 255.255.255.255 On-link 172.16.100.222 261 172.16.100.225 255.255.255.255 On-link 172.16.100.222 261 172.16.100.226 255.255.255.255 On-link 172.16.100.222 261 172.16.100.227 255.255.255.255 On-link 172.16.100.222 261 172.16.100.228 255.255.255.255 On-link 172.16.100.222 261 172.16.100.229 255.255.255.255 On-link 172.16.100.222 261 172.16.100.230 255.255.255.255 On-link 172.16.100.222 261 172.16.100.255 255.255.255.255 On-link 172.16.100.222 261 192.168.0.0 255.255.0.0 192.168.101.87 192.168.101.17 266 192.168.101.17 255.255.255.255 On-link 192.168.101.17 266 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 172.16.100.222 261 224.0.0.0 240.0.0.0 On-link 172.16.100.113 306 224.0.0.0 240.0.0.0 On-link 192.168.101.17 266 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 172.16.100.222 261 255.255.255.255 255.255.255.255 On-link 172.16.100.113 306 255.255.255.255 255.255.255.255 On-link 192.168.101.17 266 
=========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 192.168.0.200 Default 0.0.0.0 0.0.0.0 172.16.100.250 Default =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 16 261 fe80::/64 On-link 16 261 fe80::edf4:85c6:3c15:dcbe/128 On-link 1 306 ff00::/8 On-link 16 261 ff00::/8 On-link 22 306 ff00::/8 On-link =========================================================================== Persistent Routes: None And here is the route print from one of the second office PCs: =========================================================================== Interface List 11...10 78 d2 32 53 27 ......Atheros AR8151 PCI-E Gigabit Ethernet Controller 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.16.100.250 172.16.100.103 10 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 172.16.100.0 255.255.255.0 On-link 172.16.100.103 266 172.16.100.103 255.255.255.255 On-link 172.16.100.103 266 172.16.100.255 255.255.255.255 On-link 172.16.100.103 266 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 172.16.100.103 266 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 172.16.100.103 266 =========================================================================== Persistent Routes: None IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 11 266 fe80::/64 On-link 11 266 fe80::e973:de17:a045:aa78/128 On-link 1 306 ff00::/8 On-link 11 266 ff00::/8 On-link =========================================================================== Persistent Routes: None

    Read the article
