Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

  • How to determine if a CentOS system is RAID-1?

    - by Tedd Johnson
    I've tried searching for this answer, but haven't found anything elegant. I have numerous servers in a colo that is in another state. I need to find a way to check that the servers have RAID-1 on them, so that I can determine whether they were set up correctly by my colo. df -h shows:
      Filesystem                      Size  Used Avail Use% Mounted on
      /dev/mapper/VolGroup00-LogVol00 442G  1.5G  418G   1% /
      /dev/sda1                        99M   19M   75M  20% /boot
      tmpfs                           4.0G     0  4.0G   0% /dev/shm
    However, as CentOS uses LVM by default, this doesn't indicate whether RAID-1 is present. It is supposed to be a software RAID, so I'm pretty sure there should be a way to check. Thanks
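
    One way to check from the shell, as a minimal sketch assuming the mirror is Linux software RAID managed by mdadm (device names below are illustrative):

      cat /proc/mdstat                 # lists md arrays; a healthy mirror shows raid1 and [UU]
      sudo mdadm --detail /dev/md0     # reports "Raid Level : raid1" and the member disks, if md0 exists
      sudo pvs -o pv_name,vg_name      # shows whether the LVM physical volume sits on an md device or a raw disk

    If /proc/mdstat is empty and pvs shows the volume group sitting directly on a raw partition like /dev/sda2, the box is most likely not mirrored in software; a hardware RAID controller would need its own vendor tool to confirm.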

    Read the article

  • Need advice on PC components for high-end games but in limited budget

    - by Md Atif
    I need advice: I want to buy PC components that will be good for gaming while staying under my budget. The first thing I chose is the graphics card: ATI HD 7850 2GB DDR5. Matching things up with it, I selected the following: Processor: AMD 3.5 GHz AM3 FX 8320 8-core Piledriver; Mobo: MSI 990FXA-GD65; RAM: 8GB DDR3 (4x2). Does this setup look compatible? If downgrading any of the above components would have a negligible effect on gaming performance (like buying a 970-chipset mobo instead of the 990), please let me know so I can save some money :) Any other advice?

    Read the article

  • need help upgrading small business wifi network

    - by Henry Jackson
    Our small business currently has 3 wireless access points around the building, each with its own SSID. Security is done with WEP (ick) and MAC address filtering (double ick). We are trying to reconfigure the setup, with these goals: wifi roaming between the access points, and user-based authentication that isn't as annoying as MAC address filtering. 1) The entire building is hardwired with ethernet, so I assume it should be easy to set up the routers to act as one big network, but I can't figure out how. Can someone point me in the right direction? The routers are consumer-grade Linksys routers; is it possible to do this without getting new hardware? 2) For security, we will probably upgrade to WPA2, and I'm thinking of using the Enterprise version so that users can log in with a username, instead of having a single key (so if an employee leaves or something, their access can be removed). We have several on-site Windows servers; can one of them be set up as a RADIUS server, or is that best left to a dedicated machine (again, using existing hardware is good)?

    Read the article

  • What is the best way to run ClamAV on Windows Server 2008 R2

    - by gabbsmo
    I'm hosting a WordPress site on Windows Server 2008 R2 and want to scan all files uploaded by users for viruses using this plugin: http://wordpress.org/extend/plugins/upload-scanner/. I'm on a really tight budget (no profit), so ClamAV seems like a good choice. What is the best way to run ClamAV under these circumstances? I'm considering the following options: just running the raw Windows build from http://sourceforge.net/projects/clamav/ and setting up definition updates with Task Scheduler (is there any way to automate updates of the scanner binaries as well?), or using a "distro" like ClamWin or Immunet (advertised on clamav.net). Any suggestions are welcome.

    Read the article

  • How To Change Attachment Size in WorkItems in TFS 2010

    - by Ravi
    Recently, I came across an issue where I had to change the size limit for work item attachments in TFS 2010. I searched all around the internet only to find very little information about it, and what I found honestly wasn't clear. So after breaking my head for some time, I was successful in doing it. Here are my conclusions and the procedure to do it. 1. You DON'T have to change it programmatically. You can do it directly from the IIS web services. 2. You CAN change it programmatically too, by making an entry into the TFS registry using a small piece of code. Let me show you how it is done from IIS. This changes the attachment size limit to your required value for work items in TFS 2010, for each collection individually. 1. You must be a TFS admin to do this (log in with the setup account). 2. Browse to http://localhost:8080/tfs/<YOUR-COLLECTION-NAME>/WorkItemTracking/v1.0/ConfigurationSettingsService.asmx 3. You'll see 3 asmx services – GetMaxAttachmentSize, GetWorkItemTrackingVersion and SetMaxAttachmentSize. 4. To find the current maximum attachment size for a collection, click on the first service and you'll see the current value when you click the 'Invoke' button (the value is in bytes). 5. Now click on the 'SetMaxAttachmentSize' web service and fill in the value of your choice. 6. Reset IIS (not strictly required, honestly, but I did it just to be sure). 7. Now try attaching a file greater than the size you've set; it will be rejected, as expected. Below is the error you'd see in such a scenario. Let me know if you see any issues & I'll be happy to help..!

    Read the article

  • Error while loading shared libraries - libwebsock

    - by kittyPL
    I'm trying to set up libwebsock, a simple C websocket library. I followed the installation procedure from the INSTALL file and everything went fine. I'm able to compile the test program given in the examples. But when I want to run my executable, a wild error appears: ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory. I checked /usr/local/lib twice; libwebsock.so.1 exists and is doing very well. I also tried copying the lib to the echo folder (so it's placed next to the binary), still the same error. It's quite funny for me:
      shadowz@Ubu:~/WebSocket$ ls
      echo  echo.c  echo.cpp  libwebsock.so.1
      shadowz@Ubu:~/WebSocket$ ./echo
      ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory
    Any suggestions? I'm running out of ideas...
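
    A sketch of the usual fixes, assuming the library really was installed into /usr/local/lib (the runtime linker ignores that directory until its cache is refreshed, and copying the .so next to the binary doesn't help because ./ is not on the library search path either):

      sudo ldconfig                                   # rebuild the dynamic linker cache so /usr/local/lib is picked up
      # or, for a one-off run without touching the cache:
      LD_LIBRARY_PATH=/usr/local/lib ./echo
      # or bake the path into the binary at link time:
      gcc echo.c -o echo -L/usr/local/lib -lwebsock -Wl,-rpath,/usr/local/lib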

    Read the article

  • How to write your unit tests to switch between NUnit and MSTest

    - by Justin Jones
    On my current project I found it useful to use both NUnit and MSTest for unit testing. When using ReSharper for running unit tests, it simply works better with NUnit, and on large-scale projects NUnit tends to run faster. We would have just used NUnit for everything, but MSTest gave us a few bonuses out of the box that were hard to pass up: namely code coverage (without having to shell out thousands of extra dollars for the privilege) and tests integrated into the build process. I'm one of those guys who wants the build to fail if the unit tests don't pass. If they don't pass, there's no point in sending that build on to QA. Making the build work with MSTest is easiest if you just create a unit test project in your solution. This adds the right references and project type GUIDs in the project file so that everything automagically just works. Then (using NuGet of course) you add in NUnit. At the top of your test file, remove the using statements that refer to MSTest and replace them with the following:
      #if NUNIT
      using NUnit.Framework;
      #else
      using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute;
      using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute;
      using TestFixtureSetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
      using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
      using Microsoft.VisualStudio.TestTools.UnitTesting;
      #endif
    Basically I'm taking the NUnit naming conventions and redirecting them to MSTest. You can go the other way, of course. I only chose this direction because I had already written the tests as NUnit tests. NUnit and MSTest provide largely the same functionality with slightly differing class names. There are few actual differences between them, and I have not run into them on this project so far. To run the tests as NUnit tests, simply open up the project properties tab and add the compiler directive NUNIT. Remove it, and you're back in MSTest land.

    Read the article

  • links for 2010-06-15

    - by Bob Rhubart
    You're invited: Oracle Solaris Day, June 28th, Herzliya - Openomics: How open innovation and technology adoption translates to business value, with stories from our developer support work at Sun ISV Engineering (tags: ping.fm)
    Edwin Biemond: Enriching and Forwarding your data with the Spring Component in SOA Suite 11g PS2 - Oracle ACE Edwin Biemond describes "how easy it is to use Java in the Spring Component, how you can wire this Component to other Components, Services or References adapters." (tags: oracleace soa oracle middleware)
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 - Currency Conversions & FX Translations - Part 1 - "As part of the BI EE setup we need to ensure that such local currency transactions are converted to a common reporting currency," says Rittman Mead's Venkatakrishnan. (tags: oracle businessintelligence)
    Richard Veryard: Ecosystem SOA 2 - "What are the problems of large complex sociotechnical systems?" asks Richard Veryard. "How far do SOA and enterprise architecture help to address this problem space, and what else might we need?" (tags: soa entarch)
    Khanderao Kand: Oracle BPM Suite .. unified engine.. - "This Suite is based on the unified process foundation of Oracle Business Process Management Suite 11g. It has the same engine that executes both BPEL and BPMN processes," says Kand. (tags: bpel soa bpm oracle)
    Webcast: Revealing the Secrets that will Re-Energize your Services Strategies - Oracle's Peter Heller and Robert Covington discuss how to overcome the many unforeseen technical and organizational barriers in order to meet the high expectations of dynamic business requirements in this live webcast, July 14, 2010, 9:00 AM PDT / Noon EDT (tags: entarch oracle webcast)

    Read the article

  • CentOS 6 - YUM Local Repo - Ensure consistent package distribution

    - by Mike Purcell
    I've read a few guides outlining how to set up a local YUM repo, but none of them explicitly stated an answer to my question: if I set up a local YUM repo, does that mean that any CentOS servers which pull from said repo will never be "ahead" of the local YUM repo? I want to ensure a consistent package distribution across all my servers. Right now, when I do a yum update, even on a daily basis, the servers can be out of alignment. For example, if I run yum update on my dev server in the morning, then run yum update on one of my production servers in the afternoon, the production server may have picked up a new version of a package that the dev server did not pick up, due to the time window between the update commands. Rather, I'd prefer to run yum update on my dev server, which has access to the remote upstream yum repos, then let it sit for 2 weeks, after which I run yum update on my production servers against the local repo on my dev server.
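
    A minimal sketch of that workflow, assuming the dev server mirrors upstream with reposync/createrepo (from the yum-utils and createrepo packages) and serves the mirror over HTTP; repo IDs, paths and the hostname below are illustrative:

      # on the dev server: pull packages from upstream and (re)build repo metadata
      reposync -p /var/www/html/repo/centos6 --repoid=base --repoid=updates
      createrepo /var/www/html/repo/centos6

      # on each production server: /etc/yum.repos.d/local.repo
      # [local-mirror]
      # name=Local CentOS 6 mirror (frozen snapshot)
      # baseurl=http://devserver.example.com/repo/centos6
      # gpgcheck=1
      # enabled=1

    Production boxes pointed only at that baseurl can never move past whatever the mirror contained the last time createrepo ran, which gives exactly the two-week freeze described above.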

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs, since the vertices of my objects basically never change, so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs, but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes: I'm basing my work on this tutorial, and therefore am also using "IBOs"; I create my buffers before rendering begins; my buffers include vertex and texture data; I'm using OpenGL ES 1.1. Fearing some strange effect of the current matrix GL state at the time of buffer creation, I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block, which (as expected) had no effect. I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

    Read the article

  • How can I make a vpn network login the default behavior for logging into Windows?

    - by Danny
    To log in to the machine, I have to log in to our domain. When I am at work, the unauthenticated wireless permits access to the domain. However, the internet is not available until I connect via the VPN. From home, I have to connect via the VPN first, then I can log in to the domain. I have successfully set up a network logon with the VPN (following the directions found here), and for the most part it works correctly. (There is an issue with logout/login I haven't figured out just yet.) As I currently have to Switch User and select the Network Login button, I'd like to know if it is possible to make the network login the default behavior when logging into the system. This is more of a usability issue than anything else.

    Read the article

  • Installing Office 2010 through group policy without an msi

    - by Ri Caragol
    I have been breaking my head for several days now trying to install Microsoft Office 2010 through group policy. Unfortunately Microsoft decided it would be fun to release Office without an MSI, so I either 1) need to create an MSI for it or 2) need to install it through a logon script that would run the setup.exe from a network location. Any advice would be greatly appreciated. I tried to create a script, but even though it runs properly when I double-click it, it does not seem to kick in when users log in or when the machine is turned on. Also, is there an easy way to create an MSI? Thanks! -Ri

    Read the article

  • redundant/multi-site terminal server

    - by Adam
    Hi. We have a Hyper-V cluster running 5 virtual terminal servers using HA. We need to be able to make this system redundant, so that if this site were to fail our users could log into the backup system at another location and access their data via the terminal servers. Any ideas? We were thinking of maybe using a NAS which replicates the data to the other location in real time (pass-through disks), and having a similar Hyper-V cluster set up in the backup location. However, we would need to create the users in both locations and create a virtual mirror without the data, i.e. applications, directories, settings etc. Is this the best way to achieve this? We have read that using Hyper-V pass-through disks is a big performance degradation.

    Read the article

  • Installing linux on a crippled machine via network boot?

    - by networkbooter
    I have a somewhat ancient Toshiba laptop (which can't boot from USB) that I want to install Linux on (probably Ubuntu). I'm currently running Windows XP and Ubuntu via Wubi. I want to delete these OSs and replace them with Ubuntu only. The laptop does have a network boot option. I'm wondering if the easiest way might be to set up a network boot server on my other computer (which runs Ubuntu) and boot the laptop from it. Could this allow me to install Linux on the laptop? I can't seem to find instructions on the 'net as to how to go about doing this.
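
    A rough sketch of one common way to do this on an Ubuntu machine, using dnsmasq as a combined proxy-DHCP/TFTP server plus the Ubuntu netboot installer files; the config values are assumptions to adapt to your own subnet:

      sudo apt-get install dnsmasq
      sudo mkdir -p /srv/tftp
      # unpack the Ubuntu netboot tarball (pxelinux.0, kernel, initrd) into /srv/tftp

      # additions to /etc/dnsmasq.conf (proxy mode leaves your router's DHCP in charge of addresses):
      # dhcp-range=192.168.1.0,proxy
      # dhcp-boot=pxelinux.0
      # enable-tftp
      # tftp-root=/srv/tftp
      sudo service dnsmasq restart

    Then choose network boot on the laptop; it should pull pxelinux from the other computer and drop straight into the normal Ubuntu installer.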

    Read the article

  • Virtualized data centre - Part three: Architecture

    - by marc dekeyser
    Having the basics (like discussed in the previous articles) is all good and well, but how do we get started on this?! It can be quite daunting after all! From my own point of view I can absolutely confirm your worries and concerns, but also tell you that it is not as hard as it seems! Deciding on what kind of motherboard to buy, which processor and how much memory is an activity you will spend quite some time researching. And that is not even mentioning storage! All in all it comes down to setting your expectations and your budget. Probably adjusting your expectations according to your budget :).
    Processors: As a rule of thumb you want VT-d (virtualization) technology built into the processor, allowing you to run 64-bit machines on your host.
    Memory: The more the better! If you are building a home lab, don't bother with ECC unless you are going to run machines that absolutely should be on all the time and your comfort depends on it!
    Motherboard: Depends on what you are going to do with storage. If you are going the NAS way, then the number of SATA ports/RAID capabilities does not really matter. If you decide to have a single server with lots of dedicated storage, it obviously matters how many SATA ports you will have; alternatively you could use a RAID controller (but these set you back a pretty penny if you want one - DELL 6i's are usually available for a good bargain if you can find one!). Easiest is to get one with a built-in (on-board) graphics card, as a separate card just adds more heat, power usage and possible points of failure.
    Networking: Just like your choice of motherboard, the networking side tends to depend on how you want to go. A single virtualization host with local storage can usually get away with having a single network card; a cluster or a server which uses iSCSI storage tends to have more than one, teamed up :).
    Storage: The dreaded beast from the dark! The horror which lives in the forest! The most difficult decision you are going to make in the building of your lab. Why, you might ask? Simple, my friend: the right choice of storage can make or break your virtualization solution. The performance of your storage choice will have an important impact on the responsiveness of your virtual machines and the deployment of new machines. It also makes a run at your budget! If you decide to go the NAS route you will be dropping a lot more money than if you just have a bunch of disks sitting in a server and manually distribute the virtual machines over the disks.
    Platform: I'm a Microsoftee so Hyper-V is a dead giveaway for me. If you are interested in using VMware I won't stop you, but the rest of my posts will be oriented on Server 2012 Hyper-V (aka 3.0)!
    What did I use? Before someone asks me this in the comments, I'll give you a quick rundown of what I am using:
      - Intel 2.4 GHz quad core processors (i something something)
      - 24 GB DDR3 memory
      - a single disk in each server (might look at this as I move the servers to 2012)
      - Synology DS1812+ NAS
      - 3 network interfaces where possible
      - HP 1800 ProCurve managed switch
    I decided to spring for the NAS as I will also be using it for backups and media storage (which is working out quite nicely with my Xbox 360, I must say). At the time of building my 2 boxes (over a year and a half ago) these set me back about 900 euros each, so I can imagine you can build the same or better for a lower price. The next article will be diagramming what I want to achieve and starting a build on the Hyper-V 3.0 cluster!

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder whether the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX/OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked. I am a self-proclaimed die-hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime). In the late 90s, a revolution called accelerated graphics started in DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, there started a race: a race between GFX giants ATI and NVIDIA to beat each other on frame rates and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power and memory. Every eagerly awaited title started demanding more muscle in graphics and PC hardware. The latest games and liquid-smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn't afford this route, started exploring over-clocking, optimized hardware cooling... etc. to pursue the passion. The ever-rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones on a tight budget preferred hardware upgrades over a legal copy of the game. It was also fueled by the emergence of P2P file sharing networks. Then came the era of the Xbox and PS3s. It solved the major issue of hardware standardization and provided an alternative to ever-increasing hardware costs. I have always admired these consoles, but being born and brought up in a keyboard/mouse environment, I still find it difficult to play first person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is... PC gamers deserve an equally democratized solution. This is where I think cloud computing can come to the rescue. It can minimize hardware requirements. Virtually end software piracy and rationalize costs for gamers. Subscription-based models like pay-as-you-play. In-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on nightmare in Quake III, time and again!). Easy deployment of patches and fixes. Better game AI. The list goes on and on… Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back to the faces of budget gamers with the help of cloud computing.

    Read the article

  • Encrypt SSD or not?

    - by JamesBradbury
    My desktop machine is running Ubuntu 12.04 (and will probably stay with it until the next LTS). I've got a new 120GB SSD on the way, to sit alongside my existing 420GB spinning disk. If it makes any difference, I'll be dual-booting with Windows 7 across both disks too. I've read some helpful answers here about /home setup and enabling TRIM, which I intend to follow. So most of my /home will be on the SSD, with only photos, videos and music on the spinning disk. The question is whether I should encrypt the SSD when I reinstall Ubuntu from CD or USB. Specifically: I'm reading that drive wear isn't much of an issue with modern SSDs, as they last decades even if you spam them. Is this true? How big a performance reduction will encrypting cause (I have an i7 Sandy Bridge, so I guess it can cope)? Is it more important from a security point of view to encrypt an SSD? I think I read somewhere that it may be hard to reliably wipe data. By all means answer even if you only know about one of those things.
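
    On the performance question, a quick sketch of how to get concrete numbers on your own CPU before deciding (hardware AES support is what makes the overhead small on Sandy Bridge parts):

      grep -m1 -o aes /proc/cpuinfo        # prints "aes" if the CPU exposes AES-NI, which Sandy Bridge i7s do
      openssl speed -evp aes-256-cbc       # rough in-memory AES throughput; compare it against the SSD's sequential speed

    If the AES throughput comfortably exceeds what the SSD can read or write sequentially, the CPU is unlikely to be the bottleneck for encrypted I/O.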

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, failing if the file still isn't found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may, for a number of reasons, not have the benefit of the setup performed by a complex build process, and has to make the best of what it is given. In other words, it may encounter an include of a header file that is not present in the include paths it knows about, and has to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm: 1. Start in the directory of the including file. 2. Is the header file found in the current directory or any subdirectory thereof? If so, done. 3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it. 4. Otherwise move to the parent of the current directory and go to step 2. Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
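
    For reference, a shell sketch of that walk-up-and-search loop (the header name and starting path are hypothetical placeholders; a real analyzer would do this in-process rather than shelling out):

      HEADER="config.h"                                  # hypothetical header being searched for
      dir=$(dirname "/path/to/including/file.c")         # hypothetical starting directory
      while true; do
          hit=$(find "$dir" -name "$HEADER" -print -quit 2>/dev/null)
          if [ -n "$hit" ]; then echo "found: $hit"; break; fi            # step 2: search dir and all subdirectories
          if [ "$dir" = "/" ]; then echo "not on this machine"; break; fi # step 3: give up at the filesystem root
          dir=$(dirname "$dir")                                           # step 4: move to the parent and repeat
      done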

    Read the article

  • Ubuntu 12.04 Server: permissions on /var/www for newly copied files

    - by Abe
    I ran the following commands to set up permissions on the /var/www folder in my Ubuntu 12.04 Server:
      sudo usermod -g www-data abe
      sudo chown -R www-data:www-data /var/www
      sudo chmod -R 775 /var/www
    I downloaded WordPress using wget in my /var/www folder and unzipped the downloaded file:
      cd /var/www
      wget http://wordpress.org/latest.zip
      mv latest.zip wordpress.zip
      unzip wordpress.zip
    I created a new database and user in MySQL and attempted to run the setup process through the web interface. When I enter the configuration info in WordPress I run into the following error message: "Sorry, but I can't write the wp-config.php file." When I run ls -la, I see that the files are owned by my user abe, but they are part of the group www-data. Would I have to run the chmod command every time I copy new files to /var/www?
      sudo chmod -R 775 /var/www
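
    One possible fix, sketched under the assumption that the goal is for files created by either abe or the web server to stay writable by the www-data group (also note the usermod -g change only takes effect after logging out and back in):

      sudo chown -R abe:www-data /var/www
      sudo find /var/www -type d -exec chmod 2775 {} \;   # the leading 2 is setgid: new files and dirs inherit the www-data group
      sudo find /var/www -type f -exec chmod 664 {} \;
      # wp-config.php is written by PHP running as www-data, so the wordpress directory itself must be group-writable

    With the setgid bit on the directories, freshly copied files keep the right group automatically, so the recursive chmod does not have to be repeated every time.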

    Read the article

  • Using EC2 instance as main development platform

    - by David
    My problem: I am working as a consultant for various companies. Each company provides me with a laptop with their software on it, and I also have my own, where I have my development environment. I tend to buy a new laptop every second year and find myself spending lots of time configuring and installing software. I also spend a lot of time waiting for my laptop to process things. To solve all these issues, I am now considering using EC2 (running Windows instances) as my main development platform and just accessing it from whatever PC I happen to be at. I calculated that running the Large instance (the cheapest 64-bit option) for 8 hours a day costs me $960 per year, which is acceptable. I imagine that when I approach the workplace each day, I will make a single tap on my phone to fire up the instance, so it is ready when I get to work. I should have different icons on my phone to fire up the various instance types. The same software should of course automatically be loaded on the various hardware (sometimes I would even need their instance with 68.4 GB of memory). Another advantage is that if I am having a specific problem with my instance, I could fire up another instance and have someone look into the problem and update the image. My question: does anyone have experience with such a setup on EC2? What kind of problems do you foresee?
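
    The start/stop part is easily scriptable; a minimal sketch using the AWS CLI (the instance ID is a placeholder, and a phone shortcut or small web hook would just wrap these calls; the older EC2 API tools offer equivalent ec2-start-instances / ec2-stop-instances commands):

      aws ec2 start-instances --instance-ids i-0123456789abcdef0      # boot the dev box before heading to work
      aws ec2 stop-instances  --instance-ids i-0123456789abcdef0      # stop it at night so you only pay for ~8 hours/day
      aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
          --query 'Reservations[0].Instances[0].PublicDnsName'        # find the hostname to point RDP at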

    Read the article

  • Puppet claims to be unable to resolve domains even if domain properly resolves

    - by gparent
    I have a fairly simple Puppet setup: one master and one node, both running Debian Squeeze 6.0.4. I have DNS entries for the two machines, client and master respectively. Both the client's and the master's DNS entries resolve correctly on both machines to the right IPs. On my client, I have this configuration:
      [main]
      server = master.example.org
      logdir=/var/log/puppet
      vardir=/var/lib/puppet
      ssldir=/var/lib/puppet/ssl
      rundir=/var/run/puppet
      factpath=$vardir/lib/facter
      pluginsync=true
      templatedir=/var/lib/puppet/templates
    Key exchange seems to fail, according to this message in /var/log/syslog: localhost puppet-agent[11364]: Could not request certificate: getaddrinfo: Name or service not known. Why is resolution not working only for Puppet?
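
    A debugging sketch that often narrows this down: getaddrinfo (what the Puppet agent uses) goes through /etc/nsswitch.conf and /etc/hosts, while dig/host query DNS directly, so the two paths can disagree:

      getent hosts master.example.org         # resolves via the same NSS path the agent uses
      host master.example.org                 # resolves via DNS only
      cat /etc/resolv.conf /etc/nsswitch.conf /etc/hosts   # look for a stale hosts entry or a bad "hosts:" line
      sudo puppet agent --test --server master.example.org # re-run the agent in the foreground with the server spelled out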

    Read the article

  • Monitor Bonded Interface for Disconnection

    - by bradlis7
    I am trying to monitor for network failures on a machine, and one part of that is checking that interfaces which are intended to be active also show as "RUNNING". An Ethernet port, such as eth0, will say "RUNNING" if it is physically connected to another device. The problem lies in the bonded interfaces, such as bond0. If all of the Ethernet devices are disconnected, it still says that it is running, and it is still pingable. Is this by design, or is my system set up incorrectly? Does the miimon option have something to do with this?
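
    One way to see what the bonding driver itself thinks, as a sketch (assumes the bond device is bond0; without a non-zero miimon the driver never polls the slaves' link state at all):

      cat /proc/net/bonding/bond0             # per-slave "MII Status: up/down" plus the bond's currently active slave
      cat /sys/class/net/bond0/operstate      # "up" or "down" as the kernel reports it for the bond itself
      cat /sys/class/net/bond0/bonding/miimon # current link-monitoring interval in ms; 0 means link monitoring is off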

    Read the article

  • Is vSphere's Data Recovery appliance 'production-ready'?

    - by Chopper3
    I have a smallish lab environment (16 x ESX4iU1 hosts and VC4U1) that I periodically want to back up. Normally in production we snap to secondary SAN boxes and then have disk-based VTL backups via NetBackup which eventually migrate to off-site removable disks, but this seems like overkill for my own kit. I've spent a bit of time with vSphere's 'Data Recovery' appliance; it was easy enough to set up and I've not really run into any issues with it, but that doesn't mean I trust it fully. Have you had any experiences with it, positive or negative, that would help me decide whether to trust it or pay Symantec for more licences? Thank you in advance.

    Read the article

  • Replicating EFS encrypted files

    - by floyd
    Recently I attempted to configure Microsoft's DFSR on Windows 2008 R2 to replicate a folder which was encrypted with EFS. The setup gave no errors or warnings, but later I read that DFSR does not support EFS in any way. There were also events in the DFSR event log indicating that an encrypted file was found and won't be replicated. http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_052 My question is: are there any tools that would allow this to occur? Software-based preferably. This would be replicating over a LAN to a destination node, for two servers on the same domain.

    Read the article
