Search Results

Search found 10963 results on 439 pages for 'documentation generation'.


  • external postfix forwarding to zimbra server

    - by Marko
    I want to migrate the mail for my domain mydomain.com away from my current mail server (old_server), which runs Postfix+LDAP+Cyrus, to a Zimbra server (zimbra). In the first phase I am considering leaving the current mail server running and forwarding only a subset of email addresses to the Zimbra server. Zimbra's documentation appears to refer to this setup as an 'edge MTA'.

        Current config:
            mydomain.com MX: old_server
            sender ---- smtp send ----> old_server (smtp receive)

        New config:
            mydomain.com MX: old_server
            sender ---- smtp send ----> old_server (smtp receive) ---- forward ----> zimbra (smtp receive)

    I need the following: old_server should receive mail for my domain as before, but some of the email addresses should be delivered on to the zimbra server, and I should be able to determine which addresses are forwarded. I would also like to avoid false spam detections for mail from mydomain.com caused by this setup. Questions:

    1. How should I configure Postfix on old_server to support this mail forwarding?
    2. To avoid false spam detection, should outgoing mail from mydomain.com be sent by zimbra, or should I use old_server?
    3. Is there anything extra I need to do to avoid the possibility of my outgoing mail being marked as spam on other servers?
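
    The approach I'm currently considering, sketched with placeholder names (the Zimbra hostname and the two addresses are hypothetical), is a per-recipient transport map on old_server:

        # /etc/postfix/main.cf on old_server
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport -- relay only the migrated mailboxes
        alice@mydomain.com    smtp:[zimbra.mydomain.com]
        bob@mydomain.com      smtp:[zimbra.mydomain.com]

        # rebuild the lookup table and reload:
        #   postmap /etc/postfix/transport && postfix reload

    The square brackets suppress MX lookups, so mail is handed straight to the Zimbra box. Since old_server stays the public MX and the forwarding is internal, outbound mail can keep leaving via whichever host the domain's SPF record already names, which should help with the spam question.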


  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup it fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis, so I am changing the settings on the Application Pool to make sure IIS does not recycle the application. So far I've changed the following:

    - Limit Interval under CPU from 5 to 0.
    - Idle Time-out under Process Model from 20 to 0.
    - Regular Time Interval under Recycling from 1740 to 0.

    Will this be enough? I also have specific questions about the items I changed:

    1. What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled?
    2. What exactly does "recycled" mean? Is the application completely torn down and started up again?
    3. What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for Idle Time-out under Process Model talks about shutting down the worker process, while the docs for Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two: I thought w3wp.exe is the worker process which runs the application pool. Can someone explain what the difference means for the application?

    The reason for having both IIS7 and IIS7.5 tags is that the app will run on both, and I hope the answers are the same between the versions.
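
    For reference, here are the same three settings expressed as appcmd commands, as I understand the knobs (the pool name "MyAppPool" is hypothetical):

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" ^
            /cpu.limit:0 ^
            /processModel.idleTimeout:00:00:00 ^
            /recycling.periodicRestart.time:00:00:00

    A zero timespan disables the idle shutdown and the periodic recycle, and cpu.limit:0 disables the CPU limit entirely.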


  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

        58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL in my access.log I assume it is log spam from people trying to get me to visit their site, and such entries are normally followed by a 404 response. The entry above was followed by a 200 'success' response! Some searching suggests this can occur when someone is trying to use your server as a proxy. This disturbed me more, especially because the URL in question has the word proxy in it. Visiting proxyproxys.com (via hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

        HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
        HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
        HTTP_CONNECTION=close
        REMOTE_PORT=56355
        REMOTE_HOST=74.63.112.142
        REMOTE_ADDR=74.63.112.142
        CS_ProxyJudge Result=HIGH_ANONYMITY

    Questions:
    1. Does the 200 success mean that someone was able to successfully use my server as a proxy?
    2. Are there other means of confirming whether my server is being used as a proxy?
    3. Can you refer me to documentation to help 'close up' the security gap, if there is one?

    Thanks.
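
    One check I'm considering, to confirm things either way: from an outside host, ask my own server to fetch a third-party URL as if it were a proxy (203.0.113.10 is a placeholder for my server's address):

        curl -sI -x http://203.0.113.10:80 http://example.com/

    If that returns example.com's headers with a 200, the server really is relaying. For Apache, my understanding is the relevant lockdown is making sure mod_proxy is not open to the world (ProxyRequests Off, and no permissive <Proxy *> block).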


  • Plesk 10 - creating and using vhost.conf

    - by MrFidge
    I'm having some issues setting up and using a vhost.conf for one of my domains. So far none of the domains have required any extra configuration, but now I need to use a PEAR module, so I'm looking to include /usr/share/pear in the PHP settings for the domain. The vhost file was created in /var/www/vhosts/domain.com/conf/vhost.conf:

        <Directory /var/www/vhosts/domain.com/httpdocs>
            php_admin_value include_path ".:/usr/share/pear"
        </Directory>

    I then restart Plesk using:

        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=domain.com

    Or, as Plesk says that command is obsolete in Plesk 10, I've tried using:

        /usr/local/psa/admin/sbin/httpdmng --reconfigure-domain domain.com

    And for good luck I've restarted Apache too each time. Net result: none of the PEAR includes work unless I edit the include_path in /etc/php.ini! Any tips on how to get this MOFO working? I've had a look through the documentation, but TBH I just don't have time to read 40 pages of Plesk manual for one line of config; this can't be that hard, surely! Thanks for any pointers, H
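
    A quick probe I can run to see whether the vhost.conf is being picked up at all, independent of PEAR (the probe filename is arbitrary):

        # drop a one-liner into the domain's docroot and fetch it
        echo '<?php echo get_include_path();' \
            > /var/www/vhosts/domain.com/httpdocs/incpath.php
        curl http://domain.com/incpath.php

    If /usr/share/pear is missing from the output, Apache never loaded the vhost.conf at all (wrong filename or location, or the reconfigure step didn't regenerate the domain's includes).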


  • Socket options for a tcp server with 3G clients & frequent disconnections

    - by Joel
    I have a TCP server, written in Java, sending and receiving many short messages, from 500 bytes to 100 KB long. To keep it simple: it's a chess game and chat server. The server runs Debian 6. Half of the clients connect from 3G networks, and half over standard DSL. A portion of the 3G clients lose their connection pretty often; the error I get on the server and on the client socket is "Connection reset". I have come across this page in the Oracle documentation: socket options. I am wondering what I could tune there to lower the number of disconnections from 3G clients. I don't care about ping or transfer rate, just about the TCP disconnections. I am not skilled enough to understand the impact of each setting, but I sort of understood that the TCP window is important, although I don't know exactly how. So I'm asking if anyone here has an idea? Thanks if you can help.
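
    For context, the knobs I've found so far are the kernel's TCP keepalive timers on the Debian side; a sketch of what I might try, with illustrative values:

        sysctl -w net.ipv4.tcp_keepalive_time=60     # idle seconds before the first probe (default 7200)
        sysctl -w net.ipv4.tcp_keepalive_intvl=10    # seconds between probes
        sysctl -w net.ipv4.tcp_keepalive_probes=5    # failed probes before the kernel gives up
        # persist the values in /etc/sysctl.conf; note they only affect
        # sockets that enable keepalive, i.e. setKeepAlive(true) in Java

    Whether shorter or longer probing helps on 3G NATs is exactly what I'm unsure about.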


  • Requiring SSH-key Login From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box when I was trying to set this up, I'm acutely aware of the chance of screwing myself while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config.

    Edit: Alternatively, is there a way to specify that certain users must authenticate with SSH keys, and others may authenticate with name/password?

    Solution that's currently working:

        # Globally deny logon via password, only allow SSH-key login.
        PasswordAuthentication no

        # But allow connections from the LAN to use passwords.
        Match Address 192.168.*.*
            PasswordAuthentication yes

    The Match Address block can also usefully be a Match User block, answering my secondary question. For now I'm just chalking the failure to parse CIDR addresses up to a quirk of my install, and resolving to try again when I go to Ubuntu 10.04 not too long from now. PAM turns out not to be necessary.
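
    For the record, the per-user variant mentioned above looks like this in sshd_config (the account names are hypothetical):

        PasswordAuthentication no

        Match User deploy,backupuser
            PasswordAuthentication yes

    A Match block stays in effect until the next Match directive or the end of the file, so any per-user overrides belong at the bottom of the config.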


  • What is the correct iptables rule when NATing multiple private subnets?

    - by Jose Mendez
    I have a CentOS 6.5 minimal install acting as a router. eth0 is connected to a Cisco switch trunk port, allowing VLANs 200-213. I have several VLAN interfaces, just as this link suggests: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html, and I have IPv4 forwarding enabled, so all my network devices on any of the networks 200-213 can communicate with each other using this Linux box as their router. The problem is, I need them to access the Internet, so I added the following rule:

        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -j SNAT --to 1.1.1.56

    1.1.1.56 is the "outside" address. This works fine: devices connected to the internal networks can ping Internet addresses. BUT they stop being able to talk to each other across subnets, so 192.168.211.55 can ping 8.8.8.8 but can't talk to 192.168.213.5. As soon as I do a service iptables restart to remove the rule, I can talk across internal subnets again. What would be the correct way to set up NAT for multiple private subnets? Or maybe the correct way to set up forwarding?
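
    For reference, the two candidate fixes I've pieced together, corrections welcome (eth1 is a placeholder for whatever interface actually carries 1.1.1.56):

        # variant 1: only NAT packets that exit through the outside interface
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth1 -j SNAT --to-source 1.1.1.56

        # variant 2: explicitly skip NAT for internal-to-internal traffic first
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -d 192.168.0.0/16 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -j SNAT --to-source 1.1.1.56

    In the nat table an ACCEPT just stops further NAT rule processing for that connection, so variant 2 leaves inter-VLAN traffic untranslated while still SNATing everything bound for the Internet.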


  • Getting a TTY in a Connectback Shell

    - by Asad R.
    I'm often asked by friends to help with small Linux problems, and more often than not I'm required to log in to the remote system. Usually there are a lot of issues with making an account and logging in (sometimes the box is behind a NAT device, sometimes SSHD isn't installed, etc.), so I usually just ask them to make a connect-back shell using netcat (nc -e /bin/bash <my-host> <port>). If they don't have netcat, I can ask them to grab a copy of a statically compiled binary, which isn't that hard or time-consuming to download and run. Though this works well enough for me to enter simple commands, I can't run any apps that require a tty (vi, for example) and can't use any job-control functions. I managed to bypass this issue by running in.telnetd with a few arguments within the connect-back shell, which would assign me a terminal and drop me to a shell. Unfortunately in.telnetd isn't usually installed by default on most systems. What's the easiest way to get a fully functional connect-back terminal shell without requiring any non-standard packages? (A small C program that does the job would be fine as well; I just can't seem to find much documentation on how a TTY is assigned/allocated. A solution that doesn't require me to plough through the source code for SSHD and TELNETD would be nice :))
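
    For reference, two tricks I've seen cited, hedged on what the remote box actually has installed:

        # if python is present:
        python -c 'import pty; pty.spawn("/bin/bash")'

        # or, with util-linux's script(1):
        script -qc /bin/bash /dev/null

    Both allocate a pseudo-terminal pair (openpty()/forkpty() underneath) and attach a shell to the slave end, which is essentially the part in.telnetd was doing for me.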


  • Mod_Perl configuration for multiple domains

    - by daliaessam
    Reading the mod_perl module documentation: can we configure it on a per-domain basis, i.e. configure it to run on every domain, or on specific domains only? What I see in the docs is:

        Registry Scripts

        To enable registry scripts add to httpd.conf:

            Alias /perl/ /home/httpd/2.0/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>

        and now assuming that we have the following script:

            #!/usr/bin/perl
            print "Content-type: text/plain\n\n";
            print "mod_perl 2.0 rocks!\n";

        saved in /home/httpd/httpd-2.0/perl/rock.pl. Make the script executable and readable by everybody:

            % chmod a+rx /home/httpd/httpd-2.0/perl/rock.pl

        Of course the path to the script should be readable by the server too. In the real world you probably want to have tighter permissions, but for the purpose of testing that things are working, this is just fine.

    From what I understand, we can run Perl scripts only from the one specific folder the directive above points at. So the question again: can we make this directive per-domain, either for all domains or for a specific set of domains?
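
    What I think should work, though I'd like confirmation, is scoping the same Location block inside a VirtualHost (the domain and paths are placeholders):

        <VirtualHost *:80>
            ServerName www.example.com
            Alias /perl/ /home/httpd/2.0/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>
        </VirtualHost>

    Domains whose VirtualHost omits the block would then not execute registry scripts at all, which would give the per-domain behaviour.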


  • Change XRDP keyboard layout to en-gb Ubuntu 12.04

    - by Earl Sven
    Does anybody know how to change the keyboard layout to en-gb in an XRDP session on Ubuntu 12.04? I am using mstsc.exe to connect to an XRDP server hosting an Xvnc session, but I cannot work out how to apply the UK keyboard layout. A bit of googling has yielded these instructions, which let me change the keymap; however, using the keymap file I downloaded from here, I lose the ability to use the arrow keys, home/end, etc. Comparing the file with the standard one, there are substantially more differences than I would expect considering the similarity between the layouts. I only have RDP access to the box, so I don't seem to be able to actually generate a new layout per the instructions above; maybe it's a local-console thing? Also, I can't change either the RDP client or the RDP server, as they are my only access to the system; I don't have local console access. I do have root privileges on the OS, however. Any thoughts?

    Edit: I have found http://xrdp.sourceforge.net/documents/keymap/newkeymap.html, the documentation on the XRDP SourceForge page describing the keymap file format. It indicates the values in the keymap files are unicode, 0x64 etc., but the files I already have on my system seem to use a different format, 0:0 or 65307:27 etc. Does anybody know what the difference is?
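
    If xrdp's keymap generator is installed, I gather a keymap can be generated from inside the existing Xvnc session, with no local console needed; a sketch of what I plan to try (0809 being the RDP layout code mstsc sends for a UK keyboard):

        setxkbmap -layout gb                  # switch the running X session to en-gb
        xrdp-genkeymap /etc/xrdp/km-0809.ini  # dump the session's keymap in xrdp's format
        # then reconnect so the new km file is read

    That would also explain the Edit above: the km-*.ini files use xrdp's own keysym:unicode pair format rather than the raw unicode values the older document describes.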


  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically colocated. In this case I am using MongoDB.

    - The stack is Linux/Passenger/Ruby/Rails/MongoDB.
    - The database is write-heavy and read-light.
    - The infrastructure is Amazon EC2.
    - The application layer must be physically located in 3 or more different locations. I can't justify this requirement further than: it is a requirement.
    - The database, however, needn't be located in more than one location if it can be written to quickly from the other locations.

    From reading Mongo's documentation, replication seems like more of a candidate than sharding, because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.
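
    The direction I'm leaning, sketched with placeholder hostnames: one replica set whose member priorities pin the primary (and therefore all writes) to a single region:

        mongo --eval 'rs.initiate({
          _id: "appset",
          members: [
            { _id: 0, host: "db-region-a.example.com:27017", priority: 2 },
            { _id: 1, host: "db-region-b.example.com:27017", priority: 1 },
            { _id: 2, host: "db-region-c.example.com:27017", priority: 0.5 }
          ]
        })'

    Secondaries replicate asynchronously, so the remote members shouldn't slow the primary down, but writes issued from a remote application server still have to cross the WAN to reach the primary, which is the latency I can't see a way around.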


  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need Nginx to serve a file relative to the document root if it exists, then fall back to an upstream server if it doesn't. This can be accomplished with something like:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www/nginx/;
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }

    Fair enough. The problem is, my upstream is not serving the contents of the URI directly but instead returning X-Accel-Redirect with a location relative to the document root (it generates this file on the fly):

        % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg
        HTTP/1.0 200 OK
        Date: Mon, 26 Nov 2012 20:58:25 GMT
        Server: WSGIServer/0.1 Python/2.7.2
        X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg
        Content-Type: text/html; charset=utf-8

    Apparently, this should work. The problem, though, is that Nginx tries to serve this file from some internal default document root instead of using the one specified in the location block:

        2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80"

    How do I force Nginx to serve the file relative to the right document root? According to the XSendfile documentation the returned path should be relative, so my upstream is doing the right thing.
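
    For reference, the workaround I'm experimenting with: move root up to the server level so the internal redirect inherits it, and have the upstream return an absolute URI path (leading slash):

        server {
            listen 80;
            server_name localhost;
            root /var/www/nginx/;

            location / {
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                proxy_pass http://127.0.0.1:8000;
            }
        }

        # upstream response header, note the leading slash:
        #   X-Accel-Redirect: /animals/kitten.jpg__100x100__crop.jpg

    The mangled path in the error log ("htmlanimals/...") looks like the tell-tale of a redirect path with no leading slash being appended to the compiled-in default root.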


  • Difference between accessing a website using Local host and IP address

    - by Cdeez
    I have developed an ASP.NET website and deployed it to my IIS server. To check that IIS is installed correctly, I type localhost in my address bar and get the IIS welcome screen, with its documentation in a separate window. Browsing to http://localhost/mysites/site2/Default.aspx works, and using my IP address instead of localhost, http://192.168.1.46/mysites/site2/Default.aspx, also works. Just out of curiosity I wanted to see what happens when I give my IP address in the address bar, and it asks me for a username and password, saying: "The server 192.168.1.46:80 requires a user name and password." I do not know what username and password it is asking for, and as far as I knew, localhost points to my own IP address internally. So what is the difference, and what username and password do I need?

    Update: On Chrome and IE, localhost displays the welcome screen, but on Mozilla, localhost also asks for a username and password.
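
    A check I can run in an elevated prompt on the IIS box, to see which authentication schemes are actually enabled (the site name is a guess):

        %windir%\system32\inetsrv\appcmd list config "Default Web Site" /section:anonymousAuthentication
        %windir%\system32\inetsrv\appcmd list config "Default Web Site" /section:windowsAuthentication

    One common explanation I've read: with Windows Integrated auth enabled, IE and Chrome treat localhost as the intranet zone and send your Windows credentials silently, but a bare IP address is not in that zone, so a prompt appears; Firefox doesn't do the silent negotiation by default, which would match the Update above.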


  • Logging hurts MySQL performance - but, why?

    - by jimbo
    I'm quite surprised that I can't see an answer to this anywhere on the site already, nor in the MySQL documentation (section 5.2 seems to have logging otherwise well covered!) If I enable binlogs, I see a small performance hit (subjectively), which is to be expected with a little extra IO -- but when I enable a general query log, I see an enormous performance hit (double the time to run queries, or worse), way in excess of what I see with binlogs. Of course I'm now logging every SELECT as well as every UPDATE/INSERT, but, other daemons record their every request (Apache, Exim) without grinding to a halt. Am I just seeing the effects of being close to a performance "tipping point" when it comes to IO, or is there something fundamentally difficult about logging queries that causes this to happen? I'd love to be able to log all queries to make development easier, but I can't justify the kind of hardware it feels like we'd need to get performance back up with general query logging on. I do, of course, log slow queries, and there's negligible improvement in general usage if I disable this. (All of this is on Ubuntu 10.04 LTS, MySQLd 5.1.49, but research suggests this is a fairly universal issue)
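
    One mitigation I'm considering: since 5.1 the general log can be toggled at runtime, so it could be switched on only for short development windows instead of paying the cost permanently:

        SET GLOBAL log_output = 'FILE';
        SET GLOBAL general_log_file = '/var/log/mysql/general.log';
        SET GLOBAL general_log = 'ON';   -- capture a short window of traffic
        -- ... reproduce the behaviour under test ...
        SET GLOBAL general_log = 'OFF';

    Part of the cost difference versus the binlog, as I understand it, is structural: the binlog records only data-changing statements, while the general log funnels every statement, SELECTs included, through a single synchronous write path.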


  • mrepo and grouplist/groupinstall?, mrepo not working as expected with group

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
        Here is an example of a repository with a groups file. Note that the
        groups file should be in the same directory as the rpm packages
        (i.e. /path/to/rpms/comps.xml).

            createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    Lots of output, no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo
        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot
        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
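
    A first check I've sketched out (paths follow my layout; adjust to yours): is the group metadata actually referenced in the repodata the clients read, and is the client cache stale?

        # the comps file should appear as a <data type="group"> entry in repomd.xml
        grep 'type="group"' /var/mrepo/SL6-x86_64/os/Packages/repodata/repomd.xml

        # on a client, force a re-fetch of the repo metadata
        yum clean all && yum grouplist

    If the grep finds nothing, createrepo was run against a different directory than the one the clients' repo URL actually serves.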


  • BIOS corrupted? How to proceed? Acer Travelmate 290

    - by dtlussier
    I have an older Acer Travelmate 290 (manufactured in 2002 or 2003) which I recently tried to upgrade to Ubuntu 9.10. After the upgrade it appeared I had a problem with my X server configuration: on the first reboot post-install I heard the Ubuntu startup sound but had a black screen. I thought I would reboot again and drop down into text mode to troubleshoot the X configuration problem. However, something went wrong during that reboot, and since then, when I start the machine I get absolutely nothing beyond the first hardware check (the HDD light flashes, the CD/DVD drive spins, etc.). Other than that the screen remains totally black and I have no HDD or processor activity at all. I have tried restarting it a number of times, holding down all kinds of key combinations to try to coax it into the BIOS (if possible), with no luck. I have also tried putting in both a live Linux disc and a Windows install disk, without any luck; with a disk in the drive it will spin for a few seconds and then stop. All this has led me to suspect that the BIOS is somehow corrupt (not sure about the right terminology). I have tried putting a new BIOS image and installer program, downloaded from Acer, on a USB key to see if it will run when I start the machine, but no luck. I'm not sure whether this method of interacting with/updating/flashing the BIOS works outside of Windows/DOS, as both OSs are mentioned rather ambiguously in the documentation. I have also taken the laptop case apart, inspected the various cards, and cannot find any obviously burned-out components. I'm not sure how to proceed at this point, in terms of components to try or how to load a new BIOS image onto the board. Any advice here would be great, especially from those with experience with this particular line of laptops. Thanks!


  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team plans to deploy this as follows:

    - 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of the embedded database)
    - 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of the embedded database)
    - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager management app)
    - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: per the Symantec documentation for both products, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc.; we already have the Windows firewall disabled). That is the initial plan for the 3000-4000 Windows client workstations. Now my questions :-)

    a) If we had these users distributed between two sites, with an AD DC/GC in each site, how would I restrict the SEPM and desktop management solution to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?
    c) What hardware would you recommend as a server spec for the SQL servers: 16 GB RAM, dual Xeon?
    d) What hardware would you recommend as a server spec for the management servers: 16 GB RAM each, with dual Xeon and SAS disks?
    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management)?
    f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and one spare DAS (Dell MD3000).

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. Most grateful for your suggestions, recommendations and corrections. Many thanks! Rihatum


  • Apache, suexec, PHP, suPHP

    - by Chris_K
    While I'm quite comfortable as a Linux user, my Linux admin-fu is a bit weak. Thus, I'm here looking for guidance with a CentOS server I'm about to build. I need to set up an Apache 2 web server for a few of our clients. I want each client's web content to be under their home directory (UserDir in apache.conf, right?) for the static HTML sites, and I want Apache to run as the client (suexec?). Some of their stuff will be PHP apps, so I'm under the impression I'll want to look at suPHP as well. Basically, I want to look like a small version of a shared web-hosting company. Considering how common those are, I thought I'd easily find a nice, current how-to guide on setting this all up, but so far I've had very little luck; I suspect my search words are off. So the questions (feel free to answer any or all):

    1. Does anyone have solid links to current/modern guides that would help me set this all up? (No, the Apache documentation site is not a guide ;-))
    2. Since I have a mix of static sites and PHP apps, do I want/need both suexec and suPHP installed? If so, does that introduce any challenges I should be aware of?
    3. Should I be looking at other options instead of suexec and suPHP?

    I plan to give the end users SSH, SFTP or SCP access to their stuff (if that affects anything). Thanks in advance for your help.
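
    For discussion, here's the kind of per-client vhost I have in mind under suPHP (directive availability depends on how mod_suphp was built; the user "alice" and the paths are placeholders):

        <VirtualHost *:80>
            ServerName alice.example.com
            DocumentRoot /home/alice/public_html

            suPHP_Engine on
            suPHP_UserGroup alice alice
            AddHandler x-httpd-php .php
            suPHP_AddHandler x-httpd-php
        </VirtualHost>

    My understanding is the two can coexist: suexec only engages for CGI scripts, while suPHP only handles the PHP handler, so the static/CGI side and the PHP side don't step on each other.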


  • compile kernel 2.6.34 for Ubuntu Lucid for xen dom0 / pvops

    - by andreash
    Hi there, I'd like to compile a recent Linux kernel (2.6.34) for my Ubuntu 10.04 Lucid Lynx AMD64 box, mainly because I'd like to use it as a dom0 kernel with the recent Xen 4. There's plenty of documentation on the web about how to compile a kernel 'Debian style'. But I think it would be nice to start with an 'official' Ubuntu config, to be sure not to miss anything important and to avoid having to recompile over and over again. So what I'd like to do is compile 2.6.34, but starting from the 'official' /boot/config-2.6.32-XX from Ubuntu Lucid. The question is: how do I best do that? If I just take the config from 2.6.32, the new features from 2.6.33/34 won't be in it. So what I'd like to do is somehow merge the 2.6.34 defaults with the original 2.6.32 config from Ubuntu. How can I best do that? Does it even make sense? Are there easier ways to achieve what I want? Thanks for your insight! A.

    PS: I just found a linux-image-2.6.32-bpo.4-xen-amd64 package on backports.org, but no information about it. Would it work as a dom0 kernel on Lucid?
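
    The recipe I've seen suggested, if I understand it right, is make oldconfig, which keeps every 2.6.32 answer and prompts only for options that are new in 2.6.33/34 (the config filename below is whichever flavour is actually running):

        cd linux-2.6.34
        cp /boot/config-2.6.32-21-server .config
        make oldconfig        # answer prompts for the new options only
        # then build Debian-style packages, e.g. with kernel-package:
        make-kpkg --initrd --revision=1.0.dom0 kernel_image kernel_headers

    That would give a .deb installable alongside the stock kernels, so falling back is just a GRUB menu choice.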


  • Can you configure multiple KMS hosts in a primary / secondary relationship?

    - by Mark Hall
    We have two datacenters in our environment: primary and DR. I need to deploy a KMS service and, to be proactive, I would like to have a host in both datacenters. From what I have read, you can have up to 6 hosts without calling Microsoft, and what happens is that an SRV record for each host is placed in DNS. The client queries for those SRV records, randomly chooses a host for the initial activation, and uses that same server for all renewals. The server can be changed manually through a script, and will change automatically if the initial server is unavailable at activation or renewal time. My question is: has anyone found a way to designate one server as the primary KMS host and the other as failover only? The reason I ask is that it is preferred that clients communicate with the primary datacenter during normal operations and only talk to the DR datacenter when needed, because the bandwidth between the offices and the DR datacenter is limited compared to the primary. I am sure this has been done before, but I cannot find it in MSFT's documentation. Thanks, Mark
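
    The closest thing I've found so far is pinning clients to the primary host with slmgr, falling back to SRV-record auto-discovery only when that assignment is cleared (the hostname is a placeholder):

        cscript //nologo %windir%\system32\slmgr.vbs /skms kms-primary.corp.example.com:1688
        cscript //nologo %windir%\system32\slmgr.vbs /ato

        :: revert to DNS auto-discovery later:
        :: cscript //nologo %windir%\system32\slmgr.vbs /ckms

    Pushed via GPO startup script, that would at least approximate primary/failover behaviour, though it isn't the automatic preference ordering I was hoping for.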


  • RAID 5 Install on Ubuntu Server 12.04 [closed]

    - by tarabyte
    Environment: Ubuntu Server 12.04, installing from a bootable flash drive.
    Error: "No root file system is defined. Please correct this from the partitioning menu."

    I'm trying to set up a personal file server with software RAID 5. I just got three hard drives for this, but haven't found any solid documentation. I'm unsure of the basic way to partition my hard drives. Can someone upload a screenshot of their "partition disks" screen so that I can compare it with mine? Should I set the bootable flag? Do I need a /home partition? A /boot partition? Should I "Use [my partition] as: Ext4 journaling file system", or make that field "physical volume for RAID"? I am an engineer, but I have only a cursory knowledge of all things Linux. If you know of any good learning resources, I'd be happy to hear about those too (that way I don't have to blindly follow deprecated tutorials online). Thank you.

    References I've looked into:
    https://help.ubuntu.com/community/Installation/SoftwareRAID
    https://help.ubuntu.com/12.04/serverguide/advanced-installation.html
    http://forevergeeks.com/setup-ubuntu-server-with-raid-5/
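
    For orientation, my current understanding of the layout the installer's choices map to, expressed with mdadm (device names are placeholders): mark the big partition on each disk as "physical volume for RAID", then build the array and put the root filesystem on it.

        # roughly what the installer assembles behind the scenes:
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
        mkfs.ext4 /dev/md0        # md0 becomes / -- defining root fixes the error above

    The usual advice I've read is to keep a small /boot on a plain or RAID1 partition with the bootable flag set, since booting straight off a RAID5 array is fragile (a degraded array can leave GRUB unable to read its files).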


  • USB Mouse and Keyboard not working in Linux 4 Tegra

    - by Sijo
    I am new to Tegra Linux development. I have a Tamontem NG evaluation board with a Tegra 3 chip. I installed the L4T sample file system from the NVIDIA Tegra resources (https://developer.nvidia.com/linux-tegra) and set it up as described in the documentation on the NVIDIA site. There was already an SD card with L4T running, and I don't want to change the boot loader, so I copied boot.scr.uimg to the root (/) folder and uImage to /boot, and it boots from the existing SD card. While booting, some errors occurred for some Bluetooth devices (there is no Bluetooth device on the board), so I disabled Bluetooth with:

        sudo mv /etc/init/bluetooth.conf /etc/init/bluetooth.conf.noexec

    Now the problem is that the mouse and keyboard are not working, so I cannot log in. Even though I installed a desktop, the mouse and keyboard do not work. They do enumerate: the lsusb command shows the USB mouse and keyboard. The installed file system is Ubuntu 13.04; the Linux kernel version is 3.1. What should I do? Please help. Thanks in advance.
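
    A first check I've been pointed to, over the serial/SSH console (the first command only works if the kernel was built with CONFIG_IKCONFIG_PROC):

        zcat /proc/config.gz | grep -E 'CONFIG_USB_HID|CONFIG_INPUT_EVDEV'
        dmesg | grep -iE 'hid|input'    # did the kernel bind input drivers?
        ls /dev/input/                  # did udev create event devices?

    Enumeration in lsusb only proves the USB core sees the devices; without HID and evdev support (built in or as modules), X never receives any events.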


  • Samba and Windows 7

    - by John Gaughan
    I built a new computer with the intention of it being primarily a home file server. Here is my setup:

    - one desktop with Windows 7 64 HP
    - one laptop with Windows 7 64 HP
    - one desktop with Kubuntu 11.10 (the server)

    The two desktops use static IPs, and I have hostnames mapped in the HOSTS files on all three systems. I have the same username/password combo on all three systems. I have been trying for a while now to set up Samba so the Windows 7 systems can see and use it. Even when I can get the server to show up, Windows is unable to log in. One of the first things I did was enable LMv2 authentication, which this version of Samba (3.5.11) supports. The workgroup is set correctly. I can normally see the server but cannot authenticate. Windows homegroup is turned off. Pinging between machines works fine, and the two Windows 7 systems work together flawlessly. What I am trying to do is set up Samba for peer-to-peer networking using NTLM security and user-mode authentication. According to the documentation this is possible, but there are no examples that I could find. In all the googling I have done, I see a lot of people asking how to set this up, and either it works for someone else and not for me (no idea what I'm missing) or it doesn't work at all. Has anyone gotten this to work? Is there a place I could download an smb.conf that is set up to work in this environment?
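
    For comparison, the minimal smb.conf I'm working from (the share name and user are placeholders):

        [global]
            workgroup = WORKGROUP
            security = user
            passdb backend = tdbsam

        [data]
            path = /srv/data
            valid users = john
            read only = no

    One frequent gotcha I've read about: Samba keeps its own password database, so a matching Unix account alone is not enough; the user must also be added with smbpasswd -a john before Windows can authenticate.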


  • Delay NTP Initialisation, Cisco 877W, IOS 12.4(24)T1

    - by Mike Insch
    I have a Cisco 877W which I'm using for my home ADSL connection (and as a refresher in Cisco IOS). I've got a working config in place, with my PPPoA connection coming online correctly and VLANs and other settings configured as I want them, but I can't crack the NTP configuration. For NTP, I have the following defined:

        ntp server 0.uk.pool.ntp.org source Dialer0
        ntp server 1.uk.pool.ntp.org source Dialer0
        ntp server 2.uk.pool.ntp.org source Dialer0
        ntp server 3.uk.pool.ntp.org source Dialer0

    This setup works fine when the commands are issued in global configuration mode while the Dialer0 interface (ATM0.1) is up. The configuration fails at startup, though:

        Translating "1.uk.pool.ntp.org"...domain server (208.67.222.222) (208.67.220.220)
        ntp server 1.uk.pool.ntp.org source Dialer0
                   ^
        % Invalid input detected at "^" marker.

    This is repeated for the other servers defined. Obviously the DNS lookup for the server(s) fails because the DNS servers cannot be reached while the external interface is not yet online. Is there a way to delay the NTP configuration until after the Dialer0 interface is fully initialised? Can the NTP commands be triggered by the line protocol on the Dialer0 interface transitioning to the up state? Alternatively, can the NTP commands be delayed for 5 minutes after the router has finished initialising? Any advice, or pointers to useful documentation or examples, gratefully received.
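
    One idea matching the "trigger on line protocol up" thought is an EEM applet, which I believe 12.4(24)T supports; a sketch:

        event manager applet NTP-AFTER-DIALER
         event syslog pattern "Line protocol on Interface Dialer0, changed state to up"
         action 1.0 cli command "enable"
         action 2.0 cli command "configure terminal"
         action 3.0 cli command "ntp server 0.uk.pool.ntp.org source Dialer0"
         action 4.0 cli command "ntp server 1.uk.pool.ntp.org source Dialer0"
         action 5.0 cli command "end"

    The applet would re-issue the ntp server lines each time the dialer comes up, by which point DNS resolution should succeed.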


  • Active Directory Restricted Group confusion

    - by pepoluan
    I am trying to implement Restricted Groups policy for my company's AD infrastructure, namely standardizing the local "Administrators" group. The documentation (and various webpages) says that the "Members of this group" policy will wipe out the "Administrators" group. However, an experiment left me confused. I created 2 GPOs:

    - GPO-A replaces the local Administrators with a list of domain users (e.g., "Alice" and "Bob")
    - GPO-B inserts a domain user (e.g., "Charlie", not part of GPO-A) into the local Administrators

    Experiment 1: GPO-A gets applied first (link order 2). Everything happens as expected: GPO-A cleans out the local Admins and adds "Alice" & "Bob"; GPO-B adds "Charlie".

    Experiment 2: GPO-B is applied first. What happens: "Charlie" gets added to the local Admins group (which also contains 2 local users); then the local users on the PC get deleted and "Alice" and "Bob" get added. Result: the local Admins contain "Alice", "Bob", and "Charlie".

    My confusion: in Experiment 2, I thought GPO-A would totally erase the local Admins group, including users added by GPO-B (since GPO-A gets applied after GPO-B). As it happens, it only erased the local users from the local Admins but kept the domain users. So, is that the way it should be? Or am I doing something incorrectly?

