Search Results

Search found 13713 results on 549 pages for 'production environment'.

Page 449/549

  • Is there a standard method for assigning nameservers to servers in a Windows domain?

    - by HopelessN00b
    As the title asks, I'm wondering if there's any standard or "best practice" for how to actually assign nameservers (DNS) and manage the nameserver configuration for client servers on a Windows domain. I'm talking about the setting circled in the image below, in case the language of the question is not clear enough.

    This is for a large, multi-site environment, where ideally/hopefully all servers point at their site's Domain Controller as the primary DNS server, and a DNS server at a different site as the secondary DNS server. For simplicity's sake, we can say that the secondary server would be the Domain Controller at the home site for everyone, and that there are no tertiary DNS servers (even though that's not actually the case).

    Try as I might, I can't seem to find a GPO setting for this (at least on FL 2003 R2, the Computer Configuration - Administrative Templates - Network - DNS Client - DNS Servers GPO is listed as "Supported on: Windows XP Professional only"), and I find it rather hard to believe that the "best"/"standard" solution would therefore be either scripting up something to apply the DNS settings per site, or using a DHCP server to push those configurations out to the other servers via the DHCP Scope Options. So, is there a standard way of managing this configuration? (One that's hopefully not "a script" or "DHCP Scope Options".)


  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users, including root, on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However, the comments at the top of that file say:

        It's NOT a good idea to change this file unless you know what you are doing.
        It's much better to create a custom.sh shell script in /etc/profile.d/ to make
        custom changes to your environment, as this will prevent the need for merging
        in future updates.

    So I went ahead and created the file /etc/profile.d/myapp.sh with the single line:

        umask 002

    Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache wsgi application, or files created with sudo, still default to 644 permissions:

        $ touch newfile (as root):                     Result = 664 (Works)
        $ sudo touch newfile:                          Result = 644 (Doesn't work)
        Files created by Apache wsgi app:              Result = 644 (Doesn't work)
        Files created by Python's RotatingFileHandler: Result = 644 (Doesn't work)

    Why is this happening, and how can I ensure 664 file permissions system-wide, no matter what creates the file?

    UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
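
    The update above mentions ACLs; for reference, a minimal sketch of that per-directory approach, assuming the acl package is installed and the filesystem is mounted with ACL support (the path is just an example):

        # give the owning group write access on the directory itself
        setfacl -m g::rwx /srv/app/data
        # default ACL: files created underneath get group write access,
        # and the creating process's umask is ignored for them
        setfacl -m d:g::rwx /srv/app/data
        # verify both the access and the default entries
        getfacl /srv/app/data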


  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote ftp server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is Mac OS X; the eventual deployment environment is Linux.

    What would be the stock standard way to automate this? I know I can use cron to schedule curl to download the file and to run a script that will process it at regular intervals, and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and sending status emails.

    But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried and true existing tools, and if I do have to write code, to try to write the most straightforward code possible. The reason for this is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people, long after I am gone from the project, so the intention is to use well documented, well supported tools as much as possible.

    This seems such a common task that there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and sending status messages. Is that what Expect is for? What would you recommend?

    (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
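
    For what it's worth, a minimal sketch of the cron-plus-curl route described above, with the error handling reduced to a retry loop and a status mail; the host, paths and script names are placeholders:

        #!/bin/sh
        # fetch the forecast file and hand it to the processing script
        URL="ftp://ftp.example.com/data/forecast.grb"
        DEST="/var/data/forecast.grb"

        for attempt in 1 2 3; do
            if curl --silent --show-error --fail --output "$DEST" "$URL"; then
                /usr/local/bin/process_forecast "$DEST" && exit 0
            fi
            sleep 300   # wait five minutes before retrying
        done

        echo "forecast download/processing failed after 3 attempts" \
            | mail -s "forecast job failed on $(hostname)" ops@example.com
        exit 1

    scheduled with a crontab line such as 0 */6 * * * /usr/local/bin/fetch_forecast.sh.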


  • Ruby installed on Ubuntu 10.10 slow on one machine but not the other

    - by Aaron Jensen
    I have a machine that was provisioned several months ago. RVM was used to install ruby 1.9.3-p125 as well as 1.9.3-p125-perf. When I compared raw ruby performance to another identical machine, the older machine smoked it. For example:

        ================================================================================
         With in-block needle calculation
        ================================================================================
        Rehearsal ----------------------------------------------
        detect     3.790000   0.000000   3.790000 (  3.800895)
        each       2.410000   0.000000   2.410000 (  2.420860)
        any        3.960000   0.000000   3.960000 (  3.972099)
        include    1.440000   0.000000   1.440000 (  1.442862)
        ------------------------------------ total: 11.600000sec

    vs

        ================================================================================
         With in-block needle calculation
        ================================================================================
        Rehearsal ----------------------------------------------
        detect    10.740000   0.000000  10.740000 ( 10.769366)
        each       6.080000   0.010000   6.090000 (  6.106323)
        any       10.600000   0.000000  10.600000 ( 10.641606)
        include    4.160000   0.000000   4.160000 (  4.171530)
        ------------------------------------ total: 31.590000sec

    I attempted to reinstall 1.9.3-p125 with rvm on the fast machine and that ruby is now slow. It's as if something changed in RVM, or I installed some package that made compiled versions of ruby perform significantly worse. I know this is a tough question to answer, but what things should I look into in order to track down why the performance has suffered so much?

    Edit: I just attempted to install with ruby-build and the version installed was fast. Something rvm is doing to build it in my environment is slow.
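
    One way to start narrowing this down is to compare how the two interpreters were actually built, since rvm and ruby-build can end up using different compilers and CFLAGS; a minimal sketch to run against both rubies (nothing here is specific to rvm):

        # exact version, patchlevel and platform
        ruby -v
        # compiler, flags and configure arguments the interpreter was built with
        ruby -rrbconfig -e 'puts RbConfig::CONFIG["CC"]'
        ruby -rrbconfig -e 'puts RbConfig::CONFIG["CFLAGS"]'
        ruby -rrbconfig -e 'puts RbConfig::CONFIG["configure_args"]'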


  • What's a good solution for file-tagging in linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements:

        - any file readable by the user can be tagged freely
        - a user can search for files matching one or several tags
        - files can be moved around without losing the previously associated tags
        - the system could be backed up easily
        - no dependencies on any desktop environment
        - if any gui is involved, there must be a cli fallback

    I've been hoping for some basic filesystem & coreutils hackery to handle this, but I haven't thought about this hard enough yet. Meanwhile I'll review beagle and metatracker, which have been mentioned here, and see how they perform.

    Ok so beagle has huge gnome dependencies, and tracker is okish, but still has some dependencies I don't like... Been doing some more research, and the way to go could very well be extended file attributes. That's a native solution for most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp for example needs the -a flag to preserve them). Would like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
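
    For reference, a minimal sketch of the extended-attribute idea using the standard attr tools (setfattr/getfattr); the user.tags attribute name is only a convention chosen for this example, and the filesystem must be mounted with xattr support:

        # tag a file
        setfattr -n user.tags -v "paper,draft,2012" notes.txt

        # read the tags back
        getfattr --only-values -n user.tags notes.txt

        # crude search: list files under the current tree carrying a given tag
        find . -type f -exec sh -c \
            'getfattr --only-values -n user.tags "$1" 2>/dev/null | grep -q draft && echo "$1"' _ {} \;

        # copies and backups must be told to preserve xattrs, e.g.
        cp --preserve=xattr notes.txt backup/    # or cp -a
        rsync -aX source/ backup/                # -X keeps extended attributes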


  • How to speed up request/response to django using apache or another solution?

    - by jbcurtin
    Hey all, I'm mainly a developer, but every now and again I jump into the sys-admin position. For the most part I've gotten away with deploying php and python apps using apache. I write today because I'm starting to research faster alternatives to apache, yet still want some of the core features I require, like PUT and DELETE methods and the ability to connect to a socket via apache. (This I have not tried, but it might be a nice bell and whistle if I ever employ comet on my apps.) As you've probably guessed, I use javascript exclusively for all my websites, utilizing deep linking for SEO support.

    The main area where I'm looking to increase performance is the connection between the django apps and the web server, through to the client response. Every day I work my best to keep the smallest memory footprint possible, however I am getting to the end of my rope when it comes to working with apache. In general, keep in mind that I'm just starting this research, so I'm looking more for material to read than for solutions at this moment. My main questions:

        - Am I missing something about apache that makes it faster than everything else?
        - What would be a good server environment to deploy just static files on?
        - What are some of the leading open-source and paid alternatives?


  • FreeBSD: problem with Postfix after updating LDAP

    - by Olexandr
    On the server I installed openldap-server; on this computer an OpenLDAP client had already been installed. The version of openldap-client (2.4.16) was older than the new openldap-server (2.4.21), so the client was updated as well. The OpenLDAP client works with postfix on this server, and after all the updates postfix can't start again. The error when I run postfix stop|start is:

        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"

    In the library directory there is libldap-2.4.so.7, but libldap-2.4.so.6 has been removed from the server. When I try to deinstall the currently installed version of openldap-client, the system writes:

        ===>  Deinstalling for net/openldap24-client

    O.K., but when I run "make install" the system writes:

        ===>  Installing for openldap-sasl-client-2.4.23
        ===>   openldap-sasl-client-2.4.23 depends on shared library: sasl2.2 - found
        ===>   Generating temporary packing list
        ===>  Checking if net/openldap24-client already installed
        ===>   An older version of net/openldap24-client is already installed (openldap-client-2.4.21)
              You may wish to ``make deinstall'' and install this port again
              by ``make reinstall'' to upgrade it properly.
              If you really wish to overwrite the old port of net/openldap24-client
              without deleting it first, set the variable "FORCE_PKG_REGISTER"
              in your environment or the "make install" command line.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.

    Updating the ports doesn't help, and postfix still reports the error:

        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"
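
    For reference, a minimal sketch of the usual way out of this with the stock ports workflow (not specific to this box, and worth checking against the ports UPDATING notes first): re-register the client the way the message suggests, then rebuild postfix so it links against the new libldap-2.4.so.7 instead of the removed .so.6:

        cd /usr/ports/net/openldap24-client
        make deinstall && make reinstall
        # or, as the message says, force re-registration over the stale entry:
        # make -DFORCE_PKG_REGISTER install

        # postfix was built against libldap-2.4.so.6, so rebuild it too
        cd /usr/ports/mail/postfix
        make deinstall && make reinstall clean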


  • windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application, however in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force that user account to use a specific NIC. Anybody have any ideas on this?

    EDIT 1: I think I have a hack to work around my specific problem, however I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to have to make a config file for each user and a custom copy of the program files for each user, then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address, however it's really messy and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.


  • Ubuntu 12.04 Very slow especially with Android Studio

    - by Nada
    I have an old laptop with the following specification: Memory: 485 MiB, Processor: Genuine Intel CPU T2300 @ 1.66 GHz ×2, OS type: 32 bit, Disk: 78.1 GB. I installed Ubuntu 12.04 LTS on it and noticed that the overall system is very slow in responding. I searched the internet and found some articles about how to make Ubuntu 12.04 LTS run fast, and I applied everything they said, including installing the LXDE desktop environment, but nothing changed in the system response time.

    Then I needed to develop some Android applications, so I downloaded Android Studio (Beta) 0.8.6. The problem became worse than before: whenever I try to open Android Studio the screen freezes for some minutes, it then takes a long time to load the project and initialize the workspace, and when I try to move the cursor it moves very slowly. When I tried to run my first application on the AVD it took three hours and still had not run. I deleted Android Studio and installed it again several times trying to solve the problem, but still nothing changed. Please, if you have any suggestions that may help me make my laptop and Android Studio work faster, I will appreciate it. Thank you in advance.


  • GroupWise 6.5 IMAP Service Shuts Down-Needs Constant Restarting

    - by Jeffrey Majette
    I have an issue with my aging (and soon to be replaced) GroupWise 6.5 system. The IMAP service in the GWIA tends to periodically shut down, requiring that I restart the service. When IMAP is down, my end users who use Blackberry BIS cannot send or receive email on their devices. It's a royal pain to have to restart the IMAP service several times a day. The GWIA logs do not seem to indicate a problem.

    I thought I was on to something yesterday: I discovered that the gwia.cfg file located in SYS:SYSTEM was actually from GW 6.0, while the gwia.cfg in DOMAIN\WPGATE\GWIA showed a GW 6.5 title when I opened it for editing. I changed the generic placeholder info in the file to match my environment and restarted the GWIA. However, it made no difference; the IMAP service still shuts down about 30 minutes after the last restart. I know this version of GW is antiquated, and we are in the process of migrating to Google Apps. But if anyone has an idea that could fix this issue, I and my end user community would be forever grateful!


  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS; we use XFS with LVM over that to create 100 TB of usable storage. All of this works pretty well, but we are outgrowing these systems.

    Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab).

    So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with glusterfs running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.
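
    Purely to illustrate the "different pools with different operating characteristics" idea mentioned above, a minimal ZFS sketch; the pool layouts, device names and properties are placeholders, assuming a ZFS-capable OS such as OpenIndiana:

        # capacity-oriented pool: wide raidz2 vdevs, compression on
        zpool create archive raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
        zfs set compression=on archive

        # performance-oriented pool: mirrored pairs for concurrent access
        zpool create scratch mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
        zfs set atime=off scratch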


  • Why can't a PC with 2 network cards be accessed by hostname?

    - by lewis
    I set up a PC with 2 network cards, connected to the same LAN. I can connect to this PC (e.g. by remote desktop) only via its IP addresses. Accessing it by hostname does not work. Why is this the case?

    UPDATE: Full environment:

        1. PC with 2 hardware network adapters.
        2. VMware Workstation is installed on this PC. Created 3 VMs, networked using the "bridged" network setting in VMware.
        3. On the LAN all IP addresses are given out by DHCP.
        4. Win2k8 on all hosts (both physical and virtual).

    As a result:

        1. The PC has 2 IP addresses (e.g. 192.168.1.71 and 192.168.1.72). The PC is available on the LAN by IP address, but not by hostname.
        2. The VMs each have their own IP address (e.g. 192.168.1.73, *74, *75 etc). They are available from the LAN by their IPs, BUT not by their hostnames.

    How can I access the PC and the VMs by hostname?


  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of what files have changed and upload them to my server using FTP. Here's my workflow:

        1. Edit files on local computer (eg. files in C:\Users\Me\web)
        2. Commit changes to local repository using right-click - TortoiseSVN - SVN Commit.
        3. Take the files, open FileZilla (FTP client) and upload the files to a remote server.

    I was wondering if there was a way in which I could omit step 3 from my workflow. Basically I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository.

    Information about my computer environment:

        - Windows 7 Ultimate x64 with TortoiseSVN x64
        - Notepad++ text editor
        - Files edited are PHP, CSS, JS, HTML, etc.
        - Server is running Linux with PHP 5.2 and MySQL. FileZilla is used to upload files. I can connect to the server via SSH if that is needed.

    Thank you in advance.


  • Loading the preview function of AUCTeX 11.86 on macports Emacs-app 23.2.1 port.

    - by Sarah
    I've installed Emacs-app 23.2.1 via MacPorts and I'm trying to install AUCTeX 11.86 so that it will work on this installation. I've run the following configure line for AUCTeX and that seems to work:

        ./configure --with-emacs=/Applications/MacPorts/Emacs.app/Contents/MacOS/Emacs \
            --with-lispdir=/Applications/MacPorts/Emacs.app/Contents/Resources/site-lisp/ \
            --with-texmf-dir=/usr/local/texlive/2010basic/texmf-local/

    make and make install seem to work, and I've added the following line to my init.el, as per the installation instructions:

        (require 'tex-site)

    However, when I open a TeX file, the Preview menu does not show up (although the LaTeX menu does.) The following are some of my tests:

        - M-x load-library RET preview-latex RET doesn't seem to do anything.
        - M-x load-library RET preview RET brings up the Preview menu.

    Is it safe to somehow add the load-library preview to my init.el? Or do I risk mucking up something? I'm new to Emacs and primarily trying to learn it because of the AUCTeX preview features, but I don't feel very safe in this environment yet.


  • How to improve Samba performance on VirtualBox machine?

    - by ColinM
    I am running a Windows 7 64-bit host and an Ubuntu 9.04 32-bit guest inside VirtualBox 4.0.0 on a laptop which has internet connectivity via wifi. The main use is writing code, for which I use Netbeans. My dev environment is the virtual machine, and I use Samba on the VM to share the code directory so that I can use Netbeans on the host as my IDE. Unfortunately Netbeans does a lot of disk access, and due to the poor Samba performance it makes the IDE hardly usable.

    How can I improve the performance of the Samba share? On my desktop it isn't so bad, but I don't know what the difference would be, since they are similar setups (Win 7 hosts, cloned guests, SSDs, VirtualBox guests using SATA in AHCI mode, etc.). With bridged networking, is the performance between the host and guest limited by the physical hardware (Intel 6200 AGN on the laptop)? I switched to host-only and it didn't seem to improve performance at all.

    To clarify "bad performance": I used 7zip to zip a project directory and got 19 KB/s to 500 KB/s depending on the size of the files being zipped. On my desktop it was in the ~10 MB/s range. Any tips for VirtualBox/Samba configuration to improve the performance? I am using Samba 3.3.2. Hopefully Samba with SMB2 support will be released soon...
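
    Not tested on this particular setup, but the smb.conf knobs usually tried first for small-file, IDE-style workloads are the socket options, sendfile and (if the build supports it) async I/O. A minimal sketch, with the candidate settings shown as comments since they belong in the [global] section of /etc/samba/smb.conf:

        # see what the server is actually running with now
        testparm -s | head -40

        # candidate [global] settings to experiment with (Samba 3.3-era options):
        #   socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
        #   use sendfile = yes
        #   aio read size = 16384
        #   aio write size = 16384

        # reload after editing (init script name may differ by distribution)
        /etc/init.d/samba reload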


  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs, and Cron for a backup routine. I have ssh keys between the boxes set up to allow for passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

        #!/bin/sh
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and without being asked for a password the share mounts successfully. Yet, when run by Cron the script fails. The path to sshfs is identical to the value of which sshfs. Here is the email root receives from the Cron Daemon:

        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>

        Mounting Share
        fuse: failed to exec mount program: No such file or directory
        fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving No such file or directory in this instance. It further seems odd given that the paths appear to be correct. I've also attempted to compare the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as:

        fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
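
    One thing that stands out in the cron mail above: cron runs with PATH=/usr/bin:/bin, while an interactive root shell also has /sbin, /usr/sbin and the /usr/local equivalents on its PATH. The FUSE mount helper that sshfs execs (mount_fusefs, typically installed by the fusefs port outside cron's default path; check with which mount_fusefs) then can't be found. A minimal sketch of the usual workaround, setting the path inside the script itself:

        #!/bin/sh
        # give the cron job the same search path an interactive shell would have
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
        export PATH

        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server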


  • Random “Lost connection to MySQL server at 'reading initial communication packet', system error: 0”

    - by user1606545
    Sometimes I get this error from the MySQL server:

        Lost connection to MySQL server at 'reading initial communication packet', system error: 0

    I cannot find the cause: most of the time it works, but every week, for some hours, I get this error. I googled, but there seem to be only users who have this error permanently; in my case it only occurs sometimes. I checked hosts.allow and hosts.deny, but the host is allowed and not denied. Also, sometimes I get the error:

        File './database/table.MYD' not found (Errcode: 24)

    It occurs very rarely, but when it does it lasts for some hours once a week, sometimes on multiple days, and then suddenly the problem disappears again.

    I have checked the open files limit. It's 2048 and should be absolutely enough. I also tried to increase the number of open files nevertheless, but with no effect. I thought perhaps the process does not close some tables, but that seems impossible, because after a while everything is o.k. again and the process opens at most 100 tables at once. I also checked the MySQL runtime environment, and there were 930 opened files. I cannot explain that; after a while it's 129.

    I am running a MySQL server on a SUSE Linux machine. I connect to the MySQL server from another host using the command line tool "mysql" and the MySQL C connector. The MySQL server is version 5.0.67.
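
    For reference, a minimal sketch of the checks that usually go with Errcode 24 (the OS "too many open files" error); nothing here is specific to this server, and the variable is called table_cache on 5.0 (table_open_cache on later versions):

        # confirm what error code 24 means (perror ships with MySQL)
        perror 24

        # what mysqld believes it may open, and what it has open right now
        mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"
        mysql -e "SHOW VARIABLES LIKE 'table_cache';"
        mysql -e "SHOW GLOBAL STATUS LIKE 'Open%file%';"
        mysql -e "SHOW GLOBAL STATUS LIKE 'Open%table%';"

        # the per-process limit the running mysqld actually got from the OS
        cat /proc/$(pidof mysqld)/limits 2>/dev/null | grep -i 'open files'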


  • How to deactivate Active Directory users 1 month after their FIRST LOGIN, instead of defining a fixed expiration date

    - by smhnaji
    We want to give access to some Active Directory users so they can remotely access our server and download from a special folder on the server. The licenses we give to users are time based: there should be 1 month, 2 month, ..., 1 year, ... licenses.

    CURRENT SITUATION (WHAT I DON'T WANT): When users are created and added to the OS, a fixed expiration date is given.

    WHAT I WANT: Users' expiration dates should be calculated automatically after their first login, since a user might not need his account right when he purchases the license. In other words: when a user's license is purchased on Jan 1st, he should be able to use the license until Feb 1st, no matter whether he really logs in or not. He cannot come on Feb 5th and begin using his license, because it has expired by then. What I want is that when he comes on Feb 5th and begins using it, the license is updated to run until March 5th.

    The working environment is Windows Server 2012. By the word 'user', I mean Active Directory users.


  • Postgresql Data Aggregation over WAN Securely

    - by Zach
    Hey guys, need some advice on how to proceed with this situation. My current scenario is that I have several postgresql (50+) boxes deployed throughout various locations and data centers, and a beefy postgresql box set up at a homebase location. All of the deployed boxes have identical database layouts. I'm looking for a solution that would allow for a few things. I realize some of these options overlap and some might only contain mutually exclusive solutions. However, I'm interested to hear your thoughts :)

        1. Remotely query the deployed boxes and pull the results back to the homebase box for processing.
        2. Nightly (remote) "sync" or dump the deployed boxes' databases to a master database on the homebase box.
        3. Remotely push a table entry to all of the deployed boxes from the homebase box.
        4. Ensure security of data in transit, and of the remotely deployed boxes.

    Up to this point I've been floating on a homebrew multithreaded python/perl system that SSH's into these boxes remotely (they are ACL'ed off to the homebase server) and pulls (or pushes) the raw query results over the ssh connection. I haven't even touched #2 (remote syncing), as I know that would get nasty really quickly. I'm interested in any ideas for a more elegant solution that can scale up and stick to my FreeBSD/Linux environment.
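
    For reference, a minimal sketch of item 1 (remote query, pull results home) using nothing but ssh and psql, since that matches the SSH-based approach already in place; the host names, database and query are placeholders:

        #!/bin/sh
        # run the same read-only query on every deployed box and collect the rows centrally
        QUERY="COPY (SELECT * FROM metrics WHERE day = current_date) TO STDOUT WITH CSV"

        for host in box01.example.com box02.example.com; do
            ssh "$host" "psql -d appdb -At -c \"$QUERY\"" \
                >> /srv/aggregate/metrics_$(date +%F).csv
        done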


  • Single Sign On 802.1x Wireless - saying “Connecting to <SSID>”, hangs for 10 seconds, fails with “Unable to connect to <SSID>, Logging on…”.

    - by Phaedrus
    We are implementing WiFi on Windows 7 machines in our corporate environment. Machines should be able to log into the domain over WiFi as the Machine (pre-logon) and as the User (post-logon). We have everything working correctly except for 2 things:

        1) Sometimes the login scripts don't run.
        2) The user VLAN is sometimes different than the machine VLAN, and no DHCP renew occurs after user logon.

    I am clear that both these problems should be fixable by using the "Single Sign On" option under the 802.1x Wireless Vista GPO, setting the wireless to connect immediately before user logon, and also enabling "This network uses different VLAN for authentication with machine and user credentials".

    If I enable these GPO settings in a lab, the computer does authenticate and gets WiFi before the user logs on, so when the login box is displayed it says “Windows will try to connect to <SSID>”, even though it is already connected (which should be ok?). Enter the user credentials and it goes to a screen saying “Connecting to <SSID>”, hangs for 10 seconds, and fails with “Unable to connect to <SSID>, Logging on…”. The desktop fires up and then the user re-authenticates with no problem as himself instead of the machine, but by that point we defeat the point of the WiFi SSO “before user logon”. Also by that point no DHCP renew seems to occur, and the user is still stuck with the wrong IP address for the new VLAN.

    When the “Connecting to <SSID>” screen comes up, there's no indication on the AP or the RADIUS server that anything whatsoever is happening after credentials are entered, until after the domain logon. Also, with this policy enabled, sometimes Windows hangs on a black screen indefinitely until I disable the wireless NIC, so something is knackered for sure. What have I missed? Suggestions are much appreciated... /P


  • Have a set of cgi scripts shared by multiple domains

    - by rpat
    Goal: Have multiple domains share a set of cgi (perl) scripts.

    Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel). I have dozens of domains on the dedicated server. The domains are set up by cPanel under the VirtualHost section. I have almost no knowledge of Apache; most of what I do is taken care of by cPanel.

    I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link in the individual cgi-bin to this common directory. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to manually make changes to those; besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf.

    Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory, and to allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are. I would be glad to use any other means to achieve my goal. Thanks in advance for your help.

    (I asked this on stackoverflow and someone suggested that I could ask it on serverfault.)
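
    For illustration, a minimal sketch of the symlink-plus-include idea using only standard Apache 2.0 directives; the paths and account name are placeholders, and on a cPanel box the <Directory> block belongs in whatever include-file location cPanel documents rather than in httpd.conf itself:

        # shared scripts live in one place
        mkdir -p /opt/shared-cgi
        cp myscript.cgi /opt/shared-cgi/
        chmod 755 /opt/shared-cgi/myscript.cgi

        # per-domain symlink inside the existing cgi-bin
        ln -s /opt/shared-cgi /home/exampleuser/public_html/cgi-bin/shared

        # Apache must allow CGI execution and symlink following for the target, e.g.:
        #   <Directory /opt/shared-cgi>
        #       Options +ExecCGI +FollowSymLinks
        #       AddHandler cgi-script .cgi .pl
        #       Order allow,deny
        #       Allow from all
        #   </Directory>

    Note that if suexec is in force, scripts have to be owned by the vhost's user, which is exactly the "execute scripts not owned by it" concern raised above.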


  • Easiest way to do host name resolution with IPA?

    - by Luke
    We are currently using static LAN IP addresses for our internal, non-public-facing servers. We don't have DHCP configured. We're using Vyatta for our router and firewall; the firewall is configured to be zone based. We want to set up IPA for centralized authentication (LDAP+Kerberos), and IPA requires resolvable host names. I want to avoid having to enter DNS records by hand.

    What is the most painless way to make host names resolvable that works with IPA in a Linux-only environment? We aren't using anything to resolve host names now; up until now we've been using static IP addresses and local users on each server. We've looked at BIND, DHCP (does that even solve the problem?), and multicast DNS. At this point we're not sure which solution would work best. Is there another option we haven't considered?

    Security is very important. We have multiple zones, where each zone has very specific or no access to another zone. DNS for public domains is forwarded from Vyatta to our ISP's DNS server.
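
    For reference, a minimal sketch of what letting IPA's optional integrated DNS handle this looks like, assuming a FreeIPA version that ships the DNS component; the zone names and addresses are placeholders:

        # server side, done once at install time: run the integrated DNS,
        # forwarding everything else to the existing upstream resolver
        ipa-server-install --setup-dns --forwarder=203.0.113.53

        # clients register their own A/PTR records when they enroll
        ipa-client-install --enable-dns-updates

        # records can still be added by hand when needed
        ipa dnsrecord-add internal.lan web01 --a-rec=10.0.1.20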


  • troubleshooting really slow login on a (linux) machine

    - by Peeter Joot
    Within the last couple of weeks, any attempt to log in to a specific linux server has gotten really slow. Once I've logged in, things appear to run without significant delay, but some other login-like activities (like starting a new screen session) are slow. The machine's been rebooted a couple of times recently and that hasn't helped, and it doesn't appear to be $PATH search (where $PATH can sometimes include bad NFS mounts), which I've seen historically in our environment. I've also tried completely removing my .profile/.bash*/... type of init files to rule out anything bad there. I also see slow login for at least one other userid on the system.

    One thing I've noticed is the following message when trying to exit from a screen terminal:

        Utmp slot not found -> not removed

    and I'm wondering if this is related (having a vague recollection that utmp has something to do with login). Any idea what that message means, or how to fix it, and whether it would be related? Failing that, what sort of problem determination tools are available to investigate what is slowing down this login process?
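
    Not specific to this box, but a minimal sketch of the usual first-round checks for login delays (slow reverse DNS of the client, a stalled PAM or login step, etc.); the port number and client address are arbitrary examples:

        # is it name resolution of the connecting client's address?
        time getent hosts 192.0.2.10

        # watch a complete login attempt with timestamps to see where it stalls
        # (second sshd on a spare port so the normal daemon is untouched)
        /usr/sbin/sshd -d -p 2222
        # ...then from another machine: ssh -v -p 2222 user@server

        # or trace the existing daemon, timestamps included
        strace -f -tt -o /tmp/login.trace -p $(pgrep -o sshd)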


  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking them. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our puppet setup and I'd like to be sure those moves won't create problems.

    My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

        1. basic o.s. installation
        2. basic network configuration, enough to reach the internet and resolve dns
        3. install puppet and set up certname
        4. start puppet and let it manage the whole configuration
        5. test, fix problems in config (via puppet), re-test, and so on...
        6. manually stop puppet
        7. set up new network configuration for the datacenter network
        8. move the machine to the DC
        9. turn it on; puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in puppet's manifests (eg. based on subnet, like they do at Wikimedia) and modifying configuration as needed (eg. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
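
    A minimal sketch of step 3, pinning the certname before the first agent run; the FQDNs are placeholders:

        # write the fixed identity into puppet.conf before puppet ever runs
        cat >> /etc/puppet/puppet.conf <<'EOF'
        [main]
        certname = web01.dc1.example.com
        server = puppetmaster.example.com
        EOF

        # the first run then generates the key/CSR under that fixed certname,
        # independent of what hostname/reverse DNS says at the office or in the DC
        puppet agent --test --waitforcert 60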


  • Logging Remote Server Access via Remote Desktop

    - by Nate Bross
    The objective here is to start a simple .NET application I've written which captures some environment variables (time, username, computername, etc.) upon login. This .NET application subscribes to the Windows "user logout" event. Upon launch, the application captures the above variables and creates a record in my database; upon logout (which I'm capturing) I update another field in the same record with the logout time. The above is working exactly as I would like: when I launch the binary, it makes its initial log entry, then waits for the logout event and updates the same record.

    Restrictions: the .NET binary should be able to live on a share point (\\server\share\myapp\v1) so I can update the application to (\\server\share\myapp\v2) and simply update the GPO/logon script. My initial thought was to use the \\domaincontroller\sysvol\ directory to store the binary and then update all user accounts to include a call to my application. Can you see any flaws in this approach?

    My question is this: first, is there anything wrong with my idea above? Second, if so, what is the best way (through group policy or otherwise) to ensure this application launches whenever a session is started on a server?

