Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface

    I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template, since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. I therefore deploy my virtual machines via lxc-create, using the default Ubuntu template. DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual deployment effort minimal. Now, the default Ubuntu template assigns IP addresses via DHCP, so I need a local DHCP server to hand out IP addresses to the nodes so that I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option: the client needs to get the right hostname and IP address from the start.

    Question

    Which DHCP server should I use, and how do I get it to assign the IP address based only on the host-name DHCP option, by performing a DNS lookup on that very host name?

    What I've tried

    I tried to make it work using the ISC DHCP server; however, the manual clearly states:

        Please be aware that only the dhcp-client-identifier option and the hardware address can be used to match a host declaration, or the host-identifier option parameter for DHCPv6 servers. For example, it is not possible to match a host declaration to a host-name option. This is because the host-name option cannot be guaranteed to be unique for any given client, whereas both the hardware address and dhcp-client-identifier option are at least theoretically guaranteed to be unique to a given client.

    I also tried to create a class that matches the hostname, like this:

        class "my-client-name" {
          match if option host-name = "my-client-name";
          fixed-address my-client-name.my-domain.com;
        }

    Unfortunately the fixed-address statement is not allowed inside class declarations. I can replace it with a one-address pool, which works as expected:

        subnet 10.103.0.0 netmask 255.255.0.0 {
          option routers 10.103.1.1;
          class "my-client-name" {
            match if option host-name = "my-client-name";
          }
          pool {
            allow members of "my-client-name";
            range 10.103.1.2 10.103.1.2;
          }
        }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do.

    About security

    Since this is only used in the bootstrapping phase on an internal network, and is then replaced by a static network configuration via Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.
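    One way out of the two-places problem, while staying on ISC dhcpd, is to generate the class/pool stanzas from DNS itself, so that Route 53 remains the single source of truth. A minimal sketch, assuming dig is available; the host list and the idea of pasting the output into the subnet declaration above are assumptions, not part of the original setup:

        #!/bin/sh
        # Regenerate one-address pools from DNS lookups; re-run whenever
        # Route 53 records change, then reload dhcpd.
        DOMAIN=my-domain.com
        HOSTS="my-client-name"   # hypothetical list of container host names

        for h in $HOSTS; do
            ip=$(dig +short "$h.$DOMAIN" | tail -n 1)
            [ -n "$ip" ] || continue
            printf 'class "%s" { match if option host-name = "%s"; }\n' "$h" "$h"
            printf 'pool { allow members of "%s"; range %s %s; }\n' "$h" "$ip" "$ip"
        done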


  • Binding MySQL to run from the public or private LAN IP address - which one is faster

    - by Lamin Barrow
    So we have 2 servers, both running at the same web host. We have bound MySQL to listen on the public IP address of the database server, and the web server connects to it via that public IP. Both servers also run on the same private network. Currently, the DB connect call from our PHP script takes about 3 ms to reach the MySQL database server. My question is: would MySQL interaction from the web server be faster if we bound it to listen on the private LAN address of the database server instead of the public IP? Or is it the same regardless, making no difference? I have moved this question to Server Fault: http://serverfault.com/questions/438156/binding-mysql-to-run-from-the-public-or-private-lan-ip-address-which-one-is-fa
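    If the private interface does turn out to be preferable (within the same facility it usually at least avoids the public-facing path and any firewalling on it), the server-side change is a one-liner. A sketch, where 10.0.0.5 is a stand-in for the database server's actual private LAN address:

        # /etc/mysql/my.cnf on the database server
        [mysqld]
        bind-address = 10.0.0.5

    The web server's connection string would then point at 10.0.0.5 instead of the public IP.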


  • What is a good and safe way of sharing certificates?

    - by Kaustubh P
    I have a few certificates that are used as authentication to SSH into my servers on the Amazon cloud. I rotate those certificates weekly, manually. The issue is that I need to share the certificates with some colleagues, a few on the LAN and a few in another part of the country. What is the best practice for sharing the certificates? My initial thoughts were Dropbox and email. We don't host dedicated email servers with encryption and all that, and we don't have a VPN. Thanks.
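    Whatever the transport (even plain email or Dropbox), one common practice is to never ship key material in the clear: encrypt it to each recipient's public key first. A sketch with GnuPG, assuming each colleague has published a key and key.pem is the file being rotated:

        gpg --encrypt --armor --recipient colleague@example.com key.pem
        # produces key.pem.asc, which is safe to send over an untrusted channel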


  • Server vendor that allows 3rd party disks

    - by Alvin S
    As noted here, Dell is no longer allowing 3rd-party disks to be used with their latest servers. As in, they don't work, period. Which means that if you buy one of these boxes and want to upgrade the storage later, you have to buy disks from Dell at significant premiums. Dell has just given me a very strong reason to take my server business elsewhere. My company buys (instead of leasing) our servers, and typically uses them for 5 years. I need to be able to upgrade/repurpose storage periodically, and I don't want to be locked in to whatever Dell might have in stock, at inflated prices to boot. As you will see in the comments of the above link, it seems HP is doing the same thing. I am looking for a server vendor that offers a 3-5 year warranty with same-day/next-day onsite service, and that allows me to use 3rd-party disks. Suggestions?


  • Only one user at once through remote

    - by Lazlo
    Hi, this is probably an easy question for anyone used to servers, and I know I once managed to do it, but I don't remember how. I purchased a VPS and am able to connect correctly as Administrator, and can start, let's say, MyServer.exe. The problem is, if I connect as Administrator from another device, this process is still there, but I can't see it. What I want to do is limit connections to one per user, and disconnect the existing session when the same user logs in again. I know there is a simpler term for this, a simple way, but I truly don't remember it. And since I'm not used to server vocabulary, I couldn't find it in the S/F questions. Thanks in advance!
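    The setting being described sounds like "Restrict each user to one session", found in Terminal Services Configuration (tscc.msc) on Server 2003 and in Remote Desktop Session Host Configuration on 2008. The registry equivalent, as a sketch (run in an elevated prompt; existing sessions are unaffected until users log off):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 1 /f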


  • Predictive vs Least Connection Load Balancing Techniques

    - by Mani
    I have a Windows-based desktop application that communicates via TCP to the application servers (Windows 2003). There are no sticky sessions between client calls. We have exactly two servers to load balance, and we are thinking of using an F5 hardware load balancer. The application is load-heavy: the services do not run much business logic, but they retrieve quite large amounts of data most of the time, maybe 5,000 to 10,000 records on average. The system is used mainly for storing and retrieving data, with no special processing of data or calculations running on the server side. I am leaning towards 'Predictive', since my services sometimes take a while to return data, and tracking that feedback should yield better routing decisions, which is what Predictive does. I am not sure the information given is sufficient to suggest ideas, but with this in mind, what would be some suggestions/things to consider/the best choice between Predictive and Least Connections? Thanks.
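    For reference, on BIG-IP the choice comes down to the pool's load-balancing mode, so switching between the two later is cheap. A tmsh sketch, with a hypothetical pool name and member addresses (verify the exact syntax against your BIG-IP version):

        create ltm pool app_pool load-balancing-mode predictive-member members add { 10.0.0.11:9000 10.0.0.12:9000 }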


  • Should I install an AV product on my domain controller?

    - by mhud
    Should I run a server-specific antivirus, regular antivirus, or no antivirus at all on my servers, particularly my Domain Controllers? Here's some background on why I'm asking this question: I never questioned that antivirus software should be running on all Windows machines, period. Lately, though, I've had some obscure Active Directory related issues that I have tracked down to antivirus software running on our domain controllers. The specific issue was that Symantec Endpoint Protection was running on all domain controllers. Occasionally, our Exchange server triggered a false positive in Symantec's "Network Threat Protection" on each DC in sequence. After it had exhausted access to all DCs, Exchange began refusing requests, presumably because it could not communicate with any Global Catalog servers or perform any authentication. Outages lasted about ten minutes at a time and occurred once every few days. It took a long time to isolate the problem because it was not easily reproducible, and investigation generally happened after the issue had resolved itself.


  • Consolidate SQL Server Reporting Services

    - by Eric C. Singer
    I've been a big fan of consolidating as many DBs as possible onto a few SQL Servers, and I've had great success with it. However, I've never had to deal with SQL Server Reporting Services. Has anyone migrated SSRS from a bunch of random SQL Servers into one consolidated SQL Server? I don't know a whole lot about SSRS, which is part of the problem. To my knowledge it's one DB per SSRS instance, so it sounds like I'd need to find a way of exporting data and merging it. Basically, the process used to look like:

    1. Move the DB from SQL Express to the shared SQL Server
    2. Change the connection in the app to point at the new SQL Server

    With Reporting Services, how do I move the reporting services component of the DB as well? I realize I may need to tweak the app, but my question is on the SQL side.
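    For what it's worth, moving a single SSRS catalog between instances is the documented path: back up the ReportServer database and the instance's encryption key, restore the database on the consolidated instance, and re-apply the key. A sketch with the rskeymgmt tool that ships with SSRS (the file name and password are placeholders); merging several catalogs into one instance is messier and is usually done by redeploying the report definitions rather than merging databases:

        rem on the old SSRS instance: extract the encryption key
        rskeymgmt -e -f ssrs_key.snk -p <password>

        rem on the consolidated instance, after restoring the ReportServer DB:
        rskeymgmt -a -f ssrs_key.snk -p <password>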


  • Development Server Blocked Only from Home

    - by theonlylos
    Recently I've been having an issue with my CentOS 6 test server, running Apache and Webmin on port 10000, where every part of the server (SSH, FTP, and even my two domains, which both keep returning timeout errors) is unreachable from any computer on my home network. When I access it via tethering or via my office network, however, everything loads fine. While the firewall is the first suspect, my router was never set to block any special ports, and even after adding port 10000 as a specific exception I'm having no luck. I also doubt this is an IP blacklisting issue, because I have websites on other servers using CloudFlare for security and I haven't gotten any warnings. Any assistance is greatly appreciated. UPDATE: Some extra details about the issue: to my knowledge, my ISP only blocks ports 25 and 80 for residential users, to prevent them from running web servers; moreover, this issue only came up a day or two ago, and before that I had been using the server successfully for months. Also, the server is not physically located in any of my workspaces; it's a VPS housed in a datacenter.
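    A quick way to narrow down whether the traffic dies en route or at the VPS itself, sketched below (the hostname and the IP placeholder are illustrative):

        # from home: a TCP traceroute to a blocked port shows where packets stop
        traceroute -T -p 22 server.example.com

        # on the VPS (logged in via tethering): look for the home IP in the
        # firewall rules and in TCP wrappers
        iptables -L -n | grep <your-home-ip>
        grep <your-home-ip> /etc/hosts.deny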


  • Setting up fail2ban to ban failed phpMyAdmin login attempts

    - by Michael Robinson
    We've been using fail2ban to block failed SSH attempts. I would like to set up the same thing for phpMyAdmin. As phpMyAdmin doesn't log authentication attempts to a file (that I know of), I'm unsure how best to go about this. Does a plugin or config option exist that makes phpMyAdmin log authentication attempts to a file? Or is there some other place I should look for such an activity log? Ideally I will find a solution that involves modifying the fail2ban config only, as I have to configure fail2ban with the same options on multiple servers and would prefer not to also modify the various phpMyAdmin installations on those servers.
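    As a side note, newer phpMyAdmin releases do log failed logins to syslog, and current fail2ban versions ship a matching phpmyadmin-syslog filter, which keeps all the configuration on the fail2ban side. A jail sketch, assuming that filter is available and a Debian-style auth log:

        # /etc/fail2ban/jail.local
        [phpmyadmin-syslog]
        enabled  = true
        port     = http,https
        logpath  = /var/log/auth.log
        maxretry = 5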


  • Deploying a site on Amazon Beanstalk and IIS settings

    - by Idan Shechter
    I am interested in working with Amazon Elastic Beanstalk to deploy my new site. A few things that I need to know and can't get an answer to:

    1) How can I maintain the IIS settings of all deployed and future-deployed machines?
    2) If I can, what happens if I change the settings on one server? Will they automatically be applied to the other servers?
    3) How can I back up the data? On other servers I usually make an AMI and deploy it to a new server in case of a problem.
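    Regarding (1) and (2), Elastic Beanstalk's general answer to per-instance settings is to script them into the application bundle so that every current and future instance applies them at deploy time; on recent platforms that means configuration files in an .ebextensions folder. A sketch (the file name and the particular appcmd command are illustrative, not prescriptive):

        # .ebextensions/iis.config
        commands:
          01_enable_directory_browsing:
            command: '%windir%\system32\inetsrv\appcmd set config /section:directoryBrowse /enabled:true'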


  • Access to NTP via IP which doesn't change often

    - by faulty
    I'm trying to sync the clock of our production server, located in a data center, with pool.ntp.org. For security reasons, our servers have no internet access unless we request that specific IPs/ports be opened explicitly. I worked out a list of IPs based on 0.asia.ntp.org, 1.asia.ntp.org, 2.asia.ntp.org, and 3.asia.ntp.org. I hadn't realized that ntp.org uses round-robin DNS and that the servers are run by volunteers, so they change from time to time. In fact, the IP I got from 3.asia.ntp.org last month is no longer working now. I'm wondering if there's a publicly known NTP server that doesn't change as often, or if there's a way to get around this without having to request a firewall update on a monthly basis. I believe many admins are facing the same issue here.
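    A common way around this is to stop chasing pool IPs entirely: allow exactly one internal relay host out through the firewall, let it track the pool, and point the production servers at the relay's fixed internal address. An ntp.conf sketch (the internal address is hypothetical; note that the pool's canonical hostnames carry a .pool. component):

        # /etc/ntp.conf on the relay host (the only one allowed out on UDP 123)
        server 0.asia.pool.ntp.org iburst
        server 1.asia.pool.ntp.org iburst
        server 2.asia.pool.ntp.org iburst

        # /etc/ntp.conf on each production server
        server 10.0.0.10 iburst    # the relay's fixed internal IP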


  • CentOS Failover Cluster - SIOCADDRT: No such process (when adding a loopback)

    - by Steve Rolfe
    I'm trying to configure two web servers behind a load-balancing server. The load-balancing aspect works fine (it sees both servers, kills them if it needs to, and seems to direct traffic fine). The only issue is with the loopback configuration on the servers:

        # /etc/sysconfig/network-scripts/ifcfg-lo:0
        DEVICE=lo:0
        IPADDR=<Virtual IP>
        NETMASK=255.255.255.255
        ONBOOT=yes
        NAME=loopback

    Every time I try a "service network restart", I get "SIOCADDRT: No such process" when the loopback interface is loaded. Anyone have an idea what's causing this?


  • On setting up Apache and IIS to share the same IP

    - by miCRoSCoPiCeaRthLinG
    Hello, there are two different web apps running on two (physically) different servers on our network: one on IIS and the other on Apache, both on port 80, since the two machines are reachable at different IPs on our internal network. Now I want to expose both these services to the world. My idea is to somehow redirect each incoming connection to the appropriate server based on the subdomain the user chose. Example:

        xxx.domain.com maps to IIS    (internal IP: 1.2.3.4)
        yyy.domain.com maps to Apache (internal IP: 5.6.7.8)

    To the world, both these servers will share the same public IP. What kind of a configuration am I looking at, and how do I go about trapping the subdomain requests and redirecting them to the appropriate server? Thanks, m^e
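    The usual pattern for this is a reverse proxy on whichever machine holds the public IP (or on a third, dedicated box): name-based virtual hosts that forward each subdomain to the right internal server. A sketch for the Apache side, assuming mod_proxy and mod_proxy_http are loaded (NameVirtualHost applies to Apache 2.2 and earlier):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName xxx.domain.com
            ProxyPreserveHost On
            # forward to the internal IIS box
            ProxyPass        / http://1.2.3.4/
            ProxyPassReverse / http://1.2.3.4/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName yyy.domain.com
            ProxyPreserveHost On
            # forward to the internal Apache box
            ProxyPass        / http://5.6.7.8/
            ProxyPassReverse / http://5.6.7.8/
        </VirtualHost>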


  • Windows 2003 SBS: no more CALs sold

    - by Gregory MOUSSAT
    I just discovered a hidden, unmanaged server at a remote location: a Windows 2003 SBS with 5 per-device CALs, and 12 computers currently connecting. So I want to buy more CALs, but SBS 2003 CALs are not sold anymore. Neither are SBS 2008 CALs, which could have been downgraded to 2003; and 2011 CALs can't be downgraded. So there is no legal solution if we want to stay with 2003. A sort of programmed obsolescence. We could upgrade the server to 2011, but I'd like to leave it as is (I don't "repair" working servers; that often leads to bigger problems, especially on these unmanaged servers). Does anyone see another solution?


  • Memcached - doesn't seem to be working

    - by Trev
    My local.xml:

        <session_save><![CDATA[files]]></session_save>
        <cache>
          <backend>memcached</backend>
          <prefix>MAGE_</prefix>
          <memcached>
            <servers>
              <server>
                <host><![CDATA[127.0.0.1]]></host>
                <port><![CDATA[11211]]></port>
                <persistent><![CDATA[1]]></persistent>
              </server>
            </servers>
          </memcached>
        </cache>

    /var/cache is still filling up. memcached is running:

        memcache  2685  0.0  0.3 351888 26152 ?  Sl  08:07  0:19 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1

    How do I know it's working? I notice no speed increase.
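    Two quick checks, sketched below. First, memcached itself will report whether Magento is talking to it: the counters should climb while you click around the site. Second, note that with only a fast <backend> configured and no <slow_backend>, Magento's two-level cache still writes cache tags to the default file backend, so the cache directory continuing to fill does not by itself mean memcached is bypassed.

        echo stats | nc 127.0.0.1 11211 | egrep 'cmd_get|get_hits|get_misses|curr_items'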


  • Apple XRaid questions

    - by luckytaxi
    I inherited an environment with a couple of Apple Xserve RAID SANs.

    1. I have a 14-drive setup that's split into 5 LUNs on EACH side. The SAN goes into a fibre switch along with the servers that are attached to it. LUN masking is enabled on the SAN and, as far as I know, there is no zoning on the fibre switch. Question: I have a server that's assigned two LUNs, one from each side of the controller. For some reason it only sees one LUN (from the upper controller) and not the one from the lower controller. The controller seems to be working fine, as I have other servers attached to LUNs on the lower controller.

    2. I see a little "disclaimer" saying that any changes to the Xserve RAID will result in a reboot. So if I add/remove hosts, this thing is going to reboot?!


  • Windows Server 2008 R2 Software Deployment on Active Directory - Schema Issue

    - by weedave
    We have two servers, one running Windows Server 2003 SP2 and one running Windows Server 2008 R2. Both servers have their own versions of Group Policy Management (1.0.2 on 2003 and 6.0.0.1 on 2008). We want to migrate everything over to the newer 2008 server, including software deployment. However, when I try to add a new software package using a .msi file, I get the following error: "The schema for the software installation data in the Active Directory does not match the required schema." I have tried two separate software packages and get the same error on the 2008 server. When I do the same on the 2003 server, however, it adds the software package without any problems. The .msi files I am using are up to date; one is the most recent version of Google Chrome. Is this problem caused by the different versions of the OS, or by the Group Policy Management program? How do we "upgrade" our Active Directory to allow software deployment on the 2008 server? Thanks.


  • Mysterious login attempts to windows server

    - by Jim Balo
    I have a Windows 2008 R2 server that is reporting failed login attempts from a number of workstations on our network. Some event log details:

        Event ID: 4625, Status: 0xc000006d, Sub Status: 0xc0000064
        Security ID: NULL SID, Account Name: joedoe, Account Domain: Acme
        Workstation Name: WINXP1, Source Network Address: 192.168.1.23, Source Port: 1904
        Logon Process: NtLmSsp, Authentication Package: NTLM, Logon Type: 3 (network)

    I believe this is coming from some NetBIOS service or similar (maybe the file explorer) keeping an inventory of its network neighborhood and trying to authenticate along the way. Is there a way to turn this off without having to turn off file sharing altogether? In other words, clients authenticating against file servers that they use is of course no problem, but I want to eliminate clients trying to authenticate to servers that they are not using and have no business with. The above example is only one of thousands of log alerts for similar failed network authentications. What can I do to clean this up / handle this? Thanks.


  • VMware / Citrix Xen type environment vs Ubuntu Cloud / Amazon EC2 type environment

    - by Nick Gorbikoff
    Hello. A bit of background: we run a small in-house data center with about 20 virtualized servers (Debian Lenny, Windows 2003, Windows XP, and Windows 7 machines) in a Citrix Xen pool running on 3 host servers and a SAN, plus a few standalone machines running legacy or specialized software that can't be virtualized. There is a big push everywhere now to move to the cloud, so we are considering Ubuntu Cloud. I was wondering what the pros and cons are of running a virtualized pool vs. a cloud to run all those machines? Thank you.


  • Will a database server perform better running on 2 CPUs with 16 cores or 4 CPUs with 8 cores?

    - by AlexOdin
    What I have:

    - an online financial application (ASP.NET, C#); at peak we have 5K+ simultaneous users
    - the backend runs on Oracle 11g (active server + standby using Active Data Guard); at peak, 4K-5K database sessions
    - Oracle is installed on Linux 5.8 (Oracle's Unbreakable version)
    - database size: 7 TB
    - disk storage: NetApp (connected over a 10 Gb network)

    I would like to replace the old servers (IT will purchase HP BL685c blades). The servers will have 256 GB of RAM. I need your help to figure out what to do with CPUs and cores. The options:

    1. 2 CPUs (2.3 GHz) with 16 cores each
    2. 4 CPUs (3.0 GHz) with 8 cores each

    Question: which one should I pick? P.S. Next year we will migrate from Oracle to SQL Server; I hope whatever option you recommend will work for both platforms.


  • Centralized sudo sudoers file?

    - by Stefan Thyberg
    I am the admin of several different servers, and currently there is a different sudoers file on each one. This is getting slightly out of hand, as I quite often need to give someone permission to do something with sudo, but it only gets done on one server. Is there an easy way of editing the sudoers file just on my central server and then distributing it by SFTP, or something like that, to the other servers? I'm mostly wondering how other sysadmins solve this problem, since the sudoers file doesn't seem to be remotely accessible with NIS, for example. The operating system is SUSE Linux Enterprise Server 11 64-bit, but it shouldn't matter. EDIT: Every machine will, for now, have the same sudoers file. EDIT2: The accepted answer's comment was the closest to what I actually went ahead and did: I am now using an SVN-backed Puppet installation, and after a few headaches it's working very well.
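    Since the asker ended up on Puppet, a minimal sketch of the usual pattern: manage /etc/sudoers as a file resource and refuse to deploy a syntactically broken file. The module and source path are hypothetical, and validate_cmd requires a reasonably recent Puppet:

        file { '/etc/sudoers':
          ensure       => file,
          owner        => 'root',
          group        => 'root',
          mode         => '0440',
          source       => 'puppet:///modules/sudo/sudoers',
          validate_cmd => '/usr/sbin/visudo -cf %',
        }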


  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .Net client application:

        The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

    By sporadic I mean that it might occur not at all, once every few days, or a half-dozen times a day for some users. It never occurs on a user's first web service call, and the subsequent (usually identical) call always works immediately after the failure. The failures happen across a variety of methods in the service, and usually happen 15-20 seconds after the request (according to the log). Looking in the IIS site log for the particular call will show one or the other of the following Windows error codes:

        121: The semaphore timeout period has elapsed.
        1236: The network connection was aborted by the local system.

    Some additional environment details:

    - Running on an internal web farm consisting of two servers running IIS7 on Windows Server 2008. These problems did not occur when running in an older IIS6 web farm of three servers on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMware virtual machines, not sure if that is a surprise anymore or not.
    - The web service is a .Net 2.0/3.5 compiled .asmx web service that has its own application pool (.Net 2.0, integrated pipeline). Only Windows Authentication is enabled.
    - We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled. It is used for a portion of our ERP system. We have tried the same and different application pools, with no effect on the error. That site isn't hit as often as the primary site and has never had an error.
    - As mentioned, the error only happens when called from the .Net client, not from other applications.
    - The client application always creates a new web service object for each request and sets the service credentials to System.Net.CredentialCache.DefaultCredentials.
    - The application is either deployed locally to a client or run in a Citrix server session. Users running in Citrix don't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx).

    I've checked the OS logs to see if I can see any problems, but nothing really sticks out. EDIT: Actually, I myself just ran into the error a little bit ago. I decided to check the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not; I'll have to check the logs of previous errors. I'm probably grasping at straws, but the details:

        Log Name:      Security
        Source:        Microsoft-Windows-Security-Auditing
        Date:          7/8/2009 1:39:50 PM
        Event ID:      5159
        Task Category: Filtering Platform Connection
        Level:         Information
        Keywords:      Audit Failure
        User:          N/A
        Computer:      is071019.<**.net
        Description:   The Windows Filtering Platform has blocked a bind to a local port.

        Application Information:
            Process ID:       1260
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Source Address:   0.0.0.0
            Source Port:      54802
            Protocol:         17
        Filter Information:
            Filter Run-Time ID: 0
            Layer Name:         Resource Assignment
            Layer Run-Time ID:  36

    I've also tried to use Failed Request Tracing in IIS7, but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked the DNS and the NIC settings are correct, so there is no "flapping"; everything pans out. I'm not sure that they checked any domain controller servers, though, to see if that could be an issue. Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge of what to investigate from the networking side of things, although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.


  • How to protect ejabberd from brute-force attacks?

    - by Sergey
    It writes this in the logs:

        =INFO REPORT==== 2012-03-14 17:48:54 ===
        I(<0.467.0>:ejabberd_listener:281) : (#Port<0.4384>) Accepted connection {{10,254,239,2},51986} -> {{10,254,239,1},5222}

        =INFO REPORT==== 2012-03-14 17:48:54 ===
        I(<0.1308.0>:ejabberd_c2s:784) : ({socket_state,tls,{tlssock,#Port<0.4384>,#Port<0.4386>},<0.1307.0>}) Failed authentication for USERNAME

        =INFO REPORT==== 2012-03-14 17:48:54 ===
        I(<0.1308.0>:ejabberd_c2s:649) : ({socket_state,tls,{tlssock,#Port<0.4384>,#Port<0.4386>},<0.1307.0>}) Failed authentication for USERNAME

    It doesn't log an IP with the failure, and the "Accepted connection" and "Failed authentication" lines may not even be adjacent (on heavily loaded servers, I assume), so fail2ban can't easily be used. What to do? And how are jabber servers (using ejabberd) usually protected?
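    One stopgap that needs no log parsing at all is connection rate-limiting at the firewall. A sketch using iptables' recent module (the thresholds are arbitrary): the first rule records each new connection to port 5222, and the second drops a source once it exceeds 10 new connections in 60 seconds. Since every failed login costs the attacker a reconnect, this caps the guessing rate even without per-failure IPs in the log:

        iptables -A INPUT -p tcp --dport 5222 -m state --state NEW -m recent --name xmpp --set
        iptables -A INPUT -p tcp --dport 5222 -m state --state NEW -m recent --name xmpp --update --seconds 60 --hitcount 10 -j DROP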


  • the right way to do deployment with capistrano

    - by com
    I'm looking for good practices for deploying with Capistrano. Let me start with a short description of how I used to do deployments. Capistrano is installed locally on a developer's computer, and I deploy through a gateway using Capistrano's :gateway option. At first I thought that with the :gateway option I would only need an SSH connection to the gateway host, but it turns out I need an SSH connection (public key) on all hosts I want to deploy to. I would like to find a convenient and secure way to deploy the application. For example, when a new developer starts working, it is much more convenient to put his public key only on the gateway server and not on all application servers. On the other hand, I don't want him to have general access to the servers (in particular, shell access to the gateway); since he is a developer, he only needs to do deployments. If you are aware of good practices for deploying with Capistrano, please let us know.
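    For reference, the setup described, in Capistrano 2 terms, is just the following (the host names are hypothetical):

        # config/deploy.rb
        set :gateway, 'deploy@gateway.example.com'   # SSH traffic is tunneled through here
        role :app, 'app1.internal', 'app2.internal'  # keys must still be authorized on these

    This matches the observed behavior: the gateway only forwards SSH connections, so the deployer's key still has to be authorized on every target host. One common compromise is a single shared deploy key that is held on the gateway and authorized on the targets, so individual developers' personal keys live only on the gateway.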

