Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • Remotely Trigger WSUS-Downloaded Update Installation

    - by Zypher
    This has been bugging me for a while. We have our servers set to download (but not install) Windows updates, staging them to be installed during one of our bi-monthly patch windows. I have looked high and low for a way to trigger the installation remotely during that window, so that I don't have to log into a hundred or more servers and click the "Install Updates Now" balloon. Does anyone know of a way to trigger the update installations remotely?
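    One approach, sketched under stated assumptions: PowerShell remoting is enabled on the targets, SERVER01 is a placeholder name, and the script drives the Windows Update Agent COM API to install whatever WSUS has already staged. Be warned that the WUA installer refuses some remote invocation contexts, in which case the same body can be wrapped in a one-shot scheduled task instead.

        Invoke-Command -ComputerName SERVER01 -ScriptBlock {
            $session  = New-Object -ComObject Microsoft.Update.Session
            $searcher = $session.CreateUpdateSearcher()
            # Collect only the updates the WSUS client has already downloaded
            $staged = New-Object -ComObject Microsoft.Update.UpdateColl
            $searcher.Search("IsInstalled=0").Updates |
                Where-Object { $_.IsDownloaded } |
                ForEach-Object { [void]$staged.Add($_) }
            if ($staged.Count -gt 0) {
                $installer = $session.CreateUpdateInstaller()
                $installer.Updates = $staged
                $installer.Install()
            }
        }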


  • How to optimize Apache on a web server

    - by Prakash
    How can I optimize the server with the following configuration? It takes too much time to load a page. IBM X3200 M3 server - 1 Intel Xeon processor with 4 GB RAM. Below is my current configuration for Apache:

        Start Servers: 5 (default)
        Minimum Spare Servers: 10
        Maximum Spare Servers: 20
        Server Limit: 500
        Max Clients: 500
        Max Requests Per Child: 10000 (default)
        Keep-Alive: On
        Keep-Alive Timeout: 5
        Max Keep-Alive Requests: 100
        Timeout: 200
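    For what it's worth, a 4 GB box usually cannot sustain 500 prefork children: once the children outgrow RAM, the machine swaps and every page crawls. A hedged sketch of more conservative prefork settings, assuming roughly 50 MB per child (measure your own with top or ps):

        <IfModule prefork.c>
            StartServers           5
            MinSpareServers        5
            MaxSpareServers       10
            ServerLimit           60
            MaxClients            60      # ~4 GB / ~50 MB per child, with headroom for the OS
            MaxRequestsPerChild 10000
        </IfModule>
        KeepAlive        On
        KeepAliveTimeout  2               # 5 s per idle client ties up children under load
        Timeout          60               # 200 s is far longer than any sane request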


  • Diagnosing Random Network Lag

    - by uesp
    I'm having trouble diagnosing some random lag on a 6-server LAMP cluster serving a MediaWiki site. While we're serving some 100 pages/sec, the servers themselves are running fine, with less than 0.5 load, no locked processes, no paging, no errors being logged, etc.

    The lag is present on all servers and is random: one minute it's fine, the next it's there. DNS lookups on the servers are randomly slow. For example, time nslookup google.com varies randomly from a few milliseconds to several seconds, and sometimes times out entirely. While we use IP addresses internally on the cluster, this may be a symptom of the root issue. We are not running our own DNS server.

    The Apache server-status pages randomly lag or time out. Benchmarking with ab between servers shows a few loads sometimes take 3000 ms (almost exactly). Benchmarking server-status on the local server itself usually shows no issue (it showed lag only once among a few hundred tests).

    The servers are sitting behind a switch and a firewall which I don't have any access to, so I don't know their setup or status. While we are under heavier than normal load, 2 Mbps incoming and 20 Mbps outgoing traffic shouldn't be stressing the switch or firewall, should it? My feeling is that it is the switch/firewall, or something above them in the ISP like their DNS, but I can't confirm it. I need some other tests or methods of diagnosing this lag to try to narrow down the ultimate cause.
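    One hint worth chasing: an almost-exactly-3000 ms stall is the classic initial TCP SYN retransmission timeout, which points at packet loss (often a duplex mismatch or an overloaded firewall state table) rather than slow servers. A quick sketch for quantifying the DNS side, assuming dig from bind-utils is installed:

        # Time repeated lookups against each resolver in /etc/resolv.conf
        for ns in $(awk '/^nameserver/ {print $2}' /etc/resolv.conf); do
            for i in $(seq 1 100); do
                dig @"$ns" google.com +tries=1 +time=2 |
                    awk -v ns="$ns" '/Query time/ {print ns, $4, "ms"}'
            done
        done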


  • How (much) is virtualization used today?

    - by BLAKE
    I know that where I have worked, I have pushed a lot for virtualizing our servers. I think they are much easier to implement and maintain than physical servers. I have been using Microsoft's Virtual Server 2005 R2 since it was released. Right now at my workplace we have 12 VM hosts that hold about 55 VMs. We have 6 other servers that we have been unable to convert to VMs. I want to know how other people in our field view virtualization. I know that I have had developers dislike the notion of VMs, claiming major performance hits. What do other sysadmins think about virtualized servers?


  • Simple Windows+Linux server provisioning? Chef/Puppet/Ansible etc

    - by Andrew
    I'm primarily a developer and part-time devops, and I manage servers here and there for my projects. I want to automate provisioning of web/app/database servers going forward. I manage a mixture of both Windows and Linux servers (VPS, cloud, and dedicated). I've briefly investigated Chef/Puppet/Ansible, and I want to find something that:

        - Is easy to learn and understand. I don't want to invest weeks into understanding a complicated piece of tech.
        - Ideally does not require a "master server" to hold the configurations.
        - Supports provisioning of both Windows and Linux servers.
        - Comes with suitable documentation to get started.

    Does anyone have advice on which tool is best suited? Thanks.
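    Of the three, Ansible comes closest to the masterless requirement: it pushes over SSH to Linux and WinRM to Windows, with no agent or central server required. A minimal sketch of a playbook, assuming hypothetical inventory groups named webservers and windows:

        # site.yml
        - hosts: webservers
          become: yes
          tasks:
            - name: Install nginx on the Linux web servers
              apt:
                name: nginx
                state: present

        - hosts: windows
          tasks:
            - name: Ensure IIS is present on the Windows servers
              win_feature:
                name: Web-Server
                state: present

    Run it with ansible-playbook -i inventory site.yml; the same inventory file can list both platforms.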


  • Mounted NFS directory not writable by Apache / PHP

    - by phpfour
    Need some help here with NFS. Here's what I have (all servers running CentOS 5.6 with SELinux):

        172.17.20.1 - primary server with static IP; Varnish redirects requests to the web servers
        172.17.20.2 - web server 1
        172.17.20.3 - web server 2

    The application residing on the web servers is Drupal, and I need both of them to share the same files directory. I have created a folder on 172.17.20.1 called /var/nfs, owned by root. Here is my /etc/exports content:

        /var/nfs 172.17.20.2(rw,sync,no_root_squash) 172.17.20.3(rw,sync,no_root_squash)

    On both web servers (172.17.20.2/3), I have it mounted like below:

        [root@web2 ~]# mount
        ...
        172.17.20.1:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,sync,hard,intr,addr=172.17.20.1)

    On all the servers, I've added the user apache to the root group to get the desired write access:

        [root@main ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

    Despite all this, when I try to write files into the /mnt/nfs/var/nfs folder from Drupal/PHP, it cannot write to them. I even tried with a simple PHP upload script, but it doesn't work, so the problem is not with Drupal. Any help is much appreciated. I've spent hours and hours on this without any success :( Thanks in advance.
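    Two likely culprits, offered as a hedged sketch rather than a definitive fix. First, adding apache to the root group only helps if the directory's group permission bits allow writing, and a root-owned 755 directory does not; since NFSv3 matches numeric UIDs across hosts, the simplest change is to let uid 48 (apache on CentOS) own the export. Second, SELinux on the web servers blocks Apache writes to NFS unless the relevant boolean is enabled:

        # On 172.17.20.1 (the NFS server): let apache's uid own the export
        chown 48:48 /var/nfs
        chmod 775 /var/nfs

        # On each web server: permit httpd to use NFS under SELinux
        setsebool -P httpd_use_nfs 1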


  • How much faster is using an internal IP address instead of an external one?

    - by user349603
    I have a mailing list application that sends emails through several dedicated SMTP servers (running Linux Debian 5 and Postfix) in the same network of a hosting company. However, the application is using the servers' external IP addresses in order to connect to them over SMTP, and I was wondering what kind of improvement would be obtained if the application used the internal IP addresses of the servers instead? Thank you in advance for your insight.
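    In many hosting networks the internal and external paths differ by no more than a hairpin through the edge router or firewall, so the gain is often a hop's worth of latency plus whatever per-connection firewall overhead disappears - but it is easy to measure rather than guess. A quick sketch, with placeholder addresses:

        ping -c 20 203.0.113.25      # the server's external address
        ping -c 20 10.0.0.25         # its internal address
        traceroute 203.0.113.25      # does the external path actually leave the LAN?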


  • Poor man's redundant domain name server array

    - by John
    Can I configure my domain example.com's name servers as:

        ns1.dyndns.com
        ns2.dyndns.com
        ns1.opendns.com
        ns2.opendns.com

    That is, can I combine free DNS services to create a redundant name server array? Note that these name servers belong to different companies, and neither company is aware that the other's name servers also serve my domain. If one company's servers - say, ns1/ns2.dyndns.com - are down, will people experience interruptions when visiting example.com? If one name server is unreachable, will the next name server be tried?
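    Resolvers do fail over: if one name server times out, they retry another, at the cost of extra latency on those queries. So mixing providers works provided every listed server actually answers authoritatively for the zone - and with no provider-to-provider zone transfers, you must keep the records in sync by hand. A quick check, assuming the four servers above:

        # Each listed server should return the same SOA, non-recursively
        for ns in ns1.dyndns.com ns2.dyndns.com ns1.opendns.com ns2.opendns.com; do
            dig @"$ns" example.com SOA +norecurse +short
        done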


  • Log viewer server and client

    - by Scott Crooks
    I'm looking for a log-viewing solution for (mostly) Linux and (preferably) Windows too. I want to be able to centralize the log information for a lot of servers, so that people in the company can see what's going on across the different servers. I would guess this would involve a central server which accepts information from the various computers / virtual machines, with (perhaps) a daemon running on each of the servers. Does such software exist?
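    The shipping half of this is a solved problem on Linux: rsyslog (stock on most distros) can forward everything to one host, with a web viewer such as LogAnalyzer or Graylog layered on top. A minimal sketch of the forwarding, using a placeholder hostname:

        # On each client, in /etc/rsyslog.conf (@@ = TCP, a single @ = UDP)
        *.*  @@logserver.example.com:514

        # On the central server: accept remote logs over TCP
        $ModLoad imtcp
        $InputTCPServerRun 514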


  • Virtualized Development Server for simulating 3-Tier Environment

    - by chris.cyvas
    Hello, I am thinking about buying a new server-based development box for development (redundantly redundant, I know ;)). Ideally, I want to run something like ESXi or the Xen hypervisor at the lowest level. Then I want to add (at least) 5 Linux VMs for the following uses:

        - 2 web servers
        - 2 application servers
        - 1 database server

    I want to load-balance the 2 web servers and the 2 application servers, and (somewhat obviously) they all need to be networked together to simulate a production environment. Also, it used to be the case that the recommendation was to put each VM on its own hard drive, but I'm not sure that holds water anymore. Does anyone have any advice on how to pull this off? Gotchas, look-outs, etc.? Thanks!


  • Nagios - How to display specific monitors to a specific user/contactgroup while also displaying them to the admin team?

    - by Itai Ganot
    I have a Nagios server which monitors many servers; a number of them are used for QA. I'd like to give the QA team access to the Nagios UI, and I want them to be able to view only the monitors related to their work. Moreover, the servers I want the QA team to monitor should also be displayed to the admins group (as is configured at the moment), in addition to the QA team. Is that doable?
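    In Nagios Core, the CGI UI shows a user exactly the hosts/services for which they are a contact (unless they appear in authorized_for_all_services in cgi.cfg), so listing both contact groups on the QA objects gives each team the view you describe. A sketch with placeholder names:

        define contactgroup {
            contactgroup_name  qa-team
            alias              QA Team
            members            qa-user1,qa-user2      ; placeholder contacts
        }

        define service {
            use                  generic-service
            host_name            qa-box-01             ; placeholder QA host
            service_description  HTTP
            check_command        check_http
            contact_groups       admins,qa-team        ; visible to both teams
        }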


  • Is a Tomcat shared session / cluster between two machines possible?

    - by Snorri
    I have a setup of several Tomcat servers distributed between a few physical servers, all running the same thing. Apache sits on top of each Tomcat, and a load balancer sits in front of the Apache servers. I want to cluster the Tomcats using shared sessions to minimize downtime and user interruption while deploying apps. I know clustering works within the same server, but is it possible to set up Tomcat in a way that it shares sessions between servers on different machines?

        = Server 1
        == Apache 1
        === Tomcat 1
        = Server 2
        == Apache 2
        === Tomcat 2

    When Server/Tomcat 1 is taken down, users and their sessions should transfer over to Server/Tomcat 2, and vice versa.
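    Yes - Tomcat's SimpleTcpCluster replicates sessions across machines, discovering members via multicast by default, provided each webapp is marked <distributable/> in its web.xml and all session attributes are serializable. A minimal sketch for each node's server.xml (Tomcat 6/7 class name shown), placed inside the <Engine> element:

        <!-- The default DeltaManager replicates every session to all members -->
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>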


  • Slave DNS server resolv.conf configuration

    - by Tim the Enchanter
    I have two servers (a master and a slave) running DNS (BIND). What should the /etc/resolv.conf file look like on the master and the slave? For example, should the servers running DNS have only:

        nameserver 127.0.0.1

    or should they refer to the IP addresses of each server, as the other servers on the network do:

        search <mydomain>.co.uk
        nameserver 192.168.1.52
        nameserver 192.168.1.57


  • Looking for a VNC (and or Remote Desktop) Profile Manager/Launcher

    - by zevlag
    I connect to many different computers via VNC and RDP. I'm looking for a Windows client that can preferably do the following, though I'd accept software options that only meet some of the points:

        - Save profiles (hostname, username, password, settings, etc.)
        - Connect to VNC servers
        - Connect to RDP servers
        - Connect to SSH servers (or SSH tunneling)
        - Scan the network for devices not in saved profiles
        - Free (as in beer)

    Tools that do similar things, but not on Windows:

        - ARD on OS X
        - iSSH on iPad
        - Desktop Connect on iPad


  • Cloud hosting and single hardware point of failure?

    - by PeterB
    From talking to sales, I thought Rackspace Cloud ran on a SAN plus compute nodes (as VMware's offerings do), only to find out it doesn't; so when the host server goes down for maintenance, all cloud servers on that host go down (in our case for 2.5 hours). I understand Amazon EC2 also has this single-server point of failure.

        - Which cloud hosting solutions don't rely on a single server? I've yet to find a list by architecture.
        - Is there a term that distinguishes between these types of 'cloud'? Is one of these 'grid computing' and the other 'virtualisation'?
        - Can a SAN-backed solution provide the same reliability as 2 mirrored cloud servers on (say) Rackspace Cloud?

    I am more familiar with the VMware architecture and would like to understand the advantages and disadvantages of each approach. I understand the standard architecture is to have multiple cloud servers with mirrored data between them; until we need multiple database servers, I'm wondering if a SAN/node hosting solution would provide the lack of downtime we need without the added complexity.


  • System Center Operations Manager role combinations

    - by KAPes
    We are evaluating the System Center Operations Manager 2007 R2 product and would like to know which roles can be combined onto a single server. For example, can the root management server and reporting server be on one box, or not? My environment is about 450 servers, mostly Exchange servers and domain controllers, plus a few OCS servers.


  • Network config / gear question

    - by mcgee1234
    I have been tasked with setting up a fairly straightforward rack in a data center (we do not even need a whole rack, but this is the smallest allotment available). In a nutshell, 4 to 6 servers need to be able to reach 2 (maybe 3) vendors, and the servers need to be reachable over the internet.

    A little more detail: the networks the servers need to reach are inside the data center and are "trusted". Connections to these networks will go over intra-data-center cross connects. It is kind of like a manufacturing line, where we receive data from one vendor (burstable up to 200 Mbit/s), churn through it on the servers, and then send data out to another vendor (bursts up to 20 Mbit/s). This series of events is very latency-sensitive, so much so that it is common practice not to use NAT or a firewall on these segments (or so I hear). To reach the servers over the internet, I plan to use a site-to-site VPN. (This part is only relevant as far as hardware selection goes.)

    I have 2 configurations in mind:

        1. A Cisco 2911 (2921) (with the additional WAN ports module) and a layer 2 switch - in this scenario, I would also use the router for VPN.
        2. A Cisco 3560 layer 3 switch to interconnect the networks inside the data center, and an ASA 5510 (which is total overkill, but the 5505 is not rack-mountable) as a firewall for the WAN side (internet) and VPN. I envision the setup as: Internet - ASA - 3560, and Vendors - 3560 - Servers.

    The general idea in the second option is that the ASA acts as the firewall and VPN device while the 3560 does all the heavy lifting. The first is a fairly traditional setup, but my concern is performance. The second is somewhat unorthodox in that the vendors are directly connected to the layer 3 switch without passing through a firewall. Based on my understanding, however, a layer 3 switch will perform substantially better, as it does hardware (ASIC) switching rather than software switching. (Note that option 2 is a little over the budget, but not unworkable (double negative, ugh).)

    Since this is my first time dealing with a data center, I am not sure what the IP space is going to look like. I suspect I will get a block (or blocks) of public IPs, VLAN them to individual interfaces for the vendor connections and the servers (which will not be reachable from the WAN side, of course), and set up routing on the switch.

    So here are my questions:

        1. Is there a substantial performance difference between options 1 and 2, i.e. hardware-based switching on the layer 3 switch vs. software-based switching on the 2911? I have trawled the internet and found a lot of Cisco literature, but nothing I could really use to get a good handle on it.
        2. The vendors we connect to are secure and trusted (famous last words), and as I understand it, it is common practice not to NAT or firewall these connections (because of the aforementioned latency sensitivity). But what kind of latency are we really talking about if I push the data through a router (or even the ASA, for that matter)? For our purposes, 5 ms will not kill us; 20 or 30 can be very costly. Others measure in microseconds, but they are out of our league.
        3. Are there any issues with using public IPs on a layer 3 switch?

    I am certainly not married to either of these configs, and I am totally open to any ideas. My knowledge (and I use the term loosely) is largely from books, so I welcome any advice / insight. Thanks in advance.
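    On the performance question: the 3560 routes in hardware at line rate with forwarding latency on the order of tens of microseconds, while a software router like the 2911 typically adds somewhere in the low hundreds of microseconds at these traffic levels - both comfortably inside a 5 ms budget, though the switch leaves far more headroom. Public IPs on routed VLAN interfaces are routine. A hedged IOS sketch of the routed-switch option, with placeholder addressing:

        ! 3560 sketch: one routed VLAN per vendor cross connect, one for the servers
        ip routing
        !
        interface Vlan10
         description Vendor A cross connect
         ip address 198.51.100.1 255.255.255.252
        !
        interface Vlan20
         description Server segment (public block)
         ip address 203.0.113.1 255.255.255.240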


  • Seeking recommendations on resolving sporadic network connectivity latency for Notes client

    - by Russell Maher
    I have Domino servers in geographically dispersed data centers in the U.S. Sometimes when I open an NSF on one of those servers, the connection times out; then, when I open the NSF again, it connects immediately. This has been going on for years, and during that time I have upgraded and changed my own internet connection and moved servers to different data centers. Of course, I have direct connection documents using fixed IP addresses. When I run a Notes client trace, nothing is out of the ordinary. My business partner experiences the same thing from an entirely different city and a different ISP, but to the same servers. We never have any trouble connecting to the HTTP server, just over port 1352. Does anyone have any recommendations on a process to determine what is causing this problem?


  • How To Check My Current Version of FFMPEG

    - by aamiri
    I have FFmpeg installed on 2 different servers. On one of them, I run into an issue every time I try to convert m4v files: ffmpeg just processes the file indefinitely. When I take the same source file and run it on the other server, it works just fine. Both servers are running the same version of GNU/Linux. Someone suggested I check whether the same version of FFmpeg is installed on both servers, so my question to you all is: how do I check my FFmpeg version? Thanks!
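    A couple of ways to check, sketched below. Comparing the configuration lines matters as much as the version string, since two builds of the same release can bundle different codec libraries:

        ffmpeg -version            # version plus the ./configure flags it was built with
        ffmpeg 2>&1 | head -n 5    # older builds print the same banner on any invocation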


  • Windows NLB + IIS - Stops serving pages

    - by Ye Ol Developer
    We are currently running Windows NLB with IIS 7 load-balanced across two servers. Randomly and sporadically, the servers stop serving web pages. We have noticed that if we run the sites on a dedicated IP on either of the servers, the issue does not occur; as soon as we switch back to the load-balanced IP, everything goes awry. When the servers stop serving pages, we can still TS into either server and surf the sites internally without issues, or switch to the dedicated IP. However, the internal network cannot even access the files via the load-balanced IP. We are running out of ideas here. Has anyone had a similar problem?


  • How to make RHEL use a persistent local HDD name?

    - by Mxx
    I have 2 identical Dell R720 servers running identical Oracle Enterprise Linux (RHEL) 6.4, and both are (supposedly) configured in exactly the same way. However, one of the servers behaves differently: every other reboot, its local HDD name (and the related partition names) flips from /dev/sda to /dev/sdj. This is problematic because both servers are configured for multipathd, and when this flip happens their configs no longer match, so Oracle DB (or its clusterware) complains that the nodes are not configured identically. Why does one server have consistent device names while the other keeps flipping back and forth? How can I make the local HDD consistently be /dev/sda? I suspect this might have something to do with udev, but I'm not sure.
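    sd* names are assigned in probe order and were never guaranteed to be stable; the usual cure is to reference the disk by an identifier that survives reboots, and to pin multipath to WWIDs instead of sd names. A hedged sketch (the WWID shown is a placeholder):

        # Stable identifiers for the same disk, independent of probe order
        ls -l /dev/disk/by-id/ /dev/disk/by-uuid/

        # Find the local volume's WWID on RHEL 6
        scsi_id --whitelisted --device=/dev/sda

        # /etc/multipath.conf: blacklist the local disk by WWID, not by name
        blacklist {
            wwid "36d4ae520009f8b0000000000deadbeef"
        }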


  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running a web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone goes to a site *.example-host.com, they are sent to one of the two servers, and both must be capable of loading the forums from a database. The database must also have write access, for when new members register or post topics, etc. The main requirement is speed, but uptime is also important (if a server goes down, the site should still work).

    I have a few options, but I'm inexperienced and not sure which to go with:

        1. [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a requested forum, it can fetch it from the other via remote MySQL and load it. This idea sounded okay, until I realised that 50% of the time users would be waiting significantly longer for pages to load. I also realised that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled.
        2. [MySQL] Dual-master replication. This would mirror the two databases and sounds perfect, but I've heard that it can be very problematic. I don't know how fast this is.
        3. [MySQL] Use standard replication, distribute read-only queries across both nodes and send read/write queries to the master. This sounds like a good option, but again, I'm not sure about speed. I also don't know what would happen if the master server went down.

    If you have any other suggestions, please post them :)
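    For what it's worth, option 3 is the conventional starting point. A minimal sketch of the my.cnf pieces, with placeholder server IDs; the application then points writes at the master and reads at either node:

        # my.cnf on the master
        [mysqld]
        server-id = 1
        log_bin   = mysql-bin

        # my.cnf on the slave
        [mysqld]
        server-id = 2
        relay_log = relay-bin
        read_only = 1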


  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs: a big binary and a small XML index. We get a CTL file that defines these file bundles. A process operates on the files once they are in the destination directory, and gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now - the SFTP servers - on which we cannot presently run our own scripts. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, copies that to the AIX machines, and then deletes the files from the source. However, we keep finding bugs or poorly-handled error cases. None of us are Java experts, so fixing/improving it may be difficult.

    Our main concern: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync is very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: we care about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are still copying them; it waits until the small XML index file is present, and our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us, and sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file to this sort of error, and we need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
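    On the "reliably written" question: rsync writes each file to a hidden temporary name at the destination and renames it into place only when the transfer completes, so the ingester never sees a partial file. What rsync cannot know is whether a source file is still growing, which is exactly why the XML-last ordering matters. A hedged sketch of a two-pass approach - it assumes the SFTP servers also permit rsync over ssh; pure SFTP-only sources would need something like lftp's mirror command instead:

        # Pass 1: the big binaries; --partial keeps interrupted transfers resumable
        rsync -av --partial --exclude='*.xml' user@source:/data/ /staging/

        # Pass 2: the XML indexes last, so no index ever appears without its binary
        rsync -av --include='*/' --include='*.xml' --exclude='*' user@source:/data/ /staging/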


  • How can I ensure that my static IP address is read from /etc/network/interfaces rather than DHCP?

    - by jonderry
    This is a follow-up to an earlier question. I'm trying to set a static IP by changing /etc/network/interfaces to the following:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.2.133
            netmask 255.255.255.0
            gateway 192.168.2.1
            dns-nameservers 8.8.8.8

    and then running /sbin/ifdown eth0; /sbin/ifup eth0. However, the change in IP address doesn't take effect without first editing /etc/dhcp/dhclient.conf and commenting out the following before running ifdown; ifup:

        request subnet-mask, broadcast-address, time-offset, routers,
                domain-name, domain-name-servers, domain-search, host-name,
                dhcp6.name-servers, dhcp6.domain-search,
                netbios-name-servers, netbios-scope, interface-mtu,
                rfc3442-classless-static-routes, ntp-servers,
                dhcp6.fqdn, dhcp6.sntp-servers;

    Strangely, after commenting out this line, running ifdown; ifup works - but when I uncomment it, the behaviour does not revert to the previous behaviour of ignoring changes to my settings in /etc/network/interfaces. (This doesn't seem like a problem, but I really need to be able to reproduce the problem so that I can be confident my solution is robust.) Also, I'd rather not have to edit /etc/dhcp/dhclient.conf to change my static IP, since it seems I should be able to do this by editing interfaces alone. Can anyone explain the behaviour I'm seeing, and suggest a reproducible way of making static IP changes take effect?
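    A plausible explanation, offered as a guess: a dhclient process left over from the original DHCP boot can stay attached to eth0 and re-apply its lease (address, routes, resolv.conf) after ifup, and killing it - which the dhclient.conf edit plus interface bounce effectively did - may be what actually fixed things. A quick sketch to check and clear it:

        # Is a dhclient still managing eth0?
        ps ax | grep '[d]hclient'

        # Release the lease and stop the client, then restart the interface cleanly
        dhclient -r eth0
        ifdown eth0 && ifup eth0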


  • SSL certificates: how to use them?

    - by Rod
    I have a central server and I want to purchase an SSL certificate for it. The architecture is based on this central server plus many connected web servers which sit on the client side (one for each user). A client can access both the main server and its local server, and the two servers exchange data with each other. I would like the client's web browser to trust all the servers, always activating HTTPS and a secure connection when connecting to them. Assuming I can name all the servers under the same domain (I was thinking about a wildcard certificate anyway), what kind of certificate, and what use of it, can make these secure connections work? There is the possibility that the main server and a client-side server are disconnected for a while; is it possible for a client to activate an HTTPS connection to its local server in that case? When I need to renew or change the certificate, I would like to change it just on the main server, avoiding the need to touch all the servers on the clients' side. Can I do that in some way?

