Search Results

Search found 9715 results on 389 pages for 'servers'.

Page 65 of 389

  • Experience with MQ File Transfer Edition?

    - by mfinni
    We've got several processes that move files across servers - SFTP, FTP, SCP; Windows, Linux, AIX; there is a workflow component (usually requiring a control file with filenames and hash values to move a batch of related files). The action is often initiated on our servers to get the files, so we need to make sure they're done being written. We have some homegrown scripts to do this, but they don't always work properly, and troubleshooting, maintenance, and log review are not easy this way. There are a lot of servers, and our scripts don't have central logging or a dashboard/console/etc. We're looking into commercial products to do this. Has anyone used MQ File Transfer Edition? Another team in our company is using Aspera; does anyone have any thoughts on that, or other favored products? I have no idea what our budget is for this yet. Just trying to get a handle on the product space from the perspective of other admins.
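
    For reference, the kind of homegrown check being replaced boils down to something like the sketch below: wait for the control file, then verify every listed file's hash before pulling the batch (the paths and the control-file format are assumptions, not our actual scripts):

      #!/bin/sh
      # Hypothetical batch check. Control file format assumed: "<sha256>  <filename>".
      BATCH_DIR=/data/incoming/batch42        # made-up path
      CONTROL=$BATCH_DIR/batch42.sha256       # made-up control file

      # Wait up to ~5 minutes for the sender to drop the control file last.
      for i in $(seq 1 60); do
          [ -f "$CONTROL" ] && break
          sleep 5
      done
      [ -f "$CONTROL" ] || exit 1

      # sha256sum -c fails if any file is missing or its hash mismatches,
      # which also catches files that are still being written.
      cd "$BATCH_DIR" && sha256sum -c "$CONTROL" || exit 1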

    Read the article

  • A load balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario, using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache benchmark tool to send lots of requests to the servers (once testing one of the web servers directly, and once testing through the load balancer) using the command "ab -n 1000000 -c 500 http://IP/index.html", but the test results show better performance for the single server without the load balancer. Can anyone tell me if I'm going wrong somewhere?
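
    For context, a minimal HAProxy listen section of the kind that guide produces looks roughly like the hedged sketch below (addresses and limits are made up). With a static index.html and only 500 concurrent connections, the balancer mostly adds a network hop without relieving any real bottleneck, so a single backend beating it in ab is not unusual; the numbers worth checking are maxconn and the backend health:

      # /etc/haproxy/haproxy.cfg (sketch, not the poster's actual config)
      global
          maxconn 20000            # keep this well above ab's -c value
      defaults
          mode    http
          timeout connect 5s
          timeout client  30s
          timeout server  30s
      listen web
          bind 0.0.0.0:80
          balance roundrobin
          server web1 192.168.0.101:80 check
          server web2 192.168.0.102:80 check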

    Read the article

  • Routing application traffic through specific interface

    - by UnicornsAndRainbows
    Hello all! First question here, so please go easy: I have a Debian Linux 5.0 server with two public interfaces. I would like to route outbound traffic from one instance of an application via one interface and the second instance through the second interface. There are some challenges: both instances of the application use the same protocol; both instances can access the entire internet (so I can't route based on destination network); and I can't change the code of the application. I don't think a typical approach to load balancing all traffic is going to work well, because there are relatively few destination servers being accessed in the outbound traffic, and all traffic would really need to be distributed pretty evenly across these relatively few servers. I could probably run two virtualized servers on the box and bind each of them to a different external IP, but I'm looking for a simpler solution, maybe using iproute or iptables? Any ideas for me? Thanks in advance - and I'm happy to answer any questions.
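
    One commonly used approach, sketched below under the assumption that each instance can run as its own user (the user name, interface, and addresses are made up), is iptables' owner match plus a second routing table:

      # Instance 2 runs as the dedicated user "app2" and should leave via eth1.
      iptables -t mangle -A OUTPUT -m owner --uid-owner app2 -j MARK --set-mark 2

      # Send marked traffic through a separate routing table using eth1's gateway.
      ip rule add fwmark 2 table 100
      ip route add default via 203.0.113.1 dev eth1 table 100

      # Rewrite the source address so replies come back to eth1's public IP.
      iptables -t nat -A POSTROUTING -o eth1 -m mark --mark 2 -j SNAT --to-source 203.0.113.10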

    Read the article

  • Server monitoring for medium scale UNIX network

    - by nbartolomeo
    I'm looking for suggestions for a good monitoring tool, or tools, to handle a mixed Linux (RedHat 4-5) and HP-UX environment. Currently we are using Hobbit, which is working reasonably well, but it is becoming harder to keep track of what alerts are sent out for which servers. Features I'd like to see: easy configuration of servers, and the ability to monitor CPU, network, memory, and specific processes. I've looked into Nagios, but from what I have seen it won't be easy to set up the configuration for all of our servers (~200), and without installing a plugin/agent on each host I won't be able to monitor processes.
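
    On the Nagios point: ~200 hosts is usually kept manageable with templates and hostgroups, and process/load checks do require an agent (NRPE) on each box. A hedged sketch of what that looks like - host names, addresses, and the check_nrpe command definition are assumptions:

      # objects/linux.cfg (sketch)
      define hostgroup{
          hostgroup_name  linux-servers
          alias           Linux servers
      }

      define host{
          name            linux-box        ; template only, never checked itself
          use             generic-host
          check_command   check-host-alive
          hostgroups      linux-servers
          register        0
      }

      define host{
          use             linux-box        ; each real host is only a few lines
          host_name       web01
          address         10.0.0.11
      }

      define service{
          use                  generic-service
          hostgroup_name       linux-servers   ; one definition covers every member
          service_description  Load
          check_command        check_nrpe!check_load
      }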

    Read the article

  • How do you manage updates without a staging environment: CentOS 6.3

    - by Gregg Leventhal
    I am managing about 20 servers, many of them virtual. They serve almost all different purposes, and none are clustered. I have a distributed LAMP stack, a few application servers, some build servers, and a few KVM hosts. They are mostly CentOS 6.3, with a few Ubuntu boxes (unfortunately). I don't have the resources to set up a staging environment where I can have duplicates of my machines and test updates before rolling them out. I am taking file backups. What I want to know is how you are approaching updating your Linux systems. I assume you don't just do yum update, but then how are you choosing the packages worth updating? When (if ever) are you updating the kernel, etc.? How do you test updates without a staging environment? Snapshot and hope for the best?
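
    Not an answer to the staging question, but for comparison, a cautious update pass on a CentOS 6 box often looks something like this sketch (the --security filter needs the security plugin plus updateinfo metadata, which stock CentOS repos don't always carry):

      yum check-update                     # list what would change, touch nothing
      yum install yum-plugin-security      # enables the --security filter on CentOS 6
      yum --security check-update          # security-only view, if updateinfo exists
      yum update --exclude='kernel*'       # apply updates but hold the kernel back
      yum update kernel                    # kernel separately, inside a reboot window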

    Read the article

  • Ubuntu 12.04 host lookups extremely slow

    - by tubaguy50035
    I'm having issues with one of my servers taking a long time to look up host names. This is an Ubuntu 12.04 box, so I've tried following the new resolvconf directives. In my /etc/network/interfaces file, I defined my name servers like this:

      auto eth0
      iface eth0 inet static
          address someaddress
          netmask 255.255.255.0
          gateway 198.58.103.1
          dns-nameservers 74.14.179.5 72.14.188.5

    In my /etc/resolv.conf, I see these name servers, like this:

      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      nameserver 74.14.179.5
      nameserver 72.14.188.5

    On another box, I edited resolv.conf directly, as directed by my host's setup help files. It looks like this:

      domain members.linode.com
      search members.linode.com
      nameserver 72.14.179.5
      nameserver 72.14.188.5
      options rotate

    This second box has no issues with host name lookups and responds quite quickly. Could not having the domain and search directives make my lookups slow? By slow, I mean it's taking anywhere from 5 to 15 seconds to find the IP address of a host.
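
    Two hedged guesses rather than a confirmed diagnosis: resolvconf will pick up dns-search/dns-domain lines from /etc/network/interfaces, and multi-second stalls on 12.04 are often the glibc parallel A/AAAA lookup problem, worked around with the single-request option. Sketch (addresses copied from above):

      # /etc/network/interfaces - add the search domain next to the nameservers
      iface eth0 inet static
          ...
          dns-nameservers 74.14.179.5 72.14.188.5
          dns-search members.linode.com

      # /etc/resolvconf/resolv.conf.d/base - lines here are included in the
      # generated resolv.conf, so resolver options survive regeneration
      options single-request

      sudo resolvconf -u                  # picks up the base file change
      sudo ifdown eth0 && sudo ifup eth0  # re-runs the interface hooks (careful over SSH)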

    Read the article

  • How to create a stub DNS zone for emulating my customer's production environment?

    - by Albert Widjaja
    Hi all, Is it possible to emulate my customer's production environment inside my AD domain by just creating the same domain inside my primary DNS server? Can I create a mycustomer.com DNS zone (stub) just for the sake of listing a few database servers and application servers, and then have the other DNS records (e.g. MX, NS, and the rest) refer to the REAL entries, so that my Exchange Server email flow to mycustomer.com is unaffected? Because if I just create A records in my current domain for some of the servers, the FQDN is not exactly what I want. Thanks.
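
    If the stub-zone route does turn out to fit, creating one on a Windows DNS server is a one-liner with dnscmd (the master IP below is a placeholder); note that a stub zone only keeps the NS/SOA/glue records and sends lookups on to the customer's real name servers, so serving your own substitute A records would need a primary zone instead:

      rem Create a stub zone for mycustomer.com, pulling NS/SOA/glue from the listed master
      dnscmd /ZoneAdd mycustomer.com /Stub 192.0.2.10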

    Read the article

  • Slow website load with CNAME, fast when using IP

    - by Nate Strandberg
    I set up two DNS servers on my network: ns1.byte-werx.com and ns2.byte-werx.com. I can ping the DNS servers and get a fairly good response time, and when I dig them I also get a fairly reasonable response, but any website I filter through them is painfully slow (upwards of 20 seconds) - verifiable by performing a tracert or attempting to access the URL in a browser. The DNS servers are running CentOS 6.3 and BIND9 with 500MB of memory (I figure that should be more than enough?). I have a reverse look-up zone (1.168.192) along with two website zones (www.byte-werx.com and www.stayhomedental.com). If I access the websites using their IPs, the pages load nearly instantly, so I do not believe the issue is with the hosting server; that is running Ubuntu Server 12.04 and Apache2 with 12GB of memory. Any thoughts? I do not have the named.conf file in front of me, but I can edit this post to include it if you feel it would be useful. Thanks for any advice!
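
    A hedged way to narrow down where the 20 seconds go is to time the lookups separately from the web requests; if dig is fast against your servers directly, the usual suspects are broken delegation/glue or the client stalling on a reverse (PTR) lookup:

      # Ask your own server directly and note the "Query time" line
      dig @ns1.byte-werx.com www.byte-werx.com A | grep -E 'Query time|SERVER'

      # Follow the delegation from the roots to catch broken NS or glue records
      dig +trace www.byte-werx.com A

      # Check the reverse zone too; some services stall on failed PTR lookups
      dig -x 192.0.2.50 @ns1.byte-werx.com      # substitute the web server's real IP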

    Read the article

  • How can I check the location of perl and CPAN files?

    - by Rob
    I constantly have to set up new servers for an employer of mine for an exact purpose of his, and as such they all have to be set up in exactly the same way. So I've created a script in PHP that I run from my own box to automatically send over all the relevant files, compile everything, run updates, and everything else. However, for some reason these brand new servers come with Perl, which is fine, but they have it installed in different locations. This makes it a pain for me to copy over Config.pm for CPAN without going in and finding the location manually. Is there perhaps some command I'm unaware of that will hunt down the precise location? If it helps, the servers are usually CentOS 5.
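
    For what it's worth, Perl can report its own install locations, which avoids hunting for Config.pm by hand; a short sketch:

      which perl                                     # path of the perl binary on this box
      perl -V:sitelib                                # site library directory from Config
      perl -MConfig -e 'print "$Config{sitelib}\n"'  # same value, via the Config module
      perl -MCPAN -e 'print "$INC{q(CPAN.pm)}\n"'    # where CPAN.pm itself was loaded from
      perl -le 'print for @INC'                      # full module search path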

    Read the article

  • How to allow an internal server accept remote connections not through RD Gateway

    - by Matt Ahrens
    So, I help administrate a collection of servers running various Windows Server environments. We have an RD Gateway server, properly configured, to gatekeep for us. It does not have the other servers listed in its server farm category, though. I just added a refurbished server for a non-profit development environment that is sharing the rack space and port. I would like this server to be accessible via remote connection, but not require RD Gateway authentication (I cannot add the users for this development server to our gateway since they do not work for the organization hosting the rack). Is there any way for me to add this dev server as an exception to which servers require RD Gateway clearance, or otherwise let users bypass RD Gateway credentials for this one machine? Thanks, and let me know if I am misinformed on how RD Gateway works or anything. I am still learning.

    Read the article

  • How long do uploaded files stay in the tmp folder in Linux Ubuntu?

    - by Jean-Nicolas Boulay Desjardins
    I am building a web application where my users will be able to upload files. After the files are uploaded I need to send them to two other servers, and after that they will be deleted from the server they were just uploaded to. I am wondering: is it a good idea to keep the uploaded files in the /tmp folder while they are being sent to the other two servers, or should I move them to another folder in case they get deleted? I am also wondering whether I have to build a cron script to get rid of the files that have been transferred to the other servers, so that I get my disk space back.
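
    Worth noting, as a hedged aside: on Ubuntu, /tmp is normally only emptied at boot (see TMPTIME in /etc/default/rcS), so uploads can sit there indefinitely on a long-running server. A dedicated upload directory plus a small cron job is a common belt-and-braces setup; the path below is made up:

      # /etc/cron.d/clean-uploads (sketch): hourly, delete files older than 60 minutes
      0 * * * * root find /var/spool/myapp-uploads -type f -mmin +60 -delete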

    Read the article

  • SharePoint 2010 MySites - Simple explanation needed!

    - by Chris W
    I've been playing around with the 2010 beta for a couple of weeks, experimenting with topology options etc. I think I've got myself totally confused as to how it works, so if there are any SharePoint experts out there who can explain things in simple terms, I'd appreciate it! I want to set up a farm with 3 servers providing the content and MySites. I presume that the way to do this is to load balance or DNS round-robin traffic between the 3 servers. The bit where I'm confused is that the My Site Settings page asks for a specific My Site Host, hence all My Site traffic will be pushed to a single server even though we have 3 in the farm. If this host fails I presume MySites will be unavailable. Is this right? How do I configure it so that access to MySites is load balanced across the 3 servers in the farm?

    Read the article

  • One vs. many domain user accounts in a server farm

    - by mjustin
    We are in the process of migrating a group of related computers (intranet servers, SQL, and the application servers of one application) to a new domain. In the past we used one domain user account for every computer (web1, web2, appserver1, appserver2, sql1, sqlbackup ...) to access central Windows resources like network shares. Every computer also has a local user account with the same name. I am not sure if this is necessary, or if it would be easier to configure and maintain to use a single domain user account. Are there key advantages/disadvantages of having one single user account vs. dedicated accounts per computer for this group of background servers? If I am not wrong, one advantage besides easier administration of the user accounts could be that moving installed applications and services around between the computers would no longer require a check of the access rights. (Except where IP addresses or ports are used.)

    Read the article

  • Best client and server antivirus for 5 user office?

    - by drpcken
    I'm setting up an Active Directory environment for 5 users (very small) and I'm wondering what the best antivirus is for the clients (Windows 7) and servers (Server 2008 R2 x64). I use Symantec Corporate at my organization (50+ users) but I think that is overkill for this company. I wanted to use Microsoft Security Essentials for the clients (I use it for home machines and it's the best free AV in my opinion) but I don't think it will work on the servers (3 servers: PDC, TS, and file). They are behind a SonicWall TZ 200. What would be best? Free would be even better. Thank you!

    Read the article

  • Unable to map to web folder using WebDAV client on Windows Server 2008 R2

    - by user74989
    Hello, I have a client running Windows Server 2008 R2 on several servers. One of the servers is also running SharePoint 3.0, and my client has created a web folder to map to. I can map to the web folder from all Server 2008 R2 boxes that have the WebDAV client (part of the Desktop Experience feature) installed, except for the server the folder resides on. When I attempt to map to the web folder on the server on which the folder resides, I am repeatedly prompted to enter my credentials. I am using the same account that I used to map the web folder on the other servers. I have also tried mapping from the command line and receive 'Access Denied'. What may be causing the problem? I would think that if I can map to the drive from one server, I should be able to map the drive from the rest as long as the WebDAV client is installed, especially on the server where the folder is located. Jesse
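
    One frequently cited cause of endless credential prompts when a server opens a share or WebDAV folder by its own name is the Windows loopback check; disabling it (or listing the host name under BackConnectionHostNames) is the workaround documented in KB896861. A hedged sketch, to be weighed against the security implications:

      rem On the SharePoint/WebDAV server itself; reboot (or restart IIS) afterwards
      reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f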

    Read the article

  • Lightweight tool for viewing raw HTTP messages?

    - by rewbs
    Hi, I'm investigating differences in behaviour between a couple of web servers. I need to see the raw response data from the servers (i.e. before the response is de-chunked if it has "Transfer-Encoding: chunked" and before it is decompressed if it has "Content-Encoding: gzip"). I can find plenty of simple HTTP clients that nearly do what I need (e.g. Poster, RESTClient), but they tend to decode the response one step too far. Network analysers like Wireshark give me what I need but are a bit heavyweight. Telnet is my best bet so far, but is a bit too simplistic (actions like capturing data or entering requests are laborious). Can anyone recommend a good, lightweight tool for sending/viewing the raw data that constitutes HTTP messages? Edit: I should add that I'm on Windows. Also, the tool would need to work with both remote and local servers.
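
    One lightweight option on Windows is ncat (it ships with Nmap): you type or pipe the request yourself, and whatever comes back - still chunked and still gzipped - lands untouched in a file. A hedged sketch with a placeholder host:

      rem Type the request by hand; -C sends CRLF line endings, the raw reply goes to the file
      ncat -C www.example.org 80 > raw_response.bin

      rem Or keep the request in a file (CRLF endings, blank line at the end)
      ncat www.example.org 80 < request.txt > raw_response.bin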

    Read the article

  • Redundant Router and Load Balancing vs. DDoS attack

    - by colgatta
    With a small server farm at a hoster with great support and conditions, I worry about the increasing number of DDoS attacks against this hoster (not against my web project, but against other clients at the same location). I have booked a redundant router and load balancer as a managed service with this hoster to share the load across all the dedicated servers. However, I was lost again today because another client's project was hit by a DDoS attack for hours :-( Each hour means hundreds of dollars in losses whenever my ad server and tracking are not reachable. Even timed-out advertising has to be paid for by me, but cannot be resold to my clients when the servers are unavailable. The servers, the load, and the traffic are all OK and healthy, but there is no chance of keeping this stable/online if the hoster is vulnerable. Does anyone have ideas or suggestions on how to protect against this - even against DDoS?

    Read the article

  • Cluster of services and restarting on package upgrade

    - by Marcin Cylke
    I'm using Puppet to manage a bunch of servers. Those servers run a simple service exposed to the world via a load balancer. The service's instances are independent in that they can run on their own, and they are deployed on multiple servers to increase responsiveness. Now, when I push a new package to the repo and Puppet catches up with it appearing there, it just updates the package on all servers at once. This results in a short downtime of the entire service. Is there a way of configuring Puppet to restart the services sequentially, or to use some other kind of strategy?
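
    One strategy that stays inside plain Puppet is to pin each server group's package updates to its own maintenance window with a schedule resource, so the instances never all pick up the new version in the same run. A hedged sketch - the $::server_group fact, the times, and the package name are all made up:

      # Stagger updates: each group only applies the new package inside its own window.
      $window = $::server_group ? {
        'a'     => '02:00 - 02:30',
        default => '03:00 - 03:30',
      }

      schedule { 'update-window':
        period => daily,
        range  => $window,
      }

      package { 'myservice':
        ensure   => latest,
        schedule => 'update-window',
      }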

    Read the article

  • Web Farm Application deployment best practices

    - by rauts
    Hi all, we have a web farm which hosts multiple ASP.NET applications. We typically have 4 servers in the farm. The dilemma I am having is about the capacity of the farm. Let's say I currently have 200 apps in total. Should I deploy all 200 apps on all 4 servers (i.e. all the servers in the farm are identical), or should I split the applications between 2 sets of servers and create 2 smaller farms, so that I can then manage each application based on its criticality, usage, etc.? Any best practices in this area would be highly appreciated. Thanks Rauts

    Read the article

  • Best Server Ghost-Like Tool For Windows

    - by John Dibling
    I'm looking for advice on which tool we should use to clone servers. In the short term, we will be cloning identical hardware but in the long run we may want to create one image and replicate that on a different class of machine. For example, as new servers are released from Dell, we will want to continue to use the same image we already made. Right now our servers are Windows (Server 2008 & Server 2008 R2), but moving forward we may need Linux support as well. Ghost Solution Suite 2.5 seems to be the canonical tool. Are there alternatives? Recommendations/reviews?

    Read the article

  • NAT and NGINX on the same server

    - by Morten
    I'm setting up a VPC cluster for my collaborative todo list application www.getdoneapp.com. To keep my servers on the private network I need a NAT server, so that the servers on the private network can connect to the internet to receive updates and whatnot. The NAT server will consume an Elastic IP address, so I'm wondering if I can just have that NAT server run nginx to direct HTTP traffic to my internal servers. So the question is: is it a bad idea to run NGINX and NAT on the same server, or should I go for consuming 2 Elastic IP addresses?
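
    For what it's worth, the two roles don't conflict technically: NAT is a couple of netfilter rules plus IP forwarding, and nginx just listens on the public address and proxies HTTP inward. A hedged sketch - interface names and private addresses are made up:

      # NAT for the private subnet (public side eth0, private side eth1)
      sysctl -w net.ipv4.ip_forward=1
      iptables -t nat -A POSTROUTING -o eth0 -s 10.0.1.0/24 -j MASQUERADE

      # nginx on the same box proxying HTTP to an internal app server
      # /etc/nginx/sites-enabled/getdoneapp (sketch):
      #   server {
      #       listen 80;
      #       server_name www.getdoneapp.com;
      #       location / { proxy_pass http://10.0.1.10:8080; }
      #   }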

    Read the article

  • Command to determine whether ZooKeeper Server is Leader or Follower

    - by utrecht
    Introduction

    A ZooKeeper quorum consisting of three ZooKeeper servers has been created. The zoo.cfg located on all three ZooKeeper servers looks as follows:

      maxClientCnxns=50
      # The number of milliseconds of each tick
      tickTime=2000
      # The number of ticks that the initial
      # synchronization phase can take
      initLimit=10
      # The number of ticks that can pass between
      # sending a request and getting an acknowledgement
      syncLimit=5
      # the directory where the snapshot is stored.
      dataDir=/var/lib/zookeeper
      # the port at which the clients will connect
      clientPort=2181
      server.1=ip1:2888:3888
      server.2=ip2:2888:3888
      server.3=ip3:2888:3888

    It is clear that one of the three ZooKeeper servers will become the Leader and the others Followers. If the Leader ZooKeeper server is shut down, leader election will start again. The aim is to check whether another ZooKeeper server becomes the Leader once the current Leader has been shut down.

    Question

    Which command needs to be issued to check whether a ZooKeeper server is a Leader or a Follower?
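
    A hedged sketch of the usual checks, assuming the four-letter-word admin commands are enabled on this build (the zkServer.sh path varies by install):

      # Ask a server over the client port; the "Mode:" line reads leader or follower
      echo srvr | nc ip1 2181 | grep Mode
      echo stat | nc ip1 2181 | grep Mode      # older releases that predate srvr

      # Or on the server itself, via the bundled control script
      /path/to/zookeeper/bin/zkServer.sh status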

    Read the article

  • Windows VPN - NO internet access

    - by sharru
    I host a network of servers behind a Fortigate 200A firewall in the DC. I connect to those servers via a VPN connection. The problem is that when I connect to the VPN, I lose my internet connection on the local PC (Windows 7). I would like to be connected to the VPN and still surf the web; I guess this means only forwarding a range of IPs to the VPN connection. I've read other answers on Server Fault talking about unchecking the 'Use default gateway on remote network' option in the Windows 7 PPTP network connection settings. When I do that, I get internet access but no access to the servers on the VPN. Any idea how to get both working? Should I change something in the Fortigate 200A config? Do I need two network cards? Is there a place in Windows to define an IP range for the VPN connection?
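
    If unchecking 'Use default gateway on remote network' cuts off the servers, the usual follow-up is a static route for just the data-centre subnet over the VPN link; the subnet and gateway below are placeholders to substitute with the real ones:

      rem Find the VPN adapter's assigned address and interface first
      ipconfig /all
      route print

      rem Route only the server subnet via the VPN (-p keeps it across reboots)
      route -p add 10.20.0.0 mask 255.255.255.0 172.16.5.1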

    Read the article

  • Any way I can correct DNS spoofing against our domain

    - by brandon
    This morning I found out that our domain and subdomains have been poisoned on the 4.2.2 and 4.2.2.1 DNS servers, along with others I think, though I have not confirmed the others yet. Using OpenDNS, resolution works correctly. I have updated our local DNS servers and cleared their caches, which has fixed things internally. The issue is that the domain is public-facing and customers are having problems. We are the authoritative DNS server for the domain, and all of that is under our control. What I don't know how to do is fix the name servers outside of our control. Is there something we can do on our end? At the moment the only workaround I can think of is to ask customers to change their DNS to OpenDNS, which is not very practical. The other workaround would be to change our TLD, which is less practical.
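
    A quick way to see exactly what the affected resolvers are handing out versus what your authoritative servers publish (and the TTL the bad answer will live for) is to query both directly; the names below are placeholders:

      # What the public resolvers are currently serving, and for how long (TTL)
      dig @4.2.2.1 yourdomain.example A +noall +answer

      # What your own authoritative servers actually publish
      dig @ns1.yourdomain.example yourdomain.example A +noall +answer +norecurse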

    Read the article

  • Tools to manage clusters

    - by Stan
    Say there are many game servers - are there any tools that let engineers manage them easily? Below are some requirements: allow RDP (remote desktop) to the servers; group/permission settings, classified by functionality, so that people with permission to access a certain group don't need to enter a password again to RDP to those servers (the tool logs on to the server automatically); and activity logging - a history of who has logged on to which server. Thanks.

    Read the article
