Search Results

Search found 9816 results on 393 pages for 'blade servers'.

Page 109/393

  • SSH - SFTP/SCP only + additional command running in background

    - by Chris
    There are many solutions described for restricting an SSH connection to SFTP only: modify sshd_config, add a Match block for a new group, and give that group ForceCommand internal-sftp. That works well, but I would like one more feature. My servers automatically ban IPs that open many connections in a short time, so an SFTP client that opens multiple connections to work faster can get banned instantly for a long time. The servers have a script the administrator uses to whitelist users, and I've modified it to whitelist whichever user runs it. All I need now is for the server to execute that script when somebody logs in. Over a normal SSH login that's no problem, just put it in .bashrc or similar, but ForceCommand does not run those scripts at login. Is there any way to run such a shell script before, or at the same time as, the ForceCommand is executed?
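
    A minimal sketch of one approach, assuming a hypothetical whitelist script at /usr/local/sbin/whitelist-user: point ForceCommand at a small wrapper that runs the whitelist step and then execs the external SFTP server binary (internal-sftp itself cannot be wrapped, since it is not a separate program).

        # sshd_config
        Match Group sftponly
            ForceCommand /usr/local/sbin/sftp-wrapper

        #!/bin/sh
        # /usr/local/sbin/sftp-wrapper (hypothetical path)
        # Whitelist the connecting user, then hand the session to the SFTP server.
        /usr/local/sbin/whitelist-user "$(id -un)"
        exec /usr/lib/openssh/sftp-server   # path varies by distribution

    Note that if the Match block also sets ChrootDirectory, the external sftp-server and its libraries must be available inside the chroot, which internal-sftp normally avoids.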

    Read the article

  • Moving to Data Center

    - by Won
    Please give me any advice. Our company has decided to move our servers to a data center because we are suffering major network congestion. The data center provides 100 Mbps of bandwidth and a full 42U cabinet for us. Right now I am planning to run two firewalls for failover and to change the DNS records for the web server. Is there anything I should be aware of before I move the following to the data center? 1. Web server 2. Exchange server 3. SQL servers
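
    One concrete preparation step for the DNS change, sketched with the hypothetical zone example.com: check the TTL on the records you will repoint and lower it a few days before the move, so the cutover to the data-center addresses propagates quickly.

        # Show the current A record and its TTL (names are hypothetical)
        dig +noall +answer www.example.com A
        # e.g.  www.example.com.  86400  IN  A  203.0.113.10
        # Lower the TTL (say to 300 seconds) in the zone well before the move,
        # then raise it again once the new addresses are confirmed stable.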

    Read the article

  • Multiple sessions using port 1081 on one box using SSH

    - by regmaster
    Hi gurus, I am setting up a Linux hopping station to several different servers. My current configuration connects to each server through a different local port, e.g.

        ssh -D 1080 -p 22 [email protected]
        ssh -D 1081 -p 22 [email protected]

    What I would like is to share the same local port from the same box:

        ssh -D 1080 -p 22 [email protected]
        ssh -D 1080 -p 22 [email protected]

    But when I do that, I get the error below:

        bind: Address already in use
        channel_setup_fwd_listener: cannot listen to port: 1080
        Could not request local forwarding.

    How can I configure both sessions to use the same port? I want to share the same port because it is referenced in a Citrix firewall configuration on another machine, and I don't want to open many ports and keep changing them whenever the connection changes. Thank you.
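
    A minimal sketch of one workaround, assuming each session may bind to its own loopback address instead of the wildcard: ssh's -D option accepts an optional bind address, so every SOCKS listener can keep port 1080 without colliding.

        # Each dynamic forward keeps port 1080 but listens on its own loopback address
        ssh -D 127.0.0.1:1080 -p 22 [email protected]
        ssh -D 127.0.0.2:1080 -p 22 [email protected]

    Whether this satisfies the Citrix firewall rule depends on whether that rule matches only the port or the address as well.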

    Read the article

  • Can I rent exclusive time on a powerful server running linux? [closed]

    - by Mark Borgerding
    My company is involved in a proposal that requires speed estimates of our software on a server with the latest & greatest processors. This is not the first time we've been in this situation. The servers themselves are too expensive to buy a new one every time, so we end up extrapolating from what we have. There are so many variables: processor generation & speed, memory speed, memory channels, cache configurations; it makes extrapolation difficult and error-prone. Is there a business that rents time on the newest servers? At least part of the time we'd need exclusive access to an otherwise quiescent system either via ssh shell access or unattended batch jobs. I am not looking for general cloud computing services. I don't need much time on the server, but it needs to be exclusive. And the server needs to be pretty cutting edge for a solid basis of estimate.

    Read the article

  • Can't connect to Tomcat via JMX

    - by Icarokun
    I couldn't connect to one Tomcat server via JMX on a Linux virtual machine. There was no firewall running and everything seemed fine. Searching the web, I found that I had to set the -Djava.rmi.server.hostname property to fix it. It worked, but I don't understand why. The machine runs five Tomcat servers, all with JMX enabled on consecutive ports (8008, 8018, 8028...), all with the same configuration, and only one of them had this problem connecting via JMX. No firewall, and no -Djava.rmi.server.hostname property on any Tomcat. I understand the problem, but I don't understand why four of my Tomcat servers worked and one of them did not. Why is this?
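
    For reference, a minimal sketch of the JMX settings involved, assuming they live in a hypothetical bin/setenv.sh; the java.rmi.server.hostname line is the one that fixes the address the RMI stubs advertise to remote clients.

        # bin/setenv.sh (hypothetical) -- JMX options for one Tomcat instance
        CATALINA_OPTS="$CATALINA_OPTS \
          -Dcom.sun.management.jmxremote \
          -Dcom.sun.management.jmxremote.port=8008 \
          -Dcom.sun.management.jmxremote.authenticate=false \
          -Dcom.sun.management.jmxremote.ssl=false \
          -Djava.rmi.server.hostname=192.0.2.10"   # an address the client can reach; hypothetical

    Without that property, the RMI layer advertises whatever address the JVM derives from the local hostname, which may not be reachable from the monitoring client.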

    Read the article

  • Remotely enter encryption key?

    - by Jason Swett
    This might be a really dumb question, but here goes anyway. I just bought a couple of servers. I've already installed Ubuntu with encrypted LVM on one and I'm planning to do the same with the other. This means that every time I boot each of these machines I have to enter the passphrase, and I'll have to do this every morning because I power each machine off at night for security reasons. Here's the problem: I don't have monitors or keyboards for these servers. It seems to me I have two options: 1. Somehow enter the passphrase remotely. 2. Buy a KVM switch. I doubt #1 is an option, but I want to make sure it's not before I buy a KVM. Is it possible to enter the passphrase remotely? And is it a good idea?
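
    It is possible; a minimal sketch of the usual Ubuntu approach, assuming the dropbear-initramfs package (older releases ship the same hooks in the plain dropbear package): a tiny SSH server is embedded in the initramfs, so you can log in before the root filesystem is unlocked and supply the passphrase remotely.

        # On each encrypted server
        sudo apt-get install dropbear-initramfs
        # Authorize your key for the initramfs SSH server (path varies by release)
        sudo sh -c 'cat ~/.ssh/id_rsa.pub >> /etc/dropbear-initramfs/authorized_keys'
        sudo update-initramfs -u

        # At boot, from your workstation
        ssh root@<server-ip>      # connects to the dropbear inside the initramfs
        cryptroot-unlock          # prompts for the LUKS passphrase; boot then continues
                                  # (older releases use a different unlock step)

    Whether it is a good idea depends on whether you are comfortable with the machine holding the SSH key effectively becoming part of the unlock path.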

    Read the article

  • MongoDB PHP EC2 Setup Configuration

    - by nathansizemore
    I am new to web development and server setup, and I'm looking for advice or a link to a tutorial on setting up a production system. Right now I have one server (Ubuntu, Apache, MongoDB, and PHP). It receives a request, PHP queries Mongo, and PHP sends out the requested data. How do I make that work with more servers? I've read that you can build a cluster of a primary and two slave nodes running Mongo on separate servers, but do those also run PHP, or is the primary the only one running PHP? I have read some docs on the Mongo site and watched a video of someone from 10gen going through it, but they are geared towards people who already seem to understand this stuff; I have no idea and need to start from the beginning. If anyone can help me understand where PHP (acting as my API) lives in these clusters, that would be greatly appreciated! Thanks in advance for any help!
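
    A rough sketch, with hypothetical hostnames, of how the tiers are usually split: Apache and PHP run on their own web servers and the PHP Mongo driver connects to the replica set over the network, while the mongod members run on separate hosts and only store data.

        # Hypothetical hosts db1, db2, db3 forming a replica set named rs0
        mongod --replSet rs0 --dbpath /var/lib/mongodb     # run on each of db1, db2, db3
        # After all three are up, initiate the set once from a mongo shell: rs.initiate()
        # Web hosts (web1, web2, ...) run Apache+PHP only and point the driver at
        # db1,db2,db3 -- they do not run mongod themselves.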

    Read the article

  • QPS for dnscache

    - by vedaprasad
    I have two internal DNS servers (ns1 and ns2) on Ubuntu 12.04 which run dnscache, and my clients' resolv.conf contains something like:

        nameserver ns1
        nameserver ns2
        nameserver 8.8.8.8

    All of my load is taken by ns1, whereas ns2 sits idle until ns1 is down or not serving requests. I would like to put these two servers behind a load-balancer VIP, but my network team wants to know the QPS of the name servers so the load balancer can be sized. Is there any way to find out the queries per second handled by dnscache?
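
    dnscache does not expose a ready-made counter for this, so one low-tech estimate, sketched with a hypothetical interface name, is to count inbound queries over a fixed interval and divide.

        # Capture 60 seconds of client queries on the service interface
        timeout 60 tcpdump -i eth0 -n -w /tmp/dns.pcap 'udp dst port 53'
        # Count the captured packets; divide by 60 for an approximate QPS
        tcpdump -r /tmp/dns.pcap -n | wc -l

    If query logging is enabled, counting the query lines in dnscache's multilog output over the same interval gives a similar estimate.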

    Read the article

  • Where to run Java and PHP code continuously

    - by az1112
    I'm sorry if this is the wrong forum for a question like this, but you have to start asking questions somewhere to get anywhere :) The question is pretty simple: I need a server where I can run Java and PHP scripts 24 hours a day, 7 days a week. I need to be able to access this server via SSH, and I need to be able to retrieve the data generated by scripts using SCP. Also, I need to be able to run 10-20 scripts simultaneously. What is the name of the thing I should be looking for? Is it a dedicated server? I'm confused for 2 reasons: 1) because there seem to be all kinds of servers out there; 2) because most companies advertising dedicated servers seem to be aiming them at people who want to host websites. But I don't want to host a website; I just want to run my code.

    Read the article

  • Request to server x, reply from server y

    - by klaasio
    I need some advice: I'm dealing with custom load-balancer software for which we will use 2 main servers and about 8 slave servers. In short, the user sends a request to a main server, the main server receives and handles the request and forwards it to a slave server, and the slave server should send its data DIRECTLY to the user:

        User -> Main server
        Main server -> Slave server
        Slave server -> User

    The reason the data should be sent directly to the user rather than back through the main server is bandwidth and a low budget. I have the following ideas: IP-in-IP, but as far as I know that is not possible at layer 7 (there are some expensive routers for that); or IP spoofing, writing C/C++ so the reply looks like it came from the main server. But I was wondering: perhaps the reply from slave server to user could simply come from a different IP without causing issues with the user's firewall or anti-virus. I don't know much about home firewalls/routers or anti-virus software; I'm guessing the user's machine wouldn't handle it well?
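
    For what it's worth, sending the reply directly from the node that produced it is what LVS direct routing (LVS-DR) does at layer 4: each real server holds the balancer's virtual IP on a non-ARPing interface, so its replies are sourced from the address the user originally contacted and nothing looks spoofed to the client. A minimal sketch of the real-server side, assuming a hypothetical VIP of 203.0.113.10:

        # On each slave (real) server: hold the VIP on loopback, never answer ARP for it
        ip addr add 203.0.113.10/32 dev lo
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2
        # Replies now leave the slave directly, sourced from the VIP, so the client's
        # TCP stack and firewall see the address it connected to.

    This only fits if the main server does not need to rewrite the reply payload, since it never sees the return traffic.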

    Read the article

  • How can I sync Access databases and keep them up-to-date?

    - by user327472
    I have an Access database on my server. We split it, and use the front-end database on local computers for searching data and adding new records or reports. If we update or add a record, it is written to the back-end database. I want to use this database in another building with other servers, and those servers have no direct connection to this one. How can I sync both back-end databases to keep the data up to date? These details may be useful: it's a fairly large amount of data, about 25,750 client records, and I guess there are more than 25 tables totaling about 80 MB.

    Read the article

  • Resolving domain names differently for different services

    - by mlaug
    Some time ago we had an issue with our network infrastructure and PHP with cURL. Our network infrastructure is fairly simple: a load balancer/firewall in front of 5 servers. The domain name of our website points to the IP of the load balancer, of course, but calling curl from one of the servers resulted in a timeout; it appears a server could not reach the very domain it is serving. So we had to point the domain at the server itself via /etc/hosts. Now we have put Varnish in front of the load balancer, and we want to purge it automatically whenever a page changes, which means calling www.example.com/url_to_purge. Sadly, because of the /etc/hosts entries, that call resolves to the server itself instead of to Varnish. So I am wondering: can you resolve domain names differently for different services? :)
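
    One way to sidestep per-service resolution for the purge call, sketched with the hostname from the question and a hypothetical Varnish address: curl can be told which address to use for a given host and port on a per-request basis (the --resolve option on reasonably recent curl), so the purge request can ignore /etc/hosts without touching system-wide resolution.

        # Send the purge request straight to Varnish (203.0.113.20 is hypothetical)
        # while keeping the Host header as www.example.com
        curl --resolve www.example.com:80:203.0.113.20 http://www.example.com/url_to_purge

    On older curl builds the same effect is available by requesting the Varnish IP directly and passing -H 'Host: www.example.com'.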

    Read the article

  • Configuring a backup DNS server

    - by mattyh88
    I would like to set up a backup DNS server. I have added all my name servers in my domain registrar's panel (ns1.domain.com, ns2.domain.com, etc.). If someone tries to reach domain.com and the first name server fails, will resolvers automatically try ns2.domain.com? All my DNS servers have the same master zones configured; is that the way to go? Is it really that easy, or am I missing something here? :)
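
    Resolvers do retry the other listed name servers on their own when one fails. The more common arrangement, though, is to make the extra servers slaves that transfer the zone from one master rather than maintaining identical master copies by hand; a minimal BIND sketch, reusing the hypothetical names from the question:

        // On ns1 (master) -- named.conf
        zone "domain.com" {
            type master;
            file "/etc/bind/db.domain.com";
            allow-transfer { 198.51.100.2; };   // ns2's address (hypothetical)
        };

        // On ns2 (slave) -- named.conf
        zone "domain.com" {
            type slave;
            masters { 198.51.100.1; };          // ns1's address (hypothetical)
            file "/var/cache/bind/db.domain.com";
        };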

    Read the article

  • Make a server (other than the router) the default gateway for a subnet

    - by powerguy123
    I am trying to make a server (let's call it server_A), which is not the router, act as the default gateway for a subnet. Why do I want this? I want to host a load balancer on server_A using LVS-NAT, and I don't want to implement a VLAN or IP-IP tunneling. I have modified the routing tables of the remaining servers on the subnet to use server_A as their gateway, and I have set server_A not to send ICMP redirect packets. But most traffic from servers in that subnet to outside clients is still being sent through the original gateway, bypassing server_A. Is there any other configuration I need in order to achieve my goal?
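
    A minimal sketch of the settings usually needed on server_A itself, beyond the clients' routes: it has to forward packets, keep ICMP redirects disabled, and, for LVS-NAT, masquerade the subnet's outbound traffic so replies come back through it (interface names and addresses below are hypothetical).

        # On server_A (gateway / LVS-NAT director)
        sysctl -w net.ipv4.ip_forward=1
        sysctl -w net.ipv4.conf.all.send_redirects=0
        sysctl -w net.ipv4.conf.eth0.send_redirects=0
        iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth1 -j MASQUERADE

    It can also help to run "ip route flush cache" on the other servers, in case the original gateway has already planted ICMP redirect entries there.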

    Read the article

  • Why would TCP wrappers stop working for sshd?

    - by toby1kenobi
    On a couple of CentOS 5 servers sshd seems to have become 'unwrapped': previously I was using TCP wrappers and hosts.allow/hosts.deny to control access, but these are now not being used. If I execute

        $ ldd /usr/sbin/sshd | grep libwrap
        $

    it outputs nothing, whereas on servers where TCP wrappers are still working I see

        libwrap.so.0 => /lib64/libwrap.so.0 (0x00002b2fbcb81000)

    Does anyone know what might cause this, or how it could be rectified? Updated as requested:

        $ rpm -qV openssh-server
        S.5....T c /etc/pam.d/sshd
        S.?....T c /etc/ssh/sshd_config
        S.5..... /usr/sbin/sshd
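
    For what it's worth, the rpm -qV output already points at the likely cause: the S.5..... flags on /usr/sbin/sshd mean the binary's size and MD5 digest no longer match the package, i.e. the installed sshd has been replaced by a build without libwrap support. A hedged sketch of the usual fix, assuming the stock CentOS package is what you want back:

        # Reinstall the packaged sshd, then confirm it is wrapped again
        yum reinstall openssh-server      # on yum versions without "reinstall", remove and install
        ldd /usr/sbin/sshd | grep libwrap
        service sshd restart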

    Read the article

  • HAProxy to web host subdirectory?

    - by daemonza
    Hi, for reasons outside my control I need to load balance two servers that run a non-virtual-host-enabled app on IIS. Normally in HAProxy I would load balance servers (Apache, Tomcat, etc.) like this:

        acl is_www_example_com hdr_end(host) -i www.example.com
        use_backend www_example_com if is_www_example_com

        backend www_example_com
            balance roundrobin
            cookie SERVERID insert nocache indirect
            option httpchk HEAD / HTTP/1.0
            option httpclose
            option forwardfor
            server node1 192.168.1.1:80 cookie node1
            server node2 192.168.1.2:80 cookie node2

    which routes to the node1 and node2 servers and serves up the virtual-host site. If I instead need to route to www.example.com/application/data, how would I do it with the above example, if it is possible at all?
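
    It is possible; a hedged sketch of one way, reusing the names above: match the URL path with a path_beg acl and combine it with the host acl to send those requests to their own backend.

        acl is_app_data path_beg /application/data
        use_backend www_example_com_app if is_www_example_com is_app_data
        use_backend www_example_com     if is_www_example_com

        backend www_example_com_app
            balance roundrobin
            option forwardfor
            server node1 192.168.1.1:80
            server node2 192.168.1.2:80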

    Read the article

  • Is it possible to extend the Active Directory schema in a Windows 2003 DC (NOT R2) to support DFSR?

    - by JohannesH
    We're in the process of installing a brand-new Windows Server 2008 web cluster, and we would like to synchronize some files between the servers. The problem is that the DC in the domain is an old Windows Server 2003 Standard (NOT R2), which apparently doesn't contain some extension to the AD schema. Is it possible to upgrade the schema without upgrading the DC servers to R2? When I try to create a Replication Group on the 2008 server I get the following message:

        ---------------------------
        Error
        ---------------------------
        srv.XXXXXX.XX: The Active Directory Domain Services schema on domain controller
        activedc07.srv.XXXXXX.XX cannot be read. This error might be caused by a schema
        that has not been extended, or was extended improperly. See Help and Support
        Center for information about extending the Active Directory Domain Services
        schema. Schema version 30 is not supported.
        ---------------------------
        OK
        ---------------------------
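
    Yes, the schema can be extended in place without reinstalling the DC as R2: run adprep from the newer installation media against the schema master. A hedged sketch, assuming the Windows Server 2008 media is mounted as D::

        rem On the schema master, from the Server 2008 media (paths depend on the media used)
        D:\sources\adprep\adprep.exe /forestprep
        D:\sources\adprep\adprep.exe /domainprep

    /forestprep is the step that raises the schema version (and adds the DFSR classes); /domainprep prepares each domain.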

    Read the article

  • Data storage solutions for rapidly running out of space

    - by Grimlockz
    I have two web servers (one live and one backup), and the issue is that our storage is rapidly running out. All the data on the server is used by our customers and new documents are uploaded daily, so nothing can be deleted as it's always in use. We use a flat file structure with no database. I'm looking for solutions or ideas for the best place to move our data to. The data has to be secure and needs to run in a Linux environment. I'm not sure where to start: clusters, VMware, or are there other solutions for huge file servers?

    Read the article

  • How do you host images using Windows Server so that they are accessible over the internet? [closed]

    - by nairware
    I was trying to figure out a way to host images (picture images, not disk images) so that they are accessible over the internet via URLs, in a way similar to a web service like Photobucket or ImageShack. I have a whole bunch of Windows servers (Windows Server 2008 R2) available in the cloud. Instead of hosting images on Photobucket or ImageShack, I wanted to host these images directly on my own Windows cloud. This could be really complicated or really simple; I have no idea, as I know very little about IIS 6 (which is what I am using) or web servers. If this is too broad a question (as there are probably multiple ways of implementing it), is there at least a guide or some documentation of how someone else has set up image hosting? Perhaps a step-by-step guide to at least one way of doing it?

    Read the article

  • Which is the recommended filesystem for VMware Server / ESXi?

    - by elitalon
    We have a couple of servers in the office with VMware Server as the virtualization solution. We are planning an upgrade of our infrastructure. Some servers will remain on VMware Server, but we want to migrate some others to VMware ESXi. In both cases we are doing a fresh install, and I wonder if there are any suggestions or guidelines regarding the host filesystem and its partitions. EDIT: We are using local storage instead of SAN/NAS external storage, because we are not sure it is worth it given our office size and requirements.

    Read the article

  • Tool to automate basic connectivity testing

    - by feicipet
    After our vendors have set up a test environment, we need to go in and perform connectivity testing between PCs and servers, and also between servers. The problem is that we run a range of tests telnetting between two nodes on several ports, and this is a manual and rather tedious process. Does anyone know of a small tool or script that can take as input the range of ports to be tested and run automated tests against those ports? All I need to do is validate whether a TCP connection can be established from the source PC or server to the target server's IP address and port. Thanks, Wong
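
    A minimal sketch of a script that covers this, assuming nc (netcat) is available on the source machine; it takes a target host and a port range and reports whether a TCP connection can be established to each port.

        #!/bin/sh
        # tcp-check.sh (hypothetical name)
        # Usage: ./tcp-check.sh <host> <first-port> <last-port>
        host=$1; first=$2; last=$3
        for port in $(seq "$first" "$last"); do
            if nc -z -w 3 "$host" "$port" 2>/dev/null; then
                echo "$host:$port open"
            else
                echo "$host:$port closed/unreachable"
            fi
        done

    Run from both the PCs and the servers against the relevant targets, it replaces the manual telnet round.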

    Read the article

  • Redundant HTTP load balancer

    - by jrydberg
    I've got a simple scenario: two web servers, for redundancy and to scale. But how do I make a two-web-server setup fully redundant? I can think of two solutions: 1. Two web servers with one load balancer spreading the load, i.e. one extra machine for the load balancer; but then how does the load balancer itself become redundant? 2. Two machines, each running the web server AND a load balancer spreading the load across both, with a DNS entry pointing at both machines; no extra machines needed for load balancing. How do you normally solve this kind of problem?
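
    The usual answer to making the balancer tier itself redundant is a floating virtual IP shared by two balancer machines and moved between them with VRRP, so the DNS record points at one address rather than round-robining both hosts. A minimal keepalived sketch for the active node, with hypothetical interface and address values (the standby uses state BACKUP and a lower priority):

        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            advert_int 1
            virtual_ipaddress {
                203.0.113.50/24        # the VIP the site's DNS record points at
            }
        }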

    Read the article

  • HTTPS load balancing based on some component of the URL

    - by user38118
    We have an existing application that we wish to split across multiple servers (for example: 1,000 users total, 100 users on each of 10 servers). Ideally, we'd like to be able to relay the HTTPS requests to a particular server based on some component of the URL. For example, users 1 through 100 go to http://server1.domain.com/, users 101 through 200 go to http://server2.domain.com/, and so on, where the incoming requests look like this: https://secure.domain.com/user/{integer user # goes here}/path/to/file. Does anyone know of an easy way to do this? Pound looks promising, but it doesn't look like it supports routing based on the URL like this. Even better would be if it didn't need to be hard-coded: the load balancer could make a separate HTTP request to another server to ask "Hey, which server should I relay to for a request to URL {the URL that was requested goes here}?" and then relay to the hostname returned in the HTTP response.
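
    A hedged sketch of how the static version of this could look in HAProxy, assuming TLS is terminated before the proxy sees the request (path inspection needs the decrypted URL) and with purely illustrative bucket regexes; the dynamic "ask another server which backend" variant would need a scriptable proxy instead.

        frontend user_routing
            bind :80                      # TLS assumed terminated upstream
            acl bucket1 path_reg ^/user/([1-9]|[1-9][0-9]|100)/
            acl bucket2 path_reg ^/user/(1[0-9][0-9]|200)/
            use_backend server1_backend if bucket1
            use_backend server2_backend if bucket2

        backend server1_backend
            server server1 server1.domain.com:80
        backend server2_backend
            server server2 server2.domain.com:80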

    Read the article

  • Can I use a Windows 2008 R2 cluster for file redundancy?

    - by JERiv
    I'm researching a server clustering architecture as a redundancy and backup solution for a client, and something that isn't made clear is whether I can use server clustering to replace a file server plus backup solution. Forgive my elementary understanding of server clustering, but suppose: two sites (NJ, CA); identical servers at each site set up as remote-site cluster nodes running Windows Server 2008 R2 Enterprise; services: file, terminal, AD, and maybe DNS. Will the following be true: files (including data drives) will be synced between the two servers, eliminating the need for third-party backup/mirroring software to sync or back up files? Also, supposing I use roaming profiles with folder redirection, how will client computers on the WAN access their data through the cluster (i.e., will they automatically choose the best route)?

    Read the article
