Search Results

Search found 15209 results on 609 pages for 'configuration'.

Page 497 of 609

  • HAProxy, health checking multiple servers with different host names

    - by Marco Bettiolo
    I need to load balance between multiple running servers with different host names. I cannot set up the same virtual host on each one. Is it possible to have only one listen configuration with multiple servers and make the health checks apply the http-send-name-header Host directive? I am using HAProxy 1.5.

    I came up with the working haproxy.cfg below. As you can see, I had to set a different hostname for each health check, because the health check ignores http-send-name-header Host. I would have preferred to use variables or other methods and keep things more concise.

        global
            log 127.0.0.1 local0 notice
            maxconn 2000
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            timeout connect 5000
            timeout client 10000
            timeout server 10000
            stats enable
            stats uri /haproxy?stats
            stats refresh 5s
            balance roundrobin
            option httpclose

        listen inbound :80
            option httpchk HEAD / HTTP/1.1\r\n
            server instance1 127.0.0.101 check inter 3000 fall 1 rise 1
            server instance2 127.0.0.102 check inter 3000 fall 1 rise 1

        listen instance1 127.0.0.101:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.example.com
            server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2

        listen instance2 127.0.0.102:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.bing.com
            server www.bing.com www.bing.com:80 check inter 5000 fall 3 rise 2
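    For readers on newer HAProxy: in 1.5 there is no per-server way to vary the health-check Host header inside a single listen section, so the split above is a reasonable workaround. Releases 2.2 and later add the http-check send directive, which at least replaces the \r\n escaping trick with explicit fields. A minimal sketch per proxy section (not a drop-in for 1.5):

        listen instance1 127.0.0.101:80
            option httpchk
            http-check send meth HEAD uri / ver HTTP/1.1 hdr Host www.example.com
            server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2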

    Read the article

  • SSH from ubuntu server to Windows 2008 repeatedly asks for password

    - by jrizos
    I am trying to set up Git using SSH mode. The central Git repository is on a NAS device running Windows 2008 Server, and the user Git repository is on Ubuntu 12.04. When I try to SSH to the Windows machine, however, I am not able to successfully get in. SSH keys are not set up, but I think the problem is even before that, since I can't get in just by providing the correct password. The output from the SSH command is below. Any help would be appreciated.

        dba@clpserv01:~$ ssh -v -l administrator clpnas
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to clpnas [***.***.***.***] port 22.
        debug1: Connection established.
        debug1: identity file /home/dba/.ssh/id_rsa type -1
        debug1: identity file /home/dba/.ssh/id_rsa-cert type -1
        debug1: identity file /home/dba/.ssh/id_dsa type -1
        debug1: identity file /home/dba/.ssh/id_dsa-cert type -1
        debug1: identity file /home/dba/.ssh/id_ecdsa type -1
        debug1: identity file /home/dba/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze2
        debug1: match: OpenSSH_5.5p1 Debian-6+squeeze2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA bd:37:d1:98:51:2a:d6:b5:f5:c7:98:d8:74:2c:4e:cd
        debug1: Host 'clpnas' is known and matches the RSA host key.
        debug1: Found key in /home/dba/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/dba/.ssh/id_rsa
        debug1: Trying private key: /home/dba/.ssh/id_dsa
        debug1: Trying private key: /home/dba/.ssh/id_ecdsa
        debug1: Next authentication method: keyboard-interactive
        Password:
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        Password:
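    Worth noting for the key setup that comes next: the verbose output shows the remote banner OpenSSH_5.5p1 Debian-6+squeeze2, so the NAS is answering with what looks like a stock OpenSSH service, and the usual key workflow should apply once password login works. A minimal sketch from the Ubuntu side:

        ssh-keygen -t rsa                    # generate a key pair, accept the defaults
        ssh-copy-id administrator@clpnas     # append the public key to the remote authorized_keys
        ssh administrator@clpnas             # should then log in without a password prompt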

    Read the article

  • How is the extra mSATA SSD disk used/configured in a Dell XPS laptop?

    - by Mark
    Some machines in the new XPS laptop range from Dell come with a regular, large (500GB+) HDD and an additional 32GB mSATA SSD. The only detail I can find about this extra drive on the Dell site is this: "Store your important files, multimedia and photos with XPS 15's large hard drive options. To get instant access to your media, choose an optional mSATA solid-state drive (SSD) that can boot up to twice as fast as a regular hard drive and resumes in less than 1 second." I'd like to know more about how this extra drive is set up and used, specifically:

        - Is anything installed on it (e.g. OS files or a boot loader), or is it just used as swap space?
        - Is the mSATA drive visible as a lettered drive in Windows? (I'd guess not if it's used for a swap file only.)
        - Is this unusual configuration likely to cause any problems later down the line, e.g. when upgrading to Windows 8?

    As usual, Dell's sales team haven't been able to help. If anyone's actually got a Dell machine with this or a similar hard drive set-up and can give a definitive answer rather than speculation, I'll accept the answer.

    Read the article

  • Issue with SSH on Ubuntu - local connection OK, remote connection fails - is it me or my ISP?

    - by Benjamin
    I have an issue with a server running Ubuntu 12.04. I am trying to set up a remote connection so I can access the server at my work from out of town. I have installed the SSH server and all that stuff, and I have reassigned the default port from 22 to 3399. A local connection from any OS can connect on the 192.168... address, but in no way can I get a connection on the actual IP address. I believe my configuration is correct, and I will attach it. If I have done something wrong in the config, please tell me and I will change it. I honestly think that the router my ISP provided is horrible, and although the port for SSH is forwarded, it might be stopping any inbound traffic. Is there anything I can try to verify this? /var/log/auth does not show any error when I connect via our static IP. I have included all values not commented out below (sshd_config):

        Port 3399
        ListenAddress 0.0.0.0
        Protocol 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        UsePrivilegeSeparation yes
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        SyslogFacility AUTH
        LogLevel INFO
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes
        UseDNS no
        RSAAuthentication yes
        IgnoreRhosts yes
        RhostsRSAAuthentication no
        HostbasedAuthentication no
        PermitEmptyPasswords no
        ChallengeResponseAuthentication no
        PasswordAuthentication yes
        GSSAPIAuthentication no
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM yes

    Am I doing this wrong? (See the port-forwarding screenshot linked in the original post.)
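    One way to tell a router problem from a server problem (a sketch; substitute the real public IP):

        sudo netstat -tlnp | grep 3399    # is sshd actually bound on port 3399?
        ssh -p 3399 user@192.168.x.x      # works from inside the LAN (already confirmed)
        # then, from a machine outside the network (e.g. a phone hotspot):
        nc -zv <public-ip> 3399           # a timeout here points at the router or the ISP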

    Read the article

  • How to (properly) back up a live QEMU/KVM VM?

    - by Roman
    I'm currently engineering a backup solution for KVM VMs as an additional measure to traditional backups. Unfortunately, all currently (August 2013) existing solutions I have come across so far either:

        - do not ensure a consistent backup of the VM (losing RAM state, creating a dirty image, or other things), or
        - require lengthy downtime (complete VM shutdown while backing up).

    I'm aware of QEMU/libvirt's snapshot functionality; however, it's not yet usable, since:

        - image-internal snapshots present you with an ever-changing image file, resulting in a likely dirty backup (assuming one uses qcow2 images at all), and
        - one cannot yet merge a currently active external snapshot into the original backing image ("blockcommit").

    For the above reasons, I'm now implementing a script that:

        1. Saves the VM's state and halts it
        2. Sets up devicemapper snapshot(s) where the VM's disk images and state reside
        3. Resumes the VM
        4. Mounts the snapshot(s) of step 2
        5. Backs up the VM's disk and state (configuration included for convenience)
        6. Merges back the snapshot(s)

    If I got everything right, this will take consistent backups of VMs with only seconds of downtime (if at all, since steps 1-3 are fast, possibly sub-second). Of course, when restoring, the VM will be way in the past, but at least it gives me the option of an orderly shutdown/reboot. Am I missing something with this solution? Or has someone indeed already implemented this?
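    A sketch of those steps with virsh and LVM, assuming the disk images live on an LVM volume (all names illustrative):

        virsh save webvm /backup/webvm.state                   # step 1: dump RAM state; the guest halts
        lvcreate -s -n webvm_snap -L 5G /dev/vg0/vm_images     # step 2: CoW snapshot of the volume
        virsh restore /backup/webvm.state                      # step 3: the guest resumes
        mount -o ro /dev/vg0/webvm_snap /mnt/snap              # step 4: mount the frozen view
        rsync -a /mnt/snap/ /backup/webvm/                     # step 5: copy disk image + config
        umount /mnt/snap && lvremove -f /dev/vg0/webvm_snap    # step 6: drop the snapshot (with a
                                                               # snapshot-of-origin, nothing to merge back)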

    Read the article

  • How to improve Samba performance on a VirtualBox machine?

    - by ColinM
    I am running a Windows 7 64-bit host and an Ubuntu 9.04 32-bit guest inside VirtualBox 4.0.0 on a laptop which has internet connectivity via Wi-Fi. The main use is writing code, for which I use NetBeans. My dev environment is the virtual machine, and I use Samba on the VM to share the code directory so that I can use NetBeans on the host as my IDE. Unfortunately NetBeans does a lot of disk access, and due to the poor Samba performance it makes the IDE hardly usable. How can I improve performance of the Samba share?

    On my desktop it isn't so bad, but I don't know what the difference would be, since they are similar setups (Win 7 hosts, cloned guests, SSDs, VirtualBox guests using SATA in AHCI mode, etc.). With bridged networking, is the performance between the host and guest limited by the physical hardware (Intel 6200 AGN on the laptop)? I switched to host-only and it didn't seem to improve performance at all.

    To quantify the bad performance: I used 7-Zip to zip a project directory and got 19 KB/s to 500 KB/s depending on the size of the files being zipped. On my desktop it was in the ~10 MB/s range. Any tips for VirtualBox/Samba configuration to improve the performance? I am using Samba 3.3.2. Hopefully Samba with SMB2 support will be released soon.
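    A few commonly suggested smb.conf tuning knobs for Samba 3.x, sketched below; results vary, and on many builds the defaults are already close to these:

        [global]
            socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536
            read raw = yes
            write raw = yes
            max xmit = 65535
            dead time = 15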

    Read the article

  • Apache directory access with virtual host

    - by alexeygaidamaka
    I have a virtual host with a configuration like the one below. When I try to get into foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of that directory's contents. www.foobar.com/dir has 777 rights, and .htpasswd is chmoded 644, but I can't figure out why I am still not seeing the contents. Please give me a hint.

        ServerAdmin webmaster@localhost
        ServerName www.foobar.com
        ServerAlias www.foobar.com
        DocumentRoot /var/www/foobar

        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>

        <Directory /var/www/foobar>
            Options -Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /var/www/foobar/dir>
            AllowOverride AuthConfig
            AuthName "Authorize yourself, please!"
            AuthType Basic
            AuthUserFile /etc/apache2/.htpasswd
            AuthGroupFile /dev/null
            Allow from All
            Order Allow,Deny
            Require valid-user
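    A few checks that fit these exact symptoms (a sketch; Apache 2.2 directives assumed, matching the Order/Allow syntax above). With "Order Allow,Deny" plus "Allow from All", the default Satisfy All means host access passes and authentication governs, so a 403 (rather than a repeated 401 prompt) usually points at something other than the password file's contents:

        apache2ctl -t                                  # syntax check; catches an unclosed <Directory>
        sudo -u www-data cat /etc/apache2/.htpasswd    # is the auth file readable by the server user?
        namei -m /var/www/foobar/dir                   # is any parent directory missing the x bit?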

    Read the article

  • What can an inexperienced admin expect after a server setup that completed seemingly fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is his very first time being the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, ProFTPD, and MTA & MDA software, configured VirtualHosts properly (facts, because he calls himself admin), done user management and various configuration settings with respect to security recommendations, and... everything is fine for now. For now.

    If you were directing a horror movie about the server admin mentioned above, what would you make up for the boogieman that shows up and starts to pursue him? Omitting hardware-disaster cases, for which one cannot do anything from remote, what are the most common causes of significant server (or part-of-server, or server-related) failure when managed by an inexperienced admin? I have in mind the kind of thing newbie admins very often miss and which leads to a later intervention by someone with experience. Might that be some uncontrolled CPU-eating leftover process, a memory-related glitch, a widely-used feature that messes up something unexpected, or anything like that?

    The newbie admin for now only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips regarding what's probably going to happen to his server over time.

    Read the article

  • Separate Certificate by Subdomain (With multiple IPs)

    - by Brian
    Note: Yes, I realize this problem is easier to solve by just using one multi-domain or wildcard certificate.

    I wish to have an ASP.NET site running on IIS with two SSL domains sharing one web application but using separate certificates. Assuming I have two certificates, this can be solved on IIS7 as follows:

        Web Application 1:
            Binding 1: http, 80, IP Address *, Host Name *
            Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
            Binding 3: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, but both mapped to the same web application. In IIS6, the closest I have been able to come to this configuration is:

        Web Application 1:
            Binding 1: http, 80, IPADDRESS1
            Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)

        Web Application 2:
            Binding 1: http, 80, IPADDRESS2
            Binding 2: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, 2 web applications, both mapped to the same file location. The IIS6 solution is not optimal. Even if sharing an application pool, there are still costs associated with running the same site as two applications.

    Is upgrading from IIS6 to IIS7 a legitimate way to resolve this problem? Is there an IIS6 way to map 2 IP addresses within the same web application to different certificates?

    Read the article

  • Set up a shared internet connection on VirtualBox with a fixed IP

    - by Tom
    I am a web developer, and until recently I was using Ubuntu as my OS. For many reasons, I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I have a little struggle with the networking.

    Since I am working in different places and going around to clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on the host and a static IP on the guest. That way I can access the server from my host (by adding a record to the hosts file). I also have an internet connection on the guest. But once I change networks, it does not work (assuming the network has a different configuration).

    My question is: how do I set up host-guest networking so that, no matter what network I connect to, I keep the static IP on the guest (which is registered in the hosts file on my host, so I can access the web server) and also have an internet connection on the guest? Hope it makes sense. Thank you.
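    The usual recipe for this is two adapters: a host-only interface carrying the fixed IP (the host-only subnet never changes with the outside network) plus a NAT interface for internet access. A sketch with VBoxManage; the VM name "devserver" is illustrative:

        VBoxManage hostonlyif create                               # creates vboxnet0 on the host
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1
        VBoxManage modifyvm "devserver" --nic1 hostonly --hostonlyadapter1 vboxnet0
        VBoxManage modifyvm "devserver" --nic2 nat                 # second adapter for internet

    Inside the guest, give the host-only interface a static address such as 192.168.56.10 and point the hosts-file entry at that.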

    Read the article

  • Load balancing with nginx and Tomcat

    - by London
    Hello. This should be fairly easy to answer for any sysadmin; the problem is that I'm not a server admin, but I have to complete this task. I'm very close, but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access them by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be seen):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, he is transferred to either machine1:8080/appName or machine2:9090/appName? Thank you.
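    One common fix, sketched under the assumption that both Tomcats serve the application under the same context path /appName: give proxy_pass a URI part, so nginx replaces the matched location prefix with it before forwarding:

        location / {
            # ... same proxy_set_header lines as above ...
            proxy_pass http://backend/appName/;    # a request for "/" becomes "/appName/"
        }

    Redirects and absolute links generated by the application may still contain /appName, which proxy_redirect or Tomcat-side configuration would have to handle.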

    Read the article

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with one [CLC+CC] machine and two [NC] machines. I have another Cloud-B with the same configuration, using Ubuntu Enterprise Cloud. Both of them work fine individually, on the same LAN. Now if I want to add the NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for the interoperability stuff...

    Could you please explain what happens exactly when we ask for an instance: does the direct interaction happen between the client and the NC, or does it go through the CLC and CC?

    What I want to say is: suppose there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background?

    Read the article

  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed BackTrack 5 R3 KDE, and I realized that my wireless is not working, but wired is working fine. Here's the lshw output:

        *-network
            description: Ethernet interface
            product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:02:00.0
            logical name: eth0
            version: 05
            serial: 04:7d:7b:b7:46:f8
            size: 100MB/s
            capacity: 100MB/s
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s
            resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff

    lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
        00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
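    Notably, the lspci listing above shows no wireless controller at all, so the first question is whether the kernel sees the radio in any form. A quick sketch of checks:

        lsusb                                       # many laptop radios attach over internal USB
        lspci -nn | grep -i -E 'wireless|802|wlan'  # double-check the PCI bus
        rfkill list                                 # a hard/soft kill switch would block the interface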

    Read the article

  • FreeBSD jail IMAP/MTA config recommendations

    - by kobame
    I've got access to my "own" FreeBSD jail. The jail has only a basic, unconfigured system, but I have full access to FreeBSD ports, and (jail) root too. Now I need to set up my jail as an IMAP/MTA host.

    The question: which packages are EASIEST to configure and administer later (the simplest possible setup, with the minimum needed configuration), given that:

        - I haven't any preferences (I don't know any of them yet)
        - my (one) domain is managed by the ISP, so I don't need DNS
        - I need only IMAP for a few users (up to 20 mailboxes)
        - I need a secure transport layer (IMAPS/993) and password auth; no LDAP, no Kerberos, no databases, nothing fancy
        - I need an easy-setup, easy-admin MTA with the simplest possible SMTP password auth (again no LDAP, nor DB) and a secure transport layer, though it would be nice to have virus scanning and some anti-spam protection

    So, what ports should I install for the MTA and IMAP? MTA (Sendmail, Postfix, Exim)? Antivirus (ClamAV)? Antispam? IMAP(S) (Dovecot, Courier)? The main criteria are easy setup and easy administration. When I googled, I found only complicated setups for thousands of users with LDAP, databases and so on - too big a caliber for my small (easy?) needs. Any pointer to an easy howto is very welcome.
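    For what it's worth, the pairing most often recommended for exactly this profile is Postfix (MTA) plus Dovecot (IMAP), with ClamAV bolted on via an SMTP content filter later if needed. A sketch using the ports tree the poster already has (port origins as of that era; verify with "make search name=..." from /usr/ports):

        cd /usr/ports/mail/postfix && make install clean
        cd /usr/ports/mail/dovecot2 && make install clean
        cd /usr/ports/security/clamav && make install clean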

    Read the article

  • Have a set of CGI scripts shared by multiple domains

    - by rpat
    Goal: have multiple domains share a set of CGI (Perl) scripts.

    Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel).

    I have dozens of domains on the dedicated server, set up by cPanel under the VirtualHost section. I have almost no knowledge of Apache; most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt), and for each of the domains, under the individual cgi-bin, create a symbolic link to this common directory. This way I am hoping to avoid having to keep a copy of the scripts for every domain.

    Since the Apache config files are generated by cPanel, I would not like to make changes to those manually. Besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf. Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory, and allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are. I would be glad to use any other means to achieve my goal. Thanks in advance for your help.

    (I asked this on Stack Overflow and someone suggested that I ask it on Server Fault.)
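    The symlink approach itself is a one-liner per domain; the catch on a cPanel box is suEXEC, which refuses to run CGI scripts not owned by the vhost's user, so sharing may mean relaxing suEXEC or keeping per-user copies after all. A sketch of the layout (paths and script names illustrative):

        mkdir -p /opt/shared-cgi                       # one canonical copy of the scripts
        ln -s /opt/shared-cgi/report.pl /home/user1/public_html/cgi-bin/report.pl

    and, in a cPanel include file rather than httpd.conf itself, something along the lines of:

        <Directory "/opt/shared-cgi">
            Options +ExecCGI +FollowSymLinks
        </Directory>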

    Read the article

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3Com switch, which is connected to the router that handles the internet connection. The main purpose of the server is to host a PHP application. What's happening is that users 1 to 15 on the private network have no problems connecting to the server, but when user 16 tries to connect, a timeout comes up and they are unable to connect. It's not just the PHP application: it's any service on the server. When the 15 users are using the application, the server doesn't even answer pings. I haven't set any special limit in Apache's or MySQL's configuration, and the firewall is turned off because the server only serves the internal network. Is there a parameter in any of the network card's configuration files that might be causing this? Or should I suspect the router's or switch's configuration?

    UPDATE: Tomorrow I'm going to run some tests on the server, modifying two kernel parameters in /etc/sysctl.conf: net.core.somaxconn, which caps the listen backlog of pending connections to the server, and kernel.shmmax, which sets the maximum size of a shared memory segment.
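    A sketch of that test (values illustrative; the current settings are worth recording first):

        cat /proc/sys/net/core/somaxconn     # note the current backlog limit (often 128)
        # /etc/sysctl.conf
        net.core.somaxconn = 1024
        sudo sysctl -p                       # apply without rebooting

    That said, a hard ceiling of exactly 15-16 clients that also kills ping responses looks more like network gear than a kernel limit, so checking the switch and router remains worthwhile.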

    Read the article

  • Getting the DWL-G122 to work on Ubuntu

    - by User1
    I have a USB Wi-Fi adapter, a D-Link DWL-G122, and I'm running Ubuntu 10.04. My laptop has a built-in wireless card that connects fine to the router. I plug in the USB adapter and it never really connects. Here are some details:

    iwconfig:

        wlan1  IEEE 802.11bg  ESSID:"\x0B\xE1..."
               Mode:Managed  Frequency:2.457 GHz  Access Point: Not-Associated
               Tx-Power=19 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:on

    lshw -c network:

        *-network:1
            description: Wireless interface
            physical id: 3
            logical name: wlan1
            serial: 00:13:46:8b:xx:xx
            capabilities: ethernet physical wireless
            configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bg

    dmesg:

        [ 1096.814176] wlan1: direct probe to AP xxx (try 1)
        [ 1096.820960] wlan1: direct probe responded
        [ 1096.820969] wlan1: authenticate with AP xxx (try 1)
        [ 1096.823790] wlan1: authenticated
        [ 1096.823869] wlan1: associate with AP xxx (try 1)
        [ 1096.827667] wlan1: RX AssocResp from xxx (capab=0x411 status=0 aid=1)
        [ 1096.827674] wlan1: associated
        [ 1142.590912] wlan1: deauthenticating from xxx by local choice (reason=3)

    lsmod | grep rt2:

        rt2500usb   19643  0
        rt2x00usb   11260  1 rt2500usb
        rt2x00lib   32133  2 rt2500usb,rt2x00usb
        mac80211   238896  3 ath5k,rt2x00usb,rt2x00lib
        cfg80211   148725  4 ath5k,ath,rt2x00lib,mac80211
        led_class    3764  3 ath5k,rt2x00lib,sdhci

    It looks like the driver loads, but it doesn't feel like connecting. The behavior is identical even if I blacklist the other Wi-Fi card (which uses the ath5k driver). It's almost like it is using the wrong password or something. Does anyone know what is happening? Is anyone using this adapter on Ubuntu successfully?

    Read the article

  • Issues connecting to HP ProCurve switches

    - by BriGuy
    We are having a very strange issue trying to connect to our infrastructure switches via SSH. When you first try connecting to them, the switches will prompt for the password and then just sit there after it is entered. If you create a second SSH session to the switch (while letting the first one remain open, just sitting there), it will let you log right in. The switches do the same thing with both RADIUS and local authentication.

    The other strange part of all this is that about 10 switches started doing it at the same time. As far as actual switch configuration goes, nothing has changed. Occasionally, one switch will start working normally, but then stop again. These are all HP ProCurve managed switches, but all different models/firmware; some switches that are not working are on the same firmware as others that are working.

    UPDATE 2013-03-12: I am also seeing this same behavior when trying to use telnet. The first telnet session just hangs there, and the second telnet session will let me log in. Rebooting the switches seems to get them working, but I still have 5 production switches that cannot easily be rebooted because of their production roles. Is anyone aware of anything else that can be switched on/off that might reset the logon for remote management, or something like that?

    Read the article

  • How to correctly set GNU Screen to display currently running program in hardstatus

    - by johnny_bgoode
    I posted this question on SuperUser, but it's hardly getting any views, so I thought I'd ask here as well.

    In bash, displaying the name of the current program in the GNU Screen hardstatus line takes only two configuration lines. First, tell screen what the end of your prompt normally looks like, and supply a default title for a window when you are sitting in the shell:

        shelltitle "$ |bash"

    Next, place this escape sequence in the PS1 variable, before the characters that normally terminate the prompt ('$ ' in this case):

        \033k\033\\

    This technique works, to a point. The hardstatus window title is updated to the name of the currently running program, and then switches back to the default title shortly after execution is finished. One major problem, however, is that this escape string is not itself escaped, causing line-wrapping problems with commands longer than the initial line. This was annoying, so I set out looking for a solution. It turns out that simply escaping the previous escape sequence corrects the line wrapping:

        \[\033k\]\[\033\\\]

    Great! My hardstatus window title still updates to the name of the currently running program, and now my longer commands wrap to the second line correctly. However, with this new escape sequence in my PS1, screen updates the window title to the actual command I am typing, not simply the name of the current program once it is executed. I am wondering, has anyone gotten this working correctly - i.e. both line wrapping and proper updating of the hardstatus window title? Thanks!
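    For reference, the recipe usually cited for getting both behaviors at once is to keep the two escapes together inside a single non-printing \[...\] group at the start of the prompt, rather than bracketing them separately. A sketch, assuming bash and the shelltitle shown above:

        # ~/.screenrc
        shelltitle "$ |bash"

        # ~/.bashrc
        PS1='\[\033k\033\\\]\u@\h:\w$ '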

    Read the article

  • Apache APC (Windows): Can I optimize these APC settings more?

    - by ar099968
    I would like to optimize APC some more, but I am not sure where I could do anything. First, here are the stats after one week of running with the current configuration:

        General Cache Information
            APC Version: 3.1.9
            PHP Version: 5.4.4
            APC Host: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
            Server Software: Apache
            Shared Memory: 1 Segment(s) with 128.0 MBytes (IPC shared memory, Windows Slim RWLOCK (native) locking)
            Start Time: 2014/06/08 05:00:00
            Uptime: 6 days, 11 hours and 55 minutes
            File Upload Support: 1

        Host Status
            Memory Usage: Free 99.7 MBytes (77.9%), Used 28.3 MBytes (22.1%)
            Hits & Misses: Hits 510818 (99.9%), Misses 608 (0.1%)
            Fragmentation: 0.60% (609.8 KBytes out of 99.7 MBytes in 83 fragments)

        File Cache Information
            Cached Files: 693 (35.4 MBytes)
            Hits: 5143359
            Misses: 1087
            Request Rate (hits, misses): 13.24 cache requests/second
            Hit Rate: 13.24 cache requests/second
            Miss Rate: 0.00 cache requests/second
            Insert Rate: 0.01 cache requests/second
            Cache full count: 0

        User Cache Information
            Cached Variables: 0 (0.0 Bytes)
            Hits: 0
            Misses: 0
            Request Rate (hits, misses): 0.00 cache requests/second
            Hit Rate: 0.00 cache requests/second
            Miss Rate: 0.00 cache requests/second
            Insert Rate: 0.00 cache requests/second
            Cache full count: 0

        Runtime Settings
            apc.cache_by_default = 1
            apc.canonicalize = 1
            apc.coredump_unmap = 0
            apc.enable_cli = 0
            apc.enabled = 1
            apc.file_md5 = 0
            apc.file_update_protection = 2
            apc.filters = -/apc.php$, -/apc_clean.php$, -.tpl.cache.php$, -.tpl.php$, -.string.cache.php$, -.string.php$
            apc.gc_ttl = 3600
            apc.include_once_override = 0
            apc.lazy_classes = 0
            apc.lazy_functions = 0
            apc.max_file_size = 2M
            apc.num_files_hint = 7000
            apc.preload_path =
            apc.report_autofilter = 0
            apc.rfc1867 = 0
            apc.rfc1867_freq = 0
            apc.rfc1867_name = APC_UPLOAD_PROGRESS
            apc.rfc1867_prefix = upload_
            apc.rfc1867_ttl = 3600
            apc.serializer = default
            apc.shm_segments = 1
            apc.shm_size = 128M
            apc.shm_strings_buffer = 4M
            apc.slam_defense = 0
            apc.stat = 1
            apc.stat_ctime = 0
            apc.ttl = 7200
            apc.use_request_time = 1
            apc.user_entries_hint = 4096
            apc.user_ttl = 7200
            apc.write_lock = 1
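    Two observations that commonly come out of stats like these, sketched as possible apc.ini changes rather than firm recommendations:

        ; if code only changes at deploy time, skip the per-request stat() of every file,
        ; at the cost of having to clear the cache (or restart Apache) on each deploy
        apc.stat = 0
        ; the user cache is completely unused and only ~28 MB of 128 MB is occupied,
        ; so the segment could be trimmed if that memory is needed elsewhere
        apc.shm_size = 64M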

    Read the article

  • Windows 2008 R2 DNS can't resolve its own SOA

    - by user46742
    We have two domain controllers for our network. They both run DHCP, DNS, and AD DS, and they are both VMs sitting on MS Hyper-V Server 2008 on separate physical hosts. Our primary DC went down a week ago. I promoted an already existing VM to primary DC and built a new VM for the secondary. Both DNS servers are running and the SOA is configured correctly for primary DC 1. However, when I run the Best Practices Analyzer, it states the server cannot resolve its own SOA and says to check the configuration of the adapter. I checked, and the adapters are configured properly. I also went through the DNS entries thoroughly and made sure there were no records left over from the previous DC that went down. NSLOOKUP resolves the domain and the primary DC fine. I also checked the firewalls on the machines, and our physical firewall, for any denied packets. Any suggestions? I appreciate any help!
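    A quick way to reproduce what the BPA is testing (a sketch; substitute the real zone name) is to query the DC's own DNS service directly, rather than whichever resolver NSLOOKUP picks by default:

        nslookup -type=SOA mydomain.local 127.0.0.1
        dnscmd /enumzones

    If the SOA query succeeds against 127.0.0.1 but the BPA still complains, the adapter's DNS server order (a DC is usually pointed at its partner first and itself second) is the usual suspect.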

    Read the article

  • Outlook 2010 Crashing Unpredictably

    - by cbkadel
    Very often when I open Outlook 2010 and start doing things in it, it will hang and become unresponsive. I have tried letting it finish, but it never comes back (up to 20 minutes of letting it try). I generally have to restart Outlook and try again. Usually, after about an hour of this, Outlook somehow snaps out of it and works for the rest of the day. It's generally in the morning (though I doubt that's the key variable). Usually the emails that cause problems are HTML/formatted, but not always.

    What I've done so far to troubleshoot:

        - Installed the latest Outlook hotfix (I think Dec 14, 2010)
        - Started Outlook in safe mode

    Neither of those steps seems to make a difference. Usually, after about 10-15 restarts of Outlook on any given day, it starts working thereafter. My next step is to uninstall/reinstall Office 2010, but I'm hoping someone has seen this and knows what to do about it. My configuration is like this:

        - Microsoft Online Services (using Microsoft's Sign In App), connecting to Exchange
        - Two other Exchange accounts in this profile (a new feature in 2010), connected through Outlook Anywhere
        - Live Meeting conferencing add-in
        - The People tab/add-in disabled
        - The "Send to Bluetooth" add-in disabled

    Not sure what else to do?

    Read the article

  • How would I change the DocumentRoot on the version of Apache that came pre-installed on my Mac OS X system?

    - by racl101
    OK, so I want to take advantage of the Apache server that comes installed on my Mac OS X system (that is, I'd rather not install my own version of Apache when I can try to use what comes bundled), and as such, I went to change some settings in the configuration file /etc/apache2/httpd.conf. Namely, I changed these two lines:

        DocumentRoot "/Users/myusername/Sites"
        <Directory "/Users/myusername/Sites">

    so that they pointed to a folder inside my Dropbox folder (so I could have my docs sync to Dropbox):

        DocumentRoot "/Users/myusername/Dropbox/public_html"
        <Directory "/Users/myusername/Dropbox/public_html">

    That didn't work. So then I figured maybe it was too much to ask to make a folder in my Dropbox the document root. So then I thought: what if I make the document root another folder of my choosing, like so:

        DocumentRoot "/Users/myusername/dev-sites/public_html"
        <Directory "/Users/myusername/dev-sites/public_html">

    That didn't work either. After looking through the httpd.conf file for clues, it seems that only two directories appear to work as document root paths for the Apache that comes bundled with Mac OS X:

        /Users/myusername/Sites (or ~/Sites)
        /Library/WebServer/Documents/

    Trying to use any other directories didn't seem to work; I would get 403 errors in my browser. I was wondering if there are some other settings to change in the httpd.conf file, or permissions to set, to make this work. Any help would be appreciated, and many thanks in advance.
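    A 403 on a custom DocumentRoot is usually a traversal-permission problem: the _www user that Apache runs as needs execute (search) permission on every directory from / down to the new root, which home folders like ~/Dropbox often lack. A sketch using the paths from the question:

        chmod o+x /Users/myusername
        chmod o+x /Users/myusername/dev-sites
        chmod -R o+rX /Users/myusername/dev-sites/public_html
        sudo apachectl restart

    The <Directory> block for the new path also needs to allow access (in that era of httpd.conf, "Order allow,deny" and "Allow from all"), which the stock ~/Sites block already carries.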

    Read the article

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on GlassFish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like "Issues with setting up SSL on Glassfish v3" (the above links are for information only; I've summarised what I've done below). As far as I can tell I'm doing everything correctly, but I'm getting this error:

        SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled

    Some background on what I have done:

        - My cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same pwd as the GF master password.
        - I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config).
        - I set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name that I get when I run keytool -list).
        - In the ciphers section on the network listeners page, I left the defaults in place (empty, which means all ciphers are available, I think).
        - In domain.xml I've replaced all instances of s1as with 'mydomain.org'.

    This is the question: what exactly is causing the error highlighted? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what.

    Thanks
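    One check worth doing first (a sketch; run from domains/domain1/config): the alias named as the listener's Certificate Nickname must be a private-key entry, not just a trusted certificate. If the GoDaddy reply was imported under a different alias than the one holding the private key, the result is a trustedCertEntry with no key behind it, which matches this "no available certificate or key" error:

        keytool -list -keystore keystore.jks
        # look for:  mydomain.org, <date>, PrivateKeyEntry, ...
        # a trustedCertEntry under that alias means the cert reply never joined its key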

    Read the article
