Search Results

Search found 25651 results on 1027 pages for 'shell script'.

Page 710 of 1027

  • MAC addresses on dual-NIC mainboards

    - by Tom O'Connor
    Here's a weird problem. We've got a number of devices with dual-NIC mainboards. Some are Realtek NICs, which suck. Some are Intel e1000s, which don't. I've just noticed on two machines (one with an Intel NIC, one with a Realtek) that when I put the MAC address of one machine into the dhcpd.conf file on our DHCP server to get it to PXE boot the machine into a rebuild environment, initially everything is fine. The server gets a DHCP allocation, and PXE boots into the Ubuntu preseed environment. On one or two machines, it gets as far as Ubuntu's DHCP network configuration, and fails. If I pull up a busybox shell (on tty2 on the installing machine) and run ip link, I can see that the UP flag is set on the other NIC. Here's the relevant configuration. host xeon16-ghz240-gb48-node1 { hardware ethernet BC:AE:C5:07:1F:18; filename "pxelinux.0"; next-server 192.168.123.80; } That's what's in dhcpd.conf. This is what ip link on the evil machine looks like. Only one NIC is actually connected (deliberately). As you can see, the NIC that's in the dhcpd config is not marked as UP, and the link that is UP isn't the one in DHCP. So far I've seen this on two brands of dual-NIC configuration. Does anyone know 1) what's causing it, and 2) what we can do about it?
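
    A quick way to confirm which physical port actually has link, and which MAC belongs to it, is a small loop over /sys (a sketch; interface names differ per machine):

        # Print each interface with its MAC address and link state.
        for dev in /sys/class/net/*; do
            printf '%-8s %s %s\n' "$(basename "$dev")" "$(cat "$dev/address")" "$(cat "$dev/operstate")"
        done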

    Read the article

  • Wicked VNC Viewer acting out on Windows desktop and CentOS 6.3 server

    - by Johnny Lee
    What we have here is that the only way to open the TightVNC viewer on this Windows XP desktop is to have a TigerVNC viewer open on the CentOS 6.3 server desktop. I know it sounds really weird and we’re looking for hints to make it go away. Any ideas? Here is the recipe: We are using PuTTY on the Windows desktop as an SSH (Secure Shell) client and terminal emulator. We open and log in with PuTTY, then open and log in to the TightVNC viewer. After many failed attempts, much Googling, and lots of reading to no avail, I decided to open the TigerVNC viewer on the CentOS 6.3 server by way of the GNOME desktop Application menu -- Internet tab. After opening and logging into the TigerVNC viewer on the CentOS 6.3 server, voila! We have a remote desktop opened on the server. An interesting discovery was that the TigerVNC viewer on the server showed a request on the desktop it displayed that was not on the server desktop itself. This turned out to be a login request that, once the password was entered, opened the TightVNC viewer on the Windows desktop. Weird, huh? Why is that password request showing up on the CentOS 6.3 server in the TigerVNC viewer, as opposed to showing up on the Windows desktop, when logging in to the server using the TightVNC viewer?
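
    One pattern that avoids juggling two viewers is to tunnel VNC over the existing SSH session and point the Windows viewer at the tunnel (a sketch; the display/port 5901 is an assumption and depends on which VNC display the server actually runs):

        # From the Windows side (PuTTY can do the same via its Tunnels page):
        ssh -L 5901:localhost:5901 user@centos-server
        # Then connect the TightVNC viewer to localhost:5901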

    Read the article

  • Nagios: NRPE: Unable to read output, Can't find the reason, can you?

    - by Itai Ganot
    I have a Nagios server and a monitored server. On the monitored server: [root@Monitored ~]# netstat -an |grep :5666 tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN [root@Monitored ~]# locate check_kvm /usr/lib64/nagios/plugins/check_kvm [root@Monitored ~]# /usr/lib64/nagios/plugins/check_kvm -H localhost hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm NRPE: Unable to read output [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 [root@Monitored ~]# ps -ef |grep nrpe nagios 21178 1 0 16:11 ? 00:00:00 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d [root@Monitored ~]# On the Nagios server: [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 -c check_kvm NRPE: Unable to read output [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 NRPE v2.14 [root@Nagios ~]# When I check another server in the network using the same command it works: [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.80 -c check_kvm hosts:4 OK:4 WARN:0 CRIT:0 - karmisoft:running ab2c4:running kidumim1:running travel2gether1:running [root@Nagios ~]# Running the check locally using Nagios account: [root@Monitored ~]# su - nagios -bash-4.1$ /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running -bash-4.1$ Running the check remotely from the Nagios server using Nagios account: -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 -c check_kvm NRPE: Unable to read output -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 NRPE v2.14 -bash-4.1$ Running the same check_kvm against a different server in the network using Nagios account: -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.80 -c check_kvm hosts:4 OK:4 WARN:0 CRIT:0 - karmisoft:running ab2c4:running kidumim1:running travel2gether1:running -bash-4.1$ Permissions: -rwxr-xr-x. 1 root root 4684 2013-10-14 17:14 nrpe.cfg (aka /etc/nagios/nrpe.cfg) drwxrwxr-x. 3 nagios nagios 4096 2013-10-15 03:38 plugins (aka /usr/lib64/nagios/plugins) /etc/sudoers: [root@Monitored ~]# grep -i requiretty /etc/sudoers #Defaults requiretty iptables/selinux: [root@Monitored xinetd.d]# service iptables status iptables: Firewall is not running. [root@Monitored xinetd.d]# service ip6tables status ip6tables: Firewall is not running. [root@Monitored xinetd.d]# grep disable /etc/selinux/config # disabled - No SELinux policy is loaded. SELINUX=disabled [root@Monitored xinetd.d]# The command in /etc/nagios/nrpe.cfg is: [root@Monitored ~]# grep kvm /etc/nagios/nrpe.cfg command[check_kvm]=sudo /usr/lib64/nagios/plugins/check_kvm and the nagios user is added on /etc/sudoers: nagios ALL=(ALL) NOPASSWD:/usr/lib64/nagios/plugins/check_kvm nagios ALL=(ALL) NOPASSWD:/usr/lib64/nagios/plugins/check_nrpe The check_kvm is a shell script, looks like that: #!/bin/sh LIST=$(virsh list --all | sed '1,2d' | sed '/^$/d'| awk '{print $2":"$3}') if [ ! 
"$LIST" ]; then EXITVAL=3 #Status 3 = UNKNOWN (orange) echo "Unknown guests" exit $EXITVAL fi OK=0 WARN=0 CRIT=0 NUM=0 for host in $(echo $LIST) do name=$(echo $host | awk -F: '{print $1}') state=$(echo $host | awk -F: '{print $2}') NUM=$(expr $NUM + 1) case "$state" in running|blocked) OK=$(expr $OK + 1) ;; paused) WARN=$(expr $WARN + 1) ;; shutdown|shut*|crashed) CRIT=$(expr $CRIT + 1) ;; *) CRIT=$(expr $CRIT + 1) ;; esac done if [ "$NUM" -eq "$OK" ]; then EXITVAL=0 #Status 0 = OK (green) fi if [ "$WARN" -gt 0 ]; then EXITVAL=1 #Status 1 = WARNING (yellow) fi if [ "$CRIT" -gt 0 ]; then EXITVAL=2 #Status 2 = CRITICAL (red) fi echo hosts:$NUM OK:$OK WARN:$WARN CRIT:$CRIT - $LIST exit $EXITVAL Edit (10/22/13): Following all that, I am now able to get some response from the script: [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm Unknown guests [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 [root@Monitored ~]# /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running [root@Monitored ~]# su - nagios -bash-4.1$ /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm Unknown guests -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 It seems like the problem is some how related to the check_nrpe command or something which is related to the nrpe installation on the server.

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using Nexentastor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris Bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390 and http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824 In the future, I'll probably take the advice given in one of the Bugzilla workarounds: Workaround: Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled. Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.
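
    If a shell can still be reached while the pool is busy freeing deduplicated blocks, a couple of read-only commands (pool name taken from the question) can at least indicate whether the destroy/import is making progress rather than being truly wedged:

        zpool status -D vol1    # dedup table (DDT) entry counts should shrink over time
        zpool iostat -v vol1 5  # sustained disk activity suggests the free is still running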

    Read the article

  • Menu tab completion for recent history in zsh

    - by dat5h
    I am interested in a potential zle widget for zsh. Is there a way to build a widget that mimics the kill-completion selectable menu? Essentially I want to be able to press , tab in vi-command-mode, or maybe !-tab-completion at the shell, and get a list of recent history (or related history compared to what is already entered at the command line) that allows me to scroll through it and possibly select a relevant function to call or compare similar calls. Looking through the manual I stumbled onto a similar widget that I have mapped like so: # tab completion history menu (vicmd) autoload -z history-beginning-search-menu zle -N history-beginning-search-menu-space-end history-beginning-search-menu bindkey -M vicmd "\t" history-beginning-search-menu-space-end # emacs binding could be "\e\t"? (I wouldn't know) Therefore, if I enter vicmd and hit tab when I have entered something like "grep", then I get a list of all grep calls in history. It also asks me for the list number and it will perform the numbered item in history. If I enter a space and then try this, it lists ALL of my history. This is fairly close to what I want, but there are some problems. For example, 1) it prints the entire list of relevant history and does not check the number of lines of the screen, so it could easily blow up the space on the terminal; 2) when I type in numbers for selecting an item in history it does not show me the numbers I type, so I may make a mistake and have to start over again; 3) I would love to be able to hook in appearance tweaks. I was wondering if there exists a more updated version of this widget, or if there is any way to look at the source for kill-completion or history-beginning-search-menu to see if I could think of a way to do it.

    Read the article

  • Can't successfully run SharePoint Foundation 2010 first-time configuration

    - by Robert Koritnik
    I'm trying to run the non-GUI version of the configuration wizard using PowerShell because I would like to set the config and admin database names. The GUI wizard doesn't give you all the possible configuration options (and even so, it doesn't work either). I run this command: New-SPConfigurationDatabase -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force) Of course all these are in the same line. I've broken them down into separate lines to make it easier to read. When I run this command I get this error: New-SPConfigurationDatabase : Cannot connect to database master at SQL server a t developer.mydomain.pri. The database might not exist, or the current user does not have permission to connect to it. At line:1 char:28 + New-SPConfigurationDatabase <<<< -DatabaseName "Sharepoint2010Config" -Datab aseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint 2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureS tring "%h4r3p0int" -AsPlainText -Force) + CategoryInfo : InvalidData: (Microsoft.Share...urationDatabase: SPCmdletNewSPConfigurationDatabase) [New-SPConfigurationDatabase], SPExcep tion + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPCon figurationDatabase I created two domain accounts and haven't added them to any group: SPF_DATABASE - database account SPF_ADMIN - farm account I'm running the PowerShell console as domain administrator. I've tried to run SQL Management Studio as domain admin and created a dummy database, and it worked without a problem. I'm running: Windows 7 x64 on the machine where SharePoint Foundation 2010 should be installed, which also has SQL Server 2008 R2 preinstalled; Windows Server 2008 R2 Server Core is my domain controller that just serves domain features and nothing else. I've installed SharePoint according to the MS guide http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx installing all additional patches that are related to my configuration. Any ideas on what I should do to make it work?

    Read the article

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed configs (IP to hostname) in /var/lib/glusterd by my shell script. After that I restarted Gluster Daemon and the volume. Then I checked if all the peers are connected: root@GlusterNode1a:~# gluster peer status Number of Peers: 3 Hostname: gluster-1b Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2 State: Peer in Cluster (Connected) Hostname: gluster-2b Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9 State: Peer in Cluster (Connected) Hostname: gluster-2a Uuid: 72405811-15a0-456b-86bb-1589058ff89b State: Peer in Cluster (Connected) I could see mounted volumes size change on all the nodes when I execute df command, so new data is coming. But recently I noticed error messages in app log: copy(/storage/152627/dat): failed to open stream: Structure needs cleaning readfile(/storage/1438227/dat): failed to open stream: Input/output error unlink(/storage/189457/23/dat): No such file or directory Finally, I have found out some bricks are offline: root@GlusterNode1a:~# gluster volume status Status of volume: storage Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick gluster-1a:/storage/1a 24009 Y 1326 Brick gluster-1b:/storage/1b 24009 N N/A Brick gluster-2a:/storage/2a 24009 N N/A Brick gluster-2b:/storage/2b 24009 N N/A Brick gluster-1a:/storage/3a 24011 Y 1332 Brick gluster-1b:/storage/3b 24011 N N/A Brick gluster-2a:/storage/4a 24011 N N/A Brick gluster-2b:/storage/4b 24011 N N/A NFS Server on localhost 38467 Y 24670 Self-heal Daemon on localhost N/A Y 24676 NFS Server on gluster-2b 38467 Y 4339 Self-heal Daemon on gluster-2b N/A Y 4345 NFS Server on gluster-2a 38467 Y 1392 Self-heal Daemon on gluster-2a N/A Y 1402 NFS Server on gluster-1b 38467 Y 2435 Self-heal Daemon on gluster-1b N/A Y 2441 What can I do about that? I need to fix it. Note: CPU and Network usage of all the four nodes are about the same.
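
    If the brick daemons on the renamed peers simply never came back up after the restart, forcing a volume start respawns any that are down without touching bricks that are already online (volume name from the question), and the brick logs normally say why they exited:

        gluster volume start storage force
        gluster volume status storage
        less /var/log/glusterfs/bricks/storage-1b.log   # log file name is an example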

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results, and previous posts on this site. I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. This workstation in question is getting its IP from DHCP and probably hasn't gotten its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user? Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script but this doesn't fly. Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. *** Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50) Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed Dec 15 19:30:47 localhost configd[15]: network configuration changed. Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0 Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2 Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed. 
Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local" Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000 Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed. Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20 Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available! Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000 Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log ``You can see the following line in /var/log/kernel.log that shows the en0 interface coming up: Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
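
    One workable variant is to have the startup job wait until DHCP has produced a default route before adding the static route (network and gateway taken from the log above; the 60-second ceiling is arbitrary):

        #!/bin/sh
        # Wait for DHCP to install a default route, then add the static route.
        i=0
        while ! route -n get default >/dev/null 2>&1; do
            sleep 2
            i=$((i+2))
            [ "$i" -ge 60 ] && exit 1
        done
        route -n add -net 192.168.0.0/23 192.168.2.2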

    Read the article

  • Apache cyclic redirection problem

    - by slicedlime
    I have an extremely weird problem with one of my sites. I run a number of blogs off a single apache2 server with a shared wordpress install. Each site has a www.domain.com main domain, but a ServerAlias of domain.com. This works fine for all the blogs except one, which instead of redirecting to www.domain.com redirects to domain.com, causing a cyclic redirection. The configuration for each host looks like this: <VirtualHost *:80> ServerName www.domain.com ServerAlias domain.com DocumentRoot "/home/www/www.domain.com" <Directory "/home/www/www.domain.com"> Options MultiViews Indexes Includes FollowSymLinks ExecCGI AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> As this didn't work, I tried a mod_rewrite rule for it, which still didn't redirect correctly. The weird thing here is that if i rewrite it to redirect to any other domain it will redirect correctly, even to another subdomain. So a rewrite rule for domain.com that redirects to foo.domain.com works, but not to www.domain.com. In the same way, trying to redirec to www.domain.com/foo/ ends me up with a redirection to domain.com/foo/. Even weirder, I tried setting up domain.com as a completely separate virtual host, and ran this php test script as index.php on it: <?php header('Location: http://www.domain.com/' . $_SERVER["request_uri"]); ?> Hitting domain.com still redirects to domain.com! Checking out the headers sent to the server verifies that I get exactly the redirect URL I wanted, except with the "www." stripped. The same script works like a charm if I replace www. with foo or redirect to any other domain. This has now foiled me for a long time. I've diffed the vhosts configs for a working domain and the faulty one, and the only difference is the domain name itself. I've diffed the .htaccess files for both sites, and the only difference is a path related to the sharing of wordpress installation for the blogs: php_value include_path ".:/home/www/www.domain.com/local/:/home/www/www.domain.com/" I searched through everything in /etc (including apache conf) for the domain name of the faulty host and found nothing weird, searched through everything in /etc for one of the working ones to make sure it didn't differ, I even went so far to check on the DNS setup of two domains to make sure there wasn't anything strange going on. Here's the response for the faulty one: user@localhost dir $ wget -S domain.com --2010-03-20 21:47:24-- http://domain.com/ Resolving domain.com... x.x.x.x Connecting to domain.com|x.x.x.x|:80... connected. HTTP request sent, awaiting response... HTTP/1.1 301 Moved Permanently Via: 1.1 ISA Connection: Keep-Alive Proxy-Connection: Keep-Alive Content-Length: 0 Date: Sat, 20 Mar 2010 20:47:24 GMT Location: http://domain.com/ Content-Type: text/html; charset=UTF-8 Server: Apache X-Powered-By: PHP/5.2.10-pl0-gentoo X-Pingback: http://domain.com/xmlrpc.php Keep-Alive: timeout=15, max=100 Location: http://domain.com/ [following] And a working one: user@localhost dir $ wget -S domain.com --2010-03-20 21:51:33-- http://domain.com/ Resolving domain.com... x.x.x.x Connecting to domain.com|x.x.x.x|:80... connected. HTTP request sent, awaiting response... 
HTTP/1.1 301 Moved Permanently Via: 1.1 ISA Connection: Keep-Alive Proxy-Connection: Keep-Alive Content-Length: 0 Date: Sat, 20 Mar 2010 20:51:33 GMT Location: http://www.domain.com/ Content-Type: text/html; charset=UTF-8 Server: Apache X-Powered-By: PHP/5.2.10-pl0-gentoo X-Pingback: http://www.domain.com/xmlrpc.php Keep-Alive: timeout=15, max=100 Location: http://www.domain.com/ [following] I'm stumped. I've had this problem for a long time, and I feel like I've tried everything. I can't see why the different domains would act differently under the same installation with the same settings. Help :(
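
    Both wget traces show "Via: 1.1 ISA", so a proxy sits in front of Apache; querying the backend directly with a forced Host header and comparing its Location header with what the proxy returns can show which layer strips the "www." (the IP placeholder matches the one in the traces):

        curl -sI -H 'Host: domain.com' http://x.x.x.x/ | grep -i '^Location'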

    Read the article

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator PAM module to provide two-step authentication for ssh. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below: # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # In Debian 4.0 (etch), locale-related environment variables were moved to # /etc/default/locale, so read that as well. auth required pam_env.so envfile=/etc/default/locale # Standard Un*x authentication. @include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. @include common-account # Standard Un*x session setup and teardown. @include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Set up SELinux capabilities (need modified pam) # session required pam_selinux.so multiple # Standard Un*x password updating. @include common-password auth required pam_google_authenticator.so I've already tried adding an auth sufficient pam_exec.so /etc/pam.d/ip.sh line above the google-authenticator line, but I can't understand how to check an IP address in the bash script.
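
    For the pam_exec route, the client address is exported to the script as PAM_RHOST, so a minimal helper could look like this (a sketch; the trusted address is a placeholder, and the script must be root-owned and executable):

        #!/bin/bash
        # Exit 0 (letting the "auth sufficient pam_exec.so ..." line skip the OTP)
        # only when the connecting client matches the trusted address.
        TRUSTED_IP="203.0.113.10"
        [ "$PAM_RHOST" = "$TRUSTED_IP" ] && exit 0
        exit 1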

    Read the article

  • Configuring vsftpd with nginx on Ubuntu 12.04 LTS

    - by arby
    I've attempted to configure a nginx / vsftpd server on Ubuntu 12.04 LTS (via amazon ec2) a couple times now, but I seem to keep making a mistake along the way. Currently, when I try to connect to my ftp server it takes a minute or so before it connects. Then when I issue a command, they all timeout with an operation failed error. Aside from these issues, I'm not completely confident with the file ownership & permissions or the configuration / settings. So, I think it's best if I just re-install and re-configure correctly. I believe the nginx installation comes with a default user of www-data:www-data and web root directory ownership by root:root. Vsftpd, however, needs to have a user created with the same group as the nginx user (www-data), and the same home directory as the nginx server (/usr/share/nginx/www), with g+w chmod permissions granted on that directory. The vsftpd.conf file should disable anonymous logins and enable local logins, file writing, and chroot local users. In my previous config, I had /bin/false set for the ftp user's shell and pam_shells.so disabled. I also had local_umask set to 0027. So, starting with a fresh ec2 instance, I've got: sudo apt-get install vsftpd sudo apt-get install nginx For the firewall I issued the command (not sure if necessary): sudo ufw allow ftp Which commands / config is recommended from here? I only need 1 ftp user that I can use to login with my ftp client to modify the single nginx web domain, which will need php & sql for WordPress.
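
    A minimal sequence for the single FTP account described might look like this (the username is an example; if the login shell is set to nologin, vsftpd's pam_shells check has to be relaxed as noted above):

        sudo adduser --home /usr/share/nginx/www --ingroup www-data --shell /usr/sbin/nologin ftpuser
        sudo chown -R root:www-data /usr/share/nginx/www
        sudo chmod -R g+w /usr/share/nginx/www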

    Read the article

  • How to find cause of main file system going to read only mode

    - by user606521
    Ubuntu 12.04. The file system goes into read-only mode frequently. First of all, I have already read the question "file system is going into read only mode frequently". But I need to know whether it is caused by something other than a dying hard drive. This is a server provided by my client; I am just running some node.js workers + one node.js server there, and I am using mongodb. From time to time (every 20-50h) the system suddenly remounts the filesystem read-only, the mongodb process fails (due to the read-only fs) and my node workers/server (which are started by forever) are just killed. Here is the log from dmesg - I can see some errors there and messages that the FS is going read-only, and there is also a JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt Edit: smartctl -t long /dev/sda smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net SMART support is: Unavailable - device lacks SMART capability. A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options. What am I doing wrong? The same happens for sda2. Moreover, now when I type any command that does not exist in the shell I get this: Sorry, command-not-found has crashed! Please file a bug report at: https://bugs.launchpad.net/command-not-found/+filebug Please include the following information with the report:
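
    "Unavailable - device lacks SMART capability" often means the disk sits behind a RAID/HBA controller or a hypervisor rather than being a plain SATA device, so smartctl needs an explicit device type; which type applies here is a guess, and the partition in the last command is assumed to hold the root filesystem:

        sudo smartctl -i -d sat /dev/sda
        sudo smartctl -i -d scsi /dev/sda
        # ext4 also records why it was last remounted read-only:
        sudo tune2fs -l /dev/sda1 | grep -i 'state\|mount\|error'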

    Read the article

  • Why can't I discover the IP address of my router from a virtualized Ubuntu system?

    - by AndreaNobili
    I am not into computer network and I have the following problem finding my router IP address. I have a Windows 8 PC on on which it is installed VmWare Workstation that virtualizes Linux Ubuntu. The network adapter settings of this Virtual Machine is setted as NAT. Now my problem is that if in the Windows 8 DOS shell I perform the ifconfig statment I obtain C:\Users\Andrea>ipconfig Configurazione IP di Windows Scheda Ethernet tap0: Stato supporto. . . . . . . . . . . . : Supporto disconnesso Suffisso DNS specifico per connessione: techub.lan Scheda Ethernet Connessione di rete Bluetooth: Stato supporto. . . . . . . . . . . . : Supporto disconnesso Suffisso DNS specifico per connessione: Scheda LAN wireless Connessione alla rete locale (LAN)* 11: Stato supporto. . . . . . . . . . . . : Supporto disconnesso Suffisso DNS specifico per connessione: Scheda LAN wireless Wi-Fi: Suffisso DNS specifico per connessione: DSL2750B Indirizzo IPv6 locale rispetto al collegamento . : fe80::89ff:6d12:49cf:4354%13 Indirizzo IPv4. . . . . . . . . . . . : 192.168.1.3 Subnet mask . . . . . . . . . . . . . : 255.255.255.0 Gateway predefinito . . . . . . . . . : 192.168.1.1 Scheda Ethernet Ethernet: Stato supporto. . . . . . . . . . . . : Supporto disconnesso Suffisso DNS specifico per connessione: Scheda Ethernet VMware Network Adapter VMnet1: Suffisso DNS specifico per connessione: Indirizzo IPv6 locale rispetto al collegamento . : fe80::edb3:8352:f954:2b0c%23 Indirizzo IPv4. . . . . . . . . . . . : 192.168.129.1 Subnet mask . . . . . . . . . . . . . : 255.255.255.0 Gateway predefinito . . . . . . . . . : Scheda Ethernet VMware Network Adapter VMnet8: Suffisso DNS specifico per connessione: Indirizzo IPv6 locale rispetto al collegamento . : fe80::d00b:8c6e:98b:f1ec%24 Indirizzo IPv4. . . . . . . . . . . . : 192.168.15.1 Subnet mask . . . . . . . . . . . . . : 255.255.255.0 Gateway predefinito . . . . . . . . . : Scheda Tunnel Teredo Tunneling Pseudo-Interface: Stato supporto. . . . . . . . . . . . : Supporto disconnesso Suffisso DNS specifico per connessione: Scheda Tunnel isatap.techub.lan: Stato supporto. . . . . . . . . . . . : Supporto disconnesso Suffisso DNS specifico per connessione: techub.lan Scheda Tunnel isatap.{5B95051D-79AB-4147-92CF-3A2E16698432}: Stato supporto. . . . . . . . . . . . : Supporto disconnesso Suffisso DNS specifico per connessione: Scheda Tunnel isatap.{340A5FAD-1597-402E-B658-29C37E8F7BC2}: Stato supporto. . . . . . . . . . . . : Supporto disconnesso Suffisso DNS specifico per connessione: Scheda Tunnel isatap.DSL2750B: Suffisso DNS specifico per connessione: DSL2750B Indirizzo IPv6 locale rispetto al collegamento . : fe80::5efe:192.168.1.3%26 Gateway predefinito . . . . . . . . . : So, looking at the previous output it appear clear that the default gateway (my router) is: 192.168.1.1, infact if I open it into a browser it apear to me the login mask to enter in the router settings.... Ok, if now I open the virtualized Ubuntu shell and perform the route command I obtain this output: andrea@andrea-virtual-machine:~$ route Tabella di routing IP del kernel Destination Gateway Genmask Flags Metric Ref Use Iface default 192.168.15.2 0.0.0.0 UG 0 0 0 eth0 link-local * 255.255.0.0 U 1000 0 0 eth0 192.168.15.0 * 255.255.255.0 U 1 0 0 eth0 So, here it say to me that the default gateway is 192.168.15.2 (that is not my router ip address), why? My idea is that it could depend by the NAT. 
    My Windows system is connected using wireless, but in the virtualized Ubuntu I see that I am connected to a "wired network". So I think that NAT virtualizes a network adapter (or something like that) and that 192.168.15.2 could be the IP address of this network adapter... But it seems strange to me because, as you can see in the previous ipconfig output, the VMware network adapter addresses are 192.168.129.1 and 192.168.15.1. So I also have two other questions: 1) What device does the 192.168.15.2 IP address represent, which the virtualized Ubuntu sees as its default gateway but which is not my router? 2) What exactly do the two VMware network adapters that I have configured on my Windows 8 system do? Is there a way to discover my router's IP from the virtualized Ubuntu system? Thanks, Andrea
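
    For what it's worth, under VMware NAT the guest's default gateway is the virtual NAT router on the VMnet8 network (192.168.15.2 here), not the physical router; the physical one (192.168.1.1 in the Windows output) is still reachable through NAT from the guest:

        ip route show default    # shows the VMnet8 gateway the guest uses
        ping -c 3 192.168.1.1    # the real router, reached through NAT
        # Switching the VM's adapter from NAT to Bridged would instead put the
        # guest directly on the 192.168.1.0/24 network.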

    Read the article

  • sequential SSH command execution not working in Ubuntu/Bash

    - by kumar
    My requirement is that I have a text file containing a set of commands that need to be executed. My shell script has to read each command, execute it, and store the results in a separate file. Here is the snippet that implements the above requirement. while read command do echo 'Command :' $command >> "$OUTPUT_FILE" redirect_pos=`expr index "$command" '>>'` if [ `expr index "$command" '>>'` != 0 ];then redirect_fn "$redirect_pos" "$command"; else $command state=$? if [ $state != 0 ];then echo "command failed." >> "$OUTPUT_FILE" else echo "executed successfully." >> "$OUTPUT_FILE" fi fi echo >> "$OUTPUT_FILE" done < "$INPUT_FILE" Sample Commands.txt will be like this ... tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt gzip /var/tmp/logs.tar rm -f /var/tmp/list.txt This works fine for commands that need to be executed on the local machine. But when I try to execute the following ssh commands, only the first command gets executed. Here are some of the ssh commands added in my text file. ssh uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt ssh uname@hostname2 gzip /var/tmp/logs.tar ssh .. etc When I run them in the CLI they work fine. Could anybody help me with this?
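
    The classic cause of "only the first ssh line runs" in a while-read loop is that ssh reads the rest of the input file from the loop's stdin; giving ssh the -n flag (or redirecting its stdin from /dev/null) prevents that. A minimal sketch of the idea - in the script above, the same effect comes from running the command with its stdin detached, or adding -n to the ssh lines in the text file:

        while read -r command; do
            $command < /dev/null    # keeps ssh from swallowing the remaining lines
        done < "$INPUT_FILE"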

    Read the article

  • Write access from a Windows client via ZFS SMB to a file created on the host in OpenIndiana

    - by Gerald Kaszuba
    I've got an OpenIndiana server running ZFS that is shared using a nobody user and group. I don't fully understand Solaris ACL permissions, but I do know Linux-style permissions. The client is Windows 8 and the server is OpenIndiana oi_148. I'm failing to work out how to make write permission work correctly for the Windows client. It is able to make new files, but cannot modify files created by the shell in OpenIndiana. When a file ("local file") is created locally as the user nobody in bash, and another file ("smb file") is created remotely via SMB (as nobody also), they are quite different in permissions: # ls -V -rw-r--r-- 1 nobody nobody 0 Dec 2 12:24 local file owner@:rw-p--aARWcCos:-------:allow group@:r-----a-R-c--s:-------:allow everyone@:r-----a-R-c--s:-------:allow -rwx------+ 1 nobody nobody 0 Dec 2 12:24 smb file user:nobody:rwxpdDaARWcCos:-------:allow group:2147483648:rwxpdDaARWcCos:-------:allow In bash, I'm able to write to smb file, but vice versa, the Windows client is not able to write to local file. This is confusing to me because it appears that it should allow the SMB client to write to local file, because nobody is the owner and it has a w in the ACL. The sharesmb setting is fairly boring, although I'm hoping there is something that can be set here similar to a umask: sharesmb name=shared,guestok=true How can I make these two work together and have a symmetrical permission system, where both SMB and the local user produce the same permissions? Is there some sort of ACL that can be set at the root of the file system to allow all files to be created in a similar manner?
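
    One avenue (a sketch only, not verified on oi_148, and the dataset/path names are examples) is an inheritable ACE on the shared directory plus an inherit policy on the dataset, so that files created either locally or over SMB pick up group write:

        zfs set aclinherit=passthrough tank/shared
        chmod A+group@:read_data/write_data/append_data/add_file/add_subdirectory/execute:file_inherit/dir_inherit:allow /tank/shared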

    Read the article

  • MacPorts error installing gsoap

    - by Kevin
    Hi All, I have installed Mac Ports V1.8.1 no worries. I ran sudo port -v selfupdate no worries. I ran sudo port install gsoap And get the following error message. --- Computing dependencies for gsoap --- Fetching gsoap --- Attempting to fetch gsoap_2.7.13.tar.gz from http://optusnet.dl.sourceforge.net/gsoap2 --- Verifying checksum(s) for gsoap --- Extracting gsoap --- Applying patches to gsoap --- Configuring gsoap Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_gsoap/work/gsoap-2.7" && ./configure --prefix=/opt/local --enable-samples " returned error 77 Command output: checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... no checking build system type... i386-apple-darwin10.2.0 checking host system type... i386-apple-darwin10.2.0 checking whether make sets $(MAKE)... (cached) no checking for C++ compiler default output file name... configure: error: C++ compiler cannot create executables See `config.log' for more details. Error: Status 1 encountered during processing. Any ideas as to why it is failing. Regards Kevin
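
    "C++ compiler cannot create executables" at configure time usually points at missing or broken Xcode command-line tools rather than at gsoap itself; a quick sanity check outside MacPorts (config.log under the port's work directory has the exact failure):

        g++ --version
        echo 'int main(){return 0;}' > /tmp/conftest.cc && g++ /tmp/conftest.cc -o /tmp/conftest && echo "compiler OK"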

    Read the article

  • Environment variables in bash_profile or bashrc?

    - by Viriato
    I have found the question "Difference between .bashrc and .bash_profile" very useful, but after seeing the most voted answer (very good by the way) I have further questions. Towards the end of the most voted, correct answer I see the following statement: Note that you may see here and there recommendations to either put environment variable definitions in ~/.bashrc or always launch login shells in terminals. Both are bad ideas. Why is it a bad idea (I am not trying to fight, I just want to understand)? If I want to set an environment variable and add it to the PATH (for example JAVA_HOME), where would be the best place to put the export entry: ~/.bash_profile or ~/.bashrc? If the answer to question number 2 is ~/.bash_profile, then I have two further questions: 3.1. What would you put under ~/.bashrc? Only aliases? 3.2. In a non-login shell, I believe the ~/.bash_profile is not being "picked up". If the JAVA_HOME export entry were in ~/.bash_profile, would I be able to execute the javac & java commands? Would it find them on the PATH? Is that the reason why some posts and forums suggest setting JAVA_HOME and the like in ~/.bashrc? Thanks in advance.
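
    A common arrangement (the JDK path below is only an example) is to keep exports in ~/.bash_profile and have it source ~/.bashrc, so login and non-login interactive shells end up with the same environment:

        # ~/.bash_profile
        export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
        export PATH="$JAVA_HOME/bin:$PATH"
        [ -f ~/.bashrc ] && . ~/.bashrc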

    Read the article

  • copSSH and cygwin - Can't use windows style paths

    - by DrFredEdison
    I setup copSSH on one of my windows servers, and within the copSSH bash shell, I can't seem to use windows-style paths to remove and copy files. If I do try, I get the following: $ /bin/cp -r C:/Domains/_temp/collage_push/* C:/Domains/collage/ cygwin warning: MS-DOS style path detected: C:/Domains/_temp/collage_push/ Preferred POSIX equivalent is: /cygdrive/c/Domains/_temp/collage_push/ CYGWIN environment variable option "nodosfilewarning" turns off this warning. Consult the user's guide for more details about POSIX paths: http://cygwin.com/cygwin-ug-net/using.html#using-pathnames I have created a windows environment variable CYGWIN set to nodosfilewarning. It has no effect. I added export CYGWIN=nodosfilewarning to my .bashrc and doing a echo $CYGWIN in my ssh session confirms it is indeed getting set; yet again, it has no effect finally, I noted that when not doing my own export that CYGWIN contains "nontsec binmode" (no quotes), so I tried: export CYGWIN="nodosfilewarning nontsec binmode" in my .bashrc and still no dice. Older versions of CopSSH didn't have this issue. How can I actually override this error? I have a lot of scripts that already use windows-style paths, and I'd rather not change them if possible.
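
    cygpath converts Windows paths to their POSIX equivalents, which silences the warning while keeping existing scripts close to their current form; a sketch using the paths from the example above:

        src=$(cygpath -u 'C:/Domains/_temp/collage_push')
        dst=$(cygpath -u 'C:/Domains/collage')
        /bin/cp -r "$src"/* "$dst"/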

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html for example). So I created a Webdev group containing all of the students with the default umask of 0002 set in their .bashrc (This allows newly created files to be 774). The shared folder is owned by the group Webdev with a chmod g+s so that new files/folders also belong to the group Webdev. The problem is that the students are using an IDE (Coda 2) and when they create a new file or folder using the IDE the file has the permissions of 644 on the server (not group writable). However when I make a new file through connecting with Cyberduck (SFTP client) the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. However, after some trial and error I believe that Coda is first creating the file on local disk and then uploading that file to the server. On a Mac, by default a newly created file is 644. When the client uploads a file that's already 644 it stays 644 on the server side (umask is kind of useless in this situation). I've also tried creating ACL permissions for that folder, but an uploaded file from my Mac via SCP doesn't get the default ACL permissions. In Coda there is an option to change file permissions on a transfer. However this option seems to apply a chmod to all files being uploaded or saved. When one of the students modifies a file created by someone else and tries to upload or save it, Coda also tries to do a chmod but fails because that user isn't the owner of the file. My current solution is using bindfs... I mount the shared web folder and bindfs sets permissions and group ownership of newly created files. However, bindfs seems to be a bit slow and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, the newly created files on the server would behave the same (644), which is the default on the Mac. Other options... 1) Either I teach the students to use (ssh/chmod) with their IDE to change their own file permissions when uploading. 2) I make all the students' Macs have the default umask of 0002 which would upload files with the right permissions. 3) Write a cron script to fix the file permissions every 5 to 15 minutes... (This option I think is the worst if students are working together at the same time). Is there any way that I could make all files that are uploaded via SCP have the default file permissions of 664 even though the uploaded file has a lower permission? (After hours of searching I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites? Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
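
    If the periodic clean-up does end up being the fallback, a root cron entry along these lines (path and 10-minute interval are examples; /etc/crontab syntax) restores group write on anything uploaded without it, which is lighter than bindfs:

        */10 * * * * root find /home/www/site -type f ! -perm -020 -exec chmod g+w {} +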

    Read the article

  • Ubuntu 10.04 Server on Hyper-V Server R2 has sluggish install and command line

    - by Paul Hobart
    I've installed Ubuntu Server 10.04 (64 bit) on a Hyper-V Server R2. I've encountered two issues that I think are related: Very slow install Very slow command prompt The text-mode installer goes through a series of text-based prompt windows. It takes 7-10 seconds for each of these windows to draw on the screen. The end result is that every time I answer a prompt and hit enter I wait for 15 seconds while the screen redraws line by line. I can literally see each line of text being drawn (like the old 300 baud modems days). Once done installing, scrolling on the command line is super slow. For instance, if a simple command, like "ls", causes the screen to scroll, it will scroll very slowly. This happens on a fresh install. The server functions as a LAMP server and an OpenSSH server, but that's it (I don't even have any Virtual Hosts set up yet). AND this only happens on the Virtual Machine console. I access the console through Hyper-V Manager and don't have this problem on any of my other Virtual Machines. Also, this problem does NOT happen when accessing a shell through OpenSSH. How can I improve this performance issue?
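
    Ubuntu 10.04 ships Hyper-V's paravirtual drivers but does not load them by default; enabling them is a common first step for sluggish 10.04 guests, though slow console redraws may also come down to the emulated video device. A sketch of the usual procedure (module names as shipped in 10.04):

        printf '%s\n' hv_vmbus hv_storvsc hv_blkvsc hv_netvsc | sudo tee -a /etc/initramfs-tools/modules
        sudo update-initramfs -u
        # reboot afterwards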

    Read the article

  • How do I troubleshoot a segfault in Ubuntu that occurs when typing a bogus command?

    - by Alan
    We've got a production server running Ubuntu 11.10. We're encountering segfaults that appear under various conditions. The simplest reproducible case is when we login to an ssh session as our administrative user and enter a bogus command. You'd expect the standard "command not found" error message. Instead, we get a segfault in python. The user's default shell is /bin/bash. For example: $ asdf Segmentation fault Info from /var/log/syslog: Jul 6 15:39:20 PROD001 kernel: [2155960.605695] python[7873]: segfault at 0 ip (null) sp 00007fffd030b808 error 14 in python2.7[400000+233000] Some details about the server: $ uname -a Linux PROD001 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ cat /etc/issue Ubuntu 11.10 \n \l Before we ask the IT department to reinstall the O.S., I'd like to understand what got us here. The system and/or this particular user's environment is suspect. Many people have touched this server over the past year, so I'm wondering if it is missing libraries, incorrectly installed packages, etc. I'm hoping that if we can understand what's going wrong in this case, it will help explain why we're getting segfaults in a couple of other scenarios. Any tips on troubleshooting this segfault will be appreciated!
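
    The "command not found" message on Ubuntu comes from a Python helper wired up through command_not_found_handle in /etc/bash.bashrc, which is why a bogus command reaches Python at all; invoking the pieces directly helps narrow down which side segfaults:

        type command_not_found_handle        # show the bash hook, if defined
        /usr/lib/command-not-found -- asdf   # run the helper the hook calls
        python -c 'import sys; print(sys.version)'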

    Read the article

  • Hyperic HQ- Monitor process statistics for 50+ processes on Linux machine

    - by Chris
    Is there an easy way to get metrics on all processes that start with the letters XYZ? I have about 80 processes that I have to monitor individually that all start with the prefix XYZ. I have created a query using the sigar shell: ps State.Name.sw=XYZ, which will give me a list of the processes that I want. What I need to do is define this list of processes through said query and collect and track statistics from the Process service: http://support.hyperic.com/display/hypcomm/Process+service What I need is 3 or 4 key statistics for each of the XYZ processes defined by my query to show up as graphs in the web front end. Note: Hyperic HQ server is installed on a windows machine and I'm monitoring a Linux box via an agent. Thanks, Chris Edit: Here is my try at a plugin that may give me what I want, but it's not being inventoried/detected by the Hyperic web UI. Simply pointing me to one of Hyperic's tutorials won't do. Thanks. <!DOCTYPE plugin [ <!ENTITY process-metrics SYSTEM "/pdk/plugins/process-metrics.xml">]> <plugin> <server name="ABCStats"> <config> <option name="process.query" description="Process Query" default="State.Name.sw=XYZ"/> </config> <metric name="Availability" alias="Availability" template="sigar:Type=ProcState,Arg=%process.query%:State" category="AVAILABILITY" indicator="true" units="percentage" collectionType="dynamic"/> &process-metrics; <plugin type="autoinventory"/> <plugin type="measurement" class="org.hyperic.hq.product.MeasurementPlugin"/> </server> </plugin>

    Read the article

  • Server reporting incorrect mime type for css files

    - by Becky
    We have a VPS server that we host our websites on. I have written a CMS using CodeIgniter. On one of the interfaces, I am attempting to upload a CSS file to the system. This worked correctly when we had it hosted on shared hosting. Since we've moved it to the VPS, I am getting an "incorrect filetype" error. It all comes down to the fact that the server is reporting a MIME type of text/x-c for the CSS file rather than text/css. I logged in via shell and ran the following command on an existing valid CSS file (to make sure it wasn't an issue with either CodeIgniter or with PHP). file --brief --mime 'filename.css' 2>&1 The server gave me the following in response to my command: text/x-c; charset=us-ascii My question ... is there some sort of server setting that I need to tweak to get the server to correctly identify the CSS file as text/css? Do I just have to add a MIME type for the CSS files to the server? I found the MIME types file (/etc/mime.types), and it just has video types and a couple of others that I have no idea what they are. There is nothing in there for CSS or images or HTML files. Unless I'm looking in the wrong spot. I'm not a server person, so I'm hoping someone can help me out. Some server specs: Apache/2.2.22 (Unix) PHP 5.3.13 Server API = CGI/FastCGI the fileinfo PHP extension appears to be disabled
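
    Worth noting: file(1) guesses types from file content via libmagic, not from /etc/mime.types, and plain-text formats like CSS routinely come back as text/x-c or text/plain, so extending mime.types will not change its answer. Comparing the two lookups on the server makes the distinction visible (filename as in the question; the Apache paths are examples):

        file --brief --mime-type filename.css                               # libmagic's content-based guess
        grep -i 'css' /etc/mime.types /etc/apache2/mime.types 2>/dev/null   # extension-based maps, if present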

    Read the article

  • supervisord launches with wrong setuid

    - by friendzis
    I am trying to test a pilot system with nginx connecting to a uwsgi-served application controlled by supervisord, running on Ubuntu Server. The application is written in Python with Flask in a virtualenv, although I'm not sure if that is relevant. To test the system I have created a simple hello world with Flask. I want nginx and uwsgi both to run as the www-data user. If I launch uwsgi "manually" from a root shell I can see the uwsgi processes running as the appropriate user (www-data). However, if I let supervisor launch the application, something strange happens - the uwsgi processes run under my user (friendzis). Consequently, the socket file gets created under the wrong user and nginx cannot communicate with my application. Note: the Linux server runs as a Hyper-V VM under Windows Server 2008. Relevant configuration: [uwsgi] socket = /var/www/sockets/cowsay.sock chmod-socket = 666 abstract-socket = false master = true workers = 2 uid = www-data gid = www-data chdir = /var/www/cowsay/cowsay pp = /var/www/cowsay/cowsay pyhome = /var/www/cowsay module = cowsay callable = app supervisor [program:cowsay] command = /var/www/cowsay/bin/uwsgi -s /var/www/sockets/cowsay.sock -w cowsay:app directory = /var/www/cowsay/cowsay user = www-data autostart = true autorestart = true stdout_logfile = /var/www/cowsay/log/supervisor.log redirect_stderr = true stopsignal = QUIT I'm sure I'm missing some minor detail, but I'm unable to spot it. Would appreciate any suggestions.
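
    One thing worth ruling out: supervisord can only honour a program's user = www-data setting (and uwsgi its uid/gid options) if the daemon itself starts as root; otherwise the children inherit whatever account launched it. Checking who owns the daemon and its children usually settles this:

        ps -o user,pid,cmd -C supervisord
        ps -o user,pid,cmd -C uwsgi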

    Read the article

  • Removing SCIM input method as default from gnome terminal

    - by Mark
    Hello - I am recently back in the Linux world after about a 10-year absence. While I can find my way around most things, terminals and desktop managers are different from what I remember. One of the biggest problems that I am encountering today is that when running a GNOME terminal (this is SUSE 10.0 Enterprise), I'm getting behavior in the window that I don't want. Specifically, when I type, my typing is underlined as if something is trying to spell-check my window. Further, it seems as if when running vi or less, my keystrokes are only processed by these apps when I hit 'return'. I.e. if I'm running less and want to go back a page, I'll hit b, but nothing happens until I hit 'return'. I seem to have tracked this down to the 'input method'. Right-clicking in the GNOME terminal allows me to set my input method to one of a dozen values. It seems that currently, it's set to "SCIM Input Method". If I then select 'default' or 'X Input Method', apps (i.e. things like less, vi, and even the bash shell) behave as I would expect. Can someone tell me a) what this SCIM input method is, and b) how I can make it so that it is not the default? I've poked around various configuration files in my home directory as well as in /etc, but I can't seem to find how this is set. I guess as a final question, can I just get rid of SCIM? Or is that tied into the window manager somehow? I do appreciate any clarifications that I can get. Thanks.
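
    SCIM usually hooks into GTK applications through a few environment variables (on SUSE the system-wide choice is typically made via the INPUT_METHOD setting consulted by /etc/X11/xim, so that is worth checking too). Overriding them for a session is a quick way to test before making anything permanent in ~/.profile:

        export GTK_IM_MODULE=gtk-im-context-simple
        export QT_IM_MODULE=simple
        export XMODIFIERS=@im=none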

    Read the article
