Search Results

Search found 821 results on 33 pages for 'redhat'.

Page 11 of 33

  • Why does my $LD_LIBRARY_PATH get unset when using screen with bash?

    - by UltraNurd
    This is related to http://superuser.com/questions/27376/why-does-my-ld-library-path-get-unset-launching-terminal, but with a different set of symptoms. First, /usr/bin/screen is setuid, as per the other question. Second, the default shell on this system is /bin/tcsh for various historical reasons, and we're not allowed to chsh to /bin/bash, so I typically run bash manually immediately after login. Third, I almost always use screen, but I want ctrl-a ctrl-c in screen to create a new bash "tab", so I always invoke bash first. That is:

        {~} $ echo $SHELL
        /bin/tcsh
        {~} $ bash
        [~] echo $SHELL
        /bin/bash
        [~] screen -U
        [~]

    ...and when reconnecting:

        {~} $ echo $SHELL
        /bin/tcsh
        {~} $ screen -dUr
        [~] echo $SHELL
        /bin/bash
        [~]

    However, my $LD_LIBRARY_PATH is there in tcsh and there in bash, but empty once I run screen; it is still present if I just run screen from tcsh, but then I get new tcsh "tabs" when I use ctrl-a ctrl-c in screen. Any ideas?
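
    One possible workaround, offered as a hedged sketch: because the screen binary is setuid, glibc scrubs LD_LIBRARY_PATH (and other LD_* variables) from the environment it passes on, but it leaves other variables alone. Stashing the value under a different name before starting screen and restoring it in ~/.bashrc sidesteps that; the SAVED_LD_LIBRARY_PATH name below is purely illustrative.

        # in ~/.login (tcsh), or anywhere before screen is started:
        # stash the value under a name the setuid binary will not strip
        setenv SAVED_LD_LIBRARY_PATH "$LD_LIBRARY_PATH"

        # in ~/.bashrc: restore it inside the new bash "tabs" that screen creates
        if [ -z "$LD_LIBRARY_PATH" ] && [ -n "$SAVED_LD_LIBRARY_PATH" ]; then
            export LD_LIBRARY_PATH="$SAVED_LD_LIBRARY_PATH"
        fi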

    Read the article

  • Cannot connect to MySQL Server on RHEL 5.7

    - by Jeffrey Wong
    I have a standard MySQL server running on Red Hat 5.7. I have edited /etc/my.cnf to specify the bind address as my server's public IP address:

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1
        # Disabling symbolic-links is recommended to prevent assorted security risks;
        # to do so, uncomment this line:
        # symbolic-links=0

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        bind-address=171.67.88.25
        port=3306

    And I have also restarted my firewall:

        sudo /sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 3306 -j ACCEPT
        /sbin/service iptables save

    The network administrator has already opened port 3306 for this box. When connecting from a remote computer (running Ubuntu 10.10; the server is running RHEL 5.7), I issue

        mysql -u jeffrey -p --host=171.67.88.25 --port=3306 --socket=/var/lib/mysql/mysql.sock

    but receive ERROR 2003 (HY000): Can't connect to MySQL server on '171.67.88.25' (113). I've noticed that the socket file /var/lib/mysql/mysql.sock is blank. Should this be the case?

    UPDATE

    The result of netstat -an | grep 3306:

        tcp    0    0 0.0.0.0:3306    0.0.0.0:*    LISTEN

    Result of sudo netstat -tulpen:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State   User  Inode  PID/Program name
        tcp   0      0      127.0.0.1:2208   0.0.0.0:*        LISTEN  0     7602   3168/hpiod
        tcp   0      0      0.0.0.0:3306     0.0.0.0:*        LISTEN  27    7827   3298/mysqld
        tcp   0      0      0.0.0.0:111      0.0.0.0:*        LISTEN  0     5110   2802/portmap
        tcp   0      0      0.0.0.0:8787     0.0.0.0:*        LISTEN  0     8431   3326/rserver
        tcp   0      0      0.0.0.0:915      0.0.0.0:*        LISTEN  0     5312   2853/rpc.statd
        tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  0     7655   3188/sshd
        tcp   0      0      127.0.0.1:631    0.0.0.0:*        LISTEN  0     7688   3199/cupsd
        tcp   0      0      127.0.0.1:25     0.0.0.0:*        LISTEN  0     8025   3362/sendmail: acce
        tcp   0      0      127.0.0.1:2207   0.0.0.0:*        LISTEN  0     7620   3173/python
        udp   0      0      0.0.0.0:909      0.0.0.0:*                0     5300   2853/rpc.statd
        udp   0      0      0.0.0.0:912      0.0.0.0:*                0     5309   2853/rpc.statd
        udp   0      0      0.0.0.0:68       0.0.0.0:*                0     4800   2598/dhclient
        udp   0      0      0.0.0.0:36177    0.0.0.0:*                70    8314   3476/avahi-daemon:
        udp   0      0      0.0.0.0:5353     0.0.0.0:*                70    8313   3476/avahi-daemon:
        udp   0      0      0.0.0.0:111      0.0.0.0:*                0     5109   2802/portmap
        udp   0      0      0.0.0.0:631      0.0.0.0:*                0     7691   3199/cupsd

    Result of sudo /sbin/iptables -L -v -n:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target               prot opt in  out source     destination
         6373 2110K RH-Firewall-1-INPUT  all  --  *   *   0.0.0.0/0  0.0.0.0/0

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target               prot opt in  out source     destination
            0     0 RH-Firewall-1-INPUT  all  --  *   *   0.0.0.0/0  0.0.0.0/0

        Chain OUTPUT (policy ACCEPT 1241 packets, 932K bytes)
         pkts bytes target               prot opt in  out source     destination

        Chain RH-Firewall-1-INPUT (2 references)
         pkts bytes target prot opt in  out source     destination
          572  861K ACCEPT all  --  lo  *   0.0.0.0/0  0.0.0.0/0
            1    28 ACCEPT icmp --  *   *   0.0.0.0/0  0.0.0.0/0    icmp type 255
            0     0 ACCEPT esp  --  *   *   0.0.0.0/0  0.0.0.0/0
            0     0 ACCEPT ah   --  *   *   0.0.0.0/0  0.0.0.0/0
           46  6457 ACCEPT udp  --  *   *   0.0.0.0/0  224.0.0.251  udp dpt:5353
            0     0 ACCEPT udp  --  *   *   0.0.0.0/0  0.0.0.0/0    udp dpt:631
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0  0.0.0.0/0    tcp dpt:631
          782  157K ACCEPT all  --  *   *   0.0.0.0/0  0.0.0.0/0    state RELATED,ESTABLISHED
            2   120 ACCEPT tcp  --  *   *   0.0.0.0/0  0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0  0.0.0.0/0    state NEW tcp dpt:443
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0  0.0.0.0/0    state NEW tcp dpt:23
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0  0.0.0.0/0    state NEW tcp dpt:80
         4970 1086K REJECT all  --  *   *   0.0.0.0/0  0.0.0.0/0    reject-with icmp-host-prohibited

    Result of nmap -P0 -p3306 171.67.88.25:

        Host is up (0.027s latency).
        PORT     STATE    SERVICE
        3306/tcp filtered mysql
        Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

    Solution: When everything else fails, go GUI! Run system-config-securitylevel and add port 3306. All done!
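
    For what it's worth, the iptables listing above also explains why the original -A INPUT rule never took effect: every packet is handed to the RH-Firewall-1-INPUT chain, whose final rule REJECTs anything not explicitly accepted (hence error 113, "No route to host", and the "filtered" state in nmap), so a rule appended to the end of INPUT after that jump is never reached. The GUI fix works because it puts the rule inside that chain; a roughly equivalent command-line sketch (the insert position 13, just before the REJECT, is read off the listing above):

        # hedged sketch: insert an ACCEPT for 3306 into RH-Firewall-1-INPUT
        # ahead of the final REJECT rule, then persist it
        sudo /sbin/iptables -I RH-Firewall-1-INPUT 13 -p tcp -m state --state NEW --dport 3306 -j ACCEPT
        sudo /sbin/service iptables save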

    Read the article

  • How to relax firewall for UDP connections/ports for a specific IP address?

    - by Gnanam
    Hi, my server is Red Hat Enterprise Linux Server release 5, and the iptables version is v1.3.5. I want to allow all UDP connections / ports for the IP address 192.168.0.200. This IP address is configured on my eth0. So basically I want to set it up the same as my local loopback (127.0.0.1) UDP traffic. What is the iptables command to allow all UDP connections / ports for IP 192.168.0.200?
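
    A minimal sketch of such rules, assuming the intent is to accept any UDP traffic addressed to or coming from 192.168.0.200 on eth0, in the same spirit as accepting loopback traffic wholesale:

        # hedged sketch: accept all UDP to/from 192.168.0.200, then persist the rules
        iptables -A INPUT -i eth0 -p udp -d 192.168.0.200 -j ACCEPT
        iptables -A INPUT -i eth0 -p udp -s 192.168.0.200 -j ACCEPT
        /sbin/service iptables save

    Note that if the stock RH-Firewall-1-INPUT chain is in use, the rules may need to be inserted into that chain (before its final REJECT) rather than appended to INPUT.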

    Read the article

  • Can Linux report IP conflicts?

    - by James
    If a Red Hat or other Linux host has a conflict with an IP address, is there a way to have it display a warning message on the console or in a log, like Windows and MacOS do? That is, my computer has an IP address configured (or received from a DHCP server) but finds another device on the network using the same address. Will it log the conflict?
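
    Out of the box there is usually nothing that logs this, but it can be checked or monitored with standard tools; a small sketch, assuming iputils arping is installed and the address and interface below are placeholders:

        # hedged: probe whether another host already answers for the address
        # (-D = duplicate address detection; the exit status is non-zero if a conflict is found)
        arping -D -I eth0 -c 2 192.168.1.10

    For ongoing logging, the arpwatch package (available in the Red Hat repositories) can run as a daemon and report address/MAC changes to syslog.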

    Read the article

  • Installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI, and have added the rpmforge repo and the dag repo. Here are some guidelines I'm using for reference: TWoZaO and Razuna. The rpmforge repo has ffmpeg, but if you try to install it, it complains that it is missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from svn so I can be sure to enable the options I want (namely libx264). It seems strange to me, though, that SDL is not in rpmforge or dag; according to both of my references above, it should be there. I tried to grab it manually from here, but it needs these dependencies, so no-go:

        error: Failed dependencies:
            SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
            alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
            libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
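
    Rather than feeding rpm each package by hand, one hedged alternative is to let yum resolve the whole SDL dependency chain from the configured repos (this assumes SDL and its -devel package are available in the base/rpmforge repos the AMI points at):

        # hedged: resolve the SDL/SDL-devel chain via yum rather than a bare rpm -i
        yum install SDL SDL-devel alsa-lib-devel libX11-devel libXext-devel \
            libXrandr-devel libXrender-devel libXt-devel libGL-devel libGLU-devel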

    Read the article

  • Meaning of iptables filter restriction

    - by Gnanam
    Hi, my server is Red Hat Enterprise Linux Server release 5, and I'm not an expert in the Linux iptables firewall. After installation, I find the following entries in /etc/sysconfig/iptables:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -j ACCEPT
        -A FORWARD -j ACCEPT
        -A OUTPUT -j ACCEPT
        COMMIT

    What do these iptables filter rules mean?
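
    In short, that ruleset imposes no restriction at all: every chain's default policy is ACCEPT and the only rules also ACCEPT everything. For contrast, a hedged sketch of what the same file looks like when it actually restricts traffic (loopback, established connections and SSH allowed, everything else rejected):

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT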

    Read the article

  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace cloud hosts and RHEL 6.2 dedicated hosts using Rackconnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app, with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss, and there are no errors on any interfaces on our cloud or dedicated servers or on the managed firewall from Rackspace. When Redis times out, nothing is logged within Redis even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout.

    Network topology:

        RHEL 6.1 cloud hosts <--> Alert Logic IDS <--> Cisco ASA 5510 <--> RHEL 6.2 dedicated hosts
             (web nodes)                                (two-way NAT)       (db hosts running redis)

    Ping from db host to web host:

        64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms
        64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms
        64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms
        --- web1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms
        rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms

    Ping from web host to db host:

        64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms
        64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms
        64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms
        --- data1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms
        rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms

    Redis config:

        daemonize yes
        pidfile /var/run/redis/6379/redis_6379.pid
        port 6379
        timeout 0
        loglevel debug
        logfile /var/lib/redis/log
        syslog-enabled yes
        syslog-ident redis-6379
        syslog-facility local0
        databases 16
        save 900 1
        save 300 10
        save 60 10000
        rdbcompression yes
        dbfilename dump-6379.rdb
        dir /var/lib/redis
        maxclients 10000
        maxmemory-policy volatile-lru
        maxmemory-samples 3
        appendfilename appendonly-6379.aof
        appendfsync everysec
        no-appendfsync-on-rewrite no
        auto-aof-rewrite-percentage 100
        auto-aof-rewrite-min-size 64mb
        slowlog-log-slower-than 10000
        slowlog-max-len 1024
        vm-enabled no
        vm-swap-file /tmp/redis.swap
        vm-max-memory 0
        vm-page-size 32
        vm-pages 134217728
        vm-max-threads 4
        hash-max-zipmap-entries 512
        hash-max-zipmap-value 64
        list-max-ziplist-entries 512
        list-max-ziplist-value 64
        set-max-intset-entries 512
        zset-max-ziplist-entries 128
        zset-max-ziplist-value 64
        activerehashing yes

    redis-cli info:

        redis_version:2.4.16
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:64
        multiplexing_api:epoll
        gcc_version:4.4.6
        process_id:4174
        uptime_in_seconds:79346
        uptime_in_days:0
        lru_clock:1064644
        used_cpu_sys:13.08
        used_cpu_user:19.81
        used_cpu_sys_children:1.56
        used_cpu_user_children:7.69
        connected_clients:167
        connected_slaves:0
        client_longest_output_list:0
        client_biggest_input_buf:0
        blocked_clients:6
        used_memory:15060312
        used_memory_human:14.36M
        used_memory_rss:22061056
        used_memory_peak:15265928
        used_memory_peak_human:14.56M
        mem_fragmentation_ratio:1.46
        mem_allocator:jemalloc-3.0.0
        loading:0
        aof_enabled:0
        changes_since_last_save:166
        bgsave_in_progress:0
        last_save_time:1352823542
        bgrewriteaof_in_progress:0
        total_connections_received:286
        total_commands_processed:507254
        expired_keys:0
        evicted_keys:0
        keyspace_hits:1509
        keyspace_misses:65167
        pubsub_channels:0
        pubsub_patterns:0
        latest_fork_usec:690
        vm_enabled:0
        role:master
        db0:keys=6,expires=0

    Edit 1: added redis-cli info output.
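
    Since the config above already has slowlog-log-slower-than 10000 (10 ms), one low-risk check the next time a timeout fires is whether Redis itself was stalled on a slow command at that moment; a hedged sketch, run from a web node with the host taken from the ping output above:

        # hedged: look for slow commands and blocked clients around the time of a timeout
        redis-cli -h 192.168.100.26 -p 6379 slowlog get 25
        redis-cli -h 192.168.100.26 -p 6379 info | grep -E 'blocked_clients|connected_clients'

    If nothing shows up there, the stateful gear in the path (the ASA and the IDS) is worth a look, since such devices commonly drop connection entries for long-idle flows, which would surface only as client-side timeouts.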

    Read the article

  • Getting NFS clients to retry mount if NFS server down when client boots

    - by z0mbix
    I have an NFS server that several clients mount. I am using the following in my /etc/exports on the server:

        /content *(rw,no_root_squash)

    and on the clients in my /etc/fstab I have:

        content.prd.domain.tld:/content  /content  nfs  rw,hard,intr  0 0

    If the clients boot while the NFS server is down, the share does not get mounted. I read in the NFS man page that the retry defaults should handle this:

        retry=n  The number of minutes to retry an NFS mount operation in the
                 foreground or background before giving up. The default value for
                 foreground mounts is 2 minutes. The default value for background
                 mounts is 10000 minutes, which is roughly one week.

    I have tested this, but it doesn't appear to work. Am I missing something? All servers are RHEL 5.4. Cheers, z0mbix
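
    A likely missing piece, offered as a guess rather than a confirmed fix: the week-long retry only applies to background mounts, and a plain fstab entry is attempted in the foreground at boot, so it gives up after the 2-minute default. Adding the bg option (and, if desired, an explicit retry) lets the mount drop into the background and keep retrying:

        # hedged sketch of the client /etc/fstab entry with background retries
        content.prd.domain.tld:/content  /content  nfs  rw,hard,intr,bg,retry=10000  0 0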

    Read the article

  • Netinstalling CentOS if the gateway is in a different subnet

    - by James Lawrie
    I have a KVM host (A) running a virtual machine (B). They each have their own external IP address, and the networking is set up using bridging between eth0 and br0 on A. B uses eth0, with A being the gateway. The problem is that the two external IP addresses are on different subnets (different /8s, in fact), so by default B claims it cannot reach A (Network Unreachable). I can resolve this by adding a static route on B:

        echo "any host gateway_ip dev eth0" > /etc/sysconfig/static-routes

    and modifying /etc/init.d/networking to reload the gateway after applying static routes (I only added the final line before fi):

        if [ -f /etc/sysconfig/static-routes ]; then
            grep "^any" /etc/sysconfig/static-routes | while read ignore args ; do
                /sbin/route add -$args
            done
            route add default gw "${GATEWAY}"
        fi

    If I then restart networking, it comes online. How can I do this (or work around it some other way) prior to the system being installed, ideally inside an Anaconda kickstart file?
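
    One way to get the same effect during the install itself, sketched under the assumption that a %post section is acceptable and that the initscripts honour an iproute2-style route-eth0 file (gateway_ip below is a placeholder, as in the question; on Anaconda of this vintage the %post section simply runs to the end of the file):

        %post
        # hedged: write a static-route file that brings up the off-subnet gateway at boot
        echo "gateway_ip dev eth0 scope link"   >  /etc/sysconfig/network-scripts/route-eth0
        echo "default via gateway_ip dev eth0"  >> /etc/sysconfig/network-scripts/route-eth0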

    Read the article

  • LDAP Authentication fails with 500 or 401 depending on bind for Apache2

    - by Erik
    I'm setting up LDAP authentication for our Subversion repository, hosted through Apache on a RHEL 5 system. I run into two different issues when I try to authenticate against Active Directory.

        <Location /svn/>
            Dav svn
            SvnParentPath /srv/subversion
            SVNListParentPath On
            AuthType Basic
            AuthName "Subversion Repository"
            AuthBasicProvider ldap
            AuthLDAPBindDN "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com"
            AuthLDAPBindPassword "mypass"
            AuthLDAPUrl "ldap://my.example.com:389/ou=User Accounts,dc=my,dc=example,dc=com?sAMAccountName?sub?(objectClass=user)" NONE
            Require valid-user
        </Location>

    If I use the above configuration, it continually prompts me with the Basic auth dialog, and when I eventually select Cancel it returns a 401 (Authorization Required). If I comment out the bind parts, it returns a 500 (Internal Server Error), griping that authentication failed:

        [Mon Nov 02 12:00:00 2009] [warn] [client x.x.x.x] [10744] auth_ldap authenticate: user myuser authentication failed; URI /svn [ldap_search_ext_s() for user failed][Operations error]

    When I perform the bind using ldapsearch and filter for a simple attribute, it returns correctly:

        ldapsearch -h my.example.com -p 389 -D "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com" -b "ou=User Accounts,dc=my,dc=example,dc=com" -w - "&(objectClass=user)(cn=myuser)" sAMAccountName

    Unfortunately I have no control over or insight into the AD part of the system, only the RHEL server. Does anyone know what the hang-up is here?
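
    As a hedged way to narrow this down, it can help to reproduce the exact search mod_authnz_ldap will run, i.e. a bound search whose filter ANDs the objectClass clause with the sAMAccountName lookup (myuser below is the test account from the log):

        ldapsearch -x -h my.example.com -p 389 \
            -D "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com" -W \
            -b "ou=User Accounts,dc=my,dc=example,dc=com" \
            "(&(objectClass=user)(sAMAccountName=myuser))" sAMAccountName

    If that bound search works while an unbound one fails with "Operations error", the 500 without the bind lines is expected: Active Directory does not permit anonymous searches, so the bind DN and password are required.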

    Read the article

  • mount: mount to NFS server 'IPADDRESS' failed: RPC Error: Program not registered

    - by matt74tm
    I've got two Red Hat 5/CentOS systems which share a folder. I'm trying to change the shared folder location, but I ran into this error on the machine on which the folder is mounted. How can I correct this? I rebooted the computer, but to no avail.

    Server1 - where it's mounted, /etc/fstab:

        IPADDRESS2:/opt/programA/common/files  /srv/server2-share  nfs  rw,intr  0 0

    Server2 - where it's shared, /etc/exports:

        /opt/programA/common/files  IPADDRESS1/28(rw,insecure,sync,no_root_squash)

    Ran the following on Server2:

        root@server2 [~]# /etc/init.d/nfs start
        root@server2 [~]# rpcinfo -p
           program vers proto   port
            100000    2   tcp    111  portmapper
            100000    2   udp    111  portmapper
            100011    1   udp    875  rquotad
            100011    2   udp    875  rquotad
            100011    1   tcp    875  rquotad
            100011    2   tcp    875  rquotad
            100005    1   udp    892  mountd
            100005    1   tcp    892  mountd
            100005    2   udp    892  mountd
            100005    2   tcp    892  mountd
            100005    3   udp    892  mountd
            100005    3   tcp    892  mountd
        root@server2 [~]# /etc/init.d/nfs status
        rpc.mountd (pid 10204) is running...
        nfsd (pid 10201 10200 10199 10198 10197 10196 10195 10194) is running...
        rpc.rquotad (pid 10189) is running...
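
    Worth noting, as an observation on the output above rather than a guaranteed fix: the rpcinfo listing shows portmapper, rquotad and mountd registered but no nfs entry (program 100003), which is exactly the state that produces "RPC Error: Program not registered" on the client. A hedged sequence to re-register and verify, run on Server2:

        # hedged: restart portmap first, then NFS, and confirm nfs/mountd are registered
        service portmap restart
        service nfs restart
        rpcinfo -p | grep -E 'nfs|mountd'
        exportfs -ra
        showmount -e localhost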

    Read the article

  • Error while installing phabricator using http://www.phabricator.com/rsrc/install/install_rhel-derivs.sh

    - by Saurav Shah
    The command that's run is:

        yum install httpd git php php-cli php-mysql php-process php-devel php-gd php-pecl-apc php-pecl-json mysql-server

    I get these errors. How do I fix them?

        Error: Package: php-devel-5.3.3-3.el6_2.6.x86_64 (rhel6-optional)
               Requires: php = 5.3.3-3.el6_2.6
               Available: php-5.3.3-3.el6.x86_64 (rhel6-base)
                   php = 5.3.3-3.el6
               Installing: php-5.3.3-14.el6_3.x86_64 (rhel6-updates)
                   php = 5.3.3-14.el6_3
        Error: Package: php-process-5.3.3-3.el6_2.6.x86_64 (rhel6-optional)
               Requires: php-common = 5.3.3-3.el6_2.6
               Available: php-common-5.3.3-3.el6.x86_64 (rhel6-base)
                   php-common = 5.3.3-3.el6
               Installing: php-common-5.3.3-14.el6_3.x86_64 (rhel6-updates)
                   php-common = 5.3.3-14.el6_3
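
    Reading the errors, the optional repo is offering php-devel/php-process built against php 5.3.3-3.el6_2.6, while base and updates only provide 5.3.3-3.el6 and 5.3.3-14.el6_3, i.e. the optional channel this box sees is out of sync with its updates channel. A hedged way to check and refresh the channel subscriptions (RHN Classic commands assumed; the exact channel label depends on the subscription):

        # hedged: confirm which channels the box is subscribed to, make sure the matching
        # optional channel (with its updates) is included, then clear cached metadata
        rhn-channel --list
        rhn-channel --add --channel=rhel-x86_64-server-optional-6
        yum clean metadata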

    Read the article

  • MySQL not releasing temp file descriptors

    - by Wakaru44
    Since a few days ago, we've been experiencing some serious problems with our MySQL installation: MySQL keeps opening temporary files (normal behaviour), but these files are never released. The consequence is that, eventually, disk space is exhausted and we have to restart the service and clean up /tmp manually. Using lsof, we see something like this:

        mysqld  16866  mysql   5u  REG  8,3  0   692  /tmp/ibyWJylQ (deleted)
        mysqld  16866  mysql   6u  REG  8,3  0   707  /tmp/ibf5adsT (deleted)
        mysqld  16866  mysql   7u  REG  8,3  0   728  /tmp/ibGjPRyW (deleted)
        mysqld  16866  mysql   8u  REG  8,3  0  5678  /tmp/ibMQDLMZ (deleted)
        mysqld  16866  mysql  13u  REG  8,3  0  5679  /tmp/ibQAnM42 (deleted)

    Maybe it's not related, but when we shut down the server, the files are finally freed, and we can see the following warnings in the MySQL log:

        121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1333  user: 'xxx'
        121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1156  user: 'yyy'
        121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1151  user: 'zzz'

    where 'xxx', 'yyy' and 'zzz' are distinct MySQL users (and the only three users with active connections to the database). We have a few theories. There may be a problem in the OS that keeps file handles open; could it be that the OS "delete" operation blocks the threads until shutdown? This would explain the warnings at shutdown and the fact that the files are finally deleted when the process dies. Until now, data sets were so small that the temp files were relatively small and there was enough time to release the file handles without exhausting disk space. We are using MySQL 5.5 on RHEL 6.2 with the default kernel.
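
    While the cause is tracked down, a small hedged helper: lsof can list just the deleted-but-still-open files and show how much space they are pinning (the sizes are 0 in the listing above, but the same flag catches files that later grow):

        # hedged: +L1 lists open files whose link count is zero (unlinked but still held open)
        lsof -a -p "$(pidof mysqld)" +L1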

    Read the article

  • Can't login to a new mysql user

    - by mostar
    Hi, when I create a new MySQL user, it is impossible to log in with that user and password; I can only log in if I create a user without a password. For example:

        mysql -u root -phererootpass
        grant all privileges on mydb.* to testuser@'%' identified by '' with grant option;
        grant all privileges on mydb.* to testuser2@'%' identified by 'mypass' with grant option;
        FLUSH PRIVILEGES;
        exit;
        mysql -u testuser            # <<< works fine
        mysql -u testuser2 -pmypass  # <<< fails to log in
        ERROR 1045 (28000): Access denied for user 'testuser2'@'localhost' (using password: YES)

    I'm using MySQL 5.0 on Red Hat 5. Please advise. Mostar
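
    One hedged explanation that fits the error text: the login arrives as 'testuser2'@'localhost', and a default MySQL 5.0 install ships anonymous ''@'localhost' accounts that match localhost connections ahead of a wildcard '%' host, so the password ends up being checked against the wrong row. A sketch of checking and fixing that from the shell:

        # hedged: look for anonymous users, then either drop them or grant the user at localhost
        mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user = '';"
        mysql -u root -p -e "DROP USER ''@'localhost';"
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON mydb.* TO 'testuser2'@'localhost' IDENTIFIED BY 'mypass'; FLUSH PRIVILEGES;"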

    Read the article

  • Install PHP mcrypt on Red Hat 4

    - by Chris
    I'm having a very hard time getting mcrypt for PHP installed on a Red Hat 4 server. I've downloaded the rpm, but it tells me:

        error: Failed dependencies:
            php-common(x86-32) = 5.4.7-2.fc18 is needed by php-mcrypt-5.4.7-2.fc18.i686
            rpmlib(FileDigests) <= 4.6.0-1 is needed by php-mcrypt-5.4.7-2.fc18.i686
            libc.so.6(GLIBC_2.4) is needed by php-mcrypt-5.4.7-2.fc18.i686
            libltdl.so.7 is needed by php-mcrypt-5.4.7-2.fc18.i686
            rtld(GNU_HASH) is needed by php-mcrypt-5.4.7-2.fc18.i686
            rpmlib(PayloadIsXz) <= 5.2-1 is needed by php-mcrypt-5.4.7-2.fc18.i686

    When I try to install one of those packages, it in turn requires another 8 packages, so I'm diving into dependency hell here. If I instead try to compile mcrypt from source, this is what I get:

        checking for libmcrypt - version >= 2.5.0... no
        *** Could not run libmcrypt test program, checking why...
        *** The test program failed to compile or link. See the file config.log for the
        *** exact error that occured. This usually means LIBMCRYPT was incorrectly installed
        *** or that you have moved LIBMCRYPT since it was installed. In the latter case, you
        *** may want to edit the libmcrypt-config script: no
        configure: error: *** libmcrypt was not found

    But I was able to install libmcrypt from an rpm package successfully. Any suggestions? Also, I cannot use up2date, as it requires an active paid account from Red Hat, and since the staff has changed rather rapidly in the last year where I work, no one knows if there even were any support accounts.
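
    For what it's worth, the first set of errors is a Fedora 18 package being fed to Red Hat 4's far older rpm and glibc, so that route is a dead end regardless. The configure failure usually means only the libmcrypt runtime library is present, without its headers and libmcrypt-config. A hedged sketch of building libmcrypt itself from source so the mcrypt configure can find it (version numbers and the prefix are placeholders):

        # hedged: install libmcrypt with headers into /usr/local, then point mcrypt at it
        tar xzf libmcrypt-2.5.8.tar.gz && cd libmcrypt-2.5.8
        ./configure --prefix=/usr/local && make && make install
        cd ../mcrypt-2.6.8
        ./configure --with-libmcrypt-prefix=/usr/local && make && make install
        # for the PHP extension itself (ext/mcrypt from the matching PHP source), roughly:
        #   phpize && ./configure --with-mcrypt=/usr/local && make && make install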

    Read the article

  • setting up rhel 5.x RPM build server for mortal users

    - by Chen Levy
    My task is to set up a RHEL 5.x build host that can build RPMs for mortal users. On RHEL 6.x with rpm version 4.8, I have in /usr/lib/macros:

        # Path to top of build area.
        %_topdir %{getenv:HOME}/rpmbuild

    On RHEL 5.x with rpm version 4.4, the %{getenv:HOME} construct is not available. I know that I can use /home/someuser/.rpmmacros:

        %_topdir /home/someuser/rpmbuild

    and this will work for that user; however, I don't want to do this for every user separately. Moreover, since .rpmmacros will not expand ${HOME} or ~, I suspect it is unsafe to use those, which in turn makes /etc/skel unreliable for this task (or so I suspect). So, in short, my question is: how do I set up a RHEL 5.x host that allows all users to build RPM packages in their home directory?
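
    One approach often suggested for rpm 4.4, offered here as a hedged sketch rather than a verified recipe: the old macro engine cannot read environment variables with %{getenv:...}, but it can shell out with %(...), and a system-wide macro file applies to every user:

        # hedged: /etc/rpm/macros on the RHEL 5.x host -- expands per invoking user
        %_topdir %(echo $HOME)/rpmbuild
        # each account still needs the build tree, e.g. seeded via /etc/skel:
        #   mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}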

    Read the article

  • Red Hat server minimal install

    - by chmeee
    In a farm of virtualized Red Hat servers, there's a need to install a minimal system for security reasons. Minimal installs have several advantages (not all of them security related):

        - Less exposure to vulnerabilities (if you don't need it, don't install it)
        - A better update process (fewer packages to update, less chance of breaking the system)
        - Better performance (no unneeded daemons or processes)
        - The less software you have, the easier it is to harden the system

    Unfortunately, this is not easy, because the "Minimal Installation" on Red Hat contains lots of unnecessary packages. There is an added challenge in that the farm is running Oracle iAS, and I've been told that iAS has dependencies on a local graphical environment, so in the end every server in the farm has GNOME, X, etc. I've been searching the web, and one solution seems to be writing a kickstart script that installs only the necessary packages, but I find this difficult and have several doubts about how to maintain the system dependencies afterwards. How do you install minimal Red Hat servers? Is it OK to use kickstart, or will I have dependency problems during installation or updates? Is there any way to avoid installing the graphical environment for iAS?
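
    For the kickstart route, a minimal hedged sketch of the %packages section commonly used to strip an EL5 install down to the core group (the explicit removals are illustrative; iAS's own X/GNOME requirements would still have to be layered on top, or satisfied with a headless X server such as Xvfb where the vendor permits it):

        %packages --nobase
        @core
        -sendmail
        -cups
        -bluez-utils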

    Read the article

  • Resizing Partitions on Live RHEL/cPanel Server

    - by Timothy R. Butler
    I've resized many partitions over the years on Linux, Windows and Mac OS X -- but always using a GUI. However, the time has come where the preset partition sizes my data center placed on my server aren't the right sizes, and I need to resize a production server's disks. I could fiddle with it and probably do OK, but given that it is a production server, I wanted to get some advice about the right way to do this. I do have KVM-over-IP access, so if it is best to take the server offline and boot off a rescue partition, I can do that.

        root [/var/lib/mysql]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2             9.9G  2.1G  7.3G  23% /
        tmpfs                 7.8G     0  7.8G   0% /dev/shm
        /dev/sda1              99M   77M   18M  82% /boot
        /dev/sda8             884G  463G  376G  56% /home
        /dev/sda3             9.9G  8.0G  1.5G  85% /usr
        /dev/sda5             9.9G  9.1G  308M  97% /var
        /usr/tmpDSK           2.0G   38M  1.8G   3% /tmp

    As you can see, /var and /usr are quite close to being full, and I've actually had to symlink some logs on /usr to directories in /home to balance things out. What I would like to do is add 6-10 GB each to /usr and /var, presumably taking the space from /home. As I think about how the disk is arranged, the best thought I've come up with is to reduce /home by 16 GB, say, move /var to the spot freed up, and then allocate /var's old space to /usr. However, that would put /var at the far end of the disk, which seems less than ideal, given that MySQL has all of its data on that partition. I'd love to take the space out of the closer end of /usr, but I assume that would take a very arduous (and perhaps risky) process of moving all of the data in /usr around; I seem to recall having such a process fail for me on a computer in the past. The other option might be to merge / and /usr, since / is underutilized, though I'm not sure if that's a good idea. Do you have any suggestions both on the best reallocation plan and the commands to use to accomplish it?

    UPDATE: I should add -- here's the partition table. There's one unused partition which, if memory serves, was the original tmp location before I created a tmp image:

        Name     Flags  Part Type  FS Type               [Label]   Size (MB)
        ---------------------------------------------------------------------
                        Unusable                                        1.05*
        sda1     Boot   Primary    Linux ext2                          106.96*
        sda2            Primary    Linux ext3                        10737.42*
        sda3            Primary    Linux ext3                        10737.42*
        sda5     NC     Logical    Linux ext3                        10738.47*
        sda6     NC     Logical    Linux swap / Solaris               2148.54*
        sda7     NC     Logical    Linux ext3                         1074.80*
        sda8     NC     Logical    Linux ext3                       964098.53*

    Read the article

  • logrotate by size outside the daily schedule

    - by Josh Smeaton
    We have a couple of applications that generate huge log files. It's not enough to rotate those logs daily, so I created the following logrotate conf:

        /var/log/ourapp/*log {
            compress
            copytruncate
            missingok
            size 200M
            rotate 10
        }

    The idea is that we can keep 2 GB of logs for this one application, no matter how quickly those files are filling up. The problem, though, is that logrotate only runs once daily. AFAIK, when logrotate kicks off at 4am, it will check whether the size is at least 200M and rotate the log if so. Ideally logrotate would run every minute, check the size, and rotate if the size is greater. Is there a standard way to rotate based on size outside of the daily cron schedule?
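
    Stock logrotate has no size-watching daemon; the usual approach is simply to run it against that one config more often than the daily job, giving it its own state file so the two runs don't interfere. A hedged sketch (the paths and the 5-minute interval are arbitrary):

        # /etc/cron.d/ourapp-logrotate (hedged sketch)
        */5 * * * * root /usr/sbin/logrotate -s /var/lib/logrotate.ourapp.status /etc/logrotate.d/ourapp

    If the config file stays in /etc/logrotate.d it will also be picked up by the daily run, which is usually harmless; otherwise keep it somewhere like /etc/logrotate-ourapp.conf.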

    Read the article

  • How to set up "vi" shell environment as default

    - by Ency
    It could be a silly question, but I cannot find an answer anywhere. I'd like to use vi (you know, set -o vi can do the trick) as the default in my shell instead of emacs, but I do not want to put it into the bash startup scripts. Why? Because I work as a verification engineer and use several user accounts, which are also quite often reinstalled. Changing the default profile is not an answer either, because some of the software creates its own home directories (independent of the default profile).
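
    One system-wide option that survives user accounts being recreated, offered as a hedged suggestion: readline (which bash uses for line editing) reads /etc/inputrc for every user, so the vi binding can be set there once instead of in per-user startup files:

        # appended to /etc/inputrc (hedged; applies to bash and other readline-based programs)
        set editing-mode vi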

    Read the article

  • unable to run OpenOffice from text mode

    - by dilip
    I have installed OpenOffice on Red Hat 5 in text mode. I tried to start OpenOffice using the command:

        sudo /usr/lib/openoffice/program/soffice "-accept=socket,host=localhost,port=8100;urp;StarOffice.ServiceManager" -nologo -headless -nofirststartwizard

    It showed the error: javaldx: Could not find a Java Runtime Environment! So I installed a JRE on the system; after that I did not get any error, but OpenOffice still does not start. I also checked for a process related to OpenOffice, but I haven't seen any. Can anyone help me fix this issue?

    Read the article

  • How to install Red Hat Enterprise Linux on Apple Macbook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified that the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text" with all-generic-ide on the boot line; I also tried removing the ide parameter and using just "linux text". The result is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it is loading ata... something; it disappears too fast for me to read. Then the machine freezes. I pressed the Alt-function keys to see if I could look at the system log; here is what it says:

        Alt-F3: trying to mount CD device hda
        Alt-F4: status error: hda: lastFailedSense
                hda: Failed opcode was: unknown
                hda: Lost interrupt
                hda: Drive not ready for command
                ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive -- is that possible? Thanks again for the support. PS: I posted in the Apple support forums (Apple.com Support Discussions, Boot Camp, Installation and Storage) and didn't get an answer.

    Read the article

  • How can I update fontconfig to a newer version in Red Hat 5.3?

    - by user16654
    I want to update fontconfig to a newer version, but it seems that the OS is still finding the old fontconfig, and I need the newer version to build Qt. How do I make Red Hat 5.3 see the newer version? I do not know if this helps, but when I did a search for fontconfig I found some files in a folder called cache. When I do yum update, it tells me everything is up to date, but that version is too old and is missing FcFreeTypeQueryFace. Just send me a comment if this is the wrong site and I'll change it.
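
    yum on RHEL 5.3 will never offer a fontconfig newer than the one Red Hat ships, so a common (hedged) approach is to build the newer fontconfig into its own prefix and let the Qt build find it via pkg-config, leaving the system copy alone; the version number and prefix below are placeholders:

        tar xzf fontconfig-2.8.0.tar.gz && cd fontconfig-2.8.0
        ./configure --prefix=/usr/local && make && make install
        export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
        export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
        # then re-run Qt's configure so it picks up the newer fontconfig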

    Read the article
