Search Results

  • Script to run chown on all folders, setting the owner to the folder name minus the trailing /

    - by Shikoki
    Some numpty ran chown -R username. in the /home folder on our webserver, thinking he was in the desired folder. Needless to say, the server is throwing a lot of wobblies. We have over 200 websites and I don't want to chown them all individually, so I'm trying to write a script that changes the owner of each folder to the folder name, without the trailing /. This is all I have so far; once I can remove the / it will be fine, but I'd also like to check whether the name contains a . in it, and only run the command if it doesn't, otherwise move on to the next one.

        #!/bin/bash
        for f in *
        do
            test=$f; # manipulate the test variable
            chown -R $test $f
        done

    Any help would be great! Thanks in advance!
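
    A minimal sketch of one way to finish this, assuming each entry under /home is a directory named after the account that should own it (the ${f%/} expansion strips the trailing slash, names containing a dot are skipped, and nothing is touched unless a matching user actually exists):

        #!/bin/bash
        # Sketch: reset ownership of each directory under /home to the
        # like-named user.  Assumes directory names match login names.
        cd /home || exit 1
        for f in */; do
            user=${f%/}                  # strip the trailing slash
            case $user in
                *.*) continue ;;         # skip names containing a dot
            esac
            if id "$user" >/dev/null 2>&1; then
                chown -R "$user" "/home/$user"
            fi
        done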

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem, in short: I can successfully ssh to it from machines in the same LAN (192.168.1.0) but not from others (in 10.23.0.0). The SuSE box runs SSH server openssh-4.2p1-18.12. I have ruled out the firewall and the hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say.
    On the client:
        $ ssh -vvv 192.168.1.5
        OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22.
        debug1: Connection established.
        debug1: identity file /home/nbuild/.ssh/identity type -1
        debug1: identity file /home/nbuild/.ssh/identity-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1
    On the server:
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK
        Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739.
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3
        Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address
        Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340
    The server log above is with the DEBUG3 log level enabled. With the default log level (INFO), the only thing the server logs is this:
        Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11
    Any hints? I feel I've tried everything already.
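
    The "Did not receive identification string" entry means the TCP connection reaches sshd but the protocol banner exchange never completes, which tends to point at something on the path between 10.23.0.0 and 192.168.1.0 (MTU/fragmentation, an overzealous filter) rather than at sshd itself. A few hedged checks, using the addresses from the question:

        # From a failing client: does the SSH banner ever arrive?
        # It should print an SSH-2.0 banner line within a second or two.
        nc -v 192.168.1.5 22

        # On the server, watch the handshake attempts from the failing subnet:
        tcpdump -ni eth0 'port 22 and net 10.23.0.0/16'

        # If larger packets are dropped somewhere along the route, a
        # don't-fragment ping from the client will show it:
        ping -M do -s 1400 192.168.1.5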

  • XFS: No space left on device

    - by beketa
    I am using XFS on a small HDD (/dev/sdb1, less than 1TB) and storing many small files (-32KB). df -h and df -i show that it has available space.
        # df -hv
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3             127G   19G  102G  16% /
        tmpfs                  16G     0   16G   0% /lib/init/rw
        udev                   16G  168K   16G   1% /dev
        tmpfs                  16G     0   16G   0% /dev/shm
        /dev/sda1              99M   20M   75M  21% /boot
        /dev/sdb1             136G  123G   14G  91% /mnt/sdb1
        # df -iv
        Filesystem            Inodes    IUsed     IFree IUse% Mounted on
        /dev/sda3            8421376    36199   8385177    1% /
        tmpfs                4126158        5   4126153    1% /lib/init/rw
        udev                 4124934      671   4124263    1% /dev
        tmpfs                4126158        1   4126157    1% /dev/shm
        /dev/sda1              26112      222     25890    1% /boot
        /dev/sdb1           24905120 11076608  13828512   45% /mnt/sdb1
    However, I get a "No space left on device" error:
        # touch /mnt/sdb1/test
        touch: cannot touch `/mnt/sdb1/test': No space left on device
    I think the inode64 issue is not related to this case, because the drive is less than 1TB and df -i shows that there are free inodes. I unmounted and remounted with -o inode64 but got the same error. xfs_repair does not report any problem. xfs_info shows the drive information as follows:
        # xfs_info /dev/sdb1
        meta-data=/dev/sdb1              isize=1024   agcount=16, agsize=2227764 blks
                 =                       sectsz=512   attr=2
        data     =                       bsize=4096   blocks=35644210, imaxpct=25
                 =                       sunit=0      swidth=0 blks
        naming   =version 2              bsize=4096   ascii-ci=0
        log      =internal               bsize=4096   blocks=17404, version=2
                 =                       sectsz=512   sunit=0 blks, lazy-count=1
        realtime =none                   extsz=4096   blocks=0, rtextents=0
    Any ideas? Thanks!
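
    One common cause of ENOSPC on XFS while df still shows free blocks and free inodes is free-space fragmentation: new inodes are allocated in contiguous chunks (64 inodes at a time, i.e. 64KB with isize=1024), so if no allocation group has a contiguous free extent that large, creating a new file fails even though plenty of scattered free blocks remain. A hedged way to check, run against an unmounted (or at least idle) filesystem:

        # Histogram of free extent sizes; if nearly all free space sits in
        # extents of only a few blocks, inode-chunk allocation can fail.
        umount /mnt/sdb1
        xfs_db -r -c "freesp -s" /dev/sdb1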

  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg:
        [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
        [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
        [34542.836306] MMC: killing requests for dead queue
        [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
        [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
        [34542.837062] MMC: killing requests for dead queue
        [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
        [34542.837074] FAT: bread failed in fat_clusters_flush
        [34542.837085] MMC: killing requests for dead queue
    These were all files I copied from a smaller SD card. I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine and the card failed again. Reading data from the old card went fine. My systems are older and the new SD card is new (16GB Class 4). Could it be that my computers are too old? Is there a definitive test to verify whether my SD card is bad?
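
    For a more definitive verdict than intermittent copy errors, a couple of hedged tests (the badblocks run is destructive, so only do it while the card holds nothing you need; the f3 tools are a separate package and assume the card is mounted, /media/sdcard here being an example path):

        # Destructive read/write surface scan of the whole card:
        badblocks -wsv /dev/mmcblk0

        # Fill-and-verify test of the mounted card; this also catches
        # counterfeit cards that report more capacity than they really have.
        f3write /media/sdcard && f3read /media/sdcard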

  • How to configure multiple addresses on a single interface using Fedora 16

    - by cg.
    I upgraded from Fedora 14 to 16 recently. I had two static IPv4 addresses configured on my ethernet interface by creating two files in /etc/sysconfig/network-scripts:
        ifcfg-eth0    -> first address
        ifcfg-eth0:1  -> second address
    After the upgrade, this resulted in an error message during the boot process, and only the second address was successfully configured on the interface. So, what is the correct way to configure multiple addresses on a single interface on Fedora 16? I could not find anything on this subject in the documentation so far.
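
    One approach reported to work with Fedora 16's initscripts/NetworkManager is to drop the alias file entirely and declare both addresses in a single ifcfg-eth0 using numbered IPADDRn/PREFIXn variables. This is only a hedged sketch (the addresses, prefixes and gateway are placeholders, not values from the question):

        # /etc/sysconfig/network-scripts/ifcfg-eth0  (sketch)
        DEVICE=eth0
        BOOTPROTO=none
        ONBOOT=yes
        IPADDR0=192.0.2.10
        PREFIX0=24
        IPADDR1=192.0.2.11
        PREFIX1=24
        GATEWAY=192.0.2.1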

  • ssh -x: how to get the clipboard?

    - by Gupu User
    Hello! I'm connected to a server via ssh -x and my only way to get text out of the system is the X clipboard (unless I want to take thousands of screenshots and run OCR over them). I cannot execute any programs on the other machine, because I don't have access. How can I achieve this?
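
    One hedged observation: X11 forwarding is enabled with ssh -X (lowercase -x disables it). If forwarding is actually on, any text selected in a forwarded application is held by the local X server, so it can be read on the local machine without running anything extra on the remote side, for example:

        # Run locally, after selecting text in the forwarded application:
        xclip -o -selection primary      # the mouse selection
        xclip -o -selection clipboard    # the Ctrl-C / Ctrl-V clipboard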

  • How to use ccache selectively?

    - by Anonymous
    I have to compile multiple versions of an app written in C++ and I think to use ccache for speeding up the process. ccache howtos have examples which suggest to create symlinks named gcc, g++ etc and make sure they appear in PATH before the original gcc binaries, so ccache is used instead. So far so good, but I'd like to use ccache only when compiling this particular app, not always. Of course, I can write a shell script that will try to create these symlinks every time I want to compile the app and will delete them when the app is compiled. But this looks like filesystem abuse to me. Are there better ways to use ccache selectively, not always? For compilation of a single source code file, I could just manually call ccache instead of gcc and be done, but I have to deal with a complex app that uses an automated build system for multiple source code files.
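
    A hedged alternative to the symlink trick is to point the compiler variables at ccache only for this one build, so nothing global changes and other builds keep calling gcc directly (the variable names assume a conventional Makefile/autotools setup):

        # One-off build through ccache, without touching PATH or creating symlinks:
        CC="ccache gcc" CXX="ccache g++" ./configure
        make CC="ccache gcc" CXX="ccache g++" -j"$(nproc)"

        # For a CMake-based tree, the equivalent (CMake 3.4+) would be:
        cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ..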

  • What does *:* in netstat output stand for?

    - by chello
    While executing the command /usr/sbin/lsof -l -i -P -n as the root user, I get this output:
        COMMAND  PID USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
        ...
        httpd   9164   70   3u  IPv4 0x2f70270      0t0  TCP 127.0.0.1:9010 (LISTEN)
        httpd   9164   70   4u  IPv6 0x25af4bc      0t0  TCP *:80 (LISTEN)
        httpd   9164   70   5u  IPv4 0x3149e64      0t0  TCP *:* (CLOSED)
        httpd   9180   70   3u  IPv4 0x2f70270      0t0  TCP 127.0.0.1:9010 (LISTEN)
        httpd   9180   70   4u  IPv6 0x25af4bc      0t0  TCP *:80 (LISTEN)
        httpd   9180   70   5u  IPv4 0x3149e64      0t0  TCP *:* (CLOSED)
    Please let me know what *:* stands for; I am interested in both the IP address and port fields. Also, what does (CLOSED) mean here?

  • Apache intermittently aborting requests

    - by Adam Phillips
    I have just been dealing with a problem whereby HTTP requests are being aborted, seemingly at random. On any particular page of the website, when you opened the page, a number of the assets (img, css, etc.) failed to load. If you refreshed, the page might work fine, the same set of assets might fail to load, or different assets might fail to load. The net tab in Firefox was returning 'Aborted' in the HTTP status code column for the failed assets, even though, in the case of images, the image previews were still working. There was nothing in any of the Apache logs about the requests that failed; however, since it seemed to point to an Apache issue, we restarted Apache. The first time we tried, it made no difference, but about 10 minutes later, in the absence of a better solution, we tried again. Bizarrely, the problem disappeared immediately. So now the site seems to be running fine again, but it's rather unsettling: both the intermittent nature of the problem and the lack of an explanation for its resolution. Has anyone seen anything like this before, and if so, did you find out the reason behind it? Many thanks

  • Only allow root to change filesystem

    - by Uejji
    The VPS I manage uses a simple hard-link rsync archive daily backup system, saved to a loop file. This is great, because each backup only takes up as much space as what has changed each day, and all user/group permissions are kept. I would like to give users direct access to their home directories in each backup, but I'm worried about intentional or accidental destruction of backup data: as it stands now, users can actually change, destroy or add to backed-up data they originally owned. I've been looking for a way to mount this filesystem similar to an ro mount option, but one that would still allow rw access to root; I've had absolutely no luck. In other words, I want users to be able to view and copy their backed-up data without actually being able to change it, and have that data maintain the original permissions. I've got no real preferences as far as the filesystem goes, as long as it's a standard unix filesystem that can preserve permissions, support hard links and deny write access to users without actually stripping the w permission from everything.
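
    One hedged sketch of a layout that gives exactly this split: keep the backup loop file mounted read-write at a path only root can reach, and expose a second, read-only bind mount of the same tree where users browse. Ownership, hard links and permissions are untouched because it is the same filesystem underneath (the paths and image name are examples):

        # Read-write mount, reachable only by root:
        mkdir -p /srv/backups-rw /srv/backups
        chmod 700 /srv/backups-rw
        mount -o loop /var/backups/backups.img /srv/backups-rw

        # Read-only view for everyone else (the two-step remount is how
        # read-only bind mounts are made on older kernels):
        mount --bind /srv/backups-rw /srv/backups
        mount -o remount,ro,bind /srv/backups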

  • What is the meaning of the 'Personalities' feature under /proc/mdstat

    - by drcelus
    On some systems I see this:
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
        md1 : active raid1 sdb1[1] sda1[0]
              10485696 blocks [2/2] [UU]
        md2 : active raid1 sdb2[1] sda2[0]
              477371328 blocks [2/2] [UU]
    And other systems show:
        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204788 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
              4193272 blocks super 1.1 [2/2] [UU]
        md2 : active raid1 sda3[0] sdb3[1]
              483985276 blocks super 1.1 [2/2] [UU]
              bitmap: 0/4 pages [0KB], 65536KB chunk
    I wonder what the meaning of Personalities is, and what the impact of having different values would be.

  • Is there a way to run a command before puppet implements a change?

    - by Patrick
    I want to have puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for. I want the command to only run if something is about to change, not on every puppet run. Here's the scenario. Let's say I have a bunch of web servers behind a load balancer. I then want puppet to update the web site files. But in order to prevent issues where some files have been updated, but other files haven't, and the mixed versions causing problems, I want to take the server out of the load balancer pool. I could write a script which when run will tell the load balancer to remove the box from the pool. Then puppet can do the change, and use postrun_command to put the box back in the pool once complete. But I need a way to run that script to remove the server from the pool. The only solution I can think of is to keep 2 copies of the files on the box. One a staging copy, and when puppet updates that, use a notify action to trigger the removal script, and then copy from staging into the live location. But I was hoping for something a little more generic that would work on any change being performed (upgrading a package, restarting a service, creating a user, anything).

  • Backing up to smaller drive

    - by Dave
    In a few hours I'll have a new 500GB Sony laptop, filled with the usual Sony rubbish which I'll promptly be replacing with Ubuntu or Crunchbang or something. However, first I want to make a full clone of the drive (including recovery partitions), should I wish to return it to Sony or sell it on in its factory state. The problem is that the only backup drives I have are less than 500GB - the biggest I have is 250GB or so! So I need to backup and compress on-the-fly. What's the best way to do this? Presumably dd piped into gzip would do the trick, or does anyone have any other suggestions to accomplish this?
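
    A hedged sketch of the dd-piped-into-gzip approach (device and mount names are examples: /dev/sda is the laptop's disk and the external drive is mounted at /mnt/backup). Whether a 500GB factory image compresses below 250GB depends on how much of the disk is genuinely empty, so a higher compression level, or zero-filling free space from a live CD first, may be needed if it comes out tight:

        # Clone the whole disk, compressing on the fly:
        dd if=/dev/sda bs=4M | gzip -c > /mnt/backup/sony-factory.img.gz

        # Restore later with:
        gunzip -c /mnt/backup/sony-factory.img.gz | dd of=/dev/sda bs=4M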

  • How to troubleshoot this memory usage?

    - by Camran
    I have a classifieds website. I use PHP, MySQL, and Solr. Solr uses a servlet container, in my case Jetty, which is a Java application. I just noticed that something was terribly wrong on my website. I opened the terminal, entered the "top" command, and noticed that Java was eating all the CPU and memory. Now I thought, "OK, maybe I need more mem and CPU," so I increased it. But along with the increase, the Java app started eating more. This has never happened before, and it is either a bug or a hack of some kind. Anyway, I need to troubleshoot this now, so I wonder how to do it. Can I somehow pinpoint exactly when the memory usage started to go up from some error log? How does one troubleshoot this? How do I prevent it? Is it possible to prevent too many requests somehow, if they are within a timeline? Thanks
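
    Some hedged first steps for pinning down what the Jetty/Solr JVM is actually doing (the process lookup and log path are examples; jstat and jmap ship with the JDK, so they may be missing if only a JRE is installed):

        # Which JVM process, and how much is it really using?
        PID=$(pgrep -f jetty | head -1)     # example: assumes a single Jetty JVM

        # GC and heap behaviour, sampled every second; constant full GCs with a
        # near-full old generation suggest an undersized heap or a leak rather
        # than a traffic spike.
        jstat -gcutil "$PID" 1000

        # Snapshot of what is occupying the heap:
        jmap -histo:live "$PID" | head -30

        # Correlate with request volume in the Jetty/Solr logs around the time
        # the usage started climbing (path is an example):
        ls -lt /var/log/jetty/ | head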

  • How to map Ctrl + ',' to the greater-than key ('>') or Ctrl + '.' to the less-than key ('<') using xmodmap?

    - by Maxrunner
    I'm trying to create a key combination that generates the ISO key of the Portuguese layout. The key in question is the '<' key: pressing it normally generates the '<' character, and pressing it with Shift generates the '>' character. So I'm trying to create a combination using xmodmap, and I want this to work in all programs. I've been searching on Google and came up with this example for Control + p = Up arrow:
        xmodmap -e "keycode 33 = p P Up"
    keycode 33 matches the p key, so where does Control come into that command? Regards,

  • blocking port 80 via iptables

    - by JoyIan Yee-Hernandez
    I'm having problems with iptables. I am trying to block port 80 from the outside; basically, the plan is that we just need to tunnel via SSH, then we can get to the GUI etc. on the server. I have this in my rules:
        Chain OUTPUT (policy ACCEPT 28145 packets, 14M bytes)
         pkts bytes target  prot opt in    out   source      destination
            0     0 DROP    tcp  --  *     eth1  0.0.0.0/0   0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED
    And:
        Chain INPUT (policy DROP 41 packets, 6041 bytes)
            0     0 DROP    tcp  --  eth1  *     0.0.0.0/0   0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED
    Any guys wanna share some insights?
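
    One hedged note on the rules above: matching tcp dpt:80 in the OUTPUT chain only affects HTTP requests the server itself initiates, because replies to inbound connections leave with source port 80, not destination port 80. Blocking outside access to the web GUI is an INPUT-side job; a sketch with the interface from the question:

        # Allow SSH in, drop new inbound HTTP on the external interface:
        iptables -A INPUT -i eth1 -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -i eth1 -p tcp --dport 80 -j DROP

        # With an INPUT policy of DROP (as shown above), the second rule is
        # redundant -- just make sure no earlier rule accepts port 80.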

  • Single application through OpenVPN tunnel (Debian Lenny)

    - by user14124
    I'm using Debian Lenny and I want to tunnel only rtorrent through an OpenVPN tunnel. I have a tunnel running; the config file looks like this:
        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0
    My idea is that I could run a sockd proxy internally that redirects traffic to the OpenVPN tunnel. I could use the *nix "proxifier" application tsocks to make it possible for rtorrent to connect through that proxy (as rtorrent doesn't support proxies). I have trouble configuring sockd, as my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf As my IP changes on each connect, I don't know what to put in that config file. I have no control over the host-side config file. Any help wanted. Any other method is very welcome.
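
    If the SOCKS server is Dante's sockd, one hedged way around the changing VPN address is to bind the external side to the tun interface by name rather than by IP, so it follows whatever address OpenVPN hands out. A minimal sketch of /etc/sockd.conf (the listen address and port are examples, and rule keywords vary slightly between Dante versions -- newer releases spell the second rule "socks pass"):

        # /etc/sockd.conf (sketch)
        internal: 127.0.0.1 port = 1080   # where tsocks/rtorrent will connect
        external: tun0                    # follow the VPN interface, whatever its IP

        client pass {
            from: 127.0.0.0/8 to: 0.0.0.0/0
        }
        pass {
            from: 127.0.0.0/8 to: 0.0.0.0/0
            protocol: tcp udp
        }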

  • QNAP (469L) with Debian: can't connect to router

    - by agtoever
    I've been running my QNAP 469L with Debian (Wheezy deb7u3) for a few months. Yesterday I upgraded the memory to 4 GB. The system boots fine, but since the upgrade, I'm not able to connect the server to my router (a TP-Link WR941ND). My configuration: the router runs a DHCP server (192.168.67.100 and up), with a preconfigured IP address for the QNAP (192.168.67.10). The router is on 192.168.67.1. As said, Debian is installed on the QNAP (which can be regarded as a normal computer). Networking hardware on the QNAP: Intel PRO/1000 Network Connection using the e1000e kernel module. This is what I have tried so far:
    - Replaced the network cable (tried 3 different cables on different router ports).
    - Checked for messages from the kernel: dmesg | grep eth. Besides the normal hardware messages, I get an "ADDRCONF(NETDEV_UP): eth0: link is not ready" for each call to ifup.
    - Manually restarted the network: sudo service networking restart
    - Checked sudo ifconfig (eth0 is up, but has no IP addresses).
    - Checked /etc/network/interfaces, which has (besides the loopback device) "allow-hotplug eth0" and "iface eth0 inet dhcp", which is AFAIK the default Debian configuration.
    - Since the server has two ethernet ports, checked that I'm using the right port (the hardware address ifconfig reports for eth0 is the same as the hardware address in the router's preconfigured IP entry for the server).
    - Did a manual sudo ifdown eth0 && sudo ifup eth0, with no results (but an extra "ADDRCONF(NETDEV_UP): eth0: link is not ready" in the kernel log).
    - Did a DHCP request, dhclient -v eth0: for about a minute requests are sent (according to the terminal) and at the end I get "No DHCPOFFERS received. No working leases in persistent database - sleeping."
    - Checked the router's system log for received DHCP requests. I see them for some devices (my Mac, my iPhone) but not from the QNAP. A log entry looks like "DHCPS:Recv REQUEST from 84:85:06:07:75:6A" followed by "DHCPS:Send ACK to 192.168.67.101". There are no records from the QNAP's hardware address.
    So the two error messages that I do get are "ADDRCONF(NETDEV_UP): eth0: link is not ready" for every ifup, and "No DHCPOFFERS received. No working leases in persistent database - sleeping." for every DHCP call.
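
    "link is not ready" means the kernel never sees carrier on eth0, so the DHCP requests never reach the wire; that is consistent with the router logging nothing from the QNAP. A few hedged checks at the link layer (interface names as in the question):

        # Does the NIC negotiate a link at all?  "Link detected: no" despite
        # known-good cables and ports points at the NIC/driver side.
        ethtool eth0

        # Watch carrier state and driver messages while re-plugging the cable:
        ip link show eth0
        dmesg | grep -i e1000e | tail

        # The box has two ethernet ports; the second one is worth a try too:
        ethtool eth1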

  • puppet onlyif specified nodes

    - by Valintinr
    I'm trying to write a puppet template. I have a puppet master and a few puppet agents, and they all must be treated differently. I think it's good to do this by the node's hostname. But when I tried, I got the error "puppet-agent[169037]: (/Stage[main]//Exec[adduser]) Could not evaluate: Could not find command 'ru1'". See the code below:
        exec { 'adduser':
          command => 'sudo adduser -m -p pawSfQewWrUAA test -G wheel',
          path    => [ '/bin','/usr/bin' ],
          onlyif  => "$hostname == ru1"
        }
    I need this task to run only on the node with the hostname ru1. So how can I do this? Thanks.
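
    The onlyif parameter is run as a shell command and only its exit status matters; it isn't evaluated as Puppet code, which is why Puppet ends up looking for a command literally named 'ru1'. A hedged sketch of a test command that succeeds only on the right host (the more idiomatic fix would be a node definition or a conditional around the resource in the manifest, but at the exec level this works):

        # For the onlyif parameter: exits 0 only on the host named ru1,
        # so the exec's adduser command runs nowhere else.
        test "$(hostname)" = "ru1"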

  • File command output is different for the same file on different machines

    - by Coka
    I get different output from the file command for the same file (checked the inode) on different machines. One of the machines runs suse10 sp3 and the other rhel4.
        machine1> file x.tcl
        x.tcl: ASCII English text
        machine2> file x.tcl
        x.tcl: data
    Even in the vi editor the same file looks different on the two machines. Any clue? One more thing: there's a third machine, suse10 sp3, that works fine. Is this a machine issue?
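
    Two hedged checks that usually narrow this down: compare the file utility and its magic database on both machines (different versions classify borderline content differently), and compare the raw bytes each machine actually reads; if the checksums differ for the same inode on a shared mount, the problem is below file(1), e.g. stale NFS caching or encoding handling:

        # On each machine: which file(1) and which magic database?
        file --version
        rpm -q file

        # Do both machines even see the same bytes?
        md5sum x.tcl
        hexdump -C x.tcl | head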

  • Setting differing ACLs on directories and files

    - by durandal
    Quick ACL question: I want to set up default permissions for a file share so that everyone can rwx all of the directories and so that all newly created files are rw. Everyone who is accessing this share is in the same group, so this isn't a concern. I have looked at doing this via ACLs without changing all of the users' umasks and such. Here are my current invocations:
        setfacl -Rdm g:mygroup:rwx share_name
        setfacl -Rm g:mygroup:rwx share_name
    My problem is that while I want all of the newly created sub-directories to be rwx, I only want newly created files to be rw. Does anyone have a better method to achieve my desired end-result? Is there some way to set ACLs on directories separately from files, in a similar vein to "chmod +x" vs. "chmod +X"? Thanks
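
    A hedged sketch: setfacl accepts a capital X in the permission field, analogous to chmod +X, meaning execute applies only to directories (and to files that already have execute set). Combined with the fact that newly created files are further capped by the mode the creating program asks for (typically 0666, i.e. no execute), this tends to give directories rwx and plain files rw without separate passes:

        # Default ACL inherited by new objects, plus an ACL for what exists now;
        # capital X grants execute on directories but not on plain files.
        setfacl -Rdm g:mygroup:rwX share_name
        setfacl -Rm  g:mygroup:rwX share_name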

  • haproxy backend default location

    - by magd1
    If you go to www.company.com, I want it to redirect to /something/something on my server, but the URL still shows www.company.com. Is this possible in haproxy?
        backend new_marketing_server
            *** set default URL to /something/something ***
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000
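
    What's described is an internal rewrite, not a redirect: the client keeps requesting www.company.com/ while the backend receives /something/something. A hedged sketch of both spellings; reqrep belongs to the older haproxy versions that also use option httpclose, while http-request set-path is the 1.6+ form, so only one of the two lines would actually be used:

        backend new_marketing_server
            mode http
            # Older haproxy: prepend the path on the incoming request line.
            reqrep ^([^\ ]*)\ /(.*)     \1\ /something/something/\2
            # haproxy 1.6+ alternative:
            # http-request set-path /something/something%[path]
            balance roundrobin
            # ... remaining timeout/option/server lines as in the question ...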
