Search Results

Search found 24755 results on 991 pages for 'linux mom'.


  • Why does deleting from the command line take significantly less time than from a GUI?

    - by Jordan Plahn
    So this is probably the dumbest question you'll read today, but it's something I just wondered about as I was deleting a dozen or so images from my computer. With a quick rm -rf command on the directory's contents, all the images were gone in a snap. When I drag the same dozen or so images to a trash can/recycle bin, it sometimes takes 10 seconds or more. Now I'm sure some of it comes from the overhead of the GUI and such, and some of it may be the fact that the file still "exists" in some form if it's put into the recycle bin, but is there anything else that accounts for such a huge time disparity? Are "rm" and "delete" just such fundamentally different commands that I'm trying to compare apples and oranges? Enlighten me, please!
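    For context, "move to trash" on most Linux desktops follows the FreeDesktop trash spec, which does noticeably more work per file than rm's single unlink. A rough sketch of the difference in shell terms (paths assume a standard per-user trash directory):

        # rm: essentially one unlink() per file, no bookkeeping
        rm -rf ~/Pictures/batch/

        # trash: per file, a rename into the trash plus a metadata write
        for f in ~/Pictures/batch/*; do
            name=$(basename "$f")
            mv "$f" ~/.local/share/Trash/files/"$name"
            printf '[Trash Info]\nPath=%s\nDeletionDate=%s\n' \
                "$f" "$(date +%Y-%m-%dT%H:%M:%S)" \
                > ~/.local/share/Trash/info/"$name.trashinfo"
        done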

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the amount of Active Connections is usually higher than the amount of Inactive Connections; it used to be the other way around. Basically, on the old load balancer the connections looked something like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       12          252
        -> 10.84.32.22:0         Masq     1       18          368

    However, since migrating to the new load balancer it looks more like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       313         141
        -> 10.84.32.22:0         Masq     1       276         183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3
    New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning system changed)

    Is it because the old load balancer was running a kernel from 2005, and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance!

    Additional info: I'm using LVS in masquerading mode; the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same amount of Inactive Connections shown in ipvsadm.

    Read the article

  • Copying a large directory tree locally? cp or rsync?

    - by Rory
    I have to copy a large directory tree, about 1.8 TB. It's all local. Out of habit I'd use rsync, however I wonder if there's much point, and if I should rather use cp. I'm worried about permissions and uid/gid, since they have to be preserved in the copy (I know rsync does this), as well as things like symlinks. The destination is empty, so I don't have to worry about conditionally updating some files. It's all local disk access, so I don't have to worry about ssh or the network. The reason I'd be tempted away from rsync is that it might do more than I need: rsync checksums files. I don't need that, and am concerned that it might take longer than cp. So what do you reckon, rsync or cp?
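    For reference, both tools can preserve ownership, permissions, timestamps, and symlinks; a minimal sketch of the two invocations (the --info=progress2 flag assumes rsync 3.1 or newer):

        # cp: -a is shorthand for -dR --preserve=all (perms, owner, links, times)
        cp -a /src/. /dst/

        # rsync: -a keeps the same metadata; -H additionally preserves hardlinks
        rsync -aH --info=progress2 /src/ /dst/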

    Read the article

  • Upgrading phpmyadmin (and other packages) on Debian Squeeze

    - by westexasman
    I just set up a new VM with Debian Squeeze (latest stable release, 6.0.4). I am going for a webserver, so I installed the usual: apache, php5, mysql, phpmyadmin, etc. Everything went well; everything is working. My question is about upgrading packages. I noticed the phpmyadmin version is 3.3.7; the latest is 3.4.10.1. Doing apt-get update/upgrade does not upgrade the package. How does one go about upgrading packages on a Debian Squeeze server if apt-get update/upgrade does not work? Thanks!
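    This is expected behaviour: Debian stable only ships security fixes for the version it released with, so newer upstream releases come from backports if they come at all. A sketch of pulling a newer package from squeeze-backports, assuming the package was actually backported:

        # add the backports repo, then install explicitly from it
        echo "deb http://backports.debian.org/debian-backports squeeze-backports main" \
            >> /etc/apt/sources.list
        apt-get update
        apt-get -t squeeze-backports install phpmyadmin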

    Read the article

  • iptables rule on INPUT between 2 ethernet cards on the same host

    - by user1495181
    I have 2 eth cards on the same host, connected directly with a LAN cable. I set eth0 with IP 192.168.1.2 and eth1 with IP 192.168.1.1, and I set this rule:

        iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0

    There are no other rules (I ran iptables -X and -F). I send a TCP SYN packet (with a C++ program, using a raw socket) from 192.168.1.2 to 192.168.1.1. In Wireshark I see that the packet is received on eth0, but the iptables rule above doesn't apply to this packet. When I sent the packet to a remote host and applied this rule on the remote host, it worked correctly. So I guess this is due to the fact that both eth cards exist on the same host. I need to create an iptables INPUT rule for a local eth card (dest and src on the same machine); I need it to simplify a test. Did I guess the problem correctly? Is there a way to bypass this? PS: connecting them via a switch didn't help; the rule wasn't applied. Running on Ubuntu. tcpdump shows the packet:

        10:48:42.365002 IP 192.168.1.2.38550 > 192.168.1.1.34298: Flags [S], seq 0, win 5840, length 0

    but logging with iptables rules like these shows nothing:

        iptables -A INPUT -p tcp -j LOG --log-prefix '*****************'
        iptables -A OUTPUT -p tcp -j LOG --log-prefix '#################'
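    Two kernel behaviours are worth ruling out here, sketched below: traffic between two addresses owned by the same host is normally routed over the loopback interface (so an INPUT rule would need to match -i lo), and packets that arrive on a physical NIC carrying a local source address are dropped as martians unless accept_local is set:

        # if the traffic is being looped back internally, it shows up here:
        tcpdump -i lo tcp and port 34298

        # match locally-routed traffic in INPUT by loopback interface
        iptables -A INPUT -i lo -p tcp -j NFQUEUE --queue-num 0

        # allow locally-sourced packets arriving on a physical interface
        sysctl -w net.ipv4.conf.all.accept_local=1
        sysctl -w net.ipv4.conf.all.rp_filter=0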

    Read the article

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor NFS performance between two Xen virtual machines (client & server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily-used servers. What I find in my current configuration:

    - RPCNFSDCOUNT=8 (default): 13.5-30 seconds to cat a 1GB file on the client, so 35-80MB/sec
    - RPCNFSDCOUNT=16: 18s to cat the file, 60MB/s
    - RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!), 125MB/s
    - RPCNFSDCOUNT=2: 87s to cat the file, 12MB/s

    I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI passthrough; on the server I can cat the file in just a few seconds (250MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread, as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread? A few other things I have tried changing, to little or no effect:

    - increasing the values of /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K and 1M from the defaults of 192K and 256K
    - increasing the value of /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K
    - mounting with client options rsize=32768, wsize=32768

    From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes), but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices, /dev/sda and /dev/sdb; dmraid then picks up a fakeRAID-0 striped across them, which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.
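    For anyone wanting to reproduce the thread-count changes on Debian/Ubuntu, the knob lives in the nfs-kernel-server defaults file; a sketch of one test iteration:

        # set the number of nfsd kernel threads, then restart the server
        sudo sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=1/' /etc/default/nfs-kernel-server
        sudo /etc/init.d/nfs-kernel-server restart

        # on the client: drop caches, then time the sequential read
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
        time cat /mnt/nfs/bigfile > /dev/null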

    Read the article

  • RPM removal does not remove delivered directories and leaves garbage

    - by Jim
    I deliver an application via an RPM. This application delivers various directories and files; e.g., a file structure is copied under /opt/internal/com. I was expecting that on rpm -e all of the file structure delivered under /opt/internal/com would be removed, but it is not. There are directories in the file structure that are non-empty. Is this the reason? But these (non-empty) directories were created by the RPM installation, so I would expect that they would be "owned" by RPM and removed automatically. Is this wrong? Am I supposed to remove them manually?
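    As background: rpm only removes what the package's %files manifest owns, and it removes an owned directory only once it is empty. Files created at install or run time outside the manifest (e.g. by a %post script) are left behind, along with the directories containing them; those should be cleaned up by a matching %preun/%postun script. A sketch of a %files section that owns the whole tree from the question:

        %files
        %dir /opt/internal/com
        /opt/internal/com/*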

    Read the article

  • Why is 'grep -i' so slow? How to do it faster for ASCII?

    - by Vi.
    Consider:

        $ time lzop -d < tvtropes-index.lzo | egrep -B 5 '[Dd][eE][sS][cC][eE][nN][dD] ?[Ff][rR][oO][mM]'
        real    0m0.438s

        $ time lzop -d < tvtropes-index.lzo | egrep -B 5 'descend ?from' -i
        real    0m11.294s

    Both search case-insensitively. Why is the -i version so slow? How do I make grep -i fast without entering things like [iI][nN] [tT][hH][iI][sS] [wW][aA][Yy]? For example, perl -ne 'print if /descend ?from/i' works fast, but '-B 5' is not as trivial to implement as in grep (as well as other options).
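    One likely factor worth testing: in a UTF-8 locale, grep -i has to do full Unicode case-folding, while the C locale restricts it to ASCII case rules. Since the pattern here is pure ASCII, forcing LC_ALL=C is a common speedup (a sketch):

        time lzop -d < tvtropes-index.lzo | LC_ALL=C egrep -i -B 5 'descend ?from'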

    Read the article

  • File/folder write/delete permissions: is my server secure?

    - by acidzombie24
    I wanted to know: if someone got access to my server using a non-root account, how much damage could he do? After I su someuser, I used this command to find all files and folders that are writeable:

        find / -writable >> list.txt

    Here is the result. It's mostly /dev/something and /proc/something, plus these:

        /var/lock
        /var/run/mysqld/mysqld.sock
        /var/tmp
        /var/lib/php5

    Is my system secure? /var/tmp makes sense, but I am unsure why this user has write access to those folders. Should I change them? stat /var/lib/php5 gives me 1733, which is odd. Why write access? Why no read? Is this some kind of weird use of a temp file?
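    On the mode itself: 1733 decodes as the sticky bit plus rwx for the owner and wx (write and traverse, but no read/list) for group and other. It's the same idea as /tmp's 1777, except that nobody but the owner can list the directory, which is deliberate for PHP's session files: processes can create session files there but can't enumerate other users' sessions. A quick comparison:

        stat -c '%a %n' /var/lib/php5 /tmp
        # 1733 /var/lib/php5   (create files, but no listing of others' sessions)
        # 1777 /tmp            (create files, listing allowed)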

    Read the article

  • vim: remove previous code indentation and convert to another

    - by ramgorur
    I have a C project with multiple files (more than 100). The code is written in Whitesmiths style, but I want to change it to K&R-style indentation. Is it possible to do this with vim in an automated way? For example, I have an emacs-lisp script to achieve this:

        (progn
          (find-file "{}")
          (mark-whole-buffer)
          (setq indent-tabs-mode nil)
          (untabify (point-min) (point-max))
          (indent-region (point-min) (point-max) nil)
          (save-buffer))

    I was wondering if there is a similar trick that could be done with vim.
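    A rough vim equivalent, run non-interactively per file, is sketched below. Note that vim's = operator reindents lines but does not move braces, so a true Whitesmiths-to-K&R conversion is usually done with GNU indent (indent -kr file.c); the untabify/reindent half looks like this:

        for f in src/*.c; do
            vim -es -u NONE \
                -c 'set expandtab shiftwidth=4 cindent' \
                -c 'retab | normal! gg=G' \
                -c 'wq' "$f"
        done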

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts like .ssh/config? I'm curious because I have a minimal ssh server installed on my android phone, and I also have a minimal rsync tool on it as well. I'm getting tired of having to root-login onto the phone and symlink both tools to the standard places the android OS looks for executables, as the ssh server is bare bones and has a typical busybox-style multi-link binary for the basic unix commands (one that does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.
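    rsync itself has no per-host config file, but since the host-alias half is already handled by .ssh/config, a thin wrapper function can pin the remaining rsync-specific flags; a sketch (the phone-side path is made up):

        # ~/.ssh/config handles the host alias:
        #   Host android
        #       HostName 192.168.0.42
        #       User root

        # a shell wrapper pins the per-site rsync flags:
        phone_rsync() {
            rsync --rsync-path=/data/local/bin/rsync --no-g --no-p --no-t "$@"
        }

        # usage:
        phone_rsync -rv android:/mnt/sdcard/ ~/phone-backup/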

    Read the article

  • VFS and FS i-node difference

    - by gaffcz
    What is the difference between a VFS i-node and an FS (e.g. ext) i-node? Is it possible that the ext i-node is persistent (contains/points to data blocks), while the VFS i-node is created in the i-node cache only after the ext i-node is read/used? Or is the VFS i-node just an in-memory image of the FS i-node (i.e. the same thing), and do i-nodes on those filesystems that don't use i-nodes (e.g. FAT, NTFS) have to be emulated (how?) so that VFS can work with those filesystems as if they supported i-nodes?

    Read the article

  • Is there a way to prevent NetworkManager from storing the password for a wireless network

    - by tolomea
    Our corporate wireless network uses continuously changing passwords with RSA tokens, so every time we need to connect to the wireless we need to enter a new password off the RSA token. For extra fun, using the wrong password a couple of times in a row causes the user's account to be locked. NetworkManager automatically stores and reuses the password, with the net result that it is constantly getting my account locked. Is there some way to prevent it from storing my password for that network? Or perhaps some way to get the gnome keyring to not store it?
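    NetworkManager models this with per-secret flags: the value 2 marks a secret as "not saved", so it is prompted for on every connect. A sketch using nmcli, assuming a reasonably recent NetworkManager and made-up connection names (the property is 802-1x.password-flags for WPA2-Enterprise networks, psk-flags for plain WPA-PSK):

        # ask for the 802.1x password every time instead of storing it
        nmcli connection modify "CorpWifi" 802-1x.password-flags 2

        # the equivalent for a PSK network:
        nmcli connection modify "HomeWifi" 802-11-wireless-security.psk-flags 2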

    Read the article

  • Ubuntu doesn't mount one of my NTFS disks

    - by Jader Dias
    - There is a mountable /dev/sda, NTFS-formatted (the Windows disk).
    - There is no /dev/sdb when I ls /dev (the NTFS data disk).
    - There is a /dev/sdc, which is another disk of the same model (the Ubuntu disk).
    - I can see that Ubuntu detected the unmountable disk in the Disk Utility, but it states, incorrectly, that it is unpartitioned and a RAID volume (it previously was a RAID0 setup with /dev/sdc, but now it is a simple volume, no RAID whatsoever).
    - When I boot Windows 7, it uses this unmountable disk without a glitch.
    - The problem happens in both IDE and AHCI modes.

    Ubuntu 10.04 Lucid Lynx.
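    The "unpartitioned RAID volume" symptom suggests stale fakeRAID metadata left over from the old RAID0 setup: dmraid claims the disk and hides it from normal mounting, while Windows ignores the signature. A hedged way to check, and to wipe the metadata only if the old array really is gone:

        # list disks dmraid considers RAID members
        sudo dmraid -r

        # erase the leftover RAID signature from the disk
        # (destructive to the signature only; double-check the device name!)
        sudo dmraid -rE /dev/sdX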

    Read the article

  • DHCP server inside a virtual machine can't see other machines

    - by William
    Hi, I set up a private network of virtual machines, and one of the machines is the DHCP server for the group. I want to specify a next-server for the DHCP server, but I'm having trouble connecting to any of the machines that I lease IPs to. I'm just trying to do a simple ping/ssh to 10.0.0.252 (a machine with a lease), but it doesn't seem to respond. Any advice? I'm assuming I need to be able to connect to my next-server, but maybe I'm wrong. Thanks.
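    For reference, next-server only tells PXE/network-boot clients which TFTP server to fetch their boot file from; it has no effect on ordinary ping/ssh reachability. A hypothetical ISC dhcpd.conf fragment showing where it fits:

        subnet 10.0.0.0 netmask 255.255.255.0 {
            range 10.0.0.100 10.0.0.253;
            option routers 10.0.0.1;
            next-server 10.0.0.2;       # TFTP server for network boot
            filename "pxelinux.0";      # boot file the client requests
        }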

    Read the article

  • saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing: it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories; add in the fact that this is time-consuming, and I'm leaning away from that approach. I've considered just using dd, but I'm not sure how it would handle read errors, or problems that could be overcome with some retries but not with so many that they endanger other parts of the drive. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly (e.g. pausing every x MB/GB) would be better than running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some bulky, non-critical stuff, like music, that isn't backed up anywhere else. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently: orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
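    The pass-then-retry behaviour described above is exactly what GNU ddrescue implements: a fast first pass that skips bad areas, then targeted retries, with progress tracked in a map file so runs can be interrupted and resumed. A sketch (device names are placeholders):

        # first pass: grab everything readable, skip scraping the bad areas
        ddrescue -f -n /dev/sdX /dev/sdY rescue.map

        # second pass: go back and retry the bad sectors up to 3 times
        ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map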

    Read the article

  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD-related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller; however, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };

        zone "." in {
            type hint;
            file "db.cache";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };

        // forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
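    One hedged thing to try: with forward first, BIND falls back to normal recursion when the forwarder doesn't answer quickly, and recursion against the root servers goes nowhere for a .local name. Marking the zone forward only keeps its queries pinned to the domain controller:

        zone "mydomain.local" {
            type forward;
            forward only;
            forwarders { 192.168.1.21; };
        };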

    Read the article

  • How do I allow users to execute commands via ssh without allocating a pseudo-terminal

    - by Dani El
    I need to allow users to run a limited set of commands, but not to allow them to create interactive sessions, just like GitHub does: if you try to ssh without a command, it greets you and closes the session. I can achieve this by using ForceCommand some-script, but inside some-script I then need to evaluate the user's input. Perhaps there is some other NoTTY-like option in sshd_config? --- UPDATE --- I'm looking for a pure SSH / bash solution, not Perl/Python/etc. hacks.
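    A minimal pure-bash sketch of the ForceCommand side: sshd puts whatever the client asked to run into SSH_ORIGINAL_COMMAND, so the script can whitelist against it and refuse everything else (the command names here are hypothetical). Combined with the no-pty option in authorized_keys (or PermitTTY no in newer OpenSSH), no terminal is ever allocated:

        #!/bin/bash
        # /usr/local/bin/limited-shell, wired up in sshd_config with:
        #   ForceCommand /usr/local/bin/limited-shell
        case "$SSH_ORIGINAL_COMMAND" in
            status)   /usr/local/bin/app-status ;;   # hypothetical command
            uptime)   uptime ;;
            "")       echo "Hi! No interactive shell here."; exit 1 ;;
            *)        echo "Command not allowed." >&2; exit 2 ;;
        esac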

    Read the article

  • How to configure sendmail to relay through a specific server

    - by ErebusBat
    I have a tiny home server set up behind my cable modem (Bresnan Communications). I want this box to be able to send out email (not receive) for notifications and whatnot. What I have already done: I have installed and configured sendmail, and I have added mail.bresnan.net as my SMART_HOST directive. What I believe the problem is: when I attempt to send an email I get the following in my mail log:

        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: from=aburns, size=140, class=0, nrcpts=1, msgid=<[email protected]>, relay=aburns@localhost
        Dec 22 10:24:17 batcave sm-mta[1531]: oBMHOHWZ001531: from=<[email protected]>, size=397, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: to=<[email protected]>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30140, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBMHOHWZ001531 Message accepted for delivery)
        Dec 22 10:24:18 batcave sm-mta[1517]: oBMH9mVv001357: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:14:30, xdelay=00:00:42, mailer=relay, pri=300339, relay=pmx0.bresnan.net. [69.145.248.1], dsn=4.0.0, stat=Deferred: Connection timed out with pmx0.bresnan.net.

    You can see where the message is accepted for delivery by my sendmail server, then where it attempts to hand off to Bresnan's server and times out. This is where my question is. Astute readers will notice that pmx0.bresnan.net is not what I have my SMART_HOST directive set as; it is the (outside?) MX server for the bresnan.com/net domain. Apparently Bresnan has their network configured so that you cannot access this server from within their own network, and must instead use the mail.bresnan.net server (which I can connect to). The problem is that I don't know how to tell sendmail to use this server and not the domain. What I have tried: setting a hosts entry so that the pmx0 server points to the mail IP address. This doesn't work, which makes sense, as sendmail is obviously doing an MX query to find the server; that query returns the IP directly, so a 'normal' DNS resolve never happens and the hosts file never gets involved.
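    For what it's worth, sendmail has a documented way to express exactly this: square brackets around the SMART_HOST value suppress the MX lookup, so the named host is used literally. A sketch of the change in sendmail.mc:

        dnl bypass MX resolution and deliver straight to this host
        define(`SMART_HOST', `[mail.bresnan.net]')dnl

    Then rebuild the config with m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf (or make -C /etc/mail, where available) and restart sendmail.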

    Read the article

  • How do I install Apache on Ubuntu 12.04 with virtual hosts

    - by YumYumYum
    According to the docs at https://help.ubuntu.com/10.04/serverguide/httpd.html I have done the following, and this is almost exactly what I always do on Fedora, but on Ubuntu it doesn't seem to work.

    a) DNS names to IP:

        $ echo "127.0.0.1 a" > /etc/hosts
        $ echo "127.0.0.1 b" > /etc/hosts

    b) Apache virtualhosts:

        $ ls
        1  2  default  default.backup  default-ssl

        $ cat 1
        <VirtualHost *:80>
            ServerName a
            ServerAlias a
            DocumentRoot /var/www/html/a/public
            <Directory /var/www/html/a/public>
                #AddDefaultCharset utf-8
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        $ cat 2
        <VirtualHost *:80>
            ServerName b
            ServerAlias b
            DocumentRoot /var/www/html/b/public
            <Directory /var/www/html/b/public>
                #AddDefaultCharset utf-8
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    c) Load into Apache and restart the service:

        $ a2ensite 1
        $ a2ensite 2
        $ a2dissite default
        $ /etc/init.d/apache2 restart

    d) Browse the two new hosts:

        $ firefox http://a

    It does not work: http://a and http://b always end up in /var/www/html. How do I fix it so that each name goes to its own directory, e.g. http://a to /var/www/html/a/public instead of /var/www/html?
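    One detail worth double-checking before anything Apache-related: both echo lines in step (a) use >, so the second overwrites the first and /etc/hosts ends up containing only the "b" entry (and none of the system defaults). Appending is probably what was meant (a sketch):

        echo "127.0.0.1 a" >> /etc/hosts
        echo "127.0.0.1 b" >> /etc/hosts
        getent hosts a b    # verify both names now resolve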

    Read the article

  • Switch between network configurations via command line in fedora 17

    - by Mike Fairhurst
    I have two different setups I use on my work laptop: one enables synergy over an ethernet ssh tunnel with my work computer on the local network, and the other opens an HTTP tunnel to my work computer from outside the network. When I have wifi enabled at work, my laptop seems to use it by preference, which makes synergy run incredibly slowly. At home I must use wifi. I have scripts that start my ssh tunnels, add my ssh keys, and start up other programs like synergy, and that close themselves when I shut my laptop. However, every day I have to start my routine by opening gnome-control-center and turning on my ethernet. I have tried route add and ifup; none of it works, so I dove into gnome-control-center's source code and found that it enables the connection via libnm's method nm_client_activate_connection, with some libnm-specific structs that I am having trouble tracking down. I'm not much of a C programmer, and I'm not familiar with either GTK or libnm. Does anybody know what Fedora 17 does with ethernet connections to fully enable them? Or does anybody know what libnm does to fully enable an ethernet connection? Do I have to write a C program using libnm to fully emulate whatever gnome-control-center is trying to do?
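    Fedora 17's NetworkManager ships nmcli, which drives the same D-Bus activation path (nm_client_activate_connection underneath) that gnome-control-center uses, so a script can toggle the wired profile without any C. A sketch (the connection names are guesses; nmcli con list shows the real ones):

        # bring the wired profile up, mirroring the control-center toggle
        nmcli con up id "Wired connection 1"

        # and at home, bring up the wifi profile instead
        nmcli con up id "HomeWifi"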

    Read the article

  • Error when sending mail to an external mail server from Postfix on CentOS

    - by yankitwizzy
    I just installed Postfix; I have not yet done any configuration on it. Each time I try to use it to send mail from another application, it tells me that the connection was refused to the IP I want to connect to. This is the error I get:

        [root@localhost /]# telnet mail.abuse.org
        Trying 69.43.160.153...
        telnet: connect to address 69.43.160.153: Connection refused
        telnet: Unable to connect to remote host: Connection refused

    Could someone please help me with this problem?
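    Worth noting when reading that test: without an explicit port, telnet connects to port 23 (the telnet service), not port 25 (SMTP), so "Connection refused" here says nothing about any mail server. A sketch of the tests that were probably intended:

        telnet mail.abuse.org 25
        # or check the local Postfix instance directly:
        telnet localhost 25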

    Read the article
