Search Results

Search found 24933 results on 998 pages for 'arch linux'.


  • How to verify TRIM/discard on encrypted swap?

    - by svarni
    I am using an encrypted swap partition via ecryptfs-setup-swap on my Ubuntu 13.04 computer with an SSD. I have manually set up TRIM for my ext4 root partition (simply by adding the "discard" option in /etc/fstab). I also manually ran fstrim on the root partition prior to booting, and using dstat I saw that for a few seconds several GB/s of data were written to the disk. That was presumably the effect of the trim command. These high write rates are reproducible by deleting huge files and did not occur before setting up trim, so I take them as evidence of working trim/discard. Manually enabling trim on my root partition has stopped the wearout of my precious new disk, from 365 used reserved blocks (out of 6176 total) within three months down to 0 additional used reserved blocks within three additional months (data from SMART attributes). Because I want to minimize the wearout of my SSD, I would now like to know whether my swap partition (which is encrypted using ecryptfs-setup-swap) also makes use of the trim/discard option. I tried

        sudo swapon -d -v /dev/mapper/cryptswap1

    but did not receive particular information ("-v") about whether trim/discard ("-d") was applied; if unsupported, I would expect a message. Then I tried

        sudo dd if=/dev/sda6 count=1 bs=1M | xxd | less

    directly after booting, when no swap space was in use, but I did not see only zeroes. I assume that, when looking at freshly trimmed regions, the disk would return zeroes instead of reading random sectors (and according to some forums, (unencrypted) swap space is trimmed once upon boot). Long story short: are there any ideas on how to test whether trim is effectively used for my encrypted swap? And if not, any ideas on how to trim the whole swap space, at least manually, for once? I wouldn't want to tinker with the partition itself, because I don't know whether it would need to be reinitialized as (encrypted) swap - I don't want to be left with an unbootable system :)
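
    A quick way to check (a sketch, assuming the swap is a plain dm-crypt mapping as set up by ecryptfs-setup-swap) is to ask device-mapper whether discards are passed through, and to re-activate swap with an explicit discard:

        # If "allow_discards" appears in the table output, TRIM requests are
        # passed down through the encryption layer to the SSD.
        sudo dmsetup table /dev/mapper/cryptswap1

        # Re-activate swap with discard requested; with kernel support this
        # discards the whole swap area once at activation time.
        sudo swapoff /dev/mapper/cryptswap1
        sudo swapon -d /dev/mapper/cryptswap1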


  • After installing Ubuntu how do I get rid of unity and go back to gnome?

    - by aseq
    After installing the newest Ubuntu LTS release (12.04, still in beta though) I am greeted with an unfamiliar and difficult-to-use desktop environment. I believe it is called Unity. However, I have used GNOME for a decade and a half and I would not like to move to this new and (for me) unusable desktop environment. What is a quick and easy way to remove (most of) Unity and bring back GNOME, as well as configure my display manager to load GNOME by default, with the environment as close as possible to the way it was before?
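
    A sketch of the usual route on 12.04 (package names as commonly reported for this release, worth verifying against the beta archive): install the classic GNOME session, pick it at the login screen, and optionally remove Unity afterwards:

        # Install the classic, panel-based GNOME session
        sudo apt-get install gnome-session-fallback

        # Once the new session works, Unity itself can be removed
        sudo apt-get remove unity unity-2d

        # Then choose "GNOME Classic" from the gear menu on the LightDM login
        # screen; the chosen session becomes the default for later logins.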


  • Zabbix not getting data for one filesystem

    - by Dennis Williamson
    I have Zabbix monitoring disk space for several volumes on several servers. It works fine on all of them except for one of the volumes on one of the servers, which always reports as 0. However, when I run

        ./zabbix_get -s localhost -p 10050 -k 'vfs.fs.size[/home, free]'

    locally on the machine in question, it gives me the correct, non-zero size which matches the output of df. How can I go about troubleshooting and correcting this problem?
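
    Two hedged checks that often narrow this down: query the item from the Zabbix server rather than from localhost, and test the key through the agent binary itself (a permissions or config difference between the two vantage points is the usual culprit; hostname below is a placeholder):

        # From the Zabbix server, not the monitored host
        zabbix_get -s problem-host -p 10050 -k 'vfs.fs.size[/home,free]'

        # On the monitored host, ask the agent to evaluate the key directly
        zabbix_agentd -t 'vfs.fs.size[/home,free]'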


  • Why not block ICMP?

    - by Agvorth
    I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script...

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F    # Flush all rules
        iptables -X    # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP

        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT

        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # Block all other traffic
        iptables -A INPUT -j DROP

    For context, this machine is a Virtual Private Server Web app host. In a previous question, Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What would happen if I did that (what bad thing would happen)? If I need to not block ICMP, how could I go about locking it down more?
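
    A common middle ground (a sketch, not a definitive policy) is to replace the blanket ICMP accept with only the types needed for correct operation, rate-limiting pings; these rules would go where the "Accept ICMP" rule sits now, ahead of the final DROP:

        # Needed for path MTU discovery; blocking it can silently break TCP
        # to hosts behind smaller-MTU links (includes fragmentation-needed)
        iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
        # Useful for traceroute and diagnosing routing loops
        iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
        # Still answer pings, but blunt floods with a rate limit
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 5/second -j ACCEPT
        # All other ICMP falls through to the final DROP rule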


  • What would be the best way to correlate logs and events on several hosts?

    - by user220746
    I'm trying to build a log correlation system across multiple hosts. SEC seems interesting but I don't know if it will cover my needs. How could I correlate system events, logs, network events, etc. on multiple hosts at the same time, in real time? Examples:

        - If 5 failed logins happened on host A in the last minute and firewall B has denied lots of accesses on different ports of A, then we assume there is a potential attack in progress on A.
        - If the Apache service on host A didn't receive any requests for the last N minutes and the Apache service on host B did, then the load balancing could be faulty.
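
    For scale, the first example maps fairly directly onto a SEC threshold rule; a sketch, assuming the hosts already forward their logs to one place (e.g. via syslog) so a single SEC instance sees them all, with the pattern and script purely illustrative:

        # brute-force.sec: fire when 5 failed logins for one source IP
        # arrive within 60 seconds on the aggregated log stream.
        type=SingleWithThreshold
        ptype=RegExp
        pattern=sshd\[\d+\]: Failed password for \S+ from ([\d.]+)
        desc=Repeated failed logins from $1
        action=shellcmd /usr/local/bin/notify.sh "possible attack from $1"
        window=60
        thresh=5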


  • How do you create virtual folders from saved searches

    - by Jérôme Radix
    I would like to have, on Unix-like platforms, the same functionality as the Windows 7 Library folders (aka virtual folders) you see in Windows Explorer. GNOME Nautilus does that kind of virtual folder through saved searches, but I want a system-wide solution, not a GNOME-wide solution. Is there a tool that creates virtual folders from the concatenation of multiple search queries (the result of multiple find commands)? The solution should index files for better performance, and you should be able to define the default folder for copy operations. I assume a solution to this kind of problem would certainly use FUSE, but I can't see a complete solution to this kind of task among FUSE applications.
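
    Short of a dedicated FUSE filesystem, one crude approximation (a sketch only, with the queries and paths as placeholders) is a symlink farm rebuilt from find commands, which file managers then browse like an ordinary folder; it gives no indexing and no default copy target, so it covers only part of the wish list:

        #!/bin/sh
        # Rebuild a "virtual folder" from the union of two saved queries.
        VDIR="$HOME/VirtualFolders/recent-pdfs"
        rm -rf "$VDIR" && mkdir -p "$VDIR"
        # Query 1: PDFs touched in the last week under ~/Documents
        find "$HOME/Documents" -name '*.pdf' -mtime -7 -exec ln -s {} "$VDIR/" \;
        # Query 2: anything named like a report under ~/Downloads
        find "$HOME/Downloads" -iname '*report*' -exec ln -s {} "$VDIR/" \;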


  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup: private network vboxnet1, 10.0.7.0/24; 1 host (Ubuntu desktop); 1 VM (Ubuntu server, VirtualBox).

    Addressing layout:

        HOST:             10.0.7.1
        VM:               10.0.7.101
        VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

        ip netns add mac                          # create a new namespace
        ip link add link eth0 mac0 type macvlan   # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac0 up
        ip addr add 10.0.7.102/24 dev mac0

    So that we basically end up with (like Inception?):

        +------------------------+
        | Host: 10.0.7.1         |
        |                        |
        | +--------------------+ |
        | | VM: 10.0.7.101     | |
        | |                    | |
        | | +----------------+ | |
        | | | NS: 10.0.7.102 | | |
        | | |                | | |
        | | +----------------+ | |
        | +--------------------+ |
        +------------------------+

    What works: ping between Host and VM; ping between NS and NS; dhclient from NS.

    What does not work: ping between NS and VM; ping between NS and Host.

    Where I started to go nuts: tcpdump on the host (the real machine) actually shows ARP requests AND replies; tcpdump in the NS shows ARP requests sent to the host; tcpdump on the VM makes the whole mess work (!) -- pings start to get answers when tcpdump is started on the VM?!?

    So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but I can't figure out what exactly... Btw, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
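
    A plausible lead, given that tcpdump on the VM "fixes" things: tcpdump puts eth0 into promiscuous mode, so the VM's NIC starts accepting frames addressed to the macvlan's extra MAC address. Two hedged things to try (the VM name is a placeholder):

        # On the VM: keep eth0 promiscuous so frames for mac0's MAC are
        # accepted even without tcpdump running
        ip link set eth0 promisc on

        # And/or on the real host: tell VirtualBox to deliver frames for
        # unknown MACs on the host-only adapter
        VBoxManage modifyvm "myvm" --nicpromisc1 allow-all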


  • Video Player/Library for Ubuntu with ratings and thumbs

    - by greggannicott
    I've just made the switch to Ubuntu on my main PC and I've been looking for a media player that can:

        - Play all the usual video formats
        - Rate (and ideally, tag) each file
        - Display thumbnails for each file

    Other than that there isn't much I'm after. Banshee comes close, but doesn't display thumbnails. I've Google'd lots but I'm running out of search terms to try. Does anyone have any suggestions? Cheers!


  • Send command through PuTTY automatic login

    - by Arthur
    I am using the following to log in automatically to a remote server and then run the commands listed in a commands.txt, like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password -m C:\Path to\command.txt

    commands.txt contains the following:

        wakeonlan -i broadcast adress Macadress

    However, when I try this a new window for PuTTY appears, but it closes and exits instantly after login. As a result, I cannot see the output of the command(s). After several tests, it appears that the command is not executed, because my computer doesn't wake on LAN. I don't understand what's going on here. I cannot use the plink.exe program because I cannot connect with a public key (too many remote sites to do all the key registration in PuTTY). Can someone help me with this? Or can I use another program to make the SSH connection and send commands from a script on Windows?

    Edit: I also tried putting a bash script with the same command on the remote server and executing it from the session like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password \home\user\script.sh

    I have the same problem... Need help please :/
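
    One hedged note: plink does not require public-key auth; it accepts the same -pw option and, unlike a PuTTY window, prints the remote output to the console it was started from. A sketch (host, credentials, and MAC address are placeholders):

        REM -batch suppresses interactive prompts; the host key must already
        REM be cached, e.g. by connecting once with PuTTY and accepting it.
        plink.exe -ssh -batch user@adreese.ip -pw Password "wakeonlan -i 192.168.1.255 AA:BB:CC:DD:EE:FF"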


  • How to set which IP to use for an HTTP request?

    - by GetFree
    This is probably a silly question. I'm doing some HTTP requests using wget from the command line, and I want those connections to be made through one specific IP of the 4 IPs my server has. Those HTTP requests go to one specific range of IPs, so I only want those to be routed differently. The 4 interfaces in my server are eth0, eth0:0, eth0:1, eth0:2. I tried the following command:

        route add -net 192.164.10.0/24 dev eth0:0

    But when I look at the routing table it says:

        Destination     Gateway         Genmask         Flags MSS Window irtt Iface
        192.164.10.0    0.0.0.0         255.255.255.0   U     0   0      0    eth0

    The interface is set to eth0, not eth0:0 as my command says. What am I doing wrong?
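
    A hedged explanation and two options: aliases like eth0:0 are not separate devices as far as the routing table is concerned, so select the source address instead, either per-route or per-request (the local IP below is a placeholder for the one configured on eth0:0):

        # Pin the source address used for that destination range
        ip route add 192.164.10.0/24 dev eth0 src 203.0.113.10

        # Or bind just the wget requests to that local address
        wget --bind-address=203.0.113.10 http://192.164.10.5/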


  • Galera install failure on Fedora 18

    - by ehime
    I've been trying to reinstall MariaDB and have been encountering multiple issues.

        $ yum install Mariadb-Galera-server
        Error: Package: MariaDB-Galera-server-5.5.29-1.i386 (mariadb)
               Requires: galera
               Available: galera-23.2.4-1.rhel5.i386 (mariadb)
                   galera = 23.2.4-1.rhel5
        You could try using --skip-broken to work around the problem
        You could try running: rpm -Va --nofiles --nodigest

    There is a requirement that libssl.so.6 and libcrypto.so.6 are installed; these DO show up in my /lib64 and /lib, though as linked items.

    /usr/lib:

        -rwxr-xr-x  1 root root 1356700 Nov 23  2010 libcrypto.so.0.9.8e
        lrwxrwxrwx  1 root root      19 Jun 28 12:03 libcrypto.so.6 -> libcrypto.so.0.9.8e
        -rwxr-xr-x. 1 root root  394272 Mar 18 14:22 libssl.so.1.0.1e
        lrwxrwxrwx  1 root root      16 Jun 28 12:03 libssl.so.6 -> libssl.so.0.9.8e

    /usr/lib64:

        -rwxr-xr-x 1 root root 1849680 Mar 18 14:21 libcrypto.so.1.0.1e
        lrwxrwxrwx 1 root root      26 Jun 28 11:54 libcrypto.so.6 -> /lib64/libcrypto.so.1.0.1e
        -rwxr-xr-x 1 root root  421712 Mar 18 14:21 libssl.so.1.0.1e
        lrwxrwxrwx 1 root root      23 Jun 28 11:54 libssl.so.6 -> /lib64/libssl.so.1.0.1e

    So the deps SHOULD be met. Trying yum install galera returns this:

        Resolving Dependencies
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Restarting Dependency Resolution with new changes.
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Finished Dependency Resolution

    No errors, but no install either...? Let's try wget and rpm'ing the package instead, I guess:

        $ wget https://launchpad.net/galera/2.x/23.2.4/+download/galera-23.2.4-1.rhel5.x86_64.rpm
        $ rpm -ivh galera-23.2.4-1.rhel5.x86_64.rpm

    This issues the dreaded error:

        Failed dependencies:
            libcrypto.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64
            libssl.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64

    But we saw above these libraries are here =( What's going on?? Is openssl not installed?

        $ yum install openssl
        Loaded plugins: langpacks, presto, refresh-packagekit
        Package 1:openssl-1.0.1e-4.fc18.x86_64 already installed and latest version
        Nothing to do

    It's there.... ??? wth Fedora?
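
    Two hedged observations: rpm resolves dependencies against its package database, not the filesystem, so hand-made symlinks never satisfy libssl.so.6()(64bit); and the repo is clearly resolving i386 packages on an x86_64 box. Something along these lines may help (the compat package name is as found on RHEL 6 era systems; verify it exists for this release):

        # Install the compatibility OpenSSL that *provides* libssl.so.6 and
        # libcrypto.so.6 in the RPM database
        yum install openssl098e

        # See what actually provides the missing soname, and pull in the
        # 64-bit packages explicitly
        yum provides 'libssl.so.6()(64bit)'
        yum install galera.x86_64 MariaDB-Galera-server.x86_64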


  • Hung Java JVM failing to respond to kill -3

    - by Hans
    I have a Java VM that is hanging "randomly". I quote the "randomly" bit because there is obviously a reason for the VM hanging, but the hang does not occur periodically. We have the same software running in different customer environments, and in those environments the JVM is not hanging. When I attempt to troubleshoot the hang, the process still exists but shows zero CPU utilization. I then attempt to execute kill -3, and the kill command hangs. No JVM thread dump is produced. I have spent time instrumenting the code to periodically log the thread stack traces, hoping to catch the JVM in a state that would indicate where the issue lies, but so far this attempt has not borne much fruit. Unfortunately I have not been able to reproduce this issue in my lab environment, so I am limited by what can be done at the customer site. The OSes in question are Red Hat Enterprise 5.4 and SUSE 10, running Java version 1.6.0_05-b13. Has anyone had this problem? Any ideas on why kill -3 is failing to produce a Java thread dump? Thanks!
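
    When kill -3 gets no response, a couple of hedged fallbacks from the standard JDK 6 era tooling:

        # Force a thread dump by attaching as a debugger rather than relying
        # on the JVM's signal handler; works against hung processes
        jstack -F <pid>

        # Or capture a core with gdb's gcore and inspect it offline
        gcore <pid>
        jstack $JAVA_HOME/bin/java core.<pid>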


  • dnsmasq Client TTL

    - by user548971
    I have a situation where my hosts file is constantly changing. Because of this I don't want clients to cache IP addresses resolved using the hosts file. Here is the command that starts dnsmasq for me:

        /usr/sbin/dnsmasq -K -R -y -Z -b -E -S 8.8.8.8 -l /tmp/dhcp.leases -r /tmp/resolv.conf.auto --stop-dns-rebind --rebind-localhost-ok --dhcp-range=lan,192.168.2.2,192.168.2.249,255.255.255.0,12h -2 eth0

    Looking at http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html, I see that the -T option has this description:

        -T, --local-ttl=<time>
            When replying with information from /etc/hosts or the DHCP leases
            file dnsmasq by default sets the time-to-live field to zero,
            meaning that the requester should not itself cache the
            information. This is the correct thing to do in almost all
            situations. This option allows a time-to-live (in seconds) to be
            given for these replies. This will reduce the load on the server
            at the expense of clients using stale data under some
            circumstances.

    My command doesn't have the -T option. Do I need it, or does dnsmasq default the TTL to zero without it?
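
    The quoted passage already answers the question (dnsmasq defaults the TTL to zero; -T only raises it), and it is easy to verify from a client; a sketch, with the dnsmasq address and hostname as placeholders:

        # The second field of the answer line is the TTL, which should be 0
        # for names served from the hosts file when -T is absent
        dig @192.168.2.1 somehost.lan +noall +answer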


  • Resolve `strace` file number to a filename

    - by Mike Pennington
    I am debugging a problem where MoinMoin on CentOS is throwing a permissions error, but I can't track down where the problematic file / directory is. I ran strace -vp <pid> on the Apache pid; when I have the problem I see this:

        epoll_wait(10, {{EPOLLIN, {u32=3487534344, u64=140367313734920}}}, 2, 10000) = 1
        accept4(6, {sa_family=AF_INET6, sin6_port=htons(52621), inet_pton(AF_INET6, "::ffff:105.193.30.91", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28], SOCK_CLOEXEC) = 11
        ## Later on...
        read(7, 0x7fffa658ad7f, 1) = -1 EAGAIN (Resource temporarily unavailable)

    However, since Apache is already running, I see no corresponding open() on the file referred to as 7; thus I see the permissions problem, but I still don't know which file is the problem. I know I could try to catch all the file opens when I respawn Apache, but I'm hoping there is a way to map file 7 to a real filename... is there a way to do this?
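
    There is: the kernel exposes every open descriptor under /proc, and lsof can do the same; a quick sketch:

        # Map fd 7 of the traced process to a path
        ls -l /proc/<pid>/fd/7

        # Or with lsof (-a ANDs the conditions: this PID *and* descriptor 7)
        lsof -a -p <pid> -d 7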


  • Tar dereference only 1 level

    - by Bart van Heukelom
    I use the following pseudo-script to create a TAR of my installed software:

        mkdir tmp
        ln -s /path/to/app1/bin tmp/app1
        ln -s /and/path/going/to/the-app-2 tmp/app2
        tar -c --dereference -f apps.tar tmp

    I need the --dereference option here to follow the links I just made in tmp. The reason I make the links in the first place is to store the directories with a different name in the archive than they have on the filesystem. Until now it has worked fine. However, I now have the situation that /path/to/app1 also contains links, and those I don't want to follow. Is this possible with some changes to the tar command? Or do I need to completely switch around the way I build the archive?
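
    One hedged alternative that drops --dereference entirely: use bind mounts instead of symlinks at the top level, so tar sees real directories under tmp while symlinks inside the apps stay symlinks (requires root):

        mkdir -p tmp/app1 tmp/app2
        sudo mount --bind /path/to/app1/bin tmp/app1
        sudo mount --bind /and/path/going/to/the-app-2 tmp/app2
        tar -cf apps.tar tmp    # no --dereference needed
        sudo umount tmp/app1 tmp/app2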


  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory with the VM - the goal here is to run the Rails server on the VM, but host the files on OS X (so they are automatically backed up, and so I can use tools like Textmate to develop with). Everything seems to work with the shared directory - my Debian user can read, write, and execute files.

    However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc...) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work! This persists until I make a change to the CSS files and it has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works... It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice... any ideas?

    Update: I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files, and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the "operation not permitted" error...
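
    A hedged workaround, if the cache simply needs to stay off the Parallels share: keep the project where it is but point the app's tmp directory at native VM storage (the path below is hypothetical, and note this discards anything currently under tmp):

        # From the project root inside the VM
        mkdir -p /var/tmp/myapp-tmp
        rm -rf tmp && ln -s /var/tmp/myapp-tmp tmp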


  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with:

        with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30):
            sudo('apt-get -qy update')
            sudo('apt-get -qy install --no-install-recommends mdadm')  # don't install postfix
            # etc...

    The apt-get update appears to run fine and gives no errors; however (2/3 of the time or so) installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm I get the same error. Running apt-get update by hand, then the package installs fine. Any ideas on what might be happening, or ideas for debugging?
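
    A hedged guess at the cause: on a freshly booted EC2 Ubuntu instance, cloud-init can still be running its own apt jobs when fabric connects, so the script's update races it and the package lists end up half-populated. One defensive sketch is to make the install retry, re-running update between attempts:

        # Inside the settings block; shell-level retry so a racing cloud-init
        # run only delays the install instead of failing it
        sudo('until apt-get -qy install --no-install-recommends mdadm; do '
             'sleep 10; apt-get -qy update; done')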


  • VSFTPD: Cannot figure this thing out...

    - by A Wizard Did It
    Alright, I've been giving this the best that I can, reading through various tutorials on Google, but I cannot seem to get vsftpd running the way I want. For a short while I had it working with one account, but then that stopped and I haven't been able to get it to work since. I've since reformatted and reinstalled Ubuntu 10.04 LTS. I used apt-get install vsftpd and that's where I am now... I'd really appreciate it if anyone could help me understand exactly how this is supposed to work... How do I add FTP accounts and set their home directory to something like /var/www/public_html?
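
    A minimal sketch of the local-users approach (directives are stock vsftpd.conf ones; the account name is a placeholder):

        # In /etc/vsftpd.conf: let local system users log in, allow uploads,
        # and jail each user in their home directory
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES

        # Then create an FTP account whose home is the web root (note: the
        # login shell must be listed in /etc/shells for PAM to accept it)
        sudo useradd -d /var/www/public_html -s /bin/sh ftpuser
        sudo passwd ftpuser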


  • Does changing web hosts (changing a domain's nameservers) affect the private nameservers / glue records created under that domain?

    - by Kris
    We currently have a virtual dedicated server with GoDaddy and have 4 domains under it. I ended up creating private nameservers under, say, mydomain_a.com, and have ns1.mydomain_a.com and ns2.mydomain_a.com as the nameservers for the other 3 domains. Now we're thinking of switching web hosts (not the domain registrar, just the host), which means I have to change mydomain_a.com's nameservers to the new host. Will that affect or mess with the other 3 domains still pointing to ns1.mydomain_a.com and ns2.mydomain_a.com? Will that affect the private nameservers / glue records in any way?

    Currently:

        domain: mydomain_a.com
        nameservers (GoDaddy): ns1.mydomain_a.com, ns2.mydomain_a.com

        domain: mydomain_b.com
        nameservers (GoDaddy): ns1.mydomain_a.com, ns2.mydomain_a.com

    After the change:

        domain: mydomain_a.com
        nameservers (Other Host): ns1.some_other_host_ns.com, ns2.some_other_host_ns.com

        domain: mydomain_b.com   <- this is my question: would this be affected?
        nameservers (GoDaddy): ns1.mydomain_a.com, ns2.mydomain_a.com
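
    A hedged way to watch the part that actually matters: the glue records live in the parent .com zone, separately from whatever nameservers mydomain_a.com itself uses, so you can query a TLD server directly before and after the switch:

        # Non-recursive query to a .com server; if this A record (the glue)
        # is unchanged after the move, mydomain_b.com keeps resolving
        dig +norecurse @a.gtld-servers.net ns1.mydomain_a.com A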


  • How to keep source frame rate with mencoder/ffmpeg?

    - by Sandra
    I would like to crop and rotate a video, and then encode it to MP4 or MKV:

        mencoder video.mp4 -vf rotate=1,crop=720:1280:0:0 -oac pcm -ovc x264 \
            -x264encopts preset=veryslow:tune=film:crf=15:frameref=15:fast_pskip=0:threads=auto \
            -lavfopts format=matroska -o test.mkv

    But when I do the above encoding, the frame rate is way too fast. The encoding options were something I found, so I don't know if they are the problem.

    Question: All I want is to crop and rotate the video, and keep the audio/video quality as good as possible. Has anyone tried this?
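
    A hedged alternative using ffmpeg, which carries the source frame rate through by default (rotation and crop values copied from the question; transpose=1 is a 90-degree clockwise rotation, which may need adjusting to match mencoder's rotate=1):

        ffmpeg -i video.mp4 -vf "transpose=1,crop=720:1280:0:0" \
            -c:v libx264 -preset veryslow -tune film -crf 15 \
            -c:a copy test.mkv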


  • winbind not working

    - by Yon
    I'm trying to set up winbind with an Active Directory running on Win2003. This works:

        net rpc user -S SOMEDOMAIN -U Administrator
        Password:
        Administrator
        ASPNET
        Demo
        Guest
        IUSR_SERVER20
        IWAM_SERVER20
        krbtgt
        RemoteUser
        SUPPORT_388945a0

    This does not:

        wbinfo -u
        Error looking up domain users

    From the winbindd log:

        [2012/05/31 16:45:38, 1] nsswitch/winbindd_ads.c:ads_cached_connection(128)
          ads_connect for domain SOMEDOMAIN failed: Operations error
        [2012/05/31 16:46:38, 1] nsswitch/winbindd_util.c:trustdom_recv(230)
          Could not receive trustdoms

    ADS is not working with this domain. Why is winbind trying to use it instead of RPC? How can I force it to use only RPC and get all of this to work?
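
    A hedged pointer: winbind chooses the ADS path when smb.conf declares an ADS role (security = ads, typically with a realm set); switching to classic NT4-style membership forces the RPC path that already works here. A sketch only; the machine would need to re-join with "net rpc join" afterwards:

        # /etc/samba/smb.conf
        [global]
            workgroup = SOMEDOMAIN
            security = domain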

