Search Results

Search found 24755 results on 991 pages for 'linux mom'.

Page 396/991

  • Where does netstat get the process name?

    - by tjameson
    I am developing a Node application and there is an option to set the process title (process name). This only sets it in some tools (like ps and top), but not in htop or netstat. I found this article explaining how most applications do it, but the name still doesn't change in netstat. That led me to wonder where those programs get the process name. Would they be reading it from /proc/##/cmdline? (## being the PID of the process.) I figure messing with things in /proc is a bad idea (and probably not possible), so if this is where those programs get it, is there a way to change it?
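    A quick way to see the two name sources most tools read is to inspect procfs directly. This is a sketch, not a statement of what netstat itself does: comm is the kernel's short task name and cmdline is the full argument vector, and which one a given tool prefers varies.
      # Pick any PID you own; $$ is the current shell.
      PID=$$
      cat /proc/$PID/comm                       # short task name (15 chars max)
      tr '\0' ' ' < /proc/$PID/cmdline; echo    # full argv (NUL-separated on disk)
      # The short name can be changed at runtime without touching argv:
      echo mytitle > /proc/$PID/comm
      cat /proc/$PID/comm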

    Read the article

  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot) and a web server should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby. I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition), but the web server user must be able to read it automatically after booting. Nobody is allowed to log in as the web server user (or any other user), so no one else can read the contents. My questions are: Is this possible at all? Because I give away the whole VM, everybody could mount its virtual discs and read them (except the encrypted one). Is it then possible to find the key the web server user needs to decrypt the files, and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed to decrypt must be included somewhere in the VM, or else the web server cannot start automatically. Maybe I'm completely wrong and you have another tip for securing the source code.
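    For the auto-unlock part described above, a minimal dm-crypt/LUKS sketch looks like the following; the device, key-file path and mount point are assumptions, and (as the question itself notes) the key file still has to ship inside the VM, so this hides the code from casual copying rather than from a determined attacker.
      # Create a key file and an encrypted volume unlocked by it (hypothetical paths):
      dd if=/dev/urandom of=/root/app.key bs=512 count=4 && chmod 400 /root/app.key
      cryptsetup luksFormat /dev/vdb1 /root/app.key
      cryptsetup luksOpen --key-file /root/app.key /dev/vdb1 appdata
      mkfs.ext4 /dev/mapper/appdata
      # Unlock and mount automatically at boot:
      #   /etc/crypttab:  appdata  /dev/vdb1  /root/app.key  luks
      #   /etc/fstab:     /dev/mapper/appdata  /srv/app  ext4  defaults  0  2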

    Read the article

  • How can I avoid SSH's host verification for known hosts?

    - by shantanuo
    I get the following prompt every time I try to connect to a server using SSH. I type "yes", but is there a way to avoid this?
      The authenticity of host '111.222.333.444 (111.222.333.444)' can't be established.
      RSA key fingerprint is f3:cf:58:ae:71:0b:c8:04:6f:34:a3:b2:e4:1e:0c:8b.
      Are you sure you want to continue connecting (yes/no)?
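    Two client-side ways to skip the prompt, sketched below; accept-new needs OpenSSH 7.6 or newer, the user name is a placeholder, and both options trade away some protection against key changes:
      # Per invocation:
      ssh -o StrictHostKeyChecking=accept-new user@111.222.333.444
      # Or for a group of hosts in ~/.ssh/config:
      #   Host 111.222.*
      #       StrictHostKeyChecking accept-new
      # Older clients only have the blunter form, which also ignores changed keys:
      #   StrictHostKeyChecking no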

    Read the article

  • Processes spawned by taskset not respecting environment variables

    - by jonesy16
    I've run into an issue where an Intel-compiler-generated program that I'm running with taskset has been putting its temporary files into the working directory instead of /tmp (defined by the environment variable TMPDIR). If run by itself, it works correctly. If run with taskset (e.g. taskset -c 0 <program>), it seems to completely ignore the TMPDIR environment variable. I then verified this with a quick bash script. Contents of test.sh:
      #!/bin/bash
      echo $TMPDIR
    When run by itself:
      $ export TMPDIR=/tmp
      $ test.sh
      /tmp
    When run through taskset:
      $ export TMPDIR=/tmp
      $ taskset -c 1 test.sh
      ""
    Another test: if I export the TMPDIR variable inside my script and then use taskset to spawn a new process, it doesn't know about that variable:
      #!/bin/bash
      export TMPDIR=/tmp
      taskset -c 1 sh -c export
    When run, the list of exported variables does not include TMPDIR. It works correctly with any other exported environment variable. If I diff the output of export and taskset -c 1 bash -c export, I see that there are 4 changes: the taskset-spawned export doesn't have LD_LIBRARY_PATH or NLSPATH (an Intel compiler variable), SHLVL is 3 instead of 1, and TMPDIR is missing. Can anyone tell me why?
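    A one-liner that reproduces the comparison described above (a diagnostic sketch; which variables differ will depend on the system and shell startup files):
      export TMPDIR=/tmp
      # Compare the environment a child process sees with and without taskset:
      diff <(env | sort) <(taskset -c 1 env | sort)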

    Read the article

  • Can a named (bind) crash make a server unreachable?

    - by giorgio79
    My server recently became unreachable, and after restarting it, a named error was the last line I found in /var/log/messages from before the restart:
      Jun 26 00:15:06 host named[1303]: error (network unreachable) resolving 'dlv.isc.org/DNSKEY/IN': 2001:500:71::29#53
      Jun 26 06:38:55 host kernel: imklog 5.8.10, log source = /proc/kmsg started.
      Jun 26 06:38:55 host rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1294" x-info="http://www.rsyslog.com"] start
      Jun 26 06:38:55 host kernel: Initializing cgroup subsys cpuset
    Can a named crash make a server unreachable? I doubt it, as I assume I should still be able to log in with SSH via the IP address, but the server did not respond... So I am making heavy guesses here.
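    One hedged check, since a lone resolver error rarely takes a machine offline but an out-of-memory event that kills several daemons can: grep the log window before the restart for OOM activity, and confirm what is actually listening now.
      grep -iE 'oom|out of memory|killed process' /var/log/messages
      ss -tlnp      # or: netstat -tlnp  -- which daemons are listening after the reboot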

    Read the article

  • How to jump to a particular flag in a Unix manpage?

    - by dotancohen
    When reading a Unix manpage in the terminal, how can I easily jump to the description of a particular flag? For instance, I need to know the meaning of the -o flag for mount. I run man mount and want to jump to the place where -o is described. Currently I search with /-o, but that option is mentioned in several places before the section that actually describes it, so I must jump around quite a bit. Thanks.
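    One approach is to anchor the search to the indented option entry rather than to any occurrence of "-o". This is a sketch that assumes less is the pager; the exact indentation differs between pages:
      # Inside the page, search for the flag at the start of an indented line:
      #   /^ +-o
      # Or open the page already positioned at that entry (man-db's -P sets the pager):
      man -P 'less -p "^ +-o"' mount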

    Read the article

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I have recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine, except that the requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. I used the same configuration on an Ubuntu Server 10.04 with lighttpd 1.4.26 and it worked. Here is my config.
    lighttpd.conf:
      server.modules = (
          "mod_access",
          "mod_alias",
          "mod_accesslog",
          "mod_compress",
      )
      server.document-root = "/var/www/"
      server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
      server.errorlog = "/home/log/lighttpd/error.log"
      index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
      accesslog.filename = "/home/log/lighttpd/access.log"
      url.access-deny = ( "~", ".inc" )
      static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
      server.pid-file = "/var/run/lighttpd.pid"
      include_shell "/usr/share/lighttpd/create-mime.assign.pl"
      include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
    conf-enabled/10-fastcgi.conf:
      server.modules += ( "mod_fastcgi" )
      fastcgi.server = (
          "/" => (
              (
                  "min-procs" => 1,
                  "check-local" => "disable",
                  "host" => "127.0.0.1",  # local
                  "port" => 3000
              ),
          )
      )
    Any idea?
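    One hedged debugging aid (not a fix): lighttpd can trace how each request is handled, which shows whether the fastcgi requests ever reach the access-logging stage at all.
      # In lighttpd.conf (verbose; remove after debugging):
      #   debug.log-request-handling = "enable"
      # then restart and watch the error log while issuing a fastcgi request:
      tail -f /home/log/lighttpd/error.log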

    Read the article

  • zlib/libxml2 duplicate package?

    - by Fusion
    I've been updating my Amazon EC2 micro instance every month until now. When I try to run "yum update" I receive this error:
      zlib-1.2.5-7.11.amzn1.x86_64 has installed conflicts libxml2 < ('0', '2.7.7', None): libxml2-2.7.6-4.12.amzn1.x86_64
      zlib-1.2.5-7.11.amzn1.x86_64 is a duplicate with zlib-1.2.3-27.9.amzn1.x86_64
    Full yum update output: http://pastebin.com/Dfq0yphN
    I've tried to update zlib and libxml2 separately. zlib gives the same "duplicate" error; libxml2 gives:
      Transaction Check Error: package libxml2-2.7.8-10.24.amzn1.x86_64 is already installed
    What can I do?
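    A common cleanup path for duplicates left behind by an interrupted yum run, sketched below; it assumes the yum-utils package and is worth reviewing carefully before removing anything:
      sudo yum install yum-utils
      sudo yum-complete-transaction      # finish or roll back any half-done transaction
      sudo package-cleanup --dupes       # list the duplicate packages
      sudo package-cleanup --cleandupes  # remove the older of each duplicate pair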

    Read the article

  • Apache can't be reached from outside of my LAN

    - by Javier Martinez
    I fixed it in the PORTS TRIGGER menu of my router. Thank you anyway.
    I have a weird problem related to (I think) my cable router and my configured vhosts in Apache2. The point is that I can't access any of my configured vhosts from outside my LAN if I set the Apache HTTP port to 80 and add a NAT rule for it. However, if I set my Apache port to 81 (or anything else) with its respective NAT rule on my router, it works. My router is an ARRIS TG952S and I am using Apache/2.2.22 (Debian).
    ports.conf:
      NameVirtualHost *:80
      Listen 80
    vhost1.mydomain.net.conf:
      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          ServerName vhost1.mydomain.net
          ServerAlias vhost1.mydomain.net www.vhost1.mydomain.net
    vhost2.mydomain.net.conf:
      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          ServerName vhost2.mydomain.net
          ServerAlias vhost2.mydomain.net www.vhost2.mydomain.net
    DNS records (using FreeDNS) are:
      mydomain.net        --> pointing to another server
      vhost1.mydomain.net --> pointing to my server
      vhost2.mydomain.net --> pointing to my server
    iptables -L -n:
      Chain INPUT (policy ACCEPT)
      target                    prot opt source      destination
      fail2ban-apache-noscript  tcp  --  0.0.0.0/0   0.0.0.0/0    multiport dports 80,443
      fail2ban-apache           tcp  --  0.0.0.0/0   0.0.0.0/0    multiport dports 80,443
      fail2ban-ssh              tcp  --  0.0.0.0/0   0.0.0.0/0    multiport dports 22
      Chain FORWARD (policy ACCEPT)
      target     prot opt source      destination
      Chain OUTPUT (policy ACCEPT)
      target     prot opt source      destination
      Chain fail2ban-apache (1 references)
      target     prot opt source      destination
      RETURN     all  --  0.0.0.0/0   0.0.0.0/0
      Chain fail2ban-apache-noscript (1 references)
      target     prot opt source      destination
      RETURN     all  --  0.0.0.0/0   0.0.0.0/0
      Chain fail2ban-ssh (1 references)
      target     prot opt source      destination
      RETURN     all  --  0.0.0.0/0   0.0.0.0/0
    Thank you.

    Read the article

  • How do I limit concurrent sftp / port forwarding logins

    - by Kyoku
    I have SSH set up so my users can only access SFTP and port forwarding. How can I limit the number of concurrent logins on a per-user basis? In my sshd_config I have UsePAM set to yes, and in /etc/security/limits.conf I have:
      username - maxlogins 1
    I also tried:
      username hard maxlogins 1
    Neither of these works, and the users can still log in multiple times.
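    One thing worth checking (a guess, not a confirmed fix): maxlogins is enforced by pam_limits, so the sshd PAM stack has to load it for limits.conf to apply to SSH sessions at all.
      grep pam_limits /etc/pam.d/sshd
      # If nothing is found, a line like this in /etc/pam.d/sshd enables it:
      #   session    required     pam_limits.so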

    Read the article

  • How long does badblocks take on a 1TB drive?

    - by Steven Don
    I'm running badblocks (or rather "e2fsck -c") on a 1TB drive, and if the progress indicator is any indication (no pun intended), it's going to take almost forever to complete. Right now it says 0.01% done, 30:20 elapsed, which would mean the thing will take 17 weeks or so to complete, which seems rather excessive in my book. Is that a normal amount of time for such a check to take, or is it simply that my suspicions are correct and the drive is failing, causing the check to take only slightly less than an eternity? I found this question here, but that pertains to the number of passes done.

    Read the article

  • How to start a service at boot time in Ubuntu 12.04, run as a different user?

    - by Alex
    I have a server, ClueReleaseManager, which I have installed on an Ubuntu 12.04 system under a separate user (named pypi), and I want to be able to start this server at startup. I have already tried to create a simple bash script with some commands (log in as user pypi, use a virtual Python environment, start the server), but this does not work properly: either the terminal crashes, or when I ask for the status of the service it is started but I am left logged in as user pypi...? So, here is the question: what are the steps to take to make sure the ClueReleaseManager service properly starts up at boot time, and can be controlled (start/stop/...) at runtime, while the service runs as user pypi? Additional information and constraints:
    - I want to do this as simply as possible.
    - No other packages/programs should have to be installed.
    - I am not familiar with the Ubuntu 12.04 init structure.
    - All the information I found on the web is very sparse, confusing, incorrect, or does not apply to my case of running a service as a user other than root.
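    Since Ubuntu 12.04 uses Upstart, one route that needs no extra packages is a small job file; the sketch below is hypothetical (job name, virtualenv path, and the setuid/setgid stanzas depend on the local layout and Upstart version).
      # /etc/init/cluereleasemanager.conf  (hypothetical job file)
      description "ClueReleaseManager"
      start on runlevel [2345]
      stop on runlevel [016]
      respawn
      setuid pypi
      setgid pypi
      # Assumed virtualenv path; point this at the real entry point:
      exec /home/pypi/venv/bin/cluereleasemanager
    With that in place, sudo start cluereleasemanager, sudo stop cluereleasemanager and status cluereleasemanager give the runtime control asked for.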

    Read the article

  • How to exit vncconfig process properly

    - by Stan
    On CentOS 5.7, when I open vncconfig, a tray icon shows up on the taskbar. Hitting close or the x button doesn't exit vncconfig; the tray icon is still there. The next time, if I open another vncconfig rather than clicking the one on the taskbar, another vncconfig tray icon is added to the taskbar. Other than using kill $pid to remove the tray icon, is there any other way to exit the vncconfig process properly?
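    Two small workarounds, sketched below: pkill just automates the kill by name, and -nowin assumes a vncconfig build that supports running without its window (clipboard transfer keeps working, with no tray icon to leak).
      pkill -u "$USER" vncconfig     # same effect as kill, matched by name
      vncconfig -nowin &             # start it without the window/icon next time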

    Read the article

  • Is there a way to make nautilus display the "recently used" files and directories?

    - by Peltier
    Is there a way to make Nautilus display the "recently used" files and directories, just like the "open file" dialog does? Just to make my question clearer, here are two screenshots:
    - The GTK open file dialog, showing the recently used items
    - A Nautilus window, which doesn't offer to display recently used items
    EDIT: This has been added as a feature request to Nautilus. Don't hesitate to make your voice heard if you want it to happen!

    Read the article

  • Iptables NAT logging

    - by Gerard
    I have a box set up as a router using iptables (masquerade), logging all network traffic. The problem: connections from LAN IPs to the WAN show up fine, i.e.
      SRC=192.168.32.10 - DST=60.242.67.190
    but traffic coming from the WAN to the LAN shows the WAN IP as the source and the router's IP as the destination, then the router to the LAN IP, i.e.
      SRC=60.242.67.190 - DST=192.168.32.199
      SRC=192.168.32.199 (router) - DST=192.168.32.10
    How do I configure it so that it logs the conversations correctly?
      SRC=192.168.32.10 - DST=60.242.67.190
      SRC=60.242.67.190 - DST=192.168.32.10
    Any help appreciated, cheers.
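    One hedged suggestion: hook the LOG rule into the FORWARD chain, which sees forwarded packets after DNAT and before SNAT/masquerade, so both sides appear with the addresses the end hosts themselves use. The conntrack table also keeps both the original and the reply tuple for each conversation.
      iptables -I FORWARD -j LOG --log-prefix "FWD: "
      cat /proc/net/ip_conntrack     # or: conntrack -L  (conntrack-tools)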

    Read the article

  • Change permissions on /proc/net/ip_conntrack on Ubuntu Server 9.10

    - by bjarkef
    Hi, I have a script that needs to extract certain information from the /proc/net/ip_conntrack file once in a while. I do not wish to run this script as the root user. The default permissions for the file are:
      $ ls -lah /proc/net/ip_conntrack
      -r--r----- 1 root root 0 2010-03-28 12:18 /proc/net/ip_conntrack
    I can change them with:
      sudo chmod o+r /proc/net/ip_conntrack
    But that does not stick after a reboot. Is there some configuration file for file permissions in the /proc directory in Ubuntu Server 9.10? Or do I just have to stick a chmod line in some startup script?
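    Two common workarounds, sketched below (the user name is a placeholder): either make the chmod part of the boot sequence, or let the script read the file through a narrowly scoped sudo rule.
      # 1) /etc/rc.local, before the final "exit 0":
      #      chmod o+r /proc/net/ip_conntrack
      # 2) visudo entry allowing only a passwordless read of that one file:
      #      myuser ALL=(root) NOPASSWD: /bin/cat /proc/net/ip_conntrack
      #    and in the script:
      #      sudo cat /proc/net/ip_conntrack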

    Read the article

  • How can one associate a 3ware controller with the corresponding /dev/tw?? device?

    - by barbaz
    I have a few 3ware RAID controllers installed in a system. Is there any way to figure out the mapping between the following identifiers, each of which describes the very same RAID controller in its own way?
    - The controller id reported by tw_cli (e.g. c0, c1, c2, ...)
    - The corresponding device nodes that allow smartctl access via the 3ware driver (e.g. /dev/twa0, /dev/twa1, /dev/twl0)
    - The block device presented to the system representing a RAID unit (/dev/sda, /dev/sdb, ...)
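    A starting point rather than a guaranteed mapping: sysfs ties each block device to its SCSI host and driver, and the tw* character devices are numbered per controller in probe order, so comparing the two often lines up. Verify against the serial numbers tw_cli reports before relying on it.
      ls -l /sys/block/sd*/device                 # which SCSI host each sdX hangs off
      cat /sys/class/scsi_host/host*/proc_name    # driver (3w-9xxx, 3w-sas, ...) per host
      tw_cli show                                 # controller ids and models as tw_cli sees them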

    Read the article

  • Ubuntu can't install an older version of a package

    - by Trevor Newhook
    When I try to do an apt-get install, I keep getting an error:
      Depends: libgtk-3-common (= 3.4.1-0ubuntu1) but 3.4.2-0ubuntu0.4 is to be installed
    When I run sudo apt-get -f install, I get several warnings like
      dpkg: warning: files list file for package 'XXX' missing, assuming package has no files currently installed.
    and then:
      Preparing to replace libgtk-3-bin 3.4.1-0ubuntu1 (using .../libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb) ...
      Adding 'diversion of /usr/sbin/update-icon-caches to /usr/sbin/update-icon-caches.gtk2 by libgtk-3-bin'
      dpkg-divert: error: rename involves overwriting `/usr/sbin/update-icon-caches.gtk2' with different file `/usr/sbin/update-icon-caches', not allowed
      dpkg: error processing /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb (--unpack):
       subprocess new pre-installation script returned error exit status 2
      Errors were encountered while processing:
       /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    I'm not sure why it's complaining about a newer version of a package, but any help would be appreciated.
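    The step that actually fails is the diversion rename, so one hedged cleanup (back up first, and only if the diversion really is stale) is to inspect it, remove it, and let apt retry:
      sudo dpkg-divert --list | grep update-icon-caches
      sudo dpkg-divert --rename --remove /usr/sbin/update-icon-caches
      sudo apt-get -f install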

    Read the article

  • Installing the latest Apache on CentOS

    - by fivelitresofsoda
    Hi, I'm trying to install the newest version of Apache on my CentOS server. I did the following:
      Download:
        $ wget http://httpd.apache.org/path/to/latest/version/
      Extract:
        $ gzip -d httpd-2_0_NN.tar.gz
        $ tar xvf httpd-2_0_NN.tar
      Configure:
        $ ./configure
      Compile:
        $ make
      Install:
        $ make install
      Test:
        $ PREFIX/bin/apachectl start
    That all worked except the last step: when I type apachectl start it says 'command not found'. I ran this command from /usr/local/apache2/bin/, where it is installed, but no cigar. Any idea what I am doing wrong? Thanks.
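    PREFIX in the Apache docs is a placeholder for the --prefix given at configure time (here the default /usr/local/apache2), so apachectl has to be called by its full path or put on PATH; being in its directory is not enough unless you prefix it with ./. A sketch:
      /usr/local/apache2/bin/apachectl start
      # or, from inside /usr/local/apache2/bin:
      ./apachectl start
      # or, for the current shell session:
      export PATH=$PATH:/usr/local/apache2/bin
      apachectl start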

    Read the article

  • Configuring postfix with Gmail

    - by MultiformeIngegno
    This is what I did:
      sudo apt-get install postfix
    This is my /etc/postfix/main.cf:
      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      # Debian specific: Specifying a file name will cause the first
      # line of that file to be used as the name. The Debian default
      # is /etc/mailname.
      #myorigin = /etc/mailname
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      readme_directory = no
      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_use_tls=no
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      myhostname = tsXXX561.server.topcloud.it
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination =
      relayhost = [smtp.gmail.com]:587
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = loopback-only
      default_transport = smtp
      relay_transport = smtp
      inet_protocols = all
      # SASL Settings
      smtp_use_tls=yes
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_sasl_tls_security_options = noanonymous
      smtp_tls_CAfile = /etc/postfix/cacert.pem
    Then I created the file /etc/mailname with my hostname as its content:
      tsXXX561.server.topcloud.it
    Then I created the file /etc/postfix/sasl_passwd:
      [smtp.gmail.com]:587 [email protected]:gmail_password
    Then:
      sudo postmap /etc/postfix/sasl/passwd
      sudo cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
      service postfix restart
    It still sends nothing... I'm on Ubuntu Server 12.04.
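    Two things worth double-checking, sketched below; these are guesses based on the steps above, not a confirmed fix. The postmap command points at /etc/postfix/sasl/passwd while the file created is /etc/postfix/sasl_passwd, and the mail log usually states exactly why relaying to Gmail stalls.
      sudo postmap /etc/postfix/sasl_passwd   # hash the file that was actually created
      sudo postfix reload
      tail -f /var/log/mail.log               # watch while sending a test message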

    Read the article

  • Different graphics card drivers while booting from external media

    - by goran
    I am booting a certain system of mine into Ubuntu 9.10 from an external HDD. I am satisfied with the setup and it works fine; however, I would like to modify it so that I can choose which graphics card driver to load at boot time. Specifically, I would like to choose between:
    - the NVIDIA proprietary driver
    - the ATI proprietary driver
    - a generic driver
    Currently, if I am using the proprietary drivers, I don't boot into X; I delete xorg.conf, start gdm and reconfigure the system using jockey (for hardware drivers). What would be the steps to make this (semi-)automatic and avoid restarting X?
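    One semi-automatic sketch of the idea (the file names and the hook point are assumptions): keep one prepared xorg.conf per card and have a small script pick the right one from lspci output before the display manager starts.
      #!/bin/bash
      # Run from an init script before gdm; copies the matching xorg.conf into place.
      if lspci | grep -Eqi 'VGA.*NVIDIA'; then
          cp /etc/X11/xorg.conf.nvidia /etc/X11/xorg.conf
      elif lspci | grep -Eqi 'VGA.*(ATI|AMD)'; then
          cp /etc/X11/xorg.conf.ati /etc/X11/xorg.conf
      else
          rm -f /etc/X11/xorg.conf   # fall back to the autodetected generic driver
      fi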

    Read the article

  • Can I automatically add a new host to known_hosts ?

    - by gareth_bowles
    Here's my situation: I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via SSH. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client. The problem I'm having is that the first SSH command run against a new virtual instance always comes up with an interactive prompt:
      The authenticity of host '[hostname] ([IP address])' can't be established.
      RSA key fingerprint is [key fingerprint].
      Are you sure you want to continue connecting (yes/no)?
    Is there a way that I can bypass this and have the new host already be known to the client machine, maybe by using a public key that's already baked into the virtual machine image? I'd really like to avoid having to use Expect or the like to answer the interactive prompt if I can.
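    One way that avoids Expect, sketched below; the host names are placeholders, and key-scanning on first contact still trusts the network at that moment, so baking a known host key into the image is the stricter variant.
      # Fetch the new VM's host key and append it before the first real connection:
      ssh-keyscan -t rsa new-vm-hostname >> ~/.ssh/known_hosts
      # Blunter alternative, per connection:
      ssh -o StrictHostKeyChecking=no user@new-vm-hostname 'uname -a'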

    Read the article

  • SFTP through proxy

    - by aerodynamic_props
    I have a large amount of data on scratch space at computer b that I want to get. In my network I cannot directly connect to computer b (ssh exits with "No route to host"); I must first connect to computer a, and then connect to computer b. I cannot move the data from the scratch space on computer b to computer a because of a disk quota that is imposed on me at computer a. How can I move the data from computer b to my computer in this situation?
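    A common pattern for this, sketched below with placeholder names: tunnel the connection to computer b through computer a, then pull straight to the local machine so nothing lands on a's quota-limited disk. The -J option needs OpenSSH 7.3 or newer on the client.
      rsync -av -e 'ssh -J user@computer-a' user@computer-b:/scratch/data/ ./data/
      # Older clients can get the same effect from ~/.ssh/config:
      #   Host computer-b
      #       ProxyCommand ssh -W %h:%p user@computer-a
      # after which plain sftp/scp/rsync to computer-b is tunnelled through computer-a.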

    Read the article

  • Route return traffic to correct gateway depending on service

    - by Marnix van Valen
    On my office network I have two internet connections and one CentOS server running a website (HTTPS on port 443). The website should be publicly accessible through the public IP of the first internet connection (ISP-1). The other internet connection, ISP-2, is the default gateway on the network. Both internet connections have routers (the household kind) with NAT, SPI firewalls, etc. The router on ISP-2 is a Netgear WNDR3700 (aka N600) with the original firmware. The problem is that the website is unreachable. It looks like incoming traffic on ISP-1 reaches the server, but the return traffic is routed through ISP-2, effectively making the site unreachable. As far as I can tell I can't do port-based routing on the WNDR3700. What are my options to make this work? I've been looking at implementing an iptables/routing-based solution on the server itself but haven't been able to make that work. Update: note that the server has one network interface connecting it to both routers.
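    The usual server-side answer is source-based policy routing, sketched below; the addresses, interface and table name are assumptions, and the idea is simply that replies leaving from the ISP-1 address go back out via the ISP-1 router.
      echo '100 isp1' >> /etc/iproute2/rt_tables
      ip route add default via 192.168.1.1 dev eth0 table isp1   # ISP-1 router (assumed)
      ip rule add from 192.168.1.10 table isp1                   # server's ISP-1-side address (assumed)
      ip route flush cache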

    Read the article
