Search Results

Search found 12569 results on 503 pages for 'root plist'.

Page 89/503 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • security issue of Linux sudo command?

    - by George2
    Hello everyone, 1. I am using a Red Hat Enterprise Linux 5 box. I find that if a user is listed in the /etc/sudoers file, then running a command with sudo executes it with root privileges; the user does not need to know the root password and only has to enter their own password. Is that a correct understanding? 2. If yes, is this a security hole, since users other than root can run commands with root privileges? Thanks in advance, George
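    One point that usually frames the answer: sudo access does not have to mean unrestricted root. A minimal sketch of a restricted /etc/sudoers entry (the username and command below are illustrative, not from the question), edited with visudo:
      # allow george to run only this one command as root, nothing else
      # george  ALL=(root)  /sbin/service httpd restart
      visudo    # always edit /etc/sudoers through visudo so the syntax is checked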

  • Installing Ruby on Rails on Ubuntu 10.04: A Living Nightmare

    - by emptyset
    Update #3: Starting over from scratch, shortened this post, decided to re-install a clean copy of Ubuntu 10.04 on a VM and go through the walk-through again. So, all the steps go without a hitch. As root: root@ubuntu:~/rubygems-1.3.7# ruby -v ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux] root@ubuntu:~/rubygems-1.3.7# gem -v 1.3.7 root@ubuntu:~/rubygems-1.3.7# rails -v Rails 2.3.8 Now, as myself (in a separate term): emptyset@ubuntu:~$ ruby -v ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux] emptyset@ubuntu:~$ gem -v /usr/local/lib/site_ruby/1.8/rubygems.rb:10:in `require': no such file to load -- rubygems/defaults (LoadError) from /usr/local/lib/site_ruby/1.8/rubygems.rb:10 from /usr/local/bin/gem:8:in `require' from /usr/local/bin/gem:8 emptyset@ubuntu:~$ rails -v bash: /usr/bin/rails: Permission denied So, this appears to be a permissions issue, but I don't understand why. Specifically, if I have to start making things go+rx all over the place, I really need to understand which specific files need the permissions change.
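    A diagnostic sketch for the permissions theory (the paths come from the output above; the chmod target is illustrative). Comparing what root sees with what a normal user can read and execute usually narrows down which files are missing o+rx:
      ls -l /usr/bin/rails /usr/local/bin/gem
      ls -ld /usr/local/lib/site_ruby /usr/local/lib/site_ruby/1.8
      # if a directory or script lacks world read/execute, something like:
      # sudo chmod -R o+rX /usr/local/lib/site_ruby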

  • How to setup nginx and a subdomain

    - by Evolutio
    i have gitlab installed on my server and it works on all domains eg: git.lars-dev.de, lars-dev.de and *.lars-dev.de how I can run gitlab only on git.lars-dev.de and another subdomain on files.lars-dev.de? my lars-dev conf: server { listen *:80; ## listen for ipv4; this line is default and implied #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 root /var/www/webdata/lars-dev.de/htdocs; index index.html index.htm; server_name lars-dev.de; location / { try_files $uri $uri/ /index.html; } #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } and the gitlab configuration: upstream gitlab { server unix:/home/git/gitlab/tmp/sockets/gitlab.socket; } server { listen *:80; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea server_name git.lars-dev.de; # e.g., server_name source.example.com; server_tokens off; # don't show the version number, a security best practice root /home/git/gitlab/public; # individual nginx logs for this gitlab vhost access_log /var/log/nginx/gitlab_access.log; error_log /var/log/nginx/gitlab_error.log; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab unicorn) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://gitlab; } }
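    nginx picks a server block by matching server_name against the Host header; a request whose Host matches nothing falls through to the default server for that listen socket (the first block defined unless default_server is set), which is why gitlab answers on every name. A sketch of a dedicated vhost for the second subdomain (the file path and reload command assume a Debian-style layout):
      # /etc/nginx/sites-available/files.lars-dev.de  (new file, path illustrative)
      #   server {
      #       listen *:80;
      #       server_name files.lars-dev.de;
      #       root /var/www/webdata/files.lars-dev.de/htdocs;
      #       index index.html index.htm;
      #   }
      sudo ln -s /etc/nginx/sites-available/files.lars-dev.de /etc/nginx/sites-enabled/
      sudo nginx -t && sudo service nginx reload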

  • Import a bunch of certificates into the correct certificate store using a script

    - by Jesse Weigert
    I have a collection of certificates in a p7b file, and I would like to automatically import each certificate into the correct store depending on the certificate template. What is the best way to do this with a script? I tried using certutil -addstore root Certificate.p7b, and that will correctly place all of the root CAs into the root store, but it returns an error if it encounters any other type of certificate. I'm willing to use batch scripts, vbscript or powershell to accomplish this task. Thanks!

  • Free space on Dedi' in CentOS

    - by Trance84
    It may sound stupid, but I need to figure out how much disk space I have on my dedicated server; it runs CentOS 6. The last command I issued was this: [root@ks34900 ~]# df -h Filesystem Size Used Avail Use% Mounted on rootfs 9.7G 6.4G 2.9G 69% / /dev/root 9.7G 6.4G 2.9G 69% / none 1000M 288K 1000M 1% /dev /dev/sda2 914G 200M 868G 1% /home But again, stupid as it may sound, I can't figure out how much space I have in the "/" (root) folder. And is it possible that "/usr" has a different amount of space (its own partition)?
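    For the question itself, df can be pointed at a directory to show which filesystem backs it; in the output above "/" has about 2.9G available, and /usr would only report different numbers if it were its own mount. A quick sketch:
      df -h /        # capacity of the filesystem holding the root directory
      df -h /usr     # same filesystem unless /usr is a separate mount
      mount | grep -E ' / | /usr '   # list the mounts backing / and /usr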

  • sudo/su command for Red Hat Server 5.4

    - by rednaxela
    Without going into too much detail, I need to execute one Linux command on Red Hat with root user access. Red Hat Server 5.4 does not recognise the sudo command. The su command can be used to switch to the root user on Red Hat, but it cannot be done in one line. For example, the command su ; cd opt/storage/RootAccessFolder will not work, because it only switches you to root and then executes the cd command once you have logged out of the root user. I guess what I'm looking for is something like sudo cd opt/storage/RootAccessFolder, but as I said, sudo doesn't work. Any ideas?
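    su can in fact take a whole command line with -c, which is the usual substitute when sudo is unavailable. A sketch (the command chained after cd is illustrative, since cd on its own would end with the shell):
      # run a command line as root in one shot; prompts for the root password
      su - -c 'cd /opt/storage/RootAccessFolder && ls -l'
      # sudo equivalent, on systems where sudo is configured:
      # sudo sh -c 'cd /opt/storage/RootAccessFolder && ls -l'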

  • Graphics driver for ubuntu on dell latitude XT

    - by marc.riera
    we have a laptop (dell latitude xt) on our company, and we would like to install ubuntu on it. windows 7 works fine out of the box, so the hardware is fine. since this laptop has a touchscreen we just installed ubuntu 10.10 netbook edition 32x. But, we do not manage to enable the touchscreen, neither the vga graphic drivers. this is the output from lspci, if somebody cares. 00:00.0 Host bridge: ATI Technologies Inc Radeon Xpress 7930 Host Bridge 00:01.0 PCI bridge: ATI Technologies Inc RS7932 PCI Bridge 00:04.0 PCI bridge: ATI Technologies Inc Device 7934 00:06.0 PCI bridge: ATI Technologies Inc RS7936 PCI Bridge 00:07.0 PCI bridge: ATI Technologies Inc Device 7937 00:13.0 USB Controller: ATI Technologies Inc SB600 USB (OHCI0) 00:13.1 USB Controller: ATI Technologies Inc SB600 USB (OHCI1) 00:13.2 USB Controller: ATI Technologies Inc SB600 USB (OHCI2) 00:13.3 USB Controller: ATI Technologies Inc SB600 USB (OHCI3) 00:13.4 USB Controller: ATI Technologies Inc SB600 USB (OHCI4) 00:13.5 USB Controller: ATI Technologies Inc SB600 USB Controller (EHCI) 00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 14) 00:14.1 IDE interface: ATI Technologies Inc SB600 IDE 00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA) 00:14.3 ISA bridge: ATI Technologies Inc SB600 PCI to LPC Bridge 00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge 01:05.0 VGA compatible controller: ATI Technologies Inc Radeon Xpress 1250 03:01.0 CardBus bridge: Texas Instruments PCIxx12 Cardbus Controller 03:01.1 FireWire (IEEE 1394): Texas Instruments PCIxx12 OHCI Compliant IEEE 1394 Host Controller 03:01.3 SD Host controller: Texas Instruments PCIxx12 SDA Standard Compliant SD Host Controller 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5756ME Gigabit Ethernet PCI Express 0b:00.0 Network controller: Broadcom Corporation BCM4321 802.11a/b/g/n (rev 03) I've tryied to install ati drivers 9.3 , which I downloaded and installed, unpacked and installed, builded and installed, but nothing worked. Looks like the latests version is just accepted to work on jaunty 9.04, so they are kind of old. what else I can do? thanks. Marc Information added: lsusb and lspci -n |grep 01:05.0 sysop@wl083517:~$ lspci -n |grep 01:05.0 01:05.0 0300: 1002:7942 sysop@wl083517:~$ lsusb Bus 006 Device 002: ID 413c:8138 Dell Computer Corp. Wireless 5520 Voda I Mobile Broadband (3G HSDPA) Minicard EAP-SIM Port Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 002: ID 413c:8140 Dell Computer Corp. Wireless 360 Bluetooth Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 002: ID 0483:2016 SGS Thomson Microelectronics Fingerprint Reader Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 002: ID 1b96:0001 N-Trig Duosense Transparent Electromagnetic Digitizer Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 002: ID 03f0:1807 Hewlett-Packard Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub sysop@wl083517:~$

  • Nginx - Enable PHP for all hosts

    - by F21
    I am currently testing out nginx and have set up some virtual hosts by putting configurations for each virtual host in its own file in a folder called sites-enabled. I then ask nginx to load all those config files using: include C:/nginx/sites-enabled/*.conf; This is my current config: http { server_names_hash_bucket_size 64; include mime.types; include C:/nginx/sites-enabled/*.conf; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; root C:/www-root; #charset koi8-r; #access_log logs/host.access.log main; location / { index index.html index.htm index.php; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } server{ server_name localhost; } } And this is one of the configs for a virtual host: server { server_name testsubdomain.testdomain.com root C:/www-root/testsubdomain.testdomain.com; } The problem is that for testsubdomain.testdomain.com, I cannot get php scripts to run unless I have defined a location block with fastcgi parameters for it. What I would like to do is to be able to enable PHP for all hosted sites on this server (without having to add a PHP location block with fastcgi parameters) for maintainability. This is so that if I need to change any fastcgi values for PHP, I can just change it in 1 location. Is this something that's possible for nginx? If so, how can this be done?
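    One common pattern, sketched below with an illustrative file name: move the PHP location block into its own file and include it from every server block, so the fastcgi settings are declared exactly once.
      # C:/nginx/php_fastcgi.conf  (new file; contents copied from the block above)
      #   location ~ \.php$ {
      #       fastcgi_pass   127.0.0.1:9000;
      #       fastcgi_index  index.php;
      #       fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
      #       include        fastcgi_params;
      #   }
      # then inside each server { ... } in sites-enabled (each needs its own root):
      #   include C:/nginx/php_fastcgi.conf;
      nginx -t && nginx -s reload    # re-check the config and reload after editing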

  • keyboard mappings are totally screwed after updating to kde4

    - by zeonglow
    I recently upgraded from KDE 3.5 to KDE 4, and I have been having weird issues with my keyboard. In one of the virtual consoles e.g. when I press ctrl + alt 1 , I can type perfectly, but in KDE, several of the number keys don't work, the left and right arrows don't work either. When I press the right arrow key in xev I get this: KeyRelease event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 903459, (111,55), root:(115,836), state 0x10, keycode 114 (keysym 0x1008ff11, XF86AudioLowerVolume), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False When I press the '3' key it toggles my Bookmarks toolbar in Firefox, in xev I get this: KeyPress event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 999968, (94,115), root:(98,896), state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 34, synthetic NO, window 0x3600001, root 0x6f, subw 0x0, time 1000032, (94,115), root:(98,896), state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False As this is deeper down, changing the type of keyboard in the KDE meun's has no effect. I'm slowly beginning to wade through the mountains of documentation about the X keyboard model, but there has to be a better way. Does anyone no what it is? Edit: 1234567890 ! after deleting the entire .kde folder. but only until I change the Keyboard settings from the "system settings" applet, then its hosed full time. Regardless of what I set the settings too. (restore to default settings doesn't) 2nd Edit: I'm using Gentoo AMD64, I was upgrading from KDE 3.5 KDE 4.2. I think I had manual settings before, although I didn't change anything. I was originally running KDE without HAL until that stop working a year or so ago. The only customisation I made was to set the multimedia keys to control Amarok. 3rd Edit $ grep xkb /var/log/Xorg.0.log (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "evdev" (**) Option "xkb_layout" "us" (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "evdev" (**) Option "xkb_layout" "us" Xorg.0.log has this to say: (WW) AllowEmptyInput is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled. (WW) Disabling Mouse1 (WW) Disabling Keyboard1 My Xorg.conf has this in it. Identifier "Keyboard1" Driver "kbd" Option "AutoRepeat" "500 30" # Specify which keyboard LEDs can be user-controlled (eg, with xset(1)) Option "XkbRules" "xorg" Option "XkbModel" "pc105" Option "XkbLayout" "gb"
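    The xev output shows ordinary keys arriving with multimedia keysyms (XF86AudioLowerVolume, XF86Favorites), which points at the X layout/model rather than KDE itself. A sketch for resetting the in-session map and checking what the server currently uses (the model and layout values are illustrative; gb matches the xorg.conf above):
      setxkbmap -model pc105 -layout gb    # push a sane map for this session
      setxkbmap -print                     # show the map the X server is actually using
      grep xkb /var/log/Xorg.0.log         # confirm what was loaded at startup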

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a rails app on an old Ubuntu server I need to move onto a new machine. I haven't worked with ruby on rails so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process such as: Do I copy over the entire folder defined as the application root in the mongrel config (for ex: /u/apps/myapp/current) or just certain folders? Am I looking for trouble if I go with the latest versions of ruby and the various gems? Any general gotchas to look out for in the process. Current server information: root@webnode001:/# cat /proc/version Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006 root@webnode001:/# rails -v Rails 1.2.3 root@webnode001:/# mongrel_rails cluster::configure --version Version 1.0.1 root@webnode001:/# gem -v 0.9.0 root@webnode001:/# gem list -l *** LOCAL GEMS *** actionmailer (1.3.3, 1.2.5) Service layer for easy email delivery and testing. actionpack (1.13.3, 1.12.5) Web-flow and rendering framework putting the VC in MVC. actionwebservice (1.2.3, 1.1.6) Web service support for Action Pack. activerecord (1.15.3, 1.15.2, 1.14.4) Implements the ActiveRecord pattern for ORM. activesupport (1.4.2, 1.4.1, 1.3.1) Support and utility classes used by the Rails framework. cgi_multipart_eof_fix (2.1) Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string. daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2) A toolkit to create and control daemons in different ways eventmachine (0.7.2, 0.7.0) Ruby/EventMachine socket engine library fastercsv (1.2.0, 1.1.0) FasterCSV is CSV, but faster, smaller, and cleaner. fastthread (1.0) Optimized replacement for thread.rb primitives ferret (0.11.4) Ruby indexing library. gem_plugin (0.2.2, 0.2.1) A plugin system based only on rubygems that uses dependencies only mongrel (1.0.1, 0.3.13.4) A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps. mongrel_cluster (0.2.1) Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes. mysql (2.7) MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs. piston (1.3.3) Piston is a utility that enables merge tracking of remote repositories. rails (1.2.3, 1.1.6) Web-application framework with template engine, control-flow layer, and ORM. rake (0.7.3, 0.7.1) Ruby based make-like utility. sources (0.0.1) This package provides download sources for remote gem installation swiftiply (0.5.1) A fast clustering proxy for web applications.
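    A rough migration sketch, on the assumption that keeping the old Ruby/Rails versions is safer than jumping to the latest (the gem versions mirror the list above; the rsync path is the mongrel application root):
      # copy the whole application root, including shared/ and current/ if Capistrano-style
      rsync -az /u/apps/myapp/ newhost:/u/apps/myapp/
      # on the new machine, pin the framework versions the app was built against
      gem install rails -v 1.2.3
      gem install mongrel -v 1.0.1
      gem install mongrel_cluster -v 0.2.1
      gem install mysql -v 2.7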

  • /etc/inputrc does not seem to be recognized as user on Ubuntu 8.04.2 LTS

    - by Brian Hogg
    On a new installation of Ubuntu 8.04.2 LTS, logging in as a standard user does not maintain the keybindings (whether through sudo su - or direct from ssh). As the root user everything is fine and /root/.inputrc does not exist (only /etc/inputrc) which has its default settings. In addition setting a ~/.bashrc and ~/.profile to the same as the root user (and chown'ing to user:user) has no effect. Am I missing something here?
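    readline only falls back to /etc/inputrc when INPUTRC is unset and ~/.inputrc is absent, so one low-effort check is to point the variable at the system file explicitly for the affected user. A sketch (which profile file to use is a judgment call):
      # as the affected user, force readline to the system-wide file
      echo 'export INPUTRC=/etc/inputrc' >> ~/.bashrc
      ls -l /etc/inputrc ~/.inputrc      # also confirm no stray ~/.inputrc overrides it
      # then log out and back in (or: source ~/.bashrc) and retest the key bindings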

  • NFS headaches with FreeBSD 4.9

    - by Ernie
    Once upon a time, this used to work, and I kept the configuration the same, but... now nothing. I'm just trying to get an NFS server set up on a FreeBSD 4.9 server. The process should be about as complicated as this: Add this entry to /etc/exports: /var/home /var/vpopmail/domains -maproot=root XXX.XX.XX.XXX Execute this: portmap nfsd -u -t -n 4 mountd -r Then this should work, regardless of network and firewall issues: showmount -e localhost But showmount -e localhost fails with the following error: RPC: Port mapper failure showmount: can't do exports rpc And even if I kill off the NFS daemon, and try a rpcinfo -p localhost, I get this error: rpcinfo: can't contact portmapper: rpcinfo: RPC: Unable to receive; errno = Connection reset by peer The portmapper is still running. Why the heck does nothing work as if it isn't? Edit to add: FYI: Sockstat gives me this: $ sockstat |egrep "(nfsd|portmap)" root nfsd 86310 3 udp4 *:2049 *:* root nfsd 86310 4 udp4 *:973 *:* root portmap 45920 0 tcp4 *:111 *:* Then, at a later time (say, 5 minutes) it's as if nfsd isn't acting as a server: $ sockstat |egrep "(nfsd|portmap)" root portmap 45920 0 tcp4 *:111 *:* But the nfs daemon is still running: $ ps ax |grep nfsd 86311 ?? I 0:00.00 nfsd: server (nfsd) 86312 ?? I 0:00.00 nfsd: server (nfsd) 86313 ?? I 0:00.00 nfsd: server (nfsd) 86314 ?? I 0:00.00 nfsd: server (nfsd)
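    The rpcinfo error means the portmapper is not answering on 127.0.0.1 even though a process exists, and nfsd/mountd cannot register without it. A sketch of restarting the RPC pieces in order and re-checking registration (the commands match the FreeBSD 4.x userland used above):
      killall mountd nfsd portmap 2>/dev/null
      portmap                      # must come up first and listen on port 111
      rpcinfo -p localhost         # should now at least list the portmapper itself
      nfsd -u -t -n 4
      mountd -r
      rpcinfo -p localhost         # nfs (100003) and mountd (100005) should appear
      showmount -e localhost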

  • Log of cron actions on OS X

    - by Doug Harris
    Does the cron which comes with OS X log its actions anywhere? I'm not looking for output of any particular cron job, but rather log of what cron is doing. On a couple linux machines I've checked, there's /var/log/cron which has contents like: Apr 26 11:00:01 localhost crond[27755]: (root) CMD (/root/bin/mysql-backup) Apr 26 11:01:01 localhost crond[27892]: (root) CMD (run-parts /etc/cron.hourly) Apr 26 11:07:01 localhost crond[28138]: (root) CMD (/usr/local/bin/python /home/ user1/scripts/pythonscript.py) Apr 26 11:18:18 localhost crontab[28921]: (user2) LIST (user2) Apr 26 11:18:22 localhost crontab[28929]: (user2) BEGIN EDIT (user2) Apr 26 11:18:59 localhost crontab[28929]: (user2) REPLACE (user2) This shows when jobs ran, when users viewed or edited crontabs, etc. This stuff is nowhere that I've found on my Snow Leopard machine.
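    One approach people use on Snow Leopard, sketched here: cron logs through syslog with the cron facility, but the stock /etc/syslog.conf has no cron.* rule, so nothing is written to a file. This assumes the BSD-style syslog.conf is still honored on the machine in question:
      # add a cron facility rule (this line is the addition; the file already exists)
      echo 'cron.*    /var/log/cron.log' | sudo tee -a /etc/syslog.conf
      sudo touch /var/log/cron.log
      # restart syslogd so it picks up the new rule
      sudo launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
      sudo launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist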

  • Installing Tomcat on CentOS 5

    - by andybaird
    Disclaimer: I am not a server admin; I am a Windows user who has led a life of sinful installation wizards and drag and drop. I'm attempting to install Tomcat on CentOS 5 hosted by a MediaTemple dedicated virtual server. I basically followed this guide: Installed jpackage and configured the yum.repo.d jpackage file to set enabled=1 Used yum to install java (yum install java) Downloaded the binary distribution of tomcat with "wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.14/bin/apache-tomcat-6.0.14.tar.gz" set JAVA_HOME to point at the jdk location I found with "export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/" I gunzip/untar the Tomcat files and run ./startup.sh to start the Tomcat server. That is supposed to put the Tomcat server at myserver.com:8080 - however, I just get a 'could not contact host' error when I try to browse to it (or when I try 'curl localhost:8080' from SSH). After I type ./startup.sh, here is the console output: [root@myserver bin]# ./startup.sh Using CATALINA_BASE: /root/apache-tomcat-6.0.14 Using CATALINA_HOME: /root/apache-tomcat-6.0.14 Using CATALINA_TMPDIR: /root/apache-tomcat-6.0.14/temp Using JRE_HOME: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/ [root@myserver bin]# Is there a step I have missed here? Edit: I've now discovered by looking at the log that the following error is occurring: Error occurred during initialization of VM Could not reserve enough space for object heap
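    The error in the edit is the JVM failing to reserve its default heap, which is common on small VPS memory allotments. A sketch of capping the heap before startup (the sizes are illustrative):
      # give Tomcat an explicit, modest heap so the JVM can actually reserve it
      export JAVA_OPTS="-Xms64m -Xmx256m"
      ./startup.sh
      # the same setting can be made permanent in bin/setenv.sh (created if absent),
      # which catalina.sh sources on every start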

  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine. # du -hsx / 11000283 / # df -kT / Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/csisv13-root ext4 516032952 361387456 128432532 74% / There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains: # lsof -a +L1 / COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME zabbix_ag 4902 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4902 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4907 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4907 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4908 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4908 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4909 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4909 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4910 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4910 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) I rebooted to see if fsck does anything. 
But, from /var/log/boot.log, it seems there are no issues: /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks Thinking maybe someone overzealously reserved root space, I checked the master record: # tune2fs -l /dev/mapper/server-root tune2fs 1.42 (29-Nov-2011) Filesystem volume name: <none> Last mounted on: / Filesystem UUID: 86430ade-cea7-46ce-979c-41769a41ecbe Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131064832 Reserved block count: 6553241 Free blocks: 5696264 Free inodes: 28831903 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Fri Feb 1 13:44:04 2013 Last mount time: Tue Aug 19 16:56:13 2014 Last write time: Fri Feb 1 13:51:28 2013 Mount count: 9 Maximum mount count: -1 Last checked: Fri Feb 1 13:44:04 2013 Check interval: 0 (<none>) Lifetime writes: 1215 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 First orphan inode: 28836028 Default directory hash: half_md4 Directory Hash Seed: bca55ff5-f530-48d1-8347-25c004f66d43 Journal backup: inode blocks The system is: # uname -a Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?
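    One more place space can hide from du is underneath active mount points: du -x on / cannot see files written into a directory before another filesystem was mounted over it. A sketch for checking that with a bind mount (the scratch mount point is illustrative):
      mkdir -p /mnt/root-bind
      mount --bind / /mnt/root-bind     # exposes / without the overlaid mounts
      du -hsx /mnt/root-bind            # compare with the du -hsx / figure above
      du -hx --max-depth=1 /mnt/root-bind | sort -h
      umount /mnt/root-bind
      rmdir /mnt/root-bind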

  • How to install php-devel under CentOS 6.3 x64?

    - by Jeremy Dicaire
    I'm trying to install php-devel on my CentOS 6.3 VPS and get a failed dependencies test. From phpinfos(): SYSTEM Linux 2.6.32-279.5.2.el6.x86_64 #1 x86_64 NTS error: Failed dependencies: php(x86-64) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.x86_64 I've tried the following RPM packages: php54w-devel-5.4.6-1.w6.x86_64.rpm php-devel-5.4.6-1.el6.remi.i686.rpm php-devel-5.4.6-1.el6.remi.x86_64.rpm One of the above package gave me this: root@sv1 [/tmp]# rpm -Uvh php-devel-5.4.6-1.el6.remi.i686.rpm warning: php-devel-5.4.6-1.el6.remi.i686.rpm: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY error: Failed dependencies: php(x86-32) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.i686 libbz2.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686 libcom_err.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libcrypto.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686 libedit.so.0 is needed by php-devel-5.4.6-1.el6.remi.i686 libgmp.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libgssapi_krb5.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libk5crypto.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libkrb5.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libncurses.so.5 is needed by php-devel-5.4.6-1.el6.remi.i686 libssl.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686 libstdc++.so.6 is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.4.30) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.5.2) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.0) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.11) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.5) is needed by php-devel-5.4.6-1.el6.remi.i686 libz.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686 I don't know how to fix this error and download all the dependencies. Thank you. Edit 1 (for quanta): Here is "yum repolist": root@sv1 [/tmp]# yum repolist Loaded plugins: fastestmirror, presto Loading mirror speeds from cached hostfile * base: mirror.atlanticmetro.net * epel: mirror.cogentco.com * extras: mirror.atlanticmetro.net * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.choopa.net repo id repo name status base CentOS-6 - Base 5,980+366 epel Extra Packages for Enterprise Linux 6 - x86_64 6,493+1,272 extras CentOS-6 - Extras 4 rpmforge RHEL 6 - RPMforge.net - dag 2,123+2,310 updates CentOS-6 - Updates 499+29 repolist: 15,099 root@sv1 [/tmp]# rpm -qa | grep php didn't return any result. 
I forgot to mention I'm using cPanel/WHM Edit 2 after adding the Remi repo: >root@sv1 [/etc/yum.repos.d]# yum clean all Loaded plugins: fastestmirror, presto Cleaning repos: base epel extras remi remi-test rpmforge updates Cleaning up Everything Cleaning up list of fastest mirrors 1 delta-package files removed, by presto >root@sv1 [/etc/yum.repos.d]# yum repolist Loaded plugins: fastestmirror, presto Determining fastest mirrors epel/metalink | 12 kB 00:00 * base: centos.mirror.nac.net * epel: mirror.symnds.com * extras: centos.mirror.choopa.net * remi: remi-mirror.dedipower.com * remi-test: remi-mirror.dedipower.com * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.nac.net base | 3.7 kB 00:00 base/primary_db | 4.5 MB 00:00 epel | 4.3 kB 00:00 epel/primary_db | 4.7 MB 00:00 extras | 3.0 kB 00:00 extras/primary_db | 6.3 kB 00:00 remi | 2.9 kB 00:00 remi/primary_db | 330 kB 00:00 remi-test | 2.9 kB 00:00 remi-test/primary_db | 85 kB 00:00 rpmforge | 1.9 kB 00:00 rpmforge/primary_db | 2.5 MB 00:00 updates | 3.5 kB 00:00 updates/primary_db | 2.3 MB 00:00 repo id repo name status base CentOS-6 - Base 5,980+366 epel Extra Packages for Enterprise Linux 6 - x86_64 6,493+1,272 extras CentOS-6 - Extras 4 remi Les RPM de remi pour Enterprise Linux 6 - x86_64 96+564 remi-test Les RPM de remi en test pour Enterprise Linux 6 - x86_64 25+139 rpmforge RHEL 6 - RPMforge.net - dag 2,123+2,310 updates CentOS-6 - Updates 499+29 repolist: 15,220 >root@sv1 [/etc/yum.repos.d]# yum install php-devel Loaded plugins: fastestmirror, presto Loading mirror speeds from cached hostfile * base: centos.mirror.nac.net * epel: mirror.symnds.com * extras: centos.mirror.choopa.net * remi: remi-mirror.dedipower.com * remi-test: remi-mirror.dedipower.com * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.nac.net Setting up Install Process No package php-devel available. Error: Nothing to do >root@sv1 [/etc/yum.repos.d]#
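    The failed dependency says the remi build of php-devel must be paired with a remi build of php itself, and with a cPanel source-built PHP there is no RPM php for it to pair with (rpm -qa shows none). A sketch of what letting yum resolve the pair looks like, once the remi repo is enabled; whether installing an RPM PHP alongside cPanel's is desirable is a separate question:
      yum --enablerepo=remi install php php-devel   # pulls a matching php(x86-64)
      rpm -qa 'php*'                                # confirm what, if anything, is RPM-managed
      php -v                                        # which PHP the system actually runs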

  • public key always asking for password and passphrase

    - by Andrew Atkinson
    I am trying to SSH from a NAS to a webserver using a public key. NAS user is 'root' and webserver user is 'backup' I have all permissions set correctly and when I debug the SSH connection I get: (last little bit of the debug) debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: publickey debug1: Offering DSA public key: /root/.ssh/id_dsa.pub debug1: Server accepts key: pkalg ssh-dss blen 433 debug1: key_parse_private_pem: PEM_read_PrivateKey failed debug1: read PEM private key done: type <unknown> Enter passphrase for key '/root/.ssh/id_dsa.pub': I am using the command: ssh -v -i /root/.ssh/id_dsa.pub [email protected] The fact that it is asking for a passphrase is a good sign surely, but I do not want it to prompt for this or a password (which comes afterwards if I press 'return' on the passphrase)
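    The prompt appears because -i points at the public half of the key pair; ssh wants the private key file (the "PEM_read_PrivateKey failed" line is it trying to parse the .pub as a private key). A sketch; the user and host below are placeholders matching the question:
      # use the private key, not the .pub
      ssh -v -i /root/.ssh/id_dsa backup@webserver
      # if the private key itself carries a passphrase and that is unwanted:
      ssh-keygen -p -f /root/.ssh/id_dsa   # enter old passphrase, leave the new one empty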

  • Trouble setting up PATH for Java on Debian

    - by milkmansrevenge
    I am trying to get Oracle Java 7 update 3 working correctly on Debian 6. I have downloaded and set up the files in /usr/java/jre1.7.0_03. I have also set the following two lines at the end of /etc/bash.bashrc: export JAVA_HOME=/usr/java/jre1.7.0_03 export PATH=$PATH:$JAVA_HOME/bin Logging in as other users and root is fine, Java can be found: chris@mc:~$ java -version java version "1.7.0_03" Java(TM) SE Runtime Environment (build 1.7.0_03-b04) Java HotSpot(TM) 64-Bit Server VM (build 22.1-b02, mixed mode) However there are two cases where Java cannot be found as detailed below. Note that both of these worked fine when I have previously installed OpenJDK Java 6 via aptitude, but I need Oracle Java 7 for various reasons. Most importantly, I cannot run commands as another user via su, despite the PATH showing that Java should be present. The user was created with adduser chris root@mc:~# su chris -c "echo $PATH" /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jre1.7.0_03/bin:/bin root@mc:~# su chris -c "java -version" bash: java: command not found root@mc:~# su chris -c "/usr/java/jre1.7.0_03/bin/java -version" java version "1.7.0_03" ... How can it be in the PATH but not be found? Update 05/04/2012: explained by Daniel, to do with it being a non-interactive shell so files such as /etc/profile and /etc/bash.bashrc are not executed. Doing a full swap to that user and running Java works: root@mc:~# su chris chris@mc:/root$ java -version java version "1.7.0_03" ... I run a script on start up which exhibits similar but slightly different problems. The script is located in /etc/init.d/start-mystuff.sh and calls a jar: #!/bin/bash # /etc/init.d/start-mystuff.sh java -jar /opt/Mars.jar I can confirm that the script runs on start up and the exit code is 127, which indicates command not found. Inserting a line to print/save the PATH shows that it is: /sbin:/usr/sbin:/bin:/usr/bin This second problem isn't as important because I can just point directly to the Java executable in the script, but I am still curious! I have tried setting the full PATH and JAVA_HOME explicitly in /etc/environment which didn't help. I have also tried setting them in /etc/profile which doesn't seem to help either. I have tried logging in and out again after setting PATH in the various locations (duh!). Anyway, long post for what will probably have a simple one line solution :( Any help with this would be greatly appreciated, I have spent far too long trying to fix it by myself. Motivation The first problem may seem obscure but in my system I have users that are not allowed SSH access yet I still want to run processes as them. I have a ton of scripts operating in this way and don't want to have to change them all.
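    Both failures fit the same pattern: su user -c and init scripts run non-interactive shells that never read /etc/bash.bashrc, and the PATH printed by echo $PATH was expanded by root's own shell before su even ran. A sketch of the two usual workarounds (the Mars.jar path comes from the script above):
      # 1) ask for a login shell, so /etc/profile and friends are sourced
      su - chris -c 'java -version'
      # 2) in /etc/init.d/start-mystuff.sh, do not rely on PATH at all
      JAVA_HOME=/usr/java/jre1.7.0_03
      "$JAVA_HOME/bin/java" -jar /opt/Mars.jar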

  • Set up Glassfish connection pool to talk to a database on a Ubuntu VPS

    - by Harry Pham
    On my Ubuntu VPS, I have a MySQL server and a GlassFish 3.0.1 Application Server running, and I am having a hard time getting GlassFish to successfully ping the database. Here is my GlassFish setup (assume x.y.z.t is the IP of my VPS): Resource Type: javax.sql.ConnectionPoolDataSource User: root DatabaseName: scholar Url: jdbc:mysql://x.y.z.t:3306/scholar URL: jdbc:mysql://x.y.z.t:3306/scholar Password: xxxx PortNumber: 3306 ServerName: x.y.z.t Inside glassfish3/glassfish/lib I have mysql-connector-java-5.1.13-bin.jar. Inside the database, in the mysql table, here is the result of the query select User, Host from user; +------------------+-----------+ | User | Host | +------------------+-----------+ | root | 127.0.0.1 | | debian-sys-maint | localhost | | root | localhost | | root | yunaeyes | +------------------+-----------+ Now, from my machine, if I try to connect to this db via a MySQL client (MySQL browser), I can't. From the table above, it seems it only allows localhost to connect to this db. Keep in mind that both my db and my GlassFish are on the same VPS. Please help.
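    The user table only has root entries for 127.0.0.1, localhost and the host yunaeyes, so a connection arriving on the public address x.y.z.t is refused before GlassFish gets anywhere. Two sketches, both resting on the stated assumption that the pool and the database share the VPS:
      # a) keep MySQL closed and point the pool at loopback instead:
      #    ServerName: localhost   URL: jdbc:mysql://localhost:3306/scholar
      # b) or grant the account access from the VPS's own public address:
      mysql -u root -p -e "GRANT ALL PRIVILEGES ON scholar.* TO 'root'@'x.y.z.t' IDENTIFIED BY 'xxxx'; FLUSH PRIVILEGES;"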

  • "sed" regex help: Replacing characters

    - by powerbar
    I want to change characters in an XML file using sed. The input looks like this: <!-- Input --> <root> <tree foo="abcd" bar="abccdcd" /> <dontTouch foo="asd" bar="abc" /> </root> Now I want to change every c to X in the bar attribute of the tree element. <!-- Output --> <root> <tree foo="abcd" bar="abXXdXd" /> <dontTouch foo="asd" bar="abc" /> </root> What is the correct sed command? Please consider that there can be more than one occurrence of c (adjacent or not) in one attribute... I tried this myself, but it won't change multiple c's, and it appends an X :( sed -i 's/\(<tree.*bar=\".*\)c\(.*\"\/>\)/\1X\2/g' Input.xml
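    The /g flag cannot help here because every substitution still has to re-anchor on <tree, so the usual trick is to loop the substitution until it stops matching. A sketch, assuming GNU sed (as the -i usage suggests):
      # replace one c per pass inside the tree element's bar="...", repeat until none remain
      sed -i ':a; s/\(<tree[^>]*bar="[^"]*\)c\([^"]*"\)/\1X\2/; ta' Input.xml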

  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0 and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces -- in the end, I'd like to end up with a hard 768Kbps limit per-LAN-interface (rather than a limit on eth0 pooled across all interfaces). I installed HTB.init, and riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb: /etc/sysconfig/htb/eth1 DEFAULT=30 R2Q=100 /etc/sysconfig/htb/eth1-2.root RATE=768Kbps BURST=15k /etc/sysconfig/htb/eth1-2:30.dfl RATE=768Kbps CEIL=788Kbps BURST=15k LEAF=sfq I can /etc/init.d/htb start and /etc/init.d/htb stats and see information that /seems/ to suggest it's working...but when I try pulling a large file via the WAN interface the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this. ---- Update: Here's my /etc/init.d/htb list output, it seems to make sense -- the default rate for eth1 is 768Kbps? ### eth0: queueing disciplines qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec ### eth0: traffic classes class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b ### eth0: filtering rules filter parent 1: protocol ip pref 100 u32 filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1 filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30 match 00000000/00000000 at 12 match 00000000/00000000 at 16 ### eth1: queueing disciplines qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec ### eth1: traffic classes class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
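    One detail worth checking in the listing: tc (and therefore HTB.init, which passes the value through) reads "Kbps" as kilobytes per second and "Kbit" as kilobits per second, which would explain the 6144Kbit leaf class shown above for a RATE=768Kbps config, roughly eight times the intended cap. A sketch of spelling the units explicitly and re-checking (file names as in the original layout):
      # /etc/sysconfig/htb/eth1-2.root      ->  RATE=768Kbit  BURST=15k
      # /etc/sysconfig/htb/eth1-2:30.dfl    ->  RATE=768Kbit  CEIL=768Kbit  BURST=15k  LEAF=sfq
      /etc/init.d/htb restart
      tc -s class show dev eth1             # the leaf rate should now read 768000bit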

  • List of common ways a server could be shut down unexpectedly?

    - by SpawnCxy
    After running a bash fork bomb that took my webserver down, I think I should be more careful even when not running as root. I thought it would be totally fine since I wasn't root, so I ignored the warning and ran the bash fork bomb, which is :() { :|:& }; : (please don't run it if you don't understand this code, because it will take your system down). I think I need a list of common ways a server could be shut down unexpectedly, even by a non-root user. Any suggestion would be appreciated. Regards
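    For the fork bomb case specifically, the standard guard is a per-user process cap, which limits how far a non-root user can take the machine down. A sketch (the limits shown are illustrative):
      ulimit -u                      # current max user processes for this shell
      # persistent cap for ordinary users, in /etc/security/limits.conf:
      #   *      hard    nproc    1024
      #   root   hard    nproc    unlimited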

  • Microsoft signed driver appears as publisher not verified

    - by Priyanka Gupta
    Task at hand: Microsoft sign drivers on Win 7. I microsoft signed my driver package 3 times every time thinking I might have missed a step or something. However, I cannot seem to get rid of the Windows Security error message "Windows can't verify the publisher of this driver software'. This is not the first time I have signed the driver packages. I was successfully able to sign other driver packages a few months ago. However, with this driver package I keep getting Windows security dialog box. Here's the procedure I follow - Create a new cat file using INF2CAT tool. Self sign the driver using a Versign Class 3 Public Primary Certification Authority - G5.cer. Run the microsoft tests on DTM Servers and clients with the devices that use this driver. Create WLK submission package. Self sign the cab file. Submit the package for certification. The catalog file that comes back after successfully passing tests says Name of signer "Microsoft Windows Hardware Comptibility Publisher". When I check the validity of signature using SignTool, it says the signature is vaild. However, when I try to install the driver with new signed catalog file the windows complain. Any ideas? Edit 11/12/2012: Reply to Eugene's comment Thanks for the help, Eugene. Yes. I did sign two other driver packages before. One of them was modified version of WinUSB driver. I am using the same certificate I used when I signed those two driver packages a few months ago. It costs $250 per signing from Microsoft. I would think that Microsoft would complain about it during certification if the certificate is wrong. I use the following command to self sign the CAT file. I don't have to specify the ceritificate name as there's only one certificate in the directory - Signtool sign /v /a /n CompanyName /t http://timestamp.verisign.com/scripts/timestamp.dll OurCatalogFile.cat Below is the result from running Verify command on the Microsoft signed OurCatalogFile.cat C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64signtool verify /v "C:\User s\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Verifying: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Hash of file (sha1): BDDF39B1DD95881B462164129758A7FFD54F47D9 Signing Certificate Chain: Issued to: Microsoft Root Certificate Authority Issued by: Microsoft Root Certificate Authority Expires: Sun May 09 18:28:13 2021 SHA1 hash: CDD4EEAE6000AC7F40C3802C171E30148030C072 Issued to: Microsoft Windows Hardware Compatibility PCA Issued by: Microsoft Root Certificate Authority Expires: Thu Jun 04 16:15:46 2020 SHA1 hash: 8D42419D8B21E5CF9C3204D0060B19312B96EB78 Issued to: Microsoft Windows Hardware Compatibility Publisher Issued by: Microsoft Windows Hardware Compatibility PCA Expires: Wed Sep 18 18:20:55 2013 SHA1 hash: D94345C032D23404231DD3902F22AB1C2100341E The signature is timestamped: Tue Nov 06 11:26:48 2012 Timestamp Verified by: Issued to: Microsoft Root Authority Issued by: Microsoft Root Authority Expires: Thu Dec 31 02:00:00 2020 SHA1 hash: A43489159A520F0D93D032CCAF37E7FE20A8B419 Issued to: Microsoft Timestamping PCA Issued by: Microsoft Root Authority Expires: Sun Sep 15 02:00:00 2019 SHA1 hash: 3EA99A60058275E0ED83B892A909449F8C33B245 Issued to: Microsoft Time-Stamp Service Issued by: Microsoft Timestamping PCA Expires: Tue Apr 09 16:53:56 2013 SHA1 hash: 1895C2C907E0D7E5C0292B92C6EA8D0E236F525E Successfully verified: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Number of files successfully Verified: 1 Number of warnings: 0 
Number of errors: 0 Thank you!

  • PowerDNS CNAME with multiple A records produces unexpected results

    - by bwight
    This problem from what i can tell is isolated to PowerDNS. The servers are running two packages pdns-static-3.0.1-1.i386.rpm and pdns-recursor-3.3-1.i386.rpm on the most recent version of Amazon Linux. The amazon ec2 loadbalancers are assigned a CNAME with multiple hosts. Below is an example of the actual behavior. Notice how the hosts are always in the same order. [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb Expected behavior is round robin for the hosts [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa [root@localhost ~]# host cache.domain.com cache.domain.com is an alias for xxxxx.us-east-1.elb.amazonaws.com. xxxxx.us-east-1.elb.amazonaws.com has address aaa.aaa.aaa.aaa xxxxx.us-east-1.elb.amazonaws.com has address bbb.bbb.bbb.bbb The addresses eventually do swap but it seems to be on a 30 minute cache timer changing the TTL of the record doesn't appear to affect anything. It appears as though the resolver has a cache of the response. This adversely affects my application because all of the load is only being sent to one of the loadbalancers (Availability Zones) so if I have servers in two zones then only one zone is under load at a time. Do you know how I can fix this so that each time the host is resolved the order of the addresses is alternating.
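    A diagnostic sketch that separates the two daemons' behaviour: ask the local recursor and an outside resolver for the same name and see whether the ordering is pinned locally or upstream (8.8.8.8 is just a convenient external resolver; the ELB name is the one from the output above):
      dig +short xxxxx.us-east-1.elb.amazonaws.com @127.0.0.1
      dig +short xxxxx.us-east-1.elb.amazonaws.com @127.0.0.1   # repeat: same order every time?
      dig +short xxxxx.us-east-1.elb.amazonaws.com @8.8.8.8     # does an outside view rotate?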

  • Ports do not open after rules appended in iptables

    - by user2699451
    I have a server that I am trying to setup for OpenVPN. I have followed all the steps, but I see that when I try to connect to it in Windows, it doesn't allow me, it just hangs on connecting, so I did a nmap scan and I see that port 1194 is not open so naturally I append the rule to open 1194 with: iptables -A INPUT -i eth0 -p tcp --dport 1194 -j ACCEPT followed by service iptables save and service iptables restart which all executed successfully. Then I try again, but it doesn't work and another nmap scan says that port 1194 is closed. Here is the iptables configuration: # Generated by iptables-save v1.4.7 on Thu Oct 31 09:47:38 2013 *nat :PREROUTING ACCEPT [27410:3091993] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [5042:376160] -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE -A POSTROUTING -o eth0 -j MASQUERADE -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE -A POSTROUTING -j SNAT --to-source 41.185.26.238 -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE COMMIT # Completed on Thu Oct 31 09:47:38 2013 # Generated by iptables-save v1.4.7 on Thu Oct 31 09:47:38 2013 *filter :INPUT ACCEPT [23571:2869068] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [27558:3656524] :vl - [0:0] -A INPUT -p tcp -m tcp --dport 5252 -m comment --comment "SSH Secure" -j ACCEPT -A INPUT -p icmp -m icmp --icmp-type 8 -m state --state NEW,RELATED,ESTABLISHED -$ -A INPUT -i lo -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -m comment --comment "SSH" -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -m comment --comment "HTTP" -j ACCEPT -A INPUT -p tcp -m tcp --dport 8080 -m comment --comment "HTTPS" -j ACCEPT -A INPUT -p tcp -m tcp --dport 443 -m comment --comment "HTTP Encrypted" -j ACCEP$ -A INPUT -i eth0 -p tcp -m tcp --dport 1723 -j ACCEPT -A INPUT -i eth0 -p gre -j ACCEPT -A INPUT -p udp -m udp --dport 1194 -j ACCEPT -A FORWARD -i ppp+ -o eth0 -j ACCEPT -A FORWARD -i eth0 -o ppp+ -j ACCEPT -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -s 10.8.0.0/24 -j ACCEPT -A FORWARD -j REJECT --reject-with icmp-port-unreachable -A OUTPUT -p icmp -m icmp --icmp-type 0 -m state --state RELATED,ESTABLISHED -j A$ COMMIT # Completed on Thu Oct 31 09:47:38 2013 and my nmap scan from: localhost: nmap localhost Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-31 09:53 SAST Nmap scan report for localhost (127.0.0.1) Host is up (0.000011s latency). Other addresses for localhost (not scanned): 127.0.0.1 Not shown: 996 closed ports PORT STATE SERVICE 22/tcp open ssh 25/tcp open smtp 443/tcp open https 1723/tcp open pptp Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds remote pc: nmap [server ip] Starting Nmap 6.00 ( http://nmap.org ) at 2013-10-31 09:53 SAST Nmap scan report for rla04-nix1.wadns.net (41.185.26.238) Host is up (0.025s latency). Not shown: 858 filtered ports, 139 closed ports PORT STATE SERVICE 22/tcp open ssh 443/tcp open https 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 15.70 seconds So, I do not know what is causing this, any assistance will be appreciated! 
UPDATE AFTER FIRST ANSWER::: [root@RLA04-NIX1 ~]# iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT [root@RLA04-NIX1 ~]# iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT [root@RLA04-NIX1 ~]# iptables -A FORWARD -j REJECT [root@RLA04-NIX1 ~]# iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE [root@RLA04-NIX1 ~]# service iptables save iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] [root@RLA04-NIX1 ~]# service iptables restart iptables: Flushing firewall rules: [ OK ] iptables: Setting chains to policy ACCEPT: filter nat [ OK ] iptables: Unloading modules: [ OK ] iptables: Applying firewall rules: [ OK ] [root@RLA04-NIX1 ~]# lsof -i :1194 -bash: lsof: command not found iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5252 /* SSH Secure */ ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 state NEW,RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 /* SSH */ ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 /* HTTP */ ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 /* HTTPS */ ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 /* HTTP Encrypted */ ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1723 ACCEPT 47 -- 0.0.0.0/0 0.0.0.0/0 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 10.8.0.0/24 0.0.0.0/0 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 10.8.0.0/24 0.0.0.0/0 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 0 state RELATED,ESTABLISHED Chain vl (0 references) target prot opt source destination [root@RLA04-NIX1 ~]# nmap localhostt Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-31 11:13 SAST remote pc nmap [server ip] Starting Nmap 6.00 ( http://nmap.org ) at 2013-10-31 11:11 SAST Nmap scan report for rla04-nix1.wadns.net (41.185.26.238) Host is up (0.020s latency). Not shown: 858 filtered ports, 139 closed ports PORT STATE SERVICE 22/tcp open ssh 443/tcp open https 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 4.18 seconds localhost nmap localhost Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-31 11:13 SAST Nmap scan report for localhost (127.0.0.1) Host is up (0.000011s latency). Other addresses for localhost (not scanned): 127.0.0.1 Not shown: 996 closed ports PORT STATE SERVICE 22/tcp open ssh 25/tcp open smtp 443/tcp open https 1723/tcp open pptp Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds UPDATE AFTER SCANNING UDP PORTS Sorry, I am noob, I am still learning, but here is the output for: nmap -sU [server ip] Starting Nmap 6.00 ( http://nmap.org ) at 2013-10-31 11:33 SAST Nmap scan report for [server address] ([server ip]) Host is up (0.021s latency). Not shown: 997 open|filtered ports PORT STATE SERVICE 53/udp closed domain 123/udp closed ntp 33459/udp closed unknown Nmap done: 1 IP address (1 host up) scanned in 8.57 seconds btw, no changes have been made since post started (except for iptables changes)
