Search Results

Search found 33182 results on 1328 pages for 'linux port'.

  • Shorewall SHOW DYNAMIC command doesn't work

    - by Andrew Burns
    Setting up shorewall dynamic zones, http://shorewall.net/Dynamic.html shows the command "shorewall show dynamic zone", where zone is one of your zones. I can get the add and delete commands to work, but not the show dynamic command. Here is a shell session, with output from ipset list that proves that the items are indeed there:

      $ ipset list CPREM_br0
      Name: CPREM_br0
      Type: hash:ip
      Header: family inet hashsize 1024 maxelem 65536
      Size in memory: 16520
      References: 66
      Members:
      192.168.85.153
      $ shorewall add br0:192.168.85.200 CPREM
      Host br0:192.168.85.200 added to zone CPREM
      $ shorewall show dynamic CPREM
      $ ipset list CPREM_br0
      Name: CPREM_br0
      Type: hash:ip
      Header: family inet hashsize 1024 maxelem 65536
      Size in memory: 16536
      References: 66
      Members:
      192.168.85.153
      192.168.85.200
      $ shorewall delete br0:192.168.85.200 CPREM
      Host br0:192.168.85.200 deleted from zone CPREM
      $ ipset list CPREM_br0
      Name: CPREM_br0
      Type: hash:ip
      Header: family inet hashsize 1024 maxelem 65536
      Size in memory: 16536
      References: 66
      Members:
      192.168.85.153

    I am using the packaged version from Ubuntu 12.04 (4.4.26.1-1).
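
    A hedged workaround sketch: since the ipset clearly holds the members, one can list a dynamic zone's contents by querying its ipset directly instead of relying on "shorewall show dynamic" (the CPREM_br0 set name is taken from the session above):

      # print just the member addresses of the zone's ipset
      ipset list CPREM_br0 | sed -n '/^Members:/,$p'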

  • Best practice to create an ftp administrator account on vsftpd

    - by jtd
    Background: My manager would like me to create an administration account for our FTP server. When logged in via FTP, it should instantly display all of the home directories of the users, and be able to modify any directory or file in any way possible. What would be the best way to go about this? I planned on chrooting this FTP admin to /home, but I don't know how to properly go about the permissions. Maybe make a group called ftp_admins, and chgrp the /home folder? But then wouldn't it affect the users accessing their folders? Any help is appreciated.
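
    A hedged sketch, assuming a filesystem mounted with ACL support: ACLs grant the admin's group access without changing the ownership or group of the users' directories, which sidesteps the chgrp concern above (the ftp_admins group name is taken from the question; ftpadmin is a hypothetical user):

      # create the group and the admin account (names as above)
      groupadd ftp_admins
      useradd -m -G ftp_admins ftpadmin
      # grant the group read/write/traverse on existing and future files,
      # leaving the users' own ownership and modes untouched
      setfacl -R -m g:ftp_admins:rwX /home
      setfacl -R -d -m g:ftp_admins:rwX /home

    Chrooting just this one account to /home could then be done with vsftpd's per-user configuration (user_config_dir) rather than a global chroot setting.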

  • Why is my ethernet interface in promiscuous mode

    - by nhed
    I read that seeing a flag of M in netstat -i is the way to tell which of your interfaces is in promiscuous mode. I run it and I see that eth1 is in promiscuous mode:

      $ netstat -i
      Kernel Interface table
      Iface   MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
      eth1   1500   0 1770161198    0      0      0  57446481      0      0      0 BMRU
      lo    16436   0   97501566    0      0      0  97501566      0      0      0 LRU

    This seems to be the case on all the machines I checked (all CentOS 6.0, both virtual and physical). Any idea why ethernet devices would be in such a mode, unless someone was running a pcap-based app (sudo lsof | grep pcap shows nothing)? I did not see any mention of promiscuous in any of the config files (sudo grep -r promis /etc). Any ideas what puts the interface into that mode and why? P.S. Most of the posts I see seem to be security related; this is not that.
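
    A hedged way to double-check the interface state directly, rather than inferring it from netstat's flag letters (interface name from the question; the promiscuity counter needs a reasonably recent kernel/iproute2):

      # iproute2's detailed view reports a "promiscuity" counter
      ip -d link show eth1 | grep -i promisc
      # the kernel also logs transitions in and out of promiscuous mode
      dmesg | grep -i promiscuous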

  • How to mount a remote samba share from the local host with multiple groups?

    - by Dragos
    I am using mount.cifs to mount a remote samba share (both client and server are Ubuntu server 8.04) like this:

      mount.cifs //sambaserver/samba /mountpath -o credentials=/path/.credentials,uid=someuser,gid=1000

      $ cat .credentials
      username=user
      password=password

    I mounted the share as a local user with mount.cifs, but the problem is that the user is part of multiple groups on the remote system, and with mount.cifs I can only specify one gid. Is there a way to specify all the gids that the remote user has? Is there a way to:

    1) Mount the remote samba share with multiple groups on the local system?
    2) Browse the mount from 1) in the terminal, since I want to pass some files from samba as arguments to local programs?

    Other solutions would be: nautilus sftp://, which runs through gvfs, but newer gnome no longer writes ~/.gvfs to disk, so I can't browse it in a terminal. And the last solution would be nfs, but that means I have to synchronize the uids and gids on the local system with the ones from the server.
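
    A hedged sketch: the uid/gid mount options only affect how ownership is presented locally; the server enforces permissions for the authenticated user. On kernels whose cifs module supports the multiuser mount option (newer mainline kernels, so possibly not Ubuntu 8.04's), each local user can authenticate separately, and the server then applies all of that user's group memberships:

      # mount once with the multiuser option (paths from the question)
      mount -t cifs //sambaserver/samba /mountpath -o multiuser,credentials=/path/.credentials
      # each logged-in user then attaches their own credentials to the session
      cifscreds add sambaserver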

  • Debian: video problems with VLC

    - by kemp
    I have a problem playing AVI DivX files with VLC. Yesterday, upon starting, VLC showed an error message complaining about DivX codecs and refused to proceed. Today the player starts, but the video is squashed horizontally: it occupies roughly a quarter of the VLC window, and the rest of the window is black. I'm on an up-to-date testing system, and before yesterday VLC was playing fine. In the meantime, after a recent dist-upgrade, I can no longer run X with the proprietary ATI drivers and have to use the FOSS radeonhd ones instead. I don't know if that's related, but I thought it could be worth mentioning (and by the way, if anyone has suggestions about this problem too, that'd be very much appreciated). How can I fix VLC's problem?

  • My DNS works! But, what is the simplest way to add something to it?

    - by Alex
    This is my current DNS example.com.db zone file. I followed a tutorial. It works: when I point another server at this DNS via resolv.conf, it forwards me to the right IP when I do "ping example.com".

      ;
      ; BIND data file for example.com
      ;
      $TTL    604800
      @       IN      SOA     example.com. info.example.com. (
                              2007011501      ; Serial
                              7200            ; Refresh
                              120             ; Retry
                              2419200         ; Expire
                              604800)         ; Default TTL
      ;
      @               IN      NS      ns1.example.com.
      @               IN      NS      ns2.example.com.
      example.com.    IN      MX      10      mail.example.com.
      example.com.    IN      A       192.168.254.1
      www             IN      CNAME   example.com.
      mail            IN      A       192.168.254.1
      ftp             IN      CNAME   example.com.
      example.com.    IN      TXT     "v=spf1 ip4:192.168.254.1 a mx ~all"
      mail            IN      TXT     "v=spf1 a -all"

    Right now, ping example.com goes to 192.168.254.1. That's great, it works! My question is: how can I add something to this file so that from my other servers:

      ping dbserver1 ....... goes to 44.245.66.222
      ping cacheserver1 .... goes to 38.221.44.555

    I want to use it like a universal hosts file for my machines.
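
    A hedged sketch: plain A records added to the zone do exactly this (names and addresses are taken from the question as written; note the second address's last octet exceeds 255, so a real deployment would need a valid one). Bump the SOA serial and reload afterwards:

      dbserver1       IN      A       44.245.66.222
      cacheserver1    IN      A       38.221.44.555

      # after editing, increment the Serial in the SOA, then:
      rndc reload example.com

    The short names only resolve from clients whose resolv.conf contains "search example.com" (or an equivalent domain line); otherwise the fully qualified dbserver1.example.com is needed.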

  • How to query a DHCP server to get the local DNS servers

    - by Dan Berlyoung
    I have a ClarkConnect (CentOS-based) box running as my home router on a RR connection. I had the DNS servers set up to use Google's DNS servers. I want to change them back to the local DNS servers, but I can't find an obvious/easy way to get those addresses short of a) reconfiguring the router's network to DHCP them (I would rather not interrupt everyone) or b) calling their tech support (kill me now!). Is there a command-line tool/command I can use to query the DHCP server on the external NIC to see what DNS servers it would set me up with, without munging my existing setup?
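
    A hedged sketch: the DNS servers the ISP's DHCP server hands out are usually already recorded in the client's lease file, so no new DHCP exchange is needed at all (the path below is the usual CentOS dhclient location and may differ on ClarkConnect):

      grep domain-name-servers /var/lib/dhclient/dhclient-*.leases

    If dhcpcd happens to be available, it also has a pure test mode (dhcpcd -T eth0) that requests a lease without applying it.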

  • What are potential reasons a user could be connected to a home network, but not to the internet?

    - by Matthew
    I have a friend who recently started using Ubuntu, and I've been answering his questions via the internet. However, I'm stuck on this one. He bought a Linksys WPC11 wireless card and says he was able to create a network connection, but was unable to ping or use a browser. I'm not quite sure where to start in figuring this out. What are some common causes of this sort of problem?
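
    A hedged triage sketch, working up the stack one layer at a time (the gateway address is an assumption; substitute the router's real IP):

      ip addr show wlan0      # did the interface get an address?
      ip route                # is there a default gateway?
      ping -c 3 192.168.1.1   # can we reach the router?
      ping -c 3 8.8.8.8       # does raw IP work past the router?
      ping -c 3 example.com   # does DNS resolution work?

    Whichever step fails first points at the layer to investigate: association/DHCP, routing, the router's uplink, or DNS.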

  • Using cookies with lynx

    - by XXL
    This works:

      lynx -cfg=cfg.file $URL

    with the following contents of the .cfg file:

      SET_COOKIES:TRUE
      ACCEPT_ALL_COOKIES:TRUE
      PERSISTENT_COOKIES:TRUE
      COOKIE_FILE:cookie.file

    However, this does not:

      lynx -cookies=1 -accept_all_cookies=1 -cookie_file=cookie.file $URL

    If it's going to be of any help, here's the trace:

      parse_arg(arg_name=-cookies=1, mask=1, count=2)
      parse_arg lookup(cookies=1)
      ...skip (mask 1/4)
      parse_arg(arg_name=-accept_all_cookies=1, mask=1, count=3)
      parse_arg lookup(accept_all_cookies=1)
      ...skip (mask 1/4)
      parse_arg(arg_name=-cookie_file=cookie.file, mask=1, count=4)
      parse_arg lookup(cookie_file=cookie.file)
      ...skip (mask 1/4)
      parse_arg(arg_name=$URL, mask=1, count=5)
      parse_arg startfile:$URL

    The obvious question: why? The actual difference, from what I see, is the inability to trigger PERSISTENT_COOKIES:TRUE via command-line options in lynx. Or maybe I have overlooked/misunderstood something?
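
    A hedged sketch: builds of lynx with persistent-cookie support accept a separate option naming the file cookies are saved to, which may be the missing piece here, since -cookie_file on its own only sets the file cookies are read from:

      lynx -cookies -accept_all_cookies -cookie_file=cookie.file -cookie_save_file=cookie.file $URL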

  • Taking a screencast in Backtrack 4

    - by user30196
    I'm working on a tutorial using a Backtrack 4 Live USB, and I would like to take a screencast of what I'm doing (not just screenshots). So far I have tried these applications with limited success:

      - recordmydesktop
      - xvidcap
      - wink
      - istanbul
      - vlc
      - vnc2flv

    Each time I try, the resulting files are generally choppy (at best 1 frame per second), and most don't even end up with a clear view of the screen. If anyone has suggestions for the screencast, I would greatly appreciate it.
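
    A hedged alternative sketch: ffmpeg's x11grab input often performs better on a live-USB system than GUI recorders, since it writes one stream straight to disk with no compositing overhead (display, resolution, frame rate, and output path are assumptions to adjust):

      ffmpeg -f x11grab -r 15 -s 1024x768 -i :0.0 /tmp/screencast.avi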

  • How to Configure Sendmail / Webmin for second IP?

    - by user310594
    Hi. I have LAMP, CentOS 5.4, and Webmin. Until recently I have had all domains using "server1.example.com". Now I have newdomain.com on second.ip.address.works (works for DNS, that is). Please tell me how to set up sendmail so that mail is sent from the second IP address. This is new for me: if I need to create a second server called "server2.domain2.com", then please tell me exactly how, since I'm only experienced with one server per VPS. Whether "server2.domain2.com" needs to be created or not, here is exactly what is needed:

      - Mail being sent from domains using ns1.example.com needs to be sent from that server and that IP.
      - Mail being sent from domains using nsother.example2.com needs to be sent from that IP.
      - How to set up the second server / hostname, if needed.

    Thank you.
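
    A hedged sketch for the outbound-IP part: sendmail picks its source address via ClientPortOptions, settable in sendmail.mc (the address below is the question's placeholder; regenerate the .cf and restart afterwards):

      dnl in sendmail.mc, bind outgoing client connections to the second IP
      CLIENT_OPTIONS(`Family=inet, Addr=second.ip.address.works')dnl

      # then regenerate and restart
      m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
      /etc/init.d/sendmail restart

    Choosing a different source IP per sender domain is harder; it generally means running a second sendmail instance (its own .cf and queue) bound to the other address.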

  • How to share drive space from vmware server 2 host to a guest?

    - by matnagel
    In the VMware Tools in the guest there is an option to access shares from the host. What is the way to create such shares on a VMware Server 2 host? I did not find it anywhere in Infrastructure Web Access. I also went through the VMware Server 2 user's guide, but did not see it mentioned. Can you help? The host is an Ubuntu 64-bit server 8.04 LTS.
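
    A hedged sketch, with the caveat that these .vmx keys come from the Workstation-family shared-folder support and are community-documented rather than exposed in Server 2's web UI; treat the key names as assumptions and edit the guest's .vmx only while it is powered off:

      sharedFolder.maxNum = "1"
      sharedFolder0.present = "TRUE"
      sharedFolder0.enabled = "TRUE"
      sharedFolder0.readAccess = "TRUE"
      sharedFolder0.writeAccess = "TRUE"
      sharedFolder0.hostPath = "/srv/shared"
      sharedFolder0.guestName = "shared"
      sharedFolder0.expiration = "never"

    /srv/shared and the share name are placeholders. If the build simply lacks hgfs support, an ordinary network mount from the host (NFS or Samba) is the fallback.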

  • Webcam error (libv4lconvert) while capturing VIDEO

    - by shadyabhi
    I get the following when I capture images using my webcam:

      libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffff
      libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffff
      libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffff
      ... (same error repeating) ...

    I also had an issue where my camera was not getting detected in Ubuntu, so in order to run an application that uses the webcam, I have to run a command like:

      LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so ./6dofhand

    What's causing these errors?
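
    A hedged sketch: these errors typically mean the camera emits MJPEG frames that libv4lconvert cannot fully decode; asking the driver for an uncompressed format sidesteps the JPEG path entirely (device node and resolution are assumptions):

      v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=YUYV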

  • How to reliably keep an SSH tunnel open?

    - by Peltier
    I use an SSH tunnel from work to get around various idiotic firewalls (it's OK with my boss :)). The problem is that after a while the SSH connection usually hangs, and the tunnel is broken. If I could at least monitor the tunnel automatically, I could restart it when it hangs, but I haven't even figured out a way of doing that. Bonus points for anyone who can tell me how to prevent my SSH connection from hanging, of course!
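
    A hedged sketch covering both halves: autossh supervises the tunnel and restarts it, while SSH's own keepalives make a dead connection get noticed within about 90 seconds (host, user, and the SOCKS forward are placeholders):

      autossh -M 0 -N \
          -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
          -o ExitOnForwardFailure=yes \
          -D 8080 user@home.example.com

    The ServerAliveInterval/ServerAliveCountMax pair also answers the "prevent hanging" part for plain ssh invocations.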

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I went and downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS without breaking any other programs that require the old libraries. I thought I could do this by adding an entry into /etc/ld.so.conf that pointed to the new libraries, but I think this would conflict with the existing ones, i.e. two references to libcrypto could cause everything to have issues. The main reason for doing this is issues with PHP cURLing to external servers under the latest OpenSSL libs, which would otherwise require edits to our PHP code. Would love some guidance on how best to accomplish this.
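
    A hedged sketch: building Apache with an rpath pinned to the /opt OpenSSL keeps the linkage private to httpd, so /etc/ld.so.conf never needs to change and everything else keeps resolving the system 0.9.8e (install paths are assumptions based on the question):

      ./configure --prefix=/usr/local/apache2 \
          --enable-ssl --with-ssl=/opt/openssl-1.0.1c \
          LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
      make && make install

      # confirm which libssl/libcrypto the binary actually resolves
      ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'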

  • Why does my MAC address reset after reconnecting?

    - by Mr.Student
    I have Ubuntu 12. I'm changing my MAC address with:

      ifconfig wlan0 hw ether xx:xx:xx:xx:xx:xx

    which works. However, when I restart my connection, my computer resets the MAC to the original address. I'm guessing this happens because something calls:

      ifconfig wlan0 down
      ... do something before connecting
      ifconfig wlan0 up
      ... connect to designated access point

    I want my MAC address to stay the same no matter how many times I disconnect and reconnect, whether to another network or the same one. It would also be nice to turn off the auto-connect feature of my network manager without having to edit each individual connection. Lastly, I would like to know how to connect to a wifi network through the terminal rather than via the GUI network manager Ubuntu provides.
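
    A hedged sketch for the last part, using tools outside NetworkManager entirely (interface, SSID, and passphrase are placeholders; the hwaddress line addresses MAC persistence for interfaces managed via /etc/network/interfaces rather than by NetworkManager):

      # connect to a WPA network entirely from the terminal
      wpa_passphrase "MySSID" "passphrase" > /etc/wpa_supplicant.conf
      wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
      dhclient wlan0

      # in /etc/network/interfaces, a MAC override can be made persistent:
      #   iface wlan0 inet dhcp
      #       hwaddress ether xx:xx:xx:xx:xx:xx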

  • SSH connection falling down

    - by kappa
    I've set up a connection with autossh that creates some tunnels at system startup, but if I try to connect, after a successful login (with an RSA key) the connection falls down. Here is a trace:

      debug1: Authentication succeeded (publickey).
      debug1: Remote connections from LOCALHOST:5006 forwarded to local address localhost:22
      debug1: Remote connections from LOCALHOST:6006 forwarded to local address localhost:80
      debug1: channel 0: new [client-session]
      debug1: Requesting no-more-sessions@openssh.com
      debug1: Entering interactive session.
      debug1: remote forward success for: listen 5006, connect localhost:22
      debug1: remote forward success for: listen 6006, connect localhost:80
      debug1: All remote forwarding requests processed
      debug1: Sending environment.
      debug1: Sending env LANG = it_IT.UTF-8
      debug1: Sending env LC_CTYPE = en_US.UTF-8
      debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
      debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0
      debug1: channel 0: free: client-session, nchannels 1
      Transferred: sent 2400, received 2312 bytes, in 1.3 seconds
      Bytes per second: sent 1904.2, received 1834.4
      debug1: Exit status 1

    What can be the problem? All this stuff is managed by a script already running on another machine (creating reverse tunnels on the same machine but with different ports).
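
    A hedged reading of the trace: the forwards all succeed, then the remote side returns exit status 1 and the session closes, which is what happens when the login shell or a forced command exits immediately. For a tunnel-only connection, requesting no remote command at all avoids that (ports taken from the trace; host and user are placeholders):

      autossh -M 0 -N \
          -o ExitOnForwardFailure=yes \
          -R 5006:localhost:22 -R 6006:localhost:80 user@remotehost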

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: the private shopping website Gilt sends periodical update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

      mailed-by    giltgroupe.bounce.ed10.net
      signed-by    giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with postfix. If I try to init the dk-filter service with the following arguments:

      DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    the dk-filter service signs with domain x.com (d=x.com). If I change the daemon arguments as follows:

      DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

      *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM, and it works perfectly; DK, however, doesn't seem to work. I don't have any problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on how to make it possible to sign emails from multiple domains with a specific chosen domain's key?

  • mount multiple folders with nfs4 on centos

    - by microchasm
    I'm trying to get nfs4 working here. On machine 1 (the server) I have a folder containing 2 other folders that I'm trying to share independently:

      /shared/folder1
      /shared/folder2

    The problem is, I can't seem to figure out how to mount the folders independently on the client.

    (Machine 1 - server) /etc/exports:

      /var/shared/folder1 192.168.200.101(rw,fsid=0,sync)
      /var/shared/folder2 192.168.200.101(rw,fsid=0,sync)

      exportfs -ra

    (Machine 2 - client) /etc/fstab:

      192.168.200.201:/folder1/ /home/nfsmnt/folder1 nfs4 rw 0 0

      mount /home/nfsmnt/folder1
      mount.nfs4: 192.168.200.201:/folder1/ failed, reason given by server: No such file or directory

    The folder is there. I'm positive. I think there is something simple I'm missing, but I'm totally missing it. It seems like there should be a way in fstab to tell nfs which folder on the server I want to mount, but I can only find references to what looks like a root mount point (e.g. 192.168.1.1:/), which I assume is handled by exports on the server. But even with the folders set up in exports, there doesn't seem to be an apparent way to pick and choose which gets mounted. Is it not possible to mount separate folders from the same server to different mount points on the client? Any help appreciated.
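
    A hedged sketch: with NFSv4 exactly one export should carry fsid=0; it becomes the pseudo-root, and clients mount paths relative to it. Giving fsid=0 to both folders, as above, is what confuses the path lookup (directory layout and addresses taken from the question):

      # /etc/exports on the server
      /var/shared          192.168.200.101(rw,fsid=0,sync)
      /var/shared/folder1  192.168.200.101(rw,sync)
      /var/shared/folder2  192.168.200.101(rw,sync)

      # on the client, paths are then relative to the pseudo-root
      mount -t nfs4 192.168.200.201:/folder1 /home/nfsmnt/folder1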

  • Keepalived for more than 20 virtual addresses

    - by cvaldemar
    I have set up keepalived on two Debian machines for high availability, but I've run into the maximum number of virtual IPs I can assign to my vrrp_instance. How would I go about configuring and failing over 20+ virtual IPs? This is the, very simple, setup:

      LB01: 10.200.85.1
      LB02: 10.200.85.2
      Virtual IPs: 10.200.85.100 - 10.200.85.200

    Each machine is also running Apache (later Nginx) binding on the virtual IPs for SSL client certificate termination and proxying to backend webservers. The reason I need so many VIPs is the inability to use VirtualHost on HTTPS. This is my keepalived.conf:

      vrrp_script chk_apache2 {
          script "killall -0 apache2"
          interval 2
          weight 2
      }
      vrrp_instance VI_1 {
          interface eth0
          state MASTER
          virtual_router_id 51
          priority 101
          virtual_ipaddress {
              10.200.85.100
              .
              . all the way to
              .
              10.200.85.200
          }

    An identical configuration is on the BACKUP machine, and it's working fine, but only up to the 20th IP. I have found a HOWTO discussing this problem. Basically, they suggest having just one VIP and routing all traffic "via" this one IP, and "all will be well". Is this a good approach? I'm running pfSense firewalls in front of the machines. Quote from the above link:

      ip route add $VNET/N via $VIP
      or
      route add $VNET netmask w.x.y.z gw $VIP

    Thanks in advance.

    EDIT: @David Schwartz said it would make sense to add a route, so I tried adding a static route to the pfSense firewall, but that didn't work as I expected it would. pfSense route:

      Interface: LAN
      Destination network: 10.200.85.200/32 (virtual IP)
      Gateway: 10.200.85.100 (floating virtual IP)
      Description: Route to VIP .100

    I also made sure I had packet forwarding enabled on my hosts:

      $ cat /etc/sysctl.conf
      net.ipv4.ip_forward=1
      net.ipv4.ip_nonlocal_bind=1

    Am I doing this wrong? I also removed all VIPs from keepalived.conf so it only fails over 10.200.85.100.
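
    A hedged sketch: the roughly-20-address ceiling comes from how many addresses fit in a single VRRP advertisement packet. keepalived's virtual_ipaddress_excluded block holds addresses that are not advertised but still follow the instance on failover, which avoids rearchitecting around a single routed VIP (addresses from the question):

      vrrp_instance VI_1 {
          interface eth0
          state MASTER
          virtual_router_id 51
          priority 101
          virtual_ipaddress {
              10.200.85.100
          }
          virtual_ipaddress_excluded {
              10.200.85.101
              # ... and so on through ...
              10.200.85.200
          }
      }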

  • Three server processes consume no more than 50% of Dual Core CPU

    - by thor
    I have three processes running on an Intel Core 2 Duo CPU. From watching the output of 'top' and graphs of CPU load (drawn by MRTG, data collection via SNMP), I can see that CPU load is never more than 50%; most of the day, when those processes are busy, CPU load has a ceiling at 50%. I mean, CPU load grows up to 50% in the morning and stays there until late evening. My first thought was that only one core was being used at 100%, thus giving 50% across both CPUs. But there are three processes running, and from 'top' I can see that both cores are being loaded, so this is not the case. schedtool shows that CPU affinity for those three processes is at the default, 0x03, allowing them to use both cores. If I force one process onto one core (schedtool -a 0x01) and the two others onto the second (schedtool -a 0x02), cumulative usage grows beyond 50%. Why do three processes seem to consume only 50% of two cores? And why does forcing them onto different CPUs allow usage to grow higher? Any hints? P.S. The processes in question are Counter-Strike servers.
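
    A hedged way to confirm what the scheduler is actually doing before theorizing (the PID is a placeholder; mpstat and pidstat come from the sysstat package):

      mpstat -P ALL 1 5      # per-core utilization, sampled 5 times
      pidstat -p 1234 1 5    # CPU use of one game server over time
      taskset -p 1234        # kernel's view of the process's affinity mask

    If each core hovers near 50% under default affinity but saturates when the processes are pinned, the scheduler is likely migrating them between cores rather than enforcing any hard cap.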
