Search Results

Search found 24755 results on 991 pages for 'linux mom'.

  • Lots of dropped packets when tcpdumping on a busy interface

    - by Frands Hansen
    My challenge: I need to tcpdump a lot of data - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:
    - Log all traffic in promiscuous mode from 2 interfaces
    - Those interfaces are not assigned an IP address
    - pcap files must be rotated per ~1G
    - When 10 TB of files are stored, start truncating the oldest
    What I currently do: right now I use tcpdump like this:
        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER
    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring disk space, and when I have 100G left I will start wiping the oldest files - this should be fine.
    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:
        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel  <-- This is my concern
    How can I avoid so many packets being dropped? Things I have already tried or looked at:
    - Changed the value of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help - actually it took care of around half of the dropped packets.
    - I have also looked at gulp - the problem with gulp is that it does not support multiple interfaces in one process, and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient, and I don't have a machine with 10TB+ RAM that I can run wireshark on, so that's out.
    Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
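
    One knob worth trying (a sketch, not a tested fix): recent libpcap-based tcpdump builds accept -B to enlarge the kernel capture buffer (in KiB), which attacks the same problem as the rmem sysctls but per capture. The buffer size below is an assumption to tune against the actual traffic rate:

        # enlarge the capture buffer to ~512 MB, keep the existing rotation scheme
        tcpdump -n -B 524288 -C 1000 -z /data/compress.sh -i any \
            -w /data/livedump/capture.pcap $FILTER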

  • Keep Uploaded Files in Sync Across Multiple Servers - LAMP

    - by Dfranc3373
    I have a website right now that is currently utilizing two servers, an application server and a database server; however, the load on the application server is increasing, so we are going to add a second application server. The problem I have is that the website has users upload files to the server. How do I get the uploaded files onto both of the servers? I do not want to store images directly in the database, as our application is database-intensive already. Is there a way to sync the servers with each other, or is there something else I can do? Any help would be appreciated. Thanks
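
    Two common low-tech approaches are a shared NFS/GlusterFS mount (one copy of the files) or a frequent rsync between the nodes. A minimal sketch of the latter, assuming the upload directory is /var/www/uploads and the peer is reachable as app2 with passwordless SSH (both names are hypothetical):

        # crontab entry on app1, mirroring new uploads to app2 every minute
        # (run the mirror-image job on app2; no --delete, so removals need
        #  separate handling)
        * * * * * rsync -az /var/www/uploads/ app2:/var/www/uploads/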

  • Ubuntu USB that can be booted on Mac and Windows?

    - by An Original Alias
    I am a little frustrated having to use two separate USBs for booting on Mac and Windows. Is there a way that I can have the version for Mac and the version for Windows on one USB? Or even better, one version that can be booted on both? I have heard of multibootiso, but I'm not entirely sure that it will work on an iMac. If needed, I am willing to use the terminal to make this happen, even if it's a long, complicated process.
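
    One possible path, offered with caveats: recent Ubuntu ISOs are hybrid images, so a single image written raw to the stick can boot both BIOS PCs and, depending on firmware, EFI-capable Intel Macs. A sketch (the ISO name is a placeholder, /dev/sdX is the stick, and status=progress needs a newish GNU dd):

        # WARNING: double-check the device with lsblk first - this wipes it
        sudo dd if=ubuntu-desktop-amd64.iso of=/dev/sdX bs=4M status=progress
        sync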

  • Basic connectivity issues between Win 7 and XP on a mixed wired/wireless network [Solved]

    - by Pulse
    Setup:
    - Windows 7 x64 Ultimate desktop hard wired to Asus WL500gp router (WL500gpv2-1.9.2.7-d-r1445 firmware)
    - Several bridged VirtualBox VMs running XP, 7, Ubuntu Server 10.04, Mint 9 and SuSE 11.2
    - Win XP Pro SP3 notebook with D-Link AirPlus wireless network card
    - No firewall or other security software currently running on either platform (at least for the duration of the test)
    Situation:
    - Router is acting as DHCP server
    - Clients are receiving correct addresses and additional parameters
    - Internet connectivity is available from all clients
    - Windows 7 sharing is set to network type = work (not home group)
    - NetBT is disabled on all clients, using SMB over TCP
    What I can do:
    - I can ping the router and Internet addresses from the wireless XP notebook
    - I can ping the Win 7 desktop and any VM from the XP wireless notebook
    - I can ping all devices from the router
    - All VMs and the 7 desktop can ping each other and the router, as well as Internet addresses
    What I can't do:
    - I cannot ping the XP wireless notebook from either the Win 7 desktop or the VMs; it always returns a destination host unreachable error.
    - Tracert resolves the name of the XP notebook but also returns a destination host unreachable.
    From the above it would seem that something is blocking connectivity in a single direction (from the Win 7 box to the Win XP notebook) only, but the router can ping the XP notebook. Some fresh input would be most welcome, as this is beginning to drive me batty. Thanks

  • Apache Virtual Hosts behind Cisco Router

    - by Theo
    I'm setting up an Apache 2.2 Ubuntu web server for internal services that is also supposed to be accessed from outside our LAN. Our LAN has a single external IP, which is the external IP of our RV042 Cisco router. We have set up several A records on our external DNS server that point to this IP. Our internal DNS server resolves the same records to the internal IP of our web server, so computers inside the network can access them using the same address as if they were outside. We forwarded the router's external port 80 to our web server's port 80. I have set up one virtual host for each domain name in our list, and my httpd.conf is something like this:
        ServerName web.domain.com
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName alfresco.domain.com
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass /alfresco http://localhost:8080/alfresco
            ProxyPassReverse /alfresco http://localhost:8080/alfresco
            ProxyPass /share http://localhost:8080/share
            ProxyPassReverse /share http://localhost:8080/share
        </VirtualHost>
        <VirtualHost *:80>
            ServerName crm.domain.com
            DocumentRoot /var/www/sugarcrm
        </VirtualHost>
    Now, this works if we are in our LAN. However, if we are outside of our LAN we reach our web server's default page, saying: "It Works! This is the default web page for this server." But we can't reach the virtual hosts, as if the domain name is not being preserved when the router forwards the packets to the web server. Am I doing something wrong? How can I check what is going on? What should the settings be to make this work from outside?
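
    A diagnostic sketch to separate the two suspects (replace 203.0.113.10 with the real external IP): send a request straight to the IP with an explicit Host header and see which vhost answers.

        # does the Host header reach Apache through the router's port forward?
        curl -v -H "Host: crm.domain.com" http://203.0.113.10/
        # if this returns the SugarCRM vhost, the router is fine and the
        # problem is name resolution; if it still returns the default page,
        # Apache's name-based vhost matching is what needs fixing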

  • Dealing with SMTP invalid command attack

    - by mark
    One of our semi-busy mail servers (sendmail) has had a lot of inbound connections over the past few days from hosts that are issuing garbage commands. In the past two days:
    - incoming SMTP connections with invalid commands from 39,000 unique IPs
    - the IPs come from various ranges all over the world, not just a few networks that I can block
    - the mail server serves users throughout North America, so I can't just block connections from unknown IPs
    - sample bad commands: http://pastebin.com/4QUsaTXT
    I am not sure what someone is trying to accomplish with this attack, besides annoying me. Any ideas what this is about, or how to effectively deal with it?
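
    Since blocking by source network is out, one stopgap is to rate-limit new SMTP connections per source IP with the iptables recent module - a sketch, with thresholds that are assumptions to tune against real client behaviour:

        # track new connections to port 25 per source address
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
            -m recent --set --name smtp
        # drop sources opening more than 10 connections in 60 seconds
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
            -m recent --update --seconds 60 --hitcount 10 --name smtp -j DROP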

  • How to open port 25 on the server

    - by liuxingruo
    I'm using CentOS. I want to implement an SMTP mail server, and I have installed postfix and dovecot (both have been set up correctly). I tried to telnet to port 25, but it returns:
        Unable to connect to remote host: Connection refused
    So, how can I open port 25? Thanks!
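
    "Connection refused" usually means either nothing is listening on the port or a firewall is rejecting the connection. A quick way to tell the two apart (a sketch for CentOS with iptables, assuming a stock postfix install):

        # is postfix actually listening on port 25?
        netstat -tlnp | grep :25
        # if not: check inet_interfaces in /etc/postfix/main.cf
        # (inet_interfaces = all listens on every interface), then
        service postfix restart
        # if it listens locally but remote connections are refused, open the port
        iptables -I INPUT -p tcp --dport 25 -j ACCEPT
        service iptables save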

  • Find hosted directories/ports in Jetty/Apache

    - by Paul Creasey
    Hi, I first asked this on SO, but I didn't get a response and I think it is probably more appropriate here. Let's say I have a directory which is being hosted by Jetty or Apache (I'd like an answer for both). I know the URL, including the port, and I can log into the server. How can I find the directory that is being hosted on a certain port? I'd also like to go the other way: I have a folder on the server which I know is being hosted, but I don't know the port, so I can't find it in a web browser. How can I find a list of directories that are being hosted? This has been bugging me for ages but I've never bothered to ask before! Thanks.
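
    A generic way to walk from port to process to directory on the server itself (a sketch; config paths vary by distribution):

        # which process owns the port?
        lsof -i :8080              # or: netstat -tlnp | grep :8080
        # for Apache, the config names the mapping between ports and directories
        grep -RE "Listen|DocumentRoot" /etc/apache2/ /etc/httpd/ 2>/dev/null
        # for Jetty, the connector port is in the server config (often
        # etc/jetty.xml under jetty.home) and content lives under webapps/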

  • Wireless performance on Ubuntu 9.10

    - by Brian
    Is there something I should do to my networking configuration in Ubuntu to improve the performance of my wireless connection? I'm on a netbook dual-booting Windows 7 and Ubuntu 9.10. I pick up a much stronger wifi signal in Windows than in Ubuntu. As soon as I boot Ubuntu, it connects to the network with a strong signal, and then loses the signal very quickly. After it dies, I can't reconnect. I've tested this on a couple of different networks with the same outcome.
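
    Weak signal plus quick drops on 9.10 often comes down to the driver or missing firmware rather than configuration; a first diagnostic pass might look like this (a sketch - the exact driver and messages depend on the card):

        # identify the wireless chipset and which kernel driver claimed it
        lspci -nnk | grep -A3 -i network
        # look for firmware or driver errors around the time of the disconnect
        dmesg | grep -iE "wlan|firmware|80211"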

  • Why does deleting from the command line take significantly less time than from a GUI?

    - by Jordan Plahn
    So this is probably the dumbest question you'll read today, but it's something I just wondered about as I was deleting a dozen or so images from my computer. With a quick rm -rf command on the directory's contents, all the images were gone in a snap. When I drag the same dozen or so images to a trash can/recycle bin, it sometimes takes 10 seconds or more. Now I'm sure some of it comes from the overhead of the GUI and such, and some of it may be the fact that the file still "exists" in some form if it's put into the recycle bin, but is there anything else that accounts for such a huge time disparity? Are "rm" and "delete" just such fundamentally different commands that I'm trying to compare apples and oranges? Enlighten me, please!
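
    Part of the gap is that "delete to trash" is not a delete at all: under the freedesktop.org trash spec the file is renamed (or copied, if the trash is on another filesystem) into the trash directory, and a metadata file is written for each item. You can see the difference from a shell (a sketch; the trash path is the usual one on modern Linux desktops):

        # plain unlink: one syscall per file, file data untouched
        time rm -rf testdir
        # trash-style delete leaves a renamed file plus a .trashinfo record:
        ls ~/.local/share/Trash/files ~/.local/share/Trash/info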

  • What can an inexperienced admin expect after a server setup that completed seemingly fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is his very first time being the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, Proftpd, and MTA & MDA software, configured virtual hosts properly (facts, because he calls himself admin), done user management and various configuration settings with respect to security recommendations, and... everything is fine for now... For now. If you were directing a horror movie about the server admin mentioned above, what would you make up for the boogeyman that shows up and starts to pursue him? Omitting hardware disaster cases, for which one cannot do anything 'from remote', what are the most common causes of significant server (or part-of-server, or server-related) failure when managed by an inexperienced admin? I have in mind something that newbie admins very often miss and that leads to later intervention by someone with experience. Might that be some uncontrolled CPU-eating leftover process, a memory-related glitch, a widely-used feature that messes up something unexpected, or anything like that? The newbie admin for now only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips regarding what's probably going to happen to his server over time.
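
    Extending the monitoring beyond disk and RAM is cheap insurance against several of the classic failure modes (full disks, OOM kills, runaway processes). A minimal cron-driven health check might look like this sketch - the threshold, paths and alert address are placeholders, and it assumes a working mail command:

        #!/bin/sh
        # /etc/cron.hourly/healthcheck - complain when disk or logs look bad
        PART=/ ; LIMIT=90
        USE=$(df -P "$PART" | awk 'NR==2 {sub(/%/,""); print $5}')
        [ "$USE" -gt "$LIMIT" ] && \
            echo "disk ${USE}% full on $PART" | mail -s "disk alert" admin@example.com
        # surface OOM kills and segfaults; cron mails any output it sees
        grep -iE "oom-killer|segfault" /var/log/syslog | tail -5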

  • Why are pages automatically redirecting to other sites?

    - by raj
    In my browser (Firefox 10.0.7), pages automatically redirect to other sites without my clicking any link. If I enter the superuser.com URL and press Enter, it redirects to some other site. Sometimes the page also redirects to another site while refreshing. It's redirecting to these sites:
        http://result.seenfind.com/ncp/Default.aspx?term=gatlinburg%20cabin&u=1000670913
        http://search.cpvee.com/search.php?q=gatlinburg+cabin&y=&f=2168&s=
        http://www.insidecelebritygossip.com/
    I cleared all history and everything, but the problem remains. I am using CentOS 6.3.
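
    Redirects like this usually mean the name-resolution or proxy path has been tampered with; a first round of checks from a shell might be (a sketch - compare the answers against a known-good resolver):

        # anything suspicious pinned in the hosts file?
        grep -v '^#' /etc/hosts
        # which resolvers is the machine actually using?
        cat /etc/resolv.conf
        # does the system resolver agree with a public one?
        dig +short superuser.com
        dig +short superuser.com @8.8.8.8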

  • How do I get rid of sockets in FIN_WAIT1 state?

    - by Gert M
    I have a port that is blocked by a process I needed to kill (a little telnet daemon that crashed). The process was killed successfully, but the port is still in the FIN_WAIT1 state. It doesn't come out of it; the timeout for that seems to be set to 'a decade'. The only way I've found to free the port is to reboot the entire machine, which is of course something I do not want to do.
        $ netstat -tulnap | grep FIN_WAIT1
        tcp   0   13937   10.0.0.153:4000   10.0.2.46:2572   FIN_WAIT1   -
    Does anyone know how I can get this port unblocked without rebooting?
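
    A FIN_WAIT1 socket with data still queued (the 13937 in the send queue above) keeps retransmitting until the kernel gives up. On Linux the number of retries for such orphaned sockets is tunable, so lowering it can let the socket die off without a reboot - a sketch, with the caveat that this is a system-wide knob affecting all orphaned sockets:

        # how many times the kernel retries before killing an orphaned socket
        cat /proc/sys/net/ipv4/tcp_orphan_retries
        # temporarily lower it so the stuck socket times out sooner
        sysctl -w net.ipv4.tcp_orphan_retries=2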

  • How to write rules for persistent net names?

    - by ndemou
    I know that a process generates persistent network card names based on rules found in /lib/udev/rules.d/75-persistent-net-generator.rules. I also know how to completely disable this process with a simple
        echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules
    but I've read that I "could also write my own rules file to give the interface a name - the persistent rules generator ignores the interface if a name has already been set" (/etc/udev/rules.d/README confirms that this is possible). Do you have any pointers to documentation about how to write such rules? (I mostly care about Debian/Ubuntu and a bit less about CentOS.) As a specific example of why I want to write custom rules: I have two identical servers, each with one onboard LAN and one PCI LAN. In case of HW failure I want to be able to move the disks from HW#1 to HW#2, and it's important for eth0 to keep pointing to the onboard card and eth1 to the PCI card (no one wants to mess with cabling in the middle of a HW failure panic). My current workaround works but is a lot of work[1], so I wonder if writing custom rules would allow me to express something simple like this:
    - cards with MAC A or B should be named eth0
    - cards with MAC C or D should be named eth1
    - follow the default naming scheme for anything else
    [1] Install the OS on HW#1 and keep a copy of /etc/udev/rules.d/70-persistent-net.rules. Move the disks to HW#2 and keep a second copy of the same file. Concatenate the two copies and manually edit the NAME="ethX" part. Replace /etc/udev/rules.d/70-persistent-net.rules with my version. Finally, disable auto-creation of a new 70-persistent-net.rules using echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules
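
    The rule format the generator itself writes covers exactly this case: match on the hardware address, set the name, and let anything unmatched fall through to the default behaviour. A hand-written /etc/udev/rules.d/70-persistent-net.rules might look like this sketch (the MAC addresses are placeholders):

        # onboard cards of both servers -> eth0
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:aa:aa:01", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:bb:bb:01", NAME="eth0"
        # PCI cards of both servers -> eth1
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:aa:aa:02", NAME="eth1"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:bb:bb:02", NAME="eth1"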

  • How do I host multiple domains on Ubuntu Server (Hardy Heron)?

    - by markle976
    I am trying to figure out the best way to host multiple domains on my Ubuntu server. I have tried multiple options, but I can't get everything to work the way I want it to. I want to be able to add domains without having to restart Apache each time. I tried using mod_vhost_alias (see below), but that maps www.domain.com and domain.com to different folders. I also need to be able to use mod_rewrite to map requests for domain.com/app/* to domain.com/somescript.php. Current httpd.conf:
        UseCanonicalName Off
        VirtualDocumentRoot /var/www/%0
    Any thoughts?
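
    Since %0 interpolates the full requested hostname, www.domain.com and domain.com naturally land in different directories. One workaround - a sketch, assuming it is acceptable to redirect visitors to the bare domain - is to canonicalize the Host before mod_vhost_alias sees it:

        UseCanonicalName Off
        RewriteEngine On
        # send www.domain.com to domain.com so one directory per domain suffices
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]
        VirtualDocumentRoot /var/www/%0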

  • Looking for advice on using dd to back up a dual-boot laptop

    - by AvatarOfChronos
    My question boils down to this: if I do dd if=/dev/sda of=usbdrive, can anybody confirm that this will get everything, including the MBR, partition information, and all four partitions, and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again. Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before who can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation? And lastly, if the USB drive is larger than the failing internal drive, I'll still be able to expand the partitions later, so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640GB HDD into a 500GB HDD. Also, if anybody has a better solution to create a complete clone that gets everything, I'm all for hearing about it.
    PostScript: I had been making periodic backups; however, when whatever miasma that killed the laptop struck, it also got the NAS :(
    Post-PostScript: both devices were on a UPS system.
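
    For a drive that may be dying, GNU ddrescue is usually a better fit than plain dd: it skips around unreadable areas, retries them later, and keeps a log so the copy can be resumed. A sketch (device names are assumptions - verify with lsblk first; note too that imaging a disk while the OS is running from it produces an inconsistent copy, so booting a live USB is safer):

        # first pass: grab everything readable quickly, skipping bad areas
        ddrescue -f -n /dev/sda /dev/sdb rescue.log
        # second pass: go back and retry the bad areas a few times
        ddrescue -f -r3 /dev/sda /dev/sdb rescue.log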

  • Sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: I have a virtual private server that I'm looking to host multiple websites on, and to provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from the other sites on the server that I will develop.
    The problem: retain control. Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example.
    I need him not to be able to:
    - take away other admin permissions
    - change the root password
    - have control over other security/administrative functions
    I would like him to still be able to:
    - install software (through apt-get)
    - restart Apache
    - access MySQL
    - configure MySQL/Apache
    - reboot
    - edit web-development configuration type files in /etc/
    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above.
    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know that that setup can be done. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
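
    A sudoers fragment in that spirit might look like the sketch below (the username and command paths are placeholders; edit with visudo). The big caveat belongs in the file itself: anything that can install arbitrary packages or edit arbitrary files under /etc is ultimately root-equivalent, so rules like these limit accidents more than they stop a determined user.

        # /etc/sudoers.d/webdev (or append via visudo)
        Cmnd_Alias WEBDEV = /usr/bin/apt-get, \
                            /usr/sbin/service apache2 *, \
                            /etc/init.d/apache2 *, \
                            /sbin/reboot
        webdev  ALL = (root) WEBDEV
        # NOTE: apt-get alone can install packages whose maintainer scripts
        # run as root, so this is damage limitation, not a security boundary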

  • How do I set up Apache virtual hosts on my Ubuntu 12.04?

    - by YumYumYum
    According to the docs https://help.ubuntu.com/10.04/serverguide/httpd.html I have done the following, and this is almost exactly what I always do in Fedora, but in Ubuntu it looks like it's not working.
    a) DNS to IP:
        $ echo "127.0.0.1 a" > /etc/hosts
        $ echo "127.0.0.1 b" > /etc/hosts
    b) Apache virtual hosts:
        $ ls
        1  2  default  default.backup  default-ssl
        $ cat 1
        <VirtualHost *:80>
            ServerName a
            ServerAlias a
            DocumentRoot /var/www/html/a/public
            <Directory /var/www/html/a/public>
                #AddDefaultCharset utf-8
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        $ cat 2
        <VirtualHost *:80>
            ServerName b
            ServerAlias b
            DocumentRoot /var/www/html/b/public
            <Directory /var/www/html/b/public>
                #AddDefaultCharset utf-8
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
    c) Load into Apache and restart the service:
        $ a2ensite 1
        $ a2ensite 2
        $ a2dissite default
        $ /etc/init.d/apache2 restart
    d) Browse the two new hosts:
        $ firefox http://a
    It does not work: both http://a and http://b always go to /var/www/html. How do I fix it so that each goes to its own directory, e.g. http://a goes to /var/www/html/a/public, not /var/www/html?
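
    Two things in the transcript stand out (offered as likely culprits, not a verified diagnosis): the second echo uses > and therefore overwrites the first hosts entry, and nothing shown declares name-based virtual hosting, in which case Apache serves the first matching vhost for every name. A sketch of the corrected steps:

        # append, don't overwrite - both names must end up in /etc/hosts
        echo "127.0.0.1 a" >> /etc/hosts
        echo "127.0.0.1 b" >> /etc/hosts
        # confirm name-based vhosts are enabled for port 80
        # (on Ubuntu 12.04 this usually lives in /etc/apache2/ports.conf)
        grep -R "NameVirtualHost" /etc/apache2/
        # then re-enable the sites and reload
        a2ensite 1 && a2ensite 2 && service apache2 reload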

  • Linux: Ubuntu is no longer the most popular distribution according to DistroWatch. Is Unity to blame? A meteoric rise for Mint

    Linux: Ubuntu is no longer the most popular distribution according to DistroWatch. Is Unity to blame? A meteoric rise for Mint. The Ubuntu Linux distribution is losing ground according to DistroWatch's annual report, updated this week. Over the last 12 months, Linux Mint tops the table of the most popular distributions. Ubuntu comes in second, but its decline accelerated over the last month, where it ranks fourth behind Fedora,

  • Tracing outgoing connections

    - by Tiffany Walker
    Jan 24 07:00:49 HOST kernel: [875997.380464] Firewall: *TCP_OUT Blocked* IN= OUT=eth0 SRC=108.60.11.15 DST=74.80.225.32 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=18789 DF PROTO=TCP SPT=64823 DPT=81 WINDOW=14600 RES=0x00 SYN URGP=0
    Jan 24 07:00:50 HOST kernel: [875998.378321] Firewall: *TCP_OUT Blocked* IN= OUT=eth0 SRC=108.60.11.15 DST=74.80.225.32 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=18790 DF PROTO=TCP SPT=64823 DPT=81 WINDOW=14600 RES=0x00 SYN URGP=0
    I run fcgid, so everything runs as a user. But is there a way to trace this and figure out who is running an outgoing script? The sites all share the same IP, so it's hard to know which site it is or where the script is located.
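
    Since fcgid runs each site as its own user, logging the UID of the socket owner narrows the culprit to a site; iptables' LOG target has a flag for exactly that. A sketch, to layer in front of the existing block rule:

        # log who owns the socket before the packet gets blocked
        iptables -I OUTPUT -o eth0 -p tcp --dport 81 \
            -j LOG --log-prefix "OUT81: " --log-uid
        # the resulting kernel log lines gain a UID= field; map it to a user
        getent passwd 10001    # UID taken from the log - placeholder value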

  • NAT via iptables and virtual interface

    - by Alex
    I'm trying to implement the following scenario: one VM host, multiple guest VMs, each one getting its own IP address (and domain). Our server has only one physical interface, so the intended use is to add virtual interfaces on eth0. To complicate our situation, the provider uses port security on their switches, so I can't run the guest interfaces in bridged mode, because then the switch detects a "spoofed" MAC address and kills the interface (permanently, forcing me to call support, which I'm sure will get them a little bit angry the third time ;) ). My first guess was to use iptables and NAT to forward all packets from one virtual interface to another one, but iptables doesn't seem to like virtual interfaces (at least I can't get it to work properly). So my second guess is to use the source IP of the packets to the public interface. Let's assume libvirt creates a virbr0 network with 192.168.100.0/24 and the guest uses 192.168.100.2 as its IP address. This is what I tried to use:
        iptables -t nat -I PREROUTING --src public_ip_on_eth0:0 -p tcp --dport 80 -j DNAT --to-destination 192.168.100.2:80
    That doesn't give me the intended results either (accessing the server times out). Is there a way to do what I'm trying to do, or even to route all traffic to a certain IP on a virtual interface to the VM's device?
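
    One thing that jumps out (offered as a likely fix, not a certainty): for inbound connections the match should be on the destination address - the extra public IP bound to eth0:0 - not --src, and the kernel must be allowed to forward between eth0 and virbr0. A sketch with placeholder addresses (198.51.100.2 stands in for the guest's public IP):

        # let the kernel route between the interfaces
        sysctl -w net.ipv4.ip_forward=1
        # DNAT traffic aimed at the guest's public IP to the guest
        iptables -t nat -A PREROUTING -d 198.51.100.2 -p tcp --dport 80 \
            -j DNAT --to-destination 192.168.100.2:80
        # let the guest's outbound traffic leave with the right source address
        iptables -t nat -A POSTROUTING -s 192.168.100.2 -o eth0 \
            -j SNAT --to-source 198.51.100.2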

  • Sending a file to uClinux

    - by Mike
    I have a board running uClinux, and I need to send a file to it, but I'm not sure how... I have an RS232 port and an Ethernet port with an IP address at my disposal. I can telnet to the board, but uClinux doesn't have a built-in FTP client. What sort of options do I have here for transferring files from my Windows 7 (or openSUSE) machine(s) to this board?
    EDIT: I just found I have a TFTP client on it:
        # tftp
        BusyBox v0.60.5 (2012.07.09-14:05+0000) multi-call binary
        Usage: tftp [OPTION]... HOST [PORT]
    But I can't find any good information on how to use TFTP. Looking around Google I'm seeing it's good for loading binary images, so I assume anything could be sent, but I'm not sure.
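
    BusyBox's tftp applet is a client, so the flow would be: run a TFTP server on the desktop (several free ones exist for Windows; tftpd-hpa or atftpd on openSUSE), then pull the file from the board. A sketch with placeholder names and addresses:

        # on the board: fetch "myfile" from the desktop at 192.168.1.10
        tftp -g -r myfile -l /tmp/myfile 192.168.1.10
        # -g = get, -r = remote file name, -l = local file name;
        # pushing a file back out works with -p instead of -g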

  • VMware - Broadcom 1Gbps NIC does not link at 100Mbps to a Cisco switch port

    - by Spirit
    Today we stumbled on a very awkward situation with our VMware server. The server runs ESX 3.5 and has a 1Gbps NIC. We bought a brand new managed Cisco Linksys switch with 10/100Mbps interface ports, but when we plugged the cable into one of the ports the link simply would not come up :S... Has anyone with more VMware experience ever had a similar problem? From what I know, 1Gbps NICs are backwards compatible with 100Mbps switches. This is what we've tried so far, with no success:
    - Tried http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004089
    - Tried to modify /etc/modules.conf following the guide in this article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=813
    - After the changes I restarted the networking services using # service network restart, # service mgmt-vmware restart and # service vmware-vpxa restart
    It seems that no matter how many times, or with whatever approach/method (GUI or shell), we try to change the speed and duplex of the network adapter and force it to 100Mbps, it only accepts 1Gbps... I am starting to go nuts :@
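
    On ESX 3.5 the service-console way to pin a physical NIC's speed is esxcfg-nics, which is worth comparing against what the GUI reports (a sketch - vmnic0 is a placeholder for the actual uplink, and the switch port should be hard-set to the same speed/duplex to avoid a mismatch):

        # list physical NICs with current link state, speed and duplex
        esxcfg-nics -l
        # force 100 Mbps full duplex on the uplink
        esxcfg-nics -s 100 -d full vmnic0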

  • I'm trying to set up a LAMP server so it's totally anonymous. Any suggestions?

    - by flexterra
    I'm going to set up a web service which will use the LAMP stack. One of the most important features of the site is that it should be anonymous. We thought that a cool thing would be if the server didn't make any logs that could potentially identify a user. I'm working on a web app for a news organization. They want a site to allow people to submit news leads and tips (text / files) to journalists. We think that if we can provide good anonymity, people will be more inclined to provide information. We will also teach how to use tools like TOR as an extra precaution for whistleblowers. Is this even possible? Any suggestions of obscure things we should look into?
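
    On the Apache side, the log content is controlled by LogFormat, so one piece of this (a sketch, and only one layer - PHP, MySQL and the kernel can all record identifying data elsewhere) is to either discard logs or define a format with no client address; ${APACHE_LOG_DIR} is the Ubuntu convention:

        # option 1: keep logs for debugging, but without identifying fields
        LogFormat "%t \"%r\" %>s %b" anonymized
        CustomLog ${APACHE_LOG_DIR}/access.log anonymized
        # option 2: no logs at all
        # CustomLog /dev/null common
        ErrorLog /dev/null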

  • rsync without a password: none of the Google (Server Fault) tutorials worked

    - by Jake Armstrong
    I need to use rsync for a daily backup operation, and in the past (on different servers) I managed to just use an RSA key etc., but now none of the Google (Server Fault) tutorials work at all. It keeps asking me for a password. I have webmin and ssh/root access to both servers. My steps:
    - create a key on server 1
    - send key.pub to server 2
    - add key.pub to .ssh/authorized_keys
    - chmod 700 .ssh/authorized_keys
    - go back to server 1 and try rsync, and it keeps asking for a password...
    rsync command:
        rsync -avz -e ssh file.txt root@server2:/root
    EDIT: well, I cleaned up everything, and this time, instead of giving a custom name to the key, I used the standard one on server1, sent the .pub to server2, and it worked like a charm... So the answer is that server1's ssh wasn't even using the right key...
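
    For the record, the canonical setup that avoids both pitfalls in this story - a non-default key name and hand-set permissions - is roughly this sketch (ssh-copy-id handles the remote file and its permissions for you):

        # on server1: the default key path means ssh finds it with no extra flags
        ssh-keygen -t rsa             # accept the default ~/.ssh/id_rsa
        ssh-copy-id root@server2      # appends the key, fixes permissions
        # if copying by hand instead, the permissions sshd insists on are:
        #   chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
        rsync -avz -e ssh file.txt root@server2:/root
        # a key with a custom name must be named explicitly:
        # rsync -avz -e "ssh -i ~/.ssh/custom_key" file.txt root@server2:/root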
