Search Results

Search found 3522 results on 141 pages for 'ams 23'.

  • Selenium server won't start

    - by moff
    I'm getting the following error when trying to start Selenium:

      C:\Temp\selenium-server-1.0.3>java -jar selenium-server.jar
      22:02:07.615 INFO - Java: Sun Microsystems Inc. 16.0-b13
      22:02:07.617 INFO - OS: Windows 7 6.1 x86
      22:02:07.625 INFO - v2.0 [a2], with Core v2.0 [a2]
      22:02:07.811 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
      22:02:07.813 INFO - Version Jetty/5.1.x
      22:02:07.815 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
      22:02:07.817 INFO - Started HttpContext[/selenium-server,/selenium-server]
      22:02:07.818 INFO - Started HttpContext[/,/]
      22:02:07.866 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@2bbd86
      22:02:07.867 INFO - Started HttpContext[/wd,/wd]
      22:02:07.870 WARN - Failed to start: [email protected]:4444
      Exception in thread "main" org.openqa.jetty.util.MultiException[java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind]
              at org.openqa.jetty.http.HttpServer.doStart(HttpServer.java:686)
              at org.openqa.jetty.util.Container.start(Container.java:72)
              at org.openqa.selenium.server.SeleniumServer.start(SeleniumServer.java:396)
              at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:234)
              at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:198)
      java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind
              at java.net.PlainSocketImpl.socketBind(Native Method)
              at java.net.PlainSocketImpl.bind(Unknown Source)
              at java.net.ServerSocket.bind(Unknown Source)
              at java.net.ServerSocket.<init>(Unknown Source)
              at org.openqa.jetty.util.ThreadedServer.newServerSocket(ThreadedServer.java:391)
              at org.openqa.jetty.util.ThreadedServer.open(ThreadedServer.java:477)
              at org.openqa.jetty.util.ThreadedServer.start(ThreadedServer.java:503)
              at org.openqa.jetty.http.SocketListener.start(SocketListener.java:204)
              at org.openqa.jetty.http.HttpServer.doStart(HttpServer.java:716)
              at org.openqa.jetty.util.Container.start(Container.java:72)
              at org.openqa.selenium.server.SeleniumServer.start(SeleniumServer.java:396)
              at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:234)
              at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:198)

    Java is installed:

      C:\Temp\selenium-server-1.0.3>java -version
      java version "1.6.0_18"
      Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
      Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing)

    Thanks in advance.
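    The bind on port 4444 is what fails here, which usually means another process (often a previous selenium-server that never exited) already holds that port. A quick hedged check on Windows, plus a workaround using Selenium RC's -port switch (4445 is just an example port), would be:

      REM which PID owns port 4444, if any:
      netstat -ano | findstr :4444
      REM or simply start the server on a different port:
      java -jar selenium-server.jar -port 4445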

  • Linux: create a hosts file from a CSV file with sed, awk, or perl

    - by yael
    I have the following CSV file, which defines the Linux machines that exist in the system and their IPs. My goal is to build a hosts file from it: match the IP address in each CSV record and put it in the first field of the hosts file, then match the Linux machine name and put it in the second field, as in example 1 below. It should be done with sed, awk, or perl, because I need to embed the solution in my bash script. Please advise.

    CSV file:

      machine , VM-LINUX1 , SZ , Phy , 10.213.158.18 , PROXY ,
      VM-LINUX2 , SZ , 10.213.158.19 , OLD HW ,
      VM-LINUX3 , SZ , 10.213.158.20 , ,
      VM-LINUX4 , SZ , Phy , 10.213.158.21 , ,
      VM-LINUX5 , SZ , Phy , OUT , EXT , LAN3 , 10.213.158.22 , INTERNAL ,
      VM-LINUX6 , SZ , Phy , 10.213.158.23 , ,
      server , new HW , VM-LINUX7 , SZ , Phy , 10.213.158.24 , OUT, LAN3 ,
      VM-LINUX8 , SZ , 10.213.158.25 , OLD HW , machine ,
      VM-LINUX9 , SZ , Phy , INT , 10.213.158.26 , LAN2, AN45, ,
      VM-LINUX10 , SZ , Phy , 10.213.158.27 , ,
      VM-LINUX11 , SZ , Phy , LAN5 , 10.213.158.28 ,

    Example 1 (hosts file):

      10.213.158.18 VM-LINUX1
      10.213.158.19 VM-LINUX2
      10.213.158.20 VM-LINUX3
      10.213.158.21 VM-LINUX4
      10.213.158.22 VM-LINUX5
      10.213.158.23 VM-LINUX6
      10.213.158.24 VM-LINUX7
      10.213.158.25 VM-LINUX8
      10.213.158.26 VM-LINUX9
      10.213.158.27 VM-LINUX10
      10.213.158.25 VM-MACHINE8
      10.213.158.26 STAR9
      10.213.158.27 TOP10
      10.213.158.28 SERVER11
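    Since the field positions vary from record to record, one minimal awk sketch (assuming each record carries exactly one dotted-quad IP and one name starting with "VM-"; the filename machines.csv is just a placeholder) is:

      awk -F',' '{
        ip = ""; name = ""
        for (i = 1; i <= NF; i++) {
          gsub(/^[ \t]+|[ \t]+$/, "", $i)                   # trim spaces around the field
          if ($i ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) ip = $i
          if ($i ~ /^VM-/)                                  name = $i
        }
        if (ip != "" && name != "") print ip, name
      }' machines.csv > hosts.new

    Note that the sample output also contains names such as STAR9 and TOP10 that do not start with VM-, so the name test would need to be widened for the real data.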

  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 into runlevel 5. After all services start, when the login screen should appear, gdm just shows a waiting mouse cursor and keeps restarting itself. From /var/log/gdm/:0-greeter.log:

      Gtk-Message: Failed to load module "pk-gtk-module"
      /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
      /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Where should atk_plug_get_type be defined?

    Edit: here is a better description of the error:

      (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
      /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Why are there still references to gtk2? Did preupgrade fail? Attaching the upgrade log... it seems the gdm user was not added, but it is present in the users and groups list.

      May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
      May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
      May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
      May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
      May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
      May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
      May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
      May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
      May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
      May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
      May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
      May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
      May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
      May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
      May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
      May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
      May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
      May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
      May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted

    Any suggestions before I revert the installation to F14?
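    atk_plug_get_type is exported by libatk itself (it was added in newer ATK releases), so the usual cause is an atk bridge built against a newer ATK than the one installed. A hedged way to check whether the installed libatk actually provides the symbol (library path assumes a 32-bit Fedora layout):

      nm -D /usr/lib/libatk-1.0.so.0 | grep atk_plug_get_type
      rpm -q atk at-spi at-spi2-atk      # compare the ATK-related package versions

    If the symbol is missing, the atk package probably did not get upgraded along with the rest of the preupgrade.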

  • Pattern matching gnmap fields with SED

    - by Ovid
    I am testing the regex needed to build field extraction in Splunk for nmap output, and I think I might be close... An example of a full line:

      Host: 10.0.0.1 (host) Ports: 21/open|filtered/tcp//ftp///, 22/open/tcp//ssh//OpenSSH 5.9p1 Debian 5ubuntu1 (protocol 2.0)/, 23/closed/tcp//telnet///, 80/open/tcp//http//Apache httpd 2.2.22 ((Ubuntu))/, 10000/closed/tcp//snet-sensor-mgmt/// OS: Linux 2.6.32 - 3.2 Seq Index: 257 IP ID Seq: All zeros

    I've used underscore "_" as the s-command delimiter because it makes the expression a little easier to read:

      root@host:/# sed -n -e 's_\([0-9]\{1,5\}\/[^/]*\/[^/]*\/\/[^/]*\/\/[^/]*\/.\)_\n\1_pg' filename

    The same regex with the escape characters removed:

      root@host:/# sed -n -e 's_\([0-9]\{1,5\}/[^/]*/[^/]*//[^/]*//[^/]*/.\)_\n\1_pg' filename

    Output:

      ...
      Host: 10.0.0.1 (host) Ports:
      21/open|filtered/tcp//ftp///,
      22/open/tcp//ssh//OpenSSH 2.0p1 Debian 2ubuntu1 (protocol 2.0)/,
      23/closed/tcp//telnet///,
      80/open/tcp//http//Apache httpd 5.4.32 ((Ubuntu))/,
      10000/closed/tcp//snet-sensor-mgmt/// OS: Linux 9.8.76 - 7.3 Seq Index: 257 IPID Seq: All zeros
      ...

    As you can see, the pattern matching appears to be working, although I am unable to:

    1 - match on both possible field terminators (the comma and the whitespace/tab), so the last port field still drags along unwanted text (in this case the OS and TCP timing info), and
    2 - remove the unnecessary data, i.e. print only the matching pattern. It is actually printing the whole line, and if I remove sed's -n flag the rest of the file contents are printed as well.

    I can't seem to find a way to print only the matched regex. Being fairly new to sed and regex, any help or pointers is greatly appreciated!
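    sed's s command always emits the whole pattern space, so "print only the match" is awkward in sed; grep's -o flag is the simpler tool for that. A hedged sketch (GNU grep assumed; the trailing alternation covers both the comma and whitespace terminators, and scan.gnmap is a placeholder filename):

      grep -oE '[0-9]{1,5}/[^/]*/[^/]*//[^/]*//[^/]*/+(,|[[:space:]]|$)' scan.gnmap

    The matched delimiter at the end of each field can be trimmed in a later pipeline stage if it gets in the way of the Splunk extraction.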

  • Formatting an external HDD gets stuck at 70%

    - by mahmood
    My external HDD, a 250 GB WD (powered by USB), seems to have a problem: whenever I try to copy some files, the copy gets stuck. I decided to format it, so I used the Windows tool and performed a (non-quick) format, but at around 70% it got stuck. Then I tried a low-level format with lowlevel; again it got stuck at 70%. I ended up concluding that the HDD has bad sectors. So, is there any tool that marks the bad sectors and bypasses them? It doesn't seem reasonable to throw away 250 GB because of some bad sectors!

    P.S.: I saw a similar topic, but there was no conclusion there either.

    The SMART data is:

      Attribute                                raw value  value  threshold  status
      Read Error Rate                          50         200    51         OK
      Spin-Up Time                             3275       154    21         OK
      Start/Stop Count                         2729       98     0          OK
      Reallocated Sectors Count                0          200    140        OK
      Seek Error Rate                          0          100    51         OK
      Power-On Hours (POH)                     1057       99     0          OK
      Spin Retry Count                         0          100    51         OK
      Recalibration Retries                    0          100    51         OK
      Power Cycle Count                        1385       99     0          OK
      Power-off Retract Count                  425        200    0          OK
      Load/Unload Cycle Count                  12974      196    0          OK
      Temperature                              43         43     0          OK
      Reallocation Event Count                 0          200    0          OK
      Current Pending Sector Count             23         200    0          Degradation
      Uncorrectable Sector Count               0          100    0          OK
      UltraDMA CRC Error Count                 6          200    0          OK
      Write Error Rate/Multi-Zone Error Rate   0          100    51         OK

    It seems the most important line is this one:

      Current Pending Sector Count             23         200    0          Degradation

    Any idea on that?

  • High apache load but zero traffic

    - by Adie
    I have a problem with a new server. I use a CentOS VPS with 1 GB of RAM and run the WordPress CMS. Traffic is under 100 visitors/hour, but Apache puts a high load on the server and eventually hangs it with zero free RAM, at which point I can't even connect through SSH and have to reboot the VPS to get it working again. Here is what the load on Apache looks like:

      Tasks:  66 total,   1 running,  65 sleeping,   0 stopped,   0 zombie
      Cpu(s):  1.6%us, 12.3%sy,  0.0%ni, 48.1%id, 23.0%wa,  4.8%hi, 10.2%si,  0.0%
      Mem:   1018776k total,   116620k used,   902156k free,     1236k buffers
      Swap:  1048568k total,  1013052k used,    35516k free,    26628k cached

      2949 apache  20  0  459m  42m  3732 D  3.0  4.2  0:09.23 httpd
      2959 apache  20  0  460m  29m  3744 D  2.0  3.0  0:02.72 httpd
      2968 apache  20  0  460m  26m  3808 D  2.0  2.6  0:02.27 httpd
      2972 apache  20  0  460m  24m  3784 D  2.0  2.5  0:02.44 httpd
      2986 apache  20  0  460m  29m  3784 R  2.0  2.9  0:02.40 httpd
      2969 apache  20  0  458m  29m  3864 D  1.6  3.0  0:02.63 httpd
      2974 apache  20  0  460m  25m  3820 D  1.6  2.6  0:02.43 httpd
      2990 apache  20  0  460m  23m  3920 D  1.6  2.4  0:02.36 httpd
      2994 apache  20  0  460m  31m  3756 D  1.6  3.2  0:02.62 httpd
      2956 apache  20  0  460m  26m  3740 D  1.3  2.7  0:02.73 httpd
      2957 apache  20  0  465m  22m  3644 D  1.3  2.3  0:02.80 httpd
      2967 apache  20  0  458m  24m  3764 D  1.3  2.5  0:02.60 httpd
      2970 apache  20  0  463m  25m  3764 D  1.3  2.6  0:03.07 httpd
      2971 apache  20  0  451m  22m  3792 D  1.3  2.3  0:02.47 httpd
      2973 apache  20  0  458m  25m  3768 D  1.3  2.6  0:02.52 httpd
      2987 apache  20  0  465m  20m  3772 D  1.3  2.1  0:03.02 httpd

    Sometimes the server stays up for more than 5-10 hours, but after that the problems start.
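    With roughly 460 MB of virtual memory per httpd child and swap nearly exhausted, the usual suspect on a 1 GB VPS is Apache being allowed to fork more children than memory can hold. A hedged prefork sketch for httpd.conf (illustrative numbers only, not tuned for this site, and assuming the prefork MPM that CentOS ships by default):

      <IfModule prefork.c>
          StartServers          2
          MinSpareServers       2
          MaxSpareServers       5
          MaxClients           15
          MaxRequestsPerChild 500
      </IfModule>

    Lowering KeepAliveTimeout (or disabling KeepAlive) also keeps idle children from pinning memory.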

  • Unable to compile netmap on Fedora 32 bit

    - by John Elf
    This is the error I get every time I try to build netmap. Can someone let me know how to install it for e1000e or ixgbe? I have the kernel headers and source installed.

      [root@localhost e1000]# make KSRC=/usr/src/kernels/2.6.35.6-45.fc14.i686/
      make -C /usr/src/kernels/2.6.35.6-45.fc14.i686/ M=/media/sf_Shared/netmap-linux/net/e1000 modules
      make[1]: Entering directory `/usr/src/kernels/2.6.35.6-45.fc14.i686'
        CC [M]  /media/sf_Shared/netmap-linux/net/e1000/e1000_main.o
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_setup_tx_resources’:
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1485:2: error: implicit declaration of function ‘vzalloc’
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1485:20: warning: assignment makes pointer from integer without a cast
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_setup_rx_resources’:
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1680:20: warning: assignment makes pointer from integer without a cast
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_tx_csum’:
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:2780:2: error: implicit declaration of function ‘skb_checksum_start_offset’
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_rx_checksum’:
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:3689:2: error: implicit declaration of function ‘skb_checksum_none_assert’
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_restore_vlan’:
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:4617:23: error: ‘VLAN_N_VID’ undeclared (first use in this function)
      /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:4617:23: note: each undeclared identifier is reported only once for each function it appears in
      make[2]: *** [/media/sf_Shared/netmap-linux/net/e1000/e1000_main.o] Error 1
      make[1]: *** [_module_/media/sf_Shared/netmap-linux/net/e1000] Error 2
      make[1]: Leaving directory `/usr/src/kernels/2.6.35.6-45.fc14.i686'
      make: *** [all] Error 2
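    All of the missing symbols (vzalloc, skb_checksum_start_offset, skb_checksum_none_assert, VLAN_N_VID) were added to the kernel after 2.6.35, so this patched e1000 driver expects a newer kernel than the Fedora 14 one it is being built against. Short of building against a newer kernel, a compat shim is the usual workaround; here is a hedged, partial sketch for vzalloc and VLAN_N_VID only (the two skb_* helpers would need similar backports, and the header itself is hypothetical):

      /* compat.h - assumed shim for building against 2.6.35; include it from e1000_main.c */
      #include <linux/vmalloc.h>
      #include <linux/string.h>

      #ifndef VLAN_N_VID
      #define VLAN_N_VID 4096        /* value used by later kernels */
      #endif

      /* vzalloc() does not exist in 2.6.35: emulate it as vmalloc + memset */
      static inline void *compat_vzalloc(unsigned long size)
      {
              void *p = vmalloc(size);
              if (p)
                      memset(p, 0, size);
              return p;
      }
      #define vzalloc(size) compat_vzalloc(size)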

  • (ZyWall USG 300) NAT bypassed when accessing in-house server from LAN via domain name

    - by mschr
    My situation is like this: I host a number of websites from within our joint network solution. The network basically has three categories: the known public clients (registered via MAC, given a static DHCP lease), the anonymous LAN connections (given a lease from a specific DHCP range), and the switches, unix hosts, and firewall. Now consider the following hosts, which are the ones of interest:

      111.111.111.111  (ZyWall USG 300 WAN)
      192.168.1.1      (ZyWall USG 300 LAN) - load balances, monitors bandwidth, and handles NAT
      192.168.1.2      (Linux www) - serves mydomain1.tld and mydomain2.tld
      192.168.123.123  (random LAN client) - accesses mydomain1.tld from the LAN
      23.234.12.253    (random external client) - accesses mydomain1.tld via the WAN

    DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the HTTP side with VirtualHost configurations, setting up the document roots per ServerName - that part is not so interesting though. A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT).

    Our problem is this: when accessing http://mydomain1.tld from outside the joint network (the 23.234.12.253 example host), everything is fine; the ZyWall receives the request on port 80 and maps it to the Linux host's httpd. However, when going through the NAT from the LAN side (in-house, the 192.168.123.123 example host), the request gets filtered by the ZyWall's port 80 firewall. I only know this because port 443 is open for the administration interface, and https://mydomain1.tld prompts for the ZyWall login. So my conclusion is that LAN clients accessing 111.111.111.111 are in fact routed to 192.168.1.1 while bypassing the NAT table. I need to know how to set up NAT / policy routes so that LAN -> WAN -> LAN traffic gets the proper network translations, instead of doing the 'quick nameserver lookup' or whatever this might be.
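    What is missing here is usually called NAT loopback (hairpin NAT); if the ZyWall cannot be configured to do it, the common alternative is split DNS, so LAN clients resolve the site names straight to the internal server and never touch the WAN address. A hedged dnsmasq sketch (it assumes there is, or could be, an internal resolver that the LAN clients use; the hostnames and the internal address are the ones from the question):

      # /etc/dnsmasq.conf on an internal DNS forwarder
      address=/mydomain1.tld/192.168.1.2
      address=/mydomain2.tld/192.168.1.2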

  • What is wrong with my Watcher (incron-like) daemon?

    - by eric01
    I have installed Watcher this way: both watcher.py and watcher.ini are located in /etc. I also installed pyinotify, and it does work when I run python -m pyinotify -v /var/www. However, I want to use the daemon, and when I start watcher.py I get weird lines in my watcher.log (see below). I have also included my watcher.ini file. Note: I have the latest version of Python. The watcher.py can be found here.

    What is wrong with what I did? Also, do I really need pyinotify? Thanks a lot for your help.

    watcher.ini:

      [DEFAULT]
      logfile=/var/log/watcher.log
      pidfile=/var/run/watcher.pid

      [job1]
      watch=/var/www
      events=create,delete,modify
      recursive=true
      command=mkdir /home/mockfolder ## just using this as a test

    watcher.log:

      2012-09-22 04:28:23.822029 Daemon started
      2012-09-22 04:28:23.822596 job1: /var/www
      Traceback (most recent call last):
        File "/etc/watcher.py", line 359, in <module>
          daemon.start()
        File "/etc/watcher.py", line 124, in start
          self.run()
        File "/etc/watcher.py", line 256, in run
          autoadd = self.config.getboolean(section,'autoadd')
        File "/usr/lib/python2.7/ConfigParser.py", line 368, in getboolean
          v = self.get(section, option)
        File "/usr/lib/python2.7/ConfigParser.py", line 618, in get
          raise NoOptionError(option, section)
      ConfigParser.NoOptionError: No option 'autoadd' in section: 'job1'
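    The traceback itself names the problem: the script calls self.config.getboolean(section, 'autoadd'), but the ini file never defines an autoadd option, so ConfigParser raises NoOptionError. A hedged fix is to add the option to the job section; the value is an assumption, since the traceback only tells us it must be a boolean:

      [job1]
      watch=/var/www
      events=create,delete,modify
      recursive=true
      ; assumed meaning: whether newly created subdirectories are watched automatically
      autoadd=true
      command=mkdir /home/mockfolder

    As for pyinotify: yes, it has to stay installed, since watcher.py uses it to receive the filesystem events.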

  • Process does ICMP port scan on my OSX box and I am afraid my Mac got a virus

    - by Jamgold
    I noticed that my 10.6.6 box has some process sending out ICMP messages to "random" hosts, which concerns me a lot. When doing a tcpdump icmp I see a lot of the following:

      15:41:14.738328 IP macpro > bzq-109-66-184-49.red.bezeqint.net: ICMP macpro udp port websm unreachable, length 36
      15:41:15.110381 IP macpro > 99-110-211-191.lightspeed.sntcca.sbcglobal.net: ICMP macpro udp port 54045 unreachable, length 36
      15:41:23.458831 IP macpro > 188.122.242.115: ICMP macpro udp port websm unreachable, length 36
      15:41:23.638731 IP macpro > 61.85-200-21.bkkb.no: ICMP macpro udp port websm unreachable, length 36
      15:41:27.329981 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36
      15:41:29.349586 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36

    I got suspicious when my router notified me about a lot of ICMP messages that don't get a response:

      [INFO] Mon Jan 10 16:31:47 2011 Blocked outgoing ICMP packet (ICMP type 3) from 192.168.1.189 to 212.25.57.90

    Does anyone know how to trace which process (or, worse, kernel module) might be responsible for this? I rebooted and logged in with a virgin user account and tcpdump showed the same results. Any dtrace magic welcome.
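    Those "udp port ... unreachable" lines are ICMP type 3 replies that the kernel generates when a UDP packet arrives for a closed port, so the interesting question is what inbound traffic is provoking them rather than which local process is "scanning". A hedged capture to see the triggering packets (the interface name en0 is an assumption; 192.168.1.189 is the Mac's LAN address taken from the router log):

      sudo tcpdump -n -i en0 'icmp or (udp and dst host 192.168.1.189)'

    If the inbound UDP turns out to be unsolicited Internet noise (P2P leftovers, scans), the Mac itself is behaving normally.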

  • Can any Postfix guru assist me in determining how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned, as I run a small server hosting a number of websites and manage the email for a few dozen people. Just recently I've had a couple of notifications from SpamCop alerting me that spam has been sent from my server, and when I look over the logs from time to time I can indeed see many repeated attempts at mail being sent from my server. Most of the time it gets knocked back by the destination servers, but sometimes it gets through. Unfortunately I'm no Linux or Postfix expert - I can get by - but I had thought I had my machine locked down quite securely. I don't allow relaying, and when I check the online DNS/MX tools they tend to report my server as OK, so I'm not sure where to take it now and am hoping someone might be able to throw me a few pointers. I get lots of entries like this in my MAIL.INFO log:

      Jan  2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active)
      Jan  2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active)

    and

      Jan  7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
      Jan  7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)

    Then there tend to be connection timeouts. So from what I can see, even though relaying is disabled, something is getting by and trying to send. Any help will be greatly appreciated, and I can supply further logging/config info if needed. Thanks.
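    The from=<> entry is a bounce, which often points at backscatter or at mail injected locally (for example, a compromised web script handing messages to the sendmail/pickup interface) rather than an open relay. A hedged first pass at narrowing it down:

      postconf mynetworks                         # anything beyond 127.0.0.0/8 here can relay
      postconf smtpd_recipient_restrictions       # should end with reject_unauth_destination
      mailq | tail -n 3                           # how much mail is queued right now
      grep -c 'postfix/pickup' /var/log/mail.log  # many pickup lines hint at local injection (PHP mail() etc.)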

  • Does MySQL log successful or attempted queries?

    - by Nathan Long
    I'm trying to track down a hit-or-miss bug in a web application. Sometimes a request completes just fine; sometimes it hangs and never finishes. I can see that Apache now has several requests listed on the server-status page as "sending reply", and that doesn't change. I'm testing on localhost, so there shouldn't ever be more than one. Out of curiosity, I set MySQL to log all queries and I'm tail -fing the log file. When things go OK, I see a pattern like this:

      20 Connect  root@localhost on dbname
      20 Query    (some query #1)
      20 Query    (some query #2)
      (etc)
      20 Quit
      21 Connect  (etc)

    When it hangs, I see a pattern like this:

      22 Connect  root@localhost on dbname
      22 Query    (some query #1)
      // nothing happens, so I try the post again
      23 Connect  root@localhost on dbname
      23 Query    (some query #1)
      // nothing happens; try again
      24 Connect  (etc)

    Here's my question: is MySQL logging attempted queries, or successful queries? In other words, if the last line I see is query #1, does that imply that query #1 or query #2 is hanging? My guess is that the one I don't see is the problem, because the last one I see looks fine, but maybe the one I don't see is too screwed up for MySQL to process. Thoughts?
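    Rather than inferring the stuck statement from the general log, you can inspect the live connection while a request is hanging; a hedged check from a second mysql session:

      SHOW FULL PROCESSLIST;
      -- the hung connection shows a large Time value, the statement in the Info
      -- column, and a State (e.g. a lock wait) that hints at why it is stuck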

  • Why does redis report a limit of 1024 files even after updating limits.conf?

    - by esilver
    I see this error at the top of my redis.log file:

      Current maximum open files is 1024. maxclients has been reduced to 4064 to compensate for low ulimit.

    I have followed these steps to the letter (and rebooted). Moreover, I see this when I run ulimit:

      ubuntu@ip-XX-XXX-XXX-XXX:~$ ulimit -n
      65535

    Is this error spurious? If not, what other steps do I need to perform? I am running redis 2.8.13 (tip of the tree) on Ubuntu LTS 14.04.1 (again, tip of the tree). Here is the user info:

      ubuntu@ip-XX-XXX-XXX-XXX:~$ ps aux | grep redis
      root    1027  0.0  0.0   66328    2112 ?  Ss  20:30  0:00 sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf
      ubuntu  1107 19.2 48.8 7629152 7531552 ?  Sl  20:30  2:21 /usr/local/bin/redis-server *:6379

    The server is therefore running as ubuntu. Here is my limits.conf file without comments:

      ubuntu@ip-XX-XXX-XXX-XXX:~$ cat /etc/security/limits.conf | sed '/^#/d;/^$/d'
      ubuntu soft nofile 65535
      ubuntu hard nofile 65535
      root soft nofile 65535
      root hard nofile 65535

    And here is the output of sysctl fs.file-max:

      ubuntu@ip-XX-XXX-XXX-XXX:~$ sysctl -a | grep fs.file-max
      sysctl: permission denied on key 'fs.protected_hardlinks'
      sysctl: permission denied on key 'fs.protected_symlinks'
      fs.file-max = 1528687
      sysctl: permission denied on key 'kernel.cad_pid'
      sysctl: permission denied on key 'kernel.usermodehelper.bset'
      sysctl: permission denied on key 'kernel.usermodehelper.inheritable'
      sysctl: permission denied on key 'net.ipv4.tcp_fastopen_key'

    As sudo:

      ubuntu@ip-10-102-154-226:~$ sudo sysctl -a | grep fs.file-max
      fs.file-max = 1528687

    Also, I see this error at the top of the redis.log file; I'm not sure if it's related. It makes sense that the ubuntu user isn't allowed to raise the maximum open files, but given the high ulimits I have tried to set, it shouldn't need to:

      [1050] 23 Aug 21:00:43.572 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
      [1050] 23 Aug 21:00:43.572 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
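    Because redis is started from an init script via sudo rather than from a login shell, pam_limits (which is what applies /etc/security/limits.conf) may never run for that process, so the daemon inherits the default descriptor limit. A hedged workaround is to raise the limit inside the init script itself, right before the daemon is launched (script path and layout are assumptions based on the ps output above):

      # e.g. in /etc/init.d/redis-server, just before the start command
      ulimit -n 65536
      sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf

    Raising the hard limit needs the script to still be running as root at that point, which is normally the case for init scripts.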

  • checksecurity / setuid changes, is this a bug or did somebody break in?

    - by Fabian Zeindl
    I received a mail from checksecurity on my Ubuntu 12.04 server with the following content:

      --- setuid.today	2012-06-03 06:48:09.892436281 +0200
      +++ /var/log/setuid/setuid.new.tmp	2012-06-17 06:47:51.376597730 +0200
      @@ -30,2 +30,2 @@
      - 131904 4755 2 root root 71280 Wed May 16 07:23:08.0000000000 2012 ./usr/bin/sudo
      - 131904 4755 2 root root 71280 Wed May 16 07:23:08.0000000000 2012 ./usr/bin/sudoedit
      + 143967 4755 2 root root 71288 Fri Jun  1 05:53:44.0000000000 2012 ./usr/bin/sudo
      + 143967 4755 2 root root 71288 Fri Jun  1 05:53:44.0000000000 2012 ./usr/bin/sudoedit
      @@ -42 +42 @@
      - 130507 666 1 root root 0 Sat Jun  2 18:04:57.0752979385 2012 ./var/spool/postfix/dev/urandom
      + 130507 666 1 root root 0 Mon Jun 11 08:47:16.0919802556 2012 ./var/spool/postfix/dev/urandom

    At first I was worried; then I realized that the change was actually two weeks ago, and I think there was a sudo update back then. Since checksecurity runs from /etc/cron.daily, I wondered why I only got that email now. I looked into /var/log/setuid/ and found the following files:

      total 32
      -rw-r----- 1 root adm   816 Jun 17 06:47 setuid.changes
      -rw-r----- 1 root adm   228 Jun  3 06:48 setuid.changes.1.gz
      -rw-r----- 1 root adm   328 May 27 06:47 setuid.changes.2.gz
      -rw-r----- 1 root root 1248 May 20 06:47 setuid.changes.3.gz
      -rw-r----- 1 root adm  4473 Jun 17 06:47 setuid.today
      -rw-r----- 1 root adm  4473 Jun  3 06:48 setuid.yesterday

    The obvious thing that confuses me is that the file setuid.yesterday is not from yesterday (= Jun 16). Is this a bug?
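    To confirm that the inode/size change on /usr/bin/sudo lines up with a legitimate package upgrade rather than tampering, it helps to cross-check the package manager's own records; a hedged check (debsums is assumed to be available or installable):

      zgrep -h "upgrade sudo" /var/log/dpkg.log*    # was sudo upgraded around Jun 1?
      debsums sudo                                  # verify the installed files against the package md5sums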

  • Under FreeBSD, can a VLAN interface have a smaller MTU than the primary interface?

    - by larsks
    I have a system with two physical interfaces combined into a LACP aggregation group. That LACP channel has two VLANs, one untagged (the "native VLAN") and one using VLAN tagging. This gives us:

      lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
              ether 00:25:90:1d:fe:8e
              inet 10.243.24.23 netmask 0xffffff00 broadcast 10.243.24.255
              media: Ethernet autoselect
              status: active
              laggproto lacp
              laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
              laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
      vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=3<RXCSUM,TXCSUM>
              ether 00:25:90:1d:fe:8e
              inet 10.243.16.23 netmask 0xffffff80 broadcast 10.243.16.127
              media: Ethernet autoselect
              status: active
              vlan: 610 parent interface: lagg0

    Is it possible to set a 9K MTU on lagg0 while preserving the 1500-byte MTU on vlan0? Normally I would simply try this out, but this is actually a vendor-supported platform and I am loath to make changes "behind the back" of their administration interface. This system is roughly FreeBSD 7.3.
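    For reference, on a plain FreeBSD box (not the vendor appliance) the experiment would look roughly like this - a hedged sketch that assumes the em(4) NICs, the lagg driver, and the switch all accept jumbo frames, and that this release allows a per-vlan MTU lower than the parent's:

      ifconfig lagg0 mtu 9000      # may also need to be applied to em0/em1, depending on the release
      ifconfig vlan0 mtu 1500      # keep the tagged VLAN at the standard MTU
      ifconfig lagg0 vlan0         # verify what each interface now reports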

  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure logstash to send email alerts and to log output in elasticsearch / kibana. I have the logs successfully syncing via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest:

      Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter {
        if [program] == "nginx-access" {
          grok {
            match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
          }
        }
      }
      output {
        stdout { }
        elasticsearch {
          embedded => false
          host => "

    Here is my logstash config file:

      input {
        syslog {
          type => syslog
          port => 5544
        }
      }
      filter {
        if [program] == "nginx-access" {
          grok {
            match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
          }
        }
      }
      output {
        stdout { }
        elasticsearch {
          embedded => false
          host => "localhost"
          cluster => "cluster01"
        }
        email {
          from => "[email protected]"
          match => [ "Error 504 Gateway Timeout", "status,504",
                     "Error 404 Not Found", "status,404" ]
          subject => "%{matchName}"
          to => "[email protected]"
          via => "smtp"
          body => "Here is the event line that occured: %{@message}"
          htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>"
        }
      }

    I've checked line 23, which is referenced in the error, and it looks fine... I've tried taking out the filter and everything works, without changing that line. Please help.
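    One thing worth checking before anything else: in both copies of the grok line, the closing quote after %{QS:http_user_agent} is a typographic ” rather than a plain ", which leaves the string unterminated and can produce exactly this kind of "Expected one of #, {, ,, ]" parse error. A hedged corrected filter block (same pattern, straight quotes only):

      filter {
        if [program] == "nginx-access" {
          grok {
            match => [ "message", "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]
          }
        }
      }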

  • LAMP Setup, PHP's session_start permission denied

    - by Andrew
    I'm trying to set up a development environment for a legacy system that runs CentOS 4.8, PHP 4.3.9, and MySQL 4.1.22. I'm matching OS and software versions to keep the development server as close to the production server as possible. When I fire up phpMyAdmin's setup script (version 2.11.10.1, of course) the installation errors out, and I see these errors in my error log:

      [client 172.18.141.74] PHP Warning:  session_start(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in /home/www/intranet/phpmyadmin/libraries/session.inc.php on line 87
      [client 172.18.141.74] PHP Warning:  Unknown(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in Unknown on line 0
      [client 172.18.141.74] PHP Warning:  Unknown(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0

    I've done some searching on Server Fault and on teh Googles, and I see that a common reason for this error is that session.save_path isn't writable by the www user. I also found where this path is set in /etc/php.ini. My session.save_path is set to:

      session.save_path = /var/lib/php/session

    I've since changed the owner and the group of /var/lib/php/session, but I still have the same error. Here's the result of ls -la for /var/lib/php:

      [root@localhost php]# ls -la
      total 24
      drwxrwxr-x   3 www  www  4096 Oct 23 20:21 .
      drwxr-xr-x  17 root root 4096 Oct 23 20:31 ..
      drwxrwx---   2 www  www  4096 Jun  1  2009 session

    ...but I'm still getting the same error. Is there another possibility for why I'm getting this error?
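    Two things are worth ruling out here: whether Apache's PHP really runs as the www user (a mismatch is the classic cause of EACCES on the session directory) and, on CentOS, whether SELinux is denying the write despite correct Unix permissions. A hedged check sequence (paths assume the stock CentOS httpd layout):

      ps aux | grep httpd | head                          # which user do the child processes run as?
      grep -E '^(User|Group)' /etc/httpd/conf/httpd.conf
      ls -ldZ /var/lib/php/session                        # -Z shows the SELinux context, if enabled
      getenforce                                          # "Enforcing" means SELinux may be the blocker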

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files: I will reset my MacBook and some of the files will be deleted, and then, if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help.

    Next, I attempted to back up my MacBook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the Trash problem as well, and whether or not these problems are related.

    EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

      6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
      6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
      6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
      6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
      6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
      6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
      6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
      6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
      6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
      6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
      6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
      6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
      6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19

  • How can the route between two private IPs go via public IPs?

    - by Gilles
    I'm trying to understand what this output from traceroute means. I changed the IP addresses for privacy but retained the public/private IP range distinction.

      traceroute.db -e -n 10.1.1.9
      traceroute to (10.1.1.9), 30 hops max, 60 byte packets
       1  10.0.0.1  0.596 ms  0.588 ms  0.577 ms
       2  10.0.0.2  1.032 ms  1.029 ms  1.084 ms
       3  10.0.0.3  3.360 ms  3.355 ms  3.338 ms
       4  23.0.0.4  3.974 ms  4.592 ms  4.584 ms
       5  23.0.0.5  13.442 ms  13.445 ms  13.434 ms
       6  45.0.0.6  13.195 ms  12.924 ms  12.913 ms
       7  67.0.0.7  52.088 ms  51.683 ms  52.040 ms
       8  10.1.1.8  46.878 ms  44.575 ms  44.815 ms
       9  10.1.1.9  45.932 ms  45.603 ms  45.593 ms

    The first 10.0.* range is inside my organisation. The last 10.1.* range is another site of my organisation. The intermediate addresses belong to various ISPs. I expect that there is some kind of VPN between the two sites, but I don't know much about our network topology. What I don't understand is how the route can go from a private address through public addresses back into private addresses. Searching led me to "Public IPs on MPLS Traceroute", which gives a possible explanation: MPLS. Is MPLS the only possible or most likely explanation? Otherwise, what does this tell me about our network infrastructure? Bonus question, for my edification: in this scenario, who is generating the ICMP TTL exceeded packets, and, if relevant, who is mangling their source and destination addresses?

  • Remote access to phpMyAdmin from a computer on the same LAN

    - by Charles
    OK... I solved it. The cause was that I had not configured httpd.conf to make the CentOS box listen on ports 80 and 8080:

      Listen 80
      Listen 8080

    I set up phpMyAdmin on my CentOS 6.4 recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer, which is in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help, please?

    config.inc.php:

      $i++;
      /* Authentication type */
      $cfg['Servers'][$i]['auth_type'] = 'http';
      /* Server parameters */
      $cfg['Servers'][$i]['host'] = 'localhost';
      $cfg['Servers'][$i]['connect_type'] = 'tcp';
      $cfg['Servers'][$i]['compress'] = false;
      /* Select mysql if your server does not have mysqli */
      $cfg['Servers'][$i]['extension'] = 'mysql';
      $cfg['Servers'][$i]['AllowNoPassword'] = false;

    phpmyadmin.conf:

      <Directory /var/www/html/phpmyadmin/>
        order allow,deny
        allow from all
      </Directory>

    Furthermore, I can access web pages stored on the CentOS box from my other computer without problems. After using wireshark and tcpdump, I found that the server (the CentOS box) keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is the CentOS box):

      23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
      23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0

    I have already disabled the iptables service on the CentOS box.

  • File permission woes on an Ubuntu ec2 instance

    - by Pardoner
    I've set up an Amazon EC2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups:

      adm:x:4:me,ubuntu
      sudo:x:27:me
      www-data:x:33:me,www-data
      ssh:x:108:me
      admin:x:111:me
      ubuntu:x:1000:www-data,me
      me:x:1001:me

    But when I cd /var/www I can't do simple commands without using sudo. So I ran chown -R www-data:www-data /var/www to ensure that I'm in the owning group, but I still have to type sudo for everything. If I sudo su www-data it works fine. Since I'm in the www-data group, shouldn't I have the same privileges as www-data?

    One strange thing I'm noticing is that when I ls -l it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory not to be part of a group?

      drwxr-xr-x  4 www-data 4.0K Oct 24 16:39 .
      drwxr-xr-x 14 root     4.0K Oct 10 16:58 ..
      drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com
      drwxrwxr-x  2 www-data 4.0K Oct  4 00:29 mywebsite.com
      drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com

    Edit: It appears I had an alias messing with my ls command. By calling \ls -l I can see that all my files are in the correct group.
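    Newly added group memberships only apply to sessions started after the change, so an SSH session that was already open keeps its old group list. A quick hedged check (the username "me" is the one from the question):

      id                 # groups of the current session
      id me              # groups as recorded in /etc/group
      newgrp www-data    # or simply log out and back in to pick up the new group

    If the two id outputs disagree, re-logging in is all that is needed; the group-write bits on those drwxrwxr-x directories then behave as expected.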

  • Some of my keys are automatically being pressed along with other keys

    - by Santosh
    History: the last time my computer shut down, it was because of a power failure. Now some keys are automatically pressed along with others when I type. The last thing I did to the keyboard settings was adding a keyboard layout (on Ubuntu).

    What is happening:

      Whenever I press c, xc is written
      s gives me sd
      d gives me sd
      e gives me we
      2 gives me 23, so when I want @ it gives me @#
      3 gives me 23
      Pressing Caps Lock gives me F3 and vice versa

    All other keys either work fine or I don't use them. I have two operating systems, Ubuntu and Windows. I use Windows very little and found this problem on Ubuntu, but as soon as I logged in to Windows (to check) I found that Windows has the same problem.

    Effects on my life: this starts from login time, so I even have problems typing my password. Whenever I try to save a webpage, it is bookmarked automatically. Whenever I copy, it is cut automatically. I have to spend more than half of my time correcting what I have typed.

    Note: Typing thisd quwesdtion wasd rweally a big pain to mwe.

  • FreeBSD: problem with Postfix after updating LDAP

    - by Olexandr
    I installed openldap-server on the server; an openldap client had already been installed on this machine. The installed openldap-client (2.4.16) was older than the new openldap-server (2.4.21), so the client got updated as well. The OpenLDAP client is used by Postfix on this server, and after all the updates Postfix can't start anymore. The error on postfix stop|start is:

      /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"

    The library directory contains libldap-2.4.so.7, but libldap-2.4.so.6 has been removed from the server. When I try to deinstall the current version of openldap-client, the system writes:

      ===> Deinstalling for net/openldap24-client

    O.K., but when I run "make install" the system writes:

      ===> Installing for openldap-sasl-client-2.4.23
      ===> openldap-sasl-client-2.4.23 depends on shared library: sasl2.2 - found
      ===> Generating temporary packing list
      ===> Checking if net/openldap24-client already installed
      ===> An older version of net/openldap24-client is already installed (openldap-client-2.4.21)
            You may wish to ``make deinstall'' and install this port again by ``make reinstall'' to upgrade it properly.
            If you really wish to overwrite the old port of net/openldap24-client without deleting it first, set the variable "FORCE_PKG_REGISTER" in your environment or the "make install" command line.
      *** Error code 1
      Stop in /usr/ports/net/openldap24-client.
      *** Error code 1
      Stop in /usr/ports/net/openldap24-client.

    Updating the ports doesn't help, and Postfix still reports:

      /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"
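    The underlying issue is that the OpenLDAP client library's soname changed from libldap-2.4.so.6 to libldap-2.4.so.7, so anything linked against the old library (Postfix here) has to be rebuilt once the old client port has been properly replaced. A hedged sequence using the ports tree (portmaster is assumed to be installed; plain make works too):

      # replace the half-installed client cleanly, as the port message suggests
      cd /usr/ports/net/openldap24-client && make deinstall && make reinstall clean
      # then rebuild what links against libldap, at minimum postfix
      portmaster -r openldap                 # rebuild ports that depend on the installed openldap client
      # or just: cd /usr/ports/mail/postfix && make deinstall reinstall clean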
