Search Results

Search found 1683 results on 68 pages for '06'.


  • Unable to sniff traffic despite the network interface being in monitor or promiscuous mode

    - by user65126
    I'm trying to sniff my network's wireless traffic but am having issues. I can put the card into monitor mode, but I see no traffic other than broadcasts, multicasts, and probe/beacon frames. I have two network interfaces on this laptop: one is associated normally with 'linksys' and the other is in monitor mode, on the right channel. I'm not associated with the access point on the monitoring interface because, as I understand it, that isn't necessary in monitor mode (as opposed to promiscuous mode). When I ping the router's IP, that traffic does not show up in Wireshark. Here are my ifconfig settings:

      daniel@seasonBlack:~$ ifconfig
      eth0      Link encap:Ethernet  HWaddr 00:1f:29:9e:b2:89
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)  Interrupt:16
      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:112 errors:0 dropped:0 overruns:0 frame:0
                TX packets:112 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:8518 (8.5 KB)  TX bytes:8518 (8.5 KB)
      wlan0     Link encap:Ethernet  HWaddr 00:21:00:34:f7:f4
                inet addr:192.168.1.116  Bcast:192.168.1.255  Mask:255.255.255.0
                inet6 addr: fe80::221:ff:fe34:f7f4/64 Scope:Link
                UP BROADCAST RUNNING  MTU:1500  Metric:1
                RX packets:9758 errors:0 dropped:0 overruns:0 frame:0
                TX packets:4869 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:3291516 (3.2 MB)  TX bytes:677386 (677.3 KB)
      wlan1     Link encap:UNSPEC  HWaddr 00-02-72-7B-92-53-33-34-00-00-00-00-00-00-00-00
                UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:1500  Metric:1
                RX packets:112754 errors:0 dropped:0 overruns:0 frame:0
                TX packets:101 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:18569124 (18.5 MB)  TX bytes:12874 (12.8 KB)
      wmaster0  Link encap:UNSPEC  HWaddr 00-21-00-34-F7-F4-00-00-00-00-00-00-00-00-00-00
                UP RUNNING  MTU:0  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
      wmaster1  Link encap:UNSPEC  HWaddr 00-02-72-7B-92-53-00-00-00-00-00-00-00-00-00-00
                UP RUNNING  MTU:0  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Here are my iwconfig settings:

      daniel@seasonBlack:~$ iwconfig
      lo        no wireless extensions.
      eth0      no wireless extensions.
      wmaster0  no wireless extensions.
      wlan0     IEEE 802.11bg  ESSID:"linksys"
                Mode:Managed  Frequency:2.437 GHz  Access Point: 00:18:F8:D6:17:34
                Bit Rate=54 Mb/s  Tx-Power=27 dBm
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:off
                Link Quality=68/70  Signal level=-42 dBm  Noise level=-69 dBm
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:0  Invalid misc:0  Missed beacon:0
      wmaster1  no wireless extensions.
      wlan1     IEEE 802.11bg  Mode:Monitor  Frequency:2.437 GHz  Tx-Power=27 dBm
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:off
                Link Quality:0  Signal level:0  Noise level:0
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    Here's how I know I'm on the right channel:

      daniel@seasonBlack:~$ iwlist channel
      lo        no frequency information.
      eth0      no frequency information.
      wmaster0  no frequency information.
      wlan0     11 channels in total; available frequencies :
                Channel 01 : 2.412 GHz   Channel 02 : 2.417 GHz   Channel 03 : 2.422 GHz
                Channel 04 : 2.427 GHz   Channel 05 : 2.432 GHz   Channel 06 : 2.437 GHz
                Channel 07 : 2.442 GHz   Channel 08 : 2.447 GHz   Channel 09 : 2.452 GHz
                Channel 10 : 2.457 GHz   Channel 11 : 2.462 GHz
                Current Frequency=2.437 GHz (Channel 6)
      wmaster1  no frequency information.
      wlan1     11 channels in total; available frequencies :
                Channel 01 : 2.412 GHz   Channel 02 : 2.417 GHz   Channel 03 : 2.422 GHz
                Channel 04 : 2.427 GHz   Channel 05 : 2.432 GHz   Channel 06 : 2.437 GHz
                Channel 07 : 2.442 GHz   Channel 08 : 2.447 GHz   Channel 09 : 2.452 GHz
                Channel 10 : 2.457 GHz   Channel 11 : 2.462 GHz
                Current Frequency=2.437 GHz (Channel 6)
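    One thing worth ruling out before blaming the card: WPA/WPA2 traffic captured in monitor mode shows up as encrypted 802.11 data frames rather than as ICMP, so a ping test can look like "no traffic" in Wireshark even when capture is working. A sketch of how the capture path itself could be verified (interface name and channel taken from the output above; the WPA point is an assumption about a typical 'linksys' setup):

      # Make sure the monitor interface sits on the AP's channel (6 here)
      sudo iwconfig wlan1 channel 6

      # Ask for raw 802.11 data frames specifically; if these appear while
      # pinging from wlan0, the card is capturing fine and the issue is
      # decryption or display filters on the Wireshark side
      sudo tcpdump -i wlan1 -e -n -c 20 'type data'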


  • Facing issues in setting up a VPN connection (IKEv1) using an iPhone (default Cisco VPN client) and a Strongswan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues in setting up a VPN connection (IKEv1) between an iPhone (default Cisco VPN client) and a Strongswan 4.5.0 server. The Strongswan server is running on Ubuntu Linux, which is connected to a wifi hotspot. This is the guide that was used. I generated the CA, server, and client certificates, with one difference: while generating the server certificate, instead of CN=vpn.strongswan.org as in the guide, I changed the CN to CN=192.168.43.212. Once the certificates were generated, the files clientCert.p12 and caCert.pem were sent to the phone via mail and installed on it. After installation I noticed that the certificates are shown as trusted. Below are the IP addresses assigned to the various interfaces:

      Linux server wlan0 interface (where the server is running): 192.168.43.212
      iPhone eth0 interface: 192.168.43.72 (the iPhone is attached to the same wifi hotspot)

    Here is a snapshot of the client configuration:

      Description      Strong swan
      Server           192.168.43.212
      Account          ipsecvpn
      Password         *****
      Use certificate  ON
      Certificate      client

    The above username and password are in sync with the ipsec.secrets file. I am using the following ipsec.conf configuration:

      # basic configuration
      config setup
          plutodebug=all
          # crlcheckinterval=600
          # strictcrlpolicy=yes
          # cachecrls=yes
          nat_traversal=yes
          # charonstart=yes
          plutostart=yes

      # Add connections here.
      # Sample VPN connections
      conn ios1
          keyexchange=ikev1
          authby=xauthrsasig
          xauth=server
          left=%defaultroute
          leftsubnet=0.0.0.0/0
          leftfirewall=yes
          leftcert=serverCert.pem
          right=192.168.43.72
          rightsubnet=10.0.0.0/24
          rightsourceip=10.0.0.2
          rightcert=clientCert.pem
          pfs=no
          auto=add

    With the above configuration, when I enable the VPN on the iPhone it reports that it could not verify the server certificate. I ran Wireshark on the Linux server and observed that the initial ISAKMP message exchanges between client and server succeed, but before authorization the client sends an informational message, and soon afterwards it shows the certificate error as a popup. I captured logs on the Strongswan server; the errors below appear in auth.log:

      Apr 25 20:16:08 Linux pluto[4025]: | ISAKMP version: ISAKMP Version 1.0
      Apr 25 20:16:08 Linux pluto[4025]: | exchange type: ISAKMP_XCHG_INFO
      Apr 25 20:16:08 Linux pluto[4025]: | flags: ISAKMP_FLAG_ENCRYPTION
      Apr 25 20:16:08 Linux pluto[4025]: | message ID: 9d 1a ea 4d
      Apr 25 20:16:08 Linux pluto[4025]: | length: 76
      Apr 25 20:16:08 Linux pluto[4025]: | ICOOKIE: f6 b7 06 b2 b1 84 5b 93
      Apr 25 20:16:08 Linux pluto[4025]: | RCOOKIE: 86 92 a0 c2 a6 2f ac be
      Apr 25 20:16:08 Linux pluto[4025]: | peer: c0 a8 2b 48
      Apr 25 20:16:08 Linux pluto[4025]: | state hash entry 8
      Apr 25 20:16:08 Linux pluto[4025]: | state object not found
      Apr 25 20:16:08 Linux pluto[4025]: **packet from 192.168.43.72:500: Informational Exchange is for an unknown (expired?) SA**
      Apr 25 20:16:08 Linux pluto[4025]: | next event EVENT_RETRANSMIT in 8 seconds for #8
      Apr 25 20:16:16 Linux pluto[4025]: |
      Apr 25 20:16:16 Linux pluto[4025]: | *time to handle event
      Apr 25 20:16:16 Linux pluto[4025]: | event after this is EVENT_RETRANSMIT in 2 seconds
      Apr 25 20:16:16 Linux pluto[4025]: | handling event EVENT_RETRANSMIT for 192.168.43.72 "ios1" #8
      Apr 25 20:16:16 Linux pluto[4025]: | sending 76 bytes for EVENT_RETRANSMIT through wlan0 to 192.168.43.72:500:
      Apr 25 20:16:16 Linux pluto[4025]: | a6 a5 86 41 4b fb ff 99 c9 18 34 61 01 7b f1 d9
      Apr 25 20:16:16 Linux pluto[4025]: | 08 10 06 01 e9 1c ea 60 00 00 00 4c ba 7d c8 08
      Apr 25 20:16:16 Linux pluto[4025]: | 13 47 95 18 19 31 45 30 2e 22 f9 4d 85 2c 27 bc
      Apr 25 20:16:16 Linux pluto[4025]: | 9e 9b e1 ae 1e 35 51 6f ab 80 f5 73 3c 15 8d 20
      Apr 25 20:16:16 Linux pluto[4025]: | 4b 46 47 86 50 24 3f 13 15 7d d5 17
      Apr 25 20:16:16 Linux pluto[4025]: | inserting event EVENT_RETRANSMIT, timeout in 40 seconds for #8
      Apr 25 20:16:16 Linux pluto[4025]: | next event EVENT_RETRANSMIT in 2 seconds for #10
      Apr 25 20:16:16 Linux pluto[4025]: | rejected packet:
      Apr 25 20:16:16 Linux pluto[4025]: |
      Apr 25 20:16:16 Linux pluto[4025]: | control:
      Apr 25 20:16:16 Linux pluto[4025]: | 30 00 00 00 00 00 00 00 00 00 00 00 0b 00 00 00
      Apr 25 20:16:16 Linux pluto[4025]: | 6f 00 00 00 02 03 03 00 00 00 00 00 00 00 00 00
      Apr 25 20:16:16 Linux pluto[4025]: | 02 00 00 00 c0 a8 2b 48 00 00 00 00 00 00 00 00
      Apr 25 20:16:16 Linux pluto[4025]: | name:
      Apr 25 20:16:16 Linux pluto[4025]: | 02 00 01 f4 c0 a8 2b 48 00 00 00 00 00 00 00 00
      Apr 25 20:16:16 Linux pluto[4025]: **ERROR: asynchronous network error report on wlan0 for message to 192.168.43.72 port 500, complainant 192.168.43.72: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]**

    Can anybody shed some light on this error and how to solve the issue?
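    The iOS client generally refuses a gateway certificate unless it carries a subjectAltName matching the address the phone dials; changing the CN alone is often not enough. A hedged sketch of reissuing the server certificate with strongSwan's pki tool (file names follow the question; caKey.pem, the CA private key, is an assumed name from the original CA generation):

      # New server key (assumes the strongSwan pki utility used in the guide)
      ipsec pki --gen --outform pem > serverKey.pem

      # Issue a certificate whose subjectAltName is the gateway IP the phone uses
      ipsec pki --pub --in serverKey.pem | \
          ipsec pki --issue --cacert caCert.pem --cakey caKey.pem \
              --dn "C=CH, O=strongSwan, CN=192.168.43.212" \
              --san 192.168.43.212 --outform pem > serverCert.pem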


  • KVM + Cloudmin + iptables

    - by Alex
    I have KVM virtualization on a machine. I use Ubuntu Server + Cloudmin (in order to manage virtual machine instances). On the host system I have four network interfaces:

      ebadmin@saturn:/var/log$ ifconfig
      br0   Link encap:Ethernet  HWaddr 10:78:d2:ec:16:38
            inet addr:192.168.0.253  Bcast:192.168.0.255  Mask:255.255.255.0
            inet6 addr: fe80::1278:d2ff:feec:1638/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:589337 errors:0 dropped:0 overruns:0 frame:0
            TX packets:334357 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:753652448 (753.6 MB)  TX bytes:43385198 (43.3 MB)
      br1   Link encap:Ethernet  HWaddr 6e:a4:06:39:26:60
            inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
            inet6 addr: fe80::6ca4:6ff:fe39:2660/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:16995 errors:0 dropped:0 overruns:0 frame:0
            TX packets:13309 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:2059264 (2.0 MB)  TX bytes:1763980 (1.7 MB)
      eth0  Link encap:Ethernet  HWaddr 10:78:d2:ec:16:38
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:610558 errors:0 dropped:0 overruns:0 frame:0
            TX packets:332382 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:769477564 (769.4 MB)  TX bytes:44360402 (44.3 MB)
            Interrupt:20 Memory:fe400000-fe420000
      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:239632 errors:0 dropped:0 overruns:0 frame:0
            TX packets:239632 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:50738052 (50.7 MB)  TX bytes:50738052 (50.7 MB)
      tap0  Link encap:Ethernet  HWaddr 6e:a4:06:39:26:60
            inet6 addr: fe80::6ca4:6ff:fe39:2660/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:17821 errors:0 dropped:0 overruns:0 frame:0
            TX packets:13703 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:500
            RX bytes:2370468 (2.3 MB)  TX bytes:1782356 (1.7 MB)

    br0 is connected to a real network; br1 is used to create a private network shared between guest systems. Now I need to configure iptables for network access. First of all I allow ssh sessions on port 8022 on the host system, then I allow all connections in state RELATED,ESTABLISHED. This is working OK. I installed another system as a guest; its IP address is 192.168.10.2. Now I have two problems:

      1. I want to allow access from this host to the outside world, but cannot accomplish this. I can ssh from the host.
      2. I want to be able to ssh to the guest from the outside world using port 8023. Cannot accomplish this either.

    The full iptables configuration is as follows:

      ebadmin@saturn:/var/log$ sudo iptables --list
      [sudo] password for ebadmin:
      Chain INPUT (policy DROP)
      target  prot opt source    destination
      ACCEPT  all  --  anywhere  anywhere
      ACCEPT  tcp  --  anywhere  anywhere  tcp dpt:8022
      ACCEPT  all  --  anywhere  anywhere  state RELATED,ESTABLISHED
      LOG     all  --  anywhere  anywhere  LOG level warning

      Chain FORWARD (policy ACCEPT)
      target  prot opt source    destination
      LOG     all  --  anywhere  anywhere  LOG level warning

      Chain OUTPUT (policy ACCEPT)
      target  prot opt source    destination
      LOG     all  --  anywhere  anywhere  LOG level warning

      ebadmin@saturn:/var/log$ sudo iptables -t nat --list
      Chain PREROUTING (policy ACCEPT)
      target  prot opt source    destination
      DNAT    tcp  --  anywhere  anywhere  tcp spt:8023 to:192.168.10.2:22

      Chain INPUT (policy ACCEPT)
      target  prot opt source    destination

      Chain OUTPUT (policy ACCEPT)
      target  prot opt source    destination

      Chain POSTROUTING (policy ACCEPT)
      target  prot opt source    destination

    Worst of all, I don't know how to interpret the iptables logs; I don't see the firewall's final decision. Need help urgently.
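    Since the FORWARD policy is already ACCEPT, the pieces most likely missing are IP forwarding, NAT for the guest subnet, and a destination-port DNAT: the PREROUTING rule shown above matches spt:8023 (the source port), so inbound connections to port 8023 never hit it. A sketch of the rules this layout usually needs, assuming br0 faces the outside and br1 the guests, as the ifconfig output suggests:

      # Enable routing between the bridges
      echo 1 > /proc/sys/net/ipv4/ip_forward

      # Let guests on br1 reach the outside world through br0
      iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o br0 -j MASQUERADE

      # Forward host port 8023 to ssh on the guest; note --dport, not --sport.
      # -R replaces rule 1 in PREROUTING (the broken spt rule).
      iptables -t nat -R PREROUTING 1 -p tcp --dport 8023 \
          -j DNAT --to-destination 192.168.10.2:22
      iptables -A FORWARD -p tcp -d 192.168.10.2 --dport 22 -j ACCEPT

      # To make the logs readable, give each LOG rule a distinct prefix,
      # e.g. -j LOG --log-prefix "FWD: "; the packet's fate is then whatever
      # rule (or chain policy) applies after the last logged match.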


  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard Debian package: sudo apt-get install motion. Assume my IP camera's DNS name is webcam, user admin, password password. Below are my settings from /etc/motion/motion.conf:

      netcam_url http://webcam/videostream.cgi
      netcam_userpass admin:password
      target_dir /media/videos/log/motion

      # The mini-http server listens to this port for requests (default: 0 = disabled)
      webcam_port 1234

      ffmpeg_cap_new on
      ffmpeg_video_codec mpeg4
      output_motion off
      snapshot_interval 0

      # Quality of the jpeg (in percent) images produced (default: 50)
      webcam_quality 50

      # Output frames at 1 fps when no motion is detected and increase to the
      # rate given by webcam_maxrate when motion is detected (default: off)
      webcam_motion on

      # Maximum framerate for webcam streams (default: 1)
      webcam_maxrate 15

      # Restrict webcam connections to localhost only (default: on)
      webcam_localhost on

      # Limits the number of images per connection (default: 0 = unlimited)
      # Number can be defined by multiplying actual webcam rate by desired number of seconds
      # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
      webcam_limit 0

      control_port 8080
      control_authentication admin:password

    Issue #1: when I try to display http://localhost:1234, the browser looks frozen; no HTTP 404 is received, but it seems to wait for data forever.

    Issue #2: in the output directory, motion writes a lot of jpeg snapshots, although I just want several sequenced video files.

    I ran motion in interactive mode in a terminal; here is the output:

      root@mercure:/etc/motion# motion -c motion-1.0.conf
      [0] Processing thread 0 - config file motion-1.0.conf
      [0] Motion 3.2.12 Started
      [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785
      [0] Thread 1 is from motion-1.0.conf
      [0] motion-httpd/3.2.12 running, accepting connections
      [0] motion-httpd: waiting for data on port TCP 8080
      [1] Thread 1 started
      [1] Resizing pre_capture buffer to 1 items
      [1] Started stream webcam server in port 1234
      [1] avcodec_open - could not open codec: Operation now in progress
      [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg
      [1] Thread exiting
      [1] Calling vid_close() from motion_cleanup
      [1] vid_close: calling netcam_cleanup
      [1] netcam camera handler: finish set, exiting
      [0] Motion thread 1 restart
      [1] Thread 1 started
      [1] Resizing pre_capture buffer to 1 items
      [1] Started stream webcam server in port 1234
      [1] avcodec_open - could not open codec: Resource temporarily unavailable
      [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg
      [1] Thread exiting
      [1] Calling vid_close() from motion_cleanup
      [1] vid_close: calling netcam_cleanup
      [1] netcam camera handler: finish set, exiting
      [0] Motion thread 1 restart
      [1] Thread 1 started
      [1] Resizing pre_capture buffer to 1 items
      [1] Started stream webcam server in port 1234
      [1] avcodec_open - could not open codec: Connection reset by peer
      [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg
      [1] Thread exiting
      [1] Calling vid_close() from motion_cleanup
      [1] vid_close: calling netcam_cleanup
      [0] httpd - Finishing
      [0] httpd Closing
      [0] httpd thread exit
      [1] netcam camera handler: finish set, exiting
      [0] Motion thread 1 restart
      [1] Thread 1 started
      [1] Resizing pre_capture buffer to 1 items
      [1] Started stream webcam server in port 1234

    It doesn't find the codec: avcodec_open - could not open codec: Operation now in progress. I've changed ffmpeg_video_codec from mpeg4 to swf; the result is the same. When using the flv format, motion writes a lot of .jpg images, but I can't see anything at http://localhost:1234:

      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg

    Any idea how to get the video stream recorded to my local disk?
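    Two things are worth separating here, as a sketch (URL and credentials copied from the config above; option names are from motion 3.2's stock config file): whether the camera stream is readable at all outside motion, and whether the mini-http server is simply refusing non-local viewers.

      # 1. Can anything else read the netcam stream? If this fails, the codec
      #    errors above are a symptom of a bad or empty input, not of motion.
      ffmpeg -i "http://admin:password@webcam/videostream.cgi" -t 10 -c:v mpeg4 test.avi

      # 2. The stream on port 1234 only answers localhost while
      #    webcam_localhost is on; to test from another machine set
      #      webcam_localhost off
      # 3. The flood of jpeg files comes from output_normal, which defaults
      #    to on even with output_motion off; to keep only movie files set
      #      output_normal off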


  • Debian virtual memory reaching limit

    - by Gregor
    As a relative newbie to systems, I inherited a Debian server, and I've noticed that virtual memory use is very high (around 95%!). The server has been running slowly for around 6 months, and I was wondering if any of you had tips on things I could try, particularly on freeing up memory. The server hosts various websites and also a Postfix email server. Here are the details:

      Operating system       Debian Linux 5.0
      Webmin version         1.580
      Time on system         Thu Apr 12 11:12:21 2012
      Kernel and CPU         Linux 2.6.18-6-amd64 on x86_64
      Processor information  Intel(R) Core(TM)2 Duo CPU E7400 @ 2.80GHz, 2 cores
      System uptime          229 days, 12 hours, 50 minutes
      Running processes      138
      CPU load averages      0.10 (1 min) 0.28 (5 mins) 0.36 (15 mins)
      CPU usage              14% user, 1% kernel, 0% IO, 85% idle
      Real memory            2.94 GB total, 1.69 GB used
      Virtual memory         3.93 GB total, 3.84 GB used
      Local disk space       142.84 GB total, 116.13 GB used

    free -m output:

      free -m
                   total       used       free     shared    buffers     cached
      Mem:          3010       2517        492          0        107        996
      -/+ buffers/cache:       1413       1596
      Swap:         4024       3930         93

    top output:

      top - 11:59:57 up 229 days, 13:38, 1 user, load average: 0.26, 0.24, 0.26
      Tasks: 136 total, 2 running, 134 sleeping, 0 stopped, 0 zombie
      Cpu(s): 3.8%us, 0.5%sy, 0.0%ni, 95.0%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 3082544k total, 2773160k used, 309384k free, 111496k buffers
      Swap: 4120632k total, 4024712k used, 95920k free, 1036136k cached

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
      28796 www-data  16   0  304m  68m 6188 S    8  2.3  0:03.13  apache2
          1 root      15   0 10304  592  564 S    0  0.0  0:00.76  init
          2 root      RT   0     0    0    0 S    0  0.0  0:04.06  migration/0
          3 root      34  19     0    0    0 S    0  0.0  0:05.67  ksoftirqd/0
          4 root      RT   0     0    0    0 S    0  0.0  0:00.00  watchdog/0
          5 root      RT   0     0    0    0 S    0  0.0  0:00.06  migration/1
          6 root      34  19     0    0    0 S    0  0.0  0:01.26  ksoftirqd/1
          7 root      RT   0     0    0    0 S    0  0.0  0:00.00  watchdog/1
          8 root      10  -5     0    0    0 S    0  0.0  0:00.12  events/0
          9 root      10  -5     0    0    0 S    0  0.0  0:00.00  events/1
         10 root      10  -5     0    0    0 S    0  0.0  0:00.00  khelper
         11 root      10  -5     0    0    0 S    0  0.0  0:00.02  kthread
         16 root      10  -5     0    0    0 S    0  0.0  0:15.51  kblockd/0
         17 root      10  -5     0    0    0 S    0  0.0  0:01.32  kblockd/1
         18 root      15  -5     0    0    0 S    0  0.0  0:00.00  kacpid
        127 root      10  -5     0    0    0 S    0  0.0  0:00.00  khubd
        129 root      10  -5     0    0    0 S    0  0.0  0:00.00  kseriod
        180 root      10  -5     0    0    0 S    0  0.0 70:09.05  kswapd0
        181 root      17  -5     0    0    0 S    0  0.0  0:00.00  aio/0
        182 root      17  -5     0    0    0 S    0  0.0  0:00.00  aio/1
        780 root      16  -5     0    0    0 S    0  0.0  0:00.00  ata/0
        782 root      16  -5     0    0    0 S    0  0.0  0:00.00  ata/1
        783 root      16  -5     0    0    0 S    0  0.0  0:00.00  ata_aux
        802 root      10  -5     0    0    0 S    0  0.0  0:00.00  scsi_eh_0
        803 root      10  -5     0    0    0 S    0  0.0  0:00.00  scsi_eh_1
        804 root      10  -5     0    0    0 S    0  0.0  0:00.00  scsi_eh_2
        805 root      10  -5     0    0    0 S    0  0.0  0:00.00  scsi_eh_3
       1013 root      10  -5     0    0    0 S    0  0.0 49:27.78  kjournald
       1181 root      15  -4 16912  452  448 S    0  0.0  0:00.05  udevd
       1544 root      14  -5     0    0    0 S    0  0.0  0:00.00  kpsmoused
       1706 root      13  -5     0    0    0 S    0  0.0  0:00.00  kmirrord
       1995 root      18   0  193m 3324 1688 S    0  0.1  8:52.77  rsyslogd
       2031 root      15   0 48856  732  608 S    0  0.0  0:01.86  sshd
       2071 root      25   0 17316 1072 1068 S    0  0.0  0:00.00  mysqld_safe
       2108 mysql     15   0  320m  72m 4368 S    0  2.4  1923:25  mysqld
       2109 root      18   0  3776  500  496 S    0  0.0  0:00.00  logger
       2180 postgres  15   0 99504 3016 2880 S    0  0.1  1:24.15  postgres
       2184 postgres  15   0 99504 3596 3420 S    0  0.1  0:02.08  postgres
       2185 postgres  15   0 99504  696  628 S    0  0.0  0:00.65  postgres
       2186 postgres  15   0 99640  892  648 S    0  0.0  0:01.18  postgres
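    Nearly all of swap (3.9 GB of 4 GB) is occupied while roughly 1.5 GB of RAM is free or reclaimable cache, and kswapd0 has burned 70+ hours of CPU, so the first question is whether the box is still actively swapping or just holding stale pages. A sketch of what could be run to tell the difference (standard tools; nothing here is specific to this server):

      # Largest resident processes right now
      ps aux --sort=-rss | head -n 15

      # Watch si/so (swap-in/swap-out); steadily non-zero values mean
      # the machine is actively thrashing, not just holding old pages
      vmstat 5 5

      # If si/so stay near zero, the used swap is mostly stale; with enough
      # free RAM it can be pushed back by cycling swap (do this in a quiet
      # period, and only if free + cached comfortably exceeds swap used)
      swapoff -a && swapon -a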


  • Where's my memory?! Nginx + PHP-FPM front end webserver slows to a crawl...

    - by incredimike
    I'm not sure if I have a memory leak (as my hosting company suggests), or if we both need to read http://linuxatemyram.com. Maybe you clever people can help us out? This is a front-end webserver VM running essentially only nginx and php-fpm on RHEL 5.5. This server is powering Magento, a PHP eCommerce thingy. The server runs in a shared environment, but we're changing that soon. Anyway, after a reboot the server runs just fine, but within a day it will grind itself into nothingness. Pages will take literally 2 minutes to load, CPU spikes like crazy, etc. The console is even sluggish when I SSH in. It's like my whole server is being brought to its knees. I've also been monitoring the DB server via top and tcpdumping incoming traffic. The DB stays idle for a good portion of that "slow" load time; when I start seeing queries coming from the front-end server, the page loads soon afterward. Here are some stats from logging in during a slowdown, after restarting php-fpm:

      [mike@front01 ~]$ free -m
                   total       used       free     shared    buffers     cached
      Mem:          5963       5217        745          0        192        314
      -/+ buffers/cache:       4711       1252
      Swap:         4047          4       4042

      [mike@front01 ~]$ top
      top - 11:38:55 up 2 days, 1:01, 3 users, load average: 0.06, 0.17, 0.21
      Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
      Cpu0 : 0.0%us, 0.3%sy, 0.0%ni, 99.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu1 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 6106800k total, 5361288k used, 745512k free, 199960k buffers
      Swap: 4144728k total, 4976k used, 4139752k free, 328480k cached

        PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
      31806 apache  15   0  601m 120m  37m S  0.0  2.0  0:22.23 php-fpm
      31805 apache  15   0  549m  66m  31m S  0.0  1.1  0:14.54 php-fpm
      31809 apache  16   0  547m  65m  32m S  0.0  1.1  0:12.84 php-fpm
      32285 apache  15   0  546m  63m  33m S  0.0  1.1  0:09.22 php-fpm
      32373 apache  15   0  546m  62m  32m S  0.0  1.1  0:09.66 php-fpm
      31808 apache  16   0  543m  60m  35m S  0.0  1.0  0:18.93 php-fpm
      31807 apache  16   0  533m  49m  30m S  0.0  0.8  0:08.93 php-fpm
      32092 apache  15   0  535m  48m  27m S  0.0  0.8  0:06.67 php-fpm
       4392 root    18   0  194m  10m 7184 S  0.0  0.2  0:06.96 cvd
       4064 root    15   0  154m 8304 4220 S  0.0  0.1  3:55.57 snmpd
       4394 root    15   0  119m 5660 2944 S  0.0  0.1  0:02.84 EvMgrC
      31804 root    15   0  519m 5180  932 S  0.0  0.1  0:00.46 php-fpm
       4138 ntp     15   0 23396 5032 3904 S  0.0  0.1  0:02.38 ntpd
        643 nginx   15   0 95276 4408 1524 S  0.0  0.1  0:01.15 nginx
       5131 root    16   0 90128 3340 2600 S  0.0  0.1  0:01.41 sshd
      28467 root    15   0 90128 3340 2600 S  0.0  0.1  0:00.35 sshd
      32602 root    16   0 90128 3332 2600 S  0.0  0.1  0:00.36 sshd
       1614 root    16   0 90128 3308 2588 S  0.0  0.1  0:00.02 sshd
       2817 root     5 -10  7216 3140 1724 S  0.0  0.1  0:03.80 iscsid
       4161 root    15   0 66948 2340  800 S  0.0  0.0  0:10.35 sendmail
       1617 nicole  17   0 53876 2000 1516 S  0.0  0.0  0:00.02 sftp-server
      ...

    Is there anything else I should be looking at, or any more information that might be useful? I'm just a developer, but the slowdowns on this system worry me and make it hard to do my work. Help me out, ServerFault!
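    If memory really is the culprit, the usual containment for leaky PHP workers is to recycle them after a fixed number of requests and to cap the pool so it cannot outgrow RAM; with ~120 MB resident workers and ~6 GB total, a dozen workers is a plausible ceiling. A sketch (the pool file path and the numbers are assumptions to adapt, not measured values):

      # In the php-fpm pool config (e.g. /etc/php-fpm.d/www.conf):
      #   pm = dynamic
      #   pm.max_children = 12
      #   pm.max_requests = 500   ; recycle each worker so slow leaks can't accumulate

      # Then watch whether workers still balloon over the day:
      watch -n 60 'ps -C php-fpm -o pid,rss,cmd --sort=-rss | head'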


  • Cannot connect to website - SSL handshaking fails

    - by ravenspoint
    So I cannot connect to certain websites. Just a few; most are OK. The one I really care about is paypal.com. I have done the usual things:

      - Checked my /etc/hosts
      - Flushed the DNS cache
      - Checked the firewall
      - Switched virus protection on and off
      - Switched ad blocking on and off
      - Pinged the sites

    Eventually, I decided to look at what curl says in detail:

      == Info: About to connect() to www.paypal.com port 443 (#0)
      == Info:   Trying 66.211.169.2...
      == Info: connected
      == Info: SSLv3, TLS handshake, Client hello (1):
      => Send SSL data, 110 bytes (0x6e)
      0000: 01 00 00 6a 03 01 4f 6c aa 8c 57 2b 3d 1e 74 64  ...j..Ol..W+=.td
      0010: c1 27 25 a5 3a 12 7f 3f 41 0a 17 15 2e c9 67 7c  .'%.:.?A.....g|
      0020: b3 e1 f6 9a db a9 00 00 2a 00 39 00 38 00 35 00  ........*.9.8.5.
      0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00  ......3.2./.....
      0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00  ................
      0050: 03 00 ff 01 00 00 17 00 00 00 13 00 11 00 00 0e  ................
      0060: 77 77 77 2e 70 61 79 70 61 6c 2e 63 6f 6d        www.paypal.com
      (hangs here forever)

    This looks to me like PayPal is refusing to reply to the first SSL handshake message. I don't know much about SSL, but comparing to the output from a site that works for me seems to make it obvious:

      == Info: About to connect() to www.cibc.com port 443 (#0)
      == Info:   Trying 159.231.80.200...
      == Info: connected
      == Info: SSLv3, TLS handshake, Client hello (1):
      => Send SSL data, 108 bytes (0x6c)
      0000: 01 00 00 68 03 01 4f 6c ad 6a 1f 67 d5 84 c4 4b  ...h..Ol.j.g...K
      0010: 0d 49 ae d6 b9 5b c3 63 f9 48 aa 18 da 43 d1 32  .I...[.c.H...C.2
      0020: 47 ae 17 e5 cd e9 00 00 2a 00 39 00 38 00 35 00  G.......*.9.8.5.
      0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00  ......3.2./.....
      0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00  ................
      0050: 03 00 ff 01 00 00 15 00 00 00 11 00 0f 00 00 0c  ................
      0060: 77 77 77 2e 63 69 62 63 2e 63 6f 6d              www.cibc.com
      == Info: SSLv3, TLS handshake, Server hello (2):
      <= Recv SSL data, 74 bytes (0x4a)
      0000: 02 00 00 46 03 01 00 00 58 cf 26 e2 e1 65 db 11  ...F....X.&..e..
      0010: bc 6f 26 7b 3b 6d eb 14 5f ad 47 dd 86 ea 4d a3  .o&{;m.._.G...M.
      0020: fb 9f b7 2a 54 3e 20 5f 6b 04 5a 12 38 64 5d 18  ...*T> _k.Z.8d].
      0030: 65 9e e9 cd 61 eb 91 c1 16 25 61 30 bb 08 2a 78  e...a....%a0..*x
      0040: b8 ee b8 7e f2 65 6a 00 04 00                    ...~.ej...
      == Info: SSLv3, TLS handshake, CERT (11):
      ... and so on - working nicely, eventually returning some nice HTML

    Now I am really stuck. This has been going on for five days, so I am pretty sure the problem is not with PayPal. But what on my system could be interfering with the SSL handshake curl does with this particular site? I suppose I could be failing to offer any certificate PayPal accepts, but wouldn't I get a reply telling me so, or at least an error?
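    Since only some TLS sites hang, and they hang right after the ClientHello is sent, it helps to separate a certificate problem from a transport problem; a server that disliked the handshake would normally send a TLS alert rather than nothing at all. A sketch of two checks (the MTU theory is a guess, not a diagnosis):

      # Does a different TLS client get a ServerHello back?
      openssl s_client -connect www.paypal.com:443 -servername www.paypal.com

      # If that hangs too, test whether near-MTU packets survive the path
      # (DF bit set; 1472 payload + 28 header bytes = 1500)
      ping -M do -s 1472 www.paypal.com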


  • Make an exact MP4 (H.264) format for uploading to YouTube

    - by WHITECOLOR
    With ffmpeg I'm producing a video from an mp3 and a picture to upload to YouTube. After upload, the conversion fails; the reasons are unknown. Incidentally, if I upload a 5-minute file it fails, but if I upload 30 seconds of the same file it succeeds. I downloaded an mp4 file from YouTube and then re-uploaded it, and it was processed very fast. So a nice solution would be to convert my videos to the same format Google itself produces. ffmpeg reports the following for the downloaded file:

      ffmpeg version N-44264-g070b0e1 Copyright (c) 2000-2012 the FFmpeg developers
        built on Sep 7 2012 17:38:57 with gcc 4.7.1 (GCC)
        configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
        libavutil      51. 72.100 / 51. 72.100
        libavcodec     54. 55.100 / 54. 55.100
        libavformat    54. 25.105 / 54. 25.105
        libavdevice    54.  2.100 / 54.  2.100
        libavfilter     3. 16.100 /  3. 16.100
        libswscale      2.  1.101 /  2.  1.101
        libswresample   0. 15.100 /  0. 15.100
        libpostproc    52.  0.100 / 52.  0.100
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'youtubetrack0.mp4':
        Metadata:
          major_brand     : mp42
          minor_version   : 0
          compatible_brands: isommp42
          creation_time   : 2012-10-02 22:58:57
        Duration: 00:06:46.66, start: 0.000000, bitrate: 176 kb/s
          Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 450x360, 78 kb/s, 6 fps, 6 tbr, 12 tbn, 12 tbc
          Metadata:
            creation_time   : 1970-01-01 00:00:00
            handler_name    : VideoHandler
          Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 95 kb/s
          Metadata:
            creation_time   : 2012-10-02 22:58:57
            handler_name    : IsoMedia File Produced by Google, 5-11-2011

    Is it possible to construct ffmpeg parameters that produce the same format Google does internally? Is the information above sufficient? I couldn't construct the needed parameters; for example, I don't understand how to set tbn, or what the 95 kb/s in "Stream #0:1(und): Audio:" means. For now I just do:

      ffmpeg -i videoimage.jpg -i audio.mp3 video.mp4

    Here is the info for the file I get:

      ffmpeg version N-44998-gdf82454 Copyright (c) 2000-2012 the FFmpeg developers
        built on Oct 2 2012 23:03:12 with gcc 4.7.1 (GCC)
        configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
        libavutil      51. 73.101 / 51. 73.101
        libavcodec     54. 63.100 / 54. 63.100
        libavformat    54. 29.105 / 54. 29.105
        libavdevice    54.  3.100 / 54.  3.100
        libavfilter     3. 19.102 /  3. 19.102
        libswscale      2.  1.101 /  2.  1.101
        libswresample   0. 16.100 /  0. 16.100
        libpostproc    52.  1.100 / 52.  1.100
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
        Metadata:
          major_brand     : isom
          minor_version   : 512
          compatible_brands: isomiso2avc1mp41
          encoder         : Lavf54.25.105
        Duration: 00:06:46.81, start: 0.000000, bitrate: 129 kb/s
          Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p, 450x360, 3392 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
          Metadata:
            handler_name    : VideoHandler
          Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 127 kb/s
          Metadata:
            handler_name    : SoundHandler

    This video fails conversion on YouTube. I also tried other vcodec parameters and output-file extensions (mp4, wmv, avi), but those failed too. Would be grateful for help.
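    A sketch of an encode that stays close to the stream parameters of the working YouTube file above (Constrained Baseline H.264, yuv420p, 6 fps, AAC audio around 96 kb/s); the bitrates are assumptions, and -strict experimental is what ffmpeg builds of that era require for the native AAC encoder. tbn (the container time base) is chosen by the muxer and does not normally need to be set by hand.

      ffmpeg -loop 1 -i videoimage.jpg -i audio.mp3 \
          -c:v libx264 -profile:v baseline -pix_fmt yuv420p -r 6 -b:v 100k \
          -c:a aac -strict experimental -b:a 96k -ar 44100 \
          -shortest video.mp4

      # -loop 1 repeats the single image for the length of the audio;
      # -shortest ends the video when the audio stream ends;
      # -pix_fmt yuv420p avoids the jpeg-range yuvj420p visible in the
      #  failing file's stream info above.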


  • Problem with Hadoop start-dfs.sh

    - by user288501
    I installed and configured Hadoop on my Ubuntu 14.04 server, virtualized inside Hyper-V; however, I get a problem when I run start-dfs.sh:

      root@sUbuntu01:/var/log# start-dfs.sh
      14/06/04 15:27:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
      It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'. localhost]
      sed: -e expression #1, char 6: unknown option to `s'
      -c: Unknown cipher type 'cd'
      localhost: Ubuntu 14.04 LTS
      localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-sUbuntu01.out
      noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
      '-z: ssh: Could not resolve hostname '-z: Name or service not known
      'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
      disabled: ssh: Could not resolve hostname disabled: Name or service not known
      with: ssh: Could not resolve hostname with: Name or service not known
      have: ssh: Could not resolve hostname have: Name or service not known
      VM: ssh: Could not resolve hostname vm: Name or service not known
      stack: ssh: Could not resolve hostname stack: Name or service not known
      guard: ssh: Could not resolve hostname guard: Name or service not known
      fix: ssh: Could not resolve hostname fix: Name or service not known
      VM: ssh: Could not resolve hostname vm: Name or service not known
      the: ssh: Could not resolve hostname the: Name or service not known
      to: ssh: Could not resolve hostname to: Name or service not known
      warning:: ssh: Could not resolve hostname warning:: Name or service not known
      it: ssh: Could not resolve hostname it: Name or service not known
      now.: ssh: Could not resolve hostname now.: Name or service not known
      library: ssh: Could not resolve hostname library: Name or service not known
      will: ssh: Could not resolve hostname will: Name or service not known
      link: ssh: Could not resolve hostname link: Name or service not known
      or: ssh: Could not resolve hostname or: Name or service not known
      It's: ssh: Could not resolve hostname it's: Name or service not known
      <libfile>',: ssh: Could not resolve hostname <libfile>',: Name or service not known
      which: ssh: connect to host which port 22: Connection timed out
      have: ssh: connect to host have port 22: Connection timed out
      you: ssh: connect to host you port 22: Connection timed out
      try: ssh: connect to host try port 22: Connection timed out
      the: ssh: connect to host the port 22: Connection timed out
      highly: ssh: connect to host highly port 22: Connection timed out
      might: ssh: connect to host might port 22: Connection timed out
      loaded: ssh: connect to host loaded port 22: Connection timed out
      You: ssh: connect to host you port 22: Connection timed out
      guard.: ssh: connect to host guard. port 22: Connection timed out
      library: ssh: connect to host library port 22: Connection timed out
      Server: ssh: connect to host server port 22: Connection timed out
      fix: ssh: connect to host fix port 22: Connection timed out
      The: ssh: connect to host the port 22: Connection timed out
      recommended: ssh: connect to host recommended port 22: Connection timed out
      that: ssh: connect to host that port 22: Connection timed out
      stack: ssh: connect to host stack port 22: Connection timed out
      OpenJDK: ssh: connect to host openjdk port 22: Connection timed out
      64-Bit: ssh: connect to host 64-bit port 22: Connection timed out
      with: ssh: connect to host with port 22: Connection timed out
      localhost: Ubuntu 14.04 LTS
      localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-sUbuntu01.out
      localhost: OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
      localhost: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
      Starting secondary namenodes [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
      It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'. 0.0.0.0]
      sed: -e expression #1, char 6: unknown option to `s'
      warning:: ssh: Could not resolve hostname warning:: Name or service not known
      -c: Unknown cipher type 'cd'
      It's: ssh: Could not resolve hostname it's: Name or service not known
      'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
      '-z: ssh: Could not resolve hostname '-z: Name or service not known
      0.0.0.0: Ubuntu 14.04 LTS
      0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-sUbuntu01.out
      0.0.0.0: OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
      0.0.0.0: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
      noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
      <libfile>',: ssh: Could not resolve hostname <libfile>',: Name or service not known
      link: ssh: Could not resolve hostname link: No address associated with hostname
      it: ssh: Could not resolve hostname it: No address associated with hostname
      to: ssh: connect to host to port 22: Connection timed out
      or: ssh: connect to host or port 22: Connection timed out
      you: ssh: connect to host you port 22: Connection timed out
      guard.: ssh: connect to host guard. port 22: Connection timed out
      VM: ssh: connect to host vm port 22: Connection timed out
      stack: ssh: connect to host stack port 22: Connection timed out
      library: ssh: connect to host library port 22: Connection timed out
      Server: ssh: connect to host server port 22: Connection timed out
      might: ssh: connect to host might port 22: Connection timed out
      stack: ssh: connect to host stack port 22: Connection timed out
      You: ssh: connect to host you port 22: Connection timed out
      now.: ssh: connect to host now. port 22: Connection timed out
      disabled: ssh: connect to host disabled port 22: Connection timed out
      have: ssh: connect to host have port 22: Connection timed out
      will: ssh: connect to host will port 22: Connection timed out
      The: ssh: connect to host the port 22: Connection timed out
      have: ssh: connect to host have port 22: Connection timed out
      try: ssh: connect to host try port 22: Connection timed out
      the: ssh: connect to host the port 22: Connection timed out
      guard: ssh: connect to host guard port 22: Connection timed out
      the: ssh: connect to host the port 22: Connection timed out
      recommended: ssh: connect to host recommended port 22: Connection timed out
      with: ssh: connect to host with port 22: Connection timed out
      library: ssh: connect to host library port 22: Connection timed out
      64-Bit: ssh: connect to host 64-bit port 22: Connection timed out
      fix: ssh: connect to host fix port 22: Connection timed out
      which: ssh: connect to host which port 22: Connection timed out
      VM: ssh: connect to host vm port 22: Connection timed out
      OpenJDK: ssh: connect to host openjdk port 22: Connection timed out
      fix: ssh: connect to host fix port 22: Connection timed out
      highly: ssh: connect to host highly port 22: Connection timed out
      that: ssh: connect to host that port 22: Connection timed out
      with: ssh: connect to host with port 22: Connection timed out
      loaded: ssh: connect to host loaded port 22: Connection timed out
      14/06/04 15:36:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

    Any advice?
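    The ssh noise above is just the JVM stack-guard warning being split into words by the Hadoop start scripts, which then try to ssh to each word as if it were a hostname; it usually goes away once the native-library path is handed to the JVM. A sketch of the commonly suggested settings (paths assume the /usr/local/hadoop install visible in the log; add to hadoop-env.sh or the shell profile):

      export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
      export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"

      # Alternatively, clear the executable-stack flag on the native library
      # itself (the execstack tool may need installing first):
      sudo apt-get install execstack
      sudo execstack -c /usr/local/hadoop/lib/native/libhadoop.so.1.0.0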


  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure.

    SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. To see an introduction to SQL Azure, check out the post by Pinal here.

    In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not currently available with SQL Azure, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud: a worker role can perform background processes, and for handling processes such as synchronization and backup it becomes our ideal tool.

    First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

      - Scaling out access over multiple databases enables us to handle workload efficiently
      - As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data

    Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:

      - To back up a SQL Azure database on local infrastructure
      - Rather than investing in local infrastructure for increased workloads, such workloads could be handled by the cloud
      - The ability to extend data to datacenters located across the world, to enable efficient data access from remote locations

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch.

    If you newly add a worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf)

    Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one-time step, and its code goes in the Setup() function, which is called once when the worker role starts. The second step is the continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done by calling the Sync() function in the while loop; the logic that synchronizes changes between the SQL Azure databases goes in Sync().

    Discussing the coding part step by step is out of the scope of this article, so let me suggest a resource, which is given here. Also note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here), and you will reference some libraries before you start coding; details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here.

    Currently, a tool named DATA SYNC, which is built on top of the Sync Framework, is available in CTP; it allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code. However, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure DATA SYNC CTP2; if you wish to have such functionality now, you have the option of developing custom code using the Sync Framework.

    Now, this code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf)

    Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it. We are scheduling our administrative task by writing custom code; in other words, we have developed a "lightweight SQL Server Agent for SQL Azure!" Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role!

    If you wish to track jobs, you can do so by storing that data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices are SQL Azure or Azure tables. Note that this solution requires custom code and is not UI driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality a SQL Server Agent provides, but it does open up an interesting avenue for scheduling tasks such as backup and synchronization of SQL Azure databases by writing custom code in the Azure worker role.

    Now, let us see one more possibility: running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload locally, consider the data-transfer cost. If you upload to blobs residing in the same datacenter, no transfer cost applies, but the cost of the blob storage does. So, before choosing an option, evaluate your preferences keeping the cost associated with each option in mind.

    In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built along the way. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality?

    I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or mail me at Paras[at]student-partners[dot]com

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – Thinking about Deprecated, Discontinued Features and Breaking Changes while Upgrading to SQL Server 2012 – Guest Post by Nakul Vachhrajani

    - by pinaldave
    Nakul Vachhrajani is a Technical Specialist and systems development professional at iGATE with more than 7 years of total IT experience. Nakul is an active blogger on BeyondRelational.com (150+ blogs) and can also be found on the forums at SQLServerCentral and BeyondRelational.com. Nakul has also been a guest columnist for SQLAuthority.com and SQLServerCentral.com. He presented a webcast on the "Underappreciated Features of Microsoft SQL Server" in the Microsoft Virtual Tech Days Exclusive Webcast series (May 02-06, 2011) on May 06, 2011. He is also the author of a research paper on database upgrade methodologies, published in a nationally distributed CSI journal. In addition to his passion for SQL Server, Nakul contributes to academia out of personal interest: he visits various colleges and universities as external faculty to judge project activities carried out by the students.

    Disclaimer: The opinions expressed herein are his own personal opinions and do not represent his employer's view in any way.

    Blog | LinkedIn | Twitter | Google+

    Let us hear the thoughts of Nakul in first person:

    Those who have been following my blogs will be aware that I am currently running a series on the database engine features that have been deprecated in Microsoft SQL Server 2012. Based on the response I have received, I was quite surprised to learn that most of the audience took these to be breaking changes, when in fact they are not! It was then that I decided to write a little piece on how to plan your database upgrade so that it works with the next version of Microsoft SQL Server. Please note that the recommendations made in this article are high-level markers, intended to help you think through the specific steps you would need to take to upgrade your database.

    Refer the documentation - understand the terms

    Change is the only constant in this world. Therefore, whenever customer requirements or newer architectures and designs require software vendors to change keywords, functions, etc., they ensure that end users get sufficient time to migrate over to the new standards before the old ones are dropped. Microsoft does that too with its Microsoft SQL Server product. Whenever a new SQL Server release is announced, it comes with lists of the following kinds of features:

      Breaking changes
      - Changes that would break your currently running applications, scripts, or functionality based on an earlier version of Microsoft SQL Server
      - Mostly features whose behavior has been changed keeping in mind newer architectures and designs
      - Lesson: these are the changes you need to be most worried about!

      Discontinued features
      - Features that are no longer available in the associated version of Microsoft SQL Server
      - These features used to be "deprecated" in the prior release
      - Lesson: without handling these changes, your database may not be compliant with, or work on, the version of Microsoft SQL Server under consideration

      Deprecated features
      - Features that are still available in the current version of Microsoft SQL Server but are scheduled for removal in a future version; they may be removed in either the next version or any later version
      - The features listed for deprecation will compose the list of discontinued features in the next version of SQL Server
      - Lesson: plan the changes required to remove or replace usage of deprecated features with the latest recommended replacements

    Once a feature appears on the list, it moves from the bottom to the top: it is first marked as "Deprecated" and then "Discontinued", and a "Breaking change", as we have seen, comes later in the product life cycle. What this means is that if you want to know what would not work with SQL Server 2012 (and you are currently using SQL Server 2008 R2), you need to refer to the lists of breaking changes and discontinued features in SQL Server 2012.

    Use the tools!

    There are a lot of tools and technologies around us, but rarely do I find teams using them religiously and to the best of their potential. Below are the top two tools, from Microsoft, that I use every time I plan a database upgrade.

    The SQL Server Upgrade Advisor. Ever since SQL Server 2005 was announced, Microsoft has provided a small, very lightweight tool called the SQL Server Upgrade Advisor. The Upgrade Advisor analyzes installed components from earlier versions of SQL Server and then generates a report that identifies issues to fix either before or after you upgrade. The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files. Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures. Refer to the links towards the end of the post to know how to get the Upgrade Advisor.

    The SQL Server Profiler. Another great tool, and one most SQL Server developers and administrators use often. SQL Server Profiler can monitor the "Deprecation" event category, which contains:

      - Deprecation announcement: equivalent to features to be deprecated in a future release of SQL Server
      - Deprecation final support: equivalent to features to be deprecated in the next release of SQL Server

    You can learn more using the links towards the end of the post.

    A basic checklist

    There are a lot of finer points to take care of when upgrading your database, but it is worthwhile to identify a few basic steps for making your database compliant with the next version of SQL Server:

      - Monitor the current application workload (on a test bed) via the Profiler in order to identify usage of features marked as deprecated. If none appear, you are all set! (This almost never happens.) Note down all the offending queries and feature usages
      - Run analysis sessions using the SQL Server Upgrade Advisor on your database
      - Based on the inputs from the analysis report and the Profiler trace sessions, incorporate solutions for the breaking changes first, then incorporate solutions for the discontinued features
      - Revisit and document the upgrade strategy for your deployment scenarios
      - Revisit the fall-back, i.e. rollback, strategies in case the upgrade fails; because some programming changes are dependent upon the SQL Server version, this may need to be done in consultation with the development teams
      - Before any other enhancements are incorporated by the development team, send the database changes into QA. The QA strategy should involve a comparison between an environment running the old version of SQL Server and one running the new version; because minimal application changes have gone in (essential changes for SQL Server version compliance only), this is possible
      - As an ongoing activity, keep incorporating the changes recommended by the deprecated features list
      - As a DBA, update your coding standards to ensure that the developers are using ANSI-compliant code; this code will require a change only if the ANSI standard changes

    Remember this: change management is a continuous process. Keep revisiting the product release notes and incorporate the recommended changes to stay prepared for the next release of SQL Server. May the power of SQL Server be with you!

    Links referenced in this post:

      - Breaking changes in SQL Server 2012: Link
      - Discontinued features in SQL Server 2012: Link
      - Get the Upgrade Advisor from the Microsoft Download Center at: Link
      - Upgrade Advisor page on MSDN: Link
      - Profiler: Review T-SQL code to identify objects no longer supported by Microsoft: Link
      - Upgrading to SQL Server 2012 by Vinod Kumar: Link

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Upgrade


  • No root file system is defined error after installation

    - by LearnCode
    I installed Ubuntu through Wubi, and once I rebooted I got a "No root file system is defined" error. Here's the output of the boot info script. Could anyone point out where the error is?

      Boot Info Script 0.60 from 17 May 2011

      ============================= Boot Info Summary: ===============================

      => Windows is installed in the MBR of /dev/sda.
      => Windows is installed in the MBR of /dev/sdb.

      sda1: __________________________________________________________________________
        File system: ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System: Windows 7
        Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe /ntldr
                    /ntdetect.com /wubildr /ubuntu/winboot/wubildr /wubildr.mbr
                    /ubuntu/winboot/wubildr.mbr /ubuntu/disks/root.disk
                    /ubuntu/disks/swap.disk

      sda1/Wubi: _____________________________________________________________________
        File system:
        Boot sector type: Unknown
        Boot sector info: Mounting failed: mount: unknown filesystem type ''

      sda2: __________________________________________________________________________
        File system: vfat
        Boot sector type: Unknown
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System:
        Boot files: /boot.ini /ntldr /NTDETECT.COM

      sdb1: __________________________________________________________________________
        File system: ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System:
        Boot files:

      ============================ Drive/Partition Info: =============================

      Drive: sda _____________________________________________________________________
      Disk /dev/sda: 160.0 GB, 160041885696 bytes
      240 heads, 63 sectors/track, 20673 cylinders, total 312581808 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes

      Partition   Boot  Start Sector  End Sector   # of Sectors  Id  System
      /dev/sda1   *     63            301,250,879  301,250,817   7   NTFS / exFAT / HPFS
      /dev/sda2         301,250,943   312,575,759  11,324,817    c   W95 FAT32 (LBA)

      GUID Partition Table detected, but does not seem to be used.

      Partition     Start Sector  End Sector  # of Sectors  System
      /dev/sda1     323,465,741,313,502,988275,962,973,585-323,465,465,350,529,402 -
      /dev/sda2     242,728,591,638,290,720578,721,383,108,845,578335,992,791,470,554,859 -
      /dev/sda3     1,827,498,311,425,204,2562,091,935,274,843,009,907264,436,963,417,805,652 -
      /dev/sda4     579,711,218,081,401,3572,006,665,459,744,645,1521,426,954,241,663,243,796 -
      /dev/sda11    270,286,346,402,038,1183,786,543,326,404,525,9543,516,256,980,002,487,837 -
      /dev/sda12    4,179,681,002,230,769,6684,179,389,374,010,033,387-291,628,220,736,280 -
      /dev/sda13    232,556,480,979,456,1311,160,152,593,793,119,235927,596,112,813,663,105 -
      /dev/sda14    98,342,784,050,266,9183,691,264,578,843,725,1953,592,921,794,793,458,278 -
      /dev/sda15    2,307,845,219,957,882,4961,850,841,032,955,276,350-457,004,187,002,606,145 -
      /dev/sda16    512,592,046,878,946,497368,458,231,024,779,444-144,133,815,854,167,052 -
      /dev/sda17    2,504,135,232,870,384,3923,665,087,872,719,320,8291,160,952,639,848,936,438 -
      /dev/sda18    3,783,181,605,270,691,304122,034,509,624,708,942-3,661,147,095,645,982,361 -
      /dev/sda19    3,519,661,520,275,829,5122,376,243,094,723,723,587-1,143,418,425,552,105,924 -
      /dev/sda20    3,867,920,076,859,0744,494,691,111,933,625,1044,490,823,191,856,766,031 -
      /dev/sda21    1,500,144,061,909,253,7612,511,182,033,846,676,3401,011,037,971,937,422,580 -
      /dev/sda22    13,035,625,499,900,0062,360,168,613,941,394,9472,347,132,988,441,494,942 -
      /dev/sda23    4,228,978,682,068,599,48813,159,423,631,648,263-4,215,819,258,436,951,224 -
      /dev/sda24    3,695,955,742,872,046,9084,561,928,726,501,845,776865,972,983,629,798,869 -
      /dev/sda25    1,297,460,286,683,948,0461,444,350,486,339,417,957146,890,199,655,469,912 -
      /dev/sda26    1,228,858,248,533,131,831 0-1,228,858,248,533,131,830 -
      /dev/sda121   3,189,184,846,146,487,1461,849,820,258,006,914,852-1,339,364,588,139,572,293 -
      /dev/sda122   1,226,215,547,991,800,578389,781,518,734,546,300-836,434,029,257,254,277 -
      /dev/sda123   3,851,660,168,574,583,4654,046,215,657,583,031,556194,555,489,008,448,092 -
      /dev/sda124   1,197,460,980,174,153,341699,103,965,005,093,246-498,357,015,169,060,094 -

      Drive: sdb _____________________________________________________________________
      Disk /dev/sdb: 750.2 GB, 750153367552 bytes
      255 heads, 63 sectors/track, 91200 cylinders, total 1465143296 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes

      Partition   Boot  Start Sector  End Sector     # of Sectors   Id  System
      /dev/sdb1         2,048         1,465,143,295  1,465,141,248  7   NTFS / exFAT / HPFS

      "blkid" output: ________________________________________________________________

      Device      UUID              TYPE      LABEL
      /dev/loop0                    iso9660   Ubuntu 11.04 amd64
      /dev/loop1                    squashfs
      /dev/sda1   E814B55B14B52E06  ntfs
      /dev/sda2   01CD-023B         vfat      HP_RECOVERY
      /dev/sdb1   7836F22A36F1E8D0  ntfs      Elements

      ================================ Mount points: =================================

      Device      Mount_Point  Type      Options
      /dev/loop0  /cdrom       iso9660   (ro,noatime)
      /dev/loop1  /rofs        squashfs  (ro,noatime)
      /dev/sdb1   /mnt         fuseblk   (rw,nosuid,nodev,allow_other,blksize=4096)

      ================================ sda2/boot.ini: ================================

      --------------------------------------------------------------------------------
      [boot loader]
      timeout=0
      default=C:\CMDCONS\BOOTSECT.DAT
      [operating systems]
      multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
      C:\CMDCONS\BOOTSECT.DAT="Microsoft Windows Recovery Console" /cmdcons
      --------------------------------------------------------------------------------

      ======================== Unknown MBRs/Boot Sectors/etc: ========================

      Unknown GPT Partiton Type  c104043000e9b9040dff24b580010100
      Unknown GPT Partiton Type  46313020746f20737461727420746865
      Unknown GPT Partiton Type  65727920706172746974696f6e207761
      Unknown GPT Partiton Type  727920706172746974696f6e0d0a0000
      Unknown GPT Partiton Type  000f84e5f7668b162404e82804744066
      Unknown GPT Partiton Type  ce01e8dc038bfe66391624047505e8d9
      Unknown GPT Partiton Type  0345086603f0e881030bd2740333d240
      Unknown GPT Partiton Type  bece01e8db0287fec645041266895508
      Unknown GPT Partiton Type  01f60634010175078b363b01e854f5e8
      Unknown GPT Partiton Type  313825740ffec03865107408fec03824
      Unknown GPT Partiton Type  02f60634014074088bfdbece01e85101
      Unknown GPT Partiton Type  263401f9e894f30f858ef4e8e201e8ec
      Unknown GPT Partiton Type  f7e960f35245434f5645525966606633
      Unknown GPT Partiton Type  660faf1e00106603dac3668b0e001066
      Unknown GPT Partiton Type  8bfd386d04740583c710e2f6c36660c6
      Unknown GPT Partiton Type  04ebf132c0b91000f3aac3bf0c04ebf3
      Unknown GPT Partiton Type  02662bc1660fb71e0e02662bc366031e
      Unknown GPT Partiton Type  f4b40ebb0700b901003c08751381ff25
      Unknown GPT Partiton Type  534f465448494e4b90653f62011b0100
      Unknown GPT Partiton Type  0b050900027777772e68702e636f6d00
      Unknown GPT Partiton Type  d441a0f5030003000ecb744a08bb3746
      Unknown GPT Partiton Type  f8579a116b4a7aa931cde97a4b9b5c09
      Unknown GPT Partiton Type  7229990415b77c0a1970e7e824237a3a
      Unknown GPT Partiton Type  afb6e34d6b4bd8c7c0eada19a9786cc3

      Unknown BootLoader on sda1/Wubi

      00000000  30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30  |0000000000000000|
      *
      00000200

      Unknown BootLoader on sda2

      00000000  e9 a7 00 52 45 43 4f 56 45 52 59 00 02 08 20 00  |...RECOVERY... .|
      00000010  02 00 00 00 00 f8 00 00 3f 00 f0 00 7f b9 f4 11  |........?.......|
      00000020  8c cd ac 00 1e 2b 00 00 00 00 00 00 02 00 00 00  |.....+..........|
      00000030  01 00 06 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
      00000040  80 00 29 3b 02 cd 01 20 20 20 20 20 20 20 20 20  |..);...         |
      00000050  20 20 46 41 54 33 32 20 20 20 8b d0 c1 e2 02 80  |  FAT32   ......|
      00000060  e6 01 66 c1 e8 07 66 3b 46 f8 74 2a 66 89 46 f8  |..f...f;F.t*f.F.|
      00000070  66 03 46 f4 66 0f b6 5e 28 80 e3 0f 74 0f 3a 5e  |f.F.f..^(...t.:^|
      00000080  10 0f 83 90 00 66 0f af 5e 24 66 03 c3 bb e0 07  |.....f..^$f.....|
      00000090  b9 01 00 e8 cf 00 8b da 66 8b 87 00 7e 66 25 ff  |........f...~f%.|
      000000a0  ff ff 0f 66 3d f8 ff ff 0f c3 33 c9 8e d9 8e c1  |...f=.....3.....|
      000000b0  8e d1 66 bc f4 7b 00 00 bd 00 7c 66 0f b6 46 10  |..f..{....|f..F.|
      000000c0  66 f7 66 24 66 0f b7 56 0e 66 03 56 1c 66 89 56  |f.f$f..V.f.V.f.V|
      000000d0  f4 66 03 c2 66 89 46 fc 66 c7 46 f8 ff ff ff ff  |.f..f.F.f.F.....|
      000000e0  66 8b 46 2c 66 50 e8 af 00 bb 70 00 b9 01 00 e8  |f.F,fP....p.....|
      000000f0  73 00 bf 00 07 b1 0b be a9 7d f3 a6 74 2a 03 f9  |s........}..t*..|
      00000100  83 c7 15 81 ff 00 09 72 ec 66 40 4a 75 db 66 58  |.......r.f@Ju.fX|
      00000110  e8 47 ff 72 cf be b4 7d ac 84 c0 74 09 b4 0e bb  |.G.r...}...t....|
      00000120  07 00 cd 10 eb f2 cd 19 66 58 ff 75 09 ff 75 0f  |........fX.u..u.|
      00000130  66 58 bb 00 20 66 83 f8 02 72 da 66 3d f8 ff ff  |fX.. f...r.f=...|
      00000140  0f 73 d2 66 50 e8 50 00 0f b6 4e 0d e8 16 00 c1  |.s.fP.P...N.....|
      00000150  e1 05 03 d9 66 58 53 e8 00 ff 5b 72 d8 8a 56 40  |....fXS...[r..V@|
      00000160  ea 00 00 00 20 66 60 66 6a 00 66 50 53 6a 00 66  |.... f`fj.fPSj.f|
      00000170  68 10 00 01 00 8b f4 b8 00 42 8a 56 40 cd 13 be  |h........B.V@...|
      00000180  c7 7d 72 94 67 83 44 24 06 20 66 67 ff 44 24 08  |.}r.g.D$. 
fg.D$.| 00000190 e2 e3 83 c4 10 66 61 c3 66 48 66 48 66 0f b6 56 |.....fa.fHfHf..V| 000001a0 0d 66 f7 e2 66 03 46 fc c3 4e 54 4c 44 52 20 20 |.f..f.F..NTLDR | 000001b0 20 20 20 20 0d 0a 4e 6f 20 53 79 73 74 65 6d 20 | ..No System | 000001c0 44 69 73 6b 20 6f 72 0d 0a 44 69 73 6b 20 49 2f |Disk or..Disk I/| 000001d0 4f 20 65 72 72 6f 72 0d 0a 50 72 65 73 73 20 61 |O error..Press a| 000001e0 20 6b 65 79 20 74 6f 20 72 65 73 74 61 72 74 0d | key to restart.| 000001f0 0a 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 =============================== StdErr Messages: =============================== umount: /isodevice: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
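    Two details in the output above stand out: the script could not loop-mount the Wubi root.disk (note the "Mounting failed: mount: unknown filesystem type ''" under sda1/Wubi), and there is a stray GPT signature on a disk that actually uses an MBR, which is what produces the nonsense partition list. A minimal sketch for checking the root.disk from an Ubuntu live session (device names taken from the output above; the mount points are arbitrary):

      # mount the Windows partition read-only, then try to loop-mount the Wubi root disk
      sudo mkdir -p /mnt/win /mnt/wubi
      sudo mount -o ro /dev/sda1 /mnt/win
      sudo mount -o loop,ro /mnt/win/ubuntu/disks/root.disk /mnt/wubi
      ls /mnt/wubi   # an intact root.disk shows a normal Linux tree (bin, etc, home, ...)

    If the loop mount fails here as well, the root.disk is damaged and reinstalling Wubi is usually quicker than repairing it. The leftover GPT data can be cleaned up with fixparts /dev/sda (FixParts ships in the gdisk package), a commonly suggested remedy for this "GUID Partition Table detected, but does not seem to be used" situation - back up before touching any partition data.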

    Read the article

  • I have UFW block messages from local network machines, how can I analyse if they are malicious?

    - by Trygve
    I'm getting a lot of messages in my UFW log, and I'm trying to figure out if these are malicious or just normal. UDP broadcasts are coming from a Windows laptop x.x.x.191, and some from our Synology disks x.x.x.{6,8,10,11}. I have not figured out which machine x.x.x.114 is yet. I would appreciate some advice on how to read the log and get the most I can out of these entries. Oct 18 17:03:34 <myusername> kernel: [ 4034.755221] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:00:11:32:06:e8:19:08:00 SRC=x.x.x.6 DST=x.x.x.169 LEN=364 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=47978 LEN=344 Oct 18 17:03:34 <myusername> kernel: [ 4034.755292] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:00:11:32:1b:e8:8f:08:00 SRC=x.x.x.10 DST=x.x.x.169 LEN=366 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=47978 LEN=346 Oct 18 17:03:34 <myusername> kernel: [ 4034.756444] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:c0:c1:c0:52:18:ea:08:00 SRC=x.x.x.8 DST=x.x.x.169 LEN=294 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=47978 LEN=274 Oct 18 17:03:34 <myusername> kernel: [ 4034.756613] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:c0:c1:c0:52:18:ea:08:00 SRC=x.x.x.8 DST=x.x.x.169 LEN=306 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=47978 LEN=286 Oct 18 17:03:34 <myusername> kernel: [ 4034.760416] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:00:11:32:1e:6a:33:08:00 SRC=x.x.x.11 DST=x.x.x.169 LEN=366 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=47978 LEN=346 Oct 18 17:03:36 <myusername> kernel: [ 4036.215134] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=x.x.x.169 LEN=424 TOS=0x00 PREC=0x00 TTL=128 ID=11155 PROTO=UDP SPT=1900 DPT=47978 LEN=404 Oct 18 17:04:23 <myusername> kernel: [ 4083.853710] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=239.255.255.250 LEN=652 TOS=0x00 PREC=0x00 TTL=1 ID=11247 PROTO=UDP SPT=58930 DPT=3702 LEN=632 Oct 18 17:04:24 <myusername> kernel: [ 4084.063153] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=239.255.255.250 LEN=652 TOS=0x00 PREC=0x00 TTL=1 ID=11299 PROTO=UDP SPT=58930 DPT=3702 LEN=632 Oct 18 17:07:02 <myusername> kernel: [ 4242.153947] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=239.255.255.250 LEN=680 TOS=0x00 PREC=0x00 TTL=1 ID=18702 PROTO=UDP SPT=58930 DPT=3702 LEN=660 Oct 18 17:07:02 <myusername> kernel: [ 4242.275788] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=239.255.255.250 LEN=680 TOS=0x00 PREC=0x00 TTL=1 ID=18703 PROTO=UDP SPT=58930 DPT=3702 LEN=660 Oct 18 17:12:29 <myusername> kernel: [ 4569.073815] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=239.255.255.250 LEN=680 TOS=0x00 PREC=0x00 TTL=1 ID=30102 PROTO=UDP SPT=58930 DPT=3702 LEN=660 Oct 18 17:12:29 <myusername> kernel: [ 4569.242740] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=239.255.255.250 LEN=680 TOS=0x00 PREC=0x00 TTL=1 ID=30103 PROTO=UDP SPT=58930 DPT=3702 LEN=660 Oct 18 17:17:02 <myusername> kernel: [ 4841.440729] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=239.255.255.250 LEN=680 TOS=0x00 PREC=0x00 TTL=1 ID=9195 PROTO=UDP SPT=58930 DPT=3702 LEN=660 Oct 18 17:17:02 <myusername> kernel: [ 4841.553211] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 
DST=239.255.255.250 LEN=680 TOS=0x00 PREC=0x00 TTL=1 ID=9196 PROTO=UDP SPT=58930 DPT=3702 LEN=660 Oct 18 17:19:10 <myusername> kernel: [ 4969.294709] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:25:36:26:02:86:08:00 SRC=x.x.x.114 DST=239.255.255.250 LEN=923 TOS=0x00 PREC=0x00 TTL=1 ID=27103 PROTO=UDP SPT=3702 DPT=3702 LEN=903 Oct 18 17:19:10 <myusername> kernel: [ 4969.314553] [UFW BLOCK] IN=eth0 OUT= MAC=01:00:5e:7f:ff:fa:00:25:36:26:02:86:08:00 SRC=x.x.x.114 DST=239.255.255.250 LEN=923 TOS=0x00 PREC=0x00 TTL=1 ID=27104 PROTO=UDP SPT=3702 DPT=3702 LEN=903 Oct 18 17:33:34 <myusername> kernel: [ 5832.431610] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:00:11:32:1b:e8:8f:08:00 SRC=x.x.x.10 DST=x.x.x.169 LEN=366 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=55281 LEN=346 Oct 18 17:33:34 <myusername> kernel: [ 5832.431659] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:00:11:32:06:e8:19:08:00 SRC=x.x.x.6 DST=x.x.x.169 LEN=364 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=55281 LEN=344 Oct 18 17:33:34 <myusername> kernel: [ 5832.431865] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:00:11:32:1e:6a:33:08:00 SRC=x.x.x.11 DST=x.x.x.169 LEN=366 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=55281 LEN=346 Oct 18 17:33:34 <myusername> kernel: [ 5832.433024] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:c0:c1:c0:52:18:ea:08:00 SRC=x.x.x.8 DST=x.x.x.169 LEN=294 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=55281 LEN=274 Oct 18 17:33:34 <myusername> kernel: [ 5832.433224] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:c0:c1:c0:52:18:ea:08:00 SRC=x.x.x.8 DST=x.x.x.169 LEN=306 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=1900 DPT=55281 LEN=286 Oct 18 17:33:37 <myusername> kernel: [ 5834.914484] [UFW BLOCK] IN=eth0 OUT= MAC=f0:de:f1:71:c3:2e:00:22:19:de:80:a4:08:00 SRC=x.x.x.191 DST=x.x.x.169 LEN=424 TOS=0x00 PREC=0x00 TTL=128 ID=10075 PROTO=UDP SPT=1900 DPT=55281 LEN=404
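    For what it's worth, the port numbers identify the traffic: UDP 1900 is SSDP (UPnP device announcements), UDP 3702 is WS-Discovery, and 239.255.255.250 is the multicast group both protocols use, so this pattern looks like ordinary LAN service discovery rather than anything malicious. A quick sketch for summarising what UFW drops, by source address and by destination port (the log may live in /var/log/ufw.log, /var/log/kern.log or syslog depending on setup):

      # count blocked packets per source address
      grep 'UFW BLOCK' /var/log/ufw.log | grep -oE 'SRC=[^ ]+' | sort | uniq -c | sort -rn
      # count blocked packets per destination port
      grep 'UFW BLOCK' /var/log/ufw.log | grep -oE 'DPT=[^ ]+' | sort | uniq -c | sort -rn

    If the traffic turns out to be wanted (e.g. DLNA discovery from the Synology boxes), a narrow allow rule is more useful than just silencing the log, for example: sudo ufw allow from 192.168.1.0/24 to any port 1900 proto udp (the subnet here is an assumption; substitute your own).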

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past, but they never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendee survey on the preferred option. All the pre-work was nicely executed. At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like that children could come too, and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great! LUGM - user group meeting on the 15.06.2013 in L'Escalier Quick overview of Linux & the LUGM With a little bit of delay the LUGM meeting officially started with a quick overview of and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity across the island some years ago, but unfortunately it had been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I've been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune. OpenELEC Mediacenter Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR), as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with two full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With an EPG, scheduled recordings and general multimedia-centre features it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future. 
LUGM - our next generation of Linux users (15.06.2013) Project Evil Genius (PEG) Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius: it's because of the time of day when he was scripting down his ideas for building, packaging and providing software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experiences he has had thanks to other Linux users from all over the world. Ish promised to put his script on GitHub during the next couple of days... Looking forward to that. Check out Ish's personal blog over at hacklog.in - highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu). Exploring the beach of L'Escalier after the meeting 'After-party' at the beach of L'Escalier Puh, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - it's absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic. Resumé & outlook It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well chosen, with enough space for each and everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and preparation of lunch. I'm fairly sure this was an exceptional meeting of the LUGM, and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • Kernel oops on Linux running in VirtualBox breaks some IO-related functionality on the server

    - by Kristoffer E
    We are having problems with CentOS release 6.3 running in VirtualBox on Windows 7 machines. The symptoms are the following: everything works as normal for several hours, even days; then something happens which breaks the system. What we can still do after this happens: Access the web server Use existing SSH sessions to run top and free What does not work: Starting new SSH sessions (hangs after username and password are entered) Running ls in existing SSH sessions (hangs) SSI includes from our web servers that fetch data from remote machines and probably more What we see on the server when this happens is the following: Load average goes from basically nothing to around 3 CPU usage is still low (5%) Disk activity is low (running iostat) Plenty of memory available Plenty of disk space available In /var/log/messages we get the following: Jun 14 01:10:48 devvm kernel: e1000 0000:00:03.0: eth0: Detected Tx Unit Hang Jun 14 01:10:48 devvm kernel: Tx Queue <0> Jun 14 01:10:48 devvm kernel: TDH <2e> Jun 14 01:10:48 devvm kernel: TDT <30> Jun 14 01:10:48 devvm kernel: next_to_use <30> Jun 14 01:10:48 devvm kernel: next_to_clean <2e> Jun 14 01:10:48 devvm kernel: buffer_info[next_to_clean] Jun 14 01:10:48 devvm kernel: time_stamp <1038284db> Jun 14 01:10:48 devvm kernel: next_to_watch <2f> Jun 14 01:10:48 devvm kernel: jiffies <103828b42> Jun 14 01:10:48 devvm kernel: next_to_watch.status <0> Jun 14 01:10:50 devvm kernel: e1000 0000:00:03.0: eth0: Detected Tx Unit Hang Jun 14 01:10:50 devvm kernel: Tx Queue <0> Jun 14 01:10:50 devvm kernel: TDH <2e> Jun 14 01:10:50 devvm kernel: TDT <30> Jun 14 01:10:50 devvm kernel: next_to_use <30> Jun 14 01:10:50 devvm kernel: next_to_clean <2e> Jun 14 01:10:50 devvm kernel: buffer_info[next_to_clean] Jun 14 01:10:50 devvm kernel: time_stamp <1038284db> Jun 14 01:10:50 devvm kernel: next_to_watch <2f> Jun 14 01:10:50 devvm kernel: jiffies <103829312> Jun 14 01:10:50 devvm kernel: next_to_watch.status <0> Jun 14 01:10:52 devvm kernel: ------------[ cut here ]------------ Jun 14 01:10:52 devvm kernel: WARNING: at net/sched/sch_generic.c:261 dev_watchdog+0x26d/0x280() (Not tainted) Jun 14 01:10:52 devvm kernel: Hardware name: VirtualBox Jun 14 01:10:52 devvm kernel: NETDEV WATCHDOG: eth0 (e1000): transmit queue 0 timed out Jun 14 01:10:52 devvm kernel: Modules linked in: vboxsf(U) ipv6 ppdev parport_pc parport microcode sg vboxguest(U) i2c_piix4 i2c_core e1000 snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc pcnet32 mii ext4 mbcache jbd2 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan] Jun 14 01:10:52 devvm kernel: Pid: 0, comm: swapper Not tainted 2.6.32-279.el6.x86_64 #1 Jun 14 01:10:52 devvm kernel: Call Trace: Jun 14 01:10:52 devvm kernel: <IRQ> [<ffffffff8106b747>] ? warn_slowpath_common+0x87/0xc0 Jun 14 01:10:52 devvm kernel: [<ffffffff8106b836>] ? warn_slowpath_fmt+0x46/0x50 Jun 14 01:10:52 devvm kernel: [<ffffffff814595fd>] ? dev_watchdog+0x26d/0x280 Jun 14 01:10:52 devvm kernel: [<ffffffff81099138>] ? sched_clock_cpu+0xb8/0x110 Jun 14 01:10:52 devvm kernel: [<ffffffff81459390>] ? dev_watchdog+0x0/0x280 Jun 14 01:10:52 devvm kernel: [<ffffffff8107e897>] ? run_timer_softirq+0x197/0x340 Jun 14 01:10:52 devvm kernel: [<ffffffff810a21c0>] ? tick_sched_timer+0x0/0xc0 Jun 14 01:10:52 devvm kernel: [<ffffffff8102b40d>] ? lapic_next_event+0x1d/0x30 Jun 14 01:10:52 devvm kernel: [<ffffffff81073ec1>] ? 
__do_softirq+0xc1/0x1e0 Jun 14 01:10:52 devvm kernel: [<ffffffff81096c50>] ? hrtimer_interrupt+0x140/0x250 Jun 14 01:10:52 devvm kernel: [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30 Jun 14 01:10:52 devvm kernel: [<ffffffff8100de85>] ? do_softirq+0x65/0xa0 Jun 14 01:10:52 devvm kernel: [<ffffffff81073ca5>] ? irq_exit+0x85/0x90 Jun 14 01:10:52 devvm kernel: [<ffffffff81505be0>] ? smp_apic_timer_interrupt+0x70/0x9b Jun 14 01:10:52 devvm kernel: [<ffffffff8100bc13>] ? apic_timer_interrupt+0x13/0x20 Jun 14 01:10:52 devvm kernel: <EOI> [<ffffffff810387cb>] ? native_safe_halt+0xb/0x10 Jun 14 01:10:52 devvm kernel: [<ffffffff810149cd>] ? default_idle+0x4d/0xb0 Jun 14 01:10:52 devvm kernel: [<ffffffff81009e06>] ? cpu_idle+0xb6/0x110 Jun 14 01:10:52 devvm kernel: [<ffffffff814e433a>] ? rest_init+0x7a/0x80 Jun 14 01:10:52 devvm kernel: [<ffffffff81c21f7b>] ? start_kernel+0x424/0x430 Jun 14 01:10:52 devvm kernel: [<ffffffff81c2133a>] ? x86_64_start_reservations+0x125/0x129 Jun 14 01:10:52 devvm kernel: [<ffffffff81c21438>] ? x86_64_start_kernel+0xfa/0x109 Jun 14 01:10:52 devvm kernel: ---[ end trace 2c7bb984812cf120 ]--- Jun 14 01:10:52 devvm kernel: e1000 0000:00:03.0: eth0: Reset adapter Jun 14 01:10:53 devvm abrtd: Directory 'oops-2013-06-14-01:10:53-1537-0' creation detected Jun 14 01:10:53 devvm abrt-dump-oops: Reported 1 kernel oopses to Abrt Jun 14 01:10:53 devvm abrtd: Can't open file '/var/spool/abrt/oops-2013-06-14-01:10:53-1537-0/uid': No such file or directory Jun 14 01:10:55 devvm kernel: Bridge firewalling registered After this we see for a while, every two minutes: Jun 14 01:14:22 devvm kernel: INFO: task events/0:19 blocked for more than 120 seconds. Jun 14 01:14:22 devvm kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jun 14 01:14:22 devvm kernel: events/0 D 0000000000000000 0 19 2 0x00000000 Jun 14 01:14:22 devvm kernel: ffff880116c4fb90 0000000000000046 00000000ffffffff 0000000000000008 Jun 14 01:14:22 devvm kernel: 0000000000016680 0000000000016680 ffff880028210400 0000000000016680 Jun 14 01:14:22 devvm kernel: ffff880116c4daf8 ffff880116c4ffd8 000000000000fb88 ffff880116c4daf8 Jun 14 01:14:22 devvm kernel: Call Trace: Jun 14 01:14:22 devvm kernel: [<ffffffff8105b483>] ? perf_event_task_sched_out+0x33/0x80 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe6a5>] schedule_timeout+0x215/0x2e0 Jun 14 01:14:22 devvm kernel: [<ffffffff8100975d>] ? __switch_to+0x13d/0x320 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe323>] wait_for_common+0x123/0x180 Jun 14 01:14:22 devvm kernel: [<ffffffff81060250>] ? default_wake_function+0x0/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe43d>] wait_for_completion+0x1d/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8108d093>] __cancel_work_timer+0x1b3/0x1e0 Jun 14 01:14:22 devvm kernel: [<ffffffff8108cbe0>] ? wq_barrier_func+0x0/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8108d0f0>] cancel_work_sync+0x10/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffffa01c5ca5>] e1000_down_and_stop+0x25/0x50 [e1000] Jun 14 01:14:22 devvm kernel: [<ffffffffa01cb695>] e1000_down+0x155/0x200 [e1000] Jun 14 01:14:22 devvm kernel: [<ffffffffa01cbcb0>] ? e1000_reset_task+0x0/0xe0 [e1000] Jun 14 01:14:22 devvm kernel: [<ffffffffa01cbd1e>] e1000_reset_task+0x6e/0xe0 [e1000] Jun 14 01:14:22 devvm kernel: [<ffffffff8108c760>] worker_thread+0x170/0x2a0 Jun 14 01:14:22 devvm kernel: [<ffffffff810920d0>] ? autoremove_wake_function+0x0/0x40 Jun 14 01:14:22 devvm kernel: [<ffffffff8108c5f0>] ? 
worker_thread+0x0/0x2a0 Jun 14 01:14:22 devvm kernel: [<ffffffff81091d66>] kthread+0x96/0xa0 Jun 14 01:14:22 devvm kernel: [<ffffffff8100c14a>] child_rip+0xa/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff81091cd0>] ? kthread+0x0/0xa0 Jun 14 01:14:22 devvm kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20 Jun 14 01:14:22 devvm kernel: INFO: task parted:8069 blocked for more than 120 seconds. Jun 14 01:14:22 devvm kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jun 14 01:14:22 devvm kernel: parted D 0000000000000003 0 8069 7994 0x00000080 Jun 14 01:14:22 devvm kernel: ffff8800908b3bb8 0000000000000082 0000000000000000 ffff88010ab50080 Jun 14 01:14:22 devvm kernel: ffff880116c7d500 0000000000000001 0000000000000000 0000000000000000 Jun 14 01:14:22 devvm kernel: ffff88010ab50638 ffff8800908b3fd8 000000000000fb88 ffff88010ab50638 Jun 14 01:14:22 devvm kernel: Call Trace: Jun 14 01:14:22 devvm kernel: [<ffffffff814fe6a5>] schedule_timeout+0x215/0x2e0 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe323>] wait_for_common+0x123/0x180 Jun 14 01:14:22 devvm kernel: [<ffffffff81060250>] ? default_wake_function+0x0/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8112b6d0>] ? lru_add_drain_per_cpu+0x0/0x10 Jun 14 01:14:22 devvm kernel: [<ffffffff814fe43d>] wait_for_completion+0x1d/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8108d177>] flush_work+0x77/0xc0 Jun 14 01:14:22 devvm kernel: [<ffffffff8108cbe0>] ? wq_barrier_func+0x0/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff8108d2f3>] schedule_on_each_cpu+0x133/0x180 Jun 14 01:14:22 devvm kernel: [<ffffffff811ad440>] ? invalidate_bh_lru+0x0/0x50 Jun 14 01:14:22 devvm kernel: [<ffffffff8112ae35>] lru_add_drain_all+0x15/0x20 Jun 14 01:14:22 devvm kernel: [<ffffffff811adf6a>] invalidate_bdev+0x2a/0x50 Jun 14 01:14:22 devvm kernel: [<ffffffff8125e9a4>] blkdev_ioctl+0x3b4/0x6e0 Jun 14 01:14:22 devvm kernel: [<ffffffff811b381c>] block_ioctl+0x3c/0x40 Jun 14 01:14:22 devvm kernel: [<ffffffff8118dec2>] vfs_ioctl+0x22/0xa0 Jun 14 01:14:22 devvm kernel: [<ffffffff8118e064>] do_vfs_ioctl+0x84/0x580 Jun 14 01:14:22 devvm kernel: [<ffffffff8118e5e1>] sys_ioctl+0x81/0xa0 Jun 14 01:14:22 devvm kernel: [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b In /var/spool/abrt/oops-2013-06-14-01:10:53-1537-0 we can see the following information: In backtrace: WARNING: at net/sched/sch_generic.c:261 dev_watchdog+0x26d/0x280() (Not tainted) Hardware name: VirtualBox NETDEV WATCHDOG: eth0 (e1000): transmit queue 0 timed out Modules linked in: vboxsf(U) ipv6 ppdev parport_pc parport microcode sg vboxguest(U) i2c_piix4 i2c_core e1000 snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc pcnet32 mii ext4 mbcache jbd2 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan] Pid: 0, comm: swapper Not tainted 2.6.32-279.el6.x86_64 #1 Call Trace: <IRQ> [<ffffffff8106b747>] ? warn_slowpath_common+0x87/0xc0 [<ffffffff8106b836>] ? warn_slowpath_fmt+0x46/0x50 [<ffffffff814595fd>] ? dev_watchdog+0x26d/0x280 [<ffffffff81099138>] ? sched_clock_cpu+0xb8/0x110 [<ffffffff81459390>] ? dev_watchdog+0x0/0x280 [<ffffffff8107e897>] ? run_timer_softirq+0x197/0x340 [<ffffffff810a21c0>] ? tick_sched_timer+0x0/0xc0 [<ffffffff8102b40d>] ? lapic_next_event+0x1d/0x30 [<ffffffff81073ec1>] ? __do_softirq+0xc1/0x1e0 [<ffffffff81096c50>] ? hrtimer_interrupt+0x140/0x250 [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30 [<ffffffff8100de85>] ? do_softirq+0x65/0xa0 [<ffffffff81073ca5>] ? 
irq_exit+0x85/0x90 [<ffffffff81505be0>] ? smp_apic_timer_interrupt+0x70/0x9b [<ffffffff8100bc13>] ? apic_timer_interrupt+0x13/0x20 <EOI> [<ffffffff810387cb>] ? native_safe_halt+0xb/0x10 [<ffffffff810149cd>] ? default_idle+0x4d/0xb0 [<ffffffff81009e06>] ? cpu_idle+0xb6/0x110 [<ffffffff814e433a>] ? rest_init+0x7a/0x80 [<ffffffff81c21f7b>] ? start_kernel+0x424/0x430 [<ffffffff81c2133a>] ? x86_64_start_reservations+0x125/0x129 [<ffffffff81c21438>] ? x86_64_start_kernel+0xfa/0x109 In cmdline: ro root=/dev/mapper/vg_01-lv_root rd_NO_LUKS LANG=en_US.UTF-8 KEYBOARDTYPE=pc KEYTABLE=sv-latin1 rd_NO_MD SYSFONT=latarcyrheb-sun16 rd_LVM_LV=vg_01/lv_root crashkernel=129M@0M rhgb quiet rd_LVM_LV=vg_01/lv_swap rd_NO_DM rhgb quie Additional information: # uname -a Linux devvm 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/redhat-release CentOS release 6.3 (Final) VirtualBox version 4.2.6. Any insight into how we can proceed with troubleshooting this is appreciated. If you need more information, just let me know.
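    The 'Detected Tx Unit Hang' lines followed by the NETDEV WATCHDOG reset point at the emulated Intel e1000 NIC, and the hung parted/worker tasks afterwards look like fallout from the adapter reset. Beyond upgrading VirtualBox and the Guest Additions, two commonly suggested workarounds (sketched here, not guaranteed fixes) are to disable segmentation offloads in the guest or to switch the virtual NIC type, assuming eth0 is the e1000 interface and the VM name is devvm as the log suggests:

      # inside the guest: disable offloads on the e1000 interface (lost on reboot;
      # persist via ETHTOOL_OPTS in /etc/sysconfig/network-scripts/ifcfg-eth0 if it helps)
      sudo ethtool -K eth0 tso off gso off
      # on the Windows host, with the VM powered off: switch the first NIC to virtio-net
      VBoxManage modifyvm "devvm" --nictype1 virtio

    Neither is guaranteed to resolve it, but both take the e1000 Tx path out of the picture, which narrows down where the problem lives.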

    Read the article

  • Silverlight Recruiting Application Part 5 - Jobs Module / View

    Now we start getting into a more code-heavy portion of this series; thankfully, this means the groundwork is mostly set, and after adding the modules we will have a complete application that can be provided with full source. The Jobs module will have two concerns: adding and maintaining jobs that can then be broadcast out to the website. How they are displayed on the site will be handled by our admin system (which will just poll from this common database), so we aren't too concerned with that, but rather with getting the information into the system and allowing the backend administration/HR users to keep things up to date. Since there is a fair bit of information that we want to display, we're going to move editing to a separate view so we can get all that information in an easy-to-use spot. With all the files created for this module, the project looks something like this: And now... on to the code. XAML for the Job Posting View All we really need for the Job Posting View is a RadGridView and a few buttons. This will let us both show off records and perform operations on the records without much hassle. That XAML is going to look something like this: 01.<Grid x:Name="LayoutRoot" 02.Background="White"> 03.<Grid.RowDefinitions> 04.<RowDefinition Height="30" /> 05.<RowDefinition /> 06.</Grid.RowDefinitions> 07.<StackPanel Orientation="Horizontal"> 08.<Button x:Name="xAddRecordButton" 09.Content="Add Job" 10.Width="120" 11.cal:Click.Command="{Binding AddRecord}" 12.telerik:StyleManager.Theme="Windows7" /> 13.<Button x:Name="xEditRecordButton" 14.Content="Edit Job" 15.Width="120" 16.cal:Click.Command="{Binding EditRecord}" 17.telerik:StyleManager.Theme="Windows7" /> 18.</StackPanel> 19.<telerikGrid:RadGridView x:Name="xJobsGrid" 20.Grid.Row="1" 21.IsReadOnly="True" 22.AutoGenerateColumns="False" 23.ColumnWidth="*" 24.RowDetailsVisibilityMode="VisibleWhenSelected" 25.ItemsSource="{Binding MyJobs}" 26.SelectedItem="{Binding SelectedJob, Mode=TwoWay}" 27.command:SelectedItemChangedEventClass.Command="{Binding SelectedItemChanged}"> 28.<telerikGrid:RadGridView.Columns> 29.<telerikGrid:GridViewDataColumn Header="Job Title" 30.DataMemberBinding="{Binding JobTitle}" 31.UniqueName="JobTitle" /> 32.<telerikGrid:GridViewDataColumn Header="Location" 33.DataMemberBinding="{Binding Location}" 34.UniqueName="Location" /> 35.<telerikGrid:GridViewDataColumn Header="Resume Required" 36.DataMemberBinding="{Binding NeedsResume}" 37.UniqueName="NeedsResume" /> 38.<telerikGrid:GridViewDataColumn Header="CV Required" 39.DataMemberBinding="{Binding NeedsCV}" 40.UniqueName="NeedsCV" /> 41.<telerikGrid:GridViewDataColumn Header="Overview Required" 42.DataMemberBinding="{Binding NeedsOverview}" 43.UniqueName="NeedsOverview" /> 44.<telerikGrid:GridViewDataColumn Header="Active" 45.DataMemberBinding="{Binding IsActive}" 46.UniqueName="IsActive" /> 47.</telerikGrid:RadGridView.Columns> 48.</telerikGrid:RadGridView> 49.</Grid> I'll explain what's happening here by line numbers: Lines 11 and 16: Using the same type of click commands as we saw in the Menu module, we tie the button clicks to delegate commands in the viewmodel. Line 25: The source for the jobs will be a collection in the viewmodel. Line 26: We also bind the selected item to a public property from the viewmodel for use in code. Line 27: We've turned the event into a command so we can handle it via code in the viewmodel. 
So those first three probably make sense to you as far as Silverlight/WPF binding magic is concerned, but for line 27... This actually comes from something I read on Damien Schenkelman's blog back in the day for creating an attached behavior from any event. So, any time you see me using command:Whatever.Command, the backing for it is actually something like this: SelectedItemChangedEventBehavior.cs: 01.public class SelectedItemChangedEventBehavior : CommandBehaviorBase<Telerik.Windows.Controls.DataControl> 02.{ 03.public SelectedItemChangedEventBehavior(DataControl element) 04.: base(element) 05.{ 06.element.SelectionChanged += new EventHandler<SelectionChangeEventArgs>(element_SelectionChanged); 07.} 08.void element_SelectionChanged(object sender, SelectionChangeEventArgs e) 09.{ 10.// We'll only ever allow single selection, so will only need item index 0 11.base.CommandParameter = e.AddedItems[0]; 12.base.ExecuteCommand(); 13.} 14.} SelectedItemChangedEventClass.cs: 01.public class SelectedItemChangedEventClass 02.{ 03.#region The Command Stuff 04.public static ICommand GetCommand(DependencyObject obj) 05.{ 06.return (ICommand)obj.GetValue(CommandProperty); 07.} 08.public static void SetCommand(DependencyObject obj, ICommand value) 09.{ 10.obj.SetValue(CommandProperty, value); 11.} 12.public static readonly DependencyProperty CommandProperty = 13.DependencyProperty.RegisterAttached("Command", typeof(ICommand), 14.typeof(SelectedItemChangedEventClass), new PropertyMetadata(OnSetCommandCallback)); 15.public static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e) 16.{ 17.DataControl element = dependencyObject as DataControl; 18.if (element != null) 19.{ 20.SelectedItemChangedEventBehavior behavior = GetOrCreateBehavior(element); 21.behavior.Command = e.NewValue as ICommand; 22.} 23.} 24.#endregion 25.public static SelectedItemChangedEventBehavior GetOrCreateBehavior(DataControl element) 26.{ 27.SelectedItemChangedEventBehavior behavior = element.GetValue(SelectedItemChangedEventBehaviorProperty) as SelectedItemChangedEventBehavior; 28.if (behavior == null) 29.{ 30.behavior = new SelectedItemChangedEventBehavior(element); 31.element.SetValue(SelectedItemChangedEventBehaviorProperty, behavior); 32.} 33.return behavior; 34.} 35.public static SelectedItemChangedEventBehavior GetSelectedItemChangedEventBehavior(DependencyObject obj) 36.{ 37.return (SelectedItemChangedEventBehavior)obj.GetValue(SelectedItemChangedEventBehaviorProperty); 38.} 39.public static void SetSelectedItemChangedEventBehavior(DependencyObject obj, SelectedItemChangedEventBehavior value) 40.{ 41.obj.SetValue(SelectedItemChangedEventBehaviorProperty, value); 42.} 43.public static readonly DependencyProperty SelectedItemChangedEventBehaviorProperty = 44.DependencyProperty.RegisterAttached("SelectedItemChangedEventBehavior", 45.typeof(SelectedItemChangedEventBehavior), typeof(SelectedItemChangedEventClass), null); 46.} These end up looking very similar from command to command, but in a nutshell you create a command based on any event, determine what the parameter for it will be, then execute. It attaches via XAML and ties to a DelegateCommand in the viewmodel, so you get the full event experience (since some controls get a bit event-rich for added functionality). Simple enough, right? 
Viewmodel for the Job Posting View The Viewmodel is going to need to handle all events going back and forth, maintaining interactions with the data we are using, and both publishing and subscribing to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line: 001.public class JobPostingViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005.public DelegateCommand<object> AddRecord { get; set; } 006.public DelegateCommand<object> EditRecord { get; set; } 007.public DelegateCommand<object> SelectedItemChanged { get; set; } 008.public RecruitingContext context; 009.private QueryableCollectionView _myJobs; 010.public QueryableCollectionView MyJobs 011.{ 012.get { return _myJobs; } 013.} 014.private QueryableCollectionView _selectionJobActionHistory; 015.public QueryableCollectionView SelectedJobActionHistory 016.{ 017.get { return _selectionJobActionHistory; } 018.} 019.private JobPosting _selectedJob; 020.public JobPosting SelectedJob 021.{ 022.get { return _selectedJob; } 023.set 024.{ 025.if (value != _selectedJob) 026.{ 027._selectedJob = value; 028.NotifyChanged("SelectedJob"); 029.} 030.} 031.} 032.public SubscriptionToken editToken = new SubscriptionToken(); 033.public SubscriptionToken addToken = new SubscriptionToken(); 034.public JobPostingViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 035.{ 036.// set Unity items 037.this.eventAggregator = eventAgg; 038.this.regionManager = regionmanager; 039.// load our context 040.context = new RecruitingContext(); 041.this._myJobs = new QueryableCollectionView(context.JobPostings); 042.context.Load(context.GetJobPostingsQuery()); 043.// set command events 044.this.AddRecord = new DelegateCommand<object>(this.AddNewRecord); 045.this.EditRecord = new DelegateCommand<object>(this.EditExistingRecord); 046.this.SelectedItemChanged = new DelegateCommand<object>(this.SelectedRecordChanged); 047.SetSubscriptions(); 048.} 049.#region DelegateCommands from View 050.public void AddNewRecord(object obj) 051.{ 052.this.eventAggregator.GetEvent<AddJobEvent>().Publish(true); 053.} 054.public void EditExistingRecord(object obj) 055.{ 056.if (_selectedJob == null) 057.{ 058.this.eventAggregator.GetEvent<NotifyUserEvent>().Publish("No job selected."); 059.} 060.else 061.{ 062.this._myJobs.EditItem(this._selectedJob); 063.this.eventAggregator.GetEvent<EditJobEvent>().Publish(this._selectedJob); 064.} 065.} 066.public void SelectedRecordChanged(object obj) 067.{ 068.if (obj.GetType() == typeof(ActionHistory)) 069.{ 070.// event bubbles up so we don't catch items from the ActionHistory grid 071.} 072.else 073.{ 074.JobPosting job = obj as JobPosting; 075.GrabHistory(job.PostingID); 076.} 077.} 078.#endregion 079.#region Subscription Declaration and Events 080.public void SetSubscriptions() 081.{ 082.EditJobCompleteEvent editComplete = eventAggregator.GetEvent<EditJobCompleteEvent>(); 083.if (editToken != null) 084.editComplete.Unsubscribe(editToken); 085.editToken = editComplete.Subscribe(this.EditCompleteEventHandler); 086.AddJobCompleteEvent addComplete = eventAggregator.GetEvent<AddJobCompleteEvent>(); 087.if (addToken != null) 088.addComplete.Unsubscribe(addToken); 089.addToken = addComplete.Subscribe(this.AddCompleteEventHandler); 090.} 091.public void EditCompleteEventHandler(bool complete) 092.{ 093.if (complete) 094.{ 095.JobPosting thisJob = 
_myJobs.CurrentEditItem as JobPosting; 096.this._myJobs.CommitEdit(); 097.this.context.SubmitChanges((s) => 098.{ 099.ActionHistory myAction = new ActionHistory(); 100.myAction.PostingID = thisJob.PostingID; 101.myAction.Description = String.Format("Job '{0}' has been edited by {1}", thisJob.JobTitle, "default user"); 102.myAction.TimeStamp = DateTime.Now; 103.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 104.} 105., null); 106.} 107.else 108.{ 109.this._myJobs.CancelEdit(); 110.} 111.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 112.} 113.public void AddCompleteEventHandler(JobPosting job) 114.{ 115.if (job == null) 116.{ 117.// do nothing, new job add cancelled 118.} 119.else 120.{ 121.this.context.JobPostings.Add(job); 122.this.context.SubmitChanges((s) => 123.{ 124.ActionHistory myAction = new ActionHistory(); 125.myAction.PostingID = job.PostingID; 126.myAction.Description = String.Format("Job '{0}' has been added by {1}", job.JobTitle, "default user"); 127.myAction.TimeStamp = DateTime.Now; 128.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 129.} 130., null); 131.} 132.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 133.} 134.#endregion 135.public void GrabHistory(int postID) 136.{ 137.context.ActionHistories.Clear(); 138._selectionJobActionHistory = new QueryableCollectionView(context.ActionHistories); 139.context.Load(context.GetHistoryForJobQuery(postID)); 140.} Taking it from the top, we're injecting an Event Aggregator and Region Manager for use down the road, and we also have the public DelegateCommands (just like in the Menu module). We also grab a reference to our context, which we'll obviously need for data, then set up a few fields with public properties tied to them. We're also setting subscription tokens, which we have not yet seen but which I will get into below. The AddNewRecord (50) and EditExistingRecord (54) methods should speak for themselves functionality-wise; the one thing of note is that we're sending events off to the Event Aggregator, which some module somewhere will take care of. Since these aren't entirely relying on one another, the Jobs View doesn't care if anyone is listening, but it will publish AddJobEvent (52), NotifyUserEvent (58) and EditJobEvent (63) regardless. Don't mind the GrabHistory() method so much; it is just grabbing history items (visibly being created in the SubmitChanges callbacks) and adding them to the database. Every action will trigger a history event, so we'll know who modified what and when, just in case. ;) So where are we at? Well, if we click to Add a job, we publish an event; if we edit a job, we publish an event with the selected record (attained through the magic of binding). Where is this all going, though? To the Viewmodel, of course! 
XAML for the AddEditJobView This is pretty straightforward except for one thing, noted below: 001.<Grid x:Name="LayoutRoot" 002.Background="White"> 003.<Grid x:Name="xEditGrid" 004.Margin="10" 005.validationHelper:ValidationScope.Errors="{Binding Errors}"> 006.<Grid.Background> 007.<LinearGradientBrush EndPoint="0.5,1" 008.StartPoint="0.5,0"> 009.<GradientStop Color="#FFC7C7C7" 010.Offset="0" /> 011.<GradientStop Color="#FFF6F3F3" 012.Offset="1" /> 013.</LinearGradientBrush> 014.</Grid.Background> 015.<Grid.RowDefinitions> 016.<RowDefinition Height="40" /> 017.<RowDefinition Height="40" /> 018.<RowDefinition Height="40" /> 019.<RowDefinition Height="100" /> 020.<RowDefinition Height="100" /> 021.<RowDefinition Height="100" /> 022.<RowDefinition Height="40" /> 023.<RowDefinition Height="40" /> 024.<RowDefinition Height="40" /> 025.</Grid.RowDefinitions> 026.<Grid.ColumnDefinitions> 027.<ColumnDefinition Width="150" /> 028.<ColumnDefinition Width="150" /> 029.<ColumnDefinition Width="300" /> 030.<ColumnDefinition Width="100" /> 031.</Grid.ColumnDefinitions> 032.<!-- Title --> 033.<TextBlock Margin="8" 034.Text="{Binding AddEditString}" 035.TextWrapping="Wrap" 036.Grid.Column="1" 037.Grid.ColumnSpan="2" 038.FontSize="16" /> 039.<!-- Data entry area--> 040. 041.<TextBlock Margin="8,0,0,0" 042.Style="{StaticResource LabelTxb}" 043.Grid.Row="1" 044.Text="Job Title" 045.VerticalAlignment="Center" /> 046.<TextBox x:Name="xJobTitleTB" 047.Margin="0,8" 048.Grid.Column="1" 049.Grid.Row="1" 050.Text="{Binding activeJob.JobTitle, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 051.Grid.ColumnSpan="2" /> 052.<TextBlock Margin="8,0,0,0" 053.Grid.Row="2" 054.Text="Location" 055.d:LayoutOverrides="Height" 056.VerticalAlignment="Center" /> 057.<TextBox x:Name="xLocationTB" 058.Margin="0,8" 059.Grid.Column="1" 060.Grid.Row="2" 061.Text="{Binding activeJob.Location, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 062.Grid.ColumnSpan="2" /> 063. 064.<TextBlock Margin="8,11,8,0" 065.Grid.Row="3" 066.Text="Description" 067.TextWrapping="Wrap" 068.VerticalAlignment="Top" /> 069. 070.<TextBox x:Name="xDescriptionTB" 071.Height="84" 072.TextWrapping="Wrap" 073.ScrollViewer.VerticalScrollBarVisibility="Auto" 074.Grid.Column="1" 075.Grid.Row="3" 076.Text="{Binding activeJob.Description, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 077.Grid.ColumnSpan="2" /> 078.<TextBlock Margin="8,11,8,0" 079.Grid.Row="4" 080.Text="Requirements" 081.TextWrapping="Wrap" 082.VerticalAlignment="Top" /> 083. 084.<TextBox x:Name="xRequirementsTB" 085.Height="84" 086.TextWrapping="Wrap" 087.ScrollViewer.VerticalScrollBarVisibility="Auto" 088.Grid.Column="1" 089.Grid.Row="4" 090.Text="{Binding activeJob.Requirements, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 091.Grid.ColumnSpan="2" /> 092.<TextBlock Margin="8,11,8,0" 093.Grid.Row="5" 094.Text="Qualifications" 095.TextWrapping="Wrap" 096.VerticalAlignment="Top" /> 097. 098.<TextBox x:Name="xQualificationsTB" 099.Height="84" 100.TextWrapping="Wrap" 101.ScrollViewer.VerticalScrollBarVisibility="Auto" 102.Grid.Column="1" 103.Grid.Row="5" 104.Text="{Binding activeJob.Qualifications, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 105.Grid.ColumnSpan="2" /> 106.<!-- Requirements Checkboxes--> 107. 
108.<CheckBox x:Name="xResumeRequiredCB" Margin="8,8,8,15" 109.Content="Resume Required" 110.Grid.Row="6" 111.Grid.ColumnSpan="2" 112.IsChecked="{Binding activeJob.NeedsResume, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 113. 114.<CheckBox x:Name="xCoverletterRequiredCB" Margin="8,8,8,15" 115.Content="Cover Letter Required" 116.Grid.Column="2" 117.Grid.Row="6" 118.IsChecked="{Binding activeJob.NeedsCV, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 119. 120.<CheckBox x:Name="xOverviewRequiredCB" Margin="8,8,8,15" 121.Content="Overview Required" 122.Grid.Row="7" 123.Grid.ColumnSpan="2" 124.IsChecked="{Binding activeJob.NeedsOverview, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 125. 126.<CheckBox x:Name="xJobActiveCB" Margin="8,8,8,15" 127.Content="Job is Active" 128.Grid.Column="2" 129.Grid.Row="7" 130.IsChecked="{Binding activeJob.IsActive, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 131. 132.<!-- Buttons --> 133. 134.<Button x:Name="xAddEditButton" Margin="8,8,0,10" 135.Content="{Binding AddEditButtonString}" 136.cal:Click.Command="{Binding AddEditCommand}" 137.Grid.Column="2" 138.Grid.Row="8" 139.HorizontalAlignment="Left" 140.Width="125" 141.telerik:StyleManager.Theme="Windows7" /> 142. 143.<Button x:Name="xCancelButton" HorizontalAlignment="Right" 144.Content="Cancel" 145.cal:Click.Command="{Binding CancelCommand}" 146.Margin="0,8,8,10" 147.Width="125" 148.Grid.Column="2" 149.Grid.Row="8" 150.telerik:StyleManager.Theme="Windows7" /> 151.</Grid> 152.</Grid> The 'validationHelper:ValidationScope' line may seem odd. This is a handy little trick for catching current and would-be validation errors when working in this whole setup. This all comes from an approach found on the Joy Of Code blog, although it looks like the story for this will be changing slightly with new advances in SL4/WCF RIA Services, so this section can definitely get an overhaul a little down the road. The code is the fun part of all this, so let us see what's happening under the hood. Viewmodel for the AddEditJobView We are going to see some of the same things happening here, so I'll skip over the repeat info and get right to the good stuff: 001.public class AddEditJobViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005. 006.public RecruitingContext context; 007. 008.private JobPosting _activeJob; 009.public JobPosting activeJob 010.{ 011.get { return _activeJob; } 012.set 013.{ 014.if (_activeJob != value) 015.{ 016._activeJob = value; 017.NotifyChanged("activeJob"); 018.} 019.} 020.} 021. 022.public bool isNewJob; 023. 024.private string _addEditString; 025.public string AddEditString 026.{ 027.get { return _addEditString; } 028.set 029.{ 030.if (_addEditString != value) 031.{ 032._addEditString = value; 033.NotifyChanged("AddEditString"); 034.} 035.} 036.} 037. 038.private string _addEditButtonString; 039.public string AddEditButtonString 040.{ 041.get { return _addEditButtonString; } 042.set 043.{ 044.if (_addEditButtonString != value) 045.{ 046._addEditButtonString = value; 047.NotifyChanged("AddEditButtonString"); 048.} 049.} 050.} 051. 052.public SubscriptionToken addJobToken = new SubscriptionToken(); 053.public SubscriptionToken editJobToken = new SubscriptionToken(); 054. 055.public DelegateCommand<object> AddEditCommand { get; set; } 056.public DelegateCommand<object> CancelCommand { get; set; } 057. 
058.private ObservableCollection<ValidationError> _errors = new ObservableCollection<ValidationError>(); 059.public ObservableCollection<ValidationError> Errors 060.{ 061.get { return _errors; } 062.} 063. 064.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 065.public ObservableCollection<ValidationResult> ValResults 066.{ 067.get { return this._valResults; } 068.} 069. 070.public AddEditJobViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 071.{ 072.// set Unity items 073.this.eventAggregator = eventAgg; 074.this.regionManager = regionmanager; 075. 076.context = new RecruitingContext(); 077. 078.AddEditCommand = new DelegateCommand<object>(this.AddEditJobCommand); 079.CancelCommand = new DelegateCommand<object>(this.CancelAddEditCommand); 080. 081.SetSubscriptions(); 082.} 083. 084.#region Subscription Declaration and Events 085. 086.public void SetSubscriptions() 087.{ 088.AddJobEvent addJob = this.eventAggregator.GetEvent<AddJobEvent>(); 089. 090.if (addJobToken != null) 091.addJob.Unsubscribe(addJobToken); 092. 093.addJobToken = addJob.Subscribe(this.AddJobEventHandler); 094. 095.EditJobEvent editJob = this.eventAggregator.GetEvent<EditJobEvent>(); 096. 097.if (editJobToken != null) 098.editJob.Unsubscribe(editJobToken); 099. 100.editJobToken = editJob.Subscribe(this.EditJobEventHandler); 101.} 102. 103.public void AddJobEventHandler(bool isNew) 104.{ 105.this.activeJob = null; 106.this.activeJob = new JobPosting(); 107.this.activeJob.IsActive = true; // We assume that we want a new job to go up immediately 108.this.isNewJob = true; 109.this.AddEditString = "Add New Job Posting"; 110.this.AddEditButtonString = "Add Job"; 111. 112.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 113.} 114. 115.public void EditJobEventHandler(JobPosting editJob) 116.{ 117.this.activeJob = null; 118.this.activeJob = editJob; 119.this.isNewJob = false; 120.this.AddEditString = "Edit Job Posting"; 121.this.AddEditButtonString = "Edit Job"; 122. 123.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 124.} 125. 126.#endregion 127. 128.#region DelegateCommands from View 129. 130.public void AddEditJobCommand(object obj) 131.{ 132.if (this.Errors.Count > 0) 133.{ 134.List<string> errorMessages = new List<string>(); 135. 136.foreach (var valR in this.Errors) 137.{ 138.errorMessages.Add(valR.Exception.Message); 139.} 140. 141.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 142. 143.} 144.else if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 145.{ 146.List<string> errorMessages = new List<string>(); 147. 148.foreach (var valR in this._valResults) 149.{ 150.errorMessages.Add(valR.ErrorMessage); 151.} 152. 153.this._valResults.Clear(); 154. 155.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 156.} 157.else 158.{ 159.if (this.isNewJob) 160.{ 161.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(this.activeJob); 162.} 163.else 164.{ 165.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(true); 166.} 167.} 168.} 169. 170.public void CancelAddEditCommand(object obj) 171.{ 172.if (this.isNewJob) 173.{ 174.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(null); 175.} 176.else 177.{ 178.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(false); 179.} 180.} 181. 
182.#endregion 183.} 184.} We start seeing something new on line 103: the AddJobEventHandler will create a new job and set that to the activeJob item on the ViewModel. When this is all set, the view calls that familiar MakeMeActive method to activate itself. I made a bit of a management call on making views self-activate like this, but I figured it works for one reason: as I create this application, views I have in mind may not exist yet, so after a view receives its 'ping' from being subscribed to an event, it prepares whatever it needs to do and then goes active. This way, if I don't have 'edit' hooked up, I can click as the day is long on the main view and won't get lost in an empty region. Total personal preference here. :) Everything else should again be pretty straightforward, although I do a bit of validation checking in the AddEditJobCommand, which can either fire off an event back to the main view/viewmodel if everything is a success or send a list of errors to our notification module, which pops open a RadWindow with the alerts if any exist. As a bonus side note, here's what my WCF RIA Services metadata looks like for handling all of the validation: private JobPostingMetadata() { } [StringLength(2500, ErrorMessage = "Description should be more than one and less than 2500 characters.", MinimumLength = 1)] [Required(ErrorMessage = "Description is required.")] public string Description; [Required(ErrorMessage="Active Status is Required")] public bool IsActive; [StringLength(100, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.", MinimumLength = 3)] [Required(ErrorMessage = "Job Title is required.")] public string JobTitle; [Required] public string Location; public bool NeedsCV; public bool NeedsOverview; public bool NeedsResume; public int PostingID; [Required(ErrorMessage="Qualifications are required.")] [StringLength(2500, ErrorMessage="Qualifications should be more than one and less than 2500 characters.", MinimumLength=1)] public string Qualifications; [StringLength(2500, ErrorMessage = "Requirements should be more than one and less than 2500 characters.", MinimumLength = 1)] [Required(ErrorMessage="Requirements are required.")] public string Requirements;   The RecruitCB Alternative See all that Xaml I pasted above? Those are now two pieces sitting in the JobsView.xaml file. The only real difference is that the xEditGrid now sits in the same place as xJobsGrid, with visibility swapping between the two for a quick switch. I also took out all the cal: and command: command references, replaced the Button commands with Click events, and replaced the Grid selection command with a SelectedItemChanged event. Also, at the bottom of the xEditGrid after the last button, I added a ValidationSummary (with Visibility=Collapsed) to catch any errors that pop up. 
Simple as can be, and leads to this being the single code-behind file:

001.public partial class JobsView : UserControl
002.{
003.public RecruitingContext context;
004.public JobPosting activeJob;
005.public bool isNew;
006.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>();
007.public ObservableCollection<ValidationResult> ValResults
008.{
009.get { return this._valResults; }
010.}
011.public JobsView()
012.{
013.InitializeComponent();
014.this.Loaded += new RoutedEventHandler(JobsView_Loaded);
015.}
016.void JobsView_Loaded(object sender, RoutedEventArgs e)
017.{
018.context = new RecruitingContext();
019.xJobsGrid.ItemsSource = context.JobPostings;
020.context.Load(context.GetJobPostingsQuery());
021.}
022.private void xAddRecordButton_Click(object sender, RoutedEventArgs e)
023.{
024.activeJob = new JobPosting();
025.isNew = true;
026.xAddEditTitle.Text = "Add a Job Posting";
027.xAddEditButton.Content = "Add";
028.xEditGrid.DataContext = activeJob;
029.HideJobsGrid();
030.}
031.private void xEditRecordButton_Click(object sender, RoutedEventArgs e)
032.{
033.activeJob = xJobsGrid.SelectedItem as JobPosting;
034.isNew = false;
035.xAddEditTitle.Text = "Edit a Job Posting";
036.xAddEditButton.Content = "Edit";
037.xEditGrid.DataContext = activeJob;
038.HideJobsGrid();
039.}
040.private void xAddEditButton_Click(object sender, RoutedEventArgs e)
041.{
042.if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true))
043.{
044.List<string> errorMessages = new List<string>();
045.foreach (var valR in this._valResults)
046.{
047.errorMessages.Add(valR.ErrorMessage);
048.}
049.this._valResults.Clear();
050.ShowErrors(errorMessages);
051.}
052.else if (xSummary.Errors.Count > 0)
053.{
054.List<string> errorMessages = new List<string>();
055.foreach (var err in xSummary.Errors)
056.{
057.errorMessages.Add(err.Message);
058.}
059.ShowErrors(errorMessages);
060.}
061.else
062.{
063.if (this.isNew)
064.{
065.context.JobPostings.Add(activeJob);
066.context.SubmitChanges((s) =>
067.{
068.ActionHistory thisAction = new ActionHistory();
069.thisAction.PostingID = activeJob.PostingID;
070.thisAction.Description = String.Format("Job '{0}' has been added by {1}", activeJob.JobTitle, "default user");
071.thisAction.TimeStamp = DateTime.Now;
072.context.ActionHistories.Add(thisAction);
073.context.SubmitChanges();
074.}, null);
075.}
076.else
077.{
078.context.SubmitChanges((s) =>
079.{
080.ActionHistory thisAction = new ActionHistory();
081.thisAction.PostingID = activeJob.PostingID;
082.thisAction.Description = String.Format("Job '{0}' has been edited by {1}", activeJob.JobTitle, "default user");
083.thisAction.TimeStamp = DateTime.Now;
084.context.ActionHistories.Add(thisAction);
085.context.SubmitChanges();
086.}, null);
087.}
088.ShowJobsGrid();
089.}
090.}
091.private void xCancelButton_Click(object sender, RoutedEventArgs e)
092.{
093.ShowJobsGrid();
094.}
095.private void ShowJobsGrid()
096.{
097.xAddEditRecordButtonPanel.Visibility = Visibility.Visible;
098.xEditGrid.Visibility = Visibility.Collapsed;
099.xJobsGrid.Visibility = Visibility.Visible;
100.}
101.private void HideJobsGrid()
102.{
103.xAddEditRecordButtonPanel.Visibility = Visibility.Collapsed;
104.xJobsGrid.Visibility = Visibility.Collapsed;
105.xEditGrid.Visibility = Visibility.Visible;
106.}
107.private void ShowErrors(List<string> errorList)
108.{
109.string nm = "Errors received: \n";
110.foreach (string anerror in errorList)
111.nm += anerror + "\n";
112.RadWindow.Alert(nm);
113.}
114.}

The first 39 lines should be pretty familiar, not doing anything too unorthodox to get this up and running. Once we hit the xAddEditButton_Click on line 40, we're still doing pretty much the same things, except instead of checking the ValidationHelper errors, we run a check on the current activeJob object as well as check the ValidationSummary errors list. Once that is set, we again use the callback of context.SubmitChanges (lines 66 and 78) to create an ActionHistory, which we will use to track these items down the line.

That's all? Essentially... yes. If you look back through this post, most of the code and adventures we have taken were just to get things working in the MVVM/Prism setup. Since I have the whole 'module' self-contained in a single JobsView+code-behind setup, I don't have to worry about things like sending events off into space for someone to pick up, communicating through an Infrastructure project, or even re-inventing events to be used with attached behaviors. Everything just kinda works, and again with much less code. Here's a picture of the MVVM and code-behind versions of the Jobs and AddEdit views, but since the functionality is the same in both apps you still cannot tell them apart (strike two):

Looking ahead, the Applicants module is effectively the same thing as the Jobs module, so most of the code is being cut-and-pasted back and forth with minor tweaks here and there, so that one is being taken care of by me behind the scenes. Next time, we get into a new world of fun: the interview scheduling module, which will pull from available jobs and applicants for each interview being scheduled, tying everything together with RadScheduler to the rescue.
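Since both versions lean on the same DataAnnotations pipeline, here's a quick self-contained sketch of how Validator.TryValidateObject surfaces those metadata rules as ValidationResults. To be clear, this isn't the Recruiting app's code: JobPostingSample is a hypothetical stand-in for the generated JobPosting entity, with the attributes applied directly rather than through a RIA metadata class.

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class JobPostingSample // hypothetical stand-in for the generated JobPosting entity
{
    [Required(ErrorMessage = "Job Title is required.")]
    [StringLength(100, MinimumLength = 3, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.")]
    public string JobTitle { get; set; }

    [Required(ErrorMessage = "Description is required.")]
    public string Description { get; set; }
}

class ValidationDemo
{
    static void Main()
    {
        var job = new JobPostingSample { JobTitle = "QA" }; // too short, and Description is missing
        var results = new List<ValidationResult>();

        // The final 'true' validates all properties, not just the [Required] ones.
        bool isValid = Validator.TryValidateObject(job, new ValidationContext(job, null, null), results, true);

        if (!isValid)
        {
            foreach (var r in results)
                Console.WriteLine(r.ErrorMessage); // the same messages the notification module would display
        }
    }
}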

    Read the article

  • What I've Gained from being a Presenter at Tech Events

    - by MOSSLover
Originally posted on: http://geekswithblogs.net/MOSSLover/archive/2014/06/12/what-ive-gained-from-being-a-presenter-at-tech-events.aspx
I know I've been failing at blogging lately.  As I've said before, life happens and gets in the way of best-laid plans.  I thought about creating some type of watch with some app that basically dictates to Wordpress using Dragon NaturallySpeaking, but alas, no time, and the processing capabilities just don't exist yet.So, to get to my point: Alison Gianotto created this blog post: http://www.snipe.net/2014/06/why-you-should-stop-stalling-and-start-presenting/.  I like the message she states in this post and I want to share something personal.It was around 2007 that I was seriously looking into leaving the technology field altogether and going back to school.  I was calling places like Washington University in Saint Louis and University of Missouri Kansas City asking them how I could go about getting into some type of postgraduate medical school program.  My entire high school career was based on Medical Explorers and somehow becoming a doctor.  I did not want to take my hobby and continue using it as a career mechanism.  I was unhappy, but I didn't realize why I was unhappy at the time.  It was really a lot of bad things involving the lack of self-confidence and self-esteem.  Overall I was not in a good place, and it took me until 2011 to realize that I still was not in a good place in life.So in about April 2007 I started this blog that you guys have been reading, or occasionally read.  I kind of started passively stalking people by reading their blogs in the SharePoint and .Net communities.  I also started listening to .Net Rocks and watching videos on their corresponding training for SharePoint, WCF, WF, and a bunch of other technologies.  I wanted more knowledge, so someone suggested I go to a user group.  I've told the story before about how I met Jeff Julian and John Alexander, so I will spare you those details.  You know how I got to my first user group presentation and how I started getting involved with events, so I'll also spare those details.The point I want to touch on is that I went out and started speaking, and that path helped me gain the self-confidence and self-respect I needed.  When I first moved to NYC I couldn't even ride a subway by myself or walk alone without getting lost.  Now I feel like I can go out and solve any problem someone throws at me.  So you see, what Alison states in her blog post is true, and I am a great example of that point.  I stood in front of 800 or so people at SharePoint Conference in 2011 and spoke about a topic.  In 2007 I would have hidden or stuttered the entire time.  I have now spoken at over 70 events and user groups.  I am a top 25 influencer in my technology.  I was a Microsoft Most Valuable Professional in SharePoint for years in a row.  People are constantly trying to get time with me so they can pick my brain for solutions and other life problems.  I went from maybe five or six friends to hundreds of friends in various cities across the globe.  I'm not saying it's instant fame, and it does take a ton of work, but I have never once looked back at my life and regretted the choice I made in 2007.  It has led me to a lot of other things in my life, including more positivity and happiness.  If anyone ever wants to contact me and pick my brain on a presentation, go ahead.  
If you want help finding the meetup that best suits your presentation, I can try to help with that too (I might be a little more helpful in the Microsoft or iOS arenas, though).  The best thing I can say is: don't be scared, just do it.  If you need an audience, I can try to pencil you into my schedule.  I can't promise anything, but if you are in NYC I can at least try.

    Read the article

  • Azure Mobile Services: lessons learned

    - by svdoever
When I first started using Azure Mobile Services I thought of it as a nice way to:

- authenticate my users - login using Twitter, Google, Facebook, Windows Live
- create tables, and use the client code to create the columns in the table, because that is not possible in the Azure Mobile Services UI
- run some Javascript code on the table CRUD actions (Insert, Update, Delete, Read)
- schedule a Javascript to run every 15 or more minutes

I had no idea of the magic that was happening inside… where is the data stored? Is it a kind of big table? Are relationships between tables possible? Those Javascripts on the table CRUD actions, is that interpreted, what is that exactly? After working for some time with Azure Mobile Services I became a lot wiser:

- Those tables are just normal tables in an Azure SQL Server 2012.
- Creating the table columns through client code sucks, at least from my Javascript code, because the columns are deduced from the sent JSON data, and a datetime field is sent as a string in JSON, so a string type column is created instead of a datetime column.
- You can connect with SQL Management Studio to the Azure SQL Server, and although you can't manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices.
- When you create a table through SQL script, add the table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer.
- You can also go to the SQL Database through the Azure Mobile Services UI, and from there get into a web-based SQL management studio where you can create columns and manage your data.
- The table CRUD scripts and the scheduler scripts are full-blown node.js scripts, introducing a lot of power with great performance.
- The web-based script editor is really powerful; I do most of my editing currently in the editor, which has syntax highlighting and code completion. While editing the code, JsHint is used for script validation.
- The documentation on Azure Mobile Services is… suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the missing holes, like which node modules are already loaded, and which modules are available on Azure Mobile Services.

Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named query to implement my own set of "services". The latest updates to Azure Mobile Services, described in the following posts, added some great new features like creating web APIs, using shared code from your scripts, command line tools for managing Azure Mobile Services (upload and download scripts, for example), support for node modules, and git support:

- http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
- http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx
- http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx

In the meantime I rewrote all my "service-like" table scripts to API scripts, which works like a breeze. A bad thing with the current state of Azure Mobile Services is that the git support is not working if you are a co-administrator of your Azure subscription, and not an administrator (as in my case). 
Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so no go yet from the browser client for APIs, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013, Josh Twist showed that there is a work-around for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com; that's the place where they are listening (I hope!).
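As a side note, once those API scripts are in place, calling one from a .NET client is pleasantly small. Here is a rough sketch against the June 2013 client SDK (Microsoft.WindowsAzure.MobileServices); the service URL, application key, API name 'myapi' and the query parameter are all placeholders of mine, not anything from this post:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Newtonsoft.Json.Linq;

class CustomApiSketch
{
    static async Task Run()
    {
        // Placeholder endpoint and application key.
        var client = new MobileServiceClient("https://myservice.azure-mobile.net/", "myApplicationKey");

        // Invoke the custom API registered as 'myapi' with a GET and one query-string
        // parameter, getting back whatever JSON the node.js script responds with.
        JToken result = await client.InvokeApiAsync("myapi", HttpMethod.Get,
            new Dictionary<string, string> { { "take", "10" } });

        Console.WriteLine(result);
    }
}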

    Read the article

  • fresh installation of PGF/TikZ crashes, why?

    - by Vincenzo
I have a clean CentOS 5.5 machine with tetex installed. Next, I installed PGF/TikZ:

wget http://media.texample.net/pgf/builds/pgfCVS2010-06-02_TDS.zip
unzip pgfCVS2010-06-02_TDS.zip
\cp -r tex /usr/share/texmf
texhash

I'm trying to compile a simple document and this is what I'm getting:

$ latex test.tex
This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4)
entering extended mode
(./test.tex
LaTeX2e <2003/12/01>
.. skipped ..
(/usr/share/texmf/tex/latex/pgf/frontendlayer/tikz.sty
(/usr/share/texmf/tex/latex/pgf/pgf.sty
(/usr/share/texmf/tex/latex/graphics/graphicx.sty
(/usr/share/texmf/tex/latex/graphics/graphics.sty
(/usr/share/texmf/tex/latex/graphics/trig.sty)
(/usr/share/texmf/tex/latex/graphics/graphics.cfg))))
(/usr/share/texmf/tex/latex/pgf/utilities/pgffor.sty
(/usr/share/texmf/tex/latex/pgf/utilities/pgfrcs.sty
(/usr/share/texmf/tex/generic/pgf/utilities/pgfutil-common.tex)
(/usr/share/texmf/tex/generic/pgf/utilities/pgfutil-latex.def)
(/usr/share/texmf/tex/generic/pgf/utilities/pgfrcs.code.tex))
(/usr/share/texmf/tex/latex/pgf/utilities/pgfkeys.sty
(/usr/share/texmf/tex/generic/pgf/utilities/pgfkeys.code.tex
(/usr/share/texmf/tex/generic/pgf/utilities/pgfkeysfiltered.code.tex)))
(/usr/share/texmf/tex/generic/pgf/utilities/pgffor.code.tex))
(/usr/share/texmf/tex/generic/pgf/frontendlayer/tikz/tikz.code.tex
(/usr/share/texmf/tex/generic/pgf/libraries/pgflibraryplothandlers.code.tex
! Undefined control sequence.
\pgfsetplottension ...ttension {\pgf@sys@tonumber \pgf@x }
l.104 \pgfsetplottension{0.5}
?

I failed to find any clues on the net about this problem. On other servers I don't have such a problem. Could anyone help please? Thanks! ps. Btw, I tried another build of PGF/TikZ, an older one, no luck :(
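The test.tex isn't shown above, but any document that loads TikZ reproduces the failure, since tikz.code.tex pulls in pgflibraryplothandlers.code.tex up front (as the log shows). A minimal reconstruction, assuming nothing beyond \usepackage{tikz}:

\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \draw (0,0) -- (1,1);
\end{tikzpicture}
\end{document}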

    Read the article

  • Extracting comment url from wordpress function

    - by Pavel
Hi everyone. I'm developing some ajax script and using WordPress, and my question is: is there a way to extract a comment URL from a WordPress function somehow? The function I'm using in the loop looks like this:

<?php comments_popup_link('Discuss &#187;', '1 Comment &#187;', '% Comments &#187;'); ?>
<?php edit_post_link('Edit', '| ', ''); ?>

And the HTML output of that looks like this:

<a href="http://www.somepage.com/staging/2010/06/15/sadfasfregw/#respond" title="Comment on sadfasfregw"><span class="dsq-postid-17546">View Comments</span></a>|
<a class="post-edit-link" href="http://www.factmag.com/staging/wp-admin/post.php?action=edit&post=17546" title="Edit post">Edit</a>

However, I'm only interested in the href (http://www.somepage.com/staging/2010/06/15/sadfasfregw/#respond). Is there a way to get it from there and then use it for later reference? Does some kind of function or anything like that exist in WordPress? Many thanks in advance for any responses!

    Read the article

  • OpenVPN Problems, connecting with Windows OpenVPN client to Linux OpenVPN

    - by Filip Ekberg
After following this guide and connecting to the VPN Server, I get the following error:

Sat Mar 06 19:43:08 2010 us=127000 NOTE: failed to obtain options consistency info from peer -- this could occur if the remote peer is running a version of OpenVPN before 1.5-beta8 or if there is a network connectivity problem, and will not necessarily prevent OpenVPN from running (0 bytes received from peer, 0 bytes authenticated data channel traffic) -- you can disable the options consistency check with --disable-occ.

I am using Archlinux and installed openvpn with Pacman. I want to achieve the following: connect to the VPN Server and be able to route certain made-up hosts through it. Is this possible? openvpn --version gives me the following:

OpenVPN 2.1.1 i686-pc-linux-gnu [SSL] [LZO2] [EPOLL] built on Jan 31 2010
Originally developed by James Yonan
Copyright (C) 2002-2009 OpenVPN Technologies, Inc. <[email protected]>

Suggestions?

    Read the article

  • How to tell if mod_deflate is actually working?

    - by Jayraj
I have put the following in my http.conf file:

# mod_deflate configuration
<IfModule mod_deflate.c>
  # Restrict compression to these MIME types
  AddOutputFilterByType DEFLATE text/plain
  AddOutputFilterByType DEFLATE text/html
  AddOutputFilterByType DEFLATE application/xhtml+xml
  AddOutputFilterByType DEFLATE text/xml
  AddOutputFilterByType DEFLATE application/xml
  AddOutputFilterByType DEFLATE application/x-javascript
  AddOutputFilterByType DEFLATE text/javascript
  AddOutputFilterByType DEFLATE text/css

  # Level of compression (Highest 9 - Lowest 1)
  DeflateCompressionLevel 9

  # Netscape 4.x has some problems.
  BrowserMatch ^Mozilla/4 gzip-only-text/html
  # Netscape 4.06-4.08 have some more problems
  BrowserMatch ^Mozilla/4\.0[678] no-gzip
  # MSIE masquerades as Netscape, but it is fine
  BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html

  <IfModule mod_headers.c>
    # Make sure proxies don't deliver the wrong content
    Header append Vary User-Agent env=!dont-vary
  </IfModule>
</IfModule>

My content doesn't return with a Content-Encoding of type gzip, but I find myself getting a lot more 304s, and the ETag is appended with a +gzip suffix. Is mod_deflate actually doing its job? (Sorry about the n00b-ishness)
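For reference, one quick way to answer the title question is to request a page while advertising gzip support and then inspect the Content-Encoding header on the response. A small sketch of that check (the URL is a placeholder; when mod_deflate is doing its job, it prints "gzip"):

using System;
using System.Net;

class DeflateCheck
{
    static void Main()
    {
        // Placeholder URL; point this at a page served by the Apache instance above.
        var request = (HttpWebRequest)WebRequest.Create("http://localhost/index.html");

        // Advertise gzip manually rather than setting AutomaticDecompression,
        // so the Content-Encoding header survives for inspection.
        request.Headers[HttpRequestHeader.AcceptEncoding] = "gzip, deflate";

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Content-Encoding: " + response.ContentEncoding); // expect "gzip"
            Console.WriteLine("Content-Length: " + response.ContentLength);     // compressed size
        }
    }
}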

    Read the article

  • make IIS 7.5 cache static content files over diferent pages

    - by Achilles
On a Windows 2008 R2 box, using DNS and IIS, I've established my development test server; i.e. I have a web application that I can browse at http://test.dev. I've moved all the static content files, like images, js files and css files, into another application which is visible at http://cdn.test.dev. test.dev uses cdn.test.dev URLs like http://cdn.test.dev/js/jquery.js to load js, css and images.

When I first load "~/" of test.dev, all files load with a response code of 200; when I press F5 in Firefox, all files except "~/default.aspx" load with a 304 response code; but pressing Ctrl+F5 loads them again with a 200 code. If I browse another URL like "~/pages/" in test.dev, all of those static files reload with a 200 code... Is this normal, or am I doing something wrong?

Actually, I'm looking for a behavior like this: I want the client to load http://cdn.test.dev/js/jquery.js only once, and I want the client's browser to use this jquery.js file from cache on all other pages of test.dev. Is this possible?

This is the web.config file I have in the root directory of cdn.test.dev:

<configuration>
  <system.webServer>
    <caching>
      <profiles>
        <add extension=".png" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
        <add extension=".gif" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
        <add extension=".jpg" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
        <add extension=".js" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
        <add extension=".css" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
        <add extension=".axd" kernelCachePolicy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
      </profiles>
    </caching>
    <httpProtocol allowKeepAlive="true">
      <customHeaders>
        <add name="Cache-Control" value="public, max-age=31536000" />
      </customHeaders>
    </httpProtocol>
    <validation validateIntegratedModeConfiguration="false" />
    <modules runAllManagedModulesForAllRequests="true">
      <remove name="RadUploadModule" />
      <remove name="RadCompression" />
      <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" preCondition="integratedMode" />
      <add name="RadCompression" type="Telerik.Web.UI.RadCompression" preCondition="integratedMode" />
    </modules>
    <handlers>
      <remove name="ChartImage_axd" />
      <remove name="Telerik_Web_UI_SpellCheckHandler_axd" />
      <remove name="Telerik_Web_UI_DialogHandler_aspx" />
      <remove name="Telerik_RadUploadProgressHandler_ashx" />
      <remove name="Telerik_Web_UI_WebResource_axd" />
      <add name="ChartImage_axd" path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" preCondition="integratedMode" />
      <add name="Telerik_Web_UI_SpellCheckHandler_axd" path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" preCondition="integratedMode" />
      <add name="Telerik_Web_UI_DialogHandler_aspx" path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" preCondition="integratedMode" />
      <add name="Telerik_RadUploadProgressHandler_ashx" path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" preCondition="integratedMode" />
      <add name="Telerik_Web_UI_WebResource_axd" path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" preCondition="integratedMode" />
    </handlers>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="10485760" />
      </requestFiltering>
    </security>
    <staticContent>
      <clientCache cacheControlMode="UseExpires" httpExpires="Wed, 01 Jan 2020 00:00:00 GMT" />
    </staticContent>
  </system.webServer>
  <appSettings />
  <system.web>
    <compilation debug="false" targetFramework="4.0" />
    <pages>
      <controls>
        <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" />
      </controls>
    </pages>
    <httpHandlers>
      <add path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" validate="false" />
      <add path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" validate="false" />
      <add path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" validate="false" />
      <add path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" validate="false" />
      <add path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" validate="false" />
    </httpHandlers>
    <httpModules>
      <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" />
      <add name="RadCompression" type="Telerik.Web.UI.RadCompression" />
    </httpModules>
    <httpRuntime maxRequestLength="10240" />
  </system.web>
</configuration>

And this is the resulting response header for http://cdn.test.dev/css/global.css:

Cache-Control: private,public, max-age=31536000
Content-Type: text/css
Content-Encoding: gzip
Expires: Wed, 01 Jan 2020 00:00:00 GMT
Last-Modified: Mon, 06 Sep 2010 08:53:06 GMT
Accept-Ranges: bytes
Etag: "0454eca04dcb1:0"
Vary: Accept-Encoding
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Mon, 06 Sep 2010 14:57:08 GMT
Content-Length: 4495

    Read the article

  • automysqlbackup not working weirdly on shared host

    - by KPL
I'm on HostGator and I have non-root SSH access. I manually set up automysqlbackup in a /home/user/automysqlbackup folder, created a file called runbackup in the same folder, and chmod +x'ed it. Its content:

/home/user/automysqlbackup/automysqlbackup /home/user/automysqlbackup/myserver.conf

Now, when I run ./runbackup from the shell, no email is sent. I've set up a daily cron job. The crontab line reads:

24 06 * * * /home/user/automysqlbackup/runbackup

When the crontab is run, I do get an email, but the subject is "WARNING: Error Reported - MySQL Backup Log and SQL Files for localhost - 012-03-19_06h24m". The email does have an attachment, but it just has the SQL for creating tables; no data for any of the rows at all. I don't know what I'm doing wrong and this is freaking me out! I tried writing a custom script as well, using this guide, but then mutt doesn't send the email. The OS used by HostGator is CentOS.

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >