Search Results

Search found 33898 results on 1356 pages for '11 10'.


  • RAID controller dropping the wrong drive

    - by bramp
    I've been having an issue with a 3ware 9500S-8 RAID 10, and I have contacted their tech support, but I wanted to hear the Server Fault community's recommendations. Firstly, all my data is backed up and secure, so I don't mind blowing my RAID away if I have to. But let me describe the problem I've been seeing.

    A month ago, disk 6 dropped out of the RAID. It is mirrored with disk 7, so I wasn't that bothered. I went to the data centre and replaced it. When I got back to the office, I noticed that disk 6 was still not in the RAID, and in fact the controller was still showing the name of the old drive. A week later I went back and replaced the drive again, thinking I might have swapped in a bad drive. Still the same problem. I decided to reboot the machine, to see if that would "force" the controller into seeing the new drive. It did, and a rebuild started (from disk 7). Eventually both drives were showing as good.

    A week later, MySQL flagged the database as corrupt and was unable to repair it. I don't know what went wrong, but I suspected this 6-7 pair. At this point I noticed that the RAID had been verifying itself constantly, over and over. Regardless of this I began to repair the database, which took about 19 hours. It's a big database. Near the end of the repair, the RAID controller told me it had dropped disk 7, and that some data was most likely corrupted.

    I contacted LSI tech support, and they very promptly started to help me. I mentioned that drive 7 had been dropped. They suspect that drive 7 was always at fault, and drive 6 had always been good. I want to know how often a RAID controller would drop the wrong drive (in this case dropping drive 6 a month ago, instead of 7). I foolishly didn't run smartctl on the drives before I started swapping them out. I just assumed the RAID controller knew what it was talking about. I think my plan of action is to replace drive 7, rebuild the array from scratch, double-check smartctl on ALL the disks, and then start restoring my data again. I would appreciate anyone's input on what the correct procedure for swapping drives is, and how often failures like this happen. If anyone would like more information then I'd be happy to provide it. Thanks in advance.

    Oh, some more information: I'm running CentOS 5.3, with two RAID arrays, a simple RAID 1 for the OS, and the RAID 10 for the database. Both arrays are on different controllers. The RAID 10 is made of 10 identical ST3640323AS drives, except for the SAMSUNG HD103SJ I swapped in last month.
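    Since the drives sit behind the 3ware controller, querying SMART directly is a reasonable first check before trusting the controller's verdict. A minimal sketch (the controller device node /dev/twa0 and the port count are assumptions for a 9000-series card; adjust to your setup):

        # Ask each drive behind the 3ware controller for its SMART health:
        for n in 0 1 2 3 4 5 6 7; do
            echo "== port $n =="
            smartctl -H -d 3ware,$n /dev/twa0
        done

        # Cross-check against the controller's own view of units and ports
        # (tw_cli is 3ware's CLI; /c0 assumes controller 0):
        tw_cli /c0 show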

    Read the article

  • Can't communicate between lan ports on openwrt router

    - by ScaryAardvark
    I've got a WBMR-HP-G300H Buffalo AirStation router on which I've installed the latest OpenWrt software. All is working well (ADSL, WiFi, etc.) except for one niggle: I can't communicate between LAN ports, i.e. if I have one computer connected on LAN port 1 and I try to ping another computer on LAN port 2, I get "destination unreachable". I can ping both computers from the router itself, and can also ping each computer from a separate laptop connected wirelessly. All computers are in the same subnet range (10.0.0.?/24). I suspect that I may need to configure a VLAN on the switch, but every time I try to do this with various googled configurations I keep locking out all LAN ports and have to revert using a wirelessly connected laptop. Here's my /etc/config/network:

        config interface 'loopback'
            option ifname 'lo'
            option proto 'static'
            option ipaddr '127.0.0.1'
            option netmask '255.0.0.0'

        config interface 'lan'
            option type 'bridge'
            option proto 'static'
            option netmask '255.255.255.0'
            option ipaddr '10.0.0.1'
            option _orig_ifname 'eth0 wlan0'
            option _orig_bridge 'true'
            option ifname 'eth0'

        config adsl-device 'adsl'
            option fwannex 'a'
            option annex 'a2p'

        config interface 'wan'
            option _orig_ifname 'nas0'
            option _orig_bridge 'false'
            option proto 'pppoa'
            option encaps 'vc'
            option atmdev '0'
            option vci '38'
            option vpi '0'
            option username '?????????????'
            option password '??????????????'

    Any help would be warmly received. Here's some more config stuff.

        root@OpenWrt:~# ifconfig -a
        br-lan     Link encap:Ethernet  HWaddr 00:24:A5:BD:66:08
                   inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                   RX packets:226576 errors:0 dropped:346 overruns:0 frame:0
                   TX packets:269292 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:0
                   RX bytes:26771676 (25.5 MiB)  TX bytes:183986450 (175.4 MiB)

        eth0       Link encap:Ethernet  HWaddr 00:24:A5:BD:66:08
                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:1000
                   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        ifb0       Link encap:Ethernet  HWaddr 36:60:EC:DF:13:A1
                   BROADCAST NOARP  MTU:1500  Metric:1
                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:32
                   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        ifb1       Link encap:Ethernet  HWaddr 4A:7B:75:67:54:E0
                   BROADCAST NOARP  MTU:1500  Metric:1
                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:32
                   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        lo         Link encap:Local Loopback
                   inet addr:127.0.0.1  Mask:255.0.0.0
                   UP LOOPBACK RUNNING  MTU:16436  Metric:1
                   RX packets:780 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:780 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:0
                   RX bytes:58369 (57.0 KiB)  TX bytes:58369 (57.0 KiB)

        mon.wlan0  Link encap:UNSPEC  HWaddr 00-24-A5-BD-66-08-00-48-00-00-00-00-00-00-00-00
                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                   RX packets:2424 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:32
                   RX bytes:320188 (312.6 KiB)  TX bytes:0 (0.0 B)

        pppoa-wan  Link encap:Point-to-Point Protocol
                   inet addr:81.136.179.204  P-t-P:81.134.80.1  Mask:255.255.255.255
                   UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                   RX packets:258894 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:212976 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:3
                   RX bytes:177341656 (169.1 MiB)  TX bytes:25192459 (24.0 MiB)

        wlan0      Link encap:Ethernet  HWaddr 00:24:A5:BD:66:08
                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                   RX packets:204063 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:245516 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:32
                   RX bytes:26613140 (25.3 MiB)  TX bytes:162799765 (155.2 MiB)

        root@OpenWrt:~# brctl show
        bridge name    bridge id          STP enabled    interfaces
        br-lan         8000.0024a5bd6608  no             wlan0
                                                         eth0

        root@OpenWrt:~# swconfig dev eth0 show
        Global attributes:
            enable_vlan: 0
        Port 0:
            pvid: 0
            link: port:0 link:up speed:1000baseT full-duplex txflow rxflow
        Port 1:
            pvid: 0
            link: port:1 link:down
        Port 2:
            pvid: 0
            link: port:2 link:down
        Port 3:
            pvid: 0
            link: port:3 link:down
        Port 4:
            pvid: 0
            link: port:4 link:up speed:100baseT full-duplex txflow rxflow auto
        Port 5:
            pvid: 0
            link: port:5 link:up speed:100baseT full-duplex txflow rxflow auto

    Regards, Mark.
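    Since the swconfig output above shows enable_vlan: 0, one plausible direction is to enable VLAN support on the switch and group the LAN ports into a single VLAN tagged to the CPU port. A minimal sketch of the /etc/config/network additions (the port numbering, and in particular which port is the CPU port, varies per board and is an assumption here; keep a wireless connection handy, since a wrong port map locks out the wired ports exactly as described above):

        config switch
            option name 'eth0'
            option enable_vlan '1'

        config switch_vlan
            option device 'eth0'
            option vlan '1'
            # Ports 1-5 untagged; port 0 assumed to be the CPU port, tagged:
            option ports '1 2 3 4 5 0t'

    If the switch accepts this, the LAN bridge's ifname would then become 'eth0.1' rather than 'eth0'.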

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine.

    About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 were also kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford.

    After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create. I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order!

    I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused on what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync reorganize them the way a native 7-disk RAID-6 would be laid out?

    Thanks! Some more mdadm output that might be helpful. mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 1.0
            Feature Map : 0x0
             Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                   Name : 0
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
           Raid Devices : 7
          Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
             Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
              Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
           Super Offset : 5857395368 sectors
                  State : clean
            Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
            Update Time : Tue Jun 10 13:01:06 2014
               Checksum : d275c82d - correct
                 Events : 7036
             Chunk Size : 64K
             Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
            Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
                Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
             Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
           Raid Devices : 7
          Total Devices : 5
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Jun 10 13:01:06 2014
                  State : clean, degraded
         Active Devices : 5
        Working Devices : 5
         Failed Devices : 0
          Spare Devices : 0
             Chunk Size : 64K
                   Name : 0
                   UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                 Events : 7036

            Number   Major   Minor   RaidDevice State
               0       8        3        0      active sync   /dev/sda3
               1       8       19        1      active sync   /dev/sdb3
               2       0        0        2      removed
               3       8       51        3      active sync   /dev/sdd3
               4       0        0        4      removed
               5       8       99        5      active sync   /dev/sdg3
               6       8       83        6      active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]
        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]
        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk
        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk
        unused devices: <none>
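    Given that only the disk order is unknown, one common approach (a sketch, not QNAP-specific advice) is to brute-force the order: re-create the array with --assume-clean for each candidate permutation and probe for a valid ext4 filesystem with a read-only fsck. The device list below is an assumption based on the --detail output above, with "missing" standing in for the two removed slots; note that every mdadm --create rewrites the superblocks, so this is only reasonable once the original metadata is already lost, as described:

        #!/bin/sh
        # Candidate orders; slots 2 and 4 are the removed disks, matching
        # the --detail output above. Extend the list with more permutations.
        for ORDER in "sda3 sdb3 missing sdd3 missing sdg3 sdf3" \
                     "sda3 sdb3 missing sdd3 missing sdf3 sdg3"; do
            mdadm --stop /dev/md0 2>/dev/null
            DEVICES=$(for d in $ORDER; do
                case "$d" in
                    missing) printf 'missing ' ;;
                    *)       printf '/dev/%s ' "$d" ;;
                esac
            done)
            mdadm --create /dev/md0 --assume-clean --force --run \
                  --level=6 --raid-devices=7 --chunk=64 --metadata=1.0 \
                  $DEVICES
            # Read-only probe; a clean ext4 check suggests the right order:
            if fsck.ext4 -n /dev/md0 >/dev/null 2>&1; then
                echo "candidate order: $ORDER"
            fi
        done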

    Read the article

  • Debian dependency problems / partially installed

    - by Michael
    I tried to install cURL support for PHP 5 on my Debian Squeeze machine, and I've been having problems ever since. After trying to install curl I got dependency issues, which I tried to solve by removing whatever started them. One thing led to another, and I'm currently looking at ~29 issues when I try to do an apt-get upgrade. These issues vary between unable-to-configure, dependency, and unable-to-remove errors. I tried apt-get upgrade -f and installing packages using the dpkg command. I tried removing using purge and force. I manually removed stuff to try to fix it. I tried running dpkg --configure -a. I have to say I'm still pretty new to Linux, so I'm out of ideas and can't seem to find an answer online that matches my problems. Here's a part of the apt-get upgrade command output:

        Reading package lists...
        Building dependency tree...
        Reading state information...
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        29 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Setting up libgeoip1 (1.4.7~beta6+dfsg-1) ...
        Bus error
        dpkg: error processing libgeoip1 (--configure):
         subprocess installed post-installation script returned error exit status 135
        Setting up libisc62 (1:9.7.3.dfsg-1~squeeze3) ...
        Bus error
        dpkg: error processing libisc62 (--configure):
         subprocess installed post-installation script returned error exit status 135
        dpkg: dependency problems prevent configuration of libdns69:
         libdns69 depends on libgeoip1 (>= 1.4.7~beta6+dfsg); however:
          Package libgeoip1 is not configured yet.
         libdns69 depends on libisc62; however:
          Package libisc62 is not configured yet.
        dpkg: error processing libdns69 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libisccc60:
         libisccc60 depends on libisc62; however:
          Package libisc62 is not configured yet.
        dpkg: error processing libisccc60 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libisccfg62:
         libisccfg62 depends on libdns69; however:
          Package libdns69 is not configured yet.
        .. continues
        Errors were encountered while processing:
         libgeoip1 libisc62 libdns69 libisccc60 libisccfg62 libbind9-60 liblwres60
         bind9-host libavahi-core7 libdaemon0 avahi-daemon libexif12 libffi5 libgomp1
         libgphoto2-port0 libgphoto2-2 libperl5.10 libsensors4 libsnmp15 libhpmud0
         libieee1284-3 libnss-mdns libossp-uuid16 libpq5 libv4l-0 libsane
         libsane-hpaio libssh2-1 python-gobject

    And from dpkg --configure -a:

        Setting up libpq5 (8.4.8-0squeeze2) ...
        Bus error
        dpkg: error processing libpq5 (--configure):
         subprocess installed post-installation script returned error exit status 135
        Setting up libperl5.10 (5.10.1-17squeeze2) ...
        Bus error
        dpkg: error processing libperl5.10 (--configure):
         subprocess installed post-installation script returned error exit status 135
        Setting up libffi5 (3.0.9-3) ...
        Bus error
        dpkg: error processing libffi5 (--configure):
         subprocess installed post-installation script returned error exit status 135
        Setting up libexif12 (0.6.19-1) ...
        .. continues

    Suggestions are really welcome; I really don't know what to do. Michael.
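    One small observation on the output above (a reading of the logs, not a diagnosis): exit status 135 is 128 + 7, i.e. each post-installation script is being killed by signal 7, which is exactly the "Bus error" (SIGBUS) printed before every failure. You can confirm the signal-number mapping on the machine itself:

        # Print the name of signal 7; on Linux this prints "BUS":
        kill -l 7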

    Read the article

  • Is this a valid JFS partition?

    - by Coolmax
    This is my first question on Stack Exchange. My teacher gave me his laptop (with Fedora 16 on it) and a CompactFlash card with data on it. He wants access to the files on the card, but he couldn't get to them. The problem is that Linux doesn't know what type the partition is. I suspect it is JFS:

        root@debian:~# dmesg | grep sdc
        [ 9066.908223] sd 3:0:0:1: [sdc] 3940272 512-byte logical blocks: (2.01 GB/1.87 GiB)
        [ 9066.962307] sd 3:0:0:1: [sdc] Write Protect is off
        [ 9066.962310] sd 3:0:0:1: [sdc] Mode Sense: 03 00 00 00
        [ 9066.962312] sd 3:0:0:1: [sdc] Assuming drive cache: write through
        [ 9067.028420] sd 3:0:0:1: [sdc] Assuming drive cache: write through
        [ 9067.028637] sdc: unknown partition table
        [ 9067.097065] sd 3:0:0:1: [sdc] Assuming drive cache: write through
        [ 9067.097281] sd 3:0:0:1: [sdc] Attached SCSI removable disk

    and some of the data:

        root@debian:~# hexdump -Cn 65536 /dev/sdc
        00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        *
        00008000  4a 46 53 31 01 00 00 00  48 63 0e 00 00 00 00 00  |JFS1....Hc......|
        00008010  00 10 00 00 0c 00 03 00  00 02 00 00 09 00 00 00  |................|
        00008020  00 20 00 00 00 09 20 10  02 00 00 00 00 00 00 00  |. .... .........|
        00008030  04 00 00 00 26 00 00 00  02 00 00 00 24 00 00 00  |....&.......$...|
        00008040  41 03 00 00 16 00 00 00  00 02 00 00 a0 cc 01 00  |A...............|
        00008050  37 00 00 00 69 cc 01 00  b6 d8 ac 4b 00 00 00 00  |7...i......K....|
        00008060  32 00 00 00 02 00 00 00  00 00 00 00 00 00 00 00  |2...............|
        00008070  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        00008080  00 00 00 00 00 00 00 00  90 15 e5 5f e3 c4 45 fa  |..........._..E.|
        00008090  9d 6a 5c b5 4f da 62 1a  00 00 00 00 00 00 00 00  |.j\.O.b.........|
        000080a0  00 00 00 00 00 00 00 00  c3 c9 01 00 ed 81 00 00  |................|
        000080b0  01 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        000080c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        *
        [cut]
        *
        0000f000  4a 46 53 31 01 00 00 00  48 63 0e 00 00 00 00 00  |JFS1....Hc......|
        0000f010  00 10 00 00 0c 00 03 00  00 02 00 00 09 00 00 00  |................|
        0000f020  00 20 00 00 00 09 20 10  00 00 00 00 00 00 00 00  |. .... .........|
        0000f030  04 00 00 00 26 00 00 00  02 00 00 00 24 00 00 00  |....&.......$...|
        0000f040  41 03 00 00 16 00 00 00  00 02 00 00 a0 cc 01 00  |A...............|
        0000f050  37 00 00 00 69 cc 01 00  b6 d8 ac 4b 00 00 00 00  |7...i......K....|
        0000f060  32 00 00 00 02 00 00 00  00 00 00 00 00 00 00 00  |2...............|
        0000f070  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        0000f080  00 00 00 00 00 00 00 00  90 15 e5 5f e3 c4 45 fa  |..........._..E.|
        0000f090  9d 6a 5c b5 4f da 62 1a  00 00 00 00 00 00 00 00  |.j\.O.b.........|
        0000f0a0  00 00 00 00 00 00 00 00  c3 c9 01 00 ed 81 00 00  |................|
        0000f0b0  01 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        0000f0c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        *
        00010000

    I'm a total newbie to filesystems. I googled and found that the JFS superblock may start at offset 0x8000. But what next? How do I mount this card? If there were a normal partition table I would expect 55 AA at bytes 510 and 511, but the first 0x8000 bytes are clear. Any help would be greatly appreciated. And sorry for my bad English :) Kind regards.
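    Since JFS keeps its primary superblock at a fixed offset of 0x8000 bytes from the start of the filesystem, the dump above is consistent with a filesystem that begins at byte 0 of the device, with no partition table at all. In that case it may mount directly; a sketch (read-only, so nothing gets modified):

        # JFS starting at byte 0 of the raw device:
        mkdir -p /mnt/cf
        mount -t jfs -o ro /dev/sdc /mnt/cf

        # If the filesystem actually starts at some non-zero offset N
        # (N here is hypothetical), expose it through a loop device:
        losetup -o N -f --show /dev/sdc   # prints e.g. /dev/loop0
        mount -t jfs -o ro /dev/loop0 /mnt/cf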

    Read the article

  • How broken is routing strategy that causes a martian packet (so far only) during tracepath?

    - by lkraav
    I believe I've achieved a table that routes packets from and to eth1/192.168.3.x through 192.168.3.1, and packets from and to eth0/192.168.1.x through 192.168.1.1 (helpful source).

    Question: when doing tracepath from 192.168.3.20 (from within the vserver), I'm getting

        kernel: [318535.927489] martian source 192.168.3.20 from 212.47.223.33, on dev eth0

    at or near the target IP, while the intermediate hops produce no such message (log below). I don't understand why this packet is arriving on eth0, instead of eth1, even after reading this:

        Note that you may see packets from non-routable IP addresses when running the traceroute or tracepath commands. While packets cannot be routed to these routers, packets sent between 2 routers only need to know the address of the next hop within the local networks, which could be a non-routable address.

    Can someone explain that paragraph in human language? Based on short initial trials so far, everything else seems to work without causing martians. Is this contained to the nature of tracepath operation, or do I have some bigger routing problem that will break real traffic?

    Side note: is it possible to inspect a martian packet with tcpdump or wireshark or anything of the sort? I have not been able to get it to show up on my own.

        vserver-20 / # tracepath -n 212.47.223.33
         1:  192.168.3.2      0.064ms pmtu 1500
         1:  192.168.3.1      1.076ms
         1:  192.168.3.1      1.259ms
         2:  90.191.8.2       1.908ms
         3:  90.190.134.194   2.595ms
         4:  194.126.123.94   2.136ms asymm  5
         5:  195.250.170.22   2.266ms asymm  6
         6:  212.47.201.86    2.390ms asymm  7
         7:  no reply
         8:  no reply
         9:  no reply
        ^C

    Host routing:

        $ sudo ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
        2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
            link/sit 0.0.0.0 brd 0.0.0.0
        3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:24:1d:de:b3:5d brd ff:ff:ff:ff:ff:ff
            inet 192.168.1.2/24 scope global eth0
        4: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:0c:46:46:a3:6a brd ff:ff:ff:ff:ff:ff
            inet 192.168.3.2/27 scope global eth1
            inet 192.168.3.20/27 brd 192.168.3.31 scope global secondary eth1    # linux-vserver instance

        $ sudo ip route
        default via 192.168.1.1 dev eth0 metric 3
        unreachable 127.0.0.0/8 scope host
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.2
        192.168.3.0/27 dev eth1 proto kernel scope link src 192.168.3.2

        $ sudo ip rule
        0:      from all lookup local
        32764:  from all to 192.168.3.0/27 lookup dmz
        32765:  from 192.168.3.0/27 lookup dmz
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table dmz
        default via 192.168.3.1 dev eth1 metric 4
        192.168.3.0/27 dev eth1 scope link metric 4

    Gateway routing:

        # ip route
        10.24.0.2 dev tun0 proto kernel scope link src 10.24.0.1
        10.24.0.0/24 via 10.24.0.2 dev tun0
        192.168.3.0/24 dev br-dmz proto kernel scope link src 192.168.3.1
        192.168.1.0/24 dev br-lan proto kernel scope link src 192.168.1.1
        $ISP_NET/23 dev eth0.1 proto kernel scope link src $WAN_IP
        default via $ISP_GW dev eth0.1

    Additional background: Options for non-virtualized network interface isolation?
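    On the side note: yes, the packet can usually be captured. A minimal sketch, using the addresses from the log above (the interface choice is the whole point: capture on the interface the kernel complains about):

        # Watch for the offending flow arriving on eth0:
        tcpdump -ni eth0 host 212.47.223.33

        # Make the kernel log every martian it sees, which helps correlate
        # the capture with the log line:
        sysctl -w net.ipv4.conf.all.log_martians=1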

    Read the article

  • Q1 2010 New Feature: Paging with RadGridView for Silverlight and WPF

    We are glad to announce that the Q1 2010 release has added another weapon to RadGridView's growing arsenal of features. This is the brand new RadDataPager control, which provides the user interface for paging through a collection of data. The good news is that RadDataPager can be used to page any collection. It does not depend on RadGridView in any way, so you will be free to use it with the rest of your ItemsControls if you choose to do so. Before you read on, you might want to download the samples solution that I have attached. It contains a sample project for every scenario that I will discuss later on. Looking at the code while reading will make things much easier for you. There is something for everyone among the 10 Visual Studio projects that are included in the solution. So go and grab it.

    I. Paging essentials

    The single most important piece of software concerning paging in Silverlight is the System.ComponentModel.IPagedCollectionView interface. Those of you who are on the WPF front need not worry though. As you might already know, Telerik's Silverlight and WPF controls share the same code base. Since WPF does not contain a similar interface, Telerik has provided its own Telerik.Windows.Data.IPagedCollectionView. The IPagedCollectionView interface contains several important members which are used by RadGridView to perform the actual paging. Silverlight provides a default implementation of this interface which, naturally, is called PagedCollectionView. You should definitely take a look at its source code in case you are interested in what is going on under the hood, but this is not a prerequisite for our discussion. The WPF default implementation of the interface is Telerik's QueryableCollectionView which, among many other interfaces, implements IPagedCollectionView.

    II. No Paging

    In order to gradually build up my case, I will start with a very simple example that lacks paging whatsoever. It might sound stupid, but this will help us build on top of this paging-devoid example. Let us imagine that we have the simplest possible scenario: a simple IEnumerable and an ItemsControl that shows its contents. It will look like this:

        IEnumerable itemsSource = Enumerable.Range(0, 1000);
        this.itemsControl.ItemsSource = itemsSource;

    XAML:

        <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
            <ListBox Name="itemsControl"/>
        </Border>
        <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
            <TextBlock Text="No Paging"/>
        </Border>

    Nothing special for now, just some data displayed in a ListBox. The two sample projects in the solution that I have attached are:

        NoPaging_WPF
        NoPaging_SL3

    With each subsequent sample, those two projects will evolve in some way or another.

    III. Paging simple collections

    The single most important property of RadDataPager is its Source property. This is where you pass in your collection of data for paging. More often than not your collection will not be an IPagedCollectionView. It will either be a simple List<T>, or an ObservableCollection<T>, or anything that is simply IEnumerable. Unless you had paging in mind when you designed your project, it is almost certain that your data source will not be pageable out of the box. So what are the options?

    III. 1. Wrapping the simple collection in an IPagedCollectionView

    If you look at the constructors of PagedCollectionView and QueryableCollectionView, you will notice that you can pass in a simple IEnumerable as a parameter.
    Those two classes will wrap it and provide paging capabilities over your original data. In fact, this is what RadGridView does internally. It wraps your original collection in a QueryableCollectionView in order to easily perform many useful tasks such as filtering, sorting, and others, but in our case the most important one is paging. So let us start our series of examples with the most simplistic one. Imagine that you have a simple IEnumerable which is the source for an ItemsControl. Here is how to wrap it in order to enable paging:

    Silverlight:

        IEnumerable itemsSource = Enumerable.Range(0, 1000);
        var pagedSource = new PagedCollectionView(itemsSource);
        this.radDataPager.Source = pagedSource;
        this.itemsControl.ItemsSource = pagedSource;

    WPF:

        IEnumerable itemsSource = Enumerable.Range(0, 1000);
        var pagedSource = new QueryableCollectionView(itemsSource);
        this.radDataPager.Source = pagedSource;
        this.itemsControl.ItemsSource = pagedSource;

    XAML:

        <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
            <ListBox Name="itemsControl"/>
        </Border>
        <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
            <telerikGrid:RadDataPager Name="radDataPager"
                                      PageSize="10"
                                      IsTotalItemCountFixed="True"
                                      DisplayMode="All"/>
        </Border>

    This will do the trick. It is quite simple, isn't it? The two sample projects in the solution that I have attached are:

        PagingSimpleCollectionWithWrapping_WPF
        PagingSimpleCollectionWithWrapping_SL3

    III. 2. Binding to RadDataPager.PagedSource

    In case you do not like this approach, there is a better one. When you assign an IEnumerable as the Source of a RadDataPager, it will automatically wrap it in a QueryableCollectionView and expose it through its PagedSource property. From then on, you can attach any number of ItemsControls to the PagedSource and they will be automatically paged. Here is how to do this entirely in XAML:

        <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
            <ListBox Name="itemsControl"
                     ItemsSource="{Binding PagedSource, ElementName=radDataPager}"/>
        </Border>
        <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
            <telerikGrid:RadDataPager Name="radDataPager"
                                      Source="{Binding ItemsSource}"
                                      PageSize="10"
                                      IsTotalItemCountFixed="True"
                                      DisplayMode="All"/>
        </Border>

    The two sample projects in the solution that I have attached are:

        PagingSimpleCollectionWithPagedSource_WPF
        PagingSimpleCollectionWithPagedSource_SL3

    IV. Paging collections implementing IPagedCollectionView

    Those of you who are using WCF RIA Services should feel very lucky. A quick look with Reflector or the debugger shows that the DomainDataSource.Data property is in fact an instance of the DomainDataSourceView class. This class implements a handful of useful interfaces:

        ICollectionView
        IEnumerable
        INotifyCollectionChanged
        IEditableCollectionView
        IPagedCollectionView
        INotifyPropertyChanged

    Luckily, IPagedCollectionView is among them, which lets you do the whole paging on the server. So let's do this. We will add a DomainDataSource control to our page/window and connect the items control and the pager to it.
    Here is how to do this (MainPage):

        <riaControls:DomainDataSource x:Name="invoicesDataSource"
                                      AutoLoad="True"
                                      QueryName="GetInvoicesQuery">
            <riaControls:DomainDataSource.DomainContext>
                <services:ChinookDomainContext/>
            </riaControls:DomainDataSource.DomainContext>
        </riaControls:DomainDataSource>
        <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
            <ListBox Name="itemsControl"
                     ItemsSource="{Binding Data, ElementName=invoicesDataSource}"/>
        </Border>
        <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
            <telerikGrid:RadDataPager Name="radDataPager"
                                      Source="{Binding Data, ElementName=invoicesDataSource}"
                                      PageSize="10"
                                      IsTotalItemCountFixed="True"
                                      DisplayMode="All"/>
        </Border>

    By the way, you can replace the ListBox from the above code snippet with any other ItemsControl. It can be RadGridView, it can be the MS DataGrid, you name it. Essentially, RadDataPager is sending paging commands to the DomainDataSource.Data. It does not care who, what, or how many different controls are bound to this same Data property of the DomainDataSource control. So if you would like to experiment with this, you can throw in any number of other ItemsControls next to the ListBox, bind them in the same manner, and all of them will be paged by our single RadDataPager. Furthermore, you can throw in any number of RadDataPagers and bind them to the same property; paging with any one of them will then automatically update all the rest. The whole picture is simply beautiful, and we can do all of this thanks to WCF RIA Services. The two sample projects (Silverlight only) in the solution that I have attached are:

        PagingIPagedCollectionView
        PagingIPagedCollectionView.Web

    V. Paging RadGridView

    While you can replace the ListBox in any of the above examples with a RadGridView, RadGridView offers something extra. Similar to the DomainDataSource.Data property, the RadGridView.Items collection implements the IPagedCollectionView interface. So you are already thinking: "Then why not bind the Source property of RadDataPager to RadGridView.Items?" Well, that's exactly what you can do, and you will start paging RadGridView out of the box. It is as simple as that; no code-behind is involved (MainPage):

        <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
            <telerikGrid:RadGridView Name="radGridView"
                                     ItemsSource="{Binding ItemsSource}"/>
        </Border>
        <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
            <telerikGrid:RadDataPager Name="radDataPager"
                                      Source="{Binding Items, ElementName=radGridView}"
                                      PageSize="10"
                                      IsTotalItemCountFixed="True"
                                      DisplayMode="All"/>
        </Border>

    The two sample projects in the solution that I have attached are:

        PagingRadGridView_SL3
        PagingRadGridView_WPF

    With this last example I think I have covered every possible paging combination. In case you would like to see an example of something that I have not covered, please let me know.
    Also, make sure you check out these great online examples:

        WCF RIA Services with DomainDataSource
        Paging Configurator
        Endless Paging
        Paging Any Collection
        Paging RadGridView

    Happy Paging!

    Download Full Source Code

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure. These updates included:

        Web Sites: General Availability Release of Windows Azure Web Sites with SLA
        Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA
        Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines
        Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines)
        MSDN: No more credit card requirement for sign-up

    All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.

    Web Sites: General Availability Release of Windows Azure Web Sites

    I'm incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps. Today's General Availability release means we are taking off the "preview" tag from the Free and Standard (formerly called Reserved) tiers of Windows Azure Web Sites. This means we are providing:

        A 99.9% monthly SLA (Service Level Agreement) for the Standard tier
        Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support)

    The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost.

    The Standard tier, which was called "Reserved" during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different web sites within them. You can easily scale your Standard instances on demand using the Windows Azure Management Portal. You can adjust VM instance sizes from a Small instance size (1 core, 1.75 GB of RAM), up to a Medium instance size (2 cores, 3.5 GB of RAM), or a Large instance (4 cores and 7 GB of RAM). You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70 GB of RAM.

    Today's release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address. SSL certificates can be used for individual sites or wildcard-mapped across multiple sites (we charge extra for the use of an SSL cert, but the fee is per-cert and not per-site, which means you pay once for it regardless of how many sites you use it with). Today's release also includes the following new features:

    Auto-Scale support

    Today's Windows Azure release adds preview support for auto-scaling web sites. This enables you to set up automatic scale rules based on the activity of your instances, allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases. See below for more details.

    64-bit and 32-bit mode support

    You can now choose to run your Standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).
    This enables you to address even more memory within individual web applications.

    Memory dumps

    Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in the Visual Studio Debugger, WinDbg, and other tools.

    Scaling Sites Independently

    Prior to today's release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This is no longer necessary. Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to the Standard tier when they require the features, scalability, and SLA of the Standard tier.

    Full pricing details on Windows Azure Web Sites can be found here. Note that the "Shared Tier" of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).

    Mobile Services: General Availability Release of Windows Azure Mobile Services

    I'm incredibly excited to announce the General Availability release of Windows Azure Mobile Services. Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.

    Customers

    We've seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it. These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require. In today's Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be.

    Partners

    When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third-party services. The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps:

        New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services.
        SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox.
        Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps.
        Xamarin provides a Mobile Services add-on that makes it easy to build cross-platform connected mobile apps.
        Pusher allows you to quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps.

    Visual Studio 2013 and Windows 8.1

    This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever.
    Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking and then choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically.

    Add Push Notifications

    Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking the Add a Push Notification item. The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app.

    Server Explorer Integration

    In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server-side scripts without ever leaving Visual Studio.

    Pricing

    With today's general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium. Each tier is metered using a simple pricing model based on the number of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs. You can elastically scale up or down the number of instances you have of each tier to increase the number of API requests your service can support – allowing you to efficiently scale as your business grows. You can find the full details of the new pricing model here.

    Build Conference Talks

    The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can't be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks:

        Mobile Services – Soup to Nuts — Josh Twist
        Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner
        Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev
        Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris
        Who's that user? Identity in Mobile Apps — Dinesh Kulkarni
        Building REST Services with JavaScript — Nathan Totten
        Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum
        Protips for Windows Azure Mobile Services — Chris Risner

    AutoScale: Dynamically scale up/down your app based on real-world usage

    One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we're announcing that AutoScale will be built into Windows Azure directly. With today's release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).
    Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured, it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics:

        CPU percentage
        Storage queue depth (Cloud Services and Virtual Machines only)

    We'll enable automatic scaling on even more scale metrics in future updates.

    When to use Auto-Scale

    The following are good criteria for services/apps that will benefit from the use of auto-scale:

        The service/app can scale horizontally (e.g. it can be duplicated to multiple instances)
        The service/app load changes over time

    If your app meets these criteria, then you should look to leverage auto-scale.

    How to Enable Auto-Scale

    To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable. Within the Scale tab, turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale. Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. As an example of enabling Auto-Scale on a Windows Azure Web Site, I've configured a web site so that it will run using between 1 and 5 VM instances. The exact number used will depend on the aggregate CPU of the VMs using the 40-70% range I've configured. If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I've configured it to use). If the aggregate CPU drops below 40%, then Windows Azure will automatically start shutting down VMs to save me money. Once you've turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances.

    Using the Auto-Scale Preview

    With today's update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running in your apps to keep your service performant at a lower cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During the preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web Sites, Cloud Services or Virtual Machines). If you hit the 10-rule limit, you can disable auto-scale for any resource to enable it for another.

    Alerts and Notifications

    Starting today we are now providing the ability to configure threshold-based alerts on monitoring metrics. This feature is available for compute services (Cloud Services, VMs, Web Sites and Mobile Services). Alerts provide you the ability to get proactively notified of active or impending issues within your application. You can define alert rules for:

        Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint URLs (response time and uptime) that you have configured.
        Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint URLs (response time and uptime) that you have configured.
    For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint URLs (response time and uptime) that you have configured.

    Creating Alert Rules

    You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service on which you want to define the alert rule. The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected. Once created, the rule will show up in your alerts list within the Settings tab. A rule is listed as "not activated" until it trips over the threshold you set. If the CPU on the machine goes over the limit, though, I'll get an email notifying me from a Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the Alerts tab, I'll see it highlighted in red. Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past.

    Alert Notifications

    With today's initial preview you can now easily create alerting rules based on monitoring metrics and get notified of active or impending issues within your application that require attention. During the preview, each subscription is limited to 10 alert rules across all of the services that support alert rules.

    No More Credit Card Requirement for MSDN Subscribers

    Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure sign-up for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and removing the spending limit. This enables a super easy, one-page sign-up experience for MSDN users. Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one-page sign-up form and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test. This makes it trivially easy for every MSDN customer to start using Windows Azure today. If you haven't signed up yet, I definitely recommend checking it out.

    Summary

    Today's release includes a ton of great features that enable you to build even better cloud solutions. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010 - Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET profiler, and some closing thoughts.

    Test Results – I see some creepy worms!

    In Part 2 we put together a web performance test and a load test; let's run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close-to-real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

        Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results' memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

        After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. The summary view provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

        Analyse the load test results of a previously run load test – We'll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over to see more details of the test run.

    Generate Report => Test Run Comparisons

    The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK, and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results

    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts

    While load testing the application with an excessive load for a long duration, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might just escalate the problem. The better suggestion is to try to drill down to the actual root cause of the problem.

    Whenever garbage collection runs, it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, a garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into a large object heap and a small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: all objects that are expected to be short-lived live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collections go through.

    All objects smaller than 85 KB are first allocated in Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process starts: it checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected, the rest are moved to Gen-2, and Gen-0's survivors are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually does. Now, as I mentioned earlier, while garbage collection is running, all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more. What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, then write a simple application that chucks a lot of data into the cache, and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but it clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
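    A small C# illustration (not from the original article) of the generation promotion behaviour described above: survivors of a collection move up one generation until they reach Gen-2.

        using System;

        class GenDemo
        {
            static void Main()
            {
                var o = new object();
                Console.WriteLine(GC.GetGeneration(o)); // 0: fresh allocations start in Gen-0
                GC.Collect();                           // o survives, so it is promoted
                Console.WriteLine(GC.GetGeneration(o)); // 1
                GC.Collect();
                Console.WriteLine(GC.GetGeneration(o)); // 2: Gen-2 is the last stop
                GC.KeepAlive(o);                        // keep o reachable across the collections
            }
        }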
How can you address this? Well, there are two ways of addressing it:

1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team Builds by default build your application in 'Compile as Any CPU' mode, the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'compile as Any CPU' selection to 'compile as x86' in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs a question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, 'Performance Profiling' is a great starting point; it definitely gets you started in the right direction. Let's have a look at how.

Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session Properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. So we have now seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

Caching One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, you keep the data on the web server and, when the user asks for it, you serve it from the web server itself. BUT that can have consequences!
Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as soon as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache: no more concurrency issues.

But let's say you are processing 10 requests per second. By the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after locking – before the actual call to the database – then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment; you don't want another developer to come in and think "what a fresher, why is he checking for the cache key null twice!"

The downside of caching is that you are storing the data outside of the database, and the data could be wrong, because updates applied to the database would make the data cached at the web server go out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data – say only one entry point to the data update – you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro-caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which to micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will have immense performance gains – you would cut roughly 90% of the hits to the database for searching.

Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight, you click on the book button because the fare seems too exciting, and you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites and price-compare engines are not going to hit the database every time you hit the compare button; instead, the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains 100% of the performance benefits, but every once in a while it annoys a customer because the fare has expired.
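Since the original snippet is shown as an image in the source post, here is a minimal sketch of the pattern being described: a lock plus a second null check, combined with micro-caching via an absolute expiration. It assumes an ASP.NET application using the standard System.Web.Caching HttpRuntime.Cache; the cache key scheme, the 60-second window and the data-access helper are hypothetical.

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public class SearchService
    {
        private static readonly object CacheLock = new object();

        public IList<string> GetSearchResults(string query)
        {
            string cacheKey = "search:" + query;   // hypothetical key scheme

            // First check: serve from the cache without taking the lock.
            IList<string> results = HttpRuntime.Cache[cacheKey] as IList<string>;
            if (results == null)
            {
                lock (CacheLock)
                {
                    // Second check: another request may have populated the
                    // cache while we were waiting for the lock.
                    results = HttpRuntime.Cache[cacheKey] as IList<string>;
                    if (results == null)
                    {
                        results = LoadFromDatabase(query);

                        // Micro-cache: the absolute expiration invalidates
                        // the entry automatically after 60 seconds.
                        HttpRuntime.Cache.Insert(
                            cacheKey, results, null,
                            DateTime.UtcNow.AddSeconds(60),
                            Cache.NoSlidingExpiration);
                    }
                }
            }
            return results;
        }

        private IList<string> LoadFromDatabase(string query)
        {
            // Placeholder for the real database call.
            return new List<string> { "result for " + query };
        }
    }

The second null check inside the lock is exactly the "non-intuitive" line worth commenting, and the absolute expiration is what occasionally hands a customer an expired fare.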
Coming back to the travel sites: the trade-off works in their favour, as they are still able to process 30+ page requests per second, which means they can cater for the site traffic while maybe losing one customer every once in a while to a competitor. And since the competitor is most likely using a similar caching technique, what are the odds that the user will not come back to their site sooner or later?

Recap

Resources Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc. – please leave a comment. Next up, 'Load Testing in the Cloud': I'll be working on exploring the possibilities of running the test controller/agents in the cloud. See you on the other side! Thank you!

Share this post : CodeProject

    Read the article

  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

In C# 3.0 (which shipped with .NET Framework 3.5), Microsoft introduced the concept of anonymous types, which provide a way to create a quick, compiler-generated type at the point of instantiation. These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope.

Creating an Anonymous Type In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties, based on their names, number, types, and order given at initialization. In addition to just holding these properties, it is also given appropriate overridden implementations of Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing. Also overridden is an implementation of ToString(), which makes it easy to display the contents of an anonymous type instance in a fairly concise manner.

To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type. So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this: 1: var point = new { X = 13, Y = 7 }; Note the similarity between anonymous type initialization and regular initialization. The main difference is that the compiler generates the type name and the properties (as read-only) based on the names and order provided, inferring their types from the expressions they are assigned to.

It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type. This is important, because while these two instances share the same anonymous type: 1: // same names, types, and order 2: var point1 = new { X = 13, Y = 7 }; 3: var point2 = new { X = 5, Y = 0 }; These similar ones do not: 1: var point3 = new { Y = 3, X = 5 }; // different order 2: var point4 = new { X = 3, Y = 5.0 }; // different type for Y 3: var point5 = new { MyX = 3, MyY = 5 }; // different names 4: var point6 = new { X = 1, Y = 2, Z = 3 }; // different count

Limitations on Property Initialization Expressions The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function. For example, the following are illegal: 1: // Null can't be used directly. Null reference of what type? 2: var cantUseNull = new { Value = null }; 3: 4: // Anonymous methods cannot be used. 5: var cantUseAnonymousFxn = new { Value = () => Console.WriteLine("Can't.") }; Note that the restriction on null is just that you can't use it directly as the expression, because otherwise how would it be able to determine the type? You can, however, use it indirectly, by assigning a null expression such as a typed variable with the value null, or by casting null to a specific type: 1: string str = null; 2: var fineIndirectly = new { Value = str }; 3: var fineCast = new { Value = (string)null }; All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.
In these cases, when a field, property, or variable is used alone and you don't specify a property name to assign to it, the new property will have the same name. For example: 1: int variable = 42; 2: 3: // creates two properties named variable and Now 4: var implicitProperties = new { variable, DateTime.Now }; This is the same type as: 1: var explicitProperties = new { variable = variable, Now = DateTime.Now }; But this only works if you are using an existing field, variable, or property directly as the expression. If you use a more complex expression, the name cannot be inferred: 1: // can't infer the name variable from variable * 2, must name explicitly 2: var wontWork = new { variable * 2, DateTime.Now }; In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly.

ToString() on Anonymous Types One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except with actual values instead of expressions, as appropriate, of course). For example, if you had: 1: var point = new { X = 13, Y = 42 }; And then print it out: 1: Console.WriteLine(point.ToString()); You will get: 1: { X = 13, Y = 42 } While this isn't necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy-to-read format.

Comparing Anonymous Type Instances Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes. For example, if we had the following 3 points: 1: var point1 = new { X = 1, Y = 2 }; 2: var point2 = new { X = 1, Y = 2 }; 3: var point3 = new { Y = 2, X = 1 }; If we compare point1 and point2, we'll see that Equals() returns true because the overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same. In addition, because all equal objects should have the same hash code, we'll see that the hash codes evaluate to the same value as well: 1: // true, same type, same values 2: Console.WriteLine(point1.Equals(point2)); 3: 4: // true, equal anonymous type instances always have same hash code 5: Console.WriteLine(point1.GetHashCode() == point2.GetHashCode()); However, if we compare point2 and point3 we get false. Even though the names, types, and values of the properties are the same, the order is not; thus they are two different types and cannot be compared (and thus return false). And, since they are not equal objects (even though they have the same values), there is a good chance their hash codes are different as well (though not guaranteed): 1: // false, different types 2: Console.WriteLine(point2.Equals(point3)); 3: 4: // quite possibly false (was false on my machine) 5: Console.WriteLine(point2.GetHashCode() == point3.GetHashCode());

Using Anonymous Types Now that we've created instances of anonymous types, let's actually use them. The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.
The main thing, once again, to keep in mind is that the properties are read-only, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance: 1: var point = new { X = 13, Y = 42 }; We can get the properties as you'd expect: 1: Console.WriteLine("The point is: ({0},{1})", point.X, point.Y); But we cannot alter the property values: 1: // compiler error, properties are readonly 2: point.X = 99;

Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope. The only real choices are to pass them as object or dynamic. But really, that is not the intention of using anonymous types. If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Object – i.e. a class that contains just properties to hold data, with little/no business logic) instead.

Given that, why use them at all? Couldn't you always just create a POCO to represent every anonymous type you needed? Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to using anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is more long-lived and used outside of the current scope, consider a POCO instead.

So what do we mean by intermediate results in a local context? Well, a classic example would be filtering down results from a LINQ expression. For example, let's say we had a List<Transaction>, where Transaction is defined something like: 1: public class Transaction 2: { 3: public string UserId { get; set; } 4: public DateTime At { get; set; } 5: public decimal Amount { get; set; } 6: // … 7: } And let's say we had this data in our List<Transaction>: 1: var transactions = new List<Transaction> 2: { 3: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m }, 4: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m }, 5: new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m }, 6: new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m }, 7: new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m }, 8: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m }, 9: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m }, 10: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m }, 11: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m }, 12: }; So let's say we wanted to get the transactions for each day for each user. That is, for each day we'd want to see the transactions each user performed. We could do this very simply with a nice LINQ expression, without the need to create any POCOs: 1: // group the transactions based on an anonymous type with properties UserId and Date: 2: var byUserAndDay = transactions 3: .GroupBy(tx => new { tx.UserId, tx.At.Date }) 4: .OrderBy(grp => grp.Key.Date) 5: .ThenBy(grp => grp.Key.UserId); Now, those of you who have attempted to use custom classes as a grouping type before (with GroupBy(), Distinct(), etc.)
may have discovered the hard way that LINQ gets a lot of its speed by utilizing not only Equals(), but also GetHashCode(), on the type you are grouping by. Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations, or you won't get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference-equality and reference-identity based, respectively).

As we said before, it turns out that anonymous types already do these critical overrides for you. This makes them even more convenient to use! Instead of creating a small POCO to handle this grouping, and then having to implement custom Equals() and GetHashCode() methods every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties.

Now, we can look at our results: 1: foreach (var group in byUserAndDay) 2: { 3: // the group's Key is an instance of our anonymous type 4: Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date); 5: 6: // each grouping contains a sequence of the items. 7: foreach (var tx in group) 8: { 9: Console.WriteLine("\t{0}", tx.Amount); 10: } 11: } And see: 1: Jaime on 06/18/2012 did: 2: -100.00 3: 300.00 4: 5: John on 06/19/2012 did: 6: 300.00 7: 8: Jim on 06/20/2012 did: 9: 900.00 10: 11: Jane on 06/21/2012 did: 12: 200.00 13: -50.00 14: 15: Jim on 06/21/2012 did: 16: 2200.00 17: -1100.00 18: 19: John on 06/21/2012 did: 20: -10.00 Again, sure, we could have just built a POCO to do this and given it appropriate Equals() and GetHashCode() methods, but that would have bloated our code with many extra lines and been more difficult to maintain if the properties change.

Summary Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result. While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediary values for further expressions and analysis. Anonymous types are defined by the compiler based on the number, type, names, and order of the properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()), which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc.

Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ
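As a postscript to the grouping discussion above, the claim that reference-based Equals()/GetHashCode() break grouping for plain classes can be demonstrated in a few lines. The PlainKey class and the data are hypothetical:

    using System;
    using System.Linq;

    // A plain class with no Equals()/GetHashCode() overrides.
    public class PlainKey
    {
        public string UserId { get; set; }
    }

    public static class GroupingDemo
    {
        public static void Main()
        {
            var ids = new[] { "Jim", "Jim", "Jane" };

            // Reference equality: three distinct PlainKey instances => 3 groups.
            var byPlainKey = ids.GroupBy(id => new PlainKey { UserId = id });
            Console.WriteLine(byPlainKey.Count());  // 3

            // Anonymous type: value-based Equals()/GetHashCode() => 2 groups.
            var byAnonKey = ids.GroupBy(id => new { UserId = id });
            Console.WriteLine(byAnonKey.Count());   // 2
        }
    }

The anonymous type collapses the two "Jim" entries into one group, exactly as the post describes.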

    Read the article

  • Upload Certificate and Key to RUEI in order to decrypt SSL traffic

    - by stefan.thieme(at)oracle.com
So you want to monitor encrypted traffic with your RUEI collector? Actually, this is an easy thing if you follow the lines below... I will start out by creating a pair of snakeoil (so-called self-signed) certificate and key with the make-ssl-cert tool, which comes pre-packaged with Apache, purely for the purpose of this example.

$ sudo make-ssl-cert generate-default-snakeoil
$ sudo ls -l /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key
-rw-r--r-- 1 root root     615 2010-06-07 10:03 /etc/ssl/certs/ssl-cert-snakeoil.pem
-rw-r----- 1 root ssl-cert 891 2010-06-07 10:03 /etc/ssl/private/ssl-cert-snakeoil.key

RUEI Configuration of Security SSL Keys You will most likely get these two files from your Certificate Authority (CA), and/or your system administrators should be able to extract them from the web server or load balancer handling SSL encryption for your infrastructure. Now let's look at the content of these two files: the certificate (Apache assumes this is in PEM format) contains the public key, and the private key is used by the Apache server to encrypt traffic for a client that uses the certificate to initiate the SSL connection with the server. If you already know that these two match, you simply have to paste them into one text file and upload this text file to your RUEI instance.

$ sudo cat /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key > /tmp/ruei.cert_and_key
$ sudo cat /tmp/ruei.cert_and_key
-----BEGIN CERTIFICATE-----
MIIBmTCCAQICCQD7O3XXwVilWzANBgkqhkiG9w0BAQUFADARMQ8wDQYDVQQDEwZ1
YnVudHUwHhcNMTAwNjA3MDgwMzUzWhcNMjAwNjA0MDgwMzUzWjARMQ8wDQYDVQQD
EwZ1YnVudHUwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBALbs+JnI+p+K7Iqa
SQZdnYBxOpdRH0/9jt1QKvmH68v81h9+f1Z2rVR7Zrd/l+ruE3H9VvuzxMlKuMH7
qBX/gmjDZTlj9WJM+zc0tSk+e2udy9he20lGzTxv0vaykJkuKcvSWNk4WE9NuAdg
IHZvjKgoTSVmvM1ApMCg69nyOy97AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAk2rv
VEkxR1qPSpJiudDuGUHtWKBKWiWbmSwI3REZT+0vG+YDG5a55NdxgRk3zhQntqF7
gNYjKxblBByBpY7W0ci00kf7kFgvXWMeU96NSQJdnid/YxzQYn0dGL2rSh1dwdPN
NPQlNSfnEQ1yxFevR7aRdCqTbTXU3mxi8YaSscE=
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXgIBAAKBgQC27PiZyPqfiuyKmkkGXZ2AcTqXUR9P/Y7dUCr5h+vL/NYffn9W
dq1Ue2a3f5fq7hNx/Vb7s8TJSrjB+6gV/4Jow2U5Y/ViTPs3NLUpPntrncvYXttJ
Rs08b9L2spCZLinL0ljZOFhPTbgHYCB2b4yoKE0lZrzNQKTAoOvZ8jsvewIDAQAB
AoGBAJ7LCWeeUwnKNFqBYmD3RTFpmX4furnal3lBDX0945BZtJr0WZ/6N679zIYA
aiVTdGfgjvDC9lHy3n3uctRd0Jqdh2QoSSxNBhq5elIApNIIYzu7w/XI/VhGcDlA
b6uadURQEC2q+M8YYjw3mwR2omhCWlHIViOHe/9T8jfP/8pxAkEA7k39WRcQildH
DFKcj7gurqlkElHysacMTFWf0ZDTEUS6bdkmNXwK6mH63BlmGLrYAP5AMgKgeDf8
D+WRfv8YKQJBAMSCQ7UGDN3ysyfIIrdc1RBEAk4BOrKHKtD5Ux0z5lcQkaCYrK8J
DuSldreN2yOhS99/S4CRWmGkTj04wRSnjwMCQQCaR5mW3QzTU4/m1XEQxsBKSdZE
2hMSmsCmhuSyK13Kl0FPLr/C7qyuc4KSjksABa8kbXaoKfUz/6LLs+ePXZ2JAkAv
+mIPk5+WnQgS4XFgdYDrzL8HTpOHPSs+BHG/goltnnT/0ebvgXWqa5+1pyPm6h29
PrYveM2pY1Va6z1xDowDAkEAttfzAwAHz+FUhWQCmOBpvBuW/KhYWKZTMpvxFMSY
YD5PH6NNyLfBx0J4nGPN5n/f6il0s9pzt3ko++/eUtWSnQ==
-----END RSA PRIVATE KEY-----

Simply click on "add new key" and browse for the cert_and_key file on your desktop, which you concatenated earlier using any text editor. You may need to add a passphrase in order to decrypt the RSA key in some cases (the header line should then say BEGIN ENCRYPTED PRIVATE KEY). I will show you the success screen after uploading the certificate to RUEI.
You may want to restart your collector once you have uploaded all the certificate/key pairs you want to use, in order to make sure they get picked up asap. You should be able to see the number of SSL connections rising in the Collector statistics screen below. The figures for decrypt errors should slowly go down, and the usage figures for your encryption algorithm on the subsequent SSL Encryption screen should go up. You should be 100% sure everything works fine by now; otherwise, see below to distinguish the remaining 1% from your 99% certainty.

Verify Certificate and Key are matching You can compare the modulus of the private key and the public certificate; they must match for the key to fit the lock. We are actually interested only in the following details of the two files, which can be determined by using the -subject, -dates and -modulus command line switches instead of the complete -text output of the x509 certificate/RSA key contents.

$ sudo openssl x509 -noout -subject -in /etc/ssl/certs/ssl-cert-snakeoil.pem
subject= /CN=ubuntu
$ sudo openssl x509 -noout -dates -in /etc/ssl/certs/ssl-cert-snakeoil.pem
notBefore=Jun  7 08:03:53 2010 GMT
notAfter=Jun  4 08:03:53 2020 GMT
$ sudo openssl x509 -noout -modulus -in /etc/ssl/certs/ssl-cert-snakeoil.pem
Modulus=B6ECF899C8FA9F8AEC8A9A49065D9D80713A97511F4FFD8EDD502AF987EBCBFCD61F7E7F5676AD547B66B77F97EAEE1371FD56FBB3C4C94AB8C1FBA815FF8268C3653963F5624CFB3734B5293E7B6B9DCBD85EDB4946CD3C6FD2F6B290992E29CBD258D938584F4DB8076020766F8CA8284D2566BCCD40A4C0A0EBD9F23B2F7B
$ sudo openssl rsa -noout -modulus -in /etc/ssl/private/ssl-cert-snakeoil.key
Modulus=B6ECF899C8FA9F8AEC8A9A49065D9D80713A97511F4FFD8EDD502AF987EBCBFCD61F7E7F5676AD547B66B77F97EAEE1371FD56FBB3C4C94AB8C1FBA815FF8268C3653963F5624CFB3734B5293E7B6B9DCBD85EDB4946CD3C6FD2F6B290992E29CBD258D938584F4DB8076020766F8CA8284D2566BCCD40A4C0A0EBD9F23B2F7B

As you can see, the modulus matches exactly, and we have proof that the certificate was created from the private key.
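If you would rather script this check than eyeball the hex, the same modulus comparison can be sketched in C#. Note this assumes the PEM import APIs introduced in .NET 5 (X509Certificate2.CreateFromPem, RSA.ImportFromPem), which did not exist when this post was written; the file paths are the snakeoil pair from above.

    using System;
    using System.IO;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Security.Cryptography.X509Certificates;

    class CertKeyMatch
    {
        static void Main()
        {
            // Load certificate and key independently from their PEM files.
            var cert = X509Certificate2.CreateFromPem(
                File.ReadAllText("/etc/ssl/certs/ssl-cert-snakeoil.pem"));
            using var key = RSA.Create();
            key.ImportFromPem(File.ReadAllText("/etc/ssl/private/ssl-cert-snakeoil.key"));

            // Compare the RSA modulus of both, exactly like the openssl commands above.
            byte[] certModulus = cert.GetRSAPublicKey().ExportParameters(false).Modulus;
            byte[] keyModulus = key.ExportParameters(false).Modulus;

            Console.WriteLine(certModulus.SequenceEqual(keyModulus)
                ? "Certificate and key match"
                : "MISMATCH");
        }
    }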
OpenSSL Certificate and Key Details As I already told you, you do not need all the greedy details, but in case you want to know in depth what is actually in those hex blocks, it can be made visible with the following commands, which show the actual content in a human-readable format. Note: you may not want to post all the details of your private key =^) I told you I have been using a self-signed certificate only for showing you these details.

$ sudo openssl rsa -noout -text -in /etc/ssl/private/ssl-cert-snakeoil.key
Private-Key: (1024 bit)
modulus:
    00:b6:ec:f8:99:c8:fa:9f:8a:ec:8a:9a:49:06:5d:
    9d:80:71:3a:97:51:1f:4f:fd:8e:dd:50:2a:f9:87:
    eb:cb:fc:d6:1f:7e:7f:56:76:ad:54:7b:66:b7:7f:
    97:ea:ee:13:71:fd:56:fb:b3:c4:c9:4a:b8:c1:fb:
    a8:15:ff:82:68:c3:65:39:63:f5:62:4c:fb:37:34:
    b5:29:3e:7b:6b:9d:cb:d8:5e:db:49:46:cd:3c:6f:
    d2:f6:b2:90:99:2e:29:cb:d2:58:d9:38:58:4f:4d:
    b8:07:60:20:76:6f:8c:a8:28:4d:25:66:bc:cd:40:
    a4:c0:a0:eb:d9:f2:3b:2f:7b
publicExponent: 65537 (0x10001)
privateExponent:
    00:9e:cb:09:67:9e:53:09:ca:34:5a:81:62:60:f7:
    45:31:69:99:7e:1f:ba:b9:da:97:79:41:0d:7d:3d:
    e3:90:59:b4:9a:f4:59:9f:fa:37:ae:fd:cc:86:00:
    6a:25:53:74:67:e0:8e:f0:c2:f6:51:f2:de:7d:ee:
    72:d4:5d:d0:9a:9d:87:64:28:49:2c:4d:06:1a:b9:
    7a:52:00:a4:d2:08:63:3b:bb:c3:f5:c8:fd:58:46:
    70:39:40:6f:ab:9a:75:44:50:10:2d:aa:f8:cf:18:
    62:3c:37:9b:04:76:a2:68:42:5a:51:c8:56:23:87:
    7b:ff:53:f2:37:cf:ff:ca:71
prime1:
    00:ee:4d:fd:59:17:10:8a:57:47:0c:52:9c:8f:b8:
    2e:ae:a9:64:12:51:f2:b1:a7:0c:4c:55:9f:d1:90:
    d3:11:44:ba:6d:d9:26:35:7c:0a:ea:61:fa:dc:19:
    66:18:ba:d8:00:fe:40:32:02:a0:78:37:fc:0f:e5:
    91:7e:ff:18:29
prime2:
    00:c4:82:43:b5:06:0c:dd:f2:b3:27:c8:22:b7:5c:
    d5:10:44:02:4e:01:3a:b2:87:2a:d0:f9:53:1d:33:
    e6:57:10:91:a0:98:ac:af:09:0e:e4:a5:76:b7:8d:
    db:23:a1:4b:df:7f:4b:80:91:5a:61:a4:4e:3d:38:
    c1:14:a7:8f:03
exponent1:
    00:9a:47:99:96:dd:0c:d3:53:8f:e6:d5:71:10:c6:
    c0:4a:49:d6:44:da:13:12:9a:c0:a6:86:e4:b2:2b:
    5d:ca:97:41:4f:2e:bf:c2:ee:ac:ae:73:82:92:8e:
    4b:00:05:af:24:6d:76:a8:29:f5:33:ff:a2:cb:b3:
    e7:8f:5d:9d:89
exponent2:
    2f:fa:62:0f:93:9f:96:9d:08:12:e1:71:60:75:80:
    eb:cc:bf:07:4e:93:87:3d:2b:3e:04:71:bf:82:89:
    6d:9e:74:ff:d1:e6:ef:81:75:aa:6b:9f:b5:a7:23:
    e6:ea:1d:bd:3e:b6:2f:78:cd:a9:63:55:5a:eb:3d:
    71:0e:8c:03
coefficient:
    00:b6:d7:f3:03:00:07:cf:e1:54:85:64:02:98:e0:
    69:bc:1b:96:fc:a8:58:58:a6:53:32:9b:f1:14:c4:
    98:60:3e:4f:1f:a3:4d:c8:b7:c1:c7:42:78:9c:63:
    cd:e6:7f:df:ea:29:74:b3:da:73:b7:79:28:fb:ef:
    de:52:d5:92:9d

$ sudo openssl x509 -noout -text -in /etc/ssl/certs/ssl-cert-snakeoil.pem
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            fb:3b:75:d7:c1:58:a5:5b
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=ubuntu
        Validity
            Not Before: Jun  7 08:03:53 2010 GMT
            Not After : Jun  4 08:03:53 2020 GMT
        Subject: CN=ubuntu
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    00:b6:ec:f8:99:c8:fa:9f:8a:ec:8a:9a:49:06:5d:
                    9d:80:71:3a:97:51:1f:4f:fd:8e:dd:50:2a:f9:87:
                    eb:cb:fc:d6:1f:7e:7f:56:76:ad:54:7b:66:b7:7f:
                    97:ea:ee:13:71:fd:56:fb:b3:c4:c9:4a:b8:c1:fb:
                    a8:15:ff:82:68:c3:65:39:63:f5:62:4c:fb:37:34:
                    b5:29:3e:7b:6b:9d:cb:d8:5e:db:49:46:cd:3c:6f:
                    d2:f6:b2:90:99:2e:29:cb:d2:58:d9:38:58:4f:4d:
                    b8:07:60:20:76:6f:8c:a8:28:4d:25:66:bc:cd:40:
                    a4:c0:a0:eb:d9:f2:3b:2f:7b
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha1WithRSAEncryption
        93:6a:ef:54:49:31:47:5a:8f:4a:92:62:b9:d0:ee:19:41:ed:
        58:a0:4a:5a:25:9b:99:2c:08:dd:11:19:4f:ed:2f:1b:e6:03:
        1b:96:b9:e4:d7:71:81:19:37:ce:14:27:b6:a1:7b:80:d6:23:
        2b:16:e5:04:1c:81:a5:8e:d6:d1:c8:b4:d2:47:fb:90:58:2f:
        5d:63:1e:53:de:8d:49:02:5d:9e:27:7f:63:1c:d0:62:7d:1d:
        18:bd:ab:4a:1d:5d:c1:d3:cd:34:f4:25:35:27:e7:11:0d:72:
        c4:57:af:47:b6:91:74:2a:93:6d:35:d4:de:6c:62:f1:86:92:
        b1:c1

The above output can also be seen if you direct your browser client to your website and check the certificate sent by the server to your browser. You will be able to look up all the details, including the validity dates, the subject common name and the public key modulus.

Capture an SSL connection using Wireshark And as you would have expected, looking at the low-level TCP data exchanged between the client and server with a TCP diagnostics tool (i.e. wireshark/tcpdump), you can also see the modulus in there. These were the settings I used to capture all traffic on the local loopback interface, matching the filter expression: tcp and ip and host 127.0.0.1 and port 443. This tells Wireshark to leave out any other information I may not have been interested in showing you.

    Read the article

  • CodePlex Daily Summary for Sunday, December 09, 2012

    CodePlex Daily Summary for Sunday, December 09, 2012Popular ReleasesMedia Companion: MediaCompanion3.509b: mc_com movie cache unassigned fields bug fixes - votes, movie set & originaltitle were not getting set. No changes to main application from previous release.VidCoder: 1.4.10 Beta: Added progress percent to the title bar/task bar icon. Added MPLS information to Blu-ray titles. Fixed the following display issues in Windows 8: Uncentered text in textbox controls Disabled controls not having gray text making them hard to identify as disabled Drop-down menus having hard-to distinguish white on light-blue text Added more logging to proxy disconnect issues and increased timeout on initial call to help prevent timeouts. Fixed encoding window showing the built-in pre...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.400: Version 2.5.0.400 (Release): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete Update the documentation. InfoMan: Write the documentation. Other Downloads Downloads OverviewYnote Classic: Ynote Classic version 1.0: Ynote Classic is a text editor made by SS Corporation. It can help you write code by providing you with different codes for creation of html or batch files. You can also create C/C++ /Java files with SS Ynote Classic. Author of Ynote Classic is Samarjeet Singh. Ynote Classic is available with different themes and skins. It can also compile *.bat files into an executable file. It also has a calculator built within it. 1st version released of 6-12-12 by Samarjeet Singh. Please contact on http:...Http Explorer: httpExplorer-1.1: httpExplorer now has the ability to connect to http server via web proxies. The proxy may be explicitly specified by hostname or IP address. Or it may be specified via the Internet Options settings of Windows. You may also specify credentials to pass to the proxy if the proxy requires them. These credentials may be NTLM or basic authentication (clear text username and password).Bee OPOA Platform: Bee OPOA Demo V1.0.001: Initial version.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.78: Fix for issue #18924 - using -pretty option left in ///#DEBUG blocks. Fix for issue #18980 - bad += optimization caused bug in resulting code. Optimization has been removed pending further review.Selenium PowerShell eXtensions: SePSX 0.4.8: Beta 1: Selenium 2.26, IEDriver 2.26, ChromeDriver 23 Beta 2: Selenium 2.27, IEDriver 2.27, ChromeDriver 23 Beta 3: Selenium 2.27.1, IEDriver 2.27, ChromeDriver 23 This release brings to us several interesting features: ChromeOptions cmdletsThe New-SeChromeOptions, Add-SeChromeArgument, Add-SeChromeExtension and Set-SeChromeBinary cmdlets along with the revisited Start-SeChrome cmdlet give now the full spectrum of possibilities to run a web driver, namely the following seven ways: bare start...Periodic.Net: 0.8: Whats new for Periodic.Net 0.8: New Element Info Dialog New Website MenuItem Minor Bug Fix's, improvements and speed upsHydroDesktop - CUAHSI Hydrologic Information System Desktop Application: 1.5.11 Experimental Release: This is HydroDesktop 1.5.11 Experimental Release We are targeting for a 1.6 Stable Release in Fall 2012. This experimental version has been published for testing. 
New Features in 1.5 Time Series Data Import Improved performance of table, graph and edit views Support for online sample project packages (sharing data and analyses) More detailed display of time series metadata Improved extension manager (uninstall extensions, choose extension source) Improved attribute table editor (supports fil...Yahoo! UI Library: YUI Compressor for .Net: Version 2.2.0.0 - Epee: New : Web Optimization package! Cleaned up the nuget packages BugFix: minifying lots of files will now be faster because of a recent regression in some code. (We were instantiating something far too many times).DtPad - .NET Framework text editor: DtPad 2.9.0.40: http://dtpad.diariotraduttore.com/files/images/flag-eng.png English + A new built-in editor for the management of CSV files, including the edit of cells, deleting and adding new rows, replacement of delimiter character and much more (issue #1137) + The limit of rows allowed before the decommissioning of their side panel has been raised (new default: 1.000) (issue #1155, only partially solved) + Pressing CTRL+TAB now DtPad opens a screen that shows the list of opened tabs (issue #1143) + Note...AvalonDock: AvalonDock 2.0.1746: Welcome to the new release of AvalonDock 2.0 This release contains a lot (lot) of bug fixes and some great improvements: Views Caching: Content of Documents and Anchorables is no more recreated everytime user move it. Autohide pane opens really fast now. Two new themes Expression (Dark and Light) and Metro (both of them still in experimental stage). If you already use AD 2.0 or plan to integrate it in your future projects, I'm interested in your ideas for new features: http://avalondock...AcDown?????: AcDown????? v4.3.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ?? v4.3.2?? ?????????????????? ??Acfun??????? ??Bilibili?????? ??Bilibili???????????? ??Bilibili????????? ??????????????? ???? ??Bilibili??????? ????32??64? Windows XP/...ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.2: ??FineUI ?? ExtJS ??? ASP.NET 2.0 ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License 2.0 (Apache) ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ ???? +2012-12-03 v3.2.2 -?????????????,?????button/button_menu.aspx(????)。 +?Window????Plain??;?ToolbarPosition??Footer??;?????FooterBarAlign??。 -????win...Player Framework by Microsoft: Player Framework for Windows Phone 8: This is a brand new version of the Player Framework for Windows Phone, available exclusively for Windows Phone 8, and now based upon the Player Framework for Windows 8. While this new version is not backward compatible with Windows Phone 7 (get that http://smf.codeplex.com/releases/view/88970), it does offer the same great feature set plus dozens of new features such as advertising, localization support, and improved skinning. 
Click here for more information about what's new in the Windows P...SSH.NET Library: 2012.12.3: New feature(s): + SynchronizeDirectoriesQuest: Quest 5.3 Beta: New features in Quest 5.3 include: Grid-based map (sponsored by Phillip Zolla) Changable POV (sponsored by Phillip Zolla) Game log (sponsored by Phillip Zolla) Customisable object link colour (sponsored by Phillip Zolla) More room description options (by James Gregory) More mathematical functions now available to expressions Desktop Player uses the same UI as WebPlayer - this will make it much easier to implement customisation options New sorting functions: ObjectListSort(list,...Chinook Database: Chinook Database 1.4: Chinook Database 1.4 This is a sample database available in multiple formats: SQL scripts for multiple database vendors, embeded database files, and XML format. The Chinook data model is available here. ChinookDatabase1.4_CompleteVersion.zip is a complete package for all supported databases/data sources. There are also packages for each specific data source. Supported Database ServersDB2 EffiProz MySQL Oracle PostgreSQL SQL Server SQL Server Compact SQLite Issues Resolved293...RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.34: changes FIXED: Thanks Function when "Download each post in it's own folder" is disabled FIXED: "PixHub.eu" linksNew ProjectsBaldur's Gate Party Gold Editor - WPF, Windows Forms MVP-VM sample: MVP-VM sample WPF, Windows Forms MVP-VM sampleEGRemote Studio: EGRemote Studio is a gateway application that enables communication between Eventghost and Google's push messaging service.Emptycanvas: Le projet emptycanvas est destiné aux créations de vidéos en images de synthèse ainsi qu'aux conceptions 3D sur ordinateur.eveCIMS - A Corporation Information and Management System for eve online: eveCIMS is a web application for managing a corporation in CCPs Spaceship Game EVE Online. Google Helper: Google HelperHongloumeng: ???RPG????Mobile Projects LFD: Projeto voltado a agrupar funções simples de plataformas moveis.Momra Transfers: This is transfers project for momraMuhammad Tarmizi bin Kamaruddin's simple free software (primary edition) (BETA): Muhammad Tarmizi bin Kamaruddin's simple free software (primary edition) (BETA)Nth Parameter Series: Silverlight Business application to use the concept of extending the double time series used extensivelly in the finance industry.OMR.EasyBackup: Example of easy real time file system backup project.SharePoint - Web Customization Inheritance: This project allows to inherit customizations of a site to is childrens, including master page, theme and logo. Tennis_HDU: TennisHDUTuto Direct3D 11 SDZ: Code source accompagnant le tutoriel disponible sur le Site du Zéro.Visual Studio 2010 Settings Swapper AddIn: Settings swapper makes it simple to have Visual Studio settings apply per file type. For example, maybe you want to have settings for a aspx MVC page be different from a C# file. The project was created in and tested with Visual Studio 2010 and written in C#.Weak Closure Pattern: The weak closures. Creation of the closure behavior that independent from a compiler.??MVC?????????: ??MVC ??????????,??????MVC3、Autofac、Lucene.net 3.0,??Npoi.net,Nhibernate、quartz.net???????,??????????????????????,????????,????

    Read the article

  • C#/.NET Little Wonders: The Useful But Overlooked Sets

    - by James Michael Hare
Once again we consider some of the lesser-known classes and keywords of C#. Today we will be looking at two set implementations in the System.Collections.Generic namespace: HashSet<T> and SortedSet<T>. Even though most people think of sets as mathematical constructs, they are actually very useful classes that can help make your application more performant if used appropriately.

A Background From Math In mathematical terms, a set is an unordered collection of unique items. In other words, the set {2,3,5} is identical to the set {3,5,2}. In addition, the set {2, 2, 4, 1} would be invalid because it would have a duplicate item (2). You can also perform set arithmetic on sets, such as:

Intersections: The intersection of two sets is the collection of elements common to both. Example: The intersection of {1,2,5} and {2,4,9} is the set {2}.
Unions: The union of two sets is the collection of unique items present in either or both sets. Example: The union of {1,2,5} and {2,4,9} is {1,2,4,5,9}.
Differences: The difference of two sets is the removal of all items from the first set that are common between the sets. Example: The difference of {1,2,5} and {2,4,9} is {1,5}.
Supersets: One set is a superset of a second set if it contains all elements that are in the second set. Example: The set {1,2,5} is a superset of {1,5}.
Subsets: One set is a subset of a second set if all the elements of that set are contained in the first set. Example: The set {1,5} is a subset of {1,2,5}.

If We're Not Doing Math, Why Do We Care? Now, you may be thinking: why bother with the set classes in C# if you have no need for mathematical set manipulation? The answer is simple: they are extremely efficient ways to determine membership in a collection.

For example, let's say you are designing an order system that tracks the price of a particular equity and, once it reaches a certain point, will trigger an order. Now, since there are tens of thousands of equities on the markets, you don't want to track market data for every ticker, as that would be a waste of time and processing power for symbols you don't have orders for. Thus, we just want to subscribe to the stock symbol for an equity order only if it is a symbol we are not already subscribed to.

Every time a new order comes in, we will check the list of subscriptions to see if the new order's stock symbol is in that list. If it is, great, we already have that market data feed! If not, then and only then should we subscribe to the feed for that symbol. So far so good: we have a collection of symbols, and we want to see if a symbol is present in that collection and, if not, add it. This really is the essence of set processing, but for the sake of comparison, let's say you use a list instead: 1: // class that handles our order processing service 2: public sealed class OrderProcessor 3: { 4: // contains list of all symbols we are currently subscribed to 5: private readonly List<string> _subscriptions = new List<string>(); 6: 7: ... 8: } Now, whenever you are adding a new order, it would look something like: 1: public PlaceOrderResponse PlaceOrder(Order newOrder) 2: { 3: // do some validation, of course... 4: 5: // check to see if already subscribed, if not add a subscription 6: if (!_subscriptions.Contains(newOrder.Symbol)) 7: { 8: // add the symbol to the list 9: _subscriptions.Add(newOrder.Symbol); 10: 11: // do whatever magic is needed to start a subscription for the symbol 12: } 13: 14: // place the order logic! 15: } What's wrong with this?
In short: performance! Finding an item inside a List<T> is a linear – O(n) – operation, which is not a very performant way to find whether an item exists in a collection. (I used to teach algorithms and data structures in my spare time at a local university, and when you began talking about big-O notation you could immediately see eyes glossing over, as if it were pure, useless theory that would not apply in the real world; but I did and still do believe it is something worth understanding well to make the best choices in computer science.)

Let's think about this: a linear operation means that as the number of items increases, the time it takes to perform the operation tends to increase in a linear fashion. Put crudely, this means that if you double the collection size, you might expect the operation to take something on the order of twice as long. Linear operations tend to be bad for performance because they mean that, to perform some operation on a collection, you must potentially "visit" every item in the collection. Consider finding an item in a List<T>: if you want to see if the list has an item, you must potentially check every item in the list before you find it or determine it's not found.

Now, we could of course sort our list and then perform a binary search on it, but sorting is typically of linear-logarithmic complexity – O(n log n) – and could involve temporary storage, so performing a sort after each add would probably add more time. As an alternative, we could use a SortedList<TKey, TValue>, which sorts the list on every Add(), but this has a similar level of complexity to move the items and also requires a key and a value, and in our case the key is the value.

This is why sets tend to be the best choice for this type of processing: they don't rely on separate keys and values for ordering – so they save space – and they typically don't care about ordering – so they tend to be extremely performant. The .NET BCL (Base Class Library) has had the HashSet<T> since .NET 3.5, but at that time it did not implement the ISet<T> interface. As of .NET 4.0, HashSet<T> implements ISet<T>, and a new set, the SortedSet<T>, was added that gives you a set with ordering.

HashSet<T> – For Unordered Storage of Sets When used right, HashSet<T> is a beautiful collection; you can think of it as a simplified Dictionary<T,T>, that is, a Dictionary where the TKey and TValue refer to the same object. This is really an oversimplification, but logically it makes sense. I've actually seen people code a Dictionary<T,T> where they store the same thing in the key and the value, and that's just inefficient because of the extra storage to hold both the key and the value.

As its name implies, the HashSet<T> uses a hashing algorithm to find the items in the set, which means it does take up some additional space, but it has lightning-fast lookups! Compare the times below between HashSet<T> and List<T>:

Operation | HashSet<T> | List<T>
Add() | O(1) | O(1) at end, O(n) in middle
Remove() | O(1) | O(n)
Contains() | O(1) | O(n)

Now, these times are amortized and represent the typical case. In the very worst case, the operations could be linear if they involve a resizing of the collection – but this is true for both the List and the HashSet, so that's less of an issue when comparing the two. The key thing to note is that, in the general case, HashSet is constant time for adds, removes, and contains!
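To make the difference tangible, here is a quick, admittedly crude micro-benchmark sketch; the timings are machine-dependent and the symbol data is made up, so treat it as illustrative rather than rigorous benchmarking.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    class ContainsBenchmark
    {
        static void Main()
        {
            const int count = 100000;
            List<string> list = Enumerable.Range(0, count).Select(i => "SYM" + i).ToList();
            HashSet<string> set = new HashSet<string>(list);

            // Probe every 10th symbol so lookups are spread across the collection.
            string[] probes = list.Where((s, i) => i % 10 == 0).ToArray();

            Stopwatch sw = Stopwatch.StartNew();
            foreach (string p in probes) list.Contains(p);   // O(n) per lookup
            Console.WriteLine("List<T>.Contains:    {0} ms", sw.ElapsedMilliseconds);

            sw.Restart();
            foreach (string p in probes) set.Contains(p);    // O(1) per lookup
            Console.WriteLine("HashSet<T>.Contains: {0} ms", sw.ElapsedMilliseconds);
        }
    }

On any reasonable machine the HashSet numbers stay essentially flat as the collection grows, which is exactly the constant-time behaviour the table above describes.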
This means that no matter how large the collection is, it takes roughly the same amount of time to find an item or determine that it's not in the collection. Compare this to the List, where almost any add or remove must rearrange potentially all the elements! And to find an item in the list (if unsorted), you must search every item in the List.

So, as you can see, if you want to create an unordered collection with very fast lookup and manipulation, the HashSet is a great collection. And since HashSet<T> implements ICollection<T> and IEnumerable<T>, it supports nearly all the same basic operations as the List<T> and can use the System.Linq extension methods as well.

All we have to do to switch from a List<T> to a HashSet<T> is change our declaration. Since List and HashSet support many of the same members, chances are we won't need to change much else. 1: public sealed class OrderProcessor 2: { 3: private readonly HashSet<string> _subscriptions = new HashSet<string>(); 4: 5: // ... 6: 7: public PlaceOrderResponse PlaceOrder(Order newOrder) 8: { 9: // do some validation, of course... 10: 11: // check to see if already subscribed, if not add a subscription 12: if (!_subscriptions.Contains(newOrder.Symbol)) 13: { 14: // add the symbol to the list 15: _subscriptions.Add(newOrder.Symbol); 16: 17: // do whatever magic is needed to start a subscription for the symbol 18: } 19: 20: // place the order logic! 21: } 22: 23: // ... 24: } 25: Notice, we didn't change any code other than the declaration of _subscriptions to be a HashSet<T>. Thus, we can pick up the performance improvements in this case with minimal code changes.

SortedSet<T> – Ordered Storage of Sets Just like HashSet<T> is logically similar to Dictionary<T,T>, the SortedSet<T> is logically similar to SortedDictionary<T,T>. The SortedSet can be used when you want to do set operations on a collection but also want to maintain that collection in sorted order. Now, this is not necessarily mathematically relevant, but if your collection's needs include order, this is the set to use.

The SortedSet seems to be implemented as a binary tree (possibly a red-black tree) internally. Since binary trees are dynamic, non-contiguous structures (unlike List and SortedList), inserts and deletes do not involve rearranging elements, only relinking nodes. There is some overhead in keeping the nodes in order, but it is much smaller than in a contiguous-storage collection like a List<T>. Let's compare the three:

Operation | HashSet<T> | SortedSet<T> | List<T>
Add() | O(1) | O(log n) | O(1) at end, O(n) in middle
Remove() | O(1) | O(log n) | O(n)
Contains() | O(1) | O(log n) | O(n)

The MSDN documentation seems to indicate that operations on SortedSet are O(1), but this is inconsistent with its implementation and appears to be a documentation error. There's actually a separate MSDN document (here) on SortedSet that indicates that it is, in fact, logarithmic in complexity. Let's put it in layman's terms: logarithmic means you can double the collection size and typically you only add a single extra "visit" to an item in the collection. Contrast that with List<T>'s linear operation, where if you double the size of the collection you double the "visits" to items in the collection. This is very good performance! It's still not as performant as HashSet<T>, where it always just visits one item (amortized), but for the addition of sorting this is a good trade.
Consider the following table; this is just illustrative data of the relative complexities, but it's enough to get the point:

Collection Size | O(1) Visits | O(log n) Visits | O(n) Visits
1 | 1 | 1 | 1
10 | 1 | 4 | 10
100 | 1 | 7 | 100
1000 | 1 | 10 | 1000

Notice that the logarithmic – O(log n) – visit count goes up very slowly compared to the linear – O(n) – visit count. This is because, since the collection is sorted, it can do one check in the middle, determine which half of the collection the data is in, and discard the other half (a binary search). So, if you need your set to be sorted, you can use the SortedSet<T> just like the HashSet<T> and gain sorting for a small performance hit, and it's still faster than a List<T>.

Unique Set Operations Now, if you do want to perform more set-like operations, both implementations of ISet<T> support the following, which map back to the mathematical set operations described before (a short code sketch follows this post):

IntersectWith() – Performs the set intersection of two sets. Modifies the current set so that it only contains elements also in the second set.
UnionWith() – Performs a set union of two sets. Modifies the current set so it contains all elements present in the current set, the second set, or both.
ExceptWith() – Performs a set difference of two sets. Modifies the current set so that it removes all elements present in the second set.
IsSupersetOf() – Checks if the current set is a superset of the second set.
IsSubsetOf() – Checks if the current set is a subset of the second set.

For more information on the set operations themselves, see the MSDN description of ISet<T> (here).

What Sets Don't Do Don't get me wrong, sets are not silver bullets. You don't really want to use a set when you want separate key-to-value lookups; that's what the IDictionary implementations are best for. Also, sets don't store temporal add-order. That is, if you are adding items to the end of a list all the time, your list is ordered in terms of when items were added to it. This is something the sets don't do naturally (though you could use a SortedSet with an IComparer over a DateTime, but that's overkill), while List<T> can.

Also, List<T> allows indexing, which is a blazingly fast way to iterate through items in the collection. Iterating over all the items in a List<T> is generally much, much faster than iterating over a set.

Summary Sets are an excellent tool for maintaining a lookup table where the item is both the key and the value. In addition, if you have need of the mathematical set operations, the C# sets support those as well. The HashSet<T> is the set of choice if you want the fastest possible lookups but don't care about order. In contrast, the SortedSet<T> will give you a sorted collection at a slight reduction in performance.

Technorati Tags: C#,CSharp,.NET,Little Wonders,BlackRabbitCoder,ISet,HashSet,SortedSet
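As promised above, here is a minimal sketch of the ISet<T> operations just listed, reusing the {1,2,5} and {2,4,9} sets from the math introduction; note that the mutating operations modify the set they are called on, so we copy first.

    using System;
    using System.Collections.Generic;

    class SetOperationsDemo
    {
        static void Main()
        {
            var first = new HashSet<int> { 1, 2, 5 };
            var second = new HashSet<int> { 2, 4, 9 };

            var union = new HashSet<int>(first);
            union.UnionWith(second);                 // { 1, 2, 4, 5, 9 }

            var intersection = new HashSet<int>(first);
            intersection.IntersectWith(second);      // { 2 }

            var difference = new HashSet<int>(first);
            difference.ExceptWith(second);           // { 1, 5 }

            Console.WriteLine(first.IsSupersetOf(new[] { 1, 5 }));           // True
            Console.WriteLine(new HashSet<int> { 1, 5 }.IsSubsetOf(first));  // True

            // SortedSet<T> supports the same operations, kept in sorted order.
            var sorted = new SortedSet<int> { 5, 1, 9 };
            Console.WriteLine(string.Join(", ", sorted));                    // 1, 5, 9
        }
    }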

    Read the article

  • CodePlex Daily Summary for Saturday, December 08, 2012

    CodePlex Daily Summary for Saturday, December 08, 2012Popular ReleasesYnote Classic: Ynote Classic version 1.0: Ynote Classic is a text editor made by SS Corporation. It can help you write code by providing you with different codes for creation of html or batch files. You can also create C/C++ /Java files with SS Ynote Classic. Author of Ynote Classic is Samarjeet Singh. Ynote Classic is available with different themes and skins. It can also compile *.bat files into an executable file. It also has a calculator built within it. 1st version released of 6-12-12 by Samarjeet Singh. Please contact on http:...Http Explorer: httpExplorer-1.1: httpExplorer now has the ability to connect to http server via web proxies. The proxy may be explicitly specified by hostname or IP address. Or it may be specified via the Internet Options settings of Windows. You may also specify credentials to pass to the proxy if the proxy requires them. These credentials may be NTLM or basic authentication (clear text username and password).Bee OPOA Platform: Bee OPOA Demo V1.0.001: Initial version.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.78: Fix for issue #18924 - using -pretty option left in ///#DEBUG blocks. Fix for issue #18980 - bad += optimization caused bug in resulting code. Optimization has been removed pending further review.Torrents-List Organizer: Torrents-list organizer v 0.5.0.0: ????? ? ?????? 0.5.0.0: 1) ????? ? ?????? ?????? ??????????? ???????? ?????????? ? ?????? ????????. ??????????????, ????????? ???????, ??? ????????? ???????? ?????? ("???????? ?? ??????"). 2) ????????? ???????? ???????? ???????-??????. ????? ??????, ?????? ??? ?????? ?????????????? ??? ???????-?????. 3) ?????? ??? ??????? ??????? ?????? ? ????? ????????????? ?????????? ???????????, ? ?????????????, ??????? ?????? ????? ?????? ??????????, ?????????? ???? ??????? ???? ?????? ???????? ??????? ? ...fastJSON: v2.0.11: - bug fix single char number json - added UseEscapedUnicode parameter for controlling string output in \uxxxx for unicode/utf8 format - bug fix null and generic ToObject<>() - bug fix List<> of custom typesMedia Companion: MediaCompanion3.508b: Recommended Download - Fixes IMDB title scrape bug and several mc_com movie and actor cache bugs.Measure It: MeasureIt v0.2.1: Updated with lots of bug fixes and support for interactive measurements in LinqPadPeriodic.Net: 0.8: Whats new for Periodic.Net 0.8: New Element Info Dialog New Website MenuItem Minor Bug Fix's, improvements and speed upsYahoo! UI Library: YUI Compressor for .Net: Version 2.2.0.0 - Epee: New : Web Optimization package! Cleaned up the nuget packages BugFix: minifying lots of files will now be faster because of a recent regression in some code. 
(We were instantiating something far too many times).DtPad - .NET Framework text editor: DtPad 2.9.0.40: http://dtpad.diariotraduttore.com/files/images/flag-eng.png English + A new built-in editor for the management of CSV files, including the edit of cells, deleting and adding new rows, replacement of delimiter character and much more (issue #1137) + The limit of rows allowed before the decommissioning of their side panel has been raised (new default: 1.000) (issue #1155, only partially solved) + Pressing CTRL+TAB now DtPad opens a screen that shows the list of opened tabs (issue #1143) + Note...AvalonDock: AvalonDock 2.0.1746: Welcome to the new release of AvalonDock 2.0 This release contains a lot (lot) of bug fixes and some great improvements: Views Caching: Content of Documents and Anchorables is no more recreated everytime user move it. Autohide pane opens really fast now. Two new themes Expression (Dark and Light) and Metro (both of them still in experimental stage). If you already use AD 2.0 or plan to integrate it in your future projects, I'm interested in your ideas for new features: http://avalondock...AcDown?????: AcDown????? v4.3.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ?? v4.3.2?? ?????????????????? ??Acfun??????? ??Bilibili?????? ??Bilibili???????????? ??Bilibili????????? ??????????????? ???? ??Bilibili??????? ????32??64? Windows XP/...ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.2: ??FineUI ?? ExtJS ??? ASP.NET 2.0 ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License 2.0 (Apache) ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ ???? +2012-12-03 v3.2.2 -?????????????,?????button/button_menu.aspx(????)。 +?Window????Plain??;?ToolbarPosition??Footer??;?????FooterBarAlign??。 -????win...Player Framework by Microsoft: Player Framework for Windows Phone 8: This is a brand new version of the Player Framework for Windows Phone, available exclusively for Windows Phone 8, and now based upon the Player Framework for Windows 8. While this new version is not backward compatible with Windows Phone 7 (get that http://smf.codeplex.com/releases/view/88970), it does offer the same great feature set plus dozens of new features such as advertising, localization support, and improved skinning. Click here for more information about what's new in the Windows P...SSH.NET Library: 2012.12.3: New feature(s): + SynchronizeDirectoriesQuest: Quest 5.3 Beta: New features in Quest 5.3 include: Grid-based map (sponsored by Phillip Zolla) Changable POV (sponsored by Phillip Zolla) Game log (sponsored by Phillip Zolla) Customisable object link colour (sponsored by Phillip Zolla) More room description options (by James Gregory) More mathematical functions now available to expressions Desktop Player uses the same UI as WebPlayer - this will make it much easier to implement customisation options New sorting functions: ObjectListSort(list,...Chinook Database: Chinook Database 1.4: Chinook Database 1.4 This is a sample database available in multiple formats: SQL scripts for multiple database vendors, embeded database files, and XML format. 
The Chinook data model is available here. ChinookDatabase1.4_CompleteVersion.zip is a complete package for all supported databases/data sources. There are also packages for each specific data source. Supported Database ServersDB2 EffiProz MySQL Oracle PostgreSQL SQL Server SQL Server Compact SQLite Issues Resolved293...RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.34: changes FIXED: Thanks Function when "Download each post in it's own folder" is disabled FIXED: "PixHub.eu" linksD3 Loot Tracker: 1.5.6: Updated to work with D3 version 1.0.6.13300New ProjectsAqui Estoy ( IP announcing Tool): Aqui Estoy its a tool that once installed on a computer announces its IP to another computer, so it can be localized, it can be used to find computer that have a dinamic IP. Developed with vb.net and sockets technology. Bing Maps for Windows Store Apps Training Kit: This training kit consists of a power point slide deck which gives an overview of how to create a Windows Store App that uses Bing Maps. BTFramework: Beauty Code FrameworkCAML Builder: Small and simple API which allows you to easily write CAML queries, in a declarative way.Easy sound: Easy sound is a .net tool for managing audio stream in the memory. It joins existing wav streams into single one using different method.EBusiness: Little Prototype of an education ProjectField Validator: Silverlight 3 Field Validator finalproject: Web 2.0 project about Real Madrid Community.fossilGui: Gui for fossil (http://www.fossil-scm.org)HubLog: Tool for application administrator: collects and joins logs from multiple instances of an application, and displays them. Example of usage in C#: the telnet protocol, dynamically compiled functions as elements of configuration.Implementing Google Protocol Buffers using C#: Demonstration for Implementing Google Protocol Buffers using C# Jarvis PSO Course Project: Project for the course Programmazione di Sistema taught @ Politecnico di TorinoL3374tw: Translates text into L337log4net Dynamics CRM 2011 Appender: log4net Dynamics CRM 2011 AppenderNote Garden: Note Garden is my extension to NodeGarden (http://alphalabs.codeplex.com) for Windows Phone 8, using Silverlight. NV2: This library will support playback of v2m files.Paste As Plugin For Windows Live Writer: The Paste As Plugin for Windows Live Writer is a simple plugin which steamlines the pasting of text and HTML into a WLW post.PMS_LSI: PMS LSIPoker Calculator: Monte Poker is poker utility which calculates probabilities of handspostleitzahlensuche: Postleitzahlen Suche C# WPF Applikation; Suche nach Postleitzahlen oder Orten => Liefert als Ergebnis eine Liste von übereinstimmenden Orten mit deren PLZsPythagorean Theorem in WPF: This project shows off using the Pythagorean Theorem in a simple drawing WPF Application. Measuring the distance between 2 objects in WPF can easily be achieved using this approach.Resource management language: This project is a try to create an automata-based resource management language in C#. 
Resource management means the work with computational resources (possibly threads, processors) in order to execute a program.Resource-Based Economy: A .Net framework to help with the implementation of a global Resource-Based Economy.Sinbiota 2.0 prototype: SinBiota 2.0 prototype is an open source biodiversity information system, which allows the presentation of biodiversity data through an enhanced GIS framework.Snapword: Work in progress.UISandbox: UISandbox is a sample C# source code showing how to deal with plugins requiring sandbox, when those plugins must interact with WPF application interface (classically display child controls inside application window).UnitFundProfitability: This program helps to calculate real profitability of your investments in unit funds. It knows about markdown rate, markup rate, taxes and other payments, which decrease declaring profitability.Z3 Test Suite: Test suite for the Z3 theorem prover.

    Read the article

  • CodePlex Daily Summary for Friday, October 25, 2013

    CodePlex Daily Summary for Friday, October 25, 2013Popular Releases7zbackup - PowerShell Script to Backup Files with 7zip: 7zBackup v. 1.9.5 Stable: Do you like this piece of software ? It took some time and effort to develop. Please consider a helping me with a donation Or please visit my blog Code : Rewritten the launcher of 7zip whith "old" fashioned batch. Better handling of exit codes Feat : New argument --notifyextra to drive the way extra information is delivered with the notification log Bug : NoFollowJunctions switch was inverted Feat : Added new directives maxfilesize and minfilesize to enhance file selection upon their ...Gac Library -- C++ Utilities for GPU Accelerated GUI and Script: Gaclib 0.5.5.0: Gaclib.zip contains the following content GacUIDemo Demo solution and projects Public Source GacUI library Document HTML document. Please start at reference_gacui.html Content Necessary CSS/JPG files for document. Improvements to the previous release Add 1 demos Editor.Toolstrip.Document Added new features GuiDocumentViewer and GuiDocumentLabel is editable like an RichTextEdit control.PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.0.7: This is a bug fix release, containing some important fixes! Fixed issue where Session 0 was not detected correctly, resulting in issues when attempting to display a UI when none was allowed Fixed Installation Prompt and Installation Restart Prompt appearing when deploy mode was non-interactive or silent Fixed issue where defer prompt is displayed after force closing multiple applications Fixed issue executing blocked app execution dialog from UNC path (executed instead from local tempo...BlackJumboDog: Ver5.9.7: 2013.10.24 Ver5.9.7 (1)FTP???????、2?????????????shift-jis????????????? (2)????HTTP????、???????POST??????????????????CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.1.0.34322 Alpha 4: This experimental release of the CtrlAltStudio Viewer includes the following significant features: Oculus Rift support. Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-1-0-34322-alpha-4 Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or sup...ImapX 2: ImapX 2.0.0.13: The long awaited ImapX 2.0.0.13 release. The library has been rewritten from scratch, massive optimizations and refactoring done. Several new features introduced. Added support for Mono. Added support for Windows Phone 7.1 and 8.0 Added support for .Net 2.0, 3.0 Simplified connecting to server by reducing the number of parameters and adding automatic port selection. Changed authentication handling to be universal and allowing to build custom providers. Added advanced server featur...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 32 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) This release has been tested with Visual Studio 2008, 2010, 2012 and 2013, using TortoiseSVN 1.6, 1.7 and 1.8. It should also still work with Visual Studio 2005, but I couldn't find anyone to test it in VS2005. Build 32 (beta) changelogNew: Added Visual Studio 2013 support New: Added Visual Studio 2012 support New: Added SVN 1.8 support New: Added 'Ch...ABCat: ABCat v.2.0.1a: ?????????? ???????? ? ?????????? ?????? ???? ??? Win7. 
????????? ?????? ????????? ?? ???????. ????? ?????, ???? ????? ???????? ????????? ?????????? ????????? "?? ??????? ????? ???????????? ?????????? ??????...", ?? ?????????? ??????? ? ?????????? ?????? Microsoft SQL Ce ?? ????????? ??????: http://www.microsoft.com/en-us/download/details.aspx?id=17876. ???????? ?????? x64 ??? x86 ? ??????????? ?? ?????? ???????????? ???????. ??? ??????? ????????? ?? ?????????? ?????? Entity Framework, ? ???? ...patterns & practices: Data Access Guidance: Data Access Guidance 2013: This is the 2013 release of Data Access Guidance. The documentation for this RI is also available on MSDN: Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence: http://msdn.microsoft.com/en-us/library/dn271399.aspxMedia Companion: Media Companion MC3.584b: IMDB changes fixed. Fixed* mc_com.exe - Fixed to using new profile entries. * Movie - fixed rename movie and folder if use foldername selected. * Movie - Alt Edit Movie, trailer url check if changed and confirm valid. * Movie - Fixed IMDB poster scraping * Movie - Fixed outline and Plot scraping, including removal of Hyperlink's. * Movie Poster refactoring, attempts to catch gdi+ errors Revision HistoryJayData -The unified data access library for JavaScript: JayData 1.3.4: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video KendoUI examples: JayData example site Examples for map integration JayData example site What's new in JayData 1.3.4 For detailed release notes check ...TerrariViewer: TerrariViewer v7.2 [Terraria Inventory Editor]: Added "Check for Update" button Hopefully fixed Windows XP issue You can now backspace in Item stack fieldsVirtual Wifi Hotspot for Windows 7 & 8: Virtual Router Plus 2.6.0: Virtual Router Plus 2.6.0Fast YouTube Downloader: Fast YouTube Downloader 2.3.0: Fast YouTube DownloaderMagick.NET: Magick.NET 6.8.7.101: Magick.NET linked with ImageMagick 6.8.7.1. Breaking changes: - Renamed Matrix classes: MatrixColor = ColorMatrix and MatrixConvolve = ConvolveMatrix. - Renamed Depth method with Channels parameter to BitDepth and changed the other method into a property.VidCoder: 1.5.9 Beta: Added Rip DVD and Rip Blu-ray AutoPlay actions for Windows: now you can have VidCoder start up and scan a disc when you insert it. Go to Start -> AutoPlay to set it up. Added error message for Windows XP users rather than letting it crash. Removed "quality" preset from list for QSV as it currently doesn't offer much improvement. Changed installer to ignore version number when copying files over. Should reduce the chances of a bug from me forgetting to increment a version number. Fixed ...MSBuild Extension Pack: October 2013: Release Blog Post The MSBuild Extension Pack October 2013 release provides a collection of over 480 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...VG-Ripper & PG-Ripper: VG-Ripper 2.9.49: changes NEW: Added Support for "ImageTeam.org links NEW: Added Support for "ImgNext.com" links NEW: Added Support for "HostUrImage.com" links NEW: Added Support for "3XVintage.com" linksmyCollections: Version 2.8.7.0: New in this version : Added Public Rating Added Collection Number Added Order by Collection Number Improved XBMC integrations Play on music item will now launch default player. Settings are now saved in database. Tooltip now display sort information. Fix Issue with Stars on card view. Fix Bug with PDF Export. Fix Bug with technical information's. Fix HotMovies Provider. Improved Performance on Save. Bug FixingMoreTerra (Terraria World Viewer): MoreTerra 1.11.3.1: Release 1.11.3.1 ================ = New Features = ================ Added markers for Copper Cache, Silver Cache and the Enchanted Sword. ============= = Bug Fixes = ============= Use Official Colors now no longer tries to change the Draw Wires option instead. World reading was breaking for people with a stock 1.2 Terraria version. Changed world name reading so it does not crash the program if you load MoreTerra while Terraria is saving the world. =================== = Feature Removal = =...New ProjectsAdder: Adder is simply converter for WPF binding. They add a value from parameter to target if a target int or double, concatenate parameter and value if type a strinAir Control ATV: App for Windows Phone to remote control Apple TV with FireCore aTV Flash (or Black) and AirControl installed.Bangladeshi Open Source Windows 8/Phone Apps: The largest Bangladeshi Open Source project with a vision. Mostly community-contributed Windows 8 and Windows Phone apps.BTB: Tumor board presentation tool for oncologyBulletin: not completed yet. the basic functions are ready, though.CRM Dashboard for Lync: Dynamics CRM Dashboard for Lync is an add-on that makes both applications seamlessly communicate between one another.Crowd CMS: Crowd CMS - A Crowd Funded Web Content Management System Built Using ASP.Net MVC 4 Website: http://www.crowdcms.co.uk CrowdCube: http://goo.gl/Mnd1Xsdusanproject: It is a project for Web Scripting and Application Development (COM), School of Computer of Science, University of HertfordshireElectronic Integrated Disease Surveillance System (EIDSS™) - Web Version: EIDSS can be configured as an electronic disease surveillance network, implemented in a region, to support public health practitioners and epidemiologists.Facebook Login Tool: Facebook Login ToolGadgeteer interface: Project aimed at creating common wrappers and interfaces for common components. This would allow for building applications based on the interfaces, this should Madoko: Madoko is a fast javascript Markdown processor written in Koka. 
It can render beautiful HTML and PDF (via LaTeX) and supports many Markdown extensions.MinotaurTeam: Project Management System using ASP.NET MVCmobSocial: Open Source Social Network for free!Online Room Reservation for Exchange: When you have Microsoft Exchange Server and meeting rooms you share with people from outside your company this is the ideal application for you.Pescar2013-shop-Masapan: aproyecto_Andy_y_Meli: mely and andyQMachine: A platform for World Wide Computingquarkz: It's just a try-out of Visual C++ (on example of Windows Forms Application). In this project I tried out to re-write one simple Flash game called quarkz in VC++TFS Workspaces Cleaner: TFS Workspaces Cleaner deletes Team Foundation Server workspaces that have not been accessed in a number of days, along with their files locally on disk.Ticketing System With ASP.NET MVC: Ticketing System where visitors (without authentication) should be able to view most commented tickets, as well as to register and login in the system. RegisterTotalFreedom Angular JS MVC: This library provides more tight integration between C# classes and AngularJS, helps to those who prefer to use as much C# and as little JavaScript as possiblevWordToHtml: vWordToHtmlvwpHistory: vwpHistoryweibowinform: weibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformXompare - XML files comparison: Xompare - XML files comparison

    Read the article

  • 12c - Utl_Call_Stack...

    - by noreply(at)blogger.com (Thomas Kyte)
    Over the next couple of months, I'll be writing about some cool new little features of Oracle Database 12c - things that might not make the front page of Oracle.com.  I'm going to start with a new package - UTL_CALL_STACK.

    In the past, developers have had access to three functions to try to figure out "where the heck am I in my code"; they were:

    dbms_utility.format_call_stack
    dbms_utility.format_error_backtrace
    dbms_utility.format_error_stack

    Now these routines, while useful, were of somewhat limited use.  Let's look at the format_call_stack routine for a reason why.  Here is a procedure that will just print out the current call stack for us:

    ops$tkyte%ORA12CR1> create or replace
      2  procedure Print_Call_Stack
      3  is
      4  begin
      5    DBMS_Output.Put_Line(DBMS_Utility.Format_Call_Stack());
      6  end;
      7  /
    Procedure created.

    Now, if we have a package - with nested functions and even duplicated function names:

    ops$tkyte%ORA12CR1> create or replace
      2  package body Pkg is
      3    procedure p
      4    is
      5      procedure q
      6      is
      7        procedure r
      8        is
      9          procedure p is
     10          begin
     11            Print_Call_Stack();
     12            raise program_error;
     13          end p;
     14        begin
     15          p();
     16        end r;
     17      begin
     18        r();
     19      end q;
     20    begin
     21      q();
     22    end p;
     23  end Pkg;
     24  /
    Package body created.

    When we execute the procedure PKG.P - we'll see as a result:

    ops$tkyte%ORA12CR1> exec pkg.p
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    0x6e891528         4  procedure OPS$TKYTE.PRINT_CALL_STACK
    0x6ec4a7c0        10  package body OPS$TKYTE.PKG
    0x6ec4a7c0        14  package body OPS$TKYTE.PKG
    0x6ec4a7c0        17  package body OPS$TKYTE.PKG
    0x6ec4a7c0        20  package body OPS$TKYTE.PKG
    0x76439070         1  anonymous block
    BEGIN pkg.p; END;

    *
    ERROR at line 1:
    ORA-06501: PL/SQL: program error
    ORA-06512: at "OPS$TKYTE.PKG", line 11
    ORA-06512: at "OPS$TKYTE.PKG", line 14
    ORA-06512: at "OPS$TKYTE.PKG", line 17
    ORA-06512: at "OPS$TKYTE.PKG", line 20
    ORA-06512: at line 1

    The first block above (the "PL/SQL Call Stack") is the output from format_call_stack, whereas the ORA- messages that follow are the error returned to the client application (the latter would also be available to you via the format_error_backtrace API call).  As you can see - it contains useful information, but to use it you would need to parse it - and that can be trickier than it seems.  The format of those strings is not set in stone; they have changed over the years (I wrote the "who_am_i" and "who_called_me" functions by parsing these strings - trust me, they change over time!).

    Starting in 12c - we'll have structured access to the call stack and a series of API calls to interrogate this structure.  I'm going to rewrite the print_call_stack function as follows:

    ops$tkyte%ORA12CR1> create or replace
      2  procedure Print_Call_Stack
      3  as
      4    Depth pls_integer := UTL_Call_Stack.Dynamic_Depth();
      5
      6    procedure headers
      7    is
      8    begin
      9        dbms_output.put_line( 'Lexical   Depth   Line    Name' );
     10        dbms_output.put_line( 'Depth             Number      ' );
     11        dbms_output.put_line( '-------   -----   ----    ----' );
     12    end headers;
     13    procedure print
     14    is
     15    begin
     16        headers;
     17        for j in reverse 1..Depth loop
     18          DBMS_Output.Put_Line(
     19            rpad( utl_call_stack.lexical_depth(j), 10 ) ||
     20            rpad( j, 7) ||
     21            rpad( To_Char(UTL_Call_Stack.Unit_Line(j), '99'), 9 ) ||
     22            UTL_Call_Stack.Concatenate_Subprogram
     23                       (UTL_Call_Stack.Subprogram(j)));
     24        end loop;
     25    end;
     26  begin
     27    print;
     28  end;
     29  /

    Here we are able to figure out what 'depth' we are in the code (utl_call_stack.dynamic_depth) and then walk up the stack using a loop.  We will print out the lexical_depth, along with the line number within the unit we were executing, plus the unit name.  And not just any unit name, but the fully qualified name, all of the way down to the subprogram name within a package.  Not only that - but down to the subprogram name within a subprogram name within a subprogram name.  For example - running the PKG.P procedure again results in:

    ops$tkyte%ORA12CR1> exec pkg.p
    Lexical   Depth   Line    Name
    Depth             Number
    -------   -----   ----    ----
    1         6       20      PKG.P
    2         5       17      PKG.P.Q
    3         4       14      PKG.P.Q.R
    4         3       10      PKG.P.Q.R.P
    0         2       26      PRINT_CALL_STACK
    1         1       17      PRINT_CALL_STACK.PRINT
    BEGIN pkg.p; END;

    *
    ERROR at line 1:
    ORA-06501: PL/SQL: program error
    ORA-06512: at "OPS$TKYTE.PKG", line 11
    ORA-06512: at "OPS$TKYTE.PKG", line 14
    ORA-06512: at "OPS$TKYTE.PKG", line 17
    ORA-06512: at "OPS$TKYTE.PKG", line 20
    ORA-06512: at line 1

    This time - we get much more than just a line number and a package name as we did previously with format_call_stack.  We not only got the line number and package (unit) name - we got the names of the subprograms - we can see that P called Q called R called P as nested subprograms.  Also note that we can see a 'truer' calling level with the lexical depth; we can see we "stepped" out of the package to call print_call_stack and that in turn called another nested subprogram.

    This new package will be a nice addition to everyone's error logging packages.  Of course there are other functions in there to get owner names, the edition in effect when the code was executed and more.  See UTL_CALL_STACK for all of the details.
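
    The post focuses on the call stack, but the same package also exposes the error and backtrace stacks in structured form.  A minimal sketch of my own (not from the post) of an error logger built on those calls, intended to be invoked from an exception handler:

        create or replace
        procedure log_error_stack
        as
        begin
            -- Structured replacement for format_error_stack:
            for i in 1 .. utl_call_stack.error_depth loop
                dbms_output.put_line(
                    'ORA-' || lpad(utl_call_stack.error_number(i), 5, '0') ||
                    ': ' || utl_call_stack.error_msg(i) );
            end loop;
            -- Structured replacement for format_error_backtrace:
            for i in 1 .. utl_call_stack.backtrace_depth loop
                dbms_output.put_line(
                    nvl(utl_call_stack.backtrace_unit(i), 'anonymous block') ||
                    ' line ' || utl_call_stack.backtrace_line(i) );
            end loop;
        end;
        /

    Because the depths, numbers, messages and unit names come back as discrete values, nothing needs to be parsed out of a formatted string anymore.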

    Read the article

  • Oracle Database 12c New Feature: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c????Information Lifecycle Management ILM ?????????Storage Enhancements ???????? Lifecycle Management ILM ????????? Automatic Data Placement ??????, ??ADP? ?????? 12c???????Datafile??? Online Move Datafile, ????????????????datafile???????,??????????????? ????(12.1.0.1)Automatic Data Optimization?heat map????????: ????????? (CDB)?????Automatic Data Optimization?heat map Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns. Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column. Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level. ADO does not perform checks for storage space in a target tablespace when using storage tiering. ADO is not supported on tables with object types or materialized views. ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later. Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode. ADO has restrictions related to moving tables and table partitions. ??????row,segment???????????ADO??,?????create table?alter table?????? ????ADO??,??????????????,???????????????? storage tier , ?????????storage tier?????????, ??????????????ADO??????????? segment?row??group? ?CREATE TABLE?ALERT TABLE???ILM???,??????????????????ADO policy? ??ILM policy???????????????? ??????? ????ADO policy, ?????alter table  ???????,?????????????? CREATE TABLE sales_ado (PROD_ID NUMBER NOT NULL, CUST_ID NUMBER NOT NULL, TIME_ID DATE NOT NULL, CHANNEL_ID NUMBER NOT NULL, PROMO_ID NUMBER NOT NULL, QUANTITY_SOLD NUMBER(10,2) NOT NULL, AMOUNT_SOLD NUMBER(10,2) NOT NULL ) ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled 2 FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLED -------------------- -------------------------- -------------- P41 DATA MOVEMENT YES ALTER TABLE sales MODIFY PARTITION sales_1995 ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLE ------------------------ ------------- ------ P1 DATA MOVEMENT YES P2 DATA MOVEMENT YES /* You can disable an ADO policy with the following */ ALTER TABLE sales_ado ILM DISABLE POLICY P1; /* You can delete an ADO policy with the following */ ALTER TABLE sales_ado ILM DELETE POLICY P1; /* You can disable all ADO policies with the following */ ALTER TABLE sales_ado ILM DISABLE_ALL; /* You can delete all ADO policies with the following */ ALTER TABLE sales_ado ILM DELETE_ALL; /* You can disable an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2; /* You can delete an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2; ILM ???????: ?????ILM ADP????,???????: ?????? ???? 
activity tracking, ????2????????,???????????????????: SEGMENT-LEVEL???????????????????? ROW-LEVEL????????,??????? ????????: 1??????? SEGMENT-LEVEL activity tracking ALTER TABLE interval_sales ILM  ENABLE ACTIVITY TRACKING SEGMENT ACCESS ???????INTERVAL_SALES??segment level  activity tracking,?????????????????? 2? ??????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME); 3????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING  (READ TIME); ?12.1.0.1.0?????? ??HEAT_MAP??????????, ?????system??session?????heap_map????????????? ?????????HEAT MAP??,? ALTER SYSTEM SET HEAT_MAP = ON; ?HEAT MAP??????,??????????????????????????  ??SYSTEM?SYSAUX????????????? ???????HEAT MAP??: ALTER SYSTEM SET HEAT_MAP = OFF; ????? HEAT_MAP????, ?HEAT_MAP??? ?????????????????????? ?HEAT_MAP?????????Automatic Data Optimization (ADO)??? ??ADO??,Heat Map ?????????? ????V$HEAT_MAP_SEGMENT ??????? HEAT MAP?? SQL> select * from V$heat_map_segment; no rows selected SQL> alter session set heat_map=on; Session altered. SQL> select * from scott.emp; EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 7369 SMITH CLERK 7902 17-DEC-80 800 20 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30 7566 JONES MANAGER 7839 02-APR-81 2975 20 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 30 7782 CLARK MANAGER 7839 09-JUN-81 2450 10 7788 SCOTT ANALYST 7566 19-APR-87 3000 20 7839 KING PRESIDENT 17-NOV-81 5000 10 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30 7876 ADAMS CLERK 7788 23-MAY-87 1100 20 7900 JAMES CLERK 7698 03-DEC-81 950 30 7902 FORD ANALYST 7566 03-DEC-81 3000 20 7934 MILLER CLERK 7782 23-JAN-82 1300 10 14 rows selected. SQL> select * from v$heat_map_segment; OBJECT_NAME SUBOBJECT_NAME OBJ# DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID -------------------- -------------------- ---------- ---------- --------- --- --- --- --- ---------- EMP 92997 92997 23-JUL-13 NO NO YES NO 0 ??v$heat_map_segment???,?v$heat_map_segment??????????????X$HEATMAPSEGMENT V$HEAT_MAP_SEGMENT displays real-time segment access information. Column Datatype Description OBJECT_NAME VARCHAR2(128) Name of the object SUBOBJECT_NAME VARCHAR2(128) Name of the subobject OBJ# NUMBER Object number DATAOBJ# NUMBER Data object number TRACK_TIME DATE Timestamp of current activity tracking SEGMENT_WRITE VARCHAR2(3) Indicates whether the segment has write access: (YES or NO) SEGMENT_READ VARCHAR2(3) Indicates whether the segment has read access: (YES or NO) FULL_SCAN VARCHAR2(3) Indicates whether the segment has full table scan: (YES or NO) LOOKUP_SCAN VARCHAR2(3) Indicates whether the segment has lookup scan: (YES or NO) CON_ID NUMBER The ID of the container to which the data pertains. Possible values include:   0: This value is used for rows containing data that pertain to the entire CDB. This value is also used for rows in non-CDBs. 1: This value is used for rows containing data that pertain to only the root n: Where n is the applicable container ID for the rows containing data The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored. ??HEAP MAP??????????????????,????DBA_HEAT_MAP_SEGMENT???????? ???????HEAT_MAP_STAT$?????? ??Automatic Data Optimization??????: ????1: SQL> alter system set heat_map=on; ?????? ????????????? scott?? http://www.askmaclean.com/archives/scott-schema-script.html SQL> grant all on dbms_lock to scott; ????? 
SQL> grant dba to scott; ????? @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf @tktgilm_demo_env_setup SQL> connect scott/tiger ; ???? SQL> select count(*) from scott.employee; COUNT(*) ---------- 3072 ??? 1 ?? SQL> set serveroutput on SQL> exec print_compression_stats('SCOTT','EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 3072 Adv/basic compressed : 0 Others : 0 PL/SQL ???????? ???????3072?????? ????????? ????policy ???????????? alter table employee ilm add policy row store compress advanced row after 3 days of no modification / SQL> set serveroutput on SQL> execute list_ilm_policies; -------------------------------------------------- Policies defined for SCOTT -------------------------------------------------- Object Name------ : EMPLOYEE Subobject Name--- : Object Type------ : TABLE Inherited from--- : POLICY NOT INHERITED Policy Name------ : P1 Action Type------ : COMPRESSION Scope------------ : ROW Compression level : ADVANCED Tier Tablespace-- : Condition type--- : LAST MODIFICATION TIME Condition days--- : 3 Enabled---------- : YES -------------------------------------------------- PL/SQL ???????? SQL> select sysdate from dual; SYSDATE -------------- 29-7? -13 SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6); Object check time reset ... -------------------------------------- Object Name : EMPLOYEE Object Number : 93123 D.Object Numbr : 93123 Policy Number : 1 Object chktime : 23-7? -13 08.13.42.000000 ?? Distnt chktime : 0 -------------------------------------- PL/SQL ???????? ?policy?chktime???6??, ????set_back_chktime???????????????“????”?,?????????,???????? ?????? alter system flush buffer_cache; alter system flush buffer_cache; alter system flush shared_pool; alter system flush shared_pool; SQL> execute set_window('MONDAY_WINDOW','OPEN'); Set Maint. Window OPEN ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : TRUE ----------------------------- PL/SQL ???????? SQL> exec dbms_lock.sleep(60) ; PL/SQL ???????? SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 338 Adv/basic compressed : 2734 Others : 0 PL/SQL ???????? ??????????????? Adv/basic compressed : 2734 ??????? SQL> col object_name for a20 SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE'; OBJECT_ID OBJECT_NAME ---------- -------------------- 93123 EMPLOYEE SQL> execute list_ilm_policy_executions ; -------------------------------------------------- Policies execution details for SCOTT -------------------------------------------------- Policy Name------ : P22 Job Name--------- : ILMJOB48 Start time------- : 29-7? -13 08.37.45.061000 ?? End time--------- : 29-7? -13 08.37.48.629000 ?? ----------------- Object Name------ : EMPLOYEE Sub_obj Name----- : Obj Type--------- : TABLE ----------------- Exec-state------- : SELECTED FOR EXECUTION Job state-------- : COMPLETED SUCCESSFULLY Exec comments---- : Results comments- : --- -------------------------------------------------- PL/SQL ???????? ILMJOB48?????policy?JOB,?12.1.0.1??J00x???? ?MMON_SLAVE???M00x???15????????? select sample_time,program,module,action from v$active_session_history where action ='KDILM background EXEcution' order by sample_time; 29-7? -13 08.16.38.369000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.38.388000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.39.390000000 ?? 
ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.23.38.681000000 ?? ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.32.38.968000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.39.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.40.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.36.40.066000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.42.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.43.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.44.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.38.42.386000000 ?? ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution select distinct action from v$active_session_history where action like 'KDILM%' KDILM background CLeaNup KDILM background EXEcution SQL> execute set_window('MONDAY_WINDOW','CLOSE'); Set Maint. Window CLOSE ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : FALSE ----------------------------- PL/SQL ???????? SQL> drop table employee purge ; ????? ???? ????? spool ilm_usecase_1_cleanup.lst @ilm_demo_cleanup ; spool off
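
    Following the compression policies demonstrated above, storage tiering is declared with the same ILM clause.  A minimal sketch of my own (the tablespace name LOW_COST_TS is illustrative and assumed to exist already):

        -- A tiering policy moves the segment to the named tablespace
        -- when the current tablespace comes under space pressure.
        ALTER TABLE sales_ado ILM ADD POLICY
          TIER TO low_cost_ts;

        -- Confirm registration alongside the compression policies:
        SELECT SUBSTR(policy_name,1,24) AS policy_name, policy_type, enabled
          FROM user_ilmpolicies;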

    Read the article

  • Linux Server hacked?

    - by user115848
    I'm trying to determine if this Linux webserver/openfire server has been compromised by some form of malware or a hacker. Can you please help me determine if this server has been hacked? The snippet of logs below is from the Linux server running Apache. A few days ago the moodle site, which is installed on the server, started to render the Apache default page. Also the access logs show some activity I'm not sure of. Please see the logs below.

    85.190.0.3 - - [02/Apr/2012:13:31:01 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    85.190.0.3 - - [02/Apr/2012:13:31:01 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    99.41.69.92 - - [02/Apr/2012:13:33:35 -0600] "GET /files/externallibs.php HTTP/1.1" 404 306 "-" "curl/7.18.0 (x86_64-pc-linux-gnu) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.1"
    212.34.151.92 - - [02/Apr/2012:14:01:46 -0600] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "-" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
    212.34.151.92 - - [02/Apr/2012:14:01:46 -0600] "POST /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "http://173.164.35.181/phpmyadmin/scripts/setup.php\r" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
    82.223.140.4 - - [02/Apr/2012:14:05:03 -0600] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "-" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
    82.223.140.4 - - [02/Apr/2012:14:05:04 -0600] "POST /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "_http://173.164.35.181/phpmyadmin/scripts/setup.php\r" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
    10.0.0.100 - - [02/Apr/2012:14:25:35 -0600] "GET / HTTP/1.1" 403 5043 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110330 CentOS/3.6-1.el5.centos Firefox/3.6.15"
    10.0.0.100 - - [02/Apr/2012:14:25:38 -0600] "GET /favicon.ico HTTP/1.1" 404 295 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110330 CentOS/3.6-1.el5.centos Firefox/3.6.15"
    50.17.41.60 - - [02/Apr/2012:14:27:29 -0600] "HEAD /icons/apache_pb.gif HTTP/1.0" 200 - "-" "Mozilla/5.0 (compatible; NetcraftSurveyAgent/1.0; [email protected])"
    85.190.0.3 - - [02/Apr/2012:14:42:33 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    85.190.0.3 - - [02/Apr/2012:14:42:33 -0600] "POST _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:14:42:33 -0600] "GET _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:14:42:36 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    85.190.0.3 - - [02/Apr/2012:15:03:48 -0600] "POST _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:15:03:48 -0600] "GET _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:15:03:48 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    85.190.0.3 - - [02/Apr/2012:15:03:48 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    66.233.63.54 - - [02/Apr/2012:15:12:19 -0600] "GET /files/externallibs.php HTTP/1.1" 404 306 "-" "Mozilla/5.0 (Windows NT 6.0; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0"
    70.114.161.135 - - [02/Apr/2012:15:17:12 -0600] "GET /files/externallibs.php HTTP/1.1" 404 306 "-" "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0"
    99.41.69.231 - - [02/Apr/2012:15:52:21 -0600] "GET /files/externallibs.php HTTP/1.1" 404 306 "-" "curl/7.18.0 (x86_64-pc-linux-gnu) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.1"
    85.190.0.3 - - [02/Apr/2012:15:55:40 -0600] "GET _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:15:55:40 -0600] "POST _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:15:55:40 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    85.190.0.3 - - [02/Apr/2012:15:55:40 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    10.0.0.253 - - [02/Apr/2012:16:01:45 -0600] "GET / HTTP/1.1" 403 5043 "-" "WWW-Mechanize/1.0.0 (http://rubyforge.org/projects/mechanize/)"
    10.0.0.253 - - [02/Apr/2012:16:02:27 -0600] "GET / HTTP/1.1" 403 5043 "-" "WWW-Mechanize/1.0.0 (http://rubyforge.org/projects/mechanize/)"
    85.190.0.3 - - [02/Apr/2012:16:13:40 -0600] "POST _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:16:13:40 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    85.190.0.3 - - [02/Apr/2012:16:13:40 -0600] "GET _http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:16:13:40 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    89.135.124.125 - - [02/Apr/2012:16:20:47 -0600] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "_http://173.164.35.181/phpmyadmin/scripts/setup.php" "Opera"
    89.135.124.125 - - [02/Apr/2012:16:20:48 -0600] "POST /phpmyadmin/scripts/setup.php HTTP/1.1" 404 305 "_http://173.164.35.181/phpmyadmin/scripts/setup.php" "Opera"
    85.190.0.3 - - [02/Apr/2012:16:29:59 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    85.190.0.3 - - [02/Apr/2012:16:29:59 -0600] "GET http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
    85.190.0.3 - - [02/Apr/2012:16:29:59 -0600] "CONNECT 213.92.8.7:31204 HTTP/1.0" 405 303 "-" "-"
    85.190.0.3 - - [02/Apr/2012:16:29:59 -0600] "POST http://vlad-tepes.bofh.it/freenode-proxy-checker.txt HTTP/1.0" 404 307 "-" "-"
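
    Not from the original question, but a quick way to triage logs like these is to rank clients by request count and isolate the two scan patterns visible above (CONNECT open-proxy probes and phpMyAdmin setup probes). A minimal shell sketch, assuming a stock Apache access-log path (adjust for your distribution):

        # Rank client IPs by request count.
        awk '{print $1}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head

        # Open-proxy probes; the 405 responses show they were refused.
        grep '"CONNECT ' /var/log/httpd/access_log

        # phpMyAdmin setup-script probes; 404 means the script is not installed.
        grep 'phpmyadmin/scripts/setup.php' /var/log/httpd/access_log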

    Read the article

  • IIS 7 URL rewrite rule

    - by Andrew
    Hello, guys! We have two web servers behind a router - one IIS and one Tomcat (on different machines / IP addresses). The domain points to our external IP, which is forwarded to IIS (internal IP 192.168.1.10, for example). I'm trying to do the following: when [www.]ourdomain.com is entered, the default web site on IIS has to be loaded (this part is OK), but when test.ourdomain.com is entered I want to redirect this request to another web server (192.168.1.11, for example). I created a site "test" on IIS and it is displayed when test.ourdomain.com is entered. Then I tried to redirect it with the following rule:

    Requested URL matches the pattern: * (using wildcards)
    Condition: {HTTP_HOST} matches test.ourdomain.com
    Action type: Rewrite
    Rewrite URL: http://192.168.1.11/{R:0}

    but when I try to load test.ourdomain.com now I get IIS's error 404 page. Obviously I'm wrong :-) How can I do such a redirect?
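
    One thing worth checking (my suggestion, not from the original question): a Rewrite action that points at a different host only works when Application Request Routing (ARR) is installed and its proxy mode is enabled; without it, IIS treats the absolute URL as a local path and returns 404. With ARR proxying on, the equivalent web.config rule would look roughly like this sketch (the rule name is illustrative):

        <system.webServer>
          <rewrite>
            <rules>
              <!-- Forward test.ourdomain.com to the internal Tomcat box. -->
              <rule name="ProxyTestSubdomain" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^test\.ourdomain\.com$" />
                </conditions>
                <action type="Rewrite" url="http://192.168.1.11/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>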

    Read the article

  • Unable to find valid certification path to requested target while CAS authentication

    - by Dmitriy Sukharev
    I'm trying to configure CAS authentication. It requires both CAS and the client application to use the HTTPS protocol. Unfortunately we have to use a self-signed certificate (with a CN that doesn't have anything in common with our server). Also, the server is behind a firewall and we have only two ports (ssh and https) visible. Since there are several applications that should be visible externally, we use Apache for ajp reverse proxying requests to these applications. Secure connections are managed by Apache, and none of the Tomcats is configured to work with SSL. But I got an exception during authentication, and therefore decided to set the keystore in CATALINA_OPTS:

    export CATALINA_OPTS="-Djavax.net.ssl.keyStore=/path/to/tomcat/ssl/cert.pfx -Djavax.net.ssl.keyStoreType=PKCS12 -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.keyAlias=alias -Djavax.net.debug=ssl"

    cert.pfx was obtained from the certificate and key that are used by Apache HTTP Server:

    $ openssl pkcs12 -export -out /path/to/tomcat/ssl/cert.pfx -inkey /path/to/apache2/ssl/server-key.pem -in /path/to/apache2/ssl/server-cert.pem

    When I try to authenticate a user I obtain the following exception:

    Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:174) ~[na:1.6.0_32]
        at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:238) ~[na:1.6.0_32]
        at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:318) ~[na:1.6.0_32]

    Meanwhile I can see in catalina.out that Tomcat sees the certificate in cert.pfx, and it's the same as the one that is used during authentication:

    09:11:38.886 [http-bio-8080-exec-2] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Constructing validation url: https://external-ip/cas/proxyValidate?pgtUrl=https%3A%2F%2Fexternal-ip%2Fclient%2Fj_spring_cas_security_proxyreceptor&ticket=ST-17-PN26WtdsZqNmpUBS59RC-cas&service=https%3A%2F%2Fexternal-ip%2Fclient%2Fj_spring_cas_security_check
    09:11:38.886 [http-bio-8080-exec-2] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Retrieving response from server.
    keyStore is : /path/to/tomcat/ssl/cert.pfx
    keyStore type is : PKCS12
    keyStore provider is :
    init keystore
    init keymanager of type SunX509
    ***
    found key for : 1
    chain [0] = [
    [
      Version: V1
      Subject: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
      Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
      Key: Sun RSA public key, 1024 bits
      modulus: 13??a lot of digits here??19
      public exponent: ????7
      Validity: [From: Tue Apr 24 16:32:18 CEST 2012,
                   To: Wed Apr 24 16:32:18 CEST 2013]
      Issuer: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
      SerialNumber: [ d??????? ????????]
    ]
      Algorithm: [SHA1withRSA]
      Signature:
      0000: 65   Signature is here
      0070: 96 .
    ]
    ***
    trustStore is: /jdk-home-folder/jre/lib/security/cacerts
    (Here is a lot of trusted CAs. Here is nothing related to our certificate or our (not trusted) CA.)
    ...
    09:11:39.731 [http-bio-8080-exec-4] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Retrieving response from server.
    Allow unsafe renegotiation: false
    Allow legacy hello messages: true
    Is initial handshake: true
    Is secure renegotiation: false
    %% No cached client session
    *** ClientHello, TLSv1
    RandomCookie: GMT: 1347433643 bytes = { 63, 239, 180, 32, 103, 140, 83, 7, 109, 149, 177, 80, 223, 79, 243, 244, 60, 191, 124, 139, 108, 5, 122, 238, 146, 1, 54, 218 }
    Session ID: {}
    Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
    Compression Methods: { 0 }
    ***
    http-bio-8080-exec-4, WRITE: TLSv1 Handshake, length = 75
    http-bio-8080-exec-4, WRITE: SSLv2 client hello message, length = 101
    http-bio-8080-exec-4, READ: TLSv1 Handshake, length = 81
    *** ServerHello, TLSv1
    RandomCookie: GMT: 1347433643 bytes = { 145, 237, 232, 63, 240, 104, 234, 201, 148, 235, 12, 222, 60, 75, 174, 0, 103, 38, 196, 181, 27, 226, 243, 61, 34, 7, 107, 72 }
    Session ID: {79, 202, 117, 79, 130, 216, 168, 38, 68, 29, 182, 82, 16, 25, 251, 66, 93, 108, 49, 133, 92, 108, 198, 23, 120, 120, 135, 151, 15, 13, 199, 87}
    Cipher Suite: SSL_RSA_WITH_RC4_128_SHA
    Compression Method: 0
    Extension renegotiation_info, renegotiated_connection: <empty>
    ***
    %% Created: [Session-2, SSL_RSA_WITH_RC4_128_SHA]
    ** SSL_RSA_WITH_RC4_128_SHA
    http-bio-8080-exec-4, READ: TLSv1 Handshake, length = 609
    *** Certificate chain
    chain [0] = [
    [
      Version: V1
      Subject: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
      Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
      Key: Sun RSA public key, 1024 bits
      modulus: 13??a lot of digits here??19
      public exponent: ????7
      Validity: [From: Tue Apr 24 16:32:18 CEST 2012,
                   To: Wed Apr 24 16:32:18 CEST 2013]
      Issuer: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
      SerialNumber: [ d??????? ????????]
    ]
      Algorithm: [SHA1withRSA]
      Signature:
      0000: 65   Signature is here
      0070: 96 .
    ]
    ***
    http-bio-8080-exec-4, SEND TLSv1 ALERT: fatal, description = certificate_unknown
    http-bio-8080-exec-4, WRITE: TLSv1 Alert, length = 2
    http-bio-8080-exec-4, called closeSocket()
    http-bio-8080-exec-4, handling exception: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    I tried to convert our pem certificate to der format and imported it into the trusted keystore (cacerts) (without the private key), but it didn't change anything. I'm not confident that I did it right, though. Also, I must inform you that I don't know the passphrase for our server-key.pem file, and it probably differs from the password for the keystore created by me.

    OS: CentOS 6.2
    Architecture: x64
    Tomcat version: 7
    Apache HTTP Server version: 2.4

    Is there any way to make Tomcat accept our certificate?
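
    For what it's worth, the standard cure for this PKIX error with a self-signed certificate is to import the certificate's public part into the truststore the JVM actually reads (the cacerts file shown in the debug output). A minimal sketch, assuming the paths from the question and the default "changeit" truststore password:

        # Convert the PEM certificate to DER; the private key is not needed for trust.
        openssl x509 -in /path/to/apache2/ssl/server-cert.pem \
                -outform DER -out /tmp/server-cert.der

        # Import it as a trusted entry into the JRE truststore Tomcat uses,
        # then restart Tomcat so the truststore is re-read.
        keytool -importcert -trustcacerts -alias our-selfsigned \
                -file /tmp/server-cert.der \
                -keystore /jdk-home-folder/jre/lib/security/cacerts \
                -storepass changeit

    Alternatively, a separate truststore can be pointed at with -Djavax.net.ssl.trustStore (and -Djavax.net.ssl.trustStorePassword) in CATALINA_OPTS, which avoids modifying the stock cacerts file.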

    Read the article

  • Bridging LXC containers to host eth0 so they can have a public IP

    - by Vianney Stroebel
    UPDATE: I found the solution there: http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge#No_traffic_gets_trough_.28except_ARP_and_STP.29

    # cd /proc/sys/net/bridge
    # ls
    bridge-nf-call-arptables  bridge-nf-call-iptables
    bridge-nf-call-ip6tables  bridge-nf-filter-vlan-tagged
    # for f in bridge-nf-*; do echo 0 > $f; done

    But I'd like to have expert opinions on this: is it safe to disable all bridge-nf-*? What are they here for?

    END OF UPDATE

    I need to bridge LXC containers to the physical interface (eth0) of my host, as described in numerous tutorials, documents and blog posts on the subject. I need the containers to have their own public IP (which I've previously done with KVM/libvirt). After two days of searching and trying, I still can't make it work with LXC containers.

    The host runs a freshly installed Ubuntu Server Quantal (12.10) with only libvirt (which I'm not using here) and lxc installed. I created the containers with:

    lxc-create -t ubuntu -n mycontainer

    So they also run Ubuntu 12.10. The content of /var/lib/lxc/mycontainer/config is:

    lxc.utsname = mycontainer
    lxc.mount = /var/lib/lxc/test/fstab
    lxc.rootfs = /var/lib/lxc/test/rootfs
    lxc.network.type = veth
    lxc.network.flags = up
    lxc.network.link = br0
    lxc.network.name = eth0
    lxc.network.veth.pair = vethmycontainer
    lxc.network.ipv4 = 179.43.46.233
    lxc.network.hwaddr= 02:00:00:86:5b:11
    lxc.devttydir = lxc
    lxc.tty = 4
    lxc.pts = 1024
    lxc.arch = amd64
    lxc.cap.drop = sys_module mac_admin mac_override
    lxc.pivotdir = lxc_putold
    # uncomment the next line to run the container unconfined:
    #lxc.aa_profile = unconfined
    lxc.cgroup.devices.deny = a
    # Allow any mknod (but not using the node)
    lxc.cgroup.devices.allow = c *:* m
    lxc.cgroup.devices.allow = b *:* m
    # /dev/null and zero
    lxc.cgroup.devices.allow = c 1:3 rwm
    lxc.cgroup.devices.allow = c 1:5 rwm
    # consoles
    lxc.cgroup.devices.allow = c 5:1 rwm
    lxc.cgroup.devices.allow = c 5:0 rwm
    #lxc.cgroup.devices.allow = c 4:0 rwm
    #lxc.cgroup.devices.allow = c 4:1 rwm
    # /dev/{,u}random
    lxc.cgroup.devices.allow = c 1:9 rwm
    lxc.cgroup.devices.allow = c 1:8 rwm
    lxc.cgroup.devices.allow = c 136:* rwm
    lxc.cgroup.devices.allow = c 5:2 rwm
    # rtc
    lxc.cgroup.devices.allow = c 254:0 rwm
    #fuse
    lxc.cgroup.devices.allow = c 10:229 rwm
    #tun
    lxc.cgroup.devices.allow = c 10:200 rwm
    #full
    lxc.cgroup.devices.allow = c 1:7 rwm
    #hpet
    lxc.cgroup.devices.allow = c 10:228 rwm
    #kvm
    lxc.cgroup.devices.allow = c 10:232 rwm

    Then I changed my host's /etc/network/interfaces to:

    auto lo
    iface lo inet loopback

    auto br0
    iface br0 inet static
    bridge_ports eth0
    bridge_fd 0
    address 92.281.86.226
    netmask 255.255.255.0
    network 92.281.86.0
    broadcast 92.281.86.255
    gateway 92.281.86.254
    dns-nameservers 213.186.33.99
    dns-search ovh.net

    When I try command line configuration ("brctl addif", "ifconfig eth0", etc.) my remote host becomes inaccessible and I have to hard reboot it.

    I changed the content of /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces to:

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
    address 179.43.46.233
    netmask 255.255.255.255
    broadcast 178.33.40.233
    gateway 92.281.86.254

    It takes several minutes for mycontainer to start (lxc-start -n mycontainer). I tried replacing gateway 92.281.86.254 by:

    post-up route add 92.281.86.254 dev eth0
    post-up route add default gw 92.281.86.254
    post-down route del 92.281.86.254 dev eth0
    post-down route del default gw 92.281.86.254

    My container then starts instantly. But whatever configuration I set in /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces, I cannot ping from mycontainer to any IP (including the host's):

    ubuntu@mycontainer:~$ ping 92.281.86.226
    PING 92.281.86.226 (92.281.86.226) 56(84) bytes of data.
    ^C
    --- 92.281.86.226 ping statistics ---
    6 packets transmitted, 0 received, 100% packet loss, time 5031ms

    And my host cannot ping the container:

    root@host:~# ping 179.43.46.233
    PING 179.43.46.233 (179.43.46.233) 56(84) bytes of data.
    ^C
    --- 179.43.46.233 ping statistics ---
    5 packets transmitted, 0 received, 100% packet loss, time 4000ms

    My container's ifconfig:

    ubuntu@mycontainer:~$ ifconfig
    eth0      Link encap:Ethernet  HWaddr 02:00:00:86:5b:11
              inet addr:179.43.46.233  Bcast:255.255.255.255  Mask:0.0.0.0
              inet6 addr: fe80::ff:fe79:5a31/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:64 errors:0 dropped:6 overruns:0 frame:0
              TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:4070 (4.0 KB)  TX bytes:4168 (4.1 KB)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:32 errors:0 dropped:0 overruns:0 frame:0
              TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:2496 (2.4 KB)  TX bytes:2496 (2.4 KB)

    My host's ifconfig:

    root@host:~# ifconfig
    br0       Link encap:Ethernet  HWaddr 4c:72:b9:43:65:2b
              inet addr:92.281.86.226  Bcast:91.121.67.255  Mask:255.255.255.0
              inet6 addr: fe80::4e72:b9ff:fe43:652b/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:1453 errors:0 dropped:18 overruns:0 frame:0
              TX packets:1630 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:145125 (145.1 KB)  TX bytes:299943 (299.9 KB)

    eth0      Link encap:Ethernet  HWaddr 4c:72:b9:43:65:2b
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:3178 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1637 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:298263 (298.2 KB)  TX bytes:309167 (309.1 KB)
              Interrupt:20 Memory:fe500000-fe520000

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:6 errors:0 dropped:0 overruns:0 frame:0
              TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:300 (300.0 B)  TX bytes:300 (300.0 B)

    vethtest  Link encap:Ethernet  HWaddr fe:0d:7f:3e:70:88
              inet6 addr: fe80::fc0d:7fff:fe3e:7088/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:54 errors:0 dropped:0 overruns:0 frame:0
              TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:4168 (4.1 KB)  TX bytes:4250 (4.2 KB)

    virbr0    Link encap:Ethernet  HWaddr de:49:c5:66:cf:84
              inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    I have disabled lxcbr0 (USE_LXC_BRIDGE="false" in /etc/default/lxc).

    root@host:~# brctl show
    bridge name     bridge id               STP enabled     interfaces
    br0             8000.4c72b943652b       no              eth0
                                                            vethtest

    I have configured the IP 179.43.46.233 to point to 02:00:00:86:5b:11 in my hosting provider's (OVH) config panel. (The IPs in this post are not the real ones.) Thanks for reading this long question! :-)

    Vianney
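
    Following up the UPDATE at the top: the bridge-nf-call-* switches decide whether frames crossing a Linux bridge are run through the host's arptables/iptables/ip6tables chains, so turning them off only means the host firewall no longer filters bridged container traffic. The /proc settings reset on reboot; a minimal sketch to persist them via sysctl (the keys exist only once the bridge module is loaded):

        # /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0

        # Apply immediately without a reboot:
        # sysctl -p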

    Read the article

  • Adding Thunderbird-stable repository gives "can't find signing_key_fingerprint" error

    - by EBV2010
    I'm trying to install Thunderbird 11 on Kubuntu 10.04. I was able to do it on the machine I'm working on. To get a clean process that I can roll out to other clients, I re-installed the machine and repeated the process. This is what I did (I've left out the sudo for clarity):

    add-apt-repository ppa:ubuntu-mozilla-security/ppa
    apt-get update
    add-apt-repository ppa:mozilla-team/thunderbird-stable

    The last one resulted in this error:

    Error: can't find signing_key_fingerprint at https://launchpad.net/api/1.0/~mozilla-team/+archive/thunderbird-stable

    The machine as it was before re-installation gave no such message; it was built from the same sources. Bottom line: I got Thunderbird 11.0 to run on Kubuntu 10.04, but after re-installation, adding the repository gives an error and won't add. Is there a way to solve the signing_key_fingerprint error?
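
    A workaround worth trying (my suggestion, not from the original post): skip add-apt-repository's Launchpad API lookup, which is what fails with the signing_key_fingerprint message, by adding the source line and the signing key by hand. <KEY_ID> below is a placeholder for the fingerprint shown under "Technical details about this PPA" on the PPA's Launchpad page (sudo again left out for clarity):

        # Add the PPA source line directly (lucid = 10.04).
        add-apt-repository "deb http://ppa.launchpad.net/mozilla-team/thunderbird-stable/ubuntu lucid main"

        # Import the PPA's signing key manually; replace <KEY_ID> with the
        # fingerprint listed on the PPA page.
        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY_ID>

        apt-get update
        apt-get install thunderbird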

    Read the article

  • Server HTTP Load times slow?

    - by cdog5000
Hello. My server at codemeh.com (an HTTP server) seems to load slowly at random. I cannot tell if it is just my forums (http://www.codemeh.com/forums/) that are loading slowly or whether the whole site is, since the forums are the largest thing on the site right now.

load average: 0.02, 0.17, 0.20

That is super low to my knowledge. I have tried the Google Page Analytic plug-in for Firefox to find the problem, but nothing comes up that is VERY bad. I'd be grateful if someone could investigate this for me, since I am very new to Apache and server configuration. Thanks!

(top):

  PID USER     PR NI  VIRT   RES  SHR S %CPU %MEM   TIME+  COMMAND
 7493 www-data 15  0 98.2m   16m 9092 S    3  0.8  0:27.24 apache2
26429 www-data 15  0 98.2m   15m 7392 S    3  0.7  0:03.45 apache2
26477 www-data 17  0 98.2m   15m 7396 S    3  0.7  0:03.16 apache2
    1 root     15  0  2468  1384 1156 S    0  0.1  0:00.49 init
 1367 root     25  0  2564   816  660 S    0  0.0  0:00.00 xinetd
 1526 root     15  0 29576  5420 1976 S    0  0.3  1:02.69 fail2ban-server
 3703 root     15  0 13512  9312 1696 S    0  0.4  0:11.59 miniserv.pl
 3915 postfix  15  0  6056  1652 1320 S    0  0.1  0:00.00 pickup
 4010 root     15  0  4548  1296  972 S    0  0.1  0:37.36 ntpd
 7448 root     15  0 98528   26m  20m S    0  1.3  0:00.27 apache2
 7454 www-data 18  0 33580  2616  368 S    0  0.1  0:00.04 apache2
 7528 www-data 18  0  108m   24m  15m S    0  1.2  0:27.60 apache2
 7974 root     16  0  8700  2728 2164 S    0  0.1  0:00.08 sshd
 8123 cdog5000 15  0  8832  1596  896 S    0  0.1  0:00.00 sshd
 8126 cdog5000 18  0  4484  1716 1384 S    0  0.1  0:00.00 bash
 8141 cdog5000 15  0  2344   980  796 R    0  0.0  0:00.11 top
13461 root     15  0  8700  2728 2164 S    0  0.1  0:00.07 sshd
13567 cdog5000 18  0  8832  1492  896 S    0  0.1  0:00.33 sshd
13569 cdog5000 18  0  4484  1728 1388 S    0  0.1  0:00.09 bash
17983 root     15  0  4392  1268  988 S    0  0.1  0:00.00 su
17987 root     15  0  4516  1752 1380 S    0  0.1  0:00.09 bash
18081 www-data 15  0 98.2m   14m 6588 S    0  0.7  0:04.91 apache2
20000 www-data 15  0 98.3m   15m 8040 S    0  0.8  0:02.45 apache2
20019 www-data 15  0 98.2m   14m 6808 S    0  0.7  0:04.97 apache2
30343 root     15  0  3964  1012  764 S    0  0.0  0:00.03 vsftpd
30382 root     15  0  2304   908  716 S    0  0.0  0:00.62 cron
30401 mysql    17  0  141m   17m 5416 S    0  0.9  1:02.20 mysqld
30424 root     15  0  5472   912  504 S    0  0.0  0:00.04 sshd
30473 syslog   15  0  1916   676  536 S    0  0.0  0:01.02 syslogd
30611 amavis   15  0 33872   25m 2292 S    0  1.2  0:03.11 amavisd-new
31890 amavis   18  0 34888   24m 1792 S    0  1.2  0:00.00 amavisd-new
31891 amavis   18  0 34888   24m 1784 S    0  1.2  0:00.00 amavisd-new
32397 clamav   18  0  104m   84m 1272 S    0  4.1  1:06.46 clamd
32563 clamav   15  0 12832  5716 4440 S    0  0.3  0:01.29 freshclam
32573 root     23  0  1892   456  372 S    0  0.0  0:00.00 courierlogger
32575 root     18  0  2096   684  544 S    0  0.0  0:00.01 authdaemond
32583 root     23  0  1892   360  284 S    0  0.0  0:00.00 courierlogger
32584 root     24  0  2000   612  516 S    0  0.0  0:00.00 couriertcpd
32598 root     23  0  1892   360  284 S    0  0.0  0:00.00 courierlogger
32599 root     25  0  2000   612  516 S    0  0.0  0:00.00 couriertcpd
32604 root     18  0  1892   460  372 S    0  0.0  0:00.00 courierlogger
32605 root     18  0  2000   624  532 S    0  0.0  0:00.00 couriertcpd
32607 root     18  0  2308   404  256 S    0  0.0  0:00.02 authdaemond
32608 root     18  0  2096   260  116 S    0  0.0  0:00.03 authdaemond
32609 root     15  0  2308   404  256 S    0  0.0  0:00.03 authdaemond
32610 root     18  0  2096   260  116 S    0  0.0  0:00.02 authdaemond
32612 root     18  0  2308   404  256 S    0  0.0  0:00.02 authdaemond
32621 root     24  0  1892   364  284 S    0  0.0  0:00.00 courierlogger
32622 root     25  0  2000   608  516 S    0  0.0  0:00.00 couriertcpd
32633 root     15  0  105m   936  716 S    0  0.0  0:02.26 nscd
32719 root     16  0  6252  1680 1344 S    0  0.1  0:01.24 master
32738 postfix  15  0  6188  1776 1400 S    0  0.1  0:00.44 qmgr
32758 postfix  15  0  6492  2564 1788 S    0  0.1  0:00.14 tlsmgr

(/etc/apache2/sites-available/default):

NameVirtualHost *
<VirtualHost *>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/web1/web/
    <Directory /var/www/web1/web/>
        Options Indexes MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>

I have a fail2ban server running, and I don't have any firewall at this point in time that I know of. SMF is 2.0 RC4 and the Apache version is 2.2.14. I run a MySQL server on another box in the same DC (persistent connection). I installed eAccelerator today and it didn't help.
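A useful first step for "randomly slow" pages is to measure where the time goes. A sketch of two checks; nothing here is specific to this server beyond the forum URL already given in the post:

# Break a request's latency into phases. A large ttfb with a small
# connect time points at the application (PHP/SMF/MySQL), not the network.
curl -o /dev/null -s \
     -w "dns: %{time_namelookup}  connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n" \
     http://www.codemeh.com/forums/

# Watch Apache's workers during a slow spell (mod_status ships with a
# stock Apache 2.2; the Location stanza is the standard example config)
a2enmod status
# /etc/apache2/mods-available/status.conf normally already contains:
#   <Location /server-status>
#       SetHandler server-status
#       Order deny,allow
#       Deny from all
#       Allow from localhost
#   </Location>
/etc/init.d/apache2 reload

Since the MySQL server is on another box, it may also be worth timing a trivial query from the web server during a slow spell to rule out the link between the two machines.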

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
I've got this external USB disk:

kaefert@blechmobil:~$ lsusb -s 2:3
Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

As can be seen in this dmesg output, there are some problems that prevent the disk from being mounted:

kaefert@blechmobil:~$ dmesg | grep sdb
[  114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
[  114.475089] sd 5:0:0:0: [sdb] Write Protect is off
[  114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
[  114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[  114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
[  114.501649]  sdb: sdb1
[  114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
[  114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
[  116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
[  116.804413] EXT4-fs (sdb1): group descriptors corrupted!

So I went and fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

e2fsck -f -y -v /dev/sdb1

Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925 ).

I started this whole thing on Sunday evening (2012-11-04_2200), so about 48 hours ago. This is what htop says about it now (2012-11-06_1900):

  PID USER PRI NI  VIRT   RES SHR S CPU% MEM%    TIME+ Command
 3704 root  39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

Now I found a few posts on the internet that discuss e2fsck running slowly, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613 , where they write that it's a good idea to check whether the disk itself is just that slow, because it might be damaged. I think these outputs tell me that this is not the case here:

kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   3562 MB in  2.00 seconds = 1783.29 MB/sec
 Timing buffered disk reads:  82 MB in  3.01 seconds = 27.26 MB/sec

kaefert@blechmobil:~$ sudo hdparm /dev/sdb

/dev/sdb:
 multcount = 0 (off)
 readonly  = 0 (off)
 readahead = 256 (on)
 geometry  = 364801/255/63, sectors = 5860533160, start = 0

However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, judging by tools like gkrellm and iotop, and by this:

kaefert@blechmobil:~$ iostat -x
Linux 3.2.0-2-amd64 (blechmobil)  2012-11-06  _x86_64_  (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          14,24   47,81   14,63    0,95    0,00   22,37

Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s    wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0,59    8,29   2,42   5,14   43,17   160,17    53,75     0,30  39,80    8,72   54,42   3,95   2,99
sdb      137,54    5,48   9,23   0,20  587,07    22,73   129,35     0,07   7,70    7,51   16,18   2,17   2,04

Now I researched a little on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

kaefert@blechmobil:~$ sudo strace -p3704
lseek(4, 41026998272, SEEK_SET) = 41026998272
write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
lseek(4, 48404766720, SEEK_SET) = 48404766720
read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
lseek(4, 41027002368, SEEK_SET) = 41027002368
write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
lseek(4, 48404770816, SEEK_SET) = 48404770816
read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
lseek(4, 41027006464, SEEK_SET) = 41027006464
write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
lseek(4, 48404774912, SEEK_SET) = 48404774912
read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
^CProcess 3704 detached

Around 16 of these lines appear every second, so roughly 4 read and 4 write operations per second, which I don't consider to be a lot.

And finally, my question: will this process ever finish? If those numbers passed to lseek (e.g. 48404774912) represent bytes, that would be something like 45 gigabytes into a 3-terabyte disk, which would give me about 134 days to go if the speed stays constant and e2fsck scans the disk like this completely and only once. Do you have any advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

UPDATE: I just looked at the strace output again (2012-11-06_2300); now it looks like this:

lseek(4, 1419860611072, SEEK_SET) = 1419860611072
read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
lseek(4, 43018145792, SEEK_SET) = 43018145792
write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
lseek(4, 1419860615168, SEEK_SET) = 1419860615168
read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
lseek(4, 43018149888, SEEK_SET) = 43018149888
write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
lseek(4, 1419860619264, SEEK_SET) = 1419860619264
read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
lseek(4, 43018153984, SEEK_SET) = 43018153984
write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

So the offsets of the lseeks before the reads, like 1419860619264, are already much bigger (standing for 1.29 terabytes if the numbers are bytes), so it doesn't seem to be a linear progression on a large scale. Maybe there are only some areas that need work, with big gaps between them. (Times are in CET.)
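For anyone stuck in the same situation: two things that often help on multi-terabyte ext4 checks are a visible progress bar and letting e2fsck spill its in-memory tables to disk. A sketch, assuming the check can safely be restarted by hand; the cache directory path is an arbitrary choice:

# /etc/e2fsck.conf -- keep e2fsck's scratch tables on disk instead of
# in RAM, which helps with very large inode counts
[scratch_files]
directory = /var/cache/e2fsck

# then restart the check with a progress indicator (-C 0), so you can
# see which of the five passes it is in and whether it is advancing
sudo mkdir -p /var/cache/e2fsck
sudo e2fsck -f -y -v -C 0 /dev/sdb1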

    Read the article

  • Cannot access SVN repository from another host within the LAN

    - by akaii
I'm trying to connect to a repository I've set up on our server from another host on the same network, but the connection is failing.

checkout command:

svn checkout svn://192.168.11.192/

error:

Can't connect to host '192.168.11.192': Connection refused

I tried probing port 3690 with telnet, and I can't seem to connect that way either. I thought the port might be blocked, so I added an entry for port 3690 in /etc/sysconfig/iptables, but it doesn't seem to have had any effect at all. I'm sure svnserve is running, because I can check out the repository on the server itself using the same command above. What can I possibly try next?
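A few checks worth running on the server, sketched here with placeholder paths (the repository root below is not from the post):

# Is svnserve actually listening on a routable interface, or only
# on 127.0.0.1? "Connection refused" on the LAN often means the latter.
sudo netstat -tlnp | grep 3690

# If it is bound to localhost only, restart it listening everywhere;
# /var/svn is a placeholder for wherever the repositories live
sudo svnserve -d -r /var/svn --listen-host 0.0.0.0

# Edits to /etc/sysconfig/iptables only take effect after a reload,
# and an ACCEPT appended below a final REJECT/DROP line never matches
sudo service iptables restart
sudo iptables -L -n --line-numbers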

    Read the article

< Previous Page | 391 392 393 394 395 396 397 398 399 400 401 402  | Next Page >