Search Results

Search found 1347 results on 54 pages for 'amnesia 55'.

  • Timely automatic unexpected reboot on Ubuntu desktop

    - by ahmad
    We have a remote Linux server (Ubuntu desktop). The system log indicates that the machine has been rebooting on a regular schedule. Here is part of the last output:

      ut pts/0 192.169.50.2-sta Sat Nov 24 22:17 still logged in
      reboot system boot 2.6.32-21-generi Sat Nov 24 22:04 - 22:17 (00:13)
      ut pts/0 server.local Sat Nov 24 21:36 - crash (00:27)
      reboot system boot 2.6.32-21-generi Sat Nov 24 15:55 - 22:17 (06:21)
      reboot system boot 2.6.32-21-generi Fri Nov 23 18:02 - 22:17 (1+04:14)
      reboot system boot 2.6.32-21-generi Fri Nov 23 10:39 - 22:17 (1+11:38)
      reboot system boot 2.6.32-21-generi Fri Nov 23 04:18 - 22:17 (1+17:59)
      reboot system boot 2.6.32-21-generi Fri Nov 23 03:57 - 22:17 (1+18:20)
      reboot system boot 2.6.32-21-generi Thu Nov 22 20:38 - 22:17 (2+01:38)
      reboot system boot 2.6.32-21-generi Thu Nov 22 11:13 - 22:17 (2+11:03)
      reboot system boot 2.6.32-21-generi Thu Nov 22 08:12 - 22:17 (2+14:05)
      reboot system boot 2.6.32-21-generi Wed Nov 21 11:16 - 22:17 (3+11:00)
      reboot system boot 2.6.32-21-generi Tue Nov 20 22:36 - 22:17 (3+23:41)
      reboot system boot 2.6.32-21-generi Tue Nov 20 14:12 - 22:17 (4+08:05)
      reboot system boot 2.6.32-21-generi Tue Nov 20 11:32 - 22:17 (4+10:44)
      reboot system boot 2.6.32-21-generi Tue Nov 20 01:52 - 22:17 (4+20:25)
      reboot system boot 2.6.32-21-generi Tue Nov 20 00:22 - 22:17 (4+21:55)
      reboot system boot 2.6.32-21-generi Mon Nov 19 17:27 - 22:17 (5+04:50)

    It looks as if the system is set to be restarted at 22:17. Can anyone tell me why this happens? Thanks in advance.
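
    One reasonable next step (a sketch, assuming root access on a stock Ubuntu install; the 22:17 time is taken from the log above) is to look for a cron job or scheduled shutdown that fires around that time, and to check what syslog recorded just before each restart:

      # look for scheduled reboot/shutdown entries in the usual cron locations
      sudo grep -ri 'reboot\|shutdown' /etc/crontab /etc/cron.d /etc/cron.daily /var/spool/cron/crontabs 2>/dev/null
      # see what was logged immediately before the machine went down
      sudo grep -iE 'shutdown|reboot|power' /var/log/syslog /var/log/syslog.1 | tail -n 40
      # 'last -x' also lists shutdown/runlevel records, which helps tell clean reboots from crashes
      last -x shutdown reboot | head -n 20

    If nothing turns up in cron, the "crash" entry in the output above suggests the box may sometimes be going down uncleanly (power loss, watchdog, or kernel panic) rather than via a scheduled job.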


  • TCPDump and IPTables DROP by string

    - by Tiffany Walker
    by using tcpdump -nlASX -s 0 -vvv port 80 I get something like: 14:58:55.121160 IP (tos 0x0, ttl 64, id 49764, offset 0, flags [DF], proto TCP (6), length 1480) 206.72.206.58.http > 2.187.196.7.4624: Flags [.], cksum 0x6900 (incorrect -> 0xcd18), seq 1672149449:1672150889, ack 4202197968, win 15340, length 1440 0x0000: 4500 05c8 c264 4000 4006 0f86 ce48 ce3a E....d@[email protected].: 0x0010: 02bb c407 0050 1210 63aa f9c9 fa78 73d0 .....P..c....xs. 0x0020: 5010 3bec 6900 0000 0f29 95cc fac4 2854 P.;.i....)....(T 0x0030: c0e7 3384 e89a 74fa 8d8c a069 f93f fc40 ..3...t....i.?.@ 0x0040: 1561 af61 1cf3 0d9c 3460 aa23 0b54 aac0 .a.a....4`.#.T.. 0x0050: 5090 ced1 b7bf 8857 c476 e1c0 8814 81ed P......W.v...... 0x0060: 9e85 87e8 d693 b637 bd3a 56ef c5fa 77e8 .......7.:V...w. 0x0070: 3035 743a 283e 89c7 ced8 c7c1 cff9 6ca3 05t:(>........l. 0x0080: 5f3f 0162 ebf1 419e c410 7180 7cd0 29e1 _?.b..A...q.|.). 0x0090: fec9 c708 0f01 9b2f a96b 20fe b95a 31cf ......./.k...Z1. 0x00a0: 8166 3612 bac9 4e8d 7087 4974 0063 1270 .f6...N.p.It.c.p What do I pull to use IPTables to block via string. Or is there a better way to block attacks that have something in common? Question is: Can I pick any piece from that IP packet and call it a string? iptables -A INPUT -m string --alog bm --string attack_string -j DROP In other words: In some cases I can ban with TTL=xxx and use that should an attack have the same TTL. Sure it will block some legit packets but if it means keeping the box up it works till the attack goes away but I would like to LEARN how to FIND other common things in a packet to block with IPTables
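
    For reference, a hedged sketch of the string-match syntax (note the option is spelled --algo, not --alog as typed above; "attack_string" is a placeholder and the hex bytes are simply copied from the dump above as an example):

      # drop packets whose payload contains a given byte sequence (Boyer-Moore search)
      iptables -A INPUT -p tcp --dport 80 -m string --algo bm --string "attack_string" -j DROP
      # hex patterns work as well, so any byte run visible in a tcpdump -X dump can be matched
      iptables -A INPUT -p tcp --dport 80 -m string --algo bm --hex-string "|0f29 95cc|" -j DROP
      # matching on TTL, as described in the question, uses the ttl module
      iptables -A INPUT -m ttl --ttl-eq 64 -j DROP

    Keep in mind that the string match only sees one packet at a time (not a reassembled stream) and inspects every packet, so it is CPU-heavy and prone to false positives; it is a stop-gap rather than a real filter.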


  • Laptop Overheating with Windows 8

    - by Dany Khalife
    I recently installed Windows 8 on my HP G62 laptop and I have been noticing a very strange problem with it. Left alone for, say, 5 minutes, without even touching it, it starts to heat up and reaches about 60 degrees Celsius with absolutely no applications open (not just on the desktop, but overall). I dug in a little deeper and found out that Maintenance was running when the computer was idle, so I turned that off from the system's Task Scheduler, and while there I also turned off other services I did not need, hoping that would solve the problem. After a few days I noticed that the average temperature of my laptop had dropped from 55 to 48 degrees while working in Visual Studio. And just when I thought the problem had disappeared, it still showed up, only not after 5 minutes but more like 10 minutes... Here is what I have done so far: replacing the thermal paste on the CPU and the fan and cleaning the fan (this was about 6 months ago); using a laptop cooler; running a virus scan (I just formatted my laptop, so it would be really weird if I had already caught something, but who knows). Right now I believe it has something to do with my graphics driver: even though it IS up to date, looking closely at the screen I can see the pixels slowly refresh (kind of like watching static on TV), which I wasn't able to see on Windows 7. If you have any ideas, let me know. Thanks


  • Having trouble keeping a 1GB RAM CentOS server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running Wordpress on a 1GB Centos server configured per the internet (online research). No custom queries or anything crazy but closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell will lock completely. We have to do a hard reboot from the control panel and then everything is fine. Watching "top" I see it hovering between 35 - 55% memory usage generally and occasional spikes up to around 80%. When I saw it go down, there were about 30 - 40 Apache processes showing which pushed memory over the edge. "error_log" tells me that maxclients was reached right before each reboot instance. I've tried tinkering with that but to no avail. I think we'll probably need to bump the server up to the next RAM level but with ~120K pageviews per month, it seems like that's a bit overkill since it was running fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating! Edit: quick top snapshot:

      top - 15:18:15 up 2 days, 13:04, 1 user, load average: 0.56, 0.44, 0.38
      Tasks: 85 total, 2 running, 83 sleeping, 0 stopped, 0 zombie
      Cpu(s): 6.7%us, 3.5%sy, 0.0%ni, 89.6%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
      Mem: 2051088k total, 736708k used, 1314380k free, 199576k buffers
      Swap: 4194300k total, 0k used, 4194300k free, 287688k cached
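
    Since MaxClients is being hit, one common sizing approach (a sketch, assuming Apache prefork and the default CentOS process name httpd) is to measure the average Apache process size and divide the memory that can be spared by it:

      # average resident size of the running Apache children, in KB
      ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum+=$8; n++} END {if (n) print "avg RSS (KB):", sum/n}'
      # MaxClients should then be roughly (RAM left after MySQL and the OS) / average RSS,
      # so that Apache can never spawn more children than physically fit in memory

    With a cap like that in place the site slows down under load instead of locking up.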


  • Connection between Asp.Net and Oracle 10g Express Edition

    - by l3gion
    Hello, I'm struggling to find a way to connect my Asp .Net + C# application with my Oracle 10g Express Edition. Here's my scenario, I'm at Mac OS and I have 2 Virtual machines, one for Win 7 (VS 2010 app) and another with a Parallels Virtual Appliance with Oracle 10g Express Edition 1.1. Which provider (Oledb, ODP.NET, etc..) should I use? How to make the connection to the server in C#? Right now I have this: <appSettings> <add key="conn" value="Data Source=10.211.55.11;Persist Security Info=True;User ID=l3gion;Password=l3gion;" /> </appSettings> And at the .cs file: SqlCommand cmd = new SqlCommand("insert_thing", new SqlConnection(ConfigurationManager.AppSettings["conn"])); cmd.CommandType = CommandType.StoredProcedure; *insert_thing is a stored procedure Using this I got this error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) I've searched for some possible solutions. Tried some, including: firewall disabled, allow remote connection at oracle express edition using this cmd line ("EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);").. The error persists. Can anyone guide me into the right direction? I'm a newbie with this type of things. Thank you for your patience. regards
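
    Two things may be worth separating here. First, SqlConnection/SqlCommand are the SQL Server client classes, which is why the error message talks about SQL Server; for Oracle the ODP.NET classes (OracleConnection/OracleCommand) or another Oracle provider are needed. Second, it helps to confirm the listener is reachable from the Windows VM at all before touching the C# code. A hedged sketch, assuming Oracle XE's default service name and port (XE, 1521) and the credentials from the connection string above:

      REM quick listener check from the Windows 7 VM (requires the Oracle client tools)
      tnsping 10.211.55.11
      REM or log in directly using EZConnect syntax
      sqlplus l3gion/l3gion@//10.211.55.11:1521/XE

    If sqlplus connects, the remaining work is purely on the provider/connection-string side.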


  • Reliable router with good VPN and WAN Throughput [closed]

    - by Asdande
    I have two Cisco RV180 VPN routers. These routers are giving me lots of problems: web pages won't load correctly, pages respond slowly, plus many other issues. I have several cases pending with Cisco and have given up on these routers. I would like to know if you can recommend a reliable router for our 3 branches (NY - main, SC and FL). In the NY main office we have 55 users, in the SC branch 6 users, and in Florida only 1 (will grow soon). I need a router capable of supporting: 3 site-to-site VPN connections; VPN throughput of at least 40-50 Mbps; WAN throughput of at least 100 Mbps and up; a PPTP server for at least 5 PPTP users; web filtering - all users need access to the internet; a good firewall; port forwarding for an FTP server - able to show the public IPs of FTP users (the RV180 cannot do that, it just shows me the router's LAN interface IP; I opened a case with Cisco, now escalated to level 2, still no answer or workaround); dual WAN ports for load balancing or backup internet; gigabit WAN/LAN ports; price in the $400-$500 range. I was considering the TP-LINK TL-ER6120 or TL-ER6020 according to the review on smallnetbuilder.com http://www.smallnetbuilder.com/lanwan/lanwan-reviews/31983-tp-link-tl-er6020-safestream-gigabit-dual-wan-vpn-router-reviewed but I don't want to make another mistake as I did when I bought the Cisco RV180. Thank you in advance,


  • Slow tracepath on local LAN

    - by Simone Falcini
    I am on EXSi and I have 2 instances: Ubuntu and CentOS. These are the network configurations Ubuntu eth0 Link encap:Ethernet HWaddr 00:50:56:00:1f:68 inet addr:212.83.153.71 Bcast:212.83.153.71 Mask:255.255.255.255 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:76059 errors:0 dropped:26 overruns:0 frame:0 TX packets:7224 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:6482760 (6.4 MB) TX bytes:2080684 (2.0 MB) eth1 Link encap:Ethernet HWaddr 00:0c:29:46:5a:f2 inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:252 errors:0 dropped:0 overruns:0 frame:0 TX packets:608 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:42460 (42.4 KB) TX bytes:82474 (82.4 KB) /etc/iptables.conf *nat :PREROUTING ACCEPT [142:12571] :INPUT ACCEPT [5:1076] :OUTPUT ACCEPT [8:496] :POSTROUTING ACCEPT [8:496] -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE COMMIT *filter :INPUT ACCEPT [2:72] :FORWARD ACCEPT [4:336] :OUTPUT ACCEPT [6:328] -A INPUT -i eth1 -p tcp -j ACCEPT -A INPUT -i eth1 -p udp -j ACCEPT -A INPUT -i eth0 -p tcp --dport ssh -j ACCEPT COMMIT CentOS eth0 Link encap:Ethernet HWaddr 00:0C:29:74:1C:55 inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe74:1c55/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:499 errors:0 dropped:0 overruns:0 frame:0 TX packets:475 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:68326 (66.7 KiB) TX bytes:82641 (80.7 KiB) The main problem is that if i execute this command from the CentOS instance ssh 192.168.1.2 it takes more than 20s to connect. It seems like it's routing the connection to the wrong network. What could it be? Thanks!
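
    Two quick checks that may narrow this down (a sketch; the addresses are the ones shown above). First, confirm which interface and route the connection actually uses; second, see whether the delay is a reverse-DNS timeout on the sshd side, which is a very common cause of a fixed 20-30 second pause:

      # which route/interface is used to reach the other box?
      ip route get 192.168.1.1
      ip route get 192.168.1.2
      # verbose client output shows exactly which step eats the 20 seconds
      ssh -vvv 192.168.1.1
      # on the sshd side, disabling reverse lookups in /etc/ssh/sshd_config often removes the delay:
      #   UseDNS no
      # followed by: service sshd restart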


  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my domain controllers and time service. On my backup DC I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a time server? My PDC should be the time server and I have gone through setting up the PDC as the time server. I was not around for the original setup of the time service with the old PDC and backup DC, but I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old PDC. Is it possible that the backup DC was set up as the time server and it still thinks it's supposed to be giving out time to everyone? The registry for the PDC has NTP; the registry for the backup has NT5D5. Results of w32tm /monitor:

      Getting AD DC list for default domain...
      Analyzing:delayoffset from DC1.local..com Stratum: 4
      delayoffset from DC1.local..com Stratum: 3
      Warning: Reverse name resolution is best effort. It may not be correct since RefID field in time packets differs across NTP implementations and may not be using IP addresses.
      DC2.local..com[192.168.1.8:123]: ICMP: 1ms NTP: -0.6349491s RefID: DC1.local..com [192.168.1.9]
      DC1.local..com *** PDC ***[192.168.1.9:123]: ICMP: 0ms NTP: +0.0000000s RefID: wwwco1test12.microsoft.com [65.55.21.20]
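
    If the backup DC really was hand-configured as an NTP client at some point, one way to put it back on normal domain-hierarchy sync is with w32tm (a sketch of standard usage, run on the backup DC):

      REM check what this DC currently syncs from; Type should be NT5DS for a non-PDC domain member
      w32tm /query /source
      w32tm /query /configuration
      REM point it back at the domain hierarchy and resync
      w32tm /config /syncfromflags:domhier /update
      w32tm /resync
      REM on the PDC emulator only: keep a manual external source and advertise as reliable, e.g.
      w32tm /config /manualpeerlist:"time.windows.com" /syncfromflags:manual /reliable:yes /update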


  • Fixing a corrupted Windows Server 2003 server

    - by Keith
    I have a Windows Server 2003 server that is being mainly used for some reporting done in SQL Server. Recently Windows has started complaining about being corrupted, we are getting an NTFS error 55: The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume \Device\HarddiskVolume1. The server is RAID 5 and I did have a disk die however the RAID never went degraded since I have a hotspare. I replaced the hot spare and I'm still having problems. When I run chkdsk I get tons of messages.. some are: Deleting corrupted attribute record (128, "") from file record segment 194746 Those go on for a while. Then it deletes some orphan files. Then it does Correcting error in index $I30 for file 132426 And that goes on for a while. Then I get tons of Recovering orphaned file RE1AB6~1.LOG into directory file 534959 I have seen a lot of errors relating to the SQL Server reporting services. What are my options at this point? I would prefer to fix the issue instead of building a new server but I don't know if I can at this point.


  • What LPR arguments do I need to print a 1400x800 pixel image on a 4x6 label?

    - by Nick
    This is driving me nuts. UPS sends our system a 1400x800 GIF image of a shipping label, which is supposed to fit nicely on a 4x6 page. Unfortunately, I can't seem to get the command line options right to make it happen. We're using an Eltron/Zebra 2844 with a network adapter, and printing from our Ubuntu 8.04 server using CUPS. We're using the correct drivers, and test pages print correctly. No matter what I try though, it insists on printing the UPS labels across 6 pages, with a little bit of the label on each page, or way too small. I've tried a bazillion different lpr settings, most of them producing garbage. The closest I've gotten is this:
      lpr -P Eltron2844 -o natural-scaling=55 -o page-right=0 -o page-left=0 -o landscape -o media="4x6" ./1ZY437560399620027.gif
    but it causes the image to be too small on the page. It's about an inch too short, and there's a 1/2" margin on both sides. If I bump the scale up to 56, it explodes the image onto two pages, and squashes it. Any ideas?
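
    One option that may be worth trying (a hedged sketch using standard CUPS options; the exact media/page-size names depend on what the 2844's PPD exposes) is to let CUPS scale the image to the page instead of using natural-scaling:

      # scale the 1400x800 GIF to fill the 4x6 label while keeping the aspect ratio
      lpr -P Eltron2844 -o fit-to-page -o landscape -o media="4x6" ./1ZY437560399620027.gif
      # list the page sizes the driver really accepts, to get the exact media name
      lpoptions -p Eltron2844 -l | grep -i pagesize

    With fit-to-page the scaling is computed from the page size rather than guessed as a percentage, which avoids having to hunt for the right natural-scaling value.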


  • Apache APC (Windows) Can I optimize these APC settings more?

    - by ar099968
    I would like to optimize APC some more but I am not sure where I could do something. First here is the stats after 1 week of running with the current configuration: General Cache Information APC Version 3.1.9 PHP Version 5.4.4 APC Host XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Server Software Apache Shared Memory 1 Segment(s) with 128.0 MBytes (IPC shared memory, Windows Slim RWLOCK (native) locking) Start Time 2014/06/08 05:00:00 Uptime 6 days, 11 hours and 55 minutes File Upload Support 1 Host Status Diagrams Memory Usage Free: 99.7 MBytes (77.9%) Used: 28.3 MBytes (22.1%) Hits & Misses Hits: 510818 (99.9%) Misses: 608 (0.1%) Detailed Memory Usage and Fragmentation Fragmentation: 0.60% (609.8 KBytes out of 99.7 MBytes in 83 fragments) File Cache Information Cached Files 693 ( 35.4 MBytes) Hits 5143359 Misses 1087 Request Rate (hits, misses) 13.24 cache requests/second Hit Rate 13.24 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.01 cache requests/second Cache full count 0 User Cache Information Cached Variables 0 ( 0.0 Bytes) Hits 0 Misses 0 Request Rate (hits, misses) 0.00 cache requests/second Hit Rate 0.00 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.00 cache requests/second Cache full count 0 Runtime Settings apc.cache_by_default 1 apc.canonicalize 1 apc.coredump_unmap 0 apc.enable_cli 0 apc.enabled 1 apc.file_md5 0 apc.file_update_protection 2 apc.filters -/apc.php$, -/apc_clean.php$, -.tpl.cache.php$, -.tpl.php$, -.string.cache.php$, -.string.php$ apc.gc_ttl 3600 apc.include_once_override 0 apc.lazy_classes 0 apc.lazy_functions 0 apc.max_file_size 2M apc.num_files_hint 7000 apc.preload_path apc.report_autofilter 0 apc.rfc1867 0 apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.serializer default apc.shm_segments 1 apc.shm_size 128M apc.shm_strings_buffer 4M apc.slam_defense 0 apc.stat 1 apc.stat_ctime 0 apc.ttl 7200 apc.use_request_time 1 apc.user_entries_hint 4096 apc.user_ttl 7200 apc.write_lock 1
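
    Going by the numbers above (99.7 MB of the 128 MB segment free, 0.6% fragmentation, a 99.9% hit rate and an unused user cache), there is not much headroom left to win; the obvious remaining candidates are shrinking the segment and, if the PHP files on disk only change at deploy time, skipping the per-request stat() calls. A hedged sketch of the relevant apc.ini lines (the values are illustrative, not measured):

      ; the file cache only holds ~35 MB, so a smaller segment still leaves room
      apc.shm_size = 64M
      ; stop stat()ing every included file on each request; requires clearing the cache (or restarting Apache) on deploys
      apc.stat = 0
      ; with apc.stat = 0 entries do not need to expire
      apc.ttl = 0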


  • Website latency and bad tcp packets

    - by Mistero Lupo
    I have multiple websites hosted on a Linode VPS and I'm having an issue with one of them: every page that I try to load has about 10 seconds latency. Apache logs are clean and the other websites on the same machine are running well. At a first glance I tought it was a memory problem since the VPS has got only 512M, but from the linode dashboard CPU and Disk I/O are normal. Anyway here we have the ram status: $ free -m total used free shared buffers cached Mem: 487 463 23 0 2 55 -/+ buffers/cache: 404 82 Swap: 255 155 100 Only 23M free, but if it was a memory problem why other websites are going as usual? I took a live capture with wireshark, and there are some duplicates SYN ACK packets just before the 10 seconds gap. I'm out of ideas, looking for some clues. Wireshark live capture screenshot As you can see from the image, the gap is after the last bad tcp. Thank you in advance. UPDATE I've checked Apache2 logs in debug error level, and this is where something is appening: 151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'index.php' 151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (1) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] pass through /home/fmaisi/sites/www.fmaisi.it/public_html/index.php 151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] strip per-dir prefix: /home/fmaisi/sites/www.fmaisi.it/public_html/wp-content/plugins/wp-filebase/wp-filebase_css.php -> wp-content/plugins/wp-filebase/wp-filebase_css.php 151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'wp-content/plugins/wp-filebase/wp-filebase_css.php' As you can see there is a gap of 14 seconds after the pass through of index.php. Any suggestions? I'm out of ideas again.


  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups based on a snapshot/incremental basis, that only capture changed data. I was surprised to see a regular period of time where the data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval. It is approximately every 26-32hrs. The disks were performing read operations of ~5MB/s and write operations of ~4.5MB/s, for a period of 3-4hrs. The total written data was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running, finding the following: the process is store.exe it is working on the mailbox database files while running, it is generating .log files (in the mailbox database folder) consistent with database changes the mailbox database is ~60GB in size, which fits with the total data changes on each iteration I have currently switched to a fixed maintenance window, as a test. It's not clear whether this is the cause, as the symptoms fit, but are not conclusive. Does anyone have any suggestions for additional troubleshooting?
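
    If checksumming (background database maintenance) is indeed the source of the change rate, it can be confirmed and controlled per database from the Exchange Management Shell. A sketch of the relevant cmdlets (the database name is a placeholder):

      # is 24x7 background maintenance enabled, and what is the maintenance window?
      Get-MailboxDatabase "Mailbox Database 01" | Format-List Name,BackgroundDatabaseMaintenance,MaintenanceSchedule
      # confine scanning to the scheduled maintenance window only (the full scan then takes longer to complete)
      Set-MailboxDatabase "Mailbox Database 01" -BackgroundDatabaseMaintenance $false

    Switching to a fixed maintenance window, as already tried above, is the same trade-off made through the schedule instead of the flag.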


  • DNS/Nameserver issue. Can't ping IP or domain

    - by Tar
    I get this when I ping an IP: 21:31:50.136623 IP SITE_IP > 173.194.33.4: ICMP echo request, id 14941, seq 1, length 64 21:31:51.136138 IP SITE_IP > 173.194.33.4: ICMP echo request, id 14941, seq 2, length 64 21:31:52.136118 IP SITE_IP > 173.194.33.4: ICMP echo request, id 14941, seq 3, length 64 21:31:53.136129 IP SITE_IP > 173.194.33.4: ICMP echo request, id 14941, seq 4, length 64 21:31:54.136102 IP SITE_IP > 173.194.33.4: ICMP echo request, id 14941, seq 5, length 64 21:31:55.136153 IP SITE_IP > 173.194.33.4: ICMP echo request, id 14941, seq 6, length 64 and when I ping a domain: 21:29:33.631583 IP 74.125.189.19.52085 > SITE_IP.domain: 28952 A? google.com.MY_DOMAIN. (42) 21:29:38.626553 IP SITE_IP.42280 > 8.8.4.4.domain: 52435+ A? google.com.MY_DOMAIN. (42) 21:29:38.652675 IP 74.125.189.22.63658 > SITE_IP.domain: 36178 A? google.com.MY_DOMAIN. (42) 21:29:43.631626 IP SITE_IP.48205 > 8.8.8.8.domain: 52435+ A? google.com.MY_DOMAIN. (42) The pinging of a domain is what worries me, because it looks like it is checking my DNS files for the resolution. Here is etc/resolv.conf nameserver 8.8.8.8 nameserver 8.8.4.4 /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 SITE_iP server.MY_DOMAIN.com server Will also add that I am seeing a number of 'SERVFAIL'.. I have no idea what could be causing this problem. If there is any other information I need to provide, let me know. I'm using CentOS.
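
    The queries for google.com.MY_DOMAIN suggest the resolver is appending a search/domain suffix before the name leaves the box, so a couple of checks may help (a sketch; 8.8.8.8 is already listed in resolv.conf above, and dig comes from the bind-utils package on CentOS):

      # a trailing dot makes the name fully qualified, bypassing any search list
      dig google.com. @8.8.8.8 +short
      ping -c 3 google.com.
      # see whether a 'search' or 'domain' line is adding the suffix
      grep -E '^(search|domain)' /etc/resolv.conf

    If the trailing-dot queries work but the plain ones do not, the SERVFAILs are coming from the appended MY_DOMAIN lookups rather than from the upstream resolvers.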


  • java memory allocation under linux

    - by pstanton
    I'm running 4 java processes with the following command: java -Xmx256m -jar ... and the system has 8Gb memory under fedora 12. however it is apparently going into swap. how can that be if 4 x 256m = 1Gb ? EDIT: also, how can all 8Gb of memory be used with so little memory allocated to basically the only thing running? is it java not garbage collecting because the OS tells it it doesn't need to or what? TOP: top - 20:13:57 up 3:55, 6 users, load average: 1.99, 2.54, 2.67 Tasks: 251 total, 6 running, 245 sleeping, 0 stopped, 0 zombie Cpu(s): 50.1%us, 2.9%sy, 0.0%ni, 45.1%id, 1.1%wa, 0.0%hi, 0.8%si, 0.0%st Mem: 8252304k total, 8195552k used, 56752k free, 34356k buffers Swap: 10354680k total, 74044k used, 10280636k free, 6624148k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1948 xxxxxxxx 20 0 1624m 240m 4020 S 96.8 3.0 164:33.75 java 1927 xxxxxxxx 20 0 139m 31m 27m R 91.8 0.4 38:34.55 postgres 1929 xxxxxxxx 20 0 1624m 200m 3984 S 86.2 2.5 183:24.88 java 1969 xxxxxxxx 20 0 1624m 292m 3984 S 65.6 3.6 154:06.76 java 1987 xxxxxxxx 20 0 137m 29m 27m R 28.5 0.4 75:49.82 postgres 1581 root 20 0 159m 18m 4712 S 22.5 0.2 52:42.54 Xorg 2411 xxxxxxxx 20 0 309m 9748 4544 S 20.9 0.1 45:05.08 gnome-system-mo 1947 xxxxxxxx 20 0 137m 28m 27m S 13.3 0.4 44:46.04 postgres 1772 xxxxxxxx 20 0 135m 25m 25m S 4.0 0.3 1:09.14 postgres 1966 xxxxxxxx 20 0 137m 29m 27m S 3.0 0.4 64:27.09 postgres 1773 xxxxxxxx 20 0 135m 732 624 S 1.0 0.0 0:24.86 postgres 2464 xxxxxxxx 20 0 15028 1156 744 R 0.7 0.0 0:49.14 top 344 root 15 -5 0 0 0 S 0.3 0.0 0:02.26 kdmflush 1 root 20 0 4124 620 524 S 0.0 0.0 0:00.88 init 2 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/0 4 root 15 -5 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/0
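
    It may help to separate what top reports from what the JVMs actually hold: -Xmx only caps the Java heap (not thread stacks, loaded code, or native allocations), and the 6.6 GB shown as "cached" is page cache the kernel reclaims on demand, not memory the processes own. A small sketch for checking the real per-process footprint:

      # resident (RSS) vs. virtual size of each java process; RSS is what actually occupies RAM
      ps -o pid,rss,vsz,args -C java
      # 'cached' and 'buffers' are reclaimable, so subtract them when judging memory pressure
      free -m

    Note also that the top output above shows only about 72 MB of swap actually in use.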


  • What should I use to ping multiple IPs and get notified of time outs?

    - by HumanVirus
    I've been using MultiPing to ping hundreds of IPs (from access points and such) and check their performance (packet loss, latency) and uptime. The program is very easy to use, but I was wondering if someone could recommend something that would work better and that would also work on Linux. The features I'm looking for are:
      Notification types: at least desktop notifications and SMS, but it would be great if it also had e-mail, IM, or other types of notifications. (MultiPing has some of these, but they don't work too well.)
      Being notified about the root problem only: since some devices are dependent on others, I'd like to be notified only about the root problem. E.g. let's say I have A[x.x.x.222] -> B[x.x.x.33] -> C[x.x.x.44] -> D[x.x.x.55], and B goes down, therefore C and D will also be down. Is it possible to get a notification only about B being down?
      Light on resources.
      Ideally multiplatform, or at least available for both Linux and Windows.
    I've heard about Nagios and Shinken being used for monitoring. Would you recommend that I use something of the sort or would that be too much for my needs? If using Nagios, Shinken, or similar software is recommended, can anyone tell me what sites I should go to or what books I should get that would be good for someone who is totally new at this? I'd appreciate any suggestions.


  • SSH very slow when connecting from external [closed]

    - by wnstnsmth
    Possible Duplicate: ssh delay when connecting. We have a CentOS server that we use for internal testing purposes, which has sshd enabled. When I (as a developer) am at the company, I use ssh [email protected].55.131 to connect to it - and it works flawlessly. Now, in order to work from home, accessing the server via the company's static IP, we set up another port for ssh, 2020. So I execute ssh -p 2020 [email protected] and am immediately prompted for a password. After entering the password, it takes up to 30 seconds until I can access the server. The same happens with SFTP (i.e. uploading files takes about 30 seconds until the transfer begins). As you can imagine, if you have to regularly upload files to a webserver via SFTP, this is very tedious. So I looked at similar questions and edited the sshd_config file on the server, setting UseDNS to "no" and GSSAPIAuthentication to "no" (this one also in ssh_config on the client) - it did not work. Please have a look at the -vvv output when externally accessing the server: ssh -p 2020 -vvv [email protected] PasteBin: ssh What could it be? Do you need more info?
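
    Since UseDNS and GSSAPI are already disabled, it may be quickest to watch the server side while a connection hangs. A sketch (assuming root on the CentOS box; 2222 is just a spare port for the test, and the address placeholder stands for the company's static IP):

      # run a one-off sshd in debug mode on a spare port and connect to it from outside
      /usr/sbin/sshd -d -p 2222
      # in parallel, watch the auth log for PAM- or DNS-related stalls
      tail -f /var/log/secure
      # client side: ssh -p 2222 -vvv user@<company-static-ip> shows which step eats the 30 seconds

    If the stall happens after authentication succeeds, the usual suspects are PAM modules doing their own network lookups or a slow login shell/profile rather than sshd itself.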


  • Internal Server Error with mod_wsgi [django] on Windows XP

    - by sacabuche
    when i run development server it works very well, even an empty project runing in mod_wsgi i have no problem but when i want to put my own project i get an Internal Server Error (500) in my apache conf i put WSGIScriptAlias /codevents C:/django/apache/CODEvents.wsgi <Directory "C:/django/apache"> Order allow,deny Allow from all </Directory> Alias /codevents/media C:/django/projects/CODEvents/html <Directory "C:/django/projects/CODEvents/html"> Order allow,deny Allow from all </Directory> in CODEvents.wsgi import os, sys sys.path.append('C:\\Python26\\Lib\\site-packages\\django') sys.path.append('C:\\django\\projects') sys.path.append('C:\\django\\apps') sys.path.append('C:\\django\\projects\\CODEvents') os.environ['DJANGO_SETTINGS_MODULE'] = 'CODEvents.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() and in my error_log it said: [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] mod_wsgi (pid=1848): Exception occurred processing WSGI script 'C:/django/apache/CODEvents.wsgi'. [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] Traceback (most recent call last): [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\wsgi.py", line 241, in __call__ [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] response = self.get_response(request) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\base.py", line 142, in get_response [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self.handle_uncaught_exception(request, resolver, exc_info) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\base.py", line 166, in handle_uncaught_exception [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return debug.technical_500_response(request, *exc_info) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\views\\debug.py", line 58, in technical_500_response [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] html = reporter.get_traceback_html() [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\views\\debug.py", line 137, in get_traceback_html [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return t.render(c) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 173, in render [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self._render(context) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 167, in _render [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self.nodelist.render(context) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 796, in render [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] bits.append(self.render_node(node, context)) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\debug.py", line 72, in render_node [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] result = node.render(context) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\debug.py", line 89, in render [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] output = 
self.filter_expression.resolve(context) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 579, in resolve [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] new_obj = func(obj, *arg_vals) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\defaultfilters.py", line 693, in date [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return format(value, arg) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 281, in format [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return df.format(format_string) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 30, in format [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] pieces.append(force_unicode(getattr(self, piece)())) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 187, in r [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self.format('D, j M Y H:i:s O') [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 30, in format [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] pieces.append(force_unicode(getattr(self, piece)())) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\encoding.py", line 66, in force_unicode [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] s = unicode(s) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\functional.py", line 206, in __unicode_cast [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self.__func(*self.__args, **self.__kw) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\__init__.py", line 55, in ugettext [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return real_ugettext(message) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\functional.py", line 55, in _curried [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs)) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\__init__.py", line 36, in delayed_loader [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return getattr(trans, real_name)(*args, **kwargs) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 276, in ugettext [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return do_translate(message, 'ugettext') [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 266, in do_translate [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] _default = translation(settings.LANGUAGE_CODE) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 176, in translation [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] default_translation = _fetch(settings.LANGUAGE_CODE) [Mon May 24 23:31:39 2010] [error] [client 
127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 159, in _fetch [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] app = import_module(appname) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\importlib.py", line 35, in import_module [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] __import__(name) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\contrib\\admin\\__init__.py", line 1, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\contrib\\admin\\helpers.py", line 1, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] from django import forms [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\forms\\__init__.py", line 17, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] from models import * [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\forms\\models.py", line 6, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] from django.db import connections [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\db\\__init__.py", line 75, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] connection = connections[DEFAULT_DB_ALIAS] [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\db\\utils.py", line 91, in __getitem__ [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] backend = load_backend(db['ENGINE']) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\db\\utils.py", line 49, in load_backend [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] raise ImproperlyConfigured(error_msg) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend. [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] Try using django.db.backends.XXX, where XXX is one of: [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] Error was: cannot import name utils please help me!!
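
    The final error above ("'django.db.backends.postgresql_psycopg2' isn't an available database backend ... Error was: cannot import name utils") usually means the psycopg2 module cannot be imported by the Python interpreter that mod_wsgi embeds, even if it works from the development server. A quick check from the command line (a sketch, assuming Apache is built against the same C:\Python26 interpreter):

      REM does the interpreter Apache uses see psycopg2 at all?
      C:\Python26\python.exe -c "import psycopg2; print psycopg2.__version__"
      REM if this fails, install a psycopg2 build matching Python 2.6 (32-bit, to match Apache), then restart Apache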


  • Error when Eclipse started and now my package explorer is empty!

    - by carpenteri
    Friends, Just a quick introduction, I'm currently learning Java, using a combination of the Head First Java book and Eclipse. Everything was going well until tonight! When I started up Eclipse tonight, I saw an error message which I didn't pay attention to (I know! I know!) and acknowledged after which the project explorer was empty where it used to contain my Head First project! After a quick "google" I found the workspace.metadata.log and the errors are shown below. The version of Eclipse I am using is: 20100218-1602 and the only plugin that I use is egit. Any help would be much appreciated. Thanks !SESSION 2010-06-08 19:24:33.841 ----------------------------------------------- eclipse.buildId=unknown java.version=1.5.0_22 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_GB Framework arguments: -product org.eclipse.epp.package.java.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product !ENTRY org.eclipse.ui.workbench 4 2 2010-06-08 19:24:36.475 !MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench". !STACK 1 org.eclipse.ui.WorkbenchException: Content is not allowed in prolog. at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:121) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) ... 6 more !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475 !MESSAGE Content is not allowed in prolog. !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475 !MESSAGE Content is not allowed in prolog. !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. 
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94) at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64) at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890) at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183) at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781) !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:41.442 !MESSAGE Internal Error !STACK 1 org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'OpenTypeHistory.xml' at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70) at org.eclipse.jdt.internal.corext.util.History.load(History.java:257) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185) at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381) at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) ... 6 more !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:41.442 !MESSAGE Problems reading information from XML 'OpenTypeHistory.xml' !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. 
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199) at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185) at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381) at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:50.435 !MESSAGE Internal Error !STACK 1 org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'QualifiedTypeNameHistory.xml' at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70) at org.eclipse.jdt.internal.corext.util.History.load(History.java:257) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26) at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836) at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474) at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546) at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098) at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593) at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261) at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216) at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266) at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685) at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583) at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311) Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) ... 
25 more !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:50.435 !MESSAGE Problems reading information from XML 'QualifiedTypeNameHistory.xml' !STACK 0 org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.eclipse.jdt.internal.corext.util.History.load(History.java:255) at org.eclipse.jdt.internal.corext.util.History.load(History.java:166) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33) at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26) at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836) at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474) at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546) at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098) at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593) at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261) at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216) at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266) at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685) at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583) at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311)
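
    The repeated "Content is not allowed in prolog" SAXParseExceptions point at workspace metadata XML files (workbench.xml, OpenTypeHistory.xml, QualifiedTypeNameHistory.xml) that were truncated or corrupted, typically after a crash or forced shutdown. A hedged recovery sketch; the paths below are the standard Eclipse workspace layout, and the project files themselves are not touched:

      REM 1. close Eclipse, then back up the workspace metadata before changing anything
      xcopy /E /I workspace\.metadata workspace\.metadata.bak
      REM 2. remove the corrupted UI state and the JDT history files named in the log
      del workspace\.metadata\.plugins\org.eclipse.ui.workbench\workbench.xml
      del workspace\.metadata\.plugins\org.eclipse.jdt.ui\OpenTypeHistory.xml
      del workspace\.metadata\.plugins\org.eclipse.jdt.ui\QualifiedTypeNameHistory.xml
      REM 3. restart with -clean so cached plug-in data is rebuilt
      eclipse.exe -clean
      REM 4. if the Package Explorer is still empty, use File > Import > General > Existing Projects into Workspace

    Deleting workbench.xml loses only window layout and open-editor state, not source code.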

    Read the article

  • Problem deploying portlets on JBoss Portal 2.7.2: Not a canonical value

    - by Dee
    I just downloaded JBoss Portal Server 2.7.2 (the JBoss Portal + JBoss AS 4.2.3 bundle, to be precise) and tried deploying portlets such as the SimpleHelloWorld provided in the samples. The portlet deploys fine, but when I put it on a page I get the exception below. I tried adding other portlets as well (such as the booking MVC portlet supplied with the Spring WebFlow distribution) and the same problem happens. The problem occurs with instances I create myself: for example, when I create a new instance of the CMS portlet, I get the same error, whereas using an existing instance works. If I deploy a portlet that creates an instance through "portlet-instances.xml" it also works fine (see the descriptor sketch after the stack trace below), but creating additional instances through the admin console and placing them on a page fails with the following error. What am I doing wrong? Can anyone please help? javax.servlet.ServletException: java.lang.IllegalArgumentException: Not a canonical value SimplestHelloWorldWindow org.jboss.portal.server.servlet.PortalServlet.service(PortalServlet.java:278) javax.servlet.http.HttpServlet.service(HttpServlet.java:803) org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) root cause java.lang.IllegalArgumentException: Not a canonical value SimplestHelloWorldWindow org.jboss.portal.core.model.portal.PortalObjectPath$CanonicalFormat.parse(PortalObjectPath.java:357) org.jboss.portal.core.model.portal.PortalObjectPath.<init>(PortalObjectPath.java:161) org.jboss.portal.core.model.portal.PortalObjectPath.parse(PortalObjectPath.java:314) org.jboss.portal.core.model.portal.PortalObjectId.parse(PortalObjectId.java:158) org.jboss.portal.core.model.portal.PortalObjectId.parse(PortalObjectId.java:143) org.jboss.portal.core.model.portal.navstate.PortalObjectNavigationalStateContext.createWindowKey(PortalObjectNavigationalStateContext.java:299) org.jboss.portal.core.model.portal.navstate.PortalObjectNavigationalStateContext.getWindowNavigationalState(PortalObjectNavigationalStateContext.java:194) org.jboss.portal.core.controller.portlet.ControllerPageNavigationalState.getPortletWindowNavigationalState(ControllerPageNavigationalState.java:230) org.jboss.portal.core.model.portal.command.render.RenderWindowCommand.getPortletNavigationalState(RenderWindowCommand.java:121) org.jboss.portal.core.impl.model.content.InternalContentProvider.renderWindow(InternalContentProvider.java:211) org.jboss.portal.core.model.portal.command.render.RenderWindowCommand.execute(RenderWindowCommand.java:100) org.jboss.portal.core.controller.ControllerCommand$1.invoke(ControllerCommand.java:68) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:131) org.jboss.portal.core.aspects.controller.node.EventBroadcasterInterceptor.invoke(EventBroadcasterInterceptor.java:124) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.PageCustomizerInterceptor.invoke(PageCustomizerInterceptor.java:134) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.PolicyEnforcementInterceptor.invoke(PolicyEnforcementInterceptor.java:78) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) 
org.jboss.portal.core.aspects.controller.node.PortalNodeInterceptor.invoke(PortalNodeInterceptor.java:81) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.BackwardCompatibilityInterceptor.invoke(BackwardCompatibilityInterceptor.java:48) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.ControlInterceptor.invoke(ControlInterceptor.java:56) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.NavigationalStateInterceptor.invoke(NavigationalStateInterceptor.java:42) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.controller.ajax.AjaxInterceptor.invoke(AjaxInterceptor.java:55) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.ResourceAcquisitionInterceptor.invoke(ResourceAcquisitionInterceptor.java:50) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.common.invocation.Invocation.invoke(Invocation.java:157) org.jboss.portal.core.controller.ControllerContext.execute(ControllerContext.java:134) org.jboss.portal.core.model.portal.command.render.RenderWindowCommand.render(RenderWindowCommand.java:80) org.jboss.portal.core.model.portal.command.render.RenderPageCommand.execute(RenderPageCommand.java:222) org.jboss.portal.core.controller.ControllerCommand$1.invoke(ControllerCommand.java:68) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:131) org.jboss.portal.core.aspects.controller.node.EventBroadcasterInterceptor.invoke(EventBroadcasterInterceptor.java:124) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.PageCustomizerInterceptor.invoke(PageCustomizerInterceptor.java:134) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.PolicyEnforcementInterceptor.invoke(PolicyEnforcementInterceptor.java:78) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.node.PortalNodeInterceptor.invoke(PortalNodeInterceptor.java:81) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.BackwardCompatibilityInterceptor.invoke(BackwardCompatibilityInterceptor.java:48) 
org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.ControlInterceptor.invoke(ControlInterceptor.java:56) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.NavigationalStateInterceptor.invoke(NavigationalStateInterceptor.java:42) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.controller.ajax.AjaxInterceptor.invoke(AjaxInterceptor.java:55) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.controller.ResourceAcquisitionInterceptor.invoke(ResourceAcquisitionInterceptor.java:50) org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.common.invocation.Invocation.invoke(Invocation.java:157) org.jboss.portal.core.controller.ControllerContext.execute(ControllerContext.java:134) org.jboss.portal.core.model.portal.PortalObjectResponseHandler.processCommandResponse(PortalObjectResponseHandler.java:80) org.jboss.portal.core.controller.classic.ClassicResponseHandler.processHandlers(ClassicResponseHandler.java:78) org.jboss.portal.core.controller.classic.ClassicResponseHandler.processCommandResponse(ClassicResponseHandler.java:53) org.jboss.portal.core.controller.handler.ResponseHandlerSelector.processCommandResponse(ResponseHandlerSelector.java:70) org.jboss.portal.core.controller.Controller.processCommandResponse(Controller.java:315) org.jboss.portal.core.controller.Controller.processCommand(Controller.java:303) org.jboss.portal.core.controller.Controller.handle(Controller.java:261) org.jboss.portal.server.RequestControllerDispatcher.invoke(RequestControllerDispatcher.java:51) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:131) org.jboss.portal.core.cms.aspect.IdentityBindingInterceptor.invoke(IdentityBindingInterceptor.java:47) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.server.aspects.server.ContentTypeInterceptor.invoke(ContentTypeInterceptor.java:68) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.server.PortalContextPathInterceptor.invoke(PortalContextPathInterceptor.java:45) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.server.LocaleInterceptor.invoke(LocaleInterceptor.java:96) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.server.UserInterceptor.invoke(UserInterceptor.java:196) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) 
org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.server.aspects.server.SignOutInterceptor.invoke(SignOutInterceptor.java:98) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.impl.api.user.UserEventBridgeTriggerInterceptor.invoke(UserEventBridgeTriggerInterceptor.java:65) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.server.IdentityCacheInterceptor.invoke(IdentityCacheInterceptor.java:68) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.core.aspects.server.TransactionInterceptor.org$jboss$portal$core$aspects$server$TransactionInterceptor$invoke$aop(TransactionInterceptor.java:49) org.jboss.portal.core.aspects.server.TransactionInterceptor$invoke_N5143606530999904530.invokeNext(TransactionInterceptor$invoke_N5143606530999904530.java) org.jboss.aspects.tx.TxPolicy.invokeInOurTx(TxPolicy.java:79) org.jboss.aspects.tx.TxInterceptor$RequiresNew.invoke(TxInterceptor.java:253) org.jboss.portal.core.aspects.server.TransactionInterceptor$invoke_N5143606530999904530.invokeNext(TransactionInterceptor$invoke_N5143606530999904530.java) org.jboss.aspects.tx.TxPolicy.invokeInOurTx(TxPolicy.java:79) org.jboss.aspects.tx.TxInterceptor$RequiresNew.invoke(TxInterceptor.java:262) org.jboss.portal.core.aspects.server.TransactionInterceptor$invoke_N5143606530999904530.invokeNext(TransactionInterceptor$invoke_N5143606530999904530.java) org.jboss.portal.core.aspects.server.TransactionInterceptor.invoke(TransactionInterceptor.java) org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.server.aspects.LockInterceptor$InternalLock.invoke(LockInterceptor.java:69) org.jboss.portal.server.aspects.LockInterceptor.invoke(LockInterceptor.java:130) org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115) org.jboss.portal.common.invocation.Invocation.invoke(Invocation.java:157) org.jboss.portal.server.servlet.PortalServlet.service(PortalServlet.java:252) javax.servlet.http.HttpServlet.service(HttpServlet.java:803) org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
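For reference, the declarative approach that does work for me looks roughly like the descriptors below. The element names are reproduced from memory of the bundled samples and the 2.7 reference guide, so treat them as approximate rather than authoritative. The portlet-instances.xml creates the instance:

<deployments>
  <deployment>
    <instance>
      <instance-id>SimplestHelloWorldInstance</instance-id>
      <portlet-ref>SimplestHelloWorldPortlet</portlet-ref>
    </instance>
  </deployment>
</deployments>

and a *-object.xml places a window for that instance on a page under the default portal, so the window ends up with a full canonical path such as default.default.SimplestHelloWorldWindow:

<deployments>
  <deployment>
    <parent-ref>default.default</parent-ref>
    <if-exists>overwrite</if-exists>
    <window>
      <window-name>SimplestHelloWorldWindow</window-name>
      <instance-ref>SimplestHelloWorldInstance</instance-ref>
      <region>center</region>
      <height>1</height>
    </window>
  </deployment>
</deployments>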

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload the blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, that is, once you have received a big file, stream or binary content, how to upload it into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.

Traditional Upload, Works with Limitations

The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. The code has been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
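As a quick aside on the web.config limitation mentioned earlier: the ASP.NET and IIS request-size ceilings live in web.config, roughly as sketched below. The values here are illustrative assumptions, not recommendations, and even with these raised the Windows Azure load balancer timeout still applies, which is why the rest of this post moves to chunked upload.

<!-- web.config sketch: raise the request size limits (illustrative values) -->
<system.web>
  <!-- maxRequestLength is in KB, executionTimeout in seconds -->
  <httpRuntime maxRequestLength="2097151" executionTimeout="3600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in bytes (about 4GB here) -->
      <requestLimits maxAllowedContentLength="4294967295" />
    </requestFiltering>
  </security>
</system.webServer>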
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.
Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- There is no significant performance difference between 4 and 8 parallel threads.
- Transferring binary chunks performs better than base64.
- In all cases, a chunk size of 1MB - 2MB gives the best performance.
Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Service Discovery in WCF 4.0 &ndash; Part 1

    - by Shaun
    When designing a service-oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides clients calling services, in a large distributed system a service may invoke other services, in which case it needs to know the endpoints of the services it invokes. This might not be a problem in a small system, but when you have more than 10 services it becomes one. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

Benefit of a Discovery Service

Since almost all my services need to invoke at least one other service, it would be difficult to make sure every service's endpoints are configured correctly in every other service, and it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints of every service. With a discovery service, when service X wants to invoke service Y, it just asks the discovery service where service Y is; the discovery service returns all suitable endpoints of service Y, and service X uses one of them to send its request. And when a service changes its endpoint address, all it needs to do is update its record in the discovery service, and every other service will see the new endpoint. WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service: when a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria; all services that enabled the discovery behavior receive this message, and only the matching services send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to MSDN.

Service Announcement and Probe

The main functionality of a discovery service is to return the proper endpoint addresses to whoever is looking for them. In most cases the consuming service (acting as a client) sends the contract it wants to invoke to the discovery service, and the discovery service finds the matching endpoints and responds. Sometimes the contract and endpoint are not enough; the metadata may also contain versioning and extension attributes. In this post I will only cover the case that includes the contract and endpoint. When a client (or a service that needs to invoke another service) needs to connect to a target service, it first queries the discovery service through the "Probe" operation with its criteria. Basically, the criteria contain the contract type name of the target service. The discovery service then searches its endpoint repository with the criteria. The repository might be a database, a distributed cache or a flat XML file. If there is a match, the discovery service grabs the endpoint information (called endpoint discovery metadata in WCF) and sends it back. This is called "Probe". 
Finally, the client receives the endpoint discovery metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service is responsible for knowing when a new service becomes available (goes online) and when a service stops (goes offline). This feature is called "Announcement": when a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service should include:
1. An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
2. An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
3. An endpoint which receives the client probe message, finds the matching service endpoints and returns their endpoint discovery metadata.
WCF 4.0 covers all of these features in its infrastructure classes.

Discovery Service in WCF 4.0

WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the classes and interfaces necessary to build a WS-Discovery compliant discovery service. It supports both ad-hoc and managed proxy modes. For the case described in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service are asynchronous, for better scalability and performance.
1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We add the endpoint information to the repository here.
2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We remove the endpoint information from the repository here.
3. OnBeginFind, OnEndFind: invoked when a client sends the probe message to find service endpoint information. We look up the proper endpoints in the repository by matching the client's criteria here.
4. OnBeginResolve, OnEndResolve: invoked when a client sends the resolve message. Unlike find, resolve returns exactly one service endpoint metadata to the client. In our example we will NOT implement this method.
Let's create our own discovery service that inherits from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class: since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let’s have a test. Firstly start the discovery service, and then start our discoverable service. When it started it will announced to the discovery service and registered its endpoint into the repository, which is the local dictionary. And then start the client and type something. As you can see the client asked the discovery service for the endpoint and then establish the connection to the discoverable service. And more interesting, do NOT close the client console but terminate the discoverable service but press the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it started, currently it should be hosted on another address. If we enter something in the client we could see that it asked the discovery service and retrieve the new endpoint, and connect the the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purpose, in this example I used the in memory dictionary as the discovery endpoint metadata repository. And when finding I also just return the first matched endpoint back. I also hard coded the bindings between the discoverable service and the client. In next post I will show you how to solve the problem mentioned above, as well as some additional feature for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage   Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS; but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM (npm install azure). Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you have played with WACS you will know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime, which means we can have these values defined in the CSCFG and CSDEF files and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for the cloud and local environments. We can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment", which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the configuration variables and then used the SDK to retrieve their values and initialize the table client.

1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: });

In this way we don't need to amend the code when switching between the local and cloud environments, since the service runtime will take care of the configuration. At the end of the code we also make the application listen on the port retrieved from the SDK.

1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: });

But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that the instance can connect to the runtime and retrieve values. In this case, since the entry point is the worker role class and Node.js is launched inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line, roughly as sketched below. With that, the Node.js application has the connection to the service runtime and is able to read the configurations. Starting Node.js in the local emulator, we can see that it retrieves the connection string and storage account for the local environment.
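The exact markup depends on the Azure SDK version, and the role name and command line below are assumptions based on this example's "node.exe" and "index.js"; but the Runtime/EntryPoint fragment in the worker role's CSDEF would look roughly like this:

<WorkerRole name="WorkerRole1">
  <Runtime>
    <EntryPoint>
      <!-- making node.exe the entry point lets the service runtime open its named pipe against the Node.js process -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
  <!-- endpoints, configuration settings and imports stay as generated -->
</WorkerRole>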
And if we publish our application to Azure, it works against WASD and the storage service through the cloud configurations.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it is simple, easy to learn and deploy, and utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Setting custom UITableViewCells height

    - by Vijayeta
    I am using a custum UITableViewCell which has some labels, buttons and imageviews to be displayed. There is one label in the cell whose text is a NSString object and the length of string could be variable. Due to this, I cannot set a constant height to the cell in the UITableViews: heightForCellAtIndex method. The cell's height depends on the labels height which can be determined using the NSStrings sizeWithFont method. I tried using it, but it looks like I'm going wrong somewhere. How can it be fixed? Here is the code used for initializing the cell. if (self = [super initWithFrame:frame reuseIdentifier:reuseIdentifier]) { self.selectionStyle = UITableViewCellSelectionStyleNone; UIImage *image = [UIImage imageNamed:@"dot.png"]; imageView = [[UIImageView alloc] initWithImage:image]; imageView.frame = CGRectMake(45.0,10.0,10,10); headingTxt = [[UILabel alloc] initWithFrame: CGRectMake(60.0,0.0,150.0,post_hdg_ht)]; [headingTxt setContentMode: UIViewContentModeCenter]; headingTxt.text = postData.user_f_name; headingTxt.font = [UIFont boldSystemFontOfSize:13]; headingTxt.textAlignment = UITextAlignmentLeft; headingTxt.textColor = [UIColor blackColor]; dateTxt = [[UILabel alloc] initWithFrame:CGRectMake(55.0,23.0,150.0,post_date_ht)]; dateTxt.text = postData.created_dtm; dateTxt.font = [UIFont italicSystemFontOfSize:11]; dateTxt.textAlignment = UITextAlignmentLeft; dateTxt.textColor = [UIColor grayColor]; NSString * text1 = postData.post_body; NSLog(@"text length = %d",[text1 length]); CGRect bounds = [UIScreen mainScreen].bounds; CGFloat tableViewWidth; CGFloat width = 0; tableViewWidth = bounds.size.width/2; width = tableViewWidth - 40; //fudge factor //CGSize textSize = {width, 20000.0f}; //width and height of text area CGSize textSize = {245.0, 20000.0f}; //width and height of text area CGSize size1 = [text1 sizeWithFont:[UIFont systemFontOfSize:11.0f] constrainedToSize:textSize lineBreakMode:UILineBreakModeWordWrap]; CGFloat ht = MAX(size1.height, 28); textView = [[UILabel alloc] initWithFrame:CGRectMake(55.0,42.0,245.0,ht)]; textView.text = postData.post_body; textView.font = [UIFont systemFontOfSize:11]; textView.textAlignment = UITextAlignmentLeft; textView.textColor = [UIColor blackColor]; textView.lineBreakMode = UILineBreakModeWordWrap; textView.numberOfLines = 3; textView.autoresizesSubviews = YES; [self.contentView addSubview:imageView]; [self.contentView addSubview:textView]; [self.contentView addSubview:webView]; [self.contentView addSubview:dateTxt]; [self.contentView addSubview:headingTxt]; [self.contentView sizeToFit]; [imageView release]; [textView release]; [webView release]; [dateTxt release]; [headingTxt release]; } textView = [[UILabel alloc] initWithFrame:CGRectMake(55.0,42.0,245.0,ht)]; this is the label whose height and width are going wrong.
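One way to feed that measurement back to the table view, as a rough sketch (postBodyForIndexPath: is a hypothetical helper returning the same post_body string the cell displays; the 42.0 offset, the 28.0 minimum and the padding come from the layout above):

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    NSString *text = [self postBodyForIndexPath:indexPath]; // hypothetical accessor for the row's post body
    CGSize constraint = CGSizeMake(245.0f, 20000.0f);
    CGSize size = [text sizeWithFont:[UIFont systemFontOfSize:11.0f]
                   constrainedToSize:constraint
                       lineBreakMode:UILineBreakModeWordWrap];
    // the label starts at y = 42.0 inside the cell; add a little bottom padding
    return MAX(size.height, 28.0f) + 42.0f + 10.0f;
}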

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >