Search Results

Search found 1036 results on 42 pages for 'anti'.


  • SpamAssassin on Windows

    - by ebeworld
    I just installed spam assassin and run for its sample ham mail as spamassassin sample-nonspam.txt, but it ended up marking it as a spam. What configuration am i missing to change? Result of the check is: From: Keith Dawson To: [email protected] Subject: **SPAM** TBTF ping for 2001-04-20: Reviving Date: Fri, 20 Apr 2001 16:59:58 -0400 Message-Id: X-Spam-Flag: YES X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on ebeworld-PC X-Spam-Level: **** X-Spam-Status: Yes, score=10.5 required=6.3 tests=DCC_CHECK,DIGEST_MULTIPLE, DNS_FROM_OPENWHOIS,RAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E4_51_100, RAZOR2_CHECK shortcircuit=no autolearn=no version=3.2.3 MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----------=_4BF17E8E.BF8E0000" This is a multi-part message in MIME format. ------------=_4BF17E8E.BF8E0000 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit This mail is probably spam. The original message has been attached intact in RFC 822 format. Content preview: -----BEGIN PGP SIGNED MESSAGE----- TBTF ping for 2001-04-20: Reviving T a s t y B i t s f r o m t h e T e c h n o l o g y F r o n t [...] Content analysis details: (10.5 points, 6.3 required) 2.4 DNS_FROM_OPENWHOIS RBL: Envelope sender listed in bl.open-whois.org. 1.5 RAZOR2_CF_RANGE_E4_51_100 Razor2 gives engine 4 confidence level above 50% [cf: 58] 2.5 RAZOR2_CHECK Listed in Razor2 (http://razor.sf.net/) 0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50% [cf: 58] 3.6 DCC_CHECK Listed in DCC (http://rhyolite.com/anti-spam/dcc/) 0.0 DIGEST_MULTIPLE Message hits more than one network digest check ------------=_4BF17E8E.BF8E0000 Content-Type: message/rfc822; x-spam-type=original Content-Description: original message before SpamAssassin Content-Disposition: inline Content-Transfer-Encoding: 8bit Return-Path: Delivered-To: [email protected] Received: from europe.std.com (europe.std.com [199.172.62.20]) by mail.netnoteinc.com (Postfix) with ESMTP id 392E1114061 for ; Fri, 20 Apr 2001 21:34:46 +0000 (Eire) Received: (from daemon@localhost) by europe.std.com (8.9.3/8.9.3) id RAA09630 for tbtf-outgoing; Fri, 20 Apr 2001 17:31:18 -0400 (EDT) Received: from sgi04-e.std.com (sgi04-e.std.com [199.172.62.134]) by europe.std.com (8.9.3/8.9.3) with ESMTP id RAA08749 for ; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: from world.std.com (world-f.std.com [199.172.62.5]) by sgi04-e.std.com (8.9.3/8.9.3) with ESMTP id RAA8278330 for ; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: (from dawson@localhost) by world.std.com (8.9.3/8.9.3) id RAA26781 for [email protected]; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: from sgi04-e.std.com (sgi04-e.std.com [199.172.62.134]) by europe.std.com (8.9.3/8.9.3) with ESMTP id RAA07541 for ; Fri, 20 Apr 2001 17:12:06 -0400 (EDT) Received: from world.std.com (world-f.std.com [199.172.62.5]) by sgi04-e.std.com (8.9.3/8.9.3) with ESMTP id RAA8416421 for ; Fri, 20 Apr 2001 17:12:06 -0400 (EDT) Received: from [208.192.102.193] (ppp0c199.std.com [208.192.102.199]) by world.std.com (8.9.3/8.9.3) with ESMTP id RAA14226 for ; Fri, 20 Apr 2001 17:12:04 -0400 (EDT) Mime-Version: 1.0 Message-Id: Date: Fri, 20 Apr 2001 16:59:58 -0400 To: [email protected] From: Keith Dawson Subject: TBTF ping for 2001-04-20: Reviving Content-Type: text/plain; charset="us-ascii" Sender: [email protected] Precedence: list Reply-To: [email protected] -----BEGIN PGP SIGNED MESSAGE----- TBTF ping for 2001-04-20: Reviving T a s t y B i t s f r o m t h e T e c h n 
o l o g y F r o n t Timely news of the bellwethers in computer and communications technology that will affect electronic commerce -- since 1994 Your Host: Keith Dawson ISSN: 1524-9948 This issue: < http://tbtf.com/archive/2001-04-20.html > To comment on this issue, please use this forum at Quick Topic: < http://www.quicktopic.com/tbtf/H/kQGJR2TXL6H > ________________________________________________________________________ Q u o t e O f T h e M o m e n t Even organizations that promise "privacy for their customers" rarely if ever promise "continued privacy for their former customers..." Once you cancel your account with any business, their promises of keeping the information about their customers private no longer apply... you're not a customer any longer. This is in the large category of business behaviors that individuals would consider immoral and deceptive -- and businesses know are not illegal. -- "_ankh," writing on the XNStalk mailing list ________________________________________________________________________ ..TBTF's long hiatus is drawing to a close Hail subscribers to the TBTF mailing list. Some 2,000 [1] of you have signed up since the last issue [2] was mailed on 2000-07-20. This brief note is the first of several I will send to this list to excise the dead addresses prior to resuming regular publication. While you time the contractions of the newsletter's rebirth, I in- vite you to read the TBTF Log [3] and sign up for its separate free subscription. Send "subscribe" (no quotes) with any subject to [email protected] . I mail out collected Log items on Sun- days. If you need to stay more immediately on top of breaking stories, pick up the TBTF Log's syndication file [4] or read an aggregator that does. Examples are Slashdot's Cheesy Portal [5], Userland [6], and Sitescooper [7]. If your news obsession runs even deeper and you own an SMS-capable cell phone or PDA, sign up on TBTF's WebWire- lessNow portal [8]. A free call will bring you the latest TBTF Log headline, Jargon Scout [9] find, or Siliconium [10]. Two new columnists have bloomed on TBTF since last summer: Ted By- field's roving_reporter [11] and Gary Stock's UnBlinking [12]. Late- ly Byfield has been writing in unmatched depth about ICANN, but the roving_reporter nym's roots are in commentary at the intersection of technology and culture. Stock's UnBlinking latches onto topical sub- jects and pursues them to the ends of the Net. These writers' voices are compelling and utterly distinctive. [1] http://tbtf.com/growth.html [2] http://tbtf.com/archive/2000-07-20.html [3] http://tbtf.com/blog/ [4] http://tbtf.com/tbtf.rdf [5] http://www.slashdot.org/cheesyportal.shtml [6] http://my.userland.com/ [7] http://www.sitescooper.org/ [8] http://tbtf.com/pull-wwn/ [9] http://tbtf.com/jargon-scout.html [10] http://tbtf.com/siliconia.html [11] http://tbtf.com/roving_reporter/ [12] http://tbtf.com/unblinking/ ________________________________________________________________________ S o u r c e s For a complete list of TBTF's email and Web sources, see http://tbtf.com/sources.html . ________________________________________ B e n e f a c t o r s TBTF is free. If you get value from this publication, please visit the TBTF Benefactors page < http://tbtf.com/the-benefactors.html > and consider contributing to its upkeep. ________________________________________________________________________ TBTF home and archive at http://tbtf.com/ . To unsubscribe send the message "unsubscribe" to [email protected]. 
TBTF is Copy- right 1994-2000 by Keith Dawson, <[email protected]>. Commercial use prohibited. For non-commercial purposes please forward, post, and link as you see fit. _______________________________________________ Keith Dawson [email protected] Layer of ash separates morning and evening milk. -----BEGIN PGP SIGNATURE----- Version: PGPfreeware 6.5.2 for non-commercial use http://www.pgp.com iQCVAwUBOuCi3WAMawgf2iXRAQHeAQQA3YSePSQ0XzdHZUVskFDkTfpE9XS4fHQs WaT6a8qLZK9PdNcoz3zggM/Jnjdx6CJqNzxPEtxk9B2DoGll/C/60HWNPN+VujDu Xav65S0P+Px4knaQcCIeCamQJ7uGcsw+CqMpNbxWYaTYmjAfkbKH1EuLC2VRwdmD wQmwrDp70v8= =8hLB -----END PGP SIGNATURE----- ------------=_4BF17E8E.BF8E0000--
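Note that every rule that fired in the report above (DCC_CHECK, RAZOR2_CHECK, RAZOR2_CF_RANGE_*, DNS_FROM_OPENWHOIS, DIGEST_MULTIPLE) is a network or shared-digest check rather than a local content rule, and the sample message is a real, widely distributed newsletter from 2001, so checksum services such as DCC and Razor treat it as bulk mail. A hedged sketch of two ways to confirm this on your installation (whether you keep the network tests is up to you):

    # score the sample with local tests only (no DNS/Razor/DCC lookups)
    spamassassin -L < sample-nonspam.txt

    # or, in local.cf, neutralize the digest/DNSBL tests and/or adjust the threshold
    score DCC_CHECK 0
    score RAZOR2_CHECK 0
    score DNS_FROM_OPENWHOIS 0
    required_score 5.0

If the message scores as ham with -L, the installation itself is fine and only the network tests need tuning.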


  • Spring Security request matcher is not working with regex

    - by Felipe Cardoso Martins
    Using Spring MVC + Security I have a business requirement that the users from SEC (Security team) has full access to the application and FRAUD (Anti-fraud team) has only access to the pages that URL not contains the words "block" or "update" with case insensitive. Bellow, all spring dependencies: $ mvn dependency:tree | grep spring [INFO] +- org.springframework:spring-webmvc:jar:3.1.2.RELEASE:compile [INFO] | +- org.springframework:spring-asm:jar:3.1.2.RELEASE:compile [INFO] | +- org.springframework:spring-beans:jar:3.1.2.RELEASE:compile [INFO] | +- org.springframework:spring-context:jar:3.1.2.RELEASE:compile [INFO] | +- org.springframework:spring-context-support:jar:3.1.2.RELEASE:compile [INFO] | \- org.springframework:spring-expression:jar:3.1.2.RELEASE:compile [INFO] +- org.springframework:spring-core:jar:3.1.2.RELEASE:compile [INFO] +- org.springframework:spring-web:jar:3.1.2.RELEASE:compile [INFO] +- org.springframework.security:spring-security-core:jar:3.1.2.RELEASE:compile [INFO] | \- org.springframework:spring-aop:jar:3.0.7.RELEASE:compile [INFO] +- org.springframework.security:spring-security-web:jar:3.1.2.RELEASE:compile [INFO] | +- org.springframework:spring-jdbc:jar:3.0.7.RELEASE:compile [INFO] | \- org.springframework:spring-tx:jar:3.0.7.RELEASE:compile [INFO] +- org.springframework.security:spring-security-config:jar:3.1.2.RELEASE:compile [INFO] +- org.springframework.security:spring-security-acl:jar:3.1.2.RELEASE:compile Bellow, some examples of mapped URL path from spring log: Mapped URL path [/index] onto handler 'homeController' Mapped URL path [/index.*] onto handler 'homeController' Mapped URL path [/index/] onto handler 'homeController' Mapped URL path [/cellphone/block] onto handler 'cellphoneController' Mapped URL path [/cellphone/block.*] onto handler 'cellphoneController' Mapped URL path [/cellphone/block/] onto handler 'cellphoneController' Mapped URL path [/cellphone/confirmBlock] onto handler 'cellphoneController' Mapped URL path [/cellphone/confirmBlock.*] onto handler 'cellphoneController' Mapped URL path [/cellphone/confirmBlock/] onto handler 'cellphoneController' Mapped URL path [/user/update] onto handler 'userController' Mapped URL path [/user/update.*] onto handler 'userController' Mapped URL path [/user/update/] onto handler 'userController' Mapped URL path [/user/index] onto handler 'userController' Mapped URL path [/user/index.*] onto handler 'userController' Mapped URL path [/user/index/] onto handler 'userController' Mapped URL path [/search] onto handler 'searchController' Mapped URL path [/search.*] onto handler 'searchController' Mapped URL path [/search/] onto handler 'searchController' Mapped URL path [/doSearch] onto handler 'searchController' Mapped URL path [/doSearch.*] onto handler 'searchController' Mapped URL path [/doSearch/] onto handler 'searchController' Bellow, a test of the regular expressions used in spring-security.xml (I'm not a regex speciality, improvements are welcome =]): import java.util.Arrays; import java.util.List; public class RegexTest { public static void main(String[] args) { List<String> pathSamples = Arrays.asList( "/index", "/index.*", "/index/", "/cellphone/block", "/cellphone/block.*", "/cellphone/block/", "/cellphone/confirmBlock", "/cellphone/confirmBlock.*", "/cellphone/confirmBlock/", "/user/update", "/user/update.*", "/user/update/", "/user/index", "/user/index.*", "/user/index/", "/search", "/search.*", "/search/", "/doSearch", "/doSearch.*", "/doSearch/"); for (String pathSample : pathSamples) { 
System.out.println("Path sample: " + pathSample + " - SEC: " + pathSample.matches("^.*$") + " | FRAUD: " + pathSample.matches("^(?!.*(?i)(block|update)).*$")); } } } Bellow, the console result of Java class above: Path sample: /index - SEC: true | FRAUD: true Path sample: /index.* - SEC: true | FRAUD: true Path sample: /index/ - SEC: true | FRAUD: true Path sample: /cellphone/block - SEC: true | FRAUD: false Path sample: /cellphone/block.* - SEC: true | FRAUD: false Path sample: /cellphone/block/ - SEC: true | FRAUD: false Path sample: /cellphone/confirmBlock - SEC: true | FRAUD: false Path sample: /cellphone/confirmBlock.* - SEC: true | FRAUD: false Path sample: /cellphone/confirmBlock/ - SEC: true | FRAUD: false Path sample: /user/update - SEC: true | FRAUD: false Path sample: /user/update.* - SEC: true | FRAUD: false Path sample: /user/update/ - SEC: true | FRAUD: false Path sample: /user/index - SEC: true | FRAUD: true Path sample: /user/index.* - SEC: true | FRAUD: true Path sample: /user/index/ - SEC: true | FRAUD: true Path sample: /search - SEC: true | FRAUD: true Path sample: /search.* - SEC: true | FRAUD: true Path sample: /search/ - SEC: true | FRAUD: true Path sample: /doSearch - SEC: true | FRAUD: true Path sample: /doSearch.* - SEC: true | FRAUD: true Path sample: /doSearch/ - SEC: true | FRAUD: true Tests Scenario 1 Bellow, the important part of spring-security.xml: <security:http entry-point-ref="entryPoint" request-matcher="regex"> <security:intercept-url pattern="^.*$" access="ROLE_SEC" /> <security:intercept-url pattern="^(?!.*(?i)(block|update)).*$" access="ROLE_FRAUD" /> <security:access-denied-handler error-page="/access-denied.html" /> <security:form-login always-use-default-target="false" login-processing-url="/doLogin.html" authentication-failure-handler-ref="authFailHandler" authentication-success-handler-ref="authSuccessHandler" /> <security:logout logout-url="/logout.html" success-handler-ref="logoutSuccessHandler" /> </security:http> Behaviour: FRAUD group **can't" access any page SEC group works fine Scenario 2 NOTE that I only changed the order of intercept-url in spring-security.xml bellow: <security:http entry-point-ref="entryPoint" request-matcher="regex"> <security:intercept-url pattern="^(?!.*(?i)(block|update)).*$" access="ROLE_FRAUD" /> <security:intercept-url pattern="^.*$" access="ROLE_SEC" /> <security:access-denied-handler error-page="/access-denied.html" /> <security:form-login always-use-default-target="false" login-processing-url="/doLogin.html" authentication-failure-handler-ref="authFailHandler" authentication-success-handler-ref="authSuccessHandler" /> <security:logout logout-url="/logout.html" success-handler-ref="logoutSuccessHandler" /> </security:http> Behaviour: SEC group **can't" access any page FRAUD group works fine Conclusion I did something wrong or spring-security have a bug. The problem already was solved in a very bad way, but I need to fix it quickly. Anyone knows some tricks to debug better it without open the frameworks code? Cheers, Felipe


  • Job conditions conflicting with personal principles in software development - how much is too much?

    - by Baelnorn
    Sorry for the incoming wall'o'text (and for my probably bad English) but I just need to get this off somehow. I also accept that this question will be probably closed as subjective and argumentative, but I need to know one thing: "how much BS are programmers supposed to put up with before breaking?" My background I'm 27 years old and have a B.Sc. in Computer engineering with a graduation grade of 1.8 from a university of applied science. I went looking for a job right after graduation. I got three offers right away, with two offers paying vastly more than the last one, but that last one seemed more interesting so I went for that. My situation I've been working for the company now for 17 months now, but it feels like a drag more and more each day. Primarily because the company (which has only 5 other developers but me, and of these I work with 4) turned out to be pretty much the anti-thesis of what I expected (and was taught in university) from a modern software company. I agreed to accept less than half of the usual payment appropriate for my qualification for the first year because I was promised a trainee program. However, the trainee program turned out to be "here you got a computer, there's some links on the stuff we use, and now do what you colleagues tell you". Further, during my whole time there (trainee or not) I haven't been given the grace of even a single code-review - apparently nobody's interested in my work as long as it "just works". I was told in the job interview that "Microsoft technology played a central role in the company" yet I've been slowly eroding my congnitive functions with Flex/Actionscript/Cairngorm ever since I started (despite having applied as a C#/.NET developer). Actually, the company's primary projects are based on Java/XSLT and Flex/Actionscript (with some SAP/ABAP stuff here and there but I'm not involved in that) and they've been working on these before I even applied. Having had no experience either with that particular technology nor the framework nor the field (RIA) nor in developing business scale applications I obviously made several mistakes. However, my boss told me that he let me make those mistakes (which ate at least 2 months of development time on their own) on purpose to provide some "learning experience". Even when I was still a trainee I was already tasked with working on a business-critical application. On my own. Without supervision. Without code-reviews. My boss thinks agile methods are a waste of time/money and deems putting more than one developer on any project not efficient. Documentation is not necessary and each developer should only document what he himself needs for his work. Recently he wanted us to do bug tracking with Excel and Email instead of using an already existing Bugzilla, overriding an unanimous decision made by all developers and testers involved in the process - only after another senior developer had another hour-long private discussion with him he agreed to let us use the bugtracker. Project management is basically not present, there are only a few Excel sheets floating around where the senior developer lists some things (not all, mind you) with a time estimate ranging from days to months, trying to at least somehow organize the whole mess. A development process is also basically not present, each developer just works on his own however he wants. There are not even coding conventions in the company. 
Testing is done manually with a single tester (sometimes two testers) per project because automated testing wasn't given the least thought when the whole project was started. I guess it's not a big surprise when I say that each developer also has his own share of hundreds of overhours (which are, of course, unpaid). Each developer is tasked with working on his own project(s) which in turn leads to a very extensive knowledge monopolization - if one developer was to have an accident or become ill there would be absolutely no one who could even hope to do his work. Considering that each developer has his own business-critical application to work on, I guess that's a pretty bad situation. I've been trying to change things for the better. I tried to introduce a development process, but my first attempt was pretty much shot down by my boss with "I don't want to discuss agile methods". After that I put together a process that at least resembled how most of the developers were already working and then include stuff like automated (or at least organized) testing, coding conventions, etc. However, this was also shot down because it wasn't "simple" enought to be shown on a business slide (actually, I wasn't even given the 15 minutes I'd have needed to present the process in the meeting). My problem I can't stand working there any longer. Seriously, I consider to resign on monday, which still leaves me with 3 months to work there due to the cancelation period. My primary goal since I started studying computer science was being a good computer scientist, working with modern technologies and adhering to modern and proven principles and methods. However, the company I'm working for seems to make that impossible. Some days I feel as if was living in a perverted real-life version of the Dilbert comics. My question Am I overreacting? Is this the reality each graduate from university has to face? Should I betray my sound principles and just accept these working conditions? Or should I gtfo of there? What's the opinion of other developers on this matter. Would you put up with all that stuff?


  • No jQuery, only JavaScript: fadeIn function with ClearType for text

    - by Chetan
    Sorry guys.. I am not familiar with JavaScrip anymore but I need to add clear type of some thing which can make text anti-aliased in IE7 browser. My script as follow // JavaScript Document var CurrentDivIndex=0; var TimeOutValue; var btn; var TimeToFade = 1000.0; function ShowDivSlideShow() { try { if(CurrentDivIndex == 5) CurrentDivIndex=0; CurrentDivIndex++; //alert("Banner" + CurrentDivIndex); //alert(CurrentDivIndex); var Indexer=1; while(Indexer<6) { var DivToShow=document.getElementById("Banner" + Indexer); DivToShow.style.display = "none"; btn=document.getElementById("btnb" + Indexer); btn.setAttribute("class","none"); Indexer++; } var DivToShow=document.getElementById("Banner" + CurrentDivIndex); DivToShow.style.display = "block"; btn=document.getElementById("btnb" + CurrentDivIndex); btn.setAttribute("class","activeSlide"); // btn.className="activeSlide"; fadeIn(); TimeOutValue=setTimeout("ShowDivSlideShow()",6000); } catch(err) { alert(err) } } function ShowCustomDiv(CurrentDivIndexRec) { clearTimeout(TimeOutValue) CurrentDivIndex=CurrentDivIndexRec var Indexer=1; while(Indexer<6) { if(CurrentDivIndex==Indexer) { Indexer++; continue; } var DivToShow=document.getElementById("Banner" + Indexer); DivToShow.style.display = "none"; btn=document.getElementById("btnb" + Indexer); btn.setAttribute("class","none"); Indexer++; } var DivToShow=document.getElementById("Banner" + CurrentDivIndex); DivToShow.style.display = "block"; btn=document.getElementById("btnb" + CurrentDivIndex); btn.setAttribute("class","activeSlide"); btn.className="activeSlide" fadeIn(); } function ShowDivSlideShowWithTimeOut(CurrentDivIndexRec) { clearTimeout(TimeOutValue) CurrentDivIndex=CurrentDivIndexRec; var Indexer=1; while(Indexer<6) { if(CurrentDivIndex==Indexer) { Indexer++; continue; } var DivToShow=document.getElementById("Banner" + Indexer); DivToShow.style.display = "none"; btn=document.getElementById("btnb" + Indexer); btn.setAttribute("class","none"); Indexer++; } var DivToShow=document.getElementById("Banner" + CurrentDivIndexRec); DivToShow.style.display = "block"; btn=document.getElementById("btnb" + CurrentDivIndexRec); btn.setAttribute("class","activeSlide"); TimeOutValue=setTimeout("ShowDivSlideShow()",6000); } function ShowCustomDivOnClick(CurrentDivIndexRec) { clearTimeout(TimeOutValue) CurrentDivIndex=CurrentDivIndexRec; var Indexer=1; while(Indexer<6) { if(CurrentDivIndex==Indexer) { Indexer++; continue; } var DivToShow=document.getElementById("Banner" + Indexer); DivToShow.style.display = "none"; btn=document.getElementById("btnb" + Indexer); btn.setAttribute("class","none"); Indexer++; } var DivToShow=document.getElementById("Banner" + CurrentDivIndexRec); DivToShow.style.display = "block"; btn=document.getElementById("btnb" + CurrentDivIndexRec); btn.setAttribute("class","activeSlide"); fadeIn(); TimeOutValue=setTimeout("ShowDivSlideShow()",6000); } function setOpacity(level) { element=document.getElementById("Banner" + CurrentDivIndex); element.style.opacity = level; element.style.MozOpacity = level; element.style.KhtmlOpacity = level; element.style.filter = "alpha(opacity=" + (level * 100) + ");"; } var duration = 300; /* 1000 millisecond fade = 1 sec */ var steps = 10; /* number of opacity intervals */ var delay = 6000; /* 5 sec delay before fading out */ function fadeIn(){ for (i = 0; i <= 1; i += (1 / steps)) { setTimeout("setOpacity(" + i + ")", i * duration); } // setTimeout("fadeOut()", delay); } function fadeOut() { for (i = 0; i <= 1; i += (1 / steps)) { 
setTimeout("setOpacity(" + (1 - i) + ")", i * duration); } setTimeout("fadeIn()", duration); } //end of script Now I am very confused where to add : $('#slideshow').cycle({ cleartype: 1 // enable cleartype corrections }); or $('#fadingElement').fadeIn(2000, function(){ $(this).css('filter',''); }); so it will work... Please Help me...


  • Something like a manual refresh is needed in AngularJS, and a $digest() iterations error

    - by Tony Ennis
    (post edited again, new comments follow this line) I'm changing the title of this posting since it was misleading - I was trying to fix a symptom. I was unable to figure out why the code was breaking with a $digest() iterations error. A plunk of my code worked fine. I was totally stuck, so I decided to make my code a little more Angular-like. One anti-pattern I had implemented was to hide my model behind my controller by adding getters/setters to the controller. I tore all that out and instead put the model into the $scope since I had read that was proper Angular. To my surprise, the $digest() iterations error went away. I do not exactly know why and I do not have the intestinal fortitude to put the old code back and figure it out. I surmise that by involving the controller in the get/put of the data I added a dependency under the hood. I do not understand it. edit #2 ends here. (post edited, see EDIT below) I was working through my first Error: 10 $digest() iterations reached. Aborting! error today. I solved it this way: <div ng-init="lineItems = ctrl.getLineItems()"> <tr ng-repeat="r in lineItems"> <td>{{r.text}}</td> <td>...</td> <td>{{r.price | currency}}</td> </tr </div> Now a new issue has arisen - the line items I'm producing can be modified by another control on the page. It's a text box for a promo code. The promo code adds a discount to the lineItem array. It would show up if I could ng-repeat over ctrl.getLineItems(). Since the ng-repeat is looking at a static variable, not the actual model, it doesn't see that the real line items have changed and thus the promotional discount doesn't get displayed until I refresh the browser. Here's the HTML for the promo code: <input type="text" name="promo" ng-model="ctrl.promoCode"/> <button ng-click="ctrl.applyPromoCode()">apply promo code</button> The input tag is writing the value to the model. The bg-click in the button is invoking a function that will apply the code. This could change the data behind the lineItems. I have been advised to use $scope.apply(...). However, since this is applied as a matter of course by ng-click is isn't going to do anything. Indeed, if I add it to ctrl.applyPromoCode(), I get an error since an .apply() is already in progress. I'm at a loss. EDIT The issue above is probably the result of me fixing of symptom, not a problem. Here is the original HTML that was dying with the 10 $digest() iterations error. <table> <tr ng-repeat="r in ctrl.getLineItems()"> <td>{{r.text}}</td> <td>...</td> <td>{{r.price | currency}}</td> </tr> </table> The ctrl.getLineItems() function doesn't do much but invoke a model. I decided to keep the model out of the HTML as much as I could. 
this.getLineItems = function() {
    var total = 0;
    this.lineItems = [];
    this.lineItems.push({text: "Your quilt will be " + sizes[this.size].block_size + " squares", price: sizes[this.size].price});
    total = sizes[this.size].price;
    this.lineItems.push({text: threads[this.thread].narrative, price: threads[this.thread].price});
    total = total + threads[this.thread].price;
    if (this.sashing) {
        this.lineItems.push({text: "Add sashing", price: this.getSashingPrice()});
        total = total + sizes[this.size].sashing;
    } else {
        this.lineItems.push({text: "No sashing", price: 0});
    }
    if (isNaN(this.promo)) {
        this.lineItems.push({text: "No promo code", price: 0});
    } else {
        this.lineItems.push({text: "Promo code", price: promos[this.promo].price});
        total = total + promos[this.promo].price;
    }
    this.lineItems.push({text: "Shipping", price: this.shipping});
    total = total + this.shipping;
    this.lineItems.push({text: "Order Total", price: total});
    return this.lineItems;
};

And the model code assembled an array of objects based upon the items selected. I'll abbreviate the class as it croaks as long as the array has a row.

function OrderModel() {
    this.lineItems = []; // Result of the lineItems call
    ...
    this.getLineItems = function() {
        var total = 0;
        this.lineItems = [];
        ...
        this.lineItems.push({text: "Order Total", price: total});
        return this.lineItems;
    };
}
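The underlying reason for the "10 $digest() iterations" error is that ng-repeat="r in ctrl.getLineItems()" is re-evaluated on every digest, and getLineItems() builds and returns a brand-new array each time it is called; the new array never compares equal to the previous one, so Angular believes the model is still changing and keeps digesting until it gives up. Putting the data on $scope works because ng-repeat then watches one stable array that only changes when it is actually rebuilt. A minimal sketch of that wiring (module and controller names are my assumptions, not the poster's code):

    angular.module('quiltApp', [])
        .controller('OrderController', ['$scope', function ($scope) {
            var model = new OrderModel();                 // the model class from the question
            $scope.lineItems = model.getLineItems();      // built once up front
            $scope.promoCode = '';
            $scope.applyPromoCode = function () {
                model.promo = $scope.promoCode;           // hand the code to the model, however it consumes it
                $scope.lineItems = model.getLineItems();  // rebuild only when something actually changed
            };
        }]);

In the view, ng-repeat="r in lineItems" and ng-click="applyPromoCode()" then bind to those $scope members directly, and no manual refresh is needed.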


  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace bootloader (bootx64.efi, bootmgfw.efi). Once replaced can modify kernel. Trivial to replace bootloader. Today many legacy bootkits—UEFI replaces them most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones0, but you can't do that for PCs—PCs use writable memory with signatures DXE core verifies the UEFI boat loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode wich secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and looks for the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character). Expensive rollback to last known conflict check compression ratio can brute-force first 3 "bootstrap" characters, if needed (expensive) block ciphers hide exact plain text length. Solution is to align response in advance to block size Mitigations length: use variable padding secrets: dynamic CSRF tokens per request secret: change over time separate secret to input-less servlets Future work eiter understand DEFLATE/GZIP HTTPS extensions Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
Apache is not well suited at handling a large number of connections, but one can put something in front of it Can use Apache alternatives, such as nginx How to identify malicious hosts short, sudden web requests user-agent is obvious (curl, python) same url requested repeatedly no web page referer (not normal) hidden links. hide a link and see if a bot gets it restricted access if not your geo IP (unless the website is global) missing common headers in request regular timing first seen IP at beginning of attack count requests per hosts (usually a very large number) Use of captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). Aggregator collects requests and controls your web proxies. Need NTP on the front end web servers for clean data for use by bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture Bouncer designed to make it easier to detect and defend DoS—not a complete cure Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, allows new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of Mass Malware: potent, resilient, relatively low cost Technical characteristics: multiple OS, multipe payloads, multiple scenarios, multiple languages, obfuscation Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass customized exploits, to avoid detection There's plenty of evicence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to: support all versions of reader support all versions of windows support all versions of flash support all browsers write large complex, difficult to main code (8750 lines of JavaScript for example Exploits have "loose coupling" of multipe versions of software (adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. Also allows exploits of more obscure software/OS/browsers and obscure versions. Gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. 
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and Javascript. Conclusion: The coming trend is that mass-malware with mass zero-day attacks will result in mass customization of attacks. x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell Richard Wartell The attack vector we are addressing here is: First some malware causes a buffer overflow. The malware has no program access, but input access and buffer overflow code onto stack Later the stack became non-executable. The workaround malware used was to write a bogus return address to the stack jumping to malware Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic. The workaround malware used was to jump t existing code segments in the program that can be used in bad ways "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write return address on stack to (existing) expoitable code found in program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transofrming Instruction Relocation). STIR disassembles binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to new code. the original code sections in the file is marked non-executible. STIR has better entropy than ASLR in location of code. Makes brute force attacks much harder. STIR runs on MS Windows (PEM) and Linux (ELF). It eliminated 99.96% or more "gadgets" (i.e., moved the address). Overhead usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk. Clowntown Express: interesting bugs and running a bug bounty program Collin Greene Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there's lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third-party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended getting 20+ good bugs in first 24 hours and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. Bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The Bounty program started July 2011 and paid out $1.5 million to date. 
14% of the submissions have been high priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest sumitter was 13. Some submitters have been hired. Bug bounties also allows to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has release cycle or is not on Internet. Active Fingerprinting of Encrypted VPNs Anna Shubina Anna Shubina, Dartmouth Institute for Security, Technology, and Society (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (example, DES-CBC versus AES-CBC) One can tell a lot about VPNs just from ping roundtrips (such as what router is used) Delayed packets are not informative about a network, especially if far away from the network More needed to explore about how TCP works in real life with respect to timing Making Attacks Go Backwards Fuzzynop FuzzyNop, Mandiant This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who are making these malware threats? It's not a single person or group—they have diverse skill levels. There's a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories creation of unnamed scheduled tasks obvious names of files and syscalls ("ClearEventLog") uncleared event logs. Clearing event log in itself, and time of clearing, is a red flag and good first clue to look for on a suspect system Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built in a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time stamp printf strings. Look for these in files. Presence is not proof they are used. Absence is not proof they are not used. Java exploits. Can parse jar file with idxparser.py and decomile Java file. Java typially used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also malware typically needs command and control. Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman John Ashaman, Security Innovation Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. Also tried using grep, but tis fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointer. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal to find a path through the AST (a maze) from source to sink. 
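As a plain illustration of that source-to-sink search (a generic breadth-first sketch of my own, not Scat's implementation), with the call graph represented as a simple adjacency list of method names:

    function findPath(graph, source, sink) {
        var queue = [[source]];                 // queue of partial paths
        var seen = {};
        seen[source] = true;
        while (queue.length > 0) {
            var path = queue.shift();
            var node = path[path.length - 1];
            if (node === sink) {
                return path;                    // a source-to-sink path exists
            }
            (graph[node] || []).forEach(function (next) {
                if (!seen[next]) {
                    seen[next] = true;
                    queue.push(path.concat(next));
                }
            });
        }
        return null;                            // no path: this source cannot reach the sink
    }

    // e.g. findPath({readInput: ['parse'], parse: ['runQuery'], runQuery: []},
    //               'readInput', 'runQuery')  ->  ['readInput', 'parse', 'runQuery']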
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in Security Innovation blog and NRefactory on GitHub. Mask Your Checksums—The Gorry Details Eric (XlogicX) Davisson Eric (XlogicX) Davisson Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found in stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from checksum (that is, it leaks data). Other parts of packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so can get more bits of information out of using both checksums than just using one checksum. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, can import as CSV format into a spreadsheet. Can corelate with geo data and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. Was able to find the exact IP address, given the geo and university of the sender. Point is if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security. Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present) "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There's invisible links in the chain-of-trust, such as "well-installed microcode bugs" or in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose if there's no bugs and you trust the author, can you trust the code? Hell No! There's too many factors—it's Babylonian in nature. Why not? Well, Input is not well-defined/recognized (code's assumptions about "checked" input will be violated (bug/vunerabiliy). For example, HTML is recursive, but Regex checking is not recursive. Input well-formed but so complex there's no telling what it does For example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of program or toolchain Any Input is a program input executes on input handlers (drives state changes & transitions) only a well-defined execution model can be trusted (regex/DFA, PDA, CFG) Input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age ELF ABI (UNIX/Linux executible file format) case study. Problems can arise from these steps (without planting bugs): compiler linker loader ld.so/rtld relocator DWARF (debugger info) exceptions The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). 
Only solution is to freeze code and sign it. But you can't freeze everything! Can't freeze ASLR or loading—must have tables and metadata. Any sufficiently complex input data is the same as VM byte code Example, ELF relocation entries + dynamic symbols == a Turing Complete Machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata" @bxsays did same thing with Mach-O bytecode Or a DWARF exception handling data .eh_frame + glibc == Turning Machine X86 MMU (IDT, GDT, TSS): used address translation to create a Turning Machine. Page handler reads and writes (on page fault) memory. Uses a page table, which can be used as Turning Machine byte code. Example on Github using this TM that will fly a glider across the screen Next Sergey talked about "Parser Differentials". That having one input format, but two parsers, will create confusion and opportunity for exploitation. For example, CSRs are parsed during creation by cert requestor and again by another parser at the CA. Another example is ELF—several parsers in OS tool chain, which are all different. Can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs. The second PHDR can completely transform the executable. This is described in paper in the first issue of International Journal of PoC. Conclusions trusting computers not only about bugs! Bugs are part of a problem, but no by far all of it complex data formats means bugs no "chain of trust" in Babylon! (that is, with parser differentials) we need to squeeze complexity out of data until data stops being "code equivalent" Further information See and langsec.org. USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
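To make the compression-length oracle from the "Breaching SSL, One Byte at a Time" summary above concrete, here is a toy sketch (assuming Node.js and its built-in zlib module; the real CRIME/BREACH attacks measure TLS ciphertext sizes on the wire, not local deflate output):

    var zlib = require('zlib');

    var secret = 'token=Supersecret';                 // what the attacker wants to recover
    var known  = 'token=Supersecre';                  // the bootstrap characters already recovered
    var alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';

    var best = null;
    for (var i = 0; i < alphabet.length; i++) {
        var guess = known + alphabet.charAt(i);
        // attacker-controlled reflection and the secret share one compressed body
        var body = secret + '&echo=' + guess;
        var len = zlib.deflateSync(body).length;
        if (best === null || len < best.len) {
            best = { guess: guess, len: len };
        }
    }
    // the guess that compresses smallest usually reveals the next character of the secret
    console.log(best.guess + ' -> ' + best.len + ' bytes');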


  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places even very remote, cell phones and even smart phones are a common sight. I’ll never forget when I was in India in 2011 I was up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder in front riding atop of the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban as did quite a few of the locals in this small out of the way and not so touristy village. So we’re slowly trundling along in the forest and he’s lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting a huge pig jumps out from the side of the trail and he takes a picture running across our path in the jungle! So yeah, mobile technology is very pervasive and it’s reached into even very buried and unexpected parts of this world. Apps are still King Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there’s something that you need on your mobile device your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to have to support 2 or 3 completely separate platforms. There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration isssues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development. There’s also PhoneGap/Cordova which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized App container that can run on most mobile device platforms using a WebView interface. This allows for using of HTML technology, but it also still requires all the set up, configuration of APIs, security keys and certification and submission and deployment process just like native applications – you actually lose many of the benefits that  Web based apps bring. The big selling point of Cordova is that you get to use HTML have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place. Apps can be a big pain to create and manage especially when we are talking about specialized or vertical business applications that aren’t geared at the mainstream market and that don’t fit the ‘store’ model. If you’re building a small intra department application you don’t want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit both from the development and deployment perspective. 
Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native, from write-once run-anywhere, to remote maintenance, single point of management and failure to having full control over the application as opposed to have the app store overloads censor you. In a lot of ways Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today’s Mobile Web unfortunately looks a little different… Where’s the Love for the Mobile Web? Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and yet we still don’t really have a solid mobile Web platform. I know what you’re thinking: “But we have lots of HTML/JavaScript/CSS features that allows us to build nice mobile interfaces”. I agree to a point – it’s actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I’ve been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile first UIs and that’s been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good. Not just about the UI But it’s not just about the UI - it’s also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this. Slow Standards Adoption HTML standards implementations and ratification has been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that’s missing some of the infrastructure pieces to make it whole on mobile devices. There’s lots of potential but what is lacking that final 10% to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile specific HTML APIs. A host of mobile standards exist but many of the standards are in the early review stage and they have been there stuck for long periods of time and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change and vendor implementations tend to be experimental and  likely have to be changed later. Neither Vendors or developers are not keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud. 
It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn't seem to be happening quickly enough. Mobile Device Integration just isn't good enough Current standards are not far reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs to have access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking and the contacts system are benefits that are unique to mobile applications and could be widely used, but are mostly (with the exception of GPS) inaccessible for Web based applications today. Unfortunately, trying to do most of this today with only a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova's app centric model with its own custom API for accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example there's no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only partially implemented by mobile devices. There are alternatives and workarounds for some of these interfaces by using browser specific code, but that's mighty ugly and something I thought we were trying to leave behind with newer browser standards. But it's not quite working out that way. It's utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, the Messaging API and the Contacts Manager API have only minimal or no traction at all today. Keep in mind we've had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that's a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago. It's extremely frustrating to be able to build 90% of a mobile Web app with relative ease and then hit a brick wall on the remaining 10%, which often contains the show stoppers. That remaining 10% has to do with platform integration, browser differences and working around the limitations that browsers and 'pinned' applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL Moniker interface that sort of works on Android, works badly on iOS (only if the address is already in the contact list) and not at all on Windows Phone. There's no reason this shouldn't work universally using the same interface – after all, all phones have supported SMS since before the year 2000! But, it doesn't have to be this way Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers. It handles security concerns via prompts to avoid unwanted access, a model that would work for most other device APIs in a similar fashion.
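To make that prompt-driven model concrete, here is a minimal sketch of how the GeoLocation API is typically consumed from plain browser JavaScript. The logging and the option values are just illustrative choices, not anything prescribed by the spec:

// Minimal GeoLocation sketch: the first call triggers the browser's
// permission prompt; after that the browser remembers the user's decision.
function logCurrentPosition() {
    if (!navigator.geolocation) {
        console.log("Geolocation is not supported by this browser.");
        return;
    }
    navigator.geolocation.getCurrentPosition(
        function (pos) {
            console.log("Lat: " + pos.coords.latitude +
                        ", Lon: " + pos.coords.longitude +
                        " (accuracy: " + pos.coords.accuracy + "m)");
        },
        function (err) {
            // err.code === 1 (PERMISSION_DENIED) means the user declined the prompt.
            console.log("Geolocation failed: " + err.message);
        },
        { enableHighAccuracy: true, timeout: 10000 }
    );
}
logCurrentPosition();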
The model is one-time approval, with occasional re-approval if code changes or caches expire – simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as approximating the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there. Inadequate Web Application Linking and Activation Another important piece of the puzzle that is missing is the integration of HTML based Web applications. Today HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content there's a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently and permanent linking is not so critical for it. But applications have different needs. Applications need to start up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it's pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features. Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications and 'installing' them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a mobile Web 'app' and making it sticky is by no means as easy or satisfying as it is for native apps. Today the way you'd go about this is: Open the browser. Search for a Web site in the browser with your search engine of choice. Hope that you find the right site. Hope that you actually find a site that works for your mobile device. Click on the link and run the app in a fully chrome'd browser instance (read: tiny surface area). Pin the app to the home screen (with all the limitations outlined above). Hope you pointed at the right URL when you pinned. Even for you and me as developers, there are a few steps in there that are painful and annoying, but think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app, and hopefully from the right location? You've probably lost more than half of your audience at that point. This experience sucks. For developers, too, this process is painful, since app developers can't control the shortcut creation directly. This problem often gets solved by crazy coding schemes: pop-ups that try to get people to create shortcuts via fancy animations, which are both annoying and add overhead to each and every application that implements this sort of thing differently. And that's not the end of it - getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple's non-standard meta tags are prominent and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires an actual screen capture – or rather a partial screen capture – to be used for the shortcut in the tile manager. Who had that brilliant idea, I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right – icons use PNGs, pinning is easy and pinned applications properly behave like standalone apps and retain the browser's active page state and content.
Each of the platforms has a different way to specify icons (WP doesn't allow you to use an icon image at all), and the most widely used interface today is a bunch of Apple specific meta tags that other browsers choose to support. The question is: Why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one? Then there's iOS and the way it treats home screen linked URLs, using a crazy hybrid format that is neither as capable as a Web app running in Safari nor as a WebView hosted application. Moving off the Web 'app' when switching to another app actually causes the browser preview to 'blank out' the Web application in the task view (see screenshot on the right). Then, when the 'app' is reactivated it ends up completely restarting the browser with the original link. This is crazy behavior that you can't easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like, say, Google Maps) that's not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something that we essentially have solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to get even a remotely decent linking/activation experience across browsers. Where's the Money? It's not surprising that device home screen integration and Mobile Web support in general are in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow that it represents. On top of that, platform specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, so again it's no surprise that the mobile Web is struggling. But – that may be changing. More and more we're seeing operations shift to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward. Do we need a Mobile Web App Store? As much as I dislike moderated experiences in today's massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that provides something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry and would also handle installation in the form of providing the home screen linking, plus maybe an initial security configuration that determines which device features the app is allowed to access.
I'm not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity – a mobile Web app store from a no name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course it would also be an optional search path in addition to the standard open Web search mechanisms used to find and access content today. Security Security is a big deal, and one of the main reasons why so many IT professionals appear to be willing to go back to the walled garden of deployed apps is that apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware and from illegal and misleading content. It doesn't always work out that way, and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run completely within the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier – security for device interaction can be granted to Web applications just as it can for native applications, either via explicit policies loaded from the Web, or via prompting as GeoLocation does today. Security is important, but it's certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn't be a reason for Web apps not to be an equal player in mobile applications. Apps are winning, but haven't we been here before? So now we're finding ourselves back in an era of installed apps, rather than Web based and centrally managed apps. Only it's even worse today than with desktop applications, in that the apps go through a gatekeeper that charges a toll and censors what you can and can't do in your apps. Frankly it's a mystery to me why anybody would buy into this model and why it's lasted this long when we've already been through this process before. It's crazy… It's really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we're basically held back by what seems like little more than bureaucracy, partisan bickering and the self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer's 98+% market share holding back the Web from improvements for many years – now it's the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way toward making mobile Web applications much more usable and a seriously viable alternative to native apps. But as it is, mobile Web apps have a severe disadvantage in placement and operation. There are a few bright spots in all of this. Mozilla's Firefox OS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content.
It supports both packaged and certified package modes (apps that can be put into the app store), and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web apps are treated as first class citizens in Firefox OS and run using the same mechanism as installed apps. Unfortunately Firefox OS is off to a slow start, with minimal device support and a specific focus on the low end market. We can hope that this approach will catch on with other vendors, but that's also an uphill battle given the conflict of interest with the platform lock-in it represents. Recent versions of Android also seem to be working reasonably well with mobile Web application integration onto the home screen and activation out of the box. Although it still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons on the home screen when pinning, WebView based full screen activation, and reliable application persistence, as the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support. What's also interesting to me is that Microsoft hasn't picked up on the obvious need for a solid Web app platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on its adversaries' terms instead of taking a new tack. Providing a kick ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive that Microsoft could do to improve its miserable position in the mobile device market. Where are we at with Mobile Web? It sure sounds like I'm really down on the Mobile Web, right? I've built a number of mobile apps in the last year, and while the overall result and the response to what we were able to accomplish in terms of UI have been very positive, getting that final 10% that required device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you can skip device integration entirely and don't care deeply about a smooth integration with the mobile home screen, mobile Web development is fraught with frustration. So, yes, I'm frustrated! But it's not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn't be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens. So, what's your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion.
Resources: Rick Strahl on DotNet Rocks talking about Mobile Web. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in HTML5, Mobile.

    Read the article

  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
Heavily drawing inspiration from Ruby on Rails, MVC4's convention over configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and presentation logic. However, the MVC paradigm was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View's model, the code could look like the following: Fig 1: To show a div or not. Server side .NET code is used in the View Mixing .NET code with HTML in views can soon get very messy. Wouldn't it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti-pattern like the one shown above to control whether a div should be shown or not. However, best practice would indicate that the Controller should not be aware of the div. The ShowDiv variable in the model should not exist. A controller should ideally only be used to do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model. In this article we will see how AngularJS, a new JavaScript framework from Google, can be used effectively to build web applications where: 1. Views are pure HTML 2. Controllers (in the server sense) are pure REST based API calls 3. The presentation layer is loaded as needed from partial HTML only files. What is MVVM? MVVM, short for Model-View-ViewModel, is a new paradigm in web development. In this paradigm, the Model and the View live on the client side in JavaScript instead of being processed on the server through postbacks. MVVM frameworks are JavaScript frameworks that facilitate a clear separation of the "frontend", or data rendering logic, from the "backend", which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM because a change to the Model (through JavaScript) gets reflected in the View immediately, i.e. Model > View. Also, a change in the View (through manual input) gets reflected in the Model immediately, i.e. View > Model. The following figure shows this conceptually (comments are shown in red): Fig 2: Demonstration of MVVM in action In Fig 2, two text boxes are bound to the same variable model.myInt. Thus, changing the view manually (changing one text box through keyboard input) also changes the other textbox in real time, demonstrating the View > Model property of an MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt, thus changing the model through JavaScript. This immediately updates the view (the value in the two textboxes), thus demonstrating the Model > View property of an MVVM framework. Thus we see that the model in an MVVM JavaScript framework can be regarded as "the single source of truth". This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number. Application, Routes, Views, Controllers, Scope and Models Angular can be used in many ways to construct web applications.
For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail, but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular). Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts. The following figure shows the skeleton of the root page of an SPA: Fig 3: The skeleton of a SPA The skeleton of the application is based on the Bootstrap starter template which can be found at: http://getbootstrap.com/examples/starter-template/ Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts /app/js/controllers.js and /app/js/app.js. These scripts define the routes, views and controllers which we shall come to in a moment. Application Notice that the body tag (Fig 3) has an extra attribute: ng-app="smsApp". Providing this attribute "bootstraps" our single page application. It tells Angular to load a "module" called smsApp. This "module" is defined in /app/js/app.js: angular.module('smsApp', ['smsApp.controllers', function () {}]) Fig 4: The definition of our application module The line shown above declares a module called smsApp. It also declares that this module "depends" on another module called "smsApp.controllers". The smsApp.controllers module will contain all the controllers for our SPA. Routing and Views Notice that in the Navbar (in Fig 3) we have included two hyperlinks to: "#/app" "#/help" This is how Angular handles routing. Since the URLs start with "#", they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework. angular.module('smsApp', ['smsApp.controllers', function () { }]) //Configure the routes .config(['$routeProvider', function ($routeProvider) { $routeProvider.when('/binding', { templateUrl: '/app/partials/bindingexample.html', controller: 'BindingController' }); }]); Fig 5: The definition of a route with an associated partial view and controller As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code "asks for" the $routeProvider object by specifying it as a dependency in the [] brackets and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di What the above code snippet does is tell Angular that when the URL is "#/binding", it should load the HTML snippet ("partial view") found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called "BindingController". We have also marked the div with the class "container" (in Fig 3) with the ng-view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div.
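Additional routes can be chained onto the same $routeProvider, which is presumably how the "#/app" and "#/help" links in the Navbar would be wired up. The following is an illustrative sketch only – the partial paths and controller names are made up and are not taken from the article's sample:

// Illustration only: extra routes for the Navbar links, plus a fallback.
angular.module('smsApp')            // retrieves the module already declared in app.js
    .config(['$routeProvider', function ($routeProvider) {
        $routeProvider
            .when('/app', { templateUrl: '/app/partials/appview.html', controller: 'AppController' })
            .when('/help', { templateUrl: '/app/partials/helpview.html', controller: 'HelpController' })
            .otherwise({ redirectTo: '/app' });   // any unrecognized #/route falls back to #/app
    }]);

The .otherwise() fallback simply gives the SPA a predictable landing view when a user navigates to an unknown hash URL.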
You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire; you can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives Controllers and Models We have seen how we define what views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController" which is loaded when we hit the URL http://localhost:port/index.html#/binding (as we have defined in the route earlier, as shown in Fig 5). Remember that we had defined that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" defined in the route shown in Fig 5 is defined in the module smsApp.controllers: angular.module('smsApp.controllers', [function () { }]) .controller('BindingController', ['$scope', function ($scope) { $scope.model = {}; $scope.model.myInt = 6; $scope.addOne = function () { $scope.model.myInt++; } }]); Fig 6: The definition of a controller in the "smsApp.controllers" module. The pieces are falling into place! Remember Fig 2? That was the code of a partial view that was loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also defined that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL: http://localhost:22544/index.html#/binding The button in Fig 2 was marked with the attribute ng-click="addOne()", which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController". Scope We can see from Fig 6 that in the definition of "BindingController" we defined a dependency on $scope and then, as usual, defined a function which "asks for" $scope as per the dependency injection pattern. So what is $scope? Any guesses? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined. This has a similar meaning to scope in a programming language like C#. Model: The Scope is not the Model It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason why this is a bad idea is that scope is hierarchical in Angular. Thus, if we were to define a controller within another controller (nested controllers), the inner controller would inherit the scope of the parent controller. This inheritance follows JavaScript prototypal inheritance. Let's say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through JavaScript prototypal inheritance. This basically means that the child scope has a variable myInt that points to the parent scope's myInt variable. Now if we assign a value to myInt in the parent, the child scope is updated with the same value, as the child scope's myInt variable points to the parent scope's myInt variable.
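To make the nesting scenario concrete, here is a minimal sketch of two nested controllers sharing both a primitive and a model object. The module and controller names here are hypothetical and are not part of the article's sample application:

// In the view, ChildController would be nested inside ParentController, e.g.
// <div ng-controller="ParentController"><div ng-controller="ChildController">...</div></div>
angular.module('scopeDemo', [])
    .controller('ParentController', ['$scope', function ($scope) {
        $scope.myInt = 6;             // a primitive defined on the parent scope
        $scope.model = { myInt: 6 };  // an object defined on the parent scope
    }])
    .controller('ChildController', ['$scope', function ($scope) {
        // Nothing is defined here; the child scope sees myInt and model
        // through JavaScript prototypal inheritance from the parent scope.
    }]);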
However, if we were to assign a value to the myInt variable in the child scope, then the link of that variable to the parent scope would be broken, as the variable myInt in the child scope would now point to its own value and not to the parent scope's myInt variable. But if we defined a variable model in the parent scope, then the child scope would also have a variable model that points to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope would change the model variable in the child scope too, as that variable points to the model variable in the parent scope. Now, changing the value of $scope.model.myInt in the child scope would ALSO change the value in the parent scope. This is because the model reference in the child scope points to the model variable in the parent scope. We did no new assignment to the model variable in the child scope; we only changed a property of the model variable. Since the model variable (in the child scope) points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes the "single source of truth". This is a tricky concept, so it is considered good practice NOT to rely on scope inheritance. More info on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes. Building It: An AngularJS application using a .NET Web API Backend Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful. We will build an application that can be used to send out SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build: Fig 7: Broad application architecture We are going to add an HTML partial to our project. This partial will contain the form fields that will accept the phone number and the message that needs to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that runs on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET WebAPI through a REST based API to get the history of SMS messages, add a message, and so on. For simplicity, we will use an in-memory data structure for this application. Thus, the tasks ahead of us are: Creating the REST WebAPI with GET, PUT, POST, DELETE methods. Creating the SmsView.html partial. Creating the SmsController controller with methods that are called from the SmsView.html partial. Adding a new route that loads the controller and the partial. 1. Creating the REST WebAPI This is a simple task that should be quite straightforward to any .NET developer.
The following listing shows our ApiController: public class SmsMessage { public string to { get; set; } public string message { get; set; } } public class SmsResource : SmsMessage { public int smsId { get; set; } } public class SmsResourceController : ApiController { public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>(); public static int currentId = 0; // GET api/<controller> public List<SmsResource> Get() { List<SmsResource> result = new List<SmsResource>(); foreach (int key in messages.Keys) { result.Add(messages[key]); } return result; } // GET api/<controller>/5 public SmsResource Get(int id) { if (messages.ContainsKey(id)) return messages[id]; return null; } // POST api/<controller> public List<SmsResource> Post([FromBody] SmsMessage value) { //Synchronize on messages so we don't have id collisions lock (messages) { //Copy the posted message into a new SmsResource (the posted body is an SmsMessage, not an SmsResource) SmsResource res = new SmsResource { to = value.to, message = value.message }; res.smsId = currentId++; messages.Add(res.smsId, res); //SentlyPlusSmsSender.SendMessage(value.to, value.message); return Get(); } } // PUT api/<controller>/5 public List<SmsResource> Put(int id, [FromBody] SmsMessage value) { //Synchronize on messages so we don't have id collisions lock (messages) { if (messages.ContainsKey(id)) { //Update the message messages[id].message = value.message; messages[id].to = value.to; } return Get(); } } // DELETE api/<controller>/5 public List<SmsResource> Delete(int id) { if (messages.ContainsKey(id)) { messages.Remove(id); } return Get(); } } Once this class is defined, we should be able to access the WebAPI with a simple GET request using the browser: http://localhost:port/api/SmsResource Notice the commented line: //SentlyPlusSmsSender.SendMessage The SentlyPlusSmsSender class is defined in the attached solution. We have left this line commented out as we want to focus on the core Angular concepts. If you load the attached solution, this line is uncommented in the source and an actual SMS will be sent! By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder. public static class WebApiConfig { public static void Register(HttpConfiguration config) { config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); var appXmlType = config.Formatters.XmlFormatter. SupportedMediaTypes. FirstOrDefault( t => t.MediaType == "application/xml"); config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType); } } We now have our backend REST API which we can consume from Angular! 2. Creating the SmsView.html partial This simple partial will define two fields: the destination phone number (in international format, starting with a +) and the message. These fields will be bound to model.phoneNumber and model.message. We will also add a button that we shall hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) will also be displayed below the form input.
The following code shows the code for the partial: <!-- If model.errorMessage is defined, then render the error div --> <div class="alert alert-danger alert-dismissable" style="margin-top: 30px;" ng-show="model.errorMessage != undefined"> <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button> <strong>Error!</strong> <br /> {{ model.errorMessage }} </div> <!-- The input fields bound to the model --> <div class="well" style="margin-top: 30px;"> <table style="width: 100%;"> <tr> <td style="width: 45%; text-align: center;"> <input type="text" placeholder="Phone number (e.g. +44 7778 609466)" ng-model="model.phoneNumber" class="form-control" style="width: 90%" onkeypress="return checkPhoneInput();" /> </td> <td style="width: 45%; text-align: center;"> <input type="text" placeholder="Message" ng-model="model.message" class="form-control" style="width: 90%" /> </td> <td style="text-align: center;"> <button class="btn btn-danger" ng-click="sendMessage();" ng-disabled="model.isAjaxInProgress" style="margin-right: 5px;">Send</button> <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" /> </td> </tr> </table> </div> <!-- The past messages --> <div style="margin-top: 30px;"> <!-- The following div is shown if there are no past messages --> <div ng-show="model.allMessages.length == 0"> No messages have been sent yet! </div> <!-- The following div is shown if there are some past messages --> <div ng-show="model.allMessages.length > 0"> <table style="width: 100%;" class="table table-striped"> <tr> <td>Phone Number</td> <td>Message</td> <td></td> </tr> <!-- The ng-repeat directive is like the repeater control in .NET, but as you can see this partial is pure HTML which is much cleaner --> <tr ng-repeat="message in model.allMessages"> <td>{{ message.to }}</td> <td>{{ message.message }}</td> <td> <button class="btn btn-danger" ng-click="delete(message.smsId);" ng-disabled="model.isAjaxInProgress">Delete</button> </td> </tr> </table> </div> </div> The above code is commented and should be self explanatory. Conditional rendering is achieved by using the ng-show="condition" attribute on various div tags. Input fields are bound to the model, and the send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute defined on the button tag. While AJAX calls are taking place, the controller sets model.isAjaxInProgress to true. Based on this variable, buttons are disabled through the ng-disabled directive, which is added as an attribute to the buttons. The ng-repeat directive added as an attribute to the tr tag causes the table row to be rendered multiple times, much like an ASP.NET repeater. 3. Creating the SmsController controller The penultimate piece of our application is the controller, which responds to events from our view and interacts with our MVC4 REST WebAPI. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller "asks for" the $http service. The $http service is a simple way in Angular to do AJAX. So far we have only encountered modules, controllers, views and directives in Angular. $http is a new kind of entity in Angular called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services.
.controller('SmsController', ['$scope', '$http', function ($scope, $http) { //We define the model $scope.model = {}; //We define the allMessages array in the model //that will contain all the messages sent so far $scope.model.allMessages = []; //The error if any $scope.model.errorMessage = undefined; //We initially load data so set the isAjaxInProgress = true; $scope.model.isAjaxInProgress = true; //Load all the messages $http({ url: '/api/smsresource', method: "GET" }). success(function (data, status, headers, config) { // this callback will be called asynchronously // when the response is available $scope.model.allMessages = data; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }). error(function (data, status, headers, config) { //called asynchronously if an error occurs //or server returns response with an error status. $scope.model.errorMessage = "Error occurred status:" + status; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }); $scope.delete = function (id) { //We are making an ajax call so we set this to true $scope.model.isAjaxInProgress = true; $http({ url: '/api/smsresource/' + id, method: "DELETE" }). success(function (data, status, headers, config) { // this callback will be called asynchronously // when the response is available $scope.model.allMessages = data; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }). error(function (data, status, headers, config) { // called asynchronously if an error occurs // or server returns response with an error status. $scope.model.errorMessage = "Error occurred status:" + status; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }); } $scope.sendMessage = function () { $scope.model.errorMessage = undefined; var message = ''; if($scope.model.message != undefined) message = $scope.model.message.trim(); if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == '' || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') { $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466"; return; } if (message.length == 0) { $scope.model.errorMessage = "You must specify a message!"; return; } //We are making an ajax call so we set this to true $scope.model.isAjaxInProgress = true; $http({ url: '/api/smsresource', method: "POST", data: { to: $scope.model.phoneNumber, message: $scope.model.message } }). success(function (data, status, headers, config) { // this callback will be called asynchronously // when the response is available $scope.model.allMessages = data; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }). error(function (data, status, headers, config) { // called asynchronously if an error occurs // or server returns response with an error status. $scope.model.errorMessage = "Error occurred status:" + status; // We are done with AJAX loading $scope.model.isAjaxInProgress = false; }); } }]); We can see from the previous listing how the functions that are called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST WebAPI. Now we are left with the final piece. We need to define a route that associates a particular path with the view we have defined and the controller we have defined. 4. Add a new route that loads the controller and the partial This is the easiest part of the puzzle.
We simply define another route in the /app/js/app.js file: $routeProvider.when('/sms', { templateUrl: '/app/partials/smsview.html', controller: 'SmsController' }); Conclusion In this article we have seen how much of the server side functionality in the MVC4 framework can be moved to the browser, delivering a snappy and fast user interface. We have seen how we can build client side, HTML only views that avoid the messy syntax offered by server side Razor views. We have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end can be completely platform independent. Even though we used ASP.NET to create our REST API, we could just as easily have used any other platform such as Node.js, Ruby etc. without changing a single line of our front end code. Angular is a rich framework and we have only touched on the basic functionality required to create a SPA. For readers who wish to delve further into the Angular framework, we would recommend the following URL as a starting point: http://docs.angularjs.org/misc/started. To get started with the code for this project: Sign up for an account at http://plus.sent.ly (free). Add your phone number. Go to the "My Identities" page. Note down your Sender ID, Consumer Key and Consumer Secret. Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing Change the values of Sender Id, Consumer Key and Consumer Secret in the web.config file. Run the project through Visual Studio!

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and then delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed within a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature length computer animations, and also with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the viewpoint through each pixel must be traced until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
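The article's generator for the pin matrix was a small .NET console application that is not reproduced here. Purely to illustrate the nested-loop idea, the following sketch (written in JavaScript for brevity, with made-up spacing, radii and pin counts rather than the values used for the real board) emits PolyRay object definitions for two interleaved grids of pins:

// Illustration only: write PolyRay 'object' lines for two offset grids of pins.
// All numeric values here are invented for this sketch.
function emitPin(x, y, z) {
    console.log("object { sphere <" + x + ", " + y + ", " + z + ">, 0.10 chrome }");
    console.log("object { cylinder <" + x + ", " + y + ", " + z + ">, <" +
                x + ", " + y + ", " + (z - 1.5) + ">, 0.05 chrome }");
}

var spacing = 0.14, rows = 75, cols = 75;
for (var r = 0; r < rows; r++) {
    for (var c = 0; c < cols; c++) {
        emitPin(c * spacing, r * spacing, 0);                             // first grid
        emitPin(c * spacing + spacing / 2, r * spacing + spacing / 2, 0); // second grid, offset to interleave
    }
}

The z value is fixed at 0 in this sketch; in the animation it is the per-pin value that gets varied, as described next.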
For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used. Windows Kinect The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing .NET developers easy access to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.
Number of Servers – Cost
1 – $500
16 – $8,000
256 – $128,000
As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.
Number of Worker Roles – Render Time – Cost
1 – 256 hours – $30.72
16 – 16 hours – $30.72
256 – 1 hour – $30.72
Using worker roles in Windows Azure results in the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
On-Premise:
- Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
- Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue.
- Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and the render process.
- Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
Windows Azure:
- Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. ·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock our in which the instance is deployed. Schedule the workload to start just after the clock hour has started. ·         Monitor the utilization of the resources you are provisioning, ensure that you are not paying for worker roles that are idle. ·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. ·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. ·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored, if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • What's New in ASP.NET 4

    - by Navaneeth
    The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release. This topic contains the following sections: ASP.NET Core Services ASP.NET Web Forms ASP.NET MVC Dynamic Data ASP.NET Chart Control Visual Web Developer Enhancements Web Application Deployment with Visual Studio 2010 Enhancements to ASP.NET Multi-Targeting ASP.NET Core Services ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage. Extensible Output Caching Since the time that ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation — generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with memory requirements for other parts of a Web application. ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content. These storage options can include local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory, while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers. You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes. By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache. The defaultProvider attribute for ASP.NET is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for defaultProvider attribute. In addition, you can select different output-cache providers for individual control and for individual requests and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method. 
The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example: <%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %> Preloading Web Applications Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code during the Application_Load method in the Global.asax file. To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization prior to processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load-balancer that the application was initialized and ready to accept HTTP traffic. To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file: <applicationPools> <add name="MyApplicationPool" startMode="AlwaysRunning" /> </applicationPools> Because a single application pool can contain multiple applications, you specify individual applications to be automatically started by using the following configuration in the applicationHost.config file: <sites> <site name="MySite" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PrewarmMyCache" > <!-- Additional content --> </application> </site> </sites> <!-- Additional content --> <serviceAutoStartProviders> <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" /> </serviceAutoStartProviders> When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. When it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example: public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // Perform initialization. } } After your initialization code runs in the Preload method and after the method returns, the ASP.NET application is ready to process requests. Permanently Redirecting a Page Content in Web applications is often moved over the lifetime of the application. This can lead to links to be out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL. 
However, the Redirect method issues an HTTP 302 (Found) response (which is used for a temporary redirect). This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example: RedirectPermanent("/newpath/foroldcontent.aspx"); Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects. Session State Compression By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that invokes an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large. ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. By using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You can set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled configuration option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStreamclass. The following example shows how to set this attribute. <sessionState mode="SqlServer" sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate" allowCustomSqlDatabase="true" compressionEnabled="true" /> ASP.NET Web Forms Web Forms has been a core feature in ASP.NET since the release of ASP.NET 1.0. Many enhancements have been in this area for ASP.NET 4, such as the following: The ability to set meta tags. More control over view state. Support for recently introduced browsers and devices. Easier ways to work with browser capabilities. Support for using ASP.NET routing with Web Forms. More control over generated IDs. The ability to persist selected rows in data controls. More control over rendered HTML in the FormView and ListView controls. Filtering support for data source controls. Enhanced support for Web standards and accessibility Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example: <head id="Head1" runat="server"> <title>Untitled Page</title> <meta name="keywords" content="keyword1, keyword2' /> <meta name="description" content="Description of my page" /> </head> These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription. Enabling View State for Individual Controls A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state. 
View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for a few controls that need it instead of disabling it for many that do not need it. For more information, see Control.ViewStateMode. Support for Recently Introduced Browsers and Devices ASP.NET includes a feature that is named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research in Motion BlackBerry smart phones, and Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list: •blackberry.browser •chrome.browser •Default.browser •firefox.browser •gateway.browser •generic.browser •ie.browser •iemobile.browser •iphone.browser •opera.browser •safari.browser A New Way to Define Browser Capabilities ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider that in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file. This file resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, or a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper. Routing in ASP.NET 4 ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example: http://website/products.aspx?categoryid=12 By using routing, you can use the following URL to render the same information: http://website/products/software The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. the new features include the following: The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler. 
The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters. The RouteUrl expression provides a simple way to create a routed URL in markup. The RouteValue expression provides a simple way to extract URL parameter values in markup. The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter). You no longer have to change the Web.config file to enable routing. For more information about routing, see the following topics: ASP.NET Routing Walkthrough: Using ASP.NET Routing in a Web Forms Application How to: Define Routes for Web Forms Applications How to: Construct URLs from Routes How to: Access URL Parameters in a Routed Page Setting Client IDs The new ClientIDMode property makes it easier to write client script that references HTML elements rendered for server controls. Increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices and you want to use client script to make a Web service call and update individual prices in the list as they change without refreshing the entire page. Typically you get a reference to an HTML element in client script by using the document.GetElementById method. You pass to this method the value of the id attribute of the HTML element you want to reference. In the case of elements that are rendered for ASP.NET server controls earlier versions of ASP.NET could make this difficult or impossible. You were not always able to predict what id values ASP.NET would generate, or ASP.NET could generate very long id values. The problem was especially difficult for data controls that would generate multiple rows for a single instance of the control in your markup. ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and that are easier to work with because they are simpler. For more information about how to use the new algorithms, see the following topics: ASP.NET Web Server Control Identification Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript How to: Access Controls from JavaScript by ID Persisting Row Selection in Data Controls The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, is more desirable not to select any rows on page 2. ASP.NET 4 supports Persisted Selection, a new feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is a much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects. 
You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example: <asp:GridView id="GridView2" runat="server" PersistedSelection="true"> </asp:GridView> FormView Control Enhancements The FormView control is enhanced to make it easier to style the content of the control with CSS. In previous versions of ASP.NET, the FormView control rendered it contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. The FormView control supports RenderOuterTable, a property in ASP.NET 4. When this property is set to false, as show in the following example, the table tags are not rendered. This makes it easier to apply CSS style to the contents of the control. <asp:FormView ID="FormView1" runat="server" RenderTable="false"> For more information, see FormView Web Server Control Overview. ListView Control Enhancements The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5. <asp:ListView ID="ListView1" runat="server"> <LayoutTemplate> <asp:PlaceHolder ID="ItemPlaceHolder" runat="server"></asp:PlaceHolder> </LayoutTemplate> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup: <asp:ListView ID="ListView1" runat="server"> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> For more information, see ListView Web Server Control Overview. Filtering Data with the QueryExtender Control A very common task for developers who create data-driven Web pages is to filter data. This traditionally has been performed by building Where clauses in data source controls. This approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. Because the QueryExtender control relies on LINQ, but you do not to need to know how to write LINQ queries to use the query extender. The QueryExtender control supports a variety of filter options. The following lists QueryExtender filter options. Term Definition SearchExpression Searches a field or fields for string values and compares them to a specified string value. RangeExpression Searches a field or fields for values in a range specified by a pair of values. PropertyExpression Compares a specified value to a property value in a field. If the expression evaluates to true, the data that is being examined is returned. OrderByExpression Sorts data by a specified column and sort direction. CustomExpression Calls a function that defines custom filter in the page. For more information, see QueryExtenderQueryExtender Web Server Control Overview. 
Enhanced Support for Web Standards and Accessibility Earlier versions of ASP.NET controls sometimes render markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility. CSS for Controls that Can be Disabled In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled: <asp:Label id="Label1" runat="server"   Text="Test" Enabled="false" /> In ASP.NET 3.5, the previous control settings generate the following HTML: <span id="Label1" disabled="disabled">Test</span> In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously: <span id="Label1" class="aspNetDisabled">Test</span> You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property. CSS for Validation Controls In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control: <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"   ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" /> ASP.NET 3.5 renders the following HTML for the validator control: <span id="RequiredFieldValidator1"   style="color:Red;visibility:hidden;">RequiredFieldValidator</span> By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example: <span id="RequiredFieldValidator1"   style"visibility:hidden;">RequiredFieldValidator</span> Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages. CSS for the Hidden Fields Div Element ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new classvalue is shown in the following example: <div class="aspNetHidden"> CSS for the Table, Image, and ImageButton Controls By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5: <table id="Table2" border="0"> The Image control and the ImageButton control also do this. 
Because this is not necessary and provides visual formatting information that should be provided by using CSS, the attribute is not generated in ASP.NET 4. CSS for the UpdatePanel and UpdateProgress Controls In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTMLelements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example: <asp:UpdatePanel runat="server" class="myStyle"> </asp:UpdatePanel> The following HTML is rendered for this markup: <div id="ctl00_MainContent_UpdatePanel1" class="expandoclass"> </div> Eliminating Unnecessary Outer Tables In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control: FormView Login PasswordRecovery ChangePassword If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false. Layout Templates for Wizard Controls In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property. New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration: UnorderedList. This option causes the HTML output to be formatted by using ul and li elements instead of a table. OrderedList. This option causes the HTML output to be formatted by using ol and li elements instead of a table. For examples of HTML that is rendered for the new options, see the RepeatLayout enumeration. Header and Footer Elements for the Table Control In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default. CSS and ARIA Support for the Menu Control In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways: The generated HTML is structured as an unordered list (ul and li elements). CSS is used for visual formatting. The menu behaves in accordance with ARIA standards for keyboard access. You can use arrow keys to navigate menu items. (For information about ARIA, see Accessibility in Visual Studio and ASP.NET.) ARIA role and property attributes are added to the generated HTML. (Attributes are added by using JavaScript instead of included in the HTML, to avoid generating HTML that would cause markup validation errors.) 
Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated. Valid XHTML for the HtmlForm Control In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4. Maintaining Backward Compatibility in Control Rendering An existing ASP.NET Web site might have code in it that assumes that controls are rendering HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, you can set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of an ASP.NET 4 Web site, as shown in the following example: <system.web>   <pages controlRenderingCompatibilityVersion="3.5"/> </system.web> If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.) ASP.NET MVC ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD). Web sites created using ASP.NET MVC have a modular architecture. This allows members of a team to work independently on the various modules and can be used to improve collaboration. For example, developers can work on the model and controller layers (data and logic), while the designer work on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2. Dynamic Data Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as the following: A RAD experience for quickly building a data-driven Web site. Automatic validation that is based on constraints defined in the data model. The ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project. For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map. Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get the following benefits: Setting default values for data fields. 
Dynamic Data enables you to provide default values at run time for fields in a data control. Interacting with the database without creating and registering a data model. Automatically validating the data that is entered by the user without writing any code. For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls. New Field Templates for URLs and E-mail Addresses ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url using the DataTypeAttribute attribute. For EmailAddress objects, the field is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Objects typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields. [DataType(DataType.EmailAddress)] public object HomeEmail { get; set; } [DataType(DataType.Url)] public object Website { get; set; } Creating Links with the DynamicHyperLink Control Dynamic Data uses the new routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For information, see How to: Create Table Action Links in Dynamic Data Support for Inheritance in the Data Model Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table. It might also contain CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy and then add more fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data. Support for Many-to-Many Relationships (Entity Framework Only) The Entity Framework has rich support for many-to-many relationships between tables, which is implemented by exposing the relationship as a collection on an Entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data. New Attributes to Control Display and Support Enumerations The DisplayAttribute has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create UI for displaying and editing enumeration values. 
The template maps the values from the database to the names in the enumeration. Enhanced Support for Filters Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns. The filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it will be displayed. An additional enhancement is that filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater. ASP.NET Chart Control The ASP.NET chart server control enables you to create ASP.NET pages applications that have simple, intuitive charts for complex statistical or financial analysis. The chart control supports the following features: Data series, chart areas, axes, legends, labels, titles, and more. Data binding. Data manipulation, such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering. Statistical formulas and financial formulas. Advanced chart appearance, such as 3-D, anti-aliasing, lighting, and perspective. Events and customizations. Interactivity and Microsoft Ajax. Support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add Microsoft Ajax Library and jQuery scripts to your Web applications. For more information, see Chart Web Server Control Overview. Visual Web Developer Enhancements The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript. Improved CSS Compatibility The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio. HTML and JScript Snippets In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense Snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript, alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets. JScript IntelliSense Enhancements In Visual 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience. 
IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved to analyze large libraries of script and to display IntelliSense with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and to support diverse coding styles. Documentation comments are now parsed as you type and are immediately leveraged by IntelliSense. Web Application Deployment with Visual Studio 2010 For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated: Creating an IIS application on the destination computer and configuring IIS settings. Copying files to the destination computer. Changing Web.config settings that must be different in the destination environment. Propagating changes to data or data structures in SQL Server databases that are used by the Web application. For more information about Web application deployment, see ASP.NET Deployment Content Map. Enhancements to ASP.NET Multi-Targeting ASP.NET 4 adds new features to the multi-targeting feature to make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version. However, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only language features and properties available in the targeted version of the .NET Framework are shown. For more information about multi-targeting, see the following topics: .NET Framework Multi-Targeting for ASP.NET Web Projects ASP.NET Side-by-Side Execution Overview How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework

    Read the article

  • iptables syn flood countermeasure

    - by Penegal
    I'm trying to adjust my iptables firewall to increase the security of my server, and I found something a bit problematic here : I have to set INPUT policy to ACCEPT and, in addition, to have a rule saying iptables -I INPUT -i eth0 -j ACCEPT. Here comes my script (launched manually for tests) : #!/bin/sh IPT=/sbin/iptables echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X echo "Defining logging policy for dropped packets" $IPT -N LOGDROP $IPT -A LOGDROP -j LOG -m limit --limit 5/min --log-level debug --log-prefix "iptables rejected: " $IPT -A LOGDROP -j DROP echo "Setting firewall policy" $IPT -P INPUT DROP # Deny all incoming connections $IPT -P OUTPUT ACCEPT # Allow all outgoing connections $IPT -P FORWARD DROP # Deny all forwaring echo "Allowing connections from/to lo and incoming connections from eth0" $IPT -I INPUT -i lo -j ACCEPT $IPT -I OUTPUT -o lo -j ACCEPT #$IPT -I INPUT -i eth0 -j ACCEPT echo "Setting SYN flood countermeasures" $IPT -A INPUT -p tcp -i eth0 --syn -m limit --limit 100/second --limit-burst 200 -j LOGDROP echo "Allowing outgoing traffic corresponding to already initiated connections" $IPT -A OUTPUT -p ALL -m state --state ESTABLISHED,RELATED -j ACCEPT echo "Allowing incoming SSH" $IPT -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH -j ACCEPT echo "Setting SSH bruteforce attacks countermeasures (deny more than 10 connections every 10 minutes)" $IPT -A INPUT -p tcp --dport 22 -m recent --update --seconds 600 --hitcount 10 --rttl --name SSH -j LOGDROP echo "Allowing incoming traffic for HTTP, SMTP, NTP, PgSQL and SolR" $IPT -A INPUT -p tcp --dport 25 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 80 -i eth0 -j ACCEPT $IPT -A INPUT -p udp --dport 123 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p tcp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT echo "Allowing outgoing traffic for ICMP, SSH, whois, SMTP, DNS, HTTP, PgSQL and SolR" $IPT -A OUTPUT -p tcp --dport 22 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 25 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 43 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 80 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 80 -o eth0 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p icmp -j ACCEPT echo "Allowing outgoing FTP backup" $IPT -A OUTPUT -p tcp --dport 20:21 -o eth0 -d 91.121.190.78 -j ACCEPT echo "Dropping and logging everything else" $IPT -A INPUT -s 0/0 -j LOGDROP $IPT -A OUTPUT -j LOGDROP $IPT -A FORWARD -j LOGDROP echo "Firewall loaded." 
echo "Maintaining new rules for 3 minutes for tests" sleep 180 $IPT -nvL echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X $IPT -P INPUT ACCEPT $IPT -P OUTPUT ACCEPT $IPT -P FORWARD ACCEPT When I launch this script (I only have a SSH access), the shell displays every message up to Maintaining new rules for 3 minutes for tests, the server is unresponsive during the 3 minutes delay and then resume normal operations. The only solution I found until now was to set $IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT, but this configuration does not protect me of any attack, which is a great shame for a firewall. I suspect that the error comes from my script and not from iptables, but I don't understand what's wrong with my script. Could some do-gooder explain me my error, please? EDIT: here comes the result of iptables -nvL with the "accept all input" ($IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT) solution : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1 52 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:8983 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 2 728 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.78 tcp dpts:20:21 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (5 references) pkts bytes target prot opt in out source destination 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 EDIT #2 : I modified my script (policy ACCEPT, defining authorized incoming packets then logging and dropping everything else) to write iptables -nvL results to a file and to allow only 
EDIT #2: I modified my script (policy ACCEPT, defining the authorized incoming packets, then logging and dropping everything else) so that it writes the iptables -nvL results to a file and allows only 10 ICMP requests per second, logging and dropping the excess. The result was unexpected: while the server was unreachable over SSH, even for already-established sessions, I ping-flooded it from another server and the ping rate was indeed limited to 10 requests per second. During the test I also tried to open new SSH connections; they remained unanswered until the script flushed the rules. Here are the iptables statistics written after these tests:

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target   prot opt in        out  source      destination
  600 35520 ACCEPT   all  --  lo        *    0.0.0.0/0   0.0.0.0/0
    6   360 LOGDROP  tcp  --  eth0      *    0.0.0.0/0   0.0.0.0/0   tcp flags:0x17/0x02 limit: avg 100/sec burst 200
    0     0 LOGDROP  tcp  --  eth0      *    0.0.0.0/0   0.0.0.0/0   tcp dpt:80 STRING match "w00tw00t.at.ISC.SANS." ALGO name bm TO 65535
    0     0 LOGDROP  tcp  --  *         *    0.0.0.0/0   0.0.0.0/0   tcp dpt:80 STRING match "Host: anoticiapb.com.br" ALGO name bm TO 65535
    0     0 LOGDROP  tcp  --  *         *    0.0.0.0/0   0.0.0.0/0   tcp dpt:80 STRING match "Host: www.anoticiapb.com.br" ALGO name bm TO 65535
  105  8820 ACCEPT   icmp --  *         *    0.0.0.0/0   0.0.0.0/0   limit: avg 10/sec burst 5
  830 69720 LOGDROP  icmp --  *         *    0.0.0.0/0   0.0.0.0/0
    0     0 ACCEPT   tcp  --  *         *    0.0.0.0/0   0.0.0.0/0   tcp dpt:22 state NEW recent: SET name: SSH side: source
    0     0 LOGDROP  tcp  --  *         *    0.0.0.0/0   0.0.0.0/0   tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source
    0     0 ACCEPT   tcp  --  eth0      *    0.0.0.0/0   0.0.0.0/0   tcp dpt:25
    0     0 ACCEPT   tcp  --  eth0      *    0.0.0.0/0   0.0.0.0/0   tcp dpt:80
    0     0 ACCEPT   udp  --  eth0      *    0.0.0.0/0   0.0.0.0/0   udp dpt:80
    0     0 ACCEPT   udp  --  eth0      *    0.0.0.0/0   0.0.0.0/0   udp dpt:123
    0     0 ACCEPT   tcp  --  eth0      *    0.0.0.0/0   0.0.0.0/0   tcp dpt:443
    0     0 ACCEPT   tcp  --  eth0.2654 *    172.16.0.1  0.0.0.0/0   tcp spt:5433
    0     0 ACCEPT   udp  --  eth0.2654 *    172.16.0.1  0.0.0.0/0   udp spt:5433
    0     0 ACCEPT   tcp  --  eth0.2654 *    172.16.0.1  0.0.0.0/0   tcp spt:8983
    0     0 ACCEPT   udp  --  eth0.2654 *    172.16.0.1  0.0.0.0/0   udp spt:8983
   16  1684 LOGDROP  all  --  *         *    0.0.0.0/0   0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target   prot opt in        out  source      destination
    0     0 LOGDROP  all  --  *         *    0.0.0.0/0   0.0.0.0/0

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target   prot opt in  out        source      destination
  600 35520 ACCEPT   all  --  *   lo         0.0.0.0/0   0.0.0.0/0
    0     0 LOGDROP  tcp  --  *   eth0       0.0.0.0/0   0.0.0.0/0   tcp dpt:80 owner UID match 33
    0     0 LOGDROP  udp  --  *   eth0       0.0.0.0/0   0.0.0.0/0   udp dpt:80 owner UID match 33
  116 11136 ACCEPT   all  --  *   *          0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
    0     0 ACCEPT   tcp  --  *   *          0.0.0.0/0   0.0.0.0/0   tcp dpt:22
    0     0 ACCEPT   tcp  --  *   eth0       0.0.0.0/0   0.0.0.0/0   tcp dpt:25
    0     0 ACCEPT   tcp  --  *   eth0       0.0.0.0/0   0.0.0.0/0   tcp dpt:53
    0     0 ACCEPT   udp  --  *   eth0       0.0.0.0/0   0.0.0.0/0   udp dpt:53
    0     0 ACCEPT   tcp  --  *   eth0       0.0.0.0/0   0.0.0.0/0   tcp dpt:80
    0     0 ACCEPT   udp  --  *   eth0       0.0.0.0/0   0.0.0.0/0   udp dpt:80
    0     0 ACCEPT   tcp  --  *   eth0.2654  0.0.0.0/0   0.0.0.0/0   tcp dpt:5433
    0     0 ACCEPT   udp  --  *   eth0.2654  0.0.0.0/0   0.0.0.0/0   udp dpt:5433
    0     0 ACCEPT   tcp  --  *   eth0.2654  0.0.0.0/0   0.0.0.0/0   tcp dpt:8983
    0     0 ACCEPT   udp  --  *   eth0.2654  0.0.0.0/0   0.0.0.0/0   udp dpt:8983
    0     0 ACCEPT   icmp --  *   *          0.0.0.0/0   0.0.0.0/0
    0     0 ACCEPT   tcp  --  *   eth0       0.0.0.0/0   0.0.0.0/0   tcp dpt:43
    0     0 ACCEPT   tcp  --  *   eth0       0.0.0.0/0   91.121.190.18 tcp dpts:20:21
    7  1249 LOGDROP  all  --  *   *          0.0.0.0/0   0.0.0.0/0

Chain LOGDROP (11 references)
 pkts bytes target   prot opt in  out  source      destination
   35  3156 LOG      all  --  *   *    0.0.0.0/0   0.0.0.0/0   limit: avg 1/sec burst 5 LOG flags 0 level 7 prefix `iptables rejected: '
  859 73013 DROP     all  --  *   *    0.0.0.0/0   0.0.0.0/0
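The 10-per-second ICMP limit shows up in the listing as an ACCEPT-within-limit rule immediately followed by an unconditional LOGDROP. Reconstructed from the counters (again an approximation rather than the script verbatim), the pair looks like this:

# Accept ICMP while it stays within 10 packets/sec (burst 5),
# then log and drop whatever exceeds the limit
$IPT -A INPUT -p icmp -m limit --limit 10/second --limit-burst 5 -j ACCEPT
$IPT -A INPUT -p icmp -j LOGDROP

That ordering matches what I observed during the ping flood: about 10 echo requests per second were accepted (105 packets on the ACCEPT rule) and the rest fell through to LOGDROP (830 packets).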
Here is the log content added during this test:

Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55666 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55667 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55668 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55669 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:52 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55670 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:54 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55671 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:58 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55672 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=6
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=7
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=8
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=9
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=59
Mar 28 09:53:00 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=152
Mar 28 09:53:01 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=246
Mar 28 09:53:02 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=339
Mar 28 09:53:03 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=432
Mar 28 09:53:04 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=524
Mar 28 09:53:05 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=617
Mar 28 09:53:06 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=711
Mar 28 09:53:07 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=804
Mar 28 09:53:08 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=897
Mar 28 09:53:16 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61402 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:19 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61403 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:21 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55674 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:53:25 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61404 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55675 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55676 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55677 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:38 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55678 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55679 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5055 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:41 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55680 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:42 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5056 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:45 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55681 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:48 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5057 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0

If I read these results correctly, they say that the ICMP rules were applied by iptables as intended, but the SSH rules were not. That does not make any sense to me... Does somebody understand where my error comes from?

EDIT #3: After some more tests, I found out that commenting out the SYN flood countermeasure removes the problem. I am continuing to investigate in that direction but, in the meantime, if somebody sees the error in my anti-SYN-flood rule...
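For reference, the SYN flood countermeasure in question is the rule shown in the listings as "tcp flags:0x17/0x02 limit: avg 100/sec burst 200" jumping to LOGDROP. Reconstructed from that counter line (an approximation of my rule, not a verbatim quote), it is roughly:

# Assumed form of the anti-SYN-flood rule, rebuilt from the
# "tcp flags:0x17/0x02 limit: avg 100/sec burst 200" counter line
$IPT -A INPUT -i eth0 -p tcp --syn -m limit --limit 100/second --limit-burst 200 -j LOGDROP

One difference I notice compared with the ICMP pair earlier is that here the limit match jumps straight to LOGDROP, so the SYN packets that are still within the 100/sec rate are the ones being dropped (the six packets counted on that rule match the six rejected SSH SYN attempts in the log above); whether that fully explains the lockout is what I am still checking.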
