Search Results

Search found 3938 results on 158 pages for 'goal'.


  • where clause on field defined by sub-query

    - by stUrb
    I have this query, which displays some properties and counts the number of references to each one from another table:

        SELECT p.id, p.propName,
               (SELECT COUNT(*) FROM propLoc WHERE propLoc.propID = p.id) AS number
        FROM property AS p
        WHERE p.category != 'natural'

    This generates a table with all the information I want to filter:

        id | propName | number
        3  | Name 1   | 3
        4  | Name 2   | 1
        5  | Name 3   | 0
        6  | Name 4   | 10
        etc etc

    I now want to filter out the properties with number <= 0, so I tried adding AND number > 0, but that fails with "Unknown column 'number' in 'where clause'". Apparently you can't filter on a name defined by a sub-query? How can I achieve my goal?
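
    One hedged way around this, assuming MySQL (the table and column names are taken from the question): the WHERE clause runs before select-list aliases exist, but MySQL's HAVING, or an outer query around a derived table, can both see the alias.

        -- Option 1: filter on the alias with HAVING (a MySQL extension)
        SELECT p.id, p.propName,
               (SELECT COUNT(*) FROM propLoc WHERE propLoc.propID = p.id) AS number
        FROM property AS p
        WHERE p.category != 'natural'
        HAVING number > 0;

        -- Option 2: wrap the query in a derived table and filter in the outer query
        SELECT t.*
        FROM (
            SELECT p.id, p.propName,
                   (SELECT COUNT(*) FROM propLoc WHERE propLoc.propID = p.id) AS number
            FROM property AS p
            WHERE p.category != 'natural'
        ) AS t
        WHERE t.number > 0;

    Option 2 is the more portable of the two, since standard SQL only allows HAVING together with grouping.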

    Read the article

  • Are there any Android YouTube Player Flags?

    - by stormin986
    My goal is to launch a video in the YouTube player, and then return to my app. I realize I can't control what's going on in another app, BUT I'm wondering if the YouTube app supports any kind of flags or data set in the calling Intent that might support this functionality. Anyone aware of any? I cannot find any documentation on the YouTube app/activities. I had to watch the logs just to figure out which activity to call in the first place (so the choose app dialog didn't appear).
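
    No YouTube-specific flags appear to be documented, so a hedged fallback is to rely on standard Activity flags on the calling Intent. The sketch below is an assumption about behavior, not a documented YouTube option: FLAG_ACTIVITY_NO_HISTORY keeps the player off the back stack so pressing Back returns to the calling app, and the vnd.youtube: URI is a commonly used (but unofficial) way to target the player.

        import android.app.Activity;
        import android.content.Intent;
        import android.net.Uri;

        public final class YouTubeLauncher {
            // Hypothetical helper; only standard Intent flags are used, since no
            // YouTube-specific extras are documented.
            public static void playVideo(Activity caller, String videoId) {
                Intent intent = new Intent(Intent.ACTION_VIEW,
                        Uri.parse("vnd.youtube:" + videoId));
                intent.addFlags(Intent.FLAG_ACTIVITY_NO_HISTORY);
                caller.startActivity(intent);
            }
        }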

    Read the article

  • Clarifying... So Background Jobs don't Tie Up Application Resources (in Rails)?

    - by viatropos
    I'm trying to get a better grasp of the inner workings of background jobs and how they improve performance. I understand that the goal is to have the application return a response to the user as fast as it can, so you don't want to, say, parse a huge feed that would take 10 seconds, because that would prevent the application from processing any other requests. So it's recommended to put any operation that takes more than, say, 500 ms into a queued background job. What I don't understand is: doesn't that just delay the same problem? I know the user who invoked that background job will get an immediate response, but what if another user arrives right when that background job starts (and it takes 10 seconds to finish): won't that user have to wait? Or is the main point that requests can only be handled one at a time, whereas a request can start while one or more background jobs are still running? Is that correct?
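
    For illustration, a hedged Rails-style sketch of the mechanism being described; the job class, FeedParser, and the Sidekiq-style worker API are assumptions chosen for brevity, not anything from the question:

        # The controller enqueues the slow work and responds immediately; the
        # 10-second parse then runs on a separate worker process, so the web
        # processes stay free to answer other incoming requests.
        class FeedsController < ApplicationController
          def create
            ParseFeedJob.perform_async(params[:feed_url])  # returns almost instantly
            render json: { status: "queued" }
          end
        end

        class ParseFeedJob
          include Sidekiq::Worker

          def perform(feed_url)
            FeedParser.parse(feed_url)  # hypothetical slow operation
          end
        end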

    Read the article

  • PHP Math issue with negatives [closed]

    - by user1269625
    Possible Duplicate: PHP negatives keep adding

    I have this code:

        $remaining = 0;
        foreach ($array as $value => $row) {
            $remaining = $remaining + $row['remainingbalance'];
        }

    It loops over all the remaining balances in the array, which are -51.75 and -17.85; with the code above I get -69.60, which is correct. But I am wondering: when there are two negatives, could they subtract instead? Is that possible? I tried this:

        $remaining = 0;
        foreach ($clientArrayInvoice as $value => $row) {
            $remaining = $remaining + abs($row['remainingbalance']);
        }

    but it gives me 69.60 without the negative. Anyone got any ideas? My goal is to take -51.75 and -17.85 and come up with -33.90: only when a value is negative should it subtract, otherwise add.
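
    One literal reading of the stated goal (-51.75 combined with -17.85 giving -33.90) is to keep the first balance as it is and then subtract, rather than add, every later balance that is negative. Below is a minimal PHP sketch under that assumption; the array shape mirrors the question, and the rule itself is a guess that should be adjusted if the real requirement differs:

        <?php
        // Hypothetical rows shaped like the question's $clientArrayInvoice entries.
        $rows = array(
            array('remainingbalance' => -51.75),
            array('remainingbalance' => -17.85),
        );

        $remaining = null;
        foreach ($rows as $row) {
            $balance = $row['remainingbalance'];
            if ($remaining === null) {
                $remaining = $balance;        // first balance seeds the total
            } elseif ($balance < 0) {
                $remaining -= $balance;       // -51.75 - (-17.85) = -33.90
            } else {
                $remaining += $balance;       // positive balances are still added
            }
        }

        echo $remaining; // -33.9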

    Read the article

  • Beginner's Language app

    - by Eiseldora
    Hi, I'm a techie with no programming experience. I know HTML and CSS, but I'd like to someday be able to make an app for my phone (I have an Android) and possibly mobile websites. I made learning a programming language and creating a mobile app a goal for my job, and now my boss would like me to pick a programming language to learn. I found a free open course from MIT (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00-introduction-to-computer-science-and-programming-fall-2008/) called Introduction to Computer Science. In the course they teach Python, but more importantly it seems they teach how to think like a programmer. When I told my boss about the free online course, she didn't think that Python was an appropriate language for me to learn. She'd like me to learn a language that is more similar to one used to make phone apps. Does anyone out there know a better language for me to pick up, one closer to what Android or iPhone apps are written in? Thank you

    Read the article

  • Writing out to a file in Scheme

    - by Ceelos
    The goal of this is to check whether the element being looked at is a number or an operand, and then output it into a list which will be written out to a txt file. I'm wondering which approach would be more efficient: doing it as stated above (collecting the results in a list and then writing that list out to a file), or writing directly to the txt file from the procedure. I'm new to Scheme, so I apologize if I am not using the correct terminology.

        (define input '("3" "+" "4"))
        (define check
          (if (number? (car input))
              (write this out to a list or directly to a file)
              (check the rest of file)))

    Another question I had in mind: how can I make the check process recursive? I know it's a lot to ask, but I've been getting a little frustrated with the methods I have found on other sites. I really appreciate the help!
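
    A minimal sketch of one way to do both things, assuming an R5RS-style Scheme with file ports. Since the input elements are strings, string->number (rather than number?) is used to recognise the numeric ones; the file name output.txt is just a placeholder, and call-with-output-file may refuse to overwrite an existing file in some implementations:

        (define input '("3" "+" "4"))

        ;; Recursively keep only the elements that parse as numbers.
        (define (collect-numbers items)
          (cond ((null? items) '())
                ((string->number (car items))
                 (cons (car items) (collect-numbers (cdr items))))
                (else (collect-numbers (cdr items)))))

        ;; Write each collected element on its own line of output.txt.
        (call-with-output-file "output.txt"
          (lambda (port)
            (for-each (lambda (item)
                        (display item port)
                        (newline port))
                      (collect-numbers input))))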

    Read the article

  • Updating multiple rows with an array

    - by Copephobia
    I have a table that holds user information. One of the columns holds the position of the user in the game they are in. When a game is being created, I need to update the positions of the users on each team. Here is an example:

        Game id        : 7
        Team 1 users   : 1, 2
        Team 2 users   : 3, 4
        team1_position : array(1, 2)
        team2_position : array(13, 14)

    What I want to do is update the user table using the array of positions in the SET clause. My goal is to be able to update the users without needing their ids (I have different size game boards, so I have multiple position arrays for each board size). How can I do something like this:

        UPDATE user SET position = '(team1_position)' WHERE game = '7' AND team = '1'

    I feel like it would be a waste of resources to select all the ids of each team and update them separately.
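
    A hypothetical sketch of one way to hand out the positions without selecting ids first, assuming MySQL: ELT(n, ...) returns the n-th value from a list, and a user variable counts the rows as they are updated in a fixed order. The ORDER BY column and the board positions are taken from the example above; note that this user-variable idiom is a classic MySQL 5.x trick, and MySQL 8.0 would favour a join against a derived table using ROW_NUMBER() instead.

        SET @slot := 0;

        -- Team 1 of game 7 gets positions 1 and 2, handed out in id order.
        UPDATE user
        SET position = ELT(@slot := @slot + 1, 1, 2)
        WHERE game = 7 AND team = 1
        ORDER BY id;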

    Read the article

  • How to search and validate plain text (where it starts with http AND ends with .aspx) to be a valid hyperlink in page body content?

    - by syntaxcode
    My web page content is populated by plain text that is retrieved from a CDATA section:

        This is the site http://checksite.aspx to get information. For more information, visit http://moresites.com/FAQ/index.html or search the site.

    Now, my goal is to convert this plain text into valid hyperlinks. I've used a JavaScript regex that does the conversion - /((http|https|ftp):\/\/[^ ]+)/g; - but sometimes, if there are multiple words, it captures an invalid URL. My question: is there a way to strictly capture only strings that start with "http" AND end with ".html" or ".aspx", and convert those into valid hyperlinks? The output should be the same text, but with the two URLs rendered as clickable links:

        This is the site http://checksite.aspx to get information. For more information, visit http://moresites.com/FAQ/index.html or search the site.
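
    A hedged sketch of one way to tighten the pattern in plain browser JavaScript; the \S+? and the (?:html|aspx) alternation are the key changes from the regex in the question, and the sample text is the one from above:

        // Match http(s) URLs that end in .html or .aspx and wrap each in an anchor tag.
        // \S+? keeps a match from swallowing the words that follow; widen the pattern
        // if real URLs carry query strings or fragments after the extension.
        var text = "This is the site http://checksite.aspx to get information. " +
                   "For more information, visit http://moresites.com/FAQ/index.html or search the site.";

        var linked = text.replace(/https?:\/\/\S+?\.(?:html|aspx)\b/gi, function (url) {
            return '<a href="' + url + '">' + url + '</a>';
        });

        console.log(linked);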

    Read the article

  • how to receive mail using Python?

    - by user600950
    Hey everyone, I am trying to make a program on my computer at home that will constantly check a certain Gmail address. The only email this address receives is from me. I would just like to be able to:

    1. Check for mail
    2. Download the mail (presumably to a string, though a file is acceptable)
    3. Delete the mail from the web server but keep it on my computer

    That is all I need to know right now; however, my long-term goal is to set up a kind of remote terminal over email, so that wherever I have email I have a certain amount of control over my computer.
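
    A minimal sketch of the three steps using only the standard library, assuming Python 3 and that IMAP access is enabled for the Gmail account; the credentials are placeholders, and Gmail applies its own semantics to the \Deleted flag (by default the message leaves the inbox but stays in All Mail), which is worth checking against the actual goal:

        import imaplib
        import email

        USER, PASSWORD = "me@gmail.com", "app-password"   # placeholders

        conn = imaplib.IMAP4_SSL("imap.gmail.com")
        conn.login(USER, PASSWORD)
        conn.select("INBOX")

        # 1. Check for mail: list every message currently in the inbox.
        status, data = conn.search(None, "ALL")
        for msg_id in data[0].split():
            # 2. Download the message and parse it into an email object.
            status, msg_data = conn.fetch(msg_id, "(RFC822)")
            message = email.message_from_bytes(msg_data[0][1])
            print(message["Subject"])
            # ... save msg_data[0][1] locally here before deleting ...

            # 3. Flag the message for deletion on the server.
            conn.store(msg_id, "+FLAGS", "\\Deleted")

        conn.expunge()   # actually remove the flagged messages
        conn.logout()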

    Read the article

  • Is it easier to create a chat client using Java or PHP?

    - by Mendy
    First off, my goal: create a chat client that requires the user to log in with a password. Would it be easier to create it with just HTML and PHP, or with Java? EDIT: Sorry, I didn't mention: I have some experience in JavaScript. I'm afraid that is all. I could and would learn another language, so I want to know which language would be better to learn for this project. Also, for the username and password login I'm guessing I would need some MySQL knowledge? Thanks.

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
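To make that concrete, a hypothetical two-entry session might look roughly like the text below; the separator token and the exact headers are illustrative assumptions, not the tool's documented format, but they follow the description given here (full URL on the first line, plain HTTP headers, a separator line between requests):

GET http://localhost/store/albums HTTP/1.1
Accept: text/html
User-Agent: WebSurge

----------------------------------------------------------------

POST http://localhost/store/album HTTP/1.1
Content-Type: application/json

{"title": "Dirty Deeds", "artist": "AC/DC"}
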
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status

Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me, and the work I do with it made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

Microsoft MVPs and Insiders get a free License

If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected].

Resources

For more info on WebSurge and to download it to try it out, use the following links:

West Wind WebSurge Home
Download West Wind WebSurge
Getting Started with West Wind WebSurge Video

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in ASP.NET

    Read the article

  • Creating Custom Ajax Control Toolkit Controls

    - by Stephen Walther
    The goal of this blog entry is to explain how you can extend the Ajax Control Toolkit with custom Ajax Control Toolkit controls. I describe how you can create the two halves of an Ajax Control Toolkit control: the server-side control extender and the client-side control behavior. Finally, I explain how you can use the new Ajax Control Toolkit control in a Web Forms page. At the end of this blog entry, there is a link to download a Visual Studio 2010 solution which contains the code for two Ajax Control Toolkit controls: SampleExtender and PopupHelpExtender. The SampleExtender contains the minimum skeleton for creating a new Ajax Control Toolkit control. You can use the SampleExtender as a starting point for your custom Ajax Control Toolkit controls. The PopupHelpExtender control is a super simple custom Ajax Control Toolkit control. This control extender displays a help message when you start typing into a TextBox control. The animated GIF below demonstrates what happens when you click into a TextBox which has been extended with the PopupHelp extender. Here’s a sample of a Web Forms page which uses the control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowPopupHelp.aspx.cs" Inherits="MyACTControls.Web.Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head runat="server"> <title>Show Popup Help</title> </head> <body> <form id="form1" runat="server"> <div> <act:ToolkitScriptManager ID="tsm" runat="server" /> <%-- Social Security Number --%> <asp:Label ID="lblSSN" Text="SSN:" AssociatedControlID="txtSSN" runat="server" /> <asp:TextBox ID="txtSSN" runat="server" /> <act:PopupHelpExtender id="ph1" TargetControlID="txtSSN" HelpText="Please enter your social security number." runat="server" /> <%-- Social Security Number --%> <asp:Label ID="lblPhone" Text="Phone Number:" AssociatedControlID="txtPhone" runat="server" /> <asp:TextBox ID="txtPhone" runat="server" /> <act:PopupHelpExtender id="ph2" TargetControlID="txtPhone" HelpText="Please enter your phone number." runat="server" /> </div> </form> </body> </html> In the page above, the PopupHelp extender is used to extend the functionality of the two TextBox controls. When focus is given to a TextBox control, the popup help message is displayed. An Ajax Control Toolkit control extender consists of two parts: a server-side control extender and a client-side behavior. For example, the PopupHelp extender consists of a server-side PopupHelpExtender control (PopupHelpExtender.cs) and a client-side PopupHelp behavior JavaScript script (PopupHelpBehavior.js). Over the course of this blog entry, I describe how you can create both the server-side extender and the client-side behavior. Writing the Server-Side Code Creating a Control Extender You create a control extender by creating a class that inherits from the abstract ExtenderControlBase class. For example, the PopupHelpExtender control is declared like this: public class PopupHelpExtender: ExtenderControlBase { } The ExtenderControlBase class is part of the Ajax Control Toolkit. This base class contains all of the common server properties and methods of every Ajax Control Toolkit extender control. The ExtenderControlBase class inherits from the ExtenderControl class. The ExtenderControl class is a standard class in the ASP.NET framework located in the System.Web.UI namespace. This class is responsible for generating a client-side behavior. 
The class generates a call to the Microsoft Ajax Library $create() method which looks like this: <script type="text/javascript"> $create(MyACTControls.PopupHelpBehavior, {"HelpText":"Please enter your social security number.","id":"ph1"}, null, null, $get("txtSSN")); }); </script> The JavaScript $create() method is part of the Microsoft Ajax Library. The reference for this method can be found here: http://msdn.microsoft.com/en-us/library/bb397487.aspx This method accepts the following parameters: type – The type of client behavior to create. The $create() method above creates a client PopupHelpBehavior. Properties – Enables you to pass initial values for the properties of the client behavior. For example, the initial value of the HelpText property. This is how server property values are passed to the client. Events – Enables you to pass client-side event handlers to the client behavior. References – Enables you to pass references to other client components. Element – The DOM element associated with the client behavior. This will be the DOM element associated with the control being extended such as the txtSSN TextBox. The $create() method is generated for you automatically. You just need to focus on writing the server-side control extender class. Specifying the Target Control All Ajax Control Toolkit extenders inherit a TargetControlID property from the ExtenderControlBase class. This property, the TargetControlID property, points at the control that the extender control extends. For example, the Ajax Control Toolkit TextBoxWatermark control extends a TextBox, the ConfirmButton control extends a Button, and the Calendar control extends a TextBox. You must indicate the type of control which your extender is extending. You indicate the type of control by adding a [TargetControlType] attribute to your control. For example, the PopupHelp extender is declared like this: [TargetControlType(typeof(TextBox))] public class PopupHelpExtender: ExtenderControlBase { } The PopupHelp extender can be used to extend a TextBox control. If you try to use the PopupHelp extender with another type of control then an exception is thrown. If you want to create an extender control which can be used with any type of ASP.NET control (Button, DataView, TextBox or whatever) then use the following attribute: [TargetControlType(typeof(Control))] Decorating Properties with Attributes If you decorate a server-side property with the [ExtenderControlProperty] attribute then the value of the property gets passed to the control’s client-side behavior. The value of the property gets passed to the client through the $create() method discussed above. The PopupHelp control contains the following HelpText property: [ExtenderControlProperty] [RequiredProperty] public string HelpText { get { return GetPropertyValue("HelpText", "Help Text"); } set { SetPropertyValue("HelpText", value); } } The HelpText property determines the help text which pops up when you start typing into a TextBox control. Because the HelpText property is decorated with the [ExtenderControlProperty] attribute, any value assigned to this property on the server is passed to the client automatically. For example, if you declare the PopupHelp extender in a Web Form page like this: <asp:TextBox ID="txtSSN" runat="server" /> <act:PopupHelpExtender id="ph1" TargetControlID="txtSSN" HelpText="Please enter your social security number." 
runat="server" />   Then the PopupHelpExtender renders the call to the the following Microsoft Ajax Library $create() method: $create(MyACTControls.PopupHelpBehavior, {"HelpText":"Please enter your social security number.","id":"ph1"}, null, null, $get("txtSSN")); You can see this call to the JavaScript $create() method by selecting View Source in your browser. This call to the $create() method calls a method named set_HelpText() automatically and passes the value “Please enter your social security number”. There are several attributes which you can use to decorate server-side properties including: ExtenderControlProperty – When a property is marked with this attribute, the value of the property is passed to the client automatically. ExtenderControlEvent – When a property is marked with this attribute, the property represents a client event handler. Required – When a value is not assigned to this property on the server, an error is displayed. DefaultValue – The default value of the property passed to the client. ClientPropertyName – The name of the corresponding property in the JavaScript behavior. For example, the server-side property is named ID (uppercase) and the client-side property is named id (lower-case). IDReferenceProperty – Applied to properties which refer to the IDs of other controls. URLProperty – Calls ResolveClientURL() to convert from a server-side URL to a URL which can be used on the client. ElementReference – Returns a reference to a DOM element by performing a client $get(). The WebResource, ClientResource, and the RequiredScript Attributes The PopupHelp extender uses three embedded resources named PopupHelpBehavior.js, PopupHelpBehavior.debug.js, and PopupHelpBehavior.css. The first two files are JavaScript files and the final file is a Cascading Style sheet file. These files are compiled as embedded resources. You don’t need to mark them as embedded resources in your Visual Studio solution because they get added to the assembly when the assembly is compiled by a build task. You can see that these files get embedded into the MyACTControls assembly by using Red Gate’s .NET Reflector tool: In order to use these files with the PopupHelp extender, you need to work with both the WebResource and the ClientScriptResource attributes. The PopupHelp extender includes the following three WebResource attributes. [assembly: WebResource("PopupHelp.PopupHelpBehavior.js", "text/javascript")] [assembly: WebResource("PopupHelp.PopupHelpBehavior.debug.js", "text/javascript")] [assembly: WebResource("PopupHelp.PopupHelpBehavior.css", "text/css", PerformSubstitution = true)] These WebResource attributes expose the embedded resource from the assembly so that they can be accessed by using the ScriptResource.axd or WebResource.axd handlers. The first parameter passed to the WebResource attribute is the name of the embedded resource and the second parameter is the content type of the embedded resource. The PopupHelp extender also includes the following ClientScriptResource and ClientCssResource attributes: [ClientScriptResource("MyACTControls.PopupHelpBehavior", "PopupHelp.PopupHelpBehavior.js")] [ClientCssResource("PopupHelp.PopupHelpBehavior.css")] Including these attributes causes the PopupHelp extender to request these resources when you add the PopupHelp extender to a page. 
If you open View Source in a browser which uses the PopupHelp extender then you will see the following link for the Cascading Style Sheet file: <link href="/WebResource.axd?d=0uONMsWXUuEDG-pbJHAC1kuKiIMteQFkYLmZdkgv7X54TObqYoqVzU4mxvaa4zpn5H9ch0RDwRYKwtO8zM5mKgO6C4WbrbkWWidKR07LD1d4n4i_uNB1mHEvXdZu2Ae5mDdVNDV53znnBojzCzwvSw2&amp;t=634417392021676003" type="text/css" rel="stylesheet" /> You also will see the following script include for the JavaScript file: <script src="/ScriptResource.axd?d=pIS7xcGaqvNLFBvExMBQSp_0xR3mpDfS0QVmmyu1aqDUjF06TrW1jVDyXNDMtBHxpRggLYDvgFTWOsrszflZEDqAcQCg-hDXjun7ON0Ol7EXPQIdOe1GLMceIDv3OeX658-tTq2LGdwXhC1-dE7_6g2&amp;t=ffffffff88a33b59" type="text/javascript"></script> The JavaScrpt file returned by this request to ScriptResource.axd contains the combined scripts for any and all Ajax Control Toolkit controls in a page. By default, the Ajax Control Toolkit combines all of the JavaScript files required by a page into a single JavaScript file. Combining files in this way really speeds up how quickly all of the JavaScript files get delivered from the web server to the browser. So, by default, there will be only one ScriptResource.axd include for all of the JavaScript files required by a page. If you want to disable Script Combining, and create separate links, then disable Script Combining like this: <act:ToolkitScriptManager ID="tsm" runat="server" CombineScripts="false" /> There is one more important attribute used by Ajax Control Toolkit extenders. The PopupHelp behavior uses the following two RequirdScript attributes to load the JavaScript files which are required by the PopupHelp behavior: [RequiredScript(typeof(CommonToolkitScripts), 0)] [RequiredScript(typeof(PopupExtender), 1)] The first parameter of the RequiredScript attribute represents either the string name of a JavaScript file or the type of an Ajax Control Toolkit control. The second parameter represents the order in which the JavaScript files are loaded (This second parameter is needed because .NET attributes are intrinsically unordered). In this case, the RequiredScript attribute will load the JavaScript files associated with the CommonToolkitScripts type and the JavaScript files associated with the PopupExtender in that order. The PopupHelp behavior depends on these JavaScript files. Writing the Client-Side Code The PopupHelp extender uses a client-side behavior written with the Microsoft Ajax Library. 
Here is the complete code for the client-side behavior: (function () { // The unique name of the script registered with the // client script loader var scriptName = "PopupHelpBehavior"; function execute() { Type.registerNamespace('MyACTControls'); MyACTControls.PopupHelpBehavior = function (element) { /// <summary> /// A behavior which displays popup help for a textbox /// </summmary> /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param> MyACTControls.PopupHelpBehavior.initializeBase(this, [element]); this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element); this._cssClass = "ajax__popupHelp"; this._popupBehavior = null; this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft; this._popupDiv = null; this._helpText = "Help Text"; this._element$delegates = { focus: Function.createDelegate(this, this._element_onfocus), blur: Function.createDelegate(this, this._element_onblur) }; } MyACTControls.PopupHelpBehavior.prototype = { initialize: function () { MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize'); // Add event handlers for focus and blur var element = this.get_element(); $addHandlers(element, this._element$delegates); }, _ensurePopup: function () { if (!this._popupDiv) { var element = this.get_element(); var id = this.get_id(); this._popupDiv = $common.createElementFromTemplate({ nodeName: "div", properties: { id: id + "_popupDiv" }, cssClasses: ["ajax__popupHelp"] }, element.parentNode); this._popupBehavior = new $create(Sys.Extended.UI.PopupBehavior, { parentElement: element }, {}, {}, this._popupDiv); this._popupBehavior.set_positioningMode(this._popupPosition); } }, get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, _element_onfocus: function (e) { this.show(); }, _element_onblur: function (e) { this.hide(); }, show: function () { this._popupBehavior.show(); }, hide: function () { if (this._popupBehavior) { this._popupBehavior.hide(); } }, dispose: function() { var element = this.get_element(); $clearHandlers(element); if (this._popupBehavior) { this._popupBehavior.dispose(); this._popupBehavior = null; } } }; MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase); Sys.registerComponent(MyACTControls.PopupHelpBehavior, { name: "popupHelp" }); } // execute if (window.Sys && Sys.loader) { Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute); } else { execute(); } })();   In the following sections, we’ll discuss how this client-side behavior works. Wrapping the Behavior for the Script Loader The behavior is wrapped with the following script: (function () { // The unique name of the script registered with the // client script loader var scriptName = "PopupHelpBehavior"; function execute() { // Behavior Content } // execute if (window.Sys && Sys.loader) { Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute); } else { execute(); } })(); This code is required by the Microsoft Ajax Library Script Loader. You need this code if you plan to use a behavior directly from client-side code and you want to use the Script Loader. If you plan to only use your code in the context of the Ajax Control Toolkit then you can leave out this code. 
Registering a JavaScript Namespace The PopupHelp behavior is declared within a namespace named MyACTControls. In the code above, this namespace is created with the following registerNamespace() method: Type.registerNamespace('MyACTControls'); JavaScript does not have any built-in way of creating namespaces to prevent naming conflicts. The Microsoft Ajax Library extends JavaScript with support for namespaces. You can learn more about the registerNamespace() method here: http://msdn.microsoft.com/en-us/library/bb397723.aspx Creating the Behavior The actual Popup behavior is created with the following code. MyACTControls.PopupHelpBehavior = function (element) { /// <summary> /// A behavior which displays popup help for a textbox /// </summmary> /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param> MyACTControls.PopupHelpBehavior.initializeBase(this, [element]); this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element); this._cssClass = "ajax__popupHelp"; this._popupBehavior = null; this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft; this._popupDiv = null; this._helpText = "Help Text"; this._element$delegates = { focus: Function.createDelegate(this, this._element_onfocus), blur: Function.createDelegate(this, this._element_onblur) }; } MyACTControls.PopupHelpBehavior.prototype = { initialize: function () { MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize'); // Add event handlers for focus and blur var element = this.get_element(); $addHandlers(element, this._element$delegates); }, _ensurePopup: function () { if (!this._popupDiv) { var element = this.get_element(); var id = this.get_id(); this._popupDiv = $common.createElementFromTemplate({ nodeName: "div", properties: { id: id + "_popupDiv" }, cssClasses: ["ajax__popupHelp"] }, element.parentNode); this._popupBehavior = new $create(Sys.Extended.UI.PopupBehavior, { parentElement: element }, {}, {}, this._popupDiv); this._popupBehavior.set_positioningMode(this._popupPosition); } }, get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, _element_onfocus: function (e) { this.show(); }, _element_onblur: function (e) { this.hide(); }, show: function () { this._popupBehavior.show(); }, hide: function () { if (this._popupBehavior) { this._popupBehavior.hide(); } }, dispose: function() { var element = this.get_element(); $clearHandlers(element); if (this._popupBehavior) { this._popupBehavior.dispose(); this._popupBehavior = null; } } }; The code above has two parts. The first part of the code is used to define the constructor function for the PopupHelp behavior. This is a factory method which returns an instance of a PopupHelp behavior: MyACTControls.PopupHelpBehavior = function (element) { } The second part of the code modified the prototype for the PopupHelp behavior: MyACTControls.PopupHelpBehavior.prototype = { } Any code which is particular to a single instance of the PopupHelp behavior should be placed in the constructor function. For example, the default value of the _helpText field is assigned in the constructor function: this._helpText = "Help Text"; Any code which is shared among all instances of the PopupHelp behavior should be added to the PopupHelp behavior’s prototype. 
For example, the public HelpText property is added to the prototype: get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, Registering a JavaScript Class After you create the PopupHelp behavior, you must register the behavior as a class by using the Microsoft Ajax registerClass() method like this: MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase); This call to registerClass() registers PopupHelp behavior as a class which derives from the base Sys.Extended.UI.BehaviorBase class. Like the ExtenderControlBase class on the server side, the BehaviorBase class on the client side contains method used by every behavior. The documentation for the BehaviorBase class can be found here: http://msdn.microsoft.com/en-us/library/bb311020.aspx The most important methods and properties of the BehaviorBase class are the following: dispose() – Use this method to clean up all resources used by your behavior. In the case of the PopupHelp behavior, the dispose() method is used to remote the event handlers created by the behavior and disposed the Popup behavior. get_element() -- Use this property to get the DOM element associated with the behavior. In other words, the DOM element which the behavior extends. get_id() – Use this property to the ID of the current behavior. initialize() – Use this method to initialize the behavior. This method is called after all of the properties are set by the $create() method. Creating Debug and Release Scripts You might have noticed that the PopupHelp behavior uses two scripts named PopupHelpBehavior.js and PopupHelpBehavior.debug.js. However, you never create these two scripts. Instead, you only create a single script named PopupHelpBehavior.pre.js. The pre in PopupHelpBehavior.pre.js stands for preprocessor. When you build the Ajax Control Toolkit (or the sample Visual Studio Solution at the end of this blog entry), a build task named JSBuild generates the PopupHelpBehavior.js release script and PopupHelpBehavior.debug.js debug script automatically. The JSBuild preprocessor supports the following directives: #IF #ELSE #ENDIF #INCLUDE #LOCALIZE #DEFINE #UNDEFINE The preprocessor directives are used to mark code which should only appear in the debug version of the script. The directives are used extensively in the Microsoft Ajax Library. For example, the Microsoft Ajax Library Array.contains() method is created like this: $type.contains = function Array$contains(array, item) { //#if DEBUG var e = Function._validateParams(arguments, [ {name: "array", type: Array, elementMayBeNull: true}, {name: "item", mayBeNull: true} ]); if (e) throw e; //#endif return (indexOf(array, item) >= 0); } Notice that you add each of the preprocessor directives inside a JavaScript comment. The comment prevents Visual Studio from getting confused with its Intellisense. The release version, but not the debug version, of the PopupHelpBehavior script is also minified automatically by the Microsoft Ajax Minifier. The minifier is invoked by a build step in the project file. Conclusion The goal of this blog entry was to explain how you can create custom AJAX Control Toolkit controls. In the first part of this blog entry, you learned how to create the server-side portion of an Ajax Control Toolkit control. 
You learned how to derive a new control from the ExtenderControlBase class and decorate its properties with the necessary attributes. Next, in the second part of this blog entry, you learned how to create the client-side portion of an Ajax Control Toolkit control by creating a client-side behavior with JavaScript. You learned how to use the methods of the Microsoft Ajax Library to extend your client behavior from the BehaviorBase class. Download the Custom ACT Starter Solution

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging which looks like this: //>>excludeStart(“build”, true); system.debug(true); //>>excludeEnd(“build”); This code enables debugging for your Durandal app which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes: (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas). The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three page: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id");   The route for movie details includes a route parameter named id. Later, we will use the id parameter to lookup and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I’ll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here’s what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different age to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. 
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
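    To make that last point concrete, here is a sketch of what adding one more virtual page would involve. This page is not part of the Movies app described above; the "about" route, view model, and view below are hypothetical, but they follow the same conventions already configured in main.js:

    // main.js -- register the new route alongside the movie routes
    router.mapNav("about");

    // App/viewmodels/about.js -- a minimal view model
    define(function (require) {
        return {
            message: "A simple Movies database built with Durandal."
        };
    });

    <!-- App/views/about.html -- the matching view fragment -->
    <div>
        <h2>About</h2>
        <span data-bind="text:message"></span>
    </div>

    Because viewLocator.useConvention() maps the viewmodels folder to the views folder, navigating to #/about would load the new page without any further wiring.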
    I also appreciate that Durandal did not attempt to re-invent the wheel and instead leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These are powerful libraries, and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use them together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app's code. Durandal also supports a way to create custom widgets (client-side controls) by composing them from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App
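    One module the walkthrough depends on but never lists is repositories/moviesRepository, which the view models call through listMovies(), getMovie(), and addMovie(). The downloadable project contains the real implementation; purely as an illustration, a minimal in-memory version might look something like this:

    // App/repositories/moviesRepository.js -- hypothetical in-memory repository
    define(function (require) {
        var movies = [
            { id: 0, title: "Star Wars", director: "Lucas" },
            { id: 1, title: "Memento", director: "Nolan" },
            { id: 2, title: "King Kong", director: "Jackson" }
        ];
        return {
            listMovies: function () {
                return movies;
            },
            getMovie: function (id) {
                // route parameters arrive as strings, so coerce before comparing
                var numericId = parseInt(id, 10);
                var match = null;
                movies.forEach(function (movie) {
                    if (movie.id === numericId) { match = movie; }
                });
                return match;
            },
            addMovie: function (movie) {
                movie.id = movies.length;
                movies.push(movie);
            }
        };
    });

    In a real application this module would typically call the ASP.NET Web API with jQuery AJAX rather than keep the data in memory, but an in-memory version like this is enough to run the walkthrough end to end.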

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams and I will not be going in depth with Scrum but you can find out more about Scrum by reading the Scrum Guide and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS which have been developed during our own Scrum implementations. Acknowledgements Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010 Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance including the Branching Guidance for TFS 2010 Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs own project, backlog, budget and Team. Scenario: A product is developed RTM 1.0 is released and gets great sales.  Extra features are demanded but the new version will have double to price to pay to recover costs, work is approved by the guys with budget and a few sprints later RTM 2.0 is released.  Sales a very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches. As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms merge (forward integration) from parent to child (same direction as the branch), and merge  (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.   As I mentioned previously we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds don’t you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching  structures from he... We should somehow convince everyone that there is a happy between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc. 
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well? Many development projects consist of, a single “Main” line of source and artifacts. This is good; at least there is source control . There are however a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? I just use labels instead of branches? That's a lot of “what if’s”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   Its standard practice for large projects with lots of developers to use Feature branching and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has the permissions changes so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2 as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developer’s work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration) Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (we will talk about release branches later) Merge from “Sprint1” into “Main” to commit your changes. 
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet, we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product owner. Make sure you leave plenty of time between your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch instead that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create a RTM branch. Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and you Team is happy with the result you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check. 
Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch the RTM branch should never be deleted, or only deleted according to your companies legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web test and all the other good engineering practices that help produce reliable software.     Figure: After the Sprint Planning meeting the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2 you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging print 2 back into Main.” covers, what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost): Backlog: Add the bug to the backlog and fix it in the next Sprint Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s) Note: This can be a very costly if the current sprint has already had a lot of work completed as it will be lost. The choice will depend on the complexity and severity of the bug(s) and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs. Figure: Real world issue where a bug needs fixed in the current release. 
If the bug(s) is urgent enough then then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main and Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of Sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it as I find these exploratory posts coupled with real world experience really help harden my understanding.  Branching is a tool; it is not a silver bullet. Don’t over use it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010

    Read the article

  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command: Meteor create MovieApp After you execute this command, you should see something like the following: Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page: Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here’s what the MovieApp looks like in Visual Studio 2012: Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let’s see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text “Hello World!” to “Hello Cruel World!” and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser) and some of the JavaScript executes on the server and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser: if (Meteor.isClient) { console.log("Hello Browser!"); } if (Meteor.isServer) { console.log("Hello Server!"); } console.log("Hello Browser and Server!"); When you run the app, the message “Hello Browser!” is written to the browser JavaScript console. The message “Hello Server!” is written to the command/terminal window where you ran Meteor. Finally, the message “Hello Browser and Server!” is execute on both the browser and server and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app: · client – This folder contains any JavaScript which executes only on the client. · server – This folder contains any JavaScript which executes only on the server. · common – This folder contains any JavaScript code which executes on both the client and server. · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files. · public – This folder contains static application assets such as images. For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files Meteor combines all of your JavaScript files, and all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app then that is your business. 
However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements: <head> — All <head> files are combined into a single <head> and served with the initial page load. <body> — All <body> files are combined into a single <body> and served with the initial page load. <template> — All <template> files are compiled into JavaScript templates. Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files: · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app. · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies. · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate. · server\movies.js – Contains the JavaScript for seeding the database with movies. After you create these files, your folder structure should looks like this: Here’s what the client\movies.html file looks like: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} </body>   Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file: <template name="moviesTemplate"> <ul> {{#each movies}} <li> {{title}} </li> {{/each}} </ul> </template> By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here’s what this JavaScript file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file: // Declare server Movies collection Movies = new Meteor.Collection("movies"); // Seed the movie database with a few movies Meteor.startup(function () { if (Movies.find().count() == 0) { Movies.insert({ title: "Star Wars", director: "Lucas" }); Movies.insert({ title: "Memento", director: "Nolan" }); Movies.insert({ title: "King Kong", director: "Jackson" }); } }); The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database automatically). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie: <template name="movieForm"> <fieldset> <legend>Add New Movie</legend> <form> <div> <label> Title: <input id="title" /> </label> </div> <div> <label> Director: <input id="director" /> </label> </div> <div> <input type="submit" value="Add Movie" /> </div> </form> </fieldset> </template> In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} {{> movieForm }} </body> After I make these modifications, our Movie app will display the form: The next step is to handle the submit event for the movie form. Below, I’ve modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Movies.insert(newMovie); } }; The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app, no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I’m taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reasons – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers, and add movies, then you should notice that all of the movies appear on all of the open browser automatically. You don’t need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
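    As a small illustration of how far this reactivity goes (an extension of my own, not part of the original walkthrough), you could wire up a delete link for each movie using the same event-map pattern. It assumes each row in moviesTemplate contains a link such as <a class="delete" href="#">Delete</a>, and it only works while the default insecure package is still installed (more on that in the next section):

    // client\movies.js -- hypothetical addition
    Template.moviesTemplate.events = {
        'click .delete': function (e) {
            // don't follow the link
            e.preventDefault();
            // 'this' is the movie the clicked row was rendered from
            Movies.remove(this._id);
        }
    };

    Click a delete link in one browser and the row disappears from every open browser, just as inserts show up everywhere automatically.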
Removing the Insecure Module To make it easier to develop and debug a new Meteor app, by default, you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window: Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We’ll get an “Access denied” error in our browser console whenever we try to insert a new movie. No worries. I’ll fix this issue in the next section. Creating Meteor Methods By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods: Meteor.methods({ addMovie: function (newMovie) { // Perform form validation if (newMovie.title == "") { throw new Meteor.Error(413, "Missing title!"); } if (newMovie.director == "") { throw new Meteor.Error(413, "Missing director!"); } // Insert movie (simulate on client, do it on server) return Movies.insert(newMovie); } }); The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don’t enter a title or you don’t enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this: We actually call the addMovie() method within our client code in the client\movies.js file. Here’s what the updated file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Meteor.call( "addMovie", newMovie, function (err, result) { if (err) { alert("Could not add movie " + err.reason); } } ); } }; The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters: · The string name of the method to call. 
· The data to pass to the method (You can actually pass multiple params for the data if you like). · A callback function to invoke after the method completes. In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don’t use alerts for validation errors in a production app because they are ugly!). Summary The goal of this blog post was to provide you with a brief walk through of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I’m very impressed with the Meteor framework. The support for Live HTML and Latency Compensation are required features for many real world Single Page Apps but implementing these features by hand is not easy. Meteor makes it easy.
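    One aside the walkthrough doesn't cover: Meteor Methods are not the only way to restore writes after removing the insecure package. Collections also expose allow/deny rules that let selected client-side writes through. The following is a rough server-side sketch with made-up validation, not something from the Movie app itself:

    // server\permissions.js -- hypothetical allow rule
    Movies.allow({
        insert: function (userId, movie) {
            // only accept movies that have both fields filled in
            return movie.title !== "" && movie.director !== "";
        }
    });

    Meteor Methods remain the better fit when you want the same validation to run (and be simulated) on the client, as the addMovie() method above does, but allow/deny rules are handy for simple cases.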

    Read the article

  • ASR / SNMP on Exadata

    - by rene.kundersma
    Recently I worked with ASR on Exadata for multiple customers. ASR is a great feature that enables your systems to alert Oracle when hardware failures occur. Sun hardware has been using ASR for some time, and since 2009/2010 it is also available for Exadata. My goal is not to re-write the documentation, so for general information I refer you to this link. So, what is this note about? It is about two things I ran into while setting up ASR, and I'd like to share them so others can be successful with ASR quickly as well. (It is expected that the latest documentation will be updated to cover both.)
    First, imagine yourself configuring SNMP traps to be sent to ASR. In this situation, be sure not to erase any existing SNMP subscriber settings, for example the subscription to Enterprise Manager Grid Control or anything else you have already subscribed to. So, when the documentation tells you to execute "cellcli -e alter cell snmpSubscriber=(host=, port=)", be sure to include the existing snmpSubscribers when they exist. The syntax allows this: snmpSubscriber= ((host=host [,port=port] [,community=community][,type=ASR]) [,(host=host[,port=port][,community=community][,type=ASR])...)
    Second, when configuring snmpSubscribers using DCLI you have to use a backslash to escape the brackets. Be sure to verify your SNMP settings afterwards, because you might end up with a stray backslash in the 'asrs.state' file, showing 'public\' instead of 'public'. Having the extra backslash after the word 'public' of course doesn't help when sending SNMP traps:
    dcli -g dbs_group -l root -n "/opt/oracle.cellos/compmon/exadata_mon_hw_asr.pl -validate_snmp_subscriber -type asr"
    cn38: Sending test trap to destination - 173.25.100.43:162
    cn38: (1). count - 50 Failed to run "/usr/bin/snmptrap -v 2c -c public\ -M "+/opt/oracle.cellos/compmon/" -m SUN-HW-TRAP-MIB 173.25.100.43:162 "" SUN-HW-TRAP-MIB::sunHwTrapTestTrap sunHwTrapSystemIdentifier s " Sun Oracle Database Machine secret" sunHwTrapChassisId s "secret" sunHwTrapProductName s "SUN FIRE X4170 SERVER" sunHwTrapTestMessage s "This is a test trap. Exadata Compute Server: cn38.oracle.com ""
    cn38: getaddrinfo: +/opt/oracle.cellos/compmon/ Name or service not known
    cn38: snmptrap: Unknown host (+/opt/oracle.cellos/compmon/)
    Altogether, ASR is a great addition to Exadata that I highly recommend. Excellent documentation on the implementation details is available on My Oracle Support; see "Oracle Database Machine Monitoring (Doc ID 1110675.1)".
    Rene Kundersma Technical Architect Oracle Technology Services

    Read the article

  • Talks Submitted for Ann Arbor Day of .NET 2010

    - by PSteele
    Just submitted my session abstracts for Ann Arbor's Day of .NET 2010.
    Getting up to speed with .NET 3.5 -- Just in time for 4.0! Yes, C# 4.0 is just around the corner. But if you haven't had the chance to use .NET 3.5 extensively, this session will start from the ground up with the new language features in 3.5. We'll assume everyone is coming from C# 2.0. This session will show you the details of extension methods, partial methods and more. We'll also show you how LINQ -- Language Integrated Query -- can help decrease your development time and increase your code's readability. If time permits, we'll look at some .NET 4.0 features, but the goal is to get you up to speed on .NET 3.5.
    Go Ahead and Mock Me! When testing specific parts of your application, there can be a lot of external dependencies required to make your tests work. Writing fake or mock objects that act as stand-ins for the real dependencies can waste a lot of time. This is where mocking frameworks come in. In this session, Patrick Steele will introduce you to Rhino Mocks, a popular mocking framework for .NET. You'll see how a mocking framework makes writing unit tests easier and leads to less brittle unit tests.
    Inversion of Control: Who's got control and why is it being inverted? No doubt you've heard of "Inversion of Control". If not, maybe you've heard the term "Dependency Injection"? The two usually go hand-in-hand. Inversion of Control (IoC) along with Dependency Injection (DI) helps simplify the connections and lifetimes of the dependent objects in the software you write. In this session, Patrick Steele will introduce you to the concepts of IoC and DI and show you how to use a popular IoC container (Castle Windsor) to simplify the way you build software and how your objects interact with each other.
    If you're interested in speaking, hurry up and get your submissions in! The deadline is Monday, April 5th! Technorati Tags: .NET,Ann Arbor,Day of .NET

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace bootloader (bootx64.efi, bootmgfw.efi). Once replaced can modify kernel. Trivial to replace bootloader. Today many legacy bootkits—UEFI replaces them most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones0, but you can't do that for PCs—PCs use writable memory with signatures DXE core verifies the UEFI boat loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
That means: allow only UEFI firmware signed updates; protect UEFI firmware from direct modification in flash memory; protect FW update components; program the SPI controller securely; protect secure boot policy settings in NVRAM; protect the runtime API; and disable the compatibility support module, which allows unsigned legacy boot.

An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with secure boot turned off. A TPM can also be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI firmware in ROM; protect the EFI variable store in ROM.

Breaching SSL, One Byte at a Time, by Angelo Prado and Yoel Gluck, Salesforce.com. CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext which compresses the most, and so finds the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE.

BREACH, breachattrack.com: BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Compression can be used to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at a time instead of 1 character), which is expensive; rollback to the last known conflict; checking the compression ratio; and brute-forcing the first 3 "bootstrap" characters, if needed (expensive). Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size. Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change secrets over time, and separate secrets to input-less servlets. Future work: a better understanding of DEFLATE/GZIP, and HTTPS extensions.

Running at 99%: Surviving an Application DoS, by Ryan Huber, Risk I/O. Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or download large files.
Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx. How to identify malicious hosts: short, sudden web requests; an obvious user-agent (curl, python); the same URL requested repeatedly; no web page referer (not normal); hidden links (hide a link and see if a bot gets it); restricted access if not your geo IP (unless the website is global); missing common headers in the request; regular timing; an IP first seen at the beginning of the attack; the count of requests per host (usually a very large number). Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer: Bouncer is software written by Ryan using netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it, with no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers for clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure.

Security Response in the Age of Mass Customized Attacks, by Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/. Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer the edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans, by Richard Wartell. The attack vector addressed here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming attacks: RoP attacks use your own code and write return addresses on the stack to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code; the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program, by Collin Greene, Facebook. Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow you to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs, by Anna Shubina, Dartmouth Institute for Security, Technology, and Society. (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards, by FuzzyNop, Mandiant. This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels. There are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time-stamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also, malware typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis, by John Ashaman, Security Innovation. Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details, by Eric (XlogicX) Davisson. Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so you can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo and university of the sender. The point is, if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security.

Adventures with weird machines thirty years after "Reflections on Trusting Trust", by Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present). "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author, can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not? Well: input is not well-defined/recognized (code's assumptions about "checked" input will be violated (a bug/vulnerability)); for example, HTML is recursive, but regex checking is not recursive. Input may be well-formed but so complex there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of the program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age. ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing Complete Machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == a Turing Machine. X86 MMU (IDT, GDT, TSS): address translation was used to create a Turing Machine. The page handler reads and writes memory (on page fault) and uses a page table, which can be used as Turing Machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen.

Next Sergey talked about "Parser Differentials". Having one input format, but two parsers, will create confusion and an opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS tool chain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by far not all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon! (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent". Further information: see langsec.org and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
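To make the compression-oracle idea from the "Breaching SSL, One Byte at a Time" summary above more concrete, here is a minimal C# sketch of my own (not the speakers' code; the secret, the "echo" parameter and the candidate characters are all made up) that simply measures how well each candidate guess compresses. A correct guess extends a string that already appears in the body, so it tends to compress no larger, and often smaller, than the wrong guesses.

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    class CompressionOracleSketch
    {
        // Length in bytes of the gzip-compressed form of a candidate response body.
        static int CompressedLength(string text)
        {
            byte[] bytes = Encoding.ASCII.GetBytes(text);
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    gzip.Write(bytes, 0, bytes.Length);
                }
                return output.ToArray().Length;
            }
        }

        static void Main()
        {
            // Toy "response body": a secret plus attacker-controlled reflected input.
            const string secret = "csrftoken=Supersecret;";
            const string known  = "csrftoken=Supersecre";   // bootstrap characters already recovered

            foreach (char guess in "tXYZ")
            {
                string body = secret + " echo=" + known + guess;
                // The correct guess ('t') repeats an existing substring, so it usually
                // compresses slightly smaller; with strings this short the signal can
                // vanish entirely, which is the "too many winners" roadblock above.
                Console.WriteLine("{0} -> {1} bytes", guess, CompressedLength(body));
            }
        }
    }

Running something like this a few times makes it clear why the talk's lookahead and "bootstrap character" tricks are needed against real pages.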

    Read the article

  • The Importance of a Security Assessment - by Michael Terra, Oracle

    - by Darin Pendergraft
Today's blog was written by Michael Terra, who was the Subject Matter Expert for the recently announced Oracle Online Security Assessment. You can take the Online Assessment here: Take the Online Assessment

Over the past decade, IT Security has become a recognized and respected business discipline. Several factors have contributed to IT Security becoming a core business and organizational enabler including, but not limited to, increased external threats and increased regulatory pressure. Security is also viewed as a key enabler for strategic corporate activities such as mergers and acquisitions. Now, the challenge for senior security professionals is to develop an ongoing dialogue within their organizations about the importance of information security and how it can impact their organization's strategic objectives/mission. Conducting regular "Security Assessments" across the IT and physical infrastructure has become increasingly important. Security standards and frameworks, such as the international standard ISO 27001, are increasingly being adopted by organizations and their business partners as proof of their security posture, and "Security Assessments" are a great way to ensure continued alignment to these frameworks.

Oracle offers a number of different security assessments covering a broad range of technologies. Some of these are short engagements conducted for free with our strategic customers and partners. Others are longer-term paid engagements delivered by Oracle Consulting Services or one of our partners. The goal of a security assessment (also known as a security audit or security review) is to ensure that necessary security controls are integrated into the design and implementation of a project, application or technology. A properly completed security assessment should provide documentation outlining any security gaps that exist in an infrastructure and the associated risks for those gaps. With that knowledge, an organization can choose to either mitigate, transfer, avoid or accept the risk.

One example of an Oracle offering is a Security Readiness Assessment. The Oracle Security Readiness Assessment is a practical security architecture review focused on aligning an organization's enterprise security architecture to their business principles and strategic objectives. The service will establish a multi-phase security architecture roadmap focused on supporting new and existing business initiatives.

Offering Overview. The Security Readiness Assessment will: define an organization's current security posture and provide a roadmap to a desired future state architecture by mapping security solutions to business goals; incorporate commonly accepted security architecture concepts to streamline an organization's security vision from strategy to implementation; and define the people, process and technology implications of the desired future state architecture. The objective is to deliver cohesive, best practice security architectures spanning multiple domains that are unique and specific to the context of your organization.

Offering Details. The Oracle Security Readiness Assessment is a multi-stage process with a dedicated Oracle Security team supporting your organization.
During the course of this free engagement, the team will focus on the following: Review your current business operating model and supporting IT security structures and processes Partner with your organization to establish a future state security architecture leveraging Oracle’s reference architectures, capability maps, and best practices Provide guidance and recommendations on governance practices for the rollout and adoption of your future state security architecture Create an initial business case for the adoption of the future state security architecture If you are interested in finding out more, ask your Sales Consultant or Account Manager for details.

    Read the article

  • Integrating Oracle Forms Applications 11g Into SOA (4-6/Mai/10)

    - by Claudia Costa
Workshop Description: This free 3-day workshop is targeted at Oracle Forms professionals interested in integrating Oracle Forms into a Service Oriented Architecture. The workshop highlights how Forms can be part of a Service Oriented Architecture, and how the Oracle Forms functionalities make it possible to integrate existing (or new) Forms applications with new or existing development utilizing Service Oriented Architecture concepts. The goal is to understand the incremental approach that Forms provides to developers who need to extend their business platform to JEE, allowing Oracle Forms customers to retain their investment in Oracle Forms while leveraging the opportunities offered by complementing technologies. During the event the attendees will implement the Oracle Forms functionalities that make it possible to integrate with SOA. Register Now!

Prerequisites
· Knowledge of the Oracle Forms development environment (mandatory)
· Basic knowledge of the Oracle database
· Basic knowledge of the Java Programming Language
· Basic knowledge of Oracle JDeveloper or another Java IDE

System Requirements: This workshop requires attendees to provide their own laptops. Attendee laptops must meet the following minimum hardware/software requirements:
· Laptop/PC with minimum 4 GB RAM
· Oracle Database
· Oracle Forms 11g R1 PS1 (WebLogic Server 10.1.3.2 + Portal, Forms, Reports and Discoverer)
· Oracle JDeveloper 11g R1 PS1 http://download.oracle.com/otn/java/jdeveloper/1112/jdevstudio11112install.exe
· TCP-IP Loopback Adapter Installation (before the SOASuite installation)
· Oracle SOASuite 11g R1 PS1 (without BAM component). When asked for an admin password, please use 'welcome1'. http://download.oracle.com/otn/nt/middleware/11g/ofm_rcu_win_11.1.1.2.0_disk1_1of1.zip http://download.oracle.com/otn/nt/middleware/11g/ofm_soa_generic_11.1.1.2.0_disk1_1of1.zip
· Oracle BI Publisher 10.1.3.4.1 http://download.oracle.com/otn/nt/ias/101341/bipublisher_windows_x86_101341.zip
· Oracle BI Publisher Desktop 10.1.3.4 http://download.oracle.com/otn/nt/ias/101341/bipublisher_desktop_windows_x86_101341.zip
· At least 1 Oracle Forms solution already upgraded to the Oracle FMW 11g platform.

Time and Location: 4-6 May / 9:30-18:00, Oracle, Porto Salvo. Register Here. For more information please contact: [email protected]

    Read the article

  • Print SSRS Report / PDF automatically from SQL Server agent or Windows Service

    - by Jeremy Ramos
Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/10/22/print-ssrs-report--pdf-from-sql-server-agent-or.aspx

I have turned the Web upside-down to find a solution to this with the fewest components and the least maintenance possible to achieve automated printing of an SSRS report. The reason is that we do not have a full software development team to maintain an app, and we have to minimize the support overhead for the support team.

Here is my setup: SQL Server 2008 R2 on Windows Server 2008 R2; PDF-format reports generated by SSRS report subscriptions to a Windows file share; a network printer; coloured reports with logo and branding.

I found and tested the following solutions, to no avail:
1. Calling the Adobe Acrobat Reader exe: "C:\Program Files (x86)\Adobe\Reader 11.0\Reader\acroRd32.exe" /n /s /o /h /t "C:\temp\print.pdf" \\printserver\printername. Pros: very simple option. Cons: Adobe Acrobat Reader has to launch its GUI to send a job to a printer, so this option cannot be used when printing from a service.
2. Calling the Adobe Acrobat Reader exe as a process from a .NET console app. Pros: a bit harder than above, but still a simple solution. Cons: same as above.
3. PowerShell script (Start-Process -FilePath "C:\temp\print.pdf" -Verb Print). Pros: very simple option. Cons: uses the default PDF client in quiet mode to print, but also requires an active session.
4. Foxit Reader. Pros: very simple option. Cons: requires a GUI, same as Adobe Acrobat Reader.
5. Using the Reporting Services Web service to run and stream the report to an image object which is then passed to the printer. Quite complex; this is what we're trying to avoid.

After pulling my hair out for two days, testing and evaluating the above solutions, I ended up learning more about printers (more than ever in my entire life) and how printer drivers work with PostScript. I then bumped into a PostScript interpreter called GhostScript (http://www.ghostscript.com/) and the solution started to get clearer and clearer. I managed to achieve a solution (maybe not the simplest, but efficient enough to achieve the least-maintenance, least-components goal) in 3 simple steps:
1. Install GhostScript (http://www.ghostscript.com/download/) - this is an open-source PostScript and PDF interpreter. Printing directly using GhostScript only produces grayscale prints with the generic laserjet driver, unless you save as a BMP image and then interpret the colours using the image.
2. Install GSView (http://pages.cs.wisc.edu/~ghost/gsview/) - this is a GhostScript add-on that makes it easier to print directly to a Windows printer. GSPrint automates the PDF -> BMP -> printer driver chain above.
3. Run the GSPrint command from SQL Server Agent or a Windows Service: "C:\Program Files\Ghostgum\gsview\gsprint.exe" -color -landscape -all -printer "printername" "C:\temp\print.pdf". Command line options are here: http://pages.cs.wisc.edu/~ghost/gsview/gsprint.htm

Another lesson learned: since you are calling the script from the service account, it will not necessarily have the printer mapped in its Windows profile (if it even has one). The workaround is to add a local printer as you normally would and then map this printer to the network printer. Note that you may need to install the printer driver locally on the server.

So, that's it! There are many ways to achieve a solution. The key thing is how you provide the smartest solution!
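If you prefer to wrap step 3 in a small .NET console app (which a SQL Server Agent CmdExec job step or a Windows Service can then call), a sketch along these lines would do it. Note that the gsprint.exe path and the printer name are taken from the example above and will differ on your install; treat this as an illustration rather than the exact code used here:

    using System.Diagnostics;

    class PrintPdfJob
    {
        static void Main(string[] args)
        {
            // Assumed locations: adjust to your GSView install, PDF path and printer.
            const string gsprintExe = @"C:\Program Files\Ghostgum\gsview\gsprint.exe";
            const string printer = @"\\printserver\printername";
            string pdfPath = args.Length > 0 ? args[0] : @"C:\temp\print.pdf";

            var psi = new ProcessStartInfo
            {
                FileName = gsprintExe,
                Arguments = string.Format(
                    "-color -landscape -all -printer \"{0}\" \"{1}\"", printer, pdfPath),
                UseShellExecute = false,   // no GUI or interactive session needed
                CreateNoWindow = true
            };

            using (var process = Process.Start(psi))
            {
                process.WaitForExit();     // return once gsprint has handed the job to the spooler
            }
        }
    }

A SQL Server Agent job step of type CmdExec (or a scheduled Windows Service timer) can then simply invoke this exe with the PDF path as its argument.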

    Read the article

  • Measuring Social Media Efforts

    - by David Dorf
So you're on the bandwagon and you've created a Facebook page, you're tweeting every day, and maybe you've even got a YouTube channel. Now what? After you put any program in place, you need to measure, set new goals, then execute, and this is no different. But how does one measure social media efforts? First, I guess we need some goals. Typical ones might be to acquire customers, engage them, then convert them. So that translates to: increase Facebook fans and Twitter followers; increase comments/postings and retweets; increase redemption of offers via Facebook and Twitter. Counting fans and followers is easy, and tracking the redemption of coupons isn't that hard either, but measuring engagement is a tough one. How do you know whether your fans are reading your posts, and whether your posts have any meaning to them? For Facebook, the fan page administrator has access to analytics called Facebook Insights. There you can check weekly metrics such as total fans, new fans, lost fans, demographics of fans, number of postings, number of clicks, etc. Not nearly as comprehensive as Google Analytics, but well on its way. For Twitter, getting information is a little tougher. Again, it's easy to track followers, and you can use tools like TweetMeme to encourage and track retweets. An interesting website called WeFollow tries to measure influence for certain topics. For example, the top three influencers for the topic "retail" are retailweek, retailwire, and retailerdaily. Other notables are #10 BestBuy, #11 GapOfficial, #12 JeffPR, and #17 OracleRetail. I assume influence is calculated based on number of followers, number of retweets, frequency of tweets, and perhaps depth of dialogs. If you want to get serious about monitoring and measuring social marketing efforts, you'd be wise to invest in a strong tool. Several are listed on this wiki, including big ones like Radian6, Nielsen, Omniture, and Buzzient. Buzzient might be particularly interesting because it's integrated with Oracle CRM OnDemand -- see the demo. As always, I'm interested in hearing how others approach goal setting and monitoring of social media efforts, so feel free to post comments.

    Read the article

  • SOA &amp; E2.0 Partner Community Forum XIII registration is open

    - by Jürgen Kress
    INVITATION TO THE ORACLE SOA AND E2.0 PARTNER COMMUNITY FORUM Do you want to learn about how to sell the value of Fusion Middleware by combining SOA and E2.0 Solutions? We would like to invite you to become updated and trained at our SOA and E2.0 Partner Community Forum March on 15th and 16th 2011 in Utrecht, The Netherlands. Keynote: Andrew Sutherland and Andrew Gilboy The Oracle SOA and E2.0 Partner Community Forum is a wonderful opportunity to: learn how to sell the value of Fusion Middleware bij combining SOA and E2.0 solutions meet with Oracle SOA and E2.0 Product management exchange knowledge learn from successful SOA, BPM, WebCenter and UCM implementations understand Oracle's Fusion Applications Strategy network within the Oracle SOA Partner Community and the Oracle E2.0 Partner Community During this highly informative event you can learn about partner success stories, participate in an array of break out sessions, exchange information with other partners and enjoy a vibrant panel discussion. Additionally to the SOA and E2.0 Partner Community Forum, you can participate in technical hands on workshops on March 17th and 18th. The goal of these workshops is to prepare you for customer implementations. Places are limited, so don't delay and register now by clicking here. Registration takes a few minutes and is free of charge, except in case of cancellation or no show (cancellation fee € 150). For more information, please visit our website. Best regards Jürgen Kress & Hans Blaas SOA & E2.0 Partner Adoption EMEA Agenda March 15th 2011 Welcome & Introduction Keynote Oracle Middleware Strategy and information on Application Grid and Exalogic Andrew Sutherland, SVP Middleware Sales EMEA, Oracle Keynote Managing Online Customer, Partner and Employee Engagement with Oracle E2.0 Solutions Andrew Gilboy, VP E2.0 Sales EMEA, Oracle Partner SOA/BPM Reference Case Partner WebCenter/UCM Reference Case SOA Suite PS3 David Shaffer, VP Product Management, Oracle Why Specialization is important for Partners Nick Kritikos, Hans Blaas & Jürgen Kress, Alliances & Channels, Oracle   Agenda March 16th 2011 Welcome & Introduction Day II Breakout round 1 - SOA Suite 11g PS3 & OSB - Importance of ADF & JDeveloper - SOA Security IDM - WebCenter PS3, Whats new - E2.0 Sales Plays Breakout round 2 - WebCenter PS3, Whats new - Application Management Enterprise manager and Amberpoint - ADF/WebCenter 11g integration with BPM Suite 11g - Importance of ADF & JDeveloper - JCAPS & OC4J migration opportunities for service business Breakout round 3 - BPM 11g: Whats new - Universal Content management 11g - SOA Security Management - E2.0 Surrounding Products: ATG, Documaker, Primavera - Middleware Industry Value Propositions & Sales Play Fusion Application SOA & E2.0 Summary & Closing For registration and additional information, please visit our website. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Community,SOA,SOA Partner Community Forum,SOA Community Forum,OPN,Jürgen Kress

    Read the article

  • Informal Interviews: Just Relax (or Should I?)

    - by david.talamelli
    I was in our St Kilda Rd office last week and had the chance to meet up with Dan and David from GradConnection. I love what these guys are doing, their business has been around for two years and I really like how they have taken their own experiences from University found a niche in their market and have chased it. These guys are always networking. Whenever they come to Melbourne they send me a tweet to catch up, even though we often miss each other they are persistent. It sounds like their business is going from strength to strength and I have to think that success comes from their hard work and enthusiasm for their business. Anyway, before my meeting with ProGrad I noticed a tweet from Kevin Wheeler who was saying it was his last day in Melbourne - I sent him a message and we met up that afternoon for a coffee (I am getting to the point I promise). On my way back to the office after my meeting I was on a tram and was sitting beside a lady who was talking to her friend on her mobile. She had just come back from an interview and was telling her friend how laid back the meeting was and how she wasn't too sure of the next steps of the process as it was a really informal meeting. The recurring theme from this phone call was that 1) her and the interviewer got along really well and had a lot in common 2) the meeting was very informal and relaxed. I wasn't at the interview so I cannot say for certain, but in my experience regardless of the type of interview that is happening whether it is a relaxed interview at a coffee shop or a behavioural interview in an office setting one thing is consistent: the employer is assessing your ability to perform the role and fit into the company. Different interviewers I find have different interviewing styles. For example some interviewers may create a very relaxed environment in the thinking this will draw out less practiced answers and give a more realistic view of the person and their abilities while other interviewers may put the candidate "under the pump" to see how they react in a stressful situation. There are as many interviewing styles as there are interviewers. I think candidates regardless of the type of interview need to be professional and honest in both their skills/experiences, abilities and career plans (if you know what they are). Even though an interview may be informal, you shouldn't slip into complacency. You should not forget the end goal of the interview which is to get a job. Business happens outside of the office walls and while you may meet someone for a coffee it is still a business meeting no matter how relaxed the setting. You don't need to be stick in the mud and not let your personality shine through, but that first impression you make may play a big part in how far in the interview process you go. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
    NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait. One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off in their own site. For example, here’s my StackOverflow flair: Great! So how could we add some of this flair stuff to NerdDinner? What do we want to show? If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone visits from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation – localizing site content based on where the browser’s sitting, and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area: Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them: The Geolocation Part Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it: You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well. You use HTML 5 Geolocation API or Google Gears or some other browser based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough of an install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever. There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing. I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple. 
We hit a URL like this: http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false … and we get an XML response back like this… <?xml version="1.0" encoding="UTF-8"?> <Response> <Ip>74.125.45.100</Ip> <Status>OK</Status> <CountryCode>US</CountryCode> <CountryName>United States</CountryName> <RegionCode>06</RegionCode> <RegionName>California</RegionName> <City>Mountain View</City> <ZipPostalCode>94043</ZipPostalCode> <Latitude>37.4192</Latitude> <Longitude>-122.057</Longitude> </Response> So we’ll build some data transfer classes to hold the location information, like this: public class LocationInfo { public string Country { get; set; } public string RegionName { get; set; } public string City { get; set; } public string ZipPostalCode { get; set; } public LatLong Position { get; set; } } public class LatLong { public float Lat { get; set; } public float Long { get; set; } } And now hitting the service is pretty simple: public static LocationInfo HostIpToPlaceName(string ip) { string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false"; url = String.Format(url, ip); var result = XDocument.Load(url); var location = (from x in result.Descendants("Response") select new LocationInfo { City = (string)x.Element("City"), RegionName = (string)x.Element("RegionName"), Country = (string)x.Element("CountryName"), ZipPostalCode = (string)x.Element("CountryName"), Position = new LatLong { Lat = (float)x.Element("Latitude"), Long = (float)x.Element("Longitude") } }).First(); return location; } Getting The User’s IP Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext: HttpContext.Current.Request.UserHostAddress But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, “HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity: string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two. Selecting the View We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s our general flow for our controller action: Get the user’s IP Translate it to a location Grab the top three upcoming dinners that are near that location Select the view based on the format (defaulted to “html”) Return a FlairViewModel which contains the list of dinners and the location information public ActionResult Flair(string format = "html") { string SourceIP = string.IsNullOrEmpty( Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? 
Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; var location = GeolocationService.HostIpToPlaceName(SourceIP); var dinners = dinnerRepository. FindByLocation(location.Position.Lat, location.Position.Long). OrderByDescending(p => p.EventDate).Take(3); // Select the view we'll return. // Using a switch because we'll add in JSON and other formats later. string view; switch (format.ToLower()) { case "javascript": view = "JavascriptFlair"; break; default: view = "Flair"; break; } return View( view, new FlairViewModel { Dinners = dinners.ToList(), LocationName = string.IsNullOrEmpty(location.City) ? "you" : String.Format("{0}, {1}", location.City, location.RegionName) } ); } Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think? The HTML View The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that are would cause the titles to wrap. public static string Truncate(this string s, int maxLength) { if (string.IsNullOrEmpty(s) || maxLength <= 0) return string.Empty; else if (s.Length > maxLength) return s.Substring(0, maxLength) + "..."; else return s; }   So here’s how the HTML view ends up looking: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Nerd Dinner</title> <link href="/Content/Flair.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="nd-wrapper"> <h2 id="nd-header">NerdDinner.com</h2> <div id="nd-outer"> <% if (Model.Dinners.Count == 0) { %> <div id="nd-bummer"> Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div> <% } else { %> <h3> Dinners Near You</h3> <ul> <% foreach (var item in Model.Dinners) { %> <li> <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li> <% } %> </ul> <% } %> <div id="nd-footer"> More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div> </div> </div> </body> </html> You’d include this in a page using an IFRAME, like this: <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME> The Javascript view The Javascript flair is written so you can include it in a webpage with a simple script include, like this: <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script> The goal of this view is very similar to the HTML embed view, with a few exceptions: We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute. 
That means we can’t use Html.ActionLink, since it generates relative routes. We need to escape everything since it’s being written out as strings. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType%> directive. <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>'); document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">'); <% if (Model.Dinners.Count == 0) { %> document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>'); <% } else { %> document.write('<h3> Dinners Near You</h3><ul>'); <% foreach (var item in Model.Dinners) { %> document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>'); <% } %> document.write('</ul>'); <% } %> document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>'); Getting IP’s for Testing There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better. Next steps I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for Geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). 
Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit. So the first step there is to save the geolocalization information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
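As a rough sketch of that cookie idea (not production code; the cookie name and the "lat|long|city|region" layout are just made up for illustration), a small helper could sit in front of GeolocationService.HostIpToPlaceName and reuse the LocationInfo and LatLong classes shown earlier:

    using System;
    using System.Globalization;
    using System.Web;

    public static class GeolocationCookieCache
    {
        const string CookieName = "nd-location";   // hypothetical cookie name

        // Returns the visitor's cached location, or null if there is no usable cookie yet.
        public static LocationInfo Read(HttpRequestBase request)
        {
            HttpCookie cookie = request.Cookies[CookieName];
            if (cookie == null || string.IsNullOrEmpty(cookie.Value))
                return null;

            string[] parts = cookie.Value.Split('|');   // stored as "lat|long|city|region"
            if (parts.Length != 4)
                return null;

            return new LocationInfo
            {
                Position = new LatLong
                {
                    Lat = float.Parse(parts[0], CultureInfo.InvariantCulture),
                    Long = float.Parse(parts[1], CultureInfo.InvariantCulture)
                },
                City = parts[2],
                RegionName = parts[3]
            };
        }

        // Stores the looked-up location in a time-limited cookie (one day here).
        public static void Save(HttpResponseBase response, LocationInfo location)
        {
            string value = string.Join("|", new[]
            {
                location.Position.Lat.ToString(CultureInfo.InvariantCulture),
                location.Position.Long.ToString(CultureInfo.InvariantCulture),
                location.City ?? "",
                location.RegionName ?? ""
            });
            var cookie = new HttpCookie(CookieName, value) { Expires = DateTime.Now.AddDays(1) };
            response.Cookies.Add(cookie);
        }
    }

The Flair action would then try GeolocationCookieCache.Read(Request) first and only fall back to HostIpToPlaceName (saving the result with Save) on a miss, which keeps us comfortably under ipinfodb.com's 2-queries-per-second guidance.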

    Read the article

< Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >