Search Results

Search found 42545 results on 1702 pages for 'confirm alert on browser close event'.


  • Complex SQL query help on aggregating values for nested subquery

    - by François Beausoleil
    Hi! I have people, companies, employees, events and event kinds. I'm making a report/follow-up sheet where people, companies and employees are the rows, and the columns are event kinds. Event kinds are simple values describing: "Promised Donation", "Received Donation", "Phoned", "Followed up" and such. Event kinds are ordered:

        CREATE TABLE event_kinds (id, name, position);

    Events hold the actual reference to the event:

        CREATE TABLE events (id, person_id, company_id, referrer_id, event_kind_id, created_at);

    referrer_id is another reference to people. It is the person who sent the information/tip along, and is an optional field, although I sometimes want to filter on an event kind that has a specific referrer, while I don't for other event kinds. Notice I don't have an employee ID reference. The reference exists, but is implied: I have application code to validate that person_id and company_id really reference an employee record. The other tables are pretty basic:

        CREATE TABLE people    (id, name);
        CREATE TABLE companies (id, name);
        CREATE TABLE employees (id, person_id, company_id);

    I'm trying to achieve the following report:

        Referrer            Phoned         Promised    Donated
        Francois            Feb 16th       Feb 20th    Mar 1st
        Apple (Steve Jobs)  Steve Ballmer              Mar 3rd
        IBM                 Bill Gates                 Mar 7th

    The first row is a people record, the 2nd is an employee, and the 3rd is a company. If I asked for referrer Bill Gates on Phoned event kinds, I'd only see the 3rd row, while asking for Steve and Phoned would return no rows. Right now, I do 3 queries: one for companies, one for people and a last one for employees. I want the event kind columns to be ordered, but I do that in application code and show it properly there. Here's where I'm at so far:

        SELECT companies.id, companies.name,
               (SELECT events.id FROM events
                WHERE events.referrer_id = 1470 AND events.company_id = companies.id
                  AND events.person_id IS NULL AND events.event_kind_id = 9
                ORDER BY created_at DESC LIMIT 1) event_kind_9,
               (SELECT events.id FROM events
                WHERE events.company_id = companies.id
                  AND events.person_id IS NULL AND events.event_kind_id = 10
                ORDER BY created_at DESC LIMIT 1) event_kind_10,
               (SELECT events.created_at FROM events
                WHERE events.referrer_id = 1470 AND events.company_id = companies.id
                  AND events.person_id IS NULL AND events.event_kind_id = 9
                ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "companies"

        SELECT people.id, people.name,
               (SELECT events.id FROM events
                WHERE events.referrer_id = 1470 AND events.company_id IS NULL
                  AND events.person_id = people.id AND events.event_kind_id = 9
                ORDER BY created_at DESC LIMIT 1) event_kind_9,
               (SELECT events.id FROM events
                WHERE events.company_id IS NULL
                  AND events.person_id = people.id AND events.event_kind_id = 10
                ORDER BY created_at DESC LIMIT 1) event_kind_10,
               (SELECT events.created_at FROM events
                WHERE events.referrer_id = 1470 AND events.company_id IS NULL
                  AND events.person_id = people.id AND events.event_kind_id = 9
                ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "people"

        SELECT employees.id, employees.company_id, employees.person_id,
               (SELECT events.id FROM events
                WHERE events.referrer_id = 1470 AND events.company_id = employees.company_id
                  AND events.person_id = employees.person_id AND events.event_kind_id = 9
                ORDER BY created_at DESC LIMIT 1) event_kind_9,
               (SELECT events.id FROM events
                WHERE events.company_id = employees.company_id
                  AND events.person_id = employees.person_id AND events.event_kind_id = 10
                ORDER BY created_at DESC LIMIT 1) event_kind_10,
               (SELECT events.created_at FROM events
                WHERE events.referrer_id = 1470 AND events.company_id = employees.company_id
                  AND events.person_id = employees.person_id AND events.event_kind_id = 9
                ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "employees"

    I rather suspect I'm doing this wrong. There should be an "easier" way to do it. One other filter criterion would be to filter on people/company names: WHERE LOWER(companies.name) LIKE '%apple%'. Note that I'm ordering by the dates of event_kind_9 here, and a secondary sort is by person/company name. To summarize: I want to paginate the result set, find the latest event for each cell, order the result set by the date of the latest event and by company/person name, and filter by referrer in some event kinds, but not others. For reference, I'm using PostgreSQL, from Ruby/ActiveRecord/Rails. The solution is pure SQL though.
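
    A minimal sketch of the kind of single-pass rewrite the question is reaching for, assuming PostgreSQL 9.4+ for the FILTER clause (on older versions, MAX(CASE WHEN ... THEN created_at END) is equivalent); the referrer id 1470 and event-kind ids 9 and 10 are the question's own example values:

        -- One grouped pass over companies instead of one correlated subquery
        -- per column: the latest matching event per kind becomes an aggregate.
        SELECT companies.id,
               companies.name,
               MAX(events.created_at) FILTER (WHERE events.event_kind_id = 9
                                                AND events.referrer_id = 1470) AS event_kind_9_at,
               MAX(events.created_at) FILTER (WHERE events.event_kind_id = 10) AS event_kind_10_at
        FROM companies
        LEFT JOIN events
               ON events.company_id = companies.id
              AND events.person_id IS NULL          -- company rows only
        WHERE LOWER(companies.name) LIKE '%apple%'  -- optional name filter
        GROUP BY companies.id, companies.name
        ORDER BY event_kind_9_at DESC NULLS LAST, companies.name;

    The same shape repeats for people and employees, and pagination then becomes a plain LIMIT/OFFSET over each grouped result.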


  • Getting an error when accessing a web service

    - by nemade-vipin
    Hi friends, I have created a web application in which I am getting this error:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at FlexSBTSApp/displayString()[E:\Users\User1\Documents\Flex Builder 3\FlexSBTSApp\src\FlexSBTSApp.mxml:38]
            at FlexSBTSApp/___FlexSBTSApp_Button1_click()[E:\Users\User1\Documents\Flex Builder 3\FlexSBTSApp\src\FlexSBTSApp.mxml:118]

    My code is:

        import mx.controls.*;

        [Bindable]
        private var childName:ArrayCollection;
        [Bindable]
        private var childId:ArrayCollection;
        private var photoFeed:ArrayCollection;
        private var arrayOfchild:Array;
        [Bindable]
        private var childObj:Child;

        public var user:SBTSWebService;

        public function initApp():void {
            user = new SBTSWebService();
            user.addSBTSWebServiceFaultEventListener(handleFaults);
        }

        public function displayString():void {
            // Instantiate a new Entry object.
            var newEntry:GetSBTSMobileAuthentication = new GetSBTSMobileAuthentication();
            newEntry.mobile = mobileno.text;
            newEntry.password = password.text;
            user.addgetSBTSMobileAuthenticationEventListener(authenticationResult);
            user.getSBTSMobileAuthentication(newEntry);
        }

        public function handleFaults(event:FaultEvent):void {
            Alert.show("A fault occurred contacting the server. Fault message is: " + event.fault.faultString);
        }

        public function authenticationResult(event:GetSBTSMobileAuthenticationResultEvent):void {
            if (event.result != null && event.result._return > 0) {
                if (event.result._return > 0) {
                    var UserId:int = event.result._return;
                    getChildList(UserId);
                    viewstack2.selectedIndex = 1;
                } else {
                    Alert.show("Authentication fail");
                }
            }
        }

        public function getChildList(userId:int):void {
            var childEntry:GetSBTSMobileChildrenInfo = new GetSBTSMobileChildrenInfo();
            childEntry.UserId = userId;
            user.addgetSBTSMobileChildrenInfoEventListener(sbtsChildrenInfoResult);
            user.getSBTSMobileChildrenInfo(childEntry);
        }

        public function sbtsChildrenInfoResult(event:GetSBTSMobileChildrenInfoResultEvent):void {
            if (event.result != null && event.result._return != null) {
                arrayOfchild = event.result._return as Array;
                photoFeed = new ArrayCollection(arrayOfchild);
                childName = new ArrayCollection();
                for (var count:int = 0; count < photoFeed.length; count++) {
                    childObj = photoFeed.getItemAt(count, 0) as Child;
                    childName.addItem(childObj.strName);
                }
            }
        }
        ]]>

        <mx:Panel width="500" height="300" headerColors="[#000000,#FFFFFF]">
            <mx:TabNavigator id="viewstack2" selectedIndex="0" creationPolicy="all" width="100%" height="100%">
                <mx:Form label="Login Form">
                    <mx:FormItem label="Mobile NO:">
                        <mx:TextInput id="mobileno" />
                    </mx:FormItem>
                    <mx:FormItem label="Password:">
                        <mx:TextInput displayAsPassword="true" id="password" />
                    </mx:FormItem>
                    <mx:FormItem>
                        <mx:Button label="Login" click="displayString()"/>
                    </mx:FormItem>
                </mx:Form>
                <mx:Form label="Child List">
                    <mx:Label width="100%" color="blue" text="Select Child."/>
                    <mx:RadioButtonGroup id="radioGroup"/>
                    <mx:Repeater id="fieldRepeater" dataProvider="{childName}">
                        <mx:RadioButton groupName="radioGroup" label="{fieldRepeater.currentItem}" value="{fieldRepeater.currentItem}"/>
                    </mx:Repeater>
                </mx:Form>
                <mx:Form label="Child Information">
                </mx:Form>
                <mx:Form label="Trace Path">
                </mx:Form>
            </mx:TabNavigator>
        </mx:Panel>
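
    A hedged reading of the stack trace, not a confirmed diagnosis: Error #1009 at line 38 inside displayString() usually means `user` is still null when the Login button is clicked, i.e. initApp() was never invoked. A minimal sketch of the usual fix, assuming the standard mx:Application root tag of a Flex 3 project:

        <!-- Run initApp() once the application finishes building, so
             `user` exists before any button handler can fire. -->
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                        creationComplete="initApp()">

    A defensive guard at the top of displayString(), such as if (user == null) initApp();, would also prevent the crash, though wiring creationComplete is the idiomatic fix.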


  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, including some fantastic enhancements to ASP.NET and the Entity Framework. You can download and start using them now. Below are details on a few of the great ASP.NET, web development, and Entity Framework improvements you can take advantage of with this release. Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials.

    One ASP.NET

    With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc.), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File->New Project with VS 2013 you'll now see a single ASP.NET Project option. Selecting this project type brings up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it. For example, you could start with a Web Forms template and add MVC or Web API support to it, or create an MVC project and also enable Web Forms pages within it. This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span.

    Richer Authentication Support

    The new "One ASP.NET" project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications – and makes it much easier to build secure applications that enable SSO from a variety of identity providers. For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application:

        - No Authentication
        - Individual User Accounts (single sign-on support with Facebook, Twitter, Google, and Microsoft ID – or forms auth with ASP.NET Membership)
        - Organizational Accounts (single sign-on support with Windows Azure Active Directory)
        - Windows Authentication (Active Directory in an intranet application)

    The Windows Azure Active Directory support is particularly cool. Last month we updated Windows Azure Active Directory so that developers can now easily create any number of directories using it (for free and deployed within seconds). It now takes only a few moments to enable single sign-on support within your ASP.NET applications against these Windows Azure Active Directories: simply choose the "Organizational Accounts" radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory. This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it. Now when you run the app your users can easily and securely sign in using their Active Directory credentials within it – regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013.

    Responsive Project Templates with Bootstrap

    The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops.
    For example, in a desktop browser window the home page created by the MVC template renders a full-width layout. When you resize the browser to a narrow window to see how it would look on a phone, the contents gracefully wrap around and the horizontal top menu turns into an icon; clicking that menu icon expands it into a vertical menu – which enables a good navigation experience on devices with little screen real estate. We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices – and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there. You can learn more about Bootstrap here.

    Visual Studio Web Tooling Improvements

    Visual Studio 2013 includes a new, much richer HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense grouping, and other great improvements. For example, typing "ng-" on an HTML element will show the IntelliSense for AngularJS. This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications. The HTML editor can also now inspect your page at design time to determine all of the CSS classes that are available; the auto-completion list can contain classes from Bootstrap's CSS file, so there is no more guessing at which Bootstrap element names you need to use. Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing. The LESS editor comes with all the cool features from the CSS editor and has specific IntelliSense for variables and mixins across all the LESS documents in the @import chain.

    Browser Link – a SignalR channel between browser and Visual Studio

    The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox and Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time. This makes it much easier to develop and test against multiple browsers in parallel. Browser Link also exposes an API that enables developers to write Browser Link extensions. By taking advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross the boundary between Visual Studio and any browser that's connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser's developer tools, remote-control mobile emulators, and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward.

    ASP.NET Scaffolding

    ASP.NET Scaffolding is a new code generation framework for ASP.NET web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms.
    When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API controller, the required NuGet packages and references to enable Web API are added to your project automatically. To do this, just choose the Add -> New Scaffold Item context menu (shown in the original post). Support for scaffolding async controllers uses the new async features from Entity Framework 6.

    ASP.NET Identity

    ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application: you can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports claims-based authentication, where the user's identity is represented as a set of claims from a trusted issuer. Users can log in by creating an account on the website using a username and password, or they can log in using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.

    ASP.NET Web API 2

    ASP.NET Web API 2 has a bunch of great improvements, including:

    Attribute routing
    ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers directly; the post illustrated this with a screenshot, and a representative sketch appears at the end of this Web API section.

    OAuth 2.0 support
    The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients, including browsers and mobile devices.

    OData improvements
    ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Specific enhancements in ASP.NET Web API 2 OData include:

        - Support for $select, $expand, $batch, and $value
        - Improved extensibility
        - Type-less support
        - Reuse of an existing model

    OWIN integration
    ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API.

    More Web API improvements
    In addition to the features above there have been a host of other additions to ASP.NET Web API, including CORS support, authentication filters, filter overrides, improved unit testability, and a portable ASP.NET Web API client. To learn more go to http://www.asp.net/web-api/
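
    A minimal sketch of the attribute routing described above, assuming standard Web API 2 conventions; the controller, route template, and data here are illustrative, not from the post:

        using System.Web.Http;

        // Routes are declared on the controller and actions themselves
        // instead of in a central route table.
        [RoutePrefix("api/books")]
        public class BooksController : ApiController
        {
            // GET api/books/5/authors
            [Route("{bookId:int}/authors")]
            public IHttpActionResult GetAuthorsForBook(int bookId)
            {
                return Ok(new[] { "illustrative author" });
            }
        }

    Attribute routes still have to be enabled once at startup via config.MapHttpAttributeRoutes().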
    ASP.NET SignalR 2

    ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. We've added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR has also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We've also added support for the portable .NET client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr.

    ASP.NET MVC 5

    The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features, including:

        - Authentication filters: these filters allow you to specify authentication logic per action, per controller, or globally for all controllers.
        - Attribute routing: attribute routing allows you to define your routes on actions or controllers.

    To learn more go to http://www.asp.net/mvc

    Entity Framework 6 Improvements

    Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data access space:

    Async and Task<T> support
    EF6's new async query and save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios. This allows you to free up threads that might otherwise be blocked on data access requests, and enables them to be used to process other requests while you wait for the database engine to process operations. When the database server responds, the thread will be re-queued within your ASP.NET application and execution will continue. This enables you to easily write significantly more scalable server code. The post's example of an ASP.NET Web API action that makes use of the new EF6 async query methods was a screenshot; a representative sketch follows:
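
    (The context and entity names below are hypothetical, purely for illustration.)

        using System.Data.Entity;       // ToListAsync() lives here in EF6
        using System.Threading.Tasks;
        using System.Web.Http;

        public class Product { public int Id { get; set; } public string Name { get; set; } }
        public class StoreContext : DbContext { public DbSet<Product> Products { get; set; } }

        public class ProductsController : ApiController
        {
            private readonly StoreContext db = new StoreContext();

            // The request thread is released while the database works;
            // execution resumes here when the query completes.
            public async Task<IHttpActionResult> GetProducts()
            {
                var products = await db.Products.ToListAsync();
                return Ok(products);
            }
        }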
    Interception and logging
    Interception and SQL logging allow you to view – or even change – every command that is sent to the database by Entity Framework. This includes a simple, human-readable log – which is great for debugging – as well as some lower-level building blocks that give you access to the command and results. The post showed an example of wiring the simple log up to Debug output in the constructor of an MVC controller (as a screenshot).

    Custom Code First conventions
    The new custom Code First conventions enable bulk configuration of a Code First model – reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don't match the Code First conventions. For example, a convention can configure all properties that are called 'Key' to be the primary key of the entity they belong to – different from the default Code First convention, which expects Id or <type name>Id (the post showed this convention as a screenshot).

    Connection Resiliency
    The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle – and potentially retry – failed database operations. This is especially useful when deploying to cloud environments, where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible – but overridable – defaults for the number of retries and the time between retries when errors occur. Registering it is simple using the new code-based configuration support (shown as a screenshot in the original post). These are just some of the new features in EF6; you can visit the release notes section of the Entity Framework site for a complete list.

    Microsoft OWIN Components

    The Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET "Katana" project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's New in OWIN and Katana.

    Summary

    Today's Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from the server framework to data access to tooling to client-side HTML development. They also integrate some great open-source technology and contributions from our developer community. Download and start using them today!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Macbook Pro Wireless Reconnecting

    - by A Student at a University
    I'm using a WPA2 EAP network. I'm sitting next to the access point. The connection keeps dropping and taking ~10 seconds to reconnect. My other devices are staying online. What's causing it? syslog:

        01:21:10 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67
        01:21:10 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX
        01:21:10 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed reboot -> renew
        01:21:10 NetworkManager[XX40]: <info> address XXX.XXX.XXX.XXX
        01:21:10 NetworkManager[XX40]: <info> prefix 20 (XXX.XXX.XXX.XXX)
        01:21:10 NetworkManager[XX40]: <info> gateway XXX.XXX.XXX.XXX
        01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:21:10 NetworkManager[XX40]: <info> domain name 'server.domain.tld'
        01:21:10 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds.
        01:33:30 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67
        01:33:30 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX
        01:33:30 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds.
        01:35:13 wpa_supplicant[XX60]: CTRL-EVENT-EAP-STARTED EAP authentication started
        01:35:13 wpa_supplicant[XX60]: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
        01:35:14 wpa_supplicant[XX60]: EAP-MSCHAPV2: Authentication succeeded
        01:35:14 wpa_supplicant[XX60]: EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
        01:35:14 wpa_supplicant[XX60]: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
        01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> 4-way handshake
        01:35:14 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP]
        01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake
        01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed
        01:35:17 wpa_supplicant[XX60]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys
        01:35:17 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> disconnected
        01:35:17 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning
        01:35:26 wpa_supplicant[XX60]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys
        01:35:26 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> disconnected
        01:35:29 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 8 -> 3 (reason 11)
        01:35:32 NetworkManager[XX40]: <info> (eth1): deactivating device (reason: 11).
        01:35:32 NetworkManager[XX40]: <info> (eth1): canceled DHCP transaction, DHCP client pid XX27
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) starting connection 'Auto XXXXXXXXXX'
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 3 -> 4 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) scheduled...
        01:35:32 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> disconnected
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) started...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) scheduled...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) complete.
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) starting...
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 4 -> 5 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1/wireless): access point 'Auto XXXXXXXXXX' has security, but secrets are required.
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 5 -> 6 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) complete.
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) scheduled...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) started...
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 6 -> 4 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) scheduled...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) complete.
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) starting...
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 4 -> 5 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1/wireless): connection 'Auto XXXXXXXXXX' has security, and secrets exist. No new secrets needed.
        01:35:32 NetworkManager[XX40]: <info> Config: added 'ssid' value 'XXXXXXXXXX'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'scan_ssid' value '1'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'key_mgmt' value 'WPA-EAP'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'password' value '<omitted>'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'eap' value 'PEAP'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'fragment_size' value 'XXX0'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'phase2' value 'auth=MSCHAPV2'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'ca_cert' value '/etc/ssl/certs/Equifax_Secure_CA.pem'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'identity' value 'XXXXXXX'
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) complete.
        01:35:32 NetworkManager[XX40]: <info> Config: set interface ap_scan to 1
        01:35:32 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning
        01:35:36 wpa_supplicant[XX60]: Associated with XX:XX:XX:XX:XX:XX
        01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> associated
        01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-STARTED EAP authentication started
        01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
        01:35:36 wpa_supplicant[XX60]: EAP-MSCHAPV2: Authentication succeeded
        01:35:36 wpa_supplicant[XX60]: EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
        01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
        01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associated -> 4-way handshake
        01:35:36 wpa_supplicant[XX60]: WPA: Could not find AP from the scan results
        01:35:36 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP]
        01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-CONNECTED - Connection to XX:XX:XX:XX:XX:XX completed (reauth) [id=0 id_str=]
        01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake
        01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1/wireless) Stage 2 of 5 (Device Configure) successful. Connected to wireless network 'XXXXXXXXXX'.
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) scheduled.
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) started...
        01:35:36 NetworkManager[XX40]: <info> (eth1): device state change: 5 -> 7 (reason 0)
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Beginning DHCPv4 transaction (timeout in 45 seconds)
        01:35:36 NetworkManager[XX40]: <info> dhclient started with pid XX87
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) complete.
        01:35:36 dhclient: Internet Systems Consortium DHCP Client VXXX.XXX.XXX
        01:35:36 dhclient: Copyright 2004-2009 Internet Systems Consortium.
        01:35:36 dhclient: All rights reserved.
        01:35:36 dhclient: For info, please visit https://www.isc.org/software/dhcp/
        01:35:36 dhclient:
        01:35:36 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed nbi -> preinit
        01:35:36 dhclient: Listening on LPF/eth1/XX:XX:XX:XX:XX:XX
        01:35:36 dhclient: Sending on LPF/eth1/XX:XX:XX:XX:XX:XX
        01:35:36 dhclient: Sending on Socket/fallback
        01:35:36 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67
        01:35:36 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX
        01:35:36 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds.
        01:35:36 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed preinit -> reboot
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) scheduled...
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) started...
        01:35:36 NetworkManager[XX40]: <info> address XXX.XXX.XXX.XXX
        01:35:36 NetworkManager[XX40]: <info> prefix 20 (XXX.XXX.XXX.XXX)
        01:35:36 NetworkManager[XX40]: <info> gateway XXX.XXX.XXX.XXX
        01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:35:36 NetworkManager[XX40]: <info> domain name 'server.domain.tld'
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) scheduled...
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) complete.
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) started...
        01:35:37 NetworkManager[XX40]: <info> (eth1): device state change: 7 -> 8 (reason 0)
        01:35:37 NetworkManager[XX40]: <info> (eth1): roamed from BSSID XX:XX:XX:XX:XX:XX (XXXXXXXXXX) to XX:XX:XX:XX:XX:XX (XXXXXXXXX)
        01:35:37 NetworkManager[XX40]: <info> Policy set 'Auto XXXXXXXXXX' (eth1) as default for IPv4 routing and DNS.
        01:35:37 NetworkManager[XX40]: <info> Activation (eth1) successful, device activated.
        01:35:37 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) complete.
        01:35:43 wpa_supplicant[XX60]: Trying to associate with XX:XX:XX:XX:XX:XX (SSID='XXXXXXXXXX' freq=2412 MHz)
        01:35:43 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> associating
        01:35:43 wpa_supplicant[XX60]: Association request to the driver failed
        01:35:46 wpa_supplicant[XX60]: Associated with XX:XX:XX:XX:XX:XX
        01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associating -> associated
        01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associated -> 4-way handshake
        01:35:46 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP]
        01:35:46 wpa_supplicant[XX60]: CTRL-EVENT-CONNECTED - Connection to XX:XX:XX:XX:XX:XX completed (reauth) [id=0 id_str=]
        01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake
        01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed
        01:40:47 wpa_supplicant[XX60]: WPA: Group rekeying completed with XX:XX:XX:XX:XX:XX [GTK=TKIP]
        01:40:47 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> group handshake
        01:40:47 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed
        01:50:19 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67
        01:50:19 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX


  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent: when you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the 4KB cookie storage limitation, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now.

    Why use HTML5 Local Storage?

    I use HTML5 local storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference. The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (you can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries) and stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application: interacting with data on the client is almost always faster than interacting with the same data on the server.

    Retrieving Data from the Server

    In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps:

        1. Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities.
        2. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities.
        3. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Web;
        using JavaScriptReference.Models;

        namespace JavaScriptReference.Services {

            [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
            public class EntryService : DataService<ReferenceDBEntities> {

                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config) {
                    config.UseVerboseErrors = true;
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }

                // Define a change interceptor for the Entries entity set.
                [ChangeInterceptor("Entries")]
                public void OnChangeEntries(Entry entry, UpdateOperations operations) {
                    if (!HttpContext.Current.Request.IsAuthenticated) {
                        throw new DataServiceException("Cannot update reference unless authenticated.");
                    }
                }
            }
        }

    The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceDBEntities>; because of that, the data service exposes the contents of the ReferenceDB database. In the code above, I defined a ChangeInterceptor to prevent unauthenticated users from making changes to the database: anyone can retrieve data through the service, but only authenticated users are allowed to make changes.

    After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service:

        $.ajax({
            dataType: "json",
            url: "/Services/EntryService.svc/Entries",
            success: function (result) {
                var data = callback(result["d"]);
            }
        });

    Notice that you must unwrap the data using result["d"]. After you unwrap the data, you have a JavaScript array of the entries. I'm transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment:

        { "d" : [
          {
            "__metadata": {
              "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)",
              "type": "ReferenceDBModel.Entry"
            },
            "Id": 1,
            "Name": "Global",
            "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5",
            "Syntax": "object",
            "ShortDescription": "Contains global variables and functions",
            "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n",
            "LastUpdated": "634298578273756641",
            "IsDeleted": false,
            "OwnerId": null
          },
          {
            "__metadata": {
              "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)",
              "type": "ReferenceDBModel.Entry"
            },
            "Id": 2,
            "Name": "eval(string)",
            "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5",
            "Syntax": "function",
            "ShortDescription": "Evaluates and executes JavaScript code dynamically",
            "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>",
            "LastUpdated": "634298580913817644",
            "IsDeleted": false,
            "OwnerId": 1
          }
          …
        ]}

    I worried about the amount of time that it would take to transfer the records.
    According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. Five seconds is a small price to pay to avoid performing any server fetches of the data in the future. (The original post included estimated times for different connection types, measured with Fiddler.) Using a modem, it takes 33 seconds to download the database, which is a significant chunk of time. So I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem.

    Adding Data to HTML5 Local Storage

    After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here's the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx. You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage:

        <script type="text/javascript">
            window.localStorage.setItem("message", "Hello World!");
        </script>

    You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT-I in Chrome) to view items added to local storage. After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method:

        <script type="text/javascript">
            var message = window.localStorage.getItem("message");
        </script>

    You can only add strings to local storage, not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this:

        function Storage() {
            this.get = function (name) {
                return JSON.parse(window.localStorage.getItem(name));
            };
            this.set = function (name, value) {
                window.localStorage.setItem(name, JSON.stringify(value));
            };
            this.clear = function () {
                window.localStorage.clear();
            };
        }

    If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this:

        var store = new Storage();

        // Add array to storage
        var products = [
            { name: "Fish", price: 2.33 },
            { name: "Bacon", price: 1.33 }
        ];
        store.set("products", products);

        // Retrieve items from storage
        var products = store.get("products");

    Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from https://github.com/douglascrockford/JSON-js. The JSON2 library will use the native JSON object if a browser already supports JSON.

    Merging Server Changes with Browser Local Storage

    When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser, and two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server, and only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser, as sketched below.
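
    A minimal sketch of that delta fetch, assuming the Storage wrapper shown above and jQuery; EntriesHelper comes from the post, but the exact call shape and the newestTimestamp() helper are illustrative assumptions (the raw OData query this issues is shown next):

        var store = new Storage();
        // 0 forces a full fetch on the very first visit.
        var lastUpdated = store.get("entriesLastUpdated") || 0;

        $.ajax({
            dataType: "json",
            // Ask the OData endpoint only for entries changed since the stored version.
            url: "/Services/EntryService.svc/Entries?$filter=(LastUpdated gt " + lastUpdated + "L)",
            success: function (result) {
                var changes = result["d"]; // unwrap the OData payload
                var merged = EntriesHelper.merge(store.get("entries"), changes);
                store.set("entries", merged);
                store.set("entriesLastUpdated", newestTimestamp(merged)); // hypothetical helper
            }
        });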
    The OData query to get the latest updates looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L)

    If you remove the URL encoding, the query looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L)

    This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:

        merge: function (oldEntries, newEntries) {
            // concat (this performs the add)
            oldEntries = oldEntries || [];
            var mergedEntries = oldEntries.concat(newEntries);

            // sort
            this.sortByIdThenLastUpdated(mergedEntries);

            // prune duplicates (this performs the update)
            mergedEntries = this.pruneDuplicates(mergedEntries);

            // delete
            mergedEntries = this.removeIsDeleted(mergedEntries);

            // sort
            this.sortByName(mergedEntries);

            return mergedEntries;
        },

    The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected), and I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method using server-side JavaScript; I describe this approach to writing unit tests in this blog entry, and the unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx

    One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item, but I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this:

        public override int SaveChanges(SaveOptions options) {
            var changes = this.ObjectStateManager.GetObjectStateEntries(
                EntityState.Modified | EntityState.Added | EntityState.Deleted);
            foreach (var change in changes) {
                var entity = change.Entity as IEntityTracking;
                if (entity != null) {
                    entity.LastUpdated = DateTime.Now.Ticks;
                }
            }
            return base.SaveChanges(options);
        }

    Notice that I assign DateTime.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted.

    Summary

    After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data, then I recommend that you take advantage of this new feature included in the HTML5 standard.


  • YUI Uploader hangs after choosing file

    - by stephenbayer
    Below is my entire code from a user control that contains the YUI Uploader. Is there something I'm missing? Right now, when I step through the JavaScript code in Firebug, it hangs on the first line of the upload() function. I have a breakpoint on the first line of the .ashx that handles the file, but it is never called, so it doesn't get that far. I figure I'm just missing something stupid; I've used this control many times before with no issues, and I'm using all the CSS files and graphics provided by the samples folder in the YUI download. If I'm not missing anything, is there a more comprehensive way of debugging this issue than stepping through the JavaScript with Firebug? I've tried turning the logging for YUI on and off, and never get any logs anywhere. I'm not sure where to go now.

        <style type="text/css">
            #divFile {
                background-color: White;
                border: 2px inset Ivory;
                height: 21px;
                margin-left: -2px;
                margin-right: 9px;
                width: 125px;
            }
        </style>

        <ajaxToolkit:RoundedCornersExtender runat="server" Corners="All" Radius="6"
            ID="rceContainer" TargetControlID="pnlMMAdmin" />
        <asp:Panel ID="pnlMMAdmin" runat="server" Width="100%" BackColor="Silver"
            ForeColor="#ffffff" Font-Bold="true" Font-Size="16px">
            <div style="padding: 5px; text-align:center; width: 100%;">
                <table style="width: 100%; border: none; text-align: left;">
                    <tr>
                        <td style="width: 460px; vertical-align: top;">
                            <!-- information panel -->
                            <ajaxToolkit:RoundedCornersExtender runat="server" Corners="All" Radius="6"
                                ID="RoundedCornersExtender1" TargetControlID="pnlInfo" />
                            <asp:Panel ID="pnlInfo" runat="server" Width="100%" BackColor="Silver"
                                ForeColor="#ffffff" Font-Bold="true" Font-Size="16px">
                                <div id="infoPanel" style="padding: 5px; text-align:left; width: 100%;">
                                    <table>
                                        <tr><td>Chart</td><td>
                                            <table>
                                                <tr><td><div id="divFile"></div></td>
                                                    <td><div id="uploaderContainer" style="width:60px; height:25px"></div></td></tr>
                                                <tr><td colspan="2"><div id="progressBar"></div></td></tr>
                                            </table>
                                        </td></tr>
                                    </table>
                                </div>
                            </asp:Panel>

        <script type="text/javascript" language="javascript">
            WYSIWYG.attach('<%= txtComment.ClientID %>', full);

            var uploader = new YAHOO.widget.Uploader("uploaderContainer", "assets/buttonSkin.jpg");
            uploader.addListener('contentReady', handleContentReady);
            uploader.addListener('fileSelect', onFileSelect);
            uploader.addListener('uploadStart', onUploadStart);
            uploader.addListener('uploadProgress', onUploadProgress);
            uploader.addListener('uploadCancel', onUploadCancel);
            uploader.addListener('uploadComplete', onUploadComplete);
            uploader.addListener('uploadCompleteData', onUploadResponse);
            uploader.addListener('uploadError', onUploadError);

            function handleContentReady() {
                // Allows the uploader to send log messages to trace, as well as to YAHOO.log
                uploader.setAllowLogging(false);

                // Restrict selection to a single file (that's what it is by default,
                // just demonstrating how).
                uploader.setAllowMultipleFiles(false);

                // New set of file filters.
                var ff = new Array({ description: "Images", extensions: "*.jpg;*.png;*.gif" });

                // Apply new set of file filters to the uploader.
                uploader.setFileFilters(ff);
            }

            var fileID;

            function onFileSelect(event) {
                for (var item in event.fileList) {
                    if (YAHOO.lang.hasOwnProperty(event.fileList, item)) {
                        YAHOO.log(event.fileList[item].id);
                        fileID = event.fileList[item].id;
                    }
                }
                uploader.disable();
                var filename = document.getElementById("divFile");
                filename.innerHTML = event.fileList[fileID].name;
                var progressbar = document.getElementById("progressBar");
                progressbar.innerHTML = "Please wait... Starting upload.... ";
                upload(fileID);
            }

            function upload(idFile) {
                // file hangs right here. **************************
                progressBar.innerHTML = "Upload starting... ";
                if (idFile != null) {
                    uploader.upload(idFile, "AdminFileUploader.ashx", "POST");
                    fileID = null;
                }
            }

            function handleClearFiles() {
                uploader.clearFileList();
                uploader.enable();
                fileID = null;
                var filename = document.getElementById("divFile");
                filename.innerHTML = "";
                var progressbar = document.getElementById("progressBar");
                progressbar.innerHTML = "";
            }

            function onUploadProgress(event) {
                prog = Math.round(300 * (event["bytesLoaded"] / event["bytesTotal"]));
                progbar = "<div style=\"background-color: #f00; height: 5px; width: " + prog + "px\"/>";
                var progressbar = document.getElementById("progressBar");
                progressbar.innerHTML = progbar;
            }

            function onUploadComplete(event) {
                uploader.clearFileList();
                uploader.enable();
                progbar = "<div style=\"background-color: #f00; height: 5px; width: 300px\"/>";
                var progressbar = document.getElementById("progressBar");
                progressbar.innerHTML = progbar;
                alert('File Uploaded');
            }

            function onUploadStart(event) { alert('upload start'); }
            function onUploadError(event) { alert('upload error'); }
            function onUploadCancel(event) { alert('upload cancel'); }
            function onUploadResponse(event) { alert('upload response'); }
        </script>
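
    A hedged debugging sketch, not from the post: YUI 2's Uploader is Flash-based, and a silent hang in upload() with no request ever reaching the .ashx is commonly the uploader.swf failing to load or initialize (wrong path, or blocked by the browser). Pinning the SWF location explicitly before constructing the widget and temporarily turning logging on can help isolate where the pipeline stops; the path below is hypothetical and must match where your YUI assets actually live:

        // Point the widget at a known-good uploader.swf before constructing it.
        YAHOO.widget.Uploader.SWFURL = "yui/build/uploader/assets/uploader.swf"; // hypothetical path

        var uploader = new YAHOO.widget.Uploader("uploaderContainer", "assets/buttonSkin.jpg");

        // While debugging, enable logging (the post disables it in handleContentReady).
        uploader.addListener('contentReady', function () {
            uploader.setAllowLogging(true);
        });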


  • Refresh button in Java

    - by lakshmi
    I need to make a simple refresh button for a GUI Java program. I have the button made, but it's not working properly, and I'm not sure what the code needs to be to make the button work. Any help would be really appreciated. The code is below. `
    public class CompanySample extends Panel {
      DBServiceAsync dbService = GWT.create(DBService.class);
      Panel panel = new Panel();

      public Panel CompanySampledetails(com.viji.example.domain.Record trec, int count) {
        panel.setWidth(800);
        Panel borderPanel = new Panel();
        Panel centerPanel = new Panel();
        final FormPanel formPanel = new FormPanel();
        formPanel.setFrame(true); formPanel.setLabelAlign(Position.LEFT); formPanel.setPaddings(5); formPanel.setWidth(800);
        final Panel inner = new Panel();
        inner.setLayout(new ColumnLayout());
        Panel columnOne = new Panel();
        columnOne.setLayout(new FitLayout());
        GridPanel gridPanel = null;
        if (gridPanel != null) { gridPanel.getView().refresh(); gridPanel.clear(); }
        gridPanel = new SampleGrid(trec);
        gridPanel.setHeight(450); gridPanel.setTitle("Company Data");
        final RowSelectionModel sm = new RowSelectionModel(true);
        sm.addListener(new RowSelectionListenerAdapter() {
          public void onRowSelect(RowSelectionModel sm, int rowIndex, Record record) {
            formPanel.getForm().loadRecord(record);
          }
        });
        gridPanel.setSelectionModel(sm);
        gridPanel.doOnRender(new Function() {
          public void execute() { sm.selectFirstRow(); }
        }, 10);
        columnOne.add(gridPanel);
        inner.add(columnOne, new ColumnLayoutData(0.6));
        final FieldSet fieldSet = new FieldSet();
        fieldSet.setLabelWidth(90); fieldSet.setTitle("company Details"); fieldSet.setAutoHeight(true); fieldSet.setBorder(false);
        final TextField txtcompanyname = new TextField("Name", "companyname", 120);
        final TextField txtcompanyaddress = new TextField("Address", "companyaddress", 120);
        final TextField txtcompanyid = new TextField("Id", "companyid", 120);
        txtcompanyid.setVisible(false);
        fieldSet.add(txtcompanyid); fieldSet.add(txtcompanyname); fieldSet.add(txtcompanyaddress);
        final Button addButton = new Button();
        final Button deleteButton = new Button();
        final Button modifyButton = new Button();
        final Button refeshButton = new Button();
        addButton.setText("Add"); deleteButton.setText("Delete"); modifyButton.setText("Modify"); refeshButton.setText("Refresh");
        fieldSet.add(addButton); fieldSet.add(deleteButton); fieldSet.add(modifyButton); fieldSet.add(refeshButton);
        final ButtonListenerAdapter buttonClickListener = new ButtonListenerAdapter() {
          public void onClick(Button button, EventObject e) {
            if (button == refeshButton) { sendDataToServ("Refresh"); }
          }
        };
        addButton.addListener(new ButtonListenerAdapter() {
          @Override
          public void onClick(Button button, EventObject e) {
            if (button.getText().equals("Add")) { sendDataToServer("Add"); }
          }

          private void sendDataToServer(String action) {
            String txtcnameToServer = txtcompanyname.getText().trim();
            String txtcaddressToServer = txtcompanyaddress.getText().trim();
            if (txtcnameToServer.trim().equals("") || txtcaddressToServer.trim().equals("")) {
              Window.alert("Incomplete Data. Fill/Select all the needed fields");
              action = "Nothing";
            }
            AsyncCallback ascallback = new AsyncCallback<String>() {
              @Override
              public void onFailure(Throwable caught) { Window.alert("Record not inserted"); }
              @Override
              public void onSuccess(String result) { Window.alert("Record inserted"); }
            };
            if (action.trim().equals("Add")) {
              System.out.println("Before insertServer");
              dbService.insertCompany(txtcnameToServer, txtcaddressToServer, ascallback);
            }
          }
        });
        deleteButton.addListener(new ButtonListenerAdapter() {
          @Override
          public void onClick(Button button, EventObject e) {
            if (button.getText().equals("Delete")) { sendDataToServer("Delete"); }
          }

          private void sendDataToServer(String action) {
            String txtcidToServer = txtcompanyid.getText().trim();
            String txtcnameToServer = txtcompanyname.getText().trim();
            String txtcaddressToServer = txtcompanyaddress.getText().trim();
            if (!action.equals("Delete")) {
              if (txtcidToServer.trim().equals("") || txtcnameToServer.trim().equals("") || txtcaddressToServer.trim().equals("")) {
                Window.alert("Incomplete Data. Fill/Select all the needed fields");
                action = "Nothing";
              }
            } else if (txtcidToServer.trim().equals("")) {
              Window.alert("Doesn't deleted any row");
            }
            AsyncCallback ascallback = new AsyncCallback<String>() {
              @Override
              public void onFailure(Throwable caught) {
                // TODO Auto-generated method stub
                Window.alert("Record not deleted");
              }
              @Override
              public void onSuccess(String result) { Window.alert("Record deleted"); }
            };
            if (action.trim().equals("Delete")) {
              System.out.println("Before deleteServer");
              dbService.deleteCompany(Integer.parseInt(txtcidToServer), ascallback);
            }
          }
        });
        modifyButton.addListener(new ButtonListenerAdapter() {
          @Override
          public void onClick(Button button, EventObject e) {
            if (button.getText().equals("Modify")) { sendDataToServer("Modify"); }
          }

          private void sendDataToServer(String action) {
            // TODO Auto-generated method stub
            System.out.println("ACTION :----->" + action);
            String txtcidToServer = txtcompanyid.getText().trim();
            String txtcnameToServer = txtcompanyname.getText().trim();
            String txtcaddressToServer = txtcompanyaddress.getText().trim();
            if (txtcnameToServer.trim().equals("") || txtcaddressToServer.trim().equals("")) {
              Window.alert("Incomplete Data. Fill/Select all the needed fields");
              action = "Nothing";
            }
            AsyncCallback ascallback = new AsyncCallback<String>() {
              @Override
              public void onFailure(Throwable caught) { Window.alert("Record not Updated"); }
              @Override
              public void onSuccess(String result) { Window.alert("Record Updated"); }
            };
            if (action.equals("Modify")) {
              System.out.println("Before UpdateServer");
              dbService.modifyCompany(Integer.parseInt(txtcidToServer), txtcnameToServer, txtcaddressToServer, ascallback);
            }
          }
        });
        inner.add(new PaddedPanel(fieldSet, 0, 10, 0, 0), new ColumnLayoutData(0.4));
        formPanel.add(inner);
        borderPanel.add(centerPanel, new BorderLayoutData(RegionPosition.CENTER));
        centerPanel.add(formPanel);
        panel.add(borderPanel);
        return panel;
      }

      public void sendDataToServ(String action) {
        AsyncCallback<com.viji.example.domain.Record> callback = new AsyncCallback<com.viji.example.domain.Record>() {
          @Override
          public void onFailure(Throwable caught) { System.out.println("Failure"); }
          public void onSuccess(com.viji.example.domain.Record result) { CompanySampledetails(result, 1); }
        };
        dbService.getRecords(callback);
      }
    }

    class SampleGrid extends GridPanel {
      private static BaseColumnConfig[] columns = new BaseColumnConfig[] {
        new ColumnConfig("Name", "companyname", 120, true),
        new ColumnConfig("Address", "companyaddress", 120, true),
      };
      private static RecordDef recordDef = new RecordDef(new FieldDef[] {
        new IntegerFieldDef("companyid"),
        new StringFieldDef("companyname"),
        new StringFieldDef("companyaddress")
      });

      public SampleGrid(com.viji.example.domain.Record trec) {
        Object[][] data = new Object[0][]; // getCompanyDataSmall();
        MemoryProxy proxy = new MemoryProxy(data);
        ArrayReader reader = new ArrayReader(recordDef);
        Store store = new SimpleStore(new String[] { "companyid", "companyname", "companyaddress" }, new Object[0][]);
        store.load();
        ArrayList<Company> Companydetails = trec.getCompanydetails();
        for (int i = 0; i < Companydetails.size(); i++) {
          Company company = Companydetails.get(i);
          store.add(recordDef.createRecord(new Object[] { company.getCompanyid(), company.getCompanyName(), company.getCompanyaddress() }));
        }
        store.commitChanges();
        setStore(store);
        ColumnModel columnModel = new ColumnModel(columns);
        setColumnModel(columnModel);
      }
    }
    `

    Read the article

  • How do I debug this JavaScript? I don't get an error in Firebug, but it's not working as expected.

    - by Angela
    I installed the plugin better-edit-in-place (http://github.com/nakajima/better-edit-in-place) but I don't seem to be able to make it work. The plugin generates JavaScript and also automatically creates a rel attribute and a class. The expected behavior is edit-in-place, but currently nothing happens when I mouse over. When I use Firebug, the value to be edited is rendered correctly: <span rel="/emails/1" id="email_1_days" class="editable">7</span> And it shows the full JavaScript, which should work on the editable class. I didn't copy everything, just the chunks that should be operational given a matching class name in the DOM.

    // Editable: Better in-place-editing
    // http://github.com/nakajima/nakatype/wikis/better-edit-in-place-editable-js
    var Editable = Class.create({
      initialize: function(element, options) {
        this.element = $(element);
        Object.extend(this, options);
        // Set default values for options
        this.editField = this.editField || {};
        this.editField.type = this.editField.type || 'input';
        this.onLoading = this.onLoading || Prototype.emptyFunction;
        this.onComplete = this.onComplete || Prototype.emptyFunction;
        this.field = this.parseField();
        this.value = this.element.innerHTML;
        this.setupForm();
        this.setupBehaviors();
      },

      // In order to parse the field correctly, it's necessary that the element
      // you want to edit in place for have an id of (model_name)_(id)_(field_name).
      // For example, if you want to edit the "caption" field in a "Photo" model,
      // your id should be something like "photo_#{@photo.id}_caption".
      // If you want to edit the "comment_body" field in a "MemberBlogPost" model,
      // it would be: "member_blog_post_#{@member_blog_post.id}_comment_body"
      parseField: function() {
        var matches = this.element.id.match(/(.*)_\d*_(.*)/);
        this.modelName = matches[1];
        this.fieldName = matches[2];
        if (this.editField.foreignKey) this.fieldName += '_id';
        return this.modelName + '[' + this.fieldName + ']';
      },

      // Create the editing form for the editable and inserts it after the element.
      // If window._token is defined, then we add a hidden element that contains the
      // authenticity_token for the AJAX request.
      setupForm: function() {
        this.editForm = new Element('form', {
          'action': this.element.readAttribute('rel'),
          'style': 'display:none',
          'class': 'in-place-editor'
        });
        this.setupInputElement();
        if (this.editField.tag != 'select') {
          this.saveInput = new Element('input', { type: 'submit', value: Editable.options.saveText });
          if (this.submitButtonClass) this.saveInput.addClassName(this.submitButtonClass);
          this.cancelLink = new Element('a', { href: '#' }).update(Editable.options.cancelText);
          if (this.cancelButtonClass) this.cancelLink.addClassName(this.cancelButtonClass);
        }
        var methodInput = new Element('input', { type: 'hidden', value: 'put', name: '_method' });
        if (typeof(window._token) != 'undefined') {
          this.editForm.insert(new Element('input', { type: 'hidden', value: window._token, name: 'authenticity_token' }));
        }
        this.editForm.insert(this.editField.element);
        if (this.editField.type != 'select') {
          this.editForm.insert(this.saveInput);
          this.editForm.insert(this.cancelLink);
        }
        this.editForm.insert(methodInput);
        this.element.insert({ after: this.editForm });
      },

      // Create input element - text input, text area or select box.
      setupInputElement: function() {
        this.editField.element = new Element(this.editField.type, { 'name': this.field, 'id': ('edit_' + this.element.id) });
        if (this.editField['class']) this.editField.element.addClassName(this.editField['class']);
        if (this.editField.type == 'select') {
          // Create options
          var options = this.editField.options.map(function(option) {
            return new Option(option[0], option[1]);
          });
          // And assign them to select element
          options.each(function(option, index) {
            this.editField.element.options[index] = options[index];
          }.bind(this));
          // Set selected option
          try {
            this.editField.element.selectedIndex = $A(this.editField.element.options).find(function(option) {
              return option.text == this.element.innerHTML;
            }.bind(this)).index;
          } catch(e) {
            this.editField.element.selectedIndex = 0;
          }
          // Set event handlers to automatically submit form when option is changed
          this.editField.element.observe('blur', this.cancel.bind(this));
          this.editField.element.observe('change', this.save.bind(this));
        } else {
          // Copy value of the element to the input
          this.editField.element.value = this.element.innerHTML;
        }
      },

      // Sets up event handlers for editable.
      setupBehaviors: function() {
        this.element.observe('click', this.edit.bindAsEventListener(this));
        if (this.saveInput) this.editForm.observe('submit', this.save.bindAsEventListener(this));
        if (this.cancelLink) this.cancelLink.observe('click', this.cancel.bindAsEventListener(this));
      },

      // Event handler that activates form and hides element.
      edit: function(event) {
        this.element.hide();
        this.editForm.show();
        this.editField.element.activate ? this.editField.element.activate() : this.editField.element.focus();
        if (event) event.stop();
      },

      // Event handler that makes request to server, then handles a JSON response.
      save: function(event) {
        var pars = this.editForm.serialize(true);
        var url = this.editForm.readAttribute('action');
        this.editForm.disable();
        new Ajax.Request(url + ".json", {
          method: 'put',
          parameters: pars,
          onSuccess: function(transport) {
            var json = transport.responseText.evalJSON();
            var value;
            if (json[this.modelName]) {
              value = json[this.modelName][this.fieldName];
            } else {
              value = json[this.fieldName];
            }
            // If we're using foreign key, read value from the form
            // instead of displaying foreign key ID
            if (this.editField.foreignKey) {
              value = $A(this.editField.element.options).find(function(option) {
                return option.value == value;
              }).text;
            }
            this.value = value;
            this.editField.element.value = this.value;
            this.element.update(this.value);
            this.editForm.enable();
            if (Editable.afterSave) { Editable.afterSave(this); }
            this.cancel();
          }.bind(this),
          onFailure: function(transport) {
            this.cancel();
            alert("Your change could not be saved.");
          }.bind(this),
          onLoading: this.onLoading.bind(this),
          onComplete: this.onComplete.bind(this)
        });
        if (event) { event.stop(); }
      },

      // Event handler that restores original editable value and hides form.
      cancel: function(event) {
        this.element.show();
        this.editField.element.value = this.value;
        this.editForm.hide();
        if (event) { event.stop(); }
      },

      // Removes editable behavior from an element.
      clobber: function() {
        this.element.stopObserving('click');
        try {
          this.editForm.remove();
          delete(this);
        } catch(e) {
          delete(this);
        }
      }
    });

    // Editable class methods.
    Object.extend(Editable, {
      options: { saveText: 'Save', cancelText: 'Cancel' },
      create: function(element) { new Editable(element); },
      setupAll: function(klass) {
        klass = klass || '.editable';
        $$(klass).each(Editable.create);
      }
    });

    But when I point my mouse at the element, there is no in-place-editing action!
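    One thing the excerpt never shows is the call that actually attaches Editable to the page: Editable.setupAll() is defined above, but nothing invokes it. A hedged sketch of that wiring with Prototype (whose dom:loaded event and $$() selector the plugin already relies on) might look like this:

    // Sketch only: wire up every element with class="editable" once the DOM is ready.
    document.observe('dom:loaded', function () {
        Editable.setupAll('.editable');
    });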

    Read the article

  • PHP and jQuery validation with the jQuery Form Plugin

    - by Jacinto
    Hi, this is the first time I have attempted to make a form using jQuery and PHP. I used the folks over at Mid Mo Design as an example, but even with that tutorial I am still having trouble getting it to do what I want. This is the code I have been using, together with jQuery 1.4.1 and the jQuery Form Plugin 2.43. Any help would be greatly appreciated.

    CSS:
    #scrollContact { border-top: double 1px #0D0D0D; padding: 100px 50px 50px 50px; background: #020303; position: relative; overflow: hidden; width: 924px; text-align: justify; }
    .contactInfo { float: left; width: 214px; margin-right: 10px; margin-top: 5px; }
    #contactForm { float: left; width: 700px; }
    #contactForm span { float: left; margin: 5px; width: 455px; }
    input, textarea { -moz-border-radius: 5px 5px 5px 5px; border: 1px solid #001932; color: #BBBBBB; font: 1.1em Verdana,Geneva,sans-serif; background: #0A0A0A; }
    input:hover, textarea:hover { border: 1px solid #0278f2; background: #242424; }
    #contactForm span input { line-height: 1.8em; width: 430px; padding: 11px 10px; margin: 0px 0px 10px 0px; }
    #contactForm input { line-height: 1.8em; width: 200px; padding: 11px 10px; margin: 5px; }
    #contactForm textarea { height: 190px; line-height: 1.8em; width: 430px; padding: 10px; }
    .message { background: #eee; color: #000; display: none; padding: 10px; height: 70px; position: absolute; bottom: 0px; }

    HTML:
    Contact Navigate To: Work services about contact Get A Free Quote Thank you for your interest in contacting me. Please use the form to the right to contact me via email. I will respond to your inquiry as soon as possible. Please note all fields are required. What Next? Thank you for your interest in contacting me. Please use the form to the right to contact me via email. I will respond to your inquiry as soon as possible. Please note all fields are required. Your Message

    PHP:
    <?php
    $sendto = '[email protected]';
    $subject = 'Contact from contact form';
    $errormessage = 'Oops! There seems to have been a problem. May we suggest...';
    $thanks = "Thanks for the email! We'll get back to you as soon as possible!";
    $honeypot = "You filled in the honeypot! If you're human, try again!";
    $emptyname = 'Entering your name?';
    $emptyemail = 'Entering your email address?';
    $emptytitle = 'Entering The Subject?';
    $emptymessage = 'Entering a message?';
    $alertname = 'Entering your name using only the standard alphabet?';
    $alertemail = 'Entering your email in this format: [email protected]?';
    $alerttitle = 'Entering the subject using only the standard alphabet?';
    $alertmessage = "Making sure you aren't using any parenthesis or other escaping characters in the message? Most URLS are fine though!";
    $alert = '';
    $pass = 0;

    function clean_var($variable) {
        $variable = strip_tags(stripslashes(trim(rtrim($variable))));
        return $variable;
    }

    if ( empty($_REQUEST['last']) ) {
        if ( empty($_REQUEST['contactName']) ) {
            $pass = 1;
            $alert .= "" . $emptyname . "";
        } elseif ( ereg( "[][{}()*+?.\^$|]", $_REQUEST['contactName'] ) ) {
            $pass = 1;
            $alert .= "" . $alertname . "";
        }
        if ( empty($_REQUEST['contactEmail']) ) {
            $pass = 1;
            $alert .= "" . $emptyemail . "";
        } elseif ( !eregi("^[_a-z0-9-]+(.[_a-z0-9-]+)@[a-z0-9-]+(.[a-z0-9-]+)(.[a-z]{2,3})$", $_REQUEST['contactEmail']) ) {
            $pass = 1;
            $alert .= "" . $alertemail . "";
        }
        if ( empty($_REQUEST['contactTitle']) ) {
            $pass = 1;
            $alert .= "" . $emptytitle . "";
        } elseif ( ereg( "[][{}()*+?.\^$|]", $_REQUEST['contactTitle'] ) ) {
            $pass = 1;
            $alert .= "" . $alerttitle . "";
        }
        if ( empty($_REQUEST['contactMessage']) ) {
            $pass = 1;
            $alert .= "" . $emptymessage . "";
        } elseif ( ereg( "[][{}()*+?\^$|]", $_REQUEST['contactMessage'] ) ) {
            $pass = 1;
            $alert .= "" . $alertmessage . "";
        }
        if ( $pass==1 ) {
            echo "$(\".message\").hide(\"slow\").show(\"slow\"); ";
            echo "" . $errormessage . "";
            echo "";
            echo $alert;
            echo "";
        } elseif (isset($_REQUEST['message'])) {
            $message = "From: " . clean_var($_REQUEST['contactName']) . "\n";
            $message .= "Email: " . clean_var($_REQUEST['contactEmail']) . "\n";
            $message .= "Telephone: " . clean_var($_REQUEST['contactTitle']) . "\n";
            $message .= "Message: \n" . clean_var($_REQUEST['contactMessage']);
            $header = 'From:' . clean_var($_REQUEST['contactEmail']);
            mail($sendto, $subject, $message, $header);
            echo "$(\".message\").hide(\"slow\").show(\"slow\").animate({opacity: 1.0}, 4000).hide(\"slow\"); $(':input').clearForm() ";
            echo $thanks;
            die();
        }
    } else {
        echo "$(\".message\").hide(\"slow\").show(\"slow\"); ";
        echo $honeypot;
    }
    ?>

    Read the article

  • Problem using an existing SQLite database

    - by flybirdtt
    I have an SQLite database file, and I put it in the "assets" folder. The code is below. Please help me figure out what's wrong with this code and how to use my own SQLite database.

    public class DataBaseHelper extends SQLiteOpenHelper {
        private static String DB_PATH = "/data/data/com.SGMalls/databases/";
        private static String DB_NAME = "mallMapv2.sqlite";
        private SQLiteDatabase myDataBase;
        private final Context myContext;

        public DataBaseHelper(Context context) {
            super(context, DB_NAME, null, 1);
            this.myContext = context;
        }

        public void createDataBase() throws IOException {
            File dbDir = new File(DB_PATH);
            if (!dbDir.exists()) { dbDir.mkdir(); }
            boolean dbExist = checkDataBase();
            if (dbExist) {
            } else {
                this.getReadableDatabase();
                try {
                    copyDataBase();
                } catch (IOException e) {
                    throw new Error("Error copying database");
                }
            }
            close();
        }

        private boolean checkDataBase() {
            SQLiteDatabase checkDB = null;
            boolean isnull = false;
            try {
                String myPath = DB_PATH + DB_NAME;
                checkDB = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);
            } catch (SQLiteException e) {
                // database doesn't exist yet.
            }
            if (checkDB != null) { isnull = true; checkDB.close(); }
            return isnull;
        }

        private void copyDataBase() throws IOException {
            InputStream myInput = myContext.getAssets().open(DB_NAME);
            String outFileName = DB_PATH + DB_NAME;
            OutputStream myOutput = new FileOutputStream(outFileName);
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }
            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }

        public void openDataBase() throws SQLException {
            // Open the database
            String myPath = DB_PATH + DB_NAME;
            myDataBase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);
        }

        @Override
        public synchronized void close() {
            if (myDataBase != null) myDataBase.close();
            super.close();
        }

        @Override
        public void onCreate(SQLiteDatabase db) { }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { }
    }

    public class GetData {
        private static String DB_PATH = "/data/data/com.SGMalls/databases/mallMapv2.sqlite";
        // private static String DB_NAME = "mallMapv2.sqlite";

        public static ArrayList<Mall> getMalls() {
            ArrayList<Mall> mallsList = new ArrayList<Mall>();
            SQLiteDatabase malldatabase = SQLiteDatabase.openDatabase(DB_PATH, null, SQLiteDatabase.OPEN_READONLY);
            String queryString = "select id,title from malls order by title";
            Cursor cursor = malldatabase.rawQuery(queryString, null);
            if (cursor != null) {
                cursor.moveToFirst();
                while (!cursor.isLast()) {
                    Mall mall = new Mall();
                    mall.setMallid(cursor.getInt(0));
                    mall.setMallname(cursor.getString(0));
                    mallsList.add(mall);
                    cursor.moveToNext();
                }
            }
            malldatabase.close();
            return mallsList;
        }
    }

    The error message:

    ERROR/Database(725): sqlite3_open_v2("/data/data/com.SGMalls/databases/mallMapv2.sqlite", &handle, 1, NULL) failed
    03-15 22:34:11.747: ERROR/AndroidRuntime(725): Uncaught handler: thread main exiting due to uncaught exception
    03-15 22:34:11.766: ERROR/AndroidRuntime(725): java.lang.Error: Error copying database

    Thanks very much

    Read the article

  • How to update a user profile through a token ID when a user logs in with a username and password

    - by Rani
    I have an application in which, when the user enters his username and password in the login form, they are submitted to the server database for validation. If he is a valid user, a token ID is generated for him through a JSON response and he is allowed to log in. This is the JSON response that I get in the console when a valid user logs in through the login form:

    {"TokenID":"kaenn43otO","isError":false,"ErrorMessage":"","Result":[{"UserId":"164","FirstName":"Indu","LastName":"nair","Email":"[email protected]","ProfileImage":null,"ThumbnailImage":null,"DeviceInfoId":"22"}],"ErrorCode":900}
    2011-06-24 11:56:57.639 Journey[5526:207] this is token id:kaenn43otO

    When the same user wants to update his profile by clicking on the update tab, the token ID should be used, so that he can change his first name, last name, password, email, etc. I have written the following code to update the user profile using the token ID, but it does not work. I get the token ID when the user logs in, and this token ID has to be passed to the other class. This is my code to update the user profile:

    -(void)sendRequest {
        // this is the class in which I get the token ID, and I am passing it to this class
        apicontroller *apitest = [[apicontroller alloc] init];
        NSString *tokentest = apitest.tokenapi;
        NSString *post = [NSString stringWithFormat:@"Token=%@&firstname=%@&lastname=%@&Username=%@&Password=%@&Email=%@", tokentest, txtfirstName.text, txtlast.text, txtUserName.text, txtPassword.text, txtEmail.text];
        NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
        NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]];
        NSLog(@"%@", postLength);
        NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
        [request setURL:[NSURL URLWithString:@"http://192.168.0.1:96/JourneyMapperAPI?RequestType=Register&Command=SET"]];
        [request setHTTPMethod:@"POST"];
        [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
        [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
        [request setHTTPBody:postData];
        NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
        if (theConnection) {
            webData = [[NSMutableData data] retain];
            NSLog(@"%@", webData);
            [theConnection start];
        } else {
        }
    }

    -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
        [webData setLength:0];
    }

    -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
        [webData appendData:data];
    }

    -(void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
        [connection release];
        [webData release];
    }

    -(void)connectionDidFinishLoading:(NSURLConnection *)connection {
        NSString *loginStatus = [[NSString alloc] initWithBytes:[webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding];
        NSLog(@"%@", loginStatus);
        [loginStatus release];
        [connection release];
        [webData release];
    }

    -(IBAction)click:(id)sender {
        [self sendRequest];
    }

    This is my apicontroller.m file, in which I create the login form so the user can log in; in this page the token ID is generated:

    -(void)sendRequest {
        UIDevice *device = [UIDevice currentDevice];
        NSString *udid = [device uniqueIdentifier];
        // NSString *sysname = [device systemName];
        NSString *sysver = [device systemVersion];
        NSString *model = [device model];
        NSLog(@"idis:%@", [device uniqueIdentifier]);
        NSLog(@"system nameis :%@", [device systemName]);
        NSLog(@"System version is:%@", [device systemVersion]);
        NSLog(@"System model is:%@", [device model]);
        NSLog(@"device orientation is:%d", [device orientation]);
        NSString *post = [NSString stringWithFormat:@"Loginkey=%@&Password=%@&DeviceCode=%@&Firmware=%@&IMEI=%@", txtUserName.text, txtPassword.text, model, sysver, udid];
        NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
        NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]];
        NSLog(@"%@", postLength);
        NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
        [request setURL:[NSURL URLWithString:@"http://192.168.0.1:96/JourneyMapperAPI?RequestType=Login"]];
        [request setHTTPMethod:@"POST"];
        [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
        [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
        [request setHTTPBody:postData];
        NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
        if (theConnection) {
            webData = [[NSMutableData data] retain];
            NSLog(@"%@", webData);
        } else {
        }
    }

    // to select username and password from the database.
    -(void)check {
        // app.journeyList = [[NSMutableArray alloc] init];
        [self createEditableCopyOfDatabaseIfNeeded];
        NSString *filePath = [self getWritableDBPath];
        sqlite3 *database;
        if (sqlite3_open([filePath UTF8String], &database) == SQLITE_OK) {
            const char *sqlStatement = "SELECT Username,Password FROM UserInformation where Username=? and Password=?";
            sqlite3_stmt *compiledStatement;
            if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
                sqlite3_bind_text(compiledStatement, 1, [txtUserName.text UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(compiledStatement, 2, [txtPassword.text UTF8String], -1, SQLITE_TRANSIENT);
                // NSString *loginname = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)];
                // NSString *loginPassword = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 2)];
            }
            if (sqlite3_step(compiledStatement) != SQLITE_ROW) {
                NSLog(@"Save Error: %s", sqlite3_errmsg(database));
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"UIAlertView" message:@"User is not valid" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
                [alert show];
                [alert release];
                alert = nil;
            } else {
                isUserValid = YES;
                if (isUserValid) {
                    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"UIAlertView" message:@"Valid User" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
                    [alert show];
                    [alert release];
                }
            }
            sqlite3_finalize(compiledStatement);
        }
        sqlite3_close(database);
    }

    -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
        [webData setLength:0];
    }

    -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
        [webData appendData:data];
    }

    -(void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
        [connection release];
        [webData release];
    }

    -(void)connectionDidFinishLoading:(NSURLConnection *)connection {
        NSString *loginStatus = [[NSString alloc] initWithBytes:[webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding];
        NSLog(@"%@", loginStatus);
        // this is to perform the insert operation on the UserInformation table
        NSString *json_string = [[NSString alloc] initWithData:webData encoding:NSUTF8StringEncoding];
        NSDictionary *result = [json_string JSONValue];
        // here I am storing the token ID
        tokenapi = [result objectForKey:@"TokenID"];
        NSLog(@"this is token id:%@", tokenapi);
        // BOOL errortest = [[result objectForKey:@"isError"] boolValue];
        if (errortest == FALSE) {
            values = [result objectForKey:@"Result"];
            NSLog(@"Valid User");
        } else {
            NSLog(@"Invalid User");
        }
        NSMutableArray *results = [[NSMutableArray alloc] init];
        for (int index = 0; index < [values count]; index++) {
            NSMutableDictionary *value = [values objectAtIndex:index];
            Result *result = [[Result alloc] init];
            // through this I get the userid
            result.UserID = [value objectForKey:@"UserId"];
            result.FirstName = [value objectForKey:@"FirstName"];
            result.Username = [value objectForKey:@"Username"];
            result.Email = [value objectForKey:@"Email"];
            result.ProfileImage = [value objectForKey:@"ProfileImage"];
            result.ThumbnailImage = [value objectForKey:@"ThumbnailImage"];
            result.DeviceInfoId = [value objectForKey:@"DeviceInfoId"];
            NSLog(@"%@", result.UserID);
            [results addObject:result];
            [result release];
        }
        for (int index = 0; index < [results count]; index++) {
            Result *result = [results objectAtIndex:index];
            // save the object variables to database here
            [self createEditableCopyOfDatabaseIfNeeded];
            NSString *filePath = [self getWritableDBPath];
            sqlite3 *database;
            if (sqlite3_open([filePath UTF8String], &database) == SQLITE_OK) {
                const char *sqlStatement = "insert into UserInformation(UserID,DeviceId,Username,Password,FirstName,Email) VALUES (?,?,?,?,?,?)";
                sqlite3_stmt *compiledStatement;
                if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
                    // sqlite3_bind_text(compiledStatement, 1, [journeyid UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(compiledStatement, 1, [result.UserID UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(compiledStatement, 2, [result.DeviceInfoId UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(compiledStatement, 3, [txtUserName.text UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(compiledStatement, 4, [txtPassword.text UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(compiledStatement, 5, [result.FirstName UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(compiledStatement, 6, [result.Email UTF8String], -1, SQLITE_TRANSIENT);
                }
                if (sqlite3_step(compiledStatement) != SQLITE_DONE) {
                    NSLog(@"Save Error: %s", sqlite3_errmsg(database));
                } else {
                    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"UIAlertView" message:@"Record added" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
                    [alert show];
                    [alert release];
                    alert = nil;
                }
                sqlite3_finalize(compiledStatement);
            }
            sqlite3_close(database);
        }
        [loginStatus release];
        [connection release];
        [webData release];
    }

    -(IBAction)click:(id)sender {
        [self sendRequest];
        // this is to select username and password from the database.
        [self check];
    }

    For a particular user who logs in, only his profile must be updated, using the token ID and user ID. Please help me solve this problem. Thanks

    Read the article

  • PHP errors: Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt | Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt

    - by Tunji Gbadamosi
    I keep getting this error while trying to modify some tables. Here's my code:

    /** <- line 320
     *
     * @param array $guests_array
     * @param array $tickets_array
     * @param integer $seat_count
     * @param integer $order_count
     * @param integer $guest_count
     */
    private function book_guests($guests_array, $tickets_array, &$seat_count, &$order_count, &$guest_count) {
        /* @var $guests_array ArrayObject */
        $sucess = false;
        if (sizeof($guests_array) >= 1) {
            //$this->mysqli->autocommit(FALSE);
            //insert the guests into guest, person, order, seat
            $menu_stmt = $this->mysqli->prepare("SELECT id FROM menu WHERE name=?");
            $menu_stmt->bind_param('s', $menu);
            //$menu_stmt->bind_result($menu_id);
            $table_stmt = $this->mysqli->prepare("SELECT id FROM tables WHERE name=?");
            $table_stmt->bind_param('s', $table);
            //$table_stmt->bind_result($table_id);
            $seat_stmt = $this->mysqli->prepare("SELECT id FROM seat WHERE name=? AND table_id=?");
            $seat_stmt->bind_param('ss', $seat, $table_id);
            //$seat_stmt->bind_result($seat_id);
            for ($i = 0; $i < sizeof($guests_array); $i++) {
                $menu = $guests_array[$i]['menu'];
                $table = $guests_array[$i]['table'];
                $seat = $guests_array[$i]['seat'];
                //get menu id
                if ($menu_stmt->execute()) {
                    $menu_stmt->bind_result($menu_id);
                    while ($menu_stmt->fetch())
                        ;
                }
                $menu_stmt->close();
                //get table id
                if ($table_stmt->execute()) {
                    $table_stmt->bind_result($table_id);
                    while ($table_stmt->fetch())
                        ;
                }
                $table_stmt->close();
                //get seat id
                if ($seat_stmt->execute()) {
                    $seat_stmt->bind_result($seat_id);
                    while ($seat_stmt->fetch())
                        ;
                }
                $seat_stmt->close();
                $dob = $this->create_date($guests_array[$i]['dob_day'], $guests_array[$i]['dob_month'], $guests_array[$i]['dob_year']);
                $id = $this->add_person($guests_array[$i]['first_name'], $guests_array[$i]['surname'], $dob, $guests_array[$i]['sex']);
                if (is_string($id)) {
                    $seat = $this->add_seat($table_id, $seat_id, $id);
                    /* @var $tickets_array ArrayObject */
                    $guest = $this->add_guest($id, $tickets_array[$i + 1], $menu_id, $this->volunteer_id);
                    /* @var $order integer */
                    $order = $this->add_order($this->volunteer_id, $table_id, $seat_id, $id);
                    if ($guest == 1 && $seat == 1 && $order == 1) {
                        $seat_count += $seat;
                        $guest_count += $guest;
                        $order_count += $order;
                        $success = true;
                    }
                }
            }
        }
        return $success;
    } <- line 406

    Here are the warnings:

    The person PRSN10500000LZPH has been added to the guest table
    PRSN10500000LZPH added to table (1), seat (1)
    The order for person(PRSN10500000LZPH) is registered with volunteer (PRSN10500000LZPH) at table (1) and seat (1)
    PRSN10600000LZPH added to table (1), seat (13)
    The person PRSN10600000LZPH has been added to the guest table
    The order for person(PRSN10600000LZPH) is registered with volunteer (PRSN10500000LZPH) at table (1) and seat (13)

    Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 358
    Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 363
    Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 366
    Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 371
    Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 374
    Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 379

    PRSN10700000LZPH added to table (1), seat (13)
    The person PRSN10700000LZPH has been added to the guest table
    The order for person(PRSN10700000LZPH) is registered with volunteer (PRSN10500000LZPH) at table (1) and seat (13)

    Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 358
    Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 363
    Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 366
    Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 371
    Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 374
    Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 379

    PRSN10800000LZPH added to table (1), seat (13)
    The person PRSN10800000LZPH has been added to the guest table
    The order for person(PRSN10800000LZPH) is registered with volunteer (PRSN10500000LZPH) at table (1) and seat (13)

    Read the article

  • Java IOException (error=24): too many open files

    - by MattS
    I'm writing a genetic algorithm that needs to read/write lots of files. The fitness test for the GA is invoking a program called gradif, which takes a file as input and produces a file as output. Everything works except when I make the population size and/or the total number of generations of the genetic algorithm too large. Then, after so many generations, I start getting this: java.io.FileNotFoundException: testfiles/GradifOut29 (Too many open files). (I get it repeatedly for many different files; the index 29 was just the one that came up first the last time I ran it.) It's strange because I'm not getting the error after the first or second generation, but after a significant number of generations, which would suggest that each generation opens more files than it closes. But as far as I can tell I'm closing all of the files. The way the code is set up, the main() function is in the Population class, and the Population class contains an array of Individuals. Here's my code:

    Initial creation of the input files (they're random access so that I could reuse the same file across multiple generations):

    files = new RandomAccessFile[popSize];
    for (int i = 0; i < popSize; i++) {
        files[i] = new RandomAccessFile("testfiles/GradifIn" + i, "rw");
    }

    At the end of the entire program:

    for (int i = 0; i < individuals.length; i++) {
        files[i].close();
    }

    Inside the Individual's fitness test:

    FileInputStream fin = new FileInputStream("testfiles/GradifIn" + index);
    FileOutputStream fout = new FileOutputStream("testfiles/GradifOut" + index);
    Process process = Runtime.getRuntime().exec("./gradif");
    OutputStream stdin = process.getOutputStream();
    InputStream stdout = process.getInputStream();

    Then, later....

    try {
        fin.close();
        fout.close();
        stdin.close();
        stdout.close();
        process.getErrorStream().close();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }

    Then, afterwards, I append an 'END' to the files to make parsing them easier.

    FileWriter writer = new FileWriter("testfiles/GradifOut" + index, true);
    writer.write("END");
    try {
        writer.close();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }

    My redirection of stdin and stdout for gradif is from this answer. I tried using the try{close()}catch{} syntax to see if there was a problem with closing any of the files (there wasn't), and I got that from this answer. It should also be noted that the Individuals' fitness tests run concurrently.

    UPDATE: I've actually been able to narrow it down to the exec() call. In my most recent run, I first ran into trouble at generation 733 (with a population size of 100). Why are the earlier generations fine? I don't understand why, if there's no leaking, the algorithm can pass earlier generations but fail on later ones. And if there is leaking, then where is it coming from?

    UPDATE 2: In trying to figure out what's going on here, I would like to be able to see (preferably in real time) how many files the JVM has open at any given point. Is there an easy way to do that?

    Read the article

  • Building an HTML5 App with ASP.NET

    - by Stephen Walther
    I’m teaching several JavaScript and ASP.NET workshops over the next couple of months (thanks everyone!) and I thought it would be useful for my students to have a really easy to use JavaScript reference. I wanted a simple interactive JavaScript reference and I could not find one, so I decided to put together one of my own. I decided to use the latest features of JavaScript, HTML5, and jQuery such as local storage, offline manifests, and jQuery templates. What could be more appropriate than building a JavaScript Reference with JavaScript? You can try out the application by visiting: http://Superexpert.com/JavaScriptReference Because the app takes advantage of several advanced features of HTML5, it won’t work with Internet Explorer 6 (but really, you should stop using that browser). I have tested it with IE 8, Chrome 8, Firefox 3.6, and Safari 5. You can download the source for the JavaScript Reference application at the end of this article.

    Superexpert JavaScript Reference

    Let me provide you with a brief walkthrough of the app. When you first open the application, you see the following lookup screen: As you type the name of something from the JavaScript language, matching results are displayed: You can click the details link for any entry to view details for an entry in a modal dialog: Alternatively, you can click on any of the tabs -- Objects, Functions, Properties, Statements, Operators, Comments, or Directives -- to filter results by type of syntax. For example, you might want to see a list of all JavaScript built-in objects: You can log in to the application to make modifications to the reference database: After you log in, you can add, update, or delete entries in the reference database:

    HTML5 Local Storage

    The application takes advantage of HTML5 local storage to store all of the reference entries on the local browser. IE 8, Chrome 8, Firefox 3.6, and Safari 5 all support local storage. When you open the application for the first time, all of the reference entries are transferred to the browser. The data is stored persistently. Even if you shut down your computer and return to the application many days later, the data does not need to be transferred again. Whenever you open the application, the app checks with the server to see if any of the entries have been updated on the server. If there have been updates, then only the updates are transferred to the browser and the updates are merged with the existing entries in local storage. After the reference database has been transferred to your browser once, only changes are transferred in the future. You get two benefits from using local storage. First, the application loads very fast and works very fast after the data has been loaded once. The application does not query the server whenever you filter or view entries. All of the data is persisted in the browser. Second, you can browse the JavaScript reference even when you are not connected to the Internet (when you are on the proverbial airplane). The JavaScript Reference works as an offline application for browsers that support offline applications (unfortunately, not IE). When using Google Chrome, you can easily view the contents of local storage by selecting Tools, Developer Tools (CTRL-SHIFT I) and selecting Storage, Local Storage: The JavaScript Reference app stores two items in local storage: entriesLastUpdated and entries.
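    A minimal sketch of the caching pattern described above, using the two keys the post names (entries and entriesLastUpdated); the id-based merge logic is an assumption for illustration, not the application's actual code:

    function mergeEntries(entries, updates) {
        // Replace changed entries by id; append entries that are new.
        for (var i = 0; i < updates.length; i++) {
            var found = false;
            for (var j = 0; j < entries.length; j++) {
                if (entries[j].id === updates[i].id) {
                    entries[j] = updates[i];
                    found = true;
                    break;
                }
            }
            if (!found) {
                entries.push(updates[i]);
            }
        }
        return entries;
    }

    function applyServerUpdates(updates) {
        // Load the cached entries, merge in the server's change set, and save.
        var entries = JSON.parse(localStorage.getItem("entries") || "[]");
        localStorage.setItem("entries", JSON.stringify(mergeEntries(entries, updates)));
        localStorage.setItem("entriesLastUpdated", new Date().toUTCString());
    }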
    HTML5 Offline App

    For browsers that support HTML5 offline applications – Chrome 8 and Firefox 3.6 but not Internet Explorer – you do not need to be connected to the Internet to use the JavaScript Reference. The JavaScript Reference can execute entirely on your machine just like any other desktop application. When you first open the application with Firefox, you are presented with the following warning: Notice the notification bar that asks whether you want to accept offline content. If you click the Allow button then all of the files (generated ASPX, images, CSS, JavaScript) needed for the JavaScript Reference will be stored on your local computer.

    Automatic Script Minification and Combination

    All of the custom JavaScript files are combined and minified automatically whenever the application is built with Visual Studio. All of the custom scripts are contained in a folder named App_Scripts: When you perform a build, the combine.js and combine.debug.js files are generated. The Combine.config file contains the list of files that should be combined (importantly, it specifies the order in which the files should be combined). Here’s the contents of the Combine.config file:

    <?xml version="1.0"?>
    <combine>
      <scripts>
        <file path="compat.js" />
        <file path="storage.js" />
        <file path="serverData.js" />
        <file path="entriesHelper.js" />
        <file path="authentication.js" />
        <file path="default.js" />
      </scripts>
    </combine>

    jQuery and jQuery UI

    The JavaScript Reference application takes heavy advantage of jQuery and jQuery UI. In particular, the application uses jQuery templates to format and display the reference entries. Each of the separate templates is stored in a separate ASP.NET user control in a folder named Templates: The contents of the user controls (and therefore the templates) are combined in the default.aspx page:

    <!-- Templates -->
    <user:EntryTemplate runat="server" />
    <user:EntryDetailsTemplate runat="server" />
    <user:BrowsersTemplate runat="server" />
    <user:EditEntryTemplate runat="server" />
    <user:EntryDetailsCloudTemplate runat="server" />

    When the default.aspx page is requested, all of the templates are retrieved in a single page.

    WCF Data Services

    The JavaScript Reference application uses WCF Data Services to retrieve and modify database data. The application exposes a server-side WCF Data Service named EntryService.svc that supports querying, adding, updating, and deleting entries. jQuery Ajax calls are made against the WCF Data Service to perform the database operations from the browser. The OData protocol makes this easy. Authentication is handled on the server with a ChangeInterceptor. Only authenticated users are allowed to update the JavaScript Reference entry database.

    JavaScript Unit Tests

    In order to build the JavaScript Reference application, I depended on JavaScript unit tests. I needed the unit tests, in particular, to write the JavaScript merge functions which merge entry change sets from the server with existing entries in browser local storage. In order for unit tests to be useful, they need to run fast. I ran my unit tests after each build. For this reason, I did not want to run the unit tests within the context of a browser. Instead, I ran the unit tests using server-side JavaScript (the Microsoft Script Control). The source code that you can download at the end of this blog entry includes a project named JavaScriptReference.UnitTests that contains all of the JavaScript unit tests.
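    A hedged sketch of what one of those fast, browser-free unit tests might look like, exercising the mergeEntries() sketch from the local-storage section above; assertEqual() is a hypothetical helper, not the project's actual test harness:

    function assertEqual(actual, expected, message) {
        if (actual !== expected) {
            throw new Error("FAIL: " + message + " (expected " + expected + ", got " + actual + ")");
        }
    }

    function testMergeReplacesChangedEntry() {
        var entries = [{ id: 1, name: "alert" }, { id: 2, name: "Array" }];
        var updates = [{ id: 2, name: "Array (updated)" }];
        var merged = mergeEntries(entries, updates);
        assertEqual(merged.length, 2, "merge must not duplicate entries");
        assertEqual(merged[1].name, "Array (updated)", "changed entries must be replaced");
    }

    testMergeReplacesChangedEntry(); // throws if the merge logic regresses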
    JavaScript Integration Tests

    Because not every feature of an application can be tested by unit tests, the JavaScript Reference application also includes integration tests. I wrote the integration tests using Selenium RC in combination with ASP.NET Unit Tests. The Selenium tests run against all of the target browsers for the JavaScript Reference application: IE 8, Chrome 8, Firefox 3.6, and Safari 5. For example, here is the Selenium test that checks whether authenticating with a valid user name and password correctly switches the application to Admin Mode:

    [TestMethod]
    [HostType("ASP.NET")]
    [UrlToTest("http://localhost:26303/JavaScriptReference")]
    [AspNetDevelopmentServerHost(@"C:\Users\Stephen\Documents\Repos\JavaScriptReference\JavaScriptReference\JavaScriptReference", "/JavaScriptReference")]
    public void TestValidLogin() {
        // Run test for each controller
        foreach (var controller in this.Controllers) {
            var selenium = controller.Value;
            var browserName = controller.Key;
            // Open reference page.
            selenium.Open("http://localhost:26303/JavaScriptReference/default.aspx");
            // Click login button displays login form
            selenium.Click("btnLogin");
            Assert.IsTrue(selenium.IsVisible("loginForm"), "Login form appears after clicking btnLogin");
            // Enter user name and password
            selenium.Type("userName", "Admin");
            selenium.Type("password", "secret");
            selenium.Click("btnDoLogin");
            // Should set adminMode == true
            selenium.WaitForCondition("selenium.browserbot.getCurrentWindow().adminMode==true", "30000");
        }
    }

    The results for running the Selenium tests appear in the Test Results window just like the unit tests: The Selenium tests take much longer to execute than the unit tests. However, they provide test coverage for actual browsers. Furthermore, if you are using Visual Studio ALM, you can run the tests automatically every night as part of your standard nightly build. You can view the Selenium tests by opening the JavaScriptReference.QATests project.

    Summary

    I plan to write more detailed blog entries about this application over the next week. I want to discuss each of the features – HTML5 local storage, HTML5 offline apps, jQuery templates, automatic script combining and minification, JavaScript unit tests, Selenium tests -- in more detail. You can download the source code for the JavaScript Reference application by clicking the following link: Download You need Visual Studio 2010 and ASP.NET 4 to build the application. Before running the JavaScript unit tests, install the Microsoft Script Control. Before running the Selenium tests, start the Selenium server by running the StartSeleniumServer.bat file located in the JavaScriptReference.QATests project.

    Read the article

  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
    I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site — then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the release of the ASP.NET MVC 4 Beta, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and how to use OData with the Web API.

    Creating an ASP.NET Web API Controller

    The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template: A brand new API controller looks like this:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Http;
    using System.Web.Http;

    namespace MyWebAPIApp.Controllers {
        public class MovieController : ApiController {
        }
    }

    An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class.

    Using jQuery to Retrieve, Insert, Update, and Delete Data

    Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this:

    namespace MyWebAPIApp.Models {
        public class Movie {
            public int Id { get; set; }
            public string Title { get; set; }
            public string Director { get; set; }
        }
    }

    Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page.

    Getting a Single Record with the ASP.NET Web API

    To support retrieving a single movie from the server, we need to add a Get method to our API controller:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;
    using MyWebAPIApp.Models;

    namespace MyWebAPIApp.Controllers {
        public class MovieController : ApiController {
            public Movie GetMovie(int id) {
                // Return movie by id
                if (id == 1) {
                    return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" };
                }
                // Otherwise, movie was not found
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
        }
    }

    In the code above, the GetMovie() method accepts the Id of a movie.
    If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns a 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE — is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request then any API controller method with a name that starts with “Get” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this:

    routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );

    We can invoke our GetMovie() controller action with the jQuery code in the following HTML page:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Get Movie</title>
    </head>
    <body>
        <div>
            Title: <span id="title"></span>
        </div>
        <div>
            Director: <span id="director"></span>
        </div>
        <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
        <script type="text/javascript">
            getMovie(1, function (movie) {
                $("#title").html(movie.Title);
                $("#director").html(movie.Director);
            });

            function getMovie(id, callback) {
                $.ajax({
                    url: "/api/Movie",
                    data: { id: id },
                    type: "GET",
                    contentType: "application/json;charset=utf-8",
                    statusCode: {
                        200: function (movie) {
                            callback(movie);
                        },
                        404: function () {
                            alert("Not Found!");
                        }
                    }
                });
            }
        </script>
    </body>
    </html>

    In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMovie() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method:

    Getting a Set of Records with the ASP.NET Web API

    Let’s modify our Movie API controller so that it returns a collection of movies.
    The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;
    using MyWebAPIApp.Models;

    namespace MyWebAPIApp.Controllers {
        public class MovieController : ApiController {
            public IEnumerable<Movie> ListMovies() {
                return new List<Movie> {
                    new Movie {Id=1, Title="Star Wars", Director="Lucas"},
                    new Movie {Id=1, Title="King Kong", Director="Jackson"},
                    new Movie {Id=1, Title="Memento", Director="Nolan"}
                };
            }
        }
    }

    Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method):

    routes.MapHttpRoute(
        name: "ActionApi",
        routeTemplate: "api/{controller}/{action}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );

    This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>List Movies</title>
    </head>
    <body>
        <div id="movies"></div>
        <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
        <script type="text/javascript">
            listMovies(function (movies) {
                var strMovies = "";
                $.each(movies, function (index, movie) {
                    strMovies += "<div>" + movie.Title + "</div>";
                });
                $("#movies").html(strMovies);
            });

            function listMovies(callback) {
                $.ajax({
                    url: "/api/Movie/ListMovies",
                    data: {},
                    type: "GET",
                    contentType: "application/json;charset=utf-8",
                }).then(function (movies) {
                    callback(movies);
                });
            }
        </script>
    </body>
    </html>

    Inserting a Record with the ASP.NET Web API

    Now let’s modify our Movie API controller so it supports creating new records:

    public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) {
        // Add movieToCreate to the database and update primary key
        movieToCreate.Id = 23;
        // Build a response that contains the location of the new movie
        var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created);
        var relativePath = "/api/movie/" + movieToCreate.Id;
        response.Headers.Location = new Uri(Request.RequestUri, relativePath);
        return response;
    }

    The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /Movie/PostMovie – just as long as the method is invoked within the context of an HTTP POST request. The following HTML page invokes the PostMovie() method.
    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Create Movie</title>
    </head>
    <body>
        <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
        <script type="text/javascript">
            var movieToCreate = {
                title: "The Hobbit",
                director: "Jackson"
            };

            createMovie(movieToCreate, function (newMovie) {
                alert("New movie created with an Id of " + newMovie.Id);
            });

            function createMovie(movieToCreate, callback) {
                $.ajax({
                    url: "/api/Movie",
                    data: JSON.stringify(movieToCreate),
                    type: "POST",
                    contentType: "application/json;charset=utf-8",
                    statusCode: {
                        201: function (newMovie) {
                            callback(newMovie);
                        }
                    }
                });
            }
        </script>
    </body>
    </html>

    This page creates a new movie (The Hobbit) by calling the createMovie() method. The page simply displays the Id of the new movie: The HTTP POST operation is performed with the following call to the jQuery $.ajax() method:

    $.ajax({
        url: "/api/Movie",
        data: JSON.stringify(movieToCreate),
        type: "POST",
        contentType: "application/json;charset=utf-8",
        statusCode: {
            201: function (newMovie) {
                callback(newMovie);
            }
        }
    });

    Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code. The HttpStatusCode.Created value returned from the PostMovie() method returns a 201 status code.

    Updating a Record with the ASP.NET Web API

    Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request:

    public void PutMovie(Movie movieToUpdate) {
        if (movieToUpdate.Id == 1) {
            // Update the movie in the database
            return;
        }
        // If you can't find the movie to update
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }

    Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Put Movie</title>
    </head>
    <body>
        <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
        <script type="text/javascript">
            var movieToUpdate = {
                id: 1,
                title: "The Hobbit",
                director: "Jackson"
            };

            updateMovie(movieToUpdate, function () {
                alert("Movie updated!");
            });

            function updateMovie(movieToUpdate, callback) {
                $.ajax({
                    url: "/api/Movie",
                    data: JSON.stringify(movieToUpdate),
                    type: "PUT",
                    contentType: "application/json;charset=utf-8",
                    statusCode: {
                        200: function () {
                            callback();
                        },
                        404: function () {
                            alert("Movie not found!");
                        }
                    }
                });
            }
        </script>
    </body>
    </html>

    Deleting a Record with the ASP.NET Web API

    Here’s the code for deleting a movie:

    public HttpResponseMessage DeleteMovie(int id) {
        // Delete the movie from the database
        // Return status code
        return new HttpResponseMessage(HttpStatusCode.NoContent);
    }

    This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204).
The following page illustrates how you can invoke the DeleteMovie() action:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Delete Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        deleteMovie(1, function () {
            alert("Movie deleted!");
        });

        function deleteMovie(id, callback) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify({ id: id }),
                type: "DELETE",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    204: function () {
                        callback();
                    }
                }
            });
        }
    </script>
</body>
</html>

Performing Validation

How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let's modify our Movie class so it includes some of the standard validation attributes:

using System.ComponentModel.DataAnnotations;

namespace MyWebAPIApp.Models
{
    public class Movie
    {
        public int Id { get; set; }

        [Required(ErrorMessage="Title is required!")]
        [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")]
        public string Title { get; set; }

        [Required(ErrorMessage="Director is required!")]
        public string Director { get; set; }
    }
}

In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let's modify our PostMovie() action to validate a movie before adding the movie to the database:

public HttpResponseMessage PostMovie(Movie movieToCreate)
{
    // Validate movie
    if (!ModelState.IsValid)
    {
        var errors = new JsonArray();
        foreach (var prop in ModelState.Values)
        {
            if (prop.Errors.Any())
            {
                errors.Add(prop.Errors.First().ErrorMessage);
            }
        }
        return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
    }

    // Add movieToCreate to the database and update primary key
    movieToCreate.Id = 23;

    // Build a response that contains the location of the new movie
    var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created);
    var relativePath = "/api/movie/" + movieToCreate.Id;
    response.Headers.Location = new Uri(Request.RequestUri, relativePath);
    return response;
}

If ModelState.IsValid has the value false then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director properties – can have multiple errors. In the code above, only the first error message is copied over. The JSON array is returned with a Bad Request (400) status code.
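If you wanted the client to see every message instead of just the first one per property, a small variant of the validation block above (my modification, assuming the same System.Json types used in the article's beta-era code) could flatten all of the errors:

if (!ModelState.IsValid)
{
    var errors = new JsonArray();
    // Collect every error message for every property,
    // not just the first one per property.
    foreach (var prop in ModelState.Values)
    {
        foreach (var error in prop.Errors)
        {
            errors.Add(error.ErrorMessage);
        }
    }
    return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
}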
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Create Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        var movieToCreate = {
            title: "The Hobbit",
            director: ""
        };

        createMovie(
            movieToCreate,
            function (newMovie) {
                alert("New movie created with an Id of " + newMovie.Id);
            },
            function (errors) {
                var strErrors = "";
                $.each(errors, function (index, err) {
                    strErrors += "*" + err + "\n";
                });
                alert(strErrors);
            }
        );

        function createMovie(movieToCreate, success, fail) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify(movieToCreate),
                type: "POST",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    201: function (newMovie) {
                        success(newMovie);
                    },
                    400: function (xhr) {
                        var errors = JSON.parse(xhr.responseText);
                        fail(errors);
                    }
                }
            });
        }
    </script>
</body>
</html>

The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned then there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned then there was a validation error. The validation errors are retrieved from the XMLHttpRequest responseText property. The error messages are displayed in an alert:

(Please don't use JavaScript alert dialogs to display validation errors, I just did it this way out of pure laziness.)

This validation code in our PostMovie() method is pretty generic. There is nothing about this code that is specific to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global Validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation

His validation filter looks like this:

using System.Json;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

namespace MyWebAPIApp.Filters
{
    public class ValidationActionFilter : ActionFilterAttribute
    {
        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            var modelState = actionContext.ModelState;
            if (!modelState.IsValid)
            {
                dynamic errors = new JsonObject();
                foreach (var key in modelState.Keys)
                {
                    var state = modelState[key];
                    if (state.Errors.Any())
                    {
                        errors[key] = state.Errors.First().ErrorMessage;
                    }
                }
                actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
            }
        }
    }
}

And you can register the validation filter in the Application_Start() method in the Global.asax file like this:

GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter());

After you register the Validation filter, validation error messages are returned from any API controller action method automatically when validation fails. You don't need to add any special logic to any of your API controller actions to take advantage of the filter.
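Registering the filter globally applies it to every API controller. If you only wanted it on selected controllers or actions, an ActionFilterAttribute can in general also be applied directly as an attribute – a small sketch of that option (my example, reusing the filter class above):

using MyWebAPIApp.Filters;
using System.Web.Http;

namespace MyWebAPIApp.Controllers
{
    // The filter runs only for this controller's actions,
    // instead of for every API controller in the application.
    [ValidationActionFilter]
    public class MovieController : ApiController
    {
        // ... actions as shown earlier ...
    }
}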
Querying using OData

The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org

For example, here are some of the query options which you can use with OData:

· $orderby – Enables you to retrieve results in a certain order.
· $top – Enables you to retrieve a certain number of results.
· $skip – Enables you to skip over a certain number of results (use with $top for paging).
· $filter – Enables you to filter the results returned.

The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies:

public IQueryable<Movie> GetMovies()
{
    return new List<Movie>
    {
        new Movie {Id=1, Title="Star Wars", Director="Lucas"},
        new Movie {Id=2, Title="King Kong", Director="Jackson"},
        new Movie {Id=3, Title="Willow", Director="Lucas"},
        new Movie {Id=4, Title="Shrek", Director="Smith"},
        new Movie {Id=5, Title="Memento", Director="Nolan"}
    }.AsQueryable();
}

If you enter the following URL in your browser:

/api/movie?$top=2&$orderby=Title

Then you will limit the movies returned to the top 2 in order of the movie Title. You will get the following results:

By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time. (A rough sketch of how these query options map onto LINQ operators appears after the summary at the end of this entry.)

The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples:

Return every movie directed by Lucas:
/api/movie?$filter=Director eq 'Lucas'

Return every movie which has a title which starts with 'S':
/api/movie?$filter=startswith(Title,'S')

Return every movie which has an Id greater than 2:
/api/movie?$filter=Id gt 2

The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption

Summary

The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the ASP.NET MVC 4 Beta release. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option.

I'm excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.
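As mentioned in the OData section above, here is a rough sketch (mine, not from the article) of what a query like /api/movie?$top=2&$orderby=Title conceptually does to the IQueryable<Movie> returned by GetMovies(). The real translation is performed by the Web API's OData support; the LINQ below only illustrates the equivalent operators:

using System.Linq;
using MyWebAPIApp.Models;

public static class ODataSketch
{
    // Conceptual equivalent of: /api/movie?$top=2&$orderby=Title
    public static IQueryable<Movie> TopTwoByTitle(IQueryable<Movie> movies)
    {
        return movies
            .OrderBy(m => m.Title)   // $orderby=Title
            .Take(2);                // $top=2
    }

    // Conceptual equivalent of paging with $skip and $top:
    // /api/movie?$orderby=Title&$skip=10&$top=10
    public static IQueryable<Movie> SecondPage(IQueryable<Movie> movies)
    {
        return movies
            .OrderBy(m => m.Title)
            .Skip(10)                // $skip=10
            .Take(10);               // $top=10
    }
}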

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspx

If we are using SignalR, the connection lifecycle is handled by itself very well. For example, when we connect to a SignalR service from the browser through the SignalR JavaScript client, the connection will be established. And if we refresh the page, close the tab or browser, or navigate to another URL, the connection will be closed automatically. This behavior is well documented:

In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method.

But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks the browser's address-change event and, based on the user-defined route table, loads the proper view and controller. Hence in AngularJS the address changes but the web page stays loaded. All changes to the page content are triggered by Ajax, so there are no page unload and load events. This is the reason why SignalR cannot handle disconnects correctly when working with AngularJS.

If we dig into the source code of the SignalR JavaScript client we will find the code below. It monitors the browser page "unload" and "beforeunload" events and sends the "stop" message to the server to terminate the connection. But in AngularJS the page change events are hijacked, so SignalR will not receive them and will not stop the connection.

// wire the stop handler for when the user leaves the page
_pageWindow.bind("unload", function () {
    connection.log("Window unloading, stopping the connection.");
    connection.stop(asyncAbort);
});

if (isFirefox11OrGreater) {
    // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close.
    // #2400
    _pageWindow.bind("beforeunload", function () {
        // If connection.stop() runs in beforeunload and fails, it will also fail
        // in unload unless connection.stop() runs after a timeout.
        window.setTimeout(function () {
            connection.stop(asyncAbort);
        }, 0);
    });
}

Reproducing the Problem

In the code below I created a very simple example to demonstrate this issue. Here is the SignalR server-side code:

public class GreetingHub : Hub
{
    public override Task OnConnected()
    {
        Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId));
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId));
        return base.OnDisconnected();
    }

    public void Hello(string user)
    {
        Clients.All.hello(string.Format("Hello, {0}!", user));
    }
}

Below is the configuration code which hosts the SignalR hub in an ASP.NET Web API project with IIS Express.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Map("/signalr", map =>
        {
            map.UseCors(CorsOptions.AllowAll);
            map.RunSignalR(new HubConfiguration()
            {
                EnableJavaScriptProxies = false
            });
        });
    }
}

Since we will host the AngularJS application in Node.js in another process and on another port, the SignalR connection will be cross-domain, so I need to enable CORS above.

On the client side I have a Node.js file that hosts the AngularJS application as a web server. You can use any web server you like, such as IIS, Apache, etc. Below is the "index.html" page, which contains a navigation bar so that I can change the page/state. As you can see, I added jQuery, AngularJS, the SignalR JavaScript client library, as well as my AngularJS entry source file "app.js".

<html data-ng-app="demo">
<head>
    <script type="text/javascript" src="jquery-2.1.0.js"></script>
    <script type="text/javascript" src="angular.js"></script>
    <script type="text/javascript" src="angular-ui-router.js"></script>
    <script type="text/javascript" src="jquery.signalR-2.0.3.js"></script>
    <script type="text/javascript" src="app.js"></script>
</head>
<body>
    <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1>
    <div>
        <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> |
        <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
    </div>
    <div data-ui-view></div>
</body>
</html>

Below is the "app.js". My SignalR logic is in the "View1" page and it will connect to the server once the controller is executed. The user can specify a user name and send it to the server; all clients located on this page will receive the server-side greeting message through SignalR.

'use strict';

var app = angular.module('demo', ['ui.router']);

app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
    $stateProvider.state('view1', {
        url: '/view1',
        templateUrl: 'view1.html',
        controller: 'View1Ctrl' });

    $stateProvider.state('view2', {
        url: '/view2',
        templateUrl: 'view2.html',
        controller: 'View2Ctrl' });

    $locationProvider.html5Mode(true);
}]);

app.value('$', $);
app.value('endpoint', 'http://localhost:60448');
app.value('hub', 'GreetingHub');

app.controller('View1Ctrl', function ($scope, $, endpoint, hub) {
    $scope.user = '';
    $scope.response = '';

    $scope.greeting = function () {
        proxy.invoke('Hello', $scope.user)
            .done(function () {})
            .fail(function (error) {
                console.log(error);
            });
    };

    var connection = $.hubConnection(endpoint);
    var proxy = connection.createHubProxy(hub);
    proxy.on('hello', function (response) {
        $scope.$apply(function () {
            $scope.response = response;
        });
    });
    connection.start()
        .done(function () {
            console.log('signalr connection established');
        })
        .fail(function (error) {
            console.log(error);
        });
});

app.controller('View2Ctrl', function ($scope, $) {
});

When we go to View1 the server-side "OnConnected" method will be invoked, as below. And when any page sends the message to the server, all clients will get the response. If we close one of the clients, the server-side "OnDisconnected" method will be invoked, which is correct. But if we click the "View 2" link in the page, the "OnDisconnected" method will not be invoked even though the content and the browser address have changed.
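One way to make this visible from the server side (my addition, not part of Shaun's sample) is a variant of the GreetingHub above that keeps a live-connection counter. If disconnects were being raised correctly, the counter would drop on every navigation; with the AngularJS app it should only climb:

using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class GreetingHub : Hub
{
    // Live connection count across all clients of this hub.
    private static int _liveConnections;

    public override Task OnConnected()
    {
        Debug.WriteLine("Live connections: " + Interlocked.Increment(ref _liveConnections));
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        Debug.WriteLine("Live connections: " + Interlocked.Decrement(ref _liveConnections));
        return base.OnDisconnected();
    }
}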
This might cause many SignalR connections to remain open between the client and the server. Below is what happened after I clicked the "View 1" and "View 2" links four times. As you can see, there are 4 live connections.

Solution

Since the root cause of this issue is that AngularJS hijacks the page events that SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler for "$stateChangeStart", and invoked the "stop" method of "connection" if its state was not "disconnected".

var connection;
app.run(['$rootScope', function ($rootScope) {
    $rootScope.$on('$stateChangeStart', function () {
        if (connection && connection.state && connection.state !== 4 /* disconnected */) {
            console.log('signalr connection abort');
            connection.stop();
        }
    });
}]);

Now if we refresh the page and navigate to View 1, the connection will be opened. In this state, if we click the "View 2" link, the content will be changed and the SignalR connection will be closed automatically.

Summary

In this post I demonstrated an issue when using SignalR with AngularJS: the connection cannot be closed automatically when we navigate to another page/state in AngularJS. The solution I described above is to make the SignalR connection a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here.

Making the SignalR connection a global variable might not be the best solution; it just keeps the demo simple. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since an AngularJS factory is a singleton object, we can safely put the connection variable in the factory function scope.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

· Storage: Import/Export Hard Disk Drives to your Storage Accounts
· HDInsight: General Availability of our Hadoop Service in the cloud
· Virtual Machines: New VM Gallery, ACL support for VIPs
· Web Sites: WebSocket and Remote Debugging Support
· Notification Hubs: Segmented customer push notification support with tag expressions
· TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
· Developer Analytics: New Relic support for Web Sites + Mobile Services
· Service Bus: Support for partitioned queues and topics
· Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Storage: Import/Export Hard Disk Drives to Windows Azure

I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

Encrypted Transport

Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

How to Import/Export your first Hard Drive of Data

You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs.

It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign-up for the Import/Export preview):

Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required:

For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

HDInsight: 100% Compatible Hadoop Service in the Cloud

Last week we announced the general availability release of Windows Azure HDInsight.
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools:

The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

Virtual Machines: VM Gallery Enhancements

Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal:

The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

· Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
· Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.
· MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
· Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
· Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance:

This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network you could alternatively limit traffic from public IPs that can access your workloads:

Here are the default behaviors for ACLs in Windows Azure:

· By default (i.e. no rules specified), all traffic is permitted.
· When using only Permit rules, all other traffic is denied.
· When using only Deny rules, all other traffic is permitted.
· When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

Web Sites: Web Sockets Support

With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it.

You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on":

Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites.

With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it:

When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
The debugger will then stop the web site's execution when it hits any break points that you have set within your web application's project inside Visual Studio. For example, below I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio:

Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab:

When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

New support for using tag expressions to enable advanced customer segmentation

With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

· Offers based on multiple preferences – e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
· Push content to multiple segments in a single message – e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan
· Avoid presenting subsets of a segment with irrelevant content – e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

· Social: To "all my group except me" - group:id && !user:id
· Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
· Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
· Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.
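To make the first of those additional use cases concrete, here is a hypothetical sketch of the "all my group except me" scenario, following the same SendNotificationAsync() pattern shown above. The groupId and senderUserId values, and the surrounding variable names, are my own illustration, not part of the original post:

// 'hub' and 'messagePayload' are assumed to be set up exactly as in the
// restaurant-chain snippet above.
var groupId = "42";        // assumed group identifier
var senderUserId = "1337"; // assumed id of the user sending the update

// Tag format follows the article's "group:id && !user:id" example:
// deliver to every member of the group except the sender.
var expression = string.Format("group:{0} && !user:{1}", groupId, senderUserId);

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, expression);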
TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass on it, the app is automatically deployed on Windows Azure with zero manual intervention or work required.

The below screen-shots demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services:

I can access the repository within Visual Studio 2013 and easily make commits to it (as well as branch, merge and do other tasks).  Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository:

The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can go even further and set it up so that the app is automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build + tests pass).  Enabling this is now really easy.

To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected:

When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”:

Once we do this we’ll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”):

When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier:

Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass, the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site:

This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this – and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it:

Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier, so you can enable it even without paying anything):

Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse a variety of great add-on services – including New Relic:

Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:

Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account):

Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site:

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu:

This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there are both 32-bit and 64-bit editions of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal):

Once we install the NuGet package we are all set to go.  We simply re-publish the web site to Windows Azure, and New Relic will automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health.  We can do this by clicking on the “Add Ons” tab on the left-hand side of the Windows Azure Management Portal, then selecting the New Relic add-on we signed up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account:

When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having written a single line of monitoring code.
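Everything above came from the agent automatically.  If you also want to chart app-specific numbers, the New Relic .NET agent exposes a small custom-metrics API as well – a hedged sketch (the metric names and the service class are made-up examples):

    // Hedged sketch: assumes the New Relic .NET agent API assembly is
    // referenced. The "Custom/..." metric names are illustrative.
    public class CheckoutService
    {
        public void CompleteOrder(decimal total)
        {
            // Record a custom metric that surfaces in New Relic dashboards.
            NewRelic.Api.Agent.NewRelic.RecordMetric("Custom/OrderTotal", (float)total);

            // Bump a simple counter for each completed order.
            NewRelic.Api.Agent.NewRelic.IncrementCounter("Custom/OrdersCompleted");
        }
    }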
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc.)
- Error information, including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information – including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to produce these reports.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning lets you achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic; the multiple messaging stores also provide higher availability.

You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic:

Read this article to learn more about partitioned queues and topics and how to take advantage of them today.
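If you prefer creating messaging entities from code instead of the portal, the same setting is exposed on the entity description objects.  A minimal sketch, assuming the .NET Service Bus SDK (version 2.1 or later); the connection string and queue name are placeholders:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    // Hedged sketch: substitute your own connection string and queue name.
    var ns = NamespaceManager.CreateFromConnectionString(
        "<your Service Bus connection string>");

    var description = new QueueDescription("orders")
    {
        // Spreads the queue across multiple brokers and message stores.
        EnablePartitioning = true
    };

    if (!ns.QueueExists("orders"))
        ns.CreateQueue(description);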
Billing: New Billing Alert Service

Today’s Windows Azure update introduces a new Billing Alert Service Preview that lets you get proactive email notifications when your Windows Azure bill goes above a monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month.

With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert, first sign up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have set up, and navigate to the new Alerts tab that is available:

The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month:

The Billing Alert Service will evolve to support additional aspects of your bill as well as multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback.

Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • New Bundling and Minification Support (ASP.NET 4.5 Series)

    - by ScottGu
This is the sixth in a series of blog posts I'm doing on ASP.NET 4.5. The next release of .NET and Visual Studio includes a ton of great new features and capabilities.  With ASP.NET 4.5 you'll see a bunch of really nice improvements with both Web Forms and MVC - as well as in the core ASP.NET base foundation that both are built upon.

Today’s post covers some of the work we are doing to add built-in support for bundling and minification into ASP.NET - which makes it easy to improve the performance of applications.  This feature can be used by all ASP.NET applications, including both ASP.NET MVC and ASP.NET Web Forms solutions.

Basics of Bundling and Minification

As more and more people use mobile devices to surf the web, it is becoming increasingly important that the websites and apps we build perform well with them. We’ve all tried loading sites on our smartphones – only to eventually give up in frustration as they load slowly over a slow cellular network.  If your site/app loads slowly like that, you are likely losing potential customers because of bad performance.  Even with powerful desktop machines, the load time of your site and its perceived performance can have an enormous impact on how customers perceive it.

Most websites today are made up of multiple JavaScript and CSS files to separate the concerns and keep the code base tight. While this is a good practice from a coding point of view, it often has some unfortunate consequences for the overall performance of the website.  Multiple JavaScript and CSS files require multiple HTTP requests from a browser – which in turn can slow down page load time.

Simple Example

Below I’ve opened a local website in IE9 and recorded the network traffic using IE’s built-in F12 developer tools. As shown below, the website consists of 5 CSS and 4 JavaScript files which the browser has to download. Each file is currently requested separately by the browser and returned by the server, and the process can take a significant amount of time proportional to the number of files in question.

Bundling

ASP.NET is adding a feature that makes it easy to “bundle” or “combine” multiple CSS and JavaScript files into fewer HTTP requests. This causes the browser to request a lot fewer files, which in turn reduces the time it takes to fetch them.

Below is an updated version of the above sample that takes advantage of this new bundling functionality (making only one request for the JavaScript and one request for the CSS): The browser now has to send fewer requests to the server. The content of the individual files has been bundled/combined into the same response, but the content of the files remains the same - so the overall file size is exactly the same as before the bundling.

But notice how, even on a local dev machine (where the network latency between the browser and server is minimal), the act of bundling the CSS and JavaScript files together still manages to reduce the overall page load time by almost 20%.  Over a slow network the performance improvement would be even better.

Minification

The next release of ASP.NET is also adding a new feature that makes it easy to reduce, or “minify”, the download size of the content as well.  This is a process that removes whitespace, comments and other unneeded characters from both CSS and JavaScript. The result is smaller files, which will download and load in a browser faster.
The graph below shows the performance gain we are seeing when both bundling and minification are used together: Even on my local dev box (where the network latency is minimal), we now have a 40% performance improvement from where we originally started.  On slow networks (and especially with international customers), the gains would be even more significant.

Using Bundling and Minification inside ASP.NET

The upcoming release of ASP.NET makes it really easy to take advantage of bundling and minification within projects and see performance gains like in the scenario above. The way it does this allows you to avoid having to run custom tools as part of your build process – instead ASP.NET has added runtime support to perform the bundling/minification for you dynamically (caching the results to make sure perf is great).  This enables a really clean development experience and makes it super easy to start to take advantage of these new features.

Let’s assume that we have a simple project that has 4 JavaScript files and 6 CSS files:

Bundling and Minifying the .css files

Let’s say you wanted to reference all of the stylesheets in the “Styles” folder above on a page.  Today you’d have to add multiple CSS references to get all of them – which would translate into 6 separate HTTP requests. The new bundling/minification feature instead allows you to bundle and minify all of the .css files in the Styles folder – simply by sending a URL request to the folder (in this case “styles”) with an appended “/css” path after it (see the reference sketch below).  This will cause ASP.NET to scan the directory, bundle and minify the .css files within it, and send back a single HTTP response with all of the CSS content to the browser.  You don’t need to run any tools or pre-processors to get this behavior.  This enables you to cleanly separate your CSS into separate logical .css files and maintain a very clean development experience – while not taking a performance hit at runtime for doing so.  The Visual Studio designer will also honor the new bundling/minification logic – so you’ll still get a WYSIWYG designer experience inside VS.

Bundling and Minifying the JavaScript files

Like the CSS approach above, if we wanted to bundle and minify all of our JavaScript into a single response we could send a URL request to the folder (in this case “scripts”) with an appended “/js” path after it (again, see the sketch below).  This will cause ASP.NET to scan the directory, bundle and minify the .js files within it, and send back a single HTTP response with all of the JavaScript content to the browser.  Again – no custom tools or build steps were required in order to get this behavior.  And it works with all browsers.
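As a sketch of what those folder-level references look like in a page (the paths assume the “Styles” and “Scripts” folders used in this walkthrough):

    <!-- One request for all of the CSS in the Styles folder -->
    <link href="styles/css" rel="stylesheet" type="text/css" />

    <!-- One request for all of the JavaScript in the Scripts folder -->
    <script src="scripts/js" type="text/javascript"></script>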
Ordering of Files within a Bundle

By default, when files are bundled by ASP.NET they are sorted alphabetically first, just like they are shown in Solution Explorer. They are then automatically shifted around so that known libraries and their custom extensions (such as jQuery, MooTools and Dojo) are loaded before anything else. So the default order for the merged bundling of the Scripts folder as shown above will be:

- Jquery-1.6.2.js
- Jquery-ui.js
- Jquery.tools.js
- a.js

By default, CSS files are also sorted alphabetically and then shifted around so that reset.css and normalize.css (if they are there) will go before any other file. So the default sorting of the bundling of the Styles folder as shown above will be:

- reset.css
- content.css
- forms.css
- globals.css
- menu.css
- styles.css

The sorting is fully customizable, though, and can easily be changed to accommodate most use cases and any common naming pattern you prefer.  The goal with the out-of-the-box experience, though, is to have smart defaults that you can just use and be successful with.

Any number of directories/sub-directories supported

In the example above we just had a single “Scripts” and “Styles” folder for our application.  This works for some application types (e.g. single page applications).  Often, though, you’ll want to have multiple CSS/JS bundles within your application – for example: a “common” bundle that has core JS and CSS files that all pages use, and then page-specific or section-specific files that are not used globally. You can use the bundling/minification support across any number of directories or sub-directories in your project – this makes it easy to structure your code so as to maximize the bundling/minification benefits.  Each directory by default can be accessed as a separate URL-addressable bundle.

Bundling/Minification Extensibility

ASP.NET’s bundling and minification support is built with extensibility in mind, and every part of the process can be extended or replaced.

Custom Rules

In addition to the out-of-the-box, directory-based bundling approach, ASP.NET also supports the ability to register custom bundles using a new programmatic API we are exposing.  For example, you can register a “customscript” bundle using code within an application’s Global.asax class; the API allows you to add/remove/filter the files that go into the bundle on a very granular level.  The custom bundle can then be referenced anywhere within the application using a standard <script> reference that points at the bundle’s virtual path.

Custom Processing

You can also override the default CSS and JavaScript bundles to support your own custom processing of the bundled files (for example: custom minification rules, or support for Sass, LESS or CoffeeScript syntax).  You could, for instance, replace the built-in minification transforms with custom MyJsTransform and MyCssTransform classes that subclass the JavaScript and CSS minifiers respectively and add extra functionality.  The end result of this extensibility is that you can plug into the bundling/minification logic at a deep level and do some pretty cool things with it.

2 Minute Video of Bundling and Minification in Action

Mads Kristensen has a great 90 second video that shows off using the new Bundling and Minification feature.  You can watch the 90 second video here.

Summary

The new bundling and minification support within the next release of ASP.NET will make it easier to build fast web applications.  It is really easy to use, and doesn’t require major changes to your existing dev workflow.  It also supports a rich extensibility API that enables you to customize it however you want. You can easily take advantage of this new support within ASP.NET MVC, ASP.NET Web Forms and ASP.NET Web Pages based applications.

Hope this helps,

Scott

P.S. In addition to blogging, I use Twitter to do quick posts and share links. My Twitter handle is: @scottgu


  • Errors trying to run MongoDB

    - by SomeKittens
I'm running Ubuntu Server 12.04 (32 bit) on an old (1998) computer. Everything's working fine until I try to start MongoDB:

    somekittens@DLserver01:~$ mongo
    MongoDB shell version: 2.2.2
    connecting to: test
    Sun Dec 16 22:47:50 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
    exception: connect failed

Googling the error led me to all sorts of "repair" options, none of which fixed anything. I've also removed MongoDB and installed it again (using apt-get, have not built from source). Mongo's log shows the following error:

    Thu Dec 13 18:36:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Thu Dec 13 18:36:32
    Thu Dec 13 18:36:32 [initandlisten] MongoDB starting : pid=758 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01
    Thu Dec 13 18:36:32 [initandlisten]
    Thu Dec 13 18:36:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Thu Dec 13 18:36:32 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Thu Dec 13 18:36:32 [initandlisten] ** with --journal, the limit is lower
    Thu Dec 13 18:36:32 [initandlisten]
    Thu Dec 13 18:36:32 [initandlisten] db version v2.2.2, pdfile version 4.5
    Thu Dec 13 18:36:32 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
    Thu Dec 13 18:36:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Thu Dec 13 18:36:32 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" }
    Thu Dec 13 18:36:32 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal"
    ************** Unclean shutdown detected. Please visit http://dochub.mongodb.org/core/repair for recovery instructions. *************
    Thu Dec 13 18:36:32 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
    Thu Dec 13 18:36:32 dbexit:
    Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close listening sockets...
    Thu Dec 13 18:36:32 [initandlisten] shutdown: going to flush diaglog...
    Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close sockets...
    Thu Dec 13 18:36:32 [initandlisten] shutdown: waiting for fs preallocator...
    Thu Dec 13 18:36:32 [initandlisten] shutdown: closing all files...
    Thu Dec 13 18:36:32 [initandlisten] closeAllFiles() finished
    Thu Dec 13 18:36:32 dbexit: really exiting now

Running through the recovery instructions led to the following adventure:

    somekittens@DLserver01:/var/log/mongodb$ mongod --repair
    Sun Dec 16 22:42:54
    Sun Dec 16 22:42:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Sun Dec 16 22:42:54
    Sun Dec 16 22:42:54 [initandlisten] MongoDB starting : pid=1887 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
    Sun Dec 16 22:42:54 [initandlisten]
    Sun Dec 16 22:42:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Sun Dec 16 22:42:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Sun Dec 16 22:42:54 [initandlisten] ** with --journal, the limit is lower
    Sun Dec 16 22:42:54 [initandlisten]
    Sun Dec 16 22:42:54 [initandlisten] db version v2.2.2, pdfile version 4.5
    Sun Dec 16 22:42:54 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
    Sun Dec 16 22:42:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Sun Dec 16 22:42:54 [initandlisten] options: { repair: true }
    Sun Dec 16 22:42:54 [initandlisten] exception in initAndListen: 10296
    *********************************************************************
    ERROR: dbpath (/data/db/) does not exist.
    Create this directory or give existing directory in --dbpath.
    See http://dochub.mongodb.org/core/startingandstoppingmongo
    *********************************************************************
    , terminating
    Sun Dec 16 22:42:54 dbexit:
    Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close listening sockets...
    Sun Dec 16 22:42:54 [initandlisten] shutdown: going to flush diaglog...
    Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close sockets...
    Sun Dec 16 22:42:54 [initandlisten] shutdown: waiting for fs preallocator...
    Sun Dec 16 22:42:54 [initandlisten] shutdown: closing all files...
    Sun Dec 16 22:42:54 [initandlisten] closeAllFiles() finished
    Sun Dec 16 22:42:54 dbexit: really exiting now

    somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data
    somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data/db
    somekittens@DLserver01:/var/log/mongodb$ mongod --repair
    Sun Dec 16 22:43:51
    Sun Dec 16 22:43:51 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Sun Dec 16 22:43:51
    Sun Dec 16 22:43:51 [initandlisten] MongoDB starting : pid=1909 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
    Sun Dec 16 22:43:51 [initandlisten]
    Sun Dec 16 22:43:51 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Sun Dec 16 22:43:51 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Sun Dec 16 22:43:51 [initandlisten] ** with --journal, the limit is lower
    Sun Dec 16 22:43:51 [initandlisten]
    Sun Dec 16 22:43:51 [initandlisten] db version v2.2.2, pdfile version 4.5
    Sun Dec 16 22:43:51 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
    Sun Dec 16 22:43:51 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Sun Dec 16 22:43:51 [initandlisten] options: { repair: true }
    Sun Dec 16 22:43:51 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
    Sun Dec 16 22:43:51 dbexit:
    Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close listening sockets...
    Sun Dec 16 22:43:51 [initandlisten] shutdown: going to flush diaglog...
    Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close sockets...
    Sun Dec 16 22:43:51 [initandlisten] shutdown: waiting for fs preallocator...
    Sun Dec 16 22:43:51 [initandlisten] shutdown: closing all files...
    Sun Dec 16 22:43:51 [initandlisten] closeAllFiles() finished
    Sun Dec 16 22:43:51 [initandlisten] shutdown: removing fs lock...
    Sun Dec 16 22:43:51 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor
    Sun Dec 16 22:43:51 dbexit: really exiting now

    somekittens@DLserver01:/var/log/mongodb$ service mongodb stop
    stop: Unknown instance:
    somekittens@DLserver01:/var/log/mongodb$ sudo mongod --repair
    Sun Dec 16 22:45:04
    Sun Dec 16 22:45:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Sun Dec 16 22:45:04
    Sun Dec 16 22:45:04 [initandlisten] MongoDB starting : pid=1921 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
    Sun Dec 16 22:45:04 [initandlisten]
    Sun Dec 16 22:45:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Sun Dec 16 22:45:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Sun Dec 16 22:45:04 [initandlisten] ** with --journal, the limit is lower
    Sun Dec 16 22:45:04 [initandlisten]
    Sun Dec 16 22:45:04 [initandlisten] db version v2.2.2, pdfile version 4.5
    Sun Dec 16 22:45:04 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
    Sun Dec 16 22:45:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Sun Dec 16 22:45:04 [initandlisten] options: { repair: true }
    Sun Dec 16 22:45:04 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal"
    Sun Dec 16 22:45:04 [initandlisten] finished checking dbs
    Sun Dec 16 22:45:04 dbexit:
    Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close listening sockets...
    Sun Dec 16 22:45:04 [initandlisten] shutdown: going to flush diaglog...
    Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close sockets...
    Sun Dec 16 22:45:04 [initandlisten] shutdown: waiting for fs preallocator...
    Sun Dec 16 22:45:04 [initandlisten] shutdown: closing all files...
    Sun Dec 16 22:45:04 [initandlisten] closeAllFiles() finished
    Sun Dec 16 22:45:04 [initandlisten] shutdown: removing fs lock...
    Sun Dec 16 22:45:04 dbexit: really exiting now

Which didn't change anything. What can I do to resolve this? It's an old computer (640MB RAM, single-core P2). Could that be causing it?


  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
1xx - Informational

These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes:

100 - Continue.
101 - Switching protocols.

2xx - Success

These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes:

200 - OK. The client request has succeeded.
201 - Created.
202 - Accepted.
203 - Nonauthoritative information.
204 - No content.
205 - Reset content.
206 - Partial content.

3xx - Redirection

These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server. Or, the client browser may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes:

301 - Moved permanently.
302 - Object moved.
304 - Not modified.
307 - Temporary redirect.

4xx - Client error

These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist. Or, the client browser may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes:

400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error:

400.1 - Invalid Destination Header.
400.2 - Invalid Depth Header.
400.3 - Invalid If Header.
400.4 - Invalid Overwrite Header.
400.5 - Invalid Translate Header.
400.6 - Invalid Request Body.
400.7 - Invalid Content Length.
400.8 - Invalid Timeout.
400.9 - Invalid Lock Token.

401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log:

401.1 - Logon failed.
401.2 - Logon failed due to server configuration.
401.3 - Unauthorized due to ACL on resource.
401.4 - Authorization failed by filter.
401.5 - Authorization failed by ISAPI/CGI application.

403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error:

403.1 - Execute access forbidden.
403.2 - Read access forbidden.
403.3 - Write access forbidden.
403.4 - SSL required.
403.5 - SSL 128 required.
403.6 - IP address rejected.
403.7 - Client certificate required.
403.8 - Site access denied.
403.9 - Forbidden: Too many clients are trying to connect to the Web server.
403.10 - Forbidden: Web server is configured to deny Execute access.
403.11 - Forbidden: Password has been changed.
403.12 - Mapper denied access.
403.13 - Client certificate revoked.
403.14 - Directory listing denied.
403.15 - Forbidden: Client access licenses have exceeded limits on the Web server.
403.16 - Client certificate is untrusted or invalid.
403.17 - Client certificate has expired or is not yet valid.
403.18 - Cannot execute requested URL in the current application pool.
403.19 - Cannot execute CGI applications for the client in this application pool.
403.20 - Forbidden: Passport logon failed.
403.21 - Forbidden: Source access denied.
403.22 - Forbidden: Infinite depth is denied.

404 - Not found.
IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error:

404.0 - Not found.
404.1 - Site Not Found.
404.2 - ISAPI or CGI restriction.
404.3 - MIME type restriction.
404.4 - No handler configured.
404.5 - Denied by request filtering configuration.
404.6 - Verb denied.
404.7 - File extension denied.
404.8 - Hidden namespace.
404.9 - File attribute hidden.
404.10 - Request header too long.
404.11 - Request contains double escape sequence.
404.12 - Request contains high-bit characters.
404.13 - Content length too large.
404.14 - Request URL too long.
404.15 - Query string too long.
404.16 - DAV request sent to the static file handler.
404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping.
404.18 - Querystring sequence denied.
404.19 - Denied by filtering rule.

405 - Method Not Allowed.
406 - Client browser does not accept the MIME type of the requested page.
408 - Request timed out.
412 - Precondition failed.

5xx - Server error

These HTTP status codes indicate that the server cannot complete the request because the server encounters an error. IIS 7.0 uses the following server error HTTP status codes:

500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error:

500.0 - Module or ISAPI error occurred.
500.11 - Application is shutting down on the Web server.
500.12 - Application is busy restarting on the Web server.
500.13 - Web server is too busy.
500.15 - Direct requests for Global.asax are not allowed.
500.19 - Configuration data is invalid.
500.21 - Module not recognized.
500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode.
500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode.
500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode.
500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. Note: Here is where the distributed rules configuration is read for both inbound and outbound rules.
500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. Note: Here is where the global rules configuration is read.
500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred.
500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated.
500.100 - Internal ASP error.

501 - Header values specify a configuration that is not implemented.

502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error:

502.1 - CGI application timeout.
502.2 - Bad gateway.

503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error:

503.0 - Application pool unavailable.
503.2 - Concurrent request limit exceeded.
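When your code runs in the IIS 7.0 integrated pipeline, the substatus half of these codes is also programmable from managed code via HttpResponse.SubStatusCode. A hedged sketch – the module and its blocking rule are purely illustrative:

    using System.Web;

    // Illustrative module: rejects one client address and records the
    // 403.6 ("IP address rejected") substatus in the IIS log.
    public class IpBlockModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                HttpContext ctx = app.Context;
                if (ctx.Request.UserHostAddress == "203.0.113.9") // example address
                {
                    ctx.Response.StatusCode = 403;
                    ctx.Response.SubStatusCode = 6; // logged as sc-status 403, sc-substatus 6
                    ctx.Response.End();
                }
            };
        }

        public void Dispose() { }
    }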


  • System lockout, SAM error 12294

    - by Suneel
We are facing a login issue due to a SAM error with error code 12294.

Event Type: Error
Event Source: SAM
Event Category: None
Event ID: 12294
Date:
Time:
User:
Computer:
Description: The SAM database was unable to lockout the account of due to a resource error, such as a hard disk write failure (the specific error code is in the error data). Accounts are locked after a certain number of bad passwords are provided, so please consider resetting the password of the account mentioned above.
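If you need to see how often this fires and for which accounts, the records can be pulled from the System event log programmatically – a hedged sketch using the Windows event log API (the log and provider names are taken from the record above):

    using System;
    using System.Diagnostics.Eventing.Reader;

    class SamLockoutErrors
    {
        static void Main()
        {
            // XPath filter for SAM event 12294 in the System log.
            var query = new EventLogQuery("System", PathType.LogName,
                "*[System[Provider[@Name='SAM'] and (EventID=12294)]]");

            using (var reader = new EventLogReader(query))
            {
                for (EventRecord rec = reader.ReadEvent(); rec != null; rec = reader.ReadEvent())
                    using (rec)
                        Console.WriteLine("{0}: {1}", rec.TimeCreated, rec.FormatDescription());
            }
        }
    }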


  • Windows computer account appears to reset its own password, why?

    - by David Yu
Has anyone seen this where a computer account appears to reset its password?

The password for user 'WEST\SQLCLUSTER$' was reset by 'WEST\SQLCLUSTER$' on 'DOMAINCONTROLLER.WEST.company.corp' at '04/23/10 20:47:41'

Event Type: Success Audit
Event Source: Security
Event Category: Account Management
Event ID: 628
Date: Friday, April 23, 2010
Time: 8:47 PM
User: WEST\SQLCLUSTER$
Computer: DOMAINCONTROLLER.WEST.company.corp
Description: User Account password set:
Target Account Name: SQLCLUSTER$
Target Domain: WEST
Target Account ID: WEST\SQLCLUSTER$
Caller User Name: SQLCLUSTER$
Caller Domain: WEST
Caller Logon ID: (0x0,0x7A518945)


  • jQuery AutoComplete select firing after change?

    - by Zarigani
I'm using the jQuery UI AutoComplete control (just updated to jQuery UI 1.8.1). Whenever the user leaves the text box, I want to set the contents of the text box to a known-good value and set a hidden ID field for the value that was selected. Additionally, I want the page to post back when the contents of the text box are changed.

Currently, I am implementing this by having the autocomplete select event set the hidden id and then a change event on the text box which sets the textbox value and, if necessary, causes a post back.

If the user just uses the keyboard, this works perfectly. You can type, use the up and down arrows to select a value and then tab to exit. The select event fires, the id is set and then the change event fires and the page posts back.

If the user starts typing and then uses the mouse to pick from the autocomplete options though, the change event fires (as focus shifts to the autocomplete menu?) and the page posts back before the select event has a chance to set the ID. Is there a way to get the change event to not fire until after the select event, even when a mouse is used?

    $(function() {
        var txtAutoComp_cache = {};
        var txtAutoComp_current = { label: $('#txtAutoComp').val(), id: $('#hiddenAutoComp_ID').val() };

        $('#txtAutoComp').change(function() {
            if (this.value == '') { txtAutoComp_current = null; }
            if (txtAutoComp_current) {
                this.value = txtAutoComp_current.label ? txtAutoComp_current.label : txtAutoComp_current;
                $('#hiddenAutoComp_ID').val(txtAutoComp_current.id ? txtAutoComp_current.id : txtAutoComp_current);
            } else {
                this.value = '';
                $('#hiddenAutoComp_ID').val('');
            }
            // Postback goes here
        });

        $('#txtAutoComp').autocomplete({
            source: function(request, response) {
                var jsonReq = '{ "prefixText": "' + request.term.replace('\\', '\\\\').replace('"', '\\"') + '", "count": 0 }';
                if (txtAutoComp_cache.req == jsonReq && txtAutoComp_cache.content) {
                    response(txtAutoComp_cache.content);
                    return;
                }
                $.ajax({
                    url: 'ajaxLookup.asmx/CatLookup',
                    contentType: 'application/json; charset=utf-8',
                    dataType: 'json',
                    data: jsonReq,
                    type: 'POST',
                    success: function(data) {
                        txtAutoComp_cache.req = jsonReq;
                        txtAutoComp_cache.content = data.d;
                        response(data.d);
                        if (data.d && data.d[0]) { txtAutoComp_current = data.d[0]; }
                    }
                });
            },
            select: function(event, ui) {
                if (ui.item) { txtAutoComp_current = ui.item; }
                $('#hiddenAutoComp_ID').val(ui.item ? ui.item.id : '');
            }
        });
    });


  • Windows LiveID "Couldn't sign you out" error at sign-out

    - by Jason
I'm implementing LiveID authentication on my website. I've done it before, but not on this particular platform, MojoPortal. The sign-in works properly, but when I attempt to sign out, I get the error message quoted below. My browser is not blocking cookies. I get the same message when logging in to and out of, say, MSDN with a LiveID too now. I can't figure out if there's something about my site's programming that is interfering with the sign-out process of LiveID (since I believe that all (recent?) websites get sent a sign-out command) OR if live.com is just having issues lately and this is a coincidence.

Couldn't sign you out
We couldn't sign you out because your browser is blocking cookies. To sign out, close all of your browser windows. To keep this from happening again, change your browser's settings to allow cookies. If you don't know how to do that, see your browser's help.

