Search Results

Search found 5371 results on 215 pages for 'keys'.


  • Applying ServiceKnownTypeAttribute to WCF service with Spring

    - by avidgoffer
    I am trying to apply the ServiceKnownTypeAttribute to my WCF service but keep getting the error below. Does anyone have any ideas? My config:

        <object id="HHGEstimating" type="Spring.ServiceModel.ServiceExporter, Spring.Services">
          <property name="TargetName" value="HHGEstimatingHelper"/>
          <property name="Name" value="HHGEstimating"/>
          <property name="Namespace" value="http://www.igcsoftware.com/HHGEstimating"/>
          <property name="TypeAttributes">
            <list>
              <ref local="wcfErrorBehavior"/>
              <ref local="wcfSilverlightFaultBehavior"/>
              <object type="System.ServiceModel.ServiceKnownTypeAttribute, System.ServiceModel">
                <constructor-arg name="type" value="IGCSoftware.HHG.Business.UserControl.AtlasUser, IGCSoftware.HHG.Business"/>
              </object>
            </list>
          </property>
        </object>

    Exception Details: Spring.Objects.Factory.ObjectCreationException: Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46'

    Stack Trace:

        [ObjectCreationException: Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46']
           Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveInnerObjectDefinition(String name, String innerObjectName, String argumentName, IObjectDefinition definition, Boolean singletonOwner) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:300
           Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolvePropertyValue(String name, IObjectDefinition definition, String argumentName, Object argumentValue) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:150
           Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveValueIfNecessary(String name, IObjectDefinition definition, String argumentName, Object argumentValue) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:112
           Spring.Objects.Factory.Config.ManagedList.Resolve(String objectName, IObjectDefinition definition, String propertyName, ManagedCollectionElementResolver resolver) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Config\ManagedList.cs:126
           Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolvePropertyValue(String name, IObjectDefinition definition, String argumentName, Object argumentValue) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:201
           Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveValueIfNecessary(String name, IObjectDefinition definition, String argumentName, Object argumentValue) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:112
           Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.ApplyPropertyValues(String name, RootObjectDefinition definition, IObjectWrapper wrapper, IPropertyValues properties) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractAutowireCapableObjectFactory.cs:373
           Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.PopulateObject(String name, RootObjectDefinition definition, IObjectWrapper wrapper) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractAutowireCapableObjectFactory.cs:563
           Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.ConfigureObject(String name, RootObjectDefinition definition, IObjectWrapper wrapper) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractAutowireCapableObjectFactory.cs:1844
           Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.InstantiateObject(String name, RootObjectDefinition definition, Object[] arguments, Boolean allowEagerCaching, Boolean suppressConfigure) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractAutowireCapableObjectFactory.cs:918
           Spring.Objects.Factory.Support.AbstractObjectFactory.CreateAndCacheSingletonInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractObjectFactory.cs:2120
           Spring.Objects.Factory.Support.AbstractObjectFactory.GetObjectInternal(String name, Type requiredType, Object[] arguments, Boolean suppressConfigure) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractObjectFactory.cs:2046
           Spring.Objects.Factory.Support.DefaultListableObjectFactory.PreInstantiateSingletons() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\DefaultListableObjectFactory.cs:505
           Spring.Context.Support.AbstractApplicationContext.Refresh() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\AbstractApplicationContext.cs:911
           _dynamic_Spring.Context.Support.XmlApplicationContext..ctor(Object[] ) +197
           Spring.Reflection.Dynamic.SafeConstructor.Invoke(Object[] arguments) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Reflection\Dynamic\DynamicConstructor.cs:116
           Spring.Context.Support.RootContextInstantiator.InvokeContextConstructor(ConstructorInfo ctor) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextHandler.cs:550
           Spring.Context.Support.ContextInstantiator.InstantiateContext() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextHandler.cs:494
           Spring.Context.Support.ContextHandler.InstantiateContext(IApplicationContext parentContext, Object configContext, String contextName, Type contextType, Boolean caseSensitive, String[] resources) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextHandler.cs:330
           Spring.Context.Support.ContextHandler.Create(Object parent, Object configContext, XmlNode section) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextHandler.cs:280

        [ConfigurationErrorsException: Error creating context 'spring.root': Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46']
           System.Configuration.BaseConfigurationRecord.EvaluateOne(String[] keys, SectionInput input, Boolean isTrusted, FactoryRecord factoryRecord, SectionRecord sectionRecord, Object parentResult) +202
           System.Configuration.BaseConfigurationRecord.Evaluate(FactoryRecord factoryRecord, SectionRecord sectionRecord, Object parentResult, Boolean getLkg, Boolean getRuntimeObject, Object& result, Object& resultRuntimeObject) +1061
           System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Boolean requestIsHere, Object& result, Object& resultRuntimeObject) +1431
           System.Configuration.BaseConfigurationRecord.GetSection(String configKey, Boolean getLkg, Boolean checkPermission) +56
           System.Configuration.BaseConfigurationRecord.GetSection(String configKey) +8
           System.Web.Configuration.HttpConfigurationSystem.GetApplicationSection(String sectionName) +45
           System.Web.Configuration.HttpConfigurationSystem.GetSection(String sectionName) +49
           System.Web.Configuration.HttpConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection(String configKey) +6
           System.Configuration.ConfigurationManager.GetSection(String sectionName) +78
           Spring.Util.ConfigurationUtils.GetSection(String sectionName) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Util\ConfigurationUtils.cs:69
           Spring.Context.Support.ContextRegistry.InitializeContextIfNeeded() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextRegistry.cs:340
           Spring.Context.Support.ContextRegistry.GetContext() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextRegistry.cs:206
           Spring.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String reference, Uri[] baseAddresses) in l:\projects\spring-net\trunk\src\Spring\Spring.Services\ServiceModel\Activation\ServiceHostFactory.cs:66
           System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +11687036
           System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +42
           System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +479

        [ServiceActivationException: The service '/HHGEstimating.svc' cannot be activated due to an exception during compilation. The exception message is: Error creating context 'spring.root': Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46'.]
           System.ServiceModel.AsyncResult.End(IAsyncResult result) +11592858
           System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +194
           System.ServiceModel.Activation.HostedHttpRequestAsyncResult.ExecuteSynchronous(HttpApplication context, Boolean flowContext) +176
           System.ServiceModel.Activation.HttpModule.ProcessRequest(Object sender, EventArgs e) +275
           System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +68
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75
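    The hint in the error message ("specify argument indexes, names, or types") points at constructor ambiguity: ServiceKnownTypeAttribute has string-based overloads as well as one taking a System.Type, so the container may not know how to convert the string value. A sketch of pinning the constructor explicitly, assuming Spring.NET's standard constructor-arg attributes and its built-in string-to-Type conversion (untested against this exact setup):

        <object type="System.ServiceModel.ServiceKnownTypeAttribute, System.ServiceModel">
          <!-- index and type pin the ServiceKnownTypeAttribute(Type) constructor;
               the assembly-qualified string should then be converted to a Type -->
          <constructor-arg index="0" type="System.Type"
                           value="IGCSoftware.HHG.Business.UserControl.AtlasUser, IGCSoftware.HHG.Business"/>
        </object>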

  • How to use Koala Facebook Graph API?

    - by reko
    I am a Rails newbie. I want to use Koala's Graph API. In my controller:

        @graph = Koala::Facebook::API.new('myFacebookAccessToken')
        @hello = @graph.get_object("my.Name")

    When I do this, I get something like:

        { "id"=>"123456",
          "name"=>"First Middle Last",
          "first_name"=>"First",
          "middle_name"=>"Middle",
          "last_name"=>"Last",
          "link"=>"http://www.facebook.com/MyName",
          "username"=>"my.name",
          "birthday"=>"12/12/1212",
          "hometown"=>{"id"=>"115200305133358163", "name"=>"City, State"},
          "location"=>{"id"=>"1054648928202133335", "name"=>"City, State"},
          "bio"=>"This is my awesome Bio.",
          "quotes"=>"I am the master of my fate; I am the captain of my soul. - William Ernest Henley\r\n\r\n\"Don't go around saying the world owes you a living. The world owes you nothing. It was here first.\" - Mark Twain",
          "work"=>[{"employer"=>{"id"=>"100751133333", "name"=>"Company1"}, "position"=>{"id"=>"105763693332790962", "name"=>"Position1"}, "start_date"=>"2010-08", "end_date"=>"2011-07"}],
          "sports"=>[{"id"=>"104019549633137", "name"=>"Sport1"}, {"id"=>"103992339636529", "name"=>"Sport2"}],
          "favorite_teams"=>[{"id"=>"105467226133353743", "name"=>"Fav1"}, {"id"=>"19031343444432369133", "name"=>"Fav2"}, {"id"=>"98027790139333", "name"=>"Fav3"}, {"id"=>"104055132963393331", "name"=>"Fav4"}, {"id"=>"191744431437533310", "name"=>"Fav5"}],
          "favorite_athletes"=>[{"id"=>"10836600585799922", "name"=>"Fava1"}, {"id"=>"18995689436787722", "name"=>"Fava2"}, {"id"=>"11156342219404022", "name"=>"Fava4"}, {"id"=>"11169998212279347", "name"=>"Fava5"}, {"id"=>"122326564475039", "name"=>"Fava6"}],
          "inspirational_people"=>[{"id"=>"16383141733798", "name"=>"Fava7"}, {"id"=>"113529011990793335", "name"=>"fava8"}, {"id"=>"112032333138809855566", "name"=>"Fava9"}, {"id"=>"10810367588423324", "name"=>"Fava10"}],
          "education"=>[{"school"=>{"id"=>"13478880321332322233663", "name"=>"School1"}, "type"=>"High School", "with"=>[{"id"=>"1401052755", "name"=>"Friend1"}]},
                        {"school"=>{"id"=>"11482777188037224", "name"=>"School2"}, "year"=>{"id"=>"138383069535219", "name"=>"2005"}, "type"=>"High School"},
                        {"school"=>{"id"=>"10604484633093514", "name"=>"School3"}, "year"=>{"id"=>"142963519060927", "name"=>"2010"}, "concentration"=>[{"id"=>"10407695629335773", "name"=>"c1"}], "type"=>"College"},
                        {"school"=>{"id"=>"22030497466330708", "name"=>"School4"}, "degree"=>{"id"=>"19233130157477979", "name"=>"c3"}, "year"=>{"id"=>"201638419856163", "name"=>"2011"}, "type"=>"Graduate School"}],
          "gender"=>"male",
          "interested_in"=>["female"],
          "relationship_status"=>"Single",
          "religion"=>"Religion1",
          "political"=>"Political1",
          "email"=>"[email protected]",
          "timezone"=>-8,
          "locale"=>"en_US",
          "languages"=>[{"id"=>"10605952233759137", "name"=>"English"}, {"id"=>"10337617475934611", "name"=>"L2"}, {"id"=>"11296944428713061", "name"=>"L3"}],
          "verified"=>true,
          "updated_time"=>"2012-02-24T04:18:05+0000" }

    How do I show this entire hash in the view in a good format? This is what I did from whatever I have learnt so far. In my view:

        <% @hello.each do |key, value| %>
          <li><%=h "#{key.to_s} : #{value.to_s}" %></li>
        <% end %>

    This converts the entire thing to a list. It works great when the value is just a string, but how do I handle keys with nested values and show only the useful information? For example, I would like it to output

        hometown : City, State

    rather than

        hometown : {"id"=>"115200305133358163", "name"=>"City, State"}

    Also, for education, can I just say education[school][name] to display the list of schools attended? The error I get is "can't convert String into Integer". I also tried this in my controller, but I get the same error:

        @fav_teams = @hello["favorite_teams"]["name"]

    And how can I save all of this to the database, for example just the list of all schools, not their id numbers?

    Update: The way I plan to save to my database is this: for a user model, I want to save :facebook_id, :facebook_name, :facebook_firstname, ...., :facebook_hometown, where I only want to save the name. When it comes to education, I want to save school, concentration and type. I have no idea how to achieve this. Looking forward to help, thanks!
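    A sketch of how those nested values can be read, based on the hash shown above: hometown is itself a hash, while education and favorite_teams are arrays of hashes, which is why indexing them with a string raises "can't convert String into Integer".

        # a nested hash is indexed by key:
        @hometown = @hello["hometown"]["name"]    # => "City, State"

        # arrays of hashes are mapped, not indexed by a string:
        @school_names = @hello["education"].map { |e| e["school"]["name"] }
        @fav_teams    = @hello["favorite_teams"].map { |t| t["name"] }

        # sections can be missing from a profile, so guard before mapping:
        @school_names = (@hello["education"] || []).map { |e| e["school"]["name"] }

    The resulting plain strings can then be assigned to model attributes (for example a hypothetical user.facebook_hometown) before saving.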

  • Please help me understand why my XSL Transform is not transforming

    - by Damovisa
    I'm trying to transform one XML format to another using XSL. Try as I might, I can't seem to get a result; I've hacked away at this for a while now and had no success, and I'm not even getting any exceptions. I'm going to post the entire code and hopefully someone can help me work out what I've done wrong. I'm aware there are likely to be problems in the XSL in terms of selects and matches, but I'm not fussed about that at the moment. The output I'm getting is the input XML without any XML tags; the transformation is simply not occurring.

    Here's my XML document:

        <?xml version="1.0"?>
        <Transactions>
          <Account>
            <PersonalAccount>
              <AccountNumber>066645621</AccountNumber>
              <AccountName>A Smith</AccountName>
              <CurrentBalance>-200125.96</CurrentBalance>
              <AvailableBalance>0</AvailableBalance>
              <AccountType>LOAN</AccountType>
            </PersonalAccount>
          </Account>
          <StartDate>2010-03-01T00:00:00</StartDate>
          <EndDate>2010-03-23T00:00:00</EndDate>
          <Items>
            <Transaction>
              <ErrorNumber>-1</ErrorNumber>
              <Amount>12000</Amount>
              <Reference>Transaction 1</Reference>
              <CreatedDate>0001-01-01T00:00:00</CreatedDate>
              <EffectiveDate>2010-03-15T00:00:00</EffectiveDate>
              <IsCredit>true</IsCredit>
              <Balance>-324000</Balance>
            </Transaction>
            <Transaction>
              <ErrorNumber>-1</ErrorNumber>
              <Amount>11000</Amount>
              <Reference>Transaction 2</Reference>
              <CreatedDate>0001-01-01T00:00:00</CreatedDate>
              <EffectiveDate>2010-03-14T00:00:00</EffectiveDate>
              <IsCredit>true</IsCredit>
              <Balance>-324000</Balance>
            </Transaction>
          </Items>
        </Transactions>

    Here's my XSLT:

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
          <xsl:output method="xml" />
          <xsl:param name="currentdate"></xsl:param>
          <xsl:template match="Transactions">
            <xsl:element name="OFX">
              <xsl:element name="SIGNONMSGSRSV1">
                <xsl:element name="SONRS">
                  <xsl:element name="STATUS">
                    <xsl:element name="CODE">0</xsl:element>
                    <xsl:element name="SEVERITY">INFO</xsl:element>
                  </xsl:element>
                  <xsl:element name="DTSERVER"><xsl:value-of select="$currentdate" /></xsl:element>
                  <xsl:element name="LANGUAGE">ENG</xsl:element>
                </xsl:element>
              </xsl:element>
              <xsl:element name="BANKMSGSRSV1">
                <xsl:element name="STMTTRNRS">
                  <xsl:element name="TRNUID">1</xsl:element>
                  <xsl:element name="STATUS">
                    <xsl:element name="CODE">0</xsl:element>
                    <xsl:element name="SEVERITY">INFO</xsl:element>
                  </xsl:element>
                  <xsl:element name="STMTRS">
                    <xsl:element name="CURDEF">AUD</xsl:element>
                    <xsl:element name="BANKACCTFROM">
                      <xsl:element name="BANKID">RAMS</xsl:element>
                      <xsl:element name="ACCTID"><xsl:value-of select="Account/PersonalAccount/AccountNumber" /></xsl:element>
                      <xsl:element name="ACCTTYPE"><xsl:value-of select="Account/PersonalAccount/AccountType" /></xsl:element>
                    </xsl:element>
                    <xsl:element name="BANKTRANLIST">
                      <xsl:element name="DTSTART"><xsl:value-of select="StartDate" /></xsl:element>
                      <xsl:element name="DTEND"><xsl:value-of select="EndDate" /></xsl:element>
                      <xsl:for-each select="Items/Transaction">
                        <xsl:element name="STMTTRN">
                          <xsl:element name="TRNTYPE"><xsl:choose><xsl:when test="IsCredit">CREDIT</xsl:when><xsl:otherwise>DEBIT</xsl:otherwise></xsl:choose></xsl:element>
                          <xsl:element name="DTPOSTED"><xsl:value-of select="EffectiveDate" /></xsl:element>
                          <xsl:element name="DTUSER"><xsl:value-of select="CreatedDate" /></xsl:element>
                          <xsl:element name="TRNAMT"><xsl:value-of select="Amount" /></xsl:element>
                          <xsl:element name="FITID" />
                          <xsl:element name="NAME"><xsl:value-of select="Reference" /></xsl:element>
                          <xsl:element name="MEMO"><xsl:value-of select="Reference" /></xsl:element>
                        </xsl:element>
                      </xsl:for-each>
                    </xsl:element>
                    <xsl:element name="LEDGERBAL">
                      <xsl:element name="BALAMT"><xsl:value-of select="Account/PersonalAccount/CurrentBalance" /></xsl:element>
                      <xsl:element name="DTASOF"><xsl:value-of select="EndDate" /></xsl:element>
                    </xsl:element>
                  </xsl:element>
                </xsl:element>
              </xsl:element>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

    Here's my method to transform the XML:

        public string TransformToXml(XmlElement xmlElement, Dictionary<string, object> parameters)
        {
            string strReturn = "";

            // Load the XSLT Document
            XslCompiledTransform xslt = new XslCompiledTransform();
            xslt.Load(xsltFileName);

            // arguments
            XsltArgumentList args = new XsltArgumentList();
            if (parameters != null && parameters.Count > 0)
            {
                foreach (string key in parameters.Keys)
                {
                    args.AddParam(key, "", parameters[key]);
                }
            }

            // Create a memory stream to write to
            Stream objStream = new MemoryStream();

            // Apply the transform
            xslt.Transform(xmlElement, args, objStream);
            objStream.Seek(0, SeekOrigin.Begin);

            // Read the contents of the stream
            StreamReader objSR = new StreamReader(objStream);
            strReturn = objSR.ReadToEnd();
            return strReturn;
        }

    The content of strReturn is the XML declaration (<?xml version="1.0" encoding="utf-8"?>) followed by a raw dump of the contents of the original XML document, stripped of its tags. What am I doing wrong here?
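    Incidentally, input text minus its tags is exactly what the XSLT built-in template rules emit when no author-defined template fires, so it can help to rule the calling code in or out with a minimal file-to-file harness. A sketch, using hypothetical file names input.xml and transform.xslt:

        using System;
        using System.Xml;
        using System.Xml.Xsl;

        class XsltHarness
        {
            static void Main()
            {
                // Load the stylesheet; this throws if the XSLT itself is invalid.
                XslCompiledTransform xslt = new XslCompiledTransform();
                xslt.Load("transform.xslt");

                // Supply the parameter the stylesheet declares.
                XsltArgumentList args = new XsltArgumentList();
                args.AddParam("currentdate", "", DateTime.Now.ToString("s"));

                // Transform straight from file to file.
                using (XmlWriter writer = XmlWriter.Create("output.xml", xslt.OutputSettings))
                {
                    xslt.Transform("input.xml", args, writer);
                }
            }
        }

    If the harness produces the expected OFX document, the problem lies in how the XmlElement is being passed in; if it still dumps bare text, the match="Transactions" template is not firing (a default namespace on the real input document, for instance, would cause exactly that symptom).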

  • Some problems with GridView in webpart with multiple filters.

    - by NF_81
    Hello, I'm currently working on a highly configurable Database Viewer webpart for WSS 3.0, which we are going to need for several customized SharePoint sites. Sorry in advance for the large wall of text, but I fear it's necessary to recap the whole issue.

    As background information, and to describe what the webpart shall do: basically the webpart contains an UpdatePanel, which contains a GridView and an SqlDataSource. The select query the DataSource uses can be set via WebBrowsable properties or received from a consumer method from another webpart. Now I wanted to add a filtering feature, so I want a dropdownlist in the header row for each column that should be filterable. As the select query is completely dynamic and I don't know at design time which columns shall be filterable, I decided to add a WebBrowsable property containing an xml-formed string with filter information. So I added the following to OnRowCreated of the gridview:

        void gridView_RowCreated(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.Header)
            {
                for (int i = 0; i < e.Row.Cells.Count; i++)
                {
                    if (e.Row.Cells[i].GetType() == typeof(DataControlFieldHeaderCell))
                    {
                        string headerText = ((DataControlFieldHeaderCell)e.Row.Cells[i]).ContainingField.HeaderText;

                        // add sorting functionality
                        if (_allowSorting && !String.IsNullOrEmpty(headerText))
                        {
                            Label l = new Label();
                            l.Text = headerText;
                            l.ForeColor = Color.Blue;
                            l.Font.Bold = true;
                            l.ID = "Header" + i;
                            l.Attributes["title"] = "Sort by " + headerText;
                            l.Attributes["onmouseover"] = "this.style.cursor = 'pointer'; this.style.color = 'red'";
                            l.Attributes["onmouseout"] = "this.style.color = 'blue'";
                            l.Attributes["onclick"] = "__doPostBack('" + panel.UniqueID + "','SortBy$" + headerText + "');";
                            e.Row.Cells[i].Controls.Add(l);
                        }

                        // check if this column shall be filterable
                        if (!String.IsNullOrEmpty(filterXmlData))
                        {
                            XmlNode columnNode = GetColumnNode(headerText);
                            if (columnNode != null)
                            {
                                string dataValueField = columnNode.Attributes["DataValueField"] == null ? "" : columnNode.Attributes["DataValueField"].Value;
                                string filterQuery = columnNode.Attributes["FilterQuery"] == null ? "" : columnNode.Attributes["FilterQuery"].Value;
                                if (!String.IsNullOrEmpty(dataValueField) && !String.IsNullOrEmpty(filterQuery))
                                {
                                    SqlDataSource ds = new SqlDataSource(_conStr, filterQuery);
                                    DropDownList cbx = new DropDownList();
                                    cbx.ID = "FilterCbx" + i;
                                    cbx.Attributes["onchange"] = "__doPostBack('" + panel.UniqueID + "','SelectionChange$" + headerText + "$' + this.options[this.selectedIndex].value);";
                                    cbx.Width = 150;
                                    cbx.DataValueField = dataValueField;
                                    cbx.DataSource = ds;
                                    cbx.DataBound += new EventHandler(cbx_DataBound);
                                    cbx.PreRender += new EventHandler(cbx_PreRender);
                                    cbx.DataBind();
                                    e.Row.Cells[i].Controls.Add(cbx);
                                }
                            }
                        }
                    }
                }
            }
        }

    GetColumnNode() checks the filter property for a node matching the current column; that node carries the field the DropDownList should bind to and the query for filling in its items. In cbx_PreRender() I check ViewState and select an item in case of a postback. In cbx_DataBound() I just add tooltips to the list items, as the dropdownlist has a fixed width. Previously I used AutoPostback and SelectedIndexChanged of the DDL to filter the grid, but to my disappointment it did not always fire. Now I check __EVENTTARGET and __EVENTARGUMENT in OnLoad and call a function when the postback was due to a selection change in a DDL:

        private void FilterSelectionChanged(string columnName, string selectedValue)
        {
            columnName = "[" + columnName + "]";
            if (selectedValue.IndexOf("--") < 0) // "-- All --" selected
            {
                if (filter.ContainsKey(columnName))
                    filter[columnName] = "='" + selectedValue + "'";
                else
                    filter.Add(columnName, "='" + selectedValue + "'");
            }
            else
            {
                filter.Remove(columnName);
            }
            gridView.PageIndex = 0;
        }

    "filter" is a Hashtable which is stored in ViewState for persisting the filters (I got this from a sample somewhere on the web, I don't remember where). In OnPreRender of the webpart I call a function which reads the ViewState and applies the filter expression to the datasource if there is one. I assume I had to place it here because, on any other postback (e.g. for sorting), the filters are not applied any more:

        private void ApplyGridFilter()
        {
            string args = " ";
            int i = 0;
            foreach (object key in filter.Keys)
            {
                if (i == 0)
                    args = key.ToString() + filter[key].ToString();
                else
                    args += " AND " + key.ToString() + filter[key].ToString();
                i++;
            }
            dataSource.FilterExpression = args;
            ViewState.Add("FilterArgs", filter);
        }

        protected override void OnPreRender(EventArgs e)
        {
            EnsureChildControls();
            if (WebPartManager.DisplayMode.Name == "Edit")
            {
                errMsg = "Webpart in Edit mode...";
                return;
            }
            if (useWebPartConnection == true) // get select-query from consumer webpart
            {
                if (provider != null)
                {
                    dataSource.SelectCommand = provider.strQuery;
                }
            }
            try
            {
                int currentPageIndex = gridView.PageIndex;
                if (!String.IsNullOrEmpty(m_SortExpression))
                {
                    gridView.Sort("[" + m_SortExpression + "]", m_SortDirection);
                }
                gridView.PageIndex = currentPageIndex; // for some reason, the current pageindex resets after sorting
                ApplyGridFilter();
                gridView.DataBind();
            }
            catch (Exception ex)
            {
                Functions.ShowJavaScriptAlert(Page, ex.Message);
            }
            base.OnPreRender(e);
        }

    So I set the FilterExpression and then call DataBind(). I don't know if this is OK at such a late stage; I don't have a lot of ASP.NET experience, after all. If anyone can suggest a better approach, please give me a hint.

    This all works great so far, except when I have two or more filters set to a combination that returns zero records. Bam: gridview gone, completely, without any possibility of changing the filters back. So I googled and found out that I have to subclass GridView in order to always show the header row. I found a solution for that and implemented it with some modifications; the header row gets displayed and I can change the filters even if the returned result contains no rows.

    But finally, to my current problem: when I have two or more filters set which return zero rows and I change one filter back to something that should return rows, the gridview remains empty (although the pager is rendered). I have to completely refresh the page to reset the filters. When debugging, I can see in the overridden CreateChildControls of the grid that the base method indeed returns 0; yet gridView.RowCount remains 0 even after databinding. Anyone have an idea what's going wrong here?
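    For reference, the xml-formed filter property described above is never shown in the question; a hypothetical shape that would satisfy GetColumnNode (one node per filterable column, carrying the DataValueField and FilterQuery attributes the handler reads) might be:

        <filters>
          <!-- header must match the bound column's HeaderText -->
          <column header="Region" DataValueField="RegionName"
                  FilterQuery="SELECT DISTINCT RegionName FROM Orders ORDER BY RegionName" />
          <column header="Status" DataValueField="StatusName"
                  FilterQuery="SELECT DISTINCT StatusName FROM Orders ORDER BY StatusName" />
        </filters>

    The column and table names here are invented purely for illustration.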

  • What pseudo-operators exist in Perl 5?

    - by Chas. Owens
    I am currently documenting all of Perl 5's operators (see the perlopref GitHub project) and I have decided to include Perl 5's pseudo-operators as well. To me, a pseudo-operator in Perl is anything that looks like an operator but is really more than one operator or some other piece of syntax. I have documented the four I am familiar with already:

        ()=   the countof operator
        =()=  the goatse/countof operator
        ~~    the scalar context operator
        }{    the Eskimo-kiss operator

    What other names exist for these pseudo-operators, and do you know of any pseudo-operators I have missed?

        =head1 Pseudo-operators

        There are idioms in Perl 5 that appear to be operators, but are really a
        combination of several operators or pieces of syntax. These
        pseudo-operators have the precedence of the constituent parts.

        =head2 ()= X

        =head3 Description

        This pseudo-operator is the list assignment operator (aka the countof
        operator). It is made up of two items: C<()> and C<=>. In scalar context
        it returns the number of items in the list X. In list context it returns
        an empty list. It is useful when you have something that returns a list
        and you want to know the number of items in that list and don't care
        about the list's contents. It is needed because the comma operator
        returns the last item in the sequence rather than the number of items in
        the sequence when it is placed in scalar context.

        It works because the assignment operator returns the number of items
        available to be assigned when its left hand side has list context. In
        the following example there are five values in the list being assigned
        to the list C<($x, $y, $z)>, so C<$count> is assigned C<5>.

            my $count = my ($x, $y, $z) = qw/a b c d e/;

        The empty list (the C<()> part of the pseudo-operator) triggers this
        behavior.

        =head3 Example

            sub f { return qw/a b c d e/ }

            my $count = ()= f();              #$count is now 5

            my $string = "cat cat dog cat";
            my $cats = ()= $string =~ /cat/g; #$cats is now 3

            print scalar( ()= f() ), "\n";    #prints "5\n"

        =head3 See also

        L</X = Y> and L</X =()= Y>

        =head2 X =()= Y

        =head3 Description

        This pseudo-operator is often called the goatse operator for reasons
        better left unexamined; it is also called the list assignment or countof
        operator. It is made up of three items: C<=>, C<()>, and C<=>. When X is
        a scalar variable, the number of items in the list Y is returned. If X
        is an array or a hash, it returns an empty list. It is useful when you
        have something that returns a list and you want to know the number of
        items in that list and don't care about the list's contents. It is
        needed because the comma operator returns the last item in the sequence
        rather than the number of items in the sequence when it is placed in
        scalar context.

        It works because the assignment operator returns the number of items
        available to be assigned when its left hand side has list context. In
        the following example there are five values in the list being assigned
        to the list C<($x, $y, $z)>, so C<$count> is assigned C<5>.

            my $count = my ($x, $y, $z) = qw/a b c d e/;

        The empty list (the C<()> part of the pseudo-operator) triggers this
        behavior.

        =head3 Example

            sub f { return qw/a b c d e/ }

            my $count =()= f();              #$count is now 5

            my $string = "cat cat dog cat";
            my $cats =()= $string =~ /cat/g; #$cats is now 3

        =head3 See also

        L</=> and L</()=>

        =head2 ~~X

        =head3 Description

        This pseudo-operator is named the scalar context operator. It is made up
        of two bitwise negation operators. It provides scalar context to the
        expression X. It works because the first bitwise negation operator
        provides scalar context to X and performs a bitwise negation of the
        result; since the result of two bitwise negations is the original item,
        the value of the original expression is preserved.

        With the addition of the Smart match operator, this pseudo-operator is
        even more confusing. The C<scalar> function is much easier to understand
        and you are encouraged to use it instead.

        =head3 Example

            my @a = qw/a b c d/;
            print ~~@a, "\n"; #prints 4

        =head3 See also

        L</~X>, L</X ~~ Y>, and L<perlfunc/scalar>

        =head2 X }{ Y

        =head3 Description

        This pseudo-operator is called the Eskimo-kiss operator because it looks
        like two faces touching noses. It is made up of a closing brace and an
        opening brace. It is used when using C<perl> as a command-line program
        with the C<-n> or C<-p> options. It has the effect of running X inside
        of the loop created by C<-n> or C<-p> and running Y at the end of the
        program.

        It works because the closing brace closes the loop created by C<-n> or
        C<-p> and the opening brace creates a new bare block that is closed by
        the loop's original ending. You can see this behavior by using the
        L<B::Deparse> module. Here is the command C<perl -ne 'print $_;'>
        deparsed:

            LINE: while (defined($_ = <ARGV>)) {
                print $_;
            }

        Notice how the original code was wrapped with the C<while> loop. Here is
        the deparsing of C<perl -ne '$count++ if /foo/; }{ print "$count\n"'>:

            LINE: while (defined($_ = <ARGV>)) {
                ++$count if /foo/;
            }
            {
                print "$count\n";
            }

        Notice how the C<while> loop is closed by the closing brace we added and
        the opening brace starts a new bare block that is closed by the closing
        brace that was originally intended to close the C<while> loop.

        =head3 Example

            # count unique lines in the file FOO
            perl -nle '$seen{$_}++ }{ print "$_ => $seen{$_}" for keys %seen' FOO

            # sum all of the lines until the user types control-d
            perl -nle '$sum += $_ }{ print $sum'

        =head3 See also

        L<perlrun> and L<perlsyn>

        =cut

  • How to debug "deep" crashes in Android?

    - by eerok512
    Hi all, I've been trying to debug an Android crash that occurs without a Java stack trace. Java stack trace bugs are very easy for me to fix, but this bug seems to be crashing inside the NDK layer, or whatever the deep internals of Android are called; I've made no modifications to the NDK, by the way, I just don't know what else to call that layer.

    I'm mainly looking for advice on deep-debug methods rather than help with this specific problem, because I doubt I can post all the source code involved. I really just need to know how to set breakpoints at the deep layers, or whatever other methods there are to trace deep crashes to their source. So I will briefly describe the bug and then post a logcat.

    I have an app with 7 activities:

        Activity_INTRO
        Activity_EULA
        Activity_MAIN
        Activity_Contact
        Activity_News
        Activity_Library
        Activity_More

    INTRO is the initiating one: it fades in some company logos, displays them for a set time, then jumps to the EULA activity. After the user accepts the EULA, it jumps to MAIN, and MAIN then creates a TabHost and populates it with the 4 remaining activities.

    Now here's the thing: when I click on, say, the More tab of the TabHost, the app pauses for a few seconds and then hard-crashes. There is no Java stack trace, but an actual native-level trace with the registers, IP and stack. The same thing occurs no matter which tab I select; Contact, News, Library and More all crash with the same hard crash.

    If, however, I set the manifest to start the app at Activity_MAIN, bypassing INTRO and EULA, these crashes do not occur. So something is lingering from those opening activities that is somehow hosing the TabHost'ed activities, and I'm wondering what that could be, because I'm calling finish() on those activities when they need to jump. In fact, here is how I'm doing it; let me know if you see any bugs. When jumping from INTRO to EULA I do:

        // Display the EULA
        Intent newIntent = new Intent(avi, Activity_EULA.class);
        startActivity(newIntent);
        finish();

    and EULA to MAIN:

        Intent newIntent = new Intent(this, Activity_Main.class);
        startActivity(newIntent);
        finish();

    Please also let me know if there is some way I can reverse engineer either /system/lib/libcutils.so or /system/lib/libandroid_runtime.so, because I think the crash is happening in one of them; in fact I think it's happening in libandroid_runtime.
    Anyway, on to the log:

        12-25 00:56:07.322: INFO/DEBUG(551): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        12-25 00:56:07.332: INFO/DEBUG(551): Build fingerprint: 'generic/sdk/generic/:1.5/CUPCAKE/150240:eng/test-keys'
        12-25 00:56:07.362: INFO/DEBUG(551): pid: 722, tid: 723 >>> com.killerapps.chokes <<<
        12-25 00:56:07.362: INFO/DEBUG(551): signal 11 (SIGSEGV), fault addr 00000004
        12-25 00:56:07.362: INFO/DEBUG(551): r0 00000004 r1 40021800 r2 00000004 r3 ad3296c5
        12-25 00:56:07.372: INFO/DEBUG(551): r4 00000000 r5 00000000 r6 ad342da5 r7 41039fb8
        12-25 00:56:07.372: INFO/DEBUG(551): r8 100ffcb0 r9 41039fb0 10 41e014a0 fp 00001071
        12-25 00:56:07.382: INFO/DEBUG(551): ip ad35b874 sp 100ffc98 lr ad3296cf pc afb045a8 cpsr 00000010
        12-25 00:56:07.552: INFO/DEBUG(551): #00 pc 000045a8 /system/lib/libcutils.so
        12-25 00:56:07.572: INFO/DEBUG(551): #01 lr ad3296cf /system/lib/libandroid_runtime.so
        12-25 00:56:07.582: INFO/DEBUG(551): stack:
        12-25 00:56:07.582: INFO/DEBUG(551): 100ffc58 00000000
        12-25 00:56:07.592: INFO/DEBUG(551): 100ffc5c 001c5278 [heap]
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc60 000000da
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc64 0016c778 [heap]
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc68 100ffcc8
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc6c 001c5278 [heap]
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc70 427d1ac0
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc74 000000c1
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc78 40021800
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc7c 000000c2
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc80 00000000
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc84 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc88 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc8c 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc90 df002777
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffc94 e3a070ad
        12-25 00:56:07.632: INFO/DEBUG(551): #00 100ffc98 00000000
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffc9c ad3296cf /system/lib/libandroid_runtime.so
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffca0 100ffcd0
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffca4 ad342db5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffca8 410a79d0
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffcac ad00e3b8 /system/lib/libdvm.so
        12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb0 410a79d0
        12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb4 0016bac0 [heap]
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcb8 ad342da5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcbc 40021800
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc0 410a79d0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc4 afe39dd0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc8 100ffcd0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffccc ad040a8d /system/lib/libdvm.so
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd0 41039fb0
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd4 420000f8
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd8 ad342da5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcdc 100ffd48
        12-25 00:56:07.852: DEBUG/dalvikvm(722): GC freed 367 objects / 15144 bytes in 210ms
        12-25 00:56:08.081: DEBUG/InetAddress(722): www.akillerapp.com: 74.86.47.202 (family 2, proto 6)
        12-25 00:56:08.242: DEBUG/dalvikvm(722): GC freed 62 objects / 2328 bytes in 122ms
        12-25 00:56:08.771: DEBUG/dalvikvm(722): GC freed 245 objects / 11744 bytes in 179ms
        12-25 00:56:09.131: INFO/ActivityManager(577): Process com.killerapps.chokes (pid 722) has died.
        12-25 00:56:09.171: INFO/WindowManager(577): WIN DEATH: Window{43719320 com.killerapps.chokes/com.killerapps.chokes.Activity_Main paused=false}
        12-25 00:56:09.251: INFO/DEBUG(551): debuggerd committing suicide to free the zombie!
        12-25 00:56:09.291: DEBUG/Zygote(553): Process 722 terminated by signal (11)
        12-25 00:56:09.311: INFO/DEBUG(781): debuggerd: Jun 30 2009 17:00:51
        12-25 00:56:09.331: WARN/InputManagerService(577): Got RemoteException sending setActive(false) notification to pid 722 uid 10020
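    Since the tombstone prints each frame as an offset into the mapped library (#00 pc 000045a8 /system/lib/libcutils.so), those offsets can be symbolized with the binutils that ship in the Android toolchain, run against unstripped copies of the libraries (for the emulator, from a matching full build's symbols output; the exact tool prefix, e.g. arm-eabi- or arm-linux-androideabi-, depends on the toolchain version). A sketch:

        # resolve the faulting frame: offset 0x45a8 inside libcutils.so
        arm-eabi-addr2line -C -f -e symbols/system/lib/libcutils.so 0x45a8

        # or disassemble around the offset to see what was executing
        arm-eabi-objdump -d symbols/system/lib/libcutils.so | grep -A8 '45a8:'

    Later NDK releases also bundle an ndk-stack tool that performs this translation over a whole logcat dump. Note the fault address 00000004 with SIGSEGV: that is a dereference at offset 4 from a NULL pointer, which fits the theory that something torn down by the finished activities is still being used.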

  • F# - Facebook Hacker Cup - Double Squares

    - by Jacob
    I'm working on strengthening my F#-fu and decided to tackle the Facebook Hacker Cup Double Squares problem. I'm having some problems with the runtime and was wondering if anyone could help me figure out why it is so much slower than my C# equivalent. There's a good description from another post (source: Facebook Hacker Cup Qualification Round 2011):

    A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3^2 + 1^2. Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3^2 + 1^2 (we don't count 1^2 + 3^2 as being different). On the other hand, 25 can be written as 5^2 + 0^2 or as 4^2 + 3^2. You need to solve this problem for 0 <= X <= 2,147,483,647. Examples:

        10 = 1
        25 = 2
        3 = 0
        0 = 1
        1 = 1

    My basic strategy (which I'm open to critique on) is to:

        1. Create a dictionary (for memoization) of the input numbers, initialized to 0
        2. Get the largest number (LN) and pass it to the count/memo function
        3. Get the LN square root as int
        4. Calculate squares for all numbers 0 to LN and store them in a dict
        5. Sum squares for non-repeating combinations of numbers from 0 to LN
        6. If a sum is in the memo dict, add 1 to its count
        7. Finally, output the counts of the original numbers

    Here is the F# code I've written that I believe corresponds to this strategy (runtime: ~8:10; see code changes at the bottom):

        open System
        open System.Collections.Generic
        open System.IO

        /// Get a sequence of values
        let rec range min max = seq { for num in [min .. max] do yield num }

        /// Get a sequence starting from 0 and going to max
        let rec zeroRange max = range 0 max

        /// Find the maximum number in a list with a starting accumulator (acc)
        let rec maxNum acc = function
            | [] -> acc
            | p::tail when p > acc -> maxNum p tail
            | p::tail -> maxNum acc tail

        /// A helper for finding max that sets the accumulator to 0
        let rec findMax nums = maxNum 0 nums

        /// Build a collection of combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3)
        let rec combos range = seq {
            let count = ref 0
            for inner in range do
                for outer in Seq.skip !count range do
                    yield (inner, outer)
                count := !count + 1 }

        let rec squares nums =
            let dict = new Dictionary<int, int>()
            for s in nums do
                dict.[s] <- (s * s)
            dict

        /// Counts the number of possible double squares for a given number and keeps track of other counts that are provided in the memo dict.
        let rec countDoubleSquares (num: int) (memo: Dictionary<int, int>) =
            // The highest relevant square is the square root, because it squared plus 0 squared is the topmost possibility
            let maxSquare = System.Math.Sqrt((float)num)
            // Our relevant squares are 0 to the highest possible square; note the cast to int, which shouldn't hurt.
            let relSquares = range 0 ((int)maxSquare)
            // calculate the squares up front
            let calcSquares = squares relSquares
            // Build up our square combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3)
            for (sq1, sq2) in combos relSquares do
                let v = calcSquares.[sq1] + calcSquares.[sq2]
                // Memoize our relevant results
                if memo.ContainsKey(v) then
                    memo.[v] <- memo.[v] + 1
            // return our count for the num passed in
            memo.[num]

        // Read our numbers from file.
        //let lines = File.ReadAllLines("test2.txt")
        //let nums = [ for line in Seq.skip 1 lines -> Int32.Parse(line) ]

        // Optionally, read them from a straight array
        let nums = [1740798996; 1257431873; 2147483643; 602519112; 858320077; 1048039120; 415485223; 874566596; 1022907856; 65; 421330820; 1041493518; 5; 1328649093; 1941554117; 4225; 2082925; 0; 1; 3]

        // Initialize our memoize dictionary
        let memo = new Dictionary<int, int>()
        for num in nums do
            memo.[num] <- 0

        // Get the largest number in our set; all other numbers will be memoized along the way
        let maxN = findMax nums

        // Do the memoize
        let maxCount = countDoubleSquares maxN memo

        // Output our results.
        for num in nums do
            printfn "%i" memo.[num]

        // Have a little pause for when we debug
        let line = Console.Read()

    And here is my version in C# (runtime: ~1:40):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO;
        using System.Linq;
        using System.Text;

        namespace FBHack_DoubleSquares
        {
            public class TestInput
            {
                public int NumCases { get; set; }
                public List<int> Nums { get; set; }

                public TestInput()
                {
                    Nums = new List<int>();
                }

                public int MaxNum()
                {
                    return Nums.Max();
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    // Read input from file.
                    //TestInput input = ReadTestInput("live.txt");

                    // As example, load straight.
                    TestInput input = new TestInput
                    {
                        NumCases = 20,
                        Nums = new List<int>
                        {
                            1740798996, 1257431873, 2147483643, 602519112, 858320077,
                            1048039120, 415485223, 874566596, 1022907856, 65,
                            421330820, 1041493518, 5, 1328649093, 1941554117,
                            4225, 2082925, 0, 1, 3,
                        }
                    };

                    var maxNum = input.MaxNum();
                    Dictionary<int, int> memo = new Dictionary<int, int>();
                    foreach (var num in input.Nums)
                    {
                        if (!memo.ContainsKey(num))
                            memo.Add(num, 0);
                    }

                    DoMemoize(maxNum, memo);

                    StringBuilder sb = new StringBuilder();
                    foreach (var num in input.Nums)
                    {
                        //Console.WriteLine(memo[num]);
                        sb.AppendLine(memo[num].ToString());
                    }
                    Console.Write(sb.ToString());
                    var blah = Console.Read();
                    //File.WriteAllText("out.txt", sb.ToString());
                }

                private static int DoMemoize(int num, Dictionary<int, int> memo)
                {
                    var highSquare = (int)Math.Floor(Math.Sqrt(num));
                    var squares = CreateSquareLookup(highSquare);
                    var relSquares = squares.Keys.ToList();
                    Debug.WriteLine("Starting - " + num.ToString());
                    Debug.WriteLine("RelSquares.Count = {0}", relSquares.Count);

                    int sum = 0;
                    var index = 0;
                    foreach (var square in relSquares)
                    {
                        foreach (var inner in relSquares.Skip(index))
                        {
                            sum = squares[square] + squares[inner];
                            if (memo.ContainsKey(sum))
                                memo[sum]++;
                        }
                        index++;
                    }
                    if (memo.ContainsKey(num))
                        return memo[num];
                    return 0;
                }

                private static TestInput ReadTestInput(string fileName)
                {
                    var lines = File.ReadAllLines(fileName);
                    var input = new TestInput();
                    input.NumCases = int.Parse(lines[0]);
                    foreach (var lin in lines.Skip(1))
                    {
                        input.Nums.Add(int.Parse(lin));
                    }
                    return input;
                }

                public static Dictionary<int, int> CreateSquareLookup(int maxNum)
                {
                    var dict = new Dictionary<int, int>();
                    int square;
                    // note: Enumerable.Range(0, maxNum) yields 0..maxNum-1, excluding maxNum itself
                    foreach (var num in Enumerable.Range(0, maxNum))
                    {
                        square = num * num;
                        dict[num] = square;
                    }
                    return dict;
                }
            }
        }

    Thanks for taking a look.

    UPDATE: Changing the combos function slightly results in a pretty big performance boost (from 8 min to 3:45):

        /// Old and Busted...
        let rec combosOld range = seq {
            let rangeCache = Seq.cache range
            let count = ref 0
            for inner in rangeCache do
                for outer in Seq.skip !count rangeCache do
                    yield (inner, outer)
                count := !count + 1 }

        /// The New Hotness...
        let rec combos maxNum = seq {
            for i in 0..maxNum do
                for j in i..maxNum do
                    yield i,j }
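    A note on where the remaining gap likely comes from, with a variant sketch: Seq.skip builds a fresh lazy wrapper and walks the sequence from the start each time it is called, so the original combos pipeline does quadratic enumerator work on top of the pair generation, and the per-pair lookups in calcSquares add further constant cost; the C# version at least skips over a cached List<int>. Plain nested integer loops avoid both, while keeping the same memo dictionary (untimed here, so treat it as a hypothesis to benchmark):

        let countDoubleSquares (num: int) (memo: Dictionary<int, int>) =
            let maxSquare = int (System.Math.Sqrt(float num))
            // nested loops: no Seq.skip re-enumeration, no square lookup table
            for i in 0 .. maxSquare do
                for j in i .. maxSquare do
                    let v = i * i + j * j
                    if memo.ContainsKey(v) then
                        memo.[v] <- memo.[v] + 1
            memo.[num]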

  • InputDispatcher Error

    - by StarDust
        INFO/ActivityManager(68): Process com.example (pid 390) has died.
        ERROR/InputDispatcher(68): channel '406ed580 com.example/com.example.afeTest (server)' ~ Consumer closed input channel or an error occurred. events=0x8
        ERROR/InputDispatcher(68): channel '406ed580 com.example/com.example.afeTest (server)' ~ Channel is unrecoverably broken and will be disposed!
        ERROR/InputDispatcher(68): Received spurious receive callback for unknown input channel. fd=165, events=0x8

    Can anyone tell me what may be the reason behind this error? I've ported native code to the Android NDK. One thing I noticed regarding fd (which may be a reason): my code uses fd_sets, which were defined in winsock2.h, but I didn't find fd_set defined that way in the android-ndk, so I included "select.h", where fd_set is a typedef in the android-ndk:

        typedef __kernel_fd_set fd_set;

    Here is the logcat:

        04-06 11:15:32.405: INFO/DEBUG(31): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        04-06 11:15:32.405: INFO/DEBUG(31): Build fingerprint: 'generic/sdk/generic:2.3.3/GRI34/101070:eng/test-keys'
        04-06 11:15:32.415: INFO/DEBUG(31): pid: 335, tid: 348 >>> com.example <<<
        04-06 11:15:32.426: INFO/DEBUG(31): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr deadbaad
        04-06 11:15:32.426: INFO/DEBUG(31): r0 deadbaad r1 0000000c r2 00000027 r3 00000000
        04-06 11:15:32.445: INFO/DEBUG(31): r4 00000080 r5 afd46668 r6 0000a000 r7 00000078
        04-06 11:15:32.445: INFO/DEBUG(31): r8 804ab00d r9 002a9778 10 00100000 fp 00000001
        04-06 11:15:32.445: INFO/DEBUG(31): ip ffffffff sp 44295d10 lr afd19375 pc afd15ef0 cpsr 00000030
        04-06 11:15:32.756: INFO/DEBUG(31): #00 pc 00015ef0 /system/lib/libc.so
        04-06 11:15:32.756: INFO/DEBUG(31): #01 pc 00013852 /system/lib/libc.so
        04-06 11:15:32.767: INFO/DEBUG(31): code around pc:
        04-06 11:15:32.785: INFO/DEBUG(31): afd15ed0 68241c23 d1fb2c00 68dae027 d0042a00
        04-06 11:15:32.785: INFO/DEBUG(31): afd15ee0 20014d18 6028447d 48174790 24802227
        04-06 11:15:32.785: INFO/DEBUG(31): afd15ef0 f7f57002 2106eb56 ec92f7f6 0563aa01
        04-06 11:15:32.796: INFO/DEBUG(31): afd15f00 60932100 91016051 1c112006 e818f7f6
        04-06 11:15:32.807: INFO/DEBUG(31): afd15f10 2200a905 f7f62002 f7f5e824 2106eb42
        04-06 11:15:32.815: INFO/DEBUG(31): code around lr:
        04-06 11:15:32.815: INFO/DEBUG(31): afd19354 b0834a0d 589c447b 26009001 686768a5
        04-06 11:15:32.825: INFO/DEBUG(31): afd19364 220ce008 2b005eab 1c28d003 47889901
        04-06 11:15:32.836: INFO/DEBUG(31): afd19374 35544306 d5f43f01 2c006824 b003d1ee
        04-06 11:15:32.836: INFO/DEBUG(31): afd19384 bdf01c30 000281a8 ffffff88 1c0fb5f0
        04-06 11:15:32.846: INFO/DEBUG(31): afd19394 43551c3d a904b087 1c16ac01 604d9004
        04-06 11:15:32.856: INFO/DEBUG(31): stack:
        04-06 11:15:32.856: INFO/DEBUG(31): 44295cd0 00000408
        04-06 11:15:32.867: INFO/DEBUG(31): 44295cd4 afd18407 /system/lib/libc.so
        04-06 11:15:32.875: INFO/DEBUG(31): 44295cd8 afd4270c /system/lib/libc.so
        04-06 11:15:32.875: INFO/DEBUG(31): 44295cdc afd426b8 /system/lib/libc.so
        04-06 11:15:32.885: INFO/DEBUG(31): 44295ce0 00000000
        04-06 11:15:32.896: INFO/DEBUG(31): 44295ce4 afd19375 /system/lib/libc.so
        04-06 11:15:32.896: INFO/DEBUG(31): 44295ce8 804ab00d /data/data/com.example/lib/libAFE.so
        04-06 11:15:32.896: INFO/DEBUG(31): 44295cec afd183d9 /system/lib/libc.so
        04-06 11:15:32.906: INFO/DEBUG(31): 44295cf0 00000078
        04-06 11:15:32.906: INFO/DEBUG(31): 44295cf4 00000000
        04-06 11:15:32.906: INFO/DEBUG(31): 44295cf8 afd46668
        04-06 11:15:32.906: INFO/DEBUG(31): 44295cfc 0000a000 [heap]
        04-06 11:15:32.916: INFO/DEBUG(31): 44295d00 00000078
        04-06 11:15:32.927: INFO/DEBUG(31): 44295d04 afd18677 /system/lib/libc.so
        04-06 11:15:32.927: INFO/DEBUG(31): 44295d08 df002777
        04-06 11:15:32.945: INFO/DEBUG(31): 44295d0c e3a070ad
        04-06 11:15:32.945: INFO/DEBUG(31): #00 44295d10 002c43a0 [heap]
        04-06 11:15:32.945: INFO/DEBUG(31): 44295d14 002a9900 [heap]
        04-06 11:15:32.956: INFO/DEBUG(31): 44295d18 afd46608
        04-06 11:15:32.966: INFO/DEBUG(31): 44295d1c afd11010 /system/lib/libc.so
        04-06 11:15:32.976: INFO/DEBUG(31): 44295d20 002c4298 [heap]
        04-06 11:15:32.976: INFO/DEBUG(31): 44295d24 fffffbdf
        04-06 11:15:33.006: INFO/DEBUG(31): 44295d28 000000da
        04-06 11:15:33.006: INFO/DEBUG(31): 44295d2c afd46450
        04-06 11:15:33.006: INFO/DEBUG(31): 44295d30 000001b4
        04-06 11:15:33.026: INFO/DEBUG(31): 44295d34 afd13857 /system/lib/libc.so
        04-06 11:15:33.026: INFO/DEBUG(31): #01 44295d38 afd46450
        04-06 11:15:33.035: INFO/DEBUG(31): 44295d3c afd13857 /system/lib/libc.so
        04-06 11:15:33.056: INFO/DEBUG(31): 44295d40 804ab00d /data/data/com.example/lib/libAFE.so
        04-06 11:15:33.056: INFO/DEBUG(31): 44295d44 44295e8c
        04-06 11:15:33.056: INFO/DEBUG(31): 44295d48 804ab00d /data/data/com.example/lib/libAFE.so
        04-06 11:15:33.056: INFO/DEBUG(31): 44295d4c 804bfec3 /data/data/com.example/lib/libAFE.so
        04-06 11:15:33.056: INFO/DEBUG(31): 44295d50 002c43a0 [heap]
        04-06 11:15:33.066: INFO/DEBUG(31): 44295d54 44295e8c
        04-06 11:15:33.066: INFO/DEBUG(31): 44295d58 804ab00d /data/data/com.example/lib/libAFE.so
        04-06 11:15:33.076: INFO/DEBUG(31): 44295d5c 002a9778 [heap]
        04-06 11:15:33.085: INFO/DEBUG(31): 44295d60 00000078
        04-06 11:15:33.085: INFO/DEBUG(31): 44295d64 afd14769 /system/lib/libc.so
        04-06 11:15:33.085: INFO/DEBUG(31): 44295d68 44295e8c
        04-06 11:15:33.085: INFO/DEBUG(31): 44295d6c 805d9763 /data/data/com.example/lib/libAFE.so
        04-06 11:15:33.085: INFO/DEBUG(31): 44295d70 44295e8c
        04-06 11:15:33.085: INFO/DEBUG(31): 44295d74 8051dc35 /data/data/com.example/lib/libAFE.so
        04-06 11:15:33.085: INFO/DEBUG(31): 44295d78 0000003a
        04-06 11:15:33.085: INFO/DEBUG(31): 44295d7c 002a9900 [heap]
        04-06 11:15:37.126: DEBUG/Zygote(33): Process 335 terminated by signal (11)
        04-06 11:15:37.146: INFO/ActivityManager(68): Process com.example (pid 335) has died.
        04-06 11:15:37.178: ERROR/InputDispatcher(68): channel '406f03a0 com.example/com.example.afeTest (server)' ~ Consumer closed input channel or an error occurred. events=0x8
        04-06 11:15:37.178: ERROR/InputDispatcher(68): channel '406f03a0 com.example/com.example.afeTest (server)' ~ Channel is unrecoverably broken and will be disposed!
        04-06 11:15:37.185: INFO/BootReceiver(68): Copying /data/tombstones/tombstone_09 to DropBox (SYSTEM_TOMBSTONE)
        04-06 11:15:37.576: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 266K, 47% free 4404K/8199K, external 3520K/3903K, paused 306ms
        04-06 11:15:37.835: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 203K, 47% free 4457K/8391K, external 3520K/3903K, paused 120ms
        04-06 11:15:37.886: INFO/WindowManager(68): WIN DEATH: Window{406f03a0 com.example/com.example.afeTest paused=false}
        04-06 11:15:38.095: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 67K, 47% free 4518K/8391K, external 3511K/3903K, paused 94ms
        04-06 11:15:38.095: INFO/dalvikvm-heap(68): Grow heap (frag case) to 10.575MB for 196628-byte allocation
        04-06 11:15:38.126: DEBUG/dalvikvm(126): GC_EXPLICIT freed 110K, 51% free 2903K/5895K, external 4701K/5293K, paused 2443ms
        04-06 11:15:38.217: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 1K, 46% free 4708K/8647K, external 3511K/3903K, paused 96ms
        04-06 11:15:38.225: INFO/WindowManager(68): WIN DEATH: Window{406f72f8 com.example/com.example.afeTest paused=false}
        04-06 11:15:38.405: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 492K, 50% free 4345K/8647K, external 3511K/3903K, paused 96ms
        04-06 11:15:38.485: ERROR/InputDispatcher(68): Received spurious receive callback for unknown input channel. fd=164, events=0x8
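    For what it's worth, the fd_set typedef itself is an unlikely culprit: the NDK's select() is plain POSIX and is used the same way as on any Linux, while Winsock's version differs mainly in taking socket handles and ignoring the first argument. A minimal sketch of portable usage against the NDK headers:

        #include <sys/select.h>

        /* Wait up to one second for sock to become readable.
           Returns >0 if ready, 0 on timeout, -1 on error. */
        int wait_readable(int sock)
        {
            fd_set readfds;
            struct timeval tv;

            FD_ZERO(&readfds);      /* clear the set */
            FD_SET(sock, &readfds); /* watch our socket */

            tv.tv_sec = 1;
            tv.tv_usec = 0;

            /* POSIX wants highest fd + 1 as the first argument */
            return select(sock + 1, &readfds, NULL, NULL, &tv);
        }

    A more telling clue is the tombstone's fault addr deadbaad: 0xdeadbaad is the marker bionic writes when libc itself aborts, commonly after detecting heap corruption, so a buffer overrun or double free in the ported native code is a likely suspect; the InputDispatcher errors are just the system noticing afterwards that the process died.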

  • Drupal Ctools Form Wizard in a Block

    - by Iamjon
    Hi everyone, I created a custom module that has a CTools multistep form. It's basically a copy of http://www.nicklewis.org/using-chaos-tools-form-wizard-build-multistep-forms-drupal-6. The form works: I can see it if I go to the URL I made for it. For the life of me, though, I can't get the multistep form to show up in a block. Any clues?

/**
 * Implementation of hook_block().
 */
function mycrazymodule_block($op = 'list', $delta = 0, $edit = array()) {
  switch ($op) {
    case 'list':
      $blocks[0]['info'] = t('SFT Getting Started');
      $blocks[1]['info'] = t('SFT Contact US');
      $blocks[2]['info'] = t('SFT News Letter');
      return $blocks;
    case 'view':
      switch ($delta) {
        case '0':
          $block['subject'] = t('SFT Getting Started Subject');
          $block['content'] = mycrazymodule_wizard();
          break;
        case '1':
          $block['subject'] = t('SFT Contact US Subject');
          $block['content'] = t('SFT Contact US content');
          break;
        case '2':
          $block['subject'] = t('SFT News Letter Subject');
          $block['content'] = t('SFT News Letter content');
          break;
      }
      return $block;
  }
}

/**
 * Implementation of hook_menu().
 */
function mycrazymodule_menu() {
  $items['hellocowboy'] = array(
    'title' => 'Two Step Form',
    'page callback' => 'mycrazymodule_wizard',
    'access arguments' => array('access content'),
  );
  return $items;
}

/**
 * Menu callback for the multistep form.
 * $step is whatever arg one is -- and will refer to the keys listed in the
 * $form_info['order'] and $form_info['forms'] arrays.
 */
function mycrazymodule_wizard() {
  $step = arg(1);

  // Required includes for the wizard.
  $form_state = array();
  ctools_include('wizard');
  ctools_include('object-cache');

  // The array that holds the two forms and their options.
  $form_info = array(
    'id' => 'getting_started',
    'path' => "hellocowboy/%step",
    'show trail' => FALSE,
    'show back' => FALSE,
    'show cancel' => FALSE,
    'show return' => FALSE,
    'next text' => 'Submit',
    'next callback' => 'getting_started_add_subtask_next',
    'finish callback' => 'getting_started_add_subtask_finish',
    'return callback' => 'getting_started_add_subtask_finish',
    'order' => array(
      'basic' => t('Step 1: Basic Info'),
      'lecture' => t('Step 2: Choose Lecture'),
    ),
    'forms' => array(
      'basic' => array('form id' => 'basic_info_form'),
      'lecture' => array('form id' => 'choose_lecture_form'),
    ),
  );

  $form_state = array(
    'cache name' => NULL,
  );

  // No matter the step, you load your values from the callback page.
  $getstart = getting_started_get_page_cache(NULL);
  if (!$getstart) {
    // Set the form to the first step -- we have no data.
    $step = current(array_keys($form_info['order']));
    $getstart = new stdClass();
    // Create the cache.
    ctools_object_cache_set('getting_started', $form_state['cache name'], $getstart);
    //print_r($getstart);
  }

  // THIS IS WHERE ALL FORM DATA IS STORED.
  $form_state['getting_started_obj'] = $getstart;

  // And this is the witchcraft that makes it work.
  $output = ctools_wizard_multistep_form($form_info, $step, $form_state);
  return $output;
}

function basic_info_form(&$form, &$form_state) {
  $getstart = &$form_state['getting_started_obj'];
  $form['firstname'] = array(
    '#weight' => '0',
    '#type' => 'textfield',
    '#title' => t('firstname'),
    '#size' => 60,
    '#maxlength' => 255,
    '#required' => TRUE,
  );
  $form['lastname'] = array(
    '#weight' => '1',
    '#type' => 'textfield',
    '#title' => t('lastname'),
    '#required' => TRUE,
    '#size' => 60,
    '#maxlength' => 255,
  );
  $form['phone'] = array(
    '#weight' => '2',
    '#type' => 'textfield',
    '#title' => t('phone'),
    '#required' => TRUE,
    '#size' => 60,
    '#maxlength' => 255,
  );
  $form['email'] = array(
    '#weight' => '3',
    '#type' => 'textfield',
    '#title' => t('email'),
    '#required' => TRUE,
    '#size' => 60,
    '#maxlength' => 255,
  );
  $form['newsletter'] = array(
    '#weight' => '4',
    '#type' => 'checkbox',
    '#title' => t('I would like to receive the newsletter'),
    '#required' => TRUE,
    '#return_value' => 1,
    '#default_value' => 1,
  );
  $form_state['no buttons'] = TRUE;
}

function basic_info_form_validate(&$form, &$form_state) {
  $email = $form_state['values']['email'];
  $phone = $form_state['values']['phone'];
  if (valid_email_address($email) != TRUE) {
    form_set_error('email', t('Where is your email?'));
  }
  //if (strlen($phone) > 0 && !ereg('^[0-9]{1,3}-[0-9]{3}-[0-9]{3,4}-[0-9]{3,4}$', $phone)) {
  //  form_set_error('phone', t('Phone number must be in format xxx-xxx-nnnn-nnnn.'));
  //}
}

function basic_info_form_submit(&$form, &$form_state) {
  // Grab the variables.
  $firstname = check_plain($form_state['values']['firstname']);
  $lastname = check_plain($form_state['values']['lastname']);
  $email = check_plain($form_state['values']['email']);
  $phone = check_plain($form_state['values']['phone']);
  $newsletter = $form_state['values']['newsletter'];

  // Send the form and grab the lead id.
  $leadid = send_first_form($lastname, $firstname, $email, $phone, $newsletter);

  // Put it into the form.
  $form_state['getting_started_obj']->firstname = $firstname;
  $form_state['getting_started_obj']->lastname = $lastname;
  $form_state['getting_started_obj']->email = $email;
  $form_state['getting_started_obj']->phone = $phone;
  $form_state['getting_started_obj']->newsletter = $newsletter;
  $form_state['getting_started_obj']->leadid = $leadid;
}

function choose_lecture_form(&$form, &$form_state) {
  $one = 'event 1';
  $two = 'event 2';
  $three = 'event 3';
  $getstart = &$form_state['getting_started_obj'];
  $form['lecture'] = array(
    '#weight' => '5',
    '#default_value' => 'two',
    '#options' => array(
      'one' => $one,
      'two' => $two,
      'three' => $three,
    ),
    '#type' => 'radios',
    '#title' => t('Select Workshop'),
    '#required' => TRUE,
  );
  $form['attendees'] = array(
    '#weight' => '6',
    '#default_value' => 'one',
    '#options' => array(
      'one' => t('I will be arriving alone'),
      'two' => t('I will be arriving with a guest'),
    ),
    '#type' => 'radios',
    '#title' => t('Attendees'),
    '#required' => TRUE,
  );
  $form_state['no buttons'] = TRUE;
}

/**
 * Same idea as the previous step's submit.
 */
function choose_lecture_form_submit(&$form, &$form_state) {
  $workshop = $form_state['values']['lecture'];
  $leadid = $form_state['getting_started_obj']->leadid;
  $attendees = $form_state['values']['attendees'];
  $form_state['getting_started_obj']->lecture = $workshop;
  $form_state['getting_started_obj']->attendees = $attendees;
  send_second_form($workshop, $attendees, $leadid);
}

/* ---- PART 3: CTOOLS CALLBACKS -- these usually don't have to be very unique ---- */

/**
 * Callback generated when the add page process is finished.
 * This is where you'd normally save.
 */
function getting_started_add_subtask_finish(&$form_state) {
  dpm($form_state);
  $getstart = &$form_state['getting_started_obj'];
  drupal_set_message('mycrazymodule ' . $getstart->name . ' successfully deployed');
  // Get id
  // Clear the cache.
  ctools_object_cache_clear('getting_started', $form_state['cache name']);
  $form_state['redirect'] = 'hellocowboy';
}

/**
 * Callback for the proceed step.
 */
function getting_started_add_subtask_next(&$form_state) {
  dpm($form_state);
  $getstart = &$form_state['getting_started_obj'];
  $cache = ctools_object_cache_set('getting_started', $form_state['cache name'], $getstart);
}

/* ---- PART 4: CTOOLS FORM STORAGE HANDLERS -- these usually don't have to be very unique ---- */

/**
 * Remove an item from the object cache.
 */
function getting_started_clear_page_cache($name) {
  ctools_object_cache_clear('getting_started', $name);
}

/**
 * Get the cached changes to a given task handler.
 */
function getting_started_get_page_cache($name) {
  $cache = ctools_object_cache_get('getting_started', $name);
  return $cache;
}

// Salesforce functions.
function send_first_form($lastname, $firstname, $email, $phone, $newsletter) {
  $send = array(
    "LastName" => $lastname,
    "FirstName" => $firstname,
    "Email" => $email,
    "Phone" => $phone,
    "Newsletter__c" => $newsletter,
  );
  $sf = salesforce_api_connect();
  $response = $sf->client->create(array($send), 'Lead');
  dpm($response);
  return $response->id;
}

function send_second_form($workshop, $attendees, $leadid) {
  $send = array(
    "Id" => $leadid,
    "Number_Of_Pepole__c" => "2",
  );
  $sf = salesforce_api_connect();
  $response = $sf->client->update(array($send), 'Lead');
  dpm($response, 'the final response');
  return $response->id;
}
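One likely culprit, offered as a hedged guess rather than a confirmed fix: when the wizard renders inside a block, arg(1) is the second path segment of whatever page the block happens to appear on, not one of the wizard's step keys, so ctools_wizard_multistep_form() never receives a valid step. A minimal sketch of a workaround, assuming mycrazymodule_wizard() is changed to accept an optional $step argument that falls back to arg(1) when NULL (the wrapper name is illustrative):

/**
 * Sketch: block content callback that forces a valid step key when the
 * block is shown on a page other than the wizard's own path.
 */
function mycrazymodule_wizard_block() {
  // Only trust arg(1) when we are actually on the wizard's own path.
  $step = (arg(0) == 'hellocowboy') ? arg(1) : NULL;
  if (empty($step)) {
    // Fall back to the first step key in $form_info['order'].
    $step = 'basic';
  }
  return mycrazymodule_wizard($step);
}

With that in place, case '0' in hook_block() would call mycrazymodule_wizard_block() instead of mycrazymodule_wizard().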

    Read the article

  • Why is phpseclib producing incompatible certs?

    - by chacham15
    Why is it that when I try to use a certificate/key pair generated from phpseclib, the OpenSSL server code errors out? Certs/keys generated from OpenSSL work fine. How do I fix this? Certificate/key generation, taken straight from the phpseclib documentation:

<?php
include('File/X509.php');
include('Crypt/RSA.php');

// create private key / x.509 cert for stunnel / website
$privKey = new Crypt_RSA();
extract($privKey->createKey());
$privKey->loadKey($privatekey);

$pubKey = new Crypt_RSA();
$pubKey->loadKey($publickey);
$pubKey->setPublicKey();

$subject = new File_X509();
$subject->setDNProp('id-at-organizationName', 'phpseclib demo cert');
//$subject->removeDNProp('id-at-organizationName');
$subject->setPublicKey($pubKey);

$issuer = new File_X509();
$issuer->setPrivateKey($privKey);
$issuer->setDN($subject->getDN());

$x509 = new File_X509();
//$x509->setStartDate('-1 month'); // default: now
//$x509->setEndDate('+1 year');    // default: +1 year

$result = $x509->sign($issuer, $subject);

echo "the stunnel.pem contents are as follows:\r\n\r\n";
echo $privKey->getPrivateKey();
echo "\r\n";
echo $x509->saveX509($result);
echo "\r\n";
?>

OpenSSL sample SSL server, taken straight from the OpenSSL example code:

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <memory.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <openssl/rsa.h>       /* SSLeay stuff */
#include <openssl/crypto.h>
#include <openssl/x509.h>
#include <openssl/pem.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

#define CHK_NULL(x) if ((x)==NULL) exit (1)
#define CHK_ERR(err,s) if ((err)==-1) { perror(s); exit(1); }
#define CHK_SSL(err) if ((err)==-1) { ERR_print_errors_fp(stderr); exit(2); }

int main (int argc, char *argv[])
{
  int err;
  int listen_sd;
  int sd;
  struct sockaddr_in sa_serv;
  struct sockaddr_in sa_cli;
  size_t client_len;
  SSL_CTX* ctx;
  SSL* ssl;
  X509* client_cert;
  char* str;
  char buf [4096];
  SSL_METHOD *meth;

  /* SSL preliminaries. We keep the certificate and key with the context. */
  SSL_load_error_strings();
  SSLeay_add_ssl_algorithms();
  meth = SSLv23_server_method();
  ctx = SSL_CTX_new (meth);
  if (!ctx) { ERR_print_errors_fp(stderr); exit(2); }

  if (SSL_CTX_use_certificate_file(ctx, argv[1], SSL_FILETYPE_PEM) <= 0) {
    ERR_print_errors_fp(stderr);
    exit(3);
  }
  if (SSL_CTX_use_PrivateKey_file(ctx, argv[2], SSL_FILETYPE_PEM) <= 0) {
    ERR_print_errors_fp(stderr);
    exit(4);
  }
  if (!SSL_CTX_check_private_key(ctx)) {
    fprintf(stderr,"Private key does not match the certificate public key\n");
    exit(5);
  }

  /* ----------------------------------------------- */
  /* Prepare TCP socket for receiving connections */
  listen_sd = socket (AF_INET, SOCK_STREAM, 0); CHK_ERR(listen_sd, "socket");
  memset (&sa_serv, '\0', sizeof(sa_serv));
  sa_serv.sin_family = AF_INET;
  sa_serv.sin_addr.s_addr = INADDR_ANY;
  sa_serv.sin_port = htons (1111); /* Server Port number */
  err = bind(listen_sd, (struct sockaddr*) &sa_serv, sizeof (sa_serv));
  CHK_ERR(err, "bind");

  /* Receive a TCP connection. */
  err = listen (listen_sd, 5); CHK_ERR(err, "listen");
  client_len = sizeof(sa_cli);
  sd = accept (listen_sd, (struct sockaddr*) &sa_cli, (unsigned int*)&client_len);
  CHK_ERR(sd, "accept");
  close (listen_sd);
  printf ("Connection from %lx, port %x\n", sa_cli.sin_addr.s_addr, sa_cli.sin_port);

  /* ----------------------------------------------- */
  /* TCP connection is ready. Do server side SSL. */
  ssl = SSL_new (ctx); CHK_NULL(ssl);
  SSL_set_fd (ssl, sd);
  err = SSL_accept (ssl); CHK_SSL(err);

  /* Get the cipher - opt */
  printf ("SSL connection using %s\n", SSL_get_cipher (ssl));

  /* Get client's certificate (note: beware of dynamic allocation) - opt */
  client_cert = SSL_get_peer_certificate (ssl);
  if (client_cert != NULL) {
    printf ("Client certificate:\n");
    str = X509_NAME_oneline (X509_get_subject_name (client_cert), 0, 0);
    CHK_NULL(str);
    printf ("\t subject: %s\n", str);
    OPENSSL_free (str);
    str = X509_NAME_oneline (X509_get_issuer_name (client_cert), 0, 0);
    CHK_NULL(str);
    printf ("\t issuer: %s\n", str);
    OPENSSL_free (str);
    /* We could do all sorts of certificate verification stuff here
       before deallocating the certificate. */
    X509_free (client_cert);
  } else
    printf ("Client does not have certificate.\n");

  /* DATA EXCHANGE - Receive message and send reply. */
  err = SSL_read (ssl, buf, sizeof(buf) - 1); CHK_SSL(err);
  buf[err] = '\0';
  printf ("Got %d chars:'%s'\n", err, buf);
  err = SSL_write (ssl, "I hear you.", strlen("I hear you.")); CHK_SSL(err);

  /* Clean up. */
  close (sd);
  SSL_free (ssl);
  SSL_CTX_free (ctx);
  return 1;
}
/* EOF - serv.cpp */

This program errors out with the following (the error is printed on the call to SSL_write):

Connection from 100007f, port a7ff
SSL connection using (NONE)
Client does not have certificate.
Got 0 chars:''
82673:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-44/src/ssl/s3_pkt.c:539:

Here is the relevant code referenced by the error:

int ssl3_write_bytes(SSL *s, int type, const void *buf_, int len)
{
  const unsigned char *buf=buf_;
  unsigned int tot,n,nw;
  int i;

  s->rwstate=SSL_NOTHING;
  tot=s->s3->wnum;
  s->s3->wnum=0;

  if (SSL_in_init(s) && !s->in_handshake)
  {
    i=s->handshake_func(s);
    if (i < 0) return(i);
    if (i == 0)
    {
      SSLerr(SSL_F_SSL3_WRITE_BYTES,SSL_R_SSL_HANDSHAKE_FAILURE);
      return -1;
    }
  }
  ...etc
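One hedged observation about the sample server itself, independent of how the certificate was generated: SSL_accept() returns 0 for a controlled handshake failure, and the CHK_SSL macro above only tests for -1, so a failed handshake slips through silently and only surfaces later at SSL_write(), which is consistent with the "SSL connection using (NONE)" output. A small diagnostic tweak makes the failure visible at the accept:

/* Diagnostic sketch: check SSL_accept() for <= 0 rather than relying on
 * CHK_SSL, which only tests for -1 and therefore misses the 0 return
 * that signals a controlled handshake failure. */
err = SSL_accept(ssl);
if (err <= 0) {
    fprintf(stderr, "SSL_accept failed, SSL_get_error() = %d\n",
            SSL_get_error(ssl, err));
    ERR_print_errors_fp(stderr);
    exit(2);
}

With the error surfaced at the handshake, the underlying cause (for example, a certificate or key that OpenSSL cannot use) is much easier to pin down.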

    Read the article

  • Python - pickling fails for numpy.void objects

    - by I82Much
    >>> idmapfile = open("idmap", mode="w") >>> pickle.dump(idMap, idmapfile) >>> idmapfile.close() >>> idmapfile = open("idmap") >>> unpickled = pickle.load(idmapfile) >>> unpickled == idMap False idMap[1] {1537: (552, 1, 1537, 17.793827056884766, 3), 1540: (4220, 1, 1540, 19.31205940246582, 3), 1544: (592, 1, 1544, 18.129131317138672, 3), 1675: (529, 1, 1675, 18.347782135009766, 3), 1550: (4048, 1, 1550, 19.31205940246582, 3), 1424: (1528, 1, 1424, 19.744396209716797, 3), 1681: (1265, 1, 1681, 19.596025466918945, 3), 1560: (3457, 1, 1560, 20.530569076538086, 3), 1690: (477, 1, 1690, 17.395542144775391, 3), 1691: (554, 1, 1691, 13.446117401123047, 3), 1436: (3010, 1, 1436, 19.596025466918945, 3), 1434: (3183, 1, 1434, 19.744396209716797, 3), 1441: (3570, 1, 1441, 20.589576721191406, 3), 1435: (476, 1, 1435, 19.640911102294922, 3), 1444: (527, 1, 1444, 17.98480224609375, 3), 1478: (1897, 1, 1478, 19.596025466918945, 3), 1575: (614, 1, 1575, 19.371648788452148, 3), 1586: (2189, 1, 1586, 19.31205940246582, 3), 1716: (3470, 1, 1716, 19.158674240112305, 3), 1590: (2278, 1, 1590, 19.596025466918945, 3), 1463: (991, 1, 1463, 19.31205940246582, 3), 1594: (1890, 1, 1594, 19.596025466918945, 3), 1467: (1087, 1, 1467, 19.31205940246582, 3), 1596: (3759, 1, 1596, 19.744396209716797, 3), 1602: (3011, 1, 1602, 20.530569076538086, 3), 1547: (490, 1, 1547, 17.994071960449219, 3), 1605: (658, 1, 1605, 19.31205940246582, 3), 1606: (1794, 1, 1606, 16.964881896972656, 3), 1719: (1826, 1, 1719, 19.596025466918945, 3), 1617: (583, 1, 1617, 11.894925117492676, 3), 1492: (3441, 1, 1492, 20.500667572021484, 3), 1622: (3215, 1, 1622, 19.31205940246582, 3), 1628: (2761, 1, 1628, 19.744396209716797, 3), 1502: (1563, 1, 1502, 19.596025466918945, 3), 1632: (1108, 1, 1632, 15.457141876220703, 3), 1468: (3779, 1, 1468, 19.596025466918945, 3), 1642: (3970, 1, 1642, 19.744396209716797, 3), 1518: (612, 1, 1518, 18.570245742797852, 3), 1647: (854, 1, 1647, 16.964881896972656, 3), 1650: (2099, 1, 1650, 20.439058303833008, 3), 1651: (540, 1, 1651, 18.552841186523438, 3), 1653: (613, 1, 1653, 19.237197875976563, 3), 1532: (537, 1, 1532, 18.885730743408203, 3)} >>> unpickled[1] {1537: (64880, 1638, 56700, -1.0808743559293829e+18, 152), 1540: (64904, 1638, 0, 0.0, 0), 1544: (54472, 1490, 0, 0.0, 0), 1675: (6464, 1509, 0, 0.0, 0), 1550: (43592, 1510, 0, 0.0, 0), 1424: (43616, 1510, 0, 0.0, 0), 1681: (0, 0, 0, 0.0, 0), 1560: (400, 152, 400, 2.1299736657737219e-43, 0), 1690: (408, 152, 408, 2.7201111331839077e+26, 34), 1435: (424, 152, 61512, 1.0122952080313192e-39, 0), 1436: (400, 152, 400, 20.250289916992188, 3), 1434: (424, 152, 62080, 1.0122952080313192e-39, 0), 1441: (400, 152, 400, 12.250144958496094, 3), 1691: (424, 152, 42608, 15.813941955566406, 3), 1444: (400, 152, 400, 19.625289916992187, 3), 1606: (424, 152, 42432, 5.2947192852601414e-22, 41), 1575: (400, 152, 400, 6.2537390010262572e-36, 0), 1586: (424, 152, 42488, 1.0122601755697111e-39, 0), 1716: (400, 152, 400, 6.2537390010262572e-36, 0), 1590: (424, 152, 64144, 1.0126357235581501e-39, 0), 1463: (400, 152, 400, 6.2537390010262572e-36, 0), 1594: (424, 152, 32672, 17.002994537353516, 3), 1467: (400, 152, 400, 19.750289916992187, 3), 1596: (424, 152, 7176, 1.0124003054161436e-39, 0), 1602: (400, 152, 400, 18.500289916992188, 3), 1547: (424, 152, 7000, 1.0124003054161436e-39, 0), 1605: (400, 152, 400, 20.500289916992188, 3), 1478: (424, 152, 42256, -6.0222748507426518e+30, 222), 1719: (400, 152, 400, 6.2537390010262572e-36, 0), 1617: (424, 152, 16472, 
1.0124283313854301e-39, 0), 1492: (400, 152, 400, 6.2537390010262572e-36, 0), 1622: (424, 152, 35304, 1.0123190301052127e-39, 0), 1628: (400, 152, 400, 6.2537390010262572e-36, 0), 1502: (424, 152, 63152, 19.627988815307617, 3), 1632: (400, 152, 400, 19.375289916992188, 3), 1468: (424, 152, 38088, 1.0124213248931084e-39, 0), 1642: (400, 152, 400, 6.2537390010262572e-36, 0), 1518: (424, 152, 63896, 1.0127436235399031e-39, 0), 1647: (400, 152, 400, 6.2537390010262572e-36, 0), 1650: (424, 152, 53424, 16.752857208251953, 3), 1651: (400, 152, 400, 19.250289916992188, 3), 1653: (424, 152, 50624, 1.0126497365427934e-39, 0), 1532: (400, 152, 400, 6.2537390010262572e-36, 0)} The keys come out fine, the values are screwed up. I tried same thing loading file in binary mode; didn't fix the problem. Any idea what I'm doing wrong? Edit: Here's the code with binary. Note that the values are different in the unpickled object. >>> idmapfile = open("idmap", mode="wb") >>> pickle.dump(idMap, idmapfile) >>> idmapfile.close() >>> idmapfile = open("idmap", mode="rb") >>> unpickled = pickle.load(idmapfile) >>> unpickled==idMap False >>> unpickled[1] {1537: (12176, 2281, 56700, -1.0808743559293829e+18, 152), 1540: (0, 0, 15934, 2.7457842047810522e+26, 108), 1544: (400, 152, 400, 4.9518498821046956e+27, 53), 1675: (408, 152, 408, 2.7201111331839077e+26, 34), 1550: (456, 152, 456, -1.1349175514578289e+18, 152), 1424: (432, 152, 432, 4.5939047815653343e-40, 11), 1681: (408, 152, 408, 2.1299736657737219e-43, 0), 1560: (376, 152, 376, 2.1299736657737219e-43, 0), 1690: (376, 152, 376, 2.1299736657737219e-43, 0), 1435: (376, 152, 376, 2.1299736657737219e-43, 0), 1436: (376, 152, 376, 2.1299736657737219e-43, 0), 1434: (376, 152, 376, 2.1299736657737219e-43, 0), 1441: (376, 152, 376, 2.1299736657737219e-43, 0), 1691: (376, 152, 376, 2.1299736657737219e-43, 0), 1444: (376, 152, 376, 2.1299736657737219e-43, 0), 1606: (25784, 2281, 376, -3.2883343074537754e+26, 34), 1575: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1586: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1716: (24240, 2281, 376, -3.0093091599657311e-35, 26), 1590: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1463: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1594: (24240, 2281, 376, -4123208450048.0, 196), 1467: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1596: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1602: (25784, 2281, 376, -5.9963281433905448e+26, 76), 1547: (25784, 2281, 376, -218106240.0, 139), 1605: (25784, 2281, 376, -3.7138649803377281e+27, 56), 1478: (376, 152, 376, 2.1299736657737219e-43, 0), 1719: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1617: (25784, 2281, 376, -1.4411779941597184e+17, 237), 1492: (25784, 2281, 376, 2.8596493694487798e-30, 80), 1622: (25784, 2281, 376, 184686084096.0, 93), 1628: (1336, 152, 1336, 3.1691839245470052e+29, 179), 1502: (1272, 152, 1272, -5.2042207205116645e-17, 99), 1632: (1208, 152, 1208, 2.1299736657737219e-43, 0), 1468: (1144, 152, 1144, 2.1299736657737219e-43, 0), 1642: (1080, 152, 1080, 2.1299736657737219e-43, 0), 1518: (1016, 152, 1016, 4.0240902787680023e+35, 145), 1647: (952, 152, 952, -985172619034624.0, 237), 1650: (888, 152, 888, 12094787289088.0, 66), 1651: (824, 152, 824, 2.1299736657737219e-43, 0), 1653: (760, 152, 760, 0.00018310768064111471, 238), 1532: (696, 152, 696, 8.8978061885676389e+26, 125)} OK I've isolated the problem, but don't know why it's so. First, apparently what I'm pickling are not tuples (though they look like it), but instead numpy.void types. 
Here is a session to illustrate the problem.

>>> first = run0.detections[0]
>>> first
(1, 19, 1578, 82.637763977050781, 1)
>>> type(first)
<type 'numpy.void'>
>>> firstTuple = tuple(first)
>>> theFile = open("pickleTest", "w")
>>> pickle.dump(first, theFile)
>>> theTupleFile = open("pickleTupleTest", "w")
>>> pickle.dump(firstTuple, theTupleFile)
>>> theFile.close()
>>> theTupleFile.close()
>>> first
(1, 19, 1578, 82.637763977050781, 1)
>>> firstTuple
(1, 19, 1578, 82.637764, 1)
>>> theFile = open("pickleTest", "r")
>>> theTupleFile = open("pickleTupleTest", "r")
>>> unpickledTuple = pickle.load(theTupleFile)
>>> unpickledVoid = pickle.load(theFile)
>>> type(unpickledVoid)
<type 'numpy.void'>
>>> type(unpickledTuple)
<type 'tuple'>
>>> unpickledTuple
(1, 19, 1578, 82.637764, 1)
>>> unpickledTuple == firstTuple
True
>>> unpickledVoid == first
False
>>> unpickledVoid
(7936, 1705, 56700, -1.0808743559293829e+18, 152)
>>> first
(1, 19, 1578, 82.637763977050781, 1)
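A hedged workaround sketch, following directly from the isolation above: since plain tuples round-trip through pickle correctly while numpy.void records do not, convert the records to tuples before dumping (and always open pickle files in binary mode). The helper name is illustrative, and idMap is the asker's dict of dicts:

import pickle
import numpy as np

def to_plain(obj):
    # Recursively replace numpy.void records with plain tuples.
    if isinstance(obj, np.void):
        return tuple(obj)
    if isinstance(obj, dict):
        return dict((k, to_plain(v)) for k, v in obj.items())
    return obj

with open("idmap", "wb") as idmapfile:   # binary mode matters for pickle
    pickle.dump(to_plain(idMap), idmapfile)

with open("idmap", "rb") as idmapfile:
    unpickled = pickle.load(idmapfile)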

    Read the article

  • What's New in ASP.NET 4

    - by Navaneeth
    The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release. This topic contains the following sections: ASP.NET Core Services ASP.NET Web Forms ASP.NET MVC Dynamic Data ASP.NET Chart Control Visual Web Developer Enhancements Web Application Deployment with Visual Studio 2010 Enhancements to ASP.NET Multi-Targeting ASP.NET Core Services ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage. Extensible Output Caching Since the time that ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation — generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with memory requirements for other parts of a Web application. ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content. These storage options can include local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory, while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers. You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element. For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes. By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache. The defaultProvider attribute for ASP.NET is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for the defaultProvider attribute. In addition, you can select different output-cache providers for individual controls and for individual requests, and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method.
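As a concrete illustration of the extensibility point, the following is a minimal sketch of a custom provider; the class name and its in-memory dictionary backing store are illustrative only (a real provider would target disk, a distributed cache, or cloud storage), but the four overrides shown are the ones the OutputCacheProvider base class requires:

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

// Sketch of a custom output-cache provider with an illustrative
// in-memory backing store.
public class SimpleMemoryOutputCacheProvider : OutputCacheProvider
{
    private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> Store =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public override object Get(string key)
    {
        Tuple<object, DateTime> entry;
        if (Store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
            return entry.Item1;
        return null;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Per the provider contract, Add returns any existing entry
        // rather than overwriting it.
        object existing = Get(key);
        if (existing != null)
            return existing;
        Store[key] = Tuple.Create(entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        Store[key] = Tuple.Create(entry, utcExpiry);
    }

    public override void Remove(string key)
    {
        Tuple<object, DateTime> ignored;
        Store.TryRemove(key, out ignored);
    }
}

Such a class would then be registered in the providers subsection described above, for example with <add name="SimpleMemoryCache" type="SimpleMemoryOutputCacheProvider" /> (the type name is shortened here; a real registration would use the assembly-qualified name).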
The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example: <%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %> Preloading Web Applications Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code during the Application_Load method in the Global.asax file. To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization prior to processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load-balancer that the application was initialized and ready to accept HTTP traffic. To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file: <applicationPools> <add name="MyApplicationPool" startMode="AlwaysRunning" /> </applicationPools> Because a single application pool can contain multiple applications, you specify individual applications to be automatically started by using the following configuration in the applicationHost.config file: <sites> <site name="MySite" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PrewarmMyCache" > <!-- Additional content --> </application> </site> </sites> <!-- Additional content --> <serviceAutoStartProviders> <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" /> </serviceAutoStartProviders> When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS 7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. When it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example: public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // Perform initialization. } } After your initialization code runs in the Preload method and after the method returns, the ASP.NET application is ready to process requests. Permanently Redirecting a Page Content in Web applications is often moved over the lifetime of the application. This can lead to links that are out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL.
However, the Redirect method issues an HTTP 302 (Found) response (which is used for a temporary redirect). This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example: RedirectPermanent("/newpath/foroldcontent.aspx"); Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects. Session State Compression By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that invokes an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large. ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. By using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You can set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled configuration option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStream class. The following example shows how to set this attribute. <sessionState mode="SqlServer" sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate" allowCustomSqlDatabase="true" compressionEnabled="true" /> ASP.NET Web Forms Web Forms has been a core feature in ASP.NET since the release of ASP.NET 1.0. Many enhancements have been made in this area for ASP.NET 4, such as the following: The ability to set meta tags. More control over view state. Support for recently introduced browsers and devices. Easier ways to work with browser capabilities. Support for using ASP.NET routing with Web Forms. More control over generated IDs. The ability to persist selected rows in data controls. More control over rendered HTML in the FormView and ListView controls. Filtering support for data source controls. Enhanced support for Web standards and accessibility Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example: <head id="Head1" runat="server"> <title>Untitled Page</title> <meta name="keywords" content="keyword1, keyword2" /> <meta name="description" content="Description of my page" /> </head> These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription. Enabling View State for Individual Controls A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state.
View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause a significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for a few controls that need it instead of disabling it for many that do not need it. For more information, see Control.ViewStateMode. Support for Recently Introduced Browsers and Devices ASP.NET includes a feature that is named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object, which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research in Motion BlackBerry smart phones, and Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list: •blackberry.browser •chrome.browser •Default.browser •firefox.browser •gateway.browser •generic.browser •ie.browser •iemobile.browser •iphone.browser •opera.browser •safari.browser A New Way to Define Browser Capabilities ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider that in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file. This file resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, or a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper. Routing in ASP.NET 4 ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example: http://website/products.aspx?categoryid=12 By using routing, you can use the following URL to render the same information: http://website/products/software The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. The new features include the following: The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler.
The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters. The RouteUrl expression provides a simple way to create a routed URL in markup. The RouteValue expression provides a simple way to extract URL parameter values in markup. The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter). You no longer have to change the Web.config file to enable routing. For more information about routing, see the following topics: ASP.NET Routing Walkthrough: Using ASP.NET Routing in a Web Forms Application How to: Define Routes for Web Forms Applications How to: Construct URLs from Routes How to: Access URL Parameters in a Routed Page Setting Client IDs The new ClientIDMode property makes it easier to write client script that references HTML elements rendered for server controls. Increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices and you want to use client script to make a Web service call and update individual prices in the list as they change without refreshing the entire page. Typically you get a reference to an HTML element in client script by using the document.getElementById method. You pass to this method the value of the id attribute of the HTML element you want to reference. In the case of elements that are rendered for ASP.NET server controls, earlier versions of ASP.NET could make this difficult or impossible. You were not always able to predict what id values ASP.NET would generate, or ASP.NET could generate very long id values. The problem was especially difficult for data controls that would generate multiple rows for a single instance of the control in your markup. ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and simpler. For more information about how to use the new algorithms, see the following topics: ASP.NET Web Server Control Identification Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript How to: Access Controls from JavaScript by ID Persisting Row Selection in Data Controls The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, it is more desirable not to select any rows on page 2. ASP.NET 4 supports Persisted Selection, a new feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is a much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects.
You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example: <asp:GridView id="GridView2" runat="server" EnablePersistedSelection="true"> </asp:GridView> FormView Control Enhancements The FormView control is enhanced to make it easier to style the content of the control with CSS. In previous versions of ASP.NET, the FormView control rendered its contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. The FormView control supports RenderOuterTable, a property in ASP.NET 4. When this property is set to false, as shown in the following example, the table tags are not rendered. This makes it easier to apply CSS style to the contents of the control. <asp:FormView ID="FormView1" runat="server" RenderOuterTable="false"> For more information, see FormView Web Server Control Overview. ListView Control Enhancements The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5. <asp:ListView ID="ListView1" runat="server"> <LayoutTemplate> <asp:PlaceHolder ID="ItemPlaceHolder" runat="server"></asp:PlaceHolder> </LayoutTemplate> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup: <asp:ListView ID="ListView1" runat="server"> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> For more information, see ListView Web Server Control Overview. Filtering Data with the QueryExtender Control A very common task for developers who create data-driven Web pages is to filter data. This traditionally has been performed by building Where clauses in data source controls. This approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. The QueryExtender control relies on LINQ, but you do not need to know how to write LINQ queries to use it. The QueryExtender control supports a variety of filter options. The following list describes the QueryExtender filter options:
SearchExpression - Searches a field or fields for string values and compares them to a specified string value.
RangeExpression - Searches a field or fields for values in a range specified by a pair of values.
PropertyExpression - Compares a specified value to a property value in a field. If the expression evaluates to true, the data that is being examined is returned.
OrderByExpression - Sorts data by a specified column and sort direction.
CustomExpression - Calls a function that defines a custom filter in the page.
For more information, see the QueryExtender Web Server Control Overview.
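A sketch of what this looks like in markup, assuming a LINQ to SQL data context named NorthwindDataContext, a data source named ProductsDataSource, and a search TextBox named SearchTextBox (all three names are illustrative, not from the original text):

<asp:TextBox ID="SearchTextBox" runat="server" />
<asp:LinqDataSource ID="ProductsDataSource" runat="server"
    ContextTypeName="NorthwindDataContext" TableName="Products" />
<asp:QueryExtender ID="ProductsQueryExtender" runat="server"
    TargetControlID="ProductsDataSource">
  <asp:SearchExpression SearchType="StartsWith" DataFields="ProductName">
    <asp:ControlParameter ControlID="SearchTextBox" />
  </asp:SearchExpression>
</asp:QueryExtender>

The QueryExtender applies the expression to whatever query the target data source produces, which is why the same markup pattern works for both LinqDataSource and EntityDataSource.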
Enhanced Support for Web Standards and Accessibility Earlier versions of ASP.NET controls sometimes render markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility. CSS for Controls that Can be Disabled In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled: <asp:Label id="Label1" runat="server"   Text="Test" Enabled="false" /> In ASP.NET 3.5, the previous control settings generate the following HTML: <span id="Label1" disabled="disabled">Test</span> In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously: <span id="Label1" class="aspNetDisabled">Test</span> You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property. CSS for Validation Controls In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control: <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"   ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" /> ASP.NET 3.5 renders the following HTML for the validator control: <span id="RequiredFieldValidator1"   style="color:Red;visibility:hidden;">RequiredFieldValidator</span> By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example: <span id="RequiredFieldValidator1"   style="visibility:hidden;">RequiredFieldValidator</span> Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages. CSS for the Hidden Fields Div Element ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new class value is shown in the following example: <div class="aspNetHidden"> CSS for the Table, Image, and ImageButton Controls By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5: <table id="Table2" border="0"> The Image control and the ImageButton control also do this.
Because this is not necessary and provides visual formatting information that should be provided by using CSS, the attribute is not generated in ASP.NET 4. CSS for the UpdatePanel and UpdateProgress Controls In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTML elements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example: <asp:UpdatePanel runat="server" class="myStyle"> </asp:UpdatePanel> The following HTML is rendered for this markup: <div id="ctl00_MainContent_UpdatePanel1" class="myStyle"> </div> Eliminating Unnecessary Outer Tables In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control: FormView Login PasswordRecovery ChangePassword If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false. Layout Templates for Wizard Controls In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property. New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration: UnorderedList. This option causes the HTML output to be formatted by using ul and li elements instead of a table. OrderedList. This option causes the HTML output to be formatted by using ol and li elements instead of a table. For examples of HTML that is rendered for the new options, see the RepeatLayout enumeration. Header and Footer Elements for the Table Control In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default. CSS and ARIA Support for the Menu Control In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways: The generated HTML is structured as an unordered list (ul and li elements). CSS is used for visual formatting. The menu behaves in accordance with ARIA standards for keyboard access. You can use arrow keys to navigate menu items. (For information about ARIA, see Accessibility in Visual Studio and ASP.NET.) ARIA role and property attributes are added to the generated HTML. (Attributes are added by using JavaScript instead of being included in the HTML, to avoid generating HTML that would cause markup validation errors.)
Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated. Valid XHTML for the HtmlForm Control In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4. Maintaining Backward Compatibility in Control Rendering An existing ASP.NET Web site might have code in it that assumes that controls are rendering HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, you can set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of an ASP.NET 4 Web site, as shown in the following example: <system.web>   <pages controlRenderingCompatibilityVersion="3.5"/> </system.web> If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.) ASP.NET MVC ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD). Web sites created using ASP.NET MVC have a modular architecture. This allows members of a team to work independently on the various modules and can be used to improve collaboration. For example, developers can work on the model and controller layers (data and logic), while designers work on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2. Dynamic Data Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as the following: A RAD experience for quickly building a data-driven Web site. Automatic validation that is based on constraints defined in the data model. The ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project. For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map. Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get the following benefits: Setting default values for data fields.
Dynamic Data enables you to provide default values at run time for fields in a data control. Interacting with the database without creating and registering a data model. Automatically validating the data that is entered by the user without writing any code. For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls. New Field Templates for URLs and E-mail Addresses ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url using the DataTypeAttribute attribute. For EmailAddress objects, the field is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Objects typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields. [DataType(DataType.EmailAddress)] public object HomeEmail { get; set; } [DataType(DataType.Url)] public object Website { get; set; } Creating Links with the DynamicHyperLink Control Dynamic Data uses the new routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For information, see How to: Create Table Action Links in Dynamic Data Support for Inheritance in the Data Model Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table. It might also contain CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy and then add more fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data. Support for Many-to-Many Relationships (Entity Framework Only) The Entity Framework has rich support for many-to-many relationships between tables, which is implemented by exposing the relationship as a collection on an Entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data. New Attributes to Control Display and Support Enumerations The DisplayAttribute has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create UI for displaying and editing enumeration values. 
The template maps the values from the database to the names in the enumeration. Enhanced Support for Filters Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns. The filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it will be displayed. An additional enhancement is that filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater. ASP.NET Chart Control The ASP.NET chart server control enables you to create ASP.NET pages and applications that have simple, intuitive charts for complex statistical or financial analysis. The chart control supports the following features: Data series, chart areas, axes, legends, labels, titles, and more. Data binding. Data manipulation, such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering. Statistical formulas and financial formulas. Advanced chart appearance, such as 3-D, anti-aliasing, lighting, and perspective. Events and customizations. Interactivity and Microsoft Ajax. Support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add Microsoft Ajax Library and jQuery scripts to your Web applications. For more information, see Chart Web Server Control Overview. Visual Web Developer Enhancements The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript. Improved CSS Compatibility The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio. HTML and JScript Snippets In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense Snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript, alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets. JScript IntelliSense Enhancements In Visual Studio 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience.
IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved to analyze large libraries of script and to display IntelliSense with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and to support diverse coding styles. Documentation comments are now parsed as you type and are immediately leveraged by IntelliSense. Web Application Deployment with Visual Studio 2010 For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated: Creating an IIS application on the destination computer and configuring IIS settings. Copying files to the destination computer. Changing Web.config settings that must be different in the destination environment. Propagating changes to data or data structures in SQL Server databases that are used by the Web application. For more information about Web application deployment, see ASP.NET Deployment Content Map. Enhancements to ASP.NET Multi-Targeting ASP.NET 4 adds new features to the multi-targeting feature to make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version. However, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only language features and properties available in the targeted version of the .NET Framework are shown. For more information about multi-targeting, see the following topics: .NET Framework Multi-Targeting for ASP.NET Web Projects ASP.NET Side-by-Side Execution Overview How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework

    Read the article

  • Custom ASP.NET Routing to an HttpHandler

    - by Rick Strahl
As of version 4.0, ASP.NET natively supports routing via the now built-in System.Web.Routing namespace. Routing features are automatically integrated into the HttpRuntime via a few custom interfaces.

New Web Forms Routing Support
In ASP.NET 4.0 there are a host of improvements, including routing support baked into Web Forms via a RouteData property available on the Page class and a RouteCollection.MapPageRoute() method that makes it easy to route to Web Forms pages. Mapping ASP.NET page routes is as simple as setting up the routes with MapPageRoute:

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

void RegisterRoutes(RouteCollection routes)
{
    routes.MapPageRoute("StockQuote", "StockQuote/{symbol}", "StockQuote.aspx");
    routes.MapPageRoute("StockQuotes", "StockQuotes/{symbolList}", "StockQuotes.aspx");
}

and then, to access the route data in the page, you can use the new Page class RouteData property to retrieve the dynamic route information:

public partial class StockQuote1 : System.Web.UI.Page
{
    protected StockQuote Quote = null;

    protected void Page_Load(object sender, EventArgs e)
    {
        string symbol = RouteData.Values["symbol"] as string;
        StockServer server = new StockServer();
        Quote = server.GetStockQuote(symbol);
        // display stock data in Page View
    }
}

Simple, quick and doesn’t require much explanation. If you’re using Web Forms, most of your routing needs should be served just fine by this simple mechanism. Kudos to the ASP.NET team for putting this in the box and making it easy!

How Routing Works
Handling routing in ASP.NET involves these steps:
Registering routes
Creating a custom RouteHandler to retrieve an HttpHandler
Attaching RouteData to your HttpHandler
Picking up route information in your request code
Registering routes makes ASP.NET aware of the routes you want to handle via the static RouteTable.Routes collection. You basically add routes to this collection to let ASP.NET know which URL patterns it should watch for. You typically hook up routes in a RegisterRoutes method that fires in Application_Start, as I did in the example above, to ensure routes are added only once when the application first starts up. When you create a route, you pass in a RouteHandler instance, which ASP.NET caches and reuses as routes are matched. Once registered, ASP.NET monitors the routes, and if a match is found just prior to HttpHandler instantiation, ASP.NET uses the RouteHandler registered for the route and calls GetHttpHandler() on it to retrieve an HttpHandler instance. The RouteHandler.GetHttpHandler() method is responsible for creating an instance of an HttpHandler that is to handle the request and – if necessary – for assigning any additional custom data to the handler. At minimum you probably want to pass the RouteData to the handler so the handler can identify the request based on the route data available. To do this you typically add a RouteData property to your handler and then assign the property from the RouteHandler’s request context. This is essentially how Page.RouteData comes into being, and this approach should work well for any custom handler implementation that requires RouteData. It’s a shame that ASP.NET doesn’t have a top-level intrinsic object accessible off the HttpContext object to provide route data more generically, but since RouteData is directly tied to HttpHandlers and not all handlers support it, that might cause some confusion about when it’s actually available.
The bottom line is that if you want to hold on to RouteData you have to assign it to a custom property of the handler, or else pass it to the handler via the Context.Items[] collection, where it can be retrieved on an as-needed basis. It’s important to understand that routing is hooked up via RouteHandlers that are responsible for loading HttpHandler instances. RouteHandlers are invoked for every request that matches a route, and through this RouteHandler instance the handler gains access to the current RouteData. Because of this logic it’s important to understand that routing is really tied to HttpHandlers and not available prior to handler instantiation, which is pretty late in the HttpRuntime’s request pipeline. In other words, routing works with handlers, but not earlier in the pipeline within modules. Specifically, ASP.NET calls RouteHandler.GetHttpHandler() from the PostResolveRequestCache HttpRuntime pipeline event, which fires just before handler resolution.

Non-Page Routing – You need to build custom RouteHandlers
If you need to route to a custom Http Handler or other non-Page (and non-MVC) endpoint in the HttpRuntime, there is no generic mapping support available. You need to create a custom RouteHandler that can manage creating an instance of an HttpHandler that is fired in response to a routed request. Depending on what you are doing, this process can be simple or fairly involved, as your code is responsible for deciding, based on the route data provided, which handler to instantiate and, more importantly, how to pass the route data on to the handler. Luckily, creating a RouteHandler is easy: you implement the IRouteHandler interface, which has only a single GetHttpHandler(RequestContext context) method. In this method you can pick up the requestContext.RouteData, instantiate the HttpHandler of choice, and assign the RouteData to it. Then pass back the handler and you’re done. Here’s a simple example of a GetHttpHandler() method that dynamically creates a handler based on a passed-in handler type:

/// <summary>
/// Retrieves an Http Handler based on the type specified in the constructor
/// </summary>
/// <param name="requestContext"></param>
/// <returns></returns>
IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
{
    IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

    // If we're dealing with a Callback Handler
    // pass the RouteData for this route to the Handler
    if (handler is CallbackHandler)
        ((CallbackHandler)handler).RouteData = requestContext.RouteData;

    return handler;
}

Note that this code checks for a specific type of handler and, if it matches, assigns the RouteData to this handler. This is optional, but it is quite a common scenario if you want to work with RouteData.
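For reference, here is the other half of that contract – a minimal sketch of a handler that exposes the RouteData property a RouteHandler like the one above can assign to. The StockQuoteHandler name and its plain-text output are hypothetical; any IHttpHandler with a writable RouteData property works the same way:

using System.Web;
using System.Web.Routing;

public class StockQuoteHandler : IHttpHandler
{
    // Assigned by the RouteHandler in GetHttpHandler() before the request executes.
    public RouteData RouteData { get; set; }

    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Fall back to the query string when the handler is reached without a route.
        string symbol = RouteData != null
            ? RouteData.Values["symbol"] as string
            : context.Request.QueryString["symbol"];

        context.Response.ContentType = "text/plain";
        context.Response.Write("Requested symbol: " + symbol);
    }
}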
If the handler you need to instantiate isn’t under your control but you still need to pass RouteData to the handler code, an alternative is to pass the RouteData via the HttpContext.Items collection:

IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
{
    IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;
    requestContext.HttpContext.Items["RouteData"] = requestContext.RouteData;
    return handler;
}

The code in the handler implementation can then pick up the RouteData from the context collection as needed:

RouteData routeData = HttpContext.Current.Items["RouteData"] as RouteData;

This isn’t as clean as having an explicit RouteData property, but it does have the advantage that the route data is visible anywhere in the handler’s code chain. It’s definitely preferable to create a custom property on your handler, but the Context work-around works in a pinch when you don’t own the handler code and have dynamic code executing as part of the handler execution.

An Example of a Custom RouteHandler: Attribute-Based Route Implementation
In this post I’m going to discuss a custom routing implementation I built for my CallbackHandler class in the West Wind Web & Ajax Toolkit. CallbackHandler can be very easily used for creating AJAX, REST and POX requests following RPC-style method mapping. You can pass parameters via URL query string, POST data or raw data structures, and you can retrieve results as JSON, XML or raw string/binary data. It’s a quick and easy way to build service interfaces with no fuss. As a quick review, here’s how CallbackHandler works: you create an Http Handler that derives from CallbackHandler, you implement methods that have a [CallbackMethod] attribute, and that’s it. Here’s an example of a CallbackHandler implementation in an ashx.cs based handler:

// RestService.ashx.cs
public class RestService : CallbackHandler
{
    [CallbackMethod]
    public StockQuote GetStockQuote(string symbol)
    {
        StockServer server = new StockServer();
        return server.GetStockQuote(symbol);
    }

    [CallbackMethod]
    public StockQuote[] GetStockQuotes(string symbolList)
    {
        StockServer server = new StockServer();
        string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
        return server.GetStockQuotes(symbols);
    }
}

CallbackHandler makes it super easy to create a method on the server, pass data to it via POST, QueryString or raw JSON/XML data, and then retrieve the results easily back in various formats. This works wonderfully and I’ve used these tools in many projects for myself and with clients. But one thing missing has been the ability to create clean URLs. Typical URLs looked like this:

http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuote&symbol=msft
http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuotes&symbolList=msft,intc,gld,slw,mwe&format=xml

which works and is clear enough, but also clearly very ugly. It would be much nicer if URLs could look like this:

http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuote/msft
http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuotes/msft,intc,gld,slw?format=xml

(the virtual root in this sample is WestwindWebToolkit/Samples and StockQuote/{symbol} is the route; if you use FireFox, try the JSONView plug-in to make it easier to view JSON content). So, taking a cue from the WCF REST tools that use RouteUrls, I set out to create a way to specify RouteUrls for each of the endpoints.
The change made basically allows changing the above to:

[CallbackMethod(RouteUrl = "RestService/StockQuote/{symbol}")]
public StockQuote GetStockQuote(string symbol)
{
    StockServer server = new StockServer();
    return server.GetStockQuote(symbol);
}

[CallbackMethod(RouteUrl = "RestService/StockQuotes/{symbolList}")]
public StockQuote[] GetStockQuotes(string symbolList)
{
    StockServer server = new StockServer();
    string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
    return server.GetStockQuotes(symbols);
}

where a RouteUrl is specified as part of the CallbackMethod attribute. And with the changes made with RouteUrls I can now get URLs like the second set shown earlier. So how does that work? Let’s find out…

How to Create Custom Routes
As mentioned earlier, routing is made up of several steps: creating a custom RouteHandler to create HttpHandler instances; mapping the actual routes to the RouteHandler; and retrieving the RouteData and actually doing something useful with it in the HttpHandler. In the CallbackHandler routing example above this works out to something like this: create a custom RouteHandler that includes a property to track the method to call; set up the routes using Reflection against the class, looking for any RouteUrls in the CallbackMethod attribute; and add a RouteData property to the CallbackHandler so we can access the RouteData in the code of the handler.

Creating a Custom Route Handler
To make the above work I created a custom RouteHandler class that includes the actual IRouteHandler implementation as well as a generic static method to automatically register all routes marked with the [CallbackMethod(RouteUrl="…")] attribute. Here’s the code:

/// <summary>
/// Route handler that can create instances of CallbackHandler derived
/// callback classes. The route handler tracks the method name and
/// creates an instance of the service in a predictable manner
/// </summary>
/// <typeparam name="TCallbackHandler">CallbackHandler type</typeparam>
public class CallbackHandlerRouteHandler : IRouteHandler
{
    /// <summary>
    /// Method name that is to be called on this route.
    /// Set by the automatically generated RegisterRoutes invocation.
    /// </summary>
    public string MethodName { get; set; }

    /// <summary>
    /// The type of the handler we're going to instantiate.
    /// Needed so we can semi-generically instantiate the
    /// handler and call the method on it.
    /// </summary>
    public Type CallbackHandlerType { get; set; }

    /// <summary>
    /// Constructor to pass in the two required components we
    /// need to create an instance of our handler.
    /// </summary>
    /// <param name="methodName"></param>
    /// <param name="callbackHandlerType"></param>
    public CallbackHandlerRouteHandler(string methodName, Type callbackHandlerType)
    {
        MethodName = methodName;
        CallbackHandlerType = callbackHandlerType;
    }

    /// <summary>
    /// Retrieves an Http Handler based on the type specified in the constructor
    /// </summary>
    /// <param name="requestContext"></param>
    /// <returns></returns>
    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

        // If we're dealing with a Callback Handler
        // pass the RouteData for this route to the Handler
        if (handler is CallbackHandler)
            ((CallbackHandler)handler).RouteData = requestContext.RouteData;

        return handler;
    }

    /// <summary>
    /// Generic method to register all routes from a CallbackHandler
    /// that have RouteUrls defined on the [CallbackMethod] attribute
    /// </summary>
    /// <typeparam name="TCallbackHandler">CallbackHandler Type</typeparam>
    /// <param name="routes"></param>
    public static void RegisterRoutes<TCallbackHandler>(RouteCollection routes)
    {
        // find all methods
        var methods = typeof(TCallbackHandler).GetMethods(BindingFlags.Instance | BindingFlags.Public);
        foreach (var method in methods)
        {
            var attrs = method.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
            if (attrs.Length < 1)
                continue;

            CallbackMethodAttribute attr = attrs[0] as CallbackMethodAttribute;
            if (string.IsNullOrEmpty(attr.RouteUrl))
                continue;

            // Add the route
            routes.Add(method.Name,
                       new Route(attr.RouteUrl,
                                 new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));
        }
    }
}

The RouteHandler implements IRouteHandler, and its responsibility via the GetHttpHandler method is to create an HttpHandler based on the route data. When ASP.NET calls GetHttpHandler it passes a requestContext parameter which includes a requestContext.RouteData property. This parameter holds the current request’s route data as well as an instance of the current RouteHandler. If you look at GetHttpHandler() you can see that the code creates an instance of the handler we are interested in and then sets the RouteData property on the handler. This is how you can pass the current request’s RouteData to the handler. The RouteData object also has a RouteData.RouteHandler property that is available to the handler later, which is useful in order to get additional information about the current route. In our case the RouteHandler includes a MethodName property that identifies the method to execute in the handler, since that value no longer comes from the URL and we need to figure out the method name some other way. The method name is mapped explicitly when the RouteHandler is created, and here the static method that auto-registers all CallbackMethods with RouteUrls sets the method name when it creates the routes while reflecting over the methods (more on this in a minute). The important point here is that you can attach additional properties to the RouteHandler and you can then access the RouteHandler and its properties later in the handler to pick up these custom values. This is a crucial feature in that the RouteHandler serves in passing additional context to the handler so it knows what actions to perform. The automatic route registration is handled by the static RegisterRoutes<TCallbackHandler> method. This method is generic and totally reusable for any CallbackHandler type handler.
To register a CallbackHandler and any RouteUrls it has defined, you simply use code like this in Application_Start (or other application startup code):

protected void Application_Start(object sender, EventArgs e)
{
    // Register Routes for RestService
    CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes);
}

If you have multiple CallbackHandler style services you can make multiple calls to RegisterRoutes, one for each of the service types. RegisterRoutes internally uses reflection to run through all the methods of the handler, looking for CallbackMethod attributes and whether a RouteUrl is specified. If it is, a new instance of a CallbackHandlerRouteHandler is created and the name of the method and the type are set:

routes.Add(method.Name,
           new Route(attr.RouteUrl,
                     new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));

While the routing with CallbackHandlerRouteHandler is set up automatically for all methods that use the RouteUrl attribute, you can also use code to hook up those routes manually and skip using the attribute. The code for this is straightforward and just requires that you manually map each individual route to each method you want routed:

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

void RegisterRoutes(RouteCollection routes)
{
    routes.Add("StockQuote Route",
               new Route("StockQuote/{symbol}",
                         new CallbackHandlerRouteHandler("GetStockQuote", typeof(RestService))));

    routes.Add("StockQuotes Route",
               new Route("StockQuotes/{symbolList}",
                         new CallbackHandlerRouteHandler("GetStockQuotes", typeof(RestService))));
}

I think it’s clearly easier to have CallbackHandlerRouteHandler.RegisterRoutes() do this automatically for you based on RouteUrl attributes, but some people have a real aversion to attaching logic via attributes. Just realize that the option to manually create your routes is available as well.

Using the RouteData in the Handler
A RouteHandler’s responsibility is to create an HttpHandler and, as mentioned earlier, IHttpHandler natively doesn’t have any support for RouteData. In order to utilize RouteData in your handler code you have to pass the RouteData to the handler. My CallbackHandlerRouteHandler, when it creates the HttpHandler instance, assigns the custom RouteData property on the handler:

IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;
if (handler is CallbackHandler)
    ((CallbackHandler)handler).RouteData = requestContext.RouteData;
return handler;

Again, this only works if you actually add a RouteData property to your handler explicitly, as I did in my CallbackHandler implementation:

/// <summary>
/// Optionally store RouteData on this handler
/// so we can access it internally
/// </summary>
public RouteData RouteData { get; set; }

and the RouteHandler needs to set it when it creates the handler instance. Once you have the route data in your handler you can access route keys and values and also the RouteHandler.
Since my RouteHandler has a custom property for the MethodName, I can do something like this to retrieve it from within the handler (this example is actually not in the handler itself; target is a handler instance passed to the processor):

// check for Route Data method name
if (target is CallbackHandler)
{
    var routeData = ((CallbackHandler)target).RouteData;
    if (routeData != null)
        methodToCall = ((CallbackHandlerRouteHandler)routeData.RouteHandler).MethodName;
}

When I need to access the dynamic values in the route (symbol in StockQuote/{symbol}) I can retrieve them easily with the Values collection (RouteData.Values["symbol"]). In my CallbackHandler processing logic I’m basically looking for parameter names that match route parameters:

// look for parameters in the route
if (routeData != null)
{
    string parmString = routeData.Values[parameter.Name] as string;
    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
}

And with that we’ve come full circle. We’ve created a custom RouteHandler that passes the RouteData to the handler it creates. We’ve registered our routes to use the RouteHandler, and we’ve utilized the route data in our handler. For completeness’ sake, here’s the routine that executes a method call based on the parameters passed in; one of its options is to retrieve the inbound parameters off RouteData (as well as from POST data or QueryString parameters):

internal object ExecuteMethod(string method, object target, string[] parameters,
                              CallbackMethodParameterType paramType,
                              ref CallbackMethodAttribute callbackMethodAttribute)
{
    HttpRequest Request = HttpContext.Current.Request;
    object Result = null;

    // Stores parsed parameters (from string JSON or QueryString values)
    object[] adjustedParms = null;

    Type PageType = target.GetType();
    MethodInfo MI = PageType.GetMethod(method, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    if (MI == null)
        throw new InvalidOperationException("Invalid Server Method.");

    object[] methods = MI.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
    if (methods.Length < 1)
        throw new InvalidOperationException("Server method is not accessible due to missing CallbackMethod attribute");

    if (callbackMethodAttribute != null)
        callbackMethodAttribute = methods[0] as CallbackMethodAttribute;

    ParameterInfo[] parms = MI.GetParameters();
    JSONSerializer serializer = new JSONSerializer();

    RouteData routeData = null;
    if (target is CallbackHandler)
        routeData = ((CallbackHandler)target).RouteData;

    int parmCounter = 0;
    adjustedParms = new object[parms.Length];
    foreach (ParameterInfo parameter in parms)
    {
        // Retrieve parameters out of QueryString or POST buffer
        if (parameters == null)
        {
            // look for parameters in the route
            if (routeData != null)
            {
                string parmString = routeData.Values[parameter.Name] as string;
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // GET parameters are parsed as plain string values - no JSON encoding
            else if (HttpContext.Current.Request.HttpMethod == "GET")
            {
                // Look up the parameter by name
                string parmString = Request.QueryString[parameter.Name];
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // POST parameters are treated as methodParameters that are JSON encoded
            else if (paramType == CallbackMethodParameterType.Json)
                adjustedParms[parmCounter] =
                    serializer.Deserialize(Request.Params["parm" + (parmCounter + 1).ToString()],
                                           parameter.ParameterType);
            else
                adjustedParms[parmCounter] =
                    SerializationUtils.DeSerializeObject(Request.Params["parm" + (parmCounter + 1).ToString()],
                                                         parameter.ParameterType);
        }
        else if (paramType == CallbackMethodParameterType.Json)
            adjustedParms[parmCounter] = serializer.Deserialize(parameters[parmCounter], parameter.ParameterType);
        else
            adjustedParms[parmCounter] =
                SerializationUtils.DeSerializeObject(parameters[parmCounter], parameter.ParameterType);

        parmCounter++;
    }

    Result = MI.Invoke(target, adjustedParms);
    return Result;
}

The code basically uses Reflection to loop through all the parameters available on the method and tries to assign the parameters from RouteData, QueryString or POST variables. The parameters are converted into their appropriate types and then used to eventually make a Reflection-based method call. What’s sweet is that the RouteData retrieval is just another option for dealing with the inbound data in this scenario, and it adds exactly two lines of code plus the code to retrieve the MethodName I showed previously – a seriously low-impact addition that adds a lot of extra value to this endpoint callback processing implementation.

Debugging your Routes
If you create a lot of routes it’s easy to run into route conflicts where multiple routes have the same path and overlap with each other. This can be difficult to debug, especially if you are using automatically generated routes like the routes created by CallbackHandlerRouteHandler.RegisterRoutes. Luckily there’s a tool that can help you out with this nicely. Phil Haack created a RouteDebugging tool you can download and add to your project. The easiest way to add it to your project is to use NuGet (Add Library Package from your project’s References node), which adds a RouteDebug assembly to your project. Once installed you can easily debug your routes with this simple line of code, which needs to be added at application startup:

protected void Application_Start(object sender, EventArgs e)
{
    CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes);

    // Debug your routes
    RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
}

Any routed URL then displays a page with your current route data and all the routes that are mapped, along with a flag that shows which route was actually matched. This is useful – if you have any overlap of routes you will be able to see which routes are triggered, and the first one in the sequence wins. This tool has saved my ass on a few occasions – and with NuGet it’s easy to add it to your project in a few seconds and then remove it when you’re done.

Routing Around
Custom routing seems slightly complicated on first blush due to its disconnected components of RouteHandler, route registration and mapping of custom handlers. But once you understand the relationship between a RouteHandler, the RouteData and how to pass it to a handler, utilizing routing becomes a lot easier, as you can pass context from the registration to the RouteHandler and through to the HttpHandler. The most important thing to understand when building custom routing solutions is to figure out how to map URLs in such a way that the handler can figure out all the pieces it needs to process the request.
This can be via URL routing parameters and, as I did in my example, by passing additional context information as part of the RouteHandler instance that provides the proper execution context. In my case this ‘context’ was the method name, but it could be an actual static value like an enum identifying an operation or category in an application. Basically, user-supplied data comes in through the URL, and static application-internal data can be passed via RouteHandler property values. Routing can make your application URLs easier to read by non-techie types, regardless of whether you’re building service-type or REST applications, or full-on Web interfaces. Routing in ASP.NET 4.0 makes it possible to create just about any extensionless URLs you can dream up with custom RouteHandlers.

References
Sample Project – Includes the sample CallbackHandler service discussed here along with compiled versions of the Westwind.Web and Westwind.Utilities assemblies (requires .NET 4.0/VS 2010).
West Wind Web Toolkit – Includes the full implementation of CallbackHandler and the routing handler.
West Wind Web Toolkit Source Code – Contains the full source code to the Westwind.Web and Westwind.Utilities assemblies used in these samples. Includes the source described in the post (latest build in the Subversion repository).
CallbackHandler Source – Relevant code for this article in the Westwind.Web assembly tree.
JSONView FireFox Plugin – A simple FireFox plugin to easily view JSON data natively in FireFox. For IE you can use a registry hack to display JSON as raw text.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  AJAX  HTTP

    Read the article

  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site – then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the Beta release of ASP.NET MVC 4, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and how you can use OData query options with the Web API.

Creating an ASP.NET Web API Controller
The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template. A brand new API controller looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Web.Http;

namespace MyWebAPIApp.Controllers
{
    public class MovieController : ApiController
    {
    }
}

An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class.

Using jQuery to Retrieve, Insert, Update, and Delete Data
Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this:

namespace MyWebAPIApp.Models
{
    public class Movie
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Director { get; set; }
    }
}

Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page.

Getting a Single Record with the ASP.NET Web API
To support retrieving a single movie from the server, we need to add a Get method to our API controller:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using MyWebAPIApp.Models;

namespace MyWebAPIApp.Controllers
{
    public class MovieController : ApiController
    {
        public Movie GetMovie(int id)
        {
            // Return movie by id
            if (id == 1)
            {
                return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" };
            }

            // Otherwise, movie was not found
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
    }
}

In the code above, the GetMovie() method accepts the Id of a movie.
If the Id has the value 1, then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns a 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (you’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE – is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request, any API controller method with a name that starts with “Get” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this:

routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

We can invoke our GetMovie() controller action with the jQuery code in the following HTML page:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Get Movie</title>
</head>
<body>
    <div>
        Title: <span id="title"></span>
    </div>
    <div>
        Director: <span id="director"></span>
    </div>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        getMovie(1, function (movie) {
            $("#title").html(movie.Title);
            $("#director").html(movie.Director);
        });

        function getMovie(id, callback) {
            $.ajax({
                url: "/api/Movie",
                data: { id: id },
                type: "GET",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    200: function (movie) {
                        callback(movie);
                    },
                    404: function () {
                        alert("Not Found!");
                    }
                }
            });
        }
    </script>
</body>
</html>

In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMovie() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method.

Getting a Set of Records with the ASP.NET Web API
Let’s modify our Movie API controller so that it returns a collection of movies.
The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using MyWebAPIApp.Models;

namespace MyWebAPIApp.Controllers
{
    public class MovieController : ApiController
    {
        public IEnumerable<Movie> ListMovies()
        {
            return new List<Movie>
            {
                new Movie {Id=1, Title="Star Wars", Director="Lucas"},
                new Movie {Id=2, Title="King Kong", Director="Jackson"},
                new Movie {Id=3, Title="Memento", Director="Nolan"}
            };
        }
    }
}

Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method):

routes.MapHttpRoute(
    name: "ActionApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>List Movies</title>
</head>
<body>
    <div id="movies"></div>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        listMovies(function (movies) {
            var strMovies = "";
            $.each(movies, function (index, movie) {
                strMovies += "<div>" + movie.Title + "</div>";
            });
            $("#movies").html(strMovies);
        });

        function listMovies(callback) {
            $.ajax({
                url: "/api/Movie/ListMovies",
                data: {},
                type: "GET",
                contentType: "application/json;charset=utf-8"
            }).then(function (movies) {
                callback(movies);
            });
        }
    </script>
</body>
</html>

Inserting a Record with the ASP.NET Web API
Now let’s modify our Movie API controller so it supports creating new records:

public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate)
{
    // Add movieToCreate to the database and update primary key
    movieToCreate.Id = 23;

    // Build a response that contains the location of the new movie
    var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created);
    var relativePath = "/api/movie/" + movieToCreate.Id;
    response.Headers.Location = new Uri(Request.RequestUri, relativePath);
    return response;
}

The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /api/Movie – just as long as the method is invoked within the context of an HTTP POST request. The following HTML page invokes the PostMovie() method.
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Create Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        var movieToCreate = { title: "The Hobbit", director: "Jackson" };

        createMovie(movieToCreate, function (newMovie) {
            alert("New movie created with an Id of " + newMovie.Id);
        });

        function createMovie(movieToCreate, callback) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify(movieToCreate),
                type: "POST",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    201: function (newMovie) {
                        callback(newMovie);
                    }
                }
            });
        }
    </script>
</body>
</html>

This page creates a new movie (The Hobbit) by calling the createMovie() method and simply displays the Id of the new movie in an alert. The HTTP POST operation is performed with the following call to the jQuery $.ajax() method:

$.ajax({
    url: "/api/Movie",
    data: JSON.stringify(movieToCreate),
    type: "POST",
    contentType: "application/json;charset=utf-8",
    statusCode: {
        201: function (newMovie) {
            callback(newMovie);
        }
    }
});

Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code; the HttpStatusCode.Created value returned from the PostMovie() method produces that 201 status code.

Updating a Record with the ASP.NET Web API
Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request:

public void PutMovie(Movie movieToUpdate)
{
    if (movieToUpdate.Id == 1)
    {
        // Update the movie in the database
        return;
    }

    // If you can't find the movie to update
    throw new HttpResponseException(HttpStatusCode.NotFound);
}

Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Put Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        var movieToUpdate = { id: 1, title: "The Hobbit", director: "Jackson" };

        updateMovie(movieToUpdate, function () {
            alert("Movie updated!");
        });

        function updateMovie(movieToUpdate, callback) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify(movieToUpdate),
                type: "PUT",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    200: function () {
                        callback();
                    },
                    404: function () {
                        alert("Movie not found!");
                    }
                }
            });
        }
    </script>
</body>
</html>

Deleting a Record with the ASP.NET Web API
Here’s the code for deleting a movie:

public HttpResponseMessage DeleteMovie(int id)
{
    // Delete the movie from the database

    // Return status code
    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204).
The following page illustrates how you can invoke the DeleteMovie() action:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Delete Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        deleteMovie(1, function () {
            alert("Movie deleted!");
        });

        function deleteMovie(id, callback) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify({ id: id }),
                type: "DELETE",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    204: function () {
                        callback();
                    }
                }
            });
        }
    </script>
</body>
</html>

Performing Validation
How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let’s modify our Movie class so it includes some of the standard validation attributes:

using System.ComponentModel.DataAnnotations;

namespace MyWebAPIApp.Models
{
    public class Movie
    {
        public int Id { get; set; }

        [Required(ErrorMessage = "Title is required!")]
        [StringLength(5, ErrorMessage = "Title cannot be more than 5 characters!")]
        public string Title { get; set; }

        [Required(ErrorMessage = "Director is required!")]
        public string Director { get; set; }
    }
}

In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let’s modify our PostMovie() action to validate a movie before adding the movie to the database:

public HttpResponseMessage PostMovie(Movie movieToCreate)
{
    // Validate movie
    if (!ModelState.IsValid)
    {
        var errors = new JsonArray();
        foreach (var prop in ModelState.Values)
        {
            if (prop.Errors.Any())
            {
                errors.Add(prop.Errors.First().ErrorMessage);
            }
        }
        return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
    }

    // Add movieToCreate to the database and update primary key
    movieToCreate.Id = 23;

    // Build a response that contains the location of the new movie
    var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created);
    var relativePath = "/api/movie/" + movieToCreate.Id;
    response.Headers.Location = new Uri(Request.RequestUri, relativePath);
    return response;
}

If ModelState.IsValid has the value false, then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director property – can have multiple errors; in the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400 status code).
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Create Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        var movieToCreate = { title: "The Hobbit", director: "" };

        createMovie(movieToCreate,
            function (newMovie) {
                alert("New movie created with an Id of " + newMovie.Id);
            },
            function (errors) {
                var strErrors = "";
                $.each(errors, function (index, err) {
                    strErrors += "*" + err + "\n";
                });
                alert(strErrors);
            }
        );

        function createMovie(movieToCreate, success, fail) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify(movieToCreate),
                type: "POST",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    201: function (newMovie) {
                        success(newMovie);
                    },
                    400: function (xhr) {
                        var errors = JSON.parse(xhr.responseText);
                        fail(errors);
                    }
                }
            });
        }
    </script>
</body>
</html>

The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned, there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned, there was a validation error. The validation errors are retrieved from the XmlHttpRequest responseText property and displayed in an alert. (Please don’t use JavaScript alert dialogs to display validation errors; I just did it this way out of pure laziness.) The validation code in our PostMovie() method is pretty generic – there is nothing about it that is specific to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation His validation filter looks like this:

using System.Json;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

namespace MyWebAPIApp.Filters
{
    public class ValidationActionFilter : ActionFilterAttribute
    {
        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            var modelState = actionContext.ModelState;
            if (!modelState.IsValid)
            {
                dynamic errors = new JsonObject();
                foreach (var key in modelState.Keys)
                {
                    var state = modelState[key];
                    if (state.Errors.Any())
                    {
                        errors[key] = state.Errors.First().ErrorMessage;
                    }
                }
                actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
            }
        }
    }
}

And you can register the validation filter in the Application_Start() method in the Global.asax file like this:

GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter());

After you register the validation filter, validation error messages are returned automatically from any API controller action method when validation fails. You don’t need to add any special logic to any of your API controller actions to take advantage of the filter.

Querying using OData
The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org For example, here are some of the query options which you can use with OData:
· $orderby – Enables you to retrieve results in a certain order.
· $top – Enables you to retrieve a certain number of results.
· $skip – Enables you to skip over a certain number of results (use with $top for paging).
· $filter – Enables you to filter the results returned.

The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies:

public IQueryable<Movie> GetMovies()
{
    return new List<Movie>
    {
        new Movie {Id=1, Title="Star Wars", Director="Lucas"},
        new Movie {Id=2, Title="King Kong", Director="Jackson"},
        new Movie {Id=3, Title="Willow", Director="Lucas"},
        new Movie {Id=4, Title="Shrek", Director="Smith"},
        new Movie {Id=5, Title="Memento", Director="Nolan"}
    }.AsQueryable();
}

If you enter the following URL in your browser: /api/movie?$top=2&$orderby=Title then you will limit the movies returned to the top 2 in order of the movie Title. By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time. The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples:

Return every movie directed by Lucas: /api/movie?$filter=Director eq 'Lucas'
Return every movie which has a title which starts with 'S': /api/movie?$filter=startswith(Title,'S')
Return every movie which has an Id greater than 2: /api/movie?$filter=Id gt 2

The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption (a small C# client sketch that exercises these query options follows at the end of this post).

Summary
The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the Beta release of ASP.NET MVC 4. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option. I’m excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.
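As referenced above, here is a minimal console-side sketch of calling the OData query options shown in this post from C# with System.Net.Http.HttpClient instead of jQuery. The base address and port are hypothetical and need to match your Web API project, and the response body is whatever representation content negotiation selects (typically JSON for these requests):

using System;
using System.Net.Http;

class ODataQueryDemo
{
    static void Main()
    {
        // Hypothetical base address; substitute the port of your Web API project.
        var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") };

        // $top/$orderby: the first two movies ordered by Title.
        Console.WriteLine(client.GetStringAsync("api/movie?$top=2&$orderby=Title").Result);

        // $filter: every movie directed by Lucas ($filter=Director eq 'Lucas', URL-encoded).
        Console.WriteLine(client.GetStringAsync("api/movie?$filter=Director%20eq%20'Lucas'").Result);

        // $skip + $top: the second page when paging two movies at a time.
        Console.WriteLine(client.GetStringAsync("api/movie?$skip=2&$top=2&$orderby=Title").Result);
    }
}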

    Read the article

  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
    I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site — then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the release of ASP.NET 4 Beta, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and use OData when using the Web API. Creating an ASP.NET Web API Controller The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template: A brand new API controller looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { } } An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class. Using jQuery to Retrieve, Insert, Update, and Delete Data Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this: namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } public string Title { get; set; } public string Director { get; set; } } } Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page. Getting a Single Record with the ASP.NET Web API To support retrieving a single movie from the server, we need to add a Get method to our API controller: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public Movie GetMovie(int id) { // Return movie by id if (id == 1) { return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" }; } // Otherwise, movie was not found throw new HttpResponseException(HttpStatusCode.NotFound); } } } In the code above, the GetMovie() method accepts the Id of a movie. 
If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE — is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request then any API controller method with a name that starts with “GET” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); We can invoke our GetMovie() controller action with the jQuery code in the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Get Movie</title> </head> <body> <div> Title: <span id="title"></span> </div> <div> Director: <span id="director"></span> </div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> getMovie(1, function (movie) { $("#title").html(movie.Title); $("#director").html(movie.Director); }); function getMovie(id, callback) { $.ajax({ url: "/api/Movie", data: { id: id }, type: "GET", contentType: "application/json;charset=utf-8", statusCode: { 200: function (movie) { callback(movie); }, 404: function () { alert("Not Found!"); } } }); } </script> </body> </html> In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMove() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method: Getting a Set of Records with the ASP.NET Web API Let’s modify our Movie API controller so that it returns a collection of movies. 
The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public IEnumerable<Movie> ListMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=1, Title="King Kong", Director="Jackson"}, new Movie {Id=1, Title="Memento", Director="Nolan"} }; } } } Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method): routes.MapHttpRoute( name: "ActionApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>List Movies</title> </head> <body> <div id="movies"></div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> listMovies(function (movies) { var strMovies=""; $.each(movies, function (index, movie) { strMovies += "<div>" + movie.Title + "</div>"; }); $("#movies").html(strMovies); }); function listMovies(callback) { $.ajax({ url: "/api/Movie/ListMovies", data: {}, type: "GET", contentType: "application/json;charset=utf-8", }).then(function(movies){ callback(movies); }); } </script> </body> </html>     Inserting a Record with the ASP.NET Web API Now let’s modify our Movie API controller so it supports creating new records: public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) { // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /Movie/PostMovie – just as long as the method is invoked within the context of a HTTP POST request. The following HTML page invokes the PostMovie() method. 
Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>List Movies</title>
</head>
<body>
    <div id="movies"></div>

    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        listMovies(function (movies) {
            var strMovies = "";
            $.each(movies, function (index, movie) {
                strMovies += "<div>" + movie.Title + "</div>";
            });
            $("#movies").html(strMovies);
        });

        function listMovies(callback) {
            $.ajax({
                url: "/api/Movie/ListMovies",
                data: {},
                type: "GET",
                contentType: "application/json;charset=utf-8"
            }).then(function (movies) {
                callback(movies);
            });
        }
    </script>
</body>
</html>

Inserting a Record with the ASP.NET Web API

Now let's modify our Movie API controller so it supports creating new records:

public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate)
{
    // Add movieToCreate to the database and update primary key
    movieToCreate.Id = 23;

    // Build a response that contains the location of the new movie
    var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created);
    var relativePath = "/api/movie/" + movieToCreate.Id;
    response.Headers.Location = new Uri(Request.RequestUri, relativePath);
    return response;
}

The PostMovie() method in the code above accepts a movieToCreate parameter. We don't actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database.

When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response.

Because the name of our method starts with "Post", we don't need to create a custom route. The PostMovie() method can be invoked with the URL /api/Movie – just as long as the method is invoked within the context of an HTTP POST request. The following HTML page invokes the PostMovie() method:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Create Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        var movieToCreate = {
            title: "The Hobbit",
            director: "Jackson"
        };

        createMovie(movieToCreate, function (newMovie) {
            alert("New movie created with an Id of " + newMovie.Id);
        });

        function createMovie(movieToCreate, callback) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify(movieToCreate),
                type: "POST",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    201: function (newMovie) {
                        callback(newMovie);
                    }
                }
            });
        }
    </script>
</body>
</html>

This page creates a new movie (The Hobbit) by calling the createMovie() function and simply displays the Id of the new movie. The HTTP POST operation is performed with the following call to the jQuery $.ajax() method:

$.ajax({
    url: "/api/Movie",
    data: JSON.stringify(movieToCreate),
    type: "POST",
    contentType: "application/json;charset=utf-8",
    statusCode: {
        201: function (newMovie) {
            callback(newMovie);
        }
    }
});

Notice that the type of the Ajax request is POST. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(), which takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code: the HttpStatusCode.Created value returned from the PostMovie() method produces the 201 status code.

Updating a Record with the ASP.NET Web API

Here's how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request:

public void PutMovie(Movie movieToUpdate)
{
    if (movieToUpdate.Id == 1)
    {
        // Update the movie in the database
        return;
    }

    // If you can't find the movie to update
    throw new HttpResponseException(HttpStatusCode.NotFound);
}

Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Put Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        var movieToUpdate = {
            id: 1,
            title: "The Hobbit",
            director: "Jackson"
        };

        updateMovie(movieToUpdate, function () {
            alert("Movie updated!");
        });

        function updateMovie(movieToUpdate, callback) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify(movieToUpdate),
                type: "PUT",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    200: function () {
                        callback();
                    },
                    404: function () {
                        alert("Movie not found!");
                    }
                }
            });
        }
    </script>
</body>
</html>

Deleting a Record with the ASP.NET Web API

Here's the code for deleting a movie:

public HttpResponseMessage DeleteMovie(int id)
{
    // Delete the movie from the database

    // Return status code
    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204).
The following page illustrates how you can invoke the DeleteMovie() action:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Delete Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        deleteMovie(1, function () {
            alert("Movie deleted!");
        });

        function deleteMovie(id, callback) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify({ id: id }),
                type: "DELETE",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    204: function () {
                        callback();
                    }
                }
            });
        }
    </script>
</body>
</html>

Performing Validation

How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let's modify our Movie class so it includes some of the standard validation attributes:

using System.ComponentModel.DataAnnotations;

namespace MyWebAPIApp.Models
{
    public class Movie
    {
        public int Id { get; set; }

        [Required(ErrorMessage="Title is required!")]
        [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")]
        public string Title { get; set; }

        [Required(ErrorMessage="Director is required!")]
        public string Director { get; set; }
    }
}

In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters.

Now let's modify our PostMovie() action to validate a movie before adding the movie to the database:

public HttpResponseMessage PostMovie(Movie movieToCreate)
{
    // Validate movie
    if (!ModelState.IsValid)
    {
        var errors = new JsonArray();
        foreach (var prop in ModelState.Values)
        {
            if (prop.Errors.Any())
            {
                errors.Add(prop.Errors.First().ErrorMessage);
            }
        }
        return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
    }

    // Add movieToCreate to the database and update primary key
    movieToCreate.Id = 23;

    // Build a response that contains the location of the new movie
    var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created);
    var relativePath = "/api/movie/" + movieToCreate.Id;
    response.Headers.Location = new Uri(Request.RequestUri, relativePath);
    return response;
}

If ModelState.IsValid has the value false, the errors in model state are copied to a new JSON array. Each property – such as the Title and Director properties – can have multiple errors; in the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400).
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Create Movie</title>
</head>
<body>
    <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script>
    <script type="text/javascript">
        var movieToCreate = {
            title: "The Hobbit",
            director: ""
        };

        createMovie(movieToCreate,
            function (newMovie) {
                alert("New movie created with an Id of " + newMovie.Id);
            },
            function (errors) {
                var strErrors = "";
                $.each(errors, function (index, err) {
                    strErrors += "*" + err + "\n";
                });
                alert(strErrors);
            }
        );

        function createMovie(movieToCreate, success, fail) {
            $.ajax({
                url: "/api/Movie",
                data: JSON.stringify(movieToCreate),
                type: "POST",
                contentType: "application/json;charset=utf-8",
                statusCode: {
                    201: function (newMovie) {
                        success(newMovie);
                    },
                    400: function (xhr) {
                        var errors = JSON.parse(xhr.responseText);
                        fail(errors);
                    }
                }
            });
        }
    </script>
</body>
</html>

The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code in the response. If a 201 status code is returned, there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned, there was a validation error. The validation errors are retrieved from the XmlHttpRequest responseText property and displayed in an alert (please don't use JavaScript alert dialogs to display validation errors; I just did it this way out of pure laziness).

The validation code in our PostMovie() method is pretty generic. There is nothing about this code that is specific to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation

His validation filter looks like this:

using System.Json;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

namespace MyWebAPIApp.Filters
{
    public class ValidationActionFilter : ActionFilterAttribute
    {
        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            var modelState = actionContext.ModelState;
            if (!modelState.IsValid)
            {
                dynamic errors = new JsonObject();
                foreach (var key in modelState.Keys)
                {
                    var state = modelState[key];
                    if (state.Errors.Any())
                    {
                        errors[key] = state.Errors.First().ErrorMessage;
                    }
                }
                actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
            }
        }
    }
}

And you can register the validation filter in the Application_Start() method in the Global.asax file like this:

GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter());

After you register the validation filter, validation error messages are returned from any API controller action automatically when validation fails. You don't need to add any special logic to any of your API controller actions to take advantage of the filter.
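With the filter registered, the validation plumbing can come out of the individual actions. As a sketch (this simplified version is my assumption based on the paragraph above, not code from the article), PostMovie() can shrink back to its original form:

public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate)
{
    // The global ValidationActionFilter has already rejected invalid
    // requests with a 400 response, so only valid movies get this far.
    movieToCreate.Id = 23;

    var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created);
    response.Headers.Location = new Uri(Request.RequestUri, "/api/movie/" + movieToCreate.Id);
    return response;
}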
Querying using OData

The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org

For example, here are some of the query options which you can use with OData:

· $orderby – Enables you to retrieve results in a certain order.
· $top – Enables you to retrieve a certain number of results.
· $skip – Enables you to skip over a certain number of results (use with $top for paging).
· $filter – Enables you to filter the results returned.

The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies:

public IQueryable<Movie> GetMovies()
{
    return new List<Movie>
    {
        new Movie {Id=1, Title="Star Wars", Director="Lucas"},
        new Movie {Id=2, Title="King Kong", Director="Jackson"},
        new Movie {Id=3, Title="Willow", Director="Lucas"},
        new Movie {Id=4, Title="Shrek", Director="Smith"},
        new Movie {Id=5, Title="Memento", Director="Nolan"}
    }.AsQueryable();
}

If you enter the following URL in your browser:

/api/movie?$top=2&$orderby=Title

then you will limit the movies returned to the top 2 in order of the movie Title (with the data above, King Kong and Memento).

By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time.

The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples:

Return every movie directed by Lucas:
/api/movie?$filter=Director eq 'Lucas'

Return every movie which has a title which starts with 'S':
/api/movie?$filter=startswith(Title,'S')

Return every movie which has an Id greater than 2:
/api/movie?$filter=Id gt 2

The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption

Summary

The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the Beta release of ASP.NET MVC 4. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API: you learned how to return validation error messages to the client and display them using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option.

I'm excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.

    Read the article

  • The Incremental Architect’s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

1. Analyzing the requirements to find the Entry Points and their signatures
2. Designing the functionality to be executed when those Entry Points get triggered
3. Implementing the functionality according to the design, aka coding

I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought.

Here's my definition: to design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic is what does what needs to be done by software. Transformations are either done through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do).

But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn't it?

What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development.
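A tiny illustration of this point (my own sketch, not from the article): both versions below do exactly the same thing. The second adds no functionality and no quality - it only adds a name for the intent, which is what makes the code easier to understand and evolve.

// Version 1: pure logic - expressions and control statements only.
static int SumOfEvens(int[] numbers)
{
    int sum = 0;
    for (int i = 0; i < numbers.Length; i++)
        if (numbers[i] % 2 == 0)
            sum += numbers[i];
    return sum;
}

// Version 2: same behavior; the condition is extracted into a function.
static int SumOfEvens2(int[] numbers)
{
    int sum = 0;
    for (int i = 0; i < numbers.Length; i++)
        if (Is_even(numbers[i]))
            sum += numbers[i];
    return sum;
}

static bool Is_even(int n) { return n % 2 == 0; }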
To sum this up: functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It's just a theory with no experiments to prove it. But how to do functional design?

An example of functional design

Let's assume a program to de-duplicate strings. The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a, and the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

C:\>deduplicate "a, b, a, c, d, b, e, c, a"
a, b, c, d, e

…or by clicking on a GUI button. This leads to the Entry Point function to get called. It's the program's main function in case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable.

What then happens is a three step process:

1. Transform the input data from the user into a request.
2. Call the request handler.
3. Transform the output of the request handler into a tangible result for the user.

Or to phrase it a bit more generally: accept input, transform input into output, present output. This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

1. Accept string list from command line
2. De-duplicate
3. Present de-duplicated strings on standard output

And this concrete list of processing steps can easily be transformed into code:

static void Main(string[] args)
{
    var input = Accept_string_list(args);
    var output = Deduplicate(input);
    Present_deduplicated_string_list(output);
}

Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

private static string Accept_string_list(string[] args)
{
    return args[0];
}

private static void Present_deduplicated_string_list(string[] output)
{
    var line = string.Join(", ", output);
    Console.WriteLine(line);
}

Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design, you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

De-duplicate:
1. Parse the input string into a true list of strings.
2. Register each string in a dictionary/map/set. That way duplicates get cast away.
3. Transform the data structure into a list of unique strings.

Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
But now after this refinement the implementation of each step is easy again:

private static string[] Parse_string_list(string input)
{
    return input.Split(',')
                .Select(s => s.Trim())
                .ToArray();
}

private static Dictionary<string, object> Compile_unique_strings(string[] strings)
{
    return strings.Aggregate(
        new Dictionary<string, object>(),
        (agg, s) => { agg[s] = null; return agg; });
}

private static string[] Serialize_unique_strings(Dictionary<string, object> dict)
{
    return dict.Keys.ToArray();
}

With these three additional functions Main() now looks like this:

static void Main(string[] args)
{
    var input = Accept_string_list(args);
    var strings = Parse_string_list(input);
    var dict = Compile_unique_strings(strings);
    var output = Serialize_unique_strings(dict);
    Present_deduplicated_string_list(output);
}

I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

1. Accept string list from command line
2. Parse the input string into a true list of strings.
3. Register each string in a dictionary/map/set. That way duplicates get cast away.
4. Transform the data structure into a list of unique strings.
5. Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process, on the other hand, consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

In its simplest form a process can be written as a bullet point list of steps, e.g.:

- Get data from user
- Output result to user
- Transform data
- Parse data
- Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.?

1. Get data from user
2. Parse data
3. Transform data
4. Map result for output
5. Output result to user

That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion?

My recommendation: for any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
For more information on this approach - I call it "Informed TDD" - read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with.

So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example, such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, the functional design using numbered lists lacks data. It's not visible what's the input, output, and state of the processing steps.

That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it as a labeled shape with data flowing in and out: something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow.

Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give an overview.

But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed, you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later, coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

Using functional units as building blocks of functional design, processes can be depicted very easily. Here's the initial process for the example problem: for each processing step, draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer" on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input.

If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel.

Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.
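Here is a small sketch of the PoMO in code (my own illustration using the de-duplication example from above; Distinct() is a shortcut here, where the article builds the same behavior from a dictionary): neither processing step references the other; only a separate integration function knows the wiring.

// Neither function knows the other exists; each only states
// what it consumes and what it produces.
static string[] Parse_string_list(string input)
{
    return input.Split(',').Select(s => s.Trim()).ToArray();
}

static string[] Deduplicate(string[] strings)
{
    return strings.Distinct().ToArray();
}

// Only the integration knows how the parts are wired up.
// "Control" lives here, not in the parts.
static string[] Run(string input)
{
    return Deduplicate(Parse_string_list(input));
}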
By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] the nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.

Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about.

Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment.

My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units.

That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology. Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

1. Translate an input port to a function.
2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity, no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words".

In the notation, the optionality of the output is signified by an asterisk outside the brackets. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
    string word,
    Action<string> onNoStopWord)
{
    if (...check if not a stop word...)
        onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection, because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keeping functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call, or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

class Filter
{
    public void filter_stop_words(string word)
    {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

    public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split in multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line).

Here's the Flow Design for this increment: the first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside. That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not… I can always backtrack and refine a process step using functional design later once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls.
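The article shows that sequence as a code figure; here is a sketch of what it presumably looks like, using the functional unit names from the design (the exact signatures are my assumption):

static string WordWrap(string text, int maxLineLength)
{
    var words = Extract_words(text);     // text -> sequence of words
    var lines = Reformat(words, maxLineLength); // words -> sequence of lines
    return Assemble_text(lines);         // lines -> output text
}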
Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

Fortunately, Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just "phase in" the remaining processing step. Since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation, just to check that this wiring works, does not do anything to split long words yet. The input is simply passed on.
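A minimal sketch of such a pass-through step (the name and signature are my assumption, derived from the design):

// Increment 2, first cut: the unit is wired into the flow,
// but long words are not actually split yet.
static string[] Split_long_words(string[] words, int maxLineLength)
{
    return words; // pass the input straight through
}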
Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with.

Compare this to Robert C. Martin's solution.[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly, the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so, because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.

What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO, the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps becoming more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".

    Read the article

  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
    Heavily drawing inspiration from Ruby on Rails, MVC4's convention over configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and the presentation logic.

However, the MVC paradigm was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View's model, the code could look like the following:

Fig 1: To show a div or not. Server side .NET code is used in the View.

Mixing .NET code with HTML in views can soon get very messy. Wouldn't it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti-pattern like the one shown above to control whether a div should be shown or not. However, best practice would indicate that the Controller should not be aware of the div. The ShowDiv variable in the model should not exist. A controller should ideally only be used to do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model.

In this article we will see how Angular JS, a new JavaScript framework by Google, can be used effectively to build web applications where:

1. Views are pure HTML
2. Controllers (in the server sense) are pure REST based API calls
3. The presentation layer is loaded as needed from partial HTML only files

What is MVVM?

MVVM, short for Model View View Model, is a new paradigm in web development. In this paradigm, the Model and View stuff exists on the client side through JavaScript instead of being processed on the server through postbacks. These JavaScript frameworks facilitate the clear separation of the "frontend" (the data rendering logic) from the "backend", which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM because a change to the Model (through JavaScript) gets reflected in the View immediately, i.e. Model → View, and a change on the View (through manual input) gets reflected in the Model immediately, i.e. View → Model. The following figure shows this conceptually (comments are shown in red):

Fig 2: Demonstration of MVVM in action

In Fig 2, two text boxes are bound to the same variable model.myInt. Thus, changing the view manually (changing one text box through keyboard input) also changes the other textbox in real time, demonstrating the View → Model property of an MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt, thus changing the model through JavaScript. This immediately updates the view (the value in the two textboxes), demonstrating the Model → View property of an MVVM framework.

Thus we see that the model in an MVVM JavaScript framework can be regarded as "the single source of truth". This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number.

Application, Routes, Views, Controllers, Scope and Models

Angular can be used in many ways to construct web applications.
For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail, but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular).

Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts.

The following figure shows the skeleton of the root page of an SPA:

Fig 3: The skeleton of a SPA

The skeleton of the application is based on the Bootstrap starter template which can be found at: http://getbootstrap.com/examples/starter-template/

Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts:

/app/js/controllers.js
/app/js/app.js

These scripts define the routes, views and controllers which we shall come to in a moment.

Application

Notice that the body tag (Fig 3) has an extra attribute: ng-app="smsApp". Providing this tag "bootstraps" our single page application. It tells Angular to load a "module" called smsApp. This "module" is defined in /app/js/app.js:

angular.module('smsApp', ['smsApp.controllers', function () {}])

Fig 4: The definition of our application module

The line shown above declares a module called smsApp. It also declares that this module "depends" on another module called "smsApp.controllers". The smsApp.controllers module will contain all the controllers for our SPA.

Routing and Views

Notice that in the Navbar (in Fig 3) we have included two hyperlinks, to "#/app" and "#/help". This is how Angular handles routing. Since the URLs start with "#", they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework.

angular.module('smsApp', ['smsApp.controllers', function () { }])
//Configure the routes
.config(['$routeProvider', function ($routeProvider) {
    $routeProvider.when('/binding', {
        templateUrl: '/app/partials/bindingexample.html',
        controller: 'BindingController'
    });
}]);

Fig 5: The definition of a route with an associated partial view and controller

As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code "asks for" the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di

What the above code snippet is doing is telling Angular that when the URL is "#/binding", it should load the HTML snippet ("partial view") found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called "BindingController".

We have also marked the div with the class "container" (in Fig 3) with the ng-view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div.
You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire. You can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives

Controllers and Models

We have seen how we define what views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController" which is loaded when we hit the URL http://localhost:port/index.html#/binding (as we have defined in the route earlier, as shown in Fig 5). Remember that we had defined that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" defined in the route shown in Fig 5 is defined in the module smsApp.controllers:

angular.module('smsApp.controllers', [function () { }])
.controller('BindingController', ['$scope', function ($scope) {
    $scope.model = {};
    $scope.model.myInt = 6;
    $scope.addOne = function () {
        $scope.model.myInt++;
    }
}]);

Fig 6: The definition of a controller in the "smsApp.controllers" module.

The pieces are falling into place! Remember Fig 2? That was the code of a partial view that was loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also defined that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL http://localhost:22544/index.html#/binding. The button in Fig 2 was marked with the attribute ng-click="addOne()", which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController".

Scope

We can see from Fig 6 that in the definition of "BindingController", we defined a dependency on $scope and then, as usual, defined a function which "asks for" $scope as per the dependency injection pattern. So what is $scope? Any guesses? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined. This has a similar meaning to scope in a programming language like C#.

Model: The Scope is not the Model

It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason why this is a bad idea is that scope is hierarchical in Angular. Thus if we were to define a controller within another controller (nested controllers), the inner controller would inherit the scope of the parent controller. This inheritance follows JavaScript prototypal inheritance.

Let's say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through prototypal inheritance. This basically means that the child scope has a variable myInt that points to the parent scope's myInt variable. Now if we assigned the value of myInt in the parent, the child scope would be updated with the same value, as the child scope's myInt variable points to the parent scope's myInt variable.
However, if we were to assign the value of the myInt variable in the child scope, the link of that variable to the parent scope would be broken, as the variable myInt in the child scope now points to the value 6 and not to the parent scope's myInt variable.

But if we defined a variable model in the parent scope, then the child scope will also have a variable model that points to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope would change the model variable in the child scope too, as the variable points to the model variable in the parent scope. Now changing the value of $scope.model.myInt in the child scope would ALSO change the value in the parent scope. This is because the model reference in the child scope points to the model variable in the parent. We did no new assignment to the model variable in the child scope. We only changed an attribute of the model variable. Since the model variable (in the child scope) points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes the "single source of truth".

This is a tricky concept, thus it is considered good practice to NOT use scope inheritance. More info on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes

Building It: An Angular JS application using a .NET Web API Backend

Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful. We will build an application that can be used to send SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build:

Fig 7: Broad application architecture

We are going to add an HTML partial to our project. This partial will contain the form fields that will accept the phone number and message that needs to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that is run on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET WebAPI to get a history of SMS messages, add a message etc. through a REST based API. For simplicity, we will use an in-memory data structure for the purposes of creating this application.

Thus, the tasks ahead of us are:

1. Creating the REST WebAPI with GET, PUT, POST, DELETE methods
2. Creating the SmsView.html partial
3. Creating the SmsController controller with methods that are called from the SmsView.html partial
4. Adding a new route that loads the controller and the partial

1. Creating the REST WebAPI

This is a simple task that should be quite straightforward to any .NET developer.
The following listing shows our ApiController:

public class SmsMessage
{
    public string to { get; set; }
    public string message { get; set; }
}

public class SmsResource : SmsMessage
{
    public int smsId { get; set; }
}

public class SmsResourceController : ApiController
{
    public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>();
    public static int currentId = 0;

    // GET api/<controller>
    public List<SmsResource> Get()
    {
        List<SmsResource> result = new List<SmsResource>();
        foreach (int key in messages.Keys)
        {
            result.Add(messages[key]);
        }
        return result;
    }

    // GET api/<controller>/5
    public SmsResource Get(int id)
    {
        if (messages.ContainsKey(id))
            return messages[id];
        return null;
    }

    // POST api/<controller>
    public List<SmsResource> Post([FromBody] SmsMessage value)
    {
        //Synchronize on messages so we don't have id collisions
        lock (messages)
        {
            //Copy the posted message into a resource with an id
            //(the model binder creates an SmsMessage, so a plain
            //downcast would fail at runtime)
            SmsResource res = new SmsResource { to = value.to, message = value.message };
            res.smsId = currentId++;
            messages.Add(res.smsId, res);
            //SentlyPlusSmsSender.SendMessage(value.to, value.message);
            return Get();
        }
    }

    // PUT api/<controller>/5
    public List<SmsResource> Put(int id, [FromBody] SmsMessage value)
    {
        //Synchronize on messages so we don't have id collisions
        lock (messages)
        {
            if (messages.ContainsKey(id))
            {
                //Update the message
                messages[id].message = value.message;
                messages[id].to = value.to;
            }
            return Get();
        }
    }

    // DELETE api/<controller>/5
    public List<SmsResource> Delete(int id)
    {
        if (messages.ContainsKey(id))
        {
            messages.Remove(id);
        }
        return Get();
    }
}

Once this class is defined, we should be able to access the WebAPI by a simple GET request using the browser: http://localhost:port/api/SmsResource

Notice the commented line: //SentlyPlusSmsSender.SendMessage. The SentlyPlusSmsSender class is defined in the attached solution. We have shown this line as commented because we want to explain the core Angular concepts. If you load the attached solution, this line is uncommented in the source and an actual SMS will be sent!

By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        var appXmlType = config.Formatters.XmlFormatter.SupportedMediaTypes
            .FirstOrDefault(t => t.MediaType == "application/xml");
        config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
    }
}

We now have our backend REST API which we can consume from Angular!
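Before wiring up the Angular front end, you can optionally smoke-test the API from a small console program. This is a sketch, not part of the article's solution; it assumes .NET 4.5's HttpClient (or the NuGet package for earlier versions) and uses the example port 22544 mentioned above:

using System;
using System.Net.Http;
using System.Text;

class ApiSmokeTest
{
    static void Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("http://localhost:22544/") };

        // POST a message; the JSON body matches the SmsMessage class.
        var body = new StringContent(
            "{\"to\":\"+44 7778 609466\",\"message\":\"hello\"}",
            Encoding.UTF8, "application/json");
        var post = client.PostAsync("api/smsresource", body).Result;
        Console.WriteLine(post.StatusCode);

        // GET the full message list back as JSON.
        var list = client.GetStringAsync("api/smsresource").Result;
        Console.WriteLine(list);
    }
}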
The following code shows the partial:

<!-- If model.errorMessage is defined, then render the error div -->
<div class="alert alert-danger alert-dismissable" style="margin-top: 30px;"
     ng-show="model.errorMessage != undefined">
    <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
    <strong>Error!</strong> <br /> {{ model.errorMessage }}
</div>

<!-- The input fields bound to the model -->
<div class="well" style="margin-top: 30px;">
    <table style="width: 100%;">
        <tr>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Phone number (e.g. +44 7778 609466)"
                       ng-model="model.phoneNumber" class="form-control" style="width: 90%"
                       onkeypress="return checkPhoneInput();" />
            </td>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Message" ng-model="model.message"
                       class="form-control" style="width: 90%" />
            </td>
            <td style="text-align: center;">
                <button class="btn btn-danger" ng-click="sendMessage();"
                        ng-disabled="model.isAjaxInProgress" style="margin-right: 5px;">Send</button>
                <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" />
            </td>
        </tr>
    </table>
</div>

<!-- The past messages -->
<div style="margin-top: 30px;">
    <!-- The following div is shown if there are no past messages -->
    <div ng-show="model.allMessages.length == 0">
        No messages have been sent yet!
    </div>
    <!-- The following div is shown if there are some past messages -->
    <div ng-show="model.allMessages.length != 0">
        <table style="width: 100%;" class="table table-striped">
            <tr>
                <td>Phone Number</td>
                <td>Message</td>
                <td></td>
            </tr>
            <!-- The ng-repeat directive is like the repeater control in .NET,
                 but as you can see this partial is pure HTML, which is much cleaner -->
            <tr ng-repeat="message in model.allMessages">
                <td>{{ message.to }}</td>
                <td>{{ message.message }}</td>
                <td>
                    <button class="btn btn-danger" ng-click="delete(message.smsId);"
                            ng-disabled="model.isAjaxInProgress">Delete</button>
                </td>
            </tr>
        </table>
    </div>
</div>

The above code is commented and should be self-explanatory. Conditional rendering is achieved through the ng-show="condition" attribute on the various div tags. Input fields are bound to the model, and the Send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute defined on the button tag. While AJAX calls are in progress, the controller sets model.isAjaxInProgress to true; based on this variable, buttons are disabled through the ng-disabled directive, which is added as an attribute to the buttons. The ng-repeat directive, added as an attribute to the tr tag, causes the table row to be rendered multiple times, much like an ASP.NET repeater. (The phone-number input also calls a checkPhoneInput() helper on keypress; a possible implementation is sketched just before the controller listing below.)

3. Creating the SmsController controller

The penultimate piece of our application is the controller, which responds to events from our view and interacts with our MVC4 REST WebAPI. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller "asks for" the $http service. The $http service is a simple way in Angular to do AJAX. So far we have only encountered modules, controllers, views and directives in Angular; $http is a new kind of entity in Angular called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services.
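One loose end from the partial before the controller listing: the markup wires onkeypress to a checkPhoneInput() helper that is not part of the partial itself (the attached solution contains the actual implementation). A minimal, hypothetical version that only lets digits, spaces and a + through might look like this; it assumes the IE-style global window.event, which WebKit browsers also expose:

// Hypothetical sketch of the keypress filter referenced by the partial.
// Returning false swallows the keystroke; true lets it through.
function checkPhoneInput() {
    var code = window.event ? window.event.keyCode : 0;
    var ch = String.fromCharCode(code);
    return /[0-9 +]/.test(ch);
}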
.controller('SmsController', ['$scope', '$http', function ($scope, $http) {
    // We define the model
    $scope.model = {};

    // We define the allMessages array in the model
    // that will contain all the messages sent so far
    $scope.model.allMessages = [];

    // The error, if any
    $scope.model.errorMessage = undefined;

    // We initially load data, so set isAjaxInProgress = true
    $scope.model.isAjaxInProgress = true;

    // Load all the messages
    $http({ url: '/api/smsresource', method: "GET" }).
        success(function (data, status, headers, config) {
            // this callback will be called asynchronously
            // when the response is available
            $scope.model.allMessages = data;
            // We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        }).
        error(function (data, status, headers, config) {
            // called asynchronously if an error occurs
            // or server returns response with an error status.
            $scope.model.errorMessage = "Error occurred status:" + status;
            // We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        });

    $scope.delete = function (id) {
        // We are making an ajax call so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({ url: '/api/smsresource/' + id, method: "DELETE" }).
            success(function (data, status, headers, config) {
                // this callback will be called asynchronously
                // when the response is available
                $scope.model.allMessages = data;
                // We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                // called asynchronously if an error occurs
                // or server returns response with an error status.
                $scope.model.errorMessage = "Error occurred status:" + status;
                // We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            });
    };

    $scope.sendMessage = function () {
        $scope.model.errorMessage = undefined;
        var message = '';
        if ($scope.model.message != undefined)
            message = $scope.model.message.trim();
        if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == ''
            || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') {
            $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466";
            return;
        }
        if (message.length == 0) {
            $scope.model.errorMessage = "You must specify a message!";
            return;
        }
        // We are making an ajax call so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({
            url: '/api/smsresource',
            method: "POST",
            data: { to: $scope.model.phoneNumber, message: $scope.model.message }
        }).
            success(function (data, status, headers, config) {
                // this callback will be called asynchronously
                // when the response is available
                $scope.model.allMessages = data;
                // We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                // called asynchronously if an error occurs
                // or server returns response with an error status.
                $scope.model.errorMessage = "Error occurred status:" + status;
                // We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            });
    };
}]);

We can see from the previous listing how the functions that are called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST WebAPI. Now we are left with the final piece: we need to define a route that associates a particular path with the view and the controller we have defined.

4. Add a new route that loads the controller and the partial

This is the easiest part of the puzzle.
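For context, the routes in /app/js/app.js are registered inside the module's config block; the surrounding wiring looks roughly like this (the module name is an assumption, since only the file path appears in the article):

// Hypothetical wiring in /app/js/app.js around the route registration.
var app = angular.module('smsApp', [])
    .config(['$routeProvider', function ($routeProvider) {
        // the application's routes are registered here;
        // the new '/sms' route below is added alongside them
    }]);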
We simply define another route in the /app/js/app.js file:

$routeProvider.when('/sms', {
    templateUrl: '/app/partials/smsview.html',
    controller: 'SmsController'
});

Conclusion

In this article we have seen how much of the server-side functionality in the MVC4 framework can be moved to the browser, delivering a snappy and fast user interface. We have seen how we can build client-side, HTML-only views that avoid the messy syntax of server-side Razor views. We have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end can be completely platform independent: even though we used ASP.NET to create our REST API, we could just as easily have used any other platform, such as Node.js or Ruby, without changing a single line of our front-end code. Angular is a rich framework and we have only touched on the basic functionality required to create a SPA. For readers who wish to delve further into the Angular framework, we recommend the following URL as a starting point: http://docs.angularjs.org/misc/started.

To get started with the code for this project:

1. Sign up for an account at http://plus.sent.ly (free).
2. Add your phone number.
3. Go to the "My Identities" page.
4. Note down your Sender ID, Consumer Key and Consumer Secret.
5. Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing
6. Change the values of Sender ID, Consumer Key and Consumer Secret in the web.config file.
7. Run the project through Visual Studio!

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So... let's get down and dirty and start setting up the Test Rig.

What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and Test Agents we'll make use of Windows Azure Connect.

Azure Connect
The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect, and to a very useful video explaining everything you wanted to know about Windows Azure Connect.

Azure Compute
Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles'. Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the three role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role, and can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
Extra Small          | Shared    | 768 MB   | $0.04
Small                | 1         | 1.75 GB  | $0.12
Medium               | 2         | 3.50 GB  | $0.24
Large                | 4         | 7.00 GB  | $0.48
Extra Large          | 8         | 14.00 GB | $0.96

You might want to review the Windows Azure Pricing FAQ.

Let's get started building the Test Rig...

Configuration:
Machine | Role                              | Comments
VM – 1  | Domain Controller for Playpit.com | On Premise
VM – 2  | TFS, Test Controller              | On Premise
VM – 3  | Test Agent                        | Cloud

In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and the walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

I. Let's start building VM – 3: The Test Agent

Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
We will only be making changes to the Windows Azure project. Open ServiceDefinition.csdef and ensure that the vmsize is Small (remember the cost chart above). Import the "Connect" module; I am importing the Connect module because I need to join the Worker role VM to the Playpit domain.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Go to ServiceConfiguration.Cloud.cscfg and note that settings with the key 'Microsoft.WindowsAzure.Plugins.Connect.%%%%' have been added to the configuration file. This is because you decided to import the Connect module. See the config below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Let's go step by step and understand all the highlighted parameters and where you can find the values for them.

osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The advantage of using osFamily="2" is that you get PowerShell 2.0 rather than PowerShell 1.0. In PowerShell 2.0 you can simply use "powershell -ExecutionPolicy Unrestricted ./myscript.ps1" and it will work, while in PowerShell 1.0 you will have to change the registry key by including the following in your command file before you can execute any PowerShell: "reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f". The other reason you might want to move to osFamily 2 is if you want IIS 7.5.

ActivationToken – To enable communication between the on-premise machine and the Windows Azure Worker role VM, both need to have the same token. Log on to the Windows Azure Management Portal, click on Connect, then click on Get Activation Token; this should give you the activation token. Copy the activation token to the clipboard and paste it into the configuration file.
Note – Later in the blog I'll be showing you how to install Connect on the on-premise machine.

EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure worker role VM to the domain.

DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the 'service manager' and 'Active Directory Users and Computers'. Also, I created a new domain OU, namely 'CloudInstances', so that all the cloud instances joined to my domain show up there; this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire; I'll be covering that when I come to certificates and encryption in the coming section.

Once you have filled all this information in, the configuration file should look something like below:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Next we will be enabling the Remote Desktop module in the ServiceDefinition.csdef. We could make the changes manually or allow a beautiful wizard to help us make them; I prefer the second option. So right-click on the Windows Azure project and choose Publish.

Once you get the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done so, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.

As soon as you do that you get another pop-up asking for the details of the user that you will be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click that to get access to the certificate section. See the screenshot below.
From the drop-down, select the option to create a new certificate. In the pop-up window enter the friendly name for your certificate; in my case I entered 'WAC – Test Rig'. Click OK and this will create a new certificate for you. Click on the View button to see the certificate details. Do you see the Thumbprint? This is the value that will go into the config file (very important). Now click on the Copy to File button to copy the certificate; we will need to import the certificate into the Windows Azure Management Portal later, so make sure you save it in a safe location.

Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the 'Remote desktop configuration' screen, click OK. From the Publish Windows Azure wizard screen press Cancel (Cancel because we don't want to publish the role just yet) and then Yes (because we want to save all the changes in the config file).

Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of 'Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%' settings added for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4aeADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWSAKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccgvzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaMCp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Okay, let's look at them one at a time:

Enabled – Yes, we would like to enable Remote Access.

AccountUsername – This is the user name you entered while you were on the publish Windows Azure role screen, as detailed above.

AccountEncryptedPassword – Try and decode that: the certificate is used to encrypt the password you specified for the user account. Remember earlier I said, either use the instructions or wait and I'll be showing you encryption. The user account I am using for RDP has the same password as my domain password, so I can simply copy the value of AccountEncryptedPassword to DomainPassword as well.

AccountExpiration – This is the expiration you specified in the wizard earlier; make sure your account does not expire today.

RemoteForwarder – Check out the documentation; below is how I understand it:
- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable remote desktop connections to role instances.
- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

Certificate – Remember the certificate thumbprint from the wizard: the on-premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on-premise machine.

As I said earlier, in this blog post I'll be showing you the manual process, so I won't be scripting any startup tasks to install the test agent or register the test agent with the TFS Server. I'll be showing you all this cool stuff in the next blog post; that's because it's important to understand the manual side of it, which makes it easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected to the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server.

II. Windows Azure Connect: Setting up Connect on VM – 2, i.e. TFS & Test Controller

Glad you made it so far. Now, to enable communication between the on-premise TFS/Test Controller and the Azure-ed Test Agent we need to enable communication.
We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on-premise machines. Let's have a look at how we can do this.

Log on to VM – 2 running the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screenshot below; if not, you will be asked to complete the subscription first.

Click on Install Local Endpoints at the top left of the panel and you get a URL with a token id appended to it. Remember the token I showed you earlier? In theory, the token you get here should match the token you added to the Test Agent config file.

Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop-up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

Right-click the Azure Connect icon, choose Diagnostics, and refer to this link for diagnostic detail terminology.

NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy' you will notice that the disconnected status of the icon changes to 'ready for connection'.

III. Importing the Certificate into the Windows Azure Management Portal

But before that, you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates, as shown in the screenshot below. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.

IV. Publish the Windows Azure Worker Role aka Test Agent

Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right-click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role.

Once the deployment is complete you should be able to RDP into the Test Agent Worker Role VM from the Playpit network using the domain admin user account (go to the run prompt, type mstsc and enter the machine name in the pop-up). If you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure. You will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), fix it, or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain, and that the Connect service is successfully running. If yes, give yourself a pat on the back: you are 80% mission accomplished!

Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles, click on Test Rig, then click Edit Group to edit the Test Rig group you created earlier. In the 'Connect to' section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and test agents, and between test agents. Click Save.

Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent to the Controller

Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the ISO and navigate to where you have extracted it. In my case, I extracted the ISO to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double-click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

Once you have successfully installed the Test Agent software you will need to configure the test agent. Right-click the Test Agent configuration tool and run it as a different user, i.e. an administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block some things otherwise).

In the run options, you can select 'service'; you do not need to run the agent interactively unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that no policies or restrictions block you.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here.

VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig

I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

Blog posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate

Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure Test Rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the test controller you have configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

Review and What's Next?

A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!

Advantages:
- Utilizing the power of Azure compute to run a heavy virtual user load.
- Benefiting from Azure's flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent.
- Most importantly, testing network latency (network latency and speed of connection are two different things; usually network latency is very hard to test). By placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes, not because of a slow connection but because the page has been requested from the other side of the globe.

Next steps: the process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage

Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node application.

We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file; you can also find the source code of this tool here.

The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.send(200, "Inserted Successful");
                }
            });
        }
    });
});

app.listen(port);

Now let's create a new function that copies the records from WASD to the table service:
1. Delete the table named "resource".
2. Create a new table named "resource". (These two steps ensure that we have an empty table.)
3. Load all records from the "resource" table in WASD.
4. For each record loaded from WASD, insert it into the table one by one.
5. Prompt the user when finished.

In order to use the table service we need the storage account name and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
var client = azure.createTableService(storageAccountName, storageAccountKey);

Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                    }
                }
            });
        }
    });
});

When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion operation asynchronously in a POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we placed the table creation code after the deletion code, the two would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find a bug in the code below?
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be: when all entities have been copied to the table service with no error, then I send the response to the browser; otherwise I send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation to perform on each item in the array. The third argument is invoked when all items have been processed or any error has occurred; here we can send our response to the browser.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    async.forEach(results.rows,
                                        // transform the records
                                        function (row, callback) {
                                            var entity = {
                                                "PartitionKey": row[1],
                                                "RowKey": row[0],
                                                "Value": row[2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    callback(error);
                                                }
                                                else {
                                                    console.log("entity inserted.");
                                                    callback(null);
                                                }
                                            });
                                        },
                                        // send response
                                        function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("all done");
                                                res.send(200, "All done!");
                                            }
                                        }
                                    );
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Run it locally and now we can see that the response is sent after all entities have been inserted.

Querying entities against the table service is simple as well: just use the "queryEntity" method from the table service client, providing the partition key and row key. We can also provide more complex query criteria. In the code below I query an entity by partition key and row key, and return the proper localization value in the response.

app.get("/was/:key/:culture", function (req, res) {
    var key = req.params.key;
    var culture = req.params.culture;
    client.queryEntity(tableName, culture, key, function (error, entity) {
        if (error) {
            res.send(500, error);
        }
        else {
            res.json(entity);
        }
    });
});

I then tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
But if you have played with WACS you will know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will retrieve the proper values for us. For example, we can add some role settings through the property window of the role and specify the connection string and storage account for cloud and local. We can also use an endpoint defined in the role environment in our Node.js application.

In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

var connectionString = "";
var storageAccountName = "";
var storageAccountKey = "";
var tableName = "";
var client;

azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("ERROR: getConfigurationSettings");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(settings));
        connectionString = settings["SqlConnectionString"];
        storageAccountName = settings["StorageAccountName"];
        storageAccountKey = settings["StorageAccountKey"];
        tableName = settings["TableName"];

        console.log("connectionString = %s", connectionString);
        console.log("storageAccountName = %s", storageAccountName);
        console.log("storageAccountKey = %s", storageAccountKey);
        console.log("tableName = %s", tableName);

        client = azure.createTableService(storageAccountName, storageAccountKey);
    }
});

In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
    if (error) {
        console.log("ERROR: getCurrentRoleInstance");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(instance));
        if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
            var endpoint = instance["endpoints"]["nodejs"];
            app.listen(endpoint["port"]);
        }
        else {
            app.listen(8080);
        }
    }
});

But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js is launched inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. With that, the Node.js application has the connection to the service runtime and is able to read the configurations. Start the Node.js application in the local emulator and we can see it retrieved the connection string and storage account for local.
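For reference, the change described above looks roughly like this in ServiceDefinition.csdef. The Runtime/EntryPoint/ProgramEntryPoint element names come from the Azure service definition schema; the exact command line is an assumption and depends on where node.exe and index.js sit in your role:

<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- launch node.exe as the role's entry point so the named pipe
           to the service runtime belongs to our Node.js process -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
  <Imports>
    <Import moduleName="Diagnostics" />
  </Imports>
</WorkerRole>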
And if we publish our application to Azure, it works with WASD and the storage service through the configurations for the cloud.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it's very simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

Using Node.js on the Windows platform is new, too. The modules for SQL Server and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use, and on adding more features and functionality.

PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Data Modeling: Logical Modeling Exercise

    - by swisscheese
In trying to learn the art of data storage I have been trying to take in as much solid information as possible. PerformanceDBA posted some really helpful tutorials/examples in the following posts among others: is my data normalized? and Relational table naming convention. I already asked a subset question of this model here. So, to make sure I understood the concepts he presented and that I have seen elsewhere, I wanted to take things a step or two further and see if I am grasping them. Hence the purpose of this post, which hopefully others can also learn from. Everything I present is conceptual to me and for learning rather than for applying in some production system. It would be cool to get some input from PerformanceDBA also, since I used his models to get started, but I appreciate input from anyone. As I am new to databases and especially modeling, I will be the first to admit that I may not always ask the right questions, explain my thoughts clearly, or use the right verbiage, due to lack of expertise on the subject. So please keep that in mind and feel free to steer me in the right direction if I head off track. If there is enough interest in this I would like to take this from the logical to the physical phases to show the evolution of the process and share it here on Stack. I will keep this thread for the Logical Diagram, though, and start a new one for the additional steps. For my understanding I will be building a MySQL DB in the end to run some tests and see if what I came up with actually works. Here is the list of things that I want to capture in this conceptual model (edited for V1.2):

- The purpose of this is to list Bands, their members, and the Events that they will be appearing at, as well as offer music and other merchandise for sale.
- Members will be able to match up with friends.
- Members can write reviews on the Bands, their music, and their events. There can only be one review per member on a given item, although they can edit their reviews and history will be maintained.
- BandMembers will have the chance to write a single Comment on Reviews about the Band they are associated with. Collectively as a Band only one Comment is allowed per Review.
- Members can then rate all Reviews and Comments, but only once per given instance.
- Members can select their favorite Bands, music, Merchandise, and Events.
- Bands, Songs, and Events will be categorized into the type of Genre that they are and then further subcategorized into a SubGenre if necessary. It is OK for a Band or Event to fall into more than one Genre/SubGenre combination.
- Event date, time, and location will be posted for a given band, and members can show that they will be attending the Event. An Event can be comprised of more than one Band, and multiple Events can take place at a single location on the same day.
- Every party will be tied to at least one address, and address history shall be maintained. Each party could also be tied to more than one address at a time (i.e. billing, shipping, physical).
- There will be stored profiles for Bands, BandMembers, and general members.

So there it is, maybe a bit involved, but it could be a great learning tool for many as the process evolves and input is given by the community. Any input?

EDIT v1.1, in response to PerformanceDBA:

U.3) "That means no merchandise other than Band merchandise in the database. Correct?" That was my original thought, but you got me thinking. Maybe the site would want to sell its own merchandise, or even other merchandise from the bands. I am not sure what mod to make for that. Would it require an entire rework of the Catalog section or just the identifying relationship that exists with the Band? I attempted a mod to sell both complete albums and individual songs. Either way they would both be in electronic format, only available for download. That is why I listed an Album as being comprised of Songs rather than as 2 separate entities.

U.5) I understand what you bring up about the circular relation with Favorite. I would like to get to this: "It is either one Entity with some form of differentiation (FavoriteType) which identifies its treatment", but how to do so is not clear to me. What am I missing here?

U.6) "Business Rules. This is probably the only area you are weak in." Thanks for the honest response. I will readdress these, but I hope to clear up some confusion in my head first with the responses I have posted back to you.

Q.1) Yes, I would like to have Accepted, Rejected, and Blocked. I am not sure what you are referring to as to how this would change the logical model?

Q.2) A person does not have to be a User. They can exist only as a BandMember. Is that what you are asking?

Minor Issue: Zero, One, or More… Oops, I admit I forgot to give this attention when building the model. I am submitting this version as is and will address it in a future version. I need to read up more on Constraint Checking to make sure I am understanding things.

M.4) Depends if you envision OrderPurchase in the future. Can you expand as to what you mean here?

EDIT V1.2, in response to PerformanceDBA's input. Lessons learned:

- I was mixing the concepts of Identifying / Non-Identifying relationships and Cardinality (i.e. Genre / SubGenre), and doing so inconsistently, which made things worse.
- Associative Tables are not required in Logical Diagrams, as their many-to-many relationships can be depicted and then expanded in the Physical Model.
- I was overlooking the Cardinality in a lot of the relationships.
- The importance of reading through relationships using effective Verb Phrases, to reassure myself that I am modeling what I want to accomplish.

U.2) In the concept of this model it is only required to track a Venue as a location for an Event. No further data needs to be collected. With that being said, Events will take place on a given EventDate and will be hosted at a Venue. Venues will host multiple events, and possibly multiple events on a given date. In my new model my thinking was that EventDate is already tied to Event; therefore, Venue will not need a relationship with EventDate. The 5th and 6th bullets you have listed under U.2) leave me questioning my thinking, though. Am I missing something here?

U.3) Is it time to move the link between Item and Band up to Item and Party instead? With the current design I don't see a possibility to sell merchandise not tied to the band, as you have brought up.

U.5) I left this as per your input, rather than making it a discrete Supertype/Subtype Relationship, as I don't see a benefit in having that type of roll-up.

Additional Revisions: AR.1) After going through the exercise for FavoriteItem, I feel that Item to Review requires a many-to-many relationship, so that is indicated. Necessary?

OK, here we go for v1.3. I took a few days on this version, going back and forth with my design. Once the logical process is complete, as I want to see if I am on the right track, I will go through in depth what I have learned and the troubles I faced as a beginner going through this process. The big point for this version was that it took throwing in some Keys to help see what I was missing in the past. Going through the process of doing a matrix proved to be of great help also. Regardless of anything, if it wasn't for the input given by PerformanceDBA I would still be a lost soul wandering in the dark. Who knows, my current design might reaffirm that I still am, but I have learned a lot, so I know I at least have a flashlight in my hand. At this point in time I admit that I am still confused about identifying and non-identifying relationships. In my model I had to use non-identifying relationships with non-nulls just to join the relationships I wanted to model. In reading a lot on the subject there seems to be a lot of disagreement and indecisiveness about it, so I did what I thought represented the right things in my model. When to force (identifying) and when to be free (non-identifying)? Anyone have inputs?

EDIT V1.4: OK, I took the V1.3 inputs and cleaned things up for this V1.4. Currently working on a V1.5 to include attributes.
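As a concrete illustration of one of the business rules above (a sketch, not from the original thread; table and column names are hypothetical): the "one review per member on a given item, with edit history maintained" rule maps naturally onto a composite unique key plus a child history table in MySQL:

CREATE TABLE review (
    review_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    member_id  INT UNSIGNED NOT NULL,
    item_id    INT UNSIGNED NOT NULL,
    PRIMARY KEY (review_id),
    -- enforces "one review per member per item"
    UNIQUE KEY uq_review_member_item (member_id, item_id)
) ENGINE=InnoDB;

CREATE TABLE review_revision (
    review_id   INT UNSIGNED NOT NULL,
    revised_at  DATETIME NOT NULL,
    body        TEXT NOT NULL,
    -- one row per edit preserves the full history
    PRIMARY KEY (review_id, revised_at),
    FOREIGN KEY (review_id) REFERENCES review (review_id)
) ENGINE=InnoDB;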

    Read the article

  • WiX major upgrade refuses to replace existing file!

    - by Joshua
Hello! I have inherited this project with a WiX installer, and am required to make this version usefully upgrade the previous one! My problem comes in replacing the database files with new versions. No, the problem is not that they are locked; I can replace them manually, and in fact now ONE of them is replaced, while the other is not. Please, please tell me what I'm doing wrong here. I've tried several other solutions (including registry keys as KeyPath instead of CompanionFile) but nothing is quite working. Here is (most of) the code of the .WXS file:

<Product Id='$(var.ProductCode)' UpgradeCode='$(var.UpgradeCode)' Name="Pathways"
         Version='$(var.ProductVersion)' Manufacturer='$(var.Manufacturer)' Language='1033'>
  <Package Id="*" Description="Pathways Directory Software" InstallerVersion="301" Compressed="yes" />
  <WixVariable Id="WixUILicenseRtf" Value="License.rtf" />
  <Media Id="1" Cabinet="Pathways.cab" EmbedCab="yes" />

  <Upgrade Id="$(var.UpgradeCode)">
    <UpgradeVersion OnlyDetect="no" Maximum="$(var.ProductVersion)" IncludeMaximum="no" Language="1033" Property="OLDAPPFOUND" />
    <UpgradeVersion Minimum="$(var.ProductVersion)" IncludeMinimum="yes" OnlyDetect="no" Language="1033" Property="NEWAPPFOUND" />
  </Upgrade>

  <Property Id="ALLUSERS">2</Property>

  <!-- directories -->
  <Directory Id="TARGETDIR" Name="SourceDir">
    <!-- program files directory -->
    <Directory Id="ProgramFilesFolder">
      <Directory Id="INSTALLDIR" Name="Pathways"/>
    </Directory>
    <!-- application data directory -->
    <Directory Id="CommonAppDataFolder" Name="CommonAppData">
      <Directory Id="CommonAppDataPathways" Name="Pathways" />
    </Directory>
    <!-- start menu program directory -->
    <Directory Id="ProgramMenuFolder">
      <Directory Id="ProgramsMenuPathwaysFolder" Name="Pathways" />
    </Directory>
    <!-- desktop directory -->
    <Directory Id="DesktopFolder" />
  </Directory>

  <Icon Id="PathwaysIcon" SourceFile="\\Fileserver\Release\Pathways\Latest\Release\Pathways.exe" />

  <!-- components in the reference to the install directory -->
  <DirectoryRef Id="INSTALLDIR">
    <Component Id="Application" Guid="EEE4EB55-A515-4872-A4A5-06D6AB4A06A6">
      <File Id="pathwaysExe" Name="Pathways.exe" DiskId="1"
            Source="\\Fileserver\Release\Pathways\Latest\Release\Pathways.exe" Vital="yes" KeyPath="yes"
            Assembly=".net" AssemblyApplication="pathwaysExe" AssemblyManifest="pathwaysExe">
        <!--<netfx:NativeImage Id="ngen_Pathways.exe" Platform="32bit" Priority="2"/> -->
      </File>
      <File Id="pathwaysChm" Name="Pathways.chm" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Pathways.chm" />
      <File Id="publicKeyXml" ShortName="RSAPUBLI.XML" Name="RSAPublicKey.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\RSAPublicKey.xml" Vital="yes" />
      <File Id="staticListsXml" ShortName="STATICLI.XML" Name="StaticLists.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\StaticLists.xml" Vital="yes" />
      <File Id="axInteropMapPointDll" ShortName="AXMPOINT.DLL" Name="AxInterop.MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\AxInterop.MapPoint.dll" Vital="yes" />
      <File Id="interopMapPointDll" ShortName="INMPOINT.DLL" Name="Interop.MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Interop.MapPoint.dll" Vital="yes" />
      <File Id="mapPointDll" ShortName="MAPPOINT.DLL" Name="MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Interop.MapPoint.dll" Vital="yes" />
      <File Id="devExpressData63Dll" ShortName="DAAT63.DLL" Name="DevExpress.Data.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.Data.v6.3.dll" Vital="yes" />
      <File Id="devExpressUtils63Dll" ShortName="UTILS63.DLL" Name="DevExpress.Utils.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.Utils.v6.3.dll" Vital="yes" />
      <File Id="devExpressXtraBars63Dll" ShortName="BARS63.DLL" Name="DevExpress.XtraBars.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraBars.v6.3.dll" Vital="yes" />
      <File Id="devExpressXtraNavBar63Dll" ShortName="NAVBAR63.DLL" Name="DevExpress.XtraNavBar.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraNavBar.v6.3.dll" Vital="yes" />
      <File Id="devExpressXtraCharts63Dll" ShortName="CHARTS63.DLL" Name="DevExpress.XtraCharts.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraCharts.v6.3.dll" Vital="yes" />
      <File Id="devExpressXtraEditors63Dll" ShortName="EDITOR63.DLL" Name="DevExpress.XtraEditors.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraEditors.v6.3.dll" Vital="yes" />
      <File Id="devExpressXtraPrinting63Dll" ShortName="PRINT63.DLL" Name="DevExpress.XtraPrinting.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraPrinting.v6.3.dll" Vital="yes" />
      <File Id="devExpressXtraReports63Dll" ShortName="REPORT63.DLL" Name="DevExpress.XtraReports.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraReports.v6.3.dll" Vital="yes" />
      <File Id="devExpressXtraRichTextEdit63Dll" ShortName="RICHTE63.DLL" Name="DevExpress.XtraRichTextEdit.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraRichTextEdit.v6.3.dll" Vital="yes" />
      <RegistryValue Id="PathwaysInstallDir" Root="HKLM" Key="Software\Tribal Data Resources\Pathways" Name="InstallDir" Action="write" Type="string" Value="[INSTALLDIR]" />
    </Component>
  </DirectoryRef>

  <!-- application data components -->
  <DirectoryRef Id="CommonAppDataPathways">
    <Component Id="CommonAppDataPathwaysFolderComponent" Guid="087C6F14-E87E-4B57-A7FA-C03FC8488E0D">
      <CreateFolder>
        <Permission User="Everyone" GenericAll="yes" />
      </CreateFolder>
      <RemoveFolder Id="CommonAppDataPathways" On="uninstall" />
      <!-- <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes" />-->
    </Component>
    <Component Id="Settings" Guid="A3513208-4F12-4496-B609-197812B4A953" NeverOverwrite="yes">
      <File Id="settingsXml" KeyPath="yes" ShortName="SETTINGS.XML" Name="Settings.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Settings\settings.xml" Vital="yes" />
    </Component>
    <Component Id="Database" Guid="1D8756EF-FD6C-49BC-8400-299492E8C65D">
      <!-- <RegistryValue Root="HKLM" Key="Software\TDR\Pathways\Database" Name="installed" Type="integer" Value="1" KeyPath="yes" /> -->
      <File Id="pathwaysMdf" Name="Pathways.mdf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.mdf" CompanionFile="pathwaysExe" Vital="yes"/>
      <File Id="pathwaysLdf" Name="Pathways_log.ldf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.ldf" CompanionFile="pathwaysExe" Vital="yes"/>
    </Component>
    <!-- <Component Id="MDF" Guid="FFB7CE02-B592-4c44-A315-99CF4828E3D9" >
           <File Id="pathwaysMdf" KeyPath="yes" Name="Pathways.mdf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.mdf" />
         </Component>
         <Component Id="LDF" Guid="9E4E3DCA-A067-47f4-9905-4AD5C35A8025" >
           <File Id="pathwaysLdf" KeyPath="yes" Name="Pathways_log.ldf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.ldf" />
         </Component> -->
  </DirectoryRef>

  <!-- shortcut components -->
  <DirectoryRef Id="DesktopFolder">
    <Component Id="DesktopShortcutComponent" Guid="1BF412BA-9C6B-460D-80ED-8388AC66703F">
      <Shortcut Id="DesktopShortcut" Target="[INSTALLDIR]Pathways.exe" Name="Pathways" Description="Pathways Tribal Directory" Icon="PathwaysIcon" Show="normal" WorkingDirectory="INSTALLDIR" />
      <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes"/>
    </Component>
  </DirectoryRef>
  <DirectoryRef Id="ProgramsMenuPathwaysFolder">
    <Component Id="ProgramsMenuShortcutComponent" Guid="83A18245-4C22-4CDC-94E0-B480F80A407D">
      <Shortcut Id="ProgramsMenuShortcut" Target="[INSTALLDIR]Pathways.exe" Name="Pathways" Icon="PathwaysIcon" Show="normal" WorkingDirectory="INSTALLDIR" />
      <RemoveFolder Id="ProgramsMenuPathwaysFolder" On="uninstall"/>
      <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes"/>
    </Component>
  </DirectoryRef>

  <Feature Id="App" Title="Pathways Application" Level="1" Description="Pathways software" Display="expand"
           ConfigurableDirectory="INSTALLDIR" Absent="disallow" AllowAdvertise="no" InstallDefault="local">
    <ComponentRef Id="Application" />
    <ComponentRef Id="CommonAppDataPathwaysFolderComponent" />
    <ComponentRef Id="Settings"/>
    <ComponentRef Id="ProgramsMenuShortcutComponent" />
    <Feature Id="Shortcuts" Title="Desktop Shortcut" Level="1" Absent="allow" AllowAdvertise="no" InstallDefault="local">
      <ComponentRef Id="DesktopShortcutComponent" />
    </Feature>
  </Feature>
  <Feature Id="Data" Title="Database" Level="1" Absent="allow" AllowAdvertise="no" InstallDefault="local">
    <ComponentRef Id="Database" />
  </Feature>

  <UIRef Id="WixUI_FeatureTree"/>
  <UIRef Id="WixUI_ErrorProgressText"/>
  <UI>
    <Error Id="2000">There is a later version of this program installed.</Error>
  </UI>
  <CustomAction Id="NewerVersionDetected" Error="2000" />

  <InstallExecuteSequence>
    <RemoveExistingProducts After="InstallFinalize"/>
  </InstallExecuteSequence>
</Product>

Running this installer attempting the upgrade from the previous version ALMOST WORKS. The file that is giving me trouble is the one called "PathwaysMdf". Even though its Component code is EXACTLY the same as the PathwaysLdf file's, that file is replaced, while the MDF is NOT. You can see, commented out, some of the other things I've attempted, some from suggestions on stackoverflow. The entire log file from running the upgrade is located at: http://pastebin.com/ppjhq6Wi THANK YOU! Joshua

    Read the article

  • ASP.Net How to enforce the HTTP get URL format?

    - by Hamish Grubijan
[Sorry about a messy question. I believe I am targeting .Net 2.0 (for now)] Hi, I am an ASP.NET noob. For starters I am building a page that parses a URL string and populates a table in a database. I want that string to be strictly of the form: http://<server>:<port>/PageName.aspx?A=1&B=2&C=3&D=4&E=5 The order of the arguments does not matter; I just do not want any of them missing, or any extras. Here is what I tried (yes, it is ugly; I just want to get it to work first):

#if (DEBUG)
// Maps parameter names to their human readable names.
// Used for error checking.
private static Dictionary<string, string> paramNameToDisplayName = new Dictionary<string, string>
{
    { A, "a"}, { B, "b"}, { C, "c"}, { D, "d"}, { E, "e"}, { F, "f"},
};

[Conditional("DEBUG")]
private void validateRequestParameters(HttpRequest request)
{
    bool endResponse = false;

    // Use foreach var
    foreach (string expectedParameterName in paramNameToDisplayName.Keys)
    {
        if (request[expectedParameterName] == null)
        {
            Response.Write(String.Format(
                "No parameter \"{0}\", aka {1} was passed to the configuration generator. Check your URL string / cookie.",
                expectedParameterName, paramNameToDisplayName[expectedParameterName]));
            endResponse = true;
        }
    }

    // Use foreach var
    foreach (string actualParameterName in request.Params)
    {
        if (!paramNameToDisplayName.ContainsKey(actualParameterName))
        {
            Response.Write(String.Format(
                "The parameter \"{0}\", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.",
                actualParameterName));
            endResponse = true;
        }
    }

    if (endResponse)
    {
        Response.End();
    }
}
#endif

and it works ok, except that it complains about all sorts of other stuff: http://localhost:1796/AddStatusUpdate.aspx?X=0 No parameter "A", aka a was passed to the configuration generator. Check your URL string / cookie.No parameter "B", aka b was passed to the configuration generator. Check your URL string / cookie.No parameter "C", aka c was passed to the configuration generator. Check your URL string / cookie.No parameter "D", aka d was passed to the configuration generator. Check your URL string / cookie.No parameter "E", aka e was passed to the configuration generator. Check your URL string / cookie.No parameter "F", aka f was passed to the configuration generator. Check your URL string / cookie.The parameter "X", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "ASP.NET_SessionId", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "ALL_HTTP", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "ALL_RAW", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "APPL_MD_PATH", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "APPL_PHYSICAL_PATH", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "AUTH_TYPE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "AUTH_USER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "AUTH_PASSWORD", was passed to the configuration generator, but it was not expected.
Check your URL string / cookie.The parameter "LOGON_USER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "REMOTE_USER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_COOKIE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_FLAGS", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_ISSUER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_KEYSIZE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SECRETKEYSIZE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SERIALNUMBER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SERVER_ISSUER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SERVER_SUBJECT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SUBJECT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CONTENT_LENGTH", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CONTENT_TYPE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "GATEWAY_INTERFACE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS_KEYSIZE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS_SECRETKEYSIZE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS_SERVER_ISSUER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS_SERVER_SUBJECT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "INSTANCE_ID", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "INSTANCE_META_PATH", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "LOCAL_ADDR", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "PATH_INFO", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "PATH_TRANSLATED", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "QUERY_STRING", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "REMOTE_ADDR", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "REMOTE_HOST", was passed to the configuration generator, but it was not expected. 
Check your URL string / cookie.The parameter "REMOTE_PORT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "REQUEST_METHOD", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SCRIPT_NAME", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_NAME", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_PORT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_PORT_SECURE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_PROTOCOL", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_SOFTWARE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "URL", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_CACHE_CONTROL", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_CONNECTION", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_ACCEPT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_ACCEPT_CHARSET", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_ACCEPT_ENCODING", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_ACCEPT_LANGUAGE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_COOKIE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_HOST", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_USER_AGENT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.Thread was being aborted. Is there some way for me to separate the implicit and the explicit parameters, or is it not doable? Should I even bother? Perhaps the philosophy of get is to just throw away that what is not needed. Thanks!
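A likely source of the noise (my reading, not confirmed in the thread): Request.Params merges the query string with form fields, cookies, and server variables, so iterating it will always surface implicit entries like SERVER_NAME and HTTP_COOKIE. A minimal sketch of a fix, restricting both checks to Request.QueryString, which contains only the explicit ?A=1&B=2... arguments, and reusing the paramNameToDisplayName dictionary from the question:

private void validateRequestParameters(HttpRequest request)
{
    bool endResponse = false;

    // Every expected parameter must be present in the query string.
    foreach (string expected in paramNameToDisplayName.Keys)
    {
        if (request.QueryString[expected] == null)
        {
            Response.Write(String.Format(
                "No parameter \"{0}\", aka {1} was passed.",
                expected, paramNameToDisplayName[expected]));
            endResponse = true;
        }
    }

    // Nothing beyond the expected parameters may be present.
    // AllKeys can contain null for a value given without a name (e.g. "?foo").
    foreach (string actual in request.QueryString.AllKeys)
    {
        if (actual == null || !paramNameToDisplayName.ContainsKey(actual))
        {
            Response.Write(String.Format(
                "The parameter \"{0}\" was not expected.", actual));
            endResponse = true;
        }
    }

    if (endResponse)
    {
        Response.End();
    }
}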

    Read the article

  • phonegap.js crashes android app

    - by peirix
    I'm having this weird problem, where including the phonegap.js file in my project causes the app to crash on both the android emulator and my phone. I got the latest file from GitHub, so I can't see why this isn't working. This happens even if I try to build the sample project that's included in the PhoneGap download... Console log: [2010-12-17 11:05:14 - sample] Android Launch! [2010-12-17 11:05:14 - sample] adb is running normally. [2010-12-17 11:05:14 - sample] Performing com.phonegap.sample.sample activity launch [2010-12-17 11:05:14 - sample] Automatic Target Mode: using existing emulator 'emulator-5554' running compatible AVD 'FirstDevice' [2010-12-17 11:05:16 - sample] Uploading sample.apk onto device 'emulator-5554' [2010-12-17 11:05:16 - sample] Installing sample.apk... [2010-12-17 11:05:21 - sample] Success! [2010-12-17 11:05:22 - sample] Starting activity com.phonegap.sample.sample on device emulator-5554 [2010-12-17 11:05:23 - sample] ActivityManager: Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=com.phonegap.sample/.sample } LogCat: 12-17 11:13:12.533: DEBUG/AndroidRuntime(373): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<< 12-17 11:13:12.533: DEBUG/AndroidRuntime(373): CheckJNI is ON 12-17 11:13:13.453: DEBUG/AndroidRuntime(373): Calling main entry com.android.commands.pm.Pm 12-17 11:13:13.503: DEBUG/AndroidRuntime(373): Shutting down VM 12-17 11:13:13.513: DEBUG/dalvikvm(373): GC_CONCURRENT freed 101K, 71% free 297K/1024K, external 0K/0K, paused 2ms+2ms 12-17 11:13:13.523: INFO/AndroidRuntime(373): NOTE: attach of thread 'Binder Thread #3' failed 12-17 11:13:13.523: DEBUG/dalvikvm(373): Debugger has detached; object registry had 1 entries 12-17 11:13:14.113: DEBUG/AndroidRuntime(383): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<< 12-17 11:13:14.113: DEBUG/AndroidRuntime(383): CheckJNI is ON 12-17 11:13:14.853: DEBUG/AndroidRuntime(383): Calling main entry com.android.commands.am.Am 12-17 11:13:14.894: INFO/ActivityManager(62): Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.phonegap.sample/.sample } from pid 383 12-17 11:13:14.973: INFO/ActivityManager(62): Start proc com.phonegap.sample for activity com.phonegap.sample/.sample: pid=391 uid=10031 gids={1006, 3003, 1015} 12-17 11:13:14.983: DEBUG/AndroidRuntime(383): Shutting down VM 12-17 11:13:15.053: DEBUG/dalvikvm(383): GC_CONCURRENT freed 102K, 69% free 319K/1024K, external 0K/0K, paused 2ms+2ms 12-17 11:13:15.093: INFO/AndroidRuntime(383): NOTE: attach of thread 'Binder Thread #3' failed 12-17 11:13:15.143: DEBUG/dalvikvm(383): Debugger has detached; object registry had 1 entries 12-17 11:13:15.523: DEBUG/dalvikvm(33): GC_EXPLICIT freed 11K, 54% free 2520K/5379K, external 716K/1038K, paused 467ms 12-17 11:13:15.663: DEBUG/dalvikvm(33): GC_EXPLICIT freed <1K, 54% free 2520K/5379K, external 716K/1038K, paused 132ms 12-17 11:13:15.772: DEBUG/dalvikvm(33): GC_EXPLICIT freed <1K, 54% free 2520K/5379K, external 716K/1038K, paused 113ms 12-17 11:13:16.333: INFO/ARMAssembler(62): generated scanline__00000177:03515104_00001002_00000000 [ 87 ipp] (110 ins) at [0x43aff6f0:0x43aff8a8] in 686000 ns 12-17 11:13:17.493: INFO/ActivityManager(62): Displayed com.phonegap.sample/.sample: +2s540ms 12-17 11:13:18.163: DEBUG/szipinf(391): Initializing inflate state 12-17 11:13:18.173: DEBUG/szipinf(391): Initializing zlib to inflate 12-17 11:13:18.573: WARN/dalvikvm(391): JNI 
WARNING: jarray 0x40567330 points to non-array object (Ljava/lang/String;) 12-17 11:13:18.593: INFO/dalvikvm(391): "WebViewCoreThread" prio=5 tid=9 NATIVE 12-17 11:13:18.603: INFO/dalvikvm(391): | group="main" sCount=0 dsCount=0 obj=0x4051b880 self=0x1af760 12-17 11:13:18.603: INFO/dalvikvm(391): | sysTid=400 nice=0 sched=0/0 cgrp=default handle=1778000 12-17 11:13:18.623: INFO/dalvikvm(391): | schedstat=( 851184092 892639082 140 ) 12-17 11:13:18.633: INFO/dalvikvm(391): at android.webkit.LoadListener.nativeFinished(Native Method) 12-17 11:13:18.633: INFO/dalvikvm(391): at android.webkit.LoadListener.nativeFinished(Native Method) 12-17 11:13:18.653: INFO/dalvikvm(391): at android.webkit.LoadListener.tearDown(LoadListener.java:1200) 12-17 11:13:18.653: INFO/dalvikvm(391): at android.webkit.LoadListener.handleEndData(LoadListener.java:721) 12-17 11:13:18.653: INFO/dalvikvm(391): at android.webkit.LoadListener.handleMessage(LoadListener.java:219) 12-17 11:13:18.672: INFO/dalvikvm(391): at android.os.Handler.dispatchMessage(Handler.java:99) 12-17 11:13:18.672: INFO/dalvikvm(391): at android.os.Looper.loop(Looper.java:123) 12-17 11:13:18.672: INFO/dalvikvm(391): at android.webkit.WebViewCore$WebCoreThread.run(WebViewCore.java:629) 12-17 11:13:18.672: INFO/dalvikvm(391): at java.lang.Thread.run(Thread.java:1019) 12-17 11:13:18.672: ERROR/dalvikvm(391): VM aborting 12-17 11:13:18.887: INFO/DEBUG(31): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 12-17 11:13:18.887: INFO/DEBUG(31): Build fingerprint: 'generic/sdk/generic:2.3/GRH55/79397:eng/test-keys' 12-17 11:13:18.893: INFO/DEBUG(31): pid: 391, tid: 400 >>> com.phonegap.sample <<< 12-17 11:13:18.893: INFO/DEBUG(31): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr deadd00d 12-17 11:13:18.893: INFO/DEBUG(31): r0 fffffebc r1 deadd00d r2 00000026 r3 00000000 12-17 11:13:18.893: INFO/DEBUG(31): r4 81da45c8 r5 40567330 r6 81d8592c r7 001b2a48 12-17 11:13:18.893: INFO/DEBUG(31): r8 43640b58 r9 42dd1ecc 10 42dd1eb4 fp 4168d82c 12-17 11:13:18.893: INFO/DEBUG(31): ip 81da4728 sp 43640410 lr afd19375 pc 81d45a02 cpsr 20000030 12-17 11:13:19.183: INFO/DEBUG(31): #00 pc 00045a02 /system/lib/libdvm.so 12-17 11:13:19.183: INFO/DEBUG(31): #01 pc 000376fc /system/lib/libdvm.so 12-17 11:13:19.183: INFO/DEBUG(31): #02 pc 000399c4 /system/lib/libdvm.so 12-17 11:13:19.193: INFO/DEBUG(31): #03 pc 0003a4a0 /system/lib/libdvm.so 12-17 11:13:19.203: INFO/DEBUG(31): #04 pc 0032b6d6 /system/lib/libwebcore.so 12-17 11:13:19.203: INFO/DEBUG(31): #05 pc 002a4da4 /system/lib/libwebcore.so 12-17 11:13:19.203: INFO/DEBUG(31): #06 pc 001a6136 /system/lib/libwebcore.so 12-17 11:13:19.213: INFO/DEBUG(31): #07 pc 002a5870 /system/lib/libwebcore.so 12-17 11:13:19.223: INFO/DEBUG(31): #08 pc 00359e36 /system/lib/libwebcore.so 12-17 11:13:19.223: INFO/DEBUG(31): #09 pc 0035d30e /system/lib/libwebcore.so 12-17 11:13:19.223: INFO/DEBUG(31): #10 pc 003638be /system/lib/libwebcore.so 12-17 11:13:19.233: INFO/DEBUG(31): #11 pc 0019f6fa /system/lib/libwebcore.so 12-17 11:13:19.233: INFO/DEBUG(31): #12 pc 0019f780 /system/lib/libwebcore.so 12-17 11:13:19.243: INFO/DEBUG(31): #13 pc 001a3d8a /system/lib/libwebcore.so 12-17 11:13:19.243: INFO/DEBUG(31): #14 pc 000d0dca /system/lib/libwebcore.so 12-17 11:13:19.253: INFO/DEBUG(31): #15 pc 000d0f28 /system/lib/libwebcore.so 12-17 11:13:19.253: INFO/DEBUG(31): #16 pc 000d106e /system/lib/libwebcore.so 12-17 11:13:19.253: INFO/DEBUG(31): #17 pc 000ddef0 /system/lib/libwebcore.so 12-17 11:13:19.263: INFO/DEBUG(31): #18 pc 
000ddf62 /system/lib/libwebcore.so 12-17 11:13:19.263: INFO/DEBUG(31): #19 pc 000f3ce2 /system/lib/libwebcore.so 12-17 11:13:19.273: INFO/DEBUG(31): #20 pc 002739ae /system/lib/libwebcore.so 12-17 11:13:19.273: INFO/DEBUG(31): #21 pc 000eac5e /system/lib/libwebcore.so 12-17 11:13:19.273: INFO/DEBUG(31): #22 pc 001b152c /system/lib/libwebcore.so 12-17 11:13:19.283: INFO/DEBUG(31): #23 pc 00017d34 /system/lib/libdvm.so 12-17 11:13:19.283: INFO/DEBUG(31): #24 pc 00048ec0 /system/lib/libdvm.so 12-17 11:13:19.283: INFO/DEBUG(31): #25 pc 00041a6a /system/lib/libdvm.so 12-17 11:13:19.293: INFO/DEBUG(31): #26 pc 0001cf94 /system/lib/libdvm.so 12-17 11:13:19.303: INFO/DEBUG(31): #27 pc 0002209c /system/lib/libdvm.so 12-17 11:13:19.303: INFO/DEBUG(31): #28 pc 00020f90 /system/lib/libdvm.so 12-17 11:13:19.313: INFO/DEBUG(31): #29 pc 0005f328 /system/lib/libdvm.so 12-17 11:13:19.313: INFO/DEBUG(31): #30 pc 0005f54e /system/lib/libdvm.so 12-17 11:13:19.313: INFO/DEBUG(31): #31 pc 00053b06 /system/lib/libdvm.so 12-17 11:13:19.313: INFO/DEBUG(31): code around pc: 12-17 11:13:19.313: INFO/DEBUG(31): 81d459e0 447a4479 ed0cf7d1 20004c09 ee34f7d1 12-17 11:13:19.323: INFO/DEBUG(31): 81d459f0 447c4808 6bdb5823 d0002b00 49064798 12-17 11:13:19.323: INFO/DEBUG(31): 81d45a00 700a2226 eea0f7d1 0004355f 0004511d 12-17 11:13:19.323: INFO/DEBUG(31): 81d45a10 0005ebd2 fffffebc deadd00d b510b40e 12-17 11:13:19.323: INFO/DEBUG(31): 81d45a20 4c0a4b09 447bb083 aa05591b 6b5bca02 12-17 11:13:19.323: INFO/DEBUG(31): code around lr: 12-17 11:13:19.333: INFO/DEBUG(31): afd19354 b0834a0d 589c447b 26009001 686768a5 12-17 11:13:19.333: INFO/DEBUG(31): afd19364 220ce008 2b005eab 1c28d003 47889901 12-17 11:13:19.333: INFO/DEBUG(31): afd19374 35544306 d5f43f01 2c006824 b003d1ee 12-17 11:13:19.333: INFO/DEBUG(31): afd19384 bdf01c30 000281a8 ffffff88 1c0fb5f0 12-17 11:13:19.333: INFO/DEBUG(31): afd19394 43551c3d a904b087 1c16ac01 604d9004 12-17 11:13:19.333: INFO/DEBUG(31): stack: 12-17 11:13:19.333: INFO/DEBUG(31): 436403d0 00000015 12-17 11:13:19.333: INFO/DEBUG(31): 436403d4 afd18407 /system/lib/libc.so 12-17 11:13:19.333: INFO/DEBUG(31): 436403d8 afd4270c /system/lib/libc.so 12-17 11:13:19.343: INFO/DEBUG(31): 436403dc afd426b8 /system/lib/libc.so 12-17 11:13:19.343: INFO/DEBUG(31): 436403e0 00000000 12-17 11:13:19.343: INFO/DEBUG(31): 436403e4 afd19375 /system/lib/libc.so 12-17 11:13:19.353: INFO/DEBUG(31): 436403e8 001af760 [heap] 12-17 11:13:19.353: INFO/DEBUG(31): 436403ec afd183d9 /system/lib/libc.so 12-17 11:13:19.353: INFO/DEBUG(31): 436403f0 001b2a48 [heap] 12-17 11:13:19.353: INFO/DEBUG(31): 436403f4 0005ebd2 [heap] 12-17 11:13:19.353: INFO/DEBUG(31): 436403f8 40567330 /dev/ashmem/dalvik-heap (deleted) 12-17 11:13:19.363: INFO/DEBUG(31): 436403fc 81d8592c /system/lib/libdvm.so 12-17 11:13:19.363: INFO/DEBUG(31): 43640400 001b2a48 [heap] 12-17 11:13:19.363: INFO/DEBUG(31): 43640404 afd18437 /system/lib/libc.so 12-17 11:13:19.363: INFO/DEBUG(31): 43640408 df002777 12-17 11:13:19.363: INFO/DEBUG(31): 4364040c e3a070ad 12-17 11:13:19.363: INFO/DEBUG(31): #00 43640410 00000001 12-17 11:13:19.363: INFO/DEBUG(31): 43640414 81d37701 /system/lib/libdvm.so 12-17 11:13:19.363: INFO/DEBUG(31): #01 43640418 00000001 12-17 11:13:19.363: INFO/DEBUG(31): 4364041c 81d399c9 /system/lib/libdvm.so 12-17 11:13:22.753: INFO/BootReceiver(62): Copying /data/tombstones/tombstone_09 to DropBox (SYSTEM_TOMBSTONE) 12-17 11:13:22.943: DEBUG/dalvikvm(62): GC_CONCURRENT freed 876K, 48% free 4240K/8135K, external 2269K/3469K, paused 9ms+10ms 12-17 
11:13:23.133: DEBUG/dalvikvm(62): GC_FOR_MALLOC freed 348K, 47% free 4318K/8135K, external 2269K/3469K, paused 147ms 12-17 11:13:23.243: DEBUG/Zygote(33): Process 391 terminated by signal (11) 12-17 11:13:23.253: ERROR/InputDispatcher(62): channel '406defc8 com.phonegap.sample/com.phonegap.sample.sample (server)' ~ Consumer closed input channel or an error occurred. events=0x8 12-17 11:13:23.253: ERROR/InputDispatcher(62): channel '406defc8 com.phonegap.sample/com.phonegap.sample.sample (server)' ~ Channel is unrecoverably broken and will be disposed! 12-17 11:13:23.323: DEBUG/dalvikvm(62): GC_FOR_MALLOC freed 134K, 47% free 4376K/8135K, external 2269K/3469K, paused 174ms 12-17 11:13:23.323: INFO/ActivityManager(62): Process com.phonegap.sample (pid 391) has died. 12-17 11:13:23.333: INFO/WindowManager(62): WIN DEATH: Window{406defc8 com.phonegap.sample/com.phonegap.sample.sample paused=false} 12-17 11:13:23.542: DEBUG/dalvikvm(124): GC_EXPLICIT freed 61K, 51% free 2836K/5767K, external 1973K/2288K, paused 907ms 12-17 11:13:23.693: WARN/InputManagerService(62): Got RemoteException sending setActive(false) notification to pid 391 uid 10031 Sorry about the gigantic log posts, but I don't know what is of importance here...

    Read the article

  • Error "Input length must be multiple of 8 when decrypting with padded cipher"

    - by Ross Peoples
    I am trying to move a project from C# to Java for a learning exercise. I am still very new to Java, but I have a TripleDES class in C# that encrypts strings and returns a string value of the encrypted byte array. Here is my C# code: using System; using System.IO; using System.Collections.Generic; using System.Security.Cryptography; using System.Text; namespace tDocc.Classes { /// <summary> /// Triple DES encryption class /// </summary> public static class TripleDES { private static byte[] key = { 110, 32, 73, 24, 125, 66, 75, 18, 79, 150, 211, 122, 213, 14, 156, 136, 171, 218, 119, 240, 81, 142, 23, 4 }; private static byte[] iv = { 25, 117, 68, 23, 99, 78, 231, 219 }; /// <summary> /// Encrypt a string to an encrypted byte array /// </summary> /// <param name="plainText">Text to encrypt</param> /// <returns>Encrypted byte array</returns> public static byte[] Encrypt(string plainText) { UTF8Encoding utf8encoder = new UTF8Encoding(); byte[] inputInBytes = utf8encoder.GetBytes(plainText); TripleDESCryptoServiceProvider tdesProvider = new TripleDESCryptoServiceProvider(); ICryptoTransform cryptoTransform = tdesProvider.CreateEncryptor(key, iv); MemoryStream encryptedStream = new MemoryStream(); CryptoStream cryptStream = new CryptoStream(encryptedStream, cryptoTransform, CryptoStreamMode.Write); cryptStream.Write(inputInBytes, 0, inputInBytes.Length); cryptStream.FlushFinalBlock(); encryptedStream.Position = 0; byte[] result = new byte[encryptedStream.Length]; encryptedStream.Read(result, 0, (int)encryptedStream.Length); cryptStream.Close(); return result; } /// <summary> /// Decrypt a byte array to a string /// </summary> /// <param name="inputInBytes">Encrypted byte array</param> /// <returns>Decrypted string</returns> public static string Decrypt(byte[] inputInBytes) { UTF8Encoding utf8encoder = new UTF8Encoding(); TripleDESCryptoServiceProvider tdesProvider = new TripleDESCryptoServiceProvider(); ICryptoTransform cryptoTransform = tdesProvider.CreateDecryptor(key, iv); MemoryStream decryptedStream = new MemoryStream(); CryptoStream cryptStream = new CryptoStream(decryptedStream, cryptoTransform, CryptoStreamMode.Write); cryptStream.Write(inputInBytes, 0, inputInBytes.Length); cryptStream.FlushFinalBlock(); decryptedStream.Position = 0; byte[] result = new byte[decryptedStream.Length]; decryptedStream.Read(result, 0, (int)decryptedStream.Length); cryptStream.Close(); UTF8Encoding myutf = new UTF8Encoding(); return myutf.GetString(result); } /// <summary> /// Decrypt an encrypted string /// </summary> /// <param name="text">Encrypted text</param> /// <returns>Decrypted string</returns> public static string DecryptText(string text) { if (text == "") { return text; } return Decrypt(Convert.FromBase64String(text)); } /// <summary> /// Encrypt a string /// </summary> /// <param name="text">Unencrypted text</param> /// <returns>Encrypted string</returns> public static string EncryptText(string text) { if (text == "") { return text; } return Convert.ToBase64String(Encrypt(text)); } } /// <summary> /// Random number generator /// </summary> public static class RandomGenerator { /// <summary> /// Generate random number /// </summary> /// <param name="length">Number of randomizations</param> /// <returns>Random number</returns> public static int GenerateNumber(int length) { byte[] randomSeq = new byte[length]; new RNGCryptoServiceProvider().GetBytes(randomSeq); int code = Environment.TickCount; foreach (byte b in randomSeq) { code += (int)b; } return code; } } /// <summary> /// Hash generator 
class /// </summary> public static class Hasher { /// <summary> /// Hash type /// </summary> public enum eHashType { /// <summary> /// MD5 hash. Quick but collisions are more likely. This should not be used for anything important /// </summary> MD5 = 0, /// <summary> /// SHA1 hash. Quick and secure. This is a popular method for hashing passwords /// </summary> SHA1 = 1, /// <summary> /// SHA256 hash. Slower than SHA1, but more secure. Used for encryption keys /// </summary> SHA256 = 2, /// <summary> /// SHA348 hash. Even slower than SHA256, but offers more security /// </summary> SHA348 = 3, /// <summary> /// SHA512 hash. Slowest but most secure. Probably overkill for most applications /// </summary> SHA512 = 4, /// <summary> /// Derrived from MD5, but only returns 12 digits /// </summary> Digit12 = 5 } /// <summary> /// Hashes text using a specific hashing method /// </summary> /// <param name="text">Input text</param> /// <param name="hash">Hash method</param> /// <returns>Hashed text</returns> public static string GetHash(string text, eHashType hash) { if (text == "") { return text; } if (hash == eHashType.MD5) { MD5CryptoServiceProvider hasher = new MD5CryptoServiceProvider(); return ByteToHex(hasher.ComputeHash(Encoding.ASCII.GetBytes(text))); } else if (hash == eHashType.SHA1) { SHA1Managed hasher = new SHA1Managed(); return ByteToHex(hasher.ComputeHash(Encoding.ASCII.GetBytes(text))); } else if (hash == eHashType.SHA256) { SHA256Managed hasher = new SHA256Managed(); return ByteToHex(hasher.ComputeHash(Encoding.ASCII.GetBytes(text))); } else if (hash == eHashType.SHA348) { SHA384Managed hasher = new SHA384Managed(); return ByteToHex(hasher.ComputeHash(Encoding.ASCII.GetBytes(text))); } else if (hash == eHashType.SHA512) { SHA512Managed hasher = new SHA512Managed(); return ByteToHex(hasher.ComputeHash(Encoding.ASCII.GetBytes(text))); } else if (hash == eHashType.Digit12) { MD5CryptoServiceProvider hasher = new MD5CryptoServiceProvider(); string newHash = ByteToHex(hasher.ComputeHash(Encoding.ASCII.GetBytes(text))); return newHash.Substring(0, 12); } return ""; } /// <summary> /// Generates a hash based on a file's contents. 
Used for detecting changes to a file and testing for duplicate files /// </summary> /// <param name="info">FileInfo object for the file to be hashed</param> /// <param name="hash">Hash method</param> /// <returns>Hash string representing the contents of the file</returns> public static string GetHash(FileInfo info, eHashType hash) { FileStream hashStream = new FileStream(info.FullName, FileMode.Open, FileAccess.Read); string hashString = ""; if (hash == eHashType.MD5) { MD5CryptoServiceProvider hasher = new MD5CryptoServiceProvider(); hashString = ByteToHex(hasher.ComputeHash(hashStream)); } else if (hash == eHashType.SHA1) { SHA1Managed hasher = new SHA1Managed(); hashString = ByteToHex(hasher.ComputeHash(hashStream)); } else if (hash == eHashType.SHA256) { SHA256Managed hasher = new SHA256Managed(); hashString = ByteToHex(hasher.ComputeHash(hashStream)); } else if (hash == eHashType.SHA348) { SHA384Managed hasher = new SHA384Managed(); hashString = ByteToHex(hasher.ComputeHash(hashStream)); } else if (hash == eHashType.SHA512) { SHA512Managed hasher = new SHA512Managed(); hashString = ByteToHex(hasher.ComputeHash(hashStream)); } hashStream.Close(); hashStream.Dispose(); hashStream = null; return hashString; } /// <summary> /// Converts a byte array to a hex string /// </summary> /// <param name="data">Byte array</param> /// <returns>Hex string</returns> public static string ByteToHex(byte[] data) { StringBuilder builder = new StringBuilder(); foreach (byte hashByte in data) { builder.Append(string.Format("{0:X1}", hashByte)); } return builder.ToString(); } /// <summary> /// Converts a hex string to a byte array /// </summary> /// <param name="hexString">Hex string</param> /// <returns>Byte array</returns> public static byte[] HexToByte(string hexString) { byte[] returnBytes = new byte[hexString.Length / 2]; for (int i = 0; i <= returnBytes.Length - 1; i++) { returnBytes[i] = byte.Parse(hexString.Substring(i * 2, 2), System.Globalization.NumberStyles.HexNumber); } return returnBytes; } } } And her is what I've got for Java code so far, but I'm getting the error "Input length must be multiple of 8 when decrypting with padded cipher" when I run the test on this: import java.security.InvalidAlgorithmParameterException; import java.security.InvalidKeyException; import javax.crypto.Cipher; import javax.crypto.NoSuchPaddingException; import javax.crypto.SecretKey; import javax.crypto.spec.IvParameterSpec; import javax.crypto.spec.SecretKeySpec; import com.tdocc.utils.Base64; public class TripleDES { private static byte[] keyBytes = { 110, 32, 73, 24, 125, 66, 75, 18, 79, (byte)150, (byte)211, 122, (byte)213, 14, (byte)156, (byte)136, (byte)171, (byte)218, 119, (byte)240, 81, (byte)142, 23, 4 }; private static byte[] ivBytes = { 25, 117, 68, 23, 99, 78, (byte)231, (byte)219 }; public static String encryptText(String plainText) { try { if (plainText.isEmpty()) return plainText; return Base64.decode(TripleDES.encrypt(plainText)).toString(); } catch (Exception e) { e.printStackTrace(); } return null; } public static byte[] encrypt(String plainText) throws InvalidKeyException, InvalidAlgorithmParameterException, NoSuchPaddingException { try { final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(ivBytes); final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, key, iv); final byte[] plainTextBytes = plainText.getBytes("utf-8"); final byte[] cipherText = cipher.doFinal(plainTextBytes); return 
cipherText; } catch (Exception e) { e.printStackTrace(); } return null; } public static String decryptText(String message) { try { if (message.isEmpty()) return message; else return TripleDES.decrypt(message.getBytes()); } catch (Exception e) { e.printStackTrace(); } return null; } public static String decrypt(byte[] message) { try { final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(ivBytes); final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); cipher.init(Cipher.DECRYPT_MODE, key, iv); final byte[] plainText = cipher.doFinal(message); return plainText.toString(); } catch (Exception e) { e.printStackTrace(); } return null; } }
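For what it is worth, two things in the Java port stand out (my reading, not confirmed in the thread). First, encryptText Base64-decodes the ciphertext where the C# version Base64-encodes it, and decryptText feeds message.getBytes(), the bytes of the Base64 text itself, into the cipher instead of decoding it first; that is exactly what produces "Input length must be multiple of 8 when decrypting with padded cipher". Second, plainText.toString() on a byte[] yields an array reference, not the decrypted text. A sketch of the corrected methods, dropped into the same class (using java.util.Base64 from Java 8+ in place of the post's com.tdocc.utils.Base64, and letting exceptions propagate for brevity):

import java.util.Base64;

public static String encryptText(String plainText) throws Exception {
    if (plainText.isEmpty())
        return plainText;
    // Encode, mirroring C#'s Convert.ToBase64String(Encrypt(text)).
    return Base64.getEncoder().encodeToString(encrypt(plainText));
}

public static String decryptText(String message) throws Exception {
    if (message.isEmpty())
        return message;
    // Decode the Base64 text back to raw ciphertext bytes before decrypting;
    // passing message.getBytes() is what triggers the "multiple of 8" error.
    return decrypt(Base64.getDecoder().decode(message));
}

public static String decrypt(byte[] message) throws Exception {
    final SecretKey key = new SecretKeySpec(keyBytes, "DESede");
    final IvParameterSpec iv = new IvParameterSpec(ivBytes);
    final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
    cipher.init(Cipher.DECRYPT_MODE, key, iv);
    final byte[] plainText = cipher.doFinal(message);
    // Rebuild the string from the bytes; byte[].toString() prints a reference.
    return new String(plainText, "UTF-8");
}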

    Read the article

  • Updating the value of a math equation with YUI slider and simple radio buttons.

    - by dj lewis
    I have a form that is used to show a price for a product. I have a YUI slider setup that changes the price, and it works perfectly. Now I'm trying to add in radio buttons that also should update that same price value. The price displayed should take into account all 3 fields, and update dynamically as any are updated. This is the code I have, but I don't have any radio buttons for cpanelPrice yet as I'm still just trying to get the IPs to work. <script type="text/javascript"> (function() { var Event = YAHOO.util.Event, Dom = YAHOO.util.Dom, lang = YAHOO.lang, slider, bg="slider-bg", thumb="slider-thumb", orderlink="order-link", monthlyprice="monthly-price", dram="ram", stor="storage",dcpu="cpu",bandw="bandwidth",slid="sliderbg" // The slider can move 0 pixels up var topConstraint = 0; // The slider can move 200 pixels down var bottomConstraint = 585; // Custom scale factor for converting the pixel offset into a real value var scaleFactor = 1; // The amount the slider moves when the value is changed with the arrow // keys var keyIncrement = 65; var tickSize = 65; Event.onDOMReady(function() { slider = YAHOO.widget.Slider.getHorizSlider(bg, thumb, topConstraint, bottomConstraint, tickSize); slider.setValue(1, true); slider.animate = true; slider.getRealValue = function() { return Math.round(this.getValue() * scaleFactor); } slider.subscribe("change", function(offsetFromStart) { var ordnode = Dom.get(orderlink); var prinode = Dom.get(monthlyprice); var ramnode = Dom.get(dram); var stornode = Dom.get(stor); var cpunode = Dom.get(dcpu); var bwnode = Dom.get(bandw); var slidnode = Dom.get(slid); var actualValue = slider.getRealValue(); if (actualValue < 0) { var actualValue = 0; } if (actualValue > -1 && actualValue < 5) { basePrice = 15; var pid = "7"; var ram = "128 MB"; stornode.innerHTML = "5"; cpunode.innerHTML = ".5"; bwnode.innerHTML = "50"; slidnode.innerHTML = "<img src=\"/images/sliderbg1.png\" alt=\"\" />"; } else if (actualValue > 60 && actualValue < 70) { basePrice = 25; var pid = "8"; var ram = "256 MB"; stornode.innerHTML = "10"; cpunode.innerHTML = ".5"; bwnode.innerHTML = "100"; slidnode.innerHTML = "<img src=\"/images/sliderbg2.png\" alt=\"\" />"; } else if (actualValue > 125 && actualValue < 135) { basePrice = 40; var pid = "9"; var ram = "512 MB"; stornode.innerHTML = "20"; cpunode.innerHTML = "1"; bwnode.innerHTML = "200"; slidnode.innerHTML = "<img src=\"/images/sliderbg3.png\" alt=\"\" />"; } else if (actualValue > 190 && actualValue < 200) { basePrice = 60; var pid = "10"; var ram = "1 GB"; stornode.innerHTML = "40"; cpunode.innerHTML = "1"; bwnode.innerHTML = "400"; slidnode.innerHTML = "<img src=\"/images/sliderbg4.png\" alt=\"\" />"; } else if (actualValue> 255 && actualValue < 265) { basePrice = 80; var pid = "11"; var ram = "1.5 GB"; stornode.innerHTML = "60"; cpunode.innerHTML = "1"; bwnode.innerHTML = "600"; slidnode.innerHTML = "<img src=\"/images/sliderbg5.png\" alt=\"\" />"; } else if (actualValue > 320 && actualValue < 330) { basePrice = 110; var pid = "12"; var ram = "2 GB"; stornode.innerHTML = "80"; cpunode.innerHTML = "2"; bwnode.innerHTML = "800"; slidnode.innerHTML = "<img src=\"/images/sliderbg6.png\" alt=\"\" />"; } else if (actualValue > 385 && actualValue < 395) { basePrice = 140; var pid = "13"; var ram = "2.5 GB"; stornode.innerHTML = "100"; cpunode.innerHTML = "2"; bwnode.innerHTML = "1000"; slidnode.innerHTML = "<img src=\"/images/sliderbg7.png\" alt=\"\" />"; } else if (actualValue > 450 && actualValue < 460) { basePrice = 170; var pid = 
"14"; var ram = "3 GB"; stornode.innerHTML = "120"; cpunode.innerHTML = "3"; bwnode.innerHTML = "1200"; slidnode.innerHTML = "<img src=\"/images/sliderbg8.png\" alt=\"\" />"; } else if (actualValue > 515 && actualValue < 525) { basePrice = 200; var pid = "15"; var ram = "3.5 GB"; stornode.innerHTML = "140"; cpunode.innerHTML = "3"; bwnode.innerHTML = "1400"; slidnode.innerHTML = "<img src=\"/images/sliderbg9.png\" alt=\"\" />"; } else if (actualValue > 580 && actualValue < 590) { basePrice = 240; var pid = "16"; var ram = "4 GB"; stornode.innerHTML = "160"; cpunode.innerHTML = "4"; bwnode.innerHTML = "1600"; slidnode.innerHTML = "<img src=\"/images/sliderbg10.png\" alt=\"\" />"; } // Setup the order link ordnode.innerHTML = "<a href=\"https://account.hostingbeast.com/cart.php?a=add&pid=" + pid + "\"><img src=\"/images/blank.gif\" alt=\"Order VPS Hosting\" height=\"100\" width=\"100\" /></a>"; ramnode.innerHTML = ram; ipPrice = 0; function setIpPrice(ips) { ipPrice = ips.value; } cpanelPrice = 0; prinode.innerHTML = basePrice + ipPrice + cpanelPrice; }); // Use setValue to reset the value to white: Event.on("putval", "click", function(e) { slider.setValue(100, false); //false here means to animate if possible }); setTimeout(function () { slider.setValue(10); },0); }); })(); </script> <div style="width: 649px; margin:auto"> <span id="sliderbg"></span> <div class="yui-skin-sam"> <div id="slider-bg" class="yui-h-slider" tabindex="-1"> <div id="slider-thumb" class="yui-slider-thumb"><img src="/images/thumb-bar.png"></div> </div> </div> </div> <div class="vpsdetails"> <div id="vpsprod"><span id="cpu"></span></div> <div id="vpsram"><span id="ram"></span></div> <div id="vpsstor"><span id="storage"></span> GB</div> <div id="vpsbw"><span id="bandwidth"></span> GB</div> <div id="slideprice">$ <span id="monthly-price"></span></div> </div> <input type="radio" name="ips" value="2" onclick="setIpPrice(this.value - 2 * 2);" checked="checked" /> 2 <input type="radio" name="ips" value="4" onclick="setIpPrice(this.value - 2 * 2);" /> 4

    Read the article

< Previous Page | 210 211 212 213 214 215  | Next Page >