Search Results

Search found 443 results on 18 pages for 'validates uniqueness of'.

Page 17/18

  • Validating a linked item’s data template in Sitecore

    - by Kyle Burns
    I’ve been doing quite a bit of work in Sitecore recently, and last week I encountered a situation that it appears many others have hit. I was working with a field that had been configured originally as a grouped droplink, but now needed to be updated to support additional levels of hierarchy in the folder structure. If you’ve done any work in Sitecore that statement makes sense, but if not it may seem a bit cryptic. Sitecore offers a number of different field types, and a subset of these field types focus on providing links either to other items on the content tree or to content that is not stored in Sitecore. In the case of the grouped droplink, the field is configured with a “root” folder, and each direct descendant of this folder is considered to be a header for a grouping of other items and displayed in a dropdown. A picture is worth a thousand words, so consider the following piece of a content tree: If I configure a grouped droplink field to use the “Current” folder as its datasource, the control that gets to my content author looks like this:

    This presents a nicely organized display and limits the user to selecting only the direct grandchildren of the folder root. It also presents the limitation that struck us as we were thinking through the content architecture and how it would hold up over time – the authors cannot further organize content under the root folder because of the structure required for the dropdown to work. Over time, not allowing the hierarchy to go any deeper would prevent our authors from being able to organize their content in a way that it could be found when needed, so the grouped droplink data type was not going to fit the bill.

    I needed to look for an alternative data type that allowed for selection of a single item and limited my choices to descendants of a specific node on the content tree. After looking at the options available for links in Sitecore and considering them against each other, one option stood out as nearly perfect – the droptree. This field type stores its data identically to the droplink and allows for the selection of zero or one items under a specific node in the content tree. By changing my data template to use droptree instead of grouped droplink, the author is now presented with the following when selecting a linked item:

    Sounds great, but I did say almost perfect – there’s still one flaw. The code intended to display the linked item is expecting the selection to use a specific data template (or more precisely, it makes certain assumptions about the fields that will be present), but the droptree does nothing to prevent the author from selecting a folder (since folders are items too) instead of one of the items contained within a folder. I looked to see if anyone had already solved this problem. I found many people discussing the problem, but the closest that I found to a solution was the statement “the best thing would probably be to create a custom validator” with no further discussion in regards to what this validator might look like. I needed to create my own validator to ensure that the user had not selected a folder. Since so many people had the same issue, I decided to make the validator as reusable as possible and share it here.

    The validator that I created inherits from StandardValidator. In order to make the validator more intuitive to developers that are familiar with the TreeList controls in Sitecore, I chose to implement the following parameters:
    - ExcludeTemplatesForSelection – serves as a “deny list”. If the data template of the selected item is in this list, it will not validate.
    - IncludeTemplatesForSelection – this can either be empty, to indicate that any template not contained in the exclusion list is acceptable, or it can contain the list of acceptable templates.

    Now that I’ve explained the parameters and the purpose of the validator, I’ll let the code do the rest of the talking:

    /// <summary>
    /// Validates that a link field value meets template requirements
    /// specified using the following parameters:
    /// - ExcludeTemplatesForSelection: If present, the item being
    ///   based on an excluded template will cause validation to fail.
    /// - IncludeTemplatesForSelection: If present, the item not being
    ///   based on an included template will cause validation to fail.
    ///
    /// ExcludeTemplatesForSelection trumps IncludeTemplatesForSelection
    /// if the same value appears in both lists. Lists are comma separated.
    /// </summary>
    [Serializable]
    public class LinkItemTemplateValidator : StandardValidator
    {
        public LinkItemTemplateValidator()
        {
        }

        /// <summary>
        /// Serialization constructor is required by the runtime
        /// </summary>
        /// <param name="info"></param>
        /// <param name="context"></param>
        public LinkItemTemplateValidator(SerializationInfo info, StreamingContext context) : base(info, context) { }

        /// <summary>
        /// Returns whether the linked item meets the template
        /// constraints specified in the parameters
        /// </summary>
        /// <returns>
        /// The result of the evaluation.
        /// </returns>
        protected override ValidatorResult Evaluate()
        {
            if (string.IsNullOrWhiteSpace(ControlValidationValue))
            {
                return ValidatorResult.Valid; // let "required" validation handle
            }

            var excludeString = Parameters["ExcludeTemplatesForSelection"];
            var includeString = Parameters["IncludeTemplatesForSelection"];
            if (string.IsNullOrWhiteSpace(excludeString) && string.IsNullOrWhiteSpace(includeString))
            {
                return ValidatorResult.Valid; // "allow anything" if no params
            }

            Guid linkedItemGuid;
            if (!Guid.TryParse(ControlValidationValue, out linkedItemGuid))
            {
                return ValidatorResult.Valid; // probably put validator on wrong field
            }

            var item = GetItem();
            var linkedItem = item.Database.GetItem(new ID(linkedItemGuid));

            if (linkedItem == null)
            {
                return ValidatorResult.Valid; // this validator isn't for broken links
            }

            // RemoveEmptyEntries keeps an omitted include list from yielding a single
            // empty entry, which would otherwise cause every item to fail validation.
            var exclusionList = (excludeString ?? string.Empty).Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
            var inclusionList = (includeString ?? string.Empty).Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

            if ((inclusionList.Length == 0 || inclusionList.Contains(linkedItem.TemplateName))
                && !exclusionList.Contains(linkedItem.TemplateName))
            {
                return ValidatorResult.Valid;
            }

            Text = GetText("The field \"{0}\" specifies an item which is based on template \"{1}\". This template is not valid for selection", GetFieldDisplayName(), linkedItem.TemplateName);

            return GetFailedResult(ValidatorResult.FatalError);
        }

        protected override ValidatorResult GetMaxValidatorResult()
        {
            return ValidatorResult.FatalError;
        }

        public override string Name
        {
            get { return @"LinkItemTemplateValidator"; }
        }
    }

    In this blog entry, I have shared some code that I found useful in solving a problem that seemed fairly common.
Hopefully the next person that is looking for this answer finds it useful as well.
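    To wire a validator like this one up, Sitecore typically expects it to be registered as an item under /sitecore/system/Settings/Validation Rules/Field Rules and then assigned to the field in the data template (for example via the field's Validator Bar and Quick Action Bar settings). The item path, namespace, assembly, template names, and parameter syntax below are illustrative assumptions, not taken from the article:

    Item:       /sitecore/system/Settings/Validation Rules/Field Rules/Link Item Template
    Type:       MyCompany.Validators.LinkItemTemplateValidator, MyCompany.Validators
    Parameters: ExcludeTemplatesForSelection=Folder&IncludeTemplatesForSelection=Article,News Item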

    Read the article

  • B2B and B2C Commerce are alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester’s first Commerce Wave focused on B2B, released earlier this month. The report validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, the report confirms something very important: B2B and B2C Commerce are alike… but a little different.

    B2B and B2C Commerce are alike…

    Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: search & navigation, promotions, cross-channel commerce and mobile: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.”

    Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double- and triple-digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). This momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion).

    But a little different…

    Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as part of their job. So in addition to the rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management.

    Forrester also is emphasizing three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information; and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery. We would like to expand on each of these three areas.

    As Forrester highlights, back-end PIM is definitely needed by B2B Commerce providers.
    Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in place, and is not fully featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering, and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For Commerce, what customers really need is a robust product catalog and content management system that enables business users to further enrich catalog and content data and ready it to be presented and sold online. This has been a significant area of investment in the Oracle Commerce platform, which continues to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end.

    On the topic of web content management, we were pleased to see Forrester recognize Oracle’s unique functional capabilities in this area and the “unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formerly FatWire).” Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online.

    Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles. We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one.

    So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform “has always had strong B2B commerce capabilities and Oracle has an exhaustive list of B2B customers using the solution.” What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we’ll address in an exciting new release in the coming months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management, and ERP.
    To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources:
    - 2013 B2B Commerce Trends Report
    - B2B Commerce Whitepaper: Consumerization, Complexity, Change
    - B2B Commerce Webcast: What Industry Trend Setters Do Right
    - Internet Retailer, Web Drives Sales for B2B Companies
    - Internet Retailer, The Web Means Business: B2B Companies Beef Up Their Websites, borrowing from b2c retailers and breaking new ground
    - Internet Retailer, B2B e-Commerce is poised for growth

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • Oracle Coherence & Oracle Service Bus: REST API Integration

    - by Nino Guarnacci
    This post aims to highlight one of the features of Oracle Coherence that allows it to be easily added and integrated into a wide variety of projects: the REST API exposed by Coherence nodes, through which you can interact with the in-memory data grid.

    Oracle Coherence and Oracle Service Bus are natively integrated through a feature of the Oracle Service Bus that lets you use the Coherence grid cache when configuring a business service. This feature allows you to use an intermediate cache layer to retrieve the answers from previous invocations of the same service, without necessarily having to invoke the real business service again. Directly from the web console of Oracle Service Bus, you can decide the eviction policies for the objects/answers and define the discriminating parameters that identify their uniqueness.

    The Coherence REST API, however, allows you to integrate both products for other needs, enabling new architectural designs. Consider a Coherence node as a simple service that interoperates through standard protocols, in particular REST (with JSON and XML) – think of Coherence as a company-wide shared service, a centralized “map and reduce” implementation that you can access through a huge variety of protocols (transports and envelopes). An amazing step forward for those who still think in terms of connectors and code. This type of integration does not require writing custom code or a complex implementation to be self-supported. The added value comes from the incredible value of both products independently, and still more from their simple and robust integration.

    As already mentioned, this scenario opens a hidden new door behind the columns of these two products. The door leads to new ideas and perspectives for enterprise architectures that increasingly wink at next-generation applications: simple and dynamic, perhaps toward mobile and Web 2.0.

    Below is a small and simple demo showing how easy it is to integrate these two products using the Coherence REST API. The demo is also intended to stimulate your imagination toward new enterprise architectures using this approach. The idea is to create a centralized alerting system, fed easily from any of the company’s applications regardless of the technology with which they were built, and then to use a standard representation protocol – RSS, through a service exposed by the service bus – so you can browse and search only the alerts that interest you, by category, author, title, date, and so on.

    The steps needed to implement this system are very simple and very few. They are listed below and described so they can be easily replicated in your environment. Remember that the demo is only meant to demonstrate how easy it is to integrate Oracle Coherence and the Oracle Service Bus, and to stimulate your imagination toward new technological approaches.

    1) Install the two products. The versions used in this demo (if necessary, consult the installation guides of the two products):
    - Oracle Service Bus ver. 11.1.1.5.0 http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html
    - Oracle Coherence ver. 3.7.1 http://www.oracle.com/technetwork/middleware/coherence/downloads/index.html

    2) Because we chose to create a centralized alerting system, we need to define a structured type containing some alerting attributes, useful to preserve and organize the information of the various alerts sent by the different applications.
    To that end, a Java class named Alert was built, containing the canonical properties of an alarm:
    - Title
    - Description
    - System
    - Time
    - Severity

    3) Next, we need to create two configuration files for the Coherence node, in order to save the Alert objects within the grid through the REST/HTTP protocol (in addition to the native APIs for Java, C++, C and .NET). Here are the two minimal configuration files for Coherence:
    - coherence-rest-config.xml
    - resty-server-config.xml
    This minimal configuration allows me to use a distributed cache named "alerts" that can also be accessed via HTTP/REST on the host "localhost" over port "8080"; objects are of type “oracle.cohsb.Alert”.

    4) Below is a simple Java class that represents the type of alert messages:

    5) At this point we just need to start up our Coherence node, able to listen on the HTTP protocol to manage the “alerts” cache, which will receive incoming XML or JSON objects of type Alert. Remember to include in the classpath of the Coherence node the Alert Java class and the necessary Coherence libraries and configuration files. Then run the Coherence node class “com.tangosol.net.DefaultCacheServer”, setting the following parameters:
    -Dtangosol.coherence.log.level=9
    -Dtangosol.coherence.log=stdout
    -Dtangosol.coherence.cacheconfig=[PATH_TO_THE_FILE]\resty-server-config.xml

    6) Let’s create a procedure to test our Coherence configuration and insert some custom alerts into our cache. The technology with which you achieve this functionality hardly matters – JavaScript, Python, Ruby, Scala, C++, Java – because the protocol used to communicate with Coherence is simply HTTP with JSON or XML. For this little demo I chose Java: a method to send/put the alert to the cache, a method to query and view the content of the cache, and finally the main method that executes them. No special library is added to the classpath for our class (the JSON structure is statically defined); when executed, it asks for some information such as title, description, and so on, in order to compose and send an alert to the cache, and then it performs an inquiry against the same cache. A good exercise at this point may be to create the same procedure using other technologies, such as a simple HTML page containing some JavaScript code, or Python, Ruby, and so on.

    7) Now we are ready to start configuring the Oracle Service Bus in order to integrate the two products. First, we integrate the internal alerting system of Oracle Service Bus with our centralized alerting system based on the Coherence node. This ensures that from monitoring, or directly from within our proxy message flows, we can throw alerts and save them directly into the Coherence node. To do this I chose to use JMS, natively present inside Oracle WebLogic / Service Bus. Access the Oracle WebLogic Administration Console, then create and configure a new JMS connection factory and a new JMS destination (queue). Now we should create a new resource of type “alert destination” within our Oracle Service Bus project. The new “alert destination” resource should be configured using the newly created JMS connection factory and JMS destination.
    Finally, in order to pick up the alert messages enqueued in our JMS destination and send them to our Coherence node, we just need to create a new business service and proxy service within our Oracle Service Bus project. Our business service is responsible for sending a message to our Coherence REST service using PUT as the HTTP method. Our proxy service has to collect all messages enqueued on the destination and execute an XQuery transformation on those messages in order to translate them into valid XML alert objects to be sent to our Coherence service through the newly created business service. The message flow pipeline contains the XQuery transformation.

    Incredibly, we have just done a basic first integration between the native alerting system of Oracle Service Bus and our centralized alerting system, simply by configuring our Coherence node, without developing anything. It’s time to test it out. To do this I created a proxy service able to generate an alert using our “alert destination” whenever the proxy is invoked. After a few invocations of our proxy to generate fake alerts, we can open an Internet browser and type the URL http://localhost:8080/alerts/ to see what has been inserted into the Coherence node.

    8) We are ready for the final step. We will create a new message flow that can be used to search and display the results in a standard way. For this I chose RSS as the standard representation, to display a formatted result on a huge variety of devices such as readers for the iPhone and Android. The query may be defined at request time, returning only the feed items related to our needs. To do this we need to create a new business service, a new proxy service, and finally a new XQuery transformation that takes care of translating the collection of alerts returned by our Coherence node into a nicely formatted, standard RSS 2.0 document.

    So we start right from this resource (the XQuery), which has the task of transforming a collection of XML alerts returned from the Coherence node into a well-formatted RSS 2.0 feed; then our new business service, which will search the alerts on our Coherence node using the REST API; and finally our last resource, the proxy service, which will be exposed as an RSS feed to various mobile devices and traditional web readers, and in which we will intercept any search query and transform the result returned by the business service into an RSS 2.0 feed. The message flow includes the transformation phase (Alert TO Feed Items).

    Finally, some little tricks to follow during the routing to the business service:
    - check for any queries present in the URL to request a subset of alerts
    - set the HTTP header "Accept" to get an answer in XML instead of JSON
    In our little demo we also statically added some Coherence parameters to the request: sort=time:desc;start=0;count=100 – asking Coherence to sort the results by date and return from the first result up to a maximum of 100.

    Done!! Just incredible – our centralized alerting system is ready.
    It inherits all the qualities and capabilities of the two products involved, Oracle Coherence & Oracle Service Bus: RASP (Reliability, Availability, Scalability, Performance).

    Now try to use your mobile device, or a normal Internet browser, to access the RSS just published. Some URLs you may test:

    Search for the last 100 alerts:
    http://localhost:7001/alarms
    Search for alerts that do not have time set to null (time is not null):
    http://localhost:7001/alarms?q=time+is+not+null
    Search for alerts whose system property is “Web Browser” (system = ‘Web Browser’):
    http://localhost:7001/alarms?q=system+%3D+%27Web+Browser%27
    Search for alerts whose system property is “Web Browser”, whose severity property is “Fatal”, and whose title property contains the word “Javascript” (system = ‘Web Browser’ and severity = ‘Fatal’ and title like ‘%Javascript%’):
    http://localhost:8080/alerts?q=system+%3D+%27Web+Browser%27+AND+severity+%3D+%27Fatal%27+AND+title+LIKE+%27%25Javascript%25%27

    To compose more complex queries for your needs, I suggest reading the chapter in the Coherence documentation covering the CohQL language (Coherence Query Language): http://download.oracle.com/docs/cd/E24290_01/coh.371/e22837/api_cq.htm

    Some useful links:
    - Oracle Coherence REST API Documentation http://download.oracle.com/docs/cd/E24290_01/coh.371/e22839/rest_intro.htm
    - Oracle Service Bus Documentation http://download.oracle.com/docs/cd/E21764_01/soa.htm#osb
    - REST explanation from Wikipedia http://en.wikipedia.org/wiki/Representational_state_transfer

    The complete materials for this demo can be downloaded at http://blogs.oracle.com/slc/resource/cosb/coh-sb-demo.zip

    Author: Nino Guarnacci.
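    As the post notes, any technology that speaks HTTP can play the client role; the original demo used Java. Purely as an illustration, here is a minimal C# sketch against the node configured in step 3: it puts one alert into the REST-enabled "alerts" cache and queries it back with the same CohQL-style filter used in the example URLs above. The key "alert-001" and the JSON field names and types are assumptions based on the Alert class described in step 2, not code from the original demo:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class AlertRestClient
    {
        static async Task Main()
        {
            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:8080/") })
            {
                // PUT one alert into the "alerts" cache under an arbitrary string key.
                // Property names mirror the Alert class from step 2; types are assumed.
                var json = "{ \"title\": \"Javascript error\", \"description\": \"Unhandled exception\", " +
                           "\"system\": \"Web Browser\", \"time\": \"2010-05-31T12:00:00\", \"severity\": \"Fatal\" }";
                var put = await client.PutAsync("alerts/alert-001",
                    new StringContent(json, Encoding.UTF8, "application/json"));
                Console.WriteLine("PUT: " + put.StatusCode);

                // Query the cache with a CohQL filter, asking for JSON instead of XML
                // via the Accept header (the same trick used in the OSB routing above).
                var get = new HttpRequestMessage(HttpMethod.Get, "alerts?q=system+%3D+%27Web+Browser%27");
                get.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                var response = await client.SendAsync(get);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }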

    Read the article

  • New Validation Attributes in ASP.NET MVC 3 Future

    - by imran_ku07
         Introduction:

             Validating user input is a very important step in collecting information from users, because it helps you prevent errors while processing data. Incomplete or improperly formatted user input will create a lot of problems for your application. Fortunately, ASP.NET MVC 3 makes it very easy to validate the most common inputs. ASP.NET MVC 3 includes Required, StringLength, Range, RegularExpression, Compare and Remote validation attributes for common input validation scenarios. These validation attributes validate most of your user input, but validation for Email, File Extension, Credit Card, URL, etc. is still missing. Fortunately, some of these validation attributes are available in ASP.NET MVC 3 Future. In this article, I will show you how to leverage the Email, Url, CreditCard and FileExtensions validation attributes (which are available in ASP.NET MVC 3 Future) in an ASP.NET MVC 3 application.

         Description:

             First of all, you need to download the ASP.NET MVC 3 RTM source code from here. Then extract all files into a folder and open the MvcFutures project from the mvc3-rtm-sources\mvc3\src\MvcFutures folder. Build the project. In case you get compile-time error(s), simply remove the references to the System.Web.WebPages and System.Web.Mvc assemblies, add the references to the System.Web.WebPages and System.Web.Mvc (version 3) assemblies again from the .NET tab, and build the project again; it will create a Microsoft.Web.Mvc assembly inside the mvc3-rtm-sources\mvc3\src\MvcFutures\obj\Debug folder. Now we can use the Microsoft.Web.Mvc assembly inside our application.

             Create a new ASP.NET MVC 3 application. For demonstration purposes, I will create a dummy model UserInformation. Create a new class file UserInformation.cs inside the Model folder and add the following code:

    public class UserInformation
    {
        [Required]
        public string Name { get; set; }

        [Required]
        [EmailAddress]
        public string Email { get; set; }

        [Required]
        [Url]
        public string Website { get; set; }

        [Required]
        [CreditCard]
        public string CreditCard { get; set; }

        [Required]
        [FileExtensions(Extensions = "jpg,jpeg")]
        public string Image { get; set; }
    }

             Inside the UserInformation class, I am using the Email, Url, CreditCard and FileExtensions validation attributes, which are defined in the Microsoft.Web.Mvc assembly. By default, FileExtensionsAttribute allows the png, jpg, jpeg and gif extensions. You can override this by using the Extensions property of the FileExtensionsAttribute class.
             Then open (or create) the HomeController.cs file and add the following code:

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        [HttpPost]
        public ActionResult Index(UserInformation u)
        {
            return View();
        }
    }

             Next, open (or create) the Index view for the Home controller and add the following code:

    @model NewValidationAttributesinASPNETMVC3Future.Model.UserInformation
    @{
        ViewBag.Title = "Index";
        Layout = "~/Views/Shared/_Layout.cshtml";
    }
    <h2>Index</h2>
    <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
    @using (Html.BeginForm())
    {
        @Html.ValidationSummary(true)
        <fieldset>
            <legend>UserInformation</legend>
            <div class="editor-label">
                @Html.LabelFor(model => model.Name)
            </div>
            <div class="editor-field">
                @Html.EditorFor(model => model.Name)
                @Html.ValidationMessageFor(model => model.Name)
            </div>
            <div class="editor-label">
                @Html.LabelFor(model => model.Email)
            </div>
            <div class="editor-field">
                @Html.EditorFor(model => model.Email)
                @Html.ValidationMessageFor(model => model.Email)
            </div>
            <div class="editor-label">
                @Html.LabelFor(model => model.Website)
            </div>
            <div class="editor-field">
                @Html.EditorFor(model => model.Website)
                @Html.ValidationMessageFor(model => model.Website)
            </div>
            <div class="editor-label">
                @Html.LabelFor(model => model.CreditCard)
            </div>
            <div class="editor-field">
                @Html.EditorFor(model => model.CreditCard)
                @Html.ValidationMessageFor(model => model.CreditCard)
            </div>
            <div class="editor-label">
                @Html.LabelFor(model => model.Image)
            </div>
            <div class="editor-field">
                @Html.EditorFor(model => model.Image)
                @Html.ValidationMessageFor(model => model.Image)
            </div>
            <p>
                <input type="submit" value="Save" />
            </p>
        </fieldset>
    }
    <div>
        @Html.ActionLink("Back to List", "Index")
    </div>

             Now just run your application. You will find that both client-side and server-side validation for the above validation attributes work smoothly.

         Summary:

             Email, URL, Credit Card and File Extension input validations are very common. In this article, I showed you how to perform these validations in your application, explained with an example. I am also attaching a sample application which includes Microsoft.Web.Mvc.dll, so you can add a reference to the Microsoft.Web.Mvc assembly directly instead of doing any manual work. Hope you will enjoy this article too.
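             One note on the POST action above: it simply redisplays the view. In a real application you would typically branch on ModelState.IsValid before processing the input. A minimal sketch (the "Success" view name is illustrative, not from the original article):

    [HttpPost]
    public ActionResult Index(UserInformation u)
    {
        if (!ModelState.IsValid)
        {
            // One of the validation attributes failed on the server side;
            // redisplay the form so the validation messages are shown.
            return View(u);
        }

        // Input is valid; process it and show a confirmation page.
        return View("Success");
    }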

    Read the article

  • Anti-Forgery Request Helpers for ASP.NET MVC and jQuery AJAX

    - by Dixin
    Background

    To secure websites from cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism:
    - the server prints tokens to a cookie and inside the form;
    - when the form is submitted to the server, the token in the cookie and the token inside the form are both sent in the HTTP request;
    - the server validates the tokens.

    To print tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():

    <% using (Html.BeginForm()) { %>
        <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt) %>
        <%-- Other fields. --%>
        <input type="submit" value="Submit" />
    <% } %>

    This invocation generates a token, then writes it inside the form:

    <form action="..." method="post">
        <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" />
        <!-- Other fields. -->
        <input type="submit" value="Submit" />
    </form>

    and also writes it into the cookie:

    __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP

    When the above form is submitted, they are both sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:

    [HttpPost]
    [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
    public ActionResult Action(/* ... */)
    {
        // ...
    }

    This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems were encountered.

    Specify validation on controller (not on each action)

    The server-side problem is: it would be natural to declare [ValidateAntiForgeryToken] on the controller, but it actually has to be declared on each POST action. Because POST actions are usually far more numerous than controllers, this is a little crazy.

    Problem

    Usually a controller contains actions for HTTP GET and actions for HTTP POST requests, and usually validations are expected for HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:

    [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
    public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute.
    {
        [HttpGet]
        public ActionResult Index() // Index() cannot work.
        {
            // ...
        }

        [HttpPost]
        public ActionResult PostAction1(/* ... */)
        {
            // ...
        }

        [HttpPost]
        public ActionResult PostAction2(/* ... */)
        {
            // ...
        }

        // ...
    }

    If the browser sends an HTTP GET request by clicking a link, http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each POST action:

    public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes.
    {
        [HttpGet]
        public ActionResult Index() // Works.
        {
            // ...
        }

        [HttpPost]
        [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
        public ActionResult PostAction1(/* ... */)
        {
            // ...
        }

        [HttpPost]
        [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
        public ActionResult PostAction2(/* ... */)
        {
            // ...
        }

        // ...
    }

    This is a little bit crazy, because one application can have a lot of POST actions.
    Solution

    To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenWrapperAttribute wrapper class can be helpful, where HTTP verbs can be specified:

    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
    public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter
    {
        private readonly ValidateAntiForgeryTokenAttribute _validator;
        private readonly AcceptVerbsAttribute _verbs;

        public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null)
        {
        }

        public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt)
        {
            this._verbs = new AcceptVerbsAttribute(verbs);
            this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt };
        }

        public void OnAuthorization(AuthorizationContext filterContext)
        {
            string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride();
            if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase))
            {
                this._validator.OnAuthorization(filterContext);
            }
        }
    }

    When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:

    [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
    public class SomeController : Controller
    {
        // GET actions are not affected.
        // Only HTTP POST requests are validated.
    }

    Now one single attribute on the controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute, which would be easy to implement.

    Submit token via AJAX

    The browser-side problem is: if the server side turns on anti-forgery validation for POST, then AJAX POST requests will fail by default.

    Problem

    For AJAX scenarios, when the request is sent by jQuery instead of a form:

    $.post(url, {
        productName: "Tofu",
        categoryId: 1 // Token is not posted.
    }, callback);

    this kind of AJAX POST request will always be invalid, because the server-side code cannot see the token in the posted data.

    Solution

    The tokens are printed to the browser then sent back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called somewhere. Now the browser has the token in the HTML and in the cookie. Then jQuery must find the printed token in the HTML, and append the token to the data before sending:

    $.post(url, {
        productName: "Tofu",
        categoryId: 1,
        __RequestVerificationToken: getToken() // Token is posted.
    }, callback);

    To be reusable, this can be encapsulated into a tiny jQuery plugin:

    /// <reference path="jquery-1.4.2.js" />
    (function ($) {
        $.getAntiForgeryToken = function (tokenWindow, appPath) {
            // HtmlHelper.AntiForgeryToken() must be invoked to print the token.
            tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window;
            appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : "";
            // The name attribute is either __RequestVerificationToken,
            // or __RequestVerificationToken_{appPath}.
            var tokenName = "__RequestVerificationToken" + appPath;
            // Finds the <input type="hidden" name={tokenName} value="..." /> from the specified window.
            // var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']");
            var inputElements = tokenWindow.document.getElementsByTagName("input");
            for (var i = 0; i < inputElements.length; i++) {
                var inputElement = inputElements[i];
                if (inputElement.type === "hidden" && inputElement.name === tokenName) {
                    return {
                        name: tokenName,
                        value: inputElement.value
                    };
                }
            }
            return null;
        };

        $.appendAntiForgeryToken = function (data, token) {
            // Converts data if not already a string.
            if (data && typeof data !== "string") {
                data = $.param(data);
            }
            // Gets token from current window by default.
            token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window).
            data = data ? data + "&" : "";
            // If token exists, appends {token.name}={token.value} to data.
            return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data;
        };

        // Wraps $.post(url, data, callback, type).
        $.postAntiForgery = function (url, data, callback, type) {
            return $.post(url, $.appendAntiForgeryToken(data), callback, type);
        };

        // Wraps $.ajax(settings).
        $.ajaxAntiForgery = function (settings) {
            settings.data = $.appendAntiForgeryToken(settings.data);
            return $.ajax(settings);
        };
    })(jQuery);

    In most scenarios, it is OK to just replace a $.post() invocation with $.postAntiForgery(), and $.ajax() with $.ajaxAntiForgery():

    $.postAntiForgery(url, {
        productName: "Tofu",
        categoryId: 1
    }, callback); // Token is posted.

    There might be some scenarios with a custom token. Here $.appendAntiForgeryToken() is provided:

    data = $.appendAntiForgeryToken(data, token);
    // Token is already in data. No need to invoke $.postAntiForgery().
    $.post(url, data, callback);

    And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent by an iframe, while the token is in the parent window. Here a window can be specified for $.getAntiForgeryToken():

    data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent));
    // Token is already in data. No need to invoke $.postAntiForgery().
    $.post(url, data, callback);

    If you have a better solution, please do tell me.
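    For completeness, here is a minimal sketch of the server side of the AJAX scenario above, combining the wrapper attribute from the first half of this post with the jQuery helpers. The controller name, action signature, and JSON result are illustrative only, not part of the original post:

    [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
    public class ProductController : Controller
    {
        // Receives the fields posted by $.postAntiForgery(); the appended
        // __RequestVerificationToken has already been validated by the
        // wrapper attribute before this action runs.
        [HttpPost]
        public ActionResult Create(string productName, int categoryId)
        {
            // ... persist the product ...
            return Json(new { success = true });
        }
    }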

    Read the article

  • B2B and B2C alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester Research, Inc.’s first Commerce Wave focused on B2B, “The Forrester Wave™: B2B Commerce Suites, Q4 2013,” released earlier this month. We believe that the report validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, we feel that the report confirms something very important: B2B and B2C Commerce are alike… but a little different.

    B2B and B2C Commerce are alike…

    Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: front-office content, community, and commerce features that meet customer expectations for 24x7x365 ordering, real-time customer service, and expedited shipping — both online and on mobile devices: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.”

    Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double and triple digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). It seems that this market momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion).

    But a little different…

    Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as a part of their job. So in addition to a rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management.

    Forrester also is emphasizing three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information; and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery.
    We would like to expand on each of these three areas. As Forrester suggests, back-end PIM is definitely needed by B2B Commerce providers. Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in place, and is not fully featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering, and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For Commerce, what customers really need is a robust product catalog and content management system that enables business users to further enrich catalog and content data and ready it to be presented and sold online. This has been a significant area of investment in the Oracle Commerce platform, which continues to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end.

    On the topic of web content management, we were pleased to see Forrester cite Oracle’s differentiated digital experience capability in this area and the “unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formerly FatWire).” Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online.

    Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles. We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one.

    So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform “has always had strong B2B commerce capabilities and Oracle certainly has an exhaustive list of B2B customers using the solution.” What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we’ll address in an exciting new release planned for the next 12 months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management, and ERP.
    To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources:
    - 2013 B2B Commerce Trends Report
    - B2B Commerce Whitepaper: Consumerization, Complexity, Change
    - B2B Commerce Webcast: What Industry Trend Setters Do Right
    - Internet Retailer, Web Drives Sales for B2B Companies
    - Internet Retailer Article, The Web Means Business: B2B Companies Beef Up Their Websites, borrowing from b2c retailers and breaking new ground
    - Internet Retailer Article, B2B e-Commerce is poised for growth

    Read the article

  • Feedback Filtration – Processing Negative Comments for Positive Gains

    - by D'Arcy Lussier
    After doing 7 conferences, 5 code camps, and countless user group events, I feel that this is a post I need to write. I actually toyed with other names for this post, however those names would just lend themselves to the type of behaviour I want people to avoid – the reactionary, emotional response that speaks to some deeper issue beyond immediate facts and context.

    Humans are incredibly complex creatures. We’re also emotional, which serves us well in certain situations but can hinder us in others. Those of us in leadership build up a thick skin because we tend to encounter those reactionary, emotional responses more often, and we’re held to a higher standard because of our positions. While we could react with emotion ourselves, as the saying goes – fighting fire with fire just makes a bigger fire. So in this post I’ll share my thought process for dealing with negative feedback/comments and how you can still get value from them.

    The Thought Process

    Let’s take a real-world example. This week I held the Prairie IT Pro & Dev Con event. We’ve gotten a lot of session feedback already, most of it overwhelmingly positive. But some not so much – and some to an extreme I rarely see, but which isn’t entirely surprising to me. So here’s the example from a person we’ll refer to as Mr. Horrible:

    How was the speaker?
    Horrible! Worst speaker ever!
    Did the session meet your expectations?
    Hard to tell, speaker ruined it.
    Other Comments:
    DO NOT bring this speaker back! He was at this conference last year and I hoped enough negative feedback would have taught you to not bring him back...obviously not...I will not return to this conference next year if this speaker is brought back.

    Now those are very strong words. “Worst speaker ever!” “Speaker ruined it” “I will not return to this conference next year if the speaker is brought back”. The speakers I invite to speak at my conference are not just presenters but friends and colleagues. When I see this, my initial reaction is of course very emotional: I get defensive, I get angry, I get offended. So that’s where the process kicks in.

    Step 1 – Take a Deep Breath

    Take a deep breath, calm down, and walk away from the keyboard. I didn’t do that recently during an email convo between some colleagues and it ended up in my reacting emotionally on Twitter – did I mention those colleagues follow my Twitter feed? Yes, I ate some crow. Ok, now that we’re calm, let’s move on to step 2.

    Step 2 – Strip off the Emotion

    We need to take off the emotion that people wrap their words in and identify the root issues. For instance, if I see: “I hated this session, the presenter was horrible! He spoke so fast I couldn’t make out what he was saying!” then I drop off the personal emoting (“I hated…”) and the personal attack (“the presenter was horrible”) and focus on the real issue this person had – that the speaker was talking too fast. Now we have a root cause of the displeasure. However, we’re also dealing with humans who are all very different. Before I call up the speaker to talk about his speaking pace, I need to do some other things first. Back to our Mr. Horrible example, I don’t really have much to go on. There are no details of how the speaker “ruined” the session or why he’s the “worst speaker ever”. In this case, the next step is crucial.

    Step 3 – Validate the Feedback

    When I tell people that we really like getting feedback for the sessions, I really really mean it. Not just because we want to hear what individuals have to say, but also because we want to know what the group thought.
    When a piece of negative feedback comes in, I validate it against the group. So with the speaker Mr. Horrible commented on, I go to the feedback and look at other people’s responses:

    2 x Excellent
    1 x Alright
    1 x Not Great
    1 x Horrible (our feedback guy)

    That’s interesting, it’s a bit all over the board. If we look at the comments more we find that the people who rated the speaker excellent liked the presentation style and found the content valuable. The one guy who said “Not Great” even commented that there wasn’t anything really wrong with the presentation, he just wasn’t excited about it. In that light, I can try to make a few assumptions:
    - Mr. Horrible didn’t like the speaker’s presentation style
    - Mr. Horrible was expecting something else that wasn’t communicated properly in the session description
    - Mr. Horrible, for whatever reason, just didn’t like this presenter

    Now if the feedback was overwhelmingly negative, there’s a different pattern – one that validates the negative feedback. Regardless, I never take something at face value. Even if I see really good feedback, I never get too happy until I see that there’s a group trend towards the positive.

    Step 4 – Action Plan

    Once I’ve validated the feedback, I need to come up with an action plan around it. Let’s go back to the other example I gave – the one with the speaker going too fast. I went and looked at the feedback and sure enough, other people commented that the speaker had spoken too quickly. Now I can go back to the speaker and let him know so he can get better. But what if nobody else complained about it? I’d still mention it to the speaker, but obviously one person’s opinion needs to be weighed as such. When we did PrDC Winnipeg in 2011, I surveyed the attendees about the food. Everyone raved about it…except one person. Am I going to change the menu next time for that one person while everyone else loved it? Of course not. There’s a saying – a sure way to fail is to try to please everyone.

    Let’s look at the Mr. Horrible example. What can I communicate to the speaker with such limited information provided in the feedback from Mr. Horrible? Well, looking at the group’s feedback, I can make a few suggestions:
    - Ensure that people understand from the session description the style of the talk
    - Ensure that people understand the level of detail/complexity of the talk and what prerequisite knowledge they should have

    I’m looking at it as possibly Mr. Horrible expected a much more advanced talk and was disappointed, while the people who gave positive feedback – who, from their comments, suggested this was all new to them – were thrilled with the session level.

    Step 5 – Follow Up

    For some feedback, I follow up personally. Especially with negative or constructive feedback, it’s important to let the person know you heard them and are making changes because of their comments. Even if their comments were emotionally charged and overtly negative, it’s still important to reach out personally and professionally. When you remove the emotion, negative comments can be the best feedback you get. Also, people have bad days. We’ve all had one of “those days” where we talked more sternly than normal to someone, or got angry at something we’d normally shrug off. We have various stresses in our lives and sometimes they seep out in odd ways. I always try to give some benefit of the doubt, and re-evaluate my view of the person after they’ve responded to my communication.

    But, there is such a thing as garbage feedback. What Mr. Horrible wrote is garbage.
    It’s mean-spirited. It’s hateful. It provides nothing constructive at all. And a tell-tale sign that feedback is garbage – the person didn’t leave their name even though there was a field for it.

    Step 6 – Delete It

    Feedback must be processed in its raw form, and the end products should drive improvements. But once you’ve figured out what those things are, you shouldn’t leave raw feedback lying around. Feedback items are snapshots in time that, taken alone, can be damaging. Also, you should never rest on past praise.

    In a future blog post, I’m going to talk about how we can provide great feedback that, even when it’s critical, can still be constructive.

    Read the article

  • CodePlex Daily Summary for Monday, May 31, 2010

    CodePlex Daily Summary for Monday, May 31, 2010

    New Projects
    - Andrew's XNA Helpers: A collection of simple, yet useful methods and ways of accessing crucial variables such as the ContentManager or SpriteBatch from anywhere in your ...
    - BASIC-DOS: BASIC-DOS OS Makes It Easier For People To Use DOS As It Comes With A Graphical User Interface That Loads Up During Boot Up. No Need To Type Any C...
    - Chirpy - Visual Studio Add In For Handling Js, Css, and DotLess Files: Mashes, minifies, and validates your javascript, stylesheet, and dotless files.
    - fprparser: Fortify XML Report parser in the form of an Excel Add-in
    - HL7ToXmlConverter: Class library to transform HL7 Version 2.x to HL7Xml Version 2, depending on the HL7 grammar used.
    - imdb movie downloader: Basically downloads info from IMDb, and it's really FAST! Needs some development around thread pooling and webclient issues.. just try ;) for it run ...
    - IMIfmoOptimisation: An optimisation assignment
    - MarketView: MarketView
    - MediaStreamSources: This is the first MediaStreamSource project on CodePlex that can render video.
    - Migrate User Profile Values: A tool to move values between SharePoint 2007 User Profiles. The console application MigrateUserProfileValues.exe will export User Profile values ...
    - Nexus6Studio Development Space: This is a working repository for development efforts of the Nexus6studio team.
    - Project BlueLabel: BlueLabel
    - SergioTools: SergioTools is a collection of tools and sample code to help C# developers improve their productivity and skills.
    - SharePoint Property Bag Settings 2010: The Property Bag Settings can store any metadata as Key-Value pairs such as connection strings, server names, file paths, and other miscellaneous s...
    - Silverlight Isolated Storage Cache: IsoCache is a small framework that makes it easy to store dlls and xap files in isolated storage so they can be used to speed up the startup of t...
    - StackPivot: StackPivot is an app which can generate Microsoft Pivot "Collections" on-the-fly based on the data collected from the Stack Exchange APIs. Its d...
    - Suspension Calculator: The Suspension Calculator aims to help people who are building race cars perform suspension related calculations. The calculations vary from motion...
    - TFS Timesheets: A custom work item control for Team Foundation Server that allows timesheet data to be captured against a work item.
    - Umbraco Membership infrastructure: Improvements to the Umbraco membership API implemented as a package. Includes user properties and improved membership API support.
    - VolgaTransTelecomClient: VolgaTransTelecomClient makes it easier for clients of the "Volga TransTelecom" company to get info about their account. It's developed in C#.
    - Wouter's SharePoint Demo Land: This site contains many of the SharePoint demos that I create while training or hobbying, both for the 2007 and the 2010 release. Please click the...

    New Releases
    - AgUnit - Silverlight unit testing with ReSharper: AgUnit 0.1: Initial release of AgUnit. Copy the extracted files from AgUnit-0.1-ReSharper5.0.zip into the "Bin\Plugins\" folder of your ReSharper installatio...
    - Andrew's XNA Helpers: Andrew's XNA Helpers v1: My first version of my library of XNA Helpers. Includes: Variables class - A static class that can be accessed anywhere throughout your project -...
    - Clean your Database: Database Cleaner Setup: Database Cleaner Setup
    - Clean your Database: Database Cleaner Source: Database Cleaner Source code
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V16: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V17: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release is prim...
    - Folder Bookmarks: Folder Bookmarks 1.5.8: The latest version of Folder Bookmarks (1.5.8), with new GUI improvements and a 'Help' feature - all the instructions needed to use the software (If ...
    - HL7ToXmlConverter: HL7ToXmlConverter Version 0.0.0.9: First distribution on CodePlex
    - imdb movie downloader: 0.9 First Tryout of MyImdb SourceCode: First Tryout of MyImdb SourceCode
    - Lightweight Fluent Workflow: Objectflow 1.0.0.2: The features of this release take advantage of .Net 3.5 features; Lambda support is back for Constraints, and Functions can now be used in workflows...
    - MDownloader: MDownloader-0.15.16.59384: Fixed password detector in context of .rar files. Fixed FileFactory provider implementation. Fixed detection of internet connection failures.
    - MediaStreamSources: MediaStreamSources 1.0 Beta1: 1.0 Beta1
    - Mongodb Management Studio: Mongodb Management Studio v1.1: MongodbManagementStudio v1.1. 1. Server management: add and delete servers. 2. Tree view and status information for servers, databases, tables, columns, and indexes. 3. Query analyzer: supports select, insert, delete, and update, with a custom paging function $rowid(1,5) for queries...
    - Multiplayer Quiz: Release 1_7_1_0: Latest release. .NET 4.0 required to run the server, and recommended for the client. Download: http://www.microsoft.com/downloads/details.aspx?FamilyID=9cf...
    - SCSM PowerShell Cmdlets: SCSM PowerShell Cmdlets Version 0.1.1: First release! This is a minimal build with limited functionality. Should be treated as a preview of what's to come. Included Cmdlets are: New-...
    - SharePoint Property Bag Settings 2010: PropertyBagSettings2010.wsp: The SharePoint Property Bag Settings is a reusable component that you can include in your own SharePoint applications. It can store simple types, s...
    - Sharp Tests Ex: Sharp Tests Ex 1.0.0RTM: Project Description #TestsEx (Sharp Tests Extensions) is a set of extensible extensions. The main target is writing short assertions where the Visual...
    - Silverlight Testing Automation Tool: StatLight V1.1: Features: Applied some UnitDriven-specific changes from justncase80/StatLight. Updated to the 0.0.5 release over on url:UnitDriven.codeplex.com. Upda...
    - SQL Server 2005 and 2008 - Backup, Integrity Check and Index Optimization: 30 May 2010: This is the latest version of my solution for Backup, Integrity Check and Index Optimization in SQL Server 2005, SQL Server 2008 and SQL Server 200...
    - Suspension Calculator: SuspensionCalculator_V1.0.0.29: The Suspension Calculator aims to help people who are building race cars perform suspension related calculations. The calculations vary from motion...
    - Svn2Svn: copy, sync, replay or reflect changes across SVN repositories: 1.2 (Beta): Build 1.2.8932.0. Added /incremental (/i) mode; svn2svn detects all previously synced revisions and starts at the latest revision that has not bee...
    - VCC: Latest build, v2.1.30530.0: Automatic drop of latest build
    - VolgaTransTelecomClient: V.1.0.2.0: v.1.0.2.0 release
    - WatchersNET.SkinObjects.ModulActionsMenu: ModulActionsMenu 01.00.01: Changes: CSS fixed for Microsoft Internet Explorer
    - Wouter's SharePoint Demo Land: Navigation Service Basic: This sample shows how to create a simple Service Application for SharePoint 2010. You can read up on the how and why on my blog series about Serv...

    Most Popular Projects
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - PHPExcel
    - ASP.NET

    Most Active Projects
    - Community Forums NNTP bridge
    - AStar.net
    - patterns & practices – Enterprise Library
    - BlogEngine.NET
    - GMap.NET - Great Maps for Windows Forms & Presentation
    - Mirror Testing System
    - Ionics Isapi Rewrite Filter
    - Customer Portal Accelerator for Microsoft Dynamics CRM
    - N2 CMS
    - patterns & practices: Windows Azure Security Guidance

    Read the article

  • What Makes a Good Design Critic? CHI 2010 Panel Review

    - by jatin.thaker
    Author: Daniel Schwartz, Senior Interaction Designer, Oracle Applications User Experience

    Oracle Applications UX Chief Evangelist Patanjali Venkatacharya organized and moderated an innovative and stimulating panel discussion titled "What Makes a Good Design Critic? Food Design vs. Product Design Criticism" at CHI 2010, the annual ACM Conference on Human Factors in Computing Systems. The panelists included Janice Rohn, VP of User Experience at Experian; Tami Hardeman, a food stylist; Ed Seiber, a restaurant architect and designer; John Kessler, a food critic and writer at the Atlanta Journal-Constitution; and Larry Powers, Chef de Cuisine at Shaun's restaurant in Atlanta, Georgia. Building off the momentum of his highly acclaimed panel at CHI 2009 on what interaction design can learn from food design (for which I was on the other side as a panelist), Venkatacharya brought together new people with different roles in the restaurant and software interaction design fields. The session was also quite delicious -- but more on that later. Criticism, as it applies to food and product or interaction design, was the tasty topic for this forum, and it showed that strong parallels exist between food and interaction design criticism.

    Figure 1. The panelists in discussion: (left to right) Janice Rohn, Ed Seiber, Tami Hardeman, and John Kessler.

    The panelists had great insights to share from their respective fields, and they discussed enthusiastically, as if they were at a casual collegial dinner. John Kessler stated that he generally prefers one professional critic's opinion to a large sampling of customers; however, "Web sites like Yelp get users excited by the collective approach. People are attracted to things desired by so many." Janice Rohn added that this collective desire was especially true for users of consumer products. Ed Seiber remarked that while people look to the popular view for their target tastes and product choices, "professional critics like John [Kessler] still hold a big weight on public opinion." Chef Powers indicated that chefs take in feedback from all sources, adding, "word of mouth is very powerful. We also look heavily at the sales of the dishes to see what's moving; what's selling and thus successful." Hearing this discussion validates our design work at Oracle in that we listen to our users (our diners) and industry feedback (our critics) to ensure an optimal user experience of our products.

    Rohn considers that restaurateur Danny Meyer's book, Setting the Table: The Transforming Power of Hospitality in Business, which is about creating successful restaurant experiences, has many applicable parallels to user experience design. Meyer actually argues that the customer is not always right, but that "they must always feel heard." Seiber agreed, but noted "customers are not designers," and while designers need to listen to customer feedback, it is the designer's job to synthesize it. Seiber feels it's the critic's job to point out when something is missing or not well-prioritized. In interaction design, our challenges are quite similar, if not parallel. Software tasks are like puzzles that are in search of a solution on how to be best completed. As a food stylist, Tami Hardeman has the demanding and challenging task of presenting food as delectably as can be. Presenting food in its best light requires a lot of creativity and insight into consumer tastes. It's no wonder, then, that this former fashion stylist came up with the ultimate catch phrase to capture the emotion that clients want to draw from their users: "craveability." The phrase was a hit with the audience and panelists alike. Sometime later in the discussion, Seiber remarked, "designers strive to apply craveability to products, and I do so for restaurants in my case."

    Craveability is also very applicable to interaction design. Creating straightforward and smooth workflows for users of Oracle Applications is a primary goal for my colleagues. We want our users to really enjoy working with our products, making them more efficient and better at their jobs. That's our "craveability." Patanjali Venkatacharya asked the panel, "if a design's 'craveability' appeals to some cultures but not to others, then what is the impact to the food or product design process?" Rohn stated that "taste is part nature and part nurture" and that the design must take the full context of a product's usage into consideration. Kessler added, "good design is about understanding the context" that the experience necessitates. Seiber remarked how important seat comfort is for diners and how the quality of seating adds so much to the complete dining experience. If these non-food factors are not well executed, they can take away from an otherwise pleasant dining experience. Kessler recounted a time when he was dining at a restaurant that actually had very good food, but the photographs hanging on all the walls did not fit in with the overall décor and created a negative overall dining experience. While the tastiness of the food is critical to a restaurant's success, it is a captivating complete user experience, as in interaction design, that will keep customers coming back and ultimately make the restaurant a hit.

    Figure 2. Patanjali Venkatacharya enjoyed the Sardinian flatbread salad.

    As a surprise, Chef Powers brought out a signature dish from Shaun's restaurant for all the panelists to sample and critique. The Sardinian flatbread dish showcased Atlanta's taste for fresh and local produce and cheese at its finest, as a salad served on a crispy, flavorful flatbread. Hardeman said it could be photographed from any angle, a high compliment coming from a food stylist. Seiber really enjoyed the colors that the dish brought together and thought it would be served very well in a casual restaurant on a summer's day. The panel really appreciated the taste and quality of the different components and how the rosemary brought all the flavors together. Seiber remarked that "a lot of effort goes into the appearance of simplicity." Rohn indicated that the same notion holds true with software user interface design. A tremendous amount of work goes into crafting straightforward interfaces, including user research, prototyping, design iterations, and usability studies.

    Design criticism for food and software interfaces clearly shares many similarities. Both areas value expert opinions and user feedback. Both areas understand the importance of great design needing to work well in its context. Last but not least, both food and interaction design criticism value "craveability" and how having users excited about experiencing and enjoying the designs is an important goal. Now if we can just improve the taste of software user interfaces, people may choose to dine on their enterprise applications over a fresh organic salad.

    Read the article

  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    Like many of the existing HTTP APIs for cloud services, AWS provides a set of platform SDKs that hide many of the complexities present in the raw APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for doing I/O operations such as making http calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous, and they use the TPL model for .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept). In the case of S3, the http Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while in transit. For doing that, it uses a signature or hash of the message content and some of the headers, computed with a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs. There are three challenges that any developer working in Metro for the first time will have to face to consume S3: the new WinRT APIs, their asynchronous nature, and the complexity introduced by generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I found while trying to consume this Amazon service.

    1. Generating the signature for the Authorization header

    All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) for representing a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the static methods in the WinRT class CryptographicBuffer, available as part of the namespace previously mentioned.

        private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
        {
            var stringToSign = string.Format("{0}\n" +
                "\n" +
                "\n" +
                "\n" +
                "x-amz-date:{1}\n" +
                "/{2}/",
                httpMethod,
                timestamp,
                resource);

            var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
            var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
            var hmacKey = algorithm.CreateKey(keyMaterial);
            var signature = CryptographicEngine.Sign(
                hmacKey,
                CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign)));

            return CryptographicBuffer.EncodeToBase64String(signature);
        }

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, this method is generating the signature required for creating a new bucket.
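    To make the string-to-sign concrete, here is a small usage sketch of the DeriveAuthToken method above; the bucket name and timestamp are made-up values for illustration only:

        // Illustration only: hypothetical inputs for the DeriveAuthToken method above.
        // With these values, the canonical string that gets signed would be:
        //
        //   PUT\n
        //   \n                                         (Content-MD5, empty)
        //   \n                                         (Content-Type, empty)
        //   \n                                         (Date, empty; x-amz-date is used instead)
        //   x-amz-date:Tue, 10 Apr 2012 18:00:00 GMT\n
        //   /mybucket/
        var auth = DeriveAuthToken("mybucket", "PUT", "Tue, 10 Apr 2012 18:00:00 GMT");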
    An HMAC-SHA1 hash is computed over this string using a secret or symmetric key provided by AWS in the management console.

    2. Sending an Http Request to the S3 service

    WinRT also ships with the System.Net.Http.HttpClient that was first introduced some months ago with ASP.NET Web API. This client provides a richer interface on top of the traditional WebHttpRequest class, and also solves some of the limitations found in that class. There are a few things that don't work with a raw WebHttpRequest, such as setting the Host header, which is absolutely required for consuming S3. Also, HttpClient is friendlier for unit tests, as it receives an HttpMessageHandler as part of the constructor that can be faked to emulate a real http call. This is how the code for consuming the service with HttpClient looks:

        public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
        {
            var timestamp = string.Format("{0:r}", DateTime.UtcNow);
            var auth = DeriveAuthToken(name, "PUT", timestamp);

            var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
            request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
            request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
            request.Headers.Add("x-amz-date", timestamp);

            var client = new HttpClient();
            var response = await client.SendAsync(request);

            return new S3Response
            {
                Succeed = response.StatusCode == HttpStatusCode.OK,
                Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
            };
        }

    You will notice a few additional things in this code. By default, HttpClient validates the values for some well-known headers, and Authorization is one of them. It won't allow you to set a value with ":" in it, which is something that S3 expects. However, that's not a problem at all, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords to write the asynchronous calls in a sequential style. In case you want to unit test this code and fake the call to the real S3 service, you have to modify it to inject a custom HttpMessageHandler into the HttpClient.
    The following implementation illustrates this concept:

        public class FakeHttpMessageHandler : HttpMessageHandler
        {
            HttpResponseMessage response;

            public FakeHttpMessageHandler(HttpResponseMessage response)
            {
                this.response = response;
            }

            protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                System.Threading.CancellationToken cancellationToken)
            {
                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
        }

    You can use this handler for injecting any response while you are unit testing the code.
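    As a rough sketch of how that injection might look in a test (the canned XML body here is an assumption for illustration, not part of the original sample):

        // Hypothetical test wiring, inside an async test method. HttpClient accepts
        // an HttpMessageHandler in its constructor, so handing it the fake handler
        // means SendAsync never touches the network.
        var canned = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("<CreateBucketResponse/>")
        };
        var client = new HttpClient(new FakeHttpMessageHandler(canned));
        var response = await client.SendAsync(
            new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/"));
        // response.StatusCode is now HttpStatusCode.OK without any real S3 call.

    Because the CreateBucket method above news up its own HttpClient, the injection only pays off once that client is passed in (for example via a constructor parameter), which is the modification the post alludes to.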

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:
    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database CloneDB feature)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:
    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:
    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Setup chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all combinations are supported in EM 12cR4):

        Primary | Standby [1 or more]
        --------+--------------------
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template.

    A sample service catalog would look like the image below. Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time, two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:
    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:
    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduces the dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:
    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:
    - Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:
    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like no. of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention. These have mostly been asked for by customers like you. These are:
    - Custom naming of DB Services: Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
    - 'Create like' of Service Templates: Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: View the details of a profile, like datafiles, control files, snapshot ids, export/import files, etc., prior to its selection in the service template.
    - Cleanup automation, for failed and successful requests: A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per request basis or for the entire pool. As an extension, you can also delete successful requests.
    - Improved delete user workflow: Allows administrators to reassign cloud resources to another user or delete all of them.
    - Support for multiple tablespaces for schema as a service: In addition to multiple schemas, users can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:
    - Cloud Management Page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Lenovo V570 CPU fan running constantly, CPU core 1 running over 90%!

    - by Rabbit2190
    I have seen that a lot of people are having this same issue. I am running a Lenovo V570 with an i5 4-core CPU and 6 GB of RAM, on 11.10 Oneiric Ocelot. The system monitor graph shows CPU at 20%; when I open the monitor it shows core #1 at around 90%, while the other cores fluctuate at or below 5-12%, if that. This seems like a really terrible balance of power between the cores, especially with so much stress on one core only, when these things are designed to work with 4 cores and not at such high temps. My current reading is 64 degrees Celsius; this does not seem normal for any CPU, and I am seriously considering working on my Windows 7 partition until I see a real solution to this issue, or upgrading to 12.04 right away when it comes out...

    I have seen countless things saying it has something to do with the kernel. The kernel on mine is the same as when I upgraded, and I really do not like messing with it; when I had 11.04 I did tinker with it due to the freeze issues I was having, and that just made for worse issues. I like this version, 11.10, and would like to keep it for a while, but without the fear that my core is going to fry! So any help would be much appreciated! I did try changing a couple of things in ACPI and restarting; this did not help, and here I am. Prior to that, I tried one thing that was listed under a different computer brand, but it would not do a make on the file. I really need help with this. I rely on this computer for a lot of things, and love this OS! Please help so I do not need to resort to my Microsoft partition! PLEASE!

    Here is the fwts cpufreq output:

    rabbit@rabbit-Lenovo-V570:~$ sudo fwts cpufreq -
    00001 fwts Results generated by fwts: Version V0.23.25 (Thu Oct 6 15
    00002 fwts :12:31 BST 2011).
    00003 fwts
    00004 fwts Some of this work - Copyright (c) 1999 - 2010, Intel Corp.
    00005 fwts All rights reserved.
    00006 fwts Some of this work - Copyright (c) 2010 - 2011, Canonical.
    00007 fwts
    00008 fwts This test run on 02/04/12 at 17:23:22 on host Linux
    00009 fwts rabbit-Lenovo-V570 3.0.0-17-generic-pae #30-Ubuntu SMP Thu
    00010 fwts Mar 8 17:53:35 UTC 2012 i686.
    00011 fwts
    00012 fwts Running tests: cpufreq.
    00014 cpufreq CPU frequency scaling tests (takes ~1-2 mins).
    00015 cpufreq ---------------------------------------------------------
    00016 cpufreq Test 1 of 1: CPU P-State Checks.
    00017 cpufreq For each processor in the system, this test steps through
    00018 cpufreq the various frequency states (P-states) that the BIOS
    00019 cpufreq advertises for the processor. For each processor/frequency
    00020 cpufreq combination, a quick performance value is measured. The
    00021 cpufreq test then validates that:
    00022 cpufreq 1) Each processor has the same number of frequency states
    00023 cpufreq 2) Higher advertised frequencies have a higher performance
    00024 cpufreq 3) No duplicate frequency values are reported by the BIOS
    00025 cpufreq 4) Is BIOS wrongly doing Sw_All P-state coordination across cores
    00026 cpufreq 5) Is BIOS wrongly doing Sw_Any P-state coordination across cores
    00027 cpufreq Frequency | Speed
    00028 cpufreq -----------+---------
    00029 cpufreq 2.45 Ghz | 100.0 %
    00030 cpufreq 2.45 Ghz | 83.7 %
    00031 cpufreq 2.05 Ghz | 69.2 %
    00032 cpufreq 1.85 Ghz | 62.5 %
    00033 cpufreq 1.65 Ghz | 55.2 %
    00034 cpufreq 1400 Mhz | 48.6 %
    00035 cpufreq 1200 Mhz | 41.8 %
    00036 cpufreq 1000 Mhz | 34.5 %
    00037 cpufreq 800 Mhz | 27.6 %
    00038 cpufreq 9 CPU frequency steps supported
    00039 cpufreq Frequency | Speed
    00040 cpufreq -----------+---------
    00041 cpufreq 2.45 Ghz | 97.7 %
    00042 cpufreq 2.45 Ghz | 83.7 %
    00043 cpufreq 2.05 Ghz | 69.6 %
    00044 cpufreq 1.85 Ghz | 63.3 %
    00045 cpufreq 1.65 Ghz | 55.7 %
    00046 cpufreq 1400 Mhz | 48.7 %
    00047 cpufreq 1200 Mhz | 41.7 %
    00048 cpufreq 1000 Mhz | 34.5 %
    00049 cpufreq 800 Mhz | 27.5 %
    00050 cpufreq Frequency | Speed
    00051 cpufreq -----------+---------
    00052 cpufreq 2.45 Ghz | 97.7 %
    00053 cpufreq 2.45 Ghz | 84.4 %
    00054 cpufreq 2.05 Ghz | 69.6 %
    00055 cpufreq 1.85 Ghz | 62.6 %
    00056 cpufreq 1.65 Ghz | 55.9 %
    00057 cpufreq 1400 Mhz | 48.7 %
    00058 cpufreq 1200 Mhz | 41.7 %
    00059 cpufreq 1000 Mhz | 34.7 %
    00060 cpufreq 800 Mhz | 27.8 %
    00061 cpufreq Frequency | Speed
    00062 cpufreq -----------+---------
    00063 cpufreq 2.45 Ghz | 100.0 %
    00064 cpufreq 2.45 Ghz | 82.6 %
    00065 cpufreq 2.05 Ghz | 67.8 %
    00066 cpufreq 1.85 Ghz | 61.4 %
    00067 cpufreq 1.65 Ghz | 54.9 %
    00068 cpufreq 1400 Mhz | 48.3 %
    00069 cpufreq 1200 Mhz | 41.1 %
    00070 cpufreq 1000 Mhz | 34.3 %
    00071 cpufreq 800 Mhz | 27.4 %
    00072 cpufreq Frequency | Speed
    00073 cpufreq -----------+---------
    00074 cpufreq 2.45 Ghz | 96.2 %
    00075 cpufreq 2.45 Ghz | 82.5 %
    00076 cpufreq 2.05 Ghz | 69.3 %
    00077 cpufreq 1.85 Ghz | 62.7 %
    00078 cpufreq 1.65 Ghz | 55.0 %
    00079 cpufreq 1400 Mhz | 47.4 %
    00080 cpufreq 1200 Mhz | 41.1 %
    00081 cpufreq 1000 Mhz | 34.0 %
    00082 cpufreq 800 Mhz | 27.2 %
    00083 cpufreq Frequency | Speed
    00084 cpufreq -----------+---------
    00085 cpufreq 2.45 Ghz | 96.5 %
    00086 cpufreq 2.45 Ghz | 83.6 %
    00087 cpufreq 2.05 Ghz | 68.1 %
    00088 cpufreq 1.85 Ghz | 61.7 %
    00089 cpufreq 1.65 Ghz | 54.9 %
    00090 cpufreq 1400 Mhz | 48.0 %
    00091 cpufreq 1200 Mhz | 41.1 %
    00092 cpufreq 1000 Mhz | 34.2 %
    00093 cpufreq 800 Mhz | 27.8 %
    00094 cpufreq Frequency | Speed
    00095 cpufreq -----------+---------
    00096 cpufreq 2.45 Ghz | 96.4 %
    00097 cpufreq 2.45 Ghz | 82.6 %
    00098 cpufreq 2.05 Ghz | 68.8 %
    00099 cpufreq 1.85 Ghz | 60.5 %
    00100 cpufreq 1.65 Ghz | 52.4 %
    00101 cpufreq 1400 Mhz | 48.8 %
    00102 cpufreq 1200 Mhz | 41.1 %
    00103 cpufreq 1000 Mhz | 34.2 %
    00104 cpufreq 800 Mhz | 26.4 %
    00105 cpufreq Frequency | Speed
    00106 cpufreq -----------+---------
    00107 cpufreq 2.45 Ghz | 95.3 %
    00108 cpufreq 2.45 Ghz | 82.5 %
    00109 cpufreq 2.05 Ghz | 65.5 %
    00110 cpufreq 1.85 Ghz | 62.8 %
    00111 cpufreq 1.65 Ghz | 54.8 %
    00112 cpufreq 1400 Mhz | 48.0 %
    00113 cpufreq 1200 Mhz | 41.2 %
    00114 cpufreq 1000 Mhz | 34.2 %
    00115 cpufreq 800 Mhz | 27.3 %
    00116 cpufreq Frequency | Speed
    00117 cpufreq -----------+---------
    00118 cpufreq 2.45 Ghz | 96.3 %
    00119 cpufreq 2.45 Ghz | 83.4 %
    00120 cpufreq 2.05 Ghz | 68.3 %
    00121 cpufreq 1.85 Ghz | 61.9 %
    00122 cpufreq 1.65 Ghz | 54.9 %
    00123 cpufreq 1400 Mhz | 48.0 %
    00124 cpufreq 1200 Mhz | 41.1 %
    00125 cpufreq 1000 Mhz | 34.2 %
    00126 cpufreq 800 Mhz | 27.3 %
    00127 cpufreq Frequency | Speed
    00128 cpufreq -----------+---------
    00129 cpufreq 2.45 Ghz | 100.0 %
    00130 cpufreq 2.45 Ghz | 77.9 %
    00131 cpufreq 2.05 Ghz | 64.6 %
    00132 cpufreq 1.85 Ghz | 54.0 %
    00133 cpufreq 1.65 Ghz | 51.7 %
    00134 cpufreq 1400 Mhz | 45.2 %
    00135 cpufreq 1200 Mhz | 39.0 %
    00136 cpufreq 1000 Mhz | 33.1 %
    00137 cpufreq 800 Mhz | 25.5 %
    00138 cpufreq Frequency | Speed
    00139 cpufreq -----------+---------
    00140 cpufreq 2.45 Ghz | 93.4 %
    00141 cpufreq 2.45 Ghz | 75.7 %
    00142 cpufreq 2.05 Ghz | 64.5 %
    00143 cpufreq 1.85 Ghz | 59.1 %
    00144 cpufreq 1.65 Ghz | 51.4 %
    00145 cpufreq 1400 Mhz | 45.9 %
    00146 cpufreq 1200 Mhz | 39.3 %
    00147 cpufreq 1000 Mhz | 32.7 %
    00148 cpufreq 800 Mhz | 25.8 %
    00149 cpufreq Frequency | Speed
    00150 cpufreq -----------+---------
    00151 cpufreq 2.45 Ghz | 92.1 %
    00152 cpufreq 2.45 Ghz | 78.1 %
    00153 cpufreq 2.05 Ghz | 65.7 %
    00154 cpufreq 1.85 Ghz | 58.6 %
    00155 cpufreq 1.65 Ghz | 52.5 %
    00156 cpufreq 1400 Mhz | 45.7 %
    00157 cpufreq 1200 Mhz | 39.3 %
    00158 cpufreq 1000 Mhz | 32.7 %
    00159 cpufreq 800 Mhz | 24.3 %
    00160 cpufreq Frequency | Speed
    00161 cpufreq -----------+---------
    00162 cpufreq 2.45 Ghz | 88.9 %
    00163 cpufreq 2.45 Ghz | 79.8 %
    00164 cpufreq 2.05 Ghz | 58.4 %
    00165 cpufreq 1.85 Ghz | 52.6 %
    00166 cpufreq 1.65 Ghz | 46.9 %
    00167 cpufreq 1400 Mhz | 41.0 %
    00168 cpufreq 1200 Mhz | 35.1 %
    00169 cpufreq 1000 Mhz | 29.1 %
    00170 cpufreq 800 Mhz | 22.9 %
    00171 cpufreq Frequency | Speed
    00172 cpufreq -----------+---------
    00173 cpufreq 2.45 Ghz | 92.8 %
    00174 cpufreq 2.45 Ghz | 80.1 %
    00175 cpufreq 2.05 Ghz | 66.2 %
    00176 cpufreq 1.85 Ghz | 59.5 %
    00177 cpufreq 1.65 Ghz | 52.9 %
    00178 cpufreq 1400 Mhz | 46.2 %
    00179 cpufreq 1200 Mhz | 39.5 %
    00180 cpufreq 1000 Mhz | 32.9 %
    00181 cpufreq 800 Mhz | 26.3 %
    00182 cpufreq Frequency | Speed
    00183 cpufreq -----------+---------
    00184 cpufreq 2.45 Ghz | 92.9 %
    00185 cpufreq 2.45 Ghz | 79.5 %
    00186 cpufreq 2.05 Ghz | 66.2 %
    00187 cpufreq 1.85 Ghz | 59.6 %
    00188 cpufreq 1.65 Ghz | 52.9 %
    00189 cpufreq 1400 Mhz | 46.7 %
    00190 cpufreq 1200 Mhz | 39.6 %
    00191 cpufreq 1000 Mhz | 32.9 %
    00192 cpufreq 800 Mhz | 26.3 %
    00193 cpufreq FAILED [MEDIUM] CPUFreqCPUsSetToSW_ANY: Test 1, Processors
    00194 cpufreq are set to SW_ANY.
    00195 cpufreq FAILED [MEDIUM] CPUFreqSW_ANY: Test 1, Firmware not
    00196 cpufreq implementing hardware coordination cleanly. Firmware using
    00197 cpufreq SW_ANY instead?.
    00198 cpufreq
    00199 cpufreq =========================================================
    00200 cpufreq 0 passed, 2 failed, 0 warnings, 0 aborted, 0 skipped, 0
    00201 cpufreq info only.
    00202 cpufreq =========================================================
    00204 summary
    00205 summary 0 passed, 2 failed, 0 warnings, 0 aborted, 0 skipped, 0
    00206 summary info only.
    00207 summary
    00208 summary Test Failure Summary
    00209 summary ====================
    00210 summary
    00211 summary Critical failures: NONE
    00212 summary
    00213 summary High failures: NONE
    00214 summary
    00215 summary Medium failures: 2
    00216 summary cpufreq test, at 1 log line: 193
    00217 summary "Processors are set to SW_ANY."
    00218 summary cpufreq test, at 1 log line: 195
    00219 summary "Firmware not implementing hardware coordination cleanly. Firmware using SW_ANY instead?."
    00220 summary
    00221 summary Low failures: NONE
    00222 summary
    00223 summary Other failures: NONE
    00224 summary
    00225 summary Test |Pass |Fail |Abort|Warn |Skip |Info |
    00226 summary ---------------+-----+-----+-----+-----+-----+-----+
    00227 summary cpufreq | | 2| | | | |
    00228 summary ---------------+-----+-----+-----+-----+-----+-----+
    00229 summary Total: | 0| 2| 0| 0| 0| 0|
    00230 summary ---------------+-----+-----+-----+-----+-----+-----+
    rabbit@rabbit-Lenovo-V570:~$

    Read the article

  • FOSUserBundle override mapping to remove need for username

    - by musoNic80
    I want to remove the need for a username in FOSUserBundle. My users will log in using an email address only, and I've added real name fields as part of the user entity. I realised that I needed to redo the entire mapping as described here. I think I've done it correctly, but when I try to submit the registration form I get the error: "Only field names mapped by Doctrine can be validated for uniqueness." The strange thing is that I haven't tried to assert a unique constraint on anything in the user entity. Here is my full user entity file:

        <?php
        // src/MyApp/UserBundle/Entity/User.php

        namespace MyApp\UserBundle\Entity;

        use FOS\UserBundle\Model\User as BaseUser;
        use Doctrine\ORM\Mapping as ORM;
        use Symfony\Component\Validator\Constraints as Assert;

        /**
         * @ORM\Entity
         * @ORM\Table(name="depbook_user")
         */
        class User extends BaseUser
        {
            /**
             * @ORM\Id
             * @ORM\Column(type="integer")
             * @ORM\GeneratedValue(strategy="AUTO")
             */
            protected $id;

            /**
             * @ORM\Column(type="string", length=255)
             *
             * @Assert\NotBlank(message="Please enter your first name.", groups={"Registration", "Profile"})
             * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"})
             */
            protected $firstName;

            /**
             * @ORM\Column(type="string", length=255)
             *
             * @Assert\NotBlank(message="Please enter your last name.", groups={"Registration", "Profile"})
             * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"})
             */
            protected $lastName;

            /**
             * @ORM\Column(type="string", length=255)
             *
             * @Assert\NotBlank(message="Please enter your email address.", groups={"Registration", "Profile"})
             * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"})
             * @Assert\Email(groups={"Registration"})
             */
            protected $email;

            /**
             * @ORM\Column(type="string", length=255, name="email_canonical", unique=true)
             */
            protected $emailCanonical;

            /**
             * @ORM\Column(type="boolean")
             */
            protected $enabled;

            /**
             * @ORM\Column(type="string")
             */
            protected $salt;

            /**
             * @ORM\Column(type="string")
             */
            protected $password;

            /**
             * @ORM\Column(type="datetime", nullable=true, name="last_login")
             */
            protected $lastLogin;

            /**
             * @ORM\Column(type="boolean")
             */
            protected $locked;

            /**
             * @ORM\Column(type="boolean")
             */
            protected $expired;

            /**
             * @ORM\Column(type="datetime", nullable=true, name="expires_at")
             */
            protected $expiresAt;

            /**
             * @ORM\Column(type="string", nullable=true, name="confirmation_token")
             */
            protected $confirmationToken;

            /**
             * @ORM\Column(type="datetime", nullable=true, name="password_requested_at")
             */
            protected $passwordRequestedAt;

            /**
             * @ORM\Column(type="array")
             */
            protected $roles;

            /**
             * @ORM\Column(type="boolean", name="credentials_expired")
             */
            protected $credentialsExpired;

            /**
             * @ORM\Column(type="datetime", nullable=true, name="credentials_expired_at")
             */
            protected $credentialsExpiredAt;

            public function __construct()
            {
                parent::__construct();
                // your own logic
            }

            /**
             * @return string
             */
            public function getFirstName()
            {
                return $this->firstName;
            }

            /**
             * @return string
             */
            public function getLastName()
            {
                return $this->lastName;
            }

            /**
             * Sets the first name.
             *
             * @param string $firstname
             *
             * @return User
             */
            public function setFirstName($firstname)
            {
                $this->firstName = $firstname;
                return $this;
            }

            /**
             * Sets the last name.
             *
             * @param string $lastname
             *
             * @return User
             */
            public function setLastName($lastname)
            {
                $this->lastName = $lastname;
                return $this;
            }
        }

    I've seen various suggestions about this, but none of them seem to work for me.
The FOSUserBundle docs are very sparse about what must be a very common request.
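
    A plausible cause, sketched below: FOSUserBundle ships its own uniqueness constraint
    on usernameCanonical (alongside emailCanonical), so the error can appear when the
    overridden mapping no longer declares those fields to Doctrine at all. A hedged
    workaround, assuming the goal is email-only login, is to keep the username columns
    mapped and mirror the email into them; the setter overrides here are illustrative
    and not taken from the original post.

        <?php
        // Hedged sketch, inside the User class above: keep the username fields
        // mapped so the bundle's uniqueness constraint on usernameCanonical has a
        // Doctrine-mapped field to validate, and mirror the email into them so
        // users never see a username.

        /** @ORM\Column(type="string", length=255) */
        protected $username;

        /** @ORM\Column(type="string", length=255, name="username_canonical", unique=true) */
        protected $usernameCanonical;

        public function setEmail($email)
        {
            $this->setUsername($email);                   // username follows email
            return parent::setEmail($email);
        }

        public function setEmailCanonical($emailCanonical)
        {
            $this->setUsernameCanonical($emailCanonical); // keep canonical forms in sync
            return parent::setEmailCanonical($emailCanonical);
        }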

    Read the article

  • How to get validation messages from MongoMapper using the Rails console?

    - by Alex
    Hi, I am basically teaching myself how to use RoR and MongoDB at the same time. I am
    following the very good book/tutorial http://railstutorial.org/ and decided to
    replace Sqlite3 with MongoDB, using the mongomapper gem. Everything works out about
    alright, but I am having some non-blocking little issues that I would truly like to
    get rid of. In chapter 6, when working with validation, I hit two issues. First, I
    don't know how to get the validation messages back like with Sqlite3. The "standard"
    code is:

        $ rails console --sandbox
        >> user = User.new(:name => "", :email => "[email protected]")
        >> user.save
        => false
        >> user.valid?
        => false
        >> user.errors.full_messages
        => ["Name can't be blank"]

    but if I try to do the same with MongoMapper, it throws an error saying that errors
    is an undefined function. So does that mean this is simply not implemented in
    mongomapper / the mongo driver? Or is there some other clever way to do it that I
    could not figure out? Additionally, two things here. I am following the example in
    the book to the line, so I was expecting to be able to use the console in sandbox
    mode, but apparently that does not work either:

        (...)ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/console/sandbox.rb:1:in `<top (required)>': uninitialized constant ActiveRecord (NameError)
        from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/application.rb:226:in `initialize_console'
        from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/application.rb:153:in `load_console'
        from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands/console.rb:26:in `start'
        from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands/console.rb:8:in `start'
        from /Users/Alex/.rvm/gems/ruby-1.9.2-p136@rails3/gems/railties-3.0.3/lib/rails/commands.rb:23:in `<top (required)>'
        from script/rails:6:in `require'
        from script/rails:6:in `<main>'

    Also, in the book they call "user" but I need to call "User" (note the capital U) -
    why is that? Is it that mongomapper does not follow the Ruby naming convention or
    something? And finally, I am trying to validate the email field with a regex as
    shown in the tutorial. It does not throw any errors at the code, but whenever I try
    to insert, it just won't ever accept the record unless I comment out the :format
    option...

        class User
          include MongoMapper::Document

          key :name, String, :required => true, :length => { :maximum => 50 }
          key :email, String, :required => true,
                              # :format => { :with => email_regex },
                              :uniqueness => { :case_sentitive => false }

          timestamps!
        end

    Any advice you can provide on those topics would help me a lot! Thanks, Alex
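
    For what it's worth, a hedged sketch of how this model usually looks in MongoMapper
    of that era: errors and full_messages do exist once validations have run, the
    :format option takes a bare regex rather than ActiveModel's hash form, and the
    email_regex above is an undefined method call at class-body evaluation time (note
    also the :case_sentitive typo). The --sandbox failure is expected, since sandbox
    mode wraps the session in an ActiveRecord transaction, and ActiveRecord is not
    loaded here. The key options below are assumptions based on the gem's documented
    behaviour, not on the book:

        # Hedged sketch, assuming MongoMapper ~0.8 conventions.
        class User
          include MongoMapper::Document

          EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i   # define before use

          key :name,  String, :required => true
          key :email, String, :required => true,
                              :format => EMAIL_REGEX,
                              :unique => true

          timestamps!
        end

        user = User.new(:name => "")
        user.valid?                 # => false; runs the validations
        user.errors.full_messages   # => ["Name can't be blank", ...]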

    Read the article

  • Why can't a Java servlet send out an object?

    - by Frank
    I use the following method to send out an object from a servlet:

        public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
            String Full_URL = request.getRequestURL().append("?" + request.getQueryString()).toString();
            String Contact_Id = request.getParameter("Contact_Id");
            String Time_Stamp = Get_Date_Format(6),
                   query = "select from " + Contact_Info_Entry.class.getName()
                         + " where Contact_Id == '" + Contact_Id + "' order by Contact_Id desc";
            PersistenceManager pm = null;
            try {
                pm = PMF.get().getPersistenceManager();
                // Note that this returns a list; there could be multiple results, since
                // the DataStore does not ensure uniqueness for non-primary key fields.
                List<Contact_Info_Entry> results = (List<Contact_Info_Entry>) pm.newQuery(query).execute();
                Write_Serialized_XML(response.getOutputStream(), results.get(0));
            } catch (Exception e) {
                Send_Email(Email_From, Email_To, "Check_License_Servlet Error [ " + Time_Stamp + " ]",
                           new Text(e.toString() + "\n" + Get_Stack_Trace(e)), null);
            } finally {
                pm.close();
            }
        }

        /** Writes the object and CLOSES the stream. Uses the persistence delegate
         *  registered in this class.
         * @param os The stream to write to.
         * @param o  The object to be serialized. */
        public static void writeXMLObject(OutputStream os, Object o) {
            // The classloader reference must be set, since NetBeans uses another class
            // loader to load the bean, which will fail in some circumstances.
            ClassLoader oldClassLoader = Thread.currentThread().getContextClassLoader();
            Thread.currentThread().setContextClassLoader(Check_License_Servlet.class.getClassLoader());
            XMLEncoder encoder = new XMLEncoder(os);
            encoder.setExceptionListener(new ExceptionListener() {
                public void exceptionThrown(Exception e) { e.printStackTrace(); }
            });
            encoder.writeObject(o);
            encoder.flush();
            encoder.close();
            Thread.currentThread().setContextClassLoader(oldClassLoader);
        }

        private static ByteArrayOutputStream writeOutputStream = new ByteArrayOutputStream(16384);

        /** Writes an object to XML.
         * @param out The object output to write to. [ Will not be closed. ]
         * @param o   The object to write. */
        public static synchronized void writeAsXML(ObjectOutput out, Object o) throws IOException {
            writeOutputStream.reset();
            writeXMLObject(writeOutputStream, o);
            byte[] Bt_1 = writeOutputStream.toByteArray();
            byte[] Bt_2 = new Des_Encrypter().encrypt(Bt_1, Key);
            out.writeInt(Bt_2.length);
            out.write(Bt_2);
            out.flush();
            out.close();
        }

        public static synchronized void Write_Serialized_XML(OutputStream Output_Stream, Object o) throws IOException {
            writeAsXML(new ObjectOutputStream(Output_Stream), o);
        }

    At the receiving end the code looks like this:

        File_Url = "http://" + Site_Url + App_Dir + File_Name;
        try {
            Contact_Info_Entry Online_Contact_Entry =
                (Contact_Info_Entry) Read_Serialized_XML(new URL(File_Url));
        } catch (Exception e) {
            e.printStackTrace();
        }

        private static byte[] readBuf = new byte[16384];

        public static synchronized Object readAsXML(ObjectInput in) throws IOException {
            // The classloader reference must be set, since NetBeans uses another class
            // loader to load the bean, which will fail under some circumstances.
            ClassLoader oldClassLoader = Thread.currentThread().getContextClassLoader();
            Thread.currentThread().setContextClassLoader(Tool_Lib_Simple.class.getClassLoader());
            int length = in.readInt();
            readBuf = new byte[length];
            in.readFully(readBuf, 0, length);
            byte Bt[] = new Des_Encrypter().decrypt(readBuf, Key);
            XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(Bt, 0, Bt.length));
            Object o = dec.readObject();
            Thread.currentThread().setContextClassLoader(oldClassLoader);
            in.close();
            return o;
        }

        public static synchronized Object Read_Serialized_XML(URL File_Url) throws IOException {
            return readAsXML(new ObjectInputStream(File_Url.openStream()));
        }

    But I can't get the object from the Java app on the receiving end. Why? The error
    messages look like this:

        java.lang.ClassNotFoundException: PayPal_Monitor.Contact_Info_Entry
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...
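
    The stack trace points at the likely culprit rather than at the servlet code:
    java.beans.XMLDecoder re-creates beans from the fully qualified class name recorded
    in the XML, so the receiving app must have PayPal_Monitor.Contact_Info_Entry (same
    package, same name) on its classpath; the follow-on "target should not be null"
    errors are the decoder failing to set properties on the object it could not
    construct. A minimal sketch of the round trip, with a stand-in class, illustrating
    the constraint (names here are hypothetical):

        import java.beans.XMLDecoder;
        import java.beans.XMLEncoder;
        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;

        public class RoundTrip {
            public static void main(String[] args) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                XMLEncoder enc = new XMLEncoder(buf);
                enc.writeObject(new java.util.Date()); // stand-in for Contact_Info_Entry
                enc.close();

                // The XML stores the writer's fully qualified class name; decoding
                // only succeeds if that exact class resolves on THIS classpath.
                XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(buf.toByteArray()));
                Object o = dec.readObject();
                dec.close();
                System.out.println(o);
            }
        }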

    Read the article

  • Projections.count() and Projections.countDistinct() both result in the same query

    - by Kim L
    EDIT: I've edited this post completely, so that the new description of my problem
    includes all the details and not only what I previously considered relevant. Maybe
    this new description will help to solve the problem I'm facing.

    I have two entity classes, Customer and CustomerGroup. The relation between customer
    and customer groups is ManyToMany. The customer groups are annotated in the following
    way in the Customer class:

        @Entity
        public class Customer {
            ...
            @ManyToMany(mappedBy = "customers", fetch = FetchType.LAZY)
            public Set<CustomerGroup> getCustomerGroups() { ... }
            ...
            public String getUuid() { return uuid; }
            ...
        }

    The customer reference in the customer groups class is annotated in the following way:

        @Entity
        public class CustomerGroup {
            ...
            @ManyToMany
            public Set<Customer> getCustomers() { ... }
            ...
            public String getUuid() { return uuid; }
            ...
        }

    Note that both the CustomerGroup and Customer classes also have a UUID field. The
    UUID is a unique string (uniqueness is not enforced in the data model; as you can
    see, it is handled like any other normal string). What I'm trying to do is fetch all
    customers which do not belong to any customer group OR whose customer group is a
    "valid group". The validity of a customer group is defined by a list of valid UUIDs.
    I've created the following criteria query:

        Criteria criteria = getSession().createCriteria(Customer.class);
        criteria.setProjection(Projections.countDistinct("uuid"));
        criteria = criteria.createCriteria("customerGroups", "groups", Criteria.LEFT_JOIN);

        List<String> uuids = getValidUUIDs();
        Criterion criterion = Restrictions.isNull("groups.uuid");
        if (uuids != null && uuids.size() > 0) {
            criterion = Restrictions.or(criterion, Restrictions.in("groups.uuid", uuids));
        }
        criteria.add(criterion);

    When executed, it results in the following SQL query:

        select count(*) as y0_
        from Customer this_
        left outer join CustomerGroup_Customer customergr3_ on this_.id = customergr3_.customers_id
        left outer join CustomerGroup groups1_ on customergr3_.customerGroups_id = groups1_.id
        where groups1_.uuid is null or groups1_.uuid in ( ?, ? )

    The query is exactly what I wanted, with one exception. Since a Customer can belong
    to multiple CustomerGroups, left joining the CustomerGroup results in duplicated
    Customer rows, so count(*) gives a false value: it only counts how many rows there
    are. I need the number of unique customers, which I expected to get with the
    Projections.countDistinct("uuid") projection. For some reason, as you can see, the
    projection still results in a count(*) query instead of the expected
    count(distinct uuid). Replacing countDistinct with plain count("uuid") results in
    exactly the same query. Am I doing something wrong, or is this a bug?

    === "Problem" solved. Reason: PEBKAC (Problem Exists Between Keyboard And Chair). I
    had a branch in my code and didn't realize that the branch was being executed. That
    branch used rowCount() instead of countDistinct().
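
    For readers hitting the same symptom, a hedged sketch of the projection as intended:
    Projections.countDistinct("uuid") does emit count(distinct this_.uuid), so if you
    see count(*), look for a stray rowCount() branch as the author eventually did. The
    method wrapper and validUuids parameter below are assumptions for illustration:

        import org.hibernate.Criteria;
        import org.hibernate.Session;
        import org.hibernate.criterion.Projections;
        import org.hibernate.criterion.Restrictions;
        import java.util.List;

        static long countValidCustomers(Session session, List<String> validUuids) {
            Criteria criteria = session.createCriteria(Customer.class)
                .createAlias("customerGroups", "groups", Criteria.LEFT_JOIN)
                .setProjection(Projections.countDistinct("uuid")) // -> count(distinct this_.uuid)
                .add(Restrictions.or(
                    Restrictions.isNull("groups.uuid"),
                    Restrictions.in("groups.uuid", validUuids)));
            return (Long) criteria.uniqueResult();
        }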

    Read the article

  • Getting this CSS to work in IE6

    - by jerrygarciuh
    Hi folks, I'm working on this page:
    http://www.karlsenner.dreamhosters.com/about.php and having trouble with the
    navigation in IE6. It validates as XHTML 1.0 Transitional and works great in FF,
    IE8, Chrome, and Windows Safari. In IE6 and Opera 10 the drop menus appear too high.
    I tried adding in the different versions of http://code.google.com/p/ie7-js/ but it
    did not solve the issue in IE. The CSS looks like this:

        #wrapper {
            position: relative;
            display: block;
            background-color: inherit;
            margin: 0px auto;
            padding: 0;
            width: 900px;
            min-height: 900px;
        }
        #nav {}
        .navImage {
            position: relative;
            display: inline;
            height: 102px; /* added in hopes of helping IE position, but no dice */
        }
        .subMenu {
            position: absolute;
            z-index: 10;
            background-color: #FFF;
            top: 14px;
            left: 0;
        }
        .subMenu a:link, .subMenu a:visited, .subMenu a:active {
            display: block;
            width: 90%;
            padding: 6px;
            margin: 0;
            color: #3CF;
            font-family: Tahoma, Geneva, sans-serif;
            font-size: 14px;
            text-decoration: none;
            font-weight: bold;
        }
        .subMenu a:hover {
            display: block;
            width: 90%;
            padding: 6px;
            margin: 0;
            color: #3CF;
            background-color: #CCC;
            font-family: Tahoma, Geneva, sans-serif;
            font-size: 14px;
            text-decoration: none;
            font-weight: bold;
        }

    jQuery rollovers:

        $('#navcompany').hover(function () {
            $('#companyMenu').css('display', 'block');
            $('#companyImg').attr('src', 'g/nav/company_over.gif');
        }, function () {
            $('#companyMenu').css('display', 'none');
            $('#companyImg').attr('src', 'g/nav/company.gif');
        });

    And one of the cells. Since the menu is coming out of PHP and IE was not respecting
    the widths, I just use PHP to get the nav image widths and write them to styles on
    the fly. That solved the width issue, as IE acted like they should inherit their
    width from the wrapper. This may be a clue as to why they don't appear below their
    nav images, but I can't sort it out.

        <div id="navcompany" class="navImage" style="width:128px">
            <a href="about.php">
                <img src="g/nav/company_over.gif" name="companyImg" width="128" height="102" border="0" id="companyImg" alt="company" />
            </a>
            <div id="companyMenu" class="subMenu" style="display:none; width:128px">
                <a href="about.php">About us</a>
                <a href="location.php">Our location</a>
            </div>
        </div>

    Any advice greatly appreciated! JG
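
    One hedged guess at the IE6/Opera offset, based only on the CSS shown: .navImage is
    display:inline, so IE6 computes the absolute child's top against the line box rather
    than against the 102px image, and top:14px lands the menu too high. Floating the
    parent (which also triggers IE6's hasLayout) and positioning the menu just below the
    image is a common repair; the values below are assumptions to adapt, not a confirmed
    fix for this page:

        /* Hedged sketch: give the positioned parent a real box in IE6 and
           drop the menu directly beneath the 102px-tall nav image. */
        .navImage {
            position: relative;
            float: left;        /* replaces display:inline; triggers hasLayout */
            height: 102px;
        }
        .subMenu {
            position: absolute;
            z-index: 10;
            background-color: #FFF;
            top: 102px;         /* was 14px; align to the bottom of the image */
            left: 0;
        }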

    Read the article

  • jQuery DataTables server side processing and ASP.Net

    - by Chad
    I'm trying to use the server-side functionality of the jQuery DataTables plugin with
    ASP.Net. The ajax request is returning valid JSON, but nothing is showing up in the
    table. I originally had problems with the data I was sending in the ajax request: I
    was getting an "Invalid JSON primitive" error. I discovered that the data needs to
    be in a string instead of JSON serialized, as described in this post:
    http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/.
    I wasn't quite sure how to fix that, so I tried adding this in the ajax request:

        "data": "{'sEcho': '" + aoData.sEcho + "'}"

    If the above eventually works, I'll add the other parameters later. Right now I'm
    just trying to get something to show up in my table. The returned JSON looks OK and
    validates, but the sEcho in the post is undefined, and I think that's why no data is
    being loaded into the table. So, what am I doing wrong? Am I even on the right
    track, or am I being stupid? Has anyone run into this before, or have any
    suggestions? Here's my jQuery:

        $(document).ready(function() {
            $("#grid").dataTable({
                "bJQueryUI": true,
                "sPaginationType": "full_numbers",
                "bServerSide": true,
                "sAjaxSource": "GridTest.asmx/ServerSideTest",
                "fnServerData": function(sSource, aoData, fnCallback) {
                    $.ajax({
                        "type": "POST",
                        "dataType": 'json',
                        "contentType": "application/json; charset=utf-8",
                        "url": sSource,
                        "data": "{'sEcho': '" + aoData.sEcho + "'}",
                        "success": fnCallback
                    });
                }
            });
        });

    HTML:

        <table id="grid">
            <thead>
                <tr>
                    <th>Last Name</th>
                    <th>First Name</th>
                    <th>UserID</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td colspan="5" class="dataTables_empty">Loading data from server</td>
                </tr>
            </tbody>
        </table>

    Webmethod:

        <WebMethod()> _
        Public Function ServerSideTest() As Data
            Dim list As New List(Of String)
            list.Add("testing")
            list.Add("chad")
            list.Add("testing")

            Dim container As New List(Of List(Of String))
            container.Add(list)

            list = New List(Of String)
            list.Add("testing2")
            list.Add("chad")
            list.Add("testing")
            container.Add(list)

            HttpContext.Current.Response.ContentType = "application/json"
            Return New Data(HttpContext.Current.Request("sEcho"), 2, 2, container)
        End Function

        Public Class Data
            Private _iTotalRecords As Integer
            Private _iTotalDisplayRecords As Integer
            Private _sEcho As Integer
            Private _sColumns As String
            Private _aaData As List(Of List(Of String))

            Public Property sEcho() As Integer
                Get
                    Return _sEcho
                End Get
                Set(ByVal value As Integer)
                    _sEcho = value
                End Set
            End Property

            Public Property iTotalRecords() As Integer
                Get
                    Return _iTotalRecords
                End Get
                Set(ByVal value As Integer)
                    _iTotalRecords = value
                End Set
            End Property

            Public Property iTotalDisplayRecords() As Integer
                Get
                    Return _iTotalDisplayRecords
                End Get
                Set(ByVal value As Integer)
                    _iTotalDisplayRecords = value
                End Set
            End Property

            Public Property aaData() As List(Of List(Of String))
                Get
                    Return _aaData
                End Get
                Set(ByVal value As List(Of List(Of String)))
                    _aaData = value
                End Set
            End Property

            Public Sub New(ByVal sEcho As Integer, ByVal iTotalRecords As Integer, ByVal iTotalDisplayRecords As Integer, ByVal aaData As List(Of List(Of String)))
                If sEcho <> 0 Then Me.sEcho = sEcho
                Me.iTotalRecords = iTotalRecords
                Me.iTotalDisplayRecords = iTotalDisplayRecords
                Me.aaData = aaData
            End Sub
        End Class

    Returned JSON:

        {"__type":"Data","sEcho":0,"iTotalRecords":2,"iTotalDisplayRecords":2,"aaData":[["testing","chad","testing"],["testing2","chad","testing"]]}
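
    The undefined sEcho has a likely mechanical cause: DataTables hands fnServerData an
    aoData that is an array of { name, value } pairs, not an object, so aoData.sEcho is
    undefined by construction. A hedged sketch of extracting the value (and unwrapping
    the "d" envelope that ASP.NET ASMX JSON responses typically add) follows; the msg.d
    handling is an assumption about this particular service's output:

        // Hedged sketch: aoData is [{name: "sEcho", value: ...}, ...], not an object.
        "fnServerData": function (sSource, aoData, fnCallback) {
            var sEcho;
            $.each(aoData, function (i, pair) {
                if (pair.name === "sEcho") { sEcho = pair.value; }
            });
            $.ajax({
                type: "POST",
                dataType: "json",
                contentType: "application/json; charset=utf-8",
                url: sSource,
                data: "{'sEcho': '" + sEcho + "'}",
                success: function (msg) {
                    fnCallback(msg.d || msg);  // ASMX often wraps JSON in a "d" property
                }
            });
        }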

    Read the article

  • Validate Java SAML signature from C#

    - by Adrya
    How can I validate, in .NET C#, a SAML signature created in Java? Here is the SAML
    signature that I get from Java:

        <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
          <ds:SignedInfo>
            <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"></ds:CanonicalizationMethod>
            <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"></ds:SignatureMethod>
            <ds:Reference URI="#_e8bcba9d1c76d128938bddd5ae8c68e1">
              <ds:Transforms>
                <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"></ds:Transform>
                <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
                  <ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"
                      PrefixList="code ds kind rw saml samlp typens #default xsd xsi"></ec:InclusiveNamespaces>
                </ds:Transform>
              </ds:Transforms>
              <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"></ds:DigestMethod>
              <ds:DigestValue>zEL7mB0Wkl+LtjMViO1imbucXiE=</ds:DigestValue>
            </ds:Reference>
          </ds:SignedInfo>
          <ds:SignatureValue>
            jpIX3WbX9SCFnqrpDyLj4TeJN5DGIvlEH+o/mb9M01VGdgFRLtfHqIm16BloApUPg2dDafmc9DwL
            Pyvs3TJ/hi0Q8f0ucaKdIuw+gBGxWFMcj/U68ZuLiv7U+Qe7i4ZA33rWPorkE82yfMacGf6ropPt
            v73mC0bpBP1ubo5qbM4=
          </ds:SignatureValue>
          <ds:KeyInfo>
            <ds:X509Data>
              <ds:X509Certificate>
                MIIDBDCCAeygAwIBAgIIC/ktBs1lgYcwDQYJKoZIhvcNAQEFBQAwNzERMA8GA1UEAwwIQWRtaW5D
                QTExFTATBgNVBAoMDEVKQkNBIFNhbXBsZTELMAkGA1UEBhMCU0UwHhcNMDkwMjIzMTAwMzEzWhcN
                MTgxMDE1MDkyNTQyWjBaMRQwEgYDVQQDDAsxMC41NS40MC42MTEbMBkGA1UECwwST24gRGVtYW5k
                IFBsYXRmb3JtMRIwEAYDVQQLDAlPbiBEZW1hbmQxETAPBgNVBAsMCFNvZnR3YXJlMIGfMA0GCSqG
                SIb3DQEBAQUAA4GNADCBiQKBgQCk5EqiedxA6WEE9N2vegSCqleFpXMfGplkrcPOdXTRLLOuRgQJ
                LEsOaqspDFoqk7yJgr7kaQROjB9OicSH7Hhsu7HbdD6N3ntwQYoeNZ8nvLSSx4jz21zvswxAqw1p
                DoGl3J6hks5owL4eYs2yRHvqgqXyZoxCccYwc4fYzMi42wIDAQABo3UwczAdBgNVHQ4EFgQUkrpk
                yryZToKXOXuiU2hNsKXLbyIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSiviFUK7DUsjvByMfK
                g+pm4b2s7DAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDQYJKoZIhvcNAQEF
                BQADggEBAKb94tnK2obEyvw8ZJ87u7gvkMxIezpBi/SqXTEBK1by0NHs8VJmdDN9+aOvC5np4fOL
                fFcRH++n6fvemEGgIkK3pOmNL5WiPpbWxrx55Yqwnr6eLsbdATALE4cgyZWHl/E0uVO2Ixlqeygw
                XTfg450cCWj4yfPTVZ73raKaDTWZK/Tnt7+ulm8xN+YWUIIbtW3KBQbGomqOzpftALyIKLVtBq7L
                J0hgsKGHNUnssWj5dt3bYrHgzaWLlpW3ikdRd67Nf0c1zOEgKHNEozrtRKiLLy+3bIiFk0CHImac
                1zeqLlhjrG3OmIsIjxc1Vbc0+E+z6Unco474oSGf+D1DO+Y=
              </ds:X509Certificate>
            </ds:X509Data>
          </ds:KeyInfo>
        </ds:Signature>

    I know that to parse the SAML I need to validate the signature. I tried this:

        public bool VerifySignature()
        {
            X509Certificate2 certificate = null;
            XmlDocument doc = new XmlDocument();
            XmlElement xmlAssertionElement = this.GetXml(doc);
            doc.AppendChild(xmlAssertionElement);

            // Create a new SignedXml object and pass it the XML document class.
            SamlSignedXml signedXml = new SamlSignedXml(xmlAssertionElement);

            // Get the signature.
            XmlElement xmlSignature = this.Signature;
            if (xmlSignature == null)
            {
                return false;
            }

            // Load the signature node.
            signedXml.LoadXml(xmlSignature);

            // Get the certificate used to sign the assertion if information about
            // this certificate is available in the signature of the assertion.
            foreach (KeyInfoClause clause in signedXml.KeyInfo)
            {
                if (clause is KeyInfoX509Data)
                {
                    if (((KeyInfoX509Data)clause).Certificates.Count > 0)
                    {
                        certificate = (X509Certificate2)((KeyInfoX509Data)clause).Certificates[0];
                    }
                }
            }

            if (certificate == null)
            {
                return false;
            }

            return signedXml.CheckSignature(certificate, true);
        }

    It validates the signature of an assertion signed in .NET, but not this Java one.
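
    Two hedged suspects when a Java-signed assertion fails CheckSignature in .NET,
    neither confirmed by the post: the document being re-flowed on load (XmlDocument
    normalizes whitespace unless PreserveWhitespace is set, which changes the digested
    bytes) and the Reference URI #_e8bc... not resolving, since SignedXml looks up
    elements by an "Id" attribute while SAML uses "ID" (2.0) or "AssertionID" (1.x).
    Below is one plausible shape for the SamlSignedXml class the post already
    references, covering both; the LoadSigned helper is hypothetical:

        using System.Security.Cryptography.Xml;
        using System.Xml;

        public class SamlSignedXml : SignedXml
        {
            public SamlSignedXml(XmlElement element) : base(element) { }

            // Widen the Reference URI lookup to the SAML id attribute names.
            public override XmlElement GetIdElement(XmlDocument document, string idValue)
            {
                XmlElement standard = base.GetIdElement(document, idValue);
                if (standard != null) return standard;
                return (XmlElement)document.SelectSingleNode(
                    string.Format("//*[@ID='{0}' or @AssertionID='{0}']", idValue));
            }

            // Hypothetical helper showing the whitespace-sensitive load step.
            public static XmlDocument LoadSigned(string samlXml)
            {
                XmlDocument doc = new XmlDocument();
                doc.PreserveWhitespace = true; // whitespace is part of the digested bytes
                doc.LoadXml(samlXml);
                return doc;
            }
        }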

    Read the article

  • Anyone willing to help out a javascript n00b? :-)

    - by Splynx
    Since I am asking for a lot, and know it, the following is a wall of text for those
    who might show some interest and want to know a little before offering their help to
    me. First a little about my level of programming skills, and a little about what I
    ask for.

    Where I'm at: I am not totally new to Javascript, and have dabbled a little with PHP
    earlier - well, have dabbled a lot with PHP in fact, but never got good at it
    because I program alone. And I have until now never used forums to get help etc.,
    other than searching to see if anyone else had my problem before and what the
    solution was. So I am not an intuitive or talented programmer; I'm more of a very
    meticulous programmer, and you would be surprised how far you can get with if
    else... (OK, that's a joke, hehe). My solutions are usually (I am guessing here) not
    the best ones - and slow, I take it - and the code is usually too long, and I have
    to look up most of the stuff I use (really, a lot of it is not done "freehand").

    I have a LOT of experience with HTML and CSS, and have always done well-formed
    markup. I am really into x-browsing and always require that my work validates when
    it's done. I also worry about optimizing a lot: I work with sprites for images,
    minimize the number of HTTP requests, use H1, H2, etc. where it is logically
    correct, and use the correct elements rather than just div, span, or p everything...
    So because I am a workhorse and very meticulous I can actually pull off some quite
    "advanced" features, but it's always the basics that bite me in the end. Not fully
    understanding the syntax and so on usually gives me problems.

    I recently discovered jQuery - which is a lot of fun... But I want to use it for the
    DOM node manipulation/handling only. As I mentioned, I worry about optimizing, and
    jQuery used for everything seems... well, not optimal. It strikes me that doing it
    yourself when possible is faster than accessing another script that may take a whole
    lot of other considerations into perspective when handling your variables and
    objects (and I am just guessing here, since I, as explained, know nothing).

    So that's where I'm at... As mentioned, I just started with javascript for "real",
    so I do not have much to show, but at the end of my WOT you can see two unfinished
    scripts I have made, so you can see where I'm at roughly - just check out the URL
    without the /feedback.html for the second example (I am only allowed to post 1 link
    since I am also a SO n00b) (and for those rushing over to a validation service,
    remember I wrote "when it's done"...)

    What I ask for: I am figuring this... I have a piece of code I am working on at the
    moment, and this little project has taught me a whole lot already; I have "grown" a
    lot as a javascript programmer. If I add a whole lot of comments to the script, and
    explain what it is intended to do, will you then show me where I am writing
    incorrect code or making mistakes, where/how my code could be more optimal, and
    where I am just simply being a muppet?

    The code I want to use as the background for the tuition is the one here:
    http://projects.1000monkeys.dk/feedback.html Use Firebug and have a quick look-see...

    Read the article

  • Form submit in javascript

    - by zac
    Hi, I've been wrestling with this form, and the last step (actually submitting) has
    me scratching my head. What I have so far is the form:

        <form id="theForm" method='post' name="emailForm">
            <table border="0" cellspacing="2">
                <td>Email <span class="red">*</span></td>
                <td><input type='text' class="validate[required,custom[email]]" size="30"></td></tr>
                <td>First Name:</td>
                <td><input type='text' name='email[first]' id='first_name' size=30></td></tr>
                <tr height="30">
                <td cellpadding="4">Last Name:</td>
                <td><input type='text' name='email[last]' id='e_last_name' size=30>
                <td>Birthday</td>
                <td><select name='month' style='width:70px; margin-right: 10px'>
                        <option value=''>Month</option>
                        <option value="1">Jan</option>
                        <option value="2">Feb</option>
                        ....
                    </select><select name='day' style='width:55px; margin-right: 10px'>
                        <option value=''>Day</option>
                        <option value="1">1</option>
                        <option value="2">2</option>
                        ...
                        <option value="31">31</option>
                    </select><select name='year' style='width:60px;'>
                        <option value=''>Year</option>
                        <option value="2007">2007</option>
                        <option value="2006">2006</option>
                        <option value="2005">2005</option>
                        ...
                    </select>
                <input type='image' src='{{skin url=""}}images/email/signUpButt.gif' value='Submit' onClick="return checkAge()" />
                <input type="hidden" id="under13" name="under13" value="No">

    and a script that checks the age and sets a cookie / changes the display:

        function checkAge() {
            var min_age = 13;
            var year = parseInt(document.forms["emailForm"]["year"].value);
            var month = parseInt(document.forms["emailForm"]["month"].value) - 1;
            var day = parseInt(document.forms["emailForm"]["day"].value);
            var theirDate = new Date((year + min_age), month, day);
            var today = new Date;
            if ((today.getTime() - theirDate.getTime()) < 0) {
                var el = document.getElementById('emailBox');
                if (el) {
                    el.className += el.className ? ' youngOne' : 'youngOne';
                }
                document.getElementById('emailBox').innerHTML =
                    "<style type=\"text/css\">.formError {display:none}</style>" +
                    "<p>Good Bye</p><p>You must be 13 years of age to sign up.</p>";
                createCookie('age', 'not13', 0);
                return false;
            } else {
                createCookie('age', 'over13', 0);
                return true;
            }
        }

    That all seems to be working well... I'm just missing a crucial step: actually
    submitting the form if it validates (if they pass the age question). So I am
    thinking that this will be wrapped in that script, something in here:

        else {
            createCookie('age', 'over13', 0);
            return true;
        }

    Can someone please help me figure out how I could handle this submit?
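
    For what it's worth, a hedged sketch: an <input type="image"> is itself a submit
    control, so returning true from its onClick already lets the POST proceed, and
    nothing extra is needed inside the else branch. Moving the check to the form's
    onsubmit (and giving the form an action) makes the flow explicit; the action URL
    below is a placeholder, not from the original markup:

        <!-- Hedged sketch: validate once, at submit time. -->
        <form id="theForm" method="post" action="/newsletter/signup" name="emailForm"
              onsubmit="return checkAge();">
            ...
            <input type="image" src="images/email/signUpButt.gif" value="Submit" />
        </form>
        <script type="text/javascript">
            // Alternatively, submit programmatically once the check passes:
            function trySubmit() {
                if (checkAge()) {
                    document.getElementById('theForm').submit();
                }
            }
        </script>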

    Read the article

  • Pass existing model into AJAX PartialViewResult

    - by Joe
    I'm using AJAX to asynchronously update a partial view and need to pass the existing
    model into the partial view. Controller action:

        public ActionResult Edit(int id)
        {
            var vM = new MyViewModel(); // vM is the view model
            return View(vM);
        }

    Edit view:

        @using (Html.BeginForm())
        {
            @Html.ValidationSummary(true)
            ...
            <span id="Ship">@Html.Partial("AJAX_Views/_Ship")</span>

    _Ship partial view:

        @model MyProject.ViewModels.MyViewModel
        <table class="detailtable" style="min-width:398px">
            <tr>
                <th style="padding-left:132px" colspan="2">
                    <span class="editor-label">Shipping&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Address</span>
                    <span class="editor-label" style="padding-left:55px">
                        @Ajax.ActionLink("Delete", "SHIPDEL",
                            new AjaxOptions { UpdateTargetId = "Ship",
                                              InsertionMode = InsertionMode.Replace,
                                              HttpMethod = "Get" })</span>
            <tr>
                <th style="min-width:152px"><span class="editor-label">Street:</span></th>
                @Html.HiddenFor(m => m.CmpAdrsSh.Id)
                @Html.HiddenFor(m => m.CmpAdrsSh.CompPersonId)
                @Html.HiddenFor(m => m.CmpAdrsSh.IsShip)
                <td><span class="editor-field">@Html.EditorFor(m => m.CmpAdrsSh.Street)
                    @Html.ValidationMessageFor(m => m.CmpAdrsSh.Street)</span></td>
            </tr>
            <tr>
                <th><span class="editor-label">City:</span></th>
                <td><span class="editor-field">@Html.EditorFor(m => m.CmpAdrsSh.City)
                    @Html.ValidationMessageFor(m => m.CmpAdrsSh.City)</span></td>
            </tr>
            <tr>
                <th><span class="editor-label">State:</span></th>
                <td><span class="editor-field">@Html.DropDownList("CmpAdrsSh.State", (IEnumerable<SelectListItem>)ViewBag._State)
                    @Html.ValidationMessageFor(m => m.CmpAdrsSh.State)</span></td>
            </tr>
            <tr>
                <th><span class="editor-label">Zip:</span></th>
                <td><span class="editor-field">@Html.EditorFor(m => m.CmpAdrsSh.Zip)
                    @Html.ValidationMessageFor(m => m.CmpAdrsSh.Zip)</span></td>
            </tr>

    _ShipDel partial view:

        @model MyProject.ViewModels.MyViewModel
        <table class="detailtable" style="min-width:398px">
            <tr>
                <th style="padding-left:132px" colspan="2">
                    <span class="editor-label">Shipping&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Address</span>
                    <span class="editor-label" style="padding-left:10px; color:red">Marked for Deletion.</span></th></tr>
            <tr><td>To not Delete Select Cancel below!</td></tr>
            @Html.HiddenFor(m => m.CmpAdrsSh.Street)
            @Html.HiddenFor(m => m.CmpAdrsSh.City)
            @Html.HiddenFor(m => m.CmpAdrsSh.State)
            @Html.HiddenFor(m => m.CmpAdrsSh.Zip)

    Controller PartialViewResult action:

        public PartialViewResult SHIPDEL()
        {
            return PartialView("AJAX_Views/_ShipDel");
        }

    I tried adding this.ModelState to the action, but then the view will not render. I'm
    guessing I somehow have to pass the model to the SHIPDEL action first; I couldn't
    find an @Ajax.ActionLink overload that would allow this.

        public PartialViewResult SHIPDEL()
        {
            return PartialView("AJAX_Views/_ShipDel", this.ModelState);
        }

    In the _ShipDel partial view I need to expose the CmpAdrsSh properties so the model
    validates in the POST action. The model is empty at this point. How do I pass the
    existing vM model into the _ShipDel partial view? Thank you,
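
    A hedged sketch of one way out, not from the original thread: an Ajax.ActionLink
    issues a GET with no form fields, so the action cannot receive the current model,
    and ModelState is validation state rather than a model, which would explain the
    failed render. Passing the entity id as a route value and rebuilding the view model
    server-side keeps the partial populated; LoadViewModel is a hypothetical helper
    standing in for however the Edit action obtains its MyViewModel.

        @* Hedged sketch: hand the id to the action as a route value. *@
        @Ajax.ActionLink("Delete", "SHIPDEL",
            new { id = Model.CmpAdrsSh.Id },
            new AjaxOptions { UpdateTargetId = "Ship",
                              InsertionMode = InsertionMode.Replace,
                              HttpMethod = "Get" })

        public PartialViewResult SHIPDEL(int id)
        {
            var vM = LoadViewModel(id);   // hypothetical: rebuild the same
                                          // MyViewModel the Edit action used
            return PartialView("AJAX_Views/_ShipDel", vM);
        }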

    Read the article

  • JQuery Checkbox with Textbox Validation

    - by Volrath
    I am using Jorn's validation plugin. I have a group of checkboxes beside a group of
    textboxes. The textboxes are disabled by default and are enabled when the matching
    checkbox is checked. At least one checkbox has to be checked, which is not a
    problem. However, when I check more than two checkboxes, only one textbox validates:
    the form still submits even when the second textbox is empty.

        $count = 0;
        while ($row = mysql_fetch_array($rs)) {
        ?>
            <tr>
            <td>
            <label>
            <input type="checkbox" name="tDays[]" id="tDays<?php echo $count; ?>"
                   value="<?php echo $row['promoDayID']; ?>" onClick="enableTxt();"
                   <?php if ((isset($arrTDays) && in_array_THours($row['promoDayID'], $arrTDays))
                          || (!empty($arrSelectedTHours) && in_array_THours($row['promoDayID'], $arrSelectedTHours))) {
                             echo "checked='checked'";
                         } ?> validate="required:true" />
            <?php echo $row['promoDay']; ?>:
            </label>
            </td>
            <td align="right">
            <input type="textbox" size="45" style="font-size:12px" name="tHours[]" id="tHours<?php echo $count; ?>"
                   <?php if (isset($arrTDays) && in_array_THours($row['promoDayID'], $arrTDays)) {
                             echo "value='" . getHours($row['promoDayID'], $arrTDays) . "'";
                         } elseif (!empty($arrSelectedTHours) && in_array_THours($row['promoDayID'], $arrSelectedTHours)) {
                             echo "value='" . getHours($row['promoDayID'], $arrSelectedTHours) . "'";
                         } else {
                             echo "value='' disabled='disabled'";
                         } ?> class="required" />
            <label for="tHours[]" class="error" id="tHourserror<?php echo $count; ?>">Please enter the Trading Hour.</label>
            </td>
            </tr>
        <?php
            $count++;
        } // while
        ?>

    The enabling is done using JavaScript:

        function enableTxt() {
            for (i = 0; i <= 7; i++) {
                if (document.getElementById("tDays" + i) != null && document.getElementById("tDays" + i).checked == true) {
                    document.getElementById('tHours' + i).disabled = false;
                    document.getElementById('tHourserror' + i).style.visibility = "visible";
                } else if (document.getElementById("tDays" + i) != null) {
                    document.getElementById('tHours' + i).disabled = "disabled";
                    document.getElementById('tHours' + i).value = "";
                    document.getElementById('tHourserror' + i).style.visibility = "hidden";
                }
            }
        }

    Please kindly advise in detail how this problem can be solved. I am fairly weak in
    jQuery.
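
    A hedged diagnosis, based on how Jorn Zaefferer's plugin generally behaves rather
    than on this exact markup: the plugin keys elements by their name attribute and
    checks only the first element carrying a duplicated name, so every tHours[] box
    after the first is skipped. Making each name unique (the loop already has $count
    available) usually restores per-field checking, and disabled fields are ignored by
    the plugin, so the existing enable/disable toggle already scopes validation to
    checked rows. A sketch:

        // Hedged sketch. In the PHP, emit a unique name per row:
        //     name="tHours[<?php echo $count; ?>]"
        // Then initialise the plugin once on the form:
        $(document).ready(function () {
            $("form").validate({
                errorPlacement: function (error, element) {
                    error.insertAfter(element); // message lands beside its textbox
                }
            });
        });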

    Read the article

< Previous Page | 13 14 15 16 17 18  | Next Page >