Search Results

Search found 4118 results on 165 pages for 'attributes'.


  • Rails 3 ActiveModel Nested Class I18n

    - by Dave
    Given the following class definition in Ruby:

        class Conversation
          class Message
            include ActiveModel::Validations
            attr_accessor :quantity
            validates :quantity, :presence => true
          end
        end

    How can you use I18n to customize the error message? For example, the correct lookup for the Conversation class would be:

        activemodel:
          errors:
            models:
              conversation:
                attributes:
                  quantity:
                    blank: "Some custom message"

    But what is it for the Message class? I tried nesting message under conversation (conversation: message: attributes: ...), keying on message alone (message: attributes: ...), and keying on conversation::message -- none of them work. Any ideas, or is this a bug in ActiveModel or I18n?
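    A hedged suggestion, not verified against this exact setup: ActiveModel builds its I18n key from the class's model_name, and for a nested class such as Conversation::Message that key is normally "conversation/message" (the :: becomes a slash). The lookup would then be roughly:

        activemodel:
          errors:
            models:
              conversation/message:
                attributes:
                  quantity:
                    blank: "Some custom message"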

    Read the article

  • Cannot Generate ParameterSetMetadata While Programmatically Creating A Parameter Block

    - by Steven Murawski
    I'm trying to programmatically create a parameter block for a function ( along the lines of this blog post ). I'm starting with a CommandMetadata object (from an existing function). I can create the ParameterMetadata object and set things like the ParameterType, the name, as well as some attributes. The problem I'm running into is that when I use the GetParamBlock method of the ProxyCommand class, none of my attributes that I set in the Attributes collection of the ParameterMetadata are generated. The problem this causes is that when the GetParamBlock is called, the new parameter is not annotated with the appropriate Parameter attribute. Example: function test { [CmdletBinding()] param ( [Parameter()] $InitialParameter) Write-Host "I don't matter." } $MetaData = New-Object System.Management.Automation.CommandMetaData (get-command test) $NewParameter = New-Object System.Management.Automation.ParameterMetadata 'NewParameter' $NewParameter.ParameterType = [string[]] $Attribute = New-Object System.Management.Automation.ParameterAttribute $Attribute.Position = 1 $Attribute.Mandatory = $true $Attribute.ValueFromPipeline = $true $NewParameter.Attributes.Add($Attribute) $MetaData.Parameters.Add('NewParameter', $NewParameter) [System.Management.Automation.ProxyCommand]::GetParamBlock($MetaData)

    Read the article

  • What alternatives to __attribute__ exist on 64-bit kernels?

    - by Saifi Khan
    Hi: Is there any alternative to the non-ISO, gcc-specific extension __attribute__ on 64-bit kernels? Three kinds that I've noticed are function attributes, type attributes and variable attributes. For example, I'd like to avoid using __attribute__((__packed__)) for structures passed over the network, even though some gcc-based code does use it. Any suggestions or pointers on how to entirely avoid __attribute__ usage in C systems/kernel code? Thanks, Saifi.
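    One portable pattern that avoids __attribute__((__packed__)) altogether is to serialize fields into a byte buffer by hand, so the wire format never depends on in-memory struct layout or padding. The sketch below is purely illustrative; the message shape and field names are invented for the example:

        #include <stdint.h>
        #include <stddef.h>

        /* Hypothetical wire message: 2-byte type, 4-byte length, 4-byte id, big-endian. */
        struct net_msg {
            uint16_t type;
            uint32_t length;
            uint32_t id;
        };

        /* Encode without relying on struct layout, packing or host byte order. */
        static size_t net_msg_encode(const struct net_msg *m, uint8_t buf[10])
        {
            size_t off = 0;
            buf[off++] = (uint8_t)(m->type >> 8);
            buf[off++] = (uint8_t)(m->type);
            buf[off++] = (uint8_t)(m->length >> 24);
            buf[off++] = (uint8_t)(m->length >> 16);
            buf[off++] = (uint8_t)(m->length >> 8);
            buf[off++] = (uint8_t)(m->length);
            buf[off++] = (uint8_t)(m->id >> 24);
            buf[off++] = (uint8_t)(m->id >> 16);
            buf[off++] = (uint8_t)(m->id >> 8);
            buf[off++] = (uint8_t)(m->id);
            return off; /* always 10 bytes on the wire, regardless of struct padding */
        }

    The decode side mirrors this with shifts and ORs, which also takes care of endianness, something packed structs by themselves do not address.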

    Read the article

  • How to Correct & Improve the Design of this Code?

    - by DaveDev
    HI Guys, I've been working on a little experiement to see if I could create a helper method to serialize any of my types to any type of HTML tag I specify. I'm getting a NullReferenceException when _writer = _viewContext.Writer; is called in protected virtual void Dispose(bool disposing) {/*...*/} I think I'm at a point where it almost works (I've gotten other implementations to work) and I was wondering if somebody could point out what I'm doing wrong? Also, I'd be interested in hearing suggestions on how I could improve the design? So basically, I have this code that will generate a Select box with a number of options: // the idea is I can use one method to create any complete tag of any type // and put whatever I want in the content area <% using (Html.GenerateTag<SelectTag>(Model, new { href = Url.Action("ActionName") })) { %> <%foreach (var fund in Model.Funds) {%> <% using (Html.GenerateTag<OptionTag>(fund)) { %> <%= fund.Name %> <% } %> <% } %> <% } %> This Html.GenerateTag helper is defined as: public static MMTag GenerateTag<T>(this HtmlHelper htmlHelper, object elementData, object attributes) where T : MMTag { return (T)Activator.CreateInstance(typeof(T), htmlHelper.ViewContext, elementData, attributes); } Depending on the type of T it'll create one of the types defined below, public class HtmlTypeBase : MMTag { public HtmlTypeBase() { } public HtmlTypeBase(ViewContext viewContext, params object[] elementData) { base._viewContext = viewContext; base.MergeDataToTag(viewContext, elementData); } } public class SelectTag : HtmlTypeBase { public SelectTag(ViewContext viewContext, params object[] elementData) { base._tag = new TagBuilder("select"); //base.MergeDataToTag(viewContext, elementData); } } public class OptionTag : HtmlTypeBase { public OptionTag(ViewContext viewContext, params object[] elementData) { base._tag = new TagBuilder("option"); //base.MergeDataToTag(viewContext, _elementData); } } public class AnchorTag : HtmlTypeBase { public AnchorTag(ViewContext viewContext, params object[] elementData) { base._tag = new TagBuilder("a"); //base.MergeDataToTag(viewContext, elementData); } } all of these types (anchor, select, option) inherit from HtmlTypeBase, which is intended to perform base.MergeDataToTag(viewContext, elementData);. This doesn't happen though. It works if I uncomment the MergeDataToTag methods in the derived classes, but I don't want to repeat that same code for every derived class I create. This is the definition for MMTag: public class MMTag : IDisposable { internal bool _disposed; internal ViewContext _viewContext; internal TextWriter _writer; internal TagBuilder _tag; internal object[] _elementData; public MMTag() {} public MMTag(ViewContext viewContext, params object[] elementData) { } public void Dispose() { Dispose(true /* disposing */); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (!_disposed) { _disposed = true; _writer = _viewContext.Writer; _writer.Write(_tag.ToString(TagRenderMode.EndTag)); } } protected void MergeDataToTag(ViewContext viewContext, object[] elementData) { Type elementDataType = elementData[0].GetType(); foreach (PropertyInfo prop in elementDataType.GetProperties()) { if (prop.PropertyType.IsPrimitive || prop.PropertyType == typeof(Decimal) || prop.PropertyType == typeof(String)) { object propValue = prop.GetValue(elementData[0], null); string stringValue = propValue != null ? 
propValue.ToString() : String.Empty; _tag.Attributes.Add(prop.Name, stringValue); } } var dic = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase); var attributes = elementData[1]; if (attributes != null) { foreach (PropertyDescriptor descriptor in TypeDescriptor.GetProperties(attributes)) { object value = descriptor.GetValue(attributes); dic.Add(descriptor.Name, value); } } _tag.MergeAttributes<string, object>(dic); _viewContext = viewContext; _viewContext.Writer.Write(_tag.ToString(TagRenderMode.StartTag)); } } Thanks Dave
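    One likely cause of the NullReferenceException, offered as an educated guess rather than a confirmed fix: in C#, a constructor that does not chain to a base constructor implicitly calls the parameterless one, so the SelectTag/OptionTag/AnchorTag constructors above never reach HtmlTypeBase(ViewContext, params object[]). MergeDataToTag is therefore never called, _viewContext is never assigned, and Dispose dereferences null. One way to keep the merge logic in a single place is to pass the tag name up to the base class, roughly:

        // Hedged sketch of the constructor-chaining idea; not the poster's code,
        // just one way the duplicated MergeDataToTag calls could be avoided.
        public class HtmlTypeBase : MMTag
        {
            protected HtmlTypeBase(string tagName, ViewContext viewContext, params object[] elementData)
            {
                _tag = new TagBuilder(tagName);           // tag exists before merging
                MergeDataToTag(viewContext, elementData); // sets _viewContext and writes the start tag
            }
        }

        public class SelectTag : HtmlTypeBase
        {
            public SelectTag(ViewContext viewContext, params object[] elementData)
                : base("select", viewContext, elementData) { }
        }

        public class OptionTag : HtmlTypeBase
        {
            public OptionTag(ViewContext viewContext, params object[] elementData)
                : base("option", viewContext, elementData) { }
        }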

    Read the article

  • SELECT product from subclass: How many queries do I need?

    - by Stefano
    I am building a database similar to the one described here, where I have products of different types, each type with its own attributes. I report a short version for convenience:

        product_type
        ============
        product_type_id    INT
        product_type_name  VARCHAR

        product
        =======
        product_id         INT
        product_name       VARCHAR
        product_type_id    INT  -> Foreign key to product_type.product_type_id
        ...                     (attributes common to all products)

        magazine
        ========
        magazine_id        INT
        title              VARCHAR
        product_id         INT  -> Foreign key to product.product_id
        ...                     (magazine-specific attributes)

        web_site
        ========
        web_site_id        INT
        name               VARCHAR
        product_id         INT  -> Foreign key to product.product_id
        ...                     (web-site-specific attributes)

    This way I do not need one huge table with a column for every attribute of every product type (most of which would then be NULL). How do I SELECT a product by product.product_id and see all its attributes? Do I have to run one query first to learn what type of product I am dealing with and then, through some logic, run another query to JOIN the right tables? Or is there a way to join everything together? (If, when I retrieve the information for a product_id, there are a lot of NULLs, that is fine at this point.) Thank you
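    One common answer to the closing question, offered as a sketch rather than a definitive design: LEFT JOIN every subtype table onto product in a single query; subtype columns that do not apply simply come back NULL, which the question says is acceptable. Table and column names follow the schema above; the literal 42 is a placeholder product_id.

        SELECT p.product_id,
               p.product_name,
               pt.product_type_name,
               m.title AS magazine_title,
               w.name  AS web_site_name
        FROM product p
        JOIN product_type pt ON pt.product_type_id = p.product_type_id
        LEFT JOIN magazine m ON m.product_id = p.product_id
        LEFT JOIN web_site w ON w.product_id = p.product_id
        WHERE p.product_id = 42;

    With this shape only one round trip is needed, and the application can look at product_type_name to decide which of the joined columns are meaningful.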

    Read the article

  • How can I read out the CSS text via Javascript as defined in the stylesheet?

    - by Monokai
    I was thinking of using Javascript to automatically transform CSS3 attributes like border-radius, transform, box-shadow, etc. to their browser specific counterparts. I did some research and found that you can iterate over the stylesheets defined via document.styleSheets. You can find the CSS rules via document.styleSheets[0].cssRules[0].cssText. I want to modify the CSS rules that contain CSS3 attributes by injecting the browser specific attributes with the appropriate vendor-prefix, like -webkit-border-radius, moz-border-radius, etc. However, it seems that the cssText property is preprocessed in each browser, to filter out CSS attributes that it doesn't understand. That practically breaks this idea. Question: is there any way to retrieve the CSS text exactly as defined in the stylesheet? Or: is there another way to accomplish this via Javascript? I'd like to maintain clean CSS files without the need for defining each attribute multiple times for each specific browser.
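    For reference, the enumeration described above looks roughly like the sketch below (document.styleSheets, cssRules, selectorText and insertRule are standard DOM APIs). It only illustrates the mechanics of walking the rules and appending a vendor-prefixed copy; it does not get around the underlying problem that cssText is already normalized by the browser before you can read it:

        // Duplicate border-radius declarations with vendor-prefixed copies.
        // Illustrative sketch only.
        for (var s = 0; s < document.styleSheets.length; s++) {
          var sheet = document.styleSheets[s];
          var rules = sheet.cssRules || sheet.rules;
          if (!rules) continue;
          for (var r = 0; r < rules.length; r++) {
            var rule = rules[r];
            if (rule.style && rule.style.getPropertyValue('border-radius')) {
              var value = rule.style.getPropertyValue('border-radius');
              sheet.insertRule(
                rule.selectorText + ' { -webkit-border-radius: ' + value +
                '; -moz-border-radius: ' + value + '; }',
                rules.length);
            }
          }
        }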

    Read the article

  • Python 3: list attributes within a class object

    - by MadSc13ntist
    Is there a way, given the following class, that I can grab a list of the attributes that exist on an instance? (This class is just a bland example; it is not my task at hand.)

        class new_class():
            def __init__(self, number):
                self.multi = int(number) * 2
                self.str = str(number)

        a = new_class(2)
        print(', '.join(a.SOMETHING))

    The intent is that "multi, str" will be printed. The point here is that if a class object has attributes added at different parts of a script, I can grab a quick listing of the attributes which are defined.
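    For what it is worth, one standard way to get exactly that listing (a sketch, not necessarily the only answer the poster had in mind) is to read the instance's __dict__, either directly or via the built-in vars():

        class new_class():
            def __init__(self, number):
                self.multi = int(number) * 2
                self.str = str(number)

        a = new_class(2)
        print(', '.join(vars(a)))     # -> multi, str
        print(', '.join(a.__dict__))  # equivalent

    vars(a) lists only per-instance attributes; dir(a) would also include methods and class-level names.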

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple! Combining likelihoods Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about, the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model. Facet Based Scores As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions. 
A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing. One Offer To Predict Them All In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance, based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product are also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices; based solely on the output of the facet based models defined earlier. The Switcheroo In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below. // set session attribute to contain facet based scores. // (this is the only input for the combined model) session().setAnalyticalScores(choice.getAnalyticalScores); // predict likelihood of acceptance for All Offers choice. CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers"); Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted"); // clear session attribute of facet based scores. session().setAnalyticalScores(null); // return likelihood. return la; This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab. Recording Events In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function. 
// set session attribute to contain facet based scores // (this is the only input for the combined model). session().setAnalyticalScores(choice.getAnalyticalScores); // record input event against All Offers choice. CombinedLikelihood.getChoice("AllOffers").recordEvent(event); // force learn at this moment using the Internal Dock entry point. Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis()); // clear session attribute of facet based scores. session().setAnalyticalScores(null); In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values. Reporting Results After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the that fact our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types; as evident from the report on the predictiveness of customer age for offer category acceptance. Conclusion Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD; and this with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers needs. We will not be able to elaborate on them all on this blog; and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models; so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • Grep... What patterns to extract href attributes, etc. with PHP's preg_grep?

    - by inktri
    Hi, I'm having trouble with grep.. Which four patterns should I use with PHP's preg_grep to extract all instances the "____" stuff in the strings below? 1. <h2><a ....>_____</a></h2> 2. <cite><a href="_____" .... >...</a></cite> 3. <cite><a .... >________</a></cite> 4. <span>_________</span> The dots denote some arbitrary characters while the underscores denote what I want. An example string is: </style></head> <body><div id="adBlock"><h2><a href="https://www.google.com/adsense/support/bin/request.py?contact=afs_violation&amp;hl=en" target="_blank">Ads by Google</a></h2> <div class="ad"><div><a href="http://www.google.com/aclk?sa=L&amp;ai=C4vfT4Sa3S97SLYO8NN6F-ckB5oq5sAGg6PKlDaT-kwUQASCF4p8UKARQtobS9AVgyZbRhsijoBnIAQGqBBxP0OSEnIsuRIv3ZERDm8GiSKZSnjrVf1kVq-_Y&amp;num=1&amp;sig=AGiWqtwG1qHnwpZ_5BNrjrzzXO5Or6EDMg&amp;q=http://www.crackle.com/c/Spider-Man_The_New_Animated_Series/%3Futm_source%3Dgoogle%26utm_medium%3Dcpc%26utm_campaign%3DGST_10016_CRKL_US_PRD_S_TeleV_SPID_Tele_Spider-Man%26utm_term%3Dspiderman%26utm_content%3Ds264Yjg9f_3472685742_487lrz1638" class="titleLink" target="_parent">Spider-<b>Man</b> Animated Serie</a></div> <span>See Your Favorite Spiderman <br> Episodes for Free. Only on Crackle.</span> <cite><a href="http://www.google.com/aclk?sa=L&amp;ai=C4vfT4Sa3S97SLYO8NN6F-ckB5oq5sAGg6PKlDaT-kwUQASCF4p8UKARQtobS9AVgyZbRhsijoBnIAQGqBBxP0OSEnIsuRIv3ZERDm8GiSKZSnjrVf1kVq-_Y&amp;num=1&amp;sig=AGiWqtwG1qHnwpZ_5BNrjrzzXO5Or6EDMg&amp;q=http://www.crackle.com/c/Spider-Man_The_New_Animated_Series/%3Futm_source%3Dgoogle%26utm_medium%3Dcpc%26utm_campaign%3DGST_10016_CRKL_US_PRD_S_TeleV_SPID_Tele_Spider-Man%26utm_term%3Dspiderman%26utm_content%3Ds264Yjg9f_3472685742_487lrz1638" class="domainLink" target="_parent">www.Crackle.com/Spiderman</a></cite></div> <div class="ad"><div><a href="http://www.google.com/aclk?sa=l&amp;ai=CnQFi4Sa3S97SLYO8NN6F-ckB3M7nQtyU2PQEq6bCBRACIIXinxQoBFCm15KB-f____8BYMmW0YbIo6AZoAHiq_X-A8gBAaoEIU_Q9JKLiy1MiwdnHpZoBnmpR1J8pP2jpTwMx2uj2nN4WA&amp;num=2&amp;sig=AGiWqtwDrI5pWBCncdDc80FKt32AJMAQ6A&amp;q=http://www.costumeexpress.com/browse/TV-Movies/_/N-1z141uu/Ntt-batman/results1.aspx%3FREF%3DKNC-CEgoogle" class="titleLink" target="_parent">Kids <b>Batman</b> Costumes</a></div> <span>Great Selection of <b>Batman</b> &amp; Batgirl <br> Costumes For Kids. Ships Same Day!</span> <cite><a href="http://www.google.com/aclk?sa=l&amp;ai=CnQFi4Sa3S97SLYO8NN6F-ckB3M7nQtyU2PQEq6bCBRACIIXinxQoBFCm15KB-f____8BYMmW0YbIo6AZoAHiq_X-A8gBAaoEIU_Q9JKLiy1MiwdnHpZoBnmpR1J8pP2jpTwMx2uj2nN4WA&amp;num=2&amp;sig=AGiWqtwDrI5pWBCncdDc80FKt32AJMAQ6A&amp;q=http://www.costumeexpress.com/browse/TV-Movies/_/N-1z141uu/Ntt-batman/results1.aspx%3FREF%3DKNC-CEgoogle" class="domainLink" target="_parent">www.CostumeExpress.com</a></cite></div> <div class="ad"><div><a href="http://www.google.com/aclk?sa=l&amp;ai=CAMYT4Sa3S97SLYO8NN6F-ckB3ZnWmgGdoNLrDaumwgUQAyCF4p8UKARQrqSVxwdgyZbRhsijoBmgAZH77uwDyAEBqgQYT9DU7oqLLEyLB2dHlxZFnQzyeg-yHt88&amp;num=3&amp;sig=AGiWqtzqAphZ9DLDiEFBJlb0Ou_1HyEyyA&amp;q=http://www.OfficialBatmanCostumes.com" class="titleLink" target="_parent"><b>Batman</b> Costume</a></div> <span>Official <b>Batman</b> Costumes. 
<br> Huge Selection &amp; Same Day Shipping!</span> <cite><a href="http://www.google.com/aclk?sa=l&amp;ai=CAMYT4Sa3S97SLYO8NN6F-ckB3ZnWmgGdoNLrDaumwgUQAyCF4p8UKARQrqSVxwdgyZbRhsijoBmgAZH77uwDyAEBqgQYT9DU7oqLLEyLB2dHlxZFnQzyeg-yHt88&amp;num=3&amp;sig=AGiWqtzqAphZ9DLDiEFBJlb0Ou_1HyEyyA&amp;q=http://www.OfficialBatmanCostumes.com" class="domainLink" target="_parent">www.OfficialBatmanCostumes.com</a></cite></div> <div class="ad"><div><a href="http://www.google.com/aclk?sa=l&amp;ai=C767t4Sa3S97SLYO8NN6F-ckBkZfSfoOppaMHq6bCBRAEIIXinxQoBFDX2bw6YMmW0YbIo6AZoAHpprP8A8gBAaoEG0_QhJSMiytMiwdnHpZoF3g0Uj8_Vl2r4TpI_g&amp;num=4&amp;sig=AGiWqtyGO2DnFq_jMhP6ufj8pufT9sWQWA&amp;q=http://www.discountsuperherocostumes.com/batman-costumes.html" class="titleLink" target="_parent">Discount <b>Batman</b> Costumes</a></div> <span>Discount adult and kids <b>batman</b> <br> superhero costumes.</span> <cite><a href="http://www.google.com/aclk?sa=l&amp;ai=C767t4Sa3S97SLYO8NN6F-ckBkZfSfoOppaMHq6bCBRAEIIXinxQoBFDX2bw6YMmW0YbIo6AZoAHpprP8A8gBAaoEG0_QhJSMiytMiwdnHpZoF3g0Uj8_Vl2r4TpI_g&amp;num=4&amp;sig=AGiWqtyGO2DnFq_jMhP6ufj8pufT9sWQWA&amp;q=http://www.discountsuperherocostumes.com/batman-costumes.html" class="domainLink" target="_parent">www.discountsuperherocostumes.com</a></cite></div></div></body> <script type="text/javascript"> var relay = ""; </script> <script type="text/javascript" src="/uds/?file=ads&amp;v=1&amp;packages=searchiframe&amp;nodependencyload=true"></script></html> Thanks!
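    A hedged suggestion: preg_grep only filters an array of strings against one pattern, so for capturing the underlined parts preg_match_all is usually the better fit. The four patterns below are untested sketches against the sample markup above; the (.*?) or ([^"]*) group is the captured "____" part, and $html is assumed to hold the page source:

        <?php
        // 1. <h2><a ...>CAPTURE</a></h2>
        preg_match_all('#<h2><a[^>]*>(.*?)</a></h2>#s', $html, $m1);
        // 2. <cite><a href="CAPTURE" ...>...</a></cite>
        preg_match_all('#<cite><a href="([^"]*)"#', $html, $m2);
        // 3. <cite><a ...>CAPTURE</a></cite>
        preg_match_all('#<cite><a[^>]*>(.*?)</a></cite>#s', $html, $m3);
        // 4. <span>CAPTURE</span>
        preg_match_all('#<span>(.*?)</span>#s', $html, $m4);
        // Captured values end up in $m1[1], $m2[1], $m3[1] and $m4[1].

    For anything beyond a quick scrape, DOMDocument plus XPath tends to be less brittle than regular expressions over HTML.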

    Read the article

  • Class-Level Model Validation with EF Code First and ASP.NET MVC 3

    - by ScottGu
    Earlier this week the data team released the CTP5 build of the new Entity Framework Code-First library.  In my blog post a few days ago I talked about a few of the improvements introduced with the new CTP5 build.  Automatic support for enforcing DataAnnotation validation attributes on models was one of the improvements I discussed.  It provides a pretty easy way to enable property-level validation logic within your model layer. You can apply validation attributes like [Required], [Range], and [RegularExpression] – all of which are built-into .NET 4 – to your model classes in order to enforce that the model properties are valid before they are persisted to a database.  You can also create your own custom validation attributes (like this cool [CreditCard] validator) and have them be automatically enforced by EF Code First as well.  This provides a really easy way to validate property values on your models.  I showed some code samples of this in action in my previous post. Class-Level Model Validation using IValidatableObject DataAnnotation attributes provides an easy way to validate individual property values on your model classes.  Several people have asked - “Does EF Code First also support a way to implement class-level validation methods on model objects, for validation rules than need to span multiple property values?”  It does – and one easy way you can enable this is by implementing the IValidatableObject interface on your model classes. IValidatableObject.Validate() Method Below is an example of using the IValidatableObject interface (which is built-into .NET 4 within the System.ComponentModel.DataAnnotations namespace) to implement two custom validation rules on a Product model class.  The two rules ensure that: New units can’t be ordered if the Product is in a discontinued state New units can’t be ordered if there are already more than 100 units in stock We will enforce these business rules by implementing the IValidatableObject interface on our Product class, and by implementing its Validate() method like so: The IValidatableObject.Validate() method can apply validation rules that span across multiple properties, and can yield back multiple validation errors. Each ValidationResult returned can supply both an error message as well as an optional list of property names that caused the violation (which is useful when displaying error messages within UI). Automatic Validation Enforcement EF Code-First (starting with CTP5) now automatically invokes the Validate() method when a model object that implements the IValidatableObject interface is saved.  You do not need to write any code to cause this to happen – this support is now enabled by default. This new support means that the below code – which violates one of our above business rules – will automatically throw an exception (and abort the transaction) when we call the “SaveChanges()” method on our Northwind DbContext: In addition to reactively handling validation exceptions, EF Code First also allows you to proactively check for validation errors.  Starting with CTP5, you can call the “GetValidationErrors()” method on the DbContext base class to retrieve a list of validation errors within the model objects you are working with.  GetValidationErrors() will return a list of all validation errors – regardless of whether they are generated via DataAnnotation attributes or by an IValidatableObject.Validate() implementation.  
Below is an example of proactively using the GetValidationErrors() method to check (and handle) errors before trying to call SaveChanges(): ASP.NET MVC 3 and IValidatableObject ASP.NET MVC 2 included support for automatically honoring and enforcing DataAnnotation attributes on model objects that are used with ASP.NET MVC’s model binding infrastructure.  ASP.NET MVC 3 goes further and also honors the IValidatableObject interface.  This combined support for model validation makes it easy to display appropriate error messages within forms when validation errors occur.  To see this in action, let’s consider a simple Create form that allows users to create a new Product: We can implement the above Create functionality using a ProductsController class that has two “Create” action methods like below: The first Create() method implements a version of the /Products/Create URL that handles HTTP-GET requests - and displays the HTML form to fill-out.  The second Create() method implements a version of the /Products/Create URL that handles HTTP-POST requests - and which takes the posted form data, ensures that is is valid, and if it is valid saves it in the database.  If there are validation issues it redisplays the form with the posted values.  The razor view template of our “Create” view (which renders the form) looks like below: One of the nice things about the above Controller + View implementation is that we did not write any validation logic within it.  The validation logic and business rules are instead implemented entirely within our model layer, and the ProductsController simply checks whether it is valid (by calling the ModelState.IsValid helper method) to determine whether to try and save the changes or redisplay the form with errors. The Html.ValidationMessageFor() helper method calls within our view simply display the error messages our Product model’s DataAnnotations and IValidatableObject.Validate() method returned.  We can see the above scenario in action by filling out invalid data within the form and attempting to submit it: Notice above how when we hit the “Create” button we got an error message.  This was because we ticked the “Discontinued” checkbox while also entering a value for the UnitsOnOrder (and so violated one of our business rules).  You might ask – how did ASP.NET MVC know to highlight and display the error message next to the UnitsOnOrder textbox?  It did this because ASP.NET MVC 3 now honors the IValidatableObject interface when performing model binding, and will retrieve the error messages from validation failures with it. The business rule within our Product model class indicated that the “UnitsOnOrder” property should be highlighted when the business rule we hit was violated: Our Html.ValidationMessageFor() helper method knew to display the business rule error message (next to the UnitsOnOrder edit box) because of the above property name hint we supplied: Keeping things DRY ASP.NET MVC and EF Code First enables you to keep your validation and business rules in one place (within your model layer), and avoid having it creep into your Controllers and Views.  Keeping the validation logic in the model layer helps ensure that you do not duplicate validation/business logic as you add more Controllers and Views to your application.  It allows you to quickly change your business rules/validation logic in one single place (within your model layer) – and have all controllers/views across your application immediately reflect it.  
This help keep your application code clean and easily maintainable, and makes it much easier to evolve and update your application in the future. Summary EF Code First (starting with CTP5) now has built-in support for both DataAnnotations and the IValidatableObject interface.  This allows you to easily add validation and business rules to your models, and have EF automatically ensure that they are enforced anytime someone tries to persist changes of them to a database.  ASP.NET MVC 3 also now supports both DataAnnotations and IValidatableObject as well, which makes it even easier to use them with your EF Code First model layer – and then have the controllers/views within your web layer automatically honor and support them as well.  This makes it easy to build clean and highly maintainable applications. You don’t have to use DataAnnotations or IValidatableObject to perform your validation/business logic.  You can always roll your own custom validation architecture and/or use other more advanced validation frameworks/patterns if you want.  But for a lot of applications this built-in support will probably be sufficient – and provide a highly productive way to build solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
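    The Validate() implementation itself appears only as a screenshot in the original post; as a stand-in, a sketch of the two business rules described above could look like the following. Property names such as Discontinued, UnitsOnOrder and UnitsInStock are assumed from the narrative rather than copied from the post:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class Product : IValidatableObject
        {
            public int ProductID { get; set; }

            [Required]
            public string ProductName { get; set; }

            public int UnitsInStock { get; set; }
            public int UnitsOnOrder { get; set; }
            public bool Discontinued { get; set; }

            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                // Rule 1: new units can't be ordered for a discontinued product.
                if (Discontinued && UnitsOnOrder > 0)
                    yield return new ValidationResult(
                        "Units cannot be ordered for a discontinued product",
                        new[] { "UnitsOnOrder" });

                // Rule 2: new units can't be ordered if more than 100 are already in stock.
                if (UnitsInStock > 100 && UnitsOnOrder > 0)
                    yield return new ValidationResult(
                        "Units cannot be ordered when more than 100 are already in stock",
                        new[] { "UnitsOnOrder" });
            }
        }

    The second argument to ValidationResult is the list of member names, which is what lets Html.ValidationMessageFor() highlight the UnitsOnOrder field as described above.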

    Read the article

  • Finding a person in the forest

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved finding a person in the forest or Limiting the AD result in SharePoint People Picker There are times when we need to limit the SharePoint audience of certain farms or servers or site collections to a particular audience. One of my experiences involved limiting access to US citizens, another to a particular location. Now, most of us – your humble servant included – are not Active Directory experts – but we must be able to handle the “audience restrictions” as required. So here is how it’s done in a nutshell. Important note. Not all could be done in PowerShell (at least not yet)! There are no Windows PowerShell commands to configure People Picker. The stsadm command is: stsadm -o setproperty -pn peoplepicker-searchadcustomquery -pv ADQuery –url http://somethingOrOther Note the long-hyphenated property name. Now to filling the ADQuery.   LDAP Query in a nutshell Syntax LDAP is no older than SQL and an LDAP query is actually a query against the LDAP Database. LDAP attributes are the equivalent of Database columns, so why do we have to learn a new query language? Beats me! But we must, so here it is. The syntax of an LDAP query string is made of individual statements with relational operators including: = Equal <= Lower than or equal >= Greater than or equal… and memberOf – a group membership. ! Not * Wildcard Equal and memberOf are the most commonly used. Checking for absence uses the ! – not and the * - wildcard Example: (SN=Grant) All whose last name – SurName – is Grant Example: (!(SN=Grant)) All except Grant Example: (!(SN=*)) all where there is no SurName i.e SurName is absent (probably Rappers). Example: (CN=MyGroup) Common Name is MyGroup.  Example: (GN=J*) all the Given Names that start with J (JJ, Jane, Jon, John, etc.) The cryptic SN, CN, GN, etc. are attributes and more about them later All the queries are enclosed in parentheses (Query). Complex queries are comprised of sets that are in AND or OR conditions. AND is denoted by the ampersand (&) and the OR is denoted by the vertical pipe (|). The general syntax is that of the Prefix polish notation where the operand precedes the variables. E.g +ab is the sum of a and b. In an LDAP query (&(A)(B)) will garner the objects for which both A and B are true. In an LDAP query (&(A)(B)(C)) will garner the objects for which A, B and C are true. There’s no limit to the number of conditions. In an LDAP query (|(A)(B)) will garner the objects for which either A or B are true. In an LDAP query (|(A)(B)(C)) will garner the objects for which at least one of A, B and C is true. There’s no limit to the number of conditions. More complex queries have both types of conditions and the parentheses determine the order of operations. Attributes Now let’s get into the SN, CN, GN, and other attributes of the query SN – is the SurName (last name) GN – is the Given Name (first name) CN – is the Common Name, usually GN followed by SN OU – is an Organization Unit such as division, department etc. DC – is a Domain Content in the AD forest l – lower case ‘L’ stands for location. Jerusalem anybody? Or Katmandu. UPN – User Principal Name, is usually the first part of an email address. By nature it is unique in the forest. Most systems set the UPN to be the first initial followed by the SN of the person involved. Some limit the total to 8 characters. If we have many ‘jsmith’ we have to somehow distinguish them from each other. DN – is the distinguished name – a name unique to AD forest in which it lives. 
Usually it’s a CN with some domain or group distinguishers. DN is important in conjunction with the memberOf relation. Groups have stricter requirement. Each group has to have a unique name - its CN and it has to be unique regardless of its place. See more below. All of the attributes are case insensitive. CN, cn, Cn, and cN are identical. objectCategory is an element that requires special consideration. AD contains many different object like computers, printers, and of course people and groups. In the queries below, we’re limiting our search to people (person). Putting it altogether Let’s get a list of all the Johns in the SPAdmin group of the Jerusalem that local domain. (&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local)) The memberOf=cn=SPAdmin uses the cn (Common Name) of the SPAdmin group. This is how the memberOf relation is used. ‘SPAdmin’ is actually the DN of the group. Also the memberOf relation does not allow wild cards (*) in the group name. Also, you are limited to at most one ‘OU’ entry. Let’s add Marvin Minsky to the search above. |(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))(CN=Marvin Minsky) Here I added the or pipeline at the beginning of the query and put the CN requirement for Minsky at the end. Note that if Marvin was already in the prior result, he’s not going to be listed twice. One last note: You may see a dryer but more complete list of attributes rules and examples in: http://www.tek-tips.com/faqs.cfm?fid=5667 And finally (thus negating the claim that my previous note was last), to the best of my knowledge there are 3 more ways to limit the audience. One is to use the peoplepicker-searchadcustomfilter property using the same ADQuery. This works only in SP1 and above. The second is to limit the search to users within this particular site collection – the property name is peoplepicker-onlysearchwithinsitecollection and the value is yes (-pv yes) And the third is –pn peoplepicker-serviceaccountdirectorypaths –pv “OU=ou1,DC=dc1…..” Again you are limited to at most one ‘OU’ phrase – no OU=ou1,OU=ou2… And now the real end. The main property discussed in this sprawling and seemingly endless monogram – peoplepicker-searchadcustomquery - is the most general way of getting the job done. Here are a few examples of command lines that worked and some that didn’t. Can you see why? C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (!Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC) Command line error. Too many OUs C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully. 
C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName,DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully.   That’s all folks!
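    Pulling the pieces above together, a complete command (the site URL and the domain components are placeholders) that limits People Picker to members of the SPAdmin group in the Jerusalem OU would look something like this; the quotes around the query keep cmd.exe from interpreting the ampersand:

        stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomquery -pv "(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))"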

    Read the article

  • Silverlight and .NET 4 tools

    - by Fabrice Marguerie
    I've just added two new attributes to SharpToolbox.com: Built for Silverlight and Built for .NET 4. There are already more than 30 tools tagged as offering support for Silverlight, and 20 tools for .NET 4. You can search for tools, libraries and add-ins with these attributes using the search page. PS: if you have submitted tools, be patient, I have a lot to process...

    Read the article

  • glGetActiveAttrib on Android NDK

    - by user408952
    In my code-base I need to link the vertex declarations from a mesh to the attributes of a shader. To do this I retrieve all the attribute names after linking the shader. I use the following code (with some added debug info since it's not really working): int shaders[] = { m_ps, m_vs }; if(linkProgram(shaders, 2)) { ASSERT(glIsProgram(m_program) == GL_TRUE, "program is invalid"); int attrCount = 0; GL_CHECKED(glGetProgramiv(m_program, GL_ACTIVE_ATTRIBUTES, &attrCount)); int maxAttrLength = 0; GL_CHECKED(glGetProgramiv(m_program, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttrLength)); LOG_INFO("shader", "got %d attributes for '%s' (%d) (maxlen: %d)", attrCount, name, m_program, maxAttrLength); m_attrs.reserve(attrCount); GLsizei attrLength = -1; GLint attrSize = -1; GLenum attrType = 0; char tmp[256]; for(int i = 0; i < attrCount; i++) { tmp[0] = 0; GL_CHECKED(glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)); LOG_INFO("shader", "%d: %d %d '%s'", i, attrLength, attrSize, tmp); m_attrs.append(String(tmp, attrLength)); } } GL_CHECKED is a macro that calls the function and calls glGetError() to see if something went wrong. This code works perfectly on Windows 7 using ANGLE and gives this this output: info:shader: got 2 attributes for 'static/simplecolor.glsl' (3) (maxlen: 11) info:shader: 0: 7 1 'a_Color' info:shader: 1: 10 1 'a_Position' But on my Nexus 7 (1st gen) I get the following (the errors are the output from the GL_CHECKED macro): I/testgame:shader(30865): got 2 attributes for 'static/simplecolor.glsl' (3) (maxlen: 11) E/testgame:gl(30865): 'glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)' failed: INVALID_VALUE [jni/src/../../../../src/Game/Asset/ShaderAsset.cpp:50] I/testgame:shader(30865): 0: -1 -1 '' E/testgame:gl(30865): 'glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)' failed: INVALID_VALUE [jni/src/../../../../src/Game/Asset/ShaderAsset.cpp:50] I/testgame:shader(30865): 1: -1 -1 '' I.e. the call to glGetActiveAttrib gives me an INVALID_VALUE. The opengl docs says this about the possible errors: GL_INVALID_VALUE is generated if program is not a value generated by OpenGL. This is not the case, I added an ASSERT to make sure glIsProgram(m_program) == GL_TRUE, and it doesn't trigger. GL_INVALID_OPERATION is generated if program is not a program object. Different error. GL_INVALID_VALUE is generated if index is greater than or equal to the number of active attribute variables in program. i is 0 and 1, and the number of active attribute variables are 2, so this isn't the case. GL_INVALID_VALUE is generated if bufSize is less than 0. Well, it's not zero, it's 256. Does anyone have an idea what's causing this? Am I just lucky that it works in ANGLE, or is the nvidia tegra driver wrong?

    Read the article

  • Get Info From Database, or Build Inferred Info?

    - by Zaemz
    Does it make more sense to store and retrieve properties or information directly related to an item in a database, or, in a case where a product's ID itself encodes information about it, should the information be derived from that ID? Example item SKU: 4HBU12

        4  - the number of motors
        H  - the voltage
        B  - the color, blue
        U  - the model
        12 - the length

    Should I store those individual attributes as well as the SKU, or should I store only the SKU and build the attributes from it?
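    For concreteness, the "store the attributes explicitly and keep the SKU too" option from the question might look like the sketch below; the column names and types are invented for illustration, not a recommendation:

        CREATE TABLE item (
            sku         VARCHAR(16) PRIMARY KEY,  -- e.g. '4HBU12'
            motor_count INT,                      -- leading digit
            voltage     CHAR(1),                  -- 'H'
            color       CHAR(1),                  -- 'B' = blue
            model       CHAR(1),                  -- 'U'
            item_length INT                       -- trailing digits
        );

    The usual trade-off: parsing the SKU on the fly keeps a single source of truth, while explicit columns make the attributes indexable and queryable without string manipulation.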

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging to one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble? Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this? Here is the /etc/hdparm.conf configuration /dev/sda { apm = 127 spindown_time = 120 } and the most relevant parts of smartctl --attributes /dev/sda: smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 001 001 000 Old_age Always - 185875 9 Power_On_Hours 0x0032 090 090 000 Old_age Always - 7820 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 109 193 Load_Cycle_Count 0x0032 118 118 000 Old_age Always - 246833 194 Temperature_Celsius 0x0022 107 098 000 Old_age Always - 36 As I generally prefer my drives to last more than a year, any advice is appreciated.
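    One educated guess, offered tentatively: with apm = 127 in hdparm.conf the drive is permitted to park its heads aggressively, and on many laptop-class drives that shows up exactly as runaway Start_Stop_Count/Load_Cycle_Count values. A quick experiment is to raise the APM level to the most conservative value that still avoids aggressive power management and watch whether the counters stop climbing:

        # show the current APM setting
        sudo hdparm -B /dev/sda

        # 254 = maximum performance without aggressive power management
        # (255 disables APM entirely, on drives that support it)
        sudo hdparm -B 254 /dev/sda

        # check again after a day or so
        sudo smartctl --attributes /dev/sda | grep -E 'Start_Stop|Load_Cycle'

    To make the change persistent, the same value can go into /etc/hdparm.conf as apm = 254 for /dev/sda.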

    Read the article

  • OIM 11g notification framework

    - by Rajesh G Kumar
    OIM 11g has introduced an improved and template based Notifications framework. New release has removed the limitation of sending text based emails (out-of-the-box emails) and enhanced to support html features. New release provides in-built out-of-the-box templates for events like 'Reset Password', 'Create User Self Service' , ‘User Deleted' etc. Also provides new APIs to support custom templates to send notifications out of OIM. OIM notification framework supports notification mechanism based on events, notification templates and template resolver. They are defined as follows: Ø Events are defined as XML file and imported as part of MDS database in order to make notification event available for use. Ø Notification templates are created using OIM advance administration console. The template contains the text and the substitution 'variables' which will be replaced with the data provided by the template resolver. Templates support internationalization and can be defined as HTML or in form of simple text. Ø Template resolver is a Java class that is responsible to provide attributes and data to be used at runtime and design time. It must be deployed following the OIM plug-in framework. Resolver data provided at design time is to be used by end user to design notification template with available entity variables and it also provides data at runtime to replace the designed variable with value to be displayed to recipients. Steps to define custom notifications in OIM 11g are: Steps# Steps 1. Define the Notification Event 2. Create the Custom Template Resolver class 3. Create Template with notification contents to be sent to recipients 4. Create Event triggering spots in OIM 1. Notification Event metadata The Notification Event is defined as XML file which need to be imported into MDS database. An event file must be compliant with the schema defined by the notification engine, which is NotificationEvent.xsd. The event file contains basic information about the event.XSD location in MDS database: “/metadata/iam-features-notification/NotificationEvent.xsd”Schema file can be viewed by exporting file from MDS using weblogicExportMetadata.sh script.Sample Notification event metadata definition: 1: <?xml version="1.0" encoding="UTF-8"?> 2: <Events xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance xsi:noNamespaceSchemaLocation="../../../metadata/NotificationEvent.xsd"> 3: <EventType name="Sample Notification"> 4: <StaticData> 5: <Attribute DataType="X2-Entity" EntityName="User" Name="Granted User"/> 6: </StaticData> 7: <Resolver class="com.iam.oim.demo.notification.DemoNotificationResolver"> 8: <Param DataType="91-Entity" EntityName="Resource" Name="ResourceInfo"/> 9: </Resolver> 10: </EventType> 11: </Events> Line# Description 1. XML file notation tag 2. Events is root tag 3. EventType tag is to declare a unique event name which will be available for template designing 4. The StaticData element lists a set of parameters which allow user to add parameters that are not data dependent. In other words, this element defines the static data to be displayed when notification is to be configured. An example of static data is the User entity, which is not dependent on any other data and has the same set of attributes for all event instances and notification templates. Available attributes are used to be defined as substitution tokens in the template. 5. Attribute tag is child tag for StaticData to declare the entity and its data type with unique reference name. 
User entity is most commonly used Entity as StaticData. 6. StaticData closing tag 7. Resolver tag defines the resolver class. The Resolver class must be defined for each notification. It defines what parameters are available in the notification creation screen and how those parameters are replaced when the notification is to be sent. Resolver class resolves the data dynamically at run time and displays the attributes in the UI. 8. The Param DataType element lists a set of parameters which allow user to add parameters that are data dependent. An example of the data dependent or a dynamic entity is a resource object which user can select at run time. A notification template is to be configured for the resource object. Corresponding to the resource object field, a lookup is displayed on the UI. When a user selects the event the call goes to the Resolver class provided to fetch the fields that are displayed in the Available Data list, from which user can select the attribute to be used on the template. Param tag is child tag to declare the entity and its data type with unique reference name. 9. Resolver closing tag 10 EventType closing tag 11. Events closing tag Note: - DataType needs to be declared as “X2-Entity” for User entity and “91-Entity” for Resource or Organization entities. The dynamic entities supported for lookup are user, resource, and organization. Once notification event metadata is defined, need to be imported into MDS database. Fully qualified resolver class name need to be define for XML but do not need to load the class in OIM yet (it can be loaded later). 2. Coding the notification resolver All event owners have to provide a resolver class which would resolve the data dynamically at run time. Custom resolver class must implement the interface oracle.iam.notification.impl.NotificationEventResolver and override the implemented methods with actual implementation. It has 2 methods: S# Methods Descriptions 1. public List<NotificationAttribute> getAvailableData(String eventType, Map<String, Object> params); This API will return the list of available data variables. These variables will be available on the UI while creating/modifying the Templates and would let user select the variables so that they can be embedded as a token as part of the Messages on the template. These tokens are replaced by the value passed by the resolver class at run time. Available data is displayed in a list. The parameter "eventType" specifies the event Name for which template is to be read.The parameter "params" is the map which has the entity name and the corresponding value for which available data is to be fetched. Sample code snippet: List<NotificationAttribute> list = new ArrayList<NotificationAttribute>(); long objKey = (Long) params.get("resource"); //Form Field details based on Resource object key HashMap<String, Object> formFieldDetail = getObjectFormName(objKey); for (Iterator<?> itrd = formFieldDetail.entrySet().iterator(); itrd.hasNext(); ) { NotificationAttribute availableData = new NotificationAttribute(); Map.Entry formDetailEntrySet = (Entry<?, ?>)itrd.next(); String fieldLabel = (String)formDetailEntrySet.getValue(); availableData.setName(fieldLabel); list.add(availableData); } return list; 2. Public HashMap<String, Object> getReplacedData(String eventType, Map<String, Object> params); This API would return the resolved value of the variables present on the template at the runtime when notification is being sent. 
The parameter "eventType" specifies the event Name for which template is to be read.The parameter "params" is the map which has the base values such as usr_key, obj_key etc required by the resolver implementation to resolve the rest of the variables in the template. Sample code snippet: HashMap<String, Object> resolvedData = new HashMap<String, Object>();String firstName = getUserFirstname(params.get("usr_key"));resolvedData.put("fname", firstName); String lastName = getUserLastName(params.get("usr_key"));resolvedData.put("lname", lastname);resolvedData.put("count", "1 million");return resolvedData; This code must be deployed as per OIM 11g plug-in framework. The XML file defining the plug-in is as below: <?xml version="1.0" encoding="UTF-8"?> <oimplugins xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <plugins pluginpoint="oracle.iam.notification.impl.NotificationEventResolver"> <plugin pluginclass= " com.iam.oim.demo.notification.DemoNotificationResolver" version="1.0" name="Sample Notification Resolver"/> </plugins> </oimplugins> 3. Defining the template To create a notification template: Log in to the Oracle Identity Administration Click the System Management tab and then click the Notification tab From the Actions list on the left pane, select Create On the Create page, enter values for the following fields under the Template Information section: Template Name: Demo template Description Text: Demo template Under the Event Details section, perform the following: From the Available Event list, select the event for which the notification template is to be created from a list of available events. Depending on your selection, other fields are displayed in the Event Details section. Note that the template Sample Notification Event created in the previous step being used as the notification event. The contents of the Available Data drop down are based on the event XML StaticData tag, the drop down basically lists all the attributes of the entities defined in that tag. Once you select an element in the drop down, it will show up in the Selected Data text field and then you can just copy it and paste it into either the message subject or the message body fields prefixing $ symbol. Example if list has attribute like First_Name then message body will contains this as $First_Name which resolver will parse and replace it with actual value at runtime. In the Resource field, select a resource from the lookup. This is the dynamic data defined by the Param DataType element in the XML definition. Based on selected resource getAvailableData method of resolver will be called to fetch the resource object attribute detail, if method is overridden with required implementation. For current scenario, Map<String, Object> params will get populated with object key as value and key as “resource” in the map. This is the only input will be provided to resolver at design time. You need to implement the further logic to fetch the object attributes detail to populate the available Data list. List string should not have space in between, if object attributes has space for attribute name then implement logic to replace the space with ‘_’ before populating the list. Example if attribute name is “First Name” then make it “First_Name” and populate the list. Space is not supported while you try to parse and replace the token at run time with real value. 
Make a note that Available Data and Selected Data are used only to define the substitution tokens; they do not define the final data that will be sent in the notification. OIM invokes the resolver class to get the data and make the substitutions.
Under the Locale Information section, enter values in the following fields:
To specify a form of encoding, select either UTF-8 or ASCII.
In the Message Subject field, enter a subject for the notification.
From the Type options, select the data type in which you want to send the message. You can choose between HTML and Text/Plain.
In the Short Message field, enter a gist of the message in very few words.
In the Long Message field, enter the message that will be sent as the notification, including the Available Data tokens to be replaced by the resolver at run time.
After you have entered the required values in all the fields, click Save. A message is displayed confirming the creation of the notification template. Click OK.
4. Triggering the event
A notification event can be triggered from different places in OIM. The logic behind the triggering must be coded and plugged into OIM. Examples of triggering points for notifications:
Event handlers: post-process notifications for specific data updates on OIM users
Process tasks: to notify users that a provisioning task was executed by OIM
Scheduled tasks: to notify something related to the task
The scheduled job has two parameters:
Template Name: defines the notification template to be sent
User Login: defines the user record that will provide the data to be sent in the notification
Sample code snippet:
public void execute(String templateName, String userId) {
    try {
        NotificationService notService = Platform.getService(NotificationService.class);
        NotificationEvent eventToSend = this.createNotificationEvent(templateName, userId);
        notService.notify(eventToSend);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private NotificationEvent createNotificationEvent(String poTemplateName, String poUserId) {
    NotificationEvent event = new NotificationEvent();
    String[] receiverUserIds = { poUserId };
    event.setUserIds(receiverUserIds);
    event.setTemplateName(poTemplateName);
    event.setSender(null);
    HashMap<String, Object> templateParams = new HashMap<String, Object>();
    templateParams.put("USER_LOGIN", poUserId);
    event.setParams(templateParams);
    return event;
}

public HashMap getAttributes() {
    return null;
}

public void setAttributes() {}
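Putting steps 1 and 2 together, the skeleton below shows one way the DemoNotificationResolver class registered in the plug-in XML above could look. Treat it as a sketch rather than a reference implementation: the method signatures follow the two interface methods quoted earlier, the getObjectFormName, getUserFirstname and getUserLastName helpers are stand-ins for your own lookup logic (stubbed here only so the class compiles), and the import for NotificationAttribute may need adjusting for your OIM release.
package com.iam.oim.demo.notification;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import oracle.iam.notification.impl.NotificationEventResolver;
import oracle.iam.notification.vo.NotificationAttribute; // adjust the package if it differs in your release

public class DemoNotificationResolver implements NotificationEventResolver {

    // Called at design time to populate the Available Data list on the template screen.
    public List<NotificationAttribute> getAvailableData(String eventType, Map<String, Object> params) {
        List<NotificationAttribute> list = new ArrayList<NotificationAttribute>();
        long objKey = (Long) params.get("resource");
        HashMap<String, Object> formFieldDetail = getObjectFormName(objKey);
        for (Iterator<?> itrd = formFieldDetail.entrySet().iterator(); itrd.hasNext(); ) {
            Map.Entry entry = (Map.Entry) itrd.next();
            NotificationAttribute availableData = new NotificationAttribute();
            // Tokens cannot contain spaces, so "First Name" becomes "First_Name".
            availableData.setName(((String) entry.getValue()).replace(" ", "_"));
            list.add(availableData);
        }
        return list;
    }

    // Called at send time to supply the values substituted for the $-tokens in the template.
    public HashMap<String, Object> getReplacedData(String eventType, Map<String, Object> params) {
        HashMap<String, Object> resolvedData = new HashMap<String, Object>();
        resolvedData.put("fname", getUserFirstname(params.get("usr_key")));
        resolvedData.put("lname", getUserLastName(params.get("usr_key")));
        return resolvedData;
    }

    // Placeholder: look up the form field labels of the resource object identified by objKey.
    private HashMap<String, Object> getObjectFormName(long objKey) {
        return new HashMap<String, Object>();
    }

    // Placeholder: look up the user's first name from the usr_key passed with the event.
    private String getUserFirstname(Object usrKey) {
        return "";
    }

    // Placeholder: look up the user's last name from the usr_key passed with the event.
    private String getUserLastName(Object usrKey) {
        return "";
    }
}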

    Read the article

  • Defining Your Online Segmentation and Targeting Strategy

    - by Christie Flanagan
A lot of times, companies will put online segmentation and targeting on the back burner because they don’t know where to start. Often, I’ve heard web managers say that their segments aren’t well understood yet, so they can’t really deliver personalized online experiences that are meaningful. This lack of complete understanding means that they don't really bother to try. But, I don’t think you necessarily need to have an elaborate segmentation and targeting strategy already in place to start delivering a more relevant online customer experience. Sometimes it helps to think of how segmentation and targeting might solve some of the challenges your site's visitors are currently experiencing on your web presence, rather than doing nothing and waiting until a fully baked segmentation strategy lands in your inbox.  For example, perhaps you have a broad and varied service offering that makes it difficult for site visitors to easily find the solutions that are most relevant for them.  How can segmentation and targeting help solve this problem?  Or maybe it’s like the airline I described in Monday’s post where the special deals featured on the home page are only relevant to site visitors from a couple of cities.  Couldn’t segmentation and targeting help them to highlight offers on their home page that are relevant to a larger share of their site visitors? Your early segmentation and targeting efforts do not need to be complicated.  There are simple ways to start delivering a more relevant online customer experience, even if you’re dealing with anonymous site visitors.  These include targeting content to site visitors based on: Referral: Deliver targeted content to your site visitors that is based on where they came from or the search term they used to find your site. Behavior: Deliver content to your site visitors that is related or similar to content they’ve clicked on already. Location: Deliver content to your site visitors that is most relevant for their geographic location (this would solve that pesky airline home page problem described above). So as you can see, there really are some very simple ways in which you can start improving your online customer experience using very basic segmentation and targeting methods.  One thing to keep in mind as you start to define your segmentation and targeting strategy is that there are many different types of attributes or combinations of attributes upon which you can base your segmentation and targeting strategy.  In addition to referral, behavior and location, other attributes that you should consider are: Profile Information:  What profile information do you know about this customer already?  Perhaps they provided some information on their interests and preferences when they first registered with your site.
Time:  What time is it and how does that impact what my site visitors are looking for or trying to do? Demographics: What are my site visitors’ ages, incomes or ethnicities? Which attributes you select to include in your segmentation strategy will depend on your unique business needs and objectives.  Attributes such as behavior or referral may not be the most important targeting criteria depending on your situation. For example, if you’re a newspaper you might know that certain visitors are sports fans based on their profile information.  You can create a segment for sports fans and target sports related content to that segment of your readership online.  Or perhaps, a reader is browsing stories that are related to politics; you can use that visitor’s behavior to assign him or her to a segment for those interested in politics. From there you can recommend more stories to that visitor based on their interest in politics. For an airline, the visitor’s location may be a more important attribute. By detecting the visitor’s location, you can assign them to an appropriate segment and then target special flights and offers to them based on their likely departure airport. As you can see, there are many practical ways that you can start improving the experience your customers receive on your web presence using fairly basic segmentation and targeting techniques. If you want to learn more about segmentation and targeting using Oracle’s web experience management solution, check out this helpful video that demonstrates these powerful capabilities in Oracle WebCenter Sites. ***** On Demand Webcast Featuring Brian Solis of Altimeter Group Trends such as the mobile web, social media, gamification and real-time are changing customer behavior and expectations. In this new environment, many businesses will struggle. Some will fall by the wayside, while others learn to adapt and thrive. Watch this on demand webcast with Altimeter Group digital analyst and author, Brian Solis, and discover what your organization needs to know about how to compete in the new era of Digital Darwinism. View now.

    Read the article

  • jqgrid setting cutom formatter to dynamic column collection

    - by user312249
I am using jqgrid. We are building dashboard functionality with jQuery. Different applications just have to register their respective application page and the dashboard will render that page. To achieve this we are using jqgrid as one of the jQuery plugins. Following is my code:
var ph = '#' + placeHolder;
var _prevSort;
$.ajax({
    url: dataUrl,
    dataType: "json",
    async: true,
    success: function(json) {
        pager = $('#' + pager);
        if (json.showPager === "false") {
            pager = eval(json.showPager);
        }
        dataUrl += "&jqSession=true";
        $(ph).jqGrid({
            url: dataUrl,
            datatype: "json",
            sortclass: "grid_sort",
            colNames: JSON.parse(json.colNames),
            colModel: JSON.parse(json.colModel),
            forceFit: true,
            rowNum: json.rowNum,
            rowList: JSON.parse(json.rowList),
            pager: pager,
            sortname: json.sortName,
            caption: json.caption,
            viewrecords: true,
            viewsortcols: true,
            sortorder: json.sortOrder,
            footerrow: summaryFooter,
            userDataOnFooter: summaryFooter,
            jsonReader: { root: "rows", row: "row", repeatitems: false, id: json.sortName },
            gridComplete: function() {
                if (showFooter) {
                    $(ph).append("" + json.footerRow + "");
                }
                if (json.additionalContent != null) {
                    $("#" + xContID).html(json.additionalContent);
                }
                $("ui-icon-asc").append("IMG");
                var _rows = $(".jqgrow");
                if (json.rows.length > 0) {
                    for (var i = 1; i < _rows.length; i += 1) {
                        _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", "");
                        if (i % 2 == 1) {
                            _rows[i].attributes["class"].value += " ui-jqgrid-altrow";
                        }
                    }
                    var gMaxHeight = getGridMaxHeight();
                    var gHeight = ($(ph + " tr").length + 1) * ($($(".jqgrow")[0]).height());
                    if (gHeight <= gMaxHeight) {
                        $(ph).parent().height(gHeight);
                    } else {
                        $(ph).parent().height(gMaxHeight);
                    }
                } else {
                    $(ph).prepend("" + gridNoDataMsg + "");
                    $(ph).parent().height(60);
                }
            },
            onSortCol: function(index, iCol, sortorder) {
                dataUrl = dataUrl.replace("&jqSession=true", "");
                $(ph).jqGrid().setGridParam({ url: dataUrl }).trigger("reloadGrid");
                var _colName = "#jqgh" + index;
                // $(_prevSort).parent().removeClass("ui-jqgrid-sorted");
                // $(_prevSort).parent().addClass("ui-state-default");
                // $(_colName).parent().addClass("ui-jqgrid-sorted");
                // $(_colName).parent().removeClass("ui-state-default");
                _prevSort = _colName;
                var _rows = $(".jqgrow");
                for (var i = 1; i < _rows.length; i += 1) {
                    _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", "");
                    if (i % 2 == 1) {
                        _rows[i].attributes["class"].value += " ui-jqgrid-altrow";
                    }
                }
            }
        }).navGrid('#' + pager, { search: false, sort: false, edit: false, add: false, del: false, refresh: false }); // end of grid
        $("#" + loadid).empty();
        gGridIds[gGridIds.length] = placeHolder;
        SetGridSizes();
    },
    error: function() {
        $("#" + loadid).html(loadingErr);
    }
});
As you can see from the code, I am getting the column collection dynamically (the application page I am calling returns JSON in the response, which contains the colNames collection). Everything is working fine; the only issue comes when we try to apply a custom formatter to a column, and it happens only when we assign "colModel" to jqgrid dynamically. Appreciate help. Thanks in advance.
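One thing I suspect (not certain): JSON.parse cannot round-trip functions, so a custom formatter defined on the server side arrives in colModel as a plain string (or not at all) instead of a callable, and jqGrid then falls back to its default formatting. The workaround I am considering is a client-side map of named formatters that patches the parsed colModel before the grid is created — rough sketch, the formatter name "currencyCell" is made up:
// Client-side registry of custom formatters. The server's colModel JSON would only
// reference them by name, e.g. { "name": "Amount", "formatter": "currencyCell" }.
var myFormatters = {
    currencyCell: function (cellvalue, options, rowObject) {
        return "$" + parseFloat(cellvalue).toFixed(2);
    }
};

var colModel = JSON.parse(json.colModel);

// Swap the string placeholders for real functions; leave jqGrid's predefined
// formatter names ("integer", "date", ...) untouched.
for (var i = 0; i < colModel.length; i += 1) {
    var fmt = colModel[i].formatter;
    if (typeof fmt === "string" && myFormatters[fmt]) {
        colModel[i].formatter = myFormatters[fmt];
    }
}

$(ph).jqGrid({
    // ...same options as above...
    colModel: colModel
});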

    Read the article

  • What we have to measure for measuring server performance If we can't measure the server processing time from client side?

    - by AsadYarKhan
If we cannot measure the server processing time from the client side, then which attributes are good to measure on the client side for assessing server-side performance, and which attributes are important? I know we can get the server response time, latency, throughput and so on, but how do we understand and interpret the server-side result from these attributes? How can we analyse whether my code is taking a lot of time, whether it is the web server, or whether it is the server machine (hardware)? How would I know which part needs to be upgraded or improved? Please point me to an article or book I should study, or explain here if you can, so that I can interpret the server-side result using these attributes: response time, latency and throughput. You can also mention other performance attributes I need in order to understand the server result.

    Read the article

  • undefined method `new_record?' for nil:NilClass

    - by TopperH
    In rails 3.2 I created a post controller. Each post can have a different number of paperclip attachments. To achieve this I created a assets model where each asset has a paperclip attachment. One post has_many assets and assets belong_to post. Asset model class Asset < ActiveRecord::Base belongs_to :post has_attached_file :photo, :styles => { :thumb => "200x200>" } end Post model class Post < ActiveRecord::Base attr_accessible :content, :title has_many :assets, :dependent => :destroy validates_associated :assets after_update :save_assets def new_asset_attributes=(asset_attributes) asset_attributes.each do |attributes| assets.build(attributes) end end def existing_asset_attributes=(asset_attributes) assets.reject(&:new_record?).each do |asset| attributes = asset_attributes[asset.id.to_s] if attributes asset.attributes = attributes else asset.delete(asset) end end end def save_assets assets.each do |asset| asset.save(false) end end end Posts helper module PostsHelper def add_asset_link(name) link_to_function name do |post| post.insert_html :bottom, :assets, :partial => 'asset', :object => Asset.new end end end Form for post <%= form_for @post, :html => { :multipart => true } do |f| %> <% if @post.errors.any? %> <div id="error_explanation"> <h2><%= pluralize(@post.errors.count, "error") %> prohibited this post from being saved:</h2> <ul> <% @post.errors.full_messages.each do |msg| %> <li><%= msg %></li> <% end %> </ul> </div> <% end %> <div class="field"> <%= f.label :title %><br /> <%= f.text_field :title %> </div> <div class="field"> <%= f.label :content %><br /> <%= f.text_area :content %> </div> <div id="assets"> Attach a file or image<br /> <%= render 'asset', :collection => @post.assets %> </div> <div class="actions"> <%= f.submit %> </div> <% end %> Asset partial <div class="asset"> <% new_or_existing = asset.new_record? ? 'new' : 'existing' %> <% prefix = "post[#{new_or_existing}_asset_attributes][]" %> <% fields_for prefix, asset do |asset_form| -%> <p> Asset: <%= asset_form.file_field :photo %> <%= link_to_function "remove", "$(this).up('.asset').remove()" %> </p> <% end -%> </div> Most of the code is taken from here: https://gist.github.com/33011 and I understand this is a rails2 app, anyway I don't understand what this error means: undefined method `new_record?' for nil:NilClass Extracted source (around line #2): 1: <div class="asset"> 2: <% new_or_existing = asset.new_record? ? 'new' : 'existing' %> 3: <% prefix = "post[#{new_or_existing}_asset_attributes][]" %> 4: 5: <% fields_for prefix, asset do |asset_form| -%>
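For what it's worth, I am also considering dropping the hand-rolled new_asset_attributes=/existing_asset_attributes= writers (and the asset local in the partial) in favour of Rails 3.2's built-in nested attributes support, roughly like this — only a sketch, reduced to the essentials:
# app/models/post.rb
class Post < ActiveRecord::Base
  attr_accessible :content, :title, :assets_attributes
  has_many :assets, :dependent => :destroy
  accepts_nested_attributes_for :assets, :allow_destroy => true
end

# app/controllers/posts_controller.rb (new action): build blank assets so the file fields render
def new
  @post = Post.new
  3.times { @post.assets.build }
end

# app/views/posts/_form.html.erb (fragment)
<%= form_for @post, :html => { :multipart => true } do |f| %>
  <%= f.fields_for :assets do |asset_form| %>
    <%= asset_form.file_field :photo %>
  <% end %>
  <%= f.submit %>
<% end %>
With this in place, create and update handle the nested assets through assets_attributes, so no asset local has to be passed around by hand.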

    Read the article

  • ASP.NET Connections Spring 2012 Talks and Code

    - by Stephen.Walther
    Thank you everyone who attended my ASP.NET Connections talks last week in Las Vegas. I’ve attached the slides and code for the three talks that I delivered:   Using jQuery to interact with the Server through Ajax – In this talk, I discuss the different ways to communicate information between browser and server using Ajax. I explain the difference between the different types of Ajax calls that you can make with jQuery. I also discuss the differences between the JavaScriptSerializer, the DataContractJsonSerializer, and the JSON.NET serializer.   ASP.NET Validation In-Depth – In this talk, I distinguish between View Model Validation and Domain Model Validation. I demonstrate how you can use the validation attributes (including the new .NET 4.5 validation attributes), the jQuery Validation library, and the HTML5 input validation attributes to perform View Model Validation. I then demonstrate how you can use the IValidatableObject interface with the Entity Framework to perform Domain Model Validation.   Using the MVVM Pattern with JavaScript Views – In this talk, I discuss how you can create single page applications (SPA) by taking advantage of the open-source KnockoutJS library and the ASP.NET Web API.   Be warned that the sample code is contained in Visual Studio 11 Beta projects. If you don’t have this version of Visual Studio, then you will need to open the code samples in Notepad. Also, I apologize for getting the code for these talks posted so slowly. I’ve been down with a nasty case of the flu for the past week and haven’t been able to get to a computer.
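As a quick taste of what the validation talk covers, here is a small illustrative view model of my own (not taken from the slides or the sample code) showing data annotation attributes alongside IValidatableObject:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// View model validated by ASP.NET MVC during model binding; failures land in ModelState.
public class RegistrationViewModel : IValidatableObject
{
    [Required]
    [StringLength(50)]
    public string UserName { get; set; }

    [Required]
    [EmailAddress] // one of the validation attributes added in .NET 4.5
    public string Email { get; set; }

    [Range(18, 120)]
    public int Age { get; set; }

    // A rule that spans several properties, in the spirit of domain-level checks.
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (UserName == Email)
            yield return new ValidationResult("User name and email must differ.", new[] { "UserName" });
    }
}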

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
Python uses duck-typing, rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, or writing test cases, or validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state--that it still "looks like a duck" and "quacks like a duck." In statically typed languages you can simply declare "int x", and anytime you create or mutate x, it will always be a valid int. It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time that object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words. Not unlike Rails validators, but less human-language and more programming-language). You could think of this as ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test. Does such a library exist to declare and validate constraint/duck-checking on Python-objects? Is this an unreasonable tool to want? :) (Thanks!)
Contrived example:
rectangle = {'length': 5, 'width': 10}

# We live in a fictional universe where multiplication is super expensive.
# Therefore any time we multiply, we need to cache the results.
def area(rect):
    if 'area' in rect:
        return rect['area']
    rect['area'] = rect['length'] * rect['width']
    return rect['area']

print area(rectangle)
rectangle['length'] = 15
print area(rectangle)  # compare expected vs. actual output!
# imagine the same thing with object attributes rather than dictionary keys.
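Something like the descriptor sketch below is roughly the kind of declarative, always-on check I have in mind — purely illustrative, names invented:
# A tiny "duck constraint": a descriptor that re-validates the attribute on every assignment.
class Checked(object):
    def __init__(self, name, check, message):
        self.name = '_' + name
        self.check = check
        self.message = message

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.name)

    def __set__(self, obj, value):
        if not self.check(value):
            raise ValueError(self.message)
        setattr(obj, self.name, value)


class Rectangle(object):
    length = Checked('length', lambda v: isinstance(v, (int, float)) and v >= 0,
                     'length must be a non-negative number')
    width = Checked('width', lambda v: isinstance(v, (int, float)) and v >= 0,
                    'width must be a non-negative number')

    def __init__(self, length, width):
        self.length = length
        self.width = width


r = Rectangle(5, 10)
r.length = 15   # fine: the check runs again on mutation
r.length = -1   # raises ValueError: the object no longer "looks like a duck"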

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names. Also due to numeric data taking significantly more space than using native datatypes. A small level could easily end up as 1Mb+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects, and give them a default value if an old version of the data is loaded. So I want to keep with the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format - to remove the massive duplication of tag/attribute names. Maybe also to give attributes native types, so, for example floating-point data is stored as 4 bytes per float, not as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? - are any ideal for games use - with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, and just compressing my raw .xml data (it should pack well with zip-like compression), and just taking the memory/performance hit on-load?

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >