Search Results

Search found 4118 results on 165 pages for 'attributes'.


  • What's the use of Ant's extension-point if/unless attributes?

    - by Robert Menteer
    When you define an extension-point in an Ant build file you can make it conditional by using the if or unless attribute. On a target, if/unless prevents its tasks from being run. But an extension-point doesn't have any tasks to conditionally run, so what does the condition do? My thought (which proved to be incorrect in Ant 1.8.0) was that it would prevent any targets that extend the extension-point from being run. Here is an example build script showing the problem: <project name = "ext-test" default = "main"> <property name = "do.it" value = "false" /> <extension-point name = "init"/> <extension-point name = "doit" depends = "init" if = "${do.it}" /> <target name = "extend-init" extensionOf = "init"> <echo message = "Doing extend-init." /> </target> <target name = "extend-doit" extensionOf = "doit"> <echo message = "Do It! (${do.it})" /> </target> <target name = "main" depends = "doit"> <echo message = "Doing main." /> </target> </project> Using the command: ant -v Results in: Apache Ant version 1.8.0 compiled on February 1 2010 Trying the default build file: build.xml Buildfile: /Users/bob/build.xml Detected Java version: 1.6 in: /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home Detected OS: Mac OS X parsing buildfile /Users/bob/build.xml with URI = file:/Users/bob/build.xml Project base dir set to: /Users/bob parsing buildfile jar:file:/Users/bob/Documents/Development/3P-Tools/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/Users/bob/Documents/Development/3P-Tools/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml from a zip file Build sequence for target(s) `main' is [extend-init, init, extend-doit, doit, main] Complete build sequence is [extend-init, init, extend-doit, doit, main, ] extend-init: [echo] Doing extend-init. init: extend-doit: [echo] Do It! (false) doit: Skipped because property 'false' not set. main: [echo] Doing main. BUILD SUCCESSFUL Total time: 0 seconds You will notice the target extend-doit is executed but the extension-point itself is skipped. Since an extension-point doesn't have any tasks, exactly what has been skipped? Any targets that depend on the extension-point still get executed, since a skipped target is a successful target. What is the value of the if/unless attributes on an extension-point?
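
    For illustration only (not from the original post), below is a hedged sketch of one common workaround: guard the contributing targets themselves, since the extension-point has no tasks of its own to skip. Note that in Ant 1.8.x the if/unless attributes name a property whose mere existence is tested (as the "Skipped because property 'false' not set" message above shows), so do.it.enabled here is a hypothetical flag property that would be set only when the work should run.

        <!-- hypothetical sketch: condition the extending target, not the extension-point -->
        <target name="extend-doit" extensionOf="doit" if="do.it.enabled">
            <echo message="Do It!" />
        </target>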

    Read the article

  • Best practice: How to use (repeat) CSS style attributes correctly?

    - by ellie
    Hi guys! As a CSS newbie, I'm wondering whether professionals recommend repeating specific style attributes and their default (non-inherited) properties for every relevant selector. For example, should I rather use body {background:transparent none no-repeat; border:0 none transparent; margin:0; padding:0;} img {background:transparent none no-repeat; border:0 none transparent; margin:0; outline:transparent none 0; padding:0;} div#someID {background:transparent none no-repeat; border:0 none; margin:0 auto; padding:0; text-align:left; width:720px; ...} or body {background:transparent; border:0; margin:0; padding:0;} img {background:transparent; border:0; margin:0; outline:0; padding:0;} div#someID {background:transparent; border:0; margin:0 auto; padding:0; text-align:left; width:720px; ...} or just what (I think) I really need body {background:transparent; margin:0; padding:0;} img {border:0; outline:0;} div#someID {margin:0 auto; width:720px; ...} If it's best practice to go with the first or second one, what do you think about defining a class like .foo {background:transparent; border:0; margin:0; padding:0;} and then applying it to every relevant selector: <div id="someID" class="foo">...</div> Yep, now I'm totally confused... so please advise! Thanks!
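
    For illustration only (not from the original question), a hedged CSS sketch of a further option: state the shared resets once with a grouped selector and let later rules override them where needed. The selectors are simply the ones from the question.

        /* hypothetical sketch: shared resets declared once via a grouped selector */
        body, img, div#someID { background: transparent; border: 0; margin: 0; padding: 0; }

        /* later rules with equal or higher specificity override the shared resets */
        img { outline: 0; }
        div#someID { margin: 0 auto; width: 720px; text-align: left; }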

    Read the article

  • How to change Castor mapping to remove "xmlns:xsi" and "xsi:type" attributes from element in XML output

    - by Derek Mahar
    How do I change the Castor mapping <?xml version="1.0"?> <!DOCTYPE mapping PUBLIC "-//EXOLAB/Castor Mapping DTD Version 1.0//EN" "http://castor.org/mapping.dtd"> <mapping> <class name="java.util.ArrayList" auto-complete="true"> <map-to xml="ArrayList" /> </class> <class name="com.db.spgit.abstrack.ws.response.UserResponse"> <map-to xml="UserResponse" /> <field name="id" type="java.lang.String"> <bind-xml name="id" node="element" /> </field> <field name="deleted" type="boolean"> <bind-xml name="deleted" node="element" /> </field> <field name="name" type="java.lang.String"> <bind-xml name="name" node="element" /> </field> <field name="typeId" type="java.lang.Integer"> <bind-xml name="typeId" node="element" /> </field> <field name="regionId" type="java.lang.Integer"> <bind-xml name="regionId" node="element" /> </field> <field name="regionName" type="java.lang.String"> <bind-xml name="regionName" node="element" /> </field> </class> </mapping> to suppress the xmlns:xsi and xsi:type attributes in the element of the XML output? For example, instead of the output XML <?xml version="1.0" encoding="UTF-8"?> <ArrayList> <UserResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="UserResponse"> <name>Tester</name> <typeId>1</typeId> <regionId>2</regionId> <regionName>US</regionName> </UserResponse> </ArrayList> I'd prefer <?xml version="1.0" encoding="UTF-8"?> <ArrayList> <UserResponse> <name>Tester</name> <typeId>1</typeId> <regionId>2</regionId> <regionName>US</regionName> </UserResponse> </ArrayList> such that the element name implies the xsi:type.

    Read the article

  • How to discover classes with [Authorize] attributes using Reflection in C#? (or How to build Dynamic

    - by Pretzel
    Maybe I should back up and widen the scope before diving into the title question... I'm currently writing a web app in ASP.NET MVC 1.0 (although I do have MVC 2.0 installed on my PC, so I'm not exactly restricted to 1.0) -- I've started with the standard MVC project which has your basic "Welcome to ASP.NET MVC" and shows both the [Home] tab and [About] tab in the upper-right corner. Pretty standard, right? I've added 4 new Controller classes, let's call them "Astronomer", "Biologist", "Chemist", and "Physicist". Attached to each new controller class is the [Authorize] attribute. For example, for the BiologistController.cs [Authorize(Roles = "Biologist,Admin")] public class BiologistController : Controller { public ActionResult Index() { return View(); } } These [Authorize] tags naturally limit which users can access different controllers depending on Roles, but I want to dynamically build a Menu at the top of my website in the Site.Master Page based on the Roles the user is a part of. So for example, if JoeUser was a member of Roles "Astronomer" and "Physicist", the navigation menu would say: [Home] [Astronomer] [Physicist] [About] And naturally, it would not list links to the "Biologist" or "Chemist" controller Index page. Or if "JohnAdmin" was a member of Role "Admin", links to all 4 controllers would show up in the navigation bar. OK, you probably get the idea... Starting with the answer from this StackOverflow topic about Dynamic Menu building in ASP.NET, I'm trying to understand how I would fully implement this. (I'm a newbie and need a little more guidance, so please bear with me.) The answer proposes extending the Controller class (call it "ExtController") and then having each new WhateverController inherit from ExtController. My conclusion is that I would need to use Reflection in this ExtController constructor to determine which classes and methods have [Authorize] attributes attached to them to determine the Roles. Then, using a static Dictionary, store the Roles and Controllers/Methods in key-value pairs. I imagine it looking something like this: public class ExtController : Controller { protected static Dictionary<Type,List<string>> ControllerRolesDictionary; protected override void OnActionExecuted(ActionExecutedContext filterContext) { // build list of menu items based on user's permissions, and add it to ViewData IEnumerable<MenuItem> menu = BuildMenu(); ViewData["Menu"] = menu; } private IEnumerable<MenuItem> BuildMenu() { // Code to build a menu SomeRoleProvider rp = new SomeRoleProvider(); foreach (var role in rp.GetRolesForUser(HttpContext.User.Identity.Name)) { } } public ExtController() { // Use this.GetType() to determine if this Controller is already in the Dictionary if (!ControllerRolesDictionary.ContainsKey(this.GetType())) { // If not, use Reflection to add List of Roles to Dictionary // associating with Controller } } } Is this doable? If so, how do I perform Reflection in the ExtController constructor to discover the [Authorize] attribute and related Roles (if any)? ALSO! Feel free to go out of scope on this question and suggest an alternate way of solving this "Dynamic Site.Master Menu based on Roles" problem. I'm the first to admit that this may not be the best approach.
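
    For illustration only (not from the original question), here is a hedged C# sketch of the reflection step the constructor comment describes: reading the Roles declared by [Authorize] on a controller type. AuthorizeAttribute.Roles is a comma-separated string, so it is split into individual role names; GetDeclaredRoles is a hypothetical helper name, not part of MVC.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        // Hypothetical helper: collect the role names declared via [Authorize] on a controller type.
        private static List<string> GetDeclaredRoles(Type controllerType)
        {
            return controllerType
                .GetCustomAttributes(typeof(AuthorizeAttribute), true) // true: also look at inherited attributes
                .Cast<AuthorizeAttribute>()
                .SelectMany(a => (a.Roles ?? string.Empty).Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
                .Select(r => r.Trim())
                .ToList();
        }

        // Possible use inside the constructor sketched above:
        // ControllerRolesDictionary[this.GetType()] = GetDeclaredRoles(this.GetType());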

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes. These applications require significant amounts of data to function correctly.  This includes data about the objects that are involved in transactions, as well as the transaction data itself.  For example, when a customer buys a product, the transaction is managed by a sales application.  The objects of the transaction are the Customer and the Product.  The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to'; 'service at'; etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate. Parties Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences.  And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real world operations.  The model logically separates people and organizations from their relationships and accounts.  This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence. Sites Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc.  Fully understanding all attributes of a site is key to many business processes. Attributes such as 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets.  This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise. 
Products and Services The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide the ability to capture different views of a product, such as engineering, manufacturing, and service, which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing a requirements BOM, a features BOM, and a packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes. Business Processes Utilizing Linked Data Entities Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data. When all the key business entities used by an organization's process are so consolidated, the advantages are multiplied. The primary reason for business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated. I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product and Supplier master data must all be correct and consistent across these applications. What's more, the data relationships between customer and product, and product and suppliers, must be right. This is the minimum quality needed to ensure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, creditworthiness, and propensity to buy can optimize the call center point-of-contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help ensure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process. In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution starts with a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model.
Oracle, with its deep understanding of application data is the logical choice for managing all your master data within the enterprise whether or not your organization actually runs any Oracle Applications.

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that these perform poorly and it is then that developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times.  The execution times went down from 4 seconds to 2 milliseconds! E-commerce solution based on Oracle ATG – Endeca In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimize this query are described below. Starting point The initial query was: String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+ "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+ "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate"; Map<String, Object> params = new HashMap<String, Object>(10); // Fill all parameters. params.put("iLevelId", xxxx); // Executing filter. Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter); With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds. First round of optimizations The first optimization step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows: globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX" ) , false, null); After adding these indexes the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.  Second round of optimizations In this optimization phase a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal".
Parameters associated with object attributes of high cardinality should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values it was more optimal to have a different query changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant case: String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+ "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+ "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+ "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; Using the appropriate query depending on the value of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome. Third and last round of optimizations The third and final optimization was to introduce a composite index. However, this did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multiple extractor, and a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes.  All individual indexes were dropped except the ones being used for LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.
The composite index created was as follows: ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId" ), new ReflectionExtractor("getLevelId" ), new ReflectionExtractor("getAreaId" ), new ReflectionExtractor("getDepartmentId" ), new ReflectionExtractor("getFamilyId" ), new ReflectionExtractor("getManufacturer" ), new ReflectionExtractor("getBrand" ), new ReflectionExtractor("getSalesCompanyId" )}; MultiExtractor me = new MultiExtractor(ve); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); globalDiscountsCache.addIndex(me, false, null); And the final query was: ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId" ), new ReflectionExtractor("getLevelId" ), new ReflectionExtractor("getAreaId" ), new ReflectionExtractor("getDepartmentId" ), new ReflectionExtractor("getFamilyId" ), new ReflectionExtractor("getManufacturer" ), new ReflectionExtractor("getBrand" ), new ReflectionExtractor("getSalesCompanyId" )}; MultiExtractor me = new MultiExtractor(ve); // Fill composite parameters. String SalesCompanyId = xxxx; ... AndFilter composite = new AndFilter(new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, SalesCompanyId)), new GreaterEqualsFilter(new ReflectionExtractor("getEndDate" ), iEndDate)); AndFilter finalFilter = new AndFilter(composite, new LessEqualsFilter(new ReflectionExtractor("getStartDate" ), iStartDate)); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter); Using this composite index the query improved dramatically and the execution time dropped to between 2 ms and 4 ms.  These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor was not relevant.

    Read the article

  • Mysql performance problem & Failed DIMM

    - by murdoch
    Hi, I have a dedicated MySQL database server which has been having some performance problems recently. Under normal load the server will be running fine; then suddenly, out of the blue, the performance will fall off a cliff. The server isn't using the swap file and there is 12GB of RAM in the server, more than enough for its needs. After contacting my hosting company's support they have discovered that there is a failed 2GB DIMM in the server and have scheduled to replace it tomorrow morning. My question is: could a failed DIMM result in the performance problems I am seeing, or is this just coincidence? My worry is that they will replace the RAM tomorrow but the problems will persist and I will still be at a loss for explanations, so I am just trying to think ahead. The reason I ask is that there is plenty of RAM in the server, more than required, and simply missing 2GB shouldn't be a problem; so if this failed DIMM is causing these performance problems then the OS must be trying to access the failed DIMM and slowing down as a result. Does that sound like a credible explanation? This is what DELL's omreport program says about the RAM; notice one DIMM is "Critical" Memory Information Health : Critical Memory Operating Mode Fail Over State : Inactive Memory Operating Mode Configuration : Optimizer Attributes of Memory Array(s) Attributes : Location Memory Array 1 : System Board or Motherboard Attributes : Use Memory Array 1 : System Memory Attributes : Installed Capacity Memory Array 1 : 12288 MB Attributes : Maximum Capacity Memory Array 1 : 196608 MB Attributes : Slots Available Memory Array 1 : 18 Attributes : Slots Used Memory Array 1 : 6 Attributes : ECC Type Memory Array 1 : Multibit ECC Total of Memory Array(s) Attributes : Total Installed Capacity Value : 12288 MB Attributes : Total Installed Capacity Available to the OS Value : 12004 MB Attributes : Total Maximum Capacity Value : 196608 MB Details of Memory Array 1 Index : 0 Status : Ok Connector Name : DIMM_A1 Type : DDR3-Registered Size : 2048 MB Index : 1 Status : Ok Connector Name : DIMM_A2 Type : DDR3-Registered Size : 2048 MB Index : 2 Status : Ok Connector Name : DIMM_A3 Type : DDR3-Registered Size : 2048 MB Index : 3 Status : Critical Connector Name : DIMM_B1 Type : DDR3-Registered Size : 2048 MB Index : 4 Status : Ok Connector Name : DIMM_B2 Type : DDR3-Registered Size : 2048 MB Index : 5 Status : Ok Connector Name : DIMM_B3 Type : DDR3-Registered Size : 2048 MB The command free -m shows this; the server seems to be using more than 10GB of RAM, which would suggest it is trying to use the DIMM total used free shared buffers cached Mem: 12004 10766 1238 0 384 4809 -/+ buffers/cache: 5572 6432 Swap: 2047 0 2047 iostat output while the problem is occurring avg-cpu: %user %nice %system %iowait %steal %idle 52.82 0.00 11.01 0.00 0.00 36.17 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 47.00 0.00 576.00 0 576 sda1 0.00 0.00 0.00 0 0 sda2 1.00 0.00 32.00 0 32 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 46.00 0.00 544.00 0 544 avg-cpu: %user %nice %system %iowait %steal %idle 53.12 0.00 7.81 0.00 0.00 39.06 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 592.00 0 592 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 592.00 0 592 avg-cpu: %user %nice %system %iowait %steal %idle 56.09 0.00 7.43 0.37 0.00 36.10 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 232.00 0.00 64520.00 0 64520 sda1 0.00 0.00 0.00 0 0 sda2 159.00 0.00 63728.00 0 63728 sda3
0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 73.00 0.00 792.00 0 792 avg-cpu: %user %nice %system %iowait %steal %idle 52.18 0.00 9.24 0.06 0.00 38.51 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 600.00 0 600 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 600.00 0 600 avg-cpu: %user %nice %system %iowait %steal %idle 54.82 0.00 8.64 0.00 0.00 36.55 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 100.00 0.00 2168.00 0 2168 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 100.00 0.00 2168.00 0 2168 avg-cpu: %user %nice %system %iowait %steal %idle 54.78 0.00 6.75 0.00 0.00 38.48 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 84.00 0.00 896.00 0 896 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 84.00 0.00 896.00 0 896 avg-cpu: %user %nice %system %iowait %steal %idle 54.34 0.00 7.31 0.00 0.00 38.35 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 81.00 0.00 840.00 0 840 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 81.00 0.00 840.00 0 840 avg-cpu: %user %nice %system %iowait %steal %idle 55.18 0.00 5.81 0.44 0.00 38.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 317.00 0.00 105632.00 0 105632 sda1 0.00 0.00 0.00 0 0 sda2 224.00 0.00 104672.00 0 104672 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 93.00 0.00 960.00 0 960 avg-cpu: %user %nice %system %iowait %steal %idle 55.38 0.00 7.63 0.00 0.00 36.98 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 74.00 0.00 800.00 0 800 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 74.00 0.00 800.00 0 800 avg-cpu: %user %nice %system %iowait %steal %idle 56.43 0.00 7.80 0.00 0.00 35.77 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 72.00 0.00 784.00 0 784 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 72.00 0.00 784.00 0 784 avg-cpu: %user %nice %system %iowait %steal %idle 54.87 0.00 6.49 0.00 0.00 38.64 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 80.20 0.00 855.45 0 864 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 80.20 0.00 855.45 0 864 avg-cpu: %user %nice %system %iowait %steal %idle 57.22 0.00 5.69 0.00 0.00 37.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 33.00 0.00 432.00 0 432 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 33.00 0.00 432.00 0 432 avg-cpu: %user %nice %system %iowait %steal %idle 56.03 0.00 7.93 0.00 0.00 36.04 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 41.00 0.00 560.00 0 560 sda1 0.00 0.00 0.00 0 0 sda2 2.00 0.00 88.00 0 88 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 39.00 0.00 472.00 0 472 avg-cpu: %user %nice %system %iowait %steal %idle 55.78 0.00 5.13 0.00 0.00 39.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 29.00 0.00 392.00 0 392 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 29.00 0.00 392.00 0 392 avg-cpu: %user %nice %system %iowait %steal %idle 53.68 0.00 8.30 0.06 0.00 37.95 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 78.00 0.00 4280.00 0 4280 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 78.00 0.00 4280.00 0 4280

    Read the article

  • Why is the this-pointer needed to access inherited attributes?

    - by Shadow
    Hi, assume the following class is given: class Base{ public: Base() {} Base( const Base& b) : base_attr(b.base_attr) {} void someBaseFunction() { .... } protected: SomeType base_attr; }; When I want a class to inherit from this one and include a new attribute for the derived class, I would write: class Derived: public Base { public: Derived() {} Derived( const Derived& d ) : derived_attr(d.derived_attr) { this->base_attr = d.base_attr; } void SomeDerivedFunction() { .... } private: SomeOtherType derived_attr; }; This works for me (please ignore any missing semicolons and the like). However, when I remove the "this->" in the copy constructor of the derived class, the compiler complains that "'base_attr' was not declared in this scope". I thought that, when inheriting from a class, the protected attributes would also be accessible directly; I did not know that the "this" pointer was needed. I am now confused about whether what I am doing there is actually correct, especially the copy constructor of the Derived-class. Each Derived object is supposed to have a base_attr and a derived_attr, and they obviously need to be initialized/set correctly. And because Derived inherits from Base, I don't want to explicitly include an attribute named "base_attr" in the Derived-class. IMHO doing so would generally defeat the purpose of inheritance, as everything would have to be defined again. EDIT Thank you all for the quick answers. I completely forgot to mention the fact that the classes are actually templates. Please see the new examples below, which actually compile when "this->" is included and fail when "this->" is omitted in the copy constructor of the Derived-class: Base-class: #include <iostream> template<class T> class Base{ public: Base() : base_attr(0) {} Base( const Base& b) : base_attr(b.base_attr) {} void baseIncrement() { ++base_attr; } void printAttr() { std::cout << "Base Attribute: " << base_attr << std::endl; } protected: T base_attr; }; Derived-class: #include "base.hpp" template< class T > class Derived: public Base<T>{ public: Derived() : derived_attr(1) {} Derived( const Derived& d) : derived_attr(d.derived_attr) { this->base_attr = d.base_attr; } void derivedIncrement() { ++derived_attr; } protected: T derived_attr; }; and for completeness also the main function: #include "derived.hpp" int main() { Derived<int> d; d.printAttr(); d.baseIncrement(); d.printAttr(); Derived<int> d2(d); d2.printAttr(); return 0; }; I am using g++-4.3.4. Although I now understand that it seems to come from the fact that I use template-class definitions, I still do not quite understand what is causing the problem when using templates and why it works when not using templates. Could someone please further clarify this?
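
    For illustration only (not from the original post), a hedged C++ sketch of the usual ways to refer to a member of a base class that depends on a template parameter; unqualified names are not looked up in such a dependent base, which is why the plain-class version works but the template version does not. It assumes the Base<T> template from the question.

        // Hedged sketch: three ways to reach base_attr from a dependent base class.
        template<class T>
        class Derived : public Base<T> {
        protected:
            using Base<T>::base_attr;   // brings the inherited name into scope
            T derived_attr;
        public:
            Derived() : derived_attr(1) {}
            Derived(const Derived& d) : derived_attr(d.derived_attr) {
                this->base_attr    = d.base_attr;   // option 1: qualify with this->
                Base<T>::base_attr = d.base_attr;   // option 2: qualify with the base class
                base_attr          = d.base_attr;   // option 3: rely on the using-declaration above
            }
        };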

    Read the article

  • Basic C question, concerning memory allocation and value assignment

    - by VHristov
    Hi there, I have recently started working on my master's thesis in C, a language that I haven't used in quite a long time. Being used to Java, I'm now facing all kinds of problems all the time. I hope someone can help me with the following one, since I've been struggling with it for the past two days. So I have a really basic model of a database: tables, tuples, attributes, and I'm trying to load some data into this structure. Following are the definitions: typedef struct attribute { int type; char * name; void * value; } attribute; typedef struct tuple { int tuple_id; int attribute_count; attribute * attributes; } tuple; typedef struct table { char * name; int row_count; tuple * tuples; } table; Data is coming from a file with inserts (generated for the Wisconsin benchmark), which I'm parsing. I have only integer or string values. A sample row would look like: insert into table values (9205, 541, 1, 1, 5, 5, 5, 5, 0, 1, 9205, 10, 11, 'HHHHHHH', 'HHHHHHH', 'HHHHHHH'); I've "managed" to load and parse the data and also to assign it. However, the assignment bit is buggy, since all values point to the same memory location, i.e. all rows look identical after I've loaded the data. Here is what I do: char value[10]; // assuming no value is longer than 10 chars int i, j, k; table * data = (table*) malloc(sizeof(data)); data->name = "table"; data->row_count = number_of_lines; data->tuples = (tuple*) malloc(number_of_lines*sizeof(tuple)); tuple* current_tuple; for(i=0; i<number_of_lines; i++) { current_tuple = &data->tuples[i]; current_tuple->tuple_id = i; current_tuple->attribute_count = 16; // static in our system current_tuple->attributes = (attribute*) malloc(16*sizeof(attribute)); for(k = 0; k < 16; k++) { current_tuple->attributes[k].name = attribute_names[k]; // for int values: current_tuple->attributes[k].type = DB_ATT_TYPE_INT; // write data into value-field int v = atoi(value); current_tuple->attributes[k].value = &v; // for string values: current_tuple->attributes[k].type = DB_ATT_TYPE_STRING; current_tuple->attributes[k].value = value; } // ... } While I am perfectly aware why this is not working, I can't figure out how to get it working. I've tried the following things, none of which worked: memcpy(current_tuple->attributes[k].value, &v, sizeof(int)); This results in a bad access error. Same for the following code (since I'm not quite sure which one would be the correct usage): memcpy(current_tuple->attributes[k].value, &v, 1); Not even sure if memcpy is what I need here... Also, I've tried allocating memory by doing something like: current_tuple->attributes[k].value = (int *) malloc(sizeof(int)); only to get "malloc: * error for object 0x100108e98: incorrect checksum for freed object - object was probably modified after being freed." As far as I understand this error, memory has already been allocated for this object, but I don't see where this happened. Doesn't the malloc(sizeof(attribute)) only allocate the memory needed to store an integer and two pointers (i.e. not the memory those pointers point to)? Any help would be greatly appreciated! Regards, Vassil
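
    For illustration only (not from the original question), a hedged C sketch of one way to give each attribute its own storage instead of pointing every row at the same local buffer. Two side notes, offered as observations rather than a definitive fix: malloc(sizeof(data)) near the top allocates only the size of a pointer, so sizeof(table) (or sizeof *data) is probably what is intended, and storing &v keeps the address of a stack variable that is reused on every iteration, which is why all rows end up identical.

        /* Hedged sketch: this fragment is meant for the inner k-loop of the code
         * above and needs <stdlib.h> and <string.h> (malloc, memcpy, strlen,
         * strcpy, atoi). */

        /* integer attribute: allocate space for one int and copy the parsed value in */
        int v = atoi(value);
        current_tuple->attributes[k].type  = DB_ATT_TYPE_INT;
        current_tuple->attributes[k].value = malloc(sizeof(int));
        memcpy(current_tuple->attributes[k].value, &v, sizeof(int));

        /* string attribute: copy the characters so each tuple owns its own string */
        current_tuple->attributes[k].type  = DB_ATT_TYPE_STRING;
        current_tuple->attributes[k].value = malloc(strlen(value) + 1);
        strcpy(current_tuple->attributes[k].value, value);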

    Read the article

  • Adding Unobtrusive Validation To MVCContrib Fluent Html

    - by srkirkland
    ASP.NET MVC 3 includes a new unobtrusive validation strategy that utilizes HTML5 data-* attributes to decorate form elements.  Using a combination of jQuery validation and an unobtrusive validation adapter script that comes with MVC 3, those attributes are then turned into client side validation rules. A Quick Introduction to Unobtrusive Validation To quickly show how this works in practice, assume you have the following Order.cs class (think Northwind) [If you are familiar with unobtrusive validation in MVC 3 you can skip to the next section]: public class Order : DomainObject { [DataType(DataType.Date)] public virtual DateTime OrderDate { get; set; } [Required] [StringLength(12)] public virtual string ShipAddress { get; set; } [Required] public virtual Customer OrderedBy { get; set; } } Note the System.ComponentModel.DataAnnotations attributes, which provide the validation and metadata information used by ASP.NET MVC 3 to determine how to render out these properties.  Now let's assume we have a form which can edit this Order class, specifically let's look at the ShipAddress property: @Html.LabelFor(x => x.Order.ShipAddress) @Html.EditorFor(x => x.Order.ShipAddress) @Html.ValidationMessageFor(x => x.Order.ShipAddress) Now the Html.EditorFor() method is smart enough to look at the ShipAddress attributes and write out the necessary unobtrusive validation html attributes.  Note we could have used Html.TextBoxFor() or even Html.TextBox() and still retained the same results. If we view source on the input box generated by the Html.EditorFor() call, we get the following: <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" data-val-required="The ShipAddress field is required." data-val-length-max="12" data-val-length="The field ShipAddress must be a string with a maximum length of 12." data-val="true" class="text-box single-line input-validation-error"> As you can see, we have data-val-* attributes for both required and length, along with the proper error messages and additional data as necessary (in this case, we have the length-max="12"). And of course, if we try to submit the form with an invalid value, we get an error on the client: Working with MvcContrib's Fluent Html The MvcContrib project offers a fluent interface for creating Html elements which I find very expressive and useful, especially when it comes to creating select lists.  Let's look at a few quick examples: @this.TextBox(x => x.FirstName).Class("required").Label("First Name:") @this.MultiSelect(x => x.UserId).Options(ViewModel.Users) @this.CheckBox("enabled").LabelAfter("Enabled").Title("Click to enable.").Styles(vertical_align => "middle") @(this.Select("Order.OrderedBy").Options(Model.Customers, x => x.Id, x => x.CompanyName) .Selected(Model.Order.OrderedBy != null ? Model.Order.OrderedBy.Id : "") .FirstOption(null, "--Select A Company--") .HideFirstOptionWhen(Model.Order.OrderedBy != null) .Label("Ordered By:")) These fluent html helpers create the normal html you would expect, and I think they make life a lot easier and more readable when dealing with complex markup or select list data models (look ma: no anonymous objects for creating class names!). Of course, the problem we have now is that MvcContrib's fluent html helpers don't know about ASP.NET MVC 3's unobtrusive validation attributes and thus don't take part in client validation on your page.  This is not ideal, so I wrote a quick helper method to extend fluent html with the knowledge of what unobtrusive validation attributes to include when they are rendered. Extending MvcContrib's Fluent Html Before posting the code, there are just a few things you need to know.  The first is that all Fluent Html elements implement the IElement interface (MvcContrib.FluentHtml.Elements.IElement), and the second is that the base System.Web.Mvc.HtmlHelper has been extended with a method called GetUnobtrusiveValidationAttributes which we can use to determine the necessary attributes to include.
With this knowledge we can make quick work of extending fluent html: public static class FluentHtmlExtensions { public static T IncludeUnobtrusiveValidationAttributes<T>(this T element, HtmlHelper htmlHelper) where T : MvcContrib.FluentHtml.Elements.IElement { IDictionary<string, object> validationAttributes = htmlHelper .GetUnobtrusiveValidationAttributes(element.GetAttr("name")); foreach (var validationAttribute in validationAttributes) { element.SetAttr(validationAttribute.Key, validationAttribute.Value); } return element; } } The code is pretty straightforward – basically we use a passed HtmlHelper to get a list of validation attributes for the current element and then add each of the returned attributes to the element to be rendered. The Extension In Action Now let's get back to the earlier ShipAddress example and see what we've accomplished.  First we will use a fluent html helper to render out the ship address text input (this is the 'before' case): @this.TextBox("Order.ShipAddress").Label("Ship Address:").Class("class-name") And the resulting HTML: <label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label> <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" class="class-name"> Now let's do the same thing except here we'll use the newly written extension method: @this.TextBox("Order.ShipAddress").Label("Ship Address:") .Class("class-name").IncludeUnobtrusiveValidationAttributes(Html) And the resulting HTML: <label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label> <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" data-val-required="The ShipAddress field is required." data-val-length-max="12" data-val-length="The field ShipAddress must be a string with a maximum length of 12." data-val="true" class="class-name"> Excellent!  Now we can continue to use unobtrusive validation and have the flexibility to use ASP.NET MVC's Html helpers or MvcContrib's fluent html helpers interchangeably, and every element will participate in client side validation. Wrap Up Overall I'm happy with this solution, although in the best case scenario MvcContrib would know about unobtrusive validation attributes and include them automatically (of course, only if it is enabled in the web.config file).  I know that MvcContrib allows you to author global behaviors, but that requires changing the base class of your views, which I am not willing to do. Enjoy!

    Read the article

  • Introducing Data Annotations Extensions

    - by srkirkland
    Validation of user input is integral to building a modern web application, and ASP.NET MVC offers us a way to enforce business rules on both the client and server using Model Validation.  The recent release of ASP.NET MVC 3 has improved these offerings on the client side by introducing an unobtrusive validation library built on top of jquery.validation.  Out of the box MVC comes with support for Data Annotations (that is, System.ComponentModel.DataAnnotations) and can be extended to support other frameworks.  Data Annotations Validation is becoming more popular and is being baked in to many other Microsoft offerings, including Entity Framework, though with MVC it only contains four validators: Range, Required, StringLength and Regular Expression.  The Data Annotations Extensions project attempts to augment these validators with additional attributes while maintaining the clean integration Data Annotations provides. A Quick Word About Data Annotations Extensions The Data Annotations Extensions project can be found at http://dataannotationsextensions.org/, and currently provides 11 additional validation attributes (ex: Email, EqualTo, Min/Max) on top of Data Annotations’ original 4.  You can find a current list of the validation attributes on the afore mentioned website. The core library provides server-side validation attributes that can be used in any .NET 4.0 project (no MVC dependency). There is also an easily pluggable client-side validation library which can be used in ASP.NET MVC 3 projects using unobtrusive jquery validation (only MVC3 included javascript files are required). On to the Preview Let’s say you had the following “Customer” domain model (or view model, depending on your project structure) in an MVC 3 project: public class Customer { public string Email { get; set; } public int Age { get; set; } public string ProfilePictureLocation { get; set; } } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } When it comes time to create/edit this Customer, you will probably have a CustomerController and a simple form that just uses one of the Html.EditorFor() methods that the ASP.NET MVC tooling generates for you (or you can write yourself).  It should look something like this: With no validation, the customer can enter nonsense for an email address, and then can even report their age as a negative number!  With the built-in Data Annotations validation, I could do a bit better by adding a Range to the age, adding a RegularExpression for email (yuck!), and adding some required attributes.  However, I’d still be able to report my age as 10.75 years old, and my profile picture could still be any string.  
Let’s use Data Annotations along with this project, Data Annotations Extensions, and see what we can get: public class Customer { [Email] [Required] public string Email { get; set; }   [Integer] [Min(1, ErrorMessage="Unless you are benjamin button you are lying.")] [Required] public int Age { get; set; }   [FileExtensions("png|jpg|jpeg|gif")] public string ProfilePictureLocation { get; set; } } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Now let’s try to put in some invalid values and see what happens: That is very nice validation, all done on the client side (will also be validated on the server).  Also, the Customer class validation attributes are very easy to read and understand. Another bonus: Since Data Annotations Extensions can integrate with MVC 3’s unobtrusive validation, no additional scripts are required! Now that we’ve seen our target, let’s take a look at how to get there within a new MVC 3 project. Adding Data Annotations Extensions To Your Project First we will File->New Project and create an ASP.NET MVC 3 project.  I am going to use Razor for these examples, but any view engine can be used in practice.  Now go into the NuGet Extension Manager (right click on references and select add Library Package Reference) and search for “DataAnnotationsExtensions.”  You should see the following two packages: The first package is for server-side validation scenarios, but since we are using MVC 3 and would like comprehensive sever and client validation support, click on the DataAnnotationsExtensions.MVC3 project and then click Install.  This will install the Data Annotations Extensions server and client validation DLLs along with David Ebbo’s web activator (which enables the validation attributes to be registered with MVC 3). Now that Data Annotations Extensions is installed you have all you need to start doing advanced model validation.  If you are already using Data Annotations in your project, just making use of the additional validation attributes will provide client and server validation automatically.  However, assuming you are starting with a blank project I’ll walk you through setting up a controller and model to test with. Creating Your Model In the Models folder, create a new User.cs file with a User class that you can use as a model.  To start with, I’ll use the following class: public class User { public string Email { get; set; } public string Password { get; set; } public string PasswordConfirm { get; set; } public string HomePage { get; set; } public int Age { get; set; } } Next, create a simple controller with at least a Create method, and then a matching Create view (note, you can do all of this via the MVC built-in tooling).  
Your files will look something like this: UserController.cs: public class UserController : Controller { public ActionResult Create() { return View(new User()); }   [HttpPost] public ActionResult Create(User user) { if (!ModelState.IsValid) { return View(user); }   return Content("User valid!"); } } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Create.cshtml: @model NuGetValidationTester.Models.User   @{ ViewBag.Title = "Create"; }   <h2>Create</h2>   <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>   @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>User</legend> @Html.EditorForModel() <p> <input type="submit" value="Create" /> </p> </fieldset> } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } In the Create.cshtml view, note that we are referencing jquery validation and jquery unobtrusive (jquery is referenced in the layout page).  These MVC 3 included scripts are the only ones you need to enjoy both the basic Data Annotations validation as well as the validation additions available in Data Annotations Extensions.  These references are added by default when you use the MVC 3 “Add View” dialog on a modification template type. Now when we go to /User/Create we should see a form for editing a User Since we haven’t yet added any validation attributes, this form is valid as shown (including no password, email and an age of 0).  With the built-in Data Annotations attributes we can make some of the fields required, and we could use a range validator of maybe 1 to 110 on Age (of course we don’t want to leave out supercentenarians) but let’s go further and validate our input comprehensively using Data Annotations Extensions.  The new and improved User.cs model class. 
public class User
{
    [Required]
    [Email]
    public string Email { get; set; }

    [Required]
    public string Password { get; set; }

    [Required]
    [EqualTo("Password")]
    public string PasswordConfirm { get; set; }

    [Url]
    public string HomePage { get; set; }

    [Integer]
    [Min(1)]
    public int Age { get; set; }
}

Now let’s re-run our form and try to use some invalid values: All of the validation errors you see above occurred on the client, without ever even hitting submit. The validation is also checked on the server, which is a good practice since client validation is easily bypassed. That’s all you need to do to start a new project and include Data Annotations Extensions, and of course you can integrate it into an existing project just as easily.

Nitpickers Corner

ASP.NET MVC 3 futures defines four new data annotations attributes which this project has as well: CreditCard, Email, Url and EqualTo. Unfortunately referencing MVC 3 futures necessitates taking a dependency on MVC 3 in your model layer, which may be unadvisable in a multi-tiered project. Data Annotations Extensions keeps the server and client side libraries separate, so using the project’s validation attributes doesn’t require you to take any additional dependencies in your model layer, while still allowing for the rich client validation experience if you are using MVC 3.

Custom Error Message and Globalization: Since the Data Annotations Extensions are built on top of Data Annotations, you have the ability to define your own static error messages and even to use resource files for very customizable error messages.

Available Validators: Please see the project site at http://dataannotationsextensions.org/ for an up-to-date list of the new validators included in this project. As of this post, the following validators are available: CreditCard, Date, Digits, Email, EqualTo, FileExtensions, Integer, Max, Min, Numeric, Url.

Conclusion

Hopefully I’ve illustrated how easy it is to add server and client validation to your MVC 3 projects, and how easily you can extend the available validation options to meet real world needs. The Data Annotations Extensions project is fully open source under the BSD license. Any feedback would be greatly appreciated. More information than you require, along with links to the source code, is available at http://dataannotationsextensions.org/. Enjoy!
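A quick footnote on the Custom Error Message and Globalization point above: these validators plug into the standard Data Annotations pipeline, so the usual ErrorMessage / ErrorMessageResourceType / ErrorMessageResourceName properties inherited from ValidationAttribute should apply. The snippet below is only an illustrative sketch, not code from the project; it would live inside the User class shown above, and MyResources / HomePageInvalid are a hypothetical resource class and key.

// Static error message on a Data Annotations Extensions validator (assumed usage).
[Required]
[Email(ErrorMessage = "Please enter a valid email address.")]
public string Email { get; set; }

// Resource-based error message for localization (MyResources is hypothetical).
[Url(ErrorMessageResourceType = typeof(MyResources),
     ErrorMessageResourceName = "HomePageInvalid")]
public string HomePage { get; set; }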

    Read the article

  • Metro: Declarative Data Binding

    - by Stephen.Walther
    The goal of this blog post is to describe how declarative data binding works in the WinJS library. In particular, you learn how to use both the data-win-bind and data-win-bindsource attributes. You also learn how to use calculated properties and converters to format the value of a property automatically when performing data binding. By taking advantage of WinJS data binding, you can use the Model-View-ViewModel (MVVM) pattern when building Metro style applications with JavaScript. By using the MVVM pattern, you can prevent your JavaScript code from spinning into chaos. The MVVM pattern provides you with a standard pattern for organizing your JavaScript code which results in a more maintainable application. Using Declarative Bindings You can use the data-win-bind attribute with any HTML element in a page. The data-win-bind attribute enables you to bind (associate) an attribute of an HTML element to the value of a property. Imagine, for example, that you want to create a product details page. You want to show a product object in a page. In that case, you can create the following HTML page to display the product details: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Product Details</h1> <div class="field"> Product Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Product Price: <span data-win-bind="innerText:price"></span> </div> <div class="field"> Product Picture: <br /> <img data-win-bind="src:photo;alt:name" /> </div> </body> </html> The HTML page above contains three data-win-bind attributes – one attribute for each product property displayed. You use the data-win-bind attribute to set properties of the HTML element associated with the data-win-attribute. The data-win-bind attribute takes a semicolon delimited list of element property names and data source property names: data-win-bind=”elementPropertyName:datasourcePropertyName; elementPropertyName:datasourcePropertyName;…” In the HTML page above, the first two data-win-bind attributes are used to set the values of the innerText property of the SPAN elements. The last data-win-bind attribute is used to set the values of the IMG element’s src and alt attributes. By the way, using data-win-bind attributes is perfectly valid HTML5. The HTML5 standard enables you to add custom attributes to an HTML document just as long as the custom attributes start with the prefix data-. So you can add custom attributes to an HTML5 document with names like data-stephen, data-funky, or data-rover-dog-is-hungry and your document will validate. The product object displayed in the page above with the data-win-bind attributes is created in the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var product = { name: "Tesla", price: 80000, photo: "/images/TeslaPhoto.png" }; WinJS.Binding.processAll(null, product); } }; app.start(); })(); In the code above, a product object is created with a name, price, and photo property. 
The WinJS.Binding.processAll() method is called to perform the actual binding (Don’t confuse WinJS.Binding.processAll() and WinJS.UI.processAll() – these are different methods). The first parameter passed to the processAll() method represents the root element for the binding. In other words, binding happens on this element and its child elements. If you provide the value null, then binding happens on the entire body of the document (document.body). The second parameter represents the data context. This is the object that has the properties which are displayed with the data-win-bind attributes. In the code above, the product object is passed as the data context parameter. Another word for data context is view model.  Creating Complex View Models In the previous section, we used the data-win-bind attribute to display the properties of a simple object: a single product. However, you can use binding with more complex view models including view models which represent multiple objects. For example, the view model in the following default.js file represents both a customer and a product object. Furthermore, the customer object has a nested address object: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone", address: { street: "1 Rocky Way", city: "Bedrock", country: "USA" } }, product: { name: "Bowling Ball", price: 34.55 } }; WinJS.Binding.processAll(null, viewModel); } }; app.start(); })(); The following page displays the customer (including the customer address) and the product. Notice that you can use dot notation to refer to child objects in a view model such as customer.address.street. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:customer.firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:customer.lastName"></span> </div> <div class="field"> Address: <address> <span data-win-bind="innerText:customer.address.street"></span> <br /> <span data-win-bind="innerText:customer.address.city"></span> <br /> <span data-win-bind="innerText:customer.address.country"></span> </address> </div> <h1>Product</h1> <div class="field"> Name: <span data-win-bind="innerText:product.name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:product.price"></span> </div> </body> </html> A view model can be as complicated as you need and you can bind the view model to a view (an HTML document) by using declarative bindings. Creating Calculated Properties You might want to modify a property before displaying the property. For example, you might want to format the product price property before displaying the property. You don’t want to display the raw product price “80000”. Instead, you want to display the formatted price “$80,000”. You also might need to combine multiple properties. 
For example, you might need to display the customer full name by combining the values of the customer first and last name properties. In these situations, it is tempting to call a function when performing binding. For example, you could create a function named fullName() which concatenates the customer first and last name. Unfortunately, the WinJS library does not support the following syntax: <span data-win-bind=”innerText:fullName()”></span> Instead, in these situations, you should create a new property in your view model that has a getter. For example, the customer object in the following default.js file includes a property named fullName which combines the values of the firstName and lastName properties: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", get fullName() { return this.firstName + " " + this.lastName; } }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); The customer object has a firstName, lastName, and fullName property. Notice that the fullName property is defined with a getter function. When you read the fullName property, the values of the firstName and lastName properties are concatenated and returned. The following HTML page displays the fullName property in an H1 element. You can use the fullName property in a data-win-bind attribute in exactly the same way as any other property. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1 data-win-bind="innerText:fullName"></h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </body> </html> Creating a Converter In the previous section, you learned how to format the value of a property by creating a property with a getter. This approach makes sense when the formatting logic is specific to a particular view model. If, on the other hand, you need to perform the same type of formatting for multiple view models then it makes more sense to create a converter function. A converter function is a function which you can apply whenever you are using the data-win-bind attribute. Imagine, for example, that you want to create a general function for displaying dates. You always want to display dates using a short format such as 12/25/1988. The following JavaScript file – named converters.js – contains a shortDate() converter: (function (WinJS) { var shortDate = WinJS.Binding.converter(function (date) { return date.getMonth() + 1 + "/" + date.getDate() + "/" + date.getFullYear(); }); // Export shortDate WinJS.Namespace.define("MyApp.Converters", { shortDate: shortDate }); })(WinJS); The file above uses the Module Pattern, a pattern which is used through the WinJS library. 
To learn more about the Module Pattern, see my blog entry on namespaces and modules: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-namespaces-and-modules.aspx The file contains the definition for a converter function named shortDate(). This function converts a JavaScript date object into a short date string such as 12/1/1988. The converter function is created with the help of the WinJS.Binding.converter() method. This method takes a normal function and converts it into a converter function. Finally, the shortDate() converter is added to the MyApp.Converters namespace. You can call the shortDate() function by calling MyApp.Converters.shortDate(). The default.js file contains the customer object that we want to bind. Notice that the customer object has a firstName, lastName, and birthday property. We will use our new shortDate() converter when displaying the customer birthday property: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", birthday: new Date("12/1/1988") }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); We actually use our shortDate converter in the HTML document. The following HTML document displays all of the customer properties: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script type="text/javascript" src="js/converters.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> <div class="field"> Birthday: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> </div> </body> </html> Notice the data-win-bind attribute used to display the birthday property. It looks like this: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> The shortDate converter is applied to the birthday property when the birthday property is bound to the SPAN element’s innerText property. Using data-win-bindsource Normally, you pass the view model (the data context) which you want to use with the data-win-bind attributes in a page by passing the view model to the WinJS.Binding.processAll() method like this: WinJS.Binding.processAll(null, viewModel); As an alternative, you can specify the view model declaratively in your markup by using the data-win-datasource attribute. 
For example, the following default.js script exposes a view model with the fully-qualified name of MyWinWebApp.viewModel: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { // Create view model var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone" }, product: { name: "Bowling Ball", price: 12.99 } }; // Export view model to be seen by universe WinJS.Namespace.define("MyWinWebApp", { viewModel: viewModel }); // Process data-win-bind attributes WinJS.Binding.processAll(); } }; app.start(); })(); In the code above, a view model which represents a customer and a product is exposed as MyWinWebApp.viewModel. The following HTML page illustrates how you can use the data-win-bindsource attribute to bind to this view model: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div data-win-bindsource="MyWinWebApp.viewModel.customer"> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </div> <h1>Product</h1> <div data-win-bindsource="MyWinWebApp.viewModel.product"> <div class="field"> Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:price"></span> </div> </div> </body> </html> The data-win-bindsource attribute is used twice in the page above: it is used with the DIV element which contains the customer details and it is used with the DIV element which contains the product details. If an element has a data-win-bindsource attribute then all of the child elements of that element are affected. The data-win-bind attributes of all of the child elements are bound to the data source represented by the data-win-bindsource attribute. Summary The focus of this blog entry was data binding using the WinJS library. You learned how to use the data-win-bind attribute to bind the properties of an HTML element to a view model. We also discussed several advanced features of data binding. We examined how to create calculated properties by including a property with a getter in your view model. We also discussed how you can create a converter function to format the value of a view model property when binding the property. Finally, you learned how to use the data-win-bindsource attribute to specify a view model declaratively.

    Read the article

  • Using Stub Objects

    - by user9154181
Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

-z stub
This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

STUB_OBJECT Mapfile Directive
In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

ASSERT Mapfile Directive
All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub.

Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case and of serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

Stub Objects

A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

Stub objects can be used to solve a variety of build problems:

Speed
Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
Complexity/Correctness
In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

Dependency Cycles
It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

Stub objects are built from a mapfile, which must satisfy the following requirements:

The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.

All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface.

All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

% cat idx5.c
int _idx5[5] = { 0, 1, 2, 3, 4 };
#pragma weak idx5 = _idx5

int
idx5_func(int index)
{
        if ((index < 0) || (index > 4))
                return (-1);
        return (_idx5[index]);
}

A mapfile is required to describe the interface provided by this shared object.

% cat mapfile
$mapfile_version 2
STUB_OBJECT;
SYMBOL_SCOPE {
        _idx5 {
                ASSERT { TYPE=data; SIZE=4[5] };
        };
        idx5 {
                ASSERT { BINDING=weak; ALIAS=_idx5 };
        };
        idx5_func;
        local:
                *;
};

The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c
#include <stdio.h>

extern int _idx5[5], idx5[5], idx5_func(int);

int
main(int argc, char **argv)
{
        int i;

        for (i = 0; i < 5; i++)
                (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i));
        return (0);
}

The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

% cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
% ln -s libidx5.so.1 stublib/libidx5.so
% elfdump -d stublib/libidx5.so | grep STUB
    [11]  FLAGS_1  0x4000000  [ STUB ]

The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

% cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
% ./a.out
ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
Killed
% LD_PRELOAD=stublib/libidx5.so.1 ./a.out
ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
Killed

We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

% cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

Once the real object has been built in the lib subdirectory, the program can be run.

% ./a.out
[0] 0 0 0
[1] 1 1 1
[2] 2 2 2
[3] 3 3 3
[4] 4 4 4

Mapfile Changes

The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

Conditional Input
The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

Name        Meaning
...         ...
_ET_DYN     shared object
_ET_EXEC    executable object
_ET_REL     relocatable object
...         ...

STUB_OBJECT Directive
The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

STUB_OBJECT;

A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

SH_ATTR
Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

Section Attribute    Meaning
BITS                 Section is not of type SHT_NOBITS
NOBITS               Section is of type SHT_NOBITS

SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

SIZE
Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

SIZE
The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

$if _ELF32
foo {
        ASSERT { TYPE=data; SIZE=4 }
};
bar {
        ASSERT { TYPE=data; SIZE=20 }
};
$elif _ELF64
foo {
        ASSERT { TYPE=data; SIZE=8 }
};
bar {
        ASSERT { TYPE=data; SIZE=40 }
};
$else
$error UNKNOWN ELFCLASS
$endif

I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:

foo {
        ASSERT { TYPE=data; SIZE=addrsize }
};
bar {
        ASSERT { TYPE=data; SIZE=addrsize[5] }
};

Exported Global Data Is Still A Bad Idea

As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

Conclusions

It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • How do I target a div without a class or id but with all these specific style-attributes

    - by Ben
    I'm restyling a certain page where I don't have access to all the source-HTML. So I'm left over with some hard to target elements within a page I need removed. For example the page is swamped with <div style="float: left; width: 10.25px; height: 284px;"></div>. To get rid of them I tried this in CSS: div[style="float: left; width: 10.25px; height: 284px;"] { display:none !important; } I'd like to fix this using CSS, because the the HTML that needs to be removed is updated using ajax. How do I target a div without a class or id but with all these specific style-attributes? Some more of the source-code (took a while to organize the parsed source): <div> <span class="Block BigPhotoList_Block"> <span class="Photo BigPhotoList_Photo" style="height: 200px"> <a href="/Webwinkel-Product-83760187/Le-Coq-Sportif-Angers-Low.html.html"> <span style="background-image:url(http://61955.static.securearea.eu/Files/2/61000/61955/ProductPhotos/MaxContent/144036303.jpg);" class="Canvas BigPhotoList_Canvas" title="Le-Coq-Sportif-Angers-Low"></span> </a> </span> <span class="Title BigPhotoList_Title"> <a href="/Webwinkel-Product-83760187/Le-Coq-Sportif-Angers-Low.html.html"> Le Coq Sportif Angers Low </a> </span> <span class="Price BigPhotoList_Price" style="font-size: 120%;"> € 89,95 </span> </span> <div style="float: left; width: 10.25px; height: 284px; border-right: 1px dashed #A0A0A0;" class="BigPhotoList_Stippel"></div> <div style="float: left; width: 10.25px; height: 284px;"></div> <span class="Block BigPhotoList_Block"> <span class="Photo BigPhotoList_Photo" style="height: 200px"> <a href="/Webwinkel-Product-83760185/Le-Coq-Sportif-Auveurne-Low.html"> <span style="background-image:url(http://61955.static.securearea.eu/Files/2/61000/61955/ProductPhotos/MaxContent/144036301.jpg);" class="Canvas BigPhotoList_Canvas" title="Le-Coq-Sportif-Auveurne-Low"></span> </a> </span> <span class="Title BigPhotoList_Title"> <a href="/Webwinkel-Product-83760185/Le-Coq-Sportif-Auveurne-Low.html"> Le Coq Sportif Auveurne Low </a> </span> <span class="Price BigPhotoList_Price" style="font-size: 120%;"> € 79,95 </span> </span> <div style="float: left; width: 10.25px; height: 284px; border-right: 1px dashed #A0A0A0;" class="BigPhotoList_Stippel"></div> <div style="float: left; width: 10.25px; height: 284px;"></div> <span class="Block BigPhotoList_Block"> <span class="Photo BigPhotoList_Photo" style="height: 200px"> <a href="/Webwinkel-Product-83760191/Le-Coq-Sportif-Bordeaux-Low.html"> <span style="background-image:url(http://61955.static.securearea.eu/Files/2/61000/61955/ProductPhotos/MaxContent/144036307.jpg);" class="Canvas BigPhotoList_Canvas" title="Le-Coq-Sportif-Bordeaux-Low"></span> </a> </span> <span class="Title BigPhotoList_Title"> <a href="/Webwinkel-Product-83760191/Le-Coq-Sportif-Bordeaux-Low.html"> Le Coq Sportif Bordeaux Low </a> </span> <span class="Price BigPhotoList_Price" style="font-size: 120%;"> € 99,95 </span> </span> <span class="Block BigPhotoList_Block"> <span class="Photo BigPhotoList_Photo" style="height: 200px"> <a href="/Webwinkel-Product-83760181/Le-Coq-Sportif-Cannet-Low.html"> <span style="background-image:url(http://61955.static.securearea.eu/Files/2/61000/61955/ProductPhotos/MaxContent/144036297.jpg);" class="Canvas BigPhotoList_Canvas" title="Le-Coq-Sportif-Cannet-Low"></span> </a> </span> <span class="Title BigPhotoList_Title"> <a href="/Webwinkel-Product-83760181/Le-Coq-Sportif-Cannet-Low.html"> Le Coq Sportif Cannet Low </a> </span> <span class="Discount BigPhotoList_Discount" 
style="font-size: 120%;"> € 99,95 </span> <span class="Price BigPhotoList_Price" style="font-size: 120%;"> € 94,95 </span> </span> <div style="float: left; width: 10.25px; height: 284px; border-right: 1px dashed #A0A0A0;" class="BigPhotoList_Stippel"></div> <div style="float: left; width: 10.25px; height: 284px;"></div> <span class="Block BigPhotoList_Block"> <span class="Photo BigPhotoList_Photo" style="height: 200px"> <a href="/Webwinkel-Product-83760183/Le-Coq-Sportif-Rodez-Low.html"> <span style="background-image:url(http://61955.static.securearea.eu/Files/2/61000/61955/ProductPhotos/MaxContent/144036299.jpg);" class="Canvas BigPhotoList_Canvas" title="Le-Coq-Sportif-Rodez-Low"></span> </a> </span> <span class="Title BigPhotoList_Title"> <a href="/Webwinkel-Product-83760183/Le-Coq-Sportif-Rodez-Low.html"> Le Coq Sportif Rodez Low </a> </span> <span class="Price BigPhotoList_Price" style="font-size: 120%;"> € 99,95 </span> </span> <div style="float: left; width: 10.25px; height: 284px; border-right: 1px dashed #A0A0A0;" class="BigPhotoList_Stippel"></div> <div style="float: left; width: 10.25px; height: 284px;"></div> <span class="Block BigPhotoList_Block"> <span class="Photo BigPhotoList_Photo" style="height: 200px"> <a href="/Webwinkel-Product-83760189/Le-Coq-Sportif-Sedan-Low.html"> <span style="background-image:url(http://61955.static.securearea.eu/Files/2/61000/61955/ProductPhotos/MaxContent/144036305.jpg);" class="Canvas BigPhotoList_Canvas" title="Le-Coq-Sportif-Sedan-Low"></span> </a> </span> <span class="Title BigPhotoList_Title"> <a href="/Webwinkel-Product-83760189/Le-Coq-Sportif-Sedan-Low.html"> Le Coq Sportif Sedan Low </a> </span> <span class="Discount BigPhotoList_Discount" style="font-size: 120%;"> € 109,95 </span> <span class="Price BigPhotoList_Price" style="font-size: 120%;"> € 99,95 </span> </span> </div>

    Read the article

  • Compare 2 sets of data in Excel and returning a value when multiple columns match

    - by Susan C
I have a data set for employees that contains name and 3 attributes (job function, job grade and location). I then have a data set for open positions that contains the requisition number and 3 attributes (job function, job grade and job location). For every employee, I would like the three attributes associated with them to be compared to the same three attributes of the open positions, and have the corresponding requisition numbers displayed for each employee where there is a match.

    Read the article

  • Rails and MongoDB with MongoMapper

    - by FCastellanos
I'm new to Rails development and I'm starting with MongoDB also. I have been following this Railscast tutorial about complex forms with Rails but I'm using MongoDB as my database. I'm having no problems inserting documents with its children and retrieving the data to the edit form, but when I try to update it I get this error:

undefined method `assert_valid_keys' for false:FalseClass

This is my entity class:

class Project
  include MongoMapper::Document

  key :name, String, :required => true
  key :priority, Integer

  many :tasks
  after_update :save_tasks

  def task_attributes=(task_attributes)
    task_attributes.each do |attributes|
      if attributes[:id].blank?
        tasks.build(attributes)
      else
        task = tasks.detect { |t| t.id.to_s == attributes[:id].to_s }
        task.attributes = attributes
      end
    end
  end

  def save_tasks
    tasks.each do |t|
      t.save(false)
    end
  end
end

Does anyone know what's happening here? Thanks

    Read the article

  • Javascript to add cdata section on the fly?

    - by Chris G.
    I'm having trouble with special characters that exist in an xml node attribute. To combat this, I'm trying to render the attributes as child nodes and, where necessary, using cdata sections to get around the special characters. The problem is, I can't seem to get the cdata section appended to the node correctly. I'm iterating over the source xml node's attributes and creating new nodes. If the attribute.name = "description" I want to put the attribute.text() in a cdata section and append the new node. That's where I jump the track. // newXMLData is the new xml document that I've created in memory for (var ctr =0;ctr< this.attributes.length;ctr++){ // iterate over the attributes if( this.attributes[ctr].name =="Description"){ // if the attribute name is "Description" add a CDATA section var thisNodeName = this.attributes[ctr].name; newXMLDataNode.append("<"+thisNodeName +"></"+ thisNodeName +">" ); var cdata = newXMLData.createCDATASection('test'); // here's where it breaks. } else { // It's not "Description" so just append the new node. newXMLDataNode.append("<"+ this.attributes[ctr].name +">" + $(this.attributes[ctr]).text() + "</"+ this.attributes[ctr].name +">" ); } } Any ideas? Is there another way to add a cdata section? Here's a sample snippet of the source... <row pSiteID="4" pSiteTile="Test Site Name " pSiteURL="http://www.cnn.com" ID="1" Description="<div>blah blah blah since June 2007.&amp;nbsp; T<br>&amp;nbsp;<br>blah blah blah blah&amp;nbsp; </div>" CreatedDate="2010-09-20 14:46:18" Comments="Comments example.&#10;" > here's what I'm trying to create... <Site> <PSITEID>4</PSITEID> <PSITETILE>Test Site Name</PSITETILE> <PSITEURL>http://www.cnn.com</PSITEURL> <ID>1</ID> <DESCRIPTION><![CDATA[<div>blah blah blah since June 2007.&amp;nbsp; T<br>&amp;nbsp;<br>blah blah blah blah&amp;nbsp; </div ]]></DESCRIPTION> <CREATEDDATE>2010-09-20 14:46:18</CREATEDDATE> <COMMENTS><![CDATA[ Comments example.&#10;]]></COMMENTS> </Site>

    Read the article

  • Change the for loop with lambda(C#3.0)

    - by Newbie
Is it possible to do the same using a lambda?

for (int i = 0; i < objEntityCode.Count; i++)
{
    options.Attributes[i] = new EntityCodeKey();
    options.Attributes[i].EntityCode = objEntityCode[i].EntityCodes;
    options.Attributes[i].OrganizationCode = Constants.ORGANIZATION_CODE;
}

I mean to say, I want to rewrite the statement using a lambda. I tried:

Enumerable.Range(0, objEntityCode.Count - 1).Foreach(i =>
{
    options.Attributes[i] = new EntityCodeKey();
    options.Attributes[i].EntityCode = objEntityCode[i].EntityCodes;
    options.Attributes[i].OrganizationCode = Constants.ORGANIZATION_CODE;
});

but it is not working. I am using C# 3.0.
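For what it's worth, a likely reason the attempt above does not compile is that Enumerable.Range takes a start and a count (not an upper bound), and the base class library in .NET 3.5 has no ForEach extension method on IEnumerable<T> (List<T>.ForEach does exist, and the method name is case sensitive). A minimal sketch along those lines, reusing the names from the question (objEntityCode, options, EntityCodeKey and Constants.ORGANIZATION_CODE are assumed to be defined as in the original loop):

// Enumerable.Range(start, count) generates the indexes 0 .. Count-1;
// ToList() materializes the sequence so List<T>.ForEach is available.
Enumerable.Range(0, objEntityCode.Count).ToList().ForEach(i =>
{
    // Object initializer syntax is available in C# 3.0.
    options.Attributes[i] = new EntityCodeKey
    {
        EntityCode = objEntityCode[i].EntityCodes,
        OrganizationCode = Constants.ORGANIZATION_CODE
    };
});

Whether a lambda actually reads better than the original for loop here is debatable; the plain loop is arguably clearer.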

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering a list using ModelMetadata

    - by rajbk
This post looks at how to control paging, sorting and filtering when displaying a list of data by specifying attributes in your Model using the ASP.NET MVC framework and the excellent MVCContrib library. It also shows how to hide/show columns and control the formatting of data using attributes. This uses the Northwind database. A sample project is attached at the end of this post.

Let’s start by looking at a class called ProductViewModel. The properties in the class are decorated with attributes. The OrderBy attribute tells the system that the Model can be sorted using that property. The SearchFilter attribute tells the system that filtering is allowed on that property. Filtering type is set by the FilterType enum which currently supports Equals and Contains. The ScaffoldColumn attribute specifies if a column is hidden or not. The DisplayFormat attribute specifies how the data is formatted.

public class ProductViewModel
{
    [OrderBy(IsDefault = true)]
    [ScaffoldColumn(false)]
    public int? ProductID { get; set; }

    [SearchFilter(FilterType.Contains)]
    [OrderBy]
    [DisplayName("Product Name")]
    public string ProductName { get; set; }

    [OrderBy]
    [DisplayName("Unit Price")]
    [DisplayFormat(DataFormatString = "{0:c}")]
    public System.Nullable<decimal> UnitPrice { get; set; }

    [DisplayName("Category Name")]
    public string CategoryName { get; set; }

    [SearchFilter]
    [ScaffoldColumn(false)]
    public int? CategoryID { get; set; }

    [SearchFilter]
    [ScaffoldColumn(false)]
    public int? SupplierID { get; set; }

    [OrderBy]
    public bool Discontinued { get; set; }
}

Before we explore the code further, let’s look at the UI. The UI has a section for filtering the data. The column headers with links are sortable. Paging is also supported with the help of a pager row. The pager is rendered using the MVCContrib Pager component. The data is displayed using a customized version of the MVCContrib Grid component. The customization was done in order for the Grid to be aware of the attributes mentioned above.

Now, let’s look at what happens when we perform actions on this page. The diagram below shows the process: The form on the page has its method set to “GET”, therefore we see all the parameters in the query string. The query string is shown in blue above. This query gets routed to an action called Index with parameters of type ProductViewModel and PageSortOptions. The parameters in the query string get mapped to the input parameters using model binding. The ProductViewModel object created has the information needed to filter data while the PageSortOptions object is used for paging and sorting the data.

The last block in the figure above shows how the filtered and paged list is created. We receive a product list from our product repository (which is of type IQueryable) and first filter it by calling the AsFiltered extension method passing in the productFilters object and then call the AsPagination extension method passing in the pageSort object. The AsFiltered extension method looks at the type of the filter instance passed in. It skips properties in the instance that do not have the SearchFilter attribute. For properties that have the SearchFilter attribute, it adds filter expression trees to filter against the IQueryable data. The AsPagination extension method looks at the type of the IQueryable and ensures that the column being sorted on has the OrderBy attribute. If it does not find one, it looks for the default sort field [OrderBy(IsDefault = true)].
It is required that at least one attribute in your model has the [OrderBy(IsDefault = true)]. This is because a person could be performing paging without specifying an order by column. As you may recall, the LINQ Skip method now requires that you call an OrderBy method before it. Therefore we need a default order by column to perform paging. The extension method adds an order expression tree to the IQueryable and calls the MVCContrib AsPagination extension method to page the data.

Implementation Notes

Auto Postback
The search filter region automatically performs a GET request anytime the dropdown selection is changed. This is implemented using the following jQuery snippet:

$(document).ready(function () {
    $("#productSearch").change(function () {
        this.submit();
    });
});

Strongly Typed View
The code used in the Action method is shown below:

public ActionResult Index(ProductViewModel productFilters, PageSortOptions pageSortOptions)
{
    var productPagedList = productRepository.GetProductsProjected().AsFiltered(productFilters).AsPagination(pageSortOptions);

    var productViewFilterContainer = new ProductViewFilterContainer();
    productViewFilterContainer.Fill(productFilters.CategoryID, productFilters.SupplierID, productFilters.ProductName);

    var gridSortOptions = new GridSortOptions { Column = pageSortOptions.Column, Direction = pageSortOptions.Direction };

    var productListContainer = new ProductListContainerModel
    {
        ProductPagedList = productPagedList,
        ProductViewFilterContainer = productViewFilterContainer,
        GridSortOptions = gridSortOptions
    };

    return View(productListContainer);
}

As you see above, the object that is returned to the view is of type ProductListContainerModel. This contains all the information needed for the view to render the Search filter section (including dropdowns), the Html.Pager (MVCContrib) and the Html.Grid (from MVCContrib). It also stores the state of the search filters so that they can recreate themselves when the page reloads (Viewstate, I miss you! :0) The class diagram for the container class is shown below.

Custom MVCContrib Grid
The MVCContrib grid default behavior was overridden so that it would auto generate the columns and format the columns based on the metadata and also make it aware of our custom attributes (see MetaDataGridModel in the sample code). The Grid ensures that the ShowForDisplay on the column is set to true. This can also be set by the ScaffoldColumn attribute (ref: http://bradwilson.typepad.com/blog/2009/10/aspnet-mvc-2-templates-part-2-modelmetadata.html). Column headers are set using the DisplayName attribute. Column sorting is set using the OrderBy attribute. The data is formatted using the DisplayFormat attribute.

Generic Extension methods for Sorting and Filtering
The extension method AsFiltered takes in an IQueryable<T> and uses expression trees to query against the IQueryable data. The query is constructed using the Model metadata and the properties of the T filter (productFilters in our case). Properties in the Model that do not have the SearchFilter attribute are skipped when creating the filter expression tree. It returns an IQueryable<T>. The extension method AsPagination takes in an IQueryable<T> and first ensures that the column being sorted on has the OrderBy attribute. If not, we look for the default OrderBy column ([OrderBy(IsDefault = true)]). We then build an expression tree to sort on this column. We finally hand off the call to the MVCContrib AsPagination which returns an IPagination<T>.
This type as you can see in the class diagram above is passed to the view and used by the MVCContrib Grid and Pager components. Custom Provider To get the system to recognize our custom attributes, we create our MetadataProvider as mentioned in this article (http://bradwilson.typepad.com/blog/2010/01/why-you-dont-need-modelmetadataattributes.html) protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName) { ModelMetadata metadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);   SearchFilterAttribute searchFilterAttribute = attributes.OfType<SearchFilterAttribute>().FirstOrDefault(); if (searchFilterAttribute != null) { metadata.AdditionalValues.Add(Globals.SearchFilterAttributeKey, searchFilterAttribute); }   OrderByAttribute orderByAttribute = attributes.OfType<OrderByAttribute>().FirstOrDefault(); if (orderByAttribute != null) { metadata.AdditionalValues.Add(Globals.OrderByAttributeKey, orderByAttribute); }   return metadata; } We register our MetadataProvider in Global.asax.cs. protected void Application_Start() { AreaRegistration.RegisterAllAreas();   RegisterRoutes(RouteTable.Routes);   ModelMetadataProviders.Current = new MvcFlan.QueryModelMetaDataProvider(); } Bugs, Comments and Suggestions are welcome! You can download the sample code below. This code is purely experimental. Use at your own risk. Download Sample Code (VS 2010 RTM) MVCNorthwindSales.zip
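One piece the post uses throughout but never shows is the definition of the OrderBy and SearchFilter attributes themselves; the real definitions ship in the sample download above, so the following is only a rough sketch of what such attribute classes could look like (the names follow the post, the implementation details are assumptions):

public enum FilterType
{
    Equals,
    Contains
}

[AttributeUsage(AttributeTargets.Property)]
public class OrderByAttribute : Attribute
{
    // Marks the property as sortable; IsDefault marks the fallback sort column.
    public bool IsDefault { get; set; }
}

[AttributeUsage(AttributeTargets.Property)]
public class SearchFilterAttribute : Attribute
{
    // Marks the property as filterable; FilterType controls Equals vs. Contains matching.
    public SearchFilterAttribute() : this(FilterType.Equals) { }

    public SearchFilterAttribute(FilterType filterType)
    {
        FilterType = filterType;
    }

    public FilterType FilterType { get; private set; }
}

A sketch like this is enough to support the usage shown in ProductViewModel ([OrderBy(IsDefault = true)], [SearchFilter(FilterType.Contains)], and so on); the extension methods and the metadata provider then read these attributes via reflection or ModelMetadata.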

    Read the article

  • How to show linked file size and type in title attributes using jquery?

    - by jitendra
For example, before:

<a target="_blank" href="http://www.adobe.com/devnet/acrobat/pdfs/reader_overview.pdf">
  Adobe Reader JavaScript specification
</a>

Because the file is a PDF, the title should be title="PDF, 93KB, opens in a new window", so after:

<a title="PDF, 93KB, opens in a new window" target="_blank" href="http://www.adobe.com/devnet/acrobat/pdfs/reader_overview.pdf">
  Adobe Reader JavaScript specification
</a>

    Read the article

  • I create a JPanel and GridBagLayout within an object but when I get it in the main object, attributes are missing

    - by chickeneaterguy
    public oijoij() { String name = "Jackie"; int priority = 50; int minPriority = 90; setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setBounds(100, 100, 450, 300); contentPane = new JPanel(); contentPane.setBorder(new EmptyBorder(5, 5, 5, 5)); contentPane.setLayout(new BorderLayout(0, 0)); setContentPane(contentPane); JPanel panel = new JPanel(); GridBagLayout gbc_panel = new GridBagLayout(); gbc_panel.columnWidths = new int[]{0,0,0}; gbc_panel.rowHeights = new int[]{0, 0, 0, 0, 0, 0}; gbc_panel.columnWeights = new double[]{0.0, 0.0, Double.MIN_VALUE}; gbc_panel.rowWeights = new double[]{0.0, 0.0, 0.0, 0.0, 0.0, Double.MIN_VALUE}; panel.setBorder(new LineBorder(new Color(0,0,0),1)); panel.setLayout(gbc_panel); panel.setAlignmentX(Component.LEFT_ALIGNMENT); panel.setMinimumSize(new Dimension(110,110)); panel.setPreferredSize(new Dimension(110, 110)); panel.setSize(new Dimension(110,110)); JLabel lblNewLabel = new JLabel("Process ID:"); GridBagConstraints gbc_lblNewLabel = new GridBagConstraints(); gbc_lblNewLabel.gridheight = 2; gbc_lblNewLabel.insets = new Insets(0, 0, 5, 5); gbc_lblNewLabel.gridx = 0; gbc_lblNewLabel.gridy = 0; panel.add(lblNewLabel, gbc_lblNewLabel); JLabel lblNewLabel_1 = new JLabel(name); GridBagConstraints gbc_lblNewLabel_1 = new GridBagConstraints(); gbc_lblNewLabel_1.gridheight = 2; gbc_lblNewLabel_1.insets = new Insets(0, 0, 5, 0); gbc_lblNewLabel_1.gridx = 1; gbc_lblNewLabel_1.gridy = 0; panel.add(lblNewLabel_1, gbc_lblNewLabel_1); JLabel lblNewLabel_2 = new JLabel("Priority:"); GridBagConstraints gbc_lblNewLabel_2 = new GridBagConstraints(); gbc_lblNewLabel_2.insets = new Insets(0, 0, 5, 5); gbc_lblNewLabel_2.gridx = 0; gbc_lblNewLabel_2.gridy = 2; panel.add(lblNewLabel_2, gbc_lblNewLabel_2); JLabel lblNum = new JLabel(Integer.toString(priority)); GridBagConstraints gbc_lblNum = new GridBagConstraints(); gbc_lblNum.insets = new Insets(0, 0, 5, 0); gbc_lblNum.gridx = 1; gbc_lblNum.gridy = 2; panel.add(lblNum, gbc_lblNum); JLabel lblNewLabel_3 = new JLabel("Min Priority:"); GridBagConstraints gbc_lblNewLabel_3 = new GridBagConstraints(); gbc_lblNewLabel_3.insets = new Insets(0, 0, 5, 5); gbc_lblNewLabel_3.gridx = 0; gbc_lblNewLabel_3.gridy = 3; panel.add(lblNewLabel_3, gbc_lblNewLabel_3); JLabel lblMp = new JLabel(Integer.toString(minPriority)); GridBagConstraints gbc_lblMp = new GridBagConstraints(); gbc_lblMp.insets = new Insets(0, 0, 5, 0); gbc_lblMp.gridx = 1; gbc_lblMp.gridy = 3; panel.add(lblMp, gbc_lblMp); JLabel lblTimeSlice = new JLabel("Time Slice:"); GridBagConstraints gbc_lblTimeSlice = new GridBagConstraints(); gbc_lblTimeSlice.insets = new Insets(0, 0, 0, 5); gbc_lblTimeSlice.gridx = 0; gbc_lblTimeSlice.gridy = 4; panel.add(lblTimeSlice, gbc_lblTimeSlice); Random r = new Random(System.currentTimeMillis()); panel.setBackground(new Color( r.nextInt(255 - 210) + 210, r.nextInt(255 - 210) + 210, r.nextInt(255 - 210) + 210)); } I have accessor methods for the GridBagLayout and the JPanel. When calling the functions in another file, it looks like I just get the JPanel (but without any labels or the layout or other GridBagLayout features). Help?

    Read the article

  • Xpath > How can I select a node based on both its attributes and content?

    - by Andrew Kirk
    Sample XML: <assignments> <assignment id="911990211" section-id="1942268885" item-count="21" sources="foo"> <options> <value name="NumRetakes">4</value> <value name="MultipleResultGrading">6</value> <value name="MaxFeedbackAttempts">-1</value> <value name="ItemTakesBeforeHint">1</value> <value name="TimeAllowed">0</value> </options> </assignment> <assignment id="1425185257" section-id="1505958877" item-count="4" sources="bar"> <options> <value name="NumRetakes">0</value> <value name="MultipleResultGrading">6</value> <value name="MaxFeedbackAttempts">3</value> <value name="ItemTakesBeforeHint">1</value> <value name="TimeAllowed">0</value> </options> </assignment> <assignments> Using XPath, I would like to select all assignments/assignment/options/value nodes where the nodes "name" attribute is "MaxFeedbackAttempts" and the nodes content is "-1". That is to say, I want to return each node that looks like: <value name="MaxFeedbackAttempts">-1</value> I can get each assignments/assignment/options/value node with the specified attribute using: //assignment/options/value[@name="MaxFeedbackAttempts"] I am just not sure how to refine this path to also limit the results based on the nodes content. Is there any way to do this using XPath?
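One possible approach: XPath lets you apply more than one predicate to the same step, so the attribute test and the content test can be combined on the value element. The asker does not say which language or XPath host is in use, so the C# harness below is purely illustrative (assignments.xml is a hypothetical file holding the sample document); the XPath expression itself is the point.

using System;
using System.Xml;

class XPathExample
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.Load("assignments.xml"); // hypothetical file containing the sample XML

        // Both predicates must hold: the name attribute and the element text.
        // An equivalent form is:
        //   //assignment/options/value[@name='MaxFeedbackAttempts' and . = '-1']
        XmlNodeList matches = doc.SelectNodes(
            "//assignment/options/value[@name='MaxFeedbackAttempts'][. = '-1']");

        foreach (XmlNode node in matches)
        {
            Console.WriteLine(node.OuterXml);
        }
    }
}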

    Read the article

  • How to loop through LI items in UL, get specific attributes, and pass them via $.ajax data

    - by Ahmed Fouad
    Here is my HTML code: <ul id="gallery"> <li id="attachment-32" class="featured"><a href="..." title=""><img src="..." alt="" /></a></li> <li id="attachment-34"><a href="..." title=""><img src="..." alt="" /></a></li> <li id="attachment-38"><a href="..." title=""><img src="..." alt="" /></a></li> <li id="attachment-64"><a href="..." title=""><img src="..." alt="" /></a></li> <li id="attachment-75"><a href="..." title=""><img src="..." alt="" /></a></li> <li></li> </ul> Here is my sample ajax request: $.ajax({ url: '/ajax/upload.php', type: 'POST', data: { ... }, success: function(data){ } }); Here is what I want to achieve. How to get the attachment number in ID attribute for every LI inside the gallery UL only when an ID attr is there and pass them via ajax data in this way: { attached : '32,34,38,64,75' } if there is a better way of doing this let me know I want to pass the list items which contain attachments to process. How to get the list item LI which has class of featured e.g. { featured_img : .. } and pass the attachment ID number if featured LI exists, and if none of list items is featured pass featured_img with 0. So i know how to process it via php in the request. Any help is appreciated. Thank you.

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >