Search Results

Search found 9518 results on 381 pages for 'explicit implementation'.


  • Simulating aspects of static-typing in a duck-typed language

    - by Mike
    In my current job I'm building a suite of Perl scripts that depend heavily on objects (using Perl's bless() on a hash to get as close to OO as possible). Now, for lack of a better way of putting this, most programmers at my company aren't very smart. Worse, they don't like reading documentation and seem to have a problem understanding other people's code. Cowboy coding is the game here. Whenever they encounter a problem and try to fix it, they come up with a horrendous solution that actually solves nothing and usually makes things worse. This results in me, frankly, not trusting them with code written in a duck-typed language.

    As an example, I see too many problems arise from them not getting an explicit error for misusing objects. For instance, if type A has member foo and they write something like instance->goo, they aren't going to see the problem immediately. The call will return a null/undefined value, and they will probably waste an hour finding the cause, then end up changing something else because they didn't properly identify the original problem.

    So I'm brainstorming for a way to keep my scripting language (its rapid development is an advantage) but get an explicit error message when an object isn't used properly. I realize that since there is no compile stage or static typing, the error will have to come at run time. I'm fine with this, so long as the user gets a very explicit notice saying "this object doesn't have X". As part of my solution, I don't want it to be required that they check whether a method/variable exists before trying to use it. Even though my work is in Perl, I think this can be language agnostic.
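
    To make the idea concrete in a language-agnostic way, here is a sketch in C# (in Perl the same trap would live in AUTOLOAD or an overridden can()); StrictObject and Account are made-up names:

        using System;
        using System.Dynamic;

        // Hypothetical sketch: wrapping an object in StrictObject makes any call
        // to a member that doesn't exist fail loudly and immediately, instead of
        // quietly yielding a null/undefined value.
        public class StrictObject : DynamicObject
        {
            private readonly object _inner;

            public StrictObject(object inner) { _inner = inner; }

            public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
            {
                var method = _inner.GetType().GetMethod(binder.Name);
                if (method == null)
                    throw new MissingMethodException(string.Format(
                        "'{0}' doesn't have '{1}'", _inner.GetType().Name, binder.Name));
                result = method.Invoke(_inner, args);
                return true;
            }
        }

        // dynamic account = new StrictObject(new Account()); // Account is a made-up type
        // account.Goo(); // throws: 'Account' doesn't have 'Goo'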

    Read the article

  • SFINAE failing with enum template parameter

    - by zeroes00
    Can someone explain the following behaviour? (I'm using Visual Studio 2010.)

    header:

        #pragma once
        #include <boost\utility\enable_if.hpp>
        using boost::enable_if_c;

        enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

        template<WeekDay DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() {return false;}

        template<WeekDay DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() {return true;}

    source:

        bool b = goToWork<MONDAY>();

    Compiling this gives:

        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY!=6,bool>::type goToWork(void)'
        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY==6,bool>::type goToWork(void)'

    But if I change the function template parameter from the enum type WeekDay to int, it compiles fine:

        template<int DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() {return false;}

        template<int DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() {return true;}

    Also, normal function template specialization works fine, no surprises there:

        template<WeekDay DAY> bool goToWork() {return true;}
        template<> bool goToWork<SUNDAY>() {return false;}

    To make things even weirder, if I change the source file to use any other WeekDay than MONDAY or TUESDAY, i.e.

        bool b = goToWork<THURSDAY>();

    the error changes to this:

        error C2440: 'specialization' : cannot convert from 'int' to 'const WeekDay'
        Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)

    Read the article

  • Change default polymorphism in Hibernate

    - by Sujee
    Hi, I'd like to set the polymorphism="explicit" property on several Hibernate mapping classes. Is it possible to override the default implicit value, so that I do not need to set explicit in multiple classes?

        <class name="xxxxx" table="XXXXX" polymorphism="explicit">

    I am using Hibernate mapping XML files (no annotations or JPA), loaded by Spring as follows:

        <bean id="xxxsessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource">
                <ref bean="xxxDataSource"/>
            </property>
            <property name="mappingResources">
                <list>
                    <value>xxx.hbm.xml</value>
                    --------
                    --------
                </list>
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">org.hibernate.dialect.OracleDialect</prop>
                    ----
                    ----
                </props>
            </property>
        </bean>

    Thank you.

    Read the article

  • calling template function without <>; type inference

    - by Oops
    Hi, if I have a function template with typename T where the compiler can deduce the type by itself, I do not have to write the type explicitly when I call the function:

        template < typename T >
        T min( T v1, T v2 ) {
            return ( v1 < v2 ) ? v1 : v2;
        }

        int i1 = 1, i2 = 2;
        int i3 = min( i1, i2 ); // no explicit <type>

    But if I have a function template with two different typenames like...

        template < typename TOut, typename TIn >
        TOut round( TIn v ) {
            return (TOut)( v + 0.5 );
        }

        double d = 1.54;
        int i = round<int>(d); // explicit <int>

    ...is it true that I always have to specify at least one typename? I assume the reason is that C++ cannot distinguish between functions that differ only in return type, true?

    But if I use a void function and hand over a reference, again I need not explicitly specify the return typename:

        template < typename TOut, typename TIn >
        void round( TOut & vret, TIn vin ) {
            vret = (TOut)(vin + 0.5);
        }

        double d = 1.54;
        int i;
        round(i, d); // no explicit <int>

    Should the conclusion be, when writing templates, to avoid functions that return a value and instead prefer void functions that return via a reference? Or is there a possibility to avoid explicitly writing the return type? Something like "type inference" for templates... is "type inference" possible in C++0x? I hope I was not too unclear. Many thanks in advance, Oops

    Read the article

  • Questions on Juval Lowy's IDesign C# Coding Standard

    - by Jan
    We are trying to use the IDesign C# Coding standard. Unfortunately, I found no comprehensive document that explains all the rules it gives, and his book does not always help either. Here are the open questions that remain for me (from chapter 2, Coding Practices):

        No. 26: Avoid providing explicit values for enums unless they are integer powers of 2
        No. 34: Always explicitly initialize an array of reference types using a for loop
        No. 50: Avoid events as interface members
        No. 52: Expose interfaces on class hierarchies
        No. 73: Do not define method-specific constraints in interfaces
        No. 74: Do not define constraints in delegates

    Here's what I think about those:

        26: I thought that providing explicit values would be especially useful when adding new enum members at a later point in time. If these members are added between other already existing members, I would provide explicit values to make sure the integer representation of existing members does not change.
        34: No idea why I would want to do this. I'd say this totally depends on the logic of my program.
        50: I see that there is the alternative option of providing "sink interfaces" (simply providing all the "OnXxxHappened" methods up front), but what is the reason to prefer one over the other?
        52: Unsure what he means here. Could this mean "When implementing an interface explicitly in a non-sealed class, consider providing the implementation in a protected virtual method that can be overridden"? (See Programming .NET Components, 2nd Edition, end of the chapter "Interfaces and Class Hierarchies".)
        73: I suppose this is about providing a "where" clause when using generics, but why is this bad on an interface?
        74: I suppose this is about providing a "where" clause when using generics, but why is this bad on a delegate?
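
    For what it's worth, the powers-of-two exception in No. 26 presumably targets flags enums, where explicit values keep combined members unambiguous. A quick illustration (my own example, not one from the standard):

        using System;

        [Flags]
        enum Permissions
        {
            None    = 0,
            Read    = 1,  // 1 << 0
            Write   = 2,  // 1 << 1
            Execute = 4,  // 1 << 2
            All     = Read | Write | Execute
        }

        // Powers of two let members combine without colliding:
        // Permissions p = Permissions.Read | Permissions.Write; // == 3, unambiguous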

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices. For example, the "Sales Representative" job has the following two data security rules implemented on an "Opportunity" to restrict the list of Opportunities that are visible to a Sales Representative:

        - Can view all the Opportunities where they are a member of the Opportunity Team
        - Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team

    While the above conditions may represent the most common access requirements for an Opportunity, some customers may have additional access constraints. This blog post explains:

        a. How to discover the data security implemented in Fusion Applications
        b. How to customize data security
        c. An illustrative example

    a.) How to discover seeded data security definitions

    The Security Reference Manuals explain the Function and Data Security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the "Sales Representative" job; the two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of data security policies on an Opportunity:

        Business Object: Opportunity
        Policy Description: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team
        Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity

        Business Object: Opportunity
        Policy Description: A Sales Representative can view an opportunity where they are an opportunity sales team member with view, edit, or full access
        Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Description of columns:

        Policy Description - explains the data filters that are implemented as a SQL where clause in a data security grant
        Policy Store Implementation - provides the implementation details of the data security grant for this policy; in this example, the Opportunities listed for a "Sales Representative" job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role

    b.) How to customize data security

    Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity.

    Solution: Remove the role "Opportunity Territory Resource Duty" from the hierarchy of the "Sales Representative" job role.

    Best practice: Do not modify the seeded role hierarchy. Create a custom "Sales Representative" job role and build the role hierarchy with the seeded duty roles.

    Requirement 2: Opportunities must be further restricted based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity.

    Solution: Modify the second data security policy in the above example as follows: "A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential."

    Implementation of this policy is more invasive: the seeded SQL where clause of the data security grant on "Opportunity Territory Resource Duty" has to be modified, and the condition that checks the confidential flag must be added.

    Best practice: Do not modify the seeded grant. Create a new grant with the modified condition and end date the seeded grant.

    c.) Illustrative example (implementing Requirement 2)

    A data security policy contains the following components:

        - Role
        - Object
        - Instance Set
        - Action

    Of these four components, the Role and the Instance Set are the only customizable ones; Objects and the Actions for each object are seed data and cannot be modified. To customize the seeded policy "A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team":

        1. Find the seeded policy
        2. Identify the Role, Object, Instance Set and Action components of the policy
        3. Create a new custom instance set based on the seeded instance set
        4. End date the seeded policies
        5. Create a new data security policy with the custom instance set

    c-1: Find the seeded policy

    Step 1: Find the role, open it, and find its policies.

    Step 2: Click on the Data Security tab, sort by "Resource Name", and find all the policies with the condition "where they are a territory resource in the opportunity territory team". In this example, we can see there are 5 policies for "Opportunity Territory Resource Duty" on the Opportunity object.

    Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object. Find the object using the global search option, open it, and click on the "Condition" tab. Sort by display name and find the instance set. Edit the instance set and copy the "SQL Predicate" to a notepad, then create a new instance set with the modified SQL predicate by clicking on the icon as shown below.

    Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set:

        1. Repeat the navigation from Step 2
        2. Edit each of the 5 policies and end date them
        3. Create new custom policies with the same information as the seeded policies in the "General Information", "Roles" and "Action" tabs
        4. In the "Rules" tab, pick the new instance set that was created in Step 3

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, key-value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees.

    In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

    The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.

    Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for online DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster.

    Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

    Memcached Implementation for InnoDB

    The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.

    [Figure 1: Memcached API Implementation for InnoDB]

    With the Memcached daemon running in the same process space, users get very low latency access to their data, while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for foreign keys, XA transactions and complex JOIN operations.

    Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 transactions per second.

    [Figure 2: Over 9x Faster INSERT Operations]

    The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees. You can check out the latest Memcached / InnoDB developments and benchmarks here, and you can learn how to configure the Memcached API for InnoDB here.

    Memcached Implementation for MySQL Cluster

    Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster.

    Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, to ensure complete synchronization between the database and cache.

    [Figure 3: Memcached API Implementation with MySQL Cluster]

    Implementation is simple:

        1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
        2. This invokes the Memcached driver for NDB (which is part of the same process).
        3. The NDB API is called, providing very quick access to the data held in MySQL Cluster's data nodes.

    The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care - it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.

    Using Memcached for Schema-less Data

    By default, every key/value is written to the same table, with each key/value pair stored in a single row - thus allowing schema-less data storage. Alternatively, the developer can define a key prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.

    Conclusion

    Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
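
    As a concrete illustration of the point about re-using existing clients: both engines speak the standard Memcached text protocol, so even a hand-rolled client works unchanged. A minimal C# sketch (host, port, key and value are assumptions for the example; production code would use an existing Memcached client library):

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        class MemcachedDemo
        {
            static void Main()
            {
                using (var client = new TcpClient("localhost", 11211)) // assumed host/port
                using (var stream = client.GetStream())
                {
                    var writer = new StreamWriter(stream, Encoding.ASCII) { NewLine = "\r\n", AutoFlush = true };
                    var reader = new StreamReader(stream, Encoding.ASCII);

                    // set <key> <flags> <exptime> <bytes>, then the payload
                    writer.WriteLine("set user:42 0 0 5");
                    writer.WriteLine("hello");
                    Console.WriteLine(reader.ReadLine()); // expected: STORED

                    writer.WriteLine("get user:42");
                    Console.WriteLine(reader.ReadLine()); // VALUE user:42 0 5
                    Console.WriteLine(reader.ReadLine()); // hello
                    Console.WriteLine(reader.ReadLine()); // END
                }
            }
        }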

    Read the article

  • proftpd - TLS connection hangs authenticating

    - by greydet
    I set up a proftpd server that uses a TLS/SSL certificate for authentication. Everything works well when I connect through lftp or FileZilla (with an explicit TLS connection). But once I attempt a plain FTP connection from FileZilla, the USER command ends with a 550 response (SSL/TLS required). After that, any further connection through lftp or FileZilla (with an explicit TLS connection) will hang while authenticating. Does anyone know how to work around this issue? Is there a way to ask FileZilla to automatically use TLS/SSL if required? I am using Ubuntu Server 10.04 with proftpd 1.3.2c. There is no error message in the log files.

    Read the article

  • Enterprise IPv6 Migration - End of proxypac ? Start of Point-to-Point ? +10K users

    - by Yohann
    Let's start with a diagram: we can see a "typical" IPv4 company network with:

        - Internet access through a proxy
        - access to "other companies" through a dedicated proxy
        - direct access to local resources

    All computers have a proxy.pac file that indicates which proxy to use or whether to connect directly. Computers have access to just a local DNS (no name resolution for google.com, for example). By the way... the company does not respect RFC 1918 internally and uses public addresses (for historical reasons)! The explicit use of an Internet proxy is what makes this work without problems.

    What if we were to migrate to IPv6?

    Step 1: IPv6 Internet access

    Internet access over IPv6 is easy: just connect the proxy to the Internet over both IPv4 and IPv6. There is nothing to do in the internal network.

    Step 2: IPv6 AND IPv4 in the internal network

    And why not a full IPv6 network directly? Because there are always old servers that are not IPv6 compatible...

    Option 1: Same architecture as in IPv4, with a proxy.pac

    This is probably the easiest solution. But is it the best? I think the transition to IPv6 is an opportunity to stop bothering with this proxy.pac!

    Option 2: New architecture with transparent proxies, without a proxy.pac, and with recursive DNS

    Oh yes! In this new architecture, we have:

        - The explicit Internet proxy becomes a transparent Internet proxy
        - The local DNS becomes a normal recursive DNS, authoritative for local domains
        - No proxy.pac
        - The explicit company proxy becomes a transparent company proxy
        - Routing: internal routers redirect the IP of appx.ext.example.com to the company proxy; the default gateway is the transparent Internet proxy

    Questions:

        - What do you think of this IPv6 architecture?
        - This architecture will reveal the IP addresses of our internal network, but it is protected by firewalls. Is this really a big problem?
        - Should we keep the explicit use of a proxy?
        - How would you handle this migration scenario?
        - And you, how do you do it in your company?

    Thanks! Feel free to edit my post to make it better.

    Read the article

  • How is it possible to list all folders that a particular user/group has permissions on?

    - by Lord Torgamus
    Is it possible to list all folders/files that a given group has explicit permissions on, for a machine running Windows Server 2003? If so, how? It would be nice to see inherited permissions as well, but I could do with just explicit permissions. A little background: I'm trying to update groups/permissions on a test server. One of the groups, Devs, wasn't implemented correctly when it was created, and my goal is to remove it from the system. It has been replaced by LeadDevelopers, which has permissions on many — but naturally not all — of the same folders. I want to make sure that I don't accidentally orphan any folders or cause any other issues when I remove Devs. It did have some admin-level permissions.
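
    If scripting it is an option, .NET can enumerate the explicit ACEs per folder. A rough sketch (the group name and root path are placeholders, and error handling is minimal):

        using System;
        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        class FindExplicitAcls
        {
            static void Main()
            {
                const string group = @"MYDOMAIN\Devs"; // placeholder: your group
                Walk(new DirectoryInfo(@"D:\"), group); // placeholder: your root
            }

            static void Walk(DirectoryInfo dir, string group)
            {
                try
                {
                    DirectorySecurity security = dir.GetAccessControl(AccessControlSections.Access);
                    // false = explicit entries only; pass true to include inherited ones
                    foreach (FileSystemAccessRule rule in
                             security.GetAccessRules(true, false, typeof(NTAccount)))
                    {
                        if (rule.IdentityReference.Value.Equals(group, StringComparison.OrdinalIgnoreCase))
                            Console.WriteLine("{0}: {1} {2}",
                                dir.FullName, rule.AccessControlType, rule.FileSystemRights);
                    }
                    foreach (DirectoryInfo sub in dir.GetDirectories())
                        Walk(sub, group);
                }
                catch (UnauthorizedAccessException) { /* skip folders we cannot read */ }
            }
        }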

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc. as best I can - putting on my architect hat.

    For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc. for a business-wide platform that we're developing. The basic structure is a couple of big projects that consist of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it hasn't worked so well, but after a few proof-of-concepts and R&D projects this new platform is now growing nicely and is already proving itself.

    Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance, and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem able to honour the architecture that I've been putting in place.

    Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency; and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and he just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor.

    The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written, unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way, how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all.

    Do you think the architect role is a waste of time? Is it at odds with TDD and other practises? Can this mix of different practises be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless!)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.

    Read the article

  • Why might a System.String object not cache its hash code?

    - by Dan Tao
    A glance at the source code for string.GetHashCode using Reflector reveals the following (for mscorlib.dll version 4.0):

        public override unsafe int GetHashCode()
        {
            fixed (char* str = ((char*) this))
            {
                char* chPtr = str;
                int num = 0x15051505;
                int num2 = num;
                int* numPtr = (int*) chPtr;
                for (int i = this.Length; i > 0; i -= 4)
                {
                    num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0];
                    if (i <= 2)
                    {
                        break;
                    }
                    num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1];
                    numPtr += 2;
                }
                return (num + (num2 * 0x5d588b65));
            }
        }

    Now, I realize that the implementation of GetHashCode is not specified and is implementation-dependent, so the question "is GetHashCode implemented in the form of X or Y?" is not really answerable. I'm just curious about a few things:

    1. If Reflector has disassembled the DLL correctly and this is the implementation of GetHashCode (in my environment), am I correct in interpreting this code to indicate that a string object, based on this particular implementation, would not cache its hash code?

    2. Assuming the answer is yes, why would this be? It seems to me that the memory cost would be minimal (one more 32-bit integer, a drop in the pond compared to the size of the string itself) whereas the savings would be significant, especially in cases where, e.g., strings are used as keys in a hashtable-based collection like a Dictionary<string, [...]>. And since the string class is immutable, it isn't like the value returned by GetHashCode will ever even change. What could I be missing?

    UPDATE: In response to Andras Zoltan's closing remark:

        "There's also the point made in Tim's answer (+1 there). If he's right, and I think he is, then there's no guarantee that a string is actually immutable after construction, therefore to cache the result would be wrong."

    Whoa, whoa there! This is an interesting point to make (and yes it's very true), but I really doubt that this was taken into consideration in the implementation of GetHashCode. The statement "therefore to cache the result would be wrong" implies to me that the framework's attitude regarding strings is "Well, they're supposed to be immutable, but really if developers want to get sneaky they're mutable so we'll treat them as such." This is definitely not how the framework views strings. It fully relies on their immutability in so many ways (interning of string literals, assignment of all zero-length strings to string.Empty, etc.) that, basically, if you mutate a string, you're writing code whose behavior is entirely undefined and unpredictable. I guess my point is that for the author(s) of this implementation to worry, "What if this string instance is modified between calls, even though the class as it is publicly exposed is immutable?" would be like someone planning a casual outdoor BBQ thinking, "What if someone brings an atomic bomb to the party?" Look, if someone brings an atom bomb, the party's over.
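
    For contrast, here is a sketch of the kind of caching the question is asking about. This is purely illustrative - the BCL does not do this - with 0 doubling as the "not computed yet" sentinel:

        public sealed class HashCachingString
        {
            private readonly string _value;
            private int _hash; // 0 means "not computed yet"

            public HashCachingString(string value) { _value = value; }

            public override int GetHashCode()
            {
                if (_hash == 0)
                {
                    // Force the cached value non-zero so the sentinel stays safe,
                    // at the cost of remapping the one hash value that is 0.
                    int h = _value.GetHashCode();
                    _hash = (h == 0) ? 1 : h;
                }
                return _hash;
            }

            public override bool Equals(object obj)
            {
                var other = obj as HashCachingString;
                return other != null && _value == other._value;
            }
        }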

    Read the article

  • .NET Weak Event Handlers – Part II

    - by João Angelo
    On the first part of this article I showed two possible ways to create weak event handlers: one using reflection and the other using a delegate. For this performance analysis we will further differentiate between creating a delegate by providing the type of the listener at compile time (explicit delegate) and creating the delegate with the type of the listener being obtained only at runtime (implicit delegate).

    As expected, the performance of reflection and delegates differs significantly. With the reflection-based approach, creating a weak event handler is just storing a MethodInfo reference, while with the delegate-based approach there is the need to create the delegate which will be invoked later. So, at weak-event-handler creation time, reflection clearly wins; but what about when the handler is invoked? No surprises there: performing a call through reflection every time a handler is invoked is costly.

    In conclusion, if you want good performance when creating handlers that only sporadically get triggered, use reflection; otherwise use the delegate-based approach. The explicit delegate approach always wins against the implicit delegate, but I find the syntax for the latter much more intuitive.

        // Implicit delegate - the listener type is inferred at runtime from the handler parameter
        public static EventHandler WrapInDelegateCall(EventHandler handler);
        public static EventHandler<TArgs> WrapInDelegateCall<TArgs>(EventHandler<TArgs> handler)
            where TArgs : EventArgs;

        // Explicit delegate - TListener is the type that defines the handler
        public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler);
        public static EventHandler<TArgs> WrapInDelegateCall<TArgs, TListener>(EventHandler<TArgs> handler)
            where TArgs : EventArgs;
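
    Part I contains the actual implementations; purely as a sketch of what the explicit-delegate variant might look like (my reconstruction, not the article's code; it assumes an instance-method handler):

        using System;

        public static class WeakEvents
        {
            public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler)
                where TListener : class
            {
                var weakTarget = new WeakReference(handler.Target);
                // Open instance delegate: "this" becomes an explicit first parameter
                var open = (Action<TListener, object, EventArgs>)Delegate.CreateDelegate(
                    typeof(Action<TListener, object, EventArgs>), null, handler.Method);

                return (sender, e) =>
                {
                    var target = weakTarget.Target as TListener;
                    if (target != null)
                        open(target, sender, e); // listener still alive: forward the event
                };
            }
        }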

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction

    This will be the first of a number of posts on ASP.NET extensibility. At this moment I don't know exactly how many there will be, and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely known; although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won't go into the details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction.

    Canonical

    These are the most widely known and used providers, dating from ASP.NET 1; chances are, you have used them already. They have good support for being invoked client-side, either from a .NET application or from JavaScript, and lots of server-side controls use them, such as the Login control for example.

    Membership

    The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations: one that uses a SQL Server database and another that uses Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, together with parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine.

        <membership defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </membership>

    Role

    The Role provider assigns roles to authenticated users. The base class is RoleProvider and there are three out-of-the-box implementations: XML-based, SQL Server and Windows-based. It is also registered in Web.config, through the roleManager section, where you can also say whether your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider.

        <roleManager defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </roleManager>

    Profile

    The Profile provider allows defining a set of properties that will be tied to and made available to authenticated users, or even anonymous ones, which must be tracked by using anonymous authentication. The base class is ProfileProvider and the only included implementation stores these settings in a SQL Server database. It is configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations.

        <profile defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </profile>

    Basic

    OK, I didn't know what to call these, so Basic is probably as good a name as anything else. They are not supported client-side (it wouldn't even make sense).

    Session

    The Session provider allows storing data tied to the current "session", which is normally created when a user first accesses the site, even when not yet authenticated, and remains all the way through. The base class is SessionStateStoreProviderBase, and the included implementation is capable of storing data in one of three locations:

        - In the process memory (the default; not suitable for web farms or increased reliability)
        - A SQL Server database (best for reliability and clustering)
        - The ASP.NET State Service, which is a Windows Service installed with the .NET Framework (OK for clustering)

    The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations; think, for example, of a distributed cache.

        <sessionState customProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </sessionState>

    Resource

    A not-so-well-known provider, this one allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from some place else, such as a database. The base class is ResourceProviderFactory and there's only one internal implementation, which uses the RESX files. Configuration is through the globalization section.

        <globalization resourceProviderFactoryType="MyClass, MyAssembly" />

    Health Monitoring

    Health Monitoring is also probably not so well known, and it is actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires "events" at specific times and when specific things happen, such as when a user logs in or an exception is raised. These are not user interface events; you can create your own and fire them, and nothing will happen, but the Health Monitoring provider will detect it. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out-of-the-box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 trace infrastructure or the debugger trace. Its configuration is achieved through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time.

        <healthMonitoring>
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </healthMonitoring>

    Sitemap

    The Sitemap provider allows defining the site's navigation structure and the associated required permissions for each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it, by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider, and you can extend it in order to supply your own source for the site's structure, which may even be dynamic. Its configuration must be done through the siteMap section.

        <siteMap defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly" />
          </providers>
        </siteMap>

    Web Part Personalization

    Web Parts are better known to SharePoint users, but since ASP.NET 2.0 they are included in the core Framework. Web Parts are server-side controls that offer certain configuration possibilities to the clients visiting the page where they are located. The infrastructure handles this configuration per user or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider, and the only included implementation stores settings in SQL Server. Add new providers through the personalization section.

        <webParts>
          <personalization defaultProvider="MyProvider">
            <providers>
              <add name="MyProvider" type="MyClass, MyAssembly"/>
            </providers>
          </personalization>
        </webParts>

    Build

    The Build provider is responsible for compiling whatever files are present in your web folder. There's a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file so that you have strong typing at development time. Configuration goes in the buildProviders section and is per extension.

        <buildProviders>
          <add extension=".ext" type="MyClass, MyAssembly" />
        </buildProviders>

    New in ASP.NET 4

    Not exactly new, since these have existed since 2010, but in ASP.NET terms, still new.

    Output Cache

    The output cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only implementation is private. Configuration goes in the outputCache section, and on each page and web user control you can choose the provider you want to use.

        <caching>
          <outputCache defaultProvider="MyProvider">
            <providers>
              <add name="MyProvider" type="MyClass, MyAssembly"/>
            </providers>
          </outputCache>
        </caching>

    Request Validation

    A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is the introduction of extensible request validation, by means of a Request Validation provider. This means we are no longer limited to either enabling or disabling request validation for all pages or for a specific page; we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes in the httpRuntime section.

        <httpRuntime requestValidationType="MyClass, MyAssembly" />

    Browser Capabilities

    The Browser Capabilities provider is new in ASP.NET 4, although the concept has existed since ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section.

        <browserCaps provider="MyClass, MyAssembly" />

    Encoder

    The Encoder provider is responsible for encoding every string that is sent to the browser in a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder. Another implementation takes care of anti cross-site scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes in the httpRuntime section.

        <httpRuntime encoderType="MyClass, MyAssembly" />

    Conclusion

    That's about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!
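
    To make the "custom provider" idea concrete, here is a minimal sketch of a custom Role provider; the class name is made up, GetRolesForUser is stubbed, and a real one would query whatever store you have in mind:

        using System;
        using System.Web.Security;

        public class MyRoleProvider : RoleProvider
        {
            public override string ApplicationName { get; set; }

            public override string[] GetRolesForUser(string username)
            {
                // assumption: roles would come from your own store instead of SQL Server
                return new[] { "User" };
            }

            public override bool IsUserInRole(string username, string roleName)
            {
                return Array.IndexOf(GetRolesForUser(username), roleName) >= 0;
            }

            // The remaining members are required by the base class; stub them until needed.
            public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override void CreateRole(string roleName) { throw new NotSupportedException(); }
            public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
            public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
            public override string[] GetAllRoles() { throw new NotSupportedException(); }
            public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
            public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override bool RoleExists(string roleName) { throw new NotSupportedException(); }
        }

    It would then be registered exactly like the roleManager example above.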

    Read the article

  • Points on lines where the two lines are the closest together

    - by James Bedford
    Hey guys, I'm trying to find the points on two lines where the two lines are closest. I've implemented the following method (Points and Vectors are as you'd expect, and a Line consists of a Point on the line and a non-normalized direction Vector from that point):

        void CDClosestPointsOnTwoLines(Line line1, Line line2, Point* closestPoints)
        {
            closestPoints[0] = line1.pointOnLine;
            closestPoints[1] = line2.pointOnLine;

            Vector d1 = line1.direction;
            Vector d2 = line2.direction;

            float a = d1.dot(d1);
            float b = d1.dot(d2);
            float e = d2.dot(d2);
            float d = a*e - b*b;

            if (d != 0) // If the two lines are not parallel.
            {
                Vector r = Vector(line1.pointOnLine) - Vector(line2.pointOnLine);
                float c = d1.dot(r);
                float f = d2.dot(r);

                float s = (b*f - c*e) / d;
                float t = (a*f - b*c) / d;

                closestPoints[0] = line1.positionOnLine(s);
                closestPoints[1] = line2.positionOnLine(t);
            }
            else
            {
                printf("Lines were parallel.\n");
            }
        }

    I'm using OpenGL to draw three lines that move around the world, the third of which should be the line that most closely connects the other two, with its two end points calculated using this function. The problem is that the first point of closestPoints after this function is called will lie on line1, but the second point won't lie on line2, let alone at the closest point on line2! I've checked over the function many times but I can't see where the mistake in my implementation is. I've checked my dot product function, scalar multiplication, subtraction, positionOnLine(), etc., so my assumption is that the problem is within this method's implementation. If it helps to find the answer, this function is supposed to be an implementation of section 5.1.8 from 'Real-Time Collision Detection' by Christer Ericson. Many thanks for any help!

    Read the article

  • Passing multiple Vertex Attributes in GLSL 130

    - by Roy T.
    (Note: this question is closely related to this one; however, I didn't fully understand the accepted answer.)

    To support the video cards in laptops I have to rewrite my GLSL 330 shaders as GLSL 130. I'm trying to do this, but somehow I can't get vertex attributes to work properly. My 330 shaders look like this:

        #version 330

        layout(location = 0) in vec4 position;
        layout(location = 3) in vec4 color;

        smooth out vec4 theColor;

        void main()
        {
            gl_Position = position;
            theColor = color;
        }

    Now this explicit layout is not allowed in GLSL 130, so I referenced this page to see what the default layouts for some values would be. As you can see, position should be the 0th vertex attribute and color should be the 3rd vertex attribute. Because this is a test case I had already configured my explicit layouts in the same way, which worked, so I now simply rewrote my shader to this and expected it to work:

        #version 130

        smooth out vec4 theColor;

        void main()
        {
            gl_Position = gl_Vertex;
            theColor = gl_Color;
        }

    However, this doesn't work: the value of gl_Color is always (1,1,1,1). So how should I pass multiple vertex attributes to my GLSL 130 shaders?

    For reference, this is how I set up my vertex buffer object and attributes (I've just adapted this tutorial to Java + JOGL):

        gl.glBindBuffer(GL3.GL_ARRAY_BUFFER, vertex_buffer_id);

        gl.glEnableVertexAttribArray(0);
        gl.glEnableVertexAttribArray(3);

        gl.glVertexAttribPointer(0, 4, GL3.GL_FLOAT, false, 0, 0);
        gl.glVertexAttribPointer(3, 4, GL3.GL_FLOAT, false, 0, 4*4*4);

        gl.glDrawArrays(GL3.GL_TRIANGLE_STRIP, 0, 4);

        gl.glDisableVertexAttribArray(0);
        gl.glDisableVertexAttribArray(3);

    EDIT: I solved the problem by querying for the attribute locations of position and color using glGetAttribLocation. However, I still don't understand why the 'hardcoded' values like gl_Color didn't work; can't I upload data into them as normal? Shouldn't they be usable?

    Read the article

  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
    Here is a list of criteria against which to test your business systems (Oracle E-Business Suite, EBS) or otherwise to support your lines of digital business. If you score low, you need Oracle Billing and Revenue Management (BRM).

    Functions:

        - Scalability
        - High Availability (99.999%)
        - Performance
        - Extensibility (e.g. APIs, Tools)
        - Upgradability
        - Maintenance
        - Security
        - Standards Compliance
        - Regulatory Compliance (e.g. SOX)
        - User Experience
        - Implementation Complexity

    Features:

        - Customer Management
        - Real-Time Service Authorization
        - Pricing/Promotions Flexibility
        - Subscriptions
        - Usage Rating and Pricing
        - Real-Time Balance Mgmt.
        - Non-Currency Resources
        - Billing & Invoicing
        - A/R & G/L
        - Payments & Collections
        - Revenue Assurance

    Integration with Key Enterprise Applications:

        - Reporting
        - Business Intelligence
        - Order & Service Mgmt (OSM)
        - Siebel CRM
        - E-Business Suite
        - On-/Off-line Mediation
        - Payment Processing
        - Taxation
        - Royalties & Settlements

    Operations Management:

        - Disaster Recovery

    Overall Evaluation:

        - Implementation
        - Configuration
        - Extensibility
        - Maintenance
        - Upgradability
        - Functional Richness
        - Feature Richness
        - Usability
        - OOB Integrations
        - Operations Management
        - Leveraging Oracle Technology
        - Overall Fit for Purpose

    You need Oracle BRM:

        - Built for high-volume transaction processing
        - Monetizes any service or event based on any metric
        - Supports high-volume usage rating, pricing and promotions
        - Provides real-time charging, service authorization and balance management
        - Supports any account structure (e.g. corporate hierarchies etc.)
        - Scales from low volumes to extremely high volumes of transactions (e.g. billions of trxn per hour)
        - Exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA)

    Immediate Business Benefits of BRM:

        - Improved business agility and performance: supports the flexibility, innovation, and customer-centricity required for current and future business models
        - Faster time to market for new products and services: supports a 360 view of the customer in real time, so products can be launched to targeted customers at a record-breaking pace
        - Streamlined deployment and operation: productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs
        - Extensible and scalable solution: minimizes risk; the initial phase is deployed rapidly, and the solution is extended and scaled seamlessly per business requirements

    Key Considerations:

        - Productized integration with key Oracle applications: lower integration risks and cost; efficient order-to-cash process
        - Engineered solution with certification on the Exa platform: Exadata tested at PayPal in the re-platforming project; optimal performance of Oracle assets on Oracle hardware
        - Productized solution in Rapid Offer Design and Order Delivery: fast offer design and implementation; significantly shorter order cycle time
        - Productized integration with Oracle Enterprise Manager: visibility into system operability for optimal uptime

    Read the article

  • SOA & Application Grid Specialization – 6 steps to success – part 1 OMM

    - by Jürgen Kress
    SOA Specialization – Oracle Open Market Model (OMM)

    Dear Application Grid SOA Partners,

    Our goal is to get you SOA Specialized; over the next weeks we will explain, in a series of posts, how you can achieve SOA Specialization. Specialization is key to being recognized by Oracle and to being preferred by our customers. The first step to becoming SOA Specialized is to prove 2 transactions. You can either resell, co-sell or refer; as a proof point we use our Open Market Model (OMM). To create your account, go to our new Partner Portal:

        1. Go to the login of your OPN homepage: http://oraclepartnernetwork.oracle.com
        2. Click on: "Sales", "Create a PRM User Account"
        3. Enter your User ID
        4. Enter your Company Identifier ((please ask your OPN IC))
        5. Finish
        6. Wait for a confirmation email

    If you need OMM support please contact our dedicated team:

        - Nordics: [email protected]
        - Portugal, Spain: [email protected]
        - Austria, Belgium, Germany, Luxembourg, Netherlands, Switzerland, United Arab Emirates, United Kingdom: [email protected]

    For more information about OMM watch our on-demand webcast "Recognising the Value of Partners: Register Oracle Deals through the Open Market Model (OMM)".

    Become SOA Specialized today

    SOA Specialized & Application Grid Specialized: create your references, create your OMM entry, take the SOA Sales assessment, take the SOA Pre-Sales assessment, take the Support assessment and register for the SOA Implementation assessment. For more information on Specialization please visit our OPN Specialized Webcast Series. To get support on Specialization please contact the Partner Business Centers.

        SOA Specialized                      | Application Grid Specialized
        Proof 2 transactions with OMM        | Proof 2 transactions with OMM
        Create your 2 references             | Create your 2 references
        SOA Sales assessment 3               | Oracle Application Grid Sales Specialist
        SOA Pre-Sales assessment 3           | Oracle Application Grid PreSales Specialist
        Support assessment 1                 | Support assessment 2
        SOA Implementation assessment 4      | Application Grid Implementation assessment 4

    Read the article

  • When is type testing OK?

    - by svidgen
    Assuming a language with some inherent type safety (e.g., not JavaScript): given a method that accepts a SuperType, we know that in most cases wherein we might be tempted to perform type testing to pick an action:

        public void DoSomethingTo(SuperType o)
        {
            if (o is SubTypeA)
            {
                ((SubTypeA)o).doSomethingA();
            }
            else
            {
                ((SubTypeB)o).doSomethingB();
            }
        }

    we should usually, if not always, create a single overridable method on the SuperType and do this instead:

        public void DoSomethingTo(SuperType o)
        {
            o.doSomething();
        }

    ...wherein each subtype is given its own doSomething() implementation. The rest of our application can then be appropriately ignorant of whether any given SuperType is really a SubTypeA or a SubTypeB. Wonderful.

    But we're still given is-a-like operators in most, if not all, type-safe languages, and that seems to suggest a potential need for explicit type testing. So, in what situations, if any, should we or must we perform explicit type testing?

    Forgive my absent-mindedness or lack of creativity. I know I've done it before; but it was honestly so long ago I can't remember if what I did was good! And in recent memory, I don't think I've encountered a need to test types outside my cowboy JavaScript.

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO) – Your pre-built & tested middleware platform

    - by JuergenKress
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle's Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day-one deliverables and quick-start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution.

    Accenture recently released AFPO v5 for use with their clients, adding significant updates including day-one images and documentation for WebCenter and ADF Mobile, integrated with 30 other Oracle Middleware products, which significantly reduces the services effort needed to stand these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile.

    AFPO v5 also delivers a starter kit for Oracle SOA Suite which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and of making architectural decisions, so clients can ultimately reduce the cost, risk and time needed for an implementation.

    You'll find more information at:

        - Accenture's website: www.accenture.com/afpo
        - YouTube AFPO telestration: http://www.youtube.com/watch?v=_x429DcHEJs
        - Press release and brochure
        - Contacts: [email protected], Patrick J Sullivan (Accenture - Global Oracle Technology Lead), [email protected]

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: AFPO, Accenture, middleware platform, oracle middleware, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Looking for some OO design advice

    - by Andrew Stephens
    I'm developing an app that will be used to open and close valves in an industrial environment, and was thinking of something simple like this:

        public static class ValveController
        {
            public static void OpenValve(string valveName)
            {
                // Implementation to open the valve
            }

            public static void CloseValve(string valveName)
            {
                // Implementation to close the valve
            }
        }

    (The implementation would write a few bytes of data to the serial port to control the valve: an "address" derived from the valve name, and either a "1" or a "0" to open or close the valve.)

    Another dev asked whether we should instead create a separate class for each physical valve, of which there are dozens. I agree it would be nicer to write code like PlasmaValve.Open() rather than ValveController.OpenValve("plasma"), but is this overkill?

    Also, I was wondering how best to tackle the design with a couple of hypothetical future requirements in mind:

        - We are asked to support a new type of valve requiring different values to open and close it (not 0 and 1).
        - We are asked to support a valve that can be set to any position from 0-100, rather than simply "open" or "closed".

    Normally I would use inheritance for this kind of thing, but I've recently started to get my head around "composition over inheritance" and wonder if there is a slicker solution to be had using composition (something like the sketch below)?
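
    For concreteness, the kind of composition-based shape I have in mind (a sketch only; IValveProtocol and the frame layout are invented for illustration):

        // Each valve type supplies a protocol object instead of a subclass.
        public interface IValveProtocol
        {
            byte[] BuildCommand(byte address, int position); // 0 = closed, 100 = open
        }

        public class Valve
        {
            private readonly byte _address;
            private readonly IValveProtocol _protocol;

            public Valve(byte address, IValveProtocol protocol)
            {
                _address = address;
                _protocol = protocol;
            }

            public void Open()  { SetPosition(100); }
            public void Close() { SetPosition(0); }

            public void SetPosition(int position)
            {
                byte[] frame = _protocol.BuildCommand(_address, position);
                // write the frame to the serial port here
            }
        }

    A valve needing different on/off values, or true 0-100 positioning, then only needs a different IValveProtocol implementation rather than a new class hierarchy.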

    Read the article

  • Is there a clean separation of my layers with this attempt at Domain Driven Design in XAML and C#

    - by Buddy James
    I'm working on an application, using a mixture of TDD and DDD. I'm working hard to separate the layers of my application, and that is where my question comes in. My solution is laid out as follows:

        Solution
          MyApp.Domain (WinRT class library)
            Entity (folder)
              Interfaces (folder)
                IPost.cs (interface)
              BlogPosts.cs (implementation of IPost)
            Service (folder)
              Interfaces (folder)
                IDataService.cs (interface)
              BlogDataService.cs (implementation of IDataService)
          MyApp.Presentation (Windows 8 XAML + C# application)
            ViewModels (folder)
              BlogViewModel.cs
            App.xaml
            MainPage.xaml (contains a property of BlogViewModel)
          MyApp.Tests (WinRT unit testing project used for my TDD)

    I'm planning to use my ViewModel with the XAML UI. I'm writing a test, defining my interfaces as I go, and I have the following code thus far:

        [TestMethod]
        public void Get_Zero_Blog_Posts_From_Presentation_Layer_Returns_Empty_Collection()
        {
            IBlogViewModel viewModel = _container.Resolve<IBlogViewModel>();
            viewModel.LoadBlogPosts(0);
            Assert.AreEqual(0, viewModel.BlogPosts.Count, "There should be 0 blog posts.");
        }

    viewModel.BlogPosts is an ObservableCollection<IPost>. Now, my first thought is that I'd like the LoadBlogPosts method on the ViewModel to call a static method on the BlogPost entity. My problem is that I feel I need to inject the IDataService into the entity object to keep the coupling loose. Here are the two options I'm struggling with:

        1. Use a member method rather than a static method on the BlogPost entity: have BlogPost take an IDataService in its constructor and use dependency injection to resolve both the BlogPost instance and the IDataService implementation.
        2. Don't use the entity to call the IDataService at all: put the IDataService in the constructor of the ViewModel and have my container resolve the IDataService when the view model is instantiated.

    With option one the layers look like ViewModel (presentation) -> Entity (domain) -> IDataService (service); with option two they look like ViewModel (presentation) -> IDataService (service).
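    For what it's worth, option two keeps the entity free of infrastructure concerns, which is the more conventional DDD layering. A minimal sketch of that option follows; the IDataService signature and the empty IPost are assumptions made for illustration, not the questioner's actual code.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        // Hypothetical minimal stand-ins for the interfaces named in the question.
        public interface IPost { }

        public interface IDataService
        {
            IEnumerable<IPost> GetBlogPosts(int count); // assumed signature
        }

        public interface IBlogViewModel
        {
            ObservableCollection<IPost> BlogPosts { get; }
            void LoadBlogPosts(int count);
        }

        public class BlogViewModel : IBlogViewModel
        {
            private readonly IDataService _dataService;

            public ObservableCollection<IPost> BlogPosts { get; private set; }

            // The container supplies IDataService here, so BlogPost stays a
            // plain domain object with no service dependency.
            public BlogViewModel(IDataService dataService)
            {
                _dataService = dataService;
                BlogPosts = new ObservableCollection<IPost>();
            }

            public void LoadBlogPosts(int count)
            {
                BlogPosts.Clear();
                foreach (var post in _dataService.GetBlogPosts(count))
                    BlogPosts.Add(post);
            }
        }

    The test shown above would then pass as long as the container registers an IDataService whose GetBlogPosts(0) returns an empty sequence, for example a stub registered in MyApp.Tests.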

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management component. In my multitier architecture I have:

        Base class
          DAL.Device            // my entity
        Interfaces
          BL.IDriver            // handles the data processing between application and device
          BL.IDriverCreator     // creates an IDriver from a Device
          BL.IDriverFactory     // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs (a) a code change within the DriverFactory and (b) a reference to the new IDriver implementation/assembly. From the customer's point of view, every new driver, used or not, forces a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-style name convention (see "Caliburn.Micro: Xaml Made Easy"):

        BL.RestDriver
        BL.RestDriverCreator
        DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparison (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also comes down to comparing strings). My question: is that a good approach across layer boundaries? If not, what would be?
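    For reference, the reflection-based convention lookup the questioner describes can be sketched roughly as below. This is illustrative only: the "*.Drivers.dll" file pattern is an assumed deployment convention, and IDriverCreator is reduced to a minimal placeholder.

        using System;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        public interface IDriverCreator
        {
            object CreateDriver(object device); // the real version would return IDriver
        }

        public static class DriverFactory
        {
            public static IDriverCreator GetCreatorFor(object device)
            {
                // By convention: "RestDevice" -> "Rest" -> "RestDriverCreator".
                string prefix = device.GetType().Name.Replace("Device", string.Empty);
                string creatorName = prefix + "DriverCreator";

                // Scan driver assemblies deployed next to the application, so a
                // new driver is a new dll instead of a DriverFactory code change.
                Type creatorType = Directory
                    .EnumerateFiles(AppDomain.CurrentDomain.BaseDirectory, "*.Drivers.dll")
                    .Select(Assembly.LoadFrom)
                    .SelectMany(assembly => assembly.GetTypes())
                    .FirstOrDefault(type => type.Name == creatorName
                                            && typeof(IDriverCreator).IsAssignableFrom(type)
                                            && !type.IsAbstract);

                if (creatorType == null)
                    throw new InvalidOperationException(
                        "No driver creator found for " + device.GetType().Name);

                return (IDriverCreator)Activator.CreateInstance(creatorType);
            }
        }

    With something like this in place, shipping a new driver means dropping a new assembly next to the application. The trade-off is that a typo in a type name now fails at runtime rather than compile time, which argues for a startup-time sanity check that every Device type resolves to exactly one creator.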

    Read the article

  • Absence Management White Papers to Assist with your Implementations

    - by Carolyn Cozart
    Absence Management Setup – Additional Resources. PeopleSoft is committed to helping our customers by sharing our knowledge and expertise in our applications. We have prepared a collection of documents (white papers) containing examples, tips, and techniques to help you when making important decisions during your Absence Management implementation. These documents can all be found on My Oracle Support.

    Absence Management Entitlement and Take Setup: this document (Document ID 1493866.1) provides an overview of how to set up the main components of Absence Management, such as Absence Entitlement and Take elements, as well as other supporting elements relevant to your Absence Management implementation.

    Absence Management System Elements: this document (Document ID 1493879.1) provides an overview of the system elements related to Absence Management. System elements are building blocks used during the design and construction of your absence rules; knowing how they work and when to use them should help you expedite the implementation of your company's absence policy rules.

    Absence Management Self Service Setup: this document (Document ID 1493867.1) provides an overview of, and guidance on, some of the important areas to consider when setting up Absence Self Service, with examples of the different configurations supported in Self Service.

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question I asked why I feel scrum turns active developers into passive developers, and it seems the overall problem is not inherent to scrum but rather lies in a bad implementation of it. So here I have some questions about the scope of the PO's (product owner's) responsibilities and the boundaries he or she shouldn't cross.

        1. Should the PO interfere in UI design when there are designers at work on the scrum team? (An example that has happened to us: replacing checkboxes with a drop-down list of two items, "yes" and "no"; making some boxes larger; or left-aligning some content instead of centering it on the page.) If so, to what extent? Colors? Layout?
        2. Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture)?
        3. Should the PO interfere in validation mechanisms? For example: "this field should be required", or "we don't need to get this piece of information from the user". Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI.
        4. How granular could/should the PO get in analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". The scrum team could implement this as a five-step wizard or as a single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation?

    I'm asking these questions to judge whether our implementation is right or wrong.

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >