Search Results

Search found 2729 results on 110 pages for 'fk relationship'.


  • WCF. BasicHttpBinding Certificates.

    - by Andrew Kalashnikov
    Hello colleagues. I've run into a problem. I've created a WCF service with basicHttpBinding, hosted in IIS 6.0:

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="BindingConfiguration1" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647"
                              maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/>
                <security mode="Transport">
                  <transport clientCredentialType="None" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <services>
            <service name="RegistratorService.Registrator" behaviorConfiguration="RegistratorService.Service1Behavior">
              <endpoint address="" binding="basicHttpBinding" contract="RegistratorService.IRegistrator"
                        bindingConfiguration="BindingConfiguration1">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="RegistratorService.Service1Behavior">
                <serviceCredentials>
                  <clientCertificate>
                    <authentication certificateValidationMode="PeerOrChainTrust" revocationMode="NoCheck"/>
                  </clientCertificate>
                  <serviceCertificate storeLocation="LocalMachine" storeName="My" findValue="CN=Server" />
                </serviceCredentials>
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>

    I also have a certificate authority on this server, and I have issued certificates for the server and the client: the server certificate is installed on the server and the client certificate on the client. When I try to consume the service from the client I get the famous error: "Could not establish trust relationship for the SSL/TLS secure channel with authority". All sites recommend overriding ServicePointManager.ServerCertificateValidationCallback to simply return true, but I want to resolve this issue the proper way instead. My client config:

        <system.serviceModel>
          <behaviors>
            <endpointBehaviors>
              <behavior name="ClientBehavior">
                <clientCredentials>
                  <serviceCertificate>
                    <authentication certificateValidationMode="ChainTrust" revocationMode="NoCheck"/>
                  </serviceCertificate>
                  <clientCertificate findValue="CN=PharmPortal" storeLocation="LocalMachine" storeName="My"/>
                </clientCredentials>
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <bindings>
            <basicHttpBinding>
              <binding name="BasicHttpBinding_IRegistrator" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false"
                       hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288"
                       maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                       useDefaultWebProxy="true">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096"
                              maxNameTableCharCount="16384" />
                <security mode="Transport">
                  <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <client>
            <endpoint address="https://aurit-server2/Registrator.svc" binding="basicHttpBinding"
                      behaviorConfiguration="ClientBehavior" bindingConfiguration="BasicHttpBinding_IRegistrator"
                      contract="ServiceReference1.IRegistrator" name="BasicHttpBinding_IRegistrator">
              <identity>
                <dns value="Server" />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>

    I set up the client certificate. Why do I get this error?
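    For context, the workaround the poster mentions hooks the client-side certificate validation. The sketch below shows that workaround in a slightly less dangerous form: instead of returning true unconditionally, it accepts only a pinned certificate thumbprint (the thumbprint value is a hypothetical placeholder). Note that the usual root causes of this error are that the client machine does not trust the CA that issued the service certificate, or that the certificate subject (CN=Server) does not match the host name in the endpoint address (aurit-server2); checking those two things first avoids the callback entirely.

        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        // Client-side override of SSL certificate validation for outgoing calls.
        // Returning true unconditionally disables validation; pinning the expected
        // thumbprint (placeholder below) at least limits the exposure.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
            {
                if (sslPolicyErrors == SslPolicyErrors.None)
                    return true; // chain and host name both validated normally

                var cert = new X509Certificate2(certificate);
                return cert.Thumbprint == "0123456789ABCDEF0123456789ABCDEF01234567"; // hypothetical
            };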


  • Silverlight/Web Service Serializing Interface for use Client Side

    - by Steve Brouillard
    I have a Silverlight solution that references a third-party web service. This web service generates XML, which is then processed into objects for use in Silverlight binding. At one point the processing of XML to objects was done client-side, but we ran into performance issues and decided to move this processing to the proxies in the hosting web project, which did improve performance. This is obviously a gross over-simplification, but it should work. My basic project structure looks like this:

    - Solution
    - Solution.Web - Holds the web page that hosts Silverlight, as well as the proxies that access web services and process as required (and, obviously, the references to those web services).
    - Solution.Infrastructure - Holds references to the proxy web services in the .Web project, all generated code from serialized objects from those proxies, and the code around those objects that needs to be client-side.
    - Solution.Book - The particular project that uses the objects in question after they are processed down into Infrastructure.

    I've defined the following interface and class in the Web project. They represent the type of objects that the XML from the original third party gets transformed into, and since this is the only project in the Silverlight app that is actually server-side, that was the place to define and use them.

        // Doesn't get much simpler than this.
        public interface INavigable
        {
            string Description { get; set; }
        }

        // Very simple class too.
        public class IndexEntry : INavigable
        {
            public List<IndexCM> CMItems { get; set; }
            public string CPTCode { get; set; }
            public string DefinitionOfAbbreviations { get; set; }
            public string Description { get; set; }
            public string EtiologyCode { get; set; }
            public bool HighScore { get; set; }
            public IndexToTabularCommandArguments IndexToTabularCommandArgument { get; set; }
            public bool IsExpanded { get; set; }
            public string ManifestationCode { get; set; }
            public string MorphologyCode { get; set; }
            public List<TextItem> NonEssentialModifiersAndQualifyingText { get; set; }
            public string OtherItalics { get; set; }
            public IndexEntry Parent { get; set; }
            public int Score { get; set; }
            public string SeeAlsoReference { get; set; }
            public string SeeReference { get; set; }
            public List<IndexEntry> SubEntries { get; set; }
            public int Words { get; set; }
        }

    Again, both of these items are defined in the Web project. Notice that IndexEntry implements INavigable. When the code for IndexEntry is auto-generated in the Infrastructure project, the definition of the class does not include the implementation of INavigable. After discovering this, I thought "no problem, I'll create another partial class file reiterating the implementation". Unfortunately (I'm guessing because it isn't being serialized), that interface isn't recognized in the Infrastructure project, so I can't simply do that. Here's where it gets really weird: the Book project CAN see the INavigable interface. In fact, I use it in Book, even though Book has no reference to the web service in the Web project where the interface is defined, while Infrastructure does. Just as a test, I linked to the INavigable source file from inside the Infrastructure project. That allowed me to reference it in that project and compile, but it causes havoc in the Book project, because now there's a conflict between the one defined in Infrastructure and the one defined in the Web project's web service. This is behavior I would expect. So, to try and sum up a bit:

    The Web project has a web service that processes data from a third-party service, and has a class and interface defined in it. The class implements the interface. The Infrastructure project references the web service in the Web project, and the Book project references the Infrastructure project. The implementation of the interface in the class does NOT serialize down, so the auto-generated code in Infrastructure does not show this relationship, breaking code further downstream. The Book project, which is further downstream, CAN see the interface as defined in the Web project, even though its only reference is through the Infrastructure project, which CAN'T see it. Am I simply missing something easy here? Can I apply an attribute to either the interface definition or to its implementation in the class to ensure its visibility downstream? Anything else I can do here? I know this is a bit convoluted, so for anyone still with me here, thanks for your patience and any advice you might have. Cheers, Steve
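    A note on the mechanics here: WSDL/XSD describes only data shape, so interface implementations are never carried across a web-service boundary, and the generated IndexEntry cannot "remember" INavigable on its own. Proxy types generated by Add Service Reference/svcutil are emitted as partial classes, so the partial-class approach described above would look like the sketch below; as the post observes, it only compiles when a definition of INavigable is actually visible to the Infrastructure project (for example, from a shared class library referenced by both projects, rather than from the Web project itself).

        // In the Infrastructure project, alongside the generated proxy code.
        // Re-attaches the interface to the generated type. This requires a
        // definition of INavigable that this project can reference directly.
        public partial class IndexEntry : INavigable
        {
            // Nothing else needed: the generated half of this partial class
            // already declares the Description property that satisfies INavigable.
        }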


  • jQuery works in FF but not in Safari

    - by Hristo
    I have some event handlers that work in Firefox but not in Safari. Simply put, I have a list of friends, some hard-coded, some pulled in from a database. Clicking on a buddy opens a chat window, much like the Facebook chat system. In Firefox everything works normally and as expected. In Safari, clicking on buddies that are hard-coded works fine, but clicking on buddies that are pulled in from the database doesn't pull up the chat window.

        <script type="text/javascript" src="js/jQuery.js"></script>
        <script type="text/javascript" src="js/chat.js"></script>
        <script type="text/javascript" src="js/ChatBar.js"></script>
        <script type="text/javascript" src="js/settings.js"></script>
        <script type="text/javascript">
            var chat = new Chat();
            var from = <?php echo "'" . $_SESSION['userid'] . "'"; ?>;
            chat.getUsers(<?php echo "'" . $_SESSION['userid'] . "'"; ?>);
        </script>

    So I load all my buddies with chat.getUsers. That function is:

        // get list of friends
        function getBuddyList(userName) {
            userNameID = userName;
            $.ajax({
                type: "GET",
                url: "buddyList.php",
                data: { 'userName': userName, 'current': numOfUsers },
                dataType: "json",
                cache: false,
                success: function(data) {
                    if (numOfUsers != data.numOfUsers) {
                        numOfUsers = data.numOfUsers;
                        var list = "<li><span>Agents</span></li>";
                        for (var i = 0; i < data.friendlist.length; i++) {
                            list += "<li><a class=\"buddy\" href=\"#\"><img alt=\"\" src=\"images/chat-thumb.gif\">" + data.friendlist[i] + "</a></li>";
                        }
                        $('#friend-list ul').append($(list));
                    }
                    setTimeout('getBuddyList(userNameID)', 1000);
                }
            });
        }

    buddyList.php just pulls the users from the database and returns an array with the user names. The jQuery for clicking a buddy is:

        // click on buddy in #friends-panel
        $('#friends-panel a.buddy').click(function() {
            alert("Loaded");
            // close #friends-panel
            $('.subpanel').hide();
            $('#friends-panel a.chat').removeClass('active');
            // if a chat window is already active, close it and deactivate
            $('#mainpanel li[class="active-buddy-tab"] div').not('#chat-box').removeAttr('id');
            $('#mainpanel li[class="active-buddy-tab"]').removeClass('active-buddy-tab').addClass('buddy-tab');
            // create active buddy chat window
            $('#mainpanel').append('<li class="active-buddy-tab"><a class="buddy-tab" href="#"></a><div id="chat-window"><h3><p id="to"></p></h3></div></li>');
            // create name and close/minimize buttons
            $('.active-buddy-tab div h3 p#to').text($(this).text());
            $('.active-buddy-tab div h3').append('<span class="close"> X </span><span class="minimize"> &ndash; </span>');
            $('.active-buddy-tab').append('<span class="close"> X </span>');
            // create chat area
            $('.active-buddy-tab div').append('<div id="chat-box"></div><form id="chat-message"><textarea id="message" maxlength="100"></textarea></form>');
            // put cursor in chat window
            $('.active-buddy-tab #message').focus();
            // create a chat relationship
            return false;
        });

    ...and the basic structure of the HTML is:

        <div id="footpanel">
            <ul id="mainpanel">
                <li id="friends-panel">
                    <a href="#" class="chat">Friends (<strong>18</strong>) </a>
                    <div id="friend-list" class="subpanel">
                        <h3><span> &ndash; </span>Friends Online</h3>
                        <ul>
                            <li><span>Family Members</span></li>
                            <!-- Hard coded buddies -->
                            <li><a href="#" class="buddy"><img src="images/chat-thumb.gif" alt="" /> Your Friend 1</a></li>
                            <li><a href="#" class="buddy"><img src="images/chat-thumb.gif" alt="" /> Your Friend </a></li>
                            <!-- buddies will be added in dynamically here -->
                        </ul>
                    </div>
                </li>
            </ul>
        </div>

    I'm not too sure where to begin solving this issue. I thought it might be a rendering bug or something with the DOM, but I've been staring at this code all day and I'm stuck. Any ideas on why it works in Firefox and not in Safari? By the way, I'm testing on Snow Leopard. Thanks, Hristo


  • Need help with writing from XML to a SQL Server database (detailed)

    - by fogedi
    I'm a bit of a newbie with XML, web services and the like. I'm working on a project using GlassFish OpenESB to schedule a process that gets some information from a web service and then stores it in a database. The criteria are basically that I have to use GlassFish OpenESB or EJB modules where I can expose web services, or something along those lines, AND I have to use SQL Server 2005. So far I've been able to talk to the web service and receive something along these lines:

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                           xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/ http://schemas.xmlsoap.org/soap/envelope/">
          <SOAP-ENV:Body>
            <m:entrypoint_getSettlementsOperationResponse xmlns:m="http://j2ee.netbeans.org/wsdl/BorgunTestBPEL/entrypoint_getSettlements">
              <part1>
                <GetSettlementsByMerchantResponse xmlns="http://Borgun.Services.Gateway/2010/04/Settlement">
                  <GetSettlementsByMerchantResult xmlns:a="http://schemas.datacontract.org/2004/07/Borgun.Library.Common"
                                                  xmlns:i="http://www.w3.org/2001/XMLSchema-instance"
                                                  xmlns:msgns="http://Borgun.Services.Gateway/2010/04/Settlement"
                                                  xmlns:ns0="http://j2ee.netbeans.org/wsdl/BorgunTestBPEL/entrypoint_getSettlements">
                    <a:CreditCardSettlement>
                      <a:amexAmount>XXX</a:amexAmount>
                      <a:amount>XXXX</a:amount>
                      <a:batches>
                        <a:CreditCardBatch>
                          <a:batchdate>xxx</a:batchdate>
                          <a:batchnumber>XXXX</a:batchnumber>
                          <a:currencyCode>xxxx</a:currencyCode>
                          <a:merchantnumber>xxxx</a:merchantnumber>
                          <a:settlementRunNumber>xx4</a:settlementRunNumber>
                          <a:settlementdate>2010-04-06T00:00:00</a:settlementdate>
                          <a:slips>2</a:slips>
                          <a:sum>xxxx</a:sum>
                        </a:CreditCardBatch>
                        <a:CreditCardBatch>
                          <a:batchdate>xxx</a:batchdate>
                          <a:batchnumber>xxxxx</a:batchnumber>
                          <a:currencyCode>xxxx</a:currencyCode>
                          <a:merchantnumber>xxxx</a:merchantnumber>
                          <a:settlementRunNumber>xxxx</a:settlementRunNumber>
                          <a:settlementdate>xxxx</a:settlementdate>
                          <a:slips>x</a:slips>
                          <a:sum>xxx</a:sum>
                        </a:CreditCardBatch>
                      </a:batches>
                      <a:commission>xx</a:commission>
                      <a:currencyCode>xxx</a:currencyCode>
                      <a:deduction>-xxx</a:deduction>
                      <a:deductionItems>
                        <a:CrediCardSettlementDeduction>
                          <a:amount>-xxx</a:amount>
                          <a:code>VIÐSKF</a:code>
                          <a:currencyCode>ISK</a:currencyCode>
                          <a:merchantnumber>xxx</a:merchantnumber>
                          <a:settlementrunnumber>xxx</a:settlementrunnumber>
                          <a:text>Afsláttur v/ekorta</a:text>
                        </a:CrediCardSettlementDeduction>
                        <a:CrediCardSettlementDeduction>
                          <a:amount>-335.00</a:amount>
                          <a:code>ÁLAGKREK</a:code>
                          <a:currencyCode>ISK</a:currencyCode>
                          <a:merchantnumber>xxx</a:merchantnumber>
                          <a:settlementrunnumber>xxx</a:settlementrunnumber>
                          <a:text>xxx</a:text>
                        </a:CrediCardSettlementDeduction>
                      </a:deductionItems>
                    </a:CreditCardSettlement>
                  </GetSettlementsByMerchantResult>
                </GetSettlementsByMerchantResponse>
              </part1>
            </m:entrypoint_getSettlementsOperationResponse>
          </SOAP-ENV:Body>
        </SOAP-ENV:Envelope>

    I have access to the SQL Server 2005 server, which is remote, and I know I can insert into it. But given that I now have a one-to-many relationship, I want to be able to roll back if something fails. So, in short: how can I insert from this XML into the DB, preferably without manually walking the XML tree? I'm pretty sure I'm supposed to be able to use Entity and Session Beans, or maybe JAXB bindings, but I'm simply not being successful.
    One of the reasons might be that the SOAP response contains an array of CreditCardSettlements, each of which contains an array of Batches and DeductionItems. It would be best if someone could help me do this via a BPEL in GlassFish OpenESB, but any hint at a Java solution is much appreciated.
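    The poster's stack is Java, but one way to satisfy the rollback requirement without walking the tree is to push the shredding into SQL Server itself: pass the whole document as an xml parameter, shred it with nodes()/value() inside a single transaction, and let the one-to-many rows commit or roll back together. A minimal C# ADO.NET sketch of that idea follows; the table and column names are hypothetical placeholders, and the same T-SQL works unchanged from a JDBC client.

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class SettlementLoader
        {
            // Shreds the settlement XML inside SQL Server 2005; parent and child rows
            // commit or roll back together. Table and column names are hypothetical.
            const string InsertBatches = @"
                WITH XMLNAMESPACES ('http://schemas.datacontract.org/2004/07/Borgun.Library.Common' AS a)
                INSERT INTO dbo.CreditCardBatch (MerchantNumber, BatchNumber, CurrencyCode, Slips, BatchSum)
                SELECT b.value('(a:merchantnumber)[1]', 'varchar(20)'),
                       b.value('(a:batchnumber)[1]',    'varchar(20)'),
                       b.value('(a:currencyCode)[1]',   'varchar(10)'),
                       b.value('(a:slips)[1]',          'int'),
                       b.value('(a:sum)[1]',            'decimal(18,2)')
                FROM @doc.nodes('//a:CreditCardSettlement/a:batches/a:CreditCardBatch') AS T(b);";

            static void Load(string connectionString, string soapResponseFile)
            {
                string doc = File.ReadAllText(soapResponseFile); // the SOAP response shown above

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        try
                        {
                            var cmd = new SqlCommand(InsertBatches, conn, tx);
                            cmd.Parameters.Add("@doc", SqlDbType.Xml).Value = doc;
                            cmd.ExecuteNonQuery();

                            // ...further INSERTs (settlements, deduction items) share tx here...

                            tx.Commit();   // all rows land together
                        }
                        catch
                        {
                            tx.Rollback(); // the one-to-many rows never half-apply
                            throw;
                        }
                    }
                }
            }
        }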


  • Java AD Authentication across Trusted Domains

    - by benjiisnotcool
    I am trying to implement Active Directory authentication in Java, which will be run from a Linux machine. Our AD set-up consists of multiple servers that share trust relationships with one another, so for our test environment we have two domain controllers: test1.ad1.foo.com, which trusts test2.ad2.bar.com. Using the code below I can successfully authenticate a user from test1 but not from test2:

        // Imports reconstructed for the excerpt; ResultSetProvider comes from PL/Java.
        import java.io.FileNotFoundException;
        import java.io.IOException;
        import java.io.InputStream;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.Hashtable;
        import java.util.Properties;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javax.naming.AuthenticationException;
        import javax.naming.Context;
        import javax.naming.NamingEnumeration;
        import javax.naming.NamingException;
        import javax.naming.directory.Attribute;
        import javax.naming.directory.Attributes;
        import javax.naming.directory.SearchControls;
        import javax.naming.directory.SearchResult;
        import javax.naming.ldap.InitialLdapContext;
        import javax.naming.ldap.LdapContext;
        import org.postgresql.pljava.ResultSetProvider;

        public class ADDetailsProvider implements ResultSetProvider {

            private String domain;
            private String user;
            private String password;

            public ADDetailsProvider(String user, String password) {
                // extract domain name
                if (user.contains("\\")) {
                    this.user = user.substring((user.lastIndexOf("\\") + 1), user.length());
                    this.domain = user.substring(0, user.lastIndexOf("\\"));
                } else {
                    this.user = user;
                    this.domain = "";
                }
                this.password = password;
            }

            /* Test from the command line */
            public static void main(String[] argv) throws SQLException {
                ResultSetProvider res = processADLogin(argv[0], argv[1]);
                ResultSet results = null;
                res.assignRowValues(results, 0);
                System.out.println(argv[0] + " " + argv[1]);
            }

            public boolean assignRowValues(ResultSet results, int currentRow) throws SQLException {
                // Only want a single row
                if (currentRow >= 1) return false;
                try {
                    ADAuthenticator adAuth = new ADAuthenticator();
                    LdapContext ldapCtx = adAuth.authenticate(this.domain, this.user, this.password);
                    NamingEnumeration userDetails = adAuth.getUserDetails(ldapCtx, this.user);
                    // Fill the result set (throws SQLException).
                    while (userDetails.hasMoreElements()) {
                        Attribute attr = (Attribute) userDetails.next();
                        results.updateString(attr.getID(), attr.get().toString());
                    }
                    results.updateInt("authenticated", 1);
                    return true;
                } catch (FileNotFoundException fnf) {
                    Logger.getAnonymousLogger().log(Level.WARNING,
                        "Caught File Not Found Exception trying to read cris_authentication.properties");
                    results.updateInt("authenticated", 0);
                    return false;
                } catch (IOException ioe) {
                    Logger.getAnonymousLogger().log(Level.WARNING, "Caught IO Exception processing login");
                    results.updateInt("authenticated", 0);
                    return false;
                } catch (AuthenticationException aex) {
                    Logger.getAnonymousLogger().log(Level.WARNING,
                        "Caught Authentication Exception attempting to bind to LDAP for [{0}]", this.user);
                    results.updateInt("authenticated", 0);
                    return true;
                } catch (NamingException ne) {
                    Logger.getAnonymousLogger().log(Level.WARNING,
                        "Caught Naming Exception performing user search or LDAP bind for [{0}]", this.user);
                    results.updateInt("authenticated", 0);
                    return true;
                }
            }

            public void close() {
                // nothing needed here
            }

            /**
             * This method is called via a Postgres function binding to access the
             * functionality provided by this class.
             */
            public static ResultSetProvider processADLogin(String user, String password) {
                return new ADDetailsProvider(user, password);
            }
        }

        public class ADAuthenticator {

            private String domain;

            public ADAuthenticator() throws FileNotFoundException, IOException {
                Properties props = new Properties();
                InputStream inStream = this.getClass().getClassLoader()
                    .getResourceAsStream("com/bar/foo/ad/authentication.properties");
                props.load(inStream);
                this.domain = props.getProperty("ldap.domain");
                inStream.close();
            }

            public LdapContext authenticate(String domain, String user, String pass)
                    throws AuthenticationException, NamingException, IOException {
                Hashtable env = new Hashtable();
                this.domain = domain;
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://test1.ad1.foo.com:3268");
                env.put(Context.SECURITY_AUTHENTICATION, "simple");
                env.put(Context.REFERRAL, "follow");
                env.put(Context.SECURITY_PRINCIPAL, (domain + "\\" + user));
                env.put(Context.SECURITY_CREDENTIALS, pass);
                // Bind using specified username and password
                LdapContext ldapCtx = new InitialLdapContext(env, null);
                return ldapCtx;
            }

            public NamingEnumeration getUserDetails(LdapContext ldapCtx, String user) throws NamingException {
                // List of attributes to return from LDAP query
                String returnAttributes[] = {"ou", "sAMAccountName", "givenName", "sn", "memberOf"};
                // Create the search controls
                SearchControls searchCtls = new SearchControls();
                searchCtls.setReturningAttributes(returnAttributes);
                // Specify the search scope
                searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE);
                // Specify the user to search against
                String searchFilter = "(&(objectClass=*)(sAMAccountName=" + user + "))";
                // Perform the search
                NamingEnumeration answer = ldapCtx.search("dc=dev4,dc=dbt,dc=ukhealth,dc=local", searchFilter, searchCtls);
                // Only care about the first tuple
                Attributes userAttributes = ((SearchResult) answer.next()).getAttributes();
                if (userAttributes.size() <= 0) throw new NamingException();
                return (NamingEnumeration) userAttributes.getAll();
            }
        }

    From what I understand of the trust relationship, if test1 receives a login attempt for a user in test2, it should forward the login attempt on to it, working this out from the user's domain name. Is this correct, am I missing something, or is this not possible using the method above?

    --EDIT--
    The stack trace from the LDAP bind is:

        {java.naming.provider.url=ldap://test1.ad1.foo.com:3268,
         java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory,
         java.naming.security.authentication=simple,
         java.naming.referral=follow}
        30-Oct-2012 13:16:02 ADDetailsProvider assignRowValues
        WARNING: Caught Authentication Exception attempting to bind to LDAP for [trusttest]
        Auth error is [LDAP: error code 49 - 80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db0]


  • Is your team a high-performing team?

    As a child, I remember looking out of the car window as my father drove along the Interstate in Florida, seeing prisoners in bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain its highway infrastructure; the prisoners' primary responsibility is to pick up trash and debris from the roadway. This is a prime example of a work group, or working group, of the kind used by most prison systems in the United States.

    Work groups, or working groups, can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are established only for a short period of time and are dissolved once the desired outcome has been achieved. More often than not, group members feel they are expendable to the group, and some even dread being in the group at all.

    "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993)

    So how do you determine that a team is a high-performing team? This can be determined by three baseline criteria: consistently high-quality output, the promotion of personal growth and well-being of all team members, and, most importantly, the ability to learn and grow as a unit. Initially, a team can produce high-performing output without meeting all three criteria, but this will erode over time: if team members feel detached from the group, or feel that they are not growing, the quality of the output will decline.

    High-performing teams are similar to work groups in that both use a collection of individuals or entities to accomplish tasks. What distinguishes a high-performing team from a work group are its characteristics. High-performing teams have five core characteristics, and these are what separate a group from a team: Purpose, Performance Measures, People with Task and Relationship Skills, Process, and Preparation and Practice.

    A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team life cycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team's life cycle are: Formulating, Storming, Normalizing, Performing, and Adjourning.

    The Formulating State is first realized when the team members are defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader, because team members are normally unclear about their specific roles and the obstacles and goals that may lie ahead of them. Finally, this stage is where most team members first meet one another, unless they already know each other.

    The Storming State normally arrives directly after the formation of a new team, because there are still many unknowns in the newly formed assembly. As a general rule, most of the parties involved are still getting used to the workload, the pace of work, deadlines, and the validity of the various tasks that the group needs to perform. In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader. This can be exemplified by the interactions between animals when they first meet. Consider a scenario where two people walk directly toward each other with their dogs: the dogs automatically enter the Storming State because they do not know each other. Typically they attempt to work out which is dominant, via play or fighting, depending on how they interact. Once dominance has been established and accepted by both dogs, they will either want to play or leave, depending on how the interaction went and other environmental variables.

    Once the Storming State has been worked through, the Normalizing State takes over. A team enters this state once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects. Typically, participants are filled with energy and camaraderie, and a strong commitment to team goals and objectives. A high school football team at the start of its season is a perfect example of the Normalizing State: the player positions have been assigned, the depth chart has been filled, and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities, and they are united by competition from other teams.

    The Performing State is achieved when the team's history, working habits, and culture solidify it as one working unit. In this state, team members can anticipate specific behaviors, attitudes, and reactions, and challenges are seen as opportunities rather than problems. Additionally, each team member knows their own role in the team's success, and the roles of the others. This is the most productive state of a group, and it is where all the time invested in working together really pays off. If you watch an Olympic pairs figure skating team, you can easily see how the time spent working together benefits their performance: they skate as one unit even though the pair is made up of two skaters. Each skater has their routine completely memorized, as well as their partner's, which allows them to anticipate each other's moves on the ice and makes their skating look effortless.

    The final state of a team is the Adjourning State. This is where accomplishments by the team and by each individual member are recognized. It also allows for reflection on the interactions between team members, the work accomplished, and the challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit.

    Currently, workplace teams are divided into two types: co-located and distributed teams. Co-located teams are defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of team has dominated business in the past due to inadequate technology, which forced workers to interact primarily via face-to-face meetings. Team meetings are primarily led by the person with the highest status in the company. Having personally participated in meetings of this type, I have found that a select few team members usually dominate the flow of communication, which reduces the input of others in group discussions. Because discussions are dominated by a select few individuals, they become skewed in favor of the opinions of those who communicate the most. In addition, team members may hold back their full opinions on a topic, partly to avoid offending others or creating controversy, which further tilts decisions toward the opinions of the dominating team members.

    Distributed teams are, by definition, spread across an area or subdivided into separate sections, and that is exactly what they are when compared to a more traditional team. It is commonplace for distributed teams to have team members across town, in the next state, across the country, and, with the advances in technology over the last 20 years, even across the world. These teams allow for more diversity than other types of teams because they allow for more flexibility regarding location. A team could consist of a 30-year-old male Italian project manager from New York, a 50-year-old female Hispanic from California, and a collection of programmers from India, because technology allows them to communicate as if they were standing next to one another. In addition, distributed team members consult with more team members prior to making decisions than traditional teams do, and they take longer to come to decisions because of differences in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team, and to notify others of potential issues regarding the team's work.

    Virtual teams, a subset of distributed teams, are changing organizational strategies, because a team can now essentially be working 24 hours a day by using employees in various time zones and locations. A primary example is customer service departments: a company can have multiple call centers spread across multiple time zones, allowing it to appear to be open 24 hours a day while all of its employees work from 9 AM to 5 PM. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee lives, because that person will simply be part of a virtual team; all that is needed is the proper technology for everyone to communicate. In addition to letting employees work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, team members who only occasionally come into the office can share one desk among multiple people. This is definitely a cost-cutting plus, given the current state of the economy.

    One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development; providing team members with the direction, structure, and resources needed to accomplish their work; making the right interventions at the right time; and helping the team manage the boundaries between the team and the various external parties involved in its work.

    A team leader needs to invest in ongoing personal development in order to manage their team effectively. People say that attitude is everything, and this is very true of leaders and leadership. A team takes on the attitudes and behaviors of its leaders, which can potentially harm the team and its output. Leaders must concentrate on self-awareness and on understanding their team's group dynamics to fully understand how to lead. In addition, continually learning new leadership techniques from other effective leaders is also very beneficial.

    Providing team members with the direction, structure, and resources they need to accomplish their work sounds easy, but it is not. Leaders need to communicate effectively with their team about how their work helps the company reach its organizational vision. At the same time, the leader needs to allow the team to work autonomously within specific guidelines to turn the company's vision into reality. That said, the team must be appropriately staffed for the size and complexity of its tasks. These tasks should be clear and meaningful to the company's objectives, and should allow feedback to be exchanged between the leader and each team member, and between the leader and upper management. If the team is properly staffed and has a clear and full understanding of what is to be done, the company must also supply the workers with the proper tools to achieve the tasks they are asked to do; no one should be asked to dig a hole without being given a shovel. Finally, leaders must reward their team members for their accomplishments. Rewards can range from a simple congratulatory email, to a party marking the completion of a large project, to monetary rewards.

    Managing boundaries is very important for team leaders, because boundary problems can alter the attitudes of team members and add undue stress, forcing them to lose focus on the tasks at hand. Team leaders should promote communication between team members so that burdens are shared and solutions can be derived from the opinions of multiple sources. This also reinforces team camaraderie and working as a unit.

    Team leaders must manage the type and timing of interventions so as not to create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves, which can in turn force further, undue interventions by the team leader. Typically, the best time for intervention is when the team is just starting to form, so that unproductive behaviors are removed from the team and it can retain focus on its agenda. If an intervention is effectively executed, the team will feel energized about the work they are doing, communication and interaction within the group will improve, and morale will improve overall.

    High-performing teams are very important to organizations because they consistently produce high-quality output and develop a collective purpose for their work. This drive to succeed allows team members to use their specific talents and to grow in those areas. In addition, these team members usually take on a sense of ownership of their projects and feel that the other team members are irreplaceable.

    References:
    Singleton, A. "Three ways to organize your team: co-located, outsourced, or global." http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx
    Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-Performance Organization. Boston: Harvard Business School.


  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the following LSQL was rewritten into the physical SQL shown after it, for an Oracle 11g database.

    Logical SQL:

        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01  Product" saw_1,
               "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
            select T986.Per_Name_Month as c1,
                   T879.Prod_Dsc as c2,
                   sum(T835.Units) as c3,
                   T879.Prod_Key as c4
            from Product T879 /* A05 Product */,
                 Time_Mth T986 /* A08 Time Mth */,
                 FactsRev T835 /* A11 Revenue (Billed Time Join) */
            where (T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
            group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical.

    Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. When the results come back from the data source queries, the relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model: it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hide the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
    It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important for portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object, in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. It also allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the second query added a more detailed dimension-level column; everything else was the same.

    Mappings

    Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
    When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

    The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.
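    To make the query cycle concrete: as described above, any ODBC/JDBC client can submit Logical SQL like the earlier example directly to the BI Server. The following is a minimal C# sketch of that idea; the DSN name, credentials, and subject area are placeholders, not values from the post.

        using System;
        using System.Data.Odbc;

        // Sends a Logical SQL query to the Oracle BI Server over ODBC.
        // The DSN name, credentials, and subject area below are placeholders.
        class LsqlClient
        {
            static void Main()
            {
                const string lsql =
                    "SELECT \"D0 Time\".\"T02 Per Name Month\" saw_0, " +
                    "\"F2 Units\".\"2-01  Billed Qty  (Sum All)\" saw_1 " +
                    "FROM \"Sample Sales\" ORDER BY saw_0";

                using (var conn = new OdbcConnection("DSN=AnalyticsWeb;UID=weblogic;PWD=secret"))
                {
                    conn.Open();
                    using (var cmd = new OdbcCommand(lsql, conn))
                    using (OdbcDataReader reader = cmd.ExecuteReader())
                    {
                        // The BI Server plans and rewrites the LSQL against the physical
                        // sources, then returns a single flattened result set.
                        while (reader.Read())
                            Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
                    }
                }
            }
        }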


  • CodePlex Daily Summary for Sunday, February 13, 2011

    Popular Releases

    - TV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software.
    - Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1, including the broken per-desktop backgrounds. Further improves the speed of switching desktops. A few UI performance improvements. Added donations links.
    - NuGet: NuGet 1.1: NuGet is a free, open source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. The URL to the package OData feed is: http://go.microsoft.com/fwlink/?LinkID=206669 To see the list of issues fixed in this release, visit our issues list.
    - EnhSim: EnhSim 2.4.0: This release supports WoW patch 4.06 at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 - Upd...
    - Sterling Isolated Storage Database with LINQ for Silverlight and Windows Phone 7: Sterling OODB v1.0: Note: use this changeset to download the source example that has been extended to show database generation, backup, and restore in the desktop example. Welcome to the Sterling 1.0 RTM. This version is not backwards-compatible with previous versions of Sterling. Sterling is also available via NuGet. This product has been used and tested in many applications and contains a full suite of unit tests. You can refer to the User's Guide for complete documentation, and use the unit tests as guide...
    - PDF Rider: PDF Rider 0.5.1: Changes from the previous version: uses dynamic layout to better fit text in other languages; includes French and Spanish localizations. Prerequisites: Microsoft Windows operating systems (XP - Vista - 7); Microsoft .NET Framework 3.5 runtime; a PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions, choose one of the following methods: 1. Download and run "pdfRider0.5.1-setup.exe" (recommended) 2. Down...
    - Snoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug-fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser; hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! P.S. By request, I am also attaching a .zip file ... so that people can install it ...
    - RIBA - Rich Internet Business Application for Silverlight: Preview of MVVM Framework Source + Tutorials: This is a first public release of the MVVM Framework which is part of the final RIBA application. The complete RIBA example LOB application has yet to be published. Further documentation on the MVVM part can be found on the blog, http://www.SilverlightBlog.Net, and in the downloadable source (mvvm/doc/). Please post all issues and suggestions in the issue tracker.
    - SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: support for SharePoint 2010; E-Learning Actions can be localised; two new Document Library Edit Options; automatically add the Assignment List Web Part to the Web Part Gallery; various bug fixes for the Drop Box. There are 2 downloads for this release: SLK-1.5-2010.zip for SharePoint 2010, and SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007).
    - Facebook C# SDK: 5.0.3 (BETA): This is the fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build; some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. For more information about this release see the following blog posts: "Facebook C# SDK - Writing your first Facebook Application", "Facebook C# SDK v5 Beta Internals", and "Facebook C# SDK V5.0.0 (BETA) Released". We have spent time trying ...
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's new: this release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation steps: download the Zip file, then unzip it into any folder; use WinZip or a similar program, or just right-click the Zip file...
    - WCF Data Services Toolkit: WCF Data Services Toolkit: The source code and binary releases of the WCF Data Services Toolkit. For simplicity, the source code download doesn't include any of the MSTest files. If you want those, you can pull the code down via Mercurial.
    - youtubeFisher: youtubeFisher 3.0 [beta]: What's new: video capturing improved; supports YouTube's new layout (January 2011); internal refactoring.
    - Nearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3; all views migrated to Razor for cleaner markup; alternate template (Layout file) for mobile devices; 4 bug fixes since version 4.1. Visit the project Roadmap for more details. Webdeploy package sha1 checksum: 28785b7248052465ea0738a7775e8e8744d84c27
    - fuv: 1.0 release, codename Chopper Joe: Features: search/replace; :o to open file; :s to save file; :q to quit.
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. HTML generation optimized; new features for the lookup (add additional search data); live demo went aero.
    - AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items (items with nr. 7-0 will be removed!). News page URLs are now opened in the default browser. Added a context menu to the system tray icon (thanks to Alex Banagos). AutoChat now allows configuring the Chat Keys and the Modifier Key. The recent files list now supports compact and full mode. Fix: swapped mouse buttons are now properly detected. Fix: sometimes the Play button was pressed while still greyed out. Champion: Karma. Note: You can also run the u...
    - mojoPortal: 2.3.6.2: See the release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code. To download the source code, see the Source Code tab. I recommend getting the latest source code using TortoiseHG; you can get the source code corresponding to this release here.
    - Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm Beta Release. More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...
    - IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases; we recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: problems that blocked Gem installation in certain cases; regex syntax (the parser was replaced with a new one that is much more compatible with Ruby 1.9.2); cras...

    New Projects

    - Abstract | .NET DDD abstraction for infra-structure (Data, Blobs, Queues): In the last few years we have seen many tools abstract access to infra-structures. They are all very different, which makes it difficult for you to move from or to Azure. Abstract makes migration easier by standardising access to these infra-structures.
    - Apex APRS: Apex APRS is a new APRS client application that is unlike any other. Key features: online and offline-cached map viewing from multiple popular sources; fast, simple, intuitive & powerful user interface.
    - Customizable Notification System: Customizable Notification System
    - Daniel Singleton for C++: An elegant solution for C++ singletons using dependency declaration to control lifetime. One object created during any execution, lazy-init, thread safety... nice and compact.
    - Deduplicator: Deduplicator helps to organize your file system. Create one folder, organized by choice, containing unique files. To be used for photos, MP3s or any other binary format. Deduplicator is not released yet; the user interface is limited and some hardcoding is still in place.
    - Digitypon (ASP.NET MVC 3): Digitypon will be a new web application specialized for those who want to set up an e-newspaper or an e-magazine. The main difference from other CMSs is that Digitypon's workflow is a virtualized version of how the employees of printed matter (newspapers, magazines) work.
    - EdgeJournalImporter: Import journal files written on the Entourage (Pocket) Edge into Microsoft OneNote 2007+.
    - FlatFileSerializer: Serialize and deserialize flat file records from and to self-defined classes using attributes.
    - Google Chart Helper: Controls to insert Google Charts into your web application. No JavaScript code to write; we do it for you!
    - How to display records from MySQL 5.1 database in asp.net using VB.net or CSharp: How to display records from a MySQL 5.1+ database in asp.net with VB.net or C# code.
    - HTTP Filer: HTTP Filer is a utility that allows users to share files and documents over the HTTP protocol. This utility was designed especially for Windows Phone users, to send files from a computer to their phone easily without sending emails with attachments or uploading files to an internet server.
    - ibamonitoring: Source code for the avian point-count data collection web site www.ibamonitoring.org.
    - JoPack Ultra Light Packaging for large teams: JoPack is an open-source, ultra-light package management software targeted at simplifying development with large teams sharing volatile assemblies across several solutions. The latest project source code can be found on the project home site: http://code.google.com/p/jo-pack/
    - L-System Turtle Based Fractal Tool (L-Fractal Tool): A tool to help you play with L-System turtle-graphics-based fractal curves (http://en.wikipedia.org/wiki/L-system). This tool helps you look into some of the well-known curves and lets you define new patterns and production rules to build your own. Have a fun-fractal day!
    - mailer: mailer is an application to mail. It's developed in Python.
    - NJamb: A C# DSL for more rigorous tests: NJamb is a C# syntax for tests and DDD specifications. It makes them more readable, faster to write, and more rigorous. Its Linq-style expressions can assert preconditions and postconditions. IntelliSense makes the syntax almost foolproof. And it's designed to be extended.
    - NUpdater: NUpdater makes it easier for .NET Framework developers to add auto-updating capability to their software. Putting together numerous patching capabilities, this library is an all-around updater. Developed in C# with CLS compliance (this library is fully compatible with Mono).
    - Perihelia - The .NET & Silverlight Socket Project: Perihelia is an open-source socket framework. The framework includes (or will include) all the necessities you need to satisfy your networking needs. Windows and WPF applications are currently supported, and Silverlight applications will be supported soon.
    - PLogger: PLogger is a light, fast, configuration-less file-appender logger built using a parallel pipeline architecture. It is much easier and faster to set up and use than Log4Net or the Enterprise Library.
    - Quasar: Quasar is a professional .NET utility library which adds sugar on top of the .NET Framework.
    - Sectors Game Engine: Sectors is an XNA-based 2.5D (Doom-like) game engine with console and scripting support for Windows.
    - SharePoint 2010 Server-Side-Scanner WebPart - embDocumentInhalator: embDocumentInhalator makes it possible for SharePoint 2010 users to scan documents from scanners attached directly to the server. For developers it may help to see the relationship between the individual components required.
    - SIAJUR: Web project for document management ("Projeto Web para controle de documentos").
    - svcutil2: svcutil2 generates WCF client proxies from WSDL 2.0 documents.
    - TemporalMemoryNetwork: TemporalMemoryNetwork is a research project exploring how dynamical systems can store and represent patterns that occur through time.
    - WebDAV#: This project aims to implement WebDAV support for .NET, both for client software as well as software hosting their own WebDAV server. The project will start with the server portion. The project will be developed in C# 3.5 for .NET 3.5 and 4.0.
SIAJUR: Projeto Web para controle de documentossvcutil2: svcutil2 generates Wcf client proxies from Wsdl2 documents.TemporalMemoryNetwork: TemporalMemoryNetwork is a research project exploring how dynamical systems can store and represent patterns that occur through time.WebDAV#: This project aims to implement WebDAV support for .NET, both for client software as well as software hosting their own WebDAV server. The project will start with the server portion. The project will be developed in C# 3.5 for .NET 3.5 and 4.0.

    Read the article

  • CodePlex Daily Summary for Monday, November 19, 2012

    CodePlex Daily Summary for Monday, November 19, 2012
Popular Releases
mojoPortal: 2.3.9.4: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2394-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you use .NET 4; we will probably drop support for .NET 3.5 once .NET 4.5 is available. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...
VidCoder: 1.4.6 Beta: Brought back the x264 advanced options panel due to popular demand. Thank you for all the feedback. x264 Preset/Profile/Tune/Level has been moved back to the Video tab, along with a copy of the "extra options" string. Added Fast Decode and Zero Latency checkboxes to support multiple Tunes. Added cropping option "None". Audio bitrates that are incompatible with the encoder (such as MP3 > 320 kbps) are no longer preset on the list. Fixed crash on opening VidCoder after de-selecting "re...
Metodología General Ajustada - MGA: 03.05.02: Parmenio's changes: corrections to the F03 programming format; the validation that the activity's unit equals the indicator's is left commented out. John's changes: integration of code with changes sent by Parmenio Bonilla. Generation of installers. Technical support by e-mail, by telephone and on site.
SPListViewFilter: Version 1.8: Fixed some bugs
DotNetNuke® Store: 03.01.07: What's new in this release? IMPORTANT: this version requires DotNetNuke 04.06.02 or higher! DO NOT REPORT BUGS HERE IN THE ISSUE TRACKER, INSTEAD USE THE DotNetNuke Store Forum! Bugs corrected: replaced some hard-coded references to the default address provider classes by the corresponding interfaces to allow the creation of another address provider with a different name. New features: added the 'pickup' delivery option at checkout; added the 'no delivery' option in the Store Admin ...
Bundle Transformer - a modular extension for ASP.NET Web Optimization Framework: Bundle Transformer 1.6.10: Version: 1.6.10 Published: 11/18/2012 Now almost all of the Bundle Transformer's assemblies are signed (except BundleTransformer.Yui.dll); in BundleTransformer.SassAndScss the SassAndCoffee.Ruby library was replaced by my own implementation of the Sass and SCSS compiler (based on code of the SassAndCoffee.Ruby library version 2.0.2.0); in BundleTransformer.CoffeeScript added support for CoffeeScript version 1.4.0-3; in BundleTransformer.TypeScript added support for TypeScript version 0....
ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.0: 2012-11-18 changelog (the original notes are Chinese text garbled in this excerpt; the legible fragments mention SelectedValueArray, RecoverPropertiesFromJObject, Alert.Show, Icon/IconUrl, a new TimePicker, MenuCheckBox, RadioButton AutoPostBack, and FCKEditor changes)
BugNET Issue Tracker: BugNET 1.2: Please read our release notes for BugNET 1.2: http://blog.bugnetproject.com/bugnet-1-2-has-been-released Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.
Paint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...
CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException. [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had incorrect State attribute name. [FIX] Fixed bug in rule EntityPropertyRule which was missing PropertyValue attribute. [FIX] Current connection information was not displayed in status bar while refreshing list of entities
Super Metroid Randomizer: Super Metroid Randomizer v5: v5: Added command-line functionality for automation purposes. Implemented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version; the seed format has changed by necessity. v4: Started putting version numbers at the top of the form. Added a warning when suitless Maridia is required in a parsed seed. v3: Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. Files can now be saved...
Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes: This version includes many bug fixes across all platforms, improvements to NuGet support and...the biggest news of all...full support for both WinRT and WP8. Download contents: Debug and Release assemblies, samples, Readme.txt, License.txt. Packages available on NuGet: Caliburn.Micro – the full framework compiled into an assembly. Caliburn.Micro.Start - includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – the Caliburn.Micro invers...
DirectX Tool Kit: November 15, 2012: Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838. Cleaned up warning level 4 warnings.
DotNetNuke® Community Edition CMS: 06.02.05: Major highlights: updated the system so that it supports nested folders in the App_Code folder; updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code; fixed issue that stopped users from specifying Link URLs that open in a new window. Security fixes: fixed issue in the Member Directory module that could show members to non-authenticated users; fixed issue in the Lists modul...
fastJSON: v2.0.10: added MonoDroid project
xUnit.net Contrib: xunitcontrib-resharper 0.7 (RS 7.1, 6.1.1): xunitcontrib release 0.6.1 (ReSharper runner). This release provides a test runner plugin for ReSharper 7.1 RTM and 6.1.1, targeting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release drops 7.0 support and targets the latest revisions of the last two major versions of ReSharper. Copies of the plugin that support previous versions of ReSharper can be downloaded from this release. Also note that all builds work against ALL ...
OnTopReplica: Release 3.4: Update to the version 3 line with major fixes and improvements. Compatible with Windows 8. Now runs (and requires) .NET Framework v4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed-size controls, like video players). Improved window seeking when restoring cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...
DotSpatial: DotSpatial 1.4: This is a minor release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions. Extended -- includes debugging symbols and additional extensions. Tutorials are available. Just want to run the software? End-user (non-programmer) version available branded as MapWindow. Want to add your own feature? Develop a plugin, using the template, and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...
WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.5: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For a compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features: attachable behaviors, AwaitableUI extensions, controls, converters, debugging helpers, extension methods, imaging helpers, IO helpers, VisualTree helpers, samples. Recent changes: Docum...
AcDown?????: AcDown v4.3: (the project name and release notes are Chinese text garbled in this excerpt; the legible fragments mention download support for Acfun, Bilibili and YouTube among other sites, an AcPlay component, implementation in C# on .NET Framework 2.0, 32- and 64-bit Windows XP/Vista/7/8, and Linux via Mono)
New Projects
1119P1: So far, I haven't found any bug
coolow: a simple project
Database Tools: Windows application for managing SQL Server databases.
Editable WILEz Books: Sorry for my bad English. I'm Italian. With this project you can write a simple book with images; you can customize text font, color, background image etc., simply with an editable txt file.
EstimateTracker: Program to track estimate time using XAML, MVVM, WPF, Ninject IoC, NHibernate and Microsoft Prism
ExtJS based ASP.NET 2.0 Controls: About FineUI: ExtJS-based professional ASP.NET 2.0 controls. FineUI mission: create No JavaScript, No CSS, No UpdatePanel, No ViewState and No WebServices web ap
GCU: This project supports the Gedcom Utility which allows users to review many Gedcom files for certain information.
Heng.Elements: Entity Relationship Modeling
iRoboticsPrototype1: iRobotics Prototype 1. Under development
Jamaican kitchen: This is a website which will display various Jamaican foods, with dishes ranging from mild to spicy.
JarvisProject1: (the Russian description is garbled in this excerpt)
Java 2D Game Developing Setup: Set of classes as "interface" between game-powering code and creative game development in Java.
KZ.Express: Project is built to resolve bill management
lbpWGaeBlog: my blog on GAE
Logistic Management System: A logistics freight-forwarding project (the original Vietnamese description is partly garbled in this excerpt).
Managed3D: Managed3D is a scene graph API that allows developers to have both high-level and low-level access to objects in a 3-dimensional scene.
MicroTao: MicroTao is for the future.
MVC4 ASPX TourBooking: Website for booking tours
MyDay: Simple little spike, using a todo list. Spiking MVC 4, Twitter Bootstrap, code-first migrations in EF 5, and AppHarbor deployments.
mytestmusicstoremvc: my MVC study project
NDateTime: NDateTime is a JavaScript library that wraps the most common properties and methods of the .NET DateTime object.
onexin: This is a test.
PersiaCaptchaHandler: A Persian Captcha that uses numbers in letters
PhoneGap/Cordova Libs, PhoneGap Demos, PhoneGap Solutions, PhoneGap Practices: Here you can find PhoneGap/Cordova libs, demos, solutions, best practices, architectures, etc. Most important, all should be the best and free.
PhotOrganizer: Windows application to organize your pictures. Scans a folder for the number and size of pictures. Moves them to a destination by year-month and removes duplicate files.
PoNCE: PoNCE Engine helps create Point-and-Click quest games
PragTest: List of my projects
Proof of Concept Code: This is pretty much throw-away PoC code. I intend to have a folder for each PoC and the solution file for the PoC under the same folder.
Security Center: Security Center is a handy tool to secure your secret notes. A 512-bit AES algorithm with your private master password is used to protect your data.
SharePoint Metro Sliders: SharePoint 2010 feature that includes two Metro-style image slider web parts: an image slider with just one image, and an image slider with four smaller images in it.
SkilledRES_Portal: SkilledRES Portal consists of Organization Information & Activity Profiling of SkilledRES
Super BASE32: This awesome app lets you convert all your music, pictures and video to the brand new BASE32 encoding!
System.Data.Entity.Repository.Filters: System.Data.Entity.Repository.Filters
The Media Store: The Media Store
User Group: Maintain support data for user groups
Vivitap: Vivitap Samples and SDK support.
Web Scripting and Content Creation - DIY Wedding Cake - Assignment 2 - Prototype: Web Scripting and Content Creation - DIY Wedding Cake - Assignment 2
webass2: prototype project for the final project for WSCC.
WebTechCoursework2.HRSystem: Human Resource System for Web Technology Coursework
Windows Store Application Library: Windows Store Application Library provides a collection of UI controls and utilities for Windows 8 store application developers.
WriteMyName: Code to write the author's name at the beginning of a source code file.
XMPP Chat for Windows 8 Apps Store: XMPP sample for the Windows App Store
???-Windows8: (the Chinese project name and description are garbled in this excerpt; it appears to be a Windows 8 sample)

    Read the article

  • CodePlex Daily Summary for Wednesday, June 13, 2012

    CodePlex Daily Summary for Wednesday, June 13, 2012
Popular Releases
Public Key Infrastructure PowerShell module: PowerShell PKI Module v1.8: Installation guide: use the default installation path to install this module for the current user only. To install this module for all users, enable the "Install for all users" check-box in the installation UI. Note: if previous module installations are detected, they are removed during upgrade. Note: PowerShell 3.0 RC is now supported. Note: Windows Server 2012 is partially supported (output formatting is not working for me). Release notes for version 1.8.0: Version 1.8 introduces a set of new .NET clas...
Metodología General Ajustada - MGA: 02.07.02: Parmenio's changes: correction so that the objectives are generated in the PRO_F03 format; the script must be run against the database. John's changes: technical support by telephone and e-mail; integration of source code; the following forms were adjusted so that they do not duplicate attachments when Update is clicked twice: FORMATO ID 03 - ANALISIS DE PARTICIPANTES, FORMATO ID 05 - OBJETIVOS. Generation of installers: connected and disconnected.
BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4 (the original Japanese release note is garbled in this excerpt; the legible fragments concern Web server POST handling)
Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: Merging both 3.5 and 2.0 codebases into a single .NET 2.0 assembly. MSBuild task. NAnt task.
ExcelFileEditor: .CS File: nothing
BizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. In this new version small improvements are available compared to the current version (3.0). We can highlight the following improvements or changes: 24-hour support in the "start time" property - previous versions had an issue with setting the start time, as it showed a 12-hour clock but no AM/PM. Daily scheduler review: solved a small bug in Daily Properties (unable to switch between "Every day" and "on these days"). Installation e...
Weapsy - ASP.NET MVC CMS: 1.0.0 RC: Upgrade to Entity Framework 4.3.1. Added AutoMapper custom version (by nopCommerce Team). Added missed model properties and localization resources of Plugin Definitions. Minor changes. Fixed some bugs
QTP FT Uninstaller: QTP FT Uninstaller v3: KnowledgeInbox has made this tool open source. Converted to C#. Better scanning & cleaning mechanism
WebSocket4Net: WebSocket4Net 0.7: Changes included in this release: updated ClientEngine; added proper exception handling code; added state support for callback; added property AllowUnstrustedCertificate for JsonWebSocket; added properties for sending ping automatically; improved JsBridge; fixed a URI compatibility issue
Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings. Store language packs in the database. Improve reporting system. Improve Import/Export system. A lot of WebAdmin app UI improvements. Initial implementation of the WebForum app. DB indexes. Improve and simplify architecture. Fewer abstractions. Modernize architecture. Improve, simplify and unify API. Simplify and improve testing. A lot of new unit tests. Codebase refactoring and ReSharpering. Utilize Castle Windsor. Utilize NHibernate ORM ...
Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. Add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. The default, "new", puts them on their own new line; "same" outputs them at the end of the previous line. Add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma separators; value "force" (which...
Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional downloads: SMFv2.7 Full Installer (MSI) - this will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client; it only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - this is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to play back your content by simply editing a configuration file - no need to co...
Liberty: v3.2.1.0 Release 10th June 2012: Change log. Added: Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. Reach: Mass biped max health and shield changer. Fixed: H3/ODST: fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2). Reach: made some tag names clearer and more consistent between m...
AutoUpdaterdotNET: Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.0: Everything seems perfect; if you find any problem you can report it to http://www.rbsoft.org/contact.html
Media Companion: Media Companion 3.503b: It has been a while, so it's about time we released another build! Major effort has been for fixing trailer downloads, plus a little bit of work for the episode guide tag in TV show NFOs.
Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports, including Sales Reason Comparisons SQL2008R2.rdl, which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to the Sales Reason Comparisons report.
Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview. Fix - Fixed Metro build error caused by an anonymous type. Fix - Fixed ItemConverter not being used when serializing dictionaries. Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries. Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandling
LINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0: Combinatorics: combinations (unique), combinations (with repetition), permutations (unique), permutations (with repetition). Convert jagged arrays to fixed multidimensional arrays. Convert fixed multidimensional arrays to jagged arrays. ElementAtMax, ElementAtMin, ElementAtAverage. New set of array extensions (1.0.2.8): Rotate, Flip, Resize (maintaining data), Split, Fuse, Replace. Append and Prepend extensions (1.0.2.7). IndexOf extensions (1.0.2.7). Ne...
????????API for .Net SDK: SDK for .Net Release 1: (the Chinese portions of the project name and release notes are garbled in this excerpt; the legible fragments cover .NET 2.0 and 4.0 WinForm and Web variants, an EntityBase ToString() JSON change for entities, AccessToken and source=appkey request handling, a public_timeline fix, a ClientLogin/RefreshToken fix, and a RepostTimeline/Statuses null fix)
Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules. Open folder content feature. Some bug fixes
New Projects
4SQ - Foursquare for WP7: This is a public Foursquare App, designed for the COMMUNITY for WP7.
Alert Management Web Part: Business problem: in a project management site, when new project sites are created, the Project Manager creates alerts for all team members in multiple lists/libraries. Pain point: he needs to manually create alerts by going to each and every list/library >> Alert Me >> and then adding users to keep them informed. Yes, it's a pain to add users one by one in all of the lists and libraries. Solution: we started creating a custom Alert Management web part which will solve this problem. K...
An Un-sql SQL Database Installer: D.E.M.O.N SQL Database Installer is an Un-SQL (SQL means boring here) database server installer for many types of database: MySQL, SQLite and others.
Aqzgf: android for qzgf
baoming: (the Chinese description is garbled in this excerpt; it mentions DbEntry.Net 4.2)
Comdaybe 2012_knockoutjs examples: This is a Visual Studio 2012 project containing all the demos of #comdaybe 2012. More information on: http://www.communityday.be/
ETExplorer: This application is a replacement for the standard Windows Explorer. When I began working with Windows 7 I noticed that some of the features I liked from XP were missing. After trying many freeware replacements and not finding any of them having all the features I needed, I decided to develop my own explorer replacement.
ExamAnalysis: Exam Analysis Website (building...)
FizzBuzz: FizzBuzz
Improved Dnn Event Log Email Notification provider: The email notifications for log events in Dnn are poor. This project aims to improve them through a new logging provider. The initial work uses the existing logging provider and just overrides the SendLogNotifications method. Over time hopefully this will be improved to enhance the email sending capabilities of Dnn.
InfoMap: a framework for maps
iTextSharp MPL for Silverlight & Windows Phone: This is a port of iTextSharp (4.2), the last version published under the Mozilla Public License, for Silverlight and Windows Phone. You can use this version in a commercial application and it does not require your app to be released as open source. You do not need any kind of license from anyone to use this unless you want to change the publishing information (where it sets iText as the publisher)
ModifyMimeTypes: In Exchange 2007 and 2010, the Content-Type encoding for MIME attachments is determined by a list which is stored in the Exchange Organization container in Active Directory within the msExchMimeTypes property. The list can be easily viewed in the Exchange Management Shell using: (Get-OrganizationConfig).MimeTypes The list is order-specific, so if two entries exist for the same attachment type, the one earlier in the list will be the one that is used. One example of this is the encod...
Monitor Reporting Services with PowerPivot: PowerPivot workbook with a connection to the Reporting Services log. Several built-in reports are already created in the workbook. You will need to connect to the ExecutionLog2 view in your report server database. Typically the report server database is named ReportServer. The view to query is named ExecutionLog2. You may need to add a where clause to the query in the workbook to limit the data you want to see. I already have a where clause that limits it to only rendered reports. The Date...
QCats: QCats puts hearts into social relationships.
Rad sa bazom: This is just a test; I'll delete it soon.
REJS: An easy to use, easy to implement Razor preparser for CSS and JS with support for caching, external data, LESS and much more.
RexSharp: RexSharp is a relatively new web framework that can easily be integrated with ASP.NET. It is based largely on the Websharper framework, written in F# - but with significant additions and optimizations. Like Websharper, RexSharp aims to make web and mobile app development a lot more fun, with F#. This is a preliminary release - detailed instructions for using the framework will be made available in a short while.
ShareDev.Webparts: Coming soon...
SharePoint Common Framework: SharepointCommon is a framework for Microsoft SharePoint 2010© Server and Foundation. It allows mapping list items to simple hand-written POCO classes and performing actions by manipulating entities.
Solution Extender for Microsoft Dynamics CRM 2011: Solution Extender makes it easier for Dynamics CRM 2011 integrators to export/import components that can't be included in solutions. You will be able to export/import duplicate detection rules, saved views and more to come. It's developed in C#.
System Center 2012 - Orchestrator Integration Packs and Utilities: System Center 2012 - Orchestrator Integration Packs and Utilities
TalkBack: A small library to enable decoupled communication between different components.
Tomson Scattering for GDT: Tomson is software used on the Gas Dynamic Trap setup at the Institute of Nuclear Physics of the Russian Academy of Sciences for electron temperature calculation in plasma, based on Thomson scattering measurements.
tony first project: first project
Virtual Card Table: This is a classroom project for explaining TDD.
WellFound Yachts: WellFound Yachts

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the EF/MSSQL combination discussed here. Lately, I wondered how you would 'mock' the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with an MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post now presents the NDbUnit approach...

NDbUnit is an Apache 2.0-licensed open-source project and, like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (namely DbUnit). In short, it helps you flexibly manage the state of a database: it lets you easily perform basic operations (e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework.

Preparing the test data

Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs. The testing scenario described here therefore requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: for its interactions with the database, NDbUnit relies on a .NET DataSet xsd file. See Step 1 of their Quick Start Guide for a description of how to create one.
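(If clicking the schema together in Visual Studio's DataSet designer feels too manual, the xsd file could also be generated programmatically. The following one-off utility is only a sketch under some assumptions – the table names of the School sample database and the "School_Test" connection string from the fixture below are taken as given, and the DataSet namespace is chosen to match the xmlns of the data files shown later:

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

internal static class SchemaExporter
{
    // Hypothetical helper, not part of the original article: regenerate
    // School.xsd from the live test database whenever the schema changes.
    internal static void ExportSchema()
    {
        var connectionString =
            ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;

        // The namespace must match the xmlns used by the xml data files.
        var dataSet = new DataSet("School") { Namespace = "http://tempuri.org/School.xsd" };

        using (var connection = new SqlConnection(connectionString))
        {
            // Assumed table names of the School sample database.
            foreach (var table in new[] { "Person", "Department", "Course", "StudentGrade" })
            {
                var adapter = new SqlDataAdapter("SELECT * FROM [" + table + "]", connection);
                adapter.FillSchema(dataSet, SchemaType.Source, table);
            }
        }

        dataSet.WriteXmlSchema(@"..\..\TestData\School.xsd");
    }
}

Whether the designer or such a utility is used shouldn't matter to NDbUnit, as long as the resulting xsd accurately mirrors the tables you want to manage.)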
With this prerequisite in place, the test fixture's setup code could look something like this:

[TestFixture, TestsOn(typeof(PersonRepository))]
[Metadata("NDbUnit Quickstart URL",
          "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
[Description("Uses the NDbUnit library to provide test data to a local database.")]
public class PersonRepositoryFixture
{
    #region Constants

    private const string XmlSchema = @"..\..\TestData\School.xsd";

    #endregion // Constants

    #region Fields

    private SchoolEntities _schoolContext;
    private PersonRepository _personRepository;
    private INDbUnitTest _database;

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
        _database = new SqlDbUnitTest(connectionString);
        _database.ReadXmlSchema(XmlSchema);

        var entityConnectionStringBuilder = new EntityConnectionStringBuilder
        {
            Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
            Provider = "System.Data.SqlClient",
            ProviderConnectionString = connectionString
        };

        _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
        _personRepository = new PersonRepository(this._schoolContext);
    }

    [FixtureTearDown]
    public void FixtureTearDown()
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        _schoolContext.Dispose();
    }

    ...

As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: we use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this:

private void InsertTestData(params string[] dataFileNames)
{
    _database.PerformDbOperation(DbOperationFlag.DeleteAll);

    if (dataFileNames == null)
    {
        return;
    }

    try
    {
        foreach (string fileName in dataFileNames)
        {
            if (!File.Exists(fileName))
            {
                throw new FileNotFoundException(Path.GetFullPath(fileName));
            }

            _database.ReadXml(fileName);
            _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
        }
    }
    catch
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        throw;
    }
}

This lets us easily insert test data from xml files, in any number and in a controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) – a not so uncommon problem with smaller open-source projects.
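To make the ordering concern concrete: a test that needs several related tables hands the helper one file per table, listed so that referenced rows exist before the rows that point at them. A small sketch – the declarations of the file constants other than People are elided in the article, so the ones below are hypothetical analogues:

// Hypothetical file constants, analogous to the People constant shown below:
private const string Course       = @"..\..\TestData\School.Course.xml";
private const string Department   = @"..\..\TestData\School.Department.xml";
private const string StudentGrade = @"..\..\TestData\School.StudentGrade.xml";

// StudentGrade rows reference Person and Course rows, so the StudentGrade
// file must come last - otherwise the FK constraints would reject the insert:
InsertTestData(People, Course, Department, StudentGrade);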
Last but not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example:

<?xml version="1.0" encoding="utf-8" ?>
<School xmlns="http://tempuri.org/School.xsd">
  <Person>
    <PersonID>1</PersonID>
    <LastName>Abercrombie</LastName>
    <FirstName>Kim</FirstName>
    <HireDate>1995-03-11T00:00:00</HireDate>
  </Person>
  <Person>
    <PersonID>2</PersonID>
    <LastName>Barzdukas</LastName>
    <FirstName>Gytis</FirstName>
    <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
  </Person>
  <Person>
    ...

You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable.

Executing some basic tests

Here are some of the possible tests that can be written with the above preparations in place:

private const string People = @"..\..\TestData\School.People.xml";
...

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.List);

    Assert.Count(34, names);
    Assert.AreEqual("Abercrombie, Kim", names.First());
    Assert.AreEqual("Zheng, Roger", names.Last());
}

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
[DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.Normal);

    Assert.Count(34, names);
    Assert.AreEqual("Alexandra Walker", names.First());
    Assert.AreEqual("Yan Li", names.Last());
}

[Test, TestsOn("PersonRepository.AddPerson")]
public void AddPerson_CalledOnce_IncreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });

    Assert.AreEqual(count + 1, _personRepository.Count);
}

[Test, TestsOn("PersonRepository.RemovePerson")]
public void RemovePerson_CalledOnce_DecreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.RemovePerson(new Person { PersonID = 33 });

    Assert.AreEqual(count - 1, _personRepository.Count);
}

Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparatory work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases:

Ok, and what if things are becoming somewhat more complex?

Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case.
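As an aside, the "various tables in one single xml file" option mentioned above might look like the following sketch. The Course and StudentGrade element and column names are assumptions based on the School sample database, not material from the article; note how the Course row precedes the StudentGrade row that references it:

<?xml version="1.0" encoding="utf-8" ?>
<School xmlns="http://tempuri.org/School.xsd">
  <!-- Referenced ('parent') row first... -->
  <Course>
    <CourseID>4022</CourseID>
    <Title>Macroeconomics</Title>
    <Credits>3</Credits>
    <DepartmentID>4</DepartmentID>
  </Course>
  <!-- ...then the row that references it. -->
  <StudentGrade>
    <EnrollmentID>1</EnrollmentID>
    <CourseID>4022</CourseID>
    <StudentID>10</StudentID>
    <Grade>3.5</Grade>
  </StudentGrade>
</School>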
The following test, for example, deals with the case that there is some sort of invalid input from the caller:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
[Row(null, typeof(ArgumentNullException))]
[Row("", typeof(ArgumentException))]
[Row("NotExistingCourse", typeof(ArgumentException))]
public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
{
    var exception = Assert.Throws<RepositoryException>(() =>
                            _personRepository.GetCourseMembers(courseTitle));

    Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
}

Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (namely in the xml files, where you sometimes have to spend a lot of effort carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
{
    InsertTestData(People, Course, Department, StudentGrade);

    List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");

    Assert.Count(4, persons);
    Assert.ForAll(
        persons,
        @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
        "Person has none of the expected IDs.");
}

If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the xml files with the test data. And also note that you might have to provide additional data which is not even directly relevant to your test, but is required only to fulfill some integrity needs of the underlying database.

Conclusion

The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role and may or may not be an issue, depending on how many tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel.
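One way to take some of the sting out of that last point might be a guard test that fails fast whenever somebody has changed the database without regenerating the xsd. This is a hypothetical sketch, not something the article or NDbUnit provides out of the box; it reuses the fixture's XmlSchema constant and connection string, and it only checks table and column names, not types or constraints:

[Test]
public void XsdSchema_StillMatchesTheTestDatabase()
{
    // The schema NDbUnit works with...
    var fromXsd = new DataSet();
    fromXsd.ReadXmlSchema(XmlSchema);

    // ...and the schema of the actual test database, table by table.
    var fromDb = new DataSet();
    var connectionString =
        ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
    using (var connection = new SqlConnection(connectionString))
    {
        foreach (DataTable table in fromXsd.Tables)
        {
            var adapter = new SqlDataAdapter(
                "SELECT * FROM [" + table.TableName + "]", connection);
            adapter.FillSchema(fromDb, SchemaType.Source, table.TableName);
        }
    }

    // Every table and column from the xsd must still exist in the database.
    foreach (DataTable table in fromXsd.Tables)
    {
        DataTable dbTable = fromDb.Tables[table.TableName];
        Assert.IsNotNull(dbTable, "Table missing in database: " + table.TableName);

        foreach (DataColumn column in table.Columns)
        {
            Assert.IsTrue(
                dbTable.Columns.Contains(column.ColumnName),
                "Column missing in database: " + table.TableName + "." + column.ColumnName);
        }
    }
}

A renamed or dropped column then surfaces as one readable assertion failure, rather than as an obscure error somewhere in the middle of a data-driven test.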
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or user-acceptance tests. But in any case, opening Entity Framework-based applications for testing requires a fair amount of resources, planning, and preparatory work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment...

The sample solution

A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.

    Read the article

  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business-as-usual stuff, and because I've been trying to understand the role by doing it. I think now, though, the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us).

I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul-destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.)

On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously).

And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic:

- Smart
- Gets things done (thanks for these two Joel)
- Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one)

*Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again.

We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is:

- Excellent coder

For test engineers, the non-exhaustive meaning of "gets things done" is:

- Good at finding problems in software
- Competent coder

Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with.

My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom.
Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now:

1. (LOTS of) People apply for jobs.
2. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role, and the time of year**.
3. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers.
4. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the look out for cultural fit red flags as well.
5. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer).
6. We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team.
7. Anyone who's made it this far will receive an offer from us.

*We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles.

**For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between.

***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code they're not going to work out.

Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well.
This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process, which looked like this:

[diagram of the proposed automated assessment-marking process]

The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week, whilst I'd been trying to find time to do this. Here's what I mean. Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the down tools week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since this would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.)

No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN!

OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see:

- How many people apply? (Wouldn't be surprised or alarmed to see this cut by a factor of ten.)
- How many of them submit a good assessment? (More/less than at present?)
- How much overhead is there for us in dealing with these assessments compared to now?
- What are the success and failure rates at each interview stage compared to now?
- How many people are we hiring at the end of it compared to now?
I think it'll work because I hypothesize that, amongst other things:

- It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment?
- Candidates who would submit a shoddy application probably won't feel motivated to do the assessment.
- Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment.
- In general, only the better candidates will complete and submit the assessment.
- Marking assessments is much less work, so we'll be able to deal with any increase that we see (hopefully we will see one).

There are obviously other questions as well:

- Is plagiarism going to be a problem?
- Is there any way we can detect/discourage potential plagiarism?
- How do we assess candidates' education and experience?
- What about their ability to communicate in writing?
- Do we still want them to submit a CV afterwards if they pass the assessment?
- Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment?
- How does this affect our relationship with recruitment agencies we might use to hire for these roles?

So, what's the objective for next week's Down Tools Week? Pretty simple really - we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes!

*I'm going to play dirty by offering them beer and chocolate during meetings.

Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely

The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvas for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site:

[screenshot of the Yammer discussion]

I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?"

Technorati Tags: recruitment, hr, developers, testers, red gate, cv, resume, cover letter, assessment, sea change

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
    Selling Federal Enterprise Architecture A taxonomy of subject areas, from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available. "Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment Control (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard-to-decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article. Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov) – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC. Newer guidance from OMB may be especially difficult to handle, where bottom-up input can't be accurately aligned, analyzed and reported via a standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio"). Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor." It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a.
the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference ("within Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progress, i.e. in Stages 0 or 1. So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within". An EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape. Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet - as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the Taxonomy. Enterprise Architecture Sales Taxonomy Draft, Summary Version 1. Community 1.1. Budgeted Programs or Portfolios Communities of Purpose (CoPR) 1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans 1.1.2. Program/System Owners Facing Strategic Change 1.1.2.1. Mandated 1.1.2.2. Expected/Anticipated 1.1.3. Program Managers - Creating Employee Performance Plans 1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP) 1.2. Governance & Communications Communities of Practice (CoP) 1.2.1. Policy Owners 1.2.1.1. OCFO 1.2.1.1.1. Budget/Procurement Office 1.2.1.1.2. Strategic Planning 1.2.1.2. OCIO 1.2.1.2.1. IT Management 1.2.1.2.2. IT Operations 1.2.1.2.3. Information Assurance (Cyber Security) 1.2.1.2.4. IT Innovation 1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements) 1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council") 1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop) 1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended 1.2.2.3. 
External Oversight/Constraints 1.2.2.3.1. GAO/OIG & Legal 1.2.2.3.2. Industry Standards 1.2.2.3.3. Official public notification, response 1.2.3. Mission Constituents Participant & Analyst Community of Interest (CoI) 1.2.3.1. Mission Operators/Users 1.2.3.2. Public Constituents 1.2.3.3. Industry Advisory Groups, Stakeholders 1.2.3.4. Media 2. Benefit/Value (Note the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.) 2.1. Program Costs – EA enables sound decisions regarding... 2.1.1. Cost Avoidance – a TCO theme 2.1.2. Sequencing – alignment of capability delivery 2.1.3. Budget Instability – a Federal reality 2.2. Investment Capital – EA illuminates new investment resources via... 2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral 2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication 2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage 2.3. Contextual Knowledge – EA enables informed decisions by revealing... 2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context 2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT 2.3.3. Influence – the impact of politics and relationships can be examined 2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways 2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures 2.4. Mission Performance – EA enables beneficial decision results regarding... 2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization 2.4.2. IT Stability – towards 100%, real-time uptime 2.4.3. Agility – responding to rapid changes in mission 2.4.4. Outcomes –measures of mission success, KPIs – vs. only "Outputs" 2.4.5. Constraints – appropriate response to constraints 2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome 2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of... 2.5.1. Compliance – all the right boxes are checked 2.5.2. Dependencies –cross-agency, segment, government 2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively 2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles 2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues 2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof" 2.5.5.2. Anticipated – discovering the level of impact that matters 3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?) 3.1. Architecture Models – the visual tools to be created and used 3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels 3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models, with existing system models? 3.1.3. 
Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful? 3.2. Traceability – the maturity, status, completeness of the tools 3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it? 3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome? 3.3. Governance – what's the interaction, participation method; how are the tools used? 3.3.1. Contributions – how does the EA program stay informed, accept submissions, and collect data? Who are the experts? 3.3.2. Review – how is the EA validated, and against what criteria? Taxonomy Usage Example: 1. To speak with: a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about... b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can... c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise... 2. ...the following Marketing & Communications (Sales) Plan can be constructed: a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To-Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems. --end example-- Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, and in internal and external search engines, for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and is therefore more transparently useful across the community.
In short, if sold effectively, the EA will perform and be recognized. EA therefore won't be used only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes. The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features built on top of SimpleMembership, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of the top things you need to know, then dig into a lot more detail. Summary: SimpleMembership has been designed as a replacement for the previous ASP.NET Role and Membership provider system SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController The existing ASP.NET Role and Membership provider system remains supported, as it is part of the ASP.NET core ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools. SimpleMembership: The future of membership for ASP.NET The ASP.NET Membership system was introduced with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you to override some specifics like backing storage) and the ability to store additional profile information (although the additional profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work). The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages: With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really been focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built-in security in ASP.NET. So with this release we took that time to create a new built-in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created?
Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles. Part of simplifying membership involved fixing some common problems with ASP.NET Membership. Problems with ASP.NET Membership ASP.NET Membership was very obviously designed around a set of assumptions: Users and user information would most likely be stored in a full SQL Server database or in Active Directory User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider Some problems fall out of these assumptions. Requires Full SQL Server for default cases The default, and most fully featured, ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, rely on SQL Server cache dependencies, and depend on agents for cleanup and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc. Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem. Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those... Custom Membership Providers have to work with a SQL-Server-centric API If you want to work with another database or other membership storage system, you need to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun, since there are a lot of methods that just don't apply. Designed around a specific view of users, roles and profiles The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns: In OAuth and OpenID, the user doesn't have a password Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted, and the membership system worked with your model - not the other way around. Requires specific schema, overflow in blob columns I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema. SimpleMembership as a better membership system As you might have guessed, SimpleMembership was designed to address the above problems.
Works with your Schema As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema: All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an "ID" column and a "username" column. The important part here is that they can be named whatever you want. For instance, username doesn't have to be an alias - it could be an email column; you just have to tell SimpleMembership to treat that as the "username" used to log in. Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, then a bunch of other columns he wanted in his app. Then we point SimpleMembership at that table with a one-liner: WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true); No other tables are needed, the table can be named anything we want, and it can have pretty much any schema we want as long as we've got an ID and something that we can map to a username. Broaden database support to the whole SQL Server family While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there are places where SQL Server-specific SQL statements are executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point. Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run on Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts? Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system. Easy to use with Entity Framework Code First The problem with ASP.NET Membership's system for storing additional account information is that it's the gatekeeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical example based on the AccountModels.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class. [Table("UserProfile")] public class UserProfile { [Key] [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)] public int UserId { get; set; } public string UserName { get; set; } public DateTime Birthday { get; set; } } Now if I want to access that information, I can just grab the account by username and read the value.
var context = new UsersContext(); var username = User.Identity.Name; var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username); var birthday = user.Birthday; So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define, rather than a bunch of entries in membership tables that were out of your control. How SimpleMembership integrates with ASP.NET Membership Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships): So SimpleMembershipProvider is an implementation of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider: The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates. Membership in the ASP.NET MVC 4 project templates ASP.NET MVC 4 offers six project templates: Empty - Really empty: just the assemblies, folder structure and a tiny bit of basic configuration. Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling). Internet - This has both a Home and Account controller and associated views. The Account Controller supports registration and login via either local accounts or OAuth / OpenID providers. Intranet - Like the Internet template, but it's preconfigured for Windows Authentication. Mobile - This is preconfigured using jQuery Mobile and is intended for mobile-only sites. Web API - This is preconfigured for a service backend built on ASP.NET Web API. Out of these templates, only one (the Internet template) uses SimpleMembership. ASP.NET MVC 4 Basic template The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers.
You can see that configuration in the ASP.NET MVC 4 Basic template's web.config: <profile defaultProvider="DefaultProfileProvider"> <providers> <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" /> </providers> </profile> <membership defaultProvider="DefaultMembershipProvider"> <providers> <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" /> </providers> </membership> <roleManager defaultProvider="DefaultRoleProvider"> <providers> <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" /> </providers> </roleManager> <sessionState mode="InProc" customProvider="DefaultSessionProvider"> <providers> <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" /> </providers> </sessionState> This means that it's business as usual for the Basic template as far as ASP.NET Membership works. ASP.NET MVC 4 Internet template The Internet template has a few things set up to bootstrap SimpleMembership: \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection, which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime) \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity. WebSecurity provides account management services for ASP.NET MVC (and Web Pages). WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider) but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider. Practical example: Create a new ASP.NET MVC 4 application using the Internet application template Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package Run the application, click on Register, add a username and password, and click submit. You'll get the following exception in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider". This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above.
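To make that failure mode concrete, here is a minimal sketch of the kind of guard that produces the error - a reconstruction based on the exception text, not the actual WebMatrix.WebData source (the ProviderGuard class and VerifyProvider method names are hypothetical):

using System;
using System.Web.Security;
using WebMatrix.WebData;

internal static class ProviderGuard
{
    // WebSecurity's account methods only work when the configured default
    // membership provider derives from ExtendedMembershipProvider.
    internal static ExtendedMembershipProvider VerifyProvider()
    {
        var extended = Membership.Provider as ExtendedMembershipProvider;
        if (extended == null)
        {
            // Universal Providers register a plain MembershipProvider,
            // so this cast fails and the exception above is thrown.
            throw new InvalidOperationException(
                "To call this method, the \"Membership.Provider\" property " +
                "must be an instance of \"ExtendedMembershipProvider\".");
        }
        return extended;
    }
}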
When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks if it can be cast to an ExtendedMembershipProvider before doing anything else. So, what do you do? Options: If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward. If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things: Replace the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with ones from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuth packages, etc.) Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModels classes. Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes. None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET application but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably, the team) know if that's an incorrect assumption. Membership in the ASP.NET 4.5 project template ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses the Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a Membership Adapter to save OAuth data into an EF-managed database while still running on top of ASP.NET Membership. Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up. How does this fit in with Universal Providers (System.Web.Providers)? Just to summarize: Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with another database backend in the SQL Server family (other than full SQL Server). It doesn't require agents to handle expired session cleanup and other background tasks; it piggybacks these tasks on other calls. Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family. Universal Providers do not work with SimpleMembership. The Universal Providers packages include some web.config transforms which you would normally want when you're using them. What about the Web Site Administration Tool? Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with SimpleMembership. There are two main options there: use the WebSecurity and OAuthWebSecurity APIs to manage the users and roles, or create a web admin tool using those APIs. Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even direct database edits (in development, of course).
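To round out that last point, here is a minimal sketch - assuming the Internet template's defaults, with WebSecurity.InitializeDatabaseConnection already called at startup and SimpleRoleProvider enabled in web.config - of managing users and roles in code instead of through WSAT (the AccountAdmin class name and the credentials are hypothetical):

using System.Web.Security;
using WebMatrix.WebData;

public static class AccountAdmin
{
    public static void EnsureAdminAccount()
    {
        // Creates a row in your user table plus the matching
        // webpages_Membership entry that SimpleMembership maintains.
        if (!WebSecurity.UserExists("admin"))
        {
            WebSecurity.CreateUserAndAccount("admin", "Pass@word1");
        }

        // Roles still go through the standard Roles API, which is
        // backed here by SimpleRoleProvider.
        if (!Roles.RoleExists("Administrators"))
        {
            Roles.CreateRole("Administrators");
        }
        if (!Roles.IsUserInRole("admin", "Administrators"))
        {
            Roles.AddUserToRole("admin", "Administrators");
        }
    }
}

Profile data on the same row (like the Birthday property from the earlier example) stays plain Entity Framework data, so ordinary LINQ queries and SaveChanges calls are all you need to read or update it.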

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example share a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x, CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) ); CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x, CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY); WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done.
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME(); SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID; SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads: With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good: A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than a parallel one in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50); -- Collocated hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE); -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME(); CREATE TABLE #P ( partition_number integer PRIMARY KEY); INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1; SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; DROP TABLE #P; SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading
Understanding and Using Parallelism in SQL Server
Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi
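As a footnote, here is one way the dynamic SQL mentioned in the final section might be written. This is a sketch only, not from the original post: the string-building approach and variable names are my own, and QUERYTRACEON 8649 is an undocumented trace flag, so treat this as a test-system technique:

-- Build the VALUES list from sys.partitions rather than hard-coding 41 partition ids.
DECLARE @values nvarchar(max), @sql nvarchar(max);

SELECT @values = STUFF(
(
    SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
    AND p.index_id = 1
    ORDER BY p.partition_number
    FOR XML PATH(N'')
), 1, 1, N''); -- STUFF strips the leading comma

SET @sql = N'
SELECT row_count = SUM(Subtotals.cnt)
FROM (VALUES ' + @values + N') AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2 ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = P.partition_number
    AND $PARTITION.PFT(T2.TID) = P.partition_number
) AS Subtotals
OPTION (QUERYTRACEON 8649);';

EXECUTE sys.sp_executesql @sql;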


  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
Quick guide to Oracle IRM 11g index

This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

Contents
Why this is the most important part...
Understanding the classification and standard rights model
Identifying business use cases
Creating an effective IRM classification model
One single classification across the entire business
A context for each and every possible granular use case
What makes a good context?
Deciding on the use of roles in the context
Reviewing the features and security for context roles
Summary

Why this is the most important part...

Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought about how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:

Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments.
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but that Oracle and their partners have the most experience in delivering successful document security solutions.

Understanding the classification and standard rights model

The goal of any successful IRM deployment is to increase security without over-complicating the way people use secured content, and without significantly increasing administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently, and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application, or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs.

In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:

A group of related documents
The people that use the documents
The roles that these people perform
The rights that these people need to perform their role

The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places into the document information about the location of the IRM server that sealed it, the context applied to the document, and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all of those documents. If you have followed all the previous articles in this guide, you will be ready to start defining the contexts against which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

Identifying business use cases

Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed, and how the recipients should interact with them.
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article). Then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked, and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

Creating an effective IRM classification model

Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

One single classification across the entire business

Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:

Document security classification decisions are simple. You only have one context to choose from!
User provisioning is simple: just make sure everyone has a role in the only context in the business.
Administration is very low: if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

There are, however, some obvious downsides to this model:

All users have access to all IRM-secured content. So potentially a sales person could access sensitive mergers and acquisition documents – if they can get their hands on a copy, that is.
You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
Changing a user's role affects every single document ever secured.

Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

A context for each and every possible granular use case

Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:

Q1FY2010 Restricted Internal - Engineering Group 1 - Research
Q1FY2010 Restricted Internal - Engineering Group 1 - Design
Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Restricted External - Engineering Group 1 - Research
Q1FY2010 Restricted External - Engineering Group 1 - Design
Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential Internal - Engineering Group 1 - Research
Q1FY2010 Confidential Internal - Engineering Group 1 - Design
Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential External - Engineering Group 1 - Research
Q1FY2010 Confidential External - Engineering Group 1 - Design
Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret External - Engineering Group 1 - Research
Q1FY2010 Top Secret External - Engineering Group 1 - Design
Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

That is 18 contexts for just one engineering group for one quarter; now multiply by the 6 engineering groups. Even for one group you are creating/reviewing another 18 every 3 months, so after a year you've got 72 contexts. What would be the advantages of such a complex classification model?

You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
Your business may have very complex rights requirements, and mapping this directly to IRM may be an obvious exercise.

The disadvantages of such a classification model are significant:

Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable, and too few contexts do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

What is the topic?
Who are the legitimate contributors on this topic?
Who are the authorized readership?

If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore a document is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it, and then take what you have learned into the next use case; refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

Deciding on the use of roles in the context

Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
For each context, identify:

Administrative roles

Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document related roles

Contributors: the people who create and edit documents in this context.
Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles

At this point we have decided on a classification of information, and understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent authorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights:

Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
Print activity is audited, therefore you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires that all information in the group is secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity needed to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to ask to get your IRM service deployed and successfully protecting your most sensitive information.


  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
PARTNER FOCUS: Oracle Exadata Marketing Campaign

Steve McNickle, VP Europe, cVidya

Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telecom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade.

RESOURCES
Oracle PartnerNetwork (OPN)
Oracle Exastack Program
Oracle Exastack Optimized
Oracle Exastack Labs and Enablement Resources
Oracle Engineered Systems
Oracle Communications
cVidya

Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations?

"cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution."

What unique capabilities and customer benefits does Oracle Exastack add to your applications?

"Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads."

"The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention."

How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success."

How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need?

"We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready or Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme."

How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?

"One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build in to our products to help them fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented in an on-premise environment.
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure – the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry."

What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?

"My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities – for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle."

What opportunities are immediately opened to new ISV partners joining the OPN?

"As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation."

Finally, are there any other messages that you would like to share with the Oracle ISV community?

"The crucial message that I always like to reinforce is architecture, architecture and architecture!
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous – just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them."

Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel.

"The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands."

"Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think, from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves."

"Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."


  • MVC 2 Entity Framework View Model Insert

    - by cannibalcorpse
This is driving me crazy. Hopefully my question makes sense... I'm using MVC 2 and Entity Framework 1 and am trying to insert a new record with two navigation properties. I have a SQL table, Categories, that has a lookup table CategoryTypes and another self-referencing lookup, CategoryParent. EF makes two nav properties on my Category model, one called Parent and another called CategoryType, both instances of their respective models. On my view that creates the new Category, I have two dropdowns, one for the CategoryType and another for the ParentCategory. When I try to insert the new Category WITHOUT the ParentCategory, which allows nulls, everything is fine. As soon as I add the ParentCategory, the insert fails and, oddly (or so I think), complains about the CategoryType in the form of this error:

0 related 'CategoryTypes' were found. 1 'CategoryTypes' is expected.

When I step through, I can verify that both ID properties coming in on the action method parameter are correct. I can also verify that when I go to the db to get the CategoryType and ParentCategory with the IDs, the records are being pulled fine. Yet it fails on SaveChanges(). All that I can see is that my CategoryParent DropDownListFor in my view is somehow causing the insert to bomb. Please see my comments in my HttpPost Create action method.

My view model looks like this:

public class EditModel
{
    public Category MainCategory { get; set; }
    public IEnumerable<CategoryType> CategoryTypesList { get; set; }
    public IEnumerable<Category> ParentCategoriesList { get; set; }
}

My Create action methods look like this:

// GET: /Categories/Create
public ActionResult Create()
{
    return View(new EditModel()
    {
        CategoryTypesList = _db.CategoryTypeSet.ToList(),
        ParentCategoriesList = _db.CategorySet.ToList()
    });
}

// POST: /Categories/Create
[HttpPost]
public ActionResult Create(Category mainCategory)
{
    if (!ModelState.IsValid)
        return View(new EditModel()
        {
            MainCategory = mainCategory,
            CategoryTypesList = _db.CategoryTypeSet.ToList(),
            ParentCategoriesList = _db.CategorySet.ToList()
        });

    mainCategory.CategoryType = _db.CategoryTypeSet.First(ct => ct.Id == mainCategory.CategoryType.Id);

    // This db call DOES get the correct Category, but fails on _db.SaveChanges().
    // Oddly the error is related to CategoryTypes and not Category.
    // Entities in 'DbEntities.CategorySet' participate in the 'FK_Categories_CategoryTypes' relationship.
    // 0 related 'CategoryTypes' were found. 1 'CategoryTypes' is expected.
    //mainCategory.Parent = _db.CategorySet.First(c => c.Id == mainCategory.Parent.Id);

    // If I just use the literal ID of the same Category,
    // AND comment out the CategoryParent dropdownlistfor in the view, all is fine.
    mainCategory.Parent = _db.CategorySet.First(c => c.Id == 2);

    _db.AddToCategorySet(mainCategory);
    _db.SaveChanges();

    return RedirectToAction("Index");
}

Here is my Create form on the view:

<% using (Html.BeginForm()) {%>
<%= Html.ValidationSummary(true) %>
<fieldset>
    <legend>Fields</legend>
    <div>
        <%= Html.LabelFor(model => model.MainCategory.Parent.Id) %>
        <%= Html.DropDownListFor(model => model.MainCategory.Parent.Id, new SelectList(Model.ParentCategoriesList, "Id", "Name")) %>
        <%= Html.ValidationMessageFor(model => model.MainCategory.Parent.Id) %>
    </div>
    <div>
        <%= Html.LabelFor(model => model.MainCategory.CategoryType.Id) %>
        <%= Html.DropDownListFor(model => model.MainCategory.CategoryType.Id, new SelectList(Model.CategoryTypesList, "Id", "Name"))%>
        <%= Html.ValidationMessageFor(model => model.MainCategory.CategoryType.Id)%>
    </div>
    <div>
        <%= Html.LabelFor(model => model.MainCategory.Name) %>
        <%= Html.TextBoxFor(model => model.MainCategory.Name)%>
        <%= Html.ValidationMessageFor(model => model.MainCategory.Name)%>
    </div>
    <div>
        <%= Html.LabelFor(model => model.MainCategory.Description)%>
        <%= Html.TextAreaFor(model => model.MainCategory.Description)%>
        <%= Html.ValidationMessageFor(model => model.MainCategory.Description)%>
    </div>
    <div>
        <%= Html.LabelFor(model => model.MainCategory.SeoName)%>
        <%= Html.TextBoxFor(model => model.MainCategory.SeoName, new { @class = "large" })%>
        <%= Html.ValidationMessageFor(model => model.MainCategory.SeoName)%>
    </div>
    <div>
        <%= Html.LabelFor(model => model.MainCategory.HasHomepage)%>
        <%= Html.CheckBoxFor(model => model.MainCategory.HasHomepage)%>
        <%= Html.ValidationMessageFor(model => model.MainCategory.HasHomepage)%>
    </div>
    <p><input type="submit" value="Create" /></p>
</fieldset>
<% } %>

Maybe I've just been staying up too late playing with MVC 2? :) Please let me know if I'm not being clear enough.


  • Could not synchronize database state with session

    - by user359427
Hello all, I'm having trouble trying to persist an entity whose ID is a generated value. This entity (A), at persistence time, has to cascade-persist another entity (B). The relationship between A and B is OneToMany, and the property related in B is part of a composite key. I'm using Eclipse, JBoss Runtime, and JPA/Hibernate. Here is my code:

Entity A:

@Entity
public class Cambios implements Serializable {
    private static final long serialVersionUID = 1L;

    @SequenceGenerator(name="CAMBIOS_GEN", sequenceName="CAMBIOS_SEQ", allocationSize=1)
    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="CAMBIOS_GEN")
    @Column(name="ID_CAMBIO")
    private Long idCambio;

    //bi-directional many-to-one association to ObjetosCambio
    @OneToMany(cascade={CascadeType.PERSIST}, mappedBy="cambios")
    private List<ObjetosCambio> objetosCambioList;

    public Cambios() {
    }
    ...
}

Entity B:

@Entity
@Table(name="OBJETOS_CAMBIO")
public class ObjetosCambio implements Serializable {
    private static final long serialVersionUID = 1L;

    @EmbeddedId
    private ObjetosCambioPK id;

    //bi-directional many-to-one association to Cambios
    @ManyToOne
    @JoinColumn(name="ID_CAMBIO", insertable=false, updatable=false)
    private Cambios cambios;

    //bi-directional many-to-one association to Objetos
    @ManyToOne
    @JoinColumn(name="ID_OBJETO", insertable=false, updatable=false)
    private Objetos objetos;

    public ObjetosCambio() {
    }
    ...

Entity B PK:

@Embeddable
public class ObjetosCambioPK implements Serializable {
    //default serial version id, required for serializable classes.
    private static final long serialVersionUID = 1L;

    @Column(name="ID_OBJETO")
    private Long idObjeto;

    @Column(name="ID_CAMBIO")
    private Long idCambio;

    public ObjetosCambioPK() {
    }

Client:

public String generarCambio() {
    ServiceLocator serviceLocator = null;
    try {
        serviceLocator = serviceLocator.getInstance();
        FachadaLocal tcLocal;
        tcLocal = (FachadaLocal) serviceLocator.getFacadeService("java:comp/env/negocio/Fachada");

        Cambios cambio = new Cambios();
        Iterator it = objetosLocal.iterator(); // OBJETOSLOCAL IS ALREADY POPULATED OUTSIDE OF THIS METHOD
        List<ObjetosCambio> ocList = new ArrayList();
        while (it.hasNext()) {
            Objetos objeto = (Objetos) it.next();
            ObjetosCambio objetosCambio = new ObjetosCambio();
            objetosCambio.setCambios(cambio); // AT THIS TIME THIS "CAMBIO" DOES NOT HAVE ITS ID; IT'S SUPPOSED TO BE GENERATED AT PERSISTENCE TIME
            ObjetosCambioPK ocPK = new ObjetosCambioPK();
            ocPK.setIdObjeto(objeto.getIdObjeto());
            objetosCambio.setId(ocPK);
            ocList.add(objetosCambio);
        }
        cambio.setObjetosCambioList(ocList);
        tcLocal.persistEntity(cambio);
        return "exito";
    } catch (NamingException e) {
        // TODO
        e.printStackTrace();
    }
    return null;
}

ERROR (ORA-01400 is Oracle's "cannot insert NULL into" error, reported here in Spanish):

15:23:25,717 WARN [JDBCExceptionReporter] SQL Error: 1400, SQLState: 23000
15:23:25,717 ERROR [JDBCExceptionReporter] ORA-01400: no se puede realizar una inserción NULL en ("CDC"."OBJETOS_CAMBIO"."ID_CAMBIO")
15:23:25,717 WARN [JDBCExceptionReporter] SQL Error: 1400, SQLState: 23000
15:23:25,717 ERROR [JDBCExceptionReporter] ORA-01400: no se puede realizar una inserción NULL en ("CDC"."OBJETOS_CAMBIO"."ID_CAMBIO")
15:23:25,717 ERROR [AbstractFlushingEventListener] Could not synchronize database state with session org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at
org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365) at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:504) at com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.beforeCompletion(SynchronizationImple.java:101) at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.beforeCompletion(TwoPhaseCoordinator.java:269) at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:89) at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:177) at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1423) at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:137) at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75) at org.jboss.aspects.tx.TxPolicy.endTransaction(TxPolicy.java:170) at org.jboss.aspects.tx.TxPolicy.invokeInOurTx(TxPolicy.java:87) at org.jboss.aspects.tx.TxInterceptor$Required.invoke(TxInterceptor.java:190) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.aspects.tx.TxPropagationInterceptor.invoke(TxPropagationInterceptor.java:76) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.tx.NullInterceptor.invoke(NullInterceptor.java:42) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.security.Ejb3AuthenticationInterceptorv2.invoke(Ejb3AuthenticationInterceptorv2.java:186) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.ENCPropagationInterceptor.invoke(ENCPropagationInterceptor.java:41) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.BlockContainerShutdownInterceptor.invoke(BlockContainerShutdownInterceptor.java:67) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.aspects.currentinvocation.CurrentInvocationInterceptor.invoke(CurrentInvocationInterceptor.java:67) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.session.SessionSpecContainer.invoke(SessionSpecContainer.java:176) at org.jboss.ejb3.session.SessionSpecContainer.invoke(SessionSpecContainer.java:216) at org.jboss.ejb3.proxy.impl.handler.session.SessionProxyInvocationHandlerBase.invoke(SessionProxyInvocationHandlerBase.java:207) at org.jboss.ejb3.proxy.impl.handler.session.SessionProxyInvocationHandlerBase.invoke(SessionProxyInvocationHandlerBase.java:164) at $Proxy298.persistEntity(Unknown Source) at backing.SolicitudCambio.generarCambio(SolicitudCambio.java:521) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:146) at 
com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:92) at javax.faces.component.UICommand.broadcast(UICommand.java:332) at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:287) at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:401) at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:95) at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:245) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:110) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:213) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.myfaces.webapp.filter.ExtensionsFilter.doFilter(ExtensionsFilter.java:301) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Unknown Source) Caused by: java.sql.BatchUpdateException: ORA-01400: no se puede realizar una inserción NULL en ("CDC"."OBJETOS_CAMBIO"."ID_CAMBIO") Thanks in advance! JM.-

    Read the article

  • Compass - Lucene Full text search. Structure and Best Practice.

    - by Rob
    Hi, I have played about with the tutorial and Compass itself for a bit now. I have just started to ramp up my use of it and have found that performance slows drastically. I am certain that this is due to my mappings and the relationships that I have between entities, and I am looking for suggestions about how this should best be done. Also, as a side question: if a variable is in an @SearchableComponent but is not itself defined as @Searchable, will you be able to access that variable when the component object is pulled out of the Compass results? I have 3 main classes that I want to search on - Provider, Location and Activity. They are all inter-related: a Provider can have many locations and activities and has an address; a Location has 1 provider, many activities and an address; an Activity has 1 provider and many locations. I have a join table between Activity and Location called ActivityLocation that can later provide additional information about the relationship. My classes are mapped to Compass as shown below for provider, location, activity and address. This works but gives a huge index, and searches on it are comparatively slow; any advice would be great. Cheers, Rob

        @Searchable
        public class AbstractActivity extends LightEntity implements Serializable {

            /** Serialisation ID */
            private static final long serialVersionUID = 3445339493203407152L;

            @SearchableId(name = "actID")
            private Integer activityId = -1;

            @SearchableComponent()
            private Provider provider;

            @SearchableComponent(prefix = "activity")
            private Category category;

            private String status;

            @SearchableProperty(name = "activityName") @SearchableMetaData(name = "activityshortactivityName")
            private String activityName;

            @SearchableProperty(name = "shortDescription") @SearchableMetaData(name = "activityshortDescription")
            private String shortDescription;

            @SearchableProperty(name = "abRating") @SearchableMetaData(name = "activityabRating")
            private Integer abRating;

            private String contactName;
            private String phoneNumber;
            private String faxNumber;
            private String email;
            private String url;

            @SearchableProperty(name = "completed") @SearchableMetaData(name = "activitycompleted")
            private Boolean completed = false;

            @SearchableProperty(name = "isprivate") @SearchableMetaData(name = "activityisprivate")
            private Boolean isprivate = false;

            private Boolean subs = false;
            private Boolean newsfeed = true;
            private Set news = new HashSet(0);
            private Set ActivitySession = new HashSet(0);
            private Set payments = new HashSet(0);
            private Set userclub = new HashSet(0);
            private Set ActivityOpeningTimes = new HashSet(0);
            private Set Events = new HashSet(0);
            private User creator;
            private Set userInterested = new HashSet(0);
            boolean freeEdit = false;
            private Integer activityType = 0;

            @SearchableComponent(maxDepth = 2)
            private Set activityLocations = new HashSet(0);

            private Double userRating = -1.00;

            // Getters and setters ...
        }
        @Searchable
        public class AbstractLocation extends LightEntity implements Serializable {

            /** Serialisation ID */
            private static final long serialVersionUID = 3445339493203407152L;

            @SearchableId(name = "locationID")
            private Integer locationId;

            @SearchableComponent(prefix = "location")
            private Category category;

            @SearchableComponent(maxDepth = 1)
            private Provider provider;

            @SearchableProperty(name = "status") @SearchableMetaData(name = "locationstatus")
            private String status;

            @SearchableProperty
            private String locationName;

            @SearchableProperty(name = "shortDescription") @SearchableMetaData(name = "locationshortDescription")
            private String shortDescription;

            @SearchableProperty(name = "abRating") @SearchableMetaData(name = "locationabRating")
            private Integer abRating;

            private Integer boolUseProviderDetails;

            @SearchableProperty(name = "contactName") @SearchableMetaData(name = "locationcontactName")
            private String contactName;

            @SearchableComponent
            private Address address;

            @SearchableProperty(name = "phoneNumber") @SearchableMetaData(name = "locationphoneNumber")
            private String phoneNumber;

            @SearchableProperty(name = "faxNumber") @SearchableMetaData(name = "locationfaxNumber")
            private String faxNumber;

            @SearchableProperty(name = "email") @SearchableMetaData(name = "locationemail")
            private String email;

            @SearchableProperty(name = "url") @SearchableMetaData(name = "locationurl")
            private String url;

            @SearchableProperty(name = "completed") @SearchableMetaData(name = "locationcompleted")
            private Boolean completed = false;

            @SearchableProperty(name = "isprivate") @SearchableMetaData(name = "locationisprivate")
            private Boolean isprivate = false;

            @SearchableComponent
            private Set activityLocations = new HashSet(0);

            private Set LocationOpeningTimes = new HashSet(0);
            private Set events = new HashSet(0);

            @SearchableProperty(name = "adult_cost") @SearchableMetaData(name = "locationadult_cost")
            private String adult_cost = "";

            @SearchableProperty(name = "child_cost") @SearchableMetaData(name = "locationchild_cost")
            private String child_cost = "";

            @SearchableProperty(name = "family_cost") @SearchableMetaData(name = "locationfamily_cost")
            private String family_cost = "";

            private Double userRating = -1.00;
            private Set costs = new HashSet(0);
            private String cost_caveats = "";

            // Getters and setters ...
        }

        @Searchable
        public class AbstractActivitylocations implements java.io.Serializable {

            private static final long serialVersionUID = 1365110541466870626L;

            @SearchableId(name = "id")
            private Integer id;

            @SearchableComponent
            private Activity activity;

            @SearchableComponent
            private Location location;

            // Getters and setters ...
        }
        @Searchable
        public class AbstractProvider extends LightEntity implements Serializable {

            private static final long serialVersionUID = 3060354043429663058L;

            @SearchableId
            private Integer providerId = -1;

            @SearchableComponent(prefix = "provider")
            private Category category;

            @SearchableProperty(name = "businessName") @SearchableMetaData(name = "providerbusinessName")
            private String businessName;

            @SearchableProperty(name = "contactName") @SearchableMetaData(name = "providercontactName")
            private String contactName;

            @SearchableComponent
            private Address address;

            @SearchableProperty(name = "phoneNumber") @SearchableMetaData(name = "providerphoneNumber")
            private String phoneNumber;

            @SearchableProperty(name = "faxNumber") @SearchableMetaData(name = "providerfaxNumber")
            private String faxNumber;

            @SearchableProperty(name = "email") @SearchableMetaData(name = "provideremail")
            private String email;

            @SearchableProperty(name = "url") @SearchableMetaData(name = "providerurl")
            private String url;

            @SearchableProperty(name = "status") @SearchableMetaData(name = "providerstatus")
            private String status;

            @SearchableProperty(name = "notes") @SearchableMetaData(name = "providernotes")
            private String notes;

            @SearchableProperty(name = "shortDescription") @SearchableMetaData(name = "providershortDescription")
            private String shortDescription;

            private Boolean completed = false;
            private Boolean isprivate = false;
            private Double userRating = -1.00;
            private Integer ABRating = 1;

            @SearchableComponent
            private Set locations = new HashSet(0);

            @SearchableComponent
            private Set activities = new HashSet(0);

            private Set ProviderOpeningTimes = new HashSet(0);
            private User creator;
            boolean freeEdit = false;

            // Getters and setters ...
        }

    Thanks for reading!! Rob
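    One idea worth trying (a sketch, not from the original question): every @SearchableComponent copies the related entity's searchable content into the owning document, so bi-directional components like Provider, Location and Activity multiply the index size quickly. Compass also offers @SearchableReference, which stores only the related entity's id. Assuming the same Compass 2.x annotations used above, the Activity side could look like this:

        @Searchable
        public class AbstractActivity extends LightEntity implements Serializable {

            @SearchableId(name = "actID")
            private Integer activityId = -1;

            // Reference instead of component: only the Provider's id is stored,
            // so provider fields are no longer duplicated into every Activity doc.
            @SearchableReference
            private Provider provider;

            // Keep a component only where the related entity's text really must
            // be searchable from an Activity hit, and cap its depth.
            @SearchableComponent(maxDepth = 1)
            private Set activityLocations = new HashSet(0);
        }

    The trade-off is that referenced data is no longer searchable through the owning entity. And, to the side question: a field that is not mapped with @SearchableProperty/@SearchableComponent is not stored in the index at all, so it comes back unpopulated when the object is rebuilt purely from Compass results.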

    Read the article

  • I need some help with either my SQL or my PHP I do not know which...

    - by sico87
    Hello, I am creating a CMS and some of its functionality is that the images within the content are manageable. I am currently trying to display a table that shows the content title and then the associated images; ideally I would like a layout similar to this:

        Content Title
            Image 1
            Image 2
            Image 3
        Content Title 2
            Image 1
            Image 2
        Content Title 3
            Image 1

    The SQL that returns the data is actually formed using CodeIgniter's Active Record class:

        function getAllContentImages()
        {
            $this->db->select('*');
            $this->db->from('contentImagesTable');
            $this->db->join('contentTable', 'contentTable.contentId = contentImagesTable.contentId');
            $this->db->join('categoryTable', 'categoryTable.categoryId = contentTable.categoryId');
            $query = $this->db->get();
            return $query->result_array();
        }

    The array that is returned looks like this (I have cut the size down for readability):

        Array
        (
            [0] => Array ( [contentImageId] => 25 [contentImageName] => green.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/2/green.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265222654 [contentId] => 2 [dashboardUserId] => 0 [contentTitle] => sadsadsadassss [contentAbstract] => <p>Pllllleeeeeeeaaaaasssssseeeeee Work</p> [contentBody] => <p>Please work :-( please</p> [contentOnline] => 0 [contentAllowComments] => 0 [contentDateCreated] => 1265124038 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 )
            [1] => Array ( [contentImageId] => 28 [contentImageName] => yellow.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/7/yellow.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265388055 [contentId] => 7 [dashboardUserId] => 0 [contentTitle] => Another Blog [contentAbstract] => <p>This is another blog and it is shit becuase this does not work</p> [contentBody] => <p>ioasfihfududfhdufhuishdfiudshfiudhsfiuhdsiufhusdhfuids</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265388034 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 )
            [2] => Array ( [contentImageId] => 33 [contentImageName] => portaski.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/11/portaski.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265714175 [contentId] => 11 [dashboardUserId] => 0 [contentTitle] => Portaski - new product and brand launch by Bang [contentAbstract] => <p>Bang's experience in new product development has helped launch PortaSki – the pocket-sized device which is set to revolutionise skiing.</p> [contentBody] => <p>After developing Portaski's brand identity and positioning, Bang re-designed the product and its packaging ahead of launch in late 2008.</p> <p>A media and PR strategy was devised and implemented using Bang's close relationship with two of the UK's most influential organisations in the Advertising and Media Buying industries. On-line advertising was supported with editorial reviews in the UK's leading broadsheets and tabloids, which combined with pin-point HTML direct mail to drive consumers to the new e-commerce site.</p> <p>Impressive month-on-month growth has been achieved since launch, and the direct marketing activity resulted in an unprecedented 2.71% of targets going on-line to purchase a PortaSki.</p> <p>For further information visit <a href="http://www.portaski.com" target="_blank">www.portaski.com</a></p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265718184 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 )
            [3] => Array ( [contentImageId] => 26 [contentImageName] => housingplus.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/5/housingplus.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265284989 [contentId] => 5 [dashboardUserId] => 0 [contentTitle] => Bang launches Housing Plus [contentAbstract] => <p>Bang has launched Housing Plus, the new brand for the Central Borders Housing Group, along with new sub-brands Property Care and SSHA.</p> [contentBody] => <p>The Midlands based Group, with turnover in excess of £21M, appointed Bang in 2008 following an open pitch of over 40 agencies. Bang's work began with an extensive marketing research strategy that challenged the Group's former positioning and brand structure.</p> <p>The research unveiled that the housing sector demanded a values-led Group. This led Bang to develop the brave 'Together for the Right Reasons' positioning for Housing Plus.</p> <p>Chris Garratt, Marketing Director at Bang explained "The housing sector has witnessed wholesale change in recent years. Much to tenant's dismay, many associations and Groups appear to be losing touch with their roots, we wanted to develop a Group for associations who place principles at the heart of their corporate strategy".</p> <p>The repositioned sub-brands also play an important role in the Group's revised brand by highlighting Housing Plus' willingness to embrace and nurture individual identities. Chris Garratt continued "By adopting a 'house of brands' hierarchy from the outset, Housing Plus has sent out a strong message to prospective strategic partners".</p> <p>Bang handled all aspects of work for the redevelopment of the three brands, including research, brand creation, naming, positioning, internal branding and communications, advertising, the brand launches, building the brands' on-line presence and the creation of a powerful brand film – which is already attracting significant interest from across the sector.</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265285940 [categoryId] => 8 [categoryTitle] => News [categoryAbstract] => <p>The world at Bang Marketing moves fast, keep up to date w [categorySlug] => news [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1265283717 )
        )

    I need a way to get all the content images associated with the same content title into one group and then display them under that content title. Can anyone help?
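    One way to get that grouping (a sketch, not from the original post; the model name content_model is hypothetical) is to re-index the flat rows by contentTitle in the controller and then nest two loops in the view:

        // Controller: group the flat result rows by their content title.
        $grouped = array();
        foreach ($this->content_model->getAllContentImages() as $row) {
            $grouped[$row['contentTitle']][] = $row;
        }
        $data['groupedImages'] = $grouped;

        // View: one heading per content title, then its images.
        foreach ($groupedImages as $title => $images) {
            echo '<h2>' . $title . '</h2>';
            foreach ($images as $image) {
                echo '<img src="' . $image['contentImagePath'] . '" alt="' . $image['contentImageName'] . '" />';
            }
        }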

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the client calling the service, in a large distributed system a service may invoke other services; in this case, one service needs to know the endpoints of the services it invokes. That might not be a problem in a small system, but when you have more than 10 services it becomes one. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

    Benefit of Discovery Service

    Since almost all my services need to invoke at least one other service, it would be a difficult task to make sure all service endpoints are configured correctly in every service, and it would be a nightmare if a service changed its endpoint at runtime. Hence we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints for every service. Using the discovery service, when service X wants to invoke service Y, it just asks the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X can use one of those endpoints to send its request. When a service changes its endpoint address, all it needs to do is update its record in the discovery service, and all the other services will know its new endpoint. WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service: when a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services that have the discovery behavior enabled receive this message, and only the matching services send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to MSDN.

    Service Announcement and Probe

    The main job of the discovery service is to return the proper endpoint addresses to the service that is looking for them. In most cases the consuming service (as a client) sends the contract it wants to invoke to the discovery service, and the discovery service finds the endpoint and responds. Sometimes the contract and endpoint are not enough; there can also be versioning and extension attributes, but this post only covers the contract-and-endpoint case. When a client (or a service that needs to invoke another service) wants to connect to a target service, it first requests the discovery service through the "Probe" operation with its criteria. Basically the criteria contain the contract type name of the target service. The discovery service then searches its endpoint repository using the criteria; the repository might be a database, a distributed cache or a flat XML file. If there is a match, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe".
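    To make the probe concrete before diving into the service itself, here is a minimal sketch of what a probe looks like from the consuming side (the complete client code appears later in this post; the probe address is an assumption that matches the configuration used below):

        // Ask the discovery service which endpoints implement IStringService.
        var discoveryEndpoint = new DiscoveryEndpoint(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:10011/probe")); // assumed probe address
        using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
        {
            FindResponse response = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
            // response.Endpoints now holds the discovery endpoint metadata of every match.
        }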
    Finally, the client receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service is also responsible for knowing when a new service becomes available and when a service goes offline. This feature is called "Announcement": when a service starts and stops, it announces itself to the discovery service. So the basic functionality of a discovery service includes:

    1. An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
    2. An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
    3. An endpoint which receives the client probe message and returns the discovery endpoint metadata of the matching services.

    WCF 4.0's discovery infrastructure covers all of these features.

    Discovery Service in WCF 4.0

    WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports the ad-hoc and managed proxy modes. For the case in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service happen asynchronously, for better service scalability and performance.

    1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We add the endpoint information to the repository in this method.
    2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We remove the endpoint information from the repository in this method.
    3. OnBeginFind, OnEndFind: invoked when a client sends a probe message to find service endpoint information. We look for the proper endpoints by matching the client's criteria against the repository in this method.
    4. OnBeginResolve, OnEndResolve: invoked when a client sends a resolve message. Unlike find, resolve makes the discovery service return exactly one service endpoint metadata entry to the client. In our example we will NOT implement this method.

    Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class: since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.ServiceModel.Discovery;
        using System.ServiceModel;

        namespace Phare.Service
        {
            [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
            public class ManagedProxyDiscoveryService : DiscoveryProxy
            {
                protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
                { throw new NotImplementedException(); }

                protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
                { throw new NotImplementedException(); }

                protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
                { throw new NotImplementedException(); }

                protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state)
                { throw new NotImplementedException(); }

                protected override void OnEndFind(IAsyncResult result)
                { throw new NotImplementedException(); }

                protected override void OnEndOfflineAnnouncement(IAsyncResult result)
                { throw new NotImplementedException(); }

                protected override void OnEndOnlineAnnouncement(IAsyncResult result)
                { throw new NotImplementedException(); }

                protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result)
                { throw new NotImplementedException(); }
            }
        }

    Then let's implement the online, offline and find methods one by one. WCF's discovery infrastructure gives us full flexibility to implement the endpoint add, remove and find logic. For demo purposes we will use an internal dictionary to store the services' endpoint metadata; in the next post we will see how to serialize and store this information in a database. Define a concurrent dictionary inside the service class, since it will be used in a multi-threaded scenario.

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class ManagedProxyDiscoveryService : DiscoveryProxy
        {
            private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services;

            public ManagedProxyDiscoveryService()
            {
                _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>();
            }
        }

    Then we can simply implement the logic of service online and offline.
        protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
        {
            _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata);
            return new OnOnlineAnnouncementAsyncResult(callback, state);
        }

        protected override void OnEndOnlineAnnouncement(IAsyncResult result)
        {
            OnOnlineAnnouncementAsyncResult.End(result);
        }

        protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
        {
            EndpointDiscoveryMetadata endpoint = null;
            _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint);
            return new OnOfflineAnnouncementAsyncResult(callback, state);
        }

        protected override void OnEndOfflineAnnouncement(IAsyncResult result)
        {
            OnOfflineAnnouncementAsyncResult.End(result);
        }

    Regarding the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which we can use to evaluate whether a service's metadata satisfies the criteria. So the implementation of the find method would be like this:

        protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
        {
            _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value))
                     .Select(s => s.Value)
                     .All(meta =>
                     {
                         findRequestContext.AddMatchingEndpoint(meta);
                         return true;
                     });
            return new OnFindAsyncResult(callback, state);
        }

        protected override void OnEndFind(IAsyncResult result)
        {
            OnFindAsyncResult.End(result);
        }

    As you can see, we check each endpoint's metadata in the repository by invoking the IsMatch method, and add every matching endpoint's metadata to the request context. Finally, since all these methods are asynchronous, we need some AsyncResult classes as well. Below are the base class and the inherited classes used in the previous methods.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Threading;

        namespace Phare.Service
        {
            abstract internal class AsyncResult : IAsyncResult
            {
                AsyncCallback callback;
                bool completedSynchronously;
                bool endCalled;
                Exception exception;
                bool isCompleted;
                ManualResetEvent manualResetEvent;
                object state;
                object thisLock;

                protected AsyncResult(AsyncCallback callback, object state)
                {
                    this.callback = callback;
                    this.state = state;
                    this.thisLock = new object();
                }

                public object AsyncState
                {
                    get { return state; }
                }

                public WaitHandle AsyncWaitHandle
                {
                    get
                    {
                        if (manualResetEvent != null)
                        {
                            return manualResetEvent;
                        }
                        lock (ThisLock)
                        {
                            if (manualResetEvent == null)
                            {
                                manualResetEvent = new ManualResetEvent(isCompleted);
                            }
                        }
                        return manualResetEvent;
                    }
                }

                public bool CompletedSynchronously
                {
                    get { return completedSynchronously; }
                }

                public bool IsCompleted
                {
                    get { return isCompleted; }
                }

                object ThisLock
                {
                    get { return this.thisLock; }
                }

                protected static TAsyncResult End<TAsyncResult>(IAsyncResult result)
                    where TAsyncResult : AsyncResult
                {
                    if (result == null)
                    {
                        throw new ArgumentNullException("result");
                    }

                    TAsyncResult asyncResult = result as TAsyncResult;

                    if (asyncResult == null)
                    {
                        throw new ArgumentException("Invalid async result.", "result");
                    }

                    if (asyncResult.endCalled)
                    {
                        throw new InvalidOperationException("Async object already ended.");
                    }

                    asyncResult.endCalled = true;

                    if (!asyncResult.isCompleted)
                    {
                        asyncResult.AsyncWaitHandle.WaitOne();
                    }

                    if (asyncResult.manualResetEvent != null)
                    {
                        asyncResult.manualResetEvent.Close();
                    }

                    if (asyncResult.exception != null)
                    {
                        throw asyncResult.exception;
                    }

                    return asyncResult;
                }

                protected void Complete(bool completedSynchronously)
                {
                    if (isCompleted)
                    {
                        throw new InvalidOperationException("This async result is already completed.");
                    }

                    this.completedSynchronously = completedSynchronously;

                    if (completedSynchronously)
                    {
                        this.isCompleted = true;
                    }
                    else
                    {
                        lock (ThisLock)
                        {
                            this.isCompleted = true;
                            if (this.manualResetEvent != null)
                            {
                                this.manualResetEvent.Set();
                            }
                        }
                    }

                    if (callback != null)
                    {
                        callback(this);
                    }
                }

                protected void Complete(bool completedSynchronously, Exception exception)
                {
                    this.exception = exception;
                    Complete(completedSynchronously);
                }
            }
        }

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.ServiceModel.Discovery;
        using Phare.Service;

        namespace Phare.Service
        {
            internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult
            {
                public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state)
                    : base(callback, state)
                {
                    this.Complete(true);
                }

                public static void End(IAsyncResult result)
                {
                    AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result);
                }
            }

            sealed class OnOfflineAnnouncementAsyncResult : AsyncResult
            {
                public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state)
                    : base(callback, state)
                {
                    this.Complete(true);
                }

                public static void End(IAsyncResult result)
                {
                    AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result);
                }
            }

            sealed class OnFindAsyncResult : AsyncResult
            {
                public OnFindAsyncResult(AsyncCallback callback, object state)
                    : base(callback, state)
                {
                    this.Complete(true);
                }

                public static void End(IAsyncResult result)
                {
                    AsyncResult.End<OnFindAsyncResult>(result);
                }
            }

            sealed class OnResolveAsyncResult : AsyncResult
            {
                EndpointDiscoveryMetadata matchingEndpoint;

                public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state)
                    : base(callback, state)
                {
                    this.matchingEndpoint = matchingEndpoint;
                    this.Complete(true);
                }

                public static EndpointDiscoveryMetadata End(IAsyncResult result)
                {
                    OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result);
                    return thisPtr.matchingEndpoint;
                }
            }
        }

    Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service, so we can use ServiceHost in a console application, Windows service, or in IIS as usual. The following code shows how to host the discovery service we have just created in a console application.

        static void Main(string[] args)
        {
            using (var host = new ServiceHost(new ManagedProxyDiscoveryService()))
            {
                host.Opened += (sender, e) =>
                {
                    host.Description.Endpoints.All((ep) =>
                    {
                        Console.WriteLine(ep.ListenUri);
                        return true;
                    });
                };

                try
                {
                    // retrieve the announcement, probe endpoint and binding from configuration
                    var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
                    var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
                    var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
                    var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress);
                    var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress);
                    probeEndpoint.IsSystemEndpoint = false;
                    // append the service endpoint for announcement and probe
                    host.AddServiceEndpoint(announcementEndpoint);
                    host.AddServiceEndpoint(probeEndpoint);

                    host.Open();

                    Console.WriteLine("Press any key to exit.");
                    Console.ReadKey();
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.ToString());
                }
            }

            Console.WriteLine("Done.");
            Console.ReadKey();
        }

    Notice that the discovery service needs two endpoints, one for announcement and one for probe. In this example I retrieve them from the configuration file, where I also specified the binding of these two endpoints.
        <?xml version="1.0"?>
        <configuration>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
          </startup>
          <appSettings>
            <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
            <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
            <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
          </appSettings>
        </configuration>

    And this is the console screen when I ran my discovery service: as you can see, there are two endpoints listening, one for the announcement message and one for the probe message.

    Discoverable Service and Client

    Next, let's create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as an offline message before it shuts down. Just create a simple service which converts the incoming string to upper case. The service contract and implementation would be like this:

        [ServiceContract]
        public interface IStringService
        {
            [OperationContract]
            string ToUpper(string content);
        }

        public class StringService : IStringService
        {
            public string ToUpper(string content)
            {
                return content.ToUpper();
            }
        }

    Then host this service in a console application. To make the discovery service easy to test, the service address changes each time it's started.

        static void Main(string[] args)
        {
            var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString()));

            using (var host = new ServiceHost(typeof(StringService), baseAddress))
            {
                host.Opened += (sender, e) =>
                {
                    Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri);
                };

                host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty);

                host.Open();

                Console.WriteLine("Press any key to exit.");
                Console.ReadKey();
            }
        }

    Currently this service is NOT discoverable. We need to add a special service behavior so that it sends the online and offline messages to the discovery service's announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior: once we specify the announcement endpoint address and append the behavior to the service, the service becomes discoverable.

        var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
        var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
        var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress);
        var discoveryBehavior = new ServiceDiscoveryBehavior();
        discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint);
        host.Description.Behaviors.Add(discoveryBehavior);

    The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it hooks into the channel open and close procedures and sends the online and offline messages to the announcement endpoint.
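    As a side note (not covered in the original post), the same announcement behavior can be attached declaratively instead of in code; a sketch assuming the standard WCF 4.0 discovery configuration elements, with the announcement address matching the appSettings above:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior>
                <!-- announce online/offline to the managed discovery service -->
                <serviceDiscovery>
                  <announcementEndpoints>
                    <endpoint kind="announcementEndpoint"
                              address="net.tcp://localhost:10010/announcement"
                              binding="netTcpBinding" />
                  </announcementEndpoints>
                </serviceDiscovery>
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>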
    On the client side, once we have the discovery service in place, a client can invoke a service without knowing its endpoint. The WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing in criteria. In the code below I initialize the DiscoveryClient and specify the discovery service's probe endpoint address. Then I create the find criteria by specifying the service contract I want to use and invoke the Find method. This sends the probe message to the discovery service, which sends the matching endpoints back to me. The discovery service returns all endpoints that match the find criteria, which means the result of the find method might contain more than one endpoint. In this example I just return the first match. In the next post I will show how to extend our discovery service to make it work like a service load balancer.

        static EndpointAddress FindServiceEndpoint()
        {
            var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
            var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
            var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress);

            EndpointAddress address = null;
            FindResponse result = null;
            using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
            {
                result = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
            }

            if (result != null && result.Endpoints.Any())
            {
                var endpointMetadata = result.Endpoints.First();
                address = endpointMetadata.Address;
            }
            return address;
        }

    Once we have probed the discovery service we receive the endpoint, so in the client code we can create the channel factory from the endpoint and binding and invoke the service. When creating the client-side channel factory we need to make sure the client-side binding is the same as the service side's. The WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included, because the binding is not part of the WS-Discovery specification. In the next post I will demonstrate how to add the binding information to the discovery service; at that point the client won't need to create the binding by itself. Instead it will use the binding received from the discovery service.

        static void Main(string[] args)
        {
            Console.WriteLine("Say something...");
            var content = Console.ReadLine();
            while (!string.IsNullOrWhiteSpace(content))
            {
                Console.WriteLine("Finding the service endpoint...");
                var address = FindServiceEndpoint();
                if (address == null)
                {
                    Console.WriteLine("There is no endpoint matches the criteria.");
                }
                else
                {
                    Console.WriteLine("Found the endpoint {0}", address.Uri);

                    var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address);
                    factory.Opened += (sender, e) =>
                    {
                        Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri);
                    };
                    var proxy = factory.CreateChannel();
                    using (proxy as IDisposable)
                    {
                        Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content));
                    }
                }

                Console.WriteLine("Say something...");
                content = Console.ReadLine();
            }
        }

    Similarly, the discovery service probe endpoint and binding are defined in the configuration file.
        <?xml version="1.0"?>
        <configuration>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
          </startup>
          <appSettings>
            <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
            <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
            <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
          </appSettings>
        </configuration>

    OK, now let's have a test. First start the discovery service, then start our discoverable service. When it starts, it announces itself to the discovery service and registers its endpoint in the repository, which is the local dictionary. Then start the client and type something: as you can see, the client asks the discovery service for the endpoint and then establishes the connection to the discoverable service. More interestingly, do NOT close the client console; instead terminate the discoverable service by pressing the Enter key. This makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it should now be hosted on another address. If we enter something in the client, we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.

    Summary

    In this post I discussed the benefit of using a discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes, in this example I used an in-memory dictionary as the discovery endpoint metadata repository, and when finding I just returned the first matched endpoint. I also hard-coded the bindings between the discoverable service and the client. In the next post I will show you how to solve the problems mentioned above, as well as some additional features for production usage. You can download the code here.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Custom ASP.NET Routing to an HttpHandler

    - by Rick Strahl
    As of version 4.0, ASP.NET natively supports routing via the now built-in System.Web.Routing namespace. Routing features are automatically integrated into the HttpRuntime via a few custom interfaces.

    New Web Forms Routing Support

    In ASP.NET 4.0 there are a host of improvements, including routing support baked into Web Forms via a RouteData property available on the Page class and a RouteCollection.MapPageRoute() route handler that makes it easy to route to Web Forms. Mapping ASP.NET Page routes is as simple as setting up the routes with MapPageRoute:

        protected void Application_Start(object sender, EventArgs e)
        {
            RegisterRoutes(RouteTable.Routes);
        }

        void RegisterRoutes(RouteCollection routes)
        {
            routes.MapPageRoute("StockQuote", "StockQuote/{symbol}", "StockQuote.aspx");
            routes.MapPageRoute("StockQuotes", "StockQuotes/{symbolList}", "StockQuotes.aspx");
        }

    Then, to access the route data in the page, you can use the new Page class RouteData property to retrieve the dynamic route data information:

        public partial class StockQuote1 : System.Web.UI.Page
        {
            protected StockQuote Quote = null;

            protected void Page_Load(object sender, EventArgs e)
            {
                string symbol = RouteData.Values["symbol"] as string;
                StockServer server = new StockServer();
                Quote = server.GetStockQuote(symbol);
                // display stock data in Page View
            }
        }

    Simple, quick, and doesn't require much explanation. If you're using Web Forms, most of your routing needs should be served just fine by this simple mechanism. Kudos to the ASP.NET team for putting this in the box and making it easy!

    How Routing Works

    Handling routing in ASP.NET involves these steps:

    - Registering routes
    - Creating a custom RouteHandler to retrieve an HttpHandler
    - Attaching RouteData to your HttpHandler
    - Picking up route information in your request code

    Registering routes makes ASP.NET aware of the routes you want to handle via the static RouteTable.Routes collection: you basically add routes to this collection to let ASP.NET know which URL patterns it should watch for. You typically hook up routes in a RegisterRoutes method that fires in Application_Start, as I did in the example above, to ensure routes are added only once when the application first starts up. When you create a route, you pass in a RouteHandler instance which ASP.NET caches and reuses as routes are matched. Once registered, ASP.NET monitors the routes, and if a match is found just prior to HttpHandler instantiation, ASP.NET uses the RouteHandler registered for the route and calls GetHandler() on it to retrieve an HttpHandler instance. The RouteHandler.GetHandler() method is responsible for creating an instance of an HttpHandler that is to handle the request and, if necessary, for assigning any additional custom data to the handler. At minimum you probably want to pass the RouteData to the handler so the handler can identify the request based on the route data available. To do this you typically add a RouteData property to your handler and then assign the property from the RouteHandler's request context. This is essentially how Page.RouteData comes into being, and this approach should work well for any custom handler implementation that requires RouteData. It's a shame that ASP.NET doesn't have a top-level intrinsic object accessible off the HttpContext object to provide route data more generically, but since RouteData is directly tied to HttpHandlers and not all handlers support it, it might cause some confusion about when it's actually available.
    Bottom line: if you want to hold on to RouteData you have to assign it to a custom property of the handler, or else pass it to the handler via the Context.Items[] collection, where it can be retrieved on an as-needed basis. It's important to understand that routing is hooked up via RouteHandlers, which are responsible for loading HttpHandler instances. RouteHandlers are invoked for every request that matches a route, and through this RouteHandler instance the handler gains access to the current RouteData. Because of this, routing is really tied to HttpHandlers and is not available prior to handler instantiation, which is pretty late in the HttpRuntime's request pipeline. In other words, routing works with handlers, but not earlier in the pipeline within modules. Specifically, ASP.NET calls RouteHandler.GetHandler() from the PostResolveRequestCache HttpRuntime pipeline event, which fires just before handler resolution.

    Non-Page Routing - You need to build custom RouteHandlers

    If you need to route to a custom HttpHandler or other non-Page (and non-MVC) endpoint in the HttpRuntime, there is no generic mapping support available. You need to create a custom RouteHandler that can manage creating an instance of an HttpHandler that is fired in response to a routed request. Depending on what you are doing, this process can be simple or fairly involved, as your code is responsible for deciding, based on the route data provided, which handler to instantiate and, more importantly, how to pass the route data on to the handler. Luckily, creating a RouteHandler is easy: implement the IRouteHandler interface, which has only a single GetHttpHandler(RequestContext context) method. In this method you can pick up the requestContext.RouteData, instantiate the HttpHandler of choice, and assign the RouteData to it. Then pass back the handler and you're done. Here's a simple example of a GetHttpHandler() method that dynamically creates a handler based on a passed-in handler type:

        /// <summary>
        /// Retrieves an Http Handler based on the type specified in the constructor
        /// </summary>
        /// <param name="requestContext"></param>
        /// <returns></returns>
        IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
        {
            IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

            // If we're dealing with a Callback Handler
            // pass the RouteData for this route to the Handler
            if (handler is CallbackHandler)
                ((CallbackHandler)handler).RouteData = requestContext.RouteData;

            return handler;
        }

    Note that this code checks for a specific type of handler and, if it matches, assigns the RouteData to that handler. This is optional, but it is quite a common scenario if you want to work with RouteData.
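    For reference, wiring a custom IRouteHandler like this into the route table just means constructing the Route with a handler instance (MyCustomRouteHandler below is a hypothetical stand-in for your own implementation):

        void RegisterRoutes(RouteCollection routes)
        {
            // Route all StockQuote/{symbol} URLs to the handler created by
            // our custom IRouteHandler implementation.
            routes.Add("StockQuote",
                new Route("StockQuote/{symbol}", new MyCustomRouteHandler()));
        }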
    If the handler you need to instantiate isn't under your control but you still need to pass RouteData to handler code, an alternative is to pass the RouteData via the HttpContext.Items collection:

        IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
        {
            IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;
            requestContext.HttpContext.Items["RouteData"] = requestContext.RouteData;
            return handler;
        }

    The code in the handler implementation can then pick up the RouteData from the context collection as needed:

        RouteData routeData = HttpContext.Current.Items["RouteData"] as RouteData;

    This isn't as clean as having an explicit RouteData property, but it does have the advantage that the route data is visible anywhere in the handler's code chain. It's definitely preferable to create a custom property on your handler, but the Context work-around works in a pinch when you don't own the handler code and have dynamic code executing as part of the handler execution.

    An Example of a Custom RouteHandler: Attribute Based Route Implementation

    In this post I'm going to discuss a custom routing implementation I built for my CallbackHandler class in the West Wind Web & Ajax Toolkit. CallbackHandler can very easily be used for creating AJAX, REST and POX requests following RPC-style method mapping. You can pass parameters via URL query string, POST data or raw data structures, and you can retrieve results as JSON, XML or raw string/binary data. It's a quick and easy way to build service interfaces with no fuss. As a quick review, here's how CallbackHandler works: you create an HTTP handler that derives from CallbackHandler, you implement methods that have a [CallbackMethod] attribute, and that's it. Here's an example of a CallbackHandler implementation in an ashx.cs based handler:

        // RestService.ashx.cs
        public class RestService : CallbackHandler
        {
            [CallbackMethod]
            public StockQuote GetStockQuote(string symbol)
            {
                StockServer server = new StockServer();
                return server.GetStockQuote(symbol);
            }

            [CallbackMethod]
            public StockQuote[] GetStockQuotes(string symbolList)
            {
                StockServer server = new StockServer();
                string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
                return server.GetStockQuotes(symbols);
            }
        }

    CallbackHandler makes it super easy to create a method on the server, pass data to it via POST, query string or raw JSON/XML data, and then retrieve the results back in various formats. This works wonderfully, and I've used these tools in many projects for myself and with clients. But one thing that has been missing is the ability to create clean URLs. Typical URLs looked like this:

        http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuote&symbol=msft
        http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuotes&symbolList=msft,intc,gld,slw,mwe&format=xml

    which works and is clear enough, but is also clearly very ugly. It would be much nicer if URLs could look like this:

        http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuote/msft
        http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuotes/msft,intc,gld,slw?format=xml

    (the virtual root in this sample is WestwindWebToolkit/Samples and StockQuote/{symbol} is the route; if you use FireFox, try the JSONView plug-in to make it easier to view JSON content). So, taking a clue from the WCF REST tools that use RouteUrls, I set out to create a way to specify RouteUrls for each of the endpoints.
    The change basically allows changing the above to:

        [CallbackMethod(RouteUrl = "RestService/StockQuote/{symbol}")]
        public StockQuote GetStockQuote(string symbol)
        {
            StockServer server = new StockServer();
            return server.GetStockQuote(symbol);
        }

        [CallbackMethod(RouteUrl = "RestService/StockQuotes/{symbolList}")]
        public StockQuote[] GetStockQuotes(string symbolList)
        {
            StockServer server = new StockServer();
            string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
            return server.GetStockQuotes(symbols);
        }

    where a RouteUrl is specified as part of the Callback attribute. With the RouteUrl changes in place I can now get URLs like the second set shown earlier. So how does that work? Let's find out...

    How to Create Custom Routes

    As mentioned earlier, routing is made up of several steps:

    - Creating a custom RouteHandler to create HttpHandler instances
    - Mapping the actual routes to the RouteHandler
    - Retrieving the RouteData and actually doing something useful with it in the HttpHandler

    In the CallbackHandler routing example above this works out to:

    - Create a custom RouteHandler that includes a property to track the method to call
    - Set up the routes using Reflection against the class, looking for any RouteUrls in the CallbackMethod attribute
    - Add a RouteData property to the CallbackHandler so we can access the RouteData in the code of the handler

    Creating a Custom Route Handler

    To make the above work, I created a custom RouteHandler class that includes the actual IRouteHandler implementation as well as a generic static method to automatically register all routes marked with the [CallbackMethod(RouteUrl="…")] attribute. Here's the code:

        /// <summary>
        /// Route handler that can create instances of CallbackHandler derived
        /// callback classes. The route handler tracks the method name and
        /// creates an instance of the service in a predictable manner
        /// </summary>
        /// <typeparam name="TCallbackHandler">CallbackHandler type</typeparam>
        public class CallbackHandlerRouteHandler : IRouteHandler
        {
            /// <summary>
            /// Method name that is to be called on this route.
            /// Set by the automatically generated RegisterRoutes
            /// invocation.
            /// </summary>
            public string MethodName { get; set; }

            /// <summary>
            /// The type of the handler we're going to instantiate.
            /// Needed so we can semi-generically instantiate the
            /// handler and call the method on it.
            /// </summary>
            public Type CallbackHandlerType { get; set; }

            /// <summary>
            /// Constructor to pass in the two required components we
            /// need to create an instance of our handler.
            /// </summary>
            /// <param name="methodName"></param>
            /// <param name="callbackHandlerType"></param>
            public CallbackHandlerRouteHandler(string methodName, Type callbackHandlerType)
            {
                MethodName = methodName;
                CallbackHandlerType = callbackHandlerType;
            }

            /// <summary>
            /// Retrieves an Http Handler based on the type specified in the constructor
            /// </summary>
            /// <param name="requestContext"></param>
            /// <returns></returns>
            IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
            {
                IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

                // If we're dealing with a Callback Handler
                // pass the RouteData for this route to the Handler
                if (handler is CallbackHandler)
                    ((CallbackHandler)handler).RouteData = requestContext.RouteData;

                return handler;
            }

            /// <summary>
            /// Generic method to register all routes from a CallbackHandler
            /// that have RouteUrls defined on the [CallbackMethod] attribute
            /// </summary>
            /// <typeparam name="TCallbackHandler">CallbackHandler Type</typeparam>
            /// <param name="routes"></param>
            public static void RegisterRoutes<TCallbackHandler>(RouteCollection routes)
            {
                // find all methods
                var methods = typeof(TCallbackHandler).GetMethods(BindingFlags.Instance | BindingFlags.Public);
                foreach (var method in methods)
                {
                    var attrs = method.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
                    if (attrs.Length < 1)
                        continue;

                    CallbackMethodAttribute attr = attrs[0] as CallbackMethodAttribute;
                    if (string.IsNullOrEmpty(attr.RouteUrl))
                        continue;

                    // Add the route
                    routes.Add(method.Name,
                        new Route(attr.RouteUrl, new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));
                }
            }
        }

    The RouteHandler implements IRouteHandler, and its responsibility via the GetHandler method is to create an HttpHandler based on the route data. When ASP.NET calls GetHandler it passes a requestContext parameter which includes a requestContext.RouteData property. This parameter holds the current request's route data as well as an instance of the current RouteHandler. If you look at GetHttpHandler() you can see that the code creates an instance of the handler we are interested in and then sets the RouteData property on the handler. This is how you can pass the current request's RouteData to the handler. The RouteData object also has a RouteData.RouteHandler property that is available to the handler later, which is useful for getting additional information about the current route. In our case the RouteHandler includes a MethodName property that identifies the method to execute in the handler; since that value no longer comes from the URL, we need to figure out the method name some other way. The method name is mapped explicitly when the RouteHandler is created: the static method that auto-registers all CallbackMethods with RouteUrls sets the method name when it creates the routes while reflecting over the methods (more on this in a minute). The important point here is that you can attach additional properties to the RouteHandler, and you can then access the RouteHandler and its properties later in the handler to pick up these custom values. This is a crucial feature, in that the RouteHandler serves to pass additional context to the handler so it knows what actions to perform. The automatic route registration is handled by the static RegisterRoutes<TCallbackHandler> method. This method is generic and totally reusable for any CallbackHandler-type handler.
To register a CallbackHandler and any RouteUrls it has defined, you simply use code like this in Application_Start (or other application startup code):

protected void Application_Start(object sender, EventArgs e)
{
    // Register Routes for RestService
    CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes);
}

If you have multiple CallbackHandler style services you can make multiple calls to RegisterRoutes, one for each of the service types. RegisterRoutes internally uses Reflection to run through all the methods of the handler, looking for CallbackMethod attributes and checking whether a RouteUrl is specified. If it is, a new instance of a CallbackHandlerRouteHandler is created and the name of the method and the type are set on it:

routes.Add(method.Name,
    new Route(attr.RouteUrl,
        new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));

While routing with CallbackHandlerRouteHandler is set up automatically for all methods that use the RouteUrl attribute, you can also use code to hook up those routes manually and skip the attribute. The code for this is straightforward and just requires that you manually map each individual route to each method you want routed:

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

void RegisterRoutes(RouteCollection routes)
{
    routes.Add("StockQuote Route",
        new Route("StockQuote/{symbol}",
            new CallbackHandlerRouteHandler("GetStockQuote", typeof(RestService))));

    routes.Add("StockQuotes Route",
        new Route("StockQuotes/{symbolList}",
            new CallbackHandlerRouteHandler("GetStockQuotes", typeof(RestService))));
}

I think it's clearly easier to have CallbackHandlerRouteHandler.RegisterRoutes() do this automatically for you based on RouteUrl attributes, but some people have a real aversion to attaching logic via attributes. Just realize that the option to create your routes manually is available as well.

Using the RouteData in the Handler

A RouteHandler's responsibility is to create an HttpHandler, and as mentioned earlier, IHttpHandler natively has no support for RouteData. In order to utilize RouteData in your handler code you have to pass the RouteData to the handler. In my CallbackHandlerRouteHandler, when it creates the HttpHandler instance, it assigns the custom RouteData property on the handler:

IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

if (handler is CallbackHandler)
    ((CallbackHandler)handler).RouteData = requestContext.RouteData;

return handler;

Again, this only works if you explicitly add a RouteData property to your handler, as I did in my CallbackHandler implementation:

/// <summary>
/// Optionally store RouteData on this handler
/// so we can access it internally
/// </summary>
public RouteData RouteData { get; set; }

and the RouteHandler needs to set it when it creates the handler instance. Once you have the route data in your handler you can access the route keys and values as well as the RouteHandler.
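To see the whole pattern in one place outside of the Westwind classes, here's a minimal sketch of a standalone IHttpHandler that consumes the RouteData handed to it by a route handler like the CallbackHandlerRouteHandler above. The StockQuoteHttpHandler name and the query string fallback are my own illustration, not part of the toolkit:

using System.Web;
using System.Web.Routing;

// Hypothetical standalone handler (not the Westwind CallbackHandler) that
// receives RouteData from a route handler like CallbackHandlerRouteHandler
public class StockQuoteHttpHandler : IHttpHandler
{
    // Assigned by the custom IRouteHandler when it creates this instance
    public RouteData RouteData { get; set; }

    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // {symbol} segment of a route like RestService/StockQuote/{symbol};
        // fall back to the query string for non-routed requests
        string symbol = RouteData != null
            ? RouteData.Values["symbol"] as string
            : context.Request.QueryString["symbol"];

        // The typed RouteHandler carries extra context - here the method name
        var routeHandler = RouteData == null
            ? null
            : RouteData.RouteHandler as CallbackHandlerRouteHandler;
        string methodName = routeHandler == null ? null : routeHandler.MethodName;

        context.Response.ContentType = "text/plain";
        context.Response.Write("Method: " + methodName + ", Symbol: " + symbol);
    }
}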
Since my RouteHandler has a custom property for the MethodName, I can retrieve it from within the handler with something like this (this example actually lives outside the handler; target is a handler instance passed to the processor):

// check for Route Data method name
if (target is CallbackHandler)
{
    var routeData = ((CallbackHandler)target).RouteData;
    if (routeData != null)
        methodToCall = ((CallbackHandlerRouteHandler)routeData.RouteHandler).MethodName;
}

When I need to access the dynamic values in the route (symbol in StockQuote/{symbol}) I can retrieve them easily with the Values collection (RouteData.Values["symbol"]). In my CallbackHandler processing logic I'm basically matching parameter names to route parameters:

// look for parameters in the route
if (routeData != null)
{
    string parmString = routeData.Values[parameter.Name] as string;
    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
}

And with that we've come full circle. We've created a custom RouteHandler that passes the RouteData to the handler it creates, we've registered our routes to use the RouteHandler, and we've utilized the route data in our handler. For completeness' sake, here's the routine that executes a method call based on the parameters passed in; one of its options is to retrieve the inbound parameters off the RouteData (as well as from POST data or query string parameters):

internal object ExecuteMethod(string method, object target, string[] parameters,
                              CallbackMethodParameterType paramType,
                              ref CallbackMethodAttribute callbackMethodAttribute)
{
    HttpRequest Request = HttpContext.Current.Request;
    object Result = null;

    // Stores parsed parameters (from string JSON or QueryString values)
    object[] adjustedParms = null;

    Type PageType = target.GetType();
    MethodInfo MI = PageType.GetMethod(method, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    if (MI == null)
        throw new InvalidOperationException("Invalid Server Method.");

    object[] methods = MI.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
    if (methods.Length < 1)
        throw new InvalidOperationException("Server method is not accessible due to missing CallbackMethod attribute");

    if (callbackMethodAttribute != null)
        callbackMethodAttribute = methods[0] as CallbackMethodAttribute;

    ParameterInfo[] parms = MI.GetParameters();
    JSONSerializer serializer = new JSONSerializer();

    RouteData routeData = null;
    if (target is CallbackHandler)
        routeData = ((CallbackHandler)target).RouteData;

    int parmCounter = 0;
    adjustedParms = new object[parms.Length];
    foreach (ParameterInfo parameter in parms)
    {
        // Retrieve parameters out of QueryString or POST buffer
        if (parameters == null)
        {
            // look for parameters in the route
            if (routeData != null)
            {
                string parmString = routeData.Values[parameter.Name] as string;
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // GET parameters are parsed as plain string values - no JSON encoding
            else if (HttpContext.Current.Request.HttpMethod == "GET")
            {
                // Look up the parameter by name
                string parmString = Request.QueryString[parameter.Name];
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // POST parameters are treated as methodParameters that are JSON encoded
            else if (paramType == CallbackMethodParameterType.Json)
                //string newVariable = methodParameters.GetValue(parmCounter) as string;
                adjustedParms[parmCounter] =
                    serializer.Deserialize(Request.Params["parm" + (parmCounter + 1).ToString()],
                                           parameter.ParameterType);
            else
                adjustedParms[parmCounter] =
                    SerializationUtils.DeSerializeObject(Request.Params["parm" + (parmCounter + 1).ToString()],
                                                         parameter.ParameterType);
        }
        else if (paramType == CallbackMethodParameterType.Json)
            adjustedParms[parmCounter] = serializer.Deserialize(parameters[parmCounter], parameter.ParameterType);
        else
            adjustedParms[parmCounter] =
                SerializationUtils.DeSerializeObject(parameters[parmCounter], parameter.ParameterType);

        parmCounter++;
    }

    Result = MI.Invoke(target, adjustedParms);

    return Result;
}

The code basically uses Reflection to loop through all the parameters available on the method and tries to assign them from RouteData, query string or POST variables. The parameters are converted into their appropriate types and then used to eventually make a Reflection-based method call. What's sweet is that the RouteData retrieval is just another option for dealing with the inbound data in this scenario, and it adds exactly two lines of code plus the code to retrieve the MethodName I showed previously – a seriously low-impact addition that adds a lot of extra value to this endpoint callback processing implementation.

Debugging your Routes

If you create a lot of routes it's easy to run into route conflicts where multiple routes have the same path and overlap with each other. This can be difficult to debug, especially if you are using automatically generated routes like the ones created by CallbackHandlerRouteHandler.RegisterRoutes. Luckily there's a tool that can help you out with this nicely: Phil Haack created a RouteDebugging tool you can download and add to your project. The easiest way to add it is via NuGet (Add Library Package from your project's References node), which adds a RouteDebug assembly to your project. Once installed you can debug your routes with this single line of code at application startup:

protected void Application_Start(object sender, EventArgs e)
{
    CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes);

    // Debug your routes
    RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
}

Any routed URL then displays a route debugging page that shows your current route data and all the routes that are mapped, along with a flag that indicates which route was actually matched. This is useful: if you have any overlap of routes you will be able to see which routes are triggered – the first one in the sequence wins. This tool has saved my ass on a few occasions – and with NuGet it's now easy to add it to your project in a few seconds and remove it when you're done.

Routing Around

Custom routing seems slightly complicated at first blush due to its disconnected components of RouteHandler, route registration and mapping of custom handlers. But once you understand the relationship between a RouteHandler, the RouteData and how to pass it to a handler, utilizing routing becomes a lot easier, as you can pass context from the registration to the RouteHandler and through to the HttpHandler. The most important thing to understand when building custom routing solutions is how to map URLs in such a way that the handler can figure out all the pieces it needs to process the request. This can be via URL routing parameters or, as I did in my example, by passing additional context information as part of the RouteHandler instance that provides the proper execution context. In my case this 'context' was the method name, but it could be an actual static value like an enum identifying an operation or category in an application, as the sketch below shows. Basically, user-supplied data comes in through the URL, and static application-internal data can be passed via RouteHandler property values.
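To illustrate that last point, here's a minimal sketch of a RouteHandler that passes a static enum value as its context instead of a method name. All of the names (ReportType, ReportRouteHandler, ReportHandler) are hypothetical, invented for this example:

using System.Web;
using System.Web.Routing;

// Hypothetical static context identifying an operation in the application
public enum ReportType { Sales, Inventory }

public class ReportRouteHandler : IRouteHandler
{
    // Static, application-internal context, set once at registration time
    public ReportType Report { get; set; }

    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        // Pass both the static context and the dynamic route values on
        var handler = new ReportHandler();
        handler.Report = Report;
        handler.RouteData = requestContext.RouteData;
        return handler;
    }
}

public class ReportHandler : IHttpHandler
{
    public ReportType Report { get; set; }
    public RouteData RouteData { get; set; }

    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // User-supplied data comes from the URL ({year}),
        // static data comes from the Report property
        string year = RouteData.Values["year"] as string;
        context.Response.Write(Report + " report for " + year);
    }
}

Registration then maps the same handler type with different static context per route:

routes.Add("SalesReport",
    new Route("Reports/Sales/{year}",
        new ReportRouteHandler { Report = ReportType.Sales }));

routes.Add("InventoryReport",
    new Route("Reports/Inventory/{year}",
        new ReportRouteHandler { Report = ReportType.Inventory }));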
Routing can make your application URLs easier to read for non-techie types, regardless of whether you're building service-style or REST applications, or full-on web interfaces. Routing in ASP.NET 4.0 makes it possible to create just about any extensionless URL you can dream up, and a custom RouteHandler lets you hook those URLs up to your own handlers.

References

- Sample Project – includes the sample CallbackHandler service discussed here along with compiled versions of the Westwind.Web and Westwind.Utilities assemblies (requires .NET 4.0/VS 2010)
- West Wind Web Toolkit – includes the full implementation of CallbackHandler and the routing handler
- West Wind Web Toolkit Source Code – contains the full source code to the Westwind.Web and Westwind.Utilities assemblies used in these samples, including the source described in this post (latest build in the Subversion repository)
- CallbackHandler Source – the code relevant to this article, in the Westwind.Web assembly
- JSONView FireFox Plugin – a simple FireFox plugin to easily view JSON data natively in FireFox; for IE you can use a registry hack to display JSON as raw text

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  AJAX  HTTP

    Read the article

  • You Might Be a DBA

    - by BuckWoody
    With all apologies to Jeff Foxworthy, I was up late Friday night on a holiday weekend (which translated into T-SQL becomes “Maintenance Window”) and I got bored in between the two or three minutes I had between clicks. So I started a “Twitter” meme – and it just took off. I haven’t cleaned these up much, but here, in author order as of Saturday the 29th of May is the list “You might be a DBA” from around the Twitterverse: buckwoody Your two main enemies are developers and SAN admins #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You always plan an exit strategy, even when entering a McDonald's #youmightbeaDBA  buckwoody You can't explain to your family what you really do for a living #youmightbeaDBA  buckwoody You have at least one set of scripts you won't share #youmightbeaDBA  buckwoody You have an opinion on the best code-beautifier #youmightbeaDBA  buckwoody You have children older than the rest of your team #youmightbeaDBA  buckwoody You and the Oracle DBA would kill each other, but you'll happily fight off a developer together first #youmightbeaDBA  buckwoody You've threatened to quit if they give anyone the sa password on production #youmightbeaDBA  buckwoody You've sent a vendor suggestions on improving their database design or code (and been ignored) #youmightbeaDBA  buckwoody You've sent a vendor suggestions on improving their database design or code (and been ignored) #youmightbeaDBA  buckwoody You have an opinion on the best code-beautifier #youmightbeaDBA  buckwoody You have at least one set of scripts you won't share #youmightbeaDBA  buckwoody You refer to co-workers as "carbon-units" #youmightbeaDBA  buckwoody Being paranoid is on your resume at the top #youmightbeaDBA  buckwoody Everyone comes to your cube to find the MSDN DVD's #youmightbeaDBA  buckwoody You always plan an exit strategy, even when entering a McDonald's #youmightbeaDBA  buckwoody You've worn down developers to get your way by explaining normalization levels #youmightbeaDBA  buckwoody You refer to clothes as "Data Abstractions" #youmightbeaDBA  buckwoody Users pester you to be able to put data in a database, then they pester you to take it out and put it in Excel #youmightbeaDBA  buckwoody Others try to de-duplicate data, you try to copy it to more than three locations #youmightbeaDBA  buckwoody You have at least one DLT tape in the trunk of your car #youmightbeaDBA  buckwoody You use twitter and facebook to talk with colleagues because there's no one else in your company that does what you do #youmightbeaDBA  buckwoody Your spouse knows what "ETL" means #youmightbeaDBA  buckwoody You've referred to yourself as the "Data Janitor" #youmightbeaDBA  buckwoody You don't have positive connotations of the word "upgrade" #youmightbeaDBA  buckwoody You get your coffee before you check your servers, because you know you won't get any if you don't #youmightbeaDBA  buckwoody You always come to work through the back door so no one hijacks you on the way to your cube #youmightbeaDBA  buckwoody You check your server logs before you check your e-mail in the morning so you can reply "Yeah, I already fixed that." 
#youmightbeaDBA  buckwoody You have more conference badges than clean socks #youmightbeaDBA  buckwoody Your coffee mug says "It depends" #youmightbeaDBA  buckwoody You can convince a boss that you need 16GB of RAM in your laptop #youmightbeaDBA  buckwoody You've used ebay to find production equipment #youmightbeaDBA  buckwoody You pad all project timelines by 2X, and you still miss them #youmightbeaDBA  buckwoody You know when your company is acquiring another even before the CFO #youmightbeaDBA  buckwoody You pad all project timelines by 2X, and you still miss them #youmightbeaDBA  buckwoody You call aspirin "work vitamins" #youmightbeaDBA  buckwoody You get the same amount of sleep even after you have a child #youmightbeaDBA  buckwoody You obsess about performance metrics from over one year ago #youmightbeaDBA  buckwoody The first thing you buy after the database software is aftermarket tools to manage the database software #youmightbeaDBA  buckwoody You've tried to convince someone else to become a DBA #youmightbeaDBA  buckwoody You use twitter and facebook to talk with colleagues because there's no one else in your company that does what you do #youmightbeaDBA  buckwoody You only know other DBA's by their Tweet Handle #youmightbeaDBA  buckwoody You've explained the difference between 32 and 64-bit to more than one manager in terms they can understand, using puppets #youmightbeaDBA  buckwoody Your two main enemies are developers and SAN admins #youmightbeaDBA  buckwoody You've driven to the Datacenter to install SQL Server because "you don't trust those NOC admins" #youmightbeaDBA  buckwoody You pay more for faster Internet connections than cable at home so you don't have to drive in #youmightbeaDBA  buckwoody You call texting a "queuing system" #youmightbeaDBA  buckwoody You know that if someone can read Perl, they manage an Oracle system #youmightbeaDBA  buckwoody You have an e-mail rule for backup notifications #youmightbeaDBA  buckwoody Your food pyramid includes coffee, salt and fat #youmightbeaDBA  buckwoody You wish everything had a graphical query plan #youmightbeaDBA  buckwoody You refactor your e-mails #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You would pay money for a license plate that has the letters S-Q-L together #youmightbeaDBA  buckwoody You have actually considered making a RAID array from thumb drives #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody You've written blog posts on technology you've never actually implemented in production #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody @MidnightDBA Click the #youmightbeaDBA tag. I've had WAY too much coffee today.  
buckwoody There is no other position that is 1-deep except you and the CEO #youmightbeaDBA  buckwoody When you watch "The Office" you call it "OJT" #youmightbeaDBA  buckwoody You would pay money for a license plate that has the letters S-Q-L together #youmightbeaDBA  buckwoody Your blog would make a "best practices" or "worst practices" book #youmightbeaDBA  buckwoody You have actually considered making a RAID array from thumb drives #youmightbeaDBA  buckwoody The first thing you install on your netbook is SSMS #youmightbeaDBA  buckwoody Everything on your laptop is installed from your MSDN subscription #youmightbeaDBA  buckwoody Your watch is set to UTC because it's just easier #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You think data can be represented as something OTHER than XML #youmightbeaDBA  buckwoody You tell people that you made a database query go faster, and expect them to be happy for you #youmightbeaDBA  buckwoody You take the word "NoSQL" as a personal attack #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody * == bad #youmightbeaDBA  buckwoody * == bad #youmightbeaDBA  buckwoody There are just as many females in your technical field as males #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You think that something OTHER than the database might be the performance bottleneck #youmightbeaDBA  buckwoody You refer to time as a "Clustered Index" #youmightbeaDBA  buckwoody You know why "user" refers to both business people and crack addicts #youmightbeaDBA  buckwoody You make plenty of money, but you're excited to get a $2.00 squeeze-ball from Quest and Redgate #youmightbeaDBA  buckwoody You can't explain to your family what you really do for a living #youmightbeaDBA  buckwoody You tell people that you made a database query go faster, and expect them to be happy for you #youmightbeaDBA  buckwoody You think a millisecond is a really long time #youmightbeaDBA  buckwoody You're sitting and typing #youmightbeaDBA when you could be outside #youmightbeaDBA  buckwoody You can't wait for a technical conference so you can wear a kilt - and you're not Scottish #youmightbeaDBA  buckwoody You know that "DBA" stands for "Default Blame Acceptor" #youmightbeaDBA  buckwoody People can use Access as a cross or garlic on you #youmightbeaDBA  buckwoody You know what "the truth, thole truth and nothing but the truth, so help me Codd" means #youmightbeaDBA  buckwoody You've gotten more help from twitter and facebook than all your years in college #youmightbeaDBA  buckwoody You can't talk fast enough to get a concept out of your head so you tweet it instead #youmightbeaDBA  buckwoody You cry when someone doesn't use a WHERE clause #youmightbeaDBA  buckwoody You think data can be represented as something OTHER than XML #youmightbeaDBA  buckwoody You think "Set theory" is not an verb but a noun #youmightbeaDBA  buckwoody You try to convince random strangers to vote on your Connect item #youmightbeaDBA  buckwoody You think 3 hours of contiguous sleep is a good thing #youmightbeaDBA or #youmightbeamother  buckwoody You don't like Oracle, and not just because of what she 
did to Neo #youmightbeaDBA  buckwoody You know when to say "sequel" and "s-q-l" #youmightbeaDBA  buckwoody You know where the data is #youmightbeaDBA  buckwoody You refer to your children as "Fully Redundant Mirrors" #youmightbeaDBA  buckwoody Holiday == "Maintenance Window" #youmightbeaDBA  buckwoody Your laptop is more powerful than the servers in most companies - including your own #youmightbeaDBA  buckwoody You capitalize SELECTed words #youmightbeaDBA  buckwoody You take the word "NoSQL" as a personal attack #youmightbeaDBA  buckwoody You know why "user" refers to both business people and crack addicts #youmightbeaDBA  buckwoody You cringe in public when the word "upgrade" is used in a sentence #youmightbeaDBA  buckwoody Holiday == "Maintenance Window" #youmightbeaDBA  buckwoody All Data Is MetaData means something to you #youmightbeaDBA  buckwoody You've never seen the driveway to your house in the daylight #youmightbeaDBA  buckwoody You think that something OTHER than the database might be the performance bottleneck #youmightbeaDBA  buckwoody Most of your bloodstream is composed of caffeine #youmightbeaDBA  buckwoody Your task list is labeled "CRUD Matrix" #youmightbeaDBA  buckwoody You call your wife/husband a "Linked Server" #youmightbeaDBA  anonythemouse When someone tells you they are going to take a dump and you wonder of which database then #youmightbeaDBA  anonythemouse When it's 11pm on a holiday weekend and you are working #youmightbeaDBA  anonythemouse When you sit down at a table and look for it's primary key #youmightbeaDBA  anonythemouse When getting milk from the fridge you check the expiry date is > getdate() #youmightbeaDBA  blakmk when you wake up dreaming about sql #youmightbeaDBA  CharlesGarver You think a @buckwoody bobblehead would be a cool thing to have on the dashboard of your car #youmightbeaDBA  CharlesGarver Your friends don't understand why you think there's a difference between single and double quotes #youmightbeaDBA  CharlesGarver Even the newest employees know your name from all the downtime notices you've sent out #youmightbeaDBA  CharlesGarver You sometimes feel anxious and think "I should test restoring those backups" and then the feeling passes #youmightbeadba  CharlesGarver You know what a co-worker means when they ask "how is your squirrel server?" 
#youmightbeadba  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver You're willing to move someone's job up in priority for a box of #voodoodonuts #youmightbeaDBA  CharlesGarver Each person in your company seems to think you work for THEM #youmightbeaDBA  CharlesGarver You have a Love/Hate relationship going on with #Microsoft #youmightbeaDBA  CharlesGarver People ask you to troubleshoot their Access program #youmightbeaDBA  CharlesGarver The first words you hear in the morning are 'your voicemail box is full' #youmightbeaDBA  CharlesGarver The thought of disrupting 500 people's work so you can do something doesn't phase you #youmightbeaDBA  CharlesGarver You can't sleep at night and you ponder the logisitcs of collecting every copy of Access for the world's biggest bonfire #youmightbeaDBA  CharlesGarver Your home computer is backed up in 3 different places #youmightbeaDBA  CharlesGarver Your wardrobe for work includes pajamas #youmightbeaDBA  CharlesGarver Someone tells you to look in the INDEX and you look puzzled before finally going to the back of the book. #youmightbeaDBA  chuckboycejr If you have ever set up a SQLAgent job to email your mobile phone to serve as an alarm clock #youmightbeaDBA  chuckboycejr If you'd rather meet Itzik than Jay Z #youmightbeaDBA  chuckboycejr If you'd rather meet Itzik than Jay Z #youmightbeaDBA  chuckboycejr If you'd wrestle a SysAdmin to the ground to implement #DPA best practices as per @aspiringgeek #youmightbeaDBA  databaseguy I need to be up in 7 hours, so I'm off to bed! I'll have to read the rest of @buckwoody's #youmightbeaDBA posts in the AM. (g'night Buck!)  databaseguy When people ask you about your house, the first thing you describe is the network. #youmightbeaDBA  databaseguy The last thing you say at the office each day is, "is anybody else here? I'm shutting off the lights!" #youmightbeaDBA  databaseguy Your blood pressure rises when you read application specs drafted by marketing. #youmightbeaDBA  databaseguy A good day at work is one when nobody pays you no mind. #youmightbeaDBA  databaseguy You care about latches and wait states. #youmightbeaDBA  databaseguy You have worked over 200 hours on a performance tuning project that required no application changes at all. #youmightbeaDBA  databaseguy The late-night security guard knows the names of your spouse and kids. #youmightbeaDBA  databaseguy You have had vigorous debates about whether it should be pronounced "sequel" or "ess-queue-ell". #youmightbeaDBA  databaseguy You have VPN and RDP software installed on your phone ... just in case. #youmightbeaDBA  databaseguy You have edited a data file by hand, just to see what would happen. #youmightbeaDBA  databaseguy You decorate your office walls with database catalog posters. #youmightbeaDBA  databaseguy You've built programs that access data just to keep other developers from asking you to run queries all the time. #youmightbeaDBA  databaseguy When you watch movies like The Matrix, you find yourself calculating the fasibility of storing all that data. #youmightbeaDBA  databaseguy You have tried to convince someone to spend money on an SSD storage array. #youmightbeaDBA  databaseguy When CPU is spiked on a server, you want to gather forensic evidence. 
#youmightbeaDBA  databaseguy You have to remind developers not to push code to production without checking if the database is ready. #youmightbeaDBA  databaseguy Nobody cares what you wear to work, as long as the thing keeps running. #youmightbeaDBA  databaseguy Telepathy is a job requirement when working with app dev teams. #youmightbeaDBA  databaseguy You read database statistics for the educational value. #youmightbeaDBA  databaseguy And your boss freely admits this to anyone within earshot. #youmightbeaDBA  databaseguy Your boss cannot explain or understand what you do. #youmightbeaDBA  databaseguy You envision ERDs when you see a GUI. #youmightbeaDBA  databaseguy You say things like "applications come and go, but data lasts forever." #youmightbeaDBA  databaseguy You have memorized the names of several of the AdventureWorks employees. #youmightbeaDBA  databaseguy You know what MAXDOP setting you can get away with for a big query based on current server load. #youmightbeaDBA  databaseguy And you immediately recognize the recursion in my last tweet. #youmightbeaDBA  databaseguy You find 50 simultaneous tweets from @buckwoody about #youmightbeaDBA :O)  DBAishness You have "funny stories" about the times your developers accidentally deleted the T-log in their test environment. #youmightbeaDBA  DBAishness Planning to slice and dice your MDW data with PowerPivot makes you giggle like a schoolgirl. #youmightbeaDBA  donalddotfarmer You think @buckwoody lives in the "real world." #youmightbeaDBA  jamach09 @buckwoody #youmightbeaDBA Why go outside when you can sit in the nice cool server room?  jamach09 If you refer to procreation as "Replication", #youmightbeaDBA.  jamach09 If you think ORM is a four-letter word, #youmightbeaDBA  JamesMarsh If you have ever preached the value of Source Code Control, #YouMightBeADBA  jethrocarr @venzann You store your shopping list in a ACID compliant DB #youmightbeaDBA  joe_positive @buckwoody thought it stood for "Don't Bother Asking" #youmightbeaDBA  joe_positive when you check your IT Events Calendar before making weekend plans #youmightbeaDBA  LadyRuna You cringe whenever someone calls Excel a database #youmightbeaDBA  LadyRuna When the waiter says he'll be your server today, you ask how many terabytes he is #youmightbeaDBA  LadyRuna you always call the asterisk a "Star" #youmightbeaDBA  LadyRuna You walk into a server room, say "Nice RACK!" and everyone there knows you're talking about server rack... #youmightbeaDBA  LadyRuna You receive more messages from servers than from friends #youmightbeaDBA  LadyRuna hmmm... #youmightbeaDBA if your recipe for gumbo is "SELECT * FROM Refrigerator"  markjholmes @SQLSoldier Heh. #youmightbeaDBA if you correct other DBAs' spelling of @PaulRandal  markjholmes #youmightbeaDBA if you actually test RAID5 vs RAID10 on your SAN because when it comes to configuration, "it depends."  markjholmes #youmightbeaDBA if you have at least 3 definitions of the word "cluster"  MarlonRibunal 3 Words: @BrentO, snicker, & Access #youmightbeaDBA  MarlonRibunal @onpnt @mikeSQL my appeal was a couple of mins late. Enjoying #youmightbeaDBA  MarlonRibunal @mikeSQL @onpnt pls, don't mention bacon #youmightbeaDBA  merv @buckwoody You HATE 3-way joins #youmightbeaDBA  MidnightDBA If you're up at midnight Tweeting about SQL #youmightbeaDBA  MidnightDBA @buckwoody I'd noticed that. 
:) #youmightbeaDBA  mikeSQL when people talk about "their type" you're thinking varchar, bigint, binary, etc #youmightbeadba  mikeSQL people ask you to go to lunch , but you can't go because you're attending #SQLlunch #youmightbeadba  mikeSQL you laugh for hours at all of the #sqlmoviequotes ....things in which a normal individual would scratch their head at. #youmightbeadba  mikeSQL you laugh for hours at all of the #sqlmoviequotes ....things in which a normal individual would scratch their head at. #youmightbeadba  mrdenny If you think that @buckwoody's demo using PowerPivot to analyze index usage data from DMVs is awesome then #youmightbeaDBA  mrdenny You wish @PaulRandal still worked at Microsoft so that they would make a bobble head of him #youmightbeadba  mrdenny When it's 11pm on a holiday weekend, and your posting stupid jokes on Twitter then #youmightbeadba  mrdenny If you go out with friends and wonder why no one's wearing a kilt then #YouMightBeADBA  mrdenny You can't do basic math, but you know off the top of your head how many CALs $14,412 can buy you. #YoumightbeaDBA  mrdenny If you've ever setup a SQL Job to email you to get you out of a regularly scheduled meeting #YouMightBeADBA.  mrdenny You throw up in your mouth a little when ever you here the word "Access". Even if it doesn't relate to a MS product. #YouMightBeADBA  msdtjones You spend more time listening to @buckwoody than your wife #youmightbeaDBA  NFDotCom You perform "hail deltas" on a regular basis. #YouMightBeADBA  NoelMcKinney If you tell your wife you want to go to Columbus Ohio for your wedding anniversary so you can attend #sqlsat42 then #youmightbeaDBA  NoelMcKinney You read a union is on strike and wonder if it's a UNION ALL #youmightbeaDBA  NoelMcKinney You read a union is on strike and wonder if it's a UNION ALL #youmightbeaDBA  NoelMcKinney Someone asks you to throw another log on the fire and you tell them not to worry about it because Autogrowth is turned on #youmightbeaDBA  Nuurdygirl Even if you have a girlfriend...its possible #youmightbeadba. Yeah-i said its possible!  
Nuurdygirl When your girlfriend has to lean around the laptop to kiss you goodnight #youmightbeadba  Old_Man_Fish If you worry about how big your package is and how long it takes to finish #youmightbeaDBA  Old_Man_Fish If you no longer wonder if someone is in trouble or died if you are getting calls at 2AM #youmightbeaDBA  Old_Man_Fish If, when you hear the word ACCESS with no connotation you blood pressure jumps 50 points, #youmightbeaDBA  onpnt When you hear the word inject you immediately get concerned if your databases are OK #youmightbeaDBA  onpnt Your servers haven't been rebooted in a year #youmightbeaDBA  onpnt You know why it's funny when @PaulRandal has the word, "Sheep" in a tweet #youmightbeaDBA  onpnt You have read BOL without actually having a problem to figure out #youmightbeaDBA  onpnt You can type "SELECT columns FROM tables" without typos but tipen ni Banglish ares a messis #youmightbeaDBA  onpnt DR strategies doesn't include the word, RAID in them #youmightbeaDBA  onpnt you can move a SQL Server instance to a new server without the users ever knowing #youmightbeaDBA  onpnt You have made an SSIS package that is more than one step #youmightbeaDBA  onpnt You have the balls to say no to your boss when they ask for the sa password #youmightbeaDBA  onpnt you google to trouble shoot a problem and end up at your own blog (and it fixes it) #youmightbeaDBA  onpnt You talk your wife into moving the family vacation a week earlier so you can attend the areas local SSUG meeting #youmightbeaDBA  onpnt you can explain to a nontechnical person what a deadlock is #youmightbeaDBA  onpnt You hope a girl asks you what your collation is #youmightbeaDBA  onpnt you make jokes that include the words shrink, truncate and 1205. And you are the only one that laughs at them #youmightbeaDBA  onpnt You rate your ability to stay awake to work longer on blogs, twitter, forums and your day to day job with the 5 9's goal #youmightbeaDBA  onpnt you have major surgery and beg the doctor to release you back to work 5 days later because you miss your servers #youmightbeaDBA #TrueStory  onpnt You do have backups and you know how to use them #youmightbeaDBA  onpnt It's the network #youmightbeaDBA  onpnt When the developers get to work your mood changes rapidly #youmightbeaDBA  onpnt When someone says, "PASS", you first think of karaoke #youmightbeaDBA  onpnt Recruiters try to get you to call them *just* because they think you'll give them @BrentO contact info #youmightbeaDBA  onpnt You chuckle every time you go to grab the "CLR" Calcium, Lime and Rust Remover to clean something #youmightbeaDBA  onpnt @MarlonRibunal @mikeSQL Sorry man, it was already in motion ;-) #youmightbeaDBA  onpnt When you have an "I love bacon" sticker on your laptop. #youmightbeaDBA http://twitpic.com/1ry671  onpnt You sing SELECT statements in the shower #youmightbeaDBA  onpnt When you see a chicken it doesn't remind you of food. It reminds you of a guy named Jorge #youmightbeaDBA  onpnt At time, SQL is your mistress #youmightbeaDBA  onpnt Your wife wonders if SQL is the code name of your mistress at times #youmightbeaDBA  onpnt it's Friday and you are on twitter thinking really hard about what would be funny for hash tag #youmightbeaDBA  onpnt You organize your wife's "decorative"pillows on the bed in a B-Tree structure #youmightbeaDBA  PaulWhiteNZ If you: SELECT TOP (1) milk FROM fridge WHERE use_by_date >= GET_DATE() ORDER BY use_by_date ASC #YouMightBeaDBA  RonDBA #youmightbeaDBA if you read @buckwoody's and @BrentO's blogs.  
ryaneastabrook @buckwoody omg, you have to stand up a website with these on them, they are awesome #youmightbeaDBA  soulvy @StrateSQL @LadyRuna Or a "Splat" #youmightbeaDBA  speedracer You can still fall asleep after three cups of coffee #youmightbeaDBA  speedracer You retweet @buckwoody on a Friday night #youmightbeaDBA  speedracer You can still fall asleep after three cups of coffee #youmightbeaDBA  speedracer Developers make you twitch #youmightbeaDBA  sqlagentman You know what X/1024*8 is. #YouMightBeADBA  SqlAsylum Your still in the office at 5:00 on memorial day weekend. #youmightbeadba :)  SQLBob Whenever someone you know gets pregnant you bring up INNER JOINs or SQL Injection attacks... #youmightbeaDBA  SQLChicken You know one or more SQL folks in the community with an animal in their username #youmightbeaDBA  SQLChicken You've used one or more car analogies to explain how a database works #youmightbeaDBA  SQLChicken “@sqljoe: #youmightbeaDBA if you applied to attend #sqlu and requested @SQLChicken to pull strings for you” lmao nice!  SQLChicken When talking about SSIS your discussions break down into various jokes about packages #youmightbeaDBA  SQLChicken Just SEEING the code for cursors makes you break out in hives #youmightbeaDBA  SQLChicken Just SEEING the code for cursors makes you break out in hives #youmightbeaDBA  SQLCraftsman You coined the phrase "Magic SAN Dust" because calling a vendor's marketing claims BS is not acceptable in a meeting. #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  SQLCraftsman You really own a "Stick of Much Developer Whacking" #YouMightBeADBA  SQLCraftsman You coined the phrase "Magic SAN Dust" because calling a vendor's marketing claims BS is not acceptable in a meeting. #YouMightBeADBA  SQLCraftsman Default Blame Acceptor #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  SQLCraftsman Default Blame Acceptor #YouMightBeADBA  SQLCraftsman If you hear about a new feature with the acronym "DAC" and wonder what disaster of a feature it is attached to this time. #YouMightBeADBA  sqljoe #youmightbeaDBA if you wished your wife knew T-sql. 
USE ShoppingList SELECT NecessaryItems from Supermarket WHERE Category<> ("junk food")  sqljoe #youmightbeaDBA if the first thing you kiss when you wake up is your mobile for not waking you up in the middle of the night  sqljoe #youmightbeaDBA if your wife has a "Do Not Fly" family vacation list of her own including your laptop and mobile  sqljoe #youmightbeaDBA if you have researched for DBA Anonymous groups and attended a #SSUG willing to drop your database (vice)  sqljoe #youmightbeaDBA if your only maintenance windows are staff meetings  sqljoe #youmightbeaDBA if you think of yourself as "The One" in The Matrix "balancing the equation" from The Architect's (developers) poor coding  sqljoe #youmightbeaDBA if you think @PaulRandal should have played the Oracle in The Matrix  sqljoe #youmightbeaDBA if home CD & Movie collection is stored in secured containers,in logical order & naming convention,and with a backup copy  sqljoe #youmightbeaDBA if you applied to attend #sqlu and requested @SQLChicken to pull strings for you  sqljoe #youmightbeaDBA if you have tried to TiVo @MidnightDBA broadcasts  sqljoe #youmightbeaDBA if your #sql user group feels like #AA meetings  sqljoe #youmightbeaDBA if you thought of bringing your #sql books to #sqlsaturday and #sqlpass for autographs  sqljoe #youmightbeaDBA if #sqlpass feels like the #oscars  sqljoe #youmightbeaDBA if you are proud of your small package  SQLLawman #youmightbeaDBA when you hear MDX and Acura is not first thought that comes to mind.  sqlrunner If your wife double checks that there isn't a SQLSat within 200 miles of your vacation destination #youmightbeaDBA  sqlrunner When you're on a conference call and your wife thinks your speaking in a foreign language #youmightbeaDBA  sqlrunner When you're on a conference call and your wife thinks your speaking in a foreign language #youmightbeaDBA  sqlrunner You treat the word 'access' as a verb, not a noun #youmightbeaDBA  sqlrunner If you are happy with sub-second performance #youmightbeaDBA  sqlrunner When you know the names of the NOC people AND their families #youmightbeadba  sqlrunner When you know the names of the NOC people AND their families #youmightbeadba  sqlrunner Your company set's up international phone coverage for your cruise #youmightbeaDBA  sqlsamson @buckwoody if your manager asks you for data and you respond with "there's a script for that" #youmightbeadba  sqlsamson @buckwoody If you receive more messages from your server then your spouse #youmightbeadba  SQLSoldier You've spent all night Valentines Day upgrading the SQL Servers and forgot to tell your wife you'd be working late. #youmightbeadba  SQLSoldier You're flattered when someone calls you a geek. #youmightbeadba  SQLSoldier @llangit @mrdenny it's 11pm on a holiday weekend, & your reading stupid jokes on Twitter then #youmightbeadba  SQLSoldier Your manager borrows lunch money from you because your salary is 30% higher than his. #youmightbeaDBA  SQLSoldier You think "intellisense" is a double negative because it's not intelligent nor makes sense. #youmightbeaDBA  SQLSoldier 75% of the emails you receive at home have the phrase "now following you on Twitter!" in the subject line. #youmightbeaDBA  SQLSoldier You petition Ken Burns to remake Office Space because it should have been 18 hours long. #youmightbeaDBA  SQLSoldier You select a candidate for a Jr DBA position because his resume said he's willing to get your coffee. 
#youmightbeaDBA  SQLSoldier Somebody misquotes @PaulRandall and you call him on your cell to verify. #youmightbeaDBA  SQLSoldier You wish the elevator in your building was slower because it's the last time you'll be left alone all day. #youmightbeaDBA  SQLSoldier The developers sacrifice small animals before giving you their code for review. #youmightbeaDBA  SQLSoldier Developers bring you coffee and a BLT when you review their code. #youmightbeaDBA #IWish  SQLSoldier You can get out of any family get-together by saying you have to work and nobody questions it. #youmightbeaDBA  SQLSoldier You've requested a HP Superdome for you "test" box. #youmightbeaDBA  SQLSoldier Your leave work early because your internet connection to the data center is better at home #youmightbeaDBA  SQLSoldier The new CEO asks you to justify your salary, so you go on vacation for 2 weeks. And he never questions you again. #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier A dev. asks if you've heard about some great new feature in SQL and you show the 16 blog posts you wrote on it ... last year #youmightbeaDBA  SQLSoldier Your dev team is still testing SQL 2008 and you're already planning for SQL 11. #youmightbeaDBA #TrueStory  SQLSoldier The new CEO asks you to justify your salary, so you go on vacation for 2 weeks. And he never questions you again. #youmightbeaDBA  SQLSoldier Your dev team is still testing SQL 2008 and you're already planning for SQL 11. #youmightbeaDBA  SQLSoldier You use a cell phone service coverage map to plan your next vacation. #youmightbeaDBA  SQLSoldier You come in to work at 7 AM because it gives you at least 3 hours without any developers around. #youmightbeaDBA  SQLSoldier You figure out a way to make take your wife on a cruise and deduct it as a business expense. #youmightbeaDBA #sqlcruise  SQLSoldier You name your cat SQLDog because the name @SQLCat was already taken. #youmightbeaDBA  SQLSoldier You rate your blog posts based on the number of retweets you get. #youmightbeaDBA  SQLSoldier You disable random logins just to mess with people. #youmightbeaDBA  SQLSoldier You fall for the pickup line, "Hey baby, what's your collation?" #youmightbeaDBA  SQLSoldier You can blame an outage on anyone in the company because you're the only one that knows how to find out what really happened #youmightbeaDBA  SQLSoldier You can blame an outage on anyone in the company because you're the only one that knows how to find out what really happened #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier Your leave work early because your internet connection to the data center is better at home #youmightbeaDBA  SQLSoldier You cheer when Milton burns down the company in Office Space #youmightbeaDBA  SQLSoldier Your think the 4 food groups are coffee, bacon, fast food, and Mountain Dew. #youmightbeaDBA  SQLSoldier You tell someone your job title and they ask "What?" You describe it and they ask "What?". So you say "computer geek". #youmightbeaDBA  SQLSoldier The #1 referrer to your blog is Twitter.com. #youmightbeaDBA  SQLSoldier Your idea of a good time on a Saturday involves free training. #youmightbeaDBA #sqlsat43  SQLSoldier You write a book that all of your co-workers have and none have read it. #youmightbeaDBA  SQLSoldier You write a book that sells a couple thousand copies and is heralded a best seller. 
#youmightbeaDBA  SQLSoldier No matter how sick you are, you go to work if it's time to pass the pager on to the next guy. #youmightbeaDBA #TrueStory  SQLSoldier You go out on the town, and strangers walk up to you and say, "Hey you're that SQL guy" #youmightbeaDBA #TrueStory  SQLSoldier Your wife asks you to fix something, and you request a downtime window. #youmightbeaDBA  SQLSoldier Your wife asks when you'll be home, and you tell her that you wish you knew. #youmightbeaDBA  SQLSoldier Your best pickup line, "Hey baby, what's your collation?" #youmightbeaDBA  SQLSoldier Your wife asks when you'll be home, and you tell her that you wish you knew. #youmightbeaDBA  SQLSoldier You know that @BuckWoody is not someone's porno name. #youmightbeaDBA  SQLSoldier You list TSQL as your native language on the 2010 census. #youmightbeaDBA  SQLSoldier Starbucks' stock price drops every time you go on vacation. #youmightbeaDBA  SQLSoldier You're happy when the web master says that the website is down. #youmightbeaDBA  SQLSoldier You know that @BuckWoody is not someone's porno name. #youmightbeaDBA  SQLSoldier You get mad when someone calls your car a "heap" because you've always considered it to be a "clustered index". #youmightbeaDBA  SQLSoldier Your blog has more hits than your company's website. #youmightbeaDBA  SQLSoldier You systematically remove the asterisk key from all keyboards in the company except yours. #youmightbeaDBA  SQLSoldier When asked if you recycle, you reply that you run sp_cycle_errorlog every night at midnight #youmightbeaDBA  SQLSoldier You wouldn't allow someone named @AdamMachanic to work on your car. #youmightbeaDBA  SQLSoldier You switch offices every 3 days to avoid developers #youmightbeaDBA  SQLSoldier PSS has your number on speed dial. #youmightbeaDBA  SQLSoldier You frown when you they tell Neo that he's going to the Oracle #youmightbeaDBA  swhaley you regretted saying "This shouldn't effect production" #youmightbeaDBA  swhaley you regretted saying "This shouldn't effect production" #youmightbeaDBA  Tarwn A pleasurable saturday means spending the day learning more about what you already do the rest of the week #youmightbeaDBA ...oh, wait...  thelostforum For great justice; all our base are belong to YOU !! #youmightbeadba  thelostforum @SQLSoldier: You need a witness to use a mirror #youmightbeaDBA ;)  TimCost you capitalize key words. always. everywhere. you can't help it, usually don't even notice. #youmightbeaDBA  Toshana Your the only one in your company not impressed with the developers new application. #youmightbeaDBA  venzann Coming soon from a (respected) book publisher - @buckwoody's #youmightbeaDBA  venzann He's on a role tonight. @buckwoody is summing up my life with his #youmightbeaDBA tweets...  venzann I love the #youmightbeaDBA tag. Found at least 6 new DBAs to follow..  venzann He's on a role tonight. @buckwoody is summing up my life with his #youmightbeaDBA tweets...  venzann You use #sqlhelp as a primary resource during troubleshooting #youmightbeaDBA  venzann You insist on stricter password security for your sql servers than you implement on your own laptop #youmightbeaDBA  WesBrownSQL @buckwoody you are up so late the only tweets you see are from @buckwoody #youmightbeaDBA  WesBrownSQL @SQLSoldier you are upgrading all your 2005 prod servers to 2008 R2 on a three day weekend... #youmightbeaDBA  zippy1981 #youmightbeaDBA if everytime you do something with #mongodb you think of the Vulcan proverb "only Nixon could go to China."  

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which Red Hat released a couple of days ago. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that customers most often ask us about when they are seriously evaluating a middleware infrastructure, especially if they are considering JBoss for some reason. I suggest you keep the following question in mind while reading these points: "Why should I choose JBoss instead of WebLogic?"

1) Multi-Datacenter Deployment and Clustering

- D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6, on the other hand, includes no direct D/R support; Red Hat relies on higher-priced third-party tools. When you consider a middleware solution to host your business-critical applications, you should weigh every architectural aspect of the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not plan for D/R, your solution will not be reliable. With Red Hat and JBoss EAP 6, this extra cost considerably increases the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture.

- WebLogic Server 12c supports advanced LAN clustering, detection of dead servers, and a common alert framework. JBoss EAP 6, on the other hand, has limited LAN clustering support with no dead-server detection. It generates no alerts when servers go down (unless you buy JBoss ON, a separate technology that, as of this writing, does not support JBoss EAP 6), and manual intervention is required when servers fail. In most cases, admins must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" to manage failures and clustering anomalies.

- WebLogic Server 12c supports the concept of a Node Manager, a separate process running on the physical or virtual servers that extends administration of the cluster to WebLogic managed servers, which are often distributed across multiple machines and geographic locations. JBoss EAP 6, on the other hand, has no equivalent technology; whole server instances must be managed individually.

- The WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager also enables another very important capability that JBoss EAP lacks: secure administration. When using the WebLogic Node Manager, all administration tasks are sent to the managed servers in a secure tunnel protected by a certificate, which means the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

- WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
Using OTD, WebLogic instances are load-balanced by powerful software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered-systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6, on the other hand, only offers support for the Apache Web Server with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

2) Application and Runtime Diagnostics

- WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6, on the other hand, has no diagnostics capabilities; its only diagnostics tool is the log generated by the application server, and admins are left to analyze thousands of log lines to find out what is going on.

- WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also has an embedded classloader analysis tool, and even a log analyzer that enables administrators and developers to view the logs of multiple servers at the same time. JBoss EAP 6, on the other hand, relies on third-party tools to do anything similar; again, only log searching is offered to find out what's going on.

- WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6, even with JBoss ON ("Operations Network"), a separate technology, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs; one of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere and IIS transparently.

- WebLogic Server 12c, with its JRockit support, offers a tool called JRockit Flight Recorder, which gives developers complete visibility into a given period of production monitoring with zero extra overhead. This automatic recording lets you deeply analyze thread latency, memory leaks, thread contention, resource utilization, stack overflow damage and GC ("Garbage Collection") cycles; observe stop-the-world phenomena in real time; and analyze generational, reference-count and parallel collections as well as mutator threads. JBoss EAP 6 cannot even dream of supporting something similar, not least because Red Hat does not have its own JVM.

3) Application Server Administration

- WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6, on the other hand, has a limited console and provides XML-centric administration. After ten years, JBoss started developing a rudimentary centralized administration that still leaves many administration tasks aside, so admins and developers must touch scripts and XML configuration files for most advanced, and even simple, administration tasks. This leads to error-prone and risky deployments.
Even with JBoss ON, JBoss EAP is not able to offer decent administration features; admins must be highly skilled in JBoss's internal architecture and its management capabilities.

- Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of this writing, JBoss ON does not support JBoss EAP 6, so even this minimal management support is unavailable for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss admins.

- WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6, on the other hand, distinguishes two operational models, standalone mode and domain mode, that are not consistent with each other; depending on the mode used, different administration skills are required.

- WebLogic Server 12c has no single points of failure among its processes and does not need to define any specialized server. The domain model in WebLogic has been available for at least ten years and is production proven. JBoss EAP 6, on the other hand, needs special processes to guarantee JBoss's integrity: the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it can do what the WebLogic domain model does.

- WebLogic Server 12c supports a parallel deployment model that enables artifacts to be deployed at the same time. JBoss EAP 6, on the other hand, has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (say, 120 MB) onto JBoss EAP 6, it will take some minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut the deployment time at least in half compared to JBoss.

4) Support and Upgrades

- WebLogic Server 12c has patch management available. JBoss EAP 6, on the other hand, has no patch management; each JBoss EAP instance must be patched manually. To get this capability you need to buy a separate technology called JBoss ON ("Operations Network") that manages this type of task. But as of this writing, JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

- WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6, on the other hand, has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines have been implemented in the JBoss stack over the last five years, revealing an inability to create a well-architected, proven middleware technology.

- WebLogic Server 12c offers patch prescriptions based on the customer's configuration. JBoss EAP 6, on the other hand, has no such capability; people need to open support tickets and have their installations reviewed by Red Hat support to get a patch prescription.

- Oracle WebLogic Server, independent of version, has 8 years of support with new patches, plus lifetime availability of existing patches beyond that.
JBoss EAP 6, on the other hand, provides patches for a specific application server version for up to 5 years after the release date; JBoss EAP 4 and earlier versions had only 4 years. A good question Red Hat will struggle to answer is: what happens when you find issues after year 5?

5) RAC ("Real Application Clusters") Support

- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.); the Oracle JDBC thin driver is also available. JBoss EAP 6, on the other hand, ships only the standard Oracle JDBC thin driver: load balancing with Oracle RAC is not supported, manual intervention is necessary in case of planned or unplanned RAC downtime, and the situation does not reestablish itself automatically after the downtime ends.

- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds value to critical business applications, leveraging investments in both Oracle database and Oracle middleware technology. JBoss EAP 6, on the other hand, shows no comparable performance gains, even when administrators apply connection-pool tuning.

- WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are building a reliable solution distributed not only across a LAN cluster but across different data centers. JBoss EAP 6 has no such support.

6) Standards and Technology Support

- WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6, on the other hand, became fully Java EE 6 compatible only in its community version three months later, and production ready only a few days before this article was written, in June 2012. Red Hat claims to be the master of innovation and technology proliferation, but compared with Oracle, and even with other proprietary vendors like IBM, it has historically been slow to deliver the newest technologies and standards adherence.

- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors alike. Red Hat, on the other hand, does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as their JVM, which means that without Oracle's involvement, Red Hat's support is limited to the application server layer, and we all know that most problems happen in the JVM layer.

- WebLogic Server 12c natively supports JDK 7, which empowers developers to exploit the full productivity of the Java platform when writing code. This differentiates WebLogic from other application servers (except GlassFish, also managed by Oracle), because JDK 7 introduces remarkable productivity features such as the try-with-resources enhancement, catching multiple exceptions in one catch block, strings in switch statements, JVM improvements in JDBC, I/O, networking, security and concurrency, and, most importantly, native support for multiple non-Java languages. More JDK 7 features are covered in the JDK 7 documentation; a short sketch of the language-level additions follows.
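A minimal, self-contained illustration of the JDK 7 features named above (strings in switch, try-with-resources, multi-catch), assuming nothing beyond the JDK itself; the class name and file argument are arbitrary:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class Jdk7Demo {

    public static void main(String[] args) {
        // Strings in switch: dispatch on a command name directly.
        String command = args.length > 0 ? args[0] : "help";
        switch (command) {
            case "version":
                System.out.println("1.0");
                break;
            case "read":
                readFirstLine(args.length > 1 ? args[1] : "input.txt");
                break;
            default:
                System.out.println("usage: Jdk7Demo [version | read <file>]");
        }
    }

    static void readFirstLine(String path) {
        // try-with-resources: the reader is closed automatically;
        // multi-catch: two unrelated exception types handled in one block.
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            System.out.println(reader.readLine());
        } catch (IOException | IllegalArgumentException e) {
            System.err.println("read failed: " + e.getMessage());
        }
    }
}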
JBoss EAP 6, on the other hand, does not support JDK 7 officially; the community documentation merely comments that "Java SE 7 can be used with JBoss 7," which gives you no guarantee of enterprise support for JDK 7.

- Oracle WebLogic Server 12c integrates with the Spring framework, allowing Spring applications to use WebLogic's transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all of WebLogic's monitoring and administration features. JBoss EAP 6, on the other hand, has no special Spring integration. Red Hat offers a package called "JBoss Web Platform" that in theory supports Spring, but in practice it offers no special integration; it is merely a convenience allowing Red Hat customers to get support for both JBoss and Spring through the same support channel.

7) Lightweight Development

- Oracle WebLogic Server 12c and Oracle GlassFish are fully integrated and can share applications without any modification. Starting with 12c, WebLogic natively understands GlassFish deployment descriptors and specific configurations, offering a true, reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6, on the other hand, has no native support for reusing an existing (or still in development) application from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including: changing the application's libraries and dependencies, patching the DTD or XSD deployment descriptors, refactoring application layers due to classloading issues and anomalies, and rebuilding persistence, business and web layers over issues with "the certified version of a certain dependency" or "frameworks Red Hat does not recommend." If your enterprise IT culture is to develop Java EE applications on community middleware and later transition to vendor-supported middleware, Oracle WebLogic plus Oracle GlassFish offers the more sustainable path.

- WebLogic Server 12c has a very light ZIP distribution (under 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you add roughly another 100 MB, resulting in a larger download footprint. This matters if you plan automated setup of application server instances (for example, to rapidly build a development or staging environment) using Maven or Hudson.

- WebLogic Server 12c is fully integrated with Maven, allowing developers to set up WebLogic domains with a few commands: downloading WebLogic, installation, domain creation and data source deployment are all covered. JBoss EAP 6, on the other hand, offers only limited integration with these tools.

- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 Web Profile enabled. JBoss EAP 6 has no such feature; you must manually disable each container you do not want to use.

- WebLogic Server 12c supports FastSwap, which lets you change classes without redeployment. This is particularly useful when you are developing patches for an application that is already deployed and do not want to redeploy it entirely.
This is the same behavior most application servers offer for JSP pages, but with WebLogic Server 12c you get it for Java classes in general. JBoss EAP 6 has no such support; even JBoss EAP 5 does not support it to this day.

8) JMS and Messaging

- WebLogic Server 12c has a proven, highly scalable JMS implementation dating back to its earliest releases. JBoss EAP 6, on the other hand, ships a still-immature technology called HornetQ, introduced in JBoss EAP 5 to replace everything implemented in previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing with customers and their investments; asked why they changed the implementation and caused such a mess, the answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one." This practice leads to uncomfortable investments that within a short time (sometimes less than a year) will be affected in some way.

- WebLogic Server 12c includes JMS troubleshooting and monitoring features in the WebLogic console and WLDF. JBoss EAP 6, on the other hand, has no direct monitoring in the console: activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

- WebLogic Server 12c delivers extremely good JMS performance and scalability. JBoss EAP 6, on the other hand, relies on an Oracle database or MySQL for its JMS storage mechanism. This means that if an issue happens in production and Red Hat asserts the performance problem is caused by the database, they will not support you on it; they will direct you to call Oracle instead.

- WebLogic Server 12c supports enterprise messaging features such as SAF ("Store and Forward"), distributed queues and topics, and foreign JMS providers, all of which extend the JMS implementation without compromising developer code, keeping things completely transparent. JBoss EAP 6 does not come close to supporting such features. From the developer's point of view that transparency is real: the sketch below sends to a destination through plain JMS, whether or not the destination is distributed.
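A minimal JMS 1.1 producer sketch. The JNDI names jms/OrderConnectionFactory and jms/OrderQueue are hypothetical placeholders; the context factory and URL shown are WebLogic's usual ones, and on WebLogic a distributed queue is looked up exactly like an ordinary queue, with member selection handled by the server rather than the client:

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class OrderSender {
    public static void main(String[] args) throws Exception {
        // Placeholder JNDI settings for a WebLogic target; adjust host and port.
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        Context ctx = new InitialContext(env);

        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/OrderConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue"); // may be a distributed queue

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("order-42"));
        } finally {
            connection.close(); // closes the session and producer too
        }
    }
}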
9) Caching and Grid

- Coherence, the leading and most mature data grid technology from Oracle, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administration console; even Node Manager supports Coherence. JBoss, on the other hand, discontinued JBoss Cache, its caching implementation, just as it did with its messaging implementation (JBossMQ), which was a problem for long-term customers. JBoss EAP 6 ships the young Infinispan project, which is immature and lacks a proven record of successful cases and reliability.

- WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from both WebLogic and other application servers such as JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6, on the other hand, has no such support, and even if it gains one in the future, it will probably cover only its own application server.

- Coherence can manage both the L1 and L2 cache levels, supporting Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan, on the other hand, support only Hibernate. Most important of all, Infinispan has no successful reference case of L1 or L2 cache support with Hibernate, which makes one reflect on its viability.

10) Performance

- WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on that engineered system. Customers benefit from Exalogic optimizations at both the kernel and JVM layers, boosting performance by up to 10X for web, OLTP, JMS and grid applications. JBoss EAP 6, on the other hand, has no investment in engineered systems: customers have no option to deploy onto an ultra-fast Java system when their project grows and performance issues appear.

- WebLogic Server 12c has maintained a performance gain across each new release: since WebLogic 5.1, the overall gain has been close to 4X, roughly a 20% improvement release over release. JBoss, on the other hand, publishes no SPECjAppServer or SPECjEnterprise performance benchmarks. Its so-called "performance gains" remain hidden inside customer environments, which leaves us wondering whether they are real, since we will never get access to those environments.

- WebLogic Server 12c holds industry performance benchmarks with submissions across platforms and configurations, leading SPECj results. Oracle WebLogic leads SPECjAppServer performance in multiple categories, covering all customer topologies: single-node, dual-node, multi-node and multi-node with RAC. JBoss, again, publishes no SPECjAppServer benchmarks at all.

- WebLogic Server 12c has a feature called work managers, which lets your application reach new performance levels based on actual resource utilization. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters together with actual run-time performance and throughput. JBoss EAP 6 has no comparable feature, and probably never will; without work managers, administrators and especially developers are forced to chase performance gains intrusively, rewriting code and doing performance refactorings. A sketch of how application code hands work to a work manager follows.
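A short sketch using the CommonJ Work Manager API (commonj.work) that WebLogic exposes to applications. This is an illustration under assumptions: the JNDI name below refers to WebLogic's default work manager, and a custom work manager, with its own request class and constraints, would be declared as a resource reference and looked up the same way.

import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

public class ReportJob implements Work {

    @Override
    public void run() {
        // The actual workload: runs on a thread the work manager allocates
        // according to its configured request class and constraints.
        System.out.println("generating report on " + Thread.currentThread().getName());
    }

    @Override
    public void release() {
        // Called if the server needs the work to stop early; nothing to clean up here.
    }

    @Override
    public boolean isDaemon() {
        return false; // a short-lived unit of work, not a long-running daemon
    }

    public static void submit() throws Exception {
        // "wm/default" is assumed to be the server's default work manager;
        // a custom one would appear under java:comp/env/wm/<name>.
        WorkManager wm = (WorkManager) new InitialContext()
                .lookup("java:comp/env/wm/default");
        wm.schedule(new ReportJob());
    }
}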
11) Professional Services Support

- With WebLogic Server 12c, as with any other technology sold by Oracle, customers can hire OCS ("Oracle Consulting Services") to manage critical scenarios, assist in deploying new applications, and provide highly skilled consultancy on architecture and best practices, with consultants allocated alongside customer teams. All OCS services are available without restriction, whether the customer has already bought Oracle software or is just starting an implementation before any acquisition. JBoss EAP 6, or Red Hat to be more specific, only offers professional services if you buy subscriptions. If you are developing a new business-critical application and need Red Hat's help with a serious issue or architecture decision, the answer will probably be: "OK, I can help you, but only after you buy subscriptions from me." Red Hat also does not allow its professional services consultants to manage environments running community-based software; they will likely require you to first buy a subscription and download the "enterprise" version, and only then, optionally, hire their consultants.

- Oracle provides a university to educate your team in Oracle technologies, including, of course, specialized WebLogic application server training. At any time and location, you can hire Oracle to train your team and get trustworthy knowledge tailored to your specific needs. Product certifications are also available if your technical people want to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team in its technologies. It mainly sells training and certification for RHEL ("Red Hat Enterprise Linux"); if you ask for more specialized training in JBoss middleware, you will probably be routed to a "certified" partner for localized training, since Red Hat is apparently discontinuing its own education centers, at least here in Brazil. Red Hat has not been able to repeat the success of its RHEL education in its middleware division, since it must first sell you subscriptions before giving you specialized training. And again, the specialized training it does offer is based only on the enterprise version (EAP, in the case of JBoss), which means the courses will be somewhat outdated. There are reports of developers who took official Red Hat training this year (2012) in which an advanced JBoss course still covered JBossMQ as the messaging subsystem, with printed material based on JBossMQ, because the training had been created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other Oracle software, can be downloaded at any time from anywhere: all you need is an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: these are the official binaries that will run forever in your data center. Oracle does not push "specific versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn, or just start building your application with Red Hat's middleware, you must download it from the community website; you are not allowed to download the enterprise version which, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on insecure, unreliable, unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions to get access to the "good" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just look up the "US Oracle Technology Commercial Price List." Oracle also encourages you to get in touch with a sales representative to discuss possible discounts, but you are not required to do so; only if you are interested in buying the technology or want to discuss discount scenarios. JBoss EAP 6, on the other hand, has no cost publicly available on Red Hat's website or in any other medium, or at least such information is not easy to find: the only link you will likely find on the website is "Contact a Sales Representative." That is not a healthy relationship between customer and vendor, and it is not an example of transparency, especially when the software is sold as open.
In such situations, customers expect software prices to be publicly available, so they have the chance to decide, based on the product's existing features, whether the cost is fair.

Conclusion

Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, with a proven record of success around the globe. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is based on the cloud.

