Search Results

Search found 2753 results on 111 pages for 'scott smith'.


  • HTTP post using php and curl failing

    - by user2916484
    I am trying to send an XML file to an external system using the code below, which I found on the internet. However, when I echo the xml variable it does not show me the XML as a string; the XML gets parsed and only the values are displayed. The same thing happens when I send it to the external system, and the call fails. Can you please tell me how to keep the XML from being parsed, so that I can send the entire XML text as a string to the external system? My code is below.

        <?php
        $xml = '<?xml version=\"1.0\" encoding="UTF-8" ?><shiporder><orderID>1234</orderID> <orderperson>John Smith</orderperson></shiporder>';
        $xml2 = '';
        $headers = array(
            "Content-type: text/xml",
            "Content-length: " . strlen($xml),
            "Connection: close",
        );
        // give the path of the third-party site
        $url = "http://<server>:<port>/XMII/Illuminator?service=WSMessageListener&mode=WSMessageListenerServer&NAME=shiporder&IllumLoginName=<name>&IllumLoginPassword=<pswd>";
        echo $xml;
        echo $url;
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        //curl_setopt($ch, CURLOPT_MUTE, 1);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        echo $output;
        curl_close($ch);
        ?>

    Regards, Gita
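    A likely explanation, sketched below: curl does not alter the string at all; a browser simply hides the tags when echo outputs raw XML, so the markup only needs escaping for display. The endpoint URL below is a placeholder, not the poster's real server, and the escaped quotes in the original XML declaration are also cleaned up here.

        <?php
        // Minimal sketch: post the XML unchanged as the raw request body and
        // escape it only for on-screen debugging output.
        $xml = '<?xml version="1.0" encoding="UTF-8"?>'
             . '<shiporder><orderID>1234</orderID>'
             . '<orderperson>John Smith</orderperson></shiporder>';

        // htmlspecialchars() makes the browser show the tags instead of rendering them.
        echo htmlspecialchars($xml, ENT_QUOTES, 'UTF-8');

        $url = "http://example.com/XMII/Illuminator";        // placeholder endpoint
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            "Content-Type: text/xml",
            "Content-Length: " . strlen($xml),
        ));
        curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);          // raw XML string as the body
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $output = curl_exec($ch);
        if ($output === false) {
            echo 'curl error: ' . curl_error($ch);           // surface transport failures
        }
        curl_close($ch);
        ?>

    If the receiving system still rejects the request, checking curl_error() and the returned HTTP status code usually narrows down whether the problem is the payload or the endpoint configuration.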

    Read the article

  • Assistance with CC Processing script

    - by JM4
    I am currently implementing a credit card processing script, most of it as provided by the merchant gateway. The code calls functions within a class and returns a string based on the response. The final PHP code I am using (details removed of course) with example information is:

        <?php
        $gw = new gwapi;
        $gw->setLogin("username", "password");
        $gw->setBilling("John","Smith","Acme, Inc.","888","Suite 200", "Beverly Hills", "CA","77777","US","555-555-5555","555-555-5556","[email protected]", "www.example.com"); // "CA","90210","US","[email protected]");
        $gw->setOrder("1234","Big Order",1, 2, "PO1234","65.192.14.10");
        $r = $gw->doSale("1.00","4111111111111111","1010");
        print $gw->responses['responsetext'];
        ?>

    Here setLogin allows me to log in, setBilling takes the sample consumer information, setOrder takes the order id and description, and doSale takes the amount charged, the CC number and the expiration date. When all the variables are validated and sent off for processing, a string is returned in the following format:

        response=1&responsetext=SUCCESS&authcode=123456&transactionid=23456&avsresponse=M&orderid=&type=sale&response_code=100

    where: response = transaction approved or declined; responsetext = textual response; authcode = transaction authorization code; transactionid = payment gateway transaction id; avsresponse = AVS response code; orderid = original order id passed in the transaction request; response_code = numeric mapping of the processor response. I am trying to solve for the following: How do I take the data which is passed back and display it appropriately on the page? If the transaction failed, or the AVS code doesn't match my liking, or something else is wrong, an error is displayed to the consumer; if the transaction processed, they are taken to a completion page and the transaction id is stored in the SESSION and shown to the consumer. If the response_code value matches a table of values, certain actions are taken, i.e. if code = 100, take the customer to the success page; if code = 300, print a specific error on the original page; etc.
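    A sketch of one way to handle the reply, assuming the gateway really returns the query-string format shown above; the page names and the exact code-to-action mapping are placeholders for illustration:

        <?php
        // Minimal sketch: turn the gateway reply into an array and branch on it.
        $reply = "response=1&responsetext=SUCCESS&authcode=123456&transactionid=23456"
               . "&avsresponse=M&orderid=&type=sale&response_code=100";
        parse_str($reply, $r);          // $r['response_code'], $r['transactionid'], ...

        session_start();
        switch ((int) $r['response_code']) {
            case 100:                                    // approved
                $_SESSION['transactionid'] = $r['transactionid'];
                header("Location: complete.php");        // hypothetical completion page
                exit;
            case 300:                                    // declined / error family
                $error = "Transaction declined: " . htmlspecialchars($r['responsetext']);
                break;
            default:
                $error = "Payment could not be processed. Please check your details.";
        }
        // $error can then be rendered back on the original payment form.
        ?>

    An AVS check can be layered on top by also inspecting $r['avsresponse'] before treating a code 100 reply as a success.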

    Read the article

  • Visual Studio App.config XML Transformation

    - by João Angelo
    Visual Studio 2010 introduced a much-anticipated feature, Web configuration transformations. This feature allows you to configure a web application project to transform the web.config file during deployment based on the current build configuration (Debug, Release, etc). If you haven't already tried it, there is a nice step-by-step introduction post to XML transformations on the Visual Web Developer Team Blog, and for a quick reference on the supported syntax there is this MSDN entry. Unfortunately there is some bad news: this new feature is specific to web application projects, since it resides in the Web Publishing Pipeline (WPP), and is therefore not officially supported in other project types such as Windows applications. The keyword here is officially, because Vishal Joshi has a nice blog post on how to extend its support to app.config transformations. However, the proposed workaround requires that the build action for the app.config file be changed to Content instead of the default None. From the comments on that post it also seems that the workaround will not work for a ClickOnce deployment. Working around this, I tried to remove the build action change requirement and at the same time add ClickOnce support. This effort resulted in a single MSBuild project file (AppConfig.Transformation.targets) available for download from GitHub. It integrates itself into the build process, so in order to add app.config transformation support to an existing Windows Application Project you just need to import this targets file after all the other import directives that already exist in the *.csproj file.

    Before – Without App.config transformation support

        ...
        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Target Name="BeforeBuild">
        </Target>
        <Target Name="AfterBuild">
        </Target>
        </Project>

    After – With App.config transformation support

        ...
        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Import Project="C:\MyExtensions\AppConfig.Transformation.targets" />
        <Target Name="BeforeBuild">
        </Target>
        <Target Name="AfterBuild">
        </Target>
        </Project>

    As a final disclaimer, the testing time was limited, so if you find any problem let me know. The MSBuild project invokes the mage tool, so the Framework SDK must be installed. Update: I finally had some spare time to check the problem reported by Geoff Smith and believe it is solved. The Publish command inside Visual Studio triggers a build workflow different from the one used by the MSBuild command line, and this was causing problems. I posted a new version on GitHub that should now support ClickOnce deployment with app.config transformation both from within Visual Studio and from the MSBuild command line. Also, here is a link to the sample application used to test the new version using the Publish command, with the install location set to CD-ROM or DVD-ROM and the option selected that the application will not check for updates. Thanks to Geoff for spotting the problem.
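    For readers who have not used the transformation syntax before, a minimal illustrative transform file might look like the one below; the key name and value are invented for this example and are not taken from the post.

        <!-- App.Release.config: replaces an appSettings value when building in Release -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <appSettings>
            <add key="ServiceUrl" value="https://prod.example.com/api"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
          </appSettings>
        </configuration>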

    Read the article

  • Sneak peek at next generation Three MiFi unit – Huawei E585

    - by Liam Westley
    Last Wednesday I was fortunate to be invited to a sneak preview of the next generation Three MiFi unit, the Huawei E585. Many thanks to all those who posted questions either via this blog or via @westleyl on Twitter. I think I made sure I asked every question posed to the MiFi product manager from Three UK, and so here are the answers you were after. What is a MiFi? For those who are wondering, a MiFi unit is a 3G broadband modem combined with a WiFi access point, providing 3G broadband data access to up to five devices simultaneously via standard WiFi connections. What is different? It appears the prime task of enhancing the MiFi was to improve the user experience and user interface, both in terms of the device hardware and within the management software used to configure the device. I think this was a very sensible decision, as these areas had substantial room for improvement. Single button operation to switch on, enable WiFi and connect to 3G Improved OLED display (see below), replacing the multi coloured LEDs; including signal strength, SMS notifications, the number of connected clients and data usage Management is via a web based dashboard accessible from any web browser. This is a big win for those running Linux or Mac OS X, for iPad users and, for me, as I can now configure the device from Windows 7 64-bit Charging is via micro USB, the new standard for small USB devices; you cannot use your old charger for the new MiFi unit Automatic reconnection when regaining a signal Improved charging time, which should allow recharging of the device when in use Although subjective, the black and silver design does look more classy than the silver and white plastic of the original MiFi What is the same? Virtually the same size and weight The battery is the same unit as the original MiFi so you’ll have a handy spare if you upgrade Data plans remain the same as the current MiFi, so the cheapest price for upgraders will be £49 pay as you go Still only works on 3G networks, with no fallback to GPRS or EDGE There is no specific upgrade path for existing Three customers, either from dongle or from the original MiFi My opinion I think Three have concentrated on the correct areas of usability and user experience rather than trying to add new whizz-bang technology features which aren’t of interest to mainstream users. The one button operation and the improved device display will make it much easier to use when out and about. If the automatic reconnection proves reliable, that will remove a major bugbear that I experienced the previous evening when travelling on the First Great Western line from Paddington to Didcot Parkway. The signal was repeatedly lost as we sped through tunnels and cuttings, and without automatic reconnection it was a real pain to keep pressing the data button on the MiFi to re-establish my data connection. And finally, the web based dashboard will mean I no longer need to resort to my XP based netbook to configure the SSID and password. My everyday laptop runs Windows 7 64-bit, which appears to confuse the older 3 WiFi manager, which cannot locate the MiFi when connected. 
    Links to other sites, and other images of the device: Good first impressions from Ben Smith, http://thereallymobileproject.com/2010/06/3uk-announce-a-new-mifi-with-a-screen/ and a round-up of other sneak preview posts, http://www.3mobilebuzz.com/2010/06/11/mifi-round-two-your-view/

    Pictures: Here is a comparison of the old MiFi device next to the new device, complete with OLED display and the Huawei logo now being a prominent feature on the front of the device. One of my fellow bloggers had a Linux based netbook, showing off the web based dashboard complete with the Text messages panel to manage SMS. And finally, I never thought that my blog sub title would ever end up printed onto a cup cake ... and here's some of the other cup cakes ...

    Read the article

  • SQLAuthority News – Professional Development and Community

    - by pinaldave
    I was recently invited by Hyderabad Techies to deliver a keynote for their 16-day online event called TECH THUNDERS. The event started on May 15 and will continue up to the end of the month (May 30). There will be a total of 30 sessions; every evening during those 16 days, there will be either one or two sessions from several noted industry experts. It is the same group that received the Microsoft Community Impact Award as the best user group for developers in India. I had never talked about professional development before. Even though this was my first time doing so, I accepted the wonderful challenge for the sake of the thousands in the audience who were expected to attend this online event. Time was of the essence; I had 15 minutes to deliver the keynote and open the event. The reason I was nervous was that I had to cover precisely 15 minutes, no more and no less. If I had an hour, I would have been very confident, because I knew I could do a good job for sure. However, I still needed to open the event as well as I could, even though the time was short. I finally created a small six-slide presentation. In reality, only two slides carried the main content of my keynote; the remaining slides were just wrappers and decoration. You can download the complete slide deck from here. The image used in the slide deck is courtesy of blog reader Roger Smith, who sent it to me. The slide on which I spent a good amount of time is the one that talks about professional development. Its content is as follows: Today, Technology and You: Keep your eyes, ears and senses open – Stay Active! You are not the first one who faced the problem – Search Online! Learn the web – Blogs, Forums and Friends! Trust the Technology, Not Print – Test Everything! Community and You! I had very little time to create the slide deck, as I was busy the whole day delivering Advanced SQL Server Training; I put these slides together during the tea/coffee break of my session. Though it was just six bullet points, I received quite a few emails right after the keynote asking me to talk more about this subject and share the details of my slide deck. I have talked with the event organizer and he will put the keynote online very soon. The subject of the talk is very simple; it revolves around the community. Times have changed, and the Internet has come a long way from where it was many years ago. Now that we are all connected, help via the Internet and useful software is easily available around us. In fact, RSS, newsletters and a few other technologies have progressed so much that helpful news is now delivered to our doorsteps instead of us having to go out and seek it. Sometimes a simple online search solves a lot of problems for many developers. The community is now the first stop for any developer who needs help or just wants to hang around and share some thoughts. I strongly suggest that everybody be a part of the tech community. Be it an online or offline community or just a local user group, I strongly advise all of you to get involved. I am active in the community, and I must say I recommend getting drawn into it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL User Group, SQLAuthority News, T SQL, Technology Tagged: Community

    Read the article

  • Something for the weekend - What's the most complex query?

    - by simonsabin
    Whenever I teach about SQL Server performance tuning I try to get across the message that there is no such thing as a table. Does that sound odd? Well, it isn't, trust me. Rather than tables you need to consider structures. You have 1. Heaps 2. Indexes (b-trees). Some people split indexes in two, clustered and non-clustered. This, I feel, confuses the situation, because people associate clustered indexes with sorting but don't associate non-clustered indexes with sorting, and this is wrong. Clustered and non-clustered indexes are the same b-tree structure (and even more so with SQL 2005), with the leaf pages sorted in a linked list according to the keys of the index. The difference is that non-clustered indexes include in their structure either the clustered key(s) or the row identifier for the row in the table (see http://sqlblog.com/blogs/kalen_delaney/archive/2008/03/16/nonclustered-index-keys.aspx for more details). Beyond that they are the same: they have key columns, which are stored on the root and intermediate pages, and included columns, which are on the leaf level. The reason this is important is that this is how the optimiser sees the world, which means it can use any of these structures to resolve your query. Even if your query only accesses one table, the optimiser can access multiple structures to get your results. One commonly sees this with a non-clustered index scan and then a key lookup (clustered index seek), but importantly it's not restricted to just using one non-clustered index and the clustered index or heap, and that's the challenge for the weekend. So the challenge for the weekend is to produce the most complex single table query. For those clever bods amongst you that are thinking "great, I will just use lots of XQuery functions", sorry, these are the rules:
    1. You have to use a table from AdventureWorks (2005 or 2008)
    2. You can add whatever indexes you like, but you must document these
    3. You cannot use XQuery, Spatial, HierarchyId, Full Text or any open rowset function
    4. You can only reference your table once, i.e. a FROM clause with ONE table and no JOINs
    5. No sub queries
    The aim of this is to show how the optimiser can use multiple structures to build the results of a query and to also highlight why the optimiser is doing that. How many structures can you get the optimiser to use? As an example, create these two indexes on AdventureWorks2008 (the second needs its own name so that both indexes exist side by side):

        create index IX_Person_Person_Name on Person.Person (LastName, FirstName, NameStyle, PersonType)
        create index IX_Person_Person_Modified on Person.Person (BusinessEntityID, ModifiedDate)

        select lastName, ModifiedDate
          from Person.Person
         where LastName = 'Smith'

    You will see that the optimiser has decided not to access the underlying clustered index of the table but to use the two indexes above to resolve the query. This highlights how the optimiser considers all storage structures (clustered indexes, non-clustered indexes and heaps) when trying to resolve a query. So are you up to the challenge for the weekend to produce the most complex single table query? The prize is a pdf version of a popular SQL Server book, or a physical book if you live in the UK.
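    To see how many structures your own attempt touches, one option (my suggestion, not part of the original post) is to capture the actual execution plan for the statement and count the distinct index and heap access operators in it:

        -- Returns the actual execution plan as XML alongside the results,
        -- so you can count the seek/scan operators the optimiser used.
        set statistics xml on
        go
        select lastName, ModifiedDate
          from Person.Person
         where LastName = 'Smith'
        go
        set statistics xml off
        go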

    Read the article

  • Configuring Fed Authentication Methods in OIF / IdP

    - by Damien Carru
    In this article, I will provide examples of how to configure OIF/IdP to map OAM Authentication Schemes to Federation Authentication Methods, based on the concepts introduced in my previous entry. I will show examples for the three protocols supported by OIF: SAML 2.0 SSO, SAML 1.1 SSO and OpenID 2.0. Enjoy the read! Configuration As I mentioned in my previous article, mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). As such, the WLST commands to set those mappings will involve: Either the SP Partner Profile, affecting all Partners referencing that profile which do not override the Federation Authentication Method to OAM Authentication Scheme mappings Or the SP Partner entry, which will only affect that SP Partner It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored. WLST Commands The two OIF WLST commands that can be used to define mappings of Federation Authentication Methods to OAM Authentication Schemes are: addSPPartnerProfileAuthnMethod() to define a mapping on an SP Partner Profile, taking as parameters: The name of the SP Partner Profile The Federation Authentication Method The OAM Authentication Scheme name addSPPartnerAuthnMethod() to define a mapping on an SP Partner, taking as parameters: The name of the SP Partner The Federation Authentication Method The OAM Authentication Scheme name Note: I will discuss the other parameters of those commands in a subsequent article. In the next sections, I will show examples of how to use those methods: For SAML 2.0, I will configure the SP Partner Profile, which will apply the mappings to all SP Partners referencing this profile, unless they override the mapping definition. For SAML 1.1, I will configure the SP Partner. For OpenID 2.0, I will configure the SP/RP Partner. SAML 2.0 Test Setup In this setup, OIF is acting as an IdP and is integrated with a remote SAML 2.0 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Use BasicScheme as the Authentication Scheme Map BasicScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:Password Federation Authentication Method Use OAMLDAPPluginAuthnScheme as the Authentication Scheme Map OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also, the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> BasicScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner Profile to BasicScheme instead of LDAPScheme. I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp") ): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerProfileDefaultScheme() command:setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "BasicScheme") Exit the WLST environment:exit() The user will now be challenged via HTTP Basic Authentication defined in the BasicScheme for AcmeSP. Also, as noted earlier, the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via HTTP Basic Authentication, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> Mapping BasicScheme To change the Federation Authentication Method mapping for the BasicScheme to urn:oasis:names:tc:SAML:2.0:ac:classes:Password instead of urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport for the saml20-sp-partner-profile SAML 2.0 SP Partner Profile (the profile to which my AcmeSP Partner is bound to), I will execute the addSPPartnerProfileAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerProfileAuthnMethod() command:addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:Password", "BasicScheme") Exit the WLST environment:exit() After authentication via HTTP Basic Authentication, OIF/IdP would now issue an Assertion similar to (see that the AuthnContextClassRef was changed from PasswordProtectedTransport to Password): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:Password                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> OAMLDAPPluginAuthnScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner Profile to OAMLDAPPluginAuthnScheme instead of BasicScheme. 
I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp") ): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerProfileDefaultScheme() command:setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() The user will now be challenged via FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrarily to LDAPScheme and BasicScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Methods. As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthnContextClassRef set to OAMLDAPPluginAuthnScheme): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef> OAMLDAPPluginAuthnScheme                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> Mapping OAMLDAPPluginAuthnScheme To add the OAMLDAPPluginAuthnScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport mapping, I will execute the addSPPartnerProfileAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerProfileAuthnMethod() command:addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to PasswordProtectedTransport): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> SAML 1.1 Test Setup In this setup, OIF is acting as an IdP and is integrated with a remote SAML 1.1 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Use OAMLDAPPluginAuthnScheme as the Authentication Scheme Map OAMLDAPPluginAuthnScheme to  the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method Use LDAPScheme as the Authentication Scheme Map LDAPScheme to  the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:1.0:am:password to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> OAMLDAPPluginAuthnScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner to OAMLDAPPluginAuthnScheme instead of LDAPScheme. 
I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme to be used as the default for the SP Partner: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerDefaultScheme() command:setSPPartnerDefaultScheme("AcmeSP", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() The user will be challenged via FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrarily to LDAPScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Methods (in the SP Partner Profile). As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to OAMLDAPPluginAuthnScheme): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="OAMLDAPPluginAuthnScheme">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        
</dsig:Signature>    </saml:Assertion></samlp:Response> Mapping OAMLDAPPluginAuthnScheme To map the OAMLDAPPluginAuthnScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password for this SP Partner only, I will execute the addSPPartnerAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to password): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> LDAPScheme as Authentication Scheme I will now show that by defining a Federation Authentication Mapping at the Partner level, this now ignores all mappings defined at the SP Partner Profile level. 
For this test, I will switch the default Authentication Scheme for this SP Partner back to LDAPScheme, and the Assertion issued by OIF/IdP will not be able to map this LDAPScheme to a Federation Authentication Method anymore, since A Federation Authentication Method mapping is defined at the SP Partner level and thus the mappings defined at the SP Partner Profile are ignored The LDAPScheme is not listed in the mapping at the Partner level I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme to be used as the default for this SP Partner: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerDefaultScheme() command:setSPPartnerDefaultScheme("AcmeSP", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to LDAPScheme): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="LDAPScheme">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> Mapping LDAPScheme at Partner Level To fix this issue, we will need to add the LDAPScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password mapping for this SP Partner only. 
I will execute the addSPPartnerAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from LDAPScheme to password): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> OpenID 2.0 In the OpenID 2.0 flows, the RP must request use of PAPE, in order for OIF/IdP/OP to include PAPE information. For OpenID 2.0, the configuration will involve mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. The WLST command will take a list of policies, delimited by the ',' character, instead of SAML 2.0 or SAML 1.1 where a single Federation Authentication Method had to be specified. Test Setup In this setup, OIF is acting as an IdP/OP and is integrated with a remote OpenID 2.0 SP/RP partner identified by AcmeRP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Map LDAPScheme to  the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies Federation Authentication Methods (the second one is a custom for this use case) LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. 
No Federation Authentication Method is defined OOTB for OpenID 2.0, so if the IdP/OP issue an SSO response with a PAPE Response element, it will specify the scheme name instead of Federation Authentication Methods After authentication via FORM, OIF/IdP would issue an SSO Response similar to: https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=LDAPScheme&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D Mapping LDAPScheme To map the LDAP Scheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies Federation Authentication Methods, I will execute the addSPPartnerAuthnMethod() method (the policies will be comma separated): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeRP", "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant,http://openid-policies/password-protected", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from LDAPScheme to the two policies): 
https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant+http%3A%2F%2Fopenid-policies%2Fpassword-protected&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D In the next article, I will cover how OIF/IdP can be configured so that an SP can request a specific Federation Authentication Method to challenge the user during Federation SSO.Cheers,Damien Carru
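    For convenience, the WLST steps repeated throughout the article can be condensed into a single scripted session, sketched below; the admin server URL and credentials are placeholders, while the command arguments are exactly those used in the examples above.

        # Condensed sketch of the WLST session used in the examples above.
        # Launch via $IAM_ORACLE_HOME/common/bin/wlst.sh; admin server details are placeholders.
        connect('weblogic', '<password>', 't3://adminhost:7001')
        domainRuntime()

        # SAML 2.0: map Password -> BasicScheme for every SP bound to the partner profile
        addSPPartnerProfileAuthnMethod('saml20-sp-partner-profile',
            'urn:oasis:names:tc:SAML:2.0:ac:classes:Password', 'BasicScheme')

        # SAML 1.1: map password -> OAMLDAPPluginAuthnScheme for the AcmeSP partner only
        addSPPartnerAuthnMethod('AcmeSP',
            'urn:oasis:names:tc:SAML:1.0:am:password', 'OAMLDAPPluginAuthnScheme')

        # OpenID 2.0: map two comma-separated PAPE policies to LDAPScheme for the AcmeRP partner
        addSPPartnerAuthnMethod('AcmeRP',
            'http://schemas.openid.net/pape/policies/2007/06/phishing-resistant,'
            'http://openid-policies/password-protected', 'LDAPScheme')

        exit()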

    Read the article

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been using these MERGE operations very actively in my development. However, I have found out from my consultancy work and friends that these amazing operations are not utilized most of the time. Here is my attempt to bring the necessity of using the MERGE operation to the surface one more time. MERGE is a new feature that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; at present, by using the MERGE statement, we can include the logic of such data changes in one statement that checks when the data is matched and simply updates it, and similarly inserts it when the data is unmatched. One of the most important advantages of the MERGE statement is that the data is read and processed only once. In earlier versions, three different statements had to be written to process three different activities (INSERT, UPDATE or DELETE); by using the MERGE statement, all the update activities can be done in one pass of the database table. I have written about these MERGE operations earlier in my blog post here: SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. One of the readers asked me how we know that this operator does everything in a single pass and does not call the Merge operator multiple times. Let us run the same example which I have used earlier; I am listing it here again for convenience.

        --Let's create StudentDetails and StudentTotalMarks and insert some records.
        USE tempdb
        GO
        CREATE TABLE StudentDetails
        (
            StudentID INTEGER PRIMARY KEY,
            StudentName VARCHAR(15)
        )
        GO
        INSERT INTO StudentDetails VALUES(1,'SMITH')
        INSERT INTO StudentDetails VALUES(2,'ALLEN')
        INSERT INTO StudentDetails VALUES(3,'JONES')
        INSERT INTO StudentDetails VALUES(4,'MARTIN')
        INSERT INTO StudentDetails VALUES(5,'JAMES')
        GO
        CREATE TABLE StudentTotalMarks
        (
            StudentID INTEGER REFERENCES StudentDetails,
            StudentMarks INTEGER
        )
        GO
        INSERT INTO StudentTotalMarks VALUES(1,230)
        INSERT INTO StudentTotalMarks VALUES(2,255)
        INSERT INTO StudentTotalMarks VALUES(3,200)
        GO
        -- Select from Table
        SELECT * FROM StudentDetails
        GO
        SELECT * FROM StudentTotalMarks
        GO
        -- Merge Statement
        MERGE StudentTotalMarks AS stm
        USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
            ON stm.StudentID = sd.StudentID
        WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
        WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
        WHEN NOT MATCHED THEN INSERT(StudentID, StudentMarks) VALUES(sd.StudentID, 25);
        GO
        -- Select from Table
        SELECT * FROM StudentDetails
        GO
        SELECT * FROM StudentTotalMarks
        GO
        -- Clean up (drop the referencing table first because of the foreign key)
        DROP TABLE StudentTotalMarks
        GO
        DROP TABLE StudentDetails
        GO

    The MERGE join performs very well and the following result is obtained. Let us check the execution plan for the merge operator and evaluate the execution plan for the Table Merge operator only. We can clearly see that the Number of Executions property shows the value 1, which makes it quite clear that in a single pass the MERGE operation completes the Insert, Update and Delete operations. I strongly suggest that you all use this operation, if possible, in your development. I have seen it implemented in many data warehousing applications. 
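    A small illustrative variation (mine, not from the original post): adding an OUTPUT clause to the same MERGE statement reports, per row, which action was taken in that single pass. Run it in place of the MERGE above, before the clean-up step.

        -- OUTPUT $action shows INSERT, UPDATE or DELETE for each affected row
        MERGE StudentTotalMarks AS stm
        USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
            ON stm.StudentID = sd.StudentID
        WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
        WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
        WHEN NOT MATCHED THEN INSERT(StudentID, StudentMarks) VALUES(sd.StudentID, 25)
        OUTPUT $action AS MergeAction,
               inserted.StudentID AS InsertedStudentID,
               deleted.StudentID  AS DeletedStudentID;
        GO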
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Merge

    Read the article

  • SQL Server 2012 - AlwaysOn

    - by Claus Jandausch
    Ich war nicht nur irritiert, ich war sogar regelrecht schockiert - und für einen kurzen Moment sprachlos (was nur selten der Fall ist). Gerade eben hatte mich jemand gefragt "Wann Oracle denn etwas Vergleichbares wie AlwaysOn bieten würde - und ob überhaupt?" War ich hier im falschen Film gelandet? Ich konnte nicht anders, als meinen Unmut kundzutun und zu erklären, dass die Fragestellung normalerweise anders herum läuft. Zugegeben - es mag vielleicht strittige Punkte geben im Vergleich zwischen Oracle und SQL Server - bei denen nicht unbedingt immer Oracle die Nase vorn haben muss - aber das Thema Clustering für Hochverfügbarkeit (HA), Disaster Recovery (DR) und Skalierbarkeit gehört mit Sicherheit nicht dazu. Dieses Erlebnis hakte ich am Nachgang als Einzelfall ab, der so nie wieder vorkommen würde. Bis ich kurz darauf eines Besseren belehrt wurde und genau die selbe Frage erneut zu hören bekam. Diesmal sogar im Exadata-Umfeld und einem Oracle Stretch Cluster. Einmal ist keinmal, doch zweimal ist einmal zu viel... Getreu diesem alten Motto war mir klar, dass man das so nicht länger stehen lassen konnte. Ich habe keine Ahnung, wie die Microsoft Marketing Abteilung es geschafft hat, unter dem AlwaysOn Brading eine innovative Technologie vermuten zu lassen - aber sie hat ihren Job scheinbar gut gemacht. Doch abgesehen von einem guten Marketing, stellt sich natürlich die Frage, was wirklich dahinter steckt und wie sich das Ganze mit Oracle vergleichen lässt - und ob überhaupt? Damit wären wir wieder bei der ursprünglichen Frage angelangt.  So viel zum Hintergrund dieses Blogbeitrags - von meiner Antwort handelt der restliche Blog. "Windows was the God ..." Um den wahren Unterschied zwischen Oracle und Microsoft verstehen zu können, muss man zunächst das bedeutendste Microsoft Dogma kennen. Es lässt sich schlicht und einfach auf den Punkt bringen: "Alles muss auf Windows basieren." Die Überschrift dieses Absatzes ist kein von mir erfundener Ausspruch, sondern ein Zitat. Konkret stammt es aus einem längeren Artikel von Kurt Eichenwald in der Vanity Fair aus dem August 2012. Er lautet Microsoft's Lost Decade und sei jedem ans Herz gelegt, der die "Microsoft-Maschinerie" unter Steve Ballmer und einige ihrer Kuriositäten besser verstehen möchte. "YOU TALKING TO ME?" Microsoft C.E.O. Steve Ballmer bei seiner Keynote auf der 2012 International Consumer Electronics Show in Las Vegas am 9. Januar   Manche Dinge in diesem Artikel mögen überspitzt dargestellt erscheinen - sind sie aber nicht. Vieles davon kannte ich bereits aus eigener Erfahrung und kann es nur bestätigen. Anderes hat sich mir erst so richtig erschlossen. Insbesondere die folgenden Passagen führten zum Aha-Erlebnis: “Windows was the god—everything had to work with Windows,” said Stone... “Every little thing you want to write has to build off of Windows (or other existing roducts),” one software engineer said. “It can be very confusing, …” Ich habe immer schon darauf hingewiesen, dass in einem SQL Server Failover Cluster die Microsoft Datenbank eigentlich nichts Nenneswertes zum Geschehen beiträgt, sondern sich voll und ganz auf das Windows Betriebssystem verlässt. Deshalb muss man auch die Windows Server Enterprise Edition installieren, soll ein Failover Cluster für den SQL Server eingerichtet werden. Denn hier werden die Cluster Services geliefert - nicht mit dem SQL Server. 
Er ist nur lediglich ein weiteres Server Produkt, für das Windows in Ausfallszenarien genutzt werden kann - so wie Microsoft Exchange beispielsweise, oder Microsoft SharePoint, oder irgendein anderes Server Produkt das auf Windows gehostet wird. Auch Oracle kann damit genutzt werden. Das Stichwort lautet hier: Oracle Failsafe. Nur - warum sollte man das tun, wenn gleichzeitig eine überlegene Technologie wie die Oracle Real Application Clusters (RAC) zur Verfügung steht, die dann auch keine Windows Enterprise Edition voraussetzen, da Oracle die eigene Clusterware liefert. Welche darüber hinaus für kürzere Failover-Zeiten sorgt, da diese Cluster-Technologie Datenbank-integriert ist und sich nicht auf "Dritte" verlässt. Wenn man sich also schon keine technischen Vorteile mit einem SQL Server Failover Cluster erkauft, sondern zusätzlich noch versteckte Lizenzkosten durch die Lizenzierung der Windows Server Enterprise Edition einhandelt, warum hat Microsoft dann in den vergangenen Jahren seit SQL Server 2000 nicht ebenfalls an einer neuen und innovativen Lösung gearbeitet, die mit Oracle RAC mithalten kann? Entwickler hat Microsoft genügend? Am Geld kann es auch nicht liegen? Lesen Sie einfach noch einmal die beiden obenstehenden Zitate und sie werden den Grund verstehen. Anders lässt es sich ja auch gar nicht mehr erklären, dass AlwaysOn aus zwei unterschiedlichen Technologien besteht, die beide jedoch wiederum auf dem Windows Server Failover Clustering (WSFC) basieren. Denn daraus ergeben sich klare Nachteile - aber dazu später mehr. Um AlwaysOn zu verstehen, sollte man sich zunächst kurz in Erinnerung rufen, was Microsoft bisher an HA/DR (High Availability/Desaster Recovery) Lösungen für SQL Server zur Verfügung gestellt hat. Replikation Basiert auf logischer Replikation und Pubisher/Subscriber Architektur Transactional Replication Merge Replication Snapshot Replication Microsoft's Replikation ist vergleichbar mit Oracle GoldenGate. Oracle GoldenGate stellt jedoch die umfassendere Technologie dar und bietet High Performance. Log Shipping Microsoft's Log Shipping stellt eine einfache Technologie dar, die vergleichbar ist mit Oracle Managed Recovery in Oracle Version 7. Das Log Shipping besitzt folgende Merkmale: Transaction Log Backups werden von Primary nach Secondary/ies geschickt Einarbeitung (z.B. Restore) auf jedem Secondary individuell Optionale dritte Server Instanz (Monitor Server) für Überwachung und Alarm Log Restore Unterbrechung möglich für Read-Only Modus (Secondary) Keine Unterstützung von Automatic Failover Database Mirroring Microsoft's Database Mirroring wurde verfügbar mit SQL Server 2005, sah aus wie Oracle Data Guard in Oracle 9i, war funktional jedoch nicht so umfassend. Für ein HA/DR Paar besteht eine 1:1 Beziehung, um die produktive Datenbank (Principle DB) abzusichern. Auf der Standby Datenbank (Mirrored DB) werden alle Insert-, Update- und Delete-Operationen nachgezogen. Modi Synchron (High-Safety Modus) Asynchron (High-Performance Modus) Automatic Failover Unterstützt im High-Safety Modus (synchron) Witness Server vorausgesetzt     Zur Frage der Kontinuität Es stellt sich die Frage, wie es um diesen Technologien nun im Zusammenhang mit SQL Server 2012 bestellt ist. Unter Fanfaren seinerzeit eingeführt, war Database Mirroring das erklärte Mittel der Wahl. 
I am not a product manager at Microsoft and can only offer my opinion here, but if you consult the SQL AlwaysOn team blog, things do not look good for Database Mirroring - at least not in the long run. "Does AlwaysOn Availability Group replace Database Mirroring going forward?" "The short answer is we recommend that you migrate from the mirroring configuration or even mirroring and log shipping configuration to using Availability Group. Database Mirroring will still be available in the Denali release but will be phased out over subsequent releases. Log Shipping will continue to be available in future releases." With that, we have finally arrived at the actual topic. What is a so-called Availability Group, and what exactly is behind the promising-sounding label AlwaysOn?
SQL Server 2012 - AlwaysOn
Two HA features hide behind the "AlwaysOn" branding: on the one hand AlwaysOn Failover Clustering, aka SQL Server Failover Cluster Instances (FCI), and on the other the AlwaysOn Availability Groups.
Failover Cluster Instances (FCI)
- Corresponds roughly to Oracle's stretch cluster concept
- Builds on Windows Server Failover Clustering (WSFC)
- Provides HA at the instance level
AlwaysOn Availability Groups
- Similar to the idea of consistency groups found in storage-level replication software such as EMC SRDF
- Depends on Windows Server Failover Clustering (WSFC)
- Provides HA at the database level
Note: Do not confuse a SQL Server database with an Oracle database, nor an Oracle instance with a SQL Server instance. The same terms have different meanings here - not infrequently the reason why Oracle and Microsoft DBAs quickly talk past each other. Think of a SQL Server database more as an Oracle schema; that comes closer to the truth. Something like the SQL Server Northwind database is comparable to the Oracle Scott schema. If you want to know the exact differences, you will find a detailed description in my book "Oracle10g Release 2 für Windows und .NET", available from Lehmanns, Amazon, etc.
Windows Server Failover Clustering (WSFC)
As you can see, both AlwaysOn technologies are in turn based on Windows Server Failover Clustering (WSFC), on the one hand to provide high availability at the instance level and on the other at the database level. Hence a short description of WSFC. WSFC is an infrastructure feature shipped with the Windows operating system that provides HA for server applications such as Microsoft Exchange, SharePoint, SQL Server, etc. Like any other cluster, a WSFC cluster consists of a group of independent servers that work together to increase the availability of an application or service. If a cluster node or service fails, the service previously hosted on that node can be transferred automatically or manually to another node available in the cluster - commonly known as failover. Under SQL Server 2012, both the AlwaysOn Availability Groups and the AlwaysOn Failover Cluster Instances use WSFC as the platform technology to register components as WSFC cluster resources. Related resources are combined into a resource group, which can be made dependent on other WSFC cluster resources.
The WSFC cluster service can then detect the need to restart the SQL Server instance, or trigger an automatic failover to another server node in the WSFC cluster.
Failover Cluster Instances (FCI)
A SQL Server Failover Cluster Instance (FCI) is a single SQL Server instance that runs in a failover cluster made up of several Windows Server Failover Clustering (WSFC) nodes, thereby providing HA at the instance level. Using a multi-subnet FCI, remote DR (disaster recovery) can also be supported. Another option for remote DR is to run a database hosted under FCI in an availability group - more on that later.
FCI and WSFC
Basic FCI, which is used for local high availability of instances, resembles the outdated architecture of a cold cluster (active-passive). Under SQL Server 2008 this technology was called SQL Server 2008 Failover Clustering, and it used the Windows Server Failover Cluster. In SQL Server 2012 Microsoft has bundled this base technology under the AlwaysOn label, but it is still the classic active-passive configuration. In a failover, the sequence is as follows:
- As long as no hardware or system error has occurred, all dirty pages in the buffer cache are written to disk
- All corresponding SQL Server services in the resource group are stopped on the active node
- Ownership of the resource group is transferred to another node of the FCI
- The new owner of the resource group starts its SQL Server services
- Connection requests from client applications are automatically redirected to the new active node using the same Virtual Network Name (VNN)
Depending on when the last checkpoint took place, the number of dirty pages in the buffer cache that still have to be written to disk can lead to unpredictably long failover times. To throttle this number, SQL Server 2012 has a new capability called indirect checkpoints. Indirect checkpoints resemble the Fast-Start MTTR Target feature of the Oracle database, which was already available with Oracle9i.
SQL Server Multi-Subnet Clustering
Conceptually, a SQL Server multi-subnet failover cluster corresponds to an Oracle RAC stretch cluster - but only at first glance. In contrast to RAC, in a local SQL Server failover cluster only one node at a time is active for a given database. For data replication between geographically separated sites, Microsoft relies on third-party solutions for storage mirroring. The improvement this scenario gains from a SQL Server 2012 implementation is simply that a VLAN (Virtual Local Area Network) configuration is no longer required, as it was before. The following diagram shows how the process is handled with SQL Server 2012; in site A and site B, HA is ensured by a local active-passive cluster. Particular attention must be paid here to configuration and tuning, otherwise completely unacceptable failover times result. The reason is that the downtime experienced on the client side no longer depends only on the pure failover time, but additionally on the duration of DNS replication between the DNS servers. (Remember that we are talking about multi-subnet clustering here.)
In addition, you have to consider how quickly clients query the updated DNS information. Special configurations for the node heartbeat, HostRecordTTL (host record time-to-live) and the intersite replication frequency for Active Directory Sites and Services become necessary.
- Default TTL for Windows Server 2008 R2: 20 minutes; recommended setting: 1 minute
- DNS update replication frequency in a Windows environment: 180 minutes; recommended setting: 15 minutes (the minimum value)
Looking at these values, you have to conclude that even an optimal configuration cannot meet the rigid SLAs (service level agreements) that today's business-critical applications demand for HA and DR, because it implies a failover time of a total of 16 minutes as experienced on the client side. Here is an excerpt from the SQL Server 2012 online documentation: "Cons: If a cross-subnet failover occurs, the client recovery time could be 15 minutes or longer, depending on your HostRecordTTL setting and the setting of your cross-site DNS/AD replication schedule." We have now reached the point in our considerations that explains why I used the "Windows was the God ..." quote earlier. The unconditional dependency on Windows increasingly becomes a problem, because it increases the complexity of a Microsoft-based solution instead of reducing it. And complexity is the last thing CIOs want these days. In fairness to SQL Server 2012 and AlwaysOn, it must be said that such long failover times are not an unconditional "must" but a "can". Yet even a "can" can have unforeseeable and costly consequences at the wrong moment. The unpredictability, in turn, is caused by the many components involved in the implementation and their dependencies - for example, three cluster solutions (two from Microsoft, one third-party product). However you turn the matter, there is no getting around this fact - quite independently of the length of a downtime or of failover times. In contrast to AlwaysOn and the version of a stretch cluster presented here, a corresponding Oracle implementation avoids this kind of complexity caused by multiple dependencies. The difference is made by database-integrated mechanisms such as Fast Application Notification (FAN) and Fast Connection Failover (FCF). For Oracle MAA (Maximum Availability Architecture) configurations, inter-site failover times in the range of seconds are not unusual. If you follow the link to the Oracle MAA, you will also find a number of customer case studies. This, too, is an important differentiator from AlwaysOn, because the Oracle technology has already proven itself many times over in highly critical environments.
Availability Groups
The so-called Availability Groups are - alongside FCI - the other building block of AlwaysOn.
Note: Before we look at them more closely, remember once again that a SQL Server database does not have the same meaning as an Oracle database; it corresponds more to an Oracle schema. Something like the SQL Server Northwind database is comparable to the Oracle Scott schema.
An availability group is made up of a set of several user databases that are treated together as a group in the event of a failover.
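To make the term concrete, here is a hedged T-SQL sketch of how such a group is typically created on the primary replica in SQL Server 2012. All database names, paths, instance names and endpoints are placeholders, and the sketch assumes that WSFC is in place and the AlwaysOn feature is enabled on both instances:

-- The databases must use the full recovery model and have a full backup (see the requirements below).
ALTER DATABASE SalesDB   SET RECOVERY FULL;
ALTER DATABASE FinanceDB SET RECOVERY FULL;
BACKUP DATABASE SalesDB   TO DISK = N'\\backupshare\SalesDB.bak';
BACKUP DATABASE FinanceDB TO DISK = N'\\backupshare\FinanceDB.bak';

-- Group both user databases so they fail over together; the two replicas
-- deliberately mix synchronous and asynchronous commit mode.
CREATE AVAILABILITY GROUP AG_Sales
FOR DATABASE SalesDB, FinanceDB
REPLICA ON
    N'NODE1\INST1' WITH (ENDPOINT_URL = N'TCP://node1.contoso.local:5022',
                         AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                         FAILOVER_MODE = AUTOMATIC),
    N'NODE2\INST1' WITH (ENDPOINT_URL = N'TCP://node2.contoso.local:5022',
                         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                         FAILOVER_MODE = MANUAL);
-- The secondary copies still have to be restored WITH NORECOVERY and joined on the other node.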
An availability group supports one set of primary databases (the primary replica) and one to four sets of corresponding secondary databases (secondary replicas). Not every SQL Server database can be placed in an AlwaysOn availability group, however. SQL Server specialist Michael Otey lists the following requirements in his SQL Server Pro article:
- Availability groups must be created with user databases; system databases cannot be used
- The databases must be in read-write mode; read-only databases are not supported
- The databases in an availability group must be multi-user databases
- They must not use the AUTO_CLOSE feature
- They must use the full recovery model, and a full backup must exist
- A given database can belong to only a single availability group, and that database must not be configured for database mirroring
- Microsoft also recommends that the directory path of a database be identical on the primary and secondary servers
As you can see, availability groups are not suited to covering HA and DR completely. The distinction between the instance level (FCI) and the database level (availability groups) matters a great deal. I was recently told that with availability groups you can do without shared storage and thereby save costs. So far, so good ... Of course you can build an installation purely with availability groups and without FCI - but then you should be aware of everything you have not protected by doing so, and what that in turn means for disaster recovery (DR) and SLAs (service level agreements). In short, in practice you can hardly avoid combining both AlwaysOn products, with all the complexity that entails.
Availability Groups and WSFC
AlwaysOn depends on Windows Server Failover Clustering (WSFC) to monitor and manage the current roles of the availability replicas in an availability group and to decide how a failover event affects the availability replicas. The following diagram shows the relationship between availability groups and WSFC.
The availability mode is a property of each availability replica, so synchronous and asynchronous replicas can be mixed:
Availability mode
- Asynchronous commit mode: the primary replica commits transactions without waiting for the secondary
- Synchronous commit mode: the primary replica waits for the commit of the secondary replica
Failover types
- Automatic, manual, forced (with possible data loss)
- Synchronous commit mode: planned, manual failover without data loss; automatic failover without data loss
- Asynchronous commit mode: only forced, manual failover with possible data loss
SQL Server has no separate switchover concept as in Oracle Data Guard; for SQL Server, all role transitions are called failover. In fact, SQL Server does not support a switchover for asynchronous connections - there is only the form of forced failover with possible data loss. A capability similar to a switchover under Oracle Data Guard simply does not exist.
SQL Server FCI with Availability Groups
In addition to the availability groups, a second failover level can be set up by implementing SQL Server FCI (on shared storage) with WSFC.
An availability replica can then be hosted on a standalone instance or on an FCI instance. To be clear: the availability groups themselves do not require shared storage. This combination can be used for local HA at the instance level and for DR at the database level through availability groups. The following diagram shows this scenario.
Caution! This is not a counterpart to Oracle RAC plus Data Guard, even if the picture may give that impression - all secondary nodes in the FCI are purely passive. There is also a further and serious restriction: SQL Server Failover Cluster Instances (FCI) do not support automatic AlwaysOn failover for availability groups. Every availability replica hosted under FCI can only be configured for manual failover.
Readable secondary replicas
One or more availability replicas in an availability group can be configured for read access while they are running as a secondary replica. This resembles Oracle Active Data Guard, but there are restrictions. All queries against the secondary database are automatically mapped to the snapshot isolation level, which is based on row versioning. Microsoft was attempting to emulate Oracle's MVRC (Multi Version Read Consistency) with this; in fact, SQL Server snapshot isolation is better compared to Oracle Flashback. The implementation of the snapshot isolation level is a feature bolted on after the fact, not an inherent part of the database kernel as in Oracle's case. (I will write another blog post on this shortly, when I look at the new SQL Server 2012 core licensing.) In practice, mapping everything to the snapshot isolation level creates serious restrictions that you should be aware of before going into production. First, if an active transaction exists on the primary database at the moment a readable secondary replica is added to the availability group, the row versions on the corresponding secondary database will not be fully available right away. An active transaction on the primary replica must first complete (commit or rollback) and that transaction record must be processed on the secondary replica. Until then the isolation level mapping on the secondary database is incomplete and queries are temporarily blocked. Microsoft says: "This is needed to guarantee that row versions are available on the secondary replica before executing the query under snapshot isolation as all isolation levels are implicitly mapped to snapshot isolation." (SQL Storage Engine Blog: AlwaysOn: I just enabled Readable Secondary but my query is blocked?) Fundamentally this means that an active readable replica cannot be added to the availability group without temporarily quiescing the primary replica. Second, because read operations are mapped to the snapshot isolation transaction level, the cleanup of ghost records on the primary replica can be blocked by transactions on one or more secondary replicas - for example by a long-running query on the secondary replica. This cleanup is also blocked if the connection to the secondary replica drops or the data exchange is interrupted.
Log truncation is also prevented while this state lasts. If it persists for a longer period, Microsoft recommends removing the secondary replica from the availability group - which is a serious downtime problem. Furthermore, the read-only workload on the secondary replicas can block incoming DDL changes. Although the read operations hold no shared locks because of the row versioning, they do take Sch-S locks (schema stability locks), and DDL changes applied by redo operations can be blocked by them. If DDL is blocked by a competing read workload and the threshold for 'recovery interval' (a SQL Server configuration option) is exceeded, SQL Server generates the event sqlserver.lock_redo_blocked, in response to which Microsoft recommends killing the blocking readers. No consideration whatsoever is given to the availability of the application. None of these restrictions exists with Oracle Active Data Guard.
Backups on secondary replicas
On the secondary replicas, backups (BACKUP DATABASE via Transact-SQL) can only be created as copy-only backups of a full database, files or filegroups. Creating incremental backups is not supported, which is a serious shortfall compared to the backup support for physical standbys under Oracle Data Guard. Note: a possible workaround via snapshots remains a workaround. Another limitation of this feature compared with Oracle Data Guard is that the backup of a secondary replica cannot run if the replica cannot communicate with the primary replica. In addition, the secondary replica must be synchronized, or currently synchronizing, for the backup to be created on it.
Comparing Microsoft AlwaysOn with the Oracle MAA
I come back to the question mentioned at the beginning, asked of me more than once - "When will Oracle offer something comparable to AlwaysOn - and will it at all?" - and my associated (brief) irritation. If you have read this blog post up to this point, you now know my answer. Not every point applies to everyone, and that was never the deeper purpose of my answer. If, for example, no multi-subnet setup is involved, all the criticisms relating to it are initially moot - which does not mean they could not become relevant again tomorrow (never say "never"). Other pitfalls, in turn, are not necessarily hit in a test environment but only in live operation - all the more so if you are not aware of potential problems and do not run dedicated tests. And whoever wants to position AlwaysOn successfully will have no interest at all in pointing out possible weaknesses and the proverbial devil in the details. That is not an accusation - it is only human. It is also understandable that people concentrate first and foremost on "what works" and "what runs well" rather than on "what can lead to problems" or "what does not work". Who wants to be the killjoy? Speaking for myself, I can only say that I would rather know about all possible limitations up front than experience them painfully first-hand after a short period of blissful ignorance.
I am convinced you feel the same way. Below, therefore, is a summary of all the points that I consider absolutely worth mentioning in comparison with the Oracle MAA (Maximum Availability Architecture), should an evaluation of Microsoft AlwaysOn be on the table.
1. AlwaysOn is a complex technology
The SQL Server AlwaysOn stack is composed of three different technologies: Windows Server Failover Clustering (WSFC), SQL Server Failover Cluster Instances (FCI) and SQL Server Availability Groups. Such a solution cannot be called seamless, as the many restrictions documented by Microsoft themselves attest. While earlier SQL Server versions evolved towards HA/DR technologies of their own (such as Database Mirroring), Microsoft now recommends migrating away from them. But why this change of course? It does not lead to a consistent and robust offering of HA/DR technology for business-critical environments. Does the answer lie in my thesis that "Windows was the God ..." still holds, and that nobody wants to see the downsides of such tight coupling to Windows? Decide for yourself ...
2. Failover Cluster Instances - no RAC counterpart
The SQL Server and Windows Server clustering technology is still based on the outdated active-passive model and leads to a waste of system resources. Looking at only two nodes, the full added value of an active-active cluster (such as Real Application Clusters), which Oracle developed ten years ago, is not immediately apparent. But once you know the advantages of scaling by simply adding further cluster nodes, which then all work together as a single logical system, you understand what is behind the motto "pay as you grow". An active-active cluster is also about high availability - and a failover completes faster than in an active-passive model - but that is not all it is about. It should be pointed out here that the Oracle 11g Standard Edition already includes the use of Oracle RAC on up to four sockets at no extra charge. If you want to run it on Windows, you do not need the Windows Server Enterprise Edition, since Oracle 11g ships its own clusterware. You get high availability and scalability and can use the less expensive Windows Server Standard Edition for it.
3. SQL Server multi-subnet clustering - dependency on third-party storage mirroring
The SQL Server multi-subnet clustering architecture supports building a stretch cluster, but it is based on the active-passive model. The real problem, however, is that to protect the database you rely on third-party storage mirroring technology, with no integration between Windows Server Failover Clustering (WSFC) and the underlying mirroring technology. If a failover then takes place at the instance level in the cluster, there is no coordination with a possible failover at the level of the storage array.
4. Availability groups - four, or really only two?
A primary replica allows up to four secondary replicas within an availability group, but only two in synchronous commit mode.
While this is an advantage over the strict 1:1 model of Database Mirroring, SQL Server 2012 still falls far behind Oracle Data Guard with its up to 30 direct standby targets - and many more possible via cascading targets. That also makes Oracle Active Data Guard suitable for providing reader-farm scalability for internet-based businesses; with AlwaysOn availability groups this is not possible.
5. Availability groups - no asynchronous switchover
Availability group technology is also positioned as a suitable tool for administrative tasks such as upgrades or maintenance work. You have to be aware of a serious deficit, however: in asynchronous availability mode, the only option for a role transition is a forced failover with data loss! To avoid losing data during planned maintenance, you have to configure the synchronous availability mode, which in turn has serious implications for WAN deployments. Follow this thought to its conclusion and you arrive at the verdict that availability group technology cannot be used effectively for planned maintenance in such an environment.
6. Automatic failover - not always possible
Both SQL Server FCI and availability groups support automatic failover. But if you want to combine them, the result will be no automatic failover, because their interaction in a failover situation leads to race conditions, which is why this configuration no longer permits automatic failover to a replica in an availability group. Here again the deeper problem of AlwaysOn shows itself: a composition of different technologies and the dependency on Windows.
7. Problematic RTO (Recovery Time Objective)
Microsoft positions the SQL Server multi-subnet clustering architecture as a usable HA/DR architecture. But considering the issues around DNS replication and the possible long client-side waits of up to 16 minutes, strict RTO requirements (Recovery Time Objectives) cannot be met. Unlike Oracle, SQL Server has no database-integrated technologies such as Oracle Fast Application Notification (FAN) or Oracle Fast Connection Failover (FCF).
8. Problematic RPO (Recovery Point Objective)
SQL Server allows forced failover, but offers no way to automatically transfer the last bits of data from an old to a new primary replica if the availability mode was asynchronous. Oracle Data Guard, by contrast, provides this support through the flush redo feature, which ensures zero data loss and the best possible RPO even in forced failover situations.
9. Readable secondary replicas with restrictions
Because of the snapshot isolation transaction level used for readable secondary replicas, they come with restrictions that affect the primary database. The cleanup of ghost records on the primary database is affected by long-running queries on the readable secondary database. The readable secondary database cannot be added to the availability group if there are active transactions on the primary database. In addition, DDL changes on the primary database can be blocked by queries on the secondary.
And incremental backups are not supported here. None of these restrictions exists under Oracle Data Guard.
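To make the backup restriction tangible, here is a hedged T-SQL sketch (the database name and backup path are placeholders): on a readable secondary replica only a copy-only backup of the database is accepted, while a regular full or differential backup fails.

-- Allowed on a secondary replica:
BACKUP DATABASE SalesDB
    TO DISK = N'\\backupshare\SalesDB_secondary.bak'
    WITH COPY_ONLY;

-- Not allowed on a secondary replica:
-- BACKUP DATABASE SalesDB TO DISK = N'\\backupshare\SalesDB.bak';                      -- plain full backup fails
-- BACKUP DATABASE SalesDB TO DISK = N'\\backupshare\SalesDB.dif' WITH DIFFERENTIAL;    -- differential backups are not supported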

    Read the article

  • Silverlight Cream for March 06, 2011 -- #1054

    - by Dave Campbell
    In this Back from the Summit Issue, I am overloaded with posts to choose from. Submittals go first, but I'll eventually catch up... hopefully by MIX :) : Ollie Riches(-2-), Colin Eberhardt, John Papa, Jeremy Likness, Martin Krüger, Joost van Schaik, Karl Shifflett, Michael Crump, Georgi Stoyanov, Yochay Kiriaty, Page Brooks, and Deborah Kurata. Above the Fold: Silverlight: "ClassifiedCabinet: A Quick Start" Georgi Stoyanov WP7: "Easy access to WMAppManifest.xml App properties like version and title" Joost van Schaik Multiple: "Flashcards.Show Version 2 for the Desktop, Browser, and Windows Phone" Yochay Kiriaty Shoutouts: Mohamed Mosallem delivered an online session at the Second Riyadh Online Community Summit: Silverlight 4.0 with SharePoint 2010 John-Daniel Trask posted about a release of a new set of tools released for WP7 development... there's a free trial, so definitely worth a look: Mindscape Phone Elements released! From SilverlightCream.com: WP7Contrib: Trickling data to a bound collection Ollie Riches submitted a couple links... first up is this on a way they found to decrease the load on a data template in WP7 to get under the 90 mb limit and then added their solution to the WP7Contrib lib. WP7Contrib: Why we use SilverlightSerializer instead of DataContractSerializer Ollie Riches' next submittal compares the performance of the SilverlightSerializer & DataContractSerializer on the WP7 platform. MVVM Charting – Binding Multiple Series to a Visiblox Chart Colin Eberhardt sent me this post where he describes binding multiple series to a chart with no code-behind... great long multi-phase tutorial all with source. Silverlight TV 64: Dive into 64bit Support, App Model and Security John Papa has Nick Kramer of the Silverlight team up for his latest Silverlight TV episode, discussing some cool new Silverlight stuff: 64-bit support, multiple windows, etc. Building a Windows Phone 7 Application with UltraLight.mvvm Jeremy Likness has a pre-summit tutorial up on his UltraLight.mvvm project, and how he would use it to build a WP7 app... great to meet you, Jeremy! How to: Storyboard only start with the conspicuousness of the application in the browser window Martin Krüger continues his Storyboard startup solutions with this one about what to do if the Silverlight app is small or simply an island on an html page. Easy access to WMAppManifest.xml App properties like version and title Joost van Schaik posted about the WP7 manifest file and how you can get access to that information at runtime... why you ask? How about version number or title? Be sure to read the helpful hints in the last paragraph too! Mole 2010 Released Karl Shifflett, Josh Smith, and others have released the latest version of Mole... well worth the money in my opinion, if only it worked for Silverlight! (not their fault) Changing the Default Windows Phone 7 Deployment Target In Visual Studio 2010 Michael Crump points out an annoyance with the 2011 WP7 tools update... VS2010 defaults to the device rather than the emulator... and he shows us how to get it pointed back to the emulator! ClassifiedCabinet: A Quick Start Georgi Stoyanov posted a QuickStart to a 'ClassifiedCabinet' control posted on CodePlex... check out the demo first, you'll want to read the article after that. He builds a simple project from scratch using the control. 
Flashcards.Show Version 2 for the Desktop, Browser, and Windows Phone Yochay Kiriaty has a post up about FlashCards.Show version 2 that he worked on with Arik Poznanski and has it now running on the desktop, browser, and WP7, plus you get the source... I've been wanting to write just such an app for WP7, so hey... this saves me some time! A Simple Focus Manager for Jounce Applications Page Brooks has a post up about Jeremy Likness' Jounce... how to set focus to a particular control when a view loads. Silverlight Charting: Formatting the Axis Deborah Kurata is continuing her charting series with this one on setting axis font color and putting the text at an angle... really dresses up the chart! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • links for 2011-01-13

    - by Bob Rhubart
    Webcast: Oracle WebCenter Suite: Giving Users a Modern Experience Speakers: Vince Casarez (VP Enterprise 2.0 Product Management, Oracle),  Erin Smith (Consulting Practice Manager – Portals, Oracle), Robert Wessa (Consulting Technical Director – Enterprise 2.0 Infrastructure, Oracle)  (tags: oracle otn webcenter webcast enterprise2.0) Oracle & StickyMinds.com Webcast: Load Testing Techniques for Enterprise Applications Mughees Minhas, Senior Director of Product Management, Oracle Server Technologies, answers your questions about the latest techniques for effectively and efficiently testing enterprise application performance. Thursday, January 20, 2011. 10am PT / 1pm ET. (tags: oracle otn stickymings webcast) Bay Area Coherence Special Interest Group (BACSIG) Jan 20, 5:30pm - 8:00pm PT. Presentations: Coherence 3.6 Clustering Features (Rob Lee), Efficient Management and Update of Coherence Clusters to Reduce Down Time ( Rao Bhethanabotla), How To Build a Coherence Practice (Christer Fahlgren). (tags: oracle, otn coherence bacsig) Podcast Show Notes: William Ulrich and Neal McWhorter on Business Architecture (ArchBeat) A four-part interview with the authors of  "Business Architecture: The Art and Practice of Business Transformation"  (tags: oracle otn podcast businessarchitecture) John Brunswick: Overlapping Social Networks in your Enterprise? Strategies to Understand and Govern "Overall it is important to consider if tacit knowledge being captured by the social systems is able to be retained and somehow summarized into an overall organizational directory." - John Brunswick (tags: oracle otn enterprise2.0 socialnetworking) Coherence - How to develop a custom push replication publisher (Middlewarepedia) Cosmin Todur describes "a way of developing a custom push replication publisher that publishes data to a database via JDBC."  (tags: oracle coherence grid) Aino Andriessen: Oracle Diagnostics Logging (ODL) for application development "Logging is a very important aspect of application development as it offers run-time access to the behaviour and data of the application. It’s important for debugging purposes but also to investigate exception situations on production." -- Aino Andriessen (tags: oracle odl java jdeveloper weblogic) Security issues when upgrading a Web Catalog from 10g to 11g Oracle BI By Bakboord "I blogged about upgrading from Oracle BI EE 10g to Oracle BI EE 11g R1 earlier. Although this is a very straight forward process, you could end up with some security issues." -- Daan Bakboord (tags: oracle businessintelligence obiee) Angelo Santagata: SOA Composite Sensors : Good Practice "A good best practice is that for any composites you create, consider publishing a composite sensor value using a primary key of some sort , e.g. orderId, that way if you need to manipulate/query composites you can easily look up the instanceId using the sensorid." - Angelo Santagata (tags: oracle soa sca) Javier Ductor: WebCenter Spaces 11g PS2 Task Flow Customization "Previously, I wrote about Spaces Template Customization. In order to adapt Spaces to customers prototype, it was necessary to change template and skin, as well as the members task flow. In this entry, I describe how to customize this task flow." - Javier Ductor (tags: oracle otn enterprise2.0 webcenter) RonBatra's blog: Cloud Computing Series: VI: Industry Directions "When someone says their 'Product/Solution is in the Cloud,' ask them basic questions to seperate the spin from the reality. 
I would start with 'tell me what that means' and see which way the conversation goes." - Oracle ACE Director Ron Batra (tags: oracle otn oracleace cloud) First JSRs Proposed for Java EE 7 (The Java Source) With the approval of Java SE 7 and Java SE 8 JSRs last month, attention is now shifting towards the Java EE platform. (tags: oracle java jsr javaee)

    Read the article

  • Travelling MVP #4: DevReach 2012

    - by DigiMortal
    Our next stop after Varna was Sofia, where DevReach happens. DevReach is one of my favorite conferences in Europe because of its sensible prices and strong speaker line-up. They also have a VIP party after the conference, and it is a good event to meet people you don't see every day, have some discussions with speakers and find new friends. Our trip from Varna to Sofia took about 6.5 hours on the bus. As I was tired from the last evening it wasn't a problem for me, as I slept half the trip. After a smoking pause in Veliko Tarnovo I watched movies on the bus TV. We had supper later in the city center at Happy's - a place with good meat dishes and nice service. And the next day it began... :)
DevReach 2012
DevReach is usually held in Arena Mladost. It's near the airport and the Telerik office. The event is organized by local MVP Martin Kulov together with Telerik. Two days of sessions with strong speakers is reason enough for me to visit an event. Some topics covered by the sessions: Windows 8 development, web development, SharePoint, Windows Azure, Windows Phone, architecture, Visual Studio. Practically everybody can find an interesting session in every time slot. As the Arena is not huge, it is very easy to go from one session to another if the selected session for a time slot is not what you expected. On the second floor of the Arena there are many places where you can eat - simple junk-food places like Burger King and also some restaurants. If you are hungry you will find something for your taste for sure. Also you can buy beer if it is too hot outside :) The weather was very good for October - practically an Estonian summer - 25C and over.
Sessions I visited
Here is the list of sessions I visited at DevReach 2012:
DevReach 2012 Opening & Welcome Message with Martin Kulov and Stephen Forte
Principled N-Tier Solution Design with Steve Smith
Data Patterns for the Cloud with Brian Randell
.NET Garbage Collection Performance Tips with Sasha Goldshtein
Building Secured, Scalable, Low-latency Web Applications with the Windows Azure Platform with Ido Flatow
It's a Knockout! MVVM Style Web Applications with Charles Nurse
Web Application Architecture – Lessons Learned from Adobe Brackets with Brian Rinaldi
Demystifying Visual Studio 2012 Performance Tools with Martin Kulov
SPvNext – A Look At All the Exciting And New Features In SharePoint with Sahil Malik
Portable Libraries – Why You Should Care with Lino Tadros
I missed some sessions because of some death-march projects that are going on and that I have to coordinate, but it was not a big loss, as I had time to walk around the session venue neighborhood and see Sofia Business Park.
Next year again!
I will be there again next year, and hopefully more guys from Estonia will join me. I think it's a good idea to take a short vacation for DevReach time and do things like we did this time - Bucharest, Varna, Sofia. It's just a good idea to plan some more free time so we are not in too much of a hurry and also have no work stuff to do on the trip. So far this trip has been one of the best trips I have organized, and I will go and meet all those guys in this region again! :)

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Hierarchical Query using a Recursive CTE

    - by pinaldave
    This blog post is inspired from SQL Queries Joes 2 Pros: SQL Query Techniques For Microsoft SQL Server 2008 – SQL Exam Prep Series 70-433 – Volume 2.[Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to Hierarchical Query using a Recursive CTE – A Primer. In that article we discussed the basic terminology of the CTE. The article further covers the following important concepts of common table expressions: What is a Common Table Expression (CTE); Building a Recursive CTE; Identify the Anchor and Recursive Query; Add the Anchor and Recursive query to a CTE; Add an expression to track hierarchical level; Add a self-referencing INNER JOIN statement. These six are the most important concepts related to CTEs and SQL Server. There are many more things to learn, but without the beginner fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.
Quiz
1) You have an employee table with the following data.
EmpID FirstName LastName MgrID
1 David Kennson 11
2 Eric Bender 11
3 Lisa Kendall 4
4 David Lonning 11
5 John Marshbank 4
6 James Newton 3
7 Sally Smith NULL
You need to write a recursive CTE that shows the EmpID, FirstName, LastName, MgrID, and employee level. The CEO should be listed at Level 1. All people who work for the CEO will be listed at Level 2. All of the people who work for those people will be listed at Level 3. Which CTE code will achieve this result?
Option 1:
WITH EmpList AS
(SELECT Boss.EmpID, Boss.FName, Boss.LName, Boss.MgrID, 1 AS Lvl
 FROM Employee AS Boss
 WHERE Boss.MgrID IS NULL
 UNION ALL
 SELECT E.EmpID, E.FirstName, E.LastName, E.MgrID, EmpList.Lvl + 1
 FROM Employee AS E
 INNER JOIN EmpList ON E.MgrID = EmpList.EmpID)
SELECT * FROM EmpList
Option 2:
WITH EmpList AS
(SELECT EmpID, FirstName, LastName, MgrID, 1 as Lvl
 FROM Employee
 WHERE MgrID IS NULL
 UNION ALL
 SELECT EmpID, FirstName, LastName, MgrID, 2 as Lvl)
SELECT * FROM BossList
Option 3:
WITH EmpList AS
(SELECT EmpID, FirstName, LastName, MgrID, 1 as Lvl
 FROM Employee
 WHERE MgrID is NOT NULL
 UNION
 SELECT EmpID, FirstName, LastName, MgrID, BossList.Lvl + 1
 FROM Employee
 INNER JOIN EmpList BossList ON Employee.MgrID = BossList.EmpID)
SELECT * FROM EmpList
2) You have a table named Employee. The EmployeeID of each employee's manager is in the ManagerID column. You need to write a recursive query that produces a list of employees and their manager. The query must also include the employee's level in the hierarchy. You write the following code segment:
WITH EmployeeList (EmployeeID, FullName, ManagerName, Level)
AS
(
 --PICK ANSWER CODE HERE
)
Option 1:
SELECT EmployeeID, FullName, '' AS [ManagerID], 1 AS [Level]
FROM Employee
WHERE ManagerID IS NULL
UNION ALL
SELECT emp.EmployeeID, emp.FullName mgr.FullName, 1 + 1 AS [Level]
FROM Employee emp
JOIN Employee mgr ON emp.ManagerID = mgr.EmployeeId
Option 2:
SELECT EmployeeID, FullName, '' AS [ManagerID], 1 AS [Level]
FROM Employee
WHERE ManagerID IS NULL
UNION ALL
SELECT emp.EmployeeID, emp.FullName, mgr.FullName, mgr.Level + 1
FROM EmployeeList mgr
JOIN Employee emp ON emp.ManagerID = mgr.EmployeeId
Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer, you still have a chance. Solution: 1) 1  2) 2. Now let us check the answers - compare your answers to the ones above. I am very confident you got them correct. 
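For reference, here is the correct choice for the first question (option 1) once more, with comments mapping it to the six concepts listed above. The column names are taken from the sample Employee table (the original option abbreviates them as FName/LName), so treat this as an illustrative restatement rather than the book's exact wording:

WITH EmpList AS
(
    -- Anchor query: the CEO, the only row without a manager (Level 1)
    SELECT Boss.EmpID, Boss.FirstName, Boss.LastName, Boss.MgrID, 1 AS Lvl
    FROM Employee AS Boss
    WHERE Boss.MgrID IS NULL
    UNION ALL
    -- Recursive query: a self-referencing INNER JOIN back to the CTE,
    -- with Lvl + 1 tracking the hierarchical level
    SELECT E.EmpID, E.FirstName, E.LastName, E.MgrID, EmpList.Lvl + 1
    FROM Employee AS E
    INNER JOIN EmpList ON E.MgrID = EmpList.EmpID
)
SELECT * FROM EmpList;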
Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • GRUB 2 problem after Mac OS X update

    - by vallllll
    I have a MacBook Pro in dual boot Mac OS X / Ubuntu 12.04 (Precise Pangolin). When I boot it I have a rEFIt menu, and I can choose between Mac OS X and Linux. A few days ago I updated Mac OS X from 10.7 (Lion) to 10.8 (Mountain Lion) using a .dmg image provided by my company. Since then, when I select Linux in rEFIt it says: No bootable device --insert boot disk and press any key. I have tried going to the rEFIt partitioning tool. This is what I got: As suggested in Mac OSX Mavericks update rEFIT broken I wanted to fix the issue the same way as AndrewM, but I don't have the option "MBR table must be updated". Then I booted from the Ubuntu 12.04 CD, chose "repair broken system", and chose root partition /dev/sda6 as this is where my Ubuntu file system is. I got a shell, but I don't really know how to repair the problem. If it were just a Windows dual boot, a GRUB update would solve the issue, but here I don't know where GRUB 2 is installed. Here are the results from Parted, and it is a bit confusing for me as the Mac partition is the one with boot: As you can see, entry 1 is an EFI system partition and is the boot partition, so I wonder if I should install GRUB there or in sda6, which is the Ubuntu filesystem. I am not sure whether I should work in the rEFIt shell or in Ubuntu. Unfortunately, I don't remember where GRUB was before the update. UPDATE: using the same link above I tried RoundSparrow hilltx's answer and installed rEFInd, but the result is the same.... still no bootable device when I select Linux. UPDATE 2: just used the alternate CD again, mounted /dev/sda6 and then ran update-grub. It seemed to work and started listing all my kernels. But after rebooting several times, still no bootable device when I select Linux in rEFInd. UPDATE 3: I have tried to boot from the Ubuntu CD and select "boot from first available filesystem". I got an error and was dropped to the grub rescue shell. 
    I even followed the indications on this link but was unable to boot, as I tried to use sdb6 but had no luck. UPDATE 4: as per Rob Smith's request, here is the output from ls -l $(find /EFI -iname "*.efi")
*MACOSX
-rw-r--r--@ 1 root admin 55048 29 oct 17:44 /EFI/refind/drivers_x64/btrfs_x64.efi
-rw-r--r--@ 1 root admin 38888 29 oct 17:44 /EFI/refind/drivers_x64/ext2_x64.efi
-rw-r--r--@ 1 root admin 39304 29 oct 17:44 /EFI/refind/drivers_x64/ext4_x64.efi
-rw-r--r--@ 1 root admin 43432 29 oct 17:44 /EFI/refind/drivers_x64/hfs_x64.efi
-rw-r--r--@ 1 root admin 38984 29 oct 17:44 /EFI/refind/drivers_x64/iso9660_x64.efi
-rw-r--r--@ 1 root admin 43656 29 oct 17:44 /EFI/refind/drivers_x64/reiserfs_x64.efi
-rw-r--r--@ 1 root admin 175016 29 oct 17:44 /EFI/refind/refind_x64.efi
-rw-rw-r-- 1 root admin 73232 7 mar 2010 /EFI/tools/dbounce.efi
-rw-rw-r-- 1 root admin 763248 7 mar 2010 /EFI/tools/dhclient.efi
-rw-rw-r-- 1 root admin 67024 7 mar 2010 /EFI/tools/drawbox.efi
-rw-rw-r-- 1 root admin 71312 7 mar 2010 /EFI/tools/dumpfv.efi
-rw-rw-r-- 1 root admin 84848 7 mar 2010 /EFI/tools/dumpprot.efi
-rw-rw-r-- 1 root admin 472912 7 mar 2010 /EFI/tools/ed.efi
-rw-rw-r-- 1 root admin 143856 7 mar 2010 /EFI/tools/edit.efi
-rw-rw-r-- 1 root admin 1801008 7 mar 2010 /EFI/tools/ftp.efi
-rw-r--r--@ 1 root admin 47848 29 oct 17:44 /EFI/tools/gptsync_x64.efi
-rw-rw-r-- 1 root admin 320560 7 mar 2010 /EFI/tools/hexdump.efi
-rw-rw-r-- 1 root admin 286384 7 mar 2010 /EFI/tools/hostname.efi
-rw-rw-r-- 1 root admin 534416 7 mar 2010 /EFI/tools/ifconfig.efi
-rw-rw-r-- 1 root admin 395344 7 mar 2010 /EFI/tools/loadarg.efi
-rw-rw-r-- 1 root admin 587408 7 mar 2010 /EFI/tools/ping.efi
-rw-rw-r-- 1 root admin 730416 7 mar 2010 /EFI/tools/pppd.efi
-rw-rw-r-- 1 root admin 561360 7 mar 2010 /EFI/tools/route.efi
-rw-rw-r-- 1 root admin 1961712 7 mar 2010 /EFI/tools/shell.efi
-rw-rw-r-- 1 root admin 750224 7 mar 2010 /EFI/tools/tcpipv4.efi
-rw-rw-r-- 1 root admin 4048 7 mar 2010 /EFI/tools/textmode.efi
-rw-rw-r-- 1 root admin 320656 7 mar 2010 /EFI/tools/which.efi
*LINUX

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you.  Let’s say you and your wife both have address books for your groups of friends.  There is definitely overlap between them, so that you both have the addresses for your mutual friends, and there are addresses that only you know, and some only she knows.  They also might be organized differently.  You might list your friend under “J” for “Joe” or even under “W” for “Work,” while she might list him under “S” for “Joe Smith” or under your name because he is your friend.  If you happened to trade, neither of you would be able to find anything! This is where data management would be very important.  If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them.  You would also make sure that poor Joe doesn’t get entered twice under “J” and under “S.” This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists.  Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I’m sure that my readers can figure out where I am going with this.  What is SQL Server but a computerized way to organize data?  And Microsoft is making it easier and easier to get all your “addresses” into one place.  In the  2008 version of SQL they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence.  You might not think that an organizational system is terribly exciting, but think about the kind of “address books” a company might have.  Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more.  To organize all this efficiently so that customers are well cared for and properly billed (only once, not never or multiple times!) is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don’t have to search high and low and waste their time. MDM also has operational MDM functions.  This is not a redundancy.  Operational MDM means that when one employee updates one bit of information in the database, for example – updating a new address for a customer, operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information. Another cool thing about MDM is that it features Master Data Services Configuration Manager, which is exactly what it sounds like.  It has a built-in “helper” that lets you set up your database quickly, easily, and with the correct configurations.  While talking about cool features, I can’t skip over the add-in for Excel.  This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database is slowly disappearing.  Everyone knows that a database – one consolidated area for all your data – is a good idea, but the idea of setting one up is daunting.  But SQL Server is making data management easier and easier with features like Master Data Services (MDS). 
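MDS itself is set up through its own configuration tools and the Excel add-in rather than plain queries, but the "Joe entered twice" problem from the address-book analogy can be illustrated with a hypothetical T-SQL query like the one below (the Customer table and its columns are invented for the example):

-- Find customer names that differ only in case or stray whitespace,
-- i.e. candidates that a master data process would merge into one record.
SELECT UPPER(LTRIM(RTRIM(FirstName))) AS FirstNameKey,
       UPPER(LTRIM(RTRIM(LastName)))  AS LastNameKey,
       COUNT(*)                       AS PossibleDuplicates
FROM dbo.Customer
GROUP BY UPPER(LTRIM(RTRIM(FirstName))), UPPER(LTRIM(RTRIM(LastName)))
HAVING COUNT(*) > 1;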
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Master Data Services, MDM

    Read the article

  • Automated SSRS deployment with the RS utility

    - by Stacy Vicknair
    If you’re familiar with SSRS and development, you are probably aware of the SSRS web services. The RS utility is a tool that comes with SSRS that allows scripts to be executed against the SSRS web service without needing to create an application to consume the service. One of the better benefits of using this format rather than writing an application is that the script can be modified by others who might be involved in the creation and addition of scripts or in the management of the SSRS environment.
Reporting Services Scripter
Jasper Smith from http://www.sqldbatips.com created Reporting Services Scripter to assist with the creation of a batch process to deploy an entire SSRS environment. The helper scripts below were created through the modification of his generated scripts. Why not just use this tool? You certainly can. For me, the volume of scripts generated seems less maintainable than just using some common methods extracted from those scripts and creating a deployment in a single script file. I would, however, recommend it as a product if you do not think that your environment will change drastically or if you do not need to deploy with a higher level of control over the deployment. If you just need to replicate, this tool works great.
Executing with RS.exe
Executing a script against rs.exe is fairly simple.
The Script
Half the battle is having a starting point. For the scripting I needed to do, the script below is the starter. A few notes: this script assumes integrated security, and it assumes your reports have one data source each. Both of the above are just what made sense for my scenario and are definitely modifiable to accommodate your needs. If you are unsure how to change the scripts to your needs, I recommend Reporting Services Scripter to help you understand the differences. The script has three main methods: CreateFolder, CreateDataSource and CreateReport. Scripting the server deployment is just a process of recreating all of the elements that you need through calls to these methods. If there are additional elements that you need to deploy that aren’t covered by these methods, again I suggest using Reporting Services Scripter to get the code you would need, convert it to a repeatable method and add it to this script! 
Public Sub Main()
    CreateFolder("/", "Data Sources")
    CreateFolder("/", "My Reports")
    CreateDataSource("/Data Sources", "myDataSource", _
        "Data Source=server\instance;Initial Catalog=myDatabase")
    CreateReport("/My Reports", _
        "MyReport", _
        "C:\myreport.rdl", _
        True, _
        "/Data Sources", _
        "myDataSource")
End Sub

Public Sub CreateFolder(parent As String, name As String)
    Dim fullpath As String = GetFullPath(parent, name)
    Try
        RS.CreateFolder(name, parent, GetCommonProperties())
        Console.WriteLine("Folder created: {0}", name)
    Catch e As SoapException
        If e.Detail.Item("ErrorCode").InnerText = "rsItemAlreadyExists" Then
            Console.WriteLine("Folder {0} already exists and cannot be overwritten", fullpath)
        Else
            Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
        End If
    End Try
End Sub

Public Sub CreateDataSource(parent As String, name As String, connectionString As String)
    Try
        RS.CreateDataSource(name, parent, False, GetDataSourceDefinition(connectionString), GetCommonProperties())
        Console.WriteLine("DataSource {0} created successfully", name)
    Catch e As SoapException
        Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
    End Try
End Sub

Public Sub CreateReport(parent As String, name As String, location As String, overwrite As Boolean, dataSourcePath As String, dataSourceName As String)
    Dim reportContents As Byte() = Nothing
    Dim warnings As Warning() = Nothing
    Dim fullpath As String = GetFullPath(parent, name)

    'Read RDL definition from disk
    Try
        Dim stream As FileStream = File.OpenRead(location)
        reportContents = New [Byte](stream.Length - 1) {}
        stream.Read(reportContents, 0, CInt(stream.Length))
        stream.Close()

        warnings = RS.CreateReport(name, parent, overwrite, reportContents, GetCommonProperties())

        If Not (warnings Is Nothing) Then
            Dim warning As Warning
            For Each warning In warnings
                Console.WriteLine(warning.Message)
            Next warning
        Else
            Console.WriteLine("Report: {0} published successfully with no warnings", name)
        End If

        'Set report DataSource references
        Dim dataSources(0) As DataSource

        Dim dsr0 As New DataSourceReference
        dsr0.Reference = dataSourcePath
        Dim ds0 As New DataSource
        ds0.Item = CType(dsr0, DataSourceDefinitionOrReference)
        ds0.Name = dataSourceName
        dataSources(0) = ds0

        RS.SetItemDataSources(fullpath, dataSources)

        Console.WriteLine("Report DataSources set successfully")

    Catch e As IOException
        Console.WriteLine(e.Message)
    Catch e As SoapException
        Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
    End Try
End Sub

Public Function GetCommonProperties() As [Property]()
    'Common CatalogItem properties
    Dim descprop As New [Property]
    descprop.Name = "Description"
    descprop.Value = ""
    Dim hiddenprop As New [Property]
    hiddenprop.Name = "Hidden"
    hiddenprop.Value = "False"

    Dim props(1) As [Property]
    props(0) = descprop
    props(1) = hiddenprop
    Return props
End Function

Public Function GetDataSourceDefinition(connectionString As String)
    Dim definition As New DataSourceDefinition
    definition.CredentialRetrieval = CredentialRetrievalEnum.Integrated
    definition.ConnectString = connectionString
    definition.Enabled = True
    definition.EnabledSpecified = True
    definition.Extension = "SQL"
    definition.ImpersonateUser = False
    definition.ImpersonateUserSpecified = True
    definition.Prompt = "Enter a user name and password to access the data source:"
    definition.WindowsCredentials = False
    definition.OriginalConnectStringExpressionBased = False
    definition.UseOriginalConnectString = False
    Return definition
End Function

Private Function GetFullPath(parent As String, name As String) As String
    If parent = "/" Then
        Return parent + name
    Else
        Return parent + "/" + name
    End If
End Function
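    As a point of reference for the "Executing with RS.exe" step above, a typical invocation of a script like this looks roughly like the following (the file name and server URL are illustrative assumptions, not values from the original post):

    rs.exe -i DeployReports.rss -s http://myserver/reportserver

    The -i switch points rs.exe at the script file and -s at the Report Server URL; because the script assumes integrated security, no -u or -p credentials are supplied.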

    Read the article

  • Have you ever wondered...?

    - by diana.gray
    I've often wondered why folks do the same thing over and over. For some of us, it's because we "don't get it" and there's an abundance of TV talk shows that will help us analyze the why of it. Dr. Phil is all too eager to ask "...and how's that working for you?". But I'm not referring to being stuck in a destructive pattern or denial. I'm really talking about doing something over and over because you have found a joy, a comfort, a boost of energy from an activity or event. For example, how many times have I planted bulbs in November or December only to be amazed by their reach, colors, and fragrance in early spring? Or baked fresh cookies and allowed the aroma to fill the house? Or kissed a sleeping baby held gently in my arms and been reminded of how tiny and fragile we all are. I've often wondered why it is that I get so much out of something I've done so many times. I think it's because I've changed. The activity may be the same but in the preceding days, months and years I've had new experiences, challenges, joys and sorrows that have shaped me. I'm different. The same is true about attending the Professional Businesswomen of California (PBWC) conference. Although the conference is an annual event held at San Francisco's Moscone Center, I still enjoy being with 3,000 other women like me. Yes, we work at different companies and in different industries, have different lifestyles and are at different stages in our professional careers and personal lives; but we are all alike in that we bring the NEW me each year that we attend. This year I can cheer when Safra Catz, President of Oracle, encourages us to trust our intuition; that "if something doesn't make sense, it doesn't make sense". And I can warmly introduce myself to Lisa Askins, Cheryl Melching's business partner at Center Stage Group, when I would have been too intimidated to do so last year. This year I can commit to new challenges such as "no whining, no excuses and no gossip" as suggested by Roxanne Emmerich, a goal that I would have wavered on last year. I can also embrace the suggestion given by Dr. Ian Smith to "spend one hour each day" on me - giving myself time to rejuvenate. A friend, when asked if she was attending PBWC this year, said "I've attended the conference several times and there's nothing new!" My perspective is that WE are what makes PBWC's annual conference new. We are far different in 2010 than we were in 2009. We are learning, growing, developing and shedding and that's what makes the conference fresh, vibrant, rewarding, and lasting. It is the diversity of women coming together that makes it new. By sharing our experiences, we discover. By meeting with one another professionally and personally, we connect. And by applying the wisdom learned, we shine. We are reNEW-ed. It shows in our fresh ideas, confident interactions, strategic decisions and successful businesses. This refreshed approach is what our companies want and need, our families depend on, our communities and nation look to for creative solutions to pressing concerns. Thanks Oracle for your continued support and thanks PBWC for providing an annual day to be reNEW-ed.

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes. As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for the acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission critical and time sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle. To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk. Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close. A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research. We also see a significant improvement in reporting using Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle. A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. 
It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to uptake more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings." In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team that get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and are pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group." Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller! Corey West, Senior Vice President Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • Silverlight 4 Released

    - by ScottGu
    The final release of Silverlight 4 is now available. What is in the Silverlight 4 Release: Silverlight 4 contains a ton of new features and capabilities.  In particular we focused on three scenarios with this release: further enhancing media support, building great business applications, and enabling out-of-the-browser experiences. On Tuesday I gave a 60 minute keynote about Silverlight 4 which showed off many of the new features and capabilities now available.  You can watch my keynote to learn more about Silverlight 4 and see a ton of great demos of it in action. Also check out these three great posts by Tim Heuer that talk about the new features and provide a guide to the new Silverlight 4 capabilities: “Silverlight 4 Beta – A Guide to the New Features”, “Silverlight 4 RC – What was updated”, and “Silverlight 4 Released”. Also read David Anson’s great Silverlight 4 Toolkit post to learn more about the new controls and functionality also available within the Silverlight Toolkit release we also made available today.  Also visit this page to learn more about the new Pivot functionality in Silverlight 4 – which makes it really easy to visualize and interact with collections of images using Silverlight. Lastly – make sure to visit the www.silverlight.net web-site and check out the “Get Started” section to find free tutorials that you can use. Download and Install Silverlight 4 Tools for VS 2010: To develop Silverlight 4 applications you should first download and install Visual Studio 2010 or download and install the free Visual Web Developer 2010 Express edition. Then install the Silverlight Tools RC2 for Visual Studio 2010.  This setup includes the Silverlight 4 Developer Runtime, Silverlight 4 SDK, RIA Services, and VS 2010 tools support.  Once installed you can do File->New Project and choose Silverlight Application to create your first Silverlight 4 project.  You can then use the new WYSIWYG Silverlight designer in Visual Studio 2010 to design and build rich Silverlight 4 applications. Important: If you previously installed the Silverlight 4 Beta or RC build on your machine, please make sure to go into Add/Remove programs and uninstall the “Update for Visual Studio 2010 (KB976272)” package prior to installing the Silverlight Tools RC2 for Visual Studio 2010 setup.  Note that while Silverlight 4 is released, the “Silverlight 4 Tools for VS 2010” is currently in “RC2” mode (meaning we are going to keep an eye out for any remaining issues before finally calling it done).  We’ll update the tools to be “final” in a few weeks once we verify that no last minute issues/bugs remain. Download and Install Expression Blend 4 Release Candidate: You can also download and install the Expression Blend 4 RC to create and design great Silverlight 4 applications.  Blend contains “Sketchflow” support – which makes it really easy to rapidly prototype ideas and applications.  To learn more about Sketchflow watch this 90 second video of it in action. Summary: Today’s release is the fourth release of Silverlight that we’ve shipped in the last 2.5 years.  The team has done a great job of advancing it quickly and staying focused.  We think today’s Silverlight 4 release opens up a ton of new opportunities to build great solutions for both consumers and business scenarios.  We are looking forward to seeing what you build with it! Hope this helps, Scott

    Read the article

  • The Microsoft Ajax Library and Visual Studio Beta 2

    - by Stephen Walther
    Visual Studio 2010 Beta 2 was released this week and one of the first things that I hope you notice is that it no longer contains the latest version of ASP.NET AJAX. What happened? Where did AJAX go? Just like Sting and The Police, just like Phil Collins and Genesis, just like Greg Page and the Wiggles, AJAX has gone out of band! We are starting a solo career. A Name Change First things first. In previous releases, our Ajax framework was named ASP.NET AJAX. We now have changed the name of the framework to the Microsoft Ajax Library. There are two reasons behind this name change. First, the members of the Ajax team got tired of explaining to everyone that our Ajax framework is not tied to the server-side ASP.NET framework. You can use the Microsoft Ajax Library with ASP.NET Web Forms, ASP.NET MVC, PHP, Ruby on RAILS, and even pure HTML applications. Our framework can be used as a client-only framework and having the word ASP.NET in our name was confusing people. Second, it was time to start spelling the word Ajax like everyone else. Notice that the name is the Microsoft Ajax Library and not the Microsoft AJAX library. Originally, Microsoft used upper case AJAX because AJAX originally was an acronym for Asynchronous JavaScript and XML. And, according to Strunk and Wagnell, acronyms should be all uppercase. However, Ajax is one of those words that have migrated from acronym status to “just a word” status. So whenever you hear one of your co-workers talk about ASP.NET AJAX, gently correct your co-worker and say “It is now called the Microsoft Ajax Library.” Why OOB? But why move out-of-band (OOB)? The short answer is that we have had approximately 6 preview releases of the Microsoft Ajax Library over the last year. That’s a lot. We pride ourselves on being agile. Client-side technology evolves quickly. We want to be able to get a preview version of the Microsoft Ajax Library out to our customers, get feedback, and make changes to the library quickly. Shipping the Microsoft Ajax Library out-of-band keeps us agile and enables us to continue to ship new versions of the library even after ASP.NET 4 ships. Showing Love for JavaScript Developers One area in which we have received a lot of feedback is around making the Microsoft Ajax Library easier to use for developers who are comfortable with JavaScript. We also wanted to make it easy for jQuery developers to take advantage of the innovative features of the Microsoft Ajax Library. To achieve these goals, we’ve added the following features to the Microsoft Ajax Library (these features are included in the latest preview release that you can download right now): A simplified imperative syntax – We wanted to make it brain-dead simple to create client-side Ajax controls when writing JavaScript. A client script loader – We wanted the Microsoft Ajax Library to load all of the scripts required by a component or control automatically. jQuery integration – We love the jQuery selector syntax. We wanted to make it easy for jQuery developers to use the Microsoft Ajax Library without changing their programming style. 
If you are interested in learning about these new features of the Microsoft Ajax Library, I recommend that you read the following blog post by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2009/10/15/announcing-microsoft-ajax-library-preview-6-and-the-microsoft-ajax-minifier.aspx Downloading the Latest Version of the Microsoft Ajax Library Currently, the best place to download the latest version of the Microsoft Ajax Library is directly from the ASP.NET CodePlex project: http://aspnet.codeplex.com/ As I write this, the current version is Preview 6. The next version is coming out at the PDC. Summary I’m really excited about the future of the Microsoft Ajax Library. Moving outside of the ASP.NET framework provides us the flexibility to remain agile and continue to innovate aggressively. The latest preview release of the Microsoft Ajax Library includes several major new features including a client script loader, jQuery integration, and a simplified client control creation syntax.

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Feb 2nd Links: Visual Studio, ASP.NET, ASP.NET MVC, JQuery, Windows Phone

    - by ScottGu
    Here is the latest in my link-listing series.  Also check out my Best of 2010 Summary for links to 100+ other posts I’ve done in the last year. [I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Community News MVCConf Conference Next Wednesday: Attend the free, online ASP.NET MVC Conference being organized by the community next Wednesday.  Here is a list of some of the talks you can watch live. Visual Studio HTML5 and CSS3 in VS 2010 SP1: Good post from the Visual Studio web tools team that talks about the new support coming in VS 2010 SP1 for HTML5 and CSS3. Database Deployment with the VS 2010 Package/Publish Database Tool: Rachel Appel has a nice post that covers how to enable database deployment using the built-in VS 2010 web deployment support.  Also check out her ASP.NET web deployment post from last month. VsVim Update Released: Jared posts about the latest update of his VsVim extension for Visual Studio 2010.  This free extension enables VIM based key-bindings within VS. ASP.NET How to Add Mobile Pages to your ASP.NET Web Forms / MVC Apps: Great whitepaper by Steve Sanderson that covers how to mobile-enable your ASP.NET and ASP.NET MVC based applications. New Entity Framework Tutorials for ASP.NET Developers: The ASP.NET and EF teams have put together a bunch of nice tutorials on using the Entity Framework data library with ASP.NET Web Forms. Using ASP.NET Dynamic Data with EF Code First (via NuGet): Nice post from David Ebbo that talks about how to use the new EF Code First Library with ASP.NET Dynamic Data. Common Performance Issues with ASP.NET Web Sites: Good post with lots of performance tuning suggestions (mostly deployment settings) for ASP.NET apps. ASP.NET MVC Razor View Converter: Free, automated tool from Terlik that can convert existing .aspx view templates to Razor view templates. ASP.NET MVC 3 Internationalization: Nadeem has a great post that talks about a variety of techniques you can use to enable Globalization and Localization within your ASP.NET MVC 3 applications. ASP.NET MVC 3 Tutorials by David Hayden: Great set of tutorials and posts by David Hayden on some of the new ASP.NET MVC 3 features. EF Fixed Concurrency Mode and MVC: Chris Sells has a nice post that talks about how to handle concurrency with updates done with EF using ASP.NET MVC. ASP.NET and jQuery jQuery Performance Tips and Tricks: A free 30 minute video that covers some great tips and tricks to keep in mind when using jQuery. jQuery 1.5’s AJAX rewrite and ASP.NET services - All is well: Nice post by Dave Ward that talks about using the new jQuery 1.5 to call ASP.NET ASMX Services. Good news according to Dave is that all is well :-) jQuery UI Modal Dialogs for ASP.NET MVC: Nice post by Rob Regan that talks about a few approaches you can use to implement dialogs with jQuery UI and ASP.NET MVC.  Windows Phone 7 Free PDF eBook on Building Windows Phone 7 Applications with Silverlight: Free book that walksthrough how to use Silverlight and Visual Studio to build Windows Phone 7 applications. Hope this helps, Scott

    Read the article

  • Some VS 2010 RC Updates (including patches for Intellisense and Web Designer fixes)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] We are continuing to make progress on shipping Visual Studio 2010.  I’d like to say a big thank you to everyone who has downloaded and tried out the VS 2010 Release Candidate, and especially to those who have sent us feedback or reported issues with it. This data has been invaluable in helping us find and fix remaining bugs before we ship the final release. Last month I blogged about a patch we released for the VS 2010 RC that fixed a bad intellisense crash issue.  This past week we released two additional patches that you can download and apply to the VS 2010 RC to immediately fix two other common issues we’ve seen people run into: Patch that fixes crashes with Tooltip invocation and when hovering over identifiers The Visual Studio team recently released a second patch that fixes some crashes we’ve seen when tooltips are displayed – most commonly when hovering over an identifier to view a QuickInfo tooltip. You can learn more about this issue from this blog post, and download and apply the patch here. Patch that fixes issues with the Web Forms designer not correctly adding controls to the auto-generated designer files The Visual Web Developer team recently released a patch that fixes issues where web controls are not correctly added to the .designer.cs file associated with the .aspx file – which means they can’t be programmed against in the code-behind file.  This issue is most commonly described as “controls are not being recognized in the code-behind” or “editing existing .aspx files regenerates the .aspx.designer.(vb or cs) file and controls are now missing” or “I can’t embed controls within the Ajax Control Toolkit TabContainer or the <asp:createuserwizard> control”. You can learn more about the issue here, and download the patch that fixes it here. Common Cause of Intellisense and IDE sluggishness on Windows XP, Vista, Win Server 2003/2008 systems Over the last few months we’ve occasionally seen reports of people seeing tremendous slowness when typing and using intellisense within VS 2010 despite running on decent machines.  It took us awhile to track down the cause – but we have found that the common culprit seems to be that these machines don’t have the latest versions of the UIA (Windows Automation) component installed. UIA 3 ships with Windows 7, and is a recommended Windows Update patch on XP and Vista (which is why we didn’t see the problem in our tests – since our machines are patched with all recommended updates).  Many systems (especially on XP) don’t automatically install recommended updates, though, and are running with older versions of UIA. This can cause significant performance slow-downs within the VS 2010 editor when large lists are displayed (for example: with intellisense). If you are running on Windows XP, Vista, or Windows Server 2003 or 2008 and are seeing any performance issues with the editor or IDE, please install the free UIA 3 update that can be downloaded from this page.  If you scroll down the page you’ll find direct links to versions for each OS. Note that we are making improvements to the final release of VS 2010 so that we don’t have big perf issues when UIA 3 isn’t installed – and we are also adding a message within the IDE that will warn you if you don’t have UIA 3 installed and accessibility is activated. 
Improved Text Rendering with WPF 4 and VS 2010 We recently made some nice changes to WPF 4 which improve the text clarity and text crispness over what was in the VS 2010/.NET 4 Release Candidate.  In particular these changes improve scenarios where you have a dark background with light text. You can learn more about these improvements in this WPF Team blog post.  These changes will be in the final release of VS 2010 and .NET 4. Hope this helps, Scott

    Read the article

  • Automatic Properties, Collection Initializers, and Implicit Line Continuation support with VB 2010

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is the eighteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. A few days ago I blogged about two new language features coming with C# 4.0: optional parameters and named arguments.  Today I’m going to post about a few of my favorite new features being added to VB with VS 2010: Auto-Implemented Properties, Collection Initializers, and Implicit Line Continuation support. Auto-Implemented Properties Prior to VB 2010, implementing properties within a class using VB required you to explicitly declare the property as well as implement a backing field variable to store its value.  For example, the code below demonstrates how to implement a “Person” class using VB 2008 that exposes two public properties - “Name” and “Age”:   While explicitly declaring properties like above provides maximum flexibility, I’ve always found writing this type of boiler-plate get/set code tedious when you are simply storing/retrieving the value from a field.  You can use VS code snippets to help automate the generation of it – but it still generates a lot of code that feels redundant.  C# 2008 introduced a cool new feature called automatic properties that helps cut down the code quite a bit for the common case where properties are simply backed by a field.  VB 2010 also now supports this same feature.  Using the auto-implemented properties feature of VB 2010 we can now implement our Person class using just the code below: When you declare an auto-implemented property, the VB compiler automatically creates a private field to store the property value as well as generates the associated Get/Set methods for you.  As you can see above – the code is much more concise and easier to read. The syntax supports optionally initializing the properties with default values as well if you want to: You can learn more about VB 2010’s automatic property support from this MSDN page. Collection Initializers VB 2010 also now supports using collection initializers to easily create a collection and populate it with an initial set of values.  You identify a collection initializer by declaring a collection variable and then use the From keyword followed by braces { } that contain the list of initial values to add to the collection.  Below is a code example where I am using the new collection initializer feature to populate a “Friends” list of Person objects with two people, and then bind it to a GridView control to display on a page: You can learn more about VB 2010’s collection initializer support from this MSDN page. Implicit Line Continuation Support Traditionally, when a statement in VB has been split up across multiple lines, you had to use a line-continuation underscore character (_) to indicate that the statement wasn’t complete.  For example, with VB 2008 the below LINQ query needs to append a “_” at the end of each line to indicate that the query is not complete yet: The VB 2010 compiler and code editor now adds support for what is called “implicit line continuation support” – which means that it is smarter about auto-detecting line continuation scenarios, and as a result no longer needs you to explicitly indicate that the statement continues in many, many scenarios.  This means that with VB 2010 we can now write the above code with no “_” at all: The implicit line continuation feature also works well when editing XML Literals within VB (which is pretty cool). 
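    The code screenshots from the original post are not reproduced on this page, so here is a minimal consolidated sketch of the three features described above. It reuses the Person class and Friends list from the post; the specific property values, default age, and query condition are illustrative assumptions rather than code from the original article.

    Imports System.Collections.Generic
    Imports System.Linq

    Module VB2010FeaturesSketch

        ' Auto-implemented properties: the compiler generates the backing field and Get/Set for you
        Public Class Person
            Public Property Name As String
            Public Property Age As Integer = 18   ' optional default value (illustrative)
        End Class

        Sub Main()
            ' Collection initializer: declare and populate the list in a single statement
            Dim friends As New List(Of Person) From {
                New Person With {.Name = "John Smith", .Age = 34},
                New Person With {.Name = "Jane Doe", .Age = 29}
            }

            ' Implicit line continuation: no trailing "_" characters are needed within the query
            Dim names = From p In friends
                        Where p.Age > 30
                        Select p.Name

            For Each n In names
                Console.WriteLine(n)
            Next
        End Sub

    End Module

    In a Web Forms page the friends list could instead be assigned to a GridView's DataSource and bound with DataBind(), which is the scenario the original post's screenshot showed.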
You can learn more about VB 2010’s Implicit Line Continuation support and many of the scenarios it supports from this MSDN page (scroll down to the “Implicit Line Continuation” section to find details). Summary: The above three VB language features are but a few of the new language and code editor features coming with VB 2010.  Visit this site to learn more about some of the other VB language features coming with the release.  Also subscribe to the VB team’s blog to learn more and stay up-to-date with the posts the team regularly publishes. Hope this helps, Scott

    Read the article

  • Visual Studio 2010 Productivity Power Tool Extensions

    - by ScottGu
    Last month I blogged about the Extension Manager that is built-into VS 2010 – as well as about a cool VS 2010 PowerCommands extension that provides some extra features for Visual Studio.  The Visual Studio 2010 Extension Manager provides an easy way for developers to quickly find and install extensions and plugins that enhance the built-in functionality to VS 2010. New VS 2010 Productivity Power Tools Release Earlier this week Jason Zander announced the availability of a new VS 2010 Productivity Power Tools release that includes a bunch of great new VS 2010 extensions that provide a bunch of cool new functionality for you to take advantage of.  You can download and install the release for free here.  Some of the code editor improvements it provides include: Entire Line Highlighting: Makes it easier to track cursor location within the editor Entire Line Selection: Triple Clicking a line in the code editor now selects the entire line (like with MS Word) Code Block Movement: Use Alt+Up/Down Arrow now moves selected code blocks up/down in the editor Consistent Tabs vs. Spaces: Ensure consistent tab vs. space usage across your projects Colorized Parameters: It is now easier to see/identify method parameters Column Guide: You can now add vertical column guidelines to help with text alignment and sizes Align assignments: Makes it easier to line-up multiple variable assignments within your code HTML Clipboard Support: Copy/paste code from VS into an HTML buffer (useful for blogging!) Ctrl + Click Go to Definition: You can now hold down the Ctrl key and click a type to go to its definition It also includes several tab management improvements for managing document tabs within the IDE: Show Close Button in Tab Well: Shows a close button in document well for the active tab (like VS 2008 did) Colored Tabs: You can now select the color of each document tab by project or by regex Pinned Tabs: Enables you to pin tabs to keep them always visible and available Vertical Tabs: You can now show document tabs vertically to fit more tabs than normal Remove Tabs by Usage Order: Better behavior when adding new tabs and one needs to be hidden for space reasons Sort Tabs by Project: Tabs can be sorted by project they belong to, keeping them grouped together Sort Tabs Alphabetically: Tabs can be sorted alphabetically And last – but not least – it includes a new and improved “Add Reference” dialog: This new Add Reference dialog caches assembly information – which means it loads within a second or two (note: the very first time it still loads assembly data – but it then caches it and makes it fast afterwards). The new Add Reference dialog also now includes searching support – making it easier to find the assembly you are looking for. You can read more about all of the above improvements in Jason’s blog post about the release. New Visualization and Modeling Feature Pack Release Earlier this week we also shipped a new feature pack that adds additional modeling and code visualization features to VS 2010 Ultimate.  You can download it here. 
The Visualization and Modeling Feature Pack includes a bunch of great new capabilities including: Web Site Visualization: New support for generating a DGML visualization for ASP.NET projects C/C++ Native Code Visualization: New support for generating DGML diagrams for C/C++ projects Generate Code from UML Class Diagrams: You can now generate code from your UML diagrams Create UML Class Diagrams from Code: Create UML diagrams from existing code bases Import UML from XML: Import UML class, sequence, and use case elements from XMI 2.1 files Custom Validation Layer Rules: Write custom code to create, modify, and validate layer diagrams Jason’s blog post covers more about these features as well. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
