Search Results

Search found 13794 results on 552 pages for 'variable scope'.


  • Fixing Robocopy for SQL Server Jobs

    - by Most Valuable Yak (Rob Volk)
    Robocopy is one of, if not the, best life-saving/greatest-thing-since-sliced-bread command line utilities ever to come from Microsoft.  If you're not using it already, what are you waiting for? Of course, being a Microsoft product, it's not exactly perfect. ;)  Specifically, it sets the ERRORLEVEL to a non-zero value even if the copy is successful.  This causes a problem in SQL Server job steps, since non-zero ERRORLEVELs report as failed. You can work around this by having your SQL job go to the next step on failure, but then you can't determine if there was a genuine error.  Plus you still see annoying red X's in your job history.  One way I've found to avoid this is to use a batch file that runs Robocopy, and I add some commands after it (the lines following the robocopy call):

robocopy d:\backups \\BackupServer\BackupFolder *.bak
rem suppress successful robocopy exit statuses, only report genuine errors (bitmask 16 and 8 settings)
set /A errlev="%ERRORLEVEL% & 24"
rem exit batch file with errorlevel so SQL job can succeed or fail appropriately
exit /B %errlev%

(The REM statements are simply comments and don't need to be included in the batch file.) The SET command lets you use expressions when you use the /A switch.  So I set an environment variable "errlev" to a bitwise AND of the ERRORLEVEL value. Robocopy's exit codes use a bitmap/bitmask to specify the exit status.  The bits for 1, 2, and 4 do not indicate any kind of failure, but 8 and 16 do.  So by adding 16 + 8 to get 24, and doing a bitwise AND, I suppress any of the other bits that might be set, and allow either or both of the error bits to pass through. The next step is to use the EXIT command with the /B switch to set a new ERRORLEVEL value, using the "errlev" variable.  This will now return zero (unless Robocopy had real errors) and allow your SQL job step to report success. This technique should also work for other command-line utilities.  The only issue I've found is that it requires the commands to be part of a batch file, so if you use Robocopy directly in your SQL job step you'd need to move it into a batch file.  If you have multiple Robocopy calls, you'll also need to place the SET /A command ONLY after the last one.  You'd therefore lose any errors from previous calls, unless you use multiple "errlev" variables and combine them. (I'll leave this as an exercise for the reader, but see the sketch below.) The SET /A syntax also permits other kinds of expressions to be calculated.  You can get a full list by running "SET /?" at a command prompt.
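    A possible sketch of the multiple-call case (the second destination path and the variable names here are illustrative, not from the original post; note that a bitwise OR, rather than AND, is what preserves an error bit set by either call):

robocopy d:\backups \\BackupServer\BackupFolder *.bak
rem mask the first call's exit status down to the two error bits
set /A errlev1="%ERRORLEVEL% & 24"
robocopy d:\backups \\BackupServer\LogBackupFolder *.trn
rem mask the second call's exit status the same way
set /A errlev2="%ERRORLEVEL% & 24"
rem combine the masked values so an error in either call fails the job step
set /A errlev="%errlev1% | %errlev2%"
exit /B %errlev%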

    Read the article

  • Using Unity – Part 3

    - by nmarun
    The previous blog was about registering and invoking different types dynamically. In this one I’d like to show how Unity manages/disposes the instances – say hello to Lifetime Managers. When a type gets registered, either through the config file or when the RegisterType method is explicitly called, the default behavior is that the container uses a transient lifetime manager. In other words, the Unity container creates a new instance of the type whenever the Resolve or ResolveAll method is called. Whereas, when you register an existing object using the RegisterInstance method, the container uses a container-controlled lifetime manager - a singleton pattern. It does this by storing a reference to the object, which means that as long as the container is ‘alive’, your registered instance does not go out of scope; it will be disposed only after the container either goes out of scope or the code explicitly disposes the container. Let’s see how we can use these and test if something is a singleton or a transient instance. Continuing with the same solution used in the previous blogs, I have made the following changes. The first is to add a typeAlias element for the TransientLifetimeManager type:

1: <typeAlias alias="transient" type="Microsoft.Practices.Unity.TransientLifetimeManager, Microsoft.Practices.Unity"/>

You then need to tell Unity what type(s) you want to be transient by nature:

1: <type type="IProduct" mapTo="Product2">
2: <lifetime type="transient" />
3: </type>
4: <!--<type type="IProduct" mapTo="Product2" />-->

The lifetime element’s type attribute matches the alias attribute of the typeAlias element. Since ‘transient’ is the default behavior, you can have a concise version of the same, as line 4 shows. Also note that I’ve changed the mapTo attribute from ‘Product’ to ‘Product2’. I’ve done this to help understand the transient nature of the instance of the type Product2. By making this change, you are basically saying that when a type of IProduct needs to be resolved, Unity should create an instance of Product2 by default.

1: public string WriteProductDetails()
2: {
3:     return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}<br/>Hash Code: {3}",
4:         Name, Category, MfgDate.ToString("MM/dd/yyyy hh:mm:ss tt"), GetHashCode());
5: }

Again, the above change is purely for the purpose of making the example clearer to understand. The display will show the full date and also the hash code of the current instance. By default, GetHashCode() returns a different integer for every new instance. When you run the application, you’ll see something like the below: Now when you click on the ‘Get Product2 Instance’ button, you’ll see that the Mfg Date (which is set in the constructor) and the Hash Code are different from the ones created on page load. This proves to us that a new instance is created every single time. To make this a singleton, we need to add a type alias for the ContainerControlledLifetimeManager class and then change the type attribute of the lifetime element to singleton.

1: <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity"/>
2: ...
3: <type type="IProduct" mapTo="Product2">
4: <lifetime type="singleton" />
5: </type>

Running the application now gets me the following output: Click on the button and you’ll see that the Mfg Date and the Hash Code remain unchanged => the Unity container stores the reference the first time the instance is created and then returns the same instance every time the type needs to be resolved. Digging deeper into this, Unity provides more than just these two lifetime managers. ExternallyControlledLifetimeManager – maintains a weak reference to type mappings and instances. Unity returns the same instance as long as some code is holding a strong reference to that instance. For this, you need:

1: <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity"/>
2: ...
3: <type type="IProduct" mapTo="Product2">
4: <lifetime type="external" />
5: </type>

PerThreadLifetimeManager – Unity returns a unique instance of an object for each thread, so this is effectively singleton behavior on a per-thread basis.

1: <typeAlias alias="perThread" type="Microsoft.Practices.Unity.PerThreadLifetimeManager, Microsoft.Practices.Unity"/>
2: ...
3: <type type="IProduct" mapTo="Product2">
4: <lifetime type="perThread" />
5: </type>

One thing to note here is that if you use the RegisterInstance method to register an existing object, that instance will be returned for every thread, making this purely singleton behavior. Needless to say, this type of lifetime management is useful in multi-threaded applications (duh!!). I hope this blog provided some basics on the lifetime management of objects resolved in Unity; in the next blog, I’ll talk about Injection. Please see the code used here.
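    For comparison, the same lifetime choices can also be made in code rather than in the config file. A minimal sketch only (the container setup is assumed, and each RegisterType call below would replace the previous IProduct mapping):

using Microsoft.Practices.Unity;

var container = new UnityContainer();

// Transient (the default): a new Product2 for every Resolve<IProduct>() call.
container.RegisterType<IProduct, Product2>(new TransientLifetimeManager());

// Container-controlled (singleton): one Product2 for the container's lifetime.
container.RegisterType<IProduct, Product2>(new ContainerControlledLifetimeManager());

// Per-thread: a unique Product2 per thread.
container.RegisterType<IProduct, Product2>(new PerThreadLifetimeManager());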

    Read the article

  • PowerShell Script to Create PowerShell Profile

    - by Brian Jackett
    Utilizing a PowerShell profile can help any PowerShell user save time getting up and running with their work.  For those unfamiliar, a PowerShell profile is a file in which you can store any PowerShell commands that you want to run when you fire up a PowerShell console (or the ISE).  In my typical profiles (example here) I load assemblies (like the SharePoint 2007 DLL), set aliases, set preference values (such as max history), and perform other general customizations to make my work easier.  Below is a sample script that will check to see if a PowerShell profile (console or ISE) exists and create it if not found.  The .ps1 script file version can also be downloaded from my SkyDrive here. Note: if downloading the .ps1 file, be sure you have enabled unsigned scripts to run on your machine, as I have not signed mine.

$folderExists = test-path -path $Env:UserProfile\Documents\WindowsPowerShell
if($folderExists -eq $false)
{
    new-item -type directory -path $Env:UserProfile\Documents\WindowsPowerShell > $null
    echo "Containing folder for profile created at: $Env:UserProfile\Documents\WindowsPowerShell"
}

$profileExists = test-path -path $profile
if($profileExists -eq $false)
{
    new-item -type file -path $profile > $null
    echo "Profile file created at: $profile"
}

A few things to note while going through the above script.
$Env:UserProfile represents the personal user folder (c:\documents and settings…. on older OSes like XP and c:\Users… on Win 7), so it adapts to whichever OS you are running; it was tested against Windows 7 and Windows Server 2008 R2.
“> $null” sends the command output to a null stream.  Essentially this is the DOS-scripting equivalent of “@ECHO OFF”, suppressing the output of the command just run, but only for the specific command it is appended to.  I haven’t yet found a better way to accomplish command suppression, but this is definitely not required for the script to work.
$profile represents a standard variable holding the file path of the profile file.  It is dynamic based on whether you are running the PowerShell console or the ISE.

Conclusion
In less than two weeks (Apr. 10th to be exact) I’ll be heading down to SharePoint Saturday Charlotte (SPSCLT) to give two presentations on using PowerShell with SharePoint.  Since I’ll be prepping a lot of material for PowerShell I thought it only appropriate to pass along this nice little script I recently created.  If you’ve never used a PowerShell profile this is a great chance to start using one.  If you’ve been using a profile before, perhaps you learned a trick or two to add to your toolbox.  For those of you in the Charlotte, NC area, sign up for the SharePoint Saturday and see some great content and community with great folks.

-Frog Out
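    If you are starting a profile from scratch, here is a tiny illustrative example of the kind of commands that might go in it (these particular lines are my suggestions, not part of the script above):

# An alias, a larger command history, and loading the SharePoint 2007 assembly mentioned above.
Set-Alias -Name np -Value notepad.exe
$MaximumHistoryCount = 1024
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")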

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release comes with a large number of improvements on many fronts. In this post I will try to point out some of the major improvements in Visual Studio 2010.
1) Visual Studio IDE improvements - The Visual Studio IDE has been rewritten in WPF. The look and feel of the studio has been improved for better readability. The Start Page has been redesigned and is templated so that anyone can change it as they wish.
2) Multiple monitors - Support for multiple monitors was already there in Visual Studio. But in this edition it has been improved so much that we can now place the document, design and code windows outside the IDE on another monitor.
3) Zoom in the code editor - Rewriting the editors in WPF has brought significant improvements. The best one that I like is the zoom feature: we can now zoom in the code editor with Ctrl + mouse scroll. The zoom feature does not work on the design surface or windows with icons, like the Solution Explorer and the toolbox.
4) Box selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular block by holding down the Alt key and selecting with the mouse. In the rectangular selection we can insert text, paste the same code on different lines, etc. This is helpful if you want to convert a number of variables from public to private, for example.
5) New improved search - One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. This has been done in the Navigate To window, which can be brought up by pressing (Ctrl + ,). The Navigate To window also understands camel casing and can search by it when characters are entered in upper case. For example, we can search AOH for AddOrderHeader.
6) Call Hierarchy - This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. The call hierarchy also shows the implementations of an interface and the overrides of virtual or abstract methods. This window is very helpful in understanding the code flow and evaluating the effects of making changes. The best part is that it is available at design time, not only at runtime like a debugger.
7) Highlighting references - One of the very cool features in Visual Studio 2010: if you select a variable, all the uses of that variable are highlighted alongside. This works for all the symbols returned by Find All References, including names of classes, object variables, properties and methods. We can also use Ctrl + Shift + Down Arrow or Up Arrow to move through them.
8) Generate from usage - The Generate From Usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code, which minimizes interruption to your workflow.
9) IntelliSense suggestion mode - IntelliSense now provides two alternatives for statement completion: completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined.
In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code. When you commit an entry in completion mode, the editor inserts the entry that is highlighted in the members list. When an IntelliSense window is open, you can press Ctrl+Alt+Spacebar to toggle between completion mode and suggestion mode.
10) Application Lifecycle Management - A client application for application lifecycle management (version control, work item tracking, build automation, team portal, etc.) is available for free (though not for the Express edition).
11) Start Page - The Start Page has been redesigned with WPF for new functionality and a new look. Tabbed areas are provided for content from different sources, including MSDN. Once you open a project, the Start Page closes automatically. The list of recent projects also lets you remove projects from the list. And above all, the Start Page is customizable enough to be changed to individual requirements.
12) Extension Manager - Visual Studio 2010 provides good ways to be extended. We can also use MEF to extend most of the features of Visual Studio. The new Extension Manager can now go to the Visual Studio Gallery and install extensions without even opening a browser.
13) Code snippets - Visual Studio 2010 now provides code snippets for HTML, JScript and ASP.NET as well.
14) Improved IntelliSense for JavaScript - IntelliSense for JavaScript has been vastly improved (around 2-5 times faster). IntelliSense now also shows XML documentation comments on the go.
15) Web deployment - Web deployment has been vastly improved. We can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment, and web configuration transformation.
16) SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio. Deployment of a site is as easy as hitting F5.
17) Azure - Visual Studio 2010 also comes with handy improvements for developing for the Windows Azure environment.
Vikram

    Read the article

  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
    On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented a session, CON3870 -- “What’s New in JSF: A Complete Tour of JSF 2.2,” in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that, “JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent.” He pointed out that since JSF was introduced at the 2001 JavaOne keynote, it has had a long and successful run and has found a home in applications where the UI logic, along with the model, resides entirely on the server. In such cases, the browser performs fairly simple functions. However, developers can take advantage of the power of browsers, something that Project Avatar is focused on, by authoring their applications so the UI logic runs on the client and communicates with the back end via RESTful web services. “Most importantly,” remarked Burns, “JSF 2.2 offers a really good migration path because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity – per page or per collection of pages. It all depends on what you want to do.” His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF, and he encouraged attendees to check out Roger Kitain’s session: CON5133, “Techniques for Responsive Real-Time Web UIs.” Burns explained that JSF has endured because “We still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use.” It is used on every continent – the curious can go here to check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This:
* separates component semantics from rendering,
* allows components to “own” their little patch of the UI (encode/decode), and
* offers a well-defined lifecycle: Inversion of Control.
Burns reminded attendees that JSR 344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:
* Public java.net projects: spec at http://jsf-spec.java.net/, impl at http://jsf.java.net/ (open source: GPL + Classpath Exception)
* Mailing lists: [email protected] (publicly readable archive; JSPA-signed members read/write) and [email protected] (publicly readable archive; any java.net member read/write). All mail sent to jsr344-experts is also sent to users.
* Issue trackers: spec at http://jsf-spec.java.net/issues/, impl at http://jsf.java.net/issues/
JSF 2.2, which is JSR 344, has a Public Review Draft planned by December 2012 with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input.
Six Big Ticket Features of JSF 2.2
Burns summarized the six big ticket features of JSF 2.2:
* HTML5 Friendly Markup Support (pass-through attributes and elements)
* Faces Flows
* Cross Site Request Forgery Protection
* Loading Facelets via ResourceHandler
* File Upload Component
* Multi-Templating
He explained that he called it “HTML5 friendly” because there is really nothing HTML5-specific about it -- it could just as well be HTML 4. But it enables developers to use new elements that are present in HTML5 without having a JSF component library that is written to take advantage of them specifically. It gives the page author the ability to use plain HTML5 to write the page, but to still take advantage of the server side available in JSF (a sketch of the idea appears at the end of this report). He presented a demo showing JSF 2.2’s ability to leverage the expressiveness of HTML5. Burns then explained the significance of Faces Flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2’s cross-site request forgery protection (CSRF) and offered details about how it protects applications against attack. Then he talked about JSF 2.2’s File Upload Component and explained that the final specification will have both Ajax and non-Ajax support; the current milestone has non-Ajax support implemented. He then went on to explain the capacity to load Facelets through the ResourceHandler. Previously, JSF 2.0 added Facelets and ResourceHandler as disparate units; in JSF 2.2 the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release. For an easy, low-maintenance way of staying in touch with JSF developments, go to JSF’s Twitter page, where important updates are offered every month or so.
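    As a rough illustration of the HTML5-friendly markup feature, a Facelets page using pass-through attributes might look like the sketch below (the jsf namespace URI and the bean names are assumptions based on the JSF 2.2 drafts, not taken from the session):

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:jsf="http://xmlns.jcp.org/jsf">
  <body>
    <!-- Plain HTML5 elements; the jsf:* attributes bind them to the server side. -->
    <form jsf:id="form">
      <input type="email" jsf:id="email" jsf:value="#{userBean.email}"/>
      <button jsf:action="#{userBean.save}">Save</button>
    </form>
  </body>
</html>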

    Read the article

  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string as part of the relative URL. For example, consider the following URI:

http://localhost:7001/SimpleREST/Products/id=1234

Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI that shows this principle:

http://localhost:7001/SimpleREST/Products?id=1234

At first blush this appears to be a trivial change. However, this approach is more intuitive, especially if you are passing in multiple parameters. For example:

http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony

The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the RESTful Products service using query strings for passing parameter information. Let's begin with the implementation of the Products REST service. This service is implemented in the Products.proxy file of the project. Let's begin with the overall structure of the service, as shown in the following screenshot. This is a common pattern for REST services in the Oracle Service Bus: you implement a different flow for each of the HTTP verbs that you want your service to support. Let's take a look at how the GET verb is implemented. This is the path that is taken if you were to point your browser to:

http://localhost:7001/SimpleREST/Products?id=1234

There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression that is used to extract the id parameter:

$inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value

The Assign action stores the value in an OSB variable named id. Using this type of XPath statement you can query for any parameter by name, without regard to its order in the parameter list. The Log statement is there simply to provide some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our REST service. Most of the response data is static, but the ID field that is returned is set based upon the query parameter that was passed into the REST proxy. Testing the REST service with a browser is very simple. Just point it to the URL I showed you earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Let's see how to use the Test Console to test this GET service. Open the OSB web console (http://localhost:7001/sbconsole) and log in as the administrator. Click on the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This will bring up the Test Console browser window. Unlike SOAP services, we don't need to do much work in the request document because all of our request information will be encoded into the URI of the service itself. Below the Request Document section of the Test Console is the Transport section.
Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the enclosing tags already defined. You just need to add a tag for each parameter you want to pass into the service. For our purposes with this particular call, you'd set the query-parameters field as follows:

<tp:query-parameters>
  <tp:parameter name="id" value="1234" />
</tp:query-parameters>

Now you are ready to push the Execute button to see the results of the call. That covers the process for parsing query parameters using OSB. However, what if you have an OSB proxy service that needs to consume a RESTful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a 2nd proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service. This helps to demonstrate OSB's ability to mediate between service consumers and service providers, decreasing the level of coupling between them. If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate the processing logic for each operation that is defined by the SOAP service. We will focus on the getProductDetail branch, which calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOAP request and stores it in a local OSB variable. Nothing surprising here. The real work (and the real learning) occurs in the Route node below the pipeline. The first thing to learn is that you need to use a Route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $outbound variable, especially when invoking a business service. The Routing action contains 3 Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second Insert action simply inserts the query-parameters XML node into the request; this element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value. (A sketch of the resulting outbound request structure appears at the end of this post.) That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of the Oracle Service Bus 11gR1PS3. Download the sample source code here: rest2_sbconfig.jar

Ubuntu and the OSB Test Console
You will get an error when you try to use the Test Console with the Oracle Service Bus using Ubuntu (or likely a number of other Linux distros also). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administration console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading. Then select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tab and the General sub-tab in the main window. Look for the Listen Address field.
By default it is blank, which means it is listening on all interfaces. For some reason Ubuntu doesn't like this. So enter a value like localhost or the specific IP address or DNS name for your server (usually it's just localhost in development environments). Save your changes and restart the server. Your Test Console will now work correctly.
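    For reference, here is a rough sketch of the outbound request structure that those Insert actions build up (the http-method element name is an assumption on my part; the query-parameters structure mirrors the inbound XPath shown earlier):

<ctx:transport>
  <ctx:request>
    <http:http-method>GET</http:http-method>
    <http:query-parameters>
      <http:parameter name="id" value="1234"/>
    </http:query-parameters>
  </ctx:request>
</ctx:transport>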

    Read the article

  • Android SDK PATH Error: doesn't find home directory [migrated]

    - by THeo Oliveira
    I use Ubuntu 12.10 and I tried to follow this guide, Install android sdk and eclipse in Ubuntu 12.04, but I keep getting this error:
Error: The AVD manager normally uses the user's profile directory to store AVD files. However it failed to find the default profile directory. To fix this, please set the environment variable ANDROID_SDK_HOME to a valid path such as "~".
Any ideas how I can fix this?

    Read the article

  • Async CTP (C# 5): How to make WCF work with Async CTP

    - by javarg
    If you have recently downloaded the new Async CTP you will notice that WCF uses the Async Pattern and the Event-based Async Pattern in order to expose asynchronous operations. In order to make your service compatible with the new Async/Await pattern, try using an extension method similar to the following:

WCF Async/Await Method
public static class ServiceExtensions
{
    public static Task<DateTime> GetDateTimeTaskAsync(this Service1Client client)
    {
        return Task.Factory.FromAsync<DateTime>(
            client.BeginGetDateTime(null, null),
            ar => client.EndGetDateTime(ar));
    }
}

The previous code snippet adds an extension method to the GetDateTime method of the Service1Client WCF proxy. Then use it like this (remember to bring the extension method's namespace into scope in order to use it):

Code Snippet
var client = new Service1Client();
var dt = await client.GetDateTimeTaskAsync();

Replace the proxy's type and operation name with the ones you want to await.

    Read the article

  • Automapper: Handling NULL members

    - by PSteele
    A question about null members came up on the Automapper mailing list.  While the problem wasn’t with Automapper, investigating the issue led to an interesting feature in Automapper. Normally, Automapper ignores null members.  After all, what is there really to do?  Imagine these source classes:

public class Source
{
    public int Data { get; set; }
    public Address Address { get; set; }
}

public class Destination
{
    public string Data { get; set; }
    public Address Address { get; set; }
}

public class Address
{
    public string AddressType { get; set; }
    public string Location { get; set; }
}

And imagine a simple mapping example with these classes:

Mapper.CreateMap<Source, Destination>();

var source = new Source
{
    Data = 22,
    Address = new Address
    {
        AddressType = "Home",
        Location = "Michigan",
    },
};

var dest = Mapper.Map<Source, Destination>(source);

The variable ‘dest’ would have a complete mapping of the Data member and the Address member. But what if the source had no address?

Mapper.CreateMap<Source, Destination>();

var source = new Source
{
    Data = 22,
};

var dest = Mapper.Map<Source, Destination>(source);

In that case, Automapper would just leave the Destination.Address member null as well.  But what if we always wanted an Address defined – even if it’s just got some default data?  Use the “NullSubstitute” option:

Mapper.CreateMap<Source, Destination>()
    .ForMember(d => d.Address, o => o.NullSubstitute(new Address
    {
        AddressType = "Unknown",
        Location = "Unknown",
    }));

var source = new Source
{
    Data = 22,
};

var dest = Mapper.Map<Source, Destination>(source);

Now, the ‘dest’ variable will have an Address defined with a type and location of “Unknown”.  Very handy!

Technorati Tags: .NET, Automapper, NULL

    Read the article

  • Don’t miss the Procurement AME New Features and Setup for Purchase Orders Webcast on December 6th and Follow up Live Chat

    - by MargaretW
    Webcast
This one-hour session on December 6th is recommended for technical and functional users who are interested in learning more about the new 12.1.3 features for Procurement with the Approval Management Engine (AME).
TOPICS WILL INCLUDE:
* Scope and limitations of AME functionality for purchase orders
* Setup and use of AME for purchase orders
* PO Review and PO E-Sign new features
* Demonstrations will be included
See Doc ID 1456150.1 to sign up now!
Live Chat
There will be a live chat in the Procurement Community on December 13th for follow-up questions and answers.  Join us to share and gain knowledge!

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially and every organization with growing data is thinking of the next big innovation in the world of Big Data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts:
SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database
Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database; it dumps the data from the source database; and it loads that data into the target NuoDB database. Let’s see how we can migrate our data from SQL Server to NuoDB using this simple three-step approach. But before we do that we will create a sample database in MSSQL, and later we will migrate that same database to NuoDB:

Setup Step 1: Build sample data

CREATE DATABASE [Test];
CREATE TABLE [Department](
    [DepartmentID] [smallint] NOT NULL,
    [Name] VARCHAR(100) NOT NULL,
    [GroupName] VARCHAR(100) NOT NULL,
    [ModifiedDate] [datetime] NOT NULL,
    CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED
    (
        [DepartmentID] ASC
    )
) ON [PRIMARY];
INSERT INTO Department
SELECT * FROM AdventureWorks2012.HumanResources.Department;

Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build this sample table any way you prefer.

Setup Step 2: Install 64-bit Java

Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is due to the fact that the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember is to make sure that the JAVA_HOME variable in your environment settings points to your Java installation directory, or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values.
Variable Name: JAVA_HOME
Variable Value: C:\Program Files\Java\jre7
Make sure you enter your Java installation directory in the Variable Value field.
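    If you prefer the command line over the System Properties dialog, the same variable can be set like this (the path shown is just the example value from above):

setx JAVA_HOME "C:\Program Files\Java\jre7"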
Setup Step 3: Install a JDBC driver for SQL Server

There are two JDBC drivers available for SQL Server. Select the one you prefer to use by following one of the two links below: the Microsoft JDBC Driver or the jTDS JDBC Driver. In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB:

Migration Step 1: NuoDB Schema Generation

Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test'. Additionally, my username and password are also 'test'. You can see that my SQL Server database is running on my localhost on port 1433. Additionally, the schema of the table is 'dbo'.

nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

The above script will generate a schema of all my SQL Server tables and will put it in the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how you can execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.

Migration Step 2: Generate the Dump File of the Data

Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file, which will contain all the data from all the tables in the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time based on your database size.

nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

Once the above command is successfully executed you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually. The third and final step will take care of completing the migration process.

Migration Step 3: Load the Data into NuoDB

After building the schema and taking a dump of the data, the very next step is essential and crucial. It will take the CSV file and load it into the NuoDB database.

nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

Please note that in the above script we are now targeting the NuoDB database, which we have already created with the name "MyTest". If the database does not exist, create it manually before executing the above script. I have kept the username and password as "test", but please make sure that you create a more secure password for your database for security reasons.

Voila!  You're Done

That's it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB.  You can now start exploring the database and build excellent, scale-out applications.
In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB.

Download NuoDB

I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent blog posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases.  Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain.
http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/
Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Java EE 7 interview @ InfoQ

    - by alexismp
    Anil Gaur, the head of Java EE and GlassFish at Oracle, was recently interviewed by InfoQ on the progress and scope of Java EE 7. Make sure you read it for an up-to-date status. It turns out that the final release of Java EE 7 is now slated for late Q1 / early Q2 2013, with most specs expected to be at the Public Review stage by this summer. In addition to the improvements to the JMS, JAX-RS, JPA and JSF specifications, Anil also covers the new JSON and WebSockets JSRs and gives a complete overview of the Java EE platform as it is shaping up, with PaaS as its main theme.

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi,
Unfortunately sometimes a standby instance needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, or a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. This script recreates the standby considering some prereqs:
- Database version should be at least 11gR1
- Dummy instance started on the standby node (seeking to improve this so it won't be needed)
- Broker configuration hasn't been removed
- In our case we have two TNSNAMES files, one for the standby creation (using the SID) and the other one for production using service names (including the broker service name)
- Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...)
- The directory tree should not have been modified on the standby host
We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!

#!/bin/ksh
###    NAME / VERSION
###       recrea_dg.sh   v.1.00
###
###    DESCRIPTION
###       Recreation of the standby
###
###    RETURNS
###       0 Standby created successfully
###       1 Failure
###
###    NOTES
###       This shell script MUST NOT BE MODIFIED.
###       All required variables and constants are taken from the environment.
###
###    MODIFIED BY:       DATE:         COMMENTS:
###    ---------------    ----------    -------------------------------------
###      Oracle           15/02/2011    Created.
###
###
### Load the environment
###
V_ADMIN_DIR=`dirname $0`
. ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null
if [ $? -ne 0 ]
then
  echo "Error Loading the environment."
  exit 1
fi
V_RET=0
V_DATE=`/bin/date`
V_DATE_F=`/bin/date +%Y%m%d_%H%M%S`
V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log
exec 4>&1
tee ${V_FICH_LOG} >&4 |&
exec 1>&p 2>&1
###
### Variables to recreate the Data Guard
###
V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'`
if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ]
then
        V_LOCAL_BR=${V_DB_BR}'01'
        V_REMOTE_BR=${V_DB_BR}'02'
else
        V_LOCAL_BR=${V_DB_BR}'02'
        V_REMOTE_BR=${V_DB_BR}'01'
fi
echo " Getting local instance ROLE ${ORACLE_SID} ..."
sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
whenever sqlerror exit 1
connect / as sysdba
variable salida number
declare
  v_database_role v\$database.database_role%type;
begin
  select database_role into v_database_role from v\$database;
  :salida := case v_database_role
       when 'PRIMARY' then 2
       when 'PHYSICAL STANDBY' then 3
       else 4
     end;
end;
/
exit :salida
!
case $? in
1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE} 2>&1
   V_RET=1 ;;
2) echo " Local Instance with PRIMARY role." | tee -a ${V_LOGFILE} 2>&1
   V_DB_ROLE_LCL=PRIMARY ;;
3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE} 2>&1
   V_DB_ROLE_LCL=STANDBY ;;
*) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE} 2>&1
   V_RET=1 ;;
esac
if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ]
then
        echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        V_PRIMARY=${V_LOCAL_BR}
        V_STANDBY=${V_REMOTE_BR}
fi
if [ "${V_DB_ROLE_LCL}" = "STANDBY" ]
then
        echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        V_PRIMARY=${V_REMOTE_BR}
        V_STANDBY=${V_LOCAL_BR}
fi
# Load the host variables
PRY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba
select 'KEEP',host_name from v\\$instance;
EOF`
SBY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
select 'KEEP',host_name from v\\$instance;
EOF`
echo "The primary HOST is: ${PRY_HOST}" | tee -a ${V_LOGFILE} 2>&1
echo "The standby HOST is: ${SBY_HOST}" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
##
## Stop the STANDBY instance
##
V_DATE=`/bin/date`
echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
SBY_STATUS=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
select 'KEEP',status from v\\$instance;
EOF`
if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ]
then
        echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
        sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
        whenever sqlerror exit 1
        connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
        shutdown abort
        !
fi
V_DATE=`/bin/date`
echo ""
echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
##
## Remove the database files
##
V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'`
V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'`
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo
##
## Startup nomount stby instance
##
V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Starting DUMMY Standby Instance " | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${SBY_HOST} touch /home/oracle/init_dg.ora
ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora'
ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh
ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora'
##
## TNSNAMES change, specific for RMAN duplicate
##
V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'
V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
rman <<-! >>${V_LOGFILE}
connect target sys/${V_DB_PWD}@${V_PRIMARY}
connect auxiliary sys/${V_DB_PWD}@${V_STANDBY}
run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
dorecover
spfile
parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}'
set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl'
set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
set 'db_unique_name'='${V_SBY_SID}'
set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})'
set fal_client='${V_STANDBY}'
set fal_server='${V_PRIMARY}'
set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
nofilenamecheck ;
}
!
# Capture the RMAN exit status before the date assignment overwrites $?
V_RMAN_RET=$?
V_DATE=`/bin/date`
if [ ${V_RMAN_RET} -ne 0 ]
then
        echo ""
        echo "${V_DATE} - Error creating STANDBY instance"
        echo ""
        echo "********************************************************************************"
else
        echo ""
        echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "
        echo ""
        echo "********************************************************************************"
fi
sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
        whenever sqlerror exit 1
        connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
        alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;
        alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;
        alter database recover managed standby database using current logfile disconnect from session;
        alter system set dg_broker_start=true scope=both;
!
##
## TNSNAMES change, back to Production Mode
##
V_DATE=`/bin/date`
echo " " | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Restoring TNSNAMES in PRIMARY " | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Waiting for media recovery before checking the DATA GUARD Broker" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
sleep 200
dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1
    connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}
    show configuration verbose;
!
if [ $? -ne 0 ] ; then
        echo "       ERROR: Broker status is not SUCCESS" | tee -a ${V_LOGFILE} 2>&1
        V_RET=1
else
        echo "      DATA GUARD OK " | tee -a ${V_LOGFILE} 2>&1
        V_RET=0
fi
Hope it helps.

    Read the article

  • Wireless cuts out on Toshiba Satellite S7208

    - by alecRN
    I recently got a Toshiba Satellite L875-S7208 with Windows 7 preinstalled, and I installed Ubuntu 12.04 LTS as a dual boot alongside it. However, usually 15 minutes or less after booting, the wifi connection dies. Here's some hopefully relevant information:

lspci -knn
00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09)
        Subsystem: Toshiba America Info Systems Device [1179:fb41]
        Kernel driver in use: agpgart-intel
00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
        Subsystem: Toshiba America Info Systems Device [1179:fb40]
        Kernel driver in use: i915
        Kernel modules: i915
00:14.0 USB controller [0c03]: Intel Corporation Panther Point USB xHCI Host Controller [8086:1e31] (rev 04)
        Subsystem: Toshiba America Info Systems Device [1179:fb41]
        Kernel driver in use: xhci_hcd
00:16.0 Communication controller [0780]: Intel Corporation Panther Point MEI Controller #1 [8086:1e3a] (rev 04)
        Subsystem: Toshiba America Info Systems Device [1179:fb41]
        Kernel driver in use: mei
        Kernel modules: mei
00:1a.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
        Subsystem: Toshiba America Info Systems Device [1179:fb41]
        Kernel driver in use: ehci_hcd
00:1b.0 Audio device [0403]: Intel Corporation Panther Point High Definition Audio Controller [8086:1e20] (rev 04)
        Subsystem: Toshiba America Info Systems Device [1179:fb40]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel
00:1c.0 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 1 [8086:1e10] (rev c4)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
00:1c.1 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 2 [8086:1e12] (rev c4)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
00:1c.2 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 3 [8086:1e14] (rev c4)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
00:1d.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
        Subsystem: Toshiba America Info Systems Device [1179:fb41]
        Kernel driver in use: ehci_hcd
00:1f.0 ISA bridge [0601]: Intel Corporation Panther Point LPC Controller [8086:1e59] (rev 04)
        Subsystem: Toshiba America Info Systems Device [1179:fb41]
        Kernel modules: iTCO_wdt
00:1f.2 SATA controller [0106]: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] [8086:1e03] (rev 04)
        Subsystem: Toshiba America Info Systems Device [1179:fb41]
        Kernel driver in use: ahci
00:1f.3 SMBus [0c05]: Intel Corporation Panther Point SMBus Controller [8086:1e22] (rev 04)
        Subsystem: Toshiba America Info Systems Device [1179:fb41]
        Kernel modules: i2c-i801
02:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01)
        Subsystem: Realtek Semiconductor Co., Ltd. Device [10ec:8211]
        Kernel driver in use: rtl8192ce
        Kernel modules: rtl8192ce
03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)
        Subsystem: Toshiba America Info Systems Device [1179:fb37]
        Kernel driver in use: r8169
        Kernel modules: r8169

lsmod
Module                  Size  Used by
snd_hda_codec_hdmi     32474  1
snd_hda_codec_realtek 224066  1
joydev                 17693  0
rfcomm                 47604  0
bnep                   18281  2
bluetooth             180104  10 rfcomm,bnep
parport_pc             32866  0
ppdev                  17113  0
arc4                   12529  2
snd_hda_intel          33773  3
snd_hda_codec         127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
snd_hwdep              13668  1 snd_hda_codec
snd_pcm                97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
snd_seq_midi           13324  0
snd_rawmidi            30748  1 snd_seq_midi
snd_seq_midi_event     14899  1 snd_seq_midi
snd_seq                61896  2 snd_seq_midi,snd_seq_midi_event
snd_timer              29990  2 snd_pcm,snd_seq
snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
psmouse                87692  0
serio_raw              13211  0
rtl8192ce              84826  0
rtl8192c_common        75767  1 rtl8192ce
rtlwifi               111202  1 rtl8192ce
mac80211              506816  3 rtl8192ce,rtl8192c_common,rtlwifi
snd                    78855  16 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
sparse_keymap          13890  0
uvcvideo               72627  0
videodev               98259  1 uvcvideo
v4l2_compat_ioctl32    17128  1 videodev
mac_hid                13253  0
mei                    41616  0
wmi                    19256  0
soundcore              15091  1 snd
i915                  472941  3
snd_page_alloc         18529  2 snd_hda_intel,snd_pcm
drm_kms_helper         46978  1 i915
cfg80211              205544  2 rtlwifi,mac80211
drm                   242038  4 i915,drm_kms_helper
i2c_algo_bit           13423  1 i915
video                  19596  1 i915
lp                     17799  0
parport                46562  3 parport_pc,ppdev,lp
r8169                  62099  0
ums_realtek            18248  0
uas                    18180  0
usb_storage            49198  1 ums_realtek

dmesg | grep firmware
[   15.692951] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[   16.240881] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  452.419288] rtl8192c_common:rtl92c_firmware_selfreset(): 8051 reset fail.
[  458.572211] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  465.440640] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  472.337617] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  479.175471] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  485.978582] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  492.764893] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  499.579348] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  506.386934] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  513.209545] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  519.991365] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  526.778375] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  533.629695] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  540.426004] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  547.238125] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  554.024434] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  560.854794] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  567.678160] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  574.494666] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  581.336653] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  588.157710] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  595.221122] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  602.047429] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  608.829534] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  615.639079] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  622.454991] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  629.273231] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  636.056613] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  642.858096] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  649.640753] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  657.184094] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  664.008018] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  670.838639] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  677.675418] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  684.507255] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  691.310994] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  698.095325] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  704.914509] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
[  711.725178] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin

uname -r
3.2.0-29-generic

ifconfig
eth0      Link encap:Ethernet  HWaddr 4c:72:b9:59:6c:61
          inet addr:192.168.0.11  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::4e72:b9ff:fe59:6c61/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4447 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3671147 (3.6 MB)  TX bytes:335133 (335.1 KB)
          Interrupt:42 Base address:0x2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:515 errors:0 dropped:0 overruns:0 frame:0
          TX
packets:515 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:83153 (83.1 KB) TX bytes:83153 (83.1 KB) wlan0 Link encap:Ethernet HWaddr 74:e5:43:32:47:95 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:280 errors:0 dropped:0 overruns:0 frame:0 TX packets:51 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:32958 (32.9 KB) TX bytes:10431 (10.4 KB)
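No answer is quoted in this excerpt, but for readers hitting the same "8051 reset fail" firmware-reload loop from rtl8192ce, a commonly suggested workaround (an assumption on my part, not a confirmed fix for this particular model) is to disable the driver's power-saving features and fall back to software encryption:

    # Persist the module options: ips=0 disables inactive power save,
    # swenc=1 forces software encryption.
    echo "options rtl8192ce ips=0 swenc=1" | sudo tee /etc/modprobe.d/rtl8192ce.conf

    # Reload the driver so the options take effect.
    sudo modprobe -r rtl8192ce
    sudo modprobe rtl8192ce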

    Read the article

  • readonly keyword

    - by nmarun
This is something new that I learned about the readonly keyword. Have a look at the following class: 1: public class MyClass 2: { 3: public string Name { get; set; } 4: public int Age { get; set; } 5:  6: private readonly double Delta; 7:  8: public MyClass() 9: { 10: Initializer(); 11: } 12:  13: public MyClass(string name = "", int age = 0) 14: { 15: Name = name; 16: Age = age; 17: Initializer(); 18: } 19:  20: private void Initializer() 21: { 22: Delta = 0.2; 23: } 24: } I have a couple of public properties and a private readonly member. There are two constructors – one that doesn’t take any parameters and the other takes two parameters to initialize the public properties. I’m also calling the Initializer method in both constructors to initialize the readonly member. Now when I build this, the code breaks and the Error window says: “A readonly field cannot be assigned to (except in a constructor or a variable initializer)” Two things after I read this message: It’s such a negative statement. I’d prefer something like: “A readonly field can be assigned to (or initialized) only in a constructor or through a variable initializer” But in my defense, I AM assigning it in a constructor (only indirectly). All I’m doing is creating a method that does it and calling it in a constructor. Turns out, .net was not ‘frameworked’ this way. We need to have the member initialized directly in the constructor. If you have multiple constructors, you can just use the ‘this’ keyword on all except the default constructor to call the default constructor. This default constructor can then initialize your readonly members. This will ensure you’re not repeating the code in multiple places. A snippet of what I’m talking about can be seen below: 1: public class Person 2: { 3: public int UniqueNumber { get; set; } 4: public string Name { get; set; } 5: public int Age { get; set; } 6: public DateTime DateOfBirth { get; set; } 7: public string InvoiceNumber { get; set; } 8:  9: private readonly string Alpha; 10: private readonly int Beta; 11: private readonly double Delta; 12: private readonly double Gamma; 13:  14: public Person() 15: { 16: Alpha = "FDSA"; 17: Beta = 2; 18: Delta = 3.0; 19: Gamma = 0.0989; 20: } 21:  22: public Person(int uniqueNumber) : this() 23: { 24: UniqueNumber = uniqueNumber; 25: } 26: } See the syntax in line 22 and you’ll know what I’m talking about. So the default constructor gets called before the one in line 22. These are known as constructor initializers and they allow one constructor to call another. The other ‘myth’ I had about readonly members is that you can set its value only once. This was busted as well (I recall Adam and Jamie’s show). Say you’ve initialized the readonly member through a variable initializer. You can over-write this value in any of the constructors any number of times.
1: public class Person 2: { 3: public int UniqueNumber { get; set; } 4: public string Name { get; set; } 5: public int Age { get; set; } 6: public DateTime DateOfBirth { get; set; } 7: public string InvoiceNumber { get; set; } 8:  9: private readonly string Alpha = "asdf"; 10: private readonly int Beta = 15; 11: private readonly double Delta = 0.077; 12: private readonly double Gamma = 1.0; 13:  14: public Person() 15: { 16: Alpha = "FDSA"; 17: Beta = 2; 18: Delta = 3.0; 19: Gamma = 0.0989; 20: } 21:  22: public Person(int uniqueNumber) : this() 23: { 24: UniqueNumber = uniqueNumber; 25: Beta = 3; 26: } 27:  28: public Person(string name, DateTime dob) : this() 29: { 30: Name = name; 31: DateOfBirth = dob; 32:  33: Alpha = ";LKJ"; 34: Gamma = 0.0898; 35: } 36:  37: public Person(int uniqueNumber, string name, int age, DateTime dob, string invoiceNumber) : this() 38: { 39: UniqueNumber = uniqueNumber; 40: Name = name; 41: Age = age; 42: DateOfBirth = dob; 43: InvoiceNumber = invoiceNumber; 44:  45: Alpha = "QWER"; 46: Beta = 5; 47: Delta = 1.0; 48: Gamma = 0.0; 49: } 50: } In the above example, every constructor over-writes the values for the readonly members. This is perfectly valid. There is a possibility that based on the way the object is instantiated, the readonly member will have a different value. Well, that’s all I have for today and read this as it’s on a related topic.
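To make the ordering concrete, here is a condensed sketch (my own example, not from the original post) showing the three places a readonly field can be written, in the order they run when the chained constructor is used:

    public class Chained
    {
        // 1. The variable initializer runs first.
        private readonly int Beta = 1;

        // 2. The default constructor runs next (invoked via ": this()").
        public Chained() { Beta = 2; }

        // 3. Finally this body runs and may overwrite the value again.
        public Chained(bool overwrite) : this()
        {
            if (overwrite) Beta = 3;
        }
    }

So new Chained(true) ends up with Beta equal to 3, even though Beta is readonly and was already assigned twice along the way.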

    Read the article

  • "lo: Disabled Privacy Extensions" and ipv6 disabling?

    - by Smartkid
There are lots of "lo: Disabled Privacy Extensions" messages in /var/log/messages. I googled and found it is IPv6 related, so I tried to disable IPv6. I added the following lines to /etc/sysctl.conf: net.ipv6.conf.all.disable_ipv6=1 net.ipv6.conf.default.disable_ipv6=1 net.ipv6.conf.lo.disable_ipv6=1 and added ipv6 to the blacklist in /etc/modprobe.d/blacklist.conf. After that, I restarted the network with /etc/init.d/networking restart. My question is: ip addr still shows an inet6 address attached to eth0, in a form like inet6 fe80::212:79ff:fecf:edaf/64 scope link. Does this mean my IPv6 is not disabled?
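A sketch of how to verify the situation (these are the standard sysctl paths, not taken from the question). Note that /etc/sysctl.conf is only read at boot, so the settings must be applied by hand or the machine rebooted, and a link-local fe80:: address assigned before the change can linger until then:

    # Apply the sysctl.conf entries now instead of waiting for a reboot.
    sudo sysctl -p

    # 1 means IPv6 is disabled on that interface.
    cat /proc/sys/net/ipv6/conf/all/disable_ipv6
    cat /proc/sys/net/ipv6/conf/eth0/disable_ipv6

Blacklisting the ipv6 module usually has no effect on Ubuntu, because IPv6 is built into the kernel rather than loaded as a module. If the inet6 line survives a reboot, the heavier-handed alternative is to disable it on the kernel command line: add ipv6.disable=1 to GRUB_CMDLINE_LINUX in /etc/default/grub and run sudo update-grub.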

    Read the article

  • How do I install Dan's Guardian on 12.04?

    - by Matt
I'm trying to install Dans Guardian on a virtual machine. The instructions ask me to run the ./configure script and then execute the command make install. The configure script runs fine but the make install throws errors. Making all in src make[2]: Entering directory `/webmin/dansguardian-2.10/src' g++ -DHAVE_CONFIG_H -I. -I.. -D__CONFFILE='"/usr/local/etc/dansguardian/dansguardian.conf"' -D__LOGLOCATION='"/usr/local/var/log/dansguardian/"' -D__PIDDIR='"/usr/local/var/run"' -D__PROXYUSER='"nobody"' -D__PROXYGROUP='"nobody"' -D__CONFDIR='"/usr/local/etc/dansguardian"' -g -O2 -MT dansguardian-fancy.o -MD -MP -MF .deps/dansguardian-fancy.Tpo -c -o dansguardian-fancy.o `test -f 'downloadmanagers/fancy.cpp' || echo './'`downloadmanagers/fancy.cpp downloadmanagers/fancy.cpp: In member function ‘std::string fancydm::timestring(int)’: downloadmanagers/fancy.cpp:507:72: error: ‘snprintf’ was not declared in this scope make[2]: *** [dansguardian-fancy.o] Error 1 make[2]: Leaving directory `/webmin/dansguardian-2.10/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/webmin/dansguardian-2.10' make: *** [all] Error 2 I'm running 12.04 LTS server x64
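The failing line is the compiler saying that fancy.cpp calls snprintf without a declaration in scope; newer g++ releases stopped pulling the stdio headers in transitively, so older source trees like dansguardian 2.10 break. A sketch of the usual fix (the file name comes from the error output; the exact insertion point is my assumption):

    // At the top of src/downloadmanagers/fancy.cpp, with the other includes:
    #include <cstdio>   // declares snprintf for modern g++

Then re-run make. Repeat for any other source file that fails the same way, or consider installing the distribution's dansguardian package, which already carries such patches.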

    Read the article

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
SQL Server is quite remarkable in a bunch of ways. In this post, I’m using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell to come up with the Swedish alphabet. SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don’t have SARGability in play, you need to scan the whole index (or table if you don’t have an index). For example, I can find myself in the phonebook easily, because it’s sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can’t find everyone in my suburb easily, because the phonebook isn’t sorted that way. I can’t even find people who have six letters in their last name, because, again, the book is sorted by LastName, it’s not sorted by LEN(LastName). This is all stuff I’ve looked at before, including in the talk I gave at SQLBits in October 2010. If I try to find everyone whose names start with F, I can do that using a query a bit like: SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F'; Unfortunately, the Query Optimizer doesn’t realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write: SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%'; then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan, in particular, the properties of the Index Seek operator. The ToolTip shows me what I’m after: You’ll see that it does a Seek to find any entries that are at least F, but not yet G. There’s an extra Predicate in there (a Residual Predicate if you like), which checks that each LastName is really LIKE F% – I suppose it doesn’t consider that the Seek Predicate is quite enough – but most of the benefit is seen by its working out the Seek Predicate, filtering to just the “at least F but not yet G” section of the data. This got me curious though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language, there are three extra letters that appear at the end of the alphabet. One of them is ä that appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you’re looking it up in a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious). So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I’d be able to tell what the next letter after F is – just in case it’s not G. It turns out it is… Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn’t find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it’s frustratingly internal to the QO. He’s particularly smart, even if he is from New Zealand. To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I’d start with whichever letter came straight after the number 9.
I figure that this is a cheat’s way of guessing the first letter of the alphabet (but it doesn’t actually work in Unicode – luckily I’m using varchar not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc). I also initialised my $alphabet variable. $collation = 'Finnish_Swedish_CI_AI'; $firstletter = '9'; $alphabet = ''; Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks. Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);" Now I get into the looping. $c = $firstletter; $stillgoing = $true; while ($stillgoing) { I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it: $query = "select col from dbo.collation_test where col like '$($c)%';"; [xml] $pl = get-sqlplan $query "." "tempdb"; At this point, my $pl variable is a scary piece of XML, representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done. $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null); Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable.   if ($stillgoing)   {  $c=$pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");     $alphabet += $c;   } Finally, finishing the loop, dropping the table, and showing my alphabet! } Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;"; $alphabet; When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. Turns out they’re not just “letters with accents”, they’re letters in their own right. I’m sure you gave up reading long ago, and really aren’t that fazed about the idea of doing this using PowerShell. I chose PowerShell because I’d already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this is the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I’m hoping they do something similar for a whole bunch of operations. Oh, and the fact that you know how to find stuff in the IKEA catalogue. Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle, it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I’m not sure how it translates. In Hiragana, the Japanese alphabet starts あ, い, う, え, お, ...
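As a cross-check of the result (my addition, not part of Rob's plan-based trick), you can simply let the collation sort a handful of characters and confirm that the three extra letters land after Z:

    -- Å, Ä and Ö should sort after Z under this collation.
    SELECT c
    FROM (VALUES ('A'), ('F'), ('Z'), ('Å'), ('Ä'), ('Ö')) AS t(c)
    ORDER BY c COLLATE Finnish_Swedish_CI_AI;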

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them. Cross-Site Scripting (XSS) Attacks According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding. Protecting Coming In (Request Validation) In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action.
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script>”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content.
On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This JavaScript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script>   So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and login to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks.
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name="__RequestVerificationToken" type="hidden" value="NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1" /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker tricks you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ).
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { var headerToken = actionContext .Request .Headers .GetValues("__RequestVerificationToken") .FirstOrDefault(); ; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. 
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.
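As a footnote to the window.crypto.getRandomValues() point above, here is what client-side token generation could look like in browsers that support the API. This is a sketch only, since (as the article says) the server still has to issue and verify the token for the synchronizer token pattern to mean anything:

    // Generate 16 cryptographically strong random bytes and hex-encode them.
    var bytes = new Uint8Array(16);
    window.crypto.getRandomValues(bytes);
    var token = Array.prototype.map.call(bytes, function (b) {
        return ('0' + b.toString(16)).slice(-2);
    }).join('');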

    Read the article

  • DTracing a PHPUnit Test: Looking at Functional Programming

    - by cj
Here's a quick example of using DTrace Dynamic Tracing to work out what a PHP code base does. I was reading the article Functional Programming in PHP by Patkos Csaba and wondering how efficient this style of programming is. I thought this would be a good time to fire up DTrace and see what is going on. Since DTrace is "always available" even on production machines (once PHP is compiled with --enable-dtrace), this was easy to do. I have Oracle Linux with the UEK3 kernel and PHP 5.5 with DTrace static probes enabled, as described in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. I installed the Functional Programming sample code and Sebastian Bergmann's PHPUnit. Although PHPUnit is included in the Functional Programming example, I found it easier to separately download and use its phar file: cd ~/Desktop wget -O master.zip https://github.com/tutsplus/functional-programming-in-php/archive/master.zip wget https://phar.phpunit.de/phpunit.phar unzip master.zip I created a DTrace D script functree.d: #pragma D option quiet self int indent; BEGIN { topfunc = $1; } php$target:::function-entry /copyinstr(arg0) == topfunc/ { self->follow = 1; } php$target:::function-entry /self->follow/ { self->indent += 2; printf("%*s %s%s%s\n", self->indent, "->", arg3?copyinstr(arg3):"", arg4?copyinstr(arg4):"", copyinstr(arg0)); } php$target:::function-return /self->follow/ { printf("%*s %s%s%s\n", self->indent, "<-", arg3?copyinstr(arg3):"", arg4?copyinstr(arg4):"", copyinstr(arg0)); self->indent -= 2; } php$target:::function-return /copyinstr(arg0) == topfunc/ { self->follow = 0; } This prints a PHP script function call tree starting from a given PHP function name. This name is passed as a parameter to DTrace, and assigned to the variable topfunc when the DTrace script starts. With this D script, choose a PHP function that isn't recursive, or modify the script to set self->follow = 0 only when all calls to that function have unwound. From looking at the sample FunSets.php code and its PHPUnit test driver FunSetsTest.php, I settled on one test function to trace: function testUnionContainsAllElements() { ... } I invoked DTrace to trace function calls invoked by this test with # dtrace -s ./functree.d -c 'php phpunit.phar \ /home/cjones/Desktop/functional-programming-in-php-master/FunSets/Tests/FunSetsTest.php' \ '"testUnionContainsAllElements"' The core of this command is a call to PHP to run PHPUnit on the FunSetsTest.php script. Outside that, DTrace is called and the PID of PHP is passed to the D script $target variable so the probes fire just for this invocation of PHP. Note the quoting around the PHP function name passed to DTrace. The parameter must have double quotes included so DTrace knows it is a string. The output is: PHPUnit 3.7.28 by Sebastian Bergmann.
......-> FunSetsTest::testUnionContainsAllElements -> FunSets::singletonSet <- FunSets::singletonSet -> FunSets::singletonSet <- FunSets::singletonSet -> FunSets::union <- FunSets::union -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertTrue -> PHPUnit_Framework_Assert::isTrue <- PHPUnit_Framework_Assert::isTrue -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertTrue -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertTrue -> PHPUnit_Framework_Assert::isTrue <- PHPUnit_Framework_Assert::isTrue -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertTrue -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertFalse -> PHPUnit_Framework_Assert::isFalse -> {closure} -> main <- main <- {closure} <- PHPUnit_Framework_Assert::isFalse -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsFalse::matches <- PHPUnit_Framework_Constraint_IsFalse::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertFalse <- FunSetsTest::testUnionContainsAllElements ... Time: 1.85 seconds, Memory: 3.75Mb OK (9 tests, 23 assertions) The periods correspond to the successful tests before and after (and from) the test I was tracing. You can see the function entry ("->") and return ("<-") points. Cross checking with the testUnionContainsAllElements() source code confirms the two singletonSet() calls, one union() call, two assertTrue() calls and finally an assertFalse() call. These assertions have a contains() call as a parameter, so contains() is called before the PHPUnit assertion functions are run. You can see contains() being called recursively, and how the closures are invoked. If you want to focus on the application logic and suppress the PHPUnit function trace, you could turn off tracing when assertions are being checked by adding D clauses checking the entry and exit of assertFalse() and assertTrue(). 
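For example, the suppression could look something like the clauses below (my sketch, not cj's code): bump a counter when an assertion is entered, decrement it on return, and add an extra "&& self->mute == 0" condition to the predicates of the two printf clauses in functree.d.

    /* Mute the tree printer while inside PHPUnit assertions. */
    php$target:::function-entry
    /copyinstr(arg0) == "assertTrue" || copyinstr(arg0) == "assertFalse"/
    {
        self->mute++;
    }

    php$target:::function-return
    /copyinstr(arg0) == "assertTrue" || copyinstr(arg0) == "assertFalse"/
    {
        self->mute--;
    }

Note that the assert call itself still prints its own entry and return lines unless you also reorder the clauses so the muting fires first.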
But if you want to see all of PHPUnit's code flow, you can modify the functree.d code that sets and unsets self-follow, and instead change it to toggle the variable in request-startup and request-shutdown probes: php$target:::request-startup { self->follow = 1 } php$target:::request-shutdown { self->follow = 0 } Be prepared for a large amount of output!

    Read the article

  • JavaScript local alias pattern

Here's a little pattern that is fairly common among JavaScript developers but that is not very well known to C# developers or people doing only occasional JavaScript development. In C#, you can use a using directive to create aliases of namespaces or bring them to the global scope: namespace Fluent.IO { using System; using System.Collections; using SystemIO = System.IO; In JavaScript, the only scoping construct is the function, but it can also be used as a local aliasing...
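Where the excerpt cuts off, the function-as-scope version of the idea looks roughly like this (a reconstruction, not the article's exact code):

    // An immediately-invoked function gives doc and json as cheap local
    // aliases, scoped to this one block, without touching the global scope.
    (function (doc, json) {
        var data = json.parse(doc.getElementById('payload').textContent);
        doc.title = data.title;
    }(document, JSON));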

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
The Joes 2 Pros SQL Server learning series is indeed fun. The Joes 2 Pros series is written for beginners and for anyone who wants to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining a concept as simple as how to open SQL Server Management Studio. Honestly, the book starts with basics that simple, but as it progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, this series focuses on learning the important concepts thoroughly. This book in no way takes shortcuts to explain concepts and, at times, will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series. Introduction to XML Data Type Methods – SQL in Sixty Seconds #015 The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notable of which is the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008: query() – Used to extract XML fragments from an XML data type. value() – Used to extract a single value from an XML document. exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no. modify() – Updates XML data in an XML data type. node() – Shreds XML data into multiple rows (not covered in this blog post). [Detailed Blog Post] | [Quiz with Answer] Introduction to SQL Error Actions – SQL in Sixty Seconds #014 Most people believe that when SQL Server encounters an error severity level 11 or higher the remaining SQL statements will not get executed. In addition, people also believe that if any error severity level of 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16: Statement Termination – The statement with the procedure fails but the code keeps on running to the next statement. Transactions are not affected. Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value. Batch Termination – The entire client call is terminated. XACT_ABORT – (ON = The entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.) [Detailed Blog Post] | [Quiz with Answer] Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013 Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.
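For illustration (the table and the particular hints are my choice, not from the video), a query hint rides along in the OPTION clause at the end of the statement:

    -- Force a serial plan and a fresh compilation for this one query.
    SELECT CustomerID, COUNT(*) AS Orders
    FROM Sales.SalesOrderHeader
    GROUP BY CustomerID
    OPTION (MAXDOP 1, RECOMPILE);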
Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used as a last resort for experienced developers and database administrators to achieve the desired results. [Detailed Blog Post] | [Quiz with Answer] Introduction to Hierarchical Query – SQL in Sixty Seconds #012 A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a Temp Table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly: Anchor query (runs once and the results ‘seed’ the Recursive query) Recursive query (runs multiple times and is the criteria for the remaining results) UNION ALL statement to bind the Anchor and Recursive queries together. INNER JOIN statement to bind the Recursive query to the results of the CTE. [Detailed Blog Post] | [Quiz with Answer] Introduction to SQL Server Security – SQL in Sixty Seconds #011 Let’s get some basic definitions down first. Take the workplace example where “Tom” needs “Read” access to the “Financial Folder”. What are the Securable, Principal, and Permissions from that last sentence? A Securable is a resource that someone might want to access (like the Financial Folder). A Principal is anything that might want to gain access to the securable (like Tom). A Permission is the level of access a principal has to a securable (like Read). [Detailed Blog Post] | [Quiz with Answer] Please leave a comment explaining which one was your favorite video, as that will help me understand what works and what needs improvement. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Code refactoring with Visual Studio 2010 Part-2

    - by Jalpesh P. Vadgama
In a previous post I wrote about the Extract Method code refactoring option. In this post I am going to cover some other code refactoring features of Visual Studio 2010. Renaming variables and methods is one of the most difficult tasks for a developer. Normally we do it like this: first we rename the method or variable, then we find all the references and rework them one by one. This becomes difficult if your variable or method is referenced in many files and many places. But once you use the Refactor menu's Rename command, it becomes a bit easier. I am going to use the same code which I created in my previous post. I am just putting that code here once again for your reference. using System; namespace CodeRefractoring { class Program { static void Main(string[] args) { string firstName = "Jalpesh"; string lastName = "Vadgama"; Print(firstName, lastName); } private static void Print(string firstName, string lastName) { Console.WriteLine(string.Format("FirstName:{0}", firstName)); Console.WriteLine(string.Format("LastName:{0}", lastName)); Console.ReadLine(); } } } Now I want to rename the Print method in this code. To rename the method, select the method name and then select Refactor -> Rename. Once I select the Print method and click on Rename, a dialog box appears like the following. Now I am renaming this Print method to PrintMyName, like the following. Once you click OK, a dialog will appear with a preview of the code, like the following. Once you click Apply, your code will be changed like this: using System; namespace CodeRefractoring { class Program { static void Main(string[] args) { string firstName = "Jalpesh"; string lastName = "Vadgama"; PrintMyName(firstName, lastName); } private static void PrintMyName(string firstName, string lastName) { Console.WriteLine(string.Format("FirstName:{0}", firstName)); Console.WriteLine(string.Format("LastName:{0}", lastName)); Console.ReadLine(); } } } So that's it. This will work across multiple files as well. Hope you liked it. Stay tuned for more. Till then, Happy Programming.

    Read the article

< Previous Page | 208 209 210 211 212 213 214 215 216 217 218 219  | Next Page >