Search Results

Search found 8550 results on 342 pages for 'datetime operation'.


  • How to deal with transport level security policy with OSB

    - by Jian Liang
    Recently, we received a use case for Oracle Service Bus (OSB) 11g PS4 to consume a web service secured by an HTTP transport-level security policy. The WSDL of the remote web service looks like the following, where the wsp:UsingPolicy and wsp:Policy elements define the security policy: <?xml version='1.0' encoding='UTF-8'?> <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="https://httpsbasicauth" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="https://httpsbasicauth" name="HttpsBasicAuthService"> <wsp:UsingPolicy wssutil:Required="true"/> <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy"> <wsp:ExactlyOne> <wsp:All> <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"> <wsp:Policy> <ns1:TransportToken> <wsp:Policy> <ns1:HttpsToken RequireClientCertificate="false"/> </wsp:Policy> </ns1:TransportToken> <ns1:AlgorithmSuite> <wsp:Policy> <ns1:Basic256/> </wsp:Policy> </ns1:AlgorithmSuite> <ns1:Layout> <wsp:Policy> <ns1:Strict/> </wsp:Policy> </ns1:Layout> </wsp:Policy> </ns1:TransportBinding> <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> <types> <xsd:schema> <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/> </xsd:schema> <xsd:schema> <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/> </xsd:schema> </types> <message name="echoString"> <part name="parameters" element="tns:echoString"/> </message> <message name="echoStringResponse"> <part name="parameters" element="tns:echoStringResponse"/> </message> <portType name="HttpsBasicAuth"> <operation name="echoString"> <input message="tns:echoString"/> <output message="tns:echoStringResponse"/> </operation> </portType> <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth"> <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/> <operation name="echoString"> <soap:operation soapAction=""/> <input> <soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> </binding> <service name="HttpsBasicAuthService"> <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding"> <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/> </port> </service> </definitions> The security assertion in the WSDL (the policy elements above) indicates that this is an HTTP transport-level security policy which requires one-way SSL with default authentication (i.e., basic authentication with username/password). Normally, there are two ways to handle web service security policies with OSB 11g: use WebLogic 9.x policies, or use OWSM. Since OSB doesn't support WebLogic 9.x WSSP transport-level assertions (except for the WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message: [OSB Kernel:398133]The service is based on WSDL with Web Services Security Policies that are not natively supported by Oracle Service Bus. Please select OWSM Policies - From OWSM Policy Store option and attach equivalent OWSM security policy.
    For the business service, you can either add the necessary client policies manually by clicking the Add button, or let Oracle Service Bus automatically pick and add compatible client policies by clicking the Add Compatible button. Unfortunately, when we tried OWSM, we couldn't find an http_token_policy, since OSB PS4 doesn't support the OWSM http_token_policy. We had run into an unsupported situation: no appropriate policy was available from either WebLogic or OWSM. As this security policy requires one-way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at the transport level without using a web service policy. We can simply use OSB to establish the SSL connection and provide the username/password for authentication at the transport level to the remote web service. In this case, the business service within OSB is transparent to the web service policy. However, we still need to deal with the OSB console's complaint about the unsupported security policy, because the failure of WSDL validation prevents the OSB console from moving forward. With help from the OSB Product Management team, we finally came up with the following solutions. Solution 1: OSB PS5. The good news is that the http_token_policy is available in OSB PS5. With OSB PS5, you can simply add the OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5, where the OWSM solution is provided out of the box. But if you are not in a position where upgrading is an immediate option, you might want to consider the two workaround solutions described below. Solution 2: Modifying the WSDL. This solution addresses the OSB console's complaint by removing the security policy from the imported WSDL within OSB. Without the security policy, the OSB console allows the business service to be created based on the modified WSDL. Please bear in mind that the WSDL is modified only on the OSB side via the OSB console; no change is required on the remote web service. The main steps of this solution:
    1. Connect to the OSB console.
    2. Import the remote WSDL into OSB.
    3. Remove the security assertion (the policy elements shown above) from the imported WSDL.
    4. Create a service account. In our sample, we simply take the user weblogic.
    5. Create the business service, check "Basic" for Authentication, and select the created service account.
    6. Make sure that OSB consumes the web service via https.
    This solution requires modifying the WSDL. It is suitable for any OSB version (10g or 11g) prior to PS5 without OWSM. However, modifying the WSDL by hand is troublesome, as it requires the user to remember that the original WSDL was edited. It forces you to make the same edit each time you re-import the service WSDL when changes occur at the service level. It also prevents you from using UDDI to import the WSDL. Solution 3: Using the original WSDL. This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM doesn't handle WSDLs with embedded security assertions, and it doesn't provide a feature to explicitly ignore the embedded policy from a remote WSDL, so in this solution we use OWSM in a tricky way to ignore the embedded policy. The main steps:
    1. Connect to the OSB console.
    2. Import the remote WSDL into OSB.
    3. Create a service account.
    4. Create the business service, check "Basic" for Authentication, and select the created service account.
    5. As the imported WSDL is intact, the OSB Kernel:398133 error is expected; ignore this error message for the moment and navigate to the Policies page of the business service.
    6. Select "From OWSM Policy Store" and click the "Add" button; the list of policies will pop up.
    7. Here is the tricky part: select an arbitrary policy and click "Cancel".
    8. Update and save.
    By clicking the "Cancel" button, we didn't add any OWSM policy to the business service, but the embedded policy is now ignored. Yes, this is tricky. According to the Oracle OSB Product Manager, a future release of OWSM will add a "None" button which allows the embedded policy to be ignored explicitly. This solution keeps the imported WSDL intact, which is its big advantage over Solution 2. It is suitable for OSB 11g domains (versions prior to PS5) with OWSM configured. This blog addressed an unsupported transport-level web service security policy with OSB PS4. To summarize: if you are using OSB PS5 or are in a position to upgrade to PS5, the recommendation is to use the OWSM out-of-the-box transport-level security policy directly. With releases prior to 11g PS5, you can consider Solution 2 or Solution 3, depending on whether OWSM is configured.
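    For reference, here is a minimal sketch of the edit that Solution 2 performs on the imported WSDL. The element names are taken from the WSDL shown above; only the policy-related elements are removed, and everything else stays intact:

        <!-- Delete the policy declaration and definition: -->
        <wsp:UsingPolicy wssutil:Required="true"/>
        <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy"> ... </wsp:Policy>

        <!-- And delete the reference to the policy inside the binding: -->
        <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/>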

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
    So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelper controls (like .TextBoxFor(), etc.) don't bind to model values on a postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a postback operation. To demonstrate, here's an example. I have a small section in a document where I display an editable email address: This is what the form displays on a GET operation, and as expected I get the email value displayed in both the textbox and the plain value display below it, which reflects the value in the model. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address, which needs to be manipulated via the model in the controller code. Here's the Razor markup: <div class="fieldcontainer"> <label> Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small> </label> <div> @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield"}) @Model.User.Email </div> </div> So, I have this form and the user can change their email address. On postback the POST controller code then asks the business layer whether the change is allowed. If it's not, I want to reset the email address back to the old value, which exists in the database and was previously stored. The obvious thing to do would be to modify the model. Here's the controller logic block that deals with that: // did user change email? if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail) { if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; } else // allow email change but require verification by forcing a login user.IsVerified = false; }… model.user = user; return View(model); The logic is straightforward - if the new email address is not valid because it already exists, I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model, which effectively does this: model.user.Email = oldEmail; return View(model); So when I press the Save button after entering my new email address ([email protected]), here's what comes back in the rendered view: Notice that the textbox value and the raw displayed model value are different. The textbox displays the POST value, while the raw value displays the actual model value - and the two are different. This means that MVC renders the textbox value from the POST data rather than from the view data when an HTTP POST is active. Now I don't know about you, but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the controller code if I'm using HtmlHelpers for binding. Updating the model for display purposes in a POST has, in effect - no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation) What should the behavior be? After getting quite a few comments on this post I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed.
    So if an error occurs, the error doesn't mysteriously disappear, getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still, it is a little non-obvious, because of the way you create the UI elements with MVC - it certainly looks like you are binding to the model value: @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield",required="required" }) and so unless one understands a little bit about how the model binder works, this is easy to trip up on. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that, ModelState/POST values provide the display value. Workarounds The default behavior should be fine for 99% of binding scenarios. But if you do need to fix up values based on your model rather than the default POST values, there are a number of ways that you can work around this. Initially, when I ran into this, I couldn't figure out how to set the value using code, and so the simplest solution to me was simply to not use the MVC HTML helper for the specific control and explicitly bind the model via HTML markup and a @Razor expression: <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" /> And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you are also forced to remember the naming conventions that MVC imposes in order for model binding to work properly, which is a pain to remember and set manually (name is the same as the property with . syntax; id replaces dots with underscores). Use the ModelState Some of my original confusion came about because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the postback values submitted on the page that can be mapped to the model. In other words, there's one ModelState entry for each bound property of the model. Each ModelState entry contains a value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation, however, it'll always use the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally, you can actually get it by clearing the ModelState in the controller code: ModelState.Clear(); This clears out all the captured ModelState values and effectively binds to the model. Note this will produce very similar results - in fact, if there are no binding errors, you see exactly the same behavior as when binding from ModelState, because the model has been updated from the ModelState already, and binding to the updated values most likely produces the same values you would get with the POST-back values. The big difference, though, is that any values that couldn't bind - like, say, putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy-handed.
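    To make that concrete, here is a minimal sketch of the heavy-handed approach in a POST action. The action and view model names are hypothetical - they are not from the original post:

        [HttpPost]
        public ActionResult Profile(ProfileViewModel model)   // hypothetical action/model
        {
            // Wipe out all captured POST values so the HtmlHelper
            // controls render from the model instead of ModelState.
            ModelState.Clear();

            // Any fix-ups made to the model from here on will now
            // actually show up in the rendered textboxes.
            model.User.Email = "old@example.com";  // hypothetical fix-up value

            return View(model);
        }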
    You might want to fix up one or two values in a model, but rarely would you want every control on the form to re-render from the model. So, you can also clear out individual values on an as-needed basis: if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; ModelState.Remove("User.Email"); } This allows you to remove a single value from the ModelState, and effectively allows you to replace that value for display from the model. Why? While researching this I came across a post from Microsoft's Brad Wilson, who describes the default binding behavior best in a forum post: The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers. There you have it. It's not the most intuitive behavior, but in hindsight it does make some sense, even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one, but you have to know about some of the innards of ModelState and how it actually works to figure that out. © Rick Strahl, West Wind Technologies, 2005-2012 Posted in ASP.NET MVC

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently, and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of: public class DynamicFoo : DynamicObject { Dictionary<string, object> properties = new Dictionary<string, object>(); public string Bar { get; set; } public DateTime Entered { get; set; } public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; if (!properties.ContainsKey(binder.Name)) return false; result = properties[binder.Name]; return true; } public override bool TrySetMember(SetMemberBinder binder, object value) { properties[binder.Name] = value; return true; } } This class has an internal dictionary member, and I'm exposing this dictionary through dynamic member access by subclassing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type. Strong Typing and Dynamic Casting I now can instantiate and use DynamicFoo in a couple of different ways: Strong typing: DynamicFoo fooExplicit = new DynamicFoo(); var fooVar = new DynamicFoo(); These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above, as it's never used. Using either of these I can access the native properties: DynamicFoo fooExplicit = new DynamicFoo(); // static typing assignments fooVar.Bar = "Barred!"; fooExplicit.Entered = DateTime.Now; // echo back static values Console.WriteLine(fooVar.Bar); Console.WriteLine(fooExplicit.Entered); but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit, I get a compilation error that tells me the property doesn't exist. Again, it's clearly and utterly non-dynamic. Dynamic: dynamic fooDynamic = new DynamicFoo(); fooDynamic, on the other hand, is created as a dynamic type, and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic, like this: DynamicFoo fooExplicit = new DynamicFoo(); dynamic fooDynamic = fooExplicit; Note that dynamic typically doesn't require an explicit cast, as the compiler automatically performs the cast, so there's no need to write 'as dynamic'. Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically.
    A dynamic type will look for members to access or call in two places: using the strongly typed members of the object, and using the IDynamicMetaObjectProvider interface methods to access members. So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows the runtime to determine what action to take when a member is accessed, *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type: dynamic fooDynamic = new DynamicFoo(); // dynamic typing assignments fooDynamic.NewProperty = "Something new!"; fooDynamic.LastAccess = DateTime.Now; // dynamic assigning static properties fooDynamic.Bar = "dynamic barred"; fooDynamic.Entered = DateTime.Now; // echo back dynamic values Console.WriteLine(fooDynamic.NewProperty); Console.WriteLine(fooDynamic.LastAccess); Console.WriteLine(fooDynamic.Bar); Console.WriteLine(fooDynamic.Entered); The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can add members dynamically at runtime. The Alter Ego of IDynamicObject The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two, simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong-typing, compile-time features. You can see this easily in the Visual Studio code editor. As soon as you assign a value to a dynamic, you lose IntelliSense, which means there's no IntelliSense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type, it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type: dynamic fooDynamic = new DynamicFoo(); fooDynamic.NewProperty = "New Property Value"; DynamicFoo foo = fooDynamic; foo.Bar = "Barred"; Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back, we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to write 'as dynamic' or 'as DynamicFoo'. Moral of the Story This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores through an object interface.
    You can create an object that hosts a number of strongly typed properties, then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance - to access the well-known strongly typed properties - and dynamic, for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects, or objects that expose different data stores through an object interface. I'll have more on this in my next post, when I create a customized and extensible Expando object based on DynamicObject. © Rick Strahl, West Wind Technologies, 2005-2012 Posted in CSharp .NET
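    As a concrete recap of the moral above, here is a minimal sketch that mixes both access styles on a single instance. It assumes the DynamicFoo class from the article; the Program wrapper is just scaffolding:

        using System;

        class Program
        {
            static void Main()
            {
                var foo = new DynamicFoo();           // strongly typed reference
                foo.Bar = "Barred";                   // compile-time checked, fast

                dynamic dynFoo = foo;                 // same instance, dynamic reference
                dynFoo.NewProperty = "Cool!";         // routed through TrySetMember at runtime

                DynamicFoo backAgain = dynFoo;        // implicit cast back - same object
                Console.WriteLine(backAgain.Bar);                     // Barred
                Console.WriteLine(((dynamic)backAgain).NewProperty);  // Cool!
            }
        }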

    Read the article

  • Silverlight and PHP nuSOAP communication problem

    - by Ummar
    I am writing a Silverlight application in which I want to call a PHP web service written using NuSOAP. Here is the WSDL of the web service: <?xml version="1.0" encoding="ISO-8859-1" ?> <definitions xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:tns="urn:currencywebservice" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="urn:currencywebservice"> <types> <xsd:schema targetNamespace="urn:currencywebservice"> <xsd:import namespace="http://schemas.xmlsoap.org/soap/encoding/" /> <xsd:import namespace="http://schemas.xmlsoap.org/wsdl/" /> </xsd:schema> </types> <message name="GetAllCurrenciesRequest" /> <message name="GetAllCurrenciesResponse"> <part name="return" type="xsd:string" /> </message> <portType name="currencywebservicePortType"> <operation name="GetAllCurrencies"> <documentation>Get all currencies available</documentation> <input message="tns:GetAllCurrenciesRequest" /> <output message="tns:GetAllCurrenciesResponse" /> </operation> </portType> <binding name="currencywebserviceBinding" type="tns:currencywebservicePortType"> <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http" /> <operation name="GetAllCurrencies"> <soap:operation soapAction="urn:currencywebservice#GetAllCurrencies" style="rpc" /> <input> <soap:body use="literal" namespace="urn:currencywebservice" /> </input> <output> <soap:body use="literal" namespace="urn:currencywebservice" /> </output> </operation> </binding> <service name="currencywebservice"> <port name="currencywebservicePort" binding="tns:currencywebserviceBinding"> <soap:address location="http://localhost/extras/currency/currencyservice.php" /> </port> </service> </definitions> When I call the web service, it throws an exception: "The content type text/html of the response message does not match the content type of the binding (text/xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly". The PHP side of the service is: <?php // Pull in the NuSOAP code require_once('../../lib/tools/nusoap/nusoap.php'); $ns = "urn:currencywebservice"; // Create the server instance $server = new soap_server(); // Initialize WSDL support $server->configureWSDL('currencywebservice', $ns); $server->xml_encoding = "utf-8"; $server->soap_defencoding = "utf-8"; $server->wsdl->schemaTargetNamespace = $ns; $server->register('GetAllCurrencies', array(), array('return' => 'xsd:string'), $ns, $ns."#GetAllCurrencies", 'rpc', 'literal', 'Get all currencies available'); // Define the method as a PHP function function GetAllCurrencies() { return "test return"; } // Use the request to (try to) invoke the service header('Content-Type: text/xml; charset=utf8'); $server->service($HTTP_RAW_POST_DATA); ?> Please help me out - what is causing this problem?

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #051

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 Explanation and Understanding NOT NULL Constraint NOT NULL is an integrity constraint. It does not allow a row to be created where the column contains a NULL value. The most discussed question about NULL is: what is NULL? I will not go into an in-depth analysis of it. Simply put, NULL is unknown or missing data. When NULL is present in database columns, it can affect the integrity of the database. I really do not prefer NULLs in the database unless they are absolutely necessary. Three T-SQL Scripts to Create Primary Keys on a Table I have always enjoyed writing about three topics: constraints and keys, backup and restore, and datetime functions. Primary key constraints prevent duplicate values in columns and provide a unique identifier for each row, and they also create a clustered index on the columns. 2008 Get Numeric Value From Alpha Numeric String – UDF for Get Numeric Numbers Only SQL is great with string operations. Many times, I use T-SQL to do my string operations. Let us see a user-defined function, which I wrote a few days ago, that returns only the numeric values from alphanumeric values. Introduction and Example of UNION and UNION ALL It is very interesting when I get requests from blog readers to re-write my previous articles. I have received a few requests to rewrite my article SQL SERVER – Union vs. Union All – Which is better for performance? with examples. I request you to read my previous article first to understand the concept, and then read this article to understand the same concept with an example. Downgrade Database for Previous Version The main question is how to downgrade from SQL Server 2005 to SQL Server 2000. The answer: not possible. Get Common Records From Two Tables Without Using Join Following is my scenario: suppose Table1 and Table2 have the same column, e.g. Column1. Following are the queries: 1. Select column1, column2 From Table1 2. Select column1 From Table2 I want to find the common records from these tables, but I don't want to use the Join clause, because for that I need to specify the column name for the Join condition. Will you help me to get the common records without using a Join condition? I am using SQL Server 2005. Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2 A year ago I wrote a post, SQL SERVER – Retrieve – Select Only Date Part From DateTime – Best Practice, where I discussed two different methods of getting the date part from a datetime. Introduction to CLR – Simple Example of CLR Stored Procedure CLR is an abbreviation of Common Language Runtime. In SQL Server 2005 and later versions, database objects can be created in the CLR. Stored procedures, functions and triggers can be coded in CLR. CLR is faster than T-SQL in many cases. CLR is mainly used to accomplish tasks which are not possible in T-SQL, or which would use lots of resources. CLR is usually implemented where there are intense string operations, thread management or iteration methods which can be complicated for T-SQL. Implementing CLR also provides more security than an extended stored procedure.
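    As a quick illustration of two of the items above - my own sketch, not the code from the original posts - INTERSECT (available since SQL Server 2005) returns the common records of two queries without a JOIN clause, and a DATEADD/DATEDIFF round trip strips the time part from a datetime:

        -- Common records from Table1 and Table2 without a JOIN
        SELECT Column1 FROM Table1
        INTERSECT
        SELECT Column1 FROM Table2;

        -- Select only the date part from a DATETIME (midnight of today)
        SELECT DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0);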
    2009 Comic Slow Query – SQL Joke Before Presentation After Presentation Enable Automatic Statistic Update on Database In one of the recent projects, I found out that despite putting good indexes and optimizing the query, I could not achieve optimized performance, and I still received an unoptimized response from SQL Server. On examination, I figured out that the culprit was statistics: the database I was trying to optimize had auto update of statistics disabled. Recently Executed T-SQL Query Please refer to the blog post for a query to retrieve recently executed T-SQL queries on a database. Change Collation of Database Column – T-SQL Script – Consolidating Collations – Extention Script At some time in your DBA career, you may find yourself in a position where you sit back and realize that your database collations have somehow run amuck, or you are faced with the ever-annoying CANNOT RESOLVE COLLATION message when trying to join data of varying collation settings. 2010 Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology Everyone dreams of visiting the school and college where they once studied. It is a great feeling to see the college once again - where you spent the wonderful golden years of your time. College time is filled with studies, education, emotions and several plans to build a future. I consider myself fortunate, as I got the opportunity to study at some of the best places in the world. Change Column DataTypes There are times when I feel like writing that I am a day older in SQL Server. In fact, there are many who are looking for a solution that is simple enough. Have you ever searched online for something very simple? I often do, and I enjoy doing things which are straightforward and easy to change. 2011 Three DMVs – sys.dm_server_memory_dumps – sys.dm_server_services – sys.dm_server_registry In this blog post we will see three new DMVs which were introduced in Denali. The DMVs are very simple and there is not much to describe about them. So here is the simple game: I will be asking a question back to you after seeing the result of each of the DMVs, and you help me complete this blog post. A Simple Quiz – T-SQL Brain Trick If you have some time, I strongly suggest you try this quiz out, as it will for sure twist your brain. 2012 List All The Column With Specific Data Types in Database Five years ago I wrote the script SQL SERVER – 2005 – List All The Column With Specific Data Types; when I read it again, it is still very relevant, and I liked it. This is one of those scripts every developer would like to keep handy. I have upgraded the script a bit and included a few additional pieces of information which I believe I should have added from the beginning. It is difficult to visualize the final script when writing it the first time. Find First Non-Numeric Character from String The function PATINDEX has existed for quite a long time in SQL Server, but I hardly see it being used. Well, at least I use it, and I am comfortable using it. Here is a simple script which I use when I have to identify the first non-numeric character. Finding Different ColumnName From Almost Identical Tables Well, here is an interesting example of how we can use the sys.columns catalog view and get the details of a newly added column. I have previously written about EXCEPT over here, which is very similar to MINUS in Oracle.
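    For instance, a one-line sketch of the PATINDEX approach mentioned above - my illustration, not the original script:

        -- Position of the first non-numeric character (0 if the string is all digits)
        SELECT PATINDEX('%[^0-9]%', '12345ABC');  -- returns 6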
Storing Data and Files in Cloud – Dropbox – Personal Technology Tip I thought long and hard about doing a Personal Technology Tips series for this blog.  I have so many tips I’d like to share.  I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance.  The only thing holding me back – which tip to share first?  The first tip obviously has the weight of seeming like the most important.  But this would mean choosing amongst my favorite tricks and shortcuts.  This is a hard task. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • some files just won't install - linux-image-3.5.0-42-generic

    - by jatin
    After using the Software Updater on 12.10, it gives the following details of the failing package operation: installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... (Reading database ... (Reading database ... 217566 files and directories currently installed.) Preparing to replace linux-image-3.5.0-36-generic 3.5.0-36.57~precise1 (using .../linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-36-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-36-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic Preparing to replace linux-image-3.5.0-42-generic 3.5.0-42.65~precise1 (using .../linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb) ...
    locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-42-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-42-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic Preparing to replace memtest86+ 4.20-1.1ubuntu1 (using .../memtest86+_4.20-1.1ubuntu2.1_amd64.deb) ... Unpacking replacement memtest86+ ... dpkg: error processing /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb (--unpack): unable to make backup link of `./boot/memtest86+.bin' before installing new version: Operation not permitted No apport report written because MaxReports is reached already locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb Error in function: Output of df -h:
    Filesystem  Size  Used  Avail  Use%  Mounted on
    /dev/sda6   38G   6.5G  30G    19%   /
    udev        3.5G  4.0K  3.5G   1%    /dev
    tmpfs       1.5G  924K  1.5G   1%    /run
    none        5.0M  4.0K  5.0M   1%    /run/lock
    none        3.6G  296K  3.6G   1%    /run/shm
    none        100M  64K   100M   1%    /run/user
    /dev/sda1   197M  87M   111M   44%   /boot
    /dev/sda5   243G  971M  230G   1%    /home

    Read the article

  • OpenIndiana (illumos): vmxnet3 interface lost on reboot

    - by protomouse
    I want my VMware vmxnet3 interface to be brought up with DHCP on boot. I can manually configure the NIC with: # ifconfig vmxnet3s0 plumb # ipadm create-addr -T dhcp vmxnet3s0/v4dhcp But after creating /etc/dhcp.vmxnet3s0 and rebooting, the interface is down and the logs show: Aug 13 09:34:15 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no Aug 13 09:34:15 neumann vmxnet3s: [ID 715698 kern.notice] vmxnet3s:0: stop() Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no Aug 13 09:34:17 neumann vmxnet3s: [ID 920500 kern.notice] vmxnet3s:0: start() Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(TxRingSize) -> 256 Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxRingSize) -> 256 Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxBufPoolLimit) -> 512 Aug 13 09:34:17 neumann nwamd[491]: [ID 605049 daemon.error] 1: nwamd_set_unset_link_properties: dladm_set_linkprop failed: operation not supported Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x20000) -> no Aug 13 09:34:17 neumann nwamd[491]: [ID 751932 daemon.error] 1: nwamd_down_interface: ipadm_delete_addr failed on vmxnet3s0: Object not found Aug 13 09:34:17 neumann nwamd[491]: [ID 819019 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv4 failed for vmxnet3s0: Operation not supported on disabled object Aug 13 09:34:17 neumann nwamd[491]: [ID 160156 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv6 failed for vmxnet3s0: Operation not supported on disabled object Aug 13 09:34:17 neumann nwamd[491]: [ID 771489 daemon.error] 1: add_ip_address: ipadm_create_addr failed on vmxnet3s0: Operation not supported on disabled object Aug 13 09:34:17 neumann nwamd[491]: [ID 405346 daemon.error] 9: start_dhcp: ipadm_create_addr failed for vmxnet3s0: Operation not supported on disabled object I then tried disabling network/physical:nwam in favour of network/physical:default. This works - the interface is brought up - but physical:default fails and my network services (e.g. NFS) refuse to start.
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
    vmxnet3s0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    vmxnet3s0:1: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    vmxnet3s0:2: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    vmxnet3s0:3: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    vmxnet3s0:4: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    vmxnet3s0:5: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    vmxnet3s0:6: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    vmxnet3s0:7: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    vmxnet3s0:8: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
        inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
    vmxnet3s0: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 9000 index 2
        inet6 ::/0
    # cat /var/svc/log/network-physical\:default.log
    [ Aug 16 09:46:39 Enabled. ]
    [ Aug 16 09:46:41 Executing start method ("/lib/svc/method/net-physical"). ]
    [ Aug 16 09:46:41 Timeout override by svc.startd. Using infinite timeout. ]
    starting DHCP on primary interface vmxnet3s0
    ifconfig: vmxnet3s0: DHCP is already running
    [ Aug 16 09:46:43 Method "start" exited with status 96. ]
    NFS server not running:
    # svcs -xv network/nfs/server
    svc:/network/nfs/server:default (NFS server)
    State: offline since August 16, 2012 09:46:40 AM UTC
    Reason: Service svc:/network/physical:default is not running because a method failed.
    See: http://illumos.org/msg/SMF-8000-GE
    Path: svc:/network/nfs/server:default svc:/milestone/network:default svc:/network/physical:default
    Reason: Service svc:/network/physical:nwam is disabled.
    See: http://illumos.org/msg/SMF-8000-GE
    Path: svc:/network/nfs/server:default svc:/milestone/network:default svc:/network/physical:nwam
    Reason: Service svc:/network/nfs/nlockmgr:default is disabled.
    See: http://illumos.org/msg/SMF-8000-GE
    Path: svc:/network/nfs/server:default svc:/network/nfs/nlockmgr:default
    See: man -M /usr/share/man -s 1M nfsd
    Impact: This service is not running.
    I'm new to the world of Solaris, so any help solving this would be much appreciated. Thanks!

    Read the article

  • readonly keyword

    - by nmarun
    This is something new that I learned about the readonly keyword. Have a look at the following class: 1: public class MyClass 2: { 3: public string Name { get; set; } 4: public int Age { get; set; } 5:  6: private readonly double Delta; 7:  8: public MyClass() 9: { 10: Initializer(); 11: } 12:  13: public MyClass(string name = "", int age = 0) 14: { 15: Name = name; 16: Age = age; 17: Initializer(); 18: } 19:  20: private void Initializer() 21: { 22: Delta = 0.2; 23: } 24: } I have a couple of public properties and a private readonly member. There are two constructors - one that doesn't take any parameters, and one that takes two parameters to initialize the public properties. I'm also calling the Initializer method in both constructors to initialize the readonly member. Now when I build this, the code breaks and the Error window says: "A readonly field cannot be assigned to (except in a constructor or a variable initializer)" Two things after I read this message: It's such a negative statement. I'd prefer something like: "A readonly field can be assigned to (or initialized) only in a constructor or through a variable initializer". But in my defense, I AM assigning it in a constructor (only indirectly). All I'm doing is creating a method that does the assignment and calling that method in a constructor. Turns out, .net was not 'frameworked' this way. We need to have the member initialized directly in the constructor. If you have multiple constructors, you can just use the 'this' keyword on all except the default constructor to call the default constructor. The default constructor can then initialize your readonly members. This ensures you're not repeating the code in multiple places. A snippet of what I'm talking about can be seen below: 1: public class Person 2: { 3: public int UniqueNumber { get; set; } 4: public string Name { get; set; } 5: public int Age { get; set; } 6: public DateTime DateOfBirth { get; set; } 7: public string InvoiceNumber { get; set; } 8:  9: private readonly string Alpha; 10: private readonly int Beta; 11: private readonly double Delta; 12: private readonly double Gamma; 13:  14: public Person() 15: { 16: Alpha = "FDSA"; 17: Beta = 2; 18: Delta = 3.0; 19: Gamma = 0.0989; 20: } 21:  22: public Person(int uniqueNumber) : this() 23: { 24: UniqueNumber = uniqueNumber; 25: } 26: } See the syntax in line 22 and you'll know what I'm talking about. So the default constructor gets called before the one in line 22. These are known as constructor initializers, and they allow one constructor to call another. The other 'myth' I had about readonly members is that you can set their value only once. This was busted as well (I recall Adam and Jamie's show). Say you've initialized the readonly member through a variable initializer. You can overwrite this value in any of the constructors, any number of times.
    1: public class Person 2: { 3: public int UniqueNumber { get; set; } 4: public string Name { get; set; } 5: public int Age { get; set; } 6: public DateTime DateOfBirth { get; set; } 7: public string InvoiceNumber { get; set; } 8:  9: private readonly string Alpha = "asdf"; 10: private readonly int Beta = 15; 11: private readonly double Delta = 0.077; 12: private readonly double Gamma = 1.0; 13:  14: public Person() 15: { 16: Alpha = "FDSA"; 17: Beta = 2; 18: Delta = 3.0; 19: Gamma = 0.0989; 20: } 21:  22: public Person(int uniqueNumber) : this() 23: { 24: UniqueNumber = uniqueNumber; 25: Beta = 3; 26: } 27:  28: public Person(string name, DateTime dob) : this() 29: { 30: Name = name; 31: DateOfBirth = dob; 32:  33: Alpha = ";LKJ"; 34: Gamma = 0.0898; 35: } 36:  37: public Person(int uniqueNumber, string name, int age, DateTime dob, string invoiceNumber) : this() 38: { 39: UniqueNumber = uniqueNumber; 40: Name = name; 41: Age = age; 42: DateOfBirth = dob; 43: InvoiceNumber = invoiceNumber; 44:  45: Alpha = "QWER"; 46: Beta = 5; 47: Delta = 1.0; 48: Gamma = 0.0; 49: } 50: } In the above example, every constructor overwrites the values of the readonly members. This is perfectly valid; depending on the way the object is instantiated, the readonly members can end up with different values. Well, that's all I have for today - and read this as it's on a related topic.
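    As a footnote, a quick sketch of what the constructor chaining means in practice - hypothetical usage code, assuming the Person class above. The variable initializers run first, then the chained default constructor, and finally the overload's body, so the last assignment wins:

        // Order of execution for new Person(42):
        //   1. variable initializers run       (Beta = 15)
        //   2. chained default constructor runs (Beta = 2)
        //   3. the Person(int) body runs        (Beta = 3)
        var person = new Person(42);   // person's readonly Beta field ends up as 3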

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language Commands (TCL) (COMMIT) Commit Transaction In SQL, we use transaction control language commands very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.   Until you commit a transaction: ·         You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit. ·         You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.   Note: Most people think that when you type COMMIT, the data or changes you have made have been written to the data files, but this is wrong. When you type COMMIT, you are saying that your job has been completed; the Oracle engine then verifies that your transaction achieved consistency, and when it finds everything OK it sends a commit message to the user based on the log buffer, not the data buffer. After writing the data to the log buffer, it instructs the data buffer to write the data to the data files. This is how it works.   Before a transaction that modifies data is committed, the following has occurred: ·         Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction. ·         Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before a transaction is committed. ·         The changes have been made to the database buffers of the SGA. These changes may go to disk before a transaction is committed.   Note:   The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some time after the transaction commits.   When a transaction is committed, the following occurs: 1.      The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table. 2.      The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction. 3.      Oracle releases locks held on rows and tables. 4.      
    Oracle marks the transaction complete.   Note:   The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously, so that transactions do not need to wait for the redo to be on disk.   The syntax of the COMMIT statement is:   COMMIT [WORK] [COMMENT 'your comment']; ·         WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Examples Committing an insert: INSERT INTO table_name VALUES (val1, val2); COMMIT WORK; ·         COMMENT is also optional. This clause is supported for backward compatibility. Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Examples The following statement commits the current transaction and associates a comment with it: COMMIT     COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637'; ·         WRITE Clause Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, eliminating the wait for an I/O to the redo log. Use this clause to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write to occur contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Examples To commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement: COMMIT WRITE BATCH; Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user. WAIT | NOWAIT Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput.
With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: With NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior. IMMEDIATE | BATCH Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This operation option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior. ·         FORCE Clause Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction. ·         In a distributed database system, the FORCE string [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN. ·         The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to specify this clause. ·         Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause. Examples Forcing an in doubt transaction. Example The following statement manually commits a hypothetical in-doubt distributed transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to issue this statement. COMMIT FORCE '22.57.53';
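    To tie these pieces together, here is a minimal end-to-end sketch (the table and column names are invented for illustration) showing that COMMIT both publishes the changes to other sessions and erases the transaction's savepoints:

        INSERT INTO orders VALUES (1001, 'PENDING');  -- visible only to this session so far
        SAVEPOINT after_insert;
        UPDATE orders SET status = 'SHIPPED' WHERE order_id = 1001;
        ROLLBACK TO after_insert;                     -- undoes only the UPDATE
        COMMIT;                                       -- the INSERT is now permanent and visible to other sessions
        -- ROLLBACK TO after_insert would now raise an error: COMMIT erased the savepoint.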

    Read the article

  • Unity 3D hangs after power blackout

    - by Yusuf
    Unity 3D was working just fine till we had a blackout in the area (I have not yet bought a UPS :(). I tried the solutions here, but it does not help. I tried issuing a unity --reset and it gives me this output: http://pastebin.com/UW83w5cG What seems to be important (to me) from that log is this part: (compiz:6510): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed ERROR 2012-05-29 21:50:41 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported For the time being, I can use Unity-2D, but I would like the 3D to work as well. Can you please help? Edit: Output when issuing $ unity --replace: http://pastebin.com/sBCPbyrT (compiz:2811): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed ERROR 2012-06-01 20:21:24 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported Initializing unityshell options...done WARN 2012-06-01 20:21:25 unity.screeneffectframebufferobject ScreenEffectFramebufferObject.cpp:167 unsupported internal format Segmentation fault (core dumped)
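    The GConf-CRITICAL assertion in that log suggests Compiz's GConf settings were corrupted by the power loss. One thing worth trying, offered only as a hedged suggestion since the key path can vary between releases, is to unset the Compiz key tree so it regenerates with defaults, then reset Unity again:

        gconftool-2 --recursive-unset /apps/compiz-1
        unity --reset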

    Read the article

  • "/etc/init.d/networking restart" with non-root user

    - by bonchen
    I have a thin client with 112 MB of RAM which boots Ubuntu Server 12.04.1 from a USB drive with Openbox. It is to be used by students to communicate with Cisco equipment, so the students need to be able to reconfigure the network interface and then restart it without a reboot, using the only user, cisco. This is what I have so far:

        adduser cisco
        usermod -a -G dialout cisco
        chown root:cisco /etc/network/interfaces
        chmod 664 /etc/network/interfaces
        chmod u+s /sbin/shutdown
        chmod u+s /sbin/reboot
        chmod u+s /sbin/poweroff
        chmod u+s /run/network/if*
        chmod u+s /sbin/ifdown
        chmod u+s /sbin/ifup

    And when I run /etc/init.d/networking restart as cisco I get:

        *Reconfiguring network interfaces...
        rm: cannot remove `eth0.dhclient': Permission denied
        Failed to send flush request: Operation not permitted
        RTNETLINK answers: Operation not permitted

    Any ideas on how to get this working? Thanks!
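    Worth noting: Linux ignores the setuid bit on shell scripts such as /etc/init.d/networking, so the chmod u+s approach cannot work for that script. The usual alternative is a sudoers rule. A sketch only, assuming the interface is eth0 (add it with visudo):

        cisco ALL=(root) NOPASSWD: /sbin/ifdown eth0, /sbin/ifup eth0, /etc/init.d/networking restart

    The student would then run: sudo /etc/init.d/networking restart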

    Read the article

  • Dlink DWA-556 Access point fails to start on 2.6.35-25 while 2.6.35-24 works. How can I do this with >2.6.35-24?

    - by Azendale
    I'm using hostapd to run an access point with a Dlink DWA-556 wireless N card. However, I can no longer get it to start when I use kernels greater than 2.6.35-24. Here's a log where I ran the uname -a&&hostapd -c <configfile> on the different kernel versions. Linux erikbandersen 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 02:41:37 UTC 2010 x86_64 GNU/Linux Configuration file: hostapd.conf ctrl_interface_group=0 Opening raw packet socket for ifindex 248 BSS count 1, BSSID mask ff:ff:ff:ff:ff:ff (0 bits) SIOCGIWRANGE: WE(compiled)=22 WE(source)=21 enc_capa=0xf nl80211: Added 802.11b mode based on 802.11g information HT40: control channel: 2 secondary channel: 6 RATE[0] rate=10 flags=0x2 RATE[1] rate=20 flags=0x6 RATE[2] rate=55 flags=0x6 RATE[3] rate=110 flags=0x6 RATE[4] rate=60 flags=0x0 RATE[5] rate=90 flags=0x0 RATE[6] rate=120 flags=0x0 RATE[7] rate=180 flags=0x0 RATE[8] rate=240 flags=0x0 RATE[9] rate=360 flags=0x0 RATE[10] rate=480 flags=0x0 RATE[11] rate=540 flags=0x0 Passive scanning not supported Mode: IEEE 802.11g Channel: 2 Frequency: 2417 MHz Flushing old station entries Deauthenticate all stations Using interface wlan1 with hwaddr 1c:bd:b9:d5:e8:3c and ssid 'erikbandersen.com/freewifi' wlan1: Setup of interface done. MGMT (TX callback) ACK Malformed netlink message: len=436 left=256 plen=420 256 extra bytes in the end of netlink message MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb mgmt::auth authentication: STA=3c:4a:92:0e:41:2f auth_alg=0 auth_transaction=1 status_code=0 wep=0 New STA wlan1: STA 3c:4a:92:0e:41:2f IEEE 802.11: authentication OK (open system) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-AUTHENTICATE.indication(3c:4a:92:0e:41:2f, OPEN_SYSTEM) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-DELETEKEYS.request(3c:4a:92:0e:41:2f) authentication reply: STA=3c:4a:92:0e:41:2f auth_alg=0 auth_transaction=2 resp=0 (IE len=0) MGMT (TX callback) ACK mgmt::auth cb wlan1: STA 3c:4a:92:0e:41:2f IEEE 802.11: authenticated mgmt::assoc_req association request: STA=3c:4a:92:0e:41:2f capab_info=0x421 listen_interval=10 Validating WMM IE: OUI 00:50:f2 OUI type 2 OUI sub-type 0 version 1 QoS info 0x0 HT: STA 3c:4a:92:0e:41:2f HT Capabilities Info: 0x102c handle_assoc STA 3c:4a:92:0e:41:2f - no greenfield, num of non-gf stations 1 handle_assoc STA 3c:4a:92:0e:41:2f - 20 MHz HT, num of 20MHz HT STAs 1 hostapd_ht_operation_update current operation mode=0x0 hostapd_ht_operation_update new operation mode=0x7 changes=2 new AID 1 wlan1: STA 3c:4a:92:0e:41:2f IEEE 802.11: association OK (aid 1) MGMT (TX callback) ACK mgmt::assoc_resp cb wlan1: STA 3c:4a:92:0e:41:2f IEEE 802.11: associated (aid 1) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-ASSOCIATE.indication(3c:4a:92:0e:41:2f) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-DELETEKEYS.request(3c:4a:92:0e:41:2f) wlan1: STA 3c:4a:92:0e:41:2f RADIUS: starting accounting session 4DAC8224-00000000 MGMT (TX callback) ACK mgmt::action cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb Signal 2 received - terminating wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-DEAUTHENTICATE.indication(3c:4a:92:0e:41:2f, 1) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-DELETEKEYS.request(3c:4a:92:0e:41:2f) Removing station 3c:4a:92:0e:41:2f hostapd_ht_operation_update current operation mode=0x7 hostapd_ht_operation_update new operation mode=0x0 changes=2 
Flushing old station entries Deauthenticate all stations . Linux erikbandersen 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux Configuration file: hostapd.conf ctrl_interface_group=0 Opening raw packet socket for ifindex 248 BSS count 1, BSSID mask ff:ff:ff:ff:ff:ff (0 bits) SIOCGIWRANGE: WE(compiled)=22 WE(source)=21 enc_capa=0xf nl80211: Added 802.11b mode based on 802.11g information Allowed channel: mode=1 chan=1 freq=2412 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=2 freq=2417 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=3 freq=2422 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=4 freq=2427 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=5 freq=2432 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=6 freq=2437 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=7 freq=2442 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=8 freq=2447 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=9 freq=2452 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=10 freq=2457 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=11 freq=2462 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=1 freq=2412 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=2 freq=2417 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=3 freq=2422 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=4 freq=2427 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=5 freq=2432 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=6 freq=2437 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=7 freq=2442 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=8 freq=2447 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=9 freq=2452 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=10 freq=2457 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=11 freq=2462 MHz max_tx_power=27 dBm HT40: control channel: 2 secondary channel: 6 RATE[0] rate=10 flags=0x2 RATE[1] rate=20 flags=0x6 RATE[2] rate=55 flags=0x6 RATE[3] rate=110 flags=0x6 RATE[4] rate=60 flags=0x0 RATE[5] rate=90 flags=0x0 RATE[6] rate=120 flags=0x0 RATE[7] rate=180 flags=0x0 RATE[8] rate=240 flags=0x0 RATE[9] rate=360 flags=0x0 RATE[10] rate=480 flags=0x0 RATE[11] rate=540 flags=0x0 Passive scanning not supported Mode: IEEE 802.11g Channel: 2 Frequency: 2417 MHz Could not set channel for kernel driver wlan1: Unable to setup interface. My wireless card is listed as 02:00.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01) by lspci. Am I doing it wrong and there's a new way of doing it? I'm holding off upgrading to Natty because of this. What changed between the versions that would cause this? Should I report it as a bug?

    Read the article

  • Bash prompt doesn't print until I interact with console again

    - by durron597
    I don't even know where to begin to diagnose this one. Usually, when a command finishes, the prompt prints itself for the next command. However, that is not happening. Hard to explain with words, I'll just use an example: User@Machine:~$ cp /mnt/mountname/directory/textfile.txt . After waiting several seconds (far too long for this operation on a small file) I press Enter, and see: User@Machine:~$ cp /mnt/mountname/directory/textfile.txt . User@Machine:~$ User@Machine:~$ So clearly the operation had finished, but the prompt didn't display... until I pressed enter, and then BOTH prompts instantly displayed. This error does not happen with commands like cd.

    Read the article

  • Gtk warning when opening Gedit in terminal

    - by dellphi
    Previously, I needed to clear my documents history, so I Googled and found this: http://www.watchingthenet.com/ubuntu-tip-clear-disable-recent-documents.html I followed the steps, and then when I opened gedit in a root terminal, I got this:

        root@dellph1-desktop:/# gedit
        (gedit:8224): GLib-CRITICAL **: g_bookmark_file_load_from_data: assertion `length != 0' failed
        (gedit:8224): Gtk-WARNING **: Attempting to store changes into `/root/.recently-used.xbel', but failed: Failed to rename file '/root/.recently-used.xbel.FP7PPV' to '/root/.recently-used.xbel': g_rename() failed: Operation not permitted
        (gedit:8224): Gtk-WARNING **: Attempting to set the permissions of `/root/.recently-used.xbel', but failed: Operation not permitted
        root@dellph1-desktop:/#

    And the same happens in a user terminal:

        dellph1@dellph1-desktop:~$ gedit
        (gedit:9408): Gtk-CRITICAL **: gtk_accel_label_set_accel_closure: assertion `gtk_accel_group_from_accel_closure (accel_closure) != NULL' failed
        dellph1@dellph1-desktop:~$

    I really hope someone can help with this. Thank you.
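    The tutorial linked above disables the recent-documents history by making .recently-used.xbel immutable, and an immutable file is exactly what makes g_rename() fail with "Operation not permitted", even for root. If that is what was done here (an assumption based on the linked steps), clearing the immutable bit should stop the warnings:

        sudo chattr -i /root/.recently-used.xbel
        chattr -i ~/.recently-used.xbel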

    Read the article

  • Calling Web Services using ADF 11g

    - by James Taylor
    One of the benefits of ADF is the fact that it can use multiple data sources. With SOA playing a big part in today's IT landscape, applications need to be able to utilise this SOA framework to leverage functionality from multiple systems and provide a composite application. ADF provides functionality to expose web services via the ADF Business Component, so if you know how to use Business Components for a database, configuring ADF for web services is much the same. In this example I use an OSB web service that gets a customer. Create a new Fusion Web Application (ADF) and click OK. Provide an Application Name, GetCustomerADF, and click Next. From the Project Technologies move Web Services into the Selected box. Accept the defaults and click Finish. Right-click the Model project and select New. In the Gallery select Web Services –> Web Service Data Control, then click OK. Provide a name, GetCustomerDC, and give the URL endpoint for the Web Service, then click Next. Select the web service operation you want to use for the ADF application; in my example my web service only has one operation. Click Finish. Save your work, File –> Save. The data control has now been created; the next steps create the UI components. In the application created in step 1, find the ViewController project, right-click and choose New. In the Gallery select JSF –> JSF Page. Provide a name for the page, GetCustomer, and ensure that the 'Create as XML Document (*.jspx)' check box is checked. I have selected the page template Oracle Three Column Layout, but you can create a layout of your choice. I only want 2 columns, so I delete the last column by right-clicking the right-hand panel and selecting Delete. Drag the fields you require from the web service data control to the left panel; in my example I only require the Customer ID. When you drag to the panel, select Texts –> ADF Input Text w/Label. In this example I want to search for a customer based on the ID, so once I select the ID I want to execute the request. To do this I need a button: drag the operation object under the fields created in step 15 and select Methods –> ADF Button. You now need to provide the mappings: choose 'Show EL Expression Builder', navigate to the bindings, ADFBindings –> bindings –> parametersIterator –> currentRow, and click OK. Drag and drop the return information; I just want the results shown in a form, displaying all fields. Now it is time to test: right-click the jspx page created in steps 11 – 21 and select Run. A browser should start; enter valid values and test.

    Read the article

  • What changed with timidity, alsa and jack in 11.10?

    - by Dave
    I (just) upgraded from 11.04 to 11.10 and noticed some differences in the behavior of timidity. In 11.04 I used to execute >timidity midifile.midi without running jackd, and thus used ALSA (or PulseAudio?) to produce sound from MIDI files. Having upgraded, this no longer works: the command just freezes if JACK is not running. If JACK is running, it does work, but there is an initial audio glitch (a noise burst at the start of playback, analogous to the sound of a plug being inserted) that I'd rather not have to deal with. All the indications I have are that in 11.10 timidity will only work (albeit glitchily) with JACK on, whereas in 11.04 it did not require this. Is there any way to restore timidity's non-JACK operation in 11.10? Is there a way to get rid of the audio glitch with JACK operation? Overall, what underlying changes in these programs and the audio infrastructure are behind this?
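    One thing worth trying, on the assumption that the upgrade merely changed which output backend timidity autodetects: select the output module explicitly instead of letting it pick one. In TiMidity++ the -O flag chooses the backend, so forcing ALSA output looks like this:

        timidity -Os midifile.midi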

    Read the article

  • MOSS 2010 Deploy Farm Solution using STSADM

    - by H(at)Ni
    Today I've been working on deploying farm solutions to another farm, and I was surprised that it can only be done using STSADM.exe. Below are the steps that I followed to get it to work:

    1. Use the addsolution command and give it the path of the WSP file, something like this:

        stsadm -o addsolution -filename C:\MySolution.wsp

    2. Use the deploysolution command and give the solution name as a parameter, like this:

        stsadm -o deploysolution -name MySolution.wsp -immediate -allowgacdeployment

    If you then encounter an error saying: "The timer job for this operation has been created, but it will fail because the administrative service for this server is not enabled. If the timer job is scheduled to run at a later time, you can run the jobs all at once using stsadm.exe -o execadmsvcjobs. To avoid this problem in the future, enable the Microsoft SharePoint Foundation administrative service, or run your operation through the STSADM.exe command line utility." then use the following command to force the execution of your deployment:

        Start-SPAdminJob

    And that's it, you'll have it working as expected :)
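    Alternatively, as the quoted error text itself suggests, the queued timer jobs can be flushed with stsadm rather than PowerShell:

        stsadm -o execadmsvcjobs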

    Read the article

  • Prevent your Silverlight XAP file from caching in your browser.

    - by mbcrump
    If you work with Silverlight daily then you have run into this problem: your XAP file has been cached in your browser and you have to empty the browser cache to resolve it. If you're using Google Chrome then you typically do the following: go to Options –> Clear Browsing History –> Empty the Cache, and finally click Clear Browsing data. As you can see, this is a lot of unnecessary steps. It is even worse when you have a customer who says, "I can't see the new features you just implemented!" and you realize it's a cached XAP problem. I have been struggling with a way to prevent my XAP file from caching inside of a browser for a while now and decided to implement the following solution:

    - If the Visual Studio debugger is attached, add a unique query string to the source param to force the XAP file to be refreshed.
    - If the Visual Studio debugger is not attached, add the source param as Visual Studio generates it. This covers the case where I forget to remove the above code in my production environment.
    - I want the ASP.NET code to be inline with my .ASPX page. (I do not want a separate code-behind .cs or .vb page attached to the .aspx page.)

    Below is an example of the hosting code generated when you create a new Silverlight project. As a quick refresher, the hard-coded param name="source" specifies the location of your XAP file.

        <form id="form1" runat="server" style="height:100%">
        <div id="silverlightControlHost">
          <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
            <param name="source" value="ClientBin/SilverlightApplication2.xap"/>
            <param name="onError" value="onSilverlightError" />
            <param name="background" value="white" />
            <param name="minRuntimeVersion" value="4.0.50826.0" />
            <param name="autoUpgrade" value="true" />
            <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none">
              <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
            </a>
          </object><iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe></div>
        </form>

    We are going to use a little bit of inline ASP.NET to generate the source param dynamically and prevent the XAP file from caching. Let's look at the completed solution:

        <form id="form1" runat="server" style="height:100%">
        <div id="silverlightControlHost">
          <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
            <%
                string strSourceFile = @"ClientBin/SilverlightApplication2.xap";
                string param;
                if (System.Diagnostics.Debugger.IsAttached)
                {
                    // Debugger attached - force a refresh of the XAP file.
                    param = "<param name=\"source\" value=\"" + strSourceFile + "?" + DateTime.Now.Ticks + "\" />";
                }
                else
                {
                    // Production mode
                    param = "<param name=\"source\" value=\"" + strSourceFile + "\" />";
                }
                Response.Write(param);
            %>
            <param name="onError" value="onSilverlightError" />
            <param name="background" value="white" />
            <param name="minRuntimeVersion" value="4.0.50826.0" />
            <param name="autoUpgrade" value="true" />
            <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none">
              <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
            </a>
          </object><iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe></div>
        </form>

    We assign the location of our XAP file to strSourceFile, and if the debugger is attached, the code appends DateTime.Now.Ticks to the XAP file source, forcing the browser to download the .XAP. If you view the page source of your Silverlight application, you can verify it worked by looking at the param name="source" tag, as shown below:

        <param name="source" value="ClientBin/SilverlightApplication2.xap?634299001187160148" />

    If the debugger is not attached, it uses the standard source tag:

        <param name="source" value="ClientBin/SilverlightApplication2.xap"/>

    At this point you may be asking: how do I prevent my XAP file from being cached in my production app? You have two easy options:

    1) I don't really recommend this approach, but you can force the XAP to be refreshed every time with the following snippet.

        <param name="source" value="ClientBin/SilverlightApplication2.xap?<%=Guid.NewGuid().ToString() %>"/>

    NOTE: You could also substitute Guid.NewGuid().ToString() for anything that creates a random value (I used DateTime.Now.Ticks earlier).

    2) Another solution, which I like even better, involves checking the XAP creation date and appending it to the source param. This method was described by Lars Holm Jenson.

        <%
            string strSourceFile = @"ClientBin/SilverlightApplication2.xap";
            string param;
            if (System.Diagnostics.Debugger.IsAttached)
            {
                param = "<param name=\"source\" value=\"" + strSourceFile + "\" />";
            }
            else
            {
                string xappath = HttpContext.Current.Server.MapPath(@"") + @"\" + strSourceFile;
                DateTime xapCreationDate = System.IO.File.GetLastWriteTime(xappath);
                param = "<param name=\"source\" value=\"" + strSourceFile + "?ignore=" + xapCreationDate.ToString() + "\" />";
            }
            Response.Write(param);
        %>

    As you can see, this problem has been solved. It will work with all web browsers and with stubborn proxy servers that are caching your .XAP. If you enjoyed this article then check out my blog for others like this. You may also want to subscribe to my blog or follow me on Twitter.

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview: Many web service developers use soapUI for tests like smoke tests, unit tests, and load tests, because the free edition is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, the free edition doesn't necessarily make it easy. This feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use the soapUI free edition for dynamic payloads in a simplified example; hopefully this will open the door for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project. For the basics, please review the soapUI documentation: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services; you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI.

    Features in soapUI free edition relating to this topic: soapUI provides a very feature-rich environment that can do many things, provided you are willing to go beyond point and click. For this example, we will be leveraging just two features for our dynamic payload:

    · Test Case Properties
    · Scripting with Groovy

    Basically, we will be using a property as a global variable and manipulating that property with a Groovy script.

    Setting up our property: properties are available throughout soapUI, and here is a snippet from the soapUI website defining the locations:

    · Projects: for handling Project-scoped values, for example a subscription ID
    · TestSuite: for handling TestSuite-scoped values; can be seen as "arguments" to a TestSuite
    · TestCases: for handling TestCase-scoped values; can be seen as "arguments" to a TestCase
    · Properties TestStep: for providing local values/state within a TestCase
    · Local TestStep properties: several TestStep types maintain their own list of properties specific to their functionality: DataSource, DataSink, Run TestCase
    · MockServices: for handling MockService-scoped values/arguments
    · MockResponses: for handling MockResponse-scoped values
    · Global Properties: for handling global properties, optionally from an external source

    For our example, we will define a custom property in a TestCase called SimpleAsyncPayload. The property can be created either in the Custom Properties tab, located at the bottom of the Navigator panel when the TestCase is selected in the Navigator, or under the Properties label in the TestCase editor (screenshots: Navigator Panel; TestCase Editor). You will notice that I set a value of "0" for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;)

    Let's get Groovy. We will now add a new Groovy Script step to the TestCase, called Manipulate Payload (TestCase Editor > Append Step > Groovy Script). Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor and add code to:

    1. Get the current value of the property we created, SimpleAsyncPayload.
    2. Convert the value of the property to an integer.
    3. Increment the value.
    4. Store the incremented value back into the TestCase property SimpleAsyncPayload.

    The script should look something like the screenshot from the original post (Groovy Script Editor – Manipulate Payload); a reconstructed sketch appears at the end of this excerpt. At this point we can test the script by simply running the TestCase (left-click the green triangle in the upper left-hand corner of the TestCase editor). To verify that it ran correctly, we can look at the value of the SimpleAsyncPayload property, which should now be 1 (screenshot: TestCase Editor – Run Results). All that is left to complete the TestCase is to append another step of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select the SimpleAsyncBPELProcessBinding -> process operation (for any other information being requested, simply use the defaults; if you are calling an asynchronous operation, do not add any assertions). We are now in familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), update the request with the necessary information (e.g., callback information for asynchronous operations). We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]) (screenshot: Test Request Editor – Insert Property Value). Your payload should now include the inserted property reference (screenshot: Test Request Editor – Inserted Property Value). Just like before, we are now ready to run the TestCase. If everything goes as expected, we should see a normal response (screenshot: Message Viewer – Results of TestCase Run). We are now set up to run a stress test where the payload changes for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever can be done via soapUI scripting. Hopefully you have found this useful. Happy testing! :)
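    For reference, a minimal reconstruction of the Groovy step described above (a sketch only: the original post showed it as a screenshot, but the four enumerated steps map directly onto soapUI's standard testRunner API):

        // Get the current value of the TestCase property
        def value = testRunner.testCase.getPropertyValue("SimpleAsyncPayload")
        // Convert it to an integer and increment
        int payload = value.toInteger()
        payload++
        // Store the incremented value back into the property
        testRunner.testCase.setPropertyValue("SimpleAsyncPayload", payload.toString())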

    Read the article

  • Wireless internet connection connects but internet does not work (no packets received). Wired does.

    - by Rodney
    When I connect my PC via ethernet cable to my ADSL router it works fine. When I connect via wireless it connects, and the internet will work for a random amount of time and then stop working. It stays connected with a strong signal, but no packets are received. My laptop/iPhone are right next to it and wireless works fine. If I open the Wireless USB status, it says it is connected to my SSID with full strength (54 Mbps; I am 3 metres away from my router) and the activity shows as Packets: 594 SENT and 105 RECEIVED (this goes up VERY slowly). I have tried the following:

    - Turned off antivirus and firewall completely.
    - Tested the wifi signal: I am writing this on my laptop, which is next to my PC and also has full wifi strength.
    - Tried a different wireless adapter: I dug out an old PCI wireless card and it does the exact same thing.
    - Compared all wireless settings to my laptop.
    - I can ping google.com and it replies (sometimes with packet loss).
    - When I reboot the PC it will connect for a minute or two (a random amount of time) and then just stops again.
    - I tried Firefox, IE etc., no joy.
    - I have updated to all the latest versions (Netgear WG111v2) and drivers.
    - Checked the Event Log: nothing unusual.
    - Pinged the router (and can even connect as admin for the few minutes when the internet does work).
    - Changed the MTU down to 1200 using DrTCP.
    - Checked Device Manager for conflicts: none.
    - I ping the router from the PC (192.168.0.10 to 192.168.0.1) and it replies with 4 packets. BUT, on my router admin page (which I access via http on my laptop wirelessly), if I ping 192.168.0.10 all packets time out (pinging my laptop 192.168.0.12 works fine).
    - My router admin page shows the leased IP address for 192.168.0.10 (i.e. it is definitely talking to the router initially).

    Now I am out of ideas; please help. I think it is an OS/software issue, as I have tried 2 different wireless adapters (PCI and USB) with the same result, but all other wireless devices work fine around mine. It's not the firewall. It is getting assigned an IP address correctly by DHCP (my PC gets 192.168.0.10, my laptop .12). As soon as I plug in the ethernet cable it all works fine. Repairing the adapter sometimes helps, but it will always stop working after a random time. The wireless adapter always shows as connected with Excellent signal, but the internet does not work. I am running Windows XP SP3 and have tried a Netgear WG111v2 USB adapter. Thanks in advance!

    UPDATE: The internet seems to be working; it is just sending packets too small or too slowly to work (some small pages load bits of themselves very slowly and then hang).
XP seems to have a networking diagnostic app - here is the output: Last diagnostic run time: 08/30/10 08:16:38 IP Configuration Diagnostic Invalid IP address info Valid IP address detected: 192.168.0.10 IP Layer Diagnostic Corrupted IP routing table info The default route is valid info The loopback route is valid info The local host route is valid info The local subnet route is valid Invalid ARP cache entries action The ARP cache has been flushed Gateway Diagnostic Gateway info The following proxy configuration is being used by IE: Automatically Detect Settings:Disabled Automatic Configuration Script: Proxy Server: Proxy Bypass list: info This computer has the following default gateway entry(ies): 192.168.0.1 info This computer has the following IP address(es): 192.168.0.10 info The default gateway is in the same subnet as this computer info The default gateway entry is a valid unicast address info The default gateway address was resolved via ARP in 1 try(ies) info The default gateway was reached via ICMP Ping in 1 try(ies) info TCP port 80 on host 65.55.12.249 was successfully reached info The Internet host www.microsoft.com was successfully reached info The default gateway is OK DNS Client Diagnostic DNS - Not a home user scenario info Using Web Proxy: no info Resolving name ok for (www.microsoft.com): yes No DNS servers DNS failure HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Passive): Successfully connected to ftp.microsoft.com. info HTTP: Successfully connected to www.microsoft.com. warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out error Could not make an HTTPS connection. info Redirecting user to support call WinSock Diagnostic WinSock status info All base service provider entries are present in the Winsock catalog. info The Winsock Service provider chains are valid. info Provider entry MSAFD Tcpip [TCP/IP] passed the loopback communication test. info Provider entry MSAFD Tcpip [UDP/IP] passed the loopback communication test. info Provider entry RSVP UDP Service Provider passed the loopback communication test. info Provider entry RSVP TCP Service Provider passed the loopback communication test. info Connectivity is valid for all Winsock service providers. Wireless Diagnostic Wireless - Service disabled Wireless - User SSID action User input required: Specify network name or SSID Wireless - First time setup info The Wireless Network name (SSID) to which the user would like to connect = RodSof Wifi. 
Wireless - Radio off info Valid IP address detected: 192.168.0.10 Wireless - Out of range Wireless - Hardware issue Wireless - Novice user Wireless - Ad-hoc network Wireless - Less preferred Wireless - 802.1x enabled Wireless - Configuration mismatch Wireless - Low SNR Network Adapter Diagnostic Network location detection info Using home Internet connection Network adapter identification info Network connection: Name=Local Area Connection 2, Device=Realtek RTL8168C(P)/8111C(P) PCI-E Gigabit Ethernet NIC, MediaType=LAN, SubMediaType=LAN info Network connection: Name=Wireless USB, Device=NETGEAR WG111v2 54Mbps Wireless USB 2.0 Adapter, MediaType=LAN, SubMediaType=WIRELESS info Both Ethernet and Wireless connections available, prompting user for selection action User input required: Select network connection info Wireless connection selected Network adapter status info Network connection status: Connected HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Active): Successfully connected to ftp.microsoft.com. warn HTTP: Error 12007 connecting to www.microsoft.com: The server name or address could not be resolved warn HTTP: Error 12002 connecting to www.hotmail.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out error Could not make an HTTP connection. error Could not make an HTTPS connection.

    Read the article

  • How to commit a file conversion?

    - by l0b0
    Say you've committed a file of type foo in your favorite vcs:

        $ vcs add data.foo
        $ vcs commit -m "My data"

    After publishing you realize there's a better data format, bar. To convert, you can use one of these solutions:

        $ vcs mv data.foo data.bar
        $ vcs commit -m "Preparing to use format bar"
        $ foo2bar --output data.bar data.bar
        $ vcs commit -m "Actual foo to bar conversion"

    or

        $ foo2bar --output data.foo data.foo
        $ vcs commit -m "Converted data to format bar"
        $ vcs mv data.foo data.bar
        $ vcs commit -m "Renamed to fit data type"

    or

        $ foo2bar --output data.bar data.foo
        $ vcs rm data.foo
        $ vcs add data.bar
        $ vcs commit -m "Converted data to format bar"

    In the first two cases the conversion is not an atomic operation and the file extension is "lying" in the first commit. In the last case the conversion will not be detected as a move operation, so as far as I can tell it'll be difficult to trace the file history across the commit. Although I'd instinctively prefer the last solution, I can't help thinking that tracing history should be given very high priority in version control. What is the best thing to do here?

    Read the article

  • How to design database having multiple interrelated entities

    - by Sharath Chandra
    I am designing a new system which is more of a help system for core applications in banks or the healthcare sector. Given the nature of the system, this is not a heavily transaction-oriented system but a read-intensive one. Within this application I have multiple entities which are related to each other. For example, assume the following entities in the system:

    - User
    - Training
    - Regulations

    Each of these entities has an M:N relationship with each of the others. Assuming a standard RDBMS, the design may involve many relationship tables, each containing the relationships between one pair of entities ("User_Training", "User_Regulations", "Training_Regulations"). This design is limiting, since I have more than 3 entities in the system and maintaining the relationship graph is difficult this way. The most frequently used operation is "given an entity, get me all the related entities", and I need to design the database so that this operation is relatively inexpensive. What are the different recommendations for modelling this kind of database?
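    One commonly suggested shape for this, sketched below with invented names (a sketch, not a definitive answer): collapse the pairwise junction tables into a single typed entity table plus one edge table, so "all related entities" becomes a single indexed join regardless of how many entity types exist.

        CREATE TABLE entity (
          entity_id   INTEGER PRIMARY KEY,
          entity_type VARCHAR(20) NOT NULL,   -- 'USER', 'TRAINING', 'REGULATION', ...
          name        VARCHAR(100) NOT NULL
        );

        CREATE TABLE entity_link (
          from_id INTEGER NOT NULL REFERENCES entity (entity_id),
          to_id   INTEGER NOT NULL REFERENCES entity (entity_id),
          PRIMARY KEY (from_id, to_id)
        );

        -- "given an entity, get me all the related entities":
        SELECT e.*
          FROM entity_link l
          JOIN entity e ON e.entity_id = l.to_id
         WHERE l.from_id = :given_entity_id;

    Rows can be stored in both directions (or the query unioned over from_id/to_id) if the relationships are symmetric; the trade-off is that per-relationship attributes and type-specific constraints are harder to enforce than with dedicated junction tables.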

    Read the article

  • Why you need to learn async in .NET

    - by PSteele
    I had an opportunity to teach a quick class yesterday about what’s new in .NET 4.0.  One of the topics was the TPL (Task Parallel Library) and how it can make async programming easier.  I also stressed that this is the direction Microsoft is going with for C# 5.0 and learning the TPL will greatly benefit their understanding of the new async stuff.  We had a little time left over and I was able to show some code that uses the Async CTP to accomplish some stuff, but it wasn’t a simple demo that you could jump in to and understand so I thought I’d thrown one together and put it in a blog post. The entire solution file with all of the sample projects is located here. A Simple Example Let’s start with a super-simple example (WindowsApplication01 in the solution). I’ve got a form that displays a label and a button.  When the user clicks the button, I want to start displaying the current time for 15 seconds and then stop. What I’d like to write is this: lblTime.ForeColor = Color.Red; for (var x = 0; x < 15; x++) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); Thread.Sleep(1000); } lblTime.ForeColor = SystemColors.ControlText; (Note that I also changed the label’s color while counting – not quite an ILM-level effect, but it adds something to the demo!) As I’m sure most of my readers are aware, you can’t write WinForms code this way.  WinForms apps, by default, only have one thread running and it’s main job is to process messages from the windows message pump (for a more thorough explanation, see my Visual Studio Magazine article on multithreading in WinForms).  If you put a Thread.Sleep in the middle of that code, your UI will be locked up and unresponsive for those 15 seconds.  Not a good UX and something that needs to be fixed.  Sure, I could throw an “Application.DoEvents()” in there, but that’s hacky. The Windows Timer Then I think, “I can solve that.  I’ll use the Windows Timer to handle the timing in the background and simply notify me when the time has changed”.  Let’s see how I could accomplish this with a Windows timer (WindowsApplication02 in the solution): public partial class Form1 : Form { private readonly Timer clockTimer; private int counter;   public Form1() { InitializeComponent(); clockTimer = new Timer {Interval = 1000}; clockTimer.Tick += UpdateLabel; }   private void UpdateLabel(object sender, EventArgs e) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); counter++; if (counter == 15) { clockTimer.Enabled = false; lblTime.ForeColor = SystemColors.ControlText; } }   private void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; counter = 0; clockTimer.Start(); } } Holy cow – things got pretty complicated here.  I use the timer to fire off a Tick event every second.  Inside there, I can update the label.  Granted, I can’t use a simple for/loop and have to maintain a global counter for the number of iterations.  And my “end” code (when the loop is finished) is now buried inside the bottom of the Tick event (inside an “if” statement).  I do, however, get a responsive application that doesn’t hang or stop repainting while the 15 seconds are ticking away. But doesn’t .NET have something that makes background processing easier? 
The BackgroundWorker Next I try .NET’s BackgroundWorker component – it’s specifically designed to do processing in a background thread (leaving the UI thread free to process the windows message pump) and allows updates to be performed on the main UI thread (WindowsApplication03 in the solution): public partial class Form1 : Form { private readonly BackgroundWorker worker;   public Form1() { InitializeComponent(); worker = new BackgroundWorker {WorkerReportsProgress = true}; worker.DoWork += StartUpdating; worker.ProgressChanged += UpdateLabel; worker.RunWorkerCompleted += ResetLabelColor; }   private void StartUpdating(object sender, DoWorkEventArgs e) { var workerObject = (BackgroundWorker) sender; for (int x = 0; x < 15; x++) { workerObject.ReportProgress(0); Thread.Sleep(1000); } }   private void UpdateLabel(object sender, ProgressChangedEventArgs e) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); }   private void ResetLabelColor(object sender, RunWorkerCompletedEventArgs e) { lblTime.ForeColor = SystemColors.ControlText; }   private void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; worker.RunWorkerAsync(); } } Well, this got a little better (I think).  At least I now have my simple for/next loop back.  Unfortunately, I’m still dealing with event handlers spread throughout my code to co-ordinate all of this stuff in the right order. Time to look into the future. The async way Using the Async CTP, I can go back to much simpler code (WindowsApplication04 in the solution): private async void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; for (var x = 0; x < 15; x++) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); await TaskEx.Delay(1000); } lblTime.ForeColor = SystemColors.ControlText; } This code will run just like the Timer or BackgroundWorker versions – fully responsive during the updates – yet is way easier to implement.  In fact, it’s almost a line-for-line copy of the original version of this code.  All of the async plumbing is handled by the compiler and the framework.  My code goes back to representing the “what” of what I want to do, not the “how”. I urge you to download the Async CTP.  All you need is .NET 4.0 and Visual Studio 2010 sp1 – no need to set up a virtual machine with the VS2011 beta (unless, of course, you want to dive right in to the C# 5.0 stuff!).  Starting playing around with this today and see how much easier it will be in the future to write async-enabled applications.

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into the warehouse. The data I was importing was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the inserts, updates, and deletes in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in the data warehouse, then to run a stored proc in the warehouse (the MERGE statement) that took the rows from the working table and updated the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

        CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran
            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;
            delete from Integration.MergePolicy
            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each run of the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were getting inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much: 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was beginning to suspect that my problem was because the work table was stored as a heap.) Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting.

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as heaps, but I haven't, and my original assumption that a heap table with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table would have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long; instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of the pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
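    For reference, the fix described above amounts to something like the following (the index name is invented; the post doesn't show the exact statement):

        TRUNCATE TABLE Integration.MergePolicy;
        CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId ON Integration.MergePolicy (PolicyId);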

    Read the article

  • Ubuntu install doesn't recognize Kingston SSDNow SVP200 90G

    - by Quique Garcia
    I'm trying to install Ubuntu 12.04 on my new Kingston SSDNow SVP200S3/90G, but the installer doesn't list this disk as an installation target. However, gparted lets me do any partitioning operation on it correctly. I've tried with Ubuntu Studio and Ubuntu Desktop on different machines: a Lenovo ThinkPad T400, a Lenovo ThinkStation C20, and a MoBo ASUS Crosshair V Formula. The SATA controller in all machines has been set to both AHCI and IDE operation, and I've tried different SATA ports, with no luck. The firmware on the SSD has been updated to V.503xxx (the latest on the Kingston page). Any help is appreciated, regards!

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >