Search Results

Search found 8262 results on 331 pages for 'optimization algorithm'.

  • META Tags in SEO - Should You Use Them?

    Onsite SEO is falling by the wayside and carries less weight with the search engines than it once did, but it may still benefit you to optimize each of your web pages. The search engines use a formula to determine keyword relevancy on each page of your web site. The technical term for this formula is an algorithm, and each engine has its own unique algorithm that it uses to rank pages.

    Read the article

  • What is Google Page Rank and Why is it So Important?

    What exactly is PageRank? It is basically a link analysis algorithm influenced by citation analysis, which dates back to the fifties, when it was pioneered by Eugene Garfield, and which was later built upon by Massimo Marchiori. The algorithm takes a set of hyperlinked documents, weighs each one numerically, and assigns it a score between zero and ten.

    Read the article

  • How can I plot a radius of all reachable points with pathfinding for a Mob (XNA)?

    - by PugWrath
    I am designing a tactical turn-based game. The maps are 2D, but do have varying level layers and blocking objects/terrain. I'm looking for a pathfinding algorithm which will allow me to show an opaque shape representing all of the possible max-distance pixels that a mob can move to, knowing the mob's max pixel distance. Any thoughts on this, or do I just need to write a good pathfinding algorithm and use it to find the cutoff points for any direction in which an obstacle exists?
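    One common way to get that shape (a minimal sketch, not from the original question, and assuming a grid of movement cells rather than raw pixels) is to treat it as a single-source shortest-path problem: flood outward from the mob's position, stop expanding once the accumulated movement cost reaches the mob's budget, and the set of visited cells is exactly the reachable region whose outline you can then render. The class and parameter names below are invented for illustration; a priority queue (Dijkstra) replaces the plain queue if terrain costs vary.

        using System.Collections.Generic;

        static class MovementRange
        {
            // Returns every cell reachable from 'start' within 'maxCost' steps,
            // where 'blocked[x, y]' marks impassable terrain and each step costs 1.
            public static HashSet<(int X, int Y)> ReachableCells(bool[,] blocked, (int X, int Y) start, int maxCost)
            {
                int width = blocked.GetLength(0);
                int height = blocked.GetLength(1);
                var cost = new Dictionary<(int, int), int> { [start] = 0 };
                var frontier = new Queue<(int X, int Y)>();
                frontier.Enqueue(start);

                var steps = new (int dx, int dy)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
                while (frontier.Count > 0)
                {
                    var current = frontier.Dequeue();
                    if (cost[current] == maxCost) continue;          // movement budget used up along this path

                    foreach (var (dx, dy) in steps)
                    {
                        var next = (X: current.X + dx, Y: current.Y + dy);
                        if (next.X < 0 || next.Y < 0 || next.X >= width || next.Y >= height) continue;
                        if (blocked[next.X, next.Y]) continue;       // walls and blocking terrain are never entered

                        int nextCost = cost[current] + 1;
                        if (!cost.TryGetValue(next, out int known) || nextCost < known)
                        {
                            cost[next] = nextCost;
                            frontier.Enqueue(next);
                        }
                    }
                }
                return new HashSet<(int X, int Y)>(cost.Keys);       // every cell the mob can reach this turn
            }
        }

    The outline of that set (for the opaque overlay) is simply the reachable cells that have at least one unreachable neighbour.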

    Read the article

  • What's the most useful piece of logic you use? [closed]

    - by nomaderWhat
    I was just tapping away and automatically made use of the one logic identity I always reach for, which you'll probably guess: (not A) OR (not B) = not (A AND B). I'm curious whether this is the case for just about everyone else, and more curious whether there is a different identity that other programmers rely on that is just as useful, if not more so. I'm really hoping for the latter, because the one above is, well... so boring.
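    For reference, that identity is De Morgan's law, and the refactor it licenses shows up constantly when simplifying guard clauses. A tiny illustrative check (names invented here):

        using System;

        static class DeMorganDemo
        {
            static void Main()
            {
                foreach (bool a in new[] { false, true })
                foreach (bool b in new[] { false, true })
                {
                    // De Morgan: !(a && b) is equivalent to !a || !b for every input.
                    Console.WriteLine($"a={a}, b={b}: {!(a && b)} == {!a || !b}");
                }
            }
        }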

    Read the article

  • SEO News - Mayday One Month On

    What effect has the 'Mayday' change to Google's algorithm had on your long-tail SEO campaign? For the uninitiated, and I am sure there are many, I will just explain what I mean by 'Mayday'. At the end of April / beginning of May, Google made a 'slight' change to its algorithm.

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion. That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you're used to doing performance optimization in most other languages, especially C, C++, and similar. However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be…

    The rules in C# when optimizing code are very different from those in C or C++. Often, they're exactly backwards. For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations often can have huge advantages. If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler. Looking at the memory allocation graph is usually the key for spotting this routine, as it's often "hidden" deep in the call graph. For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i=0; i<numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data. As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;
        // ...
        return bar;

    Normally, if "DataStructure" was a simple data type, I could just allocate it on the stack. However, its constructor, internally, allocated its own memory using new, so this wouldn't eliminate the problem. In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same "data" memory instead of allocating. At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i=0; i<numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn't something I wanted to do unless there was a significant benefit.
    In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes runtime down to 21 minutes. This was such a significant improvement that I felt it was worth the reduction in maintainability.

    In C and C++, it's generally a good idea (for performance) to: reduce the number of memory allocations as much as possible; use fewer, larger memory allocations instead of many smaller ones; and allocate as high up the call stack as possible, and reuse memory.

    I've seen many people try to make similar optimizations in C# code. For good or bad, this is typically not a good idea. The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea. In this scenario, for example, I may have been much better off leaving the original code alone. The reason for this is the garbage collector. The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages. First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code. This is something that should be avoided unless there is a significant reason. Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast. Finally, and most critically, there is a large advantage to having short-lived objects. If you lift a variable out of the loop and reuse the memory, it's much more likely that the object will get promoted to Gen1 (or worse, Gen2). This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later.

    As such, I've found that it's often (though not always) faster to leave memory allocations where you'd naturally place them – deep inside of the call graph, inside of the loops. This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole.

    In C#, I tend to: keep variable declarations in the tightest scope possible, and declare and allocate objects at usage. While this serves some of the same goals (reducing unnecessary allocations, etc.), the aim here is a bit different – it's about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1. It also has the huge advantage of keeping the code very maintainable – objects are used and "released" as soon as possible, which keeps the code very clean. It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time.

    Now – nowhere here am I suggesting that these are hard-and-fast rules that are always true. That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary. In my current project, however, I ran across one of those nasty little pitfalls that's something to keep in mind – interop changes the rules.

    In this case, I was dealing with an API that, internally, used some COM objects, and those COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.
    Even though I was writing nice, clean managed code, the normal managed-code rules for performance no longer apply. After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration. This required going back to a "native programming" mindset for optimization. Lifting these variables and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement. Overall, the lessons here are: always profile if you suspect a performance problem – don't assume any rule is correct, or that any code is efficient, just because it looks like it should be; remember to check memory allocations when profiling, not just CPU cycles; interop scenarios often cause managed code to act very differently from "normal" managed code; and native code can be hidden very cleverly inside of managed wrappers.
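    To make the two mindsets concrete, here is a small invented sketch (not code from the original post): the first shape, allocating in the tightest scope, is the usual default for pure managed code because the short-lived buffers die cheaply in Gen0; the second, lifted-and-reused shape is the one worth reaching for when each allocation secretly crosses into native or COM territory.

        using System.Collections.Generic;

        static class AllocationScopeSketch
        {
            // Managed-code default: allocate in the tightest scope and let the GC
            // reclaim the short-lived buffer each iteration.
            public static double SumSquares(IReadOnlyList<double> values)
            {
                double total = 0;
                foreach (double v in values)
                {
                    var scratch = new double[16];     // fresh per-iteration buffer, dies young
                    scratch[0] = v * v;
                    total += scratch[0];
                }
                return total;
            }

            // Interop-flavoured alternative: if that buffer were really a hidden native
            // or COM allocation, hoisting it out of the loop (as in the post) can win,
            // at the cost of extra coupling and longer-lived state.
            public static double SumSquaresReused(IReadOnlyList<double> values)
            {
                var scratch = new double[16];         // allocated once, reused every iteration
                double total = 0;
                foreach (double v in values)
                {
                    scratch[0] = v * v;
                    total += scratch[0];
                }
                return total;
            }
        }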

    Read the article

  • In WCF How Can I add SAML 2.0 assertion to SOAP Header?

    - by Tone
    I'm trying to add the saml 2.0 assertion node from the soap header example below - I came across the samlassertion type in the .net framework but that looks like it is only for saml 1.1. <S:Header> <To xmlns="http://www.w3.org/2005/08/addressing">https://rs1.greenwaymedical.com:8181/CONNECTGateway/EntityService/NhincProxyXDRRequestSecured</To> <Action xmlns="http://www.w3.org/2005/08/addressing">tns:ProvideAndRegisterDocumentSet-bRequest_Request</Action> <ReplyTo xmlns="http://www.w3.org/2005/08/addressing"> <Address>http://www.w3.org/2005/08/addressing/anonymous</Address> </ReplyTo> <MessageID xmlns="http://www.w3.org/2005/08/addressing">uuid:662ee047-3437-4781-a8d2-ee91bc940ef0</MessageID> <wsse:Security S:mustUnderstand="1"> <wsu:Timestamp xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" wsu:Id="_1"> <wsu:Created>2010-05-26T03:51:57Z</wsu:Created> <wsu:Expires>2010-05-26T03:56:57Z</wsu:Expires> </wsu:Timestamp> <saml2:Assertion xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:exc14n="http://www.w3.org/2001/10/xml-exc-c14n#" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xenc="http://www.w3.org/2001/04/xmlenc#" xmlns:xs="http://www.w3.org/2001/XMLSchema" ID="bd1ecf8d-a6d8-488d-9183-a11227c6a219" IssueInstant="2010-05-26T03:51:57.959Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=SU,O=SAML User,L=Los Angeles,ST=CA,C=US</saml2:Issuer> <saml2:Subject> <saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">UID=kskagerb</saml2:NameID> <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key"> <saml2:SubjectConfirmationData> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEUg..gwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </saml2:SubjectConfirmationData> </saml2:SubjectConfirmation> </saml2:Subject> <saml2:AuthnStatement AuthnInstant="2009-04-16T13:15:39.000Z" SessionIndex="987"> <saml2:SubjectLocality Address="158.147.185.168" DNSName="cs.myharris.net"/> <saml2:AuthnContext> <saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:X509</saml2:AuthnContextClassRef> </saml2:AuthnContext> </saml2:AuthnStatement> <saml2:AttributeStatement> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:subject-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Karl S Skagerberg</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">InternalTest2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:nhin:names:saml:homeCommunityId"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.16.840.1.113883.3.441</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:subject:role"> <saml2:AttributeValue> <hl7:Role 
xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="307969004" codeSystem="2.16.840.1.113883.6.96" codeSystemName="SNOMED_CT" displayName="Public Health" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:purposeofuse"> <saml2:AttributeValue> <hl7:PurposeForUse xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="PUBLICHEALTH" codeSystem="2.16.840.1.113883.3.18.7.1" codeSystemName="nhin-purpose" displayName="Use or disclosure of Psychotherapy Notes" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:resource:resource-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">500000000^^^&amp;1.1&amp;ISO</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> <saml2:AuthzDecisionStatement Decision="Permit" Resource="https://158.147.185.168:8181/SamlReceiveService/SamlProcessWS"> <saml2:Action Namespace="urn:oasis:names:tc:SAML:1.0:action:rwedc">Execute</saml2:Action> <saml2:Evidence> <saml2:Assertion ID="40df7c0a-ff3e-4b26-baeb-f2910f6d05a9" IssueInstant="2009-04-16T13:10:39.093Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=Harris,O=HITS,L=Melbourne,ST=FL,C=US</saml2:Issuer> <saml2:Conditions NotBefore="2009-04-16T13:10:39.093Z" NotOnOrAfter="2009-12-31T12:00:00.000Z"/> <saml2:AttributeStatement> <saml2:Attribute Name="AccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Ref-1234</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="InstanceAccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Instance-1</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> </saml2:Assertion> </saml2:Evidence> </saml2:AuthzDecisionStatement> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#bd1ecf8d-a6d8-488d-9183-a11227c6a219"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue>ONbZqPUyFVPMx4v9vvpJGNB4cao=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>Dm/aW5bB..pF93s=</ds:SignatureValue> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEU..bzqgwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </ds:Signature> </saml2:Assertion> <ds:Signature xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" Id="_2"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsse S"/> </ds:CanonicalizationMethod> <ds:SignatureMethod 
Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#_1"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsu wsse S"/> </ds:Transform> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:SignatureValue> <ds:KeyInfo> <wsse:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0"> <wsse:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">bd1ecf8d-a6d8-488d-9183-a11227c6a219</wsse:KeyIdentifier> </wsse:SecurityTokenReference> </ds:KeyInfo> </ds:Signature> </wsse:Security> </S:Header> I've been researching for days and cannot seem to come up with a straightforward way of doing this in WCF. The web service is running on Glassfish and is soap 1.1, I've tried using all the packaged wcf bindings but have not been able to get them to work. I started down the path of using a MessageInspector, and wrote one but then realized there must be a better way, surely WCF provides some way to insert saml 2.0 assertions. I've made the most progress writing a custom binding - i've been able to get the timestamp and signature nodes in the soap header, but cannot for the life of me figure out the saml assertion. Any ideas? public static System.ServiceModel.Channels.Binding BuildCONNECTCustomBinding() { TransportSecurityBindingElement transportSecurityBindingElement = SecurityBindingElement.CreateCertificateOverTransportBindingElement(MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10); TextMessageEncodingBindingElement textMessageEncodingBindingElement = new TextMessageEncodingBindingElement(MessageVersion.Soap11WSAddressing10, System.Text.Encoding.UTF8); HttpsTransportBindingElement httpsTransportBindingElement = new HttpsTransportBindingElement(); SecurityTokenReferenceType securityTokenReference = new SecurityTokenReferenceType(); BindingElementCollection bindingElementCollection = new BindingElementCollection(); bindingElementCollection.Add(transportSecurityBindingElement); bindingElementCollection.Add(textMessageEncodingBindingElement); bindingElementCollection.Add(httpsTransportBindingElement); CustomBinding cb = new CustomBinding(bindingElementCollection); cb.CreateBindingElements(); return cb; }
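    One hedged possibility (not a confirmed WCF feature for SAML 2.0, just a sketch of the MessageInspector route mentioned above) is to keep the custom binding for transport security and inject a pre-built saml2:Assertion yourself with an IClientMessageInspector, writing it into a wsse:Security header via a custom MessageHeader. The class and member names below are illustrative assumptions, and message signing is not handled here.

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;
        using System.Xml;

        // Writes a wsse:Security header whose body is a caller-supplied saml2:Assertion element.
        class SamlSecurityHeader : MessageHeader
        {
            private readonly XmlElement _assertion;   // pre-built <saml2:Assertion>, e.g. obtained from an STS

            public SamlSecurityHeader(XmlElement assertion) { _assertion = assertion; }

            public override string Name => "Security";
            public override string Namespace =>
                "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";
            public override bool MustUnderstand => true;

            protected override void OnWriteHeaderContents(XmlDictionaryWriter writer, MessageVersion messageVersion)
            {
                _assertion.WriteTo(writer);           // copy the assertion XML verbatim into the header
            }
        }

        // Injects the header into every outgoing request on the channel.
        class SamlHeaderInspector : IClientMessageInspector
        {
            private readonly XmlElement _assertion;
            public SamlHeaderInspector(XmlElement assertion) { _assertion = assertion; }

            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                request.Headers.Add(new SamlSecurityHeader(_assertion));
                return null;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState) { }
        }

    The inspector would then be attached through an IEndpointBehavior (its ApplyClientBehavior adding the inspector to clientRuntime.MessageInspectors) on the client endpoint. Whether the Glassfish service accepts an unsigned, hand-assembled header like this is an open question, so treat it as a starting point rather than a drop-in answer.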

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports. When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to be able to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time. Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution. To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.
They also allow us to easily debug a customers problem; using a simple snapshot generation program, users can generate a query snapshot that could be sent along with a bug report that we can immediately replay on our machines to let us debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just populating a single database. Furthermore, although the code population queries (eg querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being a deterministic algorithm - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot, and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.
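    As a rough illustration of the recording half of that idea (an invented sketch, not Red Gate's actual implementation), the results of an IDataReader can be streamed to a BinaryWriter row by row, with just enough information to rebuild each value on replay:

        using System;
        using System.Data;
        using System.IO;

        static class QuerySnapshotWriter
        {
            // Writes every row of an open IDataReader to a binary stream: column names first,
            // then for each row a marker, and for each field a null flag followed by the value
            // serialized as a string. (A real implementation would preserve native types;
            // strings keep the sketch short.)
            public static void Record(IDataReader reader, BinaryWriter writer)
            {
                writer.Write(reader.FieldCount);
                for (int i = 0; i < reader.FieldCount; i++)
                    writer.Write(reader.GetName(i));             // so replay can answer GetName/GetOrdinal

                while (reader.Read())
                {
                    writer.Write(true);                          // "another row follows"
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        bool isNull = reader.IsDBNull(i);
                        writer.Write(isNull);
                        if (!isNull)
                            writer.Write(Convert.ToString(reader.GetValue(i)));
                    }
                }
                writer.Write(false);                             // end-of-rows marker
            }
        }

    Replay is then the mirror image: an IDataReader implementation whose Read() and GetValue() pull the next values back out of a BinaryReader instead of going to the server.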

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • What are the implications of Nvidia's "the way it's meant to be played"?

    - by Mike Pateras
    I have an AMD Radeon 5850 (about to be 2), and today I read that Rift is a member of Nvidia's "the way it's meant to be played" program. It was suggested that as such the developers would not be speaking with or working directly with AMD for optimization, and that it would be unlikely that Crossfire support would be added until the game's release. Are any of these implications likely? Or does it just mean that Nvidia is working closely with the developers for optimization and marketing support?

    Read the article

  • mod_deflate Supported Encodings for Compression

    - by sparc
    It seems to me that mod_deflate in Apache 2.2 will always return Content-Encoding: gzip and never Content-Encoding: deflate. It was explained to me that although there is a deflate algorithm, mod_deflate is named after a file format, in which the algorithm could be any of gzip, bzip, or pkzip. Of those three, mod_deflate provides gzip. It seems as though gzip is the most popular and widely supported algorithm in web browsers, but I know some web servers and proxies do return Content-Encoding: deflate. Aside from the confusion over the module's name, is it true that mod_deflate will only return Content-Encoding: gzip? Thank you.
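    One quick way to see what a given server actually negotiates (a small illustrative sketch, not part of the original question; the URL is a placeholder) is to send requests advertising different Accept-Encoding values and print the Content-Encoding that comes back:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class EncodingProbe
        {
            static async Task Main()
            {
                using var client = new HttpClient();
                foreach (var encoding in new[] { "gzip", "deflate", "gzip, deflate" })
                {
                    using var request = new HttpRequestMessage(HttpMethod.Get, "http://example.com/");
                    request.Headers.TryAddWithoutValidation("Accept-Encoding", encoding);

                    // Print what the server chose for this Accept-Encoding value.
                    using var response = await client.SendAsync(request);
                    Console.WriteLine($"Accept-Encoding: {encoding} -> Content-Encoding: " +
                        string.Join(", ", response.Content.Headers.ContentEncoding));
                }
            }
        }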

    Read the article

  • What Simple Changes Made the Biggest Improvements to Your Delphi Programs

    - by lkessler
    I have a Delphi 2009 program that handles a lot of data and needs to be as fast as possible and not use too much memory. What small, simple changes have you made to your Delphi code that had the biggest impact on the performance of your program, by noticeably reducing execution time or memory use? Thanks everyone for all your answers. Many great tips. For completeness, I'll post a few important articles on Delphi optimization that I found: Before you start optimizing Delphi code, at About.com; Speed and Size: Top 10 Tricks, also at About.com; and Code Optimization Fundamentals and Delphi Optimization Guidelines at High Performance Delphi, relating to Delphi 7 but still very pertinent.

    Read the article

  • How does braking assist of car racing games work?

    - by Ayush Khemka
    There are a lot of PC car racing games around which have this driving assist that helps brake your car so that you can safely take a turn. In some games it is just an 'assist': it will help your car brake but won't ensure a safe turn. In others, the braking assist will get you safely through the turn. I was wondering what algorithm might be followed to achieve this. A very basic algorithm I could think of is: pre-determine the braking distance of an ideal car for every turn of the track, depending on the radius of the turn, and then start braking the car accordingly. For example, for a turn of less than 90°, the car would start braking automatically 50 m before the start of the turn. A more advanced algorithm, which would ensure a safe turn, could be: pre-determine the speed of the car at the start of each turn, individually for each track, turn and car; also pre-determine the deceleration rate of each car individually, which varies with the car's performance; have the braking assist keep recording the speed of the car at each instant; and start braking the car appropriately so that the car gets to the exact speed needed at the start of the turn. For example, let the speed of a particular car at the start of a turn 43 m in radius be 120 km/h, and let the deceleration rate of the car be 200 km/h². If, at some instant, the speed of the car is 200 km/h, then the car would automatically start braking at 400 m from the start of the turn.
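    The second idea boils down to the standard constant-deceleration formula d = (v² − v_target²) / (2a): each tick, compare the distance remaining to the corner against the distance the car needs to shed its excess speed, and engage the brakes once they match. A small sketch (units in metres and seconds; names and the safety margin are invented for illustration):

        using System;

        static class BrakingAssist
        {
            // Distance needed to slow from currentSpeed to targetSpeed at a constant deceleration.
            // Speeds in m/s, deceleration in m/s^2 (positive).
            public static double BrakingDistance(double currentSpeed, double targetSpeed, double deceleration)
            {
                if (currentSpeed <= targetSpeed) return 0;   // already slow enough for the corner
                return (currentSpeed * currentSpeed - targetSpeed * targetSpeed) / (2 * deceleration);
            }

            // Called every physics tick: brake once the remaining distance to the corner
            // is no longer enough to slow down in time (plus a small safety margin).
            public static bool ShouldBrake(double distanceToCorner, double currentSpeed,
                                           double cornerEntrySpeed, double deceleration,
                                           double safetyMargin = 5.0)
            {
                return distanceToCorner <= BrakingDistance(currentSpeed, cornerEntrySpeed, deceleration) + safetyMargin;
            }
        }

        // Example: a corner that should be entered at 33 m/s (~120 km/h) while travelling
        // at 55 m/s (~200 km/h) with 8 m/s^2 of braking available needs about 121 m, so:
        //   BrakingAssist.ShouldBrake(distanceToCorner: 120, currentSpeed: 55,
        //                             cornerEntrySpeed: 33, deceleration: 8)   // true -> start braking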

    Read the article

  • Is it illegal to rewrite every line of an open source project in a slightly different way, and use it in a closed source project?

    - by Chris Barry
    There is some code which is GPL or LGPL that I am considering using for an iPhone project. If I took that code (JavaScript) and rewrote it in a different language for use on the iPhone would that be a legal issue? In theory the process that has happened is that I have gone through each line of the project, learnt what it is doing, and then reimplemented the ideas in a new language. To me it seems this is like learning how to implement something, but then reimplementing it separately from the original licence. Therefore you have only copied the algorithm, which arguably you could have learnt from somewhere else other than the original project. Does the licence cover the specific implementation or the algorithm as well? EDIT------ Really glad to see this topic create a good conversation. To give a bit more backing to the project, the code involved does some kind of audio analysis. I believe it is non-trivial to learn or implement, although I was prepared to embark on this task (I'm at the level where I can implement an FFT algorithm, and this was going to go beyond that.) It is a fairly low LOC script, so I didn't think it would be too hard to do a straight port. I really like the idea of rereleasing my port as well as using it in the application. I don't see any problem with that, and it would be a great way to give something back to the community. I was going to add a line about not wanting to discuss the moral issues, but I'm quite glad I didn't as it seems to have fired the debate a bit. I still feel a bit odd about using open source code to learn from. Does this mean that anything one learns from an open source project is not allowed to be used in a closed source project? And how long after or different does an implementation have to be to not be considered violation of the licence? Murky! EDIT 2 -------- Follow up question

    Read the article

  • How to find the average color of an image.

    - by Edward Boyle
    Years ago I was the lead developer of a large Scrapbook Web Site. One of the things I implemented was to allow shoppers to find Scrapbook papers and embellishments of like colors ("more like this color"). Below is the base algorithm I wrote to extract the average color from an image. It worked out pretty well. I took the returned values and stored them in an associated table for the products. Yet another algorithm was used to SELECT near matches. This algorithm has turned out to be very handy for me. I have used it for borders and subtle outlined text overlays. I am sure you will find more creative uses for it. Enjoy…

        private Color GetColor(Bitmap bmp)
        {
            // Sum the RGB components of every pixel, then divide by the pixel count.
            // (long sums avoid overflow on large images.)
            long r = 0;
            long g = 0;
            long b = 0;
            for (int i = 0; i < bmp.Width; i++)
            {
                for (int x = 0; x < bmp.Height; x++)
                {
                    Color mColor = bmp.GetPixel(i, x);
                    r += mColor.R;
                    g += mColor.G;
                    b += mColor.B;
                }
            }
            long pixelCount = (long)bmp.Width * bmp.Height;
            return System.Drawing.Color.FromArgb(
                (int)(r / pixelCount),
                (int)(g / pixelCount),
                (int)(b / pixelCount));
        }

    You could also get the RGB values by passing them in by ref – private Color GetColor(ref int r, ref int g, ref int b, Bitmap bmp) – but that is a bit much, as you can simply get them from the return value: mReturnedColor.R, mReturnedColor.G, mReturnedColor.B.

    Read the article

  • Continuous Physics Engine's Collision Detection Techniques

    - by Griffin
    I'm working on a purely continuous physics engine, and I need to choose algorithms for broad and narrow phase collision detection. "Purely continuous" means I never do intersection tests; instead I want to catch every collision before it happens and put each one into a "planned collisions" stack ordered by time of impact (TOI). Broad phase: the only continuous broad-phase method I can think of is encasing each body in a circle and testing whether each circle will ever overlap another. This seems horribly inefficient, however, and lacks any culling. I also have no idea what continuous analogues might exist for today's discrete collision-culling methods such as quad-trees. How might I go about preventing inappropriate and pointless broad tests, the way a discrete engine does? Narrow phase: I've managed to adapt the narrow-phase SAT to a continuous check rather than a discrete one, but I'm sure there are other, better algorithms out there in papers or sites you might have come across. What fast or accurate algorithms do you suggest I use, and what are the advantages and disadvantages of each? Final note: I say techniques and not algorithms because I have not yet decided how I will store different polygons, which might be concave, convex, round, or even have holes. I plan to make that decision based on what the algorithm requires (for instance, if I choose an algorithm that breaks a polygon down into triangles or convex shapes, I will simply store the polygon data in that form).
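    For the swept-circle broad phase described above, one common continuous test is to treat both circles as moving at constant velocity over the frame and solve a quadratic for the earliest time their centre distance equals the sum of their radii. A rough sketch, with assumed names and plain scalar parameters rather than a real vector type:

    // Earliest time of impact of two circles moving at constant velocity, or null if they never touch.
    // Solves |(p2 - p1) + t * (v2 - v1)| = r1 + r2 for the smallest t >= 0.
    static double? CircleTimeOfImpact(
        double p1x, double p1y, double v1x, double v1y, double r1,
        double p2x, double p2y, double v2x, double v2y, double r2)
    {
        double dx = p2x - p1x, dy = p2y - p1y;      // relative position
        double dvx = v2x - v1x, dvy = v2y - v1y;    // relative velocity
        double r = r1 + r2;

        double a = dvx * dvx + dvy * dvy;
        double b = 2.0 * (dx * dvx + dy * dvy);
        double c = dx * dx + dy * dy - r * r;

        if (c <= 0.0) return 0.0;                   // already overlapping
        if (a == 0.0) return null;                  // no relative motion, gap never closes
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return null;                // paths never come within r of each other

        double t = (-b - System.Math.Sqrt(disc)) / (2.0 * a);
        return t >= 0.0 ? t : (double?)null;        // only impacts in the future count
    }

    A list of such TOI values, filtered to those within the current frame, is exactly what feeds the "planned collisions" stack mentioned above.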

    Read the article

  • How to practice object oriented programming?

    - by user1620696
    I've always programmed in procedural languages and currently I'm moving towards object orientation. The main problem I've faced is that I can't see a way to practice object orientation effectively. I'll explain my point. When I learned PHP and C it was pretty easy to practice: it was just a matter of choosing something and thinking about an algorithm for that thing. In PHP, for example, it was a matter of sitting down and thinking: "well, just to practice, let me build an application with an administration area where people can add products". This was pretty easy; it was a matter of thinking of an algorithm to register a user, to log the user in, and to add the products. Combining these with PHP features, it was a good way to practice. Now, in object orientation we have lots of additional things. It's not just a matter of thinking up an algorithm, but of analysing requirements more deeply, writing use cases, figuring out class diagrams, properties and methods, setting up dependency injection, and lots of other things. The main point is that, in the way I've been learning object orientation, a good design seems crucial, while in procedural languages one vague idea was enough. I'm not saying that in procedural languages we can write good software without design, just that for the sake of practicing it is feasible, while in object orientation it seems infeasible to go without a good design, even for practicing. This seems to be a problem, because if each time I want to practice I need to figure out tons of requirements, use cases and so on, it doesn't seem like a good way to get better at object orientation, since it requires me to have a whole idea for an app every time I practice. Because of that, what's a good way to practice object orientation?

    Read the article

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. This is to tell you that I have experience in programming, but not in professional development of business applications. I see a lot of talk on this site about performance. People often debate what would be the most efficient algorithm in C# to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations happen every second in a constant update loop, or low-level systems which other programs rely on, such as OSs and VMs. But for the normal, typical high-level business app, why does performance matter? I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference to the end users of this typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing, do you actually feel the inefficiency? My question is split in two: In practice, does performance matter today in a typical, normal business program? If it does, please give me real-world examples of places in such an application where performance and optimizations are important.
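    As a purely illustrative sketch of the O(n^2) case being asked about (the names and the duplicate-ID framing are assumptions): finding duplicates in a list of 100,000 IDs takes on the order of five billion comparisons with a nested loop, but only a single pass with a hash set. The first version can stall noticeably behind a GUI button; the second will not.

    // Illustrative only: two ways to find duplicate IDs in a list (requires System.Collections.Generic).
    static List<int> DuplicatesQuadratic(IList<int> ids)      // O(n^2): slow for large n
    {
        var dups = new List<int>();
        for (int i = 0; i < ids.Count; i++)
            for (int j = i + 1; j < ids.Count; j++)
                if (ids[i] == ids[j]) { dups.Add(ids[i]); break; }
        return dups;
    }

    static List<int> DuplicatesLinear(IEnumerable<int> ids)   // O(n): effectively instant
    {
        var seen = new HashSet<int>();
        var dups = new List<int>();
        foreach (var id in ids)
            if (!seen.Add(id)) dups.Add(id);                  // Add returns false on a repeated value
        return dups;
    }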

    Read the article

  • String patterns that can be used to filter and group files

    - by Louis Rhys
    One of our applications filters files in a certain directory, extracts some data from them and exports a document built from the extracted data. The algorithm for extracting the data depends on the file, and so far we use regex to select the algorithm to be used; for example .*\.txt will be processed by algorithm A, foo[0-5]\.xml will be processed by algorithm B, etc. However, we now need some files to be processed together. For example, in one case we need two files, foo.*\.xml and bar.*\.xml. Part of the information to be extracted exists in the foo file, and the other part in the bar file. Moreover, we need to make sure the wildcard parts correspond. For example, given the six files foo1.xml, foo23.xml, bar1.xml, bar9.xml, bar23.xml and foo4.xml, I would expect foo1 and bar1 to be identified as a group, and foo23 and bar23 as another group. bar9 and foo4 have no pair, so they will not be processed. Now, since the filter is configured by the user, we need a pattern language that can express the above requirement. I don't think you can express a constraint like that in standard regex: (foo|bar).*\.xml will match all six files above, and we can't identify which file is paired with which. Is there any standard pattern syntax that can express it? Or any idea how to modify regex to support this in a way that can be implemented easily?
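    One hedged sketch of how the pairing could be expressed, not a drop-in answer: keep ordinary regexes but require the user to mark the variable part with a named capture group, then group files whose captured value is identical. All names and patterns below are illustrative:

    // Sketch: group files by the value captured in group "key" of each configured pattern
    // (requires System.Collections.Generic and System.Text.RegularExpressions).
    static Dictionary<string, List<string>> GroupFiles(string[] files, Regex[] patterns)
    {
        var groups = new Dictionary<string, List<string>>();
        foreach (var file in files)
        {
            foreach (var pattern in patterns)
            {
                var m = pattern.Match(file);
                if (!m.Success) continue;
                var key = m.Groups["key"].Value;               // e.g. "1" or "23"
                if (!groups.TryGetValue(key, out var list))
                    groups[key] = list = new List<string>();
                list.Add(file);                                // foo1.xml and bar1.xml end up together
            }
        }
        return groups;                                         // only keys matched by every pattern form a complete group
    }

    With patterns such as ^foo(?<key>\d+)\.xml$ and ^bar(?<key>\d+)\.xml$, the six example files produce the groups {1: foo1.xml + bar1.xml, 23: foo23.xml + bar23.xml}, while bar9.xml and foo4.xml remain incomplete and can be skipped.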

    Read the article

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that pushes down conflicting appointments repeatedly until there are no more conflicts.

    # The appointment list is always sorted on start time
    appointment_list = [
        <Appointment: 10:00 -> 12:00>,
        <Appointment: 11:00 -> 12:30>,
        <Appointment: 13:00 -> 14:00>,
        <Appointment: 13:30 -> 14:30>,
    ]

    Constraints are that appointments cannot be after 15:00 and cannot be before 9:00. This is the naive algorithm:

    for i, app in enumerate(appointment_list):
        for possible_conflict in appointment_list[i+1:]:
            if possible_conflict.start < app.end:
                difference = app.end - possible_conflict.start
                possible_conflict.end += difference
                possible_conflict.start += difference
            else:
                break

    This results in the following resolution, which obviously breaks those constraints, and the last appointment will have to be pushed to the following day:

    appointment_list = [
        <Appointment: 10:00 -> 12:00>,
        <Appointment: 12:00 -> 13:30>,
        <Appointment: 13:30 -> 14:30>,
        <Appointment: 14:30 -> 15:30>,
    ]

    Obviously this is sub-optimal: it performs three appointment moves when the conflict could have been resolved with one. If we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking there should be a sort of edit-distance approach that would calculate the least number of appointments that have to be moved in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should the search for a solution be breadth-first or depth-first? And when do I know that a solution is "good enough"?

    Read the article

  • Obstacle Avoidance steering behavior: how can an entity avoid an obstacle while other forces are acting on the entity?

    - by Prog
    I'm trying to implement the Obstacle Avoidance steering behavior in my 2D game. Currently my approach is to apply a force on the entity in the direction of the normal of the heading, scaled by a number that gets bigger the closer we are to the obstacle. This is supposed to push the entity to the side and make it avoid the obstacle that blocks its way. However, at the same time that my entity tries to avoid an obstacle, it Seeks to a point more or less behind the obstacle (which is the reason it needs to avoid the obstacle in the first place). The Seek algorithm constantly applies a force on the entity that pushes it (more or less) in the direction of the obstacle, while the Obstacle Avoidance algorithm constantly applies a force that pushes the entity away from (more accurately, to the side of) the obstacle. The result is that sometimes the entity successfully avoids the obstacle and sometimes it collides with it, depending on the strength of the avoidance force I'm applying. How can I make sure that a force will succeed in steering the entity in some direction while other forces are acting on it, and still have the motion look natural? I can't allow entities to collide with obstacles when realistically they should be able to avoid them easily, no matter what else they're currently doing. Also, the Obstacle Avoidance algorithm is made exactly for the case where another force is acting on the entity; otherwise it wouldn't be moving and there would be no need to avoid anything. So maybe I'm missing something. Thanks
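    One commonly used way to resolve this, offered as a hedged sketch rather than the definitive fix: combine the steering forces with either a higher weight on avoidance or an outright priority override once an obstacle comes inside a "panic" distance, then truncate the result to the entity's maximum force. The Vector2 type and all names below are assumptions:

    // Sketch: prioritized/weighted blending of Seek and Obstacle Avoidance forces.
    Vector2 CombineSteering(Vector2 seekForce, Vector2 avoidForce,
                            float distanceToObstacle, float panicDistance, float maxForce)
    {
        Vector2 total;
        if (avoidForce != Vector2.Zero && distanceToObstacle < panicDistance)
            total = avoidForce;                        // collision imminent: avoidance wins outright
        else
            total = seekForce + 2.0f * avoidForce;     // otherwise blend, weighting avoidance higher

        // Never exceed what the entity can physically apply, so the motion stays believable.
        if (total.Length() > maxForce)
            total = Vector2.Normalize(total) * maxForce;
        return total;
    }

    Because the final force is truncated, giving avoidance a larger share of the budget (or all of it near the obstacle) guarantees Seek cannot drown it out at the moment it matters.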

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, and fail if the file is still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may, for a number of reasons, not have the benefit of the setup performed by a complex build process, and has to make the best of what it is given. In other words, it may encounter an include of a header file that is not present in the include paths it knows about, and has to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm:

    1. Start in the directory of the including file.
    2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
    3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it. Otherwise move to the parent of the current directory and go to step 2.

    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
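    A rough sketch of that upward-walking search, written in C# purely for illustration (the analyzer itself may well be in another language, and the names are made up); it assumes the header is referenced by a bare file name rather than a relative path:

    using System.IO;

    // Walk up from the including file's directory; at each level, search that directory
    // and everything beneath it for the header. Returns null if the root is reached without a hit.
    static string FindHeader(string includingFileDir, string headerFileName)
    {
        var dir = new DirectoryInfo(includingFileDir);
        while (dir != null)
        {
            foreach (var match in Directory.EnumerateFiles(dir.FullName, headerFileName, SearchOption.AllDirectories))
                return match;                 // first hit wins; could instead rank candidates by path distance
            dir = dir.Parent;                 // not found at this level: move one directory up and retry
        }
        return null;                          // hit the filesystem root without finding the header
    }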

    Read the article

  • Mechanics of reasoning during programming interviews

    - by user129506
    This is not the usual "I don't want to write code during an interview" question; here the assumption is that I do need to write code during an interview (think of rewriting quicksort or mergesort from scratch), and I know how the algorithm works or at least have a basic idea of where to start, i.e. I don't remember the algorithm by heart. I've noticed that even on a whiteboard I always end up writing buggy code or code that doesn't compile. If there's a typo, whatever, I can usually live with that, but when there's a crash due to some unhandled corner case I end up losing confidence in my skills. I realize that interviewers might want to look at how I write code and/or how I solve problems rather than proof-compiling my whiteboard code, but I'd like to ask how I should approach this problem in mental terms, i.e. what mental steps I should follow when writing code in an interview under the two conditions above. There must surely be an agreed series of steps I could follow to avoid getting stuck on particular exception cases (limit cases) that might end up wasting my time and energy rather than letting me focus on the overall algorithm for the general case. I hope I made my point clear.

    Read the article

  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a server/client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol, so that I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was to take their IP address, the client version, and a few other attributes of the client and send a calculated hash to the server, which then took their IP and the version of the protocol the client claimed to be using, calculated the same number, and checked whether they matched. This works OK until you get clients connecting from behind a router, where their internal IP is different from their external IP. My fix for that was to pass the client's internal IP, as used to calculate the hash, along with the authentication message. My fear is that this approach is not secure enough, since I'm passing the very data used to create the "auth hash". Here's an example of what I'm talking about: Client IP: 192.168.1.10, Version: 2.4.5.2, hash = 2*4*5*2 * (1+9+2) * (1+6+8) * (1) * (1+0). The client connects to the server and sends: auth hash, IP, version. The server calculates the same value from that info and accepts or denies the hash. Before I go and come up with another algorithm to prove that a client can provide trusted data to the server (or reuse this existing one), I was wondering whether there are any existing, proven and secure systems out there for generating a hash that both sides can compute from shared knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will contribute data to the server periodically. New clients will be added simply by connecting to the server and "registering". So a client connects to the server for the first time and registers its info (MAC address or some other kind of unique computer identifier); when it connects again, the server recognises it as a previously seen client and associates it with its data in the database.
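    On the "existing, proven" part of the question: the standard building block for this is an HMAC over a per-client secret exchanged at registration, ideally used in a challenge/response so the value sent on the wire is never reused. A hedged sketch on modern .NET (the framing protocol, names, and how the secret is stored are all assumptions, and in practice the connection should still run over TLS):

    using System;
    using System.Security.Cryptography;

    // Server side: issue a fresh random challenge for each connection attempt.
    static byte[] MakeChallenge()
    {
        var challenge = new byte[32];
        RandomNumberGenerator.Fill(challenge);         // unpredictable and never reused
        return challenge;
    }

    // Client side: answer with HMAC-SHA256(challenge) keyed by the shared secret from registration.
    static string ComputeResponse(byte[] challenge, byte[] sharedSecret)
    {
        using var hmac = new HMACSHA256(sharedSecret);
        return Convert.ToBase64String(hmac.ComputeHash(challenge));
    }

    // Server side again: recompute the expected answer and compare in constant time.
    static bool Verify(byte[] challenge, byte[] sharedSecret, string clientResponse)
    {
        var expected = ComputeResponse(challenge, sharedSecret);
        return CryptographicOperations.FixedTimeEquals(
            Convert.FromBase64String(expected),
            Convert.FromBase64String(clientResponse));
    }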

    Read the article
