Search Results

Search found 32185 results on 1288 pages for 'row level security'.

  • Can't connect to SSL web service with WS-Security using PHP SOAP extension - certificate, complex WSDL

    - by BillF
    Using the PHP5 SOAP extension I have been unable to connect to a web service that has an https endpoint, with a client certificate and using WS-Security, although I can connect using soapUI with the exact same WSDL and client certificate, and obtain the normal response to the request. There is no HTTP authentication and no proxy is involved. The message I get is 'Could not connect to host'. I have been able to verify that I am NOT hitting the host server. (Earlier I wrongly said that I was hitting the server.)

    The self-signed client SSL certificate is a .pem file converted by openssl from a .p12 keystore, which in turn was converted by keytool from a .jks keystore having a single entry consisting of a private key and client certificate. In soapUI I did not need to supply a server private certificate; the only two files I gave it were the WSDL and the pem. I did have to supply the pem and its passphrase to be able to connect.

    I am speculating that, despite the error message, my problem might actually be in the formation of the XML request rather than the SSL connection itself. The WSDL I have been given has nested complex types. The PHP server is on my Windows XP laptop with IIS. The code, data values and WSDL extracts are shown below. (The WSSoapClient class simply extends SoapClient, adding a WS-Security UsernameToken header with mustUnderstand = true and including a nonce, both of which the soapUI call had required.)

    Would so much appreciate any help. I'm a newbie thrown in at the deep end, and how! I have done vast amounts of Googling on this over many days, following many suggestions, and have read Pro PHP by Kevin McArthur. An attempt to use classmaps in place of nested arrays also fell flat.

    The code:

    ```php
    class STEeService {
        public function invokeWebService(array $connection, $operation, array $request) {
            try {
                $localCertificateFilespec = $connection['localCertificateFilespec'];
                $localCertificatePassphrase = $connection['localCertificatePassphrase'];
                $sslOptions = array(
                    'ssl' => array(
                        'local_cert' => $localCertificateFilespec,
                        'passphrase' => $localCertificatePassphrase,
                        // note: the original code used 'allow_self-signed'; the real
                        // context option is 'allow_self_signed' (underscores only)
                        'allow_self_signed' => true,
                        'verify_peer' => false
                    )
                );
                $sslContext = stream_context_create($sslOptions);
                $clientArguments = array(
                    'stream_context' => $sslContext,
                    'local_cert' => $localCertificateFilespec,
                    'passphrase' => $localCertificatePassphrase,
                    'trace' => true,
                    'exceptions' => true,
                    'encoding' => 'UTF-8',
                    'soap_version' => SOAP_1_1
                );
                $oClient = new WSSoapClient($connection['wsdlFilespec'], $clientArguments);
                $oClient->__setUsernameToken($connection['username'], $connection['password']);
                return $oClient->__soapCall($operation, $request);
            } catch (Exception $e) {
                // the second Exception constructor argument must be an int code, not "\n"
                throw new Exception("Exception in eServices " . $operation . ", " . $e->getMessage());
            }
        }
    }
    ```

    $connection is as follows:

    ```
    array(5) {
      ["username"]=> string(8) "DFU00050"
      ["password"]=> string(10) "Fabricate1"
      ["wsdlFilespec"]=> string(63) "c:/inetpub/wwwroot/DMZExternalService_Concrete_WSDL_Staging.xml"
      ["localCertificateFilespec"]=> string(37) "c:/inetpub/wwwroot/ClientKeystore.pem"
      ["localCertificatePassphrase"]=> string(14) "password123456"
    }
    ```

    $clientArguments is as follows:

    ```
    array(7) {
      ["stream_context"]=> resource(8) of type (stream-context)
      ["local_cert"]=> string(37) "c:/inetpub/wwwroot/ClientKeystore.pem"
      ["passphrase"]=> string(14) "password123456"
      ["trace"]=> bool(true)
      ["exceptions"]=> bool(true)
      ["encoding"]=> string(5) "UTF-8"
      ["soap_version"]=> int(1)
    }
    ```

    $operation is as follows: 'getConsignmentDetails'

    $request is as follows:

    ```
    array(1) {
      [0]=> array(2) {
        ["header"]=> array(2) {
          ["source"]=> string(9) "customerA"
          ["accountNo"]=> string(8) "10072906"
        }
        ["consignmentId"]=> string(11) "GKQ00000085"
      }
    }
    ```

    Note how there is an extra level of nesting: an array wrapping the request, which is itself an array. This was suggested in a post; I don't see the reason for it, but it seems to help avoid other exceptions.

    The exception thrown by __soapCall is as follows:

    ```
    object(SoapFault)#6 (9) {
      ["message":protected]=> string(25) "Could not connect to host"
      ["string":"Exception":private]=> string(0) ""
      ["code":protected]=> int(0)
      ["file":protected]=> string(43) "C:\Inetpub\wwwroot\eServices\WSSecurity.php"
      ["line":protected]=> int(85)
      ["trace":"Exception":private]=> array(5) {
        [0]=> array(6) {
          ["file"]=> string(43) "C:\Inetpub\wwwroot\eServices\WSSecurity.php"
          ["line"]=> int(85)
          ["function"]=> string(11) "__doRequest"
          ["class"]=> string(10) "SoapClient"
          ["type"]=> string(2) "->"
          ["args"]=> array(4) {
            [0]=> string(1240) " DFU00050 Fabricate1 E0ByMUA= 2010-10-28T13:13:52Z customerA10072906GKQ00000085 "
            [1]=> string(127) "https://services.startrackexpress.com.au:7560/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1"
            [2]=> string(104) "/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1/getConsignmentDetails"
            [3]=> int(1)
          }
        }
        [1]=> array(4) {
          ["function"]=> string(11) "__doRequest"
          ["class"]=> string(39) "startrackexpress\eservices\WSSoapClient"
          ["type"]=> string(2) "->"
          ["args"]=> array(5) {
            [0]=> string(1240) " DFU00050 Fabricate1 E0ByMUA= 2010-10-28T13:13:52Z customerA10072906GKQ00000085 "
            [1]=> string(127) "https://services.startrackexpress.com.au:7560/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1"
            [2]=> string(104) "/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1/getConsignmentDetails"
            [3]=> int(1)
            [4]=> int(0)
          }
        }
        [2]=> array(6) {
          ["file"]=> string(43) "C:\Inetpub\wwwroot\eServices\WSSecurity.php"
          ["line"]=> int(70)
          ["function"]=> string(10) "__soapCall"
          ["class"]=> string(10) "SoapClient"
          ["type"]=> string(2) "->"
          ["args"]=> array(4) {
            [0]=> string(21) "getConsignmentDetails"
            [1]=> array(1) {
              [0]=> array(2) {
                ["header"]=> array(2) {
                  ["source"]=> string(9) "customerA"
                  ["accountNo"]=> string(8) "10072906"
                }
                ["consignmentId"]=> string(11) "GKQ00000085"
              }
            }
            [2]=> NULL
            [3]=> object(SoapHeader)#5 (4) {
              ["namespace"]=> string(81) "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
              ["name"]=> string(8) "Security"
              ["data"]=> object(SoapVar)#4 (2) {
                ["enc_type"]=> int(147)
                ["enc_value"]=> string(594) " DFU00050 Fabricate1 E0ByMUA= 2010-10-28T13:13:52Z "
              }
              ["mustUnderstand"]=> bool(true)
            }
          }
        }
        [3]=> array(6) {
          ["file"]=> string(42) "C:\Inetpub\wwwroot\eServices\eServices.php"
          ["line"]=> int(87)
          ["function"]=> string(10) "__soapCall"
          ["class"]=> string(39) "startrackexpress\eservices\WSSoapClient"
          ["type"]=> string(2) "->"
          ["args"]=> array(2) {
            [0]=> string(21) "getConsignmentDetails"
            [1]=> array(1) {
              [0]=> array(2) {
                ["header"]=> array(2) {
                  ["source"]=> string(9) "customerA"
                  ["accountNo"]=> string(8) "10072906"
                }
                ["consignmentId"]=> string(11) "GKQ00000085"
              }
            }
          }
        }
        [4]=> array(6) {
          ["file"]=> string(58) "C:\Inetpub\wwwroot\eServices\EnquireConsignmentDetails.php"
          ["line"]=> int(44)
          ["function"]=> string(16) "invokeWebService"
          ["class"]=> string(38) "startrackexpress\eservices\STEeService"
          ["type"]=> string(2) "->"
          ["args"]=> array(3) {
            [0]=> array(5) {
              ["username"]=> string(10) "DFU00050  "
              ["password"]=> string(12) "Fabricate1  "
              ["wsdlFilespec"]=> string(63) "c:/inetpub/wwwroot/DMZExternalService_Concrete_WSDL_Staging.xml"
              ["localCertificateFilespec"]=> string(37) "c:/inetpub/wwwroot/ClientKeystore.pem"
              ["localCertificatePassphrase"]=> string(14) "password123456"
            }
            [1]=> string(21) "getConsignmentDetails"
            [2]=> array(1) {
              [0]=> array(2) {
                ["header"]=> array(2) {
                  ["source"]=> string(9) "customerA"
                  ["accountNo"]=> string(8) "10072906"
                }
                ["consignmentId"]=> string(11) "GKQ00000085"
              }
            }
          }
        }
      }
      ["previous":"Exception":private]=> NULL
      ["faultstring"]=> string(25) "Could not connect to host"
      ["faultcode"]=> string(4) "HTTP"
    }
    ```

    Here are some WSDL extracts (TIBCO BusinessWorks):

    ```xml
    <xsd:complexType name="TransactionHeaderType">
      <xsd:sequence>
        <xsd:element name="source" type="xsd:string"/>
        <xsd:element name="accountNo" type="xsd:integer"/>
        <xsd:element name="userId" type="xsd:string" minOccurs="0"/>
        <xsd:element name="transactionId" type="xsd:string" minOccurs="0"/>
        <xsd:element name="transactionDatetime" type="xsd:dateTime" minOccurs="0"/>
      </xsd:sequence>
    </xsd:complexType>

    <xsd:element name="getConsignmentDetailRequest">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="header" type="prim:TransactionHeaderType"/>
          <xsd:element name="consignmentId" type="prim:ID" maxOccurs="unbounded"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>

    <xsd:element name="getConsignmentDetailResponse">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="consignment" type="freight:consignmentType" minOccurs="0" maxOccurs="unbounded"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>

    <wsdl:operation name="getConsignmentDetails">
      <wsdl:input message="tns:getConsignmentDetailsRequest"/>
      <wsdl:output message="tns:getConsignmentDetailsResponse"/>
      <wsdl:fault name="fault1" message="tns:fault"/>
    </wsdl:operation>

    <wsdl:service name="ExternalOps">
      <wsdl:port name="OperationsEndpoint1" binding="tns:OperationsEndpoint1Binding">
        <soap:address location="https://services.startrackexpress.com.au:7560/DMZExternalService/InterfaceServices/ExternalOps.serviceagent/OperationsEndpoint1"/>
      </wsdl:port>
    </wsdl:service>
    ```

    And here, in case it's relevant, is the WSSoapClient class:

    ```php
    <?php
    namespace startrackexpress\eservices;

    use SoapClient, SoapVar, SoapHeader;

    class WSSoapClient extends SoapClient {

        private $username;
        private $password;

        /* Generates a WS-Security header */
        private function wssecurity_header() {
            $timestamp = gmdate('Y-m-d\TH:i:s\Z');
            $nonce = mt_rand();
            $passdigest = base64_encode(
                pack('H*', sha1(pack('H*', $nonce) . pack('a*', $timestamp) . pack('a*', $this->password))));
            $auth = '
    <wsse:Security SOAP-ENV:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <wsse:UsernameToken>
        <wsse:Username>' . $this->username . '</wsse:Username>
        <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">' . $this->password . '</wsse:Password>
        <wsse:Nonce>' . base64_encode(pack('H*', $nonce)) . '</wsse:Nonce>
        <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">' . $timestamp . '</wsu:Created>
      </wsse:UsernameToken>
    </wsse:Security>
    ';
            $authvalues = new SoapVar($auth, XSD_ANYXML);
            $header = new SoapHeader(
                "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd",
                "Security", $authvalues, true);
            return $header;
        }

        // Sets a username and passphrase
        public function __setUsernameToken($username, $password) {
            $this->username = $username;
            $this->password = $password;
        }

        // Overrides the original method, adding the security header
        public function __soapCall($function_name, $arguments, $options = null, $input_headers = null, $output_headers = null) {
            try {
                return parent::__soapCall($function_name, $arguments, $options, $this->wssecurity_header());
            } catch (Exception $e) {
                throw new Exception("Exception in __soapCall, " . $e->getMessage());
            }
        }
    }
    ?>
    ```

    Update: the request XML would have been as follows:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:ns1="http://startrackexpress/Common/Primitives/v1"
        xmlns:ns2="http://startrackexpress/Common/actions/externals/Consignment/v1"
        xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <SOAP-ENV:Header>
        <wsse:Security SOAP-ENV:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <wsse:UsernameToken>
            <wsse:Username>DFU00050</wsse:Username>
            <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Fabricate1</wsse:Password>
            <wsse:Nonce>M4FIeGA=</wsse:Nonce>
            <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2010-10-29T14:05:27Z</wsu:Created>
          </wsse:UsernameToken>
        </wsse:Security>
      </SOAP-ENV:Header>
      <SOAP-ENV:Body>
        <ns2:getConsignmentDetailRequest>
          <ns2:header>
            <ns1:source>customerA</ns1:source>
            <ns1:accountNo>10072906</ns1:accountNo>
          </ns2:header>
          <ns2:consignmentId>GKQ00000085</ns2:consignmentId>
        </ns2:getConsignmentDetailRequest>
      </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>
    ```

    This was obtained with the following code in WSSoapClient:

    ```php
    public function __doRequest($request, $location, $action, $version) {
        echo "<p> " . htmlspecialchars($request) . " </p>";
        return parent::__doRequest($request, $location, $action, $version);
    }
    ```
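    A way to narrow this down, given the suspicion above that the fault may lie in the request rather than the connection: open the TLS socket directly with the same stream context and see whether it succeeds, bypassing SOAP entirely. A hedged diagnostic sketch, not from the original post; it reuses the certificate path from the question and the endpoint host/port that appear in the trace:

    ```php
    <?php
    // Probe the raw TLS connection with the same SSL options the SoapClient uses.
    // If this succeeds, 'Could not connect to host' is unlikely to be a pure
    // transport problem and attention can shift to the request XML.
    $context = stream_context_create(array(
        'ssl' => array(
            'local_cert'        => 'c:/inetpub/wwwroot/ClientKeystore.pem',
            'passphrase'        => 'password123456',
            'allow_self_signed' => true,
            'verify_peer'       => false,
        ),
    ));

    $endpoint = 'ssl://services.startrackexpress.com.au:7560'; // host/port from the trace
    $fp = stream_socket_client($endpoint, $errno, $errstr, 30,
                               STREAM_CLIENT_CONNECT, $context);

    if ($fp === false) {
        echo "TLS connect failed: [$errno] $errstr\n"; // genuine transport problem
    } else {
        echo "TLS connect OK - suspect the SOAP request instead\n";
        fclose($fp);
    }
    ```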

    Read the article

  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us; in the past, we've already commissioned two penetration tests from independent companies specializing in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing*... While it feels comfortable to think our system is perfectly secure (and it was surely comfortable to show those reports to our clients when they performed their due diligence work), I have a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!) If hackers can hack into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could hack into our small business, right? Does anyone have any experience in hiring an "ethical hacker"? How do you find one? How much would it cost? *The only recommendation they made to us was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP.

    Read the article

  • How to configure Amazon Security Groups to achieve multi-tier architecture?

    - by ks78
    What is the preferred way to configure Amazon Security Groups to achieve a multi-tier architecture? Each of my instances has its own Security Group, which I only want to use for rules specific to that instance. I'd like to keep any rules which apply to multiple instances in a separate Security Group, which can then be assigned to instance Security Groups as necessary. As an example, I've set up a group called "admin", which allows administrative access from my IP. I added the "admin" group as the source to each of my instance security groups. However, I still can't access the instances from my IP without adding the rules directly to the instance's group. Am I missing something? Although it seems a multi-tier security architecture should be possible, it doesn't seem to be working.
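    A point that often trips people up here: when a security group is named as the *source* of a rule, AWS matches traffic coming from instances that are members of that group; the rules defined inside the source group are not inherited. A hedged sketch using today's AWS CLI (which post-dates this question; the group IDs are made up):

    ```
    # Allow SSH from an admin IP directly: the source is a CIDR, so this rule
    # must live in (or be added to) each group that needs it
    aws ec2 authorize-security-group-ingress \
        --group-id sg-11111111 \
        --protocol tcp --port 22 \
        --cidr 203.0.113.10/32

    # Tier-to-tier access: the source here is *membership* of sg-22222222
    # (e.g. the app tier), not whatever rules sg-22222222 happens to contain
    aws ec2 authorize-security-group-ingress \
        --group-id sg-33333333 \
        --ip-permissions '[{"IpProtocol":"tcp","FromPort":3306,"ToPort":3306,"UserIdGroupPairs":[{"GroupId":"sg-22222222"}]}]'
    ```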

    Read the article

  • Globe Trotters: Asian Healthcare CIOs need ‘Security Inside Out’ Approach

    - by Tanu Sood
    In our second edition of Globe Trotters, we wanted to share a feature article that was recently published in Enterprise Innovation. EnterpriseInnovation.net, part of Questex Media Group, is Asia's premier business and technology publication. The article featured MOH Holdings (a holding company of Singapore’s Public Healthcare Institutions) and highlighted the project around the National Electronic Health Record (NEHR) system currently being deployed within Singapore. According to the feature, the NEHR system was built to facilitate seamless exchanges of medical information as patients move across different healthcare settings, and to give healthcare providers more timely access to patients’ healthcare records in Singapore. The NEHR consolidates all clinically relevant information from patients’ visits across the healthcare system throughout their lives and pulls it into a single record. It allows for data sharing, making it accessible to authorized healthcare providers across the continuum of care throughout the country. In healthcare, patient data privacy is critical, as is the need to avoid unauthorized access to electronic medical records. As Alan Dawson, director for infrastructure and operations at MOH Holdings, is quoted in the feature, “Protecting the perimeter is no longer enough. Healthcare CIOs today need to adopt a ‘security inside out’ approach that protects information assets all the way from databases to end points.” Oracle has long advocated the ‘Security Inside Out’ approach. From operating systems and infrastructure to databases, middleware, and applications, organizations need to build in security at every layer and between these layers. This comprehensive approach to security has never been as important as it is today in the social, mobile, cloud (SoMoClo) world. To learn more about Oracle’s Security Inside Out approach, visit our Security page. And for more information on how to prevent unauthorized access, streamline user administration, bolster security and enforce compliance in healthcare, learn more about Oracle Identity Management.

    Read the article

  • Trying to run WCF web service on non-domain VM, Security Errors

    - by NealWalters
    Am I in a Catch-22 situation here? My goal is to take a WCF service that I inherited, run it on a VM, and test it by calling it from my desktop PC. The VM is in a workgroup, not in the company's domain. Basically, we need more test environments, ideally one per developer (we may have 2 to 4 people who need this). The idea of the VM was that each developer could have his own web server that somewhat matches our real environment (where we actually have two websites, an external/exposed one and an internal one). [Using VS2010, .NET 4.0]

    In the internal service, each method was decorated with this attribute:

    ```csharp
    [OperationBehavior(Impersonation = ImpersonationOption.Required)]
    ```

    I'm still researching why this was needed. I think it's because a web app calls the "internal" service, and either a) we need the credentials of the user, or b) we may be doing some PrincipalPermission demands to see if the user is in a group. My interest is in creating some ConsoleTest or UnitTest programs. I changed it to Allowed, like this:

    ```csharp
    [OperationBehavior(Impersonation = ImpersonationOption.Allowed)]
    ```

    because I was getting this error when trying to view the .svc in the browser:

    ```
    The contract operation 'EditAccountFamily' requires Windows identity for automatic impersonation.
    A Windows identity that represents the caller is not provided by binding
    ('WSHttpBinding','http://tempuri.org/') for contract ('IAdminService','http://tempuri.org/').
    ```

    I don't get that error with the original bindings, which are shown below. However, I believe I need to turn off this security since the web service is not on the domain. I tend to get these errors in the client:

    1) The request for security token could not be satisfied because authentication failed - as an InnerException of "SecurityNegotiation was unhandled", or

    2) The caller was not authenticated by the service - as an InnerException of "SecurityNegotiation was unhandled".

    So can I create some combination of code and web.config that will allow each developer to work on his own VM? Or must I join the VM to the domain? The number of permutations seems near endless. I've started to create a Word doc that says what to do with each error, but now I'm in the catch-22 where I'm stuck. Thanks, Neal

    Server bindings:

    ```xml
    <bindings>
      <wsHttpBinding>
        <binding name="wsHttpEndpointBinding" maxBufferPoolSize="2147483647" maxReceivedMessageSize="500000000">
          <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647"
                        maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" />
          <!-- <security mode="None" /> This is one thing I tried -->
          <security>
            <message clientCredentialType="Windows" />
          </security>
        </binding>
      </wsHttpBinding>
    </bindings>
    <behaviors>
      <serviceBehaviors>
        <behavior name="ABC.AdminService.AdminServiceBehavior">
          <!-- To avoid disclosing metadata information, set the value below to false
               and remove the metadata endpoint above before deployment -->
          <serviceMetadata httpGetEnabled="true" />
          <!-- To receive exception details in faults for debugging purposes, set the value below to true.
               Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="true" />
          <serviceCredentials>
          </serviceCredentials>
          <!--<serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="AspNetWindowsTokenRoleProvider"/>-->
          <serviceAuthorization principalPermissionMode="UseWindowsGroups" impersonateCallerForAllOperations="true" />
        </behavior>
        <behavior name="ABC.AdminService.IAdminServiceTransportBehavior">
          <!-- To avoid disclosing metadata information, set the value below to false
               and remove the metadata endpoint above before deployment -->
          <serviceMetadata httpGetEnabled="true" />
          <!-- To receive exception details in faults for debugging purposes, set the value below to true.
               Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="false" />
          <serviceCredentials>
            <clientCertificate>
              <authentication certificateValidationMode="PeerTrust" />
            </clientCertificate>
            <serviceCertificate findValue="WCfServer" storeLocation="LocalMachine" storeName="My"
                                x509FindType="FindBySubjectName" />
          </serviceCredentials>
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
    ```

    Client:

    ```xml
    <system.serviceModel>
      <bindings>
        <wsHttpBinding>
          <binding name="WSHttpBinding_IAdminService" closeTimeout="00:01:00" openTimeout="00:01:00"
                   receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false"
                   transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
                   maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8"
                   useDefaultWebProxy="true" allowCookies="false">
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                          maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
            <security mode="Message">
              <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
              <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" />
            </security>
          </binding>
        </wsHttpBinding>
      </bindings>
      <client>
        <endpoint address="http://192.168.159.132/EC_AdminService/AdminService.svc" binding="wsHttpBinding"
                  bindingConfiguration="WSHttpBinding_IAdminService" contract="svcRef.IAdminService"
                  name="WSHttpBinding_IAdminService">
          <identity>
            <dns value="localhost" />
          </identity>
        </endpoint>
      </client>
    </system.serviceModel>
    ```
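    One hedged option for the workgroup scenario, sketched here rather than taken from the inherited service: keep Message security but switch from Windows to Certificate credentials, which requires no domain trust. The second behavior above (PeerTrust plus a serviceCertificate) is already most of the server half. Note that with no Windows identity on the wire, ImpersonationOption.Required cannot be satisfied, so the attributes would have to stay at Allowed. The binding and certificate names below are illustrative:

    ```xml
    <!-- Server binding: certificate-based message security (no domain needed) -->
    <wsHttpBinding>
      <binding name="certMessageSecurity">
        <security mode="Message">
          <message clientCredentialType="Certificate" />
        </security>
      </binding>
    </wsHttpBinding>

    <!-- Client side: same binding security, plus a client certificate supplied
         through an endpoint behavior (certificate name is hypothetical) -->
    <behaviors>
      <endpointBehaviors>
        <behavior name="certClientBehavior">
          <clientCredentials>
            <clientCertificate findValue="DevClientCert" storeLocation="CurrentUser"
                               storeName="My" x509FindType="FindBySubjectName" />
            <serviceCertificate>
              <authentication certificateValidationMode="PeerTrust" />
            </serviceCertificate>
          </clientCredentials>
        </behavior>
      </endpointBehaviors>
    </behaviors>
    ```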

    Read the article

  • WCF with No security

    - by james.ingham
    Hi all, I've got a WCF service set up which I can consume and use as intended... but only on the same machine. I'm looking to get this working across multiple computers, and I'm not fussed about the security. However, when I set the security (client side) to mode="None", I get an InvalidOperationException:

    ```
    The service certificate is not provided for target
    'http://xxx.xxx.xxx.xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/'.
    Specify a service certificate in ClientCredentials.
    ```

    So I'm left with:

    ```xml
    <security mode="Message">
      <message clientCredentialType="None" negotiateServiceCredential="false" algorithmSuite="Default" />
    </security>
    ```

    But this gives me another InvalidOperationException:

    ```
    The service certificate is not provided for target
    'http://xxx.xxx.xxx.xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/'.
    Specify a service certificate in ClientCredentials.
    ```

    Why would I have to provide a certificate if security was turned off?

    Server app.config:

    ```xml
    <system.serviceModel>
      <services>
        <service name="Server.WcfServiceLibrary.CheckoutService" behaviorConfiguration="Server.WcfServiceLibrary.CheckoutServiceBehavior">
          <host>
            <baseAddresses>
              <add baseAddress="http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/CheckoutService/" />
            </baseAddresses>
          </host>
          <endpoint address="" binding="wsDualHttpBinding" contract="Server.WcfServiceLibrary.ICheckoutService">
            <identity>
              <dns value="localhost"/>
            </identity>
          </endpoint>
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
        </service>
        <service name="Server.WcfServiceLibrary.ManagementService" behaviorConfiguration="Server.WcfServiceLibrary.ManagementServiceBehavior">
          <host>
            <baseAddresses>
              <add baseAddress="http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/" />
            </baseAddresses>
          </host>
          <endpoint address="" binding="wsDualHttpBinding" contract="Server.WcfServiceLibrary.IManagementService">
            <identity>
              <dns value="localhost"/>
            </identity>
          </endpoint>
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
        </service>
      </services>
      <behaviors>
        <serviceBehaviors>
          <behavior name="Server.WcfServiceLibrary.CheckoutServiceBehavior">
            <serviceMetadata httpGetEnabled="True"/>
            <serviceDebug includeExceptionDetailInFaults="False" />
            <serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="50" maxConcurrentInstances="50" />
          </behavior>
          <behavior name="Server.WcfServiceLibrary.ManagementServiceBehavior">
            <serviceMetadata httpGetEnabled="True"/>
            <serviceDebug includeExceptionDetailInFaults="False" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>
    ```

    Client app.config:

    ```xml
    <system.serviceModel>
      <bindings>
        <wsDualHttpBinding>
          <binding name="WSDualHttpBinding_IManagementService" closeTimeout="00:01:00" openTimeout="00:01:00"
                   receiveTimeout="00:10:00" sendTimeout="00:00:10" bypassProxyOnLocal="false"
                   transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
                   maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8"
                   useDefaultWebProxy="true">
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                          maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            <reliableSession ordered="true" inactivityTimeout="00:10:00" />
            <security mode="Message">
              <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" />
            </security>
          </binding>
        </wsDualHttpBinding>
      </bindings>
      <client>
        <endpoint address="http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/"
                  binding="wsDualHttpBinding" bindingConfiguration="WSDualHttpBinding_IManagementService"
                  contract="ServiceReference.IManagementService" name="WSDualHttpBinding_IManagementService">
          <identity>
            <dns value="localhost" />
          </identity>
        </endpoint>
      </client>
    </system.serviceModel>
    ```

    Thanks

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn’t intended as a teaser. If you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.

    The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn’t written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it’s a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it’s all available from the data source descriptor (or wallet).

    ```java
    java.sql.Connection conn = mydatasource.getConnection();
    ```

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments), or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string, because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following.

    ```java
    java.sql.Connection conn = mydatasource.getConnection("user", "password");
    ```

    According to the JDBC specification, this is supposed to take a database user and associated password, but different vendors implement it differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it’s a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.
    While the default approach is simple, it does mean that only one database user is doing all of the work. You can’t figure out who actually did the update, and you can’t restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

    | Feature | Description | Can be used with | Can’t be used with |
    |---|---|---|---|
    | User authentication (default) | Default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor. | Set client identifier | Proxy Session, Identity pooling, Use database credentials |
    | Use database credentials | Instead of using the credential mapper, use the supplied user and password directly. | Set client identifier, Proxy session, Identity pooling | User authentication, Multi Data Source |
    | Set Client Identifier | Set a client identifier property associated with the connection (Oracle and DB2 only). | Everything | |
    | Proxy Session | Set a light-weight proxy user associated with the connection (Oracle-only). | Set client identifier, Use database credentials | Identity pooling, User authentication |
    | Identity pooling | Heterogeneous pool of connections owned by specified users. | Set client identifier, Use database credentials | Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink |

    Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future).

    The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.
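    As a concrete illustration of the two getConnection flavors discussed above, here is a minimal Java sketch of a client looking up a WLS data source over JNDI; the JNDI name and credentials are invented for the example:

    ```java
    import java.sql.Connection;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class DataSourceDemo {
        public static void main(String[] args) throws Exception {
            Context ctx = new InitialContext(); // assumes a WLS JNDI environment
            DataSource ds = (DataSource) ctx.lookup("jdbc/myDataSource"); // hypothetical JNDI name

            // Default behavior: the pool's single configured database user does the work.
            try (Connection conn = ds.getConnection()) {
                // ... run SQL as the data source's configured user
            }

            // Two-argument form: by default WLS treats this pair as a *WLS* user,
            // authenticates it, then maps it to a database user via the credential mapper.
            try (Connection conn = ds.getConnection("wlsUser", "wlsPassword")) { // hypothetical credentials
                // ... run SQL as the mapped database user
            }
        }
    }
    ```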

    Read the article

  • Invalid or expired security context token in WCF web service

    - by Damian
    All, I have a WCF web service (let's call it service "B") hosted under IIS using a service account (VM, Windows 2003 SP2). The service exposes an endpoint that uses WSHttpBinding with the default values, except for maxReceivedMessageSize, maxBufferPoolSize, maxBufferSize and some of the timeouts, which have been increased. The web service has been load tested using the Visual Studio Load Test framework with around 800 concurrent users and successfully passed all tests with no exceptions being thrown. The proxy in the unit test was created from configuration. There is a SharePoint application that uses the Office SharePoint Server Search service to call web services "A" and "B". The application gets data from service "A" to create a request that is then sent to service "B". The response coming from service "B" is indexed for search. The proxy is created programmatically using the ChannelFactory. When service "A" takes less than 10 minutes, the calls to service "B" are successful. But when service "A" takes more time (~20 minutes), the calls to service "B" throw the following exception:

    ```
    Exception Message: An unsecured or incorrectly secured fault was received from the other party.
    See the inner FaultException for the fault code and detail.
    Inner Exception Message: The message could not be processed. This is most likely because the action
    'namespace/OperationName' is incorrect or because the message contains an invalid or expired security
    context token or because there is a mismatch between bindings. The security context token would be
    invalid if the service aborted the channel due to inactivity. To prevent the service from aborting
    idle sessions prematurely increase the Receive timeout on the service endpoint's binding.
    ```

    The binding settings are the same, the clocks on both the client server and the web service server are synchronized with the Windows Time service, same time zone. When I look at the server where web service "B" is hosted, I can see the following security errors being logged:

    ```
    Source: Security
    Category: Logon/Logoff
    Event ID: 537
    User: NT AUTHORITY\SYSTEM
    Logon Failure:
      Reason: An error occurred during logon
      Logon Type: 3
      Logon Process: Kerberos
      Authentication Package: Kerberos
      Status code: 0xC000006D
      Substatus code: 0xC0000133
    ```

    After reading some of the blogs online, the status code means STATUS_LOGON_FAILURE and the substatus code means STATUS_TIME_DIFFERENCE_AT_DC, but I have already checked both server and client clocks and they are synchronized. I also noticed that the security token seems to be cached somewhere on the client server, because they have another process that calls web service "B" using the same service account and successfully gets data the first time it is called. Then they start the process to update the Office SharePoint Server Search service indexes and it fails. Then if they call the first process again, it fails too. Has anyone experienced this type of problem or have any ideas? Regards, --Damian
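    The exception text itself points at the secure conversation session going stale while service "A" runs. Two commonly used mitigations, sketched against a generic wsHttpBinding rather than this service's actual configuration (the binding name and timeout value are illustrative): raise the binding's receiveTimeout, or disable the cached security context session entirely so each call carries its own token:

    ```xml
    <wsHttpBinding>
      <binding name="longRunningBinding" receiveTimeout="00:40:00">
        <security mode="Message">
          <!-- establishSecurityContext="false" means no per-session security
               context token is cached, so there is nothing to expire between
               the 20-minute gap in calls; must match on client and service -->
          <message clientCredentialType="Windows" establishSecurityContext="false" />
        </security>
      </binding>
    </wsHttpBinding>
    ```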

    Read the article

  • Level-order in Haskell

    - by brain_damage
    I have a structure for a tree and I want to print the tree by levels.

    ```haskell
    data Tree a = Nd a [Tree a] deriving Show

    type Nd = String

    tree = Nd "a" [ Nd "b" [Nd "c" [], Nd "g" [Nd "h" [], Nd "i" [], Nd "j" [], Nd "k" []]]
                  , Nd "d" [Nd "f" []]
                  , Nd "e" [Nd "l" [Nd "n" [Nd "o" []]], Nd "m" []]
                  ]

    preorder  (Nd x ts) = x : concatMap preorder ts
    postorder (Nd x ts) = concatMap postorder ts ++ [x]
    ```

    But how to do it by levels? "levels tree" should produce ["a", "bde", "cgflm", "hijkn", "o"]. I think that iterate would be a suitable function for the purpose, but I cannot come up with a solution for how to use it. Would you help me, please?
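    One possible answer, using exactly the iterate idea from the question: repeatedly step from each frontier of nodes to the concatenation of their children, stop at the first empty frontier, and read off the labels. A sketch (the concat at the end collapses each level of one-character labels into a string, matching the expected output):

    ```haskell
    -- mirrors the question's type
    data Tree a = Nd a [Tree a] deriving Show

    levels :: Tree a -> [[a]]
    levels t = map (map root) . takeWhile (not . null) . iterate (concatMap children) $ [t]
      where
        root     (Nd x _)  = x
        children (Nd _ ts) = ts

    -- For the tree in the question (labels are Strings):
    --   map concat (levels tree)  ==  ["a","bde","cgflm","hijkn","o"]
    ```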

    Read the article

  • XHTML validating block level element as a link

    - by Matty F
    I need a way to make an entire DL element clickable, with only one anchor tag, in a way that validates as XHTML. As in:

    ```html
    <a>
      <dl>
        <dt>Data term</dt>
        <dd>Data definition</dd>
      </dl>
    </a>
    ```

    This currently doesn't validate as XHTML, as the anchor tag cannot contain the DL. The only way I can get it to validate is if I make two anchor tags and place them inside the DT and DD. As in:

    ```html
    <dl>
      <dt><a>Data term</a></dt>
      <dd><a>Data definition</a></dd>
    </dl>
    ```

    I'm trying to avoid this, as it would result in two href attributes requiring maintenance, introducing the possibility they could become out of sync. Suggestions?
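    One workaround that keeps a single href and still validates: leave one real anchor inside the DL and promote clicks on the whole element with a small script (the onclick attribute is valid on dl in XHTML 1.0; HTML5, which post-dates this question, later allowed anchors to wrap block content). A sketch with a made-up target URL:

    ```html
    <dl style="cursor: pointer;"
        onclick="window.location.href = this.getElementsByTagName('a')[0].href;">
      <dt><a href="/articles/42">Data term</a></dt>
      <dd>Data definition</dd>
    </dl>
    ```

    Because the script reads the href from the single anchor, there is only one URL to maintain, and keyboard users and crawlers still get a real link.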

    Read the article

  • Configuring a Context specific Tomcat Security Realm

    - by Andy Mc
    I am trying to get a context-specific security Realm working in Tomcat 6.0, but when I start Tomcat I get the following error:

    ```
    09-Dec-2010 16:12:40 org.apache.catalina.startup.ContextConfig validateSecurityRoles
    INFO: WARNING: Security role name myrole used in an <auth-constraint> without being defined in a <security-role>
    ```

    I have created the following context.xml file:

    ```xml
    <Context debug="0" reloadable="true">
      <Resource name="MyUserDatabase"
                type="org.apache.catalina.UserDatabase"
                description="User database that can be updated and saved"
                factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                pathname="conf/my-users.xml" />
      <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
             resourceName="MyUserDatabase"/>
    </Context>
    ```

    I have created a file my-users.xml, which I have placed under WEB-INF/conf and which contains the following:

    ```xml
    <tomcat-users>
      <role rolename="myrole"/>
      <user username="test" password="changeit" roles="myrole" />
    </tomcat-users>
    ```

    And I have added the following lines to my web.xml file:

    ```xml
    <web-app ...>
      ...
      <security-constraint>
        <web-resource-collection>
          <web-resource-name>Entire Application</web-resource-name>
          <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
          <role-name>myrole</role-name>
        </auth-constraint>
      </security-constraint>
      <login-config>
        <auth-method>BASIC</auth-method>
      </login-config>
      ...
    </web-app>
    ```

    But I seem to get the error wherever I put conf/my-users.xml. Do I have to specify an explicit path in the pathname, or is it relative to somewhere? Ideally I would like to have it packaged up as part of my WAR file. Any ideas?
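    Two observations on the above. First, the warning itself is about the deployment descriptor, not the realm: the role is referenced in an auth-constraint but never declared, and declaring it silences the message. A sketch of the missing piece of web.xml:

    ```xml
    <!-- web.xml: declare the role that the auth-constraint references -->
    <security-role>
      <role-name>myrole</role-name>
    </security-role>
    ```

    Second, on the pathname question: for a MemoryUserDatabase the pathname is resolved relative to $CATALINA_BASE, not relative to the web application, which is why WEB-INF/conf inside the WAR is never picked up; packaging the user file inside the WAR would need a different realm (or an absolute path outside it).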

    Read the article

  • Why not open a PDF file in the browser but first save it to the harddisk?

    - by Lernkurve
    Question

    Is it correct that saving a PDF to the hard disk first, and then opening it from there with some PDF reader (not the browser), is safer than opening it directly with the browser plugin?

    My current understanding

    I know that the PDF browser plugin might have a security vulnerability, and a manipulated PDF file might exploit it and get access to the user's computer. I recently heard that saving the PDF file first and then opening it was safer. I don't understand why that should be safer. Can anyone explain? My logic would suggest that a manipulated file opened from the hard disk can just as well exploit a vulnerability, say, for instance, in Adobe Acrobat Reader.

    Read the article

  • Upgrading from 12.10 to 13.04 -> dpkg: error processing sudo (--configure)

    - by Korrigan Nagirrok
    Here's the deal and the reason I'm asking for your help. Last night I went about upgrading my Xubuntu 12.10 installation to 13.04, so at tty1 I ran the command sudo do-release-upgrade and everything seemed to go well, except that after rebooting, when I run sudo apt-get update && sudo apt-get upgrade, I get this error:

    ```
    Hit http://pt.archive.ubuntu.com raring Release.gpg
    Hit http://pt.archive.ubuntu.com raring-updates Release.gpg
    Hit http://dl.google.com stable Release.gpg
    Hit http://pt.archive.ubuntu.com raring-backports Release.gpg
    Hit http://pt.archive.ubuntu.com raring Release
    Hit http://archive.canonical.com raring Release.gpg
    Hit http://ppa.launchpad.net raring Release.gpg
    Hit http://pt.archive.ubuntu.com raring-updates Release
    Hit http://extras.ubuntu.com raring Release.gpg
    Hit http://pt.archive.ubuntu.com raring-backports Release
    Hit http://dl.google.com stable Release
    Hit http://pt.archive.ubuntu.com raring/main Sources
    Hit http://pt.archive.ubuntu.com raring/restricted Sources
    Hit http://extras.ubuntu.com raring Release
    Hit http://archive.canonical.com raring Release
    Hit http://ppa.launchpad.net raring Release.gpg
    Hit http://pt.archive.ubuntu.com raring/universe Sources
    Hit http://pt.archive.ubuntu.com raring/multiverse Sources
    Hit http://dl.google.com stable/main i386 Packages
    Get:1 http://security.ubuntu.com raring-security Release.gpg [933 B]
    Hit http://pt.archive.ubuntu.com raring/main i386 Packages
    Hit http://extras.ubuntu.com raring/main Sources
    Hit http://ppa.launchpad.net raring Release
    Hit http://archive.canonical.com raring/partner i386 Packages
    Hit http://pt.archive.ubuntu.com raring/restricted i386 Packages
    Hit http://pt.archive.ubuntu.com raring/universe i386 Packages
    Hit http://extras.ubuntu.com raring/main i386 Packages
    Hit http://pt.archive.ubuntu.com raring/multiverse i386 Packages
    Hit http://ppa.launchpad.net raring Release
    Hit http://pt.archive.ubuntu.com raring/main Translation-en
    Hit http://ppa.launchpad.net raring/main Sources
    Hit http://ppa.launchpad.net raring/main i386 Packages
    Hit http://pt.archive.ubuntu.com raring/multiverse Translation-en
    Hit http://pt.archive.ubuntu.com raring/restricted Translation-en
    Hit http://pt.archive.ubuntu.com raring/universe Translation-en
    Hit http://pt.archive.ubuntu.com raring-updates/main Sources
    Hit http://pt.archive.ubuntu.com raring-updates/restricted Sources
    Hit http://ppa.launchpad.net raring/main Sources
    Hit http://pt.archive.ubuntu.com raring-updates/universe Sources
    Hit http://pt.archive.ubuntu.com raring-updates/multiverse Sources
    Hit http://pt.archive.ubuntu.com raring-updates/main i386 Packages
    Hit http://ppa.launchpad.net raring/main i386 Packages
    Hit http://pt.archive.ubuntu.com raring-updates/restricted i386 Packages
    Hit http://pt.archive.ubuntu.com raring-updates/universe i386 Packages
    Hit http://pt.archive.ubuntu.com raring-updates/multiverse i386 Packages
    Ign http://dl.google.com stable/main Translation-en_US
    Hit http://pt.archive.ubuntu.com raring-updates/main Translation-en
    Ign http://archive.canonical.com raring/partner Translation-en_US
    Ign http://extras.ubuntu.com raring/main Translation-en_US
    Ign http://dl.google.com stable/main Translation-en
    Ign http://archive.canonical.com raring/partner Translation-en
    Hit http://pt.archive.ubuntu.com raring-updates/multiverse Translation-en
    Ign http://extras.ubuntu.com raring/main Translation-en
    Hit http://pt.archive.ubuntu.com raring-updates/restricted Translation-en
    Hit http://pt.archive.ubuntu.com raring-updates/universe Translation-en
    Hit http://pt.archive.ubuntu.com raring-backports/main Sources
    Hit http://pt.archive.ubuntu.com raring-backports/restricted Sources
    Hit http://pt.archive.ubuntu.com raring-backports/universe Sources
    Hit http://pt.archive.ubuntu.com raring-backports/multiverse Sources
    Hit http://pt.archive.ubuntu.com raring-backports/main i386 Packages
    Hit http://pt.archive.ubuntu.com raring-backports/restricted i386 Packages
    Hit http://pt.archive.ubuntu.com raring-backports/universe i386 Packages
    Hit http://pt.archive.ubuntu.com raring-backports/multiverse i386 Packages
    Hit http://pt.archive.ubuntu.com raring-backports/main Translation-en
    Hit http://pt.archive.ubuntu.com raring-backports/multiverse Translation-en
    Get:2 http://security.ubuntu.com raring-security Release [40.8 kB]
    Hit http://pt.archive.ubuntu.com raring-backports/restricted Translation-en
    Hit http://pt.archive.ubuntu.com raring-backports/universe Translation-en
    Ign http://ppa.launchpad.net raring/main Translation-en_US
    Ign http://ppa.launchpad.net raring/main Translation-en
    Get:3 http://security.ubuntu.com raring-security/main Sources [2,109 B]
    Ign http://ppa.launchpad.net raring/main Translation-en_US
    Ign http://ppa.launchpad.net raring/main Translation-en
    Get:4 http://security.ubuntu.com raring-security/restricted Sources [14 B]
    Get:5 http://security.ubuntu.com raring-security/universe Sources [14 B]
    Get:6 http://security.ubuntu.com raring-security/multiverse Sources [14 B]
    Get:7 http://security.ubuntu.com raring-security/main i386 Packages [3,670 B]
    Get:8 http://security.ubuntu.com raring-security/restricted i386 Packages [14 B]
    Get:9 http://security.ubuntu.com raring-security/universe i386 Packages [2,824 B]
    Get:10 http://security.ubuntu.com raring-security/multiverse i386 Packages [14 B]
    Ign http://pt.archive.ubuntu.com raring/main Translation-en_US
    Ign http://pt.archive.ubuntu.com raring/multiverse Translation-en_US
    Ign http://pt.archive.ubuntu.com raring/restricted Translation-en_US
    Ign http://pt.archive.ubuntu.com raring/universe Translation-en_US
    Ign http://pt.archive.ubuntu.com raring-updates/main Translation-en_US
    Ign http://pt.archive.ubuntu.com raring-updates/multiverse Translation-en_US
    Hit http://security.ubuntu.com raring-security/main Translation-en
    Ign http://pt.archive.ubuntu.com raring-updates/restricted Translation-en_US
    Ign http://pt.archive.ubuntu.com raring-updates/universe Translation-en_US
    Ign http://pt.archive.ubuntu.com raring-backports/main Translation-en_US
    Ign http://pt.archive.ubuntu.com raring-backports/multiverse Translation-en_US
    Ign http://pt.archive.ubuntu.com raring-backports/restricted Translation-en_US
    Hit http://security.ubuntu.com raring-security/multiverse Translation-en
    Ign http://pt.archive.ubuntu.com raring-backports/universe Translation-en_US
    Hit http://security.ubuntu.com raring-security/restricted Translation-en
    Hit http://security.ubuntu.com raring-security/universe Translation-en
    Ign http://security.ubuntu.com raring-security/main Translation-en_US
    Ign http://security.ubuntu.com raring-security/multiverse Translation-en_US
    Ign http://security.ubuntu.com raring-security/restricted Translation-en_US
    Ign http://security.ubuntu.com raring-security/universe Translation-en_US
    Fetched 50.4 kB in 6s (7,454 B/s)
    Reading package lists... Done
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    Need to get 0 B/373 kB of archives.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    dpkg: error processing sudo (--configure):
     Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
    No apport report written because MaxReports is reached already
    dpkg: dependency problems prevent configuration of ubuntu-minimal:
     ubuntu-minimal depends on sudo; however:
      Package sudo is not configured yet.
    dpkg: error processing ubuntu-minimal (--configure):
     dependency problems - leaving unconfigured
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     sudo
     ubuntu-minimal
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    ```

    I've tried everything I thought logical, like sudo dpkg --configure -a:

    ```
    dpkg: error processing sudo (--configure):
     Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
    dpkg: dependency problems prevent configuration of ubuntu-minimal:
     ubuntu-minimal depends on sudo; however:
      Package sudo is not configured yet.
    dpkg: error processing ubuntu-minimal (--configure):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     sudo
     ubuntu-minimal
    ```

    and sudo apt-get install -f:

    ```
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    Need to get 0 B/373 kB of archives.
    After this operation, 0 B of additional disk space will be used.
    dpkg: error processing sudo (--configure):
     Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
    dpkg: dependency problems prevent configuration of ubuntu-minimal:
     ubuntu-minimal depends on sudo; however:
      Package sudo is not configured yet.
    dpkg: error processing ubuntu-minimal (--configure):
     dependency problems - leaving unconfigured
    No apport report written because MaxReports is reached already
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     sudo
     ubuntu-minimal
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    ```

    Can someone help me, please?

    Edit: Here's some more info that could be of help for anyone. The output of apt-cache policy linux-image-generic-pae linux-generic-pae is:

    ```
    linux-image-generic-pae:
      Installed: (none)
      Candidate: 3.8.0.19.35
      Version table:
         3.8.0.19.35 0
            500 http://pt.archive.ubuntu.com/ubuntu/ raring/main i386 Packages
    linux-generic-pae:
      Installed: (none)
      Candidate: 3.8.0.19.35
      Version table:
         3.8.0.19.35 0
            500 http://pt.archive.ubuntu.com/ubuntu/ raring/main i386 Packages
    ```
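    For a package stuck in "very bad inconsistent state", dpkg's own error text suggests the remedy: re-unpack the package itself rather than just re-running configure. A hedged sketch of one common approach; since the broken package is sudo itself, working from a root shell (sudo -i, or recovery mode) first is the safer route in case sudo stops functioning mid-way:

    ```
    # Re-fetch the raring sudo package and unpack it over the broken state
    apt-get download sudo        # drops sudo_*.deb into the current directory
    dpkg -i sudo_*.deb           # re-unpacking usually clears the inconsistent state
    apt-get install -f           # let apt finish configuring ubuntu-minimal
    ```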

    Read the article

  • Suggestions for implementing a dynamic 2D level

    - by Wouter
    I am working on a game that needs a level that is completely generated. Currently my approach is to draw textures for the levels pixel by pixel during the game (in XNA with SpriteBatch). Unfortunately this is too intensive: the game drops frames even when I only draw one level texture each draw cycle. Here is an example of the current prototype. It is a simple sidescroller with the avatar swimming through a cave. The shape of this cave will alter throughout the level (textures and physics collision shapes). You can clearly see the boundaries of the level tiles in the screenshot below. These are generated just before they move into camera view. For inspiration I looked at PixelJunk Shooter 2. Those levels are obviously not generated, but some of them have movement. How do you guys think they implemented it? My guess is that the level and other objects in the game are actually flat 3D models, but I am not sure.

    Read the article

  • Problems with Level Architect, Citrus Engine, Flash

    - by Idan
    I am using the Citrus Engine to make a Flash game, and the Level Architect doesn't work well for me. Firstly, when I first launch it and open my project and my level, nothing is shown: no assets, and none of what I have previously done with my level. To fix it, I open another project. The other project works fine, meaning I can see the assets and the level. Then I go back to the actual project I am working on, and the problem is fixed, only it does not fix the second problem: I can't add my own assets. I follow the manual and add tags like this:

    ```actionscript
    [Property(value="0")]
    ```

    But it doesn't change a thing in the Level Architect window (even after I close and reopen it). Any ideas? Thanks! Here's the code of the class I want to be shown in the Level Architect:

    ```actionscript
    package {

        import com.citrusengine.objects.PhysicsObject;
        import com.citrusengine.objects.platformer.Sensor;

        import flash.utils.clearTimeout;
        import flash.utils.setTimeout;

        /**
         * @author Aymeric
         */
        public class Teleporter extends Sensor {

            [Property(value="0")]
            public var endX:Number = 0;

            [Property(value="0")]
            public var endY:Number = 0;

            public var object:PhysicsObject;

            [Property(value="0")]
            public var time:Number = 0;

            public var needToTeleport:Boolean = false;

            protected var _teleporting:Boolean = false;

            private var _teleportTimeoutID:uint;

            public function Teleporter(name:String, params:Object = null) {
                super(name, params);
            }

            override public function destroy():void {
                clearTimeout(_teleportTimeoutID);
                super.destroy();
            }

            override public function update(timeDelta:Number):void {
                super.update(timeDelta);

                if (needToTeleport) {
                    _teleporting = true;
                    _teleportTimeoutID = setTimeout(_teleport, time);
                    needToTeleport = false;
                }

                _updateAnimation();
            }

            protected function _teleport():void {
                _teleporting = false;
                object.x = endX;
                object.y = endY;
                clearTimeout(_teleportTimeoutID);
            }

            protected function _updateAnimation():void {
                if (_teleporting) {
                    _animation = "teleport";
                } else {
                    _animation = "normal";
                }
            }
        }
    }
    ```

    Read the article

  • SQL SERVER – Concurrency Problems and their Relationship with Isolation Level

    - by pinaldave
    Concurrency is, simply put, the capability of the machine to support two or more transactions working with the same data at the same time. This usually comes up when data is being modified; during retrieval of data it is not an issue. Most concurrency problems can be avoided by SQL locks. There are four types of concurrency problems visible in normal programming.

    1) Lost Update – This problem occurs when two transactions are involved and both are unaware of each other. The transaction which occurs later overwrites the update made by the earlier transaction.

    2) Dirty Reads – This problem occurs when a transaction selects data that isn't committed by another transaction, leading it to read data which may not exist when the transactions are over. Example: Transaction 1 changes the row. Transaction 2 selects the changed row. Transaction 1 rolls back the change. Transaction 2 has selected a row which never existed.

    3) Nonrepeatable Reads – This problem occurs when two SELECT statements of the same data return different values because another transaction has updated the data between the two SELECT statements. Example: Transaction 1 selects a row, which is then updated by Transaction 2. When Transaction 1 later selects the row, it gets a different value.

    4) Phantom Reads – This problem occurs when an UPDATE/DELETE is happening on one set of data while an INSERT/UPDATE is happening on the same set of data, leaving inconsistent data in the earlier transaction when both transactions are over. Example: Transaction 1 is deleting 10 rows which are marked for deletion; during the same time, Transaction 2 inserts a row marked as deleted. When Transaction 1 is done deleting rows, there will still be rows marked to be deleted.

    When two or more transactions are updating the data, concurrency is the biggest issue. I commonly see people toying around with the isolation level or locking hints (e.g. NOLOCK), which can very well compromise your data integrity, leading to much larger issues in the future. Here is a quick mapping of the isolation levels to the concurrency problems:

    | Isolation        | Dirty Reads | Lost Update | Nonrepeatable Reads | Phantom Reads |
    |------------------|-------------|-------------|---------------------|---------------|
    | Read Uncommitted | Yes         | Yes         | Yes                 | Yes           |
    | Read Committed   | No          | Yes         | Yes                 | Yes           |
    | Repeatable Read  | No          | No          | No                  | Yes           |
    | Snapshot         | No          | No          | No                  | No            |
    | Serializable     | No          | No          | No                  | No            |

    I hope this small 400-word article gives some quick understanding of concurrency issues and their relation to isolation level.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
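    To make the dirty-read row of the table concrete, here is a hedged two-session T-SQL sketch; the table and values are invented for the example:

    ```sql
    -- Setup (illustrative table)
    CREATE TABLE dbo.Accounts (AccountID int PRIMARY KEY, Balance money);
    INSERT INTO dbo.Accounts VALUES (1, 500);

    -- Session 1: modify the row but do not commit yet
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;

    -- Session 2: READ UNCOMMITTED permits a dirty read of the in-flight change
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;  -- returns 400

    -- Session 1: roll back; Session 2 has now read a balance that never existed
    ROLLBACK TRANSACTION;

    -- Under READ COMMITTED (the default), Session 2's SELECT would instead
    -- block until Session 1 commits or rolls back, matching the table above.
    ```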

    Read the article

  • Steps to take when technical staff leave

    - by Tom O'Connor
    How do you handle the departure process when privileged or technical staff resign or get fired? Do you have a checklist of things to do to ensure the continuing operation and security of the company's infrastructure? I'm trying to come up with a nice canonical list of things that my colleagues should do when I leave (I resigned a week ago, so I've got a month to tidy up and GTFO). So far I've got:

    - Escort them off the premises
    - Delete their email inbox (set all mail to forward to a catch-all)
    - Delete their SSH keys on the server(s)
    - Delete their MySQL user account(s)
    - ...

    So, what's next? What have I forgotten to mention, or might be similarly useful? (Endnote: Why is this off-topic? I'm a systems administrator, and this concerns continuing business security; this is definitely on-topic.)
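    As a hedged illustration of the MySQL item on the list above (the account name is a placeholder):

    -- List existing accounts to audit (run as a MySQL admin).
    SELECT User, Host FROM mysql.user;

    -- Remove the departing admin's account; DROP USER also revokes
    -- all privileges held by that account.
    DROP USER 'departing_admin'@'localhost';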

    Read the article

  • Deleting a row from a self-referencing table

    - by Jake Rutherford
    Came across this the other day and thought “this would be a great interview question!” I’d created a table with a self-referencing foreign key. The application was calling a stored procedure I’d created to delete a row, which of course caused…a foreign key exception. You may say “why not just use the cascade delete option?” Good question, easy answer. With a typical foreign key relationship between different tables, that would work. However, even SQL Server cannot do a cascade delete of a row on a table with a self-referencing foreign key. So, what do you do? In my case I re-wrote the stored procedure to take advantage of recursion:

    -- recursively deletes a Foo
    ALTER PROCEDURE [dbo].[usp_DeleteFoo]
        @ID int
        ,@Debug bit = 0
    AS
        SET NOCOUNT ON;

        BEGIN TRANSACTION
        BEGIN TRY
            DECLARE @ChildFoos TABLE
            (
                ID int
            )

            DECLARE @ChildFooID int

            INSERT INTO @ChildFoos
            SELECT ID FROM Foo WHERE ParentFooID = @ID

            WHILE EXISTS (SELECT ID FROM @ChildFoos)
            BEGIN
                SELECT TOP 1
                    @ChildFooID = ID
                FROM
                    @ChildFoos

                DELETE FROM @ChildFoos WHERE ID = @ChildFooID

                EXEC usp_DeleteFoo @ChildFooID
            END

            DELETE FROM dbo.[Foo]
            WHERE [ID] = @ID

            IF @Debug = 1 PRINT 'DEBUG:usp_DeleteFoo, deleted - ID: ' + CONVERT(VARCHAR, @ID)

            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            ROLLBACK TRANSACTION

            DECLARE @ErrorMessage VARCHAR(4000), @ErrorSeverity INT, @ErrorState INT
            SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY(), @ErrorState = ERROR_STATE()
            IF @ErrorState <= 0 SET @ErrorState = 1

            INSERT INTO ErrorLog(ErrorNumber, ErrorSeverity, ErrorState, ErrorProcedure, ErrorLine, ErrorMessage)
            VALUES(ERROR_NUMBER(), @ErrorSeverity, @ErrorState, ERROR_PROCEDURE(), ERROR_LINE(), @ErrorMessage)

            RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
        END CATCH

    This procedure first finds any rows whose parent is the row we wish to delete. It then iterates over each child row, calling the procedure recursively so that all descendants are deleted before the row we wish to delete is finally removed.
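    For completeness, here is a usage sketch. The dbo.Foo definition below is an assumption reconstructed from the column names the procedure references (ID, ParentFooID); the procedure also assumes an ErrorLog table exists for its CATCH block:

    -- Hypothetical schema matching the procedure's column references.
    CREATE TABLE dbo.Foo
    (
        ID int NOT NULL PRIMARY KEY,
        ParentFooID int NULL REFERENCES dbo.Foo (ID) -- self-referencing FK
    );

    INSERT INTO dbo.Foo (ID, ParentFooID)
    VALUES (1, NULL), -- root
           (2, 1),    -- child of 1
           (3, 2);    -- grandchild of 1

    -- Deletes row 1; the recursion removes rows 3 and 2 first,
    -- so no foreign key violation occurs.
    EXEC dbo.usp_DeleteFoo @ID = 1, @Debug = 1;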

    Read the article

  • OPN Diamond Level Criteria Update

    - by Cinzia Mascanzoni
    On June 1, 2013, the criteria for Oracle PartnerNetwork members to attain the prestigious Diamond level will change, and all members at the Diamond level at that point will be required to meet the new criteria. This change underscores the requirement for these elite partners to engage across Oracle’s broad product portfolio. Refer to the Diamond Level Requirements on the OPN Portal here for more detail.

    Read the article

  • Change Logging Level for SOA 11g

    - by James Taylor
    I’m sure there are many blogs out there that have this solution, but I seem to get asked this question a lot, so I thought I would post it here for my convenience.

    1. Log in to Enterprise Manager, e.g. http://localhost:7001/em
    2. Expand the SOA folder, right-click the soa-infra (soa_server1) folder and select Logs – Log Configuration
    3. Navigate to the component you want to monitor and change the log level. It is possible to change the level at a parent level if required, but it is not recommended to set it to FINEST at a parent level, as that will generate a lot of logging.
    4. Make sure you apply the change for it to take effect.

    Simple as that.

    Read the article

  • Form Validation, Dependent Drop Downs, Data Level Security in OWS for DotNetNuke - 5 Videos

    In this tutorial we demonstrate some very advanced techniques for building a car parts application in Open Web Studio. Throughout the tutorial we cover form input, validation, how to use dependent drop-down lists, populating checkbox lists, and introduce a new concept of data level security. Data level security allows you to control which data a user can access within a module. The videos contain:

    Video 1 - How to Setup Form Validation
    Video 2 - Car Parts Application, Assigning Security Roles into a Global Session Variable
    Video 3 - How to Build the Categories Module with Data Level Security
    Video 4 - How to Build the SubCategories Module and Use SubQuery
    Video 5 - How to Build the Car Parts List Module

    Total Time Length: 44min 19secs

    Read the article

  • How difficult is it to change from embedded programming to high-level programming [on hold]

    - by anudeep shetty
    I have a background in Computer Science. After finishing my Bachelor's degree, I worked for over a year on embedded programming on Linux file systems. After that I pursued my Masters, where most of my course choices involved working on web, Java and databases. Now I have an offer from a company for a job working at the OS level. The company is pretty good, but I feel that my Masters has gone to waste. I wanted to know: is it common for a Computer Science major to work on low-level coding, and is there a possibility that I can work at this company for some years and then move on to an opportunity where I can work on high-level coding? Also, is working on low-level programming a safe choice in terms of job opportunities?

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much - we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with it to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to consume a target rate of service units per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
    If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing the application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) lets me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove the delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
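    As an aside, the resource-manager pattern described above is not unique to operating systems. Here is a hedged sketch, using SQL Server's Resource Governor and entirely hypothetical names, of a database-level resource manager that behaves exactly as the article critiques: it fixes shares and caps, and leaves meeting the service level objective to the administrator:

    -- Run in master: classifier functions must live there.
    -- A pool with a fixed share and cap, analogous to Solaris shares
    -- and capped CPU: allocation is held constant regardless of
    -- whether response-time objectives are being met.
    CREATE RESOURCE POOL ReportingPool
    WITH (MIN_CPU_PERCENT = 10, MAX_CPU_PERCENT = 40);

    CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
    GO

    -- Classifier routes sessions from a (hypothetical) reporting login
    -- into the capped group; everything else stays in the default pool.
    CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        IF SUSER_SNAME() = N'reporting_user'
            RETURN N'ReportingGroup';
        RETURN N'default';
    END;
    GO

    ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;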

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >