Search Results

Search found 6326 results for 'continuous operation'.

  • Fed Authentication Methods in OIF / IdP

    - by Damien Carru
    This article is a continuation of my previous entry where I explained how OIF/IdP leverages OAM to authenticate users at runtime: OIF/IdP internally forwards the user to OAM and indicates which Authentication Scheme should be used to challenge the user if needed OAM determine if the user should be challenged (user already authenticated, session timed out or not, session authentication level equal or higher than the level of the authentication scheme specified by OIF/IdP…) After identifying the user, OAM internally forwards the user back to OIF/IdP OIF/IdP can resume its operation In this article, I will discuss how OIF/IdP can be configured to map Federation Authentication Methods to OAM Authentication Schemes: When processing an Authn Request, where the SP requests a specific Federation Authentication Method with which the user should be challenged When sending an Assertion, where OIF/IdP sets the Federation Authentication Method in the Assertion Enjoy the reading! Overview The various Federation protocols support mechanisms allowing the partners to exchange information on: How the user should be challenged, when the SP/RP makes a request How the user was challenged, when the IdP/OP issues an SSO response When a remote SP partner redirects the user to OIF/IdP for Federation SSO, the message might contain data requesting how the user should be challenged by the IdP: this is treated as the Requested Federation Authentication Method. OIF/IdP will need to map that Requested Federation Authentication Method to a local Authentication Scheme, and then invoke OAM for user authentication/challenge with the mapped Authentication Scheme. OAM would authenticate the user if necessary with the scheme specified by OIF/IdP. Similarly, when an IdP issues an SSO response, most of the time it will need to include an identifier representing how the user was challenged: this is treated as the Federation Authentication Method. 
When OIF/IdP issues an Assertion, it will evaluate the Authentication Scheme with which OAM identified the user: If the Authentication Scheme can be mapped to a Federation Authentication Method, then OIF/IdP will use the result of that mapping in the outgoing SSO response: AuthenticationStatement in the SAML Assertion OpenID Response, if PAPE is enabled If the Authentication Scheme cannot be mapped, then OIF/IdP will set the Federation Authentication Method as the Authentication Scheme name in the outgoing SSO response: AuthenticationStatement in the SAML Assertion OpenID Response, if PAPE is enabled Mappings In OIF/IdP, the mapping between Federation Authentication Methods and Authentication Schemes has the following rules: One Federation Authentication Method can be mapped to several Authentication Schemes In a Federation Authentication Method <-> Authentication Schemes mapping, a single Authentication Scheme is marked as the default scheme that will be used to authenticate a user, if the SP/RP partner requests the user to be authenticated via a specific Federation Authentication Method An Authentication Scheme can be mapped to a single Federation Authentication Method Let’s examine the following example and the various use cases, based on the SAML 2.0 protocol: Mappings defined as: urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport mapped to LDAPScheme, marked as the default scheme used for authentication BasicScheme urn:oasis:names:tc:SAML:2.0:ac:classes:X509 mapped to X509Scheme, marked as the default scheme used for authentication Use cases: SP sends an AuthnRequest specifying urn:oasis:names:tc:SAML:2.0:ac:classes:X509 as the RequestedAuthnContext: OIF/IdP will authenticate the user with X509Scheme since it is the default scheme mapped for that method. SP sends an AuthnRequest specifying urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the RequestedAuthnContext: OIF/IdP will authenticate the user with LDAPScheme since it is the default scheme mapped for that method, not the BasicScheme SP did not request any specific methods, and user was authenticated with BasicScheme: OIF/IdP will issue an Assertion with urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the FederationAuthenticationMethod SP did not request any specific methods, and user was authenticated with LDAPScheme: OIF/IdP will issue an Assertion with urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the FederationAuthenticationMethod SP did not request any specific methods, and user was authenticated with BasicSessionlessScheme: OIF/IdP will issue an Assertion with BasicSessionlessScheme as the FederationAuthenticationMethod, since that scheme could not be mapped to any Federation Authentication Method (in this case, the administrator would need to correct that and create a mapping) Configuration Mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0).
As such, the WLST commands to set those mappings will involve: Either the SP Partner Profile and affect all Partners referencing that profile, which do not override the Federation Authentication Method to OAM Authentication Scheme mappings Or the SP Partner entry, which will only affect the SP Partner It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored. Authentication Schemes As discussed in the previous article, during Federation SSO, OIF/IdP will internally forward the user to OAM for authentication/verification and specify which Authentication Scheme to use. OAM will determine if a user needs to be challenged: If the user is not authenticated yet If the user is authenticated but the session timed out If the user is authenticated, but the authentication scheme level of the original authentication is lower than the level of the authentication scheme requested by OIF/IdP So even though an SP requests a specific Federation Authentication Method to be used to challenge the user, if that method is mapped to an Authentication Scheme and that at runtime OAM deems that the user does not need to be challenged with that scheme (because the user is already authenticated, session did not time out, and the session authn level is equal or higher than the one for the specified Authentication Scheme), the flow won’t result in a challenge operation. Protocols SAML 2.0 The SAML 2.0 specifications define the following Federation Authentication Methods for SAML 2.0 flows: urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocol urn:oasis:names:tc:SAML:2.0:ac:classes:Telephony urn:oasis:names:tc:SAML:2.0:ac:classes:MobileOneFactorUnregistered urn:oasis:names:tc:SAML:2.0:ac:classes:PersonalTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession urn:oasis:names:tc:SAML:2.0:ac:classes:MobileOneFactorContract urn:oasis:names:tc:SAML:2.0:ac:classes:Smartcard urn:oasis:names:tc:SAML:2.0:ac:classes:Password urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocolPassword urn:oasis:names:tc:SAML:2.0:ac:classes:X509 urn:oasis:names:tc:SAML:2.0:ac:classes:TLSClient urn:oasis:names:tc:SAML:2.0:ac:classes:PGP urn:oasis:names:tc:SAML:2.0:ac:classes:SPKI urn:oasis:names:tc:SAML:2.0:ac:classes:XMLDSig urn:oasis:names:tc:SAML:2.0:ac:classes:SoftwarePKI urn:oasis:names:tc:SAML:2.0:ac:classes:Kerberos urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport urn:oasis:names:tc:SAML:2.0:ac:classes:SecureRemotePassword urn:oasis:names:tc:SAML:2.0:ac:classes:NomadTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:AuthenticatedTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorUnregistered urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorContract urn:oasis:names:tc:SAML:2.0:ac:classes:SmartcardPKI urn:oasis:names:tc:SAML:2.0:ac:classes:TimeSyncToken Out of the box, OIF/IdP has the following mappings for the SAML 2.0 protocol: Only urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport is defined This Federation Authentication Method is mapped to: LDAPScheme, marked as the default scheme used for authentication FAAuthScheme BasicScheme BasicFAScheme This mapping is defined in the saml20-sp-partner-profile SP Partner Profile which is the default OOTB SP Partner Profile for SAML 2.0 An example of an AuthnRequest message sent by an SP to an IdP with the SP requesting a 
specific Federation Authentication Method to be used to challenge the user would be: <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Destination="https://idp.com/oamfed/idp/samlv20" ID="id-8bWn-A9o4aoMl3Nhx1DuPOOjawc-" IssueInstant="2014-03-21T20:51:11Z" Version="2.0">  <saml:Issuer ...>https://acme.com/sp</saml:Issuer>  <samlp:NameIDPolicy AllowCreate="false" Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"/>  <samlp:RequestedAuthnContext Comparison="minimum">    <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">      urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport </saml:AuthnContextClassRef>  </samlp:RequestedAuthnContext></samlp:AuthnRequest> An example of an Assertion issued by an IdP would be: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                    urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> An administrator would be able to specify a mapping between a SAML 2.0 Federation Authentication Method and one or more OAM Authentication Schemes SAML 1.1 The SAML 1.1 specifications define the following Federation Authentication Methods for SAML 1.1 flows: urn:oasis:names:tc:SAML:1.0:am:unspecified urn:oasis:names:tc:SAML:1.0:am:HardwareToken urn:oasis:names:tc:SAML:1.0:am:password urn:oasis:names:tc:SAML:1.0:am:X509-PKI urn:ietf:rfc:2246 urn:oasis:names:tc:SAML:1.0:am:PGP urn:oasis:names:tc:SAML:1.0:am:SPKI urn:ietf:rfc:3075 urn:oasis:names:tc:SAML:1.0:am:XKMS urn:ietf:rfc:1510 urn:ietf:rfc:2945 Out of the box, OIF/IdP has the following mappings for the SAML 1.1 protocol: Only urn:oasis:names:tc:SAML:1.0:am:password is defined This Federation Authentication Method is mapped to: LDAPScheme, marked as the default scheme used for authentication FAAuthScheme BasicScheme BasicFAScheme This mapping is defined in the saml11-sp-partner-profile SP Partner Profile which is the default OOTB SP Partner Profile for SAML 1.1 An example of an Assertion issued by an IdP would be: <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        
<saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameID ...>[email protected]</saml:NameID>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> Note: SAML 1.1 does not define an AuthnRequest message. An administrator would be able to specify a mapping between a SAML 1.1 Federation Authentication Method and one or more OAM Authentication Schemes OpenID 2.0 The OpenID 2.0 PAPE specifications define the following Federation Authentication Methods for OpenID 2.0 flows: http://schemas.openid.net/pape/policies/2007/06/phishing-resistant http://schemas.openid.net/pape/policies/2007/06/multi-factor http://schemas.openid.net/pape/policies/2007/06/multi-factor-physical Out of the box, OIF/IdP does not define any mappings for the OpenID 2.0 Federation Authentication Methods. For OpenID 2.0, the configuration will involve mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. An example of an OpenID 2.0 Request message sent by an SP/RP to an IdP/OP would be: https://idp.com/openid?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=checkid_setup&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.realm=https%3A%2F%2Facme.com%2Fopenid&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_request&openid.ax.type.attr0=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.if_available=attr0&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.max_auth_age=0 An example of an Open ID 2.0 SSO Response issued by an IdP/OP would be: 
https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D In the next article, I will provide examples on how to configure OIF/IdP for the various protocols, to map OAM Authentication Schemes to Federation Authentication Methods.Cheers,Damien Carru

    Read the article

  • Top Tweets SOA Partner Community – November 2011

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity soacommunity SOA Community Dutch ACEs SOA Partner Community award celebration wp.me/p10C8u-i9 OracleBPM Gauging Maturity of your BPM Strategy – part 1/2, bit.ly/vJE9UZ MagicChatzi Dutch ACE’s and ACE Directors had a small party: achatzia.blogspot.com/2011/11/celebr… leonsmiers #Capgemini #Oracle #BPM Blog index bit.ly/tUYtvD #yam lucasjellema Blog post by my colleague Emiel on the AMIS blog: Timeouts in Oracle SOA Suite 11g – tinyurl.com/73amo3r biemond Solving __OAUX_GENXSD_.TOP.XSD with BPEL: When you use an external web service in combination with a BPEL servic… t.co/Gzzatzrr OracleBlogs Jumpstart Fusion Middleware projects with Oracle User Productivity Kit ow.ly/1fJMev cpurdy on Oracle Coherence data grid, its new RESTful APIs, and Oracle Service Bus (OSB): blogs.oracle.com/slc/entry/orac… Accenture Learn how Service-Oriented Architecture can help public service agencies solve legacy system issues. bit.ly/sTteM4 #SOA eelzinga Thanks for organising it Andreas! #soacommunity eelzinga Had a nice drink with the fellow Dutch Oracle ACE members for a little celebration of the SOA Community Partner Award. #soacommunity EmielP Wrote a blogpost about timeouts in the #Oracle #SOA Suite: bit.ly/uhUcrX OracleBlogs Processing Binary Data in SOA Suite 11g t.co/Tzd1xBsY OracleBlogs Finding the Value in SOA by Stephen Bennett t.co/9MMLJoLz OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/5viNj8ib OracleBlogs Demo: Business Transaction Management with SOA Management Pack ow.ly/1fFBv3 OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/Dnfzo0PN oracletechnet Wikis.oracle.com lives leonsmiers A new #capgemini #oracle #blog, Measuring the Human Task activity in Oracle BPM bit.ly/uPan08 #yam @CapgeminiOracle OTNArchBeat 3 SOA business cases, explained in a 2-minute elevator speech | @JoeMcKendrick t.co/aYGNkZup OTNArchBeat Gartner, Inc. places Oracle SOA Governance in Magic Quadrant for SOA Governance Technologies t.co/bSG5cuTr Jphjulstad Red carpet to Oracle BPM – evita.no evita.no/ikbViewer/soa-… Oracle #Oracle Named a Leader in #SOA Governance Magic Quadrant by Leading Analyst Firm t.co/prnyGu2U soacommunity What presentations & topics do you like to see at the next SOA & BPM & Webcenter Community Forum early 2012? #soacommunity soacommunity Oracle BPM Suite 11g Handbook Released wp.me/p10C8u-hU OTNArchBeat SOA Development Virtual Developer Day (On Demand) | @soacommunity bit.ly/sqhQmX OracleBlogs SOA Development Virtual Developer Day (On Demand) t.co/MDrdnx0h 9 Nov Favorite Undo Retweet Reply OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4 biemond @stevendavelaar this is for you t.co/hInKCcfY it explains your sso problem soacommunity SOA Development Virtual Developer Day (on demand) t.co/flXPWk4R soacommunity IPT Swiss SOA Experts – thanks for the nice ink wp.me/p10C8u-i3 soacommunity Enjoy #wjax specially the presentations from our #ACE @t_winterberg @myfear @AdamBien pic.twitter.com/m8VcBSG3 OTNArchBeat Discounts on books, more, for Oracle Technology Network members bit.ly/vRxMfB OracleSOA Justify the ROI of SOA in 10 seconds…a pic is worth 1000 words bit.ly/roi_of_soa_img #oraclesoa #soa #oow11 orclateamsoa A-Team SOA Blog: Case Management in BPM 11g -  Mark Foster Oracle BPM 11g & Case Management I’ve seen… t.co/l5zb6pFr t_winterberg Die nächste SIG #SOA steht an: 7.12. in Hamburg. 
Neues Tooling und Erfahrungen rund um Oracle FMW, SOA, BPM… (cont) deck.ly/~YC57v OracleBlogs Continuous Integration for SOA/BPM ow.ly/1fsekI OracleBlogs BPM Suite 11g Handbook Released ow.ly/1frlzv lucasjellema Iterating over collection (array) in BPM (and dispatching jobs for entries in array): t.co/1SEhSvWv – subprocesses are the key. lucasjellema Lucas Jellema Useful tip from Mark Nelson: BPM API documentation (as well as Human Workflow Service) available: redstack.wordpress.com/2011/09/28/api… OTNArchBeat SOA, cloud: it’s the architecture that matters | Joe McKendrick zd.net/tNCiTF orclateamsoa: Building a job dispatcher in BPM -or- Iterating over collections in BPM ow.ly/1frbrz orclateamsoa Using the Database as a Policy Store for SOA 11g ow.ly/1frbrA OracleBPM Oracle launches Process Accelerators for BPM: t.co/XPEE61QL Jphjulstad Human-Centric BPM Selection Checklist t.co/3TZXZHLH OracleBlogs Fusion Middleware General Session at OOW 2011: Missed It? Read On… t.co/aU5JvM6K gschmutz Great! The product page of the OSB 11g Development Cookbook is now online: t.co/5Jfbe6Ng Looking forward to get it, u too? brhubart Oracle IT Architecture Essentials; Lightweight Composite Service Development with SCA and Spring; Cloud Migration ow.ly/7esNg eelzinga New blogpost : Oracle Service Bus, Generic fault handling, bit.ly/sGr4UL #osb #oracleservicebus For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN

    Read the article

  • Backup Meta-Data

    - by BuckWoody
    I'm working on a PowerShell script to show me the trending durations of my backup activities. The first thing I need is the data, so I looked at the Standard Reports in SQL Server Management Studio, and found a report that suited my needs, so I pulled out the script that it runs and modified it to this T-SQL Script. A few words here - you need to be in the MSDB database for this to run, and you can add a WHERE clause to limit to a database, timeframe, type of backup, whatever. For that matter, I won't use all of the data in this query in my PowerShell script, but it gives me lots of avenues to graph: SELECT distinct t1.name AS 'DatabaseName' ,(datediff( ss,  t3.backup_start_date, t3.backup_finish_date)) AS 'DurationInSeconds' ,t3.user_name AS 'UserResponsible' ,t3.name AS backup_name ,t3.description ,t3.backup_start_date ,t3.backup_finish_date ,CASE WHEN t3.type = 'D' THEN 'Database' WHEN t3.type = 'L' THEN 'Log' WHEN t3.type = 'F' THEN 'FileOrFilegroup' WHEN t3.type = 'G' THEN 'DifferentialFile' WHEN t3.type = 'P' THEN 'Partial' WHEN t3.type = 'Q' THEN 'DifferentialPartial' END AS 'BackupType' ,t3.backup_size AS 'BackupSizeKB' ,t6.physical_device_name ,CASE WHEN t6.device_type = 2 THEN 'Disk' WHEN t6.device_type = 102 THEN 'Disk' WHEN t6.device_type = 5 THEN 'Tape' WHEN t6.device_type = 105 THEN 'Tape' END AS 'DeviceType' ,t3.recovery_model  FROM sys.databases t1 INNER JOIN backupset t3 ON (t3.database_name = t1.name )  LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id ) LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id ) ORDER BY backup_start_date DESC I'll munge this into my Excel PowerShell chart script tomorrow. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
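
    As an aside, here is a minimal sketch of the kind of WHERE clause mentioned above. The database name 'MyDatabase', the full-backup filter and the 30-day window are hypothetical choices, not part of the original script; the three-part name for backupset lets it run from any database:

    -- Trimmed version of the query above, filtered to one (hypothetical) database,
    -- full database backups only, over the last 30 days.
    SELECT DISTINCT
           t1.name AS 'DatabaseName',
           DATEDIFF(ss, t3.backup_start_date, t3.backup_finish_date) AS 'DurationInSeconds',
           t3.backup_start_date,
           t3.backup_finish_date
    FROM sys.databases t1
         INNER JOIN msdb.dbo.backupset t3 ON t3.database_name = t1.name
    WHERE t1.name = 'MyDatabase'                                   -- hypothetical database name
          AND t3.type = 'D'                                        -- 'D' = full database backup
          AND t3.backup_start_date >= DATEADD(dd, -30, GETDATE())  -- last 30 days only
    ORDER BY t3.backup_start_date DESC;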

    Read the article

  • ASP.NET 4 Hosting :: How to Debug Your ASP.NET Applications

    - by mbridge
    Remote debugging of a process is a privilege, and like all privileges, it must be granted to a user or group of users before its operation is allowed. The Microsoft .NET Framework and Microsoft Visual Studio .NET provide two mechanisms to enable remote debugging support: The Debugger Users group and the "Debug programs" user right. Debugger Users Group When you debug a remote .NET Framework-based application, the Debugger on your computer must communicate with the remote computer using DCOM. The remote server must grant the Debugger access, and it does this by granting access to all members of the Debugger Users group. Therefore, you must ensure that you are a member of the Debugger Users group on that computer. This is a local security group, meaning that it is visible to only the computer where it exists. To add yourself or a group to the Debugger Users group, follow these steps: 1. Right-click the My Computer icon on the Desktop and choose Manage from the context menu. 2. Browse to the Groups node, which is found under the Local Users and Groups node of System Tools. 3. In the right pane, double-click the Debugger Users group. 4. Add your user account or a group account of which you are a member. Debug Programs User Right To debug programs that run under an account that is different from your account, you must be granted the "Debug programs" user right on the computer where the program runs. By default, only the Administrators group is granted this user right. You can check this by opening Local Security Policy on the computer. To do so, follow these steps: 1. Click Start, Administrative Tools, and then Local Security Policy. 2. Browse to the User Rights Assignment node under the Local Policies node. 3. In the right pane, double-click the "Debug programs" user right. 4. Add your user account or a group account of which you are a member.

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi
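
    As a side note to the memory-grant discussion above (not part of the original article): a minimal sketch of how one might watch workspace memory grants from a second session while one of the test queries runs, using the sys.dm_exec_query_memory_grants DMV that ships with SQL Server 2008. It needs VIEW SERVER STATE permission; the column choice here is just an illustration:

    -- Run from a separate session while a test query executes elsewhere.
    SELECT  mg.session_id,
            mg.requested_memory_kb,
            mg.granted_memory_kb,
            mg.max_used_memory_kb,
            mg.wait_time_ms,          -- time spent waiting for the grant (RESOURCE_SEMAPHORE)
            t.[text] AS query_text
    FROM    sys.dm_exec_query_memory_grants AS mg
            CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
    WHERE   mg.session_id <> @@SPID   -- exclude this monitoring session
    ORDER BY mg.granted_memory_kb DESC;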

    Read the article

  • Change Tracking

    - by Ricardo Peres
    You may recall my last post on Change Data Capture. This time I am going to talk about another option for tracking changes to tables on SQL Server: Change Tracking. The main differences between the two are: Change Tracking works with SQL Server 2008 Express Change Tracking does not require SQL Server Agent to be running Change Tracking does not keep the old values in case of an UPDATE or DELETE Change Data Capture uses an asynchronous process, so there is no overhead on each operation Change Data Capture requires more storage and processing Here's some code that illustrates its usage: -- for demonstrative purposes, table Post of database Blog only contains two columns, PostId and Title -- enable change tracking for database Blog, for 2 days ALTER DATABASE Blog SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); -- enable change tracking for table Post ALTER TABLE Post ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON); -- see current records on table Post SELECT * FROM Post SELECT * FROM sys.sysobjects WHERE name = 'Post' SELECT * FROM sys.sysdatabases WHERE name = 'Blog' -- confirm that table Post and database Blog are being change tracked SELECT * FROM sys.change_tracking_tables SELECT * FROM sys.change_tracking_databases -- see current version for table Post SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c; -- update post UPDATE Post SET Title = 'First Post Title Changed' WHERE Title = 'First Post Title'; -- see current version for table Post SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c; -- see changes since version 0 (initial) SELECT p.Title, c.PostId, SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT FROM CHANGETABLE(CHANGES Post, 0) AS c LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId; -- is column Title of table Post changed since version 0? SELECT CHANGE_TRACKING_IS_COLUMN_IN_MASK(COLUMNPROPERTY(OBJECT_ID('Post'), 'Title', 'ColumnId'), (SELECT SYS_CHANGE_COLUMNS FROM CHANGETABLE(CHANGES Post, 0) AS c)) -- get current version SELECT CHANGE_TRACKING_CURRENT_VERSION() -- disable change tracking for table Post ALTER TABLE Post DISABLE CHANGE_TRACKING; -- disable change tracking for database Blog ALTER DATABASE Blog SET CHANGE_TRACKING = OFF; You can read about the differences between the two options here. Choose the one that best suits your needs!
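
    As a follow-on to the script above, here is a minimal sketch of the usual synchronization pattern, assuming the same hypothetical Blog/Post schema and that change tracking is still enabled: remember the version you last synchronized to, then on the next pass ask only for rows changed since that version, after checking the stored version is still inside the retention window:

    -- Remember where this sync run starts from.
    DECLARE @last_sync_version BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

    -- ... time passes, rows in Post are inserted, updated or deleted ...

    IF @last_sync_version >= CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('Post'))
    BEGIN
        -- incremental sync: only rows changed since the stored version
        SELECT c.PostId,
               c.SYS_CHANGE_OPERATION,  -- 'I', 'U' or 'D'
               c.SYS_CHANGE_VERSION,
               p.Title                  -- NULL for deleted rows
        FROM CHANGETABLE(CHANGES Post, @last_sync_version) AS c
             LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId;
    END
    ELSE
    BEGIN
        -- retention window exceeded: change data is incomplete, re-initialize with a full copy
        SELECT PostId, Title FROM Post;
    END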

    Read the article

  • The Incremental Architect´s Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx The easier something is to measure, the more likely it is to be produced. Deviations between what is and what should be can be readily detected. That´s what automated acceptance tests are for. That´s what sprint reviews in Scrum are for. It´s no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it´s lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for a long time. That´s why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then the health of the patrons can be guaranteed and production efficiency is constantly high. Still a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum´s sprints of 4, 2 even 1 week are too long. Kanban´s flow of user stories across is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment: Clear expectations Absolute estimates (“This will take X days to complete.”) are near impossible in software development as explained previously. Too much unplanned research and engineering work lurk in every feature. And then pervasive interruptions of work by peers and management. However, the smaller the scope the better our absolute estimates become. That´s because we understand better what really are the requirements and what the solution should look like.
But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week or longer. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That’s why I believe we can give rough absolute estimates on 3 levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, “It’ll be fixed by noon.”, or you can say, “I can manage to implement it by tonight before I leave.”, or you can say, “You’ll get it by tomorrow night at the latest.” Yes, I believe all else would be naive. If you’re not confident you can get something done by tomorrow night (some 34h from now), you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue. So when estimating, use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don’t do deep estimates. Don’t estimate dozens of issues; don’t think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it’s C as a Noon, finally D is a Tonight - that’s what I’ll do this week.”). Just estimate so that Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow. Trust through reliability Our trade is lacking trust. Customers don’t trust software companies/departments much. Managers don’t trust developers much. I find that perfectly understandable in light of what we’re trying to accomplish: delivering software in the face of uncertainty as if it were the production of material goods. Customers as well as managers still expect software development to be close to the production of houses or cars. But that’s a fundamental misunderstanding. Software development is development. It’s basically research. As software developers we’re constantly executing experiments to find out what really provides value to users. We don’t know what they need; we just have mediated hypotheses. That’s why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I’d say: reliability over scope. It’s more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt, promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don’t wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don’t need to learn that the hard way. Just start with small batches of three different sizes. Fast feedback What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? 
Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It’s boring, it’s frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It’s even more important than planning new issues. Because as long as work started is not released (accepted), it’s potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it’s a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue. Flexibility As if reliability/trust and fast feedback for less waste weren’t enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it’s neither D nor E, but G. And after G it’s D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost. Because what got accepted is of value. It provides incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It’s unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you’re able to change your course at any time. Premature completion Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That’s understandable. But it does not match the nature of software development. We should know that by now. Maybe somewhere in the world there’s some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven’t seen such a team yet. Which does not mean it’s impossible, but I think it’s nothing I can recommend to strive for. Rather I’d say: don’t try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment. 
With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn’t it more important to constantly move forward? Step by step. We’re not running sprints, we’re not running marathons, not even ultra-marathons. We’re in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance of saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see whether a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C could then be finished at just 70%, 80%, and 50%. What the heck? It’s good enough - for now. (Assuming the three were budgeted equally, that is (70% + 80% + 50%) / 3 ≈ 67% of the planned effort.) 33% of the money saved. Wouldn’t that be splendid? Isn’t that a stunning argument for any budget-sensitive customer? You can save money and still get what you need? Pull on practices So far, in addition to more trust, more flexibility, and less money spent, Spinning has led to “doing less”, which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning’s short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn’t 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good-enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that’s different. If you’re practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize that without them you cannot reliably deliver even on your 32 hour promises. Spinning thus pulls on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do so. Because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours. 
There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don’t count the “WTF!” utterances, count the “No way!” utterances. In closing For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development, but must be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments, each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It’s sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what’s there, not on what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, and which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that’s the challenge we need to accept if we’re serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and “getting more bang for the buck”.

    Read the article

  • Guidance and Pricing for MSDN 2010

    - by John Alexander
    Sorry for the rather lengthy post here. I get asked this all the time so I decided to post it…Visual Studio 2010 editions will be available on April 12, 2010. Product Features Professional with MSDN Essentials Professional with MSDN Premium with MSDN Ultimate with MSDN Test Professional with MSDN Debugging and Diagnostics IntelliTrace (Historical Debugger)         Static Code Analysis       Code Metrics       Profiling       Debugger   Testing Tools Unit Testing   Code Coverage       Test Impact Analysis       Coded UI Test       Web Performance Testing         Load Testing1         Microsoft Test Manager 2010       Test Case Management2       Manual Test Execution       Fast-Forward for Manual Testing       Lab Management Configuration3       Integrated Development Environment Multiple Monitor Support   Multi-Targeting   One Click Web Deployment   JavaScript and jQuery Support   Extensible WPF-Based Environment Database Development Database Deployment       Database Change Management2       Database Unit Testing       Database Test Data Generation       Data Access   Development Platform Support Windows Development   Web Development   Office and SharePoint Development   Cloud Development   Customizable Development Experience   Architecture and Modeling Architecture Explorer         UML® 2.0 Compliant Diagrams (Activity, Use Case, Sequence, Class, Component)         Layer Diagram and Dependency Validation         Read-only diagrams (UML, Layer, DGML Graphs)         Lab Management Virtual environment setup & tear down3       Provision environment from template3       Checkpoint environment3       Team Foundation Server Version Control2   Work Item Tracking2   Build Automation2   Team Portal2   Reporting & Business Intelligence2   Agile Planning Workbook2   Microsoft Visual Studio Team Explorer 2010   Test Case Management2       MSDN Subscription – Software and Services for Production Use Windows Azure Platform 20 hrs/mo † 50 hrs/mo † 100 hrs/mo † 250 hrs/mo † n/a Microsoft Visual Studio Team Foundation Server 2010   Microsoft Visual Studio Team Foundation Server 2010 CAL   1 1 1 1 Microsoft Expression Studio 3       Microsoft Office Professional Plus 2010, Project Professional 2010, Visio Premium 2010 (following Office 2010 launch)       MSDN Subscription – Software for Development and Testing 4 Windows 7, Windows Server 2008 R2 and SQL Server 2008 Toolkits, Software Development Kits, Driver Development Kits Previous versions of Windows (client and server operation systems)   Previous versions of Microsoft SQL Server   Microsoft Office       Microsoft Dynamics       All other Servers       Windows Embedded operating systems       Teamprise         MSDN Subscription – Other Benefits Technical support incidents 0 2 4 4 2 Priority support in MSDN Forums Microsoft e-learning collections (typically 10 courses or 20 hours) 0 1 2 2 1 MSDN Flash newsletter MSDN Online Concierge MSDN Magazine   System Requirements View View View View View Buy from (MSRP) $799 $1,199 $5,469 $11,899 $2,169 Renew from (MSRP) $549 (upgrade) $799 $2,299 $3,799 $899 † Availability varies by country and subscription level.  Details available on the MSDN site 1. May require one or more Microsoft Visual Studio Load Test Virtual User Pack 2010 2. Requires Team Foundation Server and a Team Foundation Server CAL 3. Requires Microsoft Visual Studio Lab Management 2010 4. Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications. 
UML is a registered trademark of Object Management Group, Inc. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

    Read the article

  • .NET Math Libraries

    - by JoshReuben
    .NET Mathematical Libraries   .NET Builder for Matlab The MathWorks Inc. - http://www.mathworks.com/products/netbuilder/ MATLAB Builder NE generates MATLAB-based .NET and COM components royalty-free deployment creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them. .NET/Link for Mathematica www.wolfram.com a product that 2-way integrates Mathematica and Microsoft's .NET platform call .NET from Mathematica - use arbitrary .NET types directly from the Mathematica language. use and control the Mathematica kernel from a .NET program. turns Mathematica into a scripting shell to leverage the computational services of Mathematica. write custom front ends for Mathematica or use Mathematica as a computational engine for another program comes with full source code. Leverages MathLink - Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs. .NET/Link abstracts the low-level details of the MathLink C API. Extreme Optimization http://www.extremeoptimization.com/ a collection of general-purpose mathematical and statistical classes built for the .NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package. download the trial of version 4.0 to try it out. Multi-core ready - Full support for Task Parallel Library features including cancellation. Broad base of algorithms covering a wide range of numerical techniques, including: linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation), equation solvers. Mathematics leverages parallelism using .NET 4.0's Task Parallel Library. Basic math: Complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation. Solving equations: Solve equations in one variable, or solve systems of linear or nonlinear equations. Curve fitting: Linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials. Optimization: find the minimum or maximum of a function in one or more variables, linear programming and mixed integer programming. Numerical integration: Compute integrals over finite or infinite intervals, over 2D and higher dimensional regions. Integrate systems of ordinary differential equations (ODEs). Fast Fourier Transforms: 1D and 2D FFTs using managed or fast native code (32 and 64 bit) BigInteger, BigRational, and BigFloat: Perform operations with arbitrary precision. Vector and Matrix Library Real and complex vectors and matrices. Single and double precision for elements. Structured matrix types: including triangular, symmetrical and band matrices. Sparse matrices. Matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition. Portability and performance: Calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit). Statistics Data manipulation: Sort and filter data, process missing values, remove outliers, etc. Supports .NET data binding. Statistical Models: Simple, multiple, nonlinear, logistic, Poisson regression. Generalized Linear Models. One and two-way ANOVA. Hypothesis Tests: 14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests, such as the Anderson-Darling test for normality, one and two-sample Kolmogorov-Smirnov test, and Levene's test for homogeneity of variances. 
Multivariate Statistics: K-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions. Statistical Distributions: 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions. Random numbers: Random variates from any distribution, 4 high-quality random number generators, low discrepancy sequences, shufflers. New in version 4.0 (November, 2010) Support for .NET Framework Version 4.0 and Visual Studio 2010 TPL Parallelized – multicore ready sparse linear program solver - can solve problems with more than 1 million variables. Mixed integer linear programming using a branch and bound algorithm. special functions: hypergeometric, Riemann zeta, elliptic integrals, Fresnel functions, Dawson's integral. Full set of window functions for FFTs. Product  Price Update subscription Single Developer License $999  $399  Team License (3 developers) $1999  $799  Department License (8 developers) $3999  $1599  Site License (Unlimited developers in one physical location) $7999  $3199    NMath http://www.centerspace.net .NET math and statistics libraries matrix and vector classes random number generators Fast Fourier Transforms (FFTs) numerical integration linear programming linear regression curve and surface fitting optimization hypothesis tests analysis of variance (ANOVA) probability distributions principal component analysis cluster analysis built on the Intel Math Kernel Library (MKL), which contains highly-optimized, extensively-threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage). Product  Price Update subscription Single Developer License $1295 $388 Team License (5 developers) $5180 $1554   DotNumerics http://www.dotnumerics.com/NumericalLibraries/Default.aspx free DotNumerics is a website dedicated to numerical computing for .NET that includes a C# Numerical Library for .NET containing algorithms for Linear Algebra, Differential Equations and Optimization problems. The Linear Algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively. Linear Algebra (CSLapack, CSBlas and CSEispack). Systems of linear equations, eigenvalue problems, least-squares solutions of linear systems and singular value problems. Differential Equations. Initial-value problem for nonstiff and stiff ordinary differential equations ODEs (explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton). Optimization. Unconstrained and bounded constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods).   Math.NET Numerics http://numerics.mathdotnet.com/ free an open source numerical library - includes special functions, linear algebra, probability models, random numbers, interpolation, integral transforms. A merger of dnAnalytics with Math.NET Iridium; in addition to a purely managed implementation it will also support native hardware optimization. constants & special functions complex type support real and complex, dense and sparse linear algebra (with LU, QR, eigenvalues, ... 
decompositions) non-uniform probability distributions, multivariate distributions, sample generation alternative uniform random number generators descriptive statistics, including order statistics various interpolation methods, including barycentric approaches and splines numerical function integration (quadrature) routines integral transforms, like the Fourier transform (FFT) with arbitrary-length support, and the Hartley transform; spectral-space aware sequence manipulation (signal processing) combinatorics, polynomials, quaternions, basic number theory. parallelized where appropriate, to leverage multi-core and multi-processor systems fully managed or (if available) using native libraries (Intel MKL, ACML, CUDA, FFTW) provides a native facade for F# developers
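To give a feel for the last (and free) option above, here is a minimal C# sketch against Math.NET Numerics - this is not code from the original list, and the namespaces reflect recent releases of the library rather than the 2010-era API described above:
    using System;
    using System.Linq;
    using MathNet.Numerics.LinearAlgebra;      // matrix and vector types
    using MathNet.Numerics.Distributions;      // probability distributions
    using MathNet.Numerics.Statistics;         // descriptive statistics
    class MathNetSketch
    {
        static void Main()
        {
            // Build a small dense matrix and solve A*x = b
            var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 4, 1 }, { 1, 3 } });
            var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });
            Console.WriteLine(a.Solve(b));
            // Eigenvalue decomposition of the same matrix
            Console.WriteLine(a.Evd().EigenValues);
            // Draw samples from a standard normal distribution and compute their mean
            var normal = new Normal(0.0, 1.0);
            var samples = normal.Samples().Take(1000).ToArray();
            Console.WriteLine(samples.Mean());
        }
    }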

    Read the article

  • How do I resolve not fully installed package (python3-setuptools)?

    - by user3737693
    I was trying to install python3-setuptools, and when i run $ sudo apt-get install python3-setuptools I get this error: Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up python3-setuptools (0.6.34-0ubuntu1) ... Traceback (most recent call last): File "/usr/bin/py3compile", line 36, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error processing python3-setuptools (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: python3-setuptools E: Sub-process /usr/bin/dpkg returned an error code (1) I tried apt-get clean, apt-get autoclean, apt-get remove python3-setuptools, dpkg --remove python3-setuptools, apt-get install -f, dpkg -P --force-remove-reinstreq, dpkg -P --force-all --force-remove-reinstreq and dpkg --purge, but none of them worked. Output of sudo dpkg -P --force-all --force-remove-reinstreq python3-setuptools (Reading database ... 225309 files and directories currently installed.) Removing python3-setuptools ... Traceback (most recent call last): File "/usr/bin/py3clean", line 32, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error processing python3-setuptools (--purge): subprocess installed pre-removal script returned error exit status 1 Traceback (most recent call last): File "/usr/bin/py3compile", line 36, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: python3-setuptools

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible.  Many people might not realize how much of a task or undertaking this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate’s ANTS Performance (CLR) and Memory Profiler – gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no-brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it, you fill out a form for your usual contact information.  Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me get the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate’s friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed not only the CLR and Memory profilers, but also Visual Studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under the ANTS menu) Starting the CLR Performance Profiler from the start menu yields this window If you follow the instructions after launching the program from the start menu (Click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different DPI scales. 
The profiler options give you the ability to profile: .NET Executable ASP.NET web application (hosted in IIS) ASP.NET web application (hosted in IIS express) ASP.NET web application (hosted in Cassini Web Development Server) SharePoint web application (hosted in IIS) Silverlight 4+ application Windows Service COM+ server XBAP (local XAML browser application) Attach to an already running .NET 4 process Choosing each option provides a varying set of other variables/options that one can set including options such as application arguments, operating path, record I/O performance performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .Net project types, and make it simple to do so.  In most cases of my using this application, I would be using the built in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options setup, and start your program, however RedGate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/gui to perform profiling after launching, that instead they would provide a Visual Studio panel with the information, and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options that the profiler gives us, its time to test its abilities and features. Horrendous Example Code – Prime Number Generator One of my interests besides development, is Physics and Math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, I decided that I would go ahead and to test the abilities of the profiler, I would write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post): interface IPrimes { IEnumerable<int> GetPrimes(int retrieve); } Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way: public class DumbPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _analyzing = 4; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; //start dividing at 2 //divide until number is reached, or determined not prime for (int i = 2; i < _analyzing && isPrime; i++) { //if (i) goes into _analyzing without a remainder, //_analyzing is NOT prime if (_analyzing % i == 0) isPrime = false; } //if it is prime, add to found list if (isPrime) _foundPrimes.Add(_analyzing); //increment number to analyze next _analyzing++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } This is the simplest way to get primes in my opinion.  
Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself. I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textbox’s, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and DumbPrimes IPrimes class, so now I want to check my codes efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note, is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade off.  It may be WAY more inefficient, however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here, I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen, is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counters status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph below the time ticks, but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.  
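For reference, a console harness along these lines is enough to reproduce the un-profiled wall-clock number quoted above - this harness is hypothetical and not part of the article (the author's actual WPF front end is not shown here); it only assumes the IPrimes interface and DumbPrimes class listed earlier:
    using System;
    using System.Diagnostics;
    using System.Linq;
    class TimingHarness
    {
        static void Main()
        {
            IPrimes primes = new DumbPrimes();               // implementation shown earlier
            var watch = Stopwatch.StartNew();
            var result = primes.GetPrimes(15000).ToList();   // force full enumeration
            watch.Stop();
            Console.WriteLine("{0} primes in {1:0.00} s, largest = {2}",
                result.Count, watch.Elapsed.TotalSeconds, result.Last());
        }
    }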
Click the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework type code), however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling methods time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit, however I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer, is the menu option under tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of use who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, ran it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – prime numbers fall into two buckets 6k+/-1 , and a number is prime if it is not divisible by any other primes before it: public class SmartPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _k = 1; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; int potentialPrime; //analyze 6k-1 //assign the value to potential potentialPrime = 6 * _k - 1; //if there are any primes that divise this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); if (_foundPrimes.Count() == retrieve) break; //analyze 6k+1 //assign the value to potential potentialPrime = 6 * _k + 1; //if there are any primes that divise this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); //increment k to analyze next _k++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than 1 core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments..  Odd. Profiling Web Applications The last thing that I wanted to cover, that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler when profiling web applications.  In my solution, I have a simple MVC4 application setup with 1 page, a single input form, that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, however I did receive a UAC prompt for a Red Gate helper app to integrate with the web server without notification. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind! Overview Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long ways to go.  
3 Biggest Pros: Ability to easily drill down from the time graph, to method calls, to source code Wide variety of counters to choose from when profiling your application Excellent integration/grouping of methods being called from web applications by request – BRILLIANT! 3 Biggest Cons: Issue regarding line details in source view Nit pick – Processor time vs. Core time Nit pick – Lack of full integration with Visual Studio Ratings Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that entails, and the lack of better integration with Visual Studio. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus! Features (9/10) – Besides the real-time performance monitoring and the drill-downs that I’ve shown here, ANTS also has great integration with ADO.NET, with the ability to show database queries run by your application in the profiler.  This, together with the line-level details, the web request grouping, Reflector integration, and the various options to customize your profiling session, makes for a great set of features, I think! Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy! UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend trying the application and seeing if it would fit into your process, BUT remember there are still some kinks in it to hopefully be worked out. My next post will definitely be shorter (hopefully), but thank you for reading up to here, or for skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox

    Read the article

  • How do I fix dependency problems with the kernel in apt?

    - by Jon
    When trying to install new packages, either manually or with muon, I get these errors: jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get install kupfer [sudo] password for jon: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: kupfer : Depends: python-keybinder but it is not going to be installed Recommends: python-wnck but it is not going to be installed linux-headers-generic : Depends: linux-headers-3.2.0-20-generic but it is not installable linux-image-generic : Depends: linux-image-3.2.0-20-generic but it is not installable E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic linux-headers-generic linux-image-generic The following packages will be upgraded: linux-generic linux-headers-generic linux-image-generic 3 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 3 not fully installed or removed. Need to get 0 B/6,658 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-20-generic; however: Package linux-image-3.2.0-20-generic is not installed. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.20.22); however: Package linux-image-generic is not configured yet. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-headers-generic: linux-headers-generic depends on linux-headers-3.2.0-20-generic; however: Package linux-headers-3.2.0-20-generic is not installed. dpkg: error processing linux-headers-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-image-generic linux-generic linux-headers-generic E: Sub-process /usr/bin/dpkg returned an error code (1) As indicated above, I ran sudo apt-get -f install but it still tells me there are dependency issues.

    Read the article

  • The Year 2010, The Year of Change

    As I look back on the year 2010, I could never have predicted the wonderful changes that have occurred for my wife and me. The beginning of this year started out as the 9th year that we lived in South Florida, and my fourth year working for DentalPlans.com as a software engineer/network admin. About 3 months into the year I was given an excellent opportunity to work for MovieTickets.com in the software engineering department. This opportunity allowed me to gain experience with jQuery, because one of my projects was to reengineer MovieTickets.com's existing Marketing Panel System. About 3 months after starting at MovieTickets.com, my wife and I were offered an opportunity of a lifetime. I was offered a job at a large background/information security company located in Nashville, TN as a software engineer II.  I must note that after living in South Florida for 9 years, my wife and I really had a strong distaste for the South Florida lifestyle and the general attitude/culture of the area. Even though we shared a strong dislike for the area in which we lived, I must admit that it was a tough decision to leave MovieTickets.com, because I was really doing well and I had made some great new friends like Chris Catto and Tyson Nero.  In fact, they introduced me to local Microsoft user groups and software development podcasts like DotNetRocks.com and Hanselminutes.com.  In addition, we also went to my first Microsoft launch down in Miami for Visual Studio 2010. I must admit it was a cool experience.  I truly hope to keep in touch with them to see how their careers grow, and I know they will. I must admit I was nervous and excited to start the next chapter in our lives as I started up the 26-foot U-Haul truck and got on the road to Nashville from Boca Raton. I knew that the change was going to lead to new adventures and new opportunities that I could never have imagined.  As we pulled into the long driveway of our rental house, we knew that this was the right place for my wife and me. Natalie, my wife, had actually come up to Nashville and, within one week of my job offer, had set up a nice rental home for us to restart our lives in TN.  I must admit that the wonderful southern hospitality took a bit to get used to, given the type of people we were used to dealing with on a regular basis. Our first 2 months seemed like we were living a dream because of our new area and the wonderful people we live around. So far my new job is going really well and I really like the people on my team and in my department. In fact, after 6 months I am now in charge of all application builds for our new deployment process. I am also leading a push to set up continuous integration within our new build process.  In addition to starting my new job, I was also offered a position as an adjunct instructor at ITT Tech, teaching courses like VB.NET, JavaScript, Ajax, and database development. So far I have really liked teaching at the college level.  Information technology has really been great for my life, so I am really glad to be able to give back. That is actually why I started DotNetBlocks. This site allows me to document things I have learned as I work with technology, and allows others to borrow from my experiences.  I hope that this site can help others as others have helped me get where I am. Finally, I am glad to report that I only have 4 classes left for my master’s degree at Capella University. I am proud to announce that I am still on track to graduate with a 3.91 GPA.  
This last class was really a test, because I had the crazy idea that I could work full-time as a software engineer, teach two college courses as a first-time teacher, and also take an advanced master’s class in application architecture. I have no idea how I actually survived, but I am really surprised how well I actually did. I was invited back to teach again at ITT Tech, and I passed my master’s class with an “A”.  I have decided to take this next term off from my master’s program so that I do not get burned out.  Also, that way my new employer will pay for more of my education; tuition reimbursement is an awesome benefit. This was my year 2010; how was yours?

    Read the article

  • PowerShell PowerPack Download

    - by BuckWoody
    I read Jeffery Hicks’ article in this month’s Redmond Magazine on a new add-in for Windows PowerShell 2.0. It’s called the PowerShell Pack and it has a some great new features that I plan to put into place on my production systems as soon as I finished learning and testing them. You can download the pack here if you have PowerShell 2.0. I’m having a lot of fun with it, and I’ll blog about what I’m learning here in the near future, but you should check it out. The only issue I have with it right now is that you have to load a module and then use get-help to find out what it does, because I haven’t found a lot of other documentation so far. The most interesting modules for me are the ones that can run a command elevated (in PSUserTools), the task scheduling commands (in TaskScheduler) and the file system checks and tools (in FileSystem). There’s also a way to create simple Graphical User Interface panels (in ). I plan to string all these together to install a management set of tools on my SQL Server Express Instances, giving the user “task buttons” to backup or restore a database, add or delete users and so on. Yes, I’ll be careful, and yes, I’ll make sure the user is allowed to do that. For now, I’m testing the download, but I thought I would share what I’m up to. If you have PowerShell 2.0 and you download the pack, let me know how you use it. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
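    If you want to poke at it the same way, the pattern described above boils down to a couple of lines in a PowerShell 2.0 console - a minimal sketch, assuming the pack's modules are installed under your module path; the module name is one of the ones mentioned above, and since I'm not going to guess at cmdlet names, list them first:
        # Load one of the PowerShell Pack modules and see what it exposes
        Import-Module TaskScheduler
        Get-Command -Module TaskScheduler
        # Then read the help for whichever cmdlet looks interesting, e.g.:
        # Get-Help <cmdlet-name> -Full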

    Read the article

  • Open the SQL Server Error Log with PowerShell

    - by BuckWoody
    Using the Server Management Objects (SMO) library, you don’t even need to have the SQL Server 2008 PowerShell Provider to read the SQL Server Error Logs – in fact, you can use regular old everyday PowerShell. Keep in mind you will need the SMO libraries – which can be installed separately or by installing the Client Tools from the SQL Server install media. You could search for errors, store a result as a variable, or act on the returned values in some other way. Replace the Machine Name with your server and Instance Name with your instance, but leave the quotes, to make this work on your system: [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") $machineName = "UNIVAC" $instanceName = "Production" $sqlServer = new-object ("Microsoft.SqlServer.Management.Smo.Server") "$machineName\$instanceName" $sqlServer.ReadErrorLog() Want to search for something specific, like the word “Error”? Replace the last line with this: $sqlServer.ReadErrorLog() | where {$_.Text -like "Error*"} Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
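    One small extension of the same idea: in the SMO versions I have used, ReadErrorLog also accepts a log number, so you can read an archived log instead of the current one (0 is the current log, 1 the most recent archive, and so on) - treat the exact overload as an assumption to verify against your SMO version:
        # Read archived error log #1 and keep only the error lines
        $sqlServer.ReadErrorLog(1) | where {$_.Text -like "*Error*"} | select LogDate, Text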

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently I was working on an issue with the list view threshold. The problem was that when the user navigated to a view of the document library, it displayed the error message "list view threshold is exceeded", even though the view contained no data. The list view threshold is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, calculating that (empty) result set still requires the database to read more than 5000 items. To fix the issue, you need to create an index for the column that you use in the filter for the view. Let's look at the problem in detail. You can download a solution to replicate this issue here.
    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold for the web application from 5000 to 2000, so that I can show the problem without loading more than 5000 items into the list.
    3. Go to the page that displays the approved view of the Loan application document set. It displays the error message, although no data is returned for this view.
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as I use this field as the filter to create the view.
    7. After the index is created, I can access the approved view; as you can see, it does not return any data.
    Notes: List View Threshold: Specify the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.
    References: SharePoint lists V: Techniques for managing large lists Manage large SharePoint lists for better performance http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx
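    To make the "items read, not items returned" rule concrete, here is a toy Python model of my own (an illustration only, nothing to do with the SharePoint object model): without an index, every row has to be read just to evaluate the filter, so even an empty view trips the threshold; with an index, only the matching rows are touched.
        # Toy model of a 6000-item list and a view filtered on LoanStatus.
        # Illustration of the threshold rule only; this is not SharePoint code.
        items = [{"Id": i, "LoanStatus": "Pending"} for i in range(6000)]  # no "Approved" items at all
        THRESHOLD = 5000

        def view_without_index(rows, status):
            scanned = 0
            result = []
            for row in rows:                    # every row is read to evaluate the filter
                scanned += 1
                if row["LoanStatus"] == status:
                    result.append(row)
            return result, scanned

        # Build the equivalent of an indexed column: status -> rows with that status.
        status_index = {}
        for row in items:
            status_index.setdefault(row["LoanStatus"], []).append(row)

        def view_with_index(index, status):
            matches = index.get(status, [])
            return matches, len(matches)        # only the matching rows are touched

        rows, scanned = view_without_index(items, "Approved")
        print(len(rows), scanned > THRESHOLD)   # 0 True  -> empty view, yet the threshold is exceeded
        rows, scanned = view_with_index(status_index, "Approved")
        print(len(rows), scanned > THRESHOLD)   # 0 False -> same empty view, no violation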

    Read the article

  • Python PyBluez loses Bluetooth connection after a while

    - by Travis G.
    I am using Python to write a simple serial Bluetooth script that sends information about my computer stats periodically. The receiving device is a Sparkfun BlueSmirf Silver. The problem is that, after the script runs for a few minutes, it stops sending packets to the receiver and fails with the error: (11, 'Resource temporarily unavailable') Noticing that this inevitably happens, I added some code to automatically try to reopen the connection. However, then I get: Could not connect: (16, 'Device or resource busy') Am I doing something wrong with the connection? Do I need to occasionally reopen the socket? I'm not sure how to recover from this type of error. I understand that sometimes the port will be busy and a write operation is deferred to avoid blocking other processes, but I wouldn't expect the connection to fail so regularly. Any thoughts? Here is the script:
        import psutil
        import serial
        import string
        import time
        import bluetooth

        sampleTime = 1
        numSamples = 5
        lastTemp = 0
        TEMP_CHAR = 't'
        USAGE_CHAR = 'u'
        SENSOR_NAME = 'TC0D'

        #gauges = serial.Serial()
        #gauges.port = '/dev/rfcomm0'
        #gauges.baudrate = 9600
        #gauges.parity = 'N'
        #gauges.writeTimeout = 0
        #gauges.open()

        filename = '/sys/bus/platform/devices/applesmc.768/temp2_input'

        def parseSensorsOutputLinux(output):
            return int(round(float(output) / 1000))

        def connect():
            while(True):
                try:
                    gaugeSocket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                    gaugeSocket.connect(('00:06:66:42:22:96', 1))
                    break;
                except bluetooth.btcommon.BluetoothError as error:
                    print "Could not connect: ", error, "; Retrying in 5s..."
                    time.sleep(5)
            return gaugeSocket;

        gaugeSocket = connect()
        while(1):
            usage = psutil.cpu_percent(interval=sampleTime)
            sensorFile = open(filename)
            temp = parseSensorsOutputLinux(sensorFile.read())
            try:
                #gauges.write(USAGE_CHAR)
                gaugeSocket.send(USAGE_CHAR)
                #gauges.write(chr(int(usage))) #write the first byte
                gaugeSocket.send(chr(int(usage)))
                #print("Wrote usage: " + str(int(usage)))
                #gauges.write(TEMP_CHAR)
                gaugeSocket.send(TEMP_CHAR)
                #gauges.write(chr(temp))
                gaugeSocket.send(chr(temp))
                #print("Wrote temp: " + str(temp))
            except bluetooth.btcommon.BluetoothError as error:
                print "Caught BluetoothError: ", error
                time.sleep(5)
                gaugeSocket = connect()
                pass
        gaugeSocket.close()
    EDIT: I should add that this code connects fine after I power-cycle the receiver and start the script. However, it fails after the first exception until I restart the receiver. P.S. This is related to my recent question, Why is /dev/rfcomm0 giving PySerial problems?, but that was more about PySerial specifically with rfcomm0. Here I am asking about general rfcomm etiquette.
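    One pattern worth trying, and this is an assumption on my part rather than a confirmed fix, is to close the broken socket before reconnecting so the RFCOMM channel is actually released, and to retry the failed send itself rather than only the connect. A minimal sketch along those lines, reusing the same PyBluez calls as the script above:
        import time
        import bluetooth

        ADDRESS, PORT = '00:06:66:42:22:96', 1   # same BlueSmirf address and channel as above

        def open_socket():
            sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
            sock.connect((ADDRESS, PORT))
            return sock

        def send_with_retry(sock, payload, retries=5, delay=5):
            # Send payload; on failure close the old socket first, then reconnect and retry.
            for attempt in range(retries):
                try:
                    sock.send(payload)
                    return sock
                except bluetooth.btcommon.BluetoothError:
                    try:
                        sock.close()              # release the RFCOMM channel before reconnecting
                    except bluetooth.btcommon.BluetoothError:
                        pass
                    time.sleep(delay)             # give the adapter and receiver a moment to recover
                    try:
                        sock = open_socket()
                    except bluetooth.btcommon.BluetoothError:
                        continue                  # reconnect failed too; wait and try again
            raise RuntimeError("send failed after %d attempts" % retries)

        # Usage inside the main loop: gaugeSocket = send_with_retry(gaugeSocket, USAGE_CHAR)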

    Read the article

  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
    PHP, as most of us know, has weak typing. For those who don't, PHP.net says: PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used. Love it or hate it, PHP re-casts variables on the fly. So, the following code is valid:
        $var = "10";
        $value = 10 + $var;
        var_dump($value); // int(20)
    PHP also allows you to explicitly cast a variable, like so:
        $var = "10";
        $value = 10 + $var;
        $value = (string)$value;
        var_dump($value); // string(2) "20"
    That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of type hinting in function parameters, and I fully understand its usefulness. The problem I have with type casting is explained by the above quote. If PHP can swap types at will, it can do so even after you force-cast a type; and it can do so on the fly when you need a certain type in an operation. That makes the following valid:
        $var = "10";
        $value = (int)$var;
        $value = $value . ' TaDa!';
        var_dump($value); // string(8) "10 TaDa!"
    So what's the point? Can anyone show me a practical application or example of type casting, one that would fail if type casting were not involved? I ask this here instead of SO because I figure practicality is too subjective.
    Edit in response to Chris' comment: Take this theoretical example of a world where user-defined type casting makes sense in PHP: You force-cast variable $foo as int -- (int)$foo. You attempt to store a string value in the variable $foo. PHP throws an exception!! <--- That would make sense. Suddenly the reason for user-defined type casting exists! The fact that PHP will switch things around as needed makes the point of user-defined type casting vague. For example, the following two code samples are equivalent:
        // example 1
        $foo = 0;
        $foo = (string)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

        // example 2
        $foo = 0;
        $foo = (int)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;
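    For contrast, and purely as an aside of my own in a different language, here is roughly how the same sequence plays out in Python, where nothing is coerced silently; there the explicit conversion is exactly what stands between you and an error, which is the role the question argues casts cannot really play in PHP:
        var = "10"

        # PHP would quietly coerce "10" to 10 here; Python refuses outright.
        try:
            value = 10 + var
        except TypeError as exc:
            print("implicit coercion rejected:", exc)

        # The explicit conversion is what makes the operation legal at all.
        value = 10 + int(var)
        print(value)                        # 20

        # Going back to string territory must be explicit as well.
        label = "# of reasons: " + str(value)
        print(label)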

    Read the article

  • Silverlight Cream for May 05, 2010 -- #856

    - by Dave Campbell
    In this Issue: Jeremy Alles(-2-), Kunal Chowdhury, anand iyer, Yochay Kiriaty(-2-, -3-), Max Paulousky, David Kelley, smartyP, Tim Heuer, and Dan Wahlin. Shoutout: Tim Heuer provides links for all the Ways to give feedback on Silverlight From SilverlightCream.com: [WP7] Bug when using NavigationService in Windows Phone 7 Jeremy Alles has blogged about a bug he found using the Navigation service in WP7. He gives the steps to reproduce and a couple possible workarounds. [WP7] Using the camera in the emulator Jeremy Alles is also digging into the camera functionality in the emulator. He has code demonstrating launching a camera task, and a list of other tasks available. Silverlight Tutorials Chapter 3: Introduction to Panels Kunal Chowdhury has Chapter 3 of his Silverlight 4 Tutorial series up and he's talking about Panels this time out. Push Notifications in Windows Phone 7 developer tools CTP April Refresh anand iyer is discussing the Push Notifications, only from a code perspective. Good information and good additional links to follow. Windows Phone Application Life Cycle Yochay Kiriaty talks with Tudor Toma and Jaime Rodriguez about the WP7 application lifecycle on Channel 9. Understanding Microsoft Push Notifications for Windows Phones Yochay Kiriaty has a 2-part post up on WP7 Push Notifications. The first part is explaining what Push Notifications are and why we need them... as a developer and as an end user viewing Toast or Tile notifications. Understanding How Microsoft Push Notification Works – Part 2 In the 2nd part of his Push Notification series, Yochay Kiriaty discusses how the Push Notification works under the covers. To Remember: Deployment of Silverlight Applications With Wcf Ria Services Max Paulousky has a post up for reference on what to look into when you get "Load Operation Failed" in WCF RIA services. Launching a URL from an OOB Silverlight Application David Kelley has a quick post up on launching URLs from an OOB app. If you haven't tried it, you may be surprised as he was at first. Creating a Windows Phone 7 XNA Game in Landscape Orientation smartyP is looking at recreating a landscape WP7 game in XNA and is detailing some of the issues he's been dealing with, and is also sharing a project file. New Silverlight 4 Themes available–get the raw bits Tim Heuer provided 'raw' versions of 3 new themes. Read his post to see exactly what he means by 'raw' ... they're definitely good looking, and are going to get a lot of play. Handling WCF Service Paths in Silverlight 4 – Relative Path Support Dan Wahlin shares his technique for avoiding the pain involved with ServiceReferences.ClientConfig by using Silverlight 4 relative path support. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • New Themes New Benefits (WinForms)

    We believe that working hard on something can be great fun at the end, when everything is done and the seeds have resulted in the sweetest fruits. This is the case with the new Theming Mechanism and the new Visual Style Builder which we introduced as of Q1 2010. I am not going to dive into any details on the new concepts behind all this stuff, but will simply focus on the numbers, both in terms of loading speed and memory usage. As you may already know, the new approach we use to style our controls uses the so-called Style Repository, which stores style settings that can be reused throughout the whole theme. As a result, the size of our themes has been significantly reduced. For instance, the size of all XML files of the old Desert theme sums up to 1.83 MB. The case with the new version of the Desert theme is drastically different: despite the fact that the new theme consists of more XML files compared to the old, its size is only 707 KB! Furthermore, we have performed a simple performance test, since common sense tells us that such a great improvement in terms of memory footprint should be followed by a great improvement in terms of speed. We have estimated that loading and applying the new Desert theme to a form containing all RadControls for WinForms takes roughly 30% less time compared to the same operation with the old version of the Desert theme. The following screenshots briefly demonstrate the scenario which we used to estimate the loading time difference between the old and the new Desert theme: here, the old Desert theme is applied to all controls on the Form, which takes almost 1.3 seconds. Applying the new Desert theme (based on the new Theming Mechanism) takes about 0.78 seconds. On top of all these great improvements, we can add the fact that the new Visual Style Builder significantly reduces the time needed to style a control by entirely changing the approach compared to the old version of this tool. You can be sure that we have already prepared some great new stuff for Q1 2010 SP1 that will simplify things further, so that designing themes with the new VSB will become more fun than ever!

    Read the article

  • apt-get fails to upgrade, install, remove etc

    - by Kieran Peat
    I upgraded from 11.10 to 12.04, had no issues that I noticed. Recently tried to install something via software center, but it was throwing errors. Changed to trying to sudo apt-get install instead but again no luck. I've genuinely tried as much as I know to fix this, but I can't so I figured I'd ask here. I've done sudo apt-get update successfully but sudo apt-get upgrade failed with... You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not installed libssl1.0.0 : Breaks: libssl1.0.0:i386 (!= 1.0.1-4ubuntu5.2) but 1.0.0e-2ubuntu4.6 is installed libssl1.0.0:i386 : Breaks: libssl1.0.0 (!= 1.0.0e-2ubuntu4.6) but 1.0.1-4ubuntu5.2 is installed E: Unmet dependencies. Try using -f. Using sudo apt-get -f install... The following packages were automatically installed and are no longer required: libgtkmm-2.4-1c2a libgtkhtml3.14-19 libglade2-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libqtcore4:i386 libssl1.0.0:i386 The following NEW packages will be installed libqtcore4:i386 The following packages will be upgraded: libssl1.0.0:i386 1 upgraded, 1 newly installed, 0 to remove and 33 not upgraded. 20 not fully installed or removed. Need to get 0 B/3,063 kB of archives. After this operation, 9,044 kB of additional disk space will be used. Do you want to continue [Y/n]? y E: Internal Error, No file name for libssl1.0.0 I've tried sudo apt-get remove libssl1.0.0 and sudo apt-get remove libssl1.0.0:i386 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. 
ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed Depends: libssl1.0.0:i386 but it is not going to be installed libcurl3:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libsasl2-modules:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). I've also tried sudo apt-get dist-upgrade, sudo apt-get autoremove etc without any luck. I also tried to download the .deb and use dpkg -i, but that failed and did not fully understand the method to be honest. Edit This is in response to the comments ref: sudo apt-get install -f doesn't fix broken packages. And now? 
sudo dpkg --configure -a --force-all
dpkg: error processing libssl1.0.0 (--configure): libssl1.0.0:amd64 1.0.1-4ubuntu5.2 cannot be configured because libssl1.0.0:i386 is in a different version (1.0.0e-2ubuntu4.6)
dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2)
dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386')
(the last two messages repeat dozens of times before dpkg gives up)
dpkg: too many errors, stopping
Errors were encountered while processing: libssl1.0.0 libssl1.0.0:i386 ... libssl1.0.0:i386
Processing was halted because there were too many errors.
Ref: Package manager doesn't work anymore (moving /var/lib/dpkg/info/libssl..)
kieran@kieran-EX58-UD3R:~$ sudo mv /var/lib/dpkg/info/libssl1.0.0:i386.postinst /var/lib/dpkg/info/libssl1.0.0:i386.postinst.bad
kieran@kieran-EX58-UD3R:~$ sudo mv /var/lib/dpkg/info/libssl1.0.0:amd64.postinst /var/lib/dpkg/info/libssl1.0.0:amd64.postinst.bad
kieran@kieran-EX58-UD3R:~$ sudo apt-get --reinstall install libssl
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libssl is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'libssl' has no installation candidate kieran@kieran-EX58-UD3R:~$ sudo apt-get --reinstall install libssl1.0.0 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libssl1.0.0 : Breaks: libssl1.0.0:i386 (!= 1.0.1-4ubuntu5.2) but 1.0.0e-2ubuntu4.6 is to be installed libssl1.0.0:i386 : Breaks: libssl1.0.0 (!= 1.0.0e-2ubuntu4.6) but 1.0.1-4ubuntu5.2 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). kieran@kieran-EX58-UD3R:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libgtkmm-2.4-1c2a libgtkhtml3.14-19 libglade2-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libqtcore4:i386 libssl1.0.0:i386 The following NEW packages will be installed libqtcore4:i386 The following packages will be upgraded: libssl1.0.0:i386 1 upgraded, 1 newly installed, 0 to remove and 58 not upgraded. 20 not fully installed or removed. Need to get 3,063 kB of archives. After this operation, 9,044 kB of additional disk space will be used. Do you want to continue [Y/n]? 
y Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main libssl1.0.0 i386 1.0.1-4ubuntu5.2 [1,002 kB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main libqtcore4 i386 4:4.8.1-0ubuntu4.1 [2,061 kB] Fetched 3,063 kB in 4s (731 kB/s) E: Internal Error, No file name for libssl1.0.0 ref: libssl Dependencies removing libssl1.0.0:i386 kieran@kieran-EX58-UD3R:~$ sudo apt-get remove libssl1.0.0:i386 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed Depends: libssl1.0.0:i386 but it is not going to be installed libcurl3:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libsasl2-modules:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Read the article

  • Plan Operator Tuesday round-up

    - by Rob Farley
    Eighteen posts for T-SQL Tuesday #43 this month, discussing Plan Operators. I put them together and made the following clickable plan. It’s 1000px wide, so I hope you have a monitor wide enough. Let me explain this plan for you (people’s names are the links to the articles on their blogs – the same links as in the plan above). It was clearly a SELECT statement. Wayne Sheffield (@dbawayne) wrote about that, so we start with a SELECT physical operator, leveraging the logical operator Wayne Sheffield. The SELECT operator calls the Paul White operator, discussed by Jason Brimhall (@sqlrnnr) in his post. The Paul White operator is quite remarkable, and can consume three streams of data. Let’s look at those streams. The first pulls data from a Table Scan – Boris Hristov (@borishristov)’s post – using parallel threads (Bradley Ball – @sqlballs) that pull the data eagerly through a Table Spool (Oliver Asmus – @oliverasmus). A scalar operation is also performed on it, thanks to Jeffrey Verheul (@devjef)’s Compute Scalar operator. The second stream of data applies Evil (I figured that must mean a procedural TVF, but could’ve been anything), courtesy of Jason Strate (@stratesql). It performs this Evil on the merging of parallel streams (Steve Jones – @way0utwest), which suck data out of a Switch (Paul White – @sql_kiwi). This Switch operator is consuming data from up to four lookups, thanks to Kalen Delaney (@sqlqueen), Rick Krueger (@dataogre), Mickey Stuewe (@sqlmickey) and Kathi Kellenberger (@auntkathi). Unfortunately Kathi’s name is a bit long and has been truncated, just like in real plans. The last stream performs a join of two others via a Nested Loop (Matan Yungman – @matanyungman). One pulls data from a Spool (my post – @rob_farley) populated from a Table Scan (Jon Morisi). The other applies a catchall operator (the catchall is because Tamera Clark (@tameraclark) didn’t specify any particular operator, and a catchall is what gets shown when SSMS doesn’t know what to show. Surprisingly, it’s showing the yellow one, which is about cursors. Hopefully that’s not what Tamera planned, but anyway...) to the output from an Index Seek operator (Sebastian Meine – @sqlity). Lastly, I think everyone put in 110% effort, so that’s what all the operators cost. That didn’t leave anything for me, unfortunately, but that’s okay. Also, because he decided to use the Paul White operator, Jason Brimhall gets 0%, and his 110% was given to Paul’s Switch operator post. I hope you’ve enjoyed this T-SQL Tuesday, and have learned something extra about Plan Operators. Keep your eye out for next month’s one by watching the Twitter Hashtag #tsql2sday, and why not contribute a post to the party? Big thanks to Adam Machanic as usual for starting all this. @rob_farley

    Read the article

  • Replication - between pools in the same system

    - by Steve Tunstall
    OK, I fully understand that it's been a LONG time since I've blogged with any tips or tricks on the ZFSSA, and I'm way behind. Hey, I just wrote TWO BLOGS ON THE SAME DAY!!! Make sure you keep scrolling down to see the next one too, or you may have missed it. To celebrate, for the one or two of you out there who are still reading this, I've got something for you. The first TWO people who make any comment below, with your real name and email so I can contact you, will get some cool Oracle SWAG that I have to give away. Don't get excited, it's not an iPad, but it's pretty good stuff. Only the first two, so if you already see two below, then settle down. Now, let's talk about Replication and Migration. I have talked before about Shadow Migration here: https://blogs.oracle.com/7000tips/entry/shadow_migration Shadow Migration lets one take an NFS or CIFS share in one pool on a system and migrate that data over to another pool in the same system. That's handy, but right now it's only for file systems like NFS and CIFS. It will not work for LUNs; LUN shadow migration is a roadmap item, however. So.... What if you have a ZFSSA cluster with multiple pools, and you have a LUN in one pool but later decide it's best if it was in the other pool? No problem. Replication to the rescue. What's that? Replication is only for replicating data between two different systems? Who told you that? We've been able to replicate to the same system for a few code updates now. The instructions below will also work just fine if you're setting up replication between two different systems. After replication is complete, you can easily break replication, change the new LUN into a primary LUN and then delete the source LUN. Bam.
    Step 1. Set up a target system. In our case, the target system is ourself, but you still have to set it up like it's far away. Go to Configuration-->Services-->Remote Replication. Click the plus sign and set up the target, which is the ZFSSA you're on now.
    Step 2. Now you can go to the LUN you want to replicate. Take note which Pool and Project you're in. In my case, I have a LUN in Pool2 called LUNp2 that I wish to replicate to Pool1.
    Step 3. In my case, I made a Project called "Luns" and it has LUNp2 inside of it. I am going to replicate the Project, which will automatically replicate all of the LUNs and/or Filesystems inside of it. Now, you can also replicate from the Share level instead of the Project. That will only replicate the share, and not all the other shares of a project. If someone tells you that if you replicate a share, it always replicates all the other shares also in that Project, don't listen to them. Note below how I can choose not only the Target (which is myself), but also which Pool to replicate it to. So I choose Pool1.
    Step 4. I did not choose a schedule or pick the "Continuous" button, which means my replication will be manual only. I can now push the Manual Replicate button on my Actions list and you will see it start. You will see both a barber pole animation and an update in the status bar at the top of the screen that a replication event has begun. This also goes into the event log.
    Step 5. The status bar will also log an event when it's done.
    Step 6. If you go back to Configuration-->Services-->Remote Replication, you will see your event.
    Step 7. Done. To see your new replica, go to the other Pool (Pool1 for me), and click the "Replica" area below the words "Filesystems | LUNs". Here, you will see any replicas that have come in from any of your sources. It's a simple matter from here to break the replication, which will change this to a "Local" LUN, and then delete the original LUN back in Pool2.
    Ok, that's all for now, but I promise to give out more tricks sometime in November!!! There's very exciting stuff coming down the pipe for the ZFSSA, both new hardware and new software features that I'm just drooling over. That's all I can say, but contact your local sales SC to get an NDA roadmap talk if you want to hear more. Happy Halloween, Steve

    Read the article

< Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >