Search Results

Search found 7007 results on 281 pages for 'nested lists'.


  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In that process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms. However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to let you specify the important terms yourself.

    INTRODUCTION

    A stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this can be a tedious exercise. Moreover, since every word is considered an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as may the term "Data." The term "Mining" is more distinctive, but could also refer to the mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But if we include them, they may introduce too much noise.

    Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency-inverse document frequency (tf-idf) metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique are often quite valuable.

    As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management," as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining.

    CREATING A WHITE LIST

    We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map.
    All text has been converted to upper case and values in the TERM column are intended to be mapped to the token in the TOKEN column:

        TERM                                TOKEN
        DATA MINING                         DATAMINING
        ORACLE DATA MINING                  ORACLEDATAMINING
        11G                                 ORACLE11G
        JAVA                                JAVA
        CRM                                 CRM
        CUSTOMER RELATIONSHIP MANAGEMENT    CRM
        ...

    Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules:

        CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE);

        CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),
                                         query_string  VARCHAR2(400));

    We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term, and the corresponding query string, which specifies synonyms for the token in the thesaurus.

        DECLARE
          cursor c2 is
            select token, term
            from my_term_token_map;
        BEGIN
          for r_c2 in c2 loop
            CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);
            EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values
                               (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'
            using r_c2.token;
          end loop;
        END;

    We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:

        'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)

    At this point, we create a CTXRULE index on the my_thesaurus_rules table:

        create index my_thesaurus_rules_idx on
               my_thesaurus_rules(query_string)
               indextype is ctxsys.ctxrule;

    In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
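
    The post defers the tf-idf discussion, but the metric itself is simple enough to sketch. Below is a minimal, illustrative Python computation of tf-idf over tokenized documents; it is not Oracle Text's internal implementation (Oracle Text computes its own weighting), just a way to see why ubiquitous tokens end up with weight zero. The sample documents are hypothetical.

        import math
        from collections import Counter

        def tf_idf(docs):
            """Compute tf-idf weights for a list of tokenized documents."""
            n = len(docs)
            # document frequency: number of documents containing each term
            df = Counter(term for doc in docs for term in set(doc))
            weights = []
            for doc in docs:
                tf = Counter(doc)
                weights.append({term: (count / len(doc)) * math.log(n / df[term])
                                for term, count in tf.items()})
            return weights

        docs = [["ORACLEDATAMINING", "JAVA"],
                ["CRM", "JAVA"],
                ["ORACLEDATAMINING", "CRM", "JAVA"]]
        # JAVA appears in every document, so its idf (and hence weight) is zero.
        for w in tf_idf(docs):
            print(w)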

  • Why can't I change the AU_AU locale to en_US?

    - by sachin
        /bin/bash: warning: setlocale: LC_ALL: cannot change locale ( (unset))
        Generating locales...
          en_US.ISO-8859-1... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done
        Generation complete.

        ganesha@ubuntu:~$ sudo update_locale LANG=en_US
        sudo: update_locale: command not found

        ganesha@ubuntu:~$ sudo update-locale LANG=en_US
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
          LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_PAPER = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8",
          LC_MONETARY = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8",
          LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8",
          LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_TIME = "AU_AU.UTF-8",
          LC_NAME = "AU_AU.UTF-8", LANG = "en_US.UTF-8"
        are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").

        ganesha@ubuntu:~$ sudo update-locale LANG=en_US.UTF8
        perl: warning: Setting locale failed. (...)

        ganesha@ubuntu:~$ gedit ~/.profile
        (process:6467): Gtk-WARNING **: Locale not supported by C library.
                Using the fallback 'C' locale.

        ganesha@ubuntu:~$ sudo update-locale LANG=en_US.UTF8
        perl: warning: Setting locale failed. (...)

        ganesha@ubuntu:~$ sudo apt-get install --reinstall locales
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
        Need to get 0 B/3359 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        perl: warning: Setting locale failed. (...)
        (Reading database ... 157848 files and directories currently installed.)
        Preparing to replace locales 2.13+git20120306-3 (using .../locales_2.13+git20120306-3_all.deb) ...
        Unpacking replacement locales ...
        Processing triggers for man-db ...
        Setting up locales (2.13+git20120306-3) ...
        /bin/bash: warning: setlocale: LC_ALL: cannot change locale ( (unset))
        Generating locales...
          en_AG.UTF-8... (...) done
          en_AU.UTF-8... (...) done
          en_BW.UTF-8... (...) done
          en_CA.UTF-8... (...) done
          en_DK.UTF-8... (...) done
          en_GB.UTF-8... (...) done
          en_HK.UTF-8... (...) done
          en_IE.UTF-8... (...) done
          en_IN.UTF-8... (...) done
          en_NG.UTF-8... (...) done
          en_NZ.UTF-8... (...) done
          en_PH.UTF-8... (...) done
          en_SG.UTF-8... (...) done
          en_US.ISO-8859-1... (...) done
          en_US.UTF-8... (...) done
          en_ZA.UTF-8... (...) done
          en_ZM.UTF-8... (...) done
          en_ZW.UTF-8... (...) done
        Generation complete.
        ganesha@ubuntu:~$

    How to correct this?
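
    There is no accepted answer in this digest, but the transcript itself points at the likely root cause: AU_AU.UTF-8 is not a locale that exists (the Australian English locale is en_AU.UTF-8), and even variable names like LC_MEASUREMAUT look like the output of a bad find-and-replace. What follows is a hedged cleanup sketch in Python; the file path, the target locale, and the assumption that the bad values live in /etc/default/locale (they may instead be in ~/.pam_environment) are all mine, not the poster's.

        # Hypothetical cleanup sketch: rewrite the nonexistent AU_AU.UTF-8
        # values (and the mangled variable names, if present) in
        # /etc/default/locale. Path and target locale are assumptions.
        # Back up first and run with sufficient privileges.
        import re
        import shutil

        PATH = "/etc/default/locale"
        shutil.copy(PATH, PATH + ".bak")          # keep a backup copy
        with open(PATH) as f:
            text = f.read()
        text = re.sub(r"\bAU_AU\.UTF-8\b", "en_AU.UTF-8", text)        # bad locale value
        text = text.replace("LC_MEASUREMAUT", "LC_MEASUREMENT")        # mangled names,
        text = text.replace("LC_IDAUTIFICATION", "LC_IDENTIFICATION")  # if present
        with open(PATH, "w") as f:
            f.write(text)
        print("Rewrote locale settings; log out and back in to apply.")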

  • Look Inside WebLogic Server Embedded LDAP with an LDAP Explorer

    - by james.bayer
    Today a question came up on our internal WebLogic Server mailing lists about an issue deleting a group from WebLogic Server. The group had a special character in the name. The WLS console refused to delete the group, reporting a java.net.MalformedURLException and another message saying "Errors must be corrected before proceeding." as shown below. The group aa:bb is the one with the issue. Click to enlarge.

    WebLogic Server includes an embedded LDAP server that can be used for managing users and groups for "reasonably small environments (10,000 or fewer users)". For organizations scaling larger or using more high-end features, I recommend looking at one of Oracle's very popular enterprise directory services products like Oracle Internet Directory or Oracle Directory Server Enterprise Edition. You can configure multiple authenticators in WebLogic Server so that you can use multiple directories at the same time.

    I am not sure WebLogic Server supports special characters in group names for the embedded LDAP server, but in this case both the console and WLST reported the same issue deleting the group with the special character in the name. Here's the WLST output:

        wls:/hotspot_domain/serverConfig/SecurityConfiguration/hotspot_domain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator> cmo.removeGroup('aa:bb')
        Traceback (innermost last):
          File "<console>", line 1, in ?
        weblogic.security.providers.authentication.LDAPAtnDelegateException: [Security:090296]invalid URL ldap:///ou=people,ou=myrealm,dc=hotspot_domain??sub?(&(objectclass=person)(wlsMemberOf=cn=aa:bb,ou=groups,ou=myrealm,dc=hotspot_domain))
            at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:254)
            at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.<init>(LDAPAtnGroupMembersNameList.java:119)
            at weblogic.security.providers.authentication.LDAPAtnDelegate.listGroupMembers(LDAPAtnDelegate.java:1392)
            at weblogic.security.providers.authentication.LDAPAtnDelegate.removeGroup(LDAPAtnDelegate.java:1989)
            at weblogic.security.providers.authentication.DefaultAuthenticatorImpl.removeGroup(DefaultAuthenticatorImpl.java:242)
            at weblogic.security.providers.authentication.DefaultAuthenticatorMBeanImpl.removeGroup(DefaultAuthenticatorMBeanImpl.java:407)
            (...)
        Caused by: java.net.MalformedURLException
            at netscape.ldap.LDAPUrl.readNextConstruct(LDAPUrl.java:651)
            at netscape.ldap.LDAPUrl.parseUrl(LDAPUrl.java:277)
            at netscape.ldap.LDAPUrl.<init>(LDAPUrl.java:114)
            at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:224)
            ... 41 more

    It's fairly clear that in order for this to work, the : character needs to be URL-encoded to %3A or similar. But all is not lost; there is another way. You can connect an LDAP explorer like JXplorer to the WebLogic Server embedded LDAP and browse/edit the entries.

    Follow the instructions here, being sure to change the authentication credentials for the embedded LDAP server to some value you know, as by default they are some unknown value. You'll need to reboot the WebLogic Server Admin Server after making this change. Now configure JXplorer to connect as described in the documentation. I've circled the important inputs. In this example, my domain name is "hotspot_domain", which listens on the localhost listen address and port 7001. The cn=Admin user name is a constant identifier for the Administrator of the embedded LDAP and that does not change, but you need to know what it is so you can enter it into the tool you use.

    Once you connect successfully, you can explore the entries and, in this case, delete the group that is no longer desired.
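
    As an aside, the encoding the stack trace is asking for is easy to see in isolation. The snippet below is purely illustrative Python, not anything WebLogic exposes: it percent-encodes the offending group name the way the embedded LDAP URL would need it.

        # Illustrative only: show what the group name and LDAP URL would look
        # like with the reserved ':' percent-encoded. WebLogic builds this URL
        # internally, so this demonstrates the failure, not a supported fix.
        from urllib.parse import quote

        group_cn = "aa:bb"
        encoded = quote(group_cn, safe="")    # -> 'aa%3Abb'
        url = ("ldap:///ou=people,ou=myrealm,dc=hotspot_domain??sub?"
               "(&(objectclass=person)(wlsMemberOf=cn=" + encoded +
               ",ou=groups,ou=myrealm,dc=hotspot_domain))")
        print(url)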

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage on the unauthorized release of 251,287 diplomatic documents and on previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really an issue for governments, but it is not: it is an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of their stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. As of this blog post, we don't know yet if that is true or how they got the information. But how did the diplomatic cable leak happen?

    For the diplomatic cables, according to press reports, a private in the military with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information... he reportedly didn't "hack" his way through anything to get to the documents, which might have raised some red flags...) is accused of accessing the material and copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain."

    Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc., etc. And then think about that last quote above from what was a very junior-level person in the organization... still feeling comfortable with your ability to control all your information?

    So what can you do to guard against these types of breaches, where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step is to make it physically and logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or mobile media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives), then as a practical matter it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily jobs?

    Oracle Desktop Virtualization products can help. Oracle's comprehensive suite of desktop virtualization and access products allows your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be productive. Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media. Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see Stuxnet for why that is important...).

    In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive. And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre-optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic.

    But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information. And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed, for example.

    Another benefit of Oracle's desktop virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative policies if, for example, an employee changes roles or leaves the company and should no longer have access to the information.

    Oracle's desktop virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use. But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises. For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

  • Can't install wine (or ia32-libs) in Ubuntu 12.10 64 bit

    - by carestad
    As already pointed out here, people seem to have issues with installing wine in the latest version of Ubuntu. I suspect this only happens for 64-bit users. For example, when trying to install wine, wine1.4, wine1.4:i386, wine1.5, wine1.5:i386, ia32-libs or ia32-libs:i386 with apt-get, I get a lot of dependency errors. Doing a sudo apt-get -f install doesn't seem to do the trick, and neither does using aptitude. The errors I get are normally that the packages depend on some :i386 package, but installing those manually doesn't work either because they also have dependency issues (isn't APT supposed to resolve this automatically?!).

    I also downloaded CrossOver today and tried installing the .deb manually, but the dependency issues show up there as well. When running sudo apt-get -f install after trying to install the CrossOver .deb, apt-get wants to purge the following packages:

        ia32-crossover intel-gpu-tools libdrm-nouveau2 libgl1-mesa-dri libva-x11-1
        ubuntu-desktop vlc xorg xserver-xorg-video-ati xserver-xorg-video-intel
        xserver-xorg-video-modesetting xserver-xorg-video-openchrome
        xserver-xorg-video-radeon xserver-xorg-video-vmware

    What I've tried so far (and didn't work):

    - Installing synaptic, reloading my repositories, searching for ia32 and installing ia32-libs.
    - Using Ubuntu Software Center to install Wine and ia32-libs.
    - Using apt-get and aptitude to install all the different varieties of the wine packages, both with and without the :i386 and -amd64 suffixes in package names.
    - Disabling the universe and multiverse repos, running a sudo apt-get update and then re-enabling them again.
    - Booting a newly downloaded Ubuntu 12.10 x64 live USB and trying to install all the different packages there.

    What I haven't tried (yet):

    - Boot a newly downloaded Ubuntu 12.10 x32 image and try to install wine there (I'm just guessing that will work).
    - Reinstall Ubuntu.
    - Throw my computer out a window.

    wine

        alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         wine : Depends: wine1.5 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    wine1.4

        alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine1.4
        (...)
        The following packages have unmet dependencies:
         wine1.4 : Depends: wine1.4-i386 (= 1.4.1-0ubuntu1)
        E: Unable to correct problems, you have held broken packages.

    wine1.4:i386

        alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine1.4:i386
        (...)
        The following packages have unmet dependencies:
         libaudio2:i386 : Depends: libxt6:i386 but it is not going to be installed
         libqtgui4:i386 : Depends: libsm6:i386 but it is not going to be installed
         libunity-webapps0 : Depends: unity-webapps-service but it is not going to be installed
         openssh-client : Depends: adduser (>= 3.10) but it is not going to be installed
                          Depends: passwd
         ssh : Depends: openssh-server
         wine1.4:i386 : Depends: wine1.4-i386:i386 (= 1.4.1-0ubuntu1)
                        Depends: binfmt-support:i386 (>= 1.1.2)
                        Depends: procps:i386
                        Recommends: cups-bsd:i386
                        Recommends: gnome-exe-thumbnailer:i386 but it is not installable or
                                    kde-runtime:i386 but it is not going to be installed
                        Recommends: ttf-droid:i386 but it is not installable
                        Recommends: ttf-liberation:i386 but it is not installable
                        Recommends: ttf-mscorefonts-installer:i386
                        Recommends: ttf-umefont:i386 but it is not installable
                        Recommends: ttf-unfonts-core:i386 but it is not installable
                        Recommends: ttf-wqy-microhei:i386 but it is not installable
                        Recommends: winbind:i386
                        Recommends: winetricks:i386 but it is not going to be installed
                        Recommends: xdg-utils:i386 but it is not installable
        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

    wine1.5

        alexander@cosmo:~$ sudo apt-get install wine1.5
        (...)
        The following packages have unmet dependencies:
         wine1.5 : Depends: wine1.5-i386 (= 1.5.16-0ubuntu1)
        E: Unable to correct problems, you have held broken packages.

    wine1.5:i386

        alexander@cosmo:~$ sudo apt-get install wine1.5:i386
        (...)
        The following packages have unmet dependencies:
         libaudio2:i386 : Depends: libxt6:i386 but it is not going to be installed
         libqtgui4:i386 : Depends: libsm6:i386 but it is not going to be installed
         libunity-webapps0 : Depends: unity-webapps-service but it is not going to be installed
         openssh-client : Depends: adduser (>= 3.10) but it is not going to be installed
                          Depends: passwd
         ssh : Depends: openssh-server
         wine1.5:i386 : Depends: wine1.5-i386:i386 (= 1.5.16-0ubuntu1) but it is not going to be installed
                        Depends: binfmt-support:i386 (>= 1.1.2)
                        Depends: procps:i386
                        Recommends: cups-bsd:i386 (...)
        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

    ia32-libs

        alexander@cosmo:~$ sudo apt-get install ia32-libs
        (...)
        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-multiarch
        E: Unable to correct problems, you have held broken packages.

    ia32-libs:i386

        alexander@cosmo:~$ sudo apt-get install ia32-libs:i386
        (...)
        Package ia32-libs:i386 is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        However the following packages replace it:
          lib32z1 lib32ncurses5 lib32bz2-1.0 lib32asound2
        E: Package 'ia32-libs:i386' has no installation candidate

  • ca-certificates-java fails when trying to install openjdk-6-jre

    - by Jonas
    I use a VPS with Ubuntu Server 10.10 x64. I want to use Java, so I run the command sudo apt-get install openjdk-6-jre, but it fails because the installation encountered errors while processing ca-certificates-java. I have tried to install the failed package with:

        sudo apt-get install ca-certificates-java

    How can I solve this? I have run sudo apt-get update and sudo apt-get upgrade but I get the same errors after that. I have also installed Ubuntu Server x64 on a VirtualBox, but the two Ubuntu Server 10.10 installations have different kernel versions (2.6.35 on VirtualBox and 2.6.18 on my VPS). And on VirtualBox I can install Jetty without any problems. The VPS is a fresh install of Ubuntu Server 10.10 x64; the first command I ran was sudo apt-get install openjdk-6-jre. When I run sudo apt-get install ca-certificates-java I get this message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        ca-certificates-java is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Do you want to continue [Y/n]?

    Here I press Y, then I get this message (the same "error adding" line is reported for every certificate in the store, well over a hundred in all):

        Setting up ca-certificates-java (20100412) ...
        creating /etc/ssl/certs/java/cacerts...
        error adding brasil.gov.br/brasil.gov.br.crt
        error adding cacert.org/cacert.org.crt
        error adding debconf.org/ca.crt
        error adding gouv.fr/cert_igca_dsa.crt
        error adding gouv.fr/cert_igca_rsa.crt
        error adding mozilla/ABAecom_=sub.__Am._Bankers_Assn.=_Root_CA.crt
        (...)
        error adding spi-inc.org/spi-ca-2003.crt
        error adding spi-inc.org/spi-cacert-2008.crt
        error adding telesec.de/deutsche-telekom-root-ca-2.crt
        failed (VM used: java-6-openjdk).
        dpkg: error processing ca-certificates-java (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         ca-certificates-java
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Update: I also get a problem when running java -version:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    My VPS had 128MB of memory. I changed to 256MB but got the same problem. Then I changed to 512MB and got the same problem. I found a related post on a forum: Sub-process /usr/bin/dpkg returned an error code (1). And I tried:

        sudo apt-get clean
        sudo apt-get --reinstall install openjdk-6-jre
        sudo dpkg --configure -a

    But I got the same problem, even when I'm using 512MB of memory. Any suggestions?

  • Good ol' fashioned debugging

    - by Tim Dexter
    I have been helping out one of our new customers over the last day or two, and I have even managed to get to the bottom of their problem, FTW! They use BIEE and BIP and wanted to mount a BIP report in a dashboard page. So far so good, BIP does that! Just follow the instructions in the BIEE user guide. The wrinkle is that they want to enter some fixed instruction strings into the dashboard prompts to help the user. These are added as fixed values to the prompt as the default values, so they appear first. Once the user makes a selection, the default strings disappear. It's a fair requirement, but the BIP report chokes.

    Now, the BIP report had been set up with the Autorun checkbox unchecked. I expected the BIP report to wait for the Go button to be hit, but it was trying to run immediately and failing. That was the first issue: you cannot stop the BIP report from trying to run in a dashboard. Even if Autorun is turned off, it seems that the dashboard still makes the request to BIP to run the report. Rather than BIP refusing because it's waiting for input, it goes ahead anyway; I guess the mechanism does not check the autorun flag when the request is coming from the dashboard. It appears that between BIEE and BIP, they collectively ignore the autorun flag. A bug? Might be; at least an enhancement request.

    With that in mind, how could we get BIP to at least not fail? This fact was stumping me on the parameter error: if the autorun flag was being respected, then why was BIP complaining about the parameter values? It should not even be doing anything until the Go button is clicked. Once I knew that the autorun flag was being ignored, it was a simple case of putting BIP into debug mode. I use the OC4J server on my laptop, so debug messages are routed through the DOS box used to start the OC4J container. When I changed a value on the dashboard prompt, I spotted some debug text rushing by that subsequently disappeared from the log once the operation was complete. Another bug? I needed to catch that text as it went by, using the print screen function with some software to grab multiple screens as the log appeared and then disappeared.

    The upshot is that when you change the dashboard prompt value, BIP validates the value against its own LOVs; if it's not in the list, it throws the error. Because 'Fill this first' and 'Fill this second', i.e. the fixed strings from the dashboard prompts, are not in the LOV lists, and because the report is auto-running as soon as the dashboard page is brought up, the report complains about invalid parameters. To get around this, I needed to get the strings into the LOVs. Easily done with a UNION clause:

        select 'Fill this first' from SH.Products Products
        UNION
        select Products."Prod Category" as "Prod Category" from SH.Products Products

    Now when BIP wants to validate the prompt value, the LOV query fires and finds the fixed string -> no error. No data, but definitely no errors :0)

    If users do run with the fixed values, you can capture that in the template. If there is no data in the report, either the fixed values were used or the parameters selected resulted in no rows. You can capture this in the template and display something like: 'Either your parameter values resulted in no data or you have not changed the default values.'

    That's the upside. The downside is that if your users run the report in the BIP UI, they're going to see the fixed strings. You could alleviate that by having BIP display the fixed strings at the top of its parameter drop boxes (just set them as the default value for the parameter), but they will not disappear like they do in the dashboard prompts. If the expected autorun behaviour worked, i.e. wait for the Go button, then we would not have to work around it, but for now it's a pretty good solution.

    It was an enjoyable hour or so for me; it took me back to my developer daze, when we used to race each other for the most number of bug fixes. I used to run a distant 2nd behind 'Bugmeister Chen Hu' but led the chasing pack by a reasonable distance.

  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of blog posts I am writing about NuoDB. NuoDB is a very innovative and easy-to-use product. I can clearly see how one can scale out NuoDB with so much ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link).

    Note: You can download NuoDB from here.

    In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial.

    Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database. If you have not created the sample database, click on Create Database and create it successfully. Now go to the NuoDB Explorer by clicking on the main tab, and it will ask you for your domain username and password. Enter domain as the username and bird as the password. Alternatively, you can enter quickstart as both the username and the password.

    Once you enter the password you will be able to see the databases. In our example we have installed the sample database, hence you will see the Test database in our Database Hierarchy screen. When you click on the database it will ask for the database login. Note that the database login is different from the domain login, and you will have to enter your database login over here. In our case the database username is dba and the password is goalie.

    Once you enter a valid username and password it will display your database. Further expand your database and you will notice various objects in it. Once you explore the various objects, select any table and click on Open. When you click on Execute, it will display the SQL script to select the data from the table. The autogenerated script displays the entire result set from the database.

    The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements it will list all the available SQL statements right away in the Query Editor. You can see the popup window in the following image.

    Here is the cool thing for geeks: you can even click on Query Plan and it will display the text-based query plan as well. In the case of a SELECT, the query plan will be much simpler; however, when we write complex queries it will be very interesting. We can use the query plan tab for performance tuning of the database.

    Here is another feature: when we click on List Tables in NuoDB Explorer, it lists all the available tables in the query editor. This is very helpful when we are writing a long, complex query. Here is a relatively complex example I have built using INNER JOIN syntax. Right below, I have displayed the query plan. The query plan displays all the little details related to the query.

    Well, we just wrote a multi-table query and executed it against the NuoDB database (a scripted take on the same idea appears after this post). You can use the NuoDB Admin section and do various analyses of the query and its performance. NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees. It allows you to add Transaction Engine processes to a running system to improve the performance of your system. You can also add a second Storage Engine to your running system for redundancy purposes. Conversely, you can shut down processes when you don't need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments.

    If you have read my blog posts and have not tried out NuoDB, I strongly suggest that you download it today and catch up with the learnings with me. Trust me, though the product is very powerful, it is extremely easy to learn and use.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
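
    A note on the scripted sketch promised above: the Explorer is just one client, and the same join can be issued from a script through a DB-API-style Python driver for NuoDB. The module name nuodb_driver, the connection arguments, and the table and column names below are all illustrative assumptions rather than NuoDB's documented API; only the dba/goalie credentials come from the walkthrough.

        # Hedged sketch: run a multi-table query against NuoDB from Python,
        # assuming a DB-API 2.0 compatible driver. Module, table, and column
        # names are hypothetical placeholders.
        import nuodb_driver  # hypothetical DB-API module for NuoDB

        conn = nuodb_driver.connect(database="test", host="localhost",
                                    user="dba", password="goalie")
        cur = conn.cursor()
        cur.execute("""
            SELECT t.name, p.position
            FROM teams t
            INNER JOIN players p ON p.team_id = t.id
        """)
        for row in cur.fetchall():
            print(row)
        conn.close()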

  • ASP.NET MVC: Converting business objects to select list items

    - by DigiMortal
    Some of our business classes are used to fill dropdown boxes or select lists. And often you have some base class for all your business classes. In this posting I will show you how to use a base business class to write an extension method that converts a collection of business objects to ASP.NET MVC select list items without writing a lot of code.

    BusinessBase, BaseEntity and other base classes

    I prefer to have some base class for all my business classes so I can easily use them regardless of their type in contexts I need. NB! Some guys say that it is a good idea to have a base class for all your business classes, and they also suggest you have mappings done the same way in the database. Other guys say that it is good to have a base class but you don't have to have one master table in the database that contains the identities of all your business objects. It is up to you how and what you prefer to do, but whatever you do - think and analyze first, please. :)

    To keep things maximally simple I will use a very primitive base class in this example. This class has only an Id property and that's it.

        public class BaseEntity
        {
            public virtual long Id { get; set; }
        }

    Now we have Id in the base class, and we have one more question to solve - how to better visualize our business objects? To users, the ID is not enough; they want something more informative. We can define some abstract property that all classes must implement. But there is also another option we can use - overriding the ToString() method in our business classes.

        public class Product : BaseEntity
        {
            public virtual string SKU { get; set; }
            public virtual string Name { get; set; }

            public override string ToString()
            {
                if (string.IsNullOrEmpty(Name))
                    return base.ToString();

                return Name;
            }
        }

    Although you can add more functionality and properties to your base class, we are at the point where we have what we needed: an identity and a human-readable presentation of business objects.

    Writing the list items converter

    Now we can write the method that creates list items for us.

        public static class BaseEntityExtensions
        {
            public static IEnumerable<SelectListItem> ToSelectListItems<T>
                (this IList<T> baseEntities) where T : BaseEntity
            {
                return ToSelectListItems((IEnumerator<BaseEntity>)
                           baseEntities.GetEnumerator());
            }

            public static IEnumerable<SelectListItem> ToSelectListItems
                (this IEnumerator<BaseEntity> baseEntities)
            {
                var items = new HashSet<SelectListItem>();

                while (baseEntities.MoveNext())
                {
                    var item = new SelectListItem();
                    var entity = baseEntities.Current;

                    item.Value = entity.Id.ToString();
                    item.Text = entity.ToString();

                    items.Add(item);
                }

                return items;
            }
        }

    You can see here two overloads of the same method. One works with IList<T> and the other with IEnumerator<BaseEntity>. Although mostly my repositories return IList<T> when querying data, there are always situations where I can use more abstract types and interfaces.

    Using extension methods in code

    In your code you can use the ToSelectListItems() extension methods as shown in the following code fragment.

        ...
        var model = new MyFormModel();
        model.Statuses = _myRepository.ListStatuses().ToSelectListItems();
        ...

    You can call this method on all your business classes that extend your base entity. Wanna have some fun with this code? Write an overload for the extension method that accepts a selected item ID.

  • Trying to keep up with Technology and Blogging

    - by Dave Campbell
    A little bit of everything... The heading above got changed a bunch during writing, and I finally settled on that because this has become a 'stream of consciousness' post... or maybe a stream of UNconsciousness :)

    If you've noticed, my blogging has been a tad slow this fall. There's been a lot going on personally. But then again, I haven't skipped anybody either. Rather than go through ALL the blogs I have aggregated, and take a week to get to the bottom, at some point in the last year I had moved the lists around so I now have "SilverlightMVPs", "Very Prolific", "WP7", and "Top Checks". This is a total of about 250 of the more prolific bloggers. Those 250 bloggers have kept me very busy up through about //BUILD. Sometimes it would take all week to go through just that list, putting out 13 posts per blog per day... but not anymore. This weekend I made it all the way through the BIG list... close to 700 blogs, and if you read my blog, you know I had one medium day (Saturday), and yesterday was very short.

    Why is this? To be honest, I don't know... is everybody busy re-tooling, or churning waiting for direction? I have a short list of WinRT/Metro/W8 folks... maybe I need to be pointed to more of them... but my old favorites are not pumping out posts as they have in the past. I said before that I am attracted to Metro, and I've already got my first Metro app post out there, and were it not for working on the new site, I'd have had another out last weekend... so definitely look for more from me in that area.

    New site? Did I say 'new site'? Oops... didn't mean to do that, but now that the cat is out of the bag, I may as well continue... While at //BUILD, I discussed a re-tooling of SilverlightCream with lots of folks... probably more than wanted to hear about it, to be honest! It's needed a facelift, there's stuff on there that never worked right, and there's a lot of manual effort that goes into a blog post. In an effort to alleviate all of the above, Michael Washington and I have been working on the next iteration of SilverlightCream. Not wanting to lose that branding or mess with any saved links, I decided to change from a somewhat funky name to something more professional. I also decided to put my blog on the site, and tie my main announcement twitter feed to the site as well. The way things sit today, there are 3 different names in those locations, and it's gotta be confusing for folks just stumbling in.

    We're going to do a series of posts talking about the site and the new backend processing (hint: Michael Washington is responsible for it, so you can take a guess at the technology), but for now we'd like some eyes on the front end of the site, and some submittals using it to see if it falls over somewhere that we haven't tried. So... I'm going to give it up... the new site is Windows Dev News. The Twitter feed is @WindowsDevNews, and the blog will be on the site as well at Windows Dev News Blog. I've got the RSS feed on Feedburner too, so I think all the nuts and bolts are good to go. The submittal and search pages work, as does the blog page. You'll notice we used the MasterPage from SilverlightCream to get started. That will probably change, but it's just the visual... the content is the important part. Other missing things are the tracking and 'Skim' page that we will eventually have up and running. There are some formatting issues with the blog posts, but if you hang in there with me, those will be taken care of.

    If you're a blogger, please submit through the site and let me know if you find any problems. If you're a reader, please add this feed and site. I'll be duplicating the effort for a while but at some point will stop that foolishness. We won't lose the data from SilverlightCream though, so keep using that as a search resource... I have hopes to pull that database over to WindowsDevNews, or link to it in some manner... that part isn't set in jello yet, but it will not be lost.

    So there it is... let me know what you think, send me your WinRT/Metro/W8 postings along with your Silverlight and WP7 posts... it's not that different, it's just more. Stay in the 'Light

  • Kernel, dpkg, sudo and apt-get corrupted

    - by TECH4JESUS
    Here are some errors that I am getting:

    1) A proper configuration for Firestarter was not found. If you are running Firestarter from the directory you built it in, run make install-data-local to install a configuration, or simply make install to install the whole program. Firestarter will now close.

    root@p:/# firestarter
    ** (firestarter:5890): WARNING **: The connection is closed
    (firestarter:5890): GnomeUI-WARNING **: While connecting to session manager: None of the authentication protocols specified are supported.
    (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
    [the same GConf warning is repeated several more times]
    ^C

    2) Also, I cannot apt-get install sudo:

    root@p:/# apt-get install sudo
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    sudo is already the newest version.
    The following packages were automatically installed and are no longer required: gir1.2-rb-3.0 gir1.2-gstreamer-0.10 libntfs10 python-mako libdmapsharing-3.0-2 rhythmbox-data libx264-116 rhythmbox libiso9660-7 librhythmbox-core5 libvpx0 libmatroska4 gir1.2-gst-plugins-base-0.10 rhythmbox-mozilla rhythmbox-plugin-zeitgeist libattica0 libgpac0.4.5 python-markupsafe libmusicbrainz4c2a rhythmbox-plugin-cdrecorder rhythmbox-plugins libaudiofile0
    Use 'apt-get autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
    9 not fully installed or removed.
    Need to get 0 B/76.3 MB of archives.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    /bin/sh: 1: /usr/sbin/dpkg-preconfigure: not found
    (Reading database ... 495741 files and directories currently installed.)
    Preparing to replace linux-image-3.2.0-24-generic 3.2.0-24.39 (using .../linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb) ...
    dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.prerm): No such file or directory
    dpkg: warning: subprocess old pre-removal script returned error exit status 2
    dpkg - trying script from the new package instead ...
    dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory
    dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb (--unpack): subprocess new pre-removal script returned error exit status 2
    dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst): No such file or directory
    dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2
    [the same sequence of errors repeats for linux-image-3.2.0-25-generic 3.2.0-25.40]
    Errors were encountered while processing:
    /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb
    /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
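    The missing /usr/sbin/dpkg-preconfigure is normally shipped by the debconf package (verify with dpkg -S /usr/sbin/dpkg-preconfigure), and the absent maintainer scripts under /var/lib/dpkg/info suggest damaged package metadata. A minimal recovery sketch, assuming debconf is the package at fault and that the network and package archives are intact; this is a starting point, not a guaranteed fix, and if /var/lib/dpkg itself is damaged a deeper repair may be needed:

    # Restore the helper apt invokes before unpacking packages
    # (assumes dpkg -S showed it belongs to debconf)
    apt-get install --reinstall debconf
    # Finish configuring anything dpkg left half-installed
    dpkg --configure -a
    # Let apt repair the remaining dependency problems
    apt-get -f install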

    Read the article

  • How to find domain registrar and DNS hosting with good DNSSEC support?

    - by rsp
    Simplified problem

    I want to buy a domain and make a website that is fully secured with DNSSEC.

    Background

    I've been hearing about the insecurity of DNS for years. I've watched all of the talks by Dan Kaminsky and others, from DNS exploits to The future of DNS Security Panel. I knew that using DNS without security is a disaster waiting to happen. I followed the development of the DNSSEC standard. I celebrated the key signing ceremony. Everything was on the right track to finally have a secure DNS system in place. And now, more than two years later, I wanted to just do what everyone said I should do: use DNSSEC for a new domain. So I need a domain registrar and a DNS hosting service that supports DNSSEC. Surprisingly, it is not that easy to even find out who does support DNSSEC. It was actually much easier to find info on DNSSEC two years ago, when everyone was going to support DNSSEC Real Soon Now; years have passed and I hardly see any progress. I just hope that I was looking in the wrong places and someone here will clear up all of the doubts. I hope that other people who want to have a secure website will also find this question useful.

    What is needed

    A registrar and DNS servers with full DNSSEC support for .com domains.

    What is not needed

    IPv6 support, web hosting, anything more.

    What I found out so far

    Go Daddy offers a Premium DNS service for an additional $36 per year that lets you "Secure up to 5 domains with DNSSEC".
    easyDNS has DNSSEC available in beta across all service levels (you need to enable the "beta" flag in configuration), but it doesn't seem to be production ready, and judging from the lack of updates it isn't a feature of highest priority (the last update on the easyDNS blog is from March 2011).
    Name.com - according to The Register (US domain registrar does IPv6, DNSSEC) it has had DNSSEC support since 2010, but right now (October 2012) I couldn't find anything related to DNSSEC on their website.
    Dynadot, which is very often recommended, doesn't support DNSSEC.
    Namecheap, which is also often recommended, doesn't support DNSSEC. A support answer from 2011 suggested that it was being added, but in 2012 still no ETA is given to customers.
    DynDNS was supposed to support DNSSEC; I found a link explaining DNSSEC support, but it gives a 404 Not Found page and offers a search box - when searching for DNSSEC I get "No results were found for your query."
    GKG was recommended online for DNSSEC support, but it's hard to find any information on the level of DNSSEC support - there is a brief explanation of what DNSSEC is and how to sign Delegation Signer records in their FAQ, but no information about the level of actual support can be found.
    Ask Slashdot: Which Registrars Support DNSSEC? from July 2011 - answers list Go Daddy, DynDNS, GKG and Name.com as registrars that support DNSSEC, but: see above.

    Related questions

    How to find web hosting that meets my requirements? What is needed to add DNSSEC to my site? DNS hosting better managed by Domain provider or Hosting provider? Registrar with good security, DNS hosting, and DNSSEC and IPv6 resolvers?

    In no. 1 no one ever mentions DNS at all. In no. 2 answers only mention the .se TLD; there are very few answers and they seem very outdated. In no. 3 one answer says "On projects that demand higher security, I might look for a web host that supports DNSSEC" but no more information is provided. The only relevant answers are in no. 4, where easyDNS is recommended by someone who has never used them personally. Meanwhile, as of October 2012, the support of DNSSEC is described as "in beta" on the easyDNS feature list. Another answer recommends SiteGround, but searching their site for DNSSEC returns no results. Other answers recommend web hosting providers that don't meet the requirement of DNSSEC support. Also, the question mentioned above lists 9 very specific requirements other than DNSSEC alone (e.g. HTTP-only login cookies, two-factor authentication, no DNS record limits, DNS statistics of queries/day, audit trails, etc.) which might have excluded many possible recommendations if one is only interested in DNSSEC support.

    Conclusions

    I thought that by the end of 2012 the support of DNSSEC among domain registrars and DNS providers would be nearly universal. I am shocked that the support seems virtually nonexistent. Is this a result of some serious problems with DNSSEC adoption? Or is it just not a hot topic that no one bothers with anymore? According to the DNSSEC Scoreboard, roughly 0.1% of .com domains support DNSSEC. Could that be caused by the lack of DNSSEC support among registrars and DNS providers, is the information too hard to find, or does no one care? There is not even a "dnssec" tag here.

    Questions

    The information is surprisingly hard to find. That is why I am asking for first-hand experience and personal recommendations. Has anyone here actually set up a website with DNSSEC, from the domain registration to the configuration of DNS servers? Can anyone recommend any of the registrars mentioned above? Can anyone recommend any registrar not mentioned above?
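    Once a registrar and host are in place, one sanity check is worth noting here: a signed, delegated zone can be verified from the command line with dig. A minimal sketch; the domain is a placeholder, and @your.validating.resolver stands in for any resolver known to perform DNSSEC validation:

    # Ask for DNSSEC records along with the answer; a signed zone
    # returns RRSIG records next to its ordinary data.
    dig +dnssec example.com SOA

    # When queried through a validating resolver, the 'ad'
    # (authenticated data) flag in the response header indicates
    # the chain of trust from the root checked out.
    dig +dnssec example.com SOA @your.validating.resolver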

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
    Roger’s been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au. Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I’m getting ahead of myself... let me explain the concept. A Pivot collection is an XML file (with .cxml extension) which lists Items, each linking to an image that’s stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that’s available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us “see patterns which are otherwise hidden”. This browsing mechanism is very suited to a number of different methods, because it’s just that – browsing. It’s not searching, it’s more akin to window-shopping than doing an internet search. When we decided to put something together for the conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer. It’s interesting to see a pattern in this data: The Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn’t obvious when you look at an ordinary search tool. You get a much better feel for the data when moving around it like this. The more observant amongst you will have noticed some difference between the collection that Michael is demonstrating in the picture above and the screenshots I’ve shown. That’s because it’s been extended some more. At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I’d met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of the ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for the MSDN Magazine. I’m not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter. So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more. 
    He’s used various events and so on to be able to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the ‘histogram’ view). I see PivotViewer as being a significant step in data visualisation – so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time when there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn’t always gel with the information you’re trying to extract. Roger’s even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how ‘zoomed in’ the images are. Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles “on the fly”, based on images stored in a database, and Roger has produced a collection which uses images from flickr, that lets you move from one search term to another. Pulling the images down from flickr.com isn’t particularly ideal from a performance aspect, and flickr doesn’t store images in a small-enough format to really lend itself to this use, but you might agree that it’s an interesting concept which compares nicely to using Maps. I’m looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There’s a thorough guide at getpivot.com, which is how we got into it. For some examples of what we’ve done, have a look at http://pivot.lobsterpot.com.au. I’d like to see PivotViewer really catch on as a data visualisation tool.
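    For a feel of what a collection looks like on disk, here is a rough sketch of the shape of a .cxml file. The element and attribute names below are recalled from the Pivot collection schema and should be checked against the getpivot.com documentation rather than taken as authoritative; the facet values are invented:

    <Collection Name="TechEd Sessions" SchemaVersion="1.0"
                xmlns="http://schemas.microsoft.com/collection/metadata/2009">
      <FacetCategories>
        <FacetCategory Name="Track" Type="String" />
        <FacetCategory Name="Level" Type="Number" />
      </FacetCategories>
      <Items ImgBase="sessions.dzc">
        <!-- Img="#0" points at tile 0 of the Deep Zoom collection -->
        <Item Id="0" Img="#0" Name="Query Tuning Deep Dive">
          <Facets>
            <Facet Name="Track"><String Value="Database" /></Facet>
            <Facet Name="Level"><Number Value="300" /></Facet>
          </Facets>
        </Item>
      </Items>
    </Collection>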

    Read the article

  • Dynamically creating a Generic Type at Runtime

    - by Rick Strahl
    I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know, although I know I've run into this issue a few times and worked around it in other ways. Today there was no working around it, and a few folks on Twitter pointed me in the right direction. The question I ran into is: How do I create a type instance of a generic type when I have dynamically acquired the type at runtime? Yup, it's not something that you do every day, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought triggered by a comment today, I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete.

    Generic Types at Runtime

    One issue that came up in the process was how to figure out which types the Dictionary<K,V> generic parameters take. Reflection actually makes it fairly easy to figure out generic types at runtime with code like this:

    if (arrayType.GetInterface("IDictionary") != null)
    {
        if (arrayType.IsGenericType)
        {
            var keyType = arrayType.GetGenericArguments()[0];
            var valueType = arrayType.GetGenericArguments()[1];
            …
        }
    }

    The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>. So I know what the parent container class type is. Based on the container type it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (ie. string, CustomerEntity). That's the easy part.

    Creating a Generic Type and Providing Generic Parameters at Runtime

    The next problem is: how do I get a concrete type instance for the generic type? I know the type name and I have a type instance, but it's generic, so how do I get a type reference to keyvalue<K,V> that is specific to the keyType and valueType above? Here are a couple of things that come to mind but that don't work (and yes, I tried them unsuccessfully first):

    Type elementType = typeof(keyvalue<keyType, valueType>);
    Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>);

    The problem is that this explicit syntax expects a type literal, not some dynamic runtime value, so both of the above won't even compile. It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of:

    Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType);

    The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works however: it produces the open generic type, and MakeGenericType then closes it over the runtime type arguments. You can see the difference between the full generic type and the non-typed (?) generic type in the debugger: the nonGenericType doesn't show any type specialization, while the elementType type shows the string, CustomerEntity (truncated above) in the type name. Once the full type reference exists (elementType) it's then easy to create an instance. 
    In my case the parser parses through the JSON and when it completes parsing the value/object it creates a new keyvalue<K,V> instance. Now that I know the element type, that's pretty trivial with:

    // Objects start out null until we find the opening tag
    resultObject = Activator.CreateInstance(elementType);

    Here the result object is picked up by the JSON array parser, which creates an instance of the child object (keyvalue<K,V>) and then parses and assigns values from the JSON document using the type's key/value property signature. Internally the parser then takes each individually parsed item and adds it to a List<keyvalue<K,V>> of items.

    Parsing through a Generic Type when you only have Runtime Type Information

    When parsing of the JSON array is done, the List needs to be turned into a de facto Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. The problem is again that this needs to happen at runtime, which would mean using several Convert.ChangeType() calls in the code to dynamically cast at runtime. Yuk. In the end I decided the easier and probably only slightly slower way to do this is to use the dynamic type to collect the items and assign them, to avoid all the dynamic casting madness:

    else if (IsIDictionary)
    {
        IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary;
        foreach (dynamic item in items)
        {
            dict.Add(item.key, item.value);
        }
        return dict;
    }

    This code creates an instance of the generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would be required in the three highlighted areas (not to mention that this nested method doesn't have access to the dictionary item generic types here).

    Static <- -> Dynamic

    Dynamic casting in a static language like C# is a bitch to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works, but it's pretty nasty code. If it weren't for dynamic, that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls to litter the code. Fortunately this type of type convulsion is rather rare and reserved for system level code. It's not every day that you create a string to object parser after all :-)

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp.
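    To see the whole pattern end to end outside the parser, here is a minimal, self-contained sketch. It substitutes the framework's Dictionary<,> for the article's custom keyvalue<,> class, but the mechanics -- typeof on an open generic, MakeGenericType, Activator.CreateInstance -- are the same:

    using System;
    using System.Collections;
    using System.Collections.Generic;

    class Program
    {
        static void Main()
        {
            // Pretend these were discovered via GetGenericArguments() at runtime.
            Type keyType = typeof(string);
            Type valueType = typeof(int);

            // Close the open generic type Dictionary<,> over the runtime types.
            Type dictType = typeof(Dictionary<,>).MakeGenericType(keyType, valueType);

            // Instantiate it and use it through the non-generic IDictionary interface.
            IDictionary dict = (IDictionary)Activator.CreateInstance(dictType);
            dict.Add("answer", 42);

            Console.WriteLine(dict.Count);        // 1
            Console.WriteLine(dict["answer"]);    // 42
            Console.WriteLine(dictType.FullName); // shows the closed generic type name
        }
    }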

    Read the article

  • Talking JavaOne with Rock Star Martijn Verburg

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers at each JavaOne Conference. They are awarded by their peers, who, through conference surveys, recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. Martijn Verburg has, in recent years, established himself as an important mover and shaker in the Java community. His “Diabolical Developer” session at the JavaOne 2011 Conference got people’s attention by identifying some of the worst practices Java developers are prone to engage in. Among other things, he is co-leader and organizer of the thriving London Java User Group (JUG), which has more than 2,500 members, co-represents the London JUG on the Executive Committee of the Java Community Process, and leads the global effort for the Java User Group “Adopt a JSR” and “Adopt OpenJDK” programs. Career highlights include overhauling technology stacks and SDLC practices at Mizuho International, mentoring Oracle on technical community management, and running offshore development teams for AIG. He is currently CTO at jClarity, a start-up focusing on automating optimization for Java/JVM related technologies, and Product Advisor at ZeroTurnaround. He co-authored, with Ben Evans, "The Well-Grounded Java Developer", published by Manning, and, as a leading authority on technical team optimization, he is in high demand at major software conferences.

    Verburg is participating in five sessions, a busy man indeed. Here they are:

    CON6152 - Modern Software Development Antipatterns (with Ben Evans)
    UGF10434 - JCP and OpenJDK: Using the JUGs’ “Adopt” Programs in Your Group (with Csaba Toth)
    BOF4047 - OpenJDK Building and Testing: Case Study—Java User Group OpenJDK Bugathon (with Ben Evans and Cecilia Borg)
    BOF6283 - 101 Ways to Improve Java: Why Developer Participation Matters (with Bruno Souza and Heather Vancura-Chilson)
    HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Kirk Pepperdine, Ellen Kraffmiller and Henri Tremblay)

    When I asked Verburg about the biggest mistakes Java developers tend to make, he listed three:

    A lack of communication -- Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson.
    No source control -- Developers simply storing code in local filesystems and emailing code in order to integrate.
    Design-driven Design -- The need for some developers to cram every design pattern from the Gang of Four (GoF) book into their source code.

    All of which raises the question: If these practices are so bad, why do developers engage in them? “I've seen a wide gamut of reasons,” said Verburg, who lists them as:

    * They were never taught at high school/university that their bad habits were harmful.
    * They weren't mentored in their first professional roles.
    * They've lost passion for their craft.
    * They're being deliberately malicious!
    * They think software development is a technical activity and not a social one.
    * They think that they'll be able to tidy it up later.

    A couple of key confusions and misconceptions beset Java developers, according to Verburg. “With Java and the JVM in particular I've seen a couple of trends,” he remarked. “One is that developers think that the JVM is a magic box that will clean up their memory, make their code run fast, as well as make them cups of coffee. 
    The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try and force Java (the language) to do something it's not very good at, such as rapid web development. So you get a proliferation of overly complex frameworks, libraries and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!”

    I asked him about the keys to running a good Java User Group. “You need to have a ‘Why,’” he observed. “Many user groups know what they do (typically, events) and how they do it (the logistics), but what really drives users to join your group and to stay is to give them a purpose. For example, within the LJC we constantly talk about the ‘Why,’ which in our case is several whys:

    * Re-ignite the passion that developers have for their craft
    * Raise the bar of Java developers in London
    * We want developers to have a voice in deciding the future of Java
    * We want to inspire the next generation of tech leaders
    * To bring the disparate tech groups in London together
    * So we could learn from each other
    * We believe that the Java ecosystem forms a cornerstone of our society today -- we want to protect that for the future”

    Looking ahead to Java 8, Verburg expressed excitement about Lambdas. “I cannot wait for Lambdas,” he enthused. “Brian Goetz and his group are doing a great job, especially given some of the backwards compatibility that they have to maintain. It's going to remove a lot of boilerplate and yet maintain readability, plus enable massive scaling.”

    Check out Martijn Verburg at JavaOne if you get a chance, and stay tuned for a longer interview yours truly did with Martijn, to be published on otn/java some time after JavaOne. Originally published on blogs.oracle.com/javaone.
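    For readers who haven't yet seen what Verburg is excited about, here is a tiny sketch of the boilerplate reduction, based on the lambda syntax previewed for Java 8 (the interview predates the final release, so treat this as illustrative):

    // Today: an anonymous inner class just to pass one behaviour around
    Runnable before = new Runnable() {
        @Override
        public void run() {
            System.out.println("hello");
        }
    };

    // With Java 8 lambdas: the same behaviour, minus the ceremony
    Runnable after = () -> System.out.println("hello");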

    Read the article

  • Followup: Python 2.6, 3 abstract base class misunderstanding

    - by Aaron
    I asked a question at Python 2.6, 3 abstract base class misunderstanding. My problem was that python abstract base classes didn't work quite the way I expected them to. There was some discussion in the comments about why I would want to use ABCs at all, and Alex Martelli provided an excellent answer on why my use didn't work and how to accomplish what I wanted. Here I'd like to address why one might want to use ABCs, and show my test code implementation based on Alex's answer. tl;dr: Code after the 16th paragraph. In the discussion on the original post, statements were made along the lines that you don't need ABCs in Python, and that ABCs don't do anything and are therefore not real classes; they're merely interface definitions. An abstract base class is just a tool in your tool box. It's a design tool that's been around for many years, and a programming tool that is explicitly available in many programming languages. It can be implemented manually in languages that don't provide it. An ABC is always a real class, even when it doesn't do anything but define an interface, because specifying the interface is what an ABC does. If that was all an ABC could do, that would be enough reason to have it in your toolbox, but in Python and some other languages they can do more. The basic reason to use an ABC is when you have a number of classes that all do the same thing (have the same interface) but do it differently, and you want to guarantee that that complete interface is implemented in all objects. A user of your classes can rely on the interface being completely implemented in all classes. You can maintain this guarantee manually. Over time you may succeed. Or you might forget something. Before Python had ABCs you could guarantee it semi-manually, by throwing NotImplementedError in all the base class's interface methods; you must implement these methods in derived classes. This is only a partial solution, because you can still instantiate such a base class. A more complete solution is to use ABCs as provided in Python 2.6 and above. Template methods and other wrinkles and patterns are ideas whose implementation can be made easier with full-citizen ABCs. Another idea in the comments was that Python doesn't need ABCs (understood as a class that only defines an interface) because it has multiple inheritance. The implied reference there seems to be Java and its single inheritance. In Java you "get around" single inheritance by inheriting from one or more interfaces. Java uses the word "interface" in two ways. A "Java interface" is a class with method signatures but no implementations. The methods are the interface's "interface" in the more general, non-Java sense of the word. Yes, Python has multiple inheritance, so you don't need Java-like "interfaces" (ABCs) merely to provide sets of interface methods to a class. But that's not the only reason in software development to use ABCs. Most generally, you use an ABC to specify an interface (set of methods) that will likely be implemented differently in different derived classes, yet that all derived classes must have. Additionally, there may be no sensible default implementation for the base class to provide. Finally, even an ABC with almost no interface is still useful. We use something like it when we have multiple except clauses for a try. Many exceptions have exactly the same interface, with only two differences: the exception's string value, and the actual class of the exception. 
    In many exception clauses we use nothing about the exception except its class to decide what to do; catching one type of exception we do one thing, and another except clause catching a different exception does another thing. According to the exceptions module's doc page, BaseException is not intended to be derived by any user defined exceptions. If ABCs had been a first class Python concept from the beginning, it's easy to imagine BaseException being specified as an ABC. But enough of that. Here's some 2.6 code that demonstrates how to use ABCs, and how to specify a list-like ABC. Examples are run in ipython, which I like much better than the python shell for day to day work; I only wish it was available for python3.

    Your basic 2.6 ABC:

    from abc import ABCMeta, abstractmethod

    class Super():
        __metaclass__ = ABCMeta

        @abstractmethod
        def method1(self):
            pass

    Test it (in ipython; the python shell would be similar):

    In [2]: a = Super()
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /home/aaron/projects/test/<ipython console> in <module>()
    TypeError: Can't instantiate abstract class Super with abstract methods method1

    Notice the end of the last line, where the TypeError exception tells us that method1 has not been implemented ("abstract methods method1"). That was the method designated as @abstractmethod in the preceding code. Create a subclass that inherits Super, implement method1 in the subclass and you're done. My problem, which caused me to ask the original question, was how to specify an ABC that itself defines a list interface. My naive solution was to make an ABC as above, and in the inheritance parentheses say (list). My assumption was that the class would still be abstract (can't instantiate it), and would be a list. That was wrong; inheriting from list made the class concrete, despite the abstract bits in the class definition. Alex suggested inheriting from collections.MutableSequence, which is abstract (and so doesn't make the class concrete) and list-like. I used collections.Sequence, which is also abstract but has a shorter interface and so was quicker to implement. First, Super derived from Sequence, with nothing extra:

    from abc import abstractmethod
    from collections import Sequence

    class Super(Sequence):
        pass

    Test it:

    In [6]: a = Super()
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /home/aaron/projects/test/<ipython console> in <module>()
    TypeError: Can't instantiate abstract class Super with abstract methods __getitem__, __len__

    We can't instantiate it. A list-like full-citizen ABC; yea! Again, notice in the last line that TypeError tells us why we can't instantiate it: __getitem__ and __len__ are abstract methods. They come from collections.Sequence. But I want a bunch of subclasses that all act like immutable lists (which collections.Sequence essentially is), and that have their own implementations of my added interface methods. In particular, I don't want to implement my own list code; Python already did that for me. So first, let's implement the missing Sequence methods in terms of Python's list type, so that all subclasses act as lists (Sequences). 
    First let's see the signatures of the missing abstract methods:

    In [12]: help(Sequence.__getitem__)
    Help on method __getitem__ in module _abcoll:
    __getitem__(self, index) unbound _abcoll.Sequence method
    (END)

    In [14]: help(Sequence.__len__)
    Help on method __len__ in module _abcoll:
    __len__(self) unbound _abcoll.Sequence method
    (END)

    __getitem__ takes an index, and __len__ takes nothing. And the implementation (so far) is:

    from abc import abstractmethod
    from collections import Sequence

    class Super(Sequence):
        # Gives us a list member for ABC methods to use.
        def __init__(self):
            self._list = []

        # Abstract method in Sequence, implemented in terms of list.
        def __getitem__(self, index):
            return self._list.__getitem__(index)

        # Abstract method in Sequence, implemented in terms of list.
        def __len__(self):
            return self._list.__len__()

        # Not required. Makes printing behave like a list.
        def __repr__(self):
            return self._list.__repr__()

    Test it:

    In [34]: a = Super()
    In [35]: a
    Out[35]: []
    In [36]: print a
    []
    In [37]: len(a)
    Out[37]: 0
    In [38]: a[0]
    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    /home/aaron/projects/test/<ipython console> in <module>()
    /home/aaron/projects/test/test.py in __getitem__(self, index)
         10     # Abstract method in Sequence, implemented in terms of list.
         11     def __getitem__(self, index):
    ---> 12         return self._list.__getitem__(index)
         13
         14     # Abstract method in Sequence, implemented in terms of list.
    IndexError: list index out of range

    Just like a list. It's not abstract (for the moment) because we implemented both of Sequence's abstract methods. Now I want to add my bit of interface, which will be abstract in Super and therefore required to implement in any subclasses. And we'll cut to the chase and add subclasses that inherit from our ABC Super:

    from abc import abstractmethod
    from collections import Sequence

    class Super(Sequence):
        # Gives us a list member for ABC methods to use.
        def __init__(self):
            self._list = []

        # Abstract method in Sequence, implemented in terms of list.
        def __getitem__(self, index):
            return self._list.__getitem__(index)

        # Abstract method in Sequence, implemented in terms of list.
        def __len__(self):
            return self._list.__len__()

        # Not required. Makes printing behave like a list.
        def __repr__(self):
            return self._list.__repr__()

        @abstractmethod
        def method1(self):
            pass

    class Sub0(Super):
        pass

    class Sub1(Super):
        def __init__(self):
            self._list = [1, 2, 3]

        def method1(self):
            return [x**2 for x in self._list]

        def method2(self):
            return [x/2.0 for x in self._list]

    class Sub2(Super):
        def __init__(self):
            self._list = [10, 20, 30, 40]

        def method1(self):
            return [x+2 for x in self._list]

    We've added a new abstract method to Super, method1. This makes Super abstract again. A new class Sub0 inherits from Super but does not implement method1, so it's also an ABC. Two new classes, Sub1 and Sub2, both inherit from our ABC Super. They both implement method1 from Super, so they're not abstract. Both implementations of method1 are different. Sub1 and Sub2 also both initialize themselves differently; in real life they might initialize themselves wildly differently. So you have two subclasses which both "is a" Super (they both implement Super's required interface) although their implementations are different. Also remember that Super, although an ABC, provides four non-abstract methods. 
    So Super provides two things to subclasses: an implementation of collections.Sequence, and an additional abstract interface (the one abstract method) that subclasses must implement. Also, class Sub1 implements an additional method, method2, which is not part of Super's interface. Sub1 "is a" Super, but it also has additional capabilities. Test it:

    In [52]: a = Super()
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /home/aaron/projects/test/<ipython console> in <module>()
    TypeError: Can't instantiate abstract class Super with abstract methods method1

    In [53]: a = Sub0()
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /home/aaron/projects/test/<ipython console> in <module>()
    TypeError: Can't instantiate abstract class Sub0 with abstract methods method1

    In [54]: a = Sub1()
    In [55]: a
    Out[55]: [1, 2, 3]
    In [56]: b = Sub2()
    In [57]: b
    Out[57]: [10, 20, 30, 40]
    In [58]: print a, b
    [1, 2, 3] [10, 20, 30, 40]
    In [59]: a, b
    Out[59]: ([1, 2, 3], [10, 20, 30, 40])
    In [60]: a.method1()
    Out[60]: [1, 4, 9]
    In [61]: b.method1()
    Out[61]: [12, 22, 32, 42]
    In [62]: a.method2()
    Out[62]: [0.5, 1.0, 1.5]
    In [63]: a[:2]
    Out[63]: [1, 2]
    In [64]: a[0] = 5
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /home/aaron/projects/test/<ipython console> in <module>()
    TypeError: 'Sub1' object does not support item assignment

    Super and Sub0 are abstract and can't be instantiated (lines 52 and 53). Sub1 and Sub2 are concrete and have an immutable Sequence interface (54 through 59). Sub1 and Sub2 are instantiated differently, and their method1 implementations are different (60, 61). Sub1 includes an additional method2, beyond what's required by Super (62). Any concrete Super acts like a list/Sequence (63). A collections.Sequence is immutable (64). Finally, a wart:

    In [65]: a._list
    Out[65]: [1, 2, 3]
    In [66]: a._list = []
    In [67]: a
    Out[67]: []

    Super._list is spelled with a single underscore. Double underscore would have protected it from this last bit, but would have broken the implementation of methods in subclasses. Not sure why; I think because double underscore is private, and private means private. So ultimately this whole scheme relies on a gentleman's agreement not to reach in and muck with Super._list directly, as in line 65 above. Would love to know if there's a safer way to do that.
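    One possible answer to that closing question, sketched under the same Python 2.6 assumptions as the post: let the base class own the storage under a name-mangled (double-underscore) attribute, and have subclasses hand their initial data to Super.__init__ instead of assigning self._list themselves. Subclass methods then never touch the storage directly, which is what broke the double-underscore attempt above:

    from abc import abstractmethod
    from collections import Sequence

    class Super(Sequence):
        def __init__(self, items=None):
            # Name-mangled to _Super__items; subclasses can't clobber it by accident.
            self.__items = list(items) if items is not None else []

        def __getitem__(self, index):
            return self.__items[index]

        def __len__(self):
            return len(self.__items)

        def __repr__(self):
            return repr(self.__items)

        @abstractmethod
        def method1(self):
            pass

    class Sub1(Super):
        def __init__(self):
            Super.__init__(self, [1, 2, 3])   # data goes through the base class

        def method1(self):
            # Iterate via the Sequence interface instead of reaching into storage.
            return [x ** 2 for x in self]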

    Read the article

  • How to get sound on macbook pro 4,1

    - by Thomas
    I have just installed Xubuntu 12.04.2. My soundcard is detected: thomas@thomas-pc:~$ sudo aplay -l **** List of PLAYBACK Hardware Devices **** Home directory /home/thomas not ours. card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 Everything is put to max in alsamixer and nothing is muted (all the sliders are on OO. My speakers do not work, but when I plug in a headphone I hear it very soft. When I connect my stereo and put the sound VERY loud (3-blocks-of-complaining-neighbours loud) I hear it on a normal level but crackling. I added options snd-hda-intel model=mbp5 amixer set IEC958 off to at the end of /etc/modprobe.d/alsa-base.conf. When it's still not working I tried everything here: https://help.ubuntu.com/community/SoundTroubleshooting 1 >>> list-sinks 1 sink(s) available. * index: 0 name: <alsa_output.pci-0000_00_1b.0.analog-stereo> driver: <module-alsa-card.c> flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY state: SUSPENDED suspend cause: IDLE priority: 9959 volume: 0: 100% 1: 100% 0: 0.00 dB 1: 0.00 dB balance 0.00 base volume: 100% 0.00 dB volume steps: 65537 muted: no current latency: 0.00 ms max request: 0 KiB max rewind: 0 KiB monitor source: 0 sample spec: s16le 2ch 44100Hz channel map: front-left,front-right Stereo used by: 0 linked by: 0 configured latency: 0.00 ms; range is 0.50 .. 371.52 ms card: 0 <alsa_card.pci-0000_00_1b.0> module: 4 properties: alsa.resolution_bits = "16" device.api = "alsa" device.class = "sound" alsa.class = "generic" alsa.subclass = "generic-mix" alsa.name = "ALC889A Analog" alsa.id = "ALC889A Analog" alsa.subdevice = "0" alsa.subdevice_name = "subdevice #0" alsa.device = "0" alsa.card = "0" alsa.card_name = "HDA Intel" alsa.long_card_name = "HDA Intel at 0x9b500000 irq 46" alsa.driver_name = "snd_hda_intel" device.bus_path = "pci-0000:00:1b.0" sysfs.path = "/devices/pci0000:00/0000:00:1b.0/sound/card0" device.bus = "pci" device.vendor.id = "8086" device.vendor.name = "Intel Corporation" device.product.name = "82801H (ICH8 Family) HD Audio Controller" device.form_factor = "internal" device.string = "front:0" device.buffering.buffer_size = "65536" device.buffering.fragment_size = "32768" device.access_mode = "mmap+timer" device.profile.name = "analog-stereo" device.profile.description = "Analog Stereo" device.description = "Built-in Audio Analog Stereo" alsa.mixer_name = "Realtek ALC889A" alsa.components = "HDA:10ec0885,106b3a00,00100103" module-udev-detect.discovered = "1" device.icon_name = "audio-card-pci" ports: analog-output-speaker: Speakers (priority 10000, available: unknown) properties: analog-output-headphones: Headphones (priority 9000, available: no) properties: active port: <analog-output-speaker> 2 and 3: Doesn't seem an permission issue, the sound is very far away (See opening paragraph). 4 thomas@thomas-pc:~$ sudo aplay -l **** List of PLAYBACK Hardware Devices **** Home directory /home/thomas not ours. card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 5 thomas@thomas-pc:~$ find /lib/modules/`uname -r` | grep snd /lib/modules/3.2.0-48-generic/kernel/sound/core/snd-hwdep.ko /lib/modules/3.2.0-48-generic/kernel/sound/core/snd-pcm.ko [.. 
    huge list continues ..]
    /lib/modules/3.2.0-48-generic/kernel/sound/pcmcia/pdaudiocf/snd-pdaudiocf.ko
    /lib/modules/3.2.0-48-generic/kernel/sound/pcmcia/vx/snd-vxpocket.ko
    thomas@thomas-pc:~$

    6

    thomas@thomas-pc:~$ lspci -v | grep -A7 -i "audio"
    00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03)
    Subsystem: Apple Inc. Device 00a4
    Flags: bus master, fast devsel, latency 0, IRQ 46
    Memory at 9b500000 (64-bit, non-prefetchable) [size=16K]
    Capabilities: <access denied>
    Kernel driver in use: snd_hda_intel
    Kernel modules: snd-hda-intel

    7

    I guess it's supported. Linux Mint and Xubuntu 13.04 had no trouble with sound; everything worked out of the box. Thanks in advance.

    Edit: alsa-info.sh output:

    WARNING: /etc/modprobe.d/alsa-base.conf line 45: ignoring bad line starting with 'amixer'
    ALSA Information Script v 0.4.62
    --------------------------------
    This script visits the following commands/files to collect diagnostic information about your ALSA installation and sound related hardware: dmesg, lspci, lsmod, aplay, amixer, alsactl, /proc/asound/, /sys/class/sound/, ~/.asoundrc (etc.). See './alsa-info.sh --help' for command line options.
    WARNING: /etc/modprobe.d/alsa-base.conf line 45: ignoring bad line starting with 'amixer'
    Automatically upload ALSA information to www.alsa-project.org? [y/N] : y
    Uploading information to www.alsa-project.org ... Done!
    Your ALSA information is located at http://www.alsa-project.org/db/?f=6cffc584284d4c0b266eb53249824ef83d6c4e3e
    Please inform the person helping you.
    thomas@thomas-pc:~$
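    A side note that may help whoever answers this: the alsa-info.sh warning above ("ignoring bad line starting with 'amixer'") shows that the amixer command was pasted into /etc/modprobe.d/alsa-base.conf, where it is silently ignored; modprobe.d files accept module directives such as options and alias, not shell commands. A sketch of the split, with the file path and option values taken from the question itself:

    # /etc/modprobe.d/alsa-base.conf -- module options only:
    options snd-hda-intel model=mbp5

    # Run mixer commands from a shell (or a startup script) instead:
    amixer set IEC958 off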

    Read the article

  • SQL SERVER – 3 Online SQL Courses at Pluralsight and Free Learning Resources

    - by pinaldave
    Usain Bolt is an inspiration for all. He broke his own record multiple times because he wanted to do better! Read more about him on Wikipedia. He is great, and indeed the fastest man on the planet.

    Usain Bolt – World’s Fastest Man

    “Can you teach me SQL Server Performance Tuning?” This is one of the most popular questions I receive all the time. The answer is YES. I would love to do performance tuning training for anyone, anywhere. It is my favorite thing to do, and it is my favorite thing to train others in. If possible, I would love to do training 24 hours a day, 7 days a week, 365 days a year. To me, it doesn’t feel like a job. Of course, as much as I would love to do performance tuning 24/7/365, obviously I am just one human being and can only be in one place at one time. It is also very difficult to train more than one person at a time, especially when the students are at different levels. I am also limited by geography. I live in India, and adjust to my own time zone. Trying to teach a live course from India to someone whose time zone is 12 or more hours off of mine is very difficult. If I am trying to teach at 2 am, I am sure I am not at my best!

    There was only one solution to scale – online trainings. I have built 3 different courses on SQL Server Performance Tuning with Pluralsight. Now I have no problem – I am 100% scalable and available 24/7 and 365. You can make me say the same things again and again until you get it right. I am on your mobile and PC, as well as on Xbox. This is why I am such a big fan of online courses. I have recorded many performance tuning classes and you can easily access them online, at your own time. And don’t think that just because these aren’t live classes you won’t be able to get any feedback from me. I encourage all my viewers to go ahead and ask me questions by e-mail, Twitter, Facebook, or whatever way you can get a hold of me.

    Here are details of three of my courses with Pluralsight. I suggest you go over the description of each course. As the author of the courses, I have a few FREE codes for watching them. Please leave a comment with your valid email address; I will send a few of them to random winners.

    SQL Server Performance: Introduction to Query Tuning

    SQL Server performance tuning is an art to master – for developers and DBAs alike. This course takes a systematic approach to planning, analyzing, debugging and troubleshooting common query-related performance problems. This includes an introduction to understanding execution plans inside SQL Server. In this almost four-hour course we cover the following important concepts (the duration of each module is mentioned beside its name):

    Introduction 10:22
    Execution Plan Basics 45:59
    Essential Indexing Techniques 20:19
    Query Design for Performance 50:16
    Performance Tuning Tools 01:15:14
    Tips and Tricks 25:53
    Checklist: Performance Tuning 07:13

    SQL Server Performance: Indexing Basics

    This course teaches you how to master the art of performance tuning SQL Server by better understanding indexes. In this almost two-hour course we cover the following important concepts (durations beside each module):

    Introduction 02:03
    Fundamentals of Indexing 22:21
    Practical Indexing Implementation Techniques 37:25
    Index Maintenance 16:33
    Introduction to Columnstore Index 08:06
    Indexing Practical Performance Tips and Tricks 24:56
    Checklist: Index and Performance 07:29

    SQL Server Questions and Answers

    This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. In this almost 2-hour-and-15-minute course we cover the following important concepts (durations beside each module):

    Introduction 00:54
    Retrieving IDENTITY value using @@IDENTITY 08:38
    Concepts Related to Identity Values 04:15
    Difference between WHERE and HAVING 05:52
    Order in WHERE clause 07:29
    Concepts Around Temporary Tables and Table Variables 09:03
    Are stored procedures pre-compiled? 05:09
    UNIQUE INDEX and NULLs problem 06:40
    DELETE VS TRUNCATE 06:07
    Locks and Duration of Transactions 15:11
    Nested Transaction and Rollback 09:16
    Understanding Date/Time Datatypes 07:40
    Differences between VARCHAR and NVARCHAR datatypes 06:38
    Precedence of DENY and GRANT security permissions 05:29
    Identify Blocking Process 06:37
    NULLS usage with Dynamic SQL 08:03
    Appendix Tips and Tricks with Tools 20:44

    You will have to log in and subscribe to the courses to view them.

    SQL in Sixty Seconds

    Here are my free video learning resources, SQL in Sixty Seconds. These are 60-second videos which I have built on various subjects related to SQL Server. Do let me know what you think of them. Here are three of my latest videos:

    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028
    Copy Column Headers from Resultset – SQL in Sixty Seconds #027
    Effect of Collation on Resultset – SQL in Sixty Seconds #026

    You can watch and learn at your own pace. Then you can easily ask me any questions you have. E-mail is easiest, but for really tough questions I’m willing to talk on Skype, Gtalk, or even Facebook chat. Please do watch and then talk with me; I am always available on the internet!

    Here is the video of the world’s fastest man. Usain St. Leo Bolt inspires us all to do better than our best. We can go to the next level of our own record. We all can improve if we have the will and dedication. Watch the video from the 5:00 mark.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology, Video

    Read the article

  • Out of disk space - /boot at 100%

    - by uvasal
    My /boot is at 100%. When I run aptitude search ~ilinux-image I'm getting loads of unused images. When I try to delete one of them (after checking which one is currently in use by doing uname -r), e.g. apt-get autoremove linux-image-3.2.0-44-generic, I get:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     linux-generic : Depends: linux-headers-generic (= 3.2.0.51.61) but 3.2.0.54.64 is to be installed
     linux-server : Depends: linux-headers-server (= 3.2.0.51.61) but 3.2.0.54.64 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    And running apt-get -f install throws No space left on device. I've also tried doing apt-get purge but I am getting the same thing. Output of df -h and dpkg -l linux-*:
    root@hb2088:/srv/www# df -h
    Filesystem  Size  Used Avail Use% Mounted on
    /dev/sda3   9.4G  3.0G  6.0G  34% /
    udev        301M  4.0K  301M   1% /dev
    tmpfs       124M  228K  124M   1% /run
    none        5.0M     0  5.0M   0% /run/lock
    none        309M     0  309M   0% /run/shm
    /dev/sda1    92M   91M     0 100% /boot
    root@hb2088:/srv/www# dpkg -l linux-*
    Desired=Unknown/Install/Remove/Purge/Hold
    | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
    |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
    ||/ Name                            Version        Description
    un  linux-doc-3.2.0                 <none>         (no description available)
    ii  linux-firmware                  1.79.6         Firmware for Linux kernel drivers
    iU  linux-generic                   3.2.0.51.61    Complete Generic Linux kernel
    un  linux-headers                   <none>         (no description available)
    un  linux-headers-3                 <none>         (no description available)
    un  linux-headers-3.0               <none>         (no description available)
    ii  linux-headers-3.2.0-44          3.2.0-44.69    Header files related to Linux kernel version 3.2.0
    ii  linux-headers-3.2.0-44-generic  3.2.0-44.69    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
    ii  linux-headers-3.2.0-45          3.2.0-45.70    Header files related to Linux kernel version 3.2.0
    ii  linux-headers-3.2.0-45-generic  3.2.0-45.70    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
    ii  linux-headers-3.2.0-48          3.2.0-48.74    Header files related to Linux kernel version 3.2.0
    ii  linux-headers-3.2.0-48-generic  3.2.0-48.74    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
    ii  linux-headers-3.2.0-51          3.2.0-51.77    Header files related to Linux kernel version 3.2.0
    ii  linux-headers-3.2.0-51-generic  3.2.0-51.77    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
    ii  linux-headers-3.2.0-52          3.2.0-52.78    Header files related to Linux kernel version 3.2.0
    ii  linux-headers-3.2.0-52-generic  3.2.0-52.78    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
    iU  linux-headers-3.2.0-54          3.2.0-54.82    Header files related to Linux kernel version 3.2.0
    iU  linux-headers-3.2.0-54-generic  3.2.0-54.82    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
    iU  linux-headers-generic           3.2.0.54.64    Generic Linux kernel headers
    iU  linux-headers-server            3.2.0.54.64    Linux kernel headers on Server Equipment.
    un  linux-image                     <none>         (no description available)
    un  linux-image-3.0                 <none>         (no description available)
    ii  linux-image-3.2.0-44-generic    3.2.0-44.69    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    ii  linux-image-3.2.0-45-generic    3.2.0-45.70    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    ii  linux-image-3.2.0-48-generic    3.2.0-48.74    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    iF  linux-image-3.2.0-51-generic    3.2.0-51.77    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    iF  linux-image-3.2.0-52-generic    3.2.0-52.78    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    in  linux-image-3.2.0-54-generic    <none>         (no description available)
    iU  linux-image-generic             3.2.0.51.61    Generic Linux kernel image
    iU  linux-image-server              3.2.0.51.61    Linux kernel image on Server Equipment.
    un  linux-initramfs-tool            <none>         (no description available)
    un  linux-kernel-headers            <none>         (no description available)
    un  linux-kernel-log-daemon         <none>         (no description available)
    ii  linux-libc-dev                  3.2.0-52.78    Linux Kernel Headers for development
    un  linux-restricted-common         <none>         (no description available)
    iU  linux-server                    3.2.0.51.61    Complete Linux kernel on Server Equipment.
    un  linux-source-3.2.0              <none>         (no description available)
    un  linux-tools                     <none>         (no description available)
    Output of du -sh /boot/*:
    root@hb2088:~# du -sh /boot/*
    781K /boot/abi-3.2.0-44-generic
    781K /boot/abi-3.2.0-45-generic
    781K /boot/abi-3.2.0-48-generic
    781K /boot/abi-3.2.0-51-generic
    781K /boot/abi-3.2.0-52-generic
    139K /boot/config-3.2.0-44-generic
    139K /boot/config-3.2.0-45-generic
    139K /boot/config-3.2.0-48-generic
    139K /boot/config-3.2.0-51-generic
    139K /boot/config-3.2.0-52-generic
    1.6M /boot/grub
    14M  /boot/initrd.img-3.2.0-44-generic
    14M  /boot/initrd.img-3.2.0-45-generic
    14M  /boot/initrd.img-3.2.0-48-generic
    12K  /boot/lost+found
    174K /boot/memtest86+.bin
    176K /boot/memtest86+_multiboot.bin
    2.8M /boot/System.map-3.2.0-44-generic
    2.8M /boot/System.map-3.2.0-45-generic
    2.8M /boot/System.map-3.2.0-48-generic
    2.8M /boot/System.map-3.2.0-51-generic
    2.8M /boot/System.map-3.2.0-52-generic
    4.8M /boot/vmlinuz-3.2.0-44-generic
    4.8M /boot/vmlinuz-3.2.0-45-generic
    4.8M /boot/vmlinuz-3.2.0-48-generic
    4.8M /boot/vmlinuz-3.2.0-51-generic
    4.8M /boot/vmlinuz-3.2.0-52-generic
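    A common way out of this chicken-and-egg situation (apt cannot repair itself because /boot is full) is to purge one old kernel directly with dpkg, which bypasses apt's dependency resolution, and then let apt fix the rest. This is a hedged sketch, not a definitive fix: the version numbers are taken from the listing above, and you must never remove the kernel reported by uname -r.
    # Check which kernel is running first - do NOT remove this one.
    uname -r

    # Free space on /boot by purging one clearly old kernel with dpkg directly
    # (dpkg, unlike apt-get, does not choke on the broken dependency state).
    dpkg --purge linux-image-3.2.0-44-generic \
                 linux-headers-3.2.0-44-generic \
                 linux-headers-3.2.0-44

    # With space freed, let apt finish configuring the half-installed kernels...
    apt-get -f install
    # ...and then clean out the remaining unused kernels the normal way.
    apt-get autoremove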

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries. For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:
    SELECT p.Name
    FROM Production.Product AS p
    JOIN Production.TransactionHistory AS th
        ON p.ProductID = th.ProductID
    GROUP BY p.ProductID, p.Name
    HAVING COUNT_BIG(*) < 10;
    That query correctly returns 23 rows (the execution plan and a data sample are shown in the original post). The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join. The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten.
    This ‘fully-optimized’ plan has an estimated cost of around 0.33 units. The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too? To answer that, let’s look at some other ways to formulate this query. This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones. The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above. It joins the result of pre-aggregating the history table to the Product table before filtering:
    SELECT p.Name
    FROM
    (
        SELECT th.ProductID, cnt = COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        GROUP BY th.ProductID
    ) AS q1
    JOIN Production.Product AS p
        ON p.ProductID = q1.ProductID
    WHERE q1.cnt < 10;
    Perhaps a little surprisingly, we get a slightly different execution plan: the results are the same (23 rows) but this time the Filter is pushed below the join! The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join. In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above.
    The reason this predicate can be pushed past the join in this query form, but not in the original formulation, is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive.
    Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exist fewer than ten correlated rows in the history table”:
    SELECT p.Name
    FROM Production.Product AS p
    WHERE EXISTS
    (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        HAVING COUNT_BIG(*) < 10
    );
    Unfortunately, this query produces an incorrect result (86 rows). The problem is that it lists products with no history rows, though the reasons are interesting. The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.
    In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL). To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist:
    -- Scalar aggregate
    SELECT COUNT_BIG(*)
    FROM Production.TransactionHistory AS th
    WHERE th.ProductID = 709;

    -- Vector aggregate
    SELECT COUNT_BIG(*)
    FROM Production.TransactionHistory AS th
    WHERE th.ProductID = 709
    GROUP BY th.ProductID;
    The estimated execution plans for these two statements are almost identical. You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case. The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away.
    In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate. This is something I would like to see exposed in show plan, so I suggested it on Connect. Anyway, the results of running the two queries show the difference at runtime: the scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.
    Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:
    SELECT p.Name
    FROM Production.Product AS p
    WHERE EXISTS
    (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        HAVING COUNT_BIG(*) BETWEEN 1 AND 9
    );
    The query now returns the correct 23 rows. Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans. Let’s try adding a redundant GROUP BY instead of changing the HAVING clause:
    SELECT p.Name
    FROM Production.Product AS p
    WHERE EXISTS
    (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        GROUP BY th.ProductID
        HAVING COUNT_BIG(*) < 10
    );
    Not only do we now get correct results (23 rows), the execution plan is transformed as well. I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :) The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier. What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:
    SELECT p.Name
    FROM Production.Product AS p
    WHERE EXISTS
    (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        GROUP BY ()
        HAVING COUNT_BIG(*) < 10
    );
    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too). Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless of which ProductID is logically being evaluated).
    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):
    WHERE clause:
    SELECT p.Name
    FROM Production.Product AS p
    WHERE
    (
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        GROUP BY ()
    ) < 10;
    APPLY:
    SELECT p.Name
    FROM Production.Product AS p
    CROSS APPLY
    (
        SELECT NULL
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        GROUP BY ()
        HAVING COUNT_BIG(*) < 10
    ) AS ca (dummy);
    FROM clause:
    SELECT q1.Name
    FROM
    (
        SELECT p.Name, cnt =
        (
            SELECT COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
        )
        FROM Production.Product AS p
    ) AS q1
    WHERE q1.cnt < 10;
    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate… you should be able to work out why :)
    SELECT q.Name
    FROM
    (
        SELECT p.Name, cnt =
        (
            SELECT SUM(1)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
        )
        FROM Production.Product AS p
    ) AS q
    WHERE q.cnt < 10;
    The semantics of SQL aggregates are rather odd in places. It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates. As we have seen, query plans do not show in which ‘mode’ an aggregate is running, and getting it wrong can cause poor performance, wrong results, or both.
    © 2012 Paul White
    Twitter: @SQL_Kiwi
    Email: [email protected]
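    For anyone who wants to check the answer to that closing teaser: a scalar SUM over an empty set returns NULL (unlike COUNT, which returns zero), and NULL < 10 evaluates to UNKNOWN, so products with no history rows are rejected by the WHERE clause even without a GROUP BY. A minimal self-contained sketch – the empty table variable is mine, not from the original post:
    DECLARE @th TABLE (ProductID INT NOT NULL);  -- stands in for a product with no history

    SELECT COUNT_BIG(*) FROM @th;  -- scalar COUNT over the empty set: 0
    SELECT SUM(1)       FROM @th;  -- scalar SUM over the empty set: NULL

    -- 0 < 10 is TRUE, so a COUNT-based scalar test would wrongly qualify the row;
    -- NULL < 10 is UNKNOWN, so the SUM(1) form filters the row out.
    SELECT CASE WHEN NULL < 10 THEN 'qualifies' ELSE 'filtered out' END;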

    Read the article

  • Passing the CAML thru the EY of the NEEDL

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved
    Passing the CAML thru the EY of the NEEDL
    Definitions: CAML (Collaborative Application Markup Language) is an XML-based markup language used in Microsoft SharePoint technologies.
    Anonymous: A camel is a horse designed by committee.
    Dov Trietsch: A CAML is a HORS designed by Microsoft.
    I was advised against putting any Camel and Sphinx rhymes in here. Look it up in Google!
    _____
    Now that we have dispensed with the dromedary jokes (BTW, I have many more, but they are not fit to print!), here is an interesting problem and its solution. We have built a list where the title must be kept unique, so I needed to verify the existence (or absence) of a list item with a particular title. Two methods came to mind:
    1: Span the list until the title is found (result = found) or until the list ends (result = not found). This is an algorithm of complexity O(N), and for long lists it is a performance sucker.
    2: Use a CAML query instead. Here, for short lists, we’ll encounter some overhead, but because the query results in an SQL query on the content database, it is of complexity O(log N), which is significantly better and scales perfectly.
    Obviously I decided to go with the latter, and this is where the CAML s--t hit the fan. A CAML query returns an SPListItemCollection and I simply checked its Count. If it was 0, the item did not already exist and it was safe to add a new item with the given title. Otherwise I cancelled the operation and warned the user. The trouble was that I always got a positive – most of the time a false positive. The count was greater than 0 regardless of the title I checked (except when the list was empty, which happens only once). This was very disturbing indeed. To solve my immediate problem, which was speedy delivery, I reverted to the “Span the list” approach, but the problem bugged me, so I wrote a little console app by which I tested and tweaked and tested, time and again, until I found the solution. Yes, one can pass the proverbial CAML thru the ey of the needle (e’s missing on purpose). So here are my conclusions:
    CAML that does not work (note: QT is my quote character):
    char QT = Convert.ToChar((int)34);
    string titleQuery = "<Query><Where><Eq>";
    titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
    titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where></Query>";
    titleQuery += "<ViewFields><FieldRef Name=" + QT + "Title" + QT + "/></ViewFields>";
    Why? Even though U2U generates it, the <Query> and </Query> tags do not belong in the query that you pass. Start your query with the <Where> clause. The <ViewFields> clause does not belong either. I used this clause to limit the returned collection to a single column, and I still wish to do that; I’ll show how this is done a bit later. When you use the <Query> </Query> tags in your query, it’s as if you did not specify the query at all. What you get is the all-inclusive default query for the list. It returns every column and every item. It is expensive for both server and network because it does all the extra processing and eats plenty of bandwidth.
    Now, here is the CAML that works:
    string titleQuery = "<Where><Eq>";
    titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
    titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where>";
    You’ll also notice that inside the unusable <ViewFields> clause above, we have a <FieldRef> clause. This is what we pass to the SPQuery object.
    Here is how:
    SPQuery query = new SPQuery();
    query.Query = titleQuery;
    query.ViewFields = "<FieldRef Name=" + QT + "Title" + QT + "/>";
    query.RowLimit = 1;
    SPListItemCollection col = masterList.GetItems(query);
    Two things to note: we enter the view fields into the SPQuery object, and we also limit the number of rows that the query returns. The latter is not always done, but in an existence test there is no point in returning hundreds of rows. The query will now return one item or none, which is all we need in order to verify the existence (or non-existence) of items. Limiting the number of columns and the number of rows is a great performance enhancer. That’s all folks!!
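    Putting the pieces together, here is a sketch of a complete existence check. The method name and the class wrapper are mine, not from the post, and the SecurityElement.Escape call is my own assumption to guard against markup characters in the title:
    using System.Security;
    using Microsoft.SharePoint;

    static class ListHelpers
    {
        // Returns true if an item with the given title already exists.
        public static bool ItemTitleExists(SPList masterList, string title)
        {
            SPQuery query = new SPQuery();
            // No <Query> wrapper - start at <Where>, as discussed above.
            query.Query =
                "<Where><Eq><FieldRef Name=\"Title\"/>" +
                "<Value Type=\"Text\">" + SecurityElement.Escape(title) + "</Value>" +
                "</Eq></Where>";
            query.ViewFields = "<FieldRef Name=\"Title\"/>";  // one column is enough
            query.RowLimit = 1;                               // one row is enough
            return masterList.GetItems(query).Count > 0;
        }
    }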

    Read the article

  • F# for the C# Programmer

    - by mbcrump
    Are you a C# programmer who can’t make it past a day without seeing or hearing someone mention F#? Today, I’m going to walk you through your first F# application and give you a brief introduction to the language. Sit back; this will only take about 20 minutes.
    Introduction
    Microsoft’s F# programming language is a functional language for the .NET framework that was originally developed at Microsoft Research Cambridge by Don Syme. In October 2007, the senior vice president of the developer division at Microsoft announced that F# was being officially productized to become a fully supported .NET language, and professional developers were hired to create a team of around ten people to build the product version. In September 2008, Microsoft released the first Community Technology Preview (CTP), an official beta release, of the F# distribution. In December 2008, Microsoft announced that the success of this CTP had encouraged them to escalate F#, and it will now be shipped as one of the core languages in Visual Studio 2010, alongside C++, C# 4.0 and VB. The F# programming language incorporates many state-of-the-art features from programming language research and ossifies them in an industrial-strength implementation that promises to revolutionize interactive, parallel and concurrent programming.
    Advantages of F#
    F# is the world’s first language to combine all of the following features:
    Type inference: types are inferred by the compiler and generic definitions are created automatically.
    Algebraic data types: a succinct way to represent trees.
    Pattern matching: a comprehensible and efficient way to dissect data structures.
    Active patterns: pattern matching over foreign data structures.
    Interactive sessions: as easy to use as Python and Mathematica.
    High performance JIT compilation to native code: as fast as C#.
    Rich data structures: lists and arrays built into the language with syntactic support.
    Functional programming: first-class functions and tail calls.
    Expressive static type system: finds bugs during compilation and provides machine-verified documentation.
    Sequence expressions: interrogate huge data sets efficiently.
    Asynchronous workflows: syntactic support for monadic-style concurrent programming with cancellations.
    Industrial-strength IDE support: multithreaded debugging, and graphical throwback of inferred types and documentation.
    Commerce-friendly design and a viable commercial market.
    Let’s try a short program in C#, then F#, to understand the differences.
    Using C#: create a variable and output the value to the console window. Sample program:
    using System;

    namespace ConsoleApplication9
    {
        class Program
        {
            static void Main(string[] args)
            {
                var a = 2;
                Console.WriteLine(a);
                Console.ReadLine();
            }
        }
    }
    A breeze, right? 14 lines of code. We could have condensed it a bit by removing the “using” statement and tossing the namespace, but this is the typical C# program.
    Using F#: create a variable and output the value to the console window. To start, open Visual Studio 2010 or Visual Studio 2008. Note: if using VS2008, then please download the SDK first before getting started. If you are using VS2010 then you are already set up and ready to go. So, click File –> New Project –> Other Languages –> Visual F# –> Windows –> F# Application. Go ahead and enter a name and click OK.
    Now, double-click Program.fs in the Solution Explorer and enter the following information. Hit F5 and it should run successfully. Sample program:
    open System
    let a = 2
    Console.WriteLine a
    Hmm, what? F# did the same thing in 3 lines of code. Show me the interactive evaluation that I keep hearing about. The F# development environment for Visual Studio 2010 provides two different modes of execution for F# code:
    Batch compilation to a .NET executable or DLL (this was accomplished above).
    Interactive evaluation (demo is below).
    The interactive session provides a > prompt, requires a double semicolon ;; identifier at the end of a code snippet to force evaluation, and returns the names (if any) and types of resulting definitions and values. To access the F# prompt, in VS2010 go to View –> Other Windows, then F# Interactive. Once you have the interactive window, type in the following expression: 2+3;; (a sketch of the resulting exchange appears at the end of this excerpt). I hope this guide helps you get started with the language; please check out the following books for further information.
    F# books for further reading
    Foundations of F#
    Author: Robert Pickering
    An introduction to functional programming with F#. Including many samples, this book walks through the features of the F# language and libraries, and covers many of the .NET Framework features which can be leveraged with F#.
    Functional Programming for the Real World: With Examples in F# and C#
    Authors: Tomas Petricek and Jon Skeet
    An introduction to functional programming for existing C# developers, written by Tomas Petricek and Jon Skeet. This book explains the core principles using both C# and F#, shows how to use functional ideas when designing .NET applications, and presents practical examples such as the design of a domain-specific language, development of multi-core applications, and programming of reactive applications.
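    Since the original screenshots are not reproduced here, this is roughly what the F# Interactive exchange for the expression above looks like (the exact banner and formatting vary by release, so treat this as illustrative):
    > 2+3;;
    val it : int = 5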

    Read the article

  • Who broke the build?

    - by Martin Hinshelwood
    I recently sent round a list of broken builds at SSW and asked for them to be fixed, or deleted if they are not being used. My colleague Peter came back with a couple of questions, which I love, as it tells me that at least one person reads my email. I think first we need to answer a couple of other questions related to builds in general.
    Why do we want the build to pass?
    Any developer can pick up a project and build it.
    Standards can be enforced.
    Constant quality is maintained.
    Problems in code are identified early.
    What could a failed build signify?
    Developers have not built and tested their code properly before checking in.
    Something added depends on a local resource that is not under version control or does not exist on the target computer.
    Developers are not writing tests to cover common problems.
    There are not enough tests to cover problems.
    Now we know why; let’s answer Peter’s questions.
    Where is this list? (Can we see it somehow?)
    You can normally only see the builds listed for each project. But you have a little application called “Build Notifications” on your computer. It is installed when you install Visual Studio 2010.
    Figure: Starting the build notification application on Windows 7.
    Once you have it open (it may disappear into your system tray) you should click “Options” and select all the projects you are involved in. This application only lists projects that have builds, so don’t worry if a project is not listed; that just means you are about to set up a build, right? I just selected ALL projects that have builds.
    Figure: All builds are listed here.
    In addition to seeing the list you will also get toast notifications of build failures.
    How can we get more info on what broke the build? (Who is interesting too, to point the finger, but more important is what.)
    The only thing worse than breaking the build is continuing to develop on a broken build!
    Figure: I have highlighted the users who either are bad for breaking the build, or very bad for not fixing it.
    To find out what is wrong with a build you need to open the build definition. You can open a web version by double-clicking the build in the image above, or you can open it from “Team Explorer”: just connect to your project, open out the “Builds” tree, and open the build by double-clicking on it.
    Figure: Opening a build is easy; double-click it and then open a build run from the list.
    Figure: Good example – the build and tests have passed.
    Figure: Bad example – there are 133 errors preventing POK from being built on the build server.
    For identifying failures see:
    Solution: Getting Silverlight to build on Team Build 2010 RC
    Solution: Testing Web Services with MSTest on Team Build
    Finding the problem on a partially succeeded build
    So, Peter asked about blame; let’s have a look and see.
    Figure: The build has been broken for so long I have no idea when it was broken, but everyone on this list is to blame (I am there too).
    The rest of the history is lost in the sands of time; there is no way to tell when the build was originally broken, or by whom, or even if it ever worked in the first place. Builds should be protected by the team that uses them, and the only way to do that is to have the team own them. It is fine for me to go in and set up a build, but the ownership of a build should always reside with the person who broke it last.
    Conclusion
    This is an example of a pointless build.
    Let’s be honest: if you have a system like TFS in place and builds are constantly left broken, or not added to projects, then your developers don’t yet understand the value. I have found that adding a Gated Check-in helps instil that understanding of value. If you prevent developers from checking in without passing that basic quality gate of “your code builds on another computer”, it makes them look more closely at why they can’t check in. I have had builds fail because one developer had a “d” drive, but the build server did not. That is exactly what builds are there to catch.
    If you want to know what builds to create and why, I wrote a post on “Do you know the minimum builds to create on any branch?”
    Technorati Tags: TFS2010,Gated Check-in,Builds,Build Failure,Broken Build
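    As a companion to the “list of broken builds” mentioned at the top, here is a hedged sketch of producing such a list with the TFS 2010 client object model. The collection URL and project name are placeholders, and error handling is omitted:
    // References: Microsoft.TeamFoundation.Client, Microsoft.TeamFoundation.Build.Client
    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.Build.Client;

    class BrokenBuilds
    {
        static void Main()
        {
            // Placeholder URL - point this at your own collection.
            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var buildServer = tfs.GetService<IBuildServer>();

            // Query every build for a team project and report the failures.
            foreach (IBuildDetail build in buildServer.QueryBuilds("MyProject"))
            {
                if (build.Status == BuildStatus.Failed)
                {
                    Console.WriteLine("{0} failed at {1} (last triggered by {2})",
                        build.BuildNumber, build.FinishTime, build.RequestedFor);
                }
            }
        }
    }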

    Read the article

  • [GEEK SCHOOL] Network Security 3: Windows Defender and a Malware-Free System

    - by Ciprian Rusen
    In this lesson we are going to talk about one of the most confusing security products that are bundled with Windows: Windows Defender. In the past, this product had a bad reputation, and for good reason: it was very limited in its capacity to protect your computer from real-world malware. However, the latest version, included in Windows 8.x operating systems, is much different from the past, and it provides real protection to its users. The nice thing about Windows Defender in its current incarnation is that it protects your system from the start, so there are never gaps in coverage.
    We will start this lesson by explaining what Windows Defender is in Windows 7 and Vista versus what it is in Windows 8, and what product to use if you are using an earlier version. Next we will explore how to use Windows Defender, how to improve its default settings, and how to deal with the alerts that it displays. As you will see, Windows Defender will have you using its list of quarantined items a lot more often than other security products. This is why we will explain in detail how to work with it and remove malware for good, or restore those items that are only false alarms. Lastly, you will learn how to turn off Windows Defender if you no longer want to use it and prefer a third-party security product in its place, and then how to turn it back on if you have changed your mind. Upon completion, you should have a thorough understanding of your system’s default anti-malware options, and how to protect your system expeditiously.
    What is Windows Defender?
    Unfortunately, there is no one clear answer to this question, because of the confusing way Microsoft has chosen to name its security products. Windows Defender is a different product depending on the Windows operating system you are using.
    If you use Windows Vista or Windows 7, then Windows Defender is a security tool that protects your computer from spyware. This is but one form of malware, made up of tools and applications that monitor your movements on the Internet or your activities on your computer. Spyware tends to send the information that is collected to a remote server, and it is later used for all kinds of malicious purposes, from displaying advertising you don’t want, to using your personal data, etc. However, there are many other types of malware on the Internet, and this version of Windows Defender is not able to protect users from any of them. That’s why, if you are using Windows 7 or earlier, we strongly recommend that you disable Windows Defender and install a more complete security product like Microsoft Security Essentials, or third-party security products from specialized security vendors.
    If you use Windows 8.x operating systems, then Windows Defender is the same thing as Microsoft Security Essentials: a decent security product that protects your computer in real time from viruses and spyware. The fact that this product also protects your computer from viruses, not just from spyware, makes a huge difference. If you don’t want to pay for security products, Windows Defender in Windows 8.x and Microsoft Security Essentials (in Windows 7 or earlier) are good alternatives. Windows Defender in Windows 8.x and Microsoft Security Essentials are the same product; only their name is different. In this lesson, we will use the Windows Defender version from Windows 8.x, but our instructions also apply to Microsoft Security Essentials (MSE) in Windows 7 and Windows Vista.
    If you want to download Microsoft Security Essentials and try it out, we recommend that you use this page: Download Microsoft Security Essentials. There you will find both 32-bit and 64-bit editions of this product, as well as versions in multiple languages.
    How to Use and Configure Windows Defender
    Windows Defender (MSE) is very easy to use. To start it, search for “defender” on the Windows 8.x Start screen and click or tap the “Windows Defender” search result. In Windows 7, search for “security” in the Start Menu search box and click “Microsoft Security Essentials”. Windows Defender has four tabs which give you access to the following tools and options:
    Home – here you can view the security status of your system. If everything is alright, it will be colored green. If there are some warnings to consider, it will be colored yellow, and if there are threats that must be dealt with, everything will be colored red. On the right side of the “Home” tab you will find options for scanning your computer for viruses and spyware. At the bottom of the tab you will find information about when the last scan was performed and what type of scan it was.
    Update – here you will find information on whether this product is up-to-date. You will learn when it was last updated and the versions of the definitions it is using. You can also trigger a manual update.
    History – here you can access quarantined items, see which items you’ve allowed to run on your PC even though they were identified as malware by Windows Defender, and view a complete list of all the malicious items Windows Defender has detected on your PC. In order to access all these lists and work with them, you need to be signed in as an administrator.
    Settings – this is the tab where you can turn on the real-time protection service, exclude files, file types, processes, and locations from its scans, and access a couple of more advanced settings. The only difference between Windows Defender in Windows 8.x and Microsoft Security Essentials (in Windows 7 or earlier) is that, in the “Settings” tab, Microsoft Security Essentials allows you to set when to run scheduled scans, while Windows Defender lacks this option.
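    For readers who prefer the command line, recent Windows releases also ship a Defender PowerShell module whose cmdlets roughly mirror these tabs. A hedged sketch; cmdlet availability varies by Windows version, so treat it as an assumption outside Windows 8.1:
    # Status overview - roughly the "Home" and "Update" tabs.
    Get-MpComputerStatus |
        Select-Object AMEngineVersion, AntivirusSignatureVersion, RealTimeProtectionEnabled

    # Trigger a definitions update - the "Update" tab's button.
    Update-MpSignature

    # Run a quick scan - the "Home" tab's scan options.
    Start-MpScan -ScanType QuickScan

    # Review past detections - the "History" tab.
    Get-MpThreatDetection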

    Read the article
