Search Results

Search found 14435 results on 578 pages for 'disk usage'.

  • Google chrome cannot be installed

    - by Zxy
    I downloaded the latest version of Google Chrome and tried to install it, but I got errors. I searched the net and noticed that most people's problems were solved once they installed the missing dependencies. I tried to install them too, but it does not seem to work.

      zero@ubuntu:~/Downloads$ sudo apt-get install -f
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following packages will be REMOVED: google-chrome-stable:i386
      0 upgraded, 0 newly installed, 1 to remove and 23 not upgraded. 1 not fully installed or removed.
      After this operation, 116 MB disk space will be freed.
      Do you want to continue [Y/n]? Y
      (Reading database ... 169296 files and directories currently installed.)
      Removing google-chrome-stable:i386 ...
      Processing triggers for man-db ...
      Processing triggers for desktop-file-utils ...
      Processing triggers for bamfdaemon ...
      Rebuilding /usr/share/applications/bamf.index...
      Processing triggers for gnome-menus ...
      zero@ubuntu:~/Downloads$ sudo dpkg -i google-chrome-stable_current_i386.deb
      Selecting previously unselected package google-chrome-stable:i386.
      (Reading database ... 169201 files and directories currently installed.)
      Unpacking google-chrome-stable:i386 (from google-chrome-stable_current_i386.deb) ...
      dpkg: dependency problems prevent configuration of google-chrome-stable:i386:
      google-chrome-stable:i386 depends on xdg-utils (>= 1.0.2).
      dpkg: error processing google-chrome-stable:i386 (--install):
      dependency problems - leaving unconfigured
      Processing triggers for desktop-file-utils ...
      Processing triggers for bamfdaemon ...
      Rebuilding /usr/share/applications/bamf.index...
      Processing triggers for gnome-menus ...
      Processing triggers for man-db ...
      Errors were encountered while processing: google-chrome-stable:i386

    Could you please help me? Thanks.
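    A hedged note: the dpkg output above names the missing dependency (xdg-utils), so one usual recovery sequence - a sketch only, assuming the .deb file name from the log - is to install the dependency first, retry the package, and let apt repair anything left over:

      sudo apt-get update
      sudo apt-get install xdg-utils                          # the dependency dpkg reported missing
      sudo dpkg -i google-chrome-stable_current_i386.deb      # retry the Chrome package
      sudo apt-get install -f                                 # fix up any remaining dependencies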

    Read the article

  • Errors when installing Open Office

    - by user109036
    I followed the first set of instructions on this page to install OpenOffice: How to install Open Office? However, at the last step, which says to change the permissions (chmod) of a folder, I got an error saying that the directory does not exist. OpenOffice now appears in my Ubuntu start menu, but clicking on it does nothing. I tried a reboot. Below is what I could copy from my terminal. I am running the latest Ubuntu. I have not uninstalled LibreOffice, as suggested somewhere, because in the Ubuntu Software Centre LibreOffice appears to be made up of several components - Draw, Math, Writer and Calc - and I don't know which ones to remove (or maybe all of them?).

    After this operation, 480 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-lib all 6b24-1.11.5-0ubuntu1~12.10.1 [6,135 kB] Get:2 http://ppa.launchpad.net/upubuntu-com/office/ubuntu/ quantal/main openoffice amd64 3.4~oneiric [321 MB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ca-certificates-java all 20120721 [13.2 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main tzdata-java all 2012e-0ubuntu2 [140 kB] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main java-common all 0.43ubuntu3 [61.7 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-headless amd64 6b24-1.11.5-0ubuntu1~12.10.1 [25.4 MB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgif4 amd64 4.1.6-9.1ubuntu1 [31.3 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre amd64 6b24-1.11.5-0ubuntu1~12.10.1 [234 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java all 0.30.4-0ubuntu4 [29.8 kB] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java-jni amd64 0.30.4-0ubuntu4 [31.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xorg-sgml-doctools all 1:1.10-1 [12.0 kB] Get:12 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-core-dev all 7.0.23-1 [744 kB] Get:13 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libice-dev amd64 2:1.0.8-2 [57.6 kB] Get:14 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0 amd64 0.3-3 [3,258 B] Get:15 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0-dev amd64 0.3-3 [2,866 B] Get:16 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libsm-dev amd64 2:1.2.1-2 [19.9 kB] Get:17 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau-dev amd64 1:1.0.7-1 [10.2 kB] Get:18 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp-dev amd64 1:1.1.1-1 [26.9 kB] Get:19 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-input-dev all 2.2-1 [133 kB] Get:20 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-kb-dev all 1.0.6-2 [269 kB] Get:21 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xtrans-dev all 1.2.7-1 [84.3 kB] Get:22 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1-dev amd64 1.8.1-1ubuntu1 [82.6 kB] Get:23 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-dev amd64 2:1.5.0-1 [912 kB] Get:24 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-doc all 2:1.5.0-1 [2,460 kB] Get:25 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxt-dev amd64 1:1.1.3-1 [492 kB] Get:26 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ttf-dejavu-extra all 2.33-2ubuntu1 [3,420 kB] Get:27 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-cacao amd64 6b24-1.11.5-0ubuntu1~12.10.1 [417 kB] 
Get:28 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-jamvm amd64 6b24-1.11.5-0ubuntu1~12.10.1 [581 kB] Get:29 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx-common all 1.3-1ubuntu1.1 [617 kB] Get:30 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx amd64 1.3-1ubuntu1.1 [16.2 kB] Get:31 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jdk amd64 6b24-1.11.5-0ubuntu1~12.10.1 [11.1 MB] Fetched 374 MB in 9min 18s (671 kB/s) Extract templates from packages: 100% Selecting previously unselected package openjdk-6-jre-lib. (Reading database ... 143191 files and directories currently installed.) Unpacking openjdk-6-jre-lib (from .../openjdk-6-jre-lib_6b24-1.11.5-0ubuntu1~12.10.1_all.deb) ... Selecting previously unselected package ca-certificates-java. Unpacking ca-certificates-java (from .../ca-certificates-java_20120721_all.deb) ... Selecting previously unselected package tzdata-java. Unpacking tzdata-java (from .../tzdata-java_2012e-0ubuntu2_all.deb) ... Selecting previously unselected package java-common. Unpacking java-common (from .../java-common_0.43ubuntu3_all.deb) ... Selecting previously unselected package openjdk-6-jre-headless:amd64. Unpacking openjdk-6-jre-headless:amd64 (from .../openjdk-6-jre-headless_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libgif4:amd64. Unpacking libgif4:amd64 (from .../libgif4_4.1.6-9.1ubuntu1_amd64.deb) ... Selecting previously unselected package openjdk-6-jre:amd64. Unpacking openjdk-6-jre:amd64 (from .../openjdk-6-jre_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libatk-wrapper-java. Unpacking libatk-wrapper-java (from .../libatk-wrapper-java_0.30.4-0ubuntu4_all.deb) ... Selecting previously unselected package libatk-wrapper-java-jni:amd64. Unpacking libatk-wrapper-java-jni:amd64 (from .../libatk-wrapper-java-jni_0.30.4-0ubuntu4_amd64.deb) ... Selecting previously unselected package xorg-sgml-doctools. Unpacking xorg-sgml-doctools (from .../xorg-sgml-doctools_1%3a1.10-1_all.deb) ... Selecting previously unselected package x11proto-core-dev. Unpacking x11proto-core-dev (from .../x11proto-core-dev_7.0.23-1_all.deb) ... Selecting previously unselected package libice-dev:amd64. Unpacking libice-dev:amd64 (from .../libice-dev_2%3a1.0.8-2_amd64.deb) ... Selecting previously unselected package libpthread-stubs0:amd64. Unpacking libpthread-stubs0:amd64 (from .../libpthread-stubs0_0.3-3_amd64.deb) ... Selecting previously unselected package libpthread-stubs0-dev:amd64. Unpacking libpthread-stubs0-dev:amd64 (from .../libpthread-stubs0-dev_0.3-3_amd64.deb) ... Selecting previously unselected package libsm-dev:amd64. Unpacking libsm-dev:amd64 (from .../libsm-dev_2%3a1.2.1-2_amd64.deb) ... Selecting previously unselected package libxau-dev:amd64. Unpacking libxau-dev:amd64 (from .../libxau-dev_1%3a1.0.7-1_amd64.deb) ... Selecting previously unselected package libxdmcp-dev:amd64. Unpacking libxdmcp-dev:amd64 (from .../libxdmcp-dev_1%3a1.1.1-1_amd64.deb) ... Selecting previously unselected package x11proto-input-dev. Unpacking x11proto-input-dev (from .../x11proto-input-dev_2.2-1_all.deb) ... Selecting previously unselected package x11proto-kb-dev. Unpacking x11proto-kb-dev (from .../x11proto-kb-dev_1.0.6-2_all.deb) ... Selecting previously unselected package xtrans-dev. Unpacking xtrans-dev (from .../xtrans-dev_1.2.7-1_all.deb) ... Selecting previously unselected package libxcb1-dev:amd64. 
Unpacking libxcb1-dev:amd64 (from .../libxcb1-dev_1.8.1-1ubuntu1_amd64.deb) ... Selecting previously unselected package libx11-dev:amd64. Unpacking libx11-dev:amd64 (from .../libx11-dev_2%3a1.5.0-1_amd64.deb) ... Selecting previously unselected package libx11-doc. Unpacking libx11-doc (from .../libx11-doc_2%3a1.5.0-1_all.deb) ... Selecting previously unselected package libxt-dev:amd64. Unpacking libxt-dev:amd64 (from .../libxt-dev_1%3a1.1.3-1_amd64.deb) ... Selecting previously unselected package ttf-dejavu-extra. Unpacking ttf-dejavu-extra (from .../ttf-dejavu-extra_2.33-2ubuntu1_all.deb) ... Selecting previously unselected package icedtea-6-jre-cacao:amd64. Unpacking icedtea-6-jre-cacao:amd64 (from .../icedtea-6-jre-cacao_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-6-jre-jamvm:amd64. Unpacking icedtea-6-jre-jamvm:amd64 (from .../icedtea-6-jre-jamvm_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-netx-common. Unpacking icedtea-netx-common (from .../icedtea-netx-common_1.3-1ubuntu1.1_all.deb) ... Selecting previously unselected package icedtea-netx:amd64. Unpacking icedtea-netx:amd64 (from .../icedtea-netx_1.3-1ubuntu1.1_amd64.deb) ... Selecting previously unselected package openjdk-6-jdk:amd64. Unpacking openjdk-6-jdk:amd64 (from .../openjdk-6-jdk_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package openoffice. Unpacking openoffice (from .../openoffice_3.4~oneiric_amd64.deb) ... Processing triggers for doc-base ... Processing 2 added doc-base files... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for fontconfig ... Processing triggers for gnome-icon-theme ... Processing triggers for shared-mime-info ... Setting up tzdata-java (2012e-0ubuntu2) ... Setting up java-common (0.43ubuntu3) ... Setting up libgif4:amd64 (4.1.6-9.1ubuntu1) ... Setting up xorg-sgml-doctools (1:1.10-1) ... Setting up x11proto-core-dev (7.0.23-1) ... Setting up libice-dev:amd64 (2:1.0.8-2) ... Setting up libpthread-stubs0:amd64 (0.3-3) ... Setting up libpthread-stubs0-dev:amd64 (0.3-3) ... Setting up libsm-dev:amd64 (2:1.2.1-2) ... Setting up libxau-dev:amd64 (1:1.0.7-1) ... Setting up libxdmcp-dev:amd64 (1:1.1.1-1) ... Setting up x11proto-input-dev (2.2-1) ... Setting up x11proto-kb-dev (1.0.6-2) ... Setting up xtrans-dev (1.2.7-1) ... Setting up libxcb1-dev:amd64 (1.8.1-1ubuntu1) ... Setting up libx11-dev:amd64 (2:1.5.0-1) ... Setting up libx11-doc (2:1.5.0-1) ... Setting up libxt-dev:amd64 (1:1.1.3-1) ... Setting up ttf-dejavu-extra (2.33-2ubuntu1) ... Setting up icedtea-netx-common (1.3-1ubuntu1.1) ... Setting up openjdk-6-jre-lib (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up openjdk-6-jre-headless:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/orbd to provide /usr/bin/orbd (orbd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/servertool to provide /usr/bin/servertool (servertool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode Setting up ca-certificates-java (20120721) ... Adding debian:Deutsche_Telekom_Root_CA_2.pem Adding debian:Comodo_Trusted_Services_root.pem Adding debian:Certum_Trusted_Network_CA.pem Adding debian:thawte_Primary_Root_CA_-_G2.pem Adding debian:UTN_USERFirst_Hardware_Root_CA.pem Adding debian:AddTrust_Low-Value_Services_Root.pem Adding debian:Microsec_e-Szigno_Root_CA.pem Adding debian:SwissSign_Silver_CA_-_G2.pem Adding debian:ComSign_Secured_CA.pem Adding debian:Buypass_Class_2_CA_1.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certum_Root_CA.pem Adding debian:AddTrust_External_Root.pem Adding debian:Chambers_of_Commerce_Root_-_2008.pem Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Visa_eCommerce_Root.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_3.pem Adding debian:AC_Raíz_Certicámara_S.A..pem Adding debian:NetLock_Arany_=Class_Gold=_Fotanúsítvány.pem Adding debian:Taiwan_GRCA.pem Adding debian:Camerfirma_Chambers_of_Commerce_Root.pem Adding debian:Juur-SK.pem Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem Adding debian:XRamp_Global_CA_Root.pem Adding debian:Security_Communication_RootCA2.pem Adding debian:AddTrust_Qualified_Certificates_Root.pem Adding debian:NetLock_Qualified_=Class_QA=_Root.pem Adding debian:TC_TrustCenter_Class_2_CA_II.pem Adding debian:DST_ACES_CA_X6.pem Adding debian:thawte_Primary_Root_CA.pem Adding debian:thawte_Primary_Root_CA_-_G3.pem Adding debian:GeoTrust_Universal_CA_2.pem Adding debian:ACEDICOM_Root.pem Adding debian:Security_Communication_EV_RootCA1.pem Adding debian:America_Online_Root_Certification_Authority_2.pem Adding debian:TC_TrustCenter_Universal_CA_I.pem Adding debian:SwissSign_Platinum_CA_-_G2.pem Adding debian:Global_Chambersign_Root_-_2008.pem Adding debian:SecureSign_RootCA11.pem Adding debian:GeoTrust_Global_CA_2.pem Adding debian:Buypass_Class_3_CA_1.pem Adding debian:Baltimore_CyberTrust_Root.pem Adding debian:UbuntuOne-Go_Daddy_Class_2_CA.pem Adding debian:Equifax_Secure_eBusiness_CA_1.pem Adding debian:SwissSign_Gold_CA_-_G2.pem Adding debian:AffirmTrust_Premium_ECC.pem Adding 
debian:TC_TrustCenter_Universal_CA_III.pem Adding debian:ca.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.pem Adding debian:NetLock_Express_=Class_C=_Root.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem Adding debian:Firmaprofesional_Root_CA.pem Adding debian:Comodo_Secure_Services_root.pem Adding debian:cacert.org.pem Adding debian:GeoTrust_Primary_Certification_Authority.pem Adding debian:RSA_Security_2048_v3.pem Adding debian:Staat_der_Nederlanden_Root_CA.pem Adding debian:Cybertrust_Global_Root.pem Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem Adding debian:TDC_OCES_Root_CA.pem Adding debian:A-Trust-nQual-03.pem Adding debian:Equifax_Secure_CA.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_1.pem Adding debian:GeoTrust_Global_CA.pem Adding debian:Starfield_Class_2_CA.pem Adding debian:ApplicationCA_-_Japanese_Government.pem Adding debian:Swisscom_Root_CA_1.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Camerfirma_Global_Chambersign_Root.pem Adding debian:QuoVadis_Root_CA_3.pem Adding debian:QuoVadis_Root_CA.pem Adding debian:Comodo_AAA_Services_root.pem Adding debian:ComSign_CA.pem Adding debian:AddTrust_Public_Services_Root.pem Adding debian:DigiCert_Assured_ID_Root_CA.pem Adding debian:UTN_DATACorp_SGC_Root_CA.pem Adding debian:CA_Disig.pem Adding debian:E-Guven_Kok_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:GlobalSign_Root_CA_-_R3.pem Adding debian:QuoVadis_Root_CA_2.pem Adding debian:Entrust_Root_Certification_Authority.pem Adding debian:GTE_CyberTrust_Global_Root.pem Adding debian:ValiCert_Class_1_VA.pem Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G2.pem Adding debian:spi-ca-2003.pem Adding debian:America_Online_Root_Certification_Authority_1.pem Adding debian:AffirmTrust_Premium.pem Adding debian:Sonera_Class_1_Root_CA.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certplus_Class_2_Primary_CA.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_2.pem Adding debian:Network_Solutions_Certificate_Authority.pem Adding debian:Go_Daddy_Class_2_CA.pem Adding debian:StartCom_Certification_Authority.pem Adding debian:Hongkong_Post_Root_CA_1.pem Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem Adding debian:Thawte_Premium_Server_CA.pem Adding debian:EBG_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_1.pem Adding debian:NetLock_Business_=Class_B=_Root.pem Adding debian:Microsec_e-Szigno_Root_CA_2009.pem Adding debian:DigiCert_Global_Root_CA.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G4.pem Adding debian:IGC_A.pem Adding debian:TWCA_Root_Certification_Authority.pem Adding debian:S-TRUST_Authentication_and_Encryption_Root_CA_2005_PN.pem Adding debian:VeriSign_Universal_Root_Certification_Authority.pem Adding debian:DST_Root_CA_X3.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority.pem Adding debian:Root_CA_Generalitat_Valenciana.pem Adding debian:UTN_USERFirst_Email_Root_CA.pem Adding debian:ssl-cert-snakeoil.pem Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G3.pem Adding debian:Certinomis_-_Autorité_Racine.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority.pem 
Adding debian:TDC_Internet_Root_CA.pem Adding debian:UbuntuOne-ValiCert_Class_2_VA.pem Adding debian:AffirmTrust_Commercial.pem Adding debian:spi-cacert-2008.pem Adding debian:Izenpe.com.pem Adding debian:EC-ACC.pem Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem Adding debian:COMODO_ECC_Certification_Authority.pem Adding debian:CNNIC_ROOT.pem Adding debian:NetLock_Notary_=Class_A=_Root.pem Adding debian:Equifax_Secure_eBusiness_CA_2.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Secure_Global_CA.pem Adding debian:UbuntuOne-Go_Daddy_CA.pem Adding debian:GeoTrust_Universal_CA.pem Adding debian:Wells_Fargo_Root_CA.pem Adding debian:Thawte_Server_CA.pem Adding debian:WellsSecure_Public_Root_Certificate_Authority.pem Adding debian:TC_TrustCenter_Class_3_CA_II.pem Adding debian:COMODO_Certification_Authority.pem Adding debian:Equifax_Secure_Global_eBusiness_CA.pem Adding debian:Security_Communication_Root_CA.pem Adding debian:GlobalSign_Root_CA_-_R2.pem Adding debian:TÜBITAK_UEKAE_Kök_Sertifika_Hizmet_Saglayicisi_-_Sürüm_3.pem Adding debian:Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.pem Adding debian:certSIGN_ROOT_CA.pem Adding debian:RSA_Root_Certificate_1.pem Adding debian:ePKI_Root_Certification_Authority.pem Adding debian:Entrust.net_Secure_Server_CA.pem Adding debian:OISTE_WISeKey_Global_Root_GA_CA.pem Adding debian:Sonera_Class_2_Root_CA.pem Adding debian:Certigna.pem Adding debian:AffirmTrust_Networking.pem Adding debian:ValiCert_Class_2_VA.pem Adding debian:GlobalSign_Root_CA.pem Adding debian:Staat_der_Nederlanden_Root_CA_-_G2.pem Adding debian:SecureTrust_CA.pem done. Setting up openjdk-6-jre:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/policytool to provide /usr/bin/policytool (policytool) in auto mode Setting up libatk-wrapper-java (0.30.4-0ubuntu4) ... Setting up icedtea-6-jre-cacao:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-6-jre-jamvm:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-netx:amd64 (1.3-1ubuntu1.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode Setting up openjdk-6-jdk:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/appletviewer to provide /usr/bin/appletviewer (appletviewer) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/extcheck to provide /usr/bin/extcheck (extcheck) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/idlj to provide /usr/bin/idlj (idlj) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jar to provide /usr/bin/jar (jar) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jarsigner to provide /usr/bin/jarsigner (jarsigner) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javadoc to provide /usr/bin/javadoc (javadoc) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javah to provide /usr/bin/javah (javah) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javap to provide /usr/bin/javap (javap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jdb to provide /usr/bin/jdb (jdb) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jhat to provide /usr/bin/jhat (jhat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jinfo to provide /usr/bin/jinfo (jinfo) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jmap to provide /usr/bin/jmap (jmap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jps to provide /usr/bin/jps (jps) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jrunscript to provide /usr/bin/jrunscript (jrunscript) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jsadebugd to provide /usr/bin/jsadebugd (jsadebugd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstack to provide /usr/bin/jstack (jstack) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstat to provide /usr/bin/jstat (jstat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstatd to provide /usr/bin/jstatd (jstatd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/native2ascii to provide /usr/bin/native2ascii (native2ascii) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/rmic to provide /usr/bin/rmic (rmic) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/schemagen to provide /usr/bin/schemagen (schemagen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/serialver to provide /usr/bin/serialver (serialver) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsgen to provide /usr/bin/wsgen (wsgen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsimport to provide /usr/bin/wsimport (wsimport) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/xjc to provide /usr/bin/xjc (xjc) in auto mode Setting up openoffice (3.4~oneiric) ... Setting up libatk-wrapper-java-jni:amd64 (0.30.4-0ubuntu4) ... Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place philip@X301-2:~$ sudo apt-get install libxrandr2:i386 libxinerama1:i386 Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: linux-headers-3.5.0-17 Use 'apt-get autoremove' to remove it. The following extra packages will be installed: gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxrender1:i386 Suggested packages: glibc-doc:i386 locales:i386 The following NEW packages will be installed gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxinerama1:i386 libxrandr2:i386 libxrender1:i386 0 upgraded, 11 newly installed, 0 to remove and 93 not upgraded. Need to get 4,936 kB of archives. After this operation, 11.9 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal/main gcc-4.7-base i386 4.7.2-2ubuntu1 [15.5 kB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libc6 i386 2.15-0ubuntu20 [3,940 kB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgcc1 i386 1:4.7.2-2ubuntu1 [53.5 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau6 i386 1:1.0.7-1 [8,582 B] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp6 i386 1:1.1.1-1 [13.1 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1 i386 1.8.1-1ubuntu1 [48.7 kB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-6 i386 2:1.5.0-1 [776 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxext6 i386 2:1.3.1-2 [33.9 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxinerama1 i386 2:1.1.2-1 [8,118 B] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrender1 i386 1:0.9.7-1 [20.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrandr2 i386 2:1.4.0-1 [18.8 kB] Fetched 4,936 kB in 30s (161 kB/s) Preconfiguring packages ... Selecting previously unselected package gcc-4.7-base:i386. (Reading database ... 146005 files and directories currently installed.) Unpacking gcc-4.7-base:i386 (from .../gcc-4.7-base_4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libc6:i386. Unpacking libc6:i386 (from .../libc6_2.15-0ubuntu20_i386.deb) ... Selecting previously unselected package libgcc1:i386. Unpacking libgcc1:i386 (from .../libgcc1_1%3a4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libxau6:i386. Unpacking libxau6:i386 (from .../libxau6_1%3a1.0.7-1_i386.deb) ... Selecting previously unselected package libxdmcp6:i386. Unpacking libxdmcp6:i386 (from .../libxdmcp6_1%3a1.1.1-1_i386.deb) ... Selecting previously unselected package libxcb1:i386. Unpacking libxcb1:i386 (from .../libxcb1_1.8.1-1ubuntu1_i386.deb) ... Selecting previously unselected package libx11-6:i386. Unpacking libx11-6:i386 (from .../libx11-6_2%3a1.5.0-1_i386.deb) ... Selecting previously unselected package libxext6:i386. Unpacking libxext6:i386 (from .../libxext6_2%3a1.3.1-2_i386.deb) ... Selecting previously unselected package libxinerama1:i386. Unpacking libxinerama1:i386 (from .../libxinerama1_2%3a1.1.2-1_i386.deb) ... Selecting previously unselected package libxrender1:i386. Unpacking libxrender1:i386 (from .../libxrender1_1%3a0.9.7-1_i386.deb) ... Selecting previously unselected package libxrandr2:i386. Unpacking libxrandr2:i386 (from .../libxrandr2_2%3a1.4.0-1_i386.deb) ... 
Setting up gcc-4.7-base:i386 (4.7.2-2ubuntu1) ... Setting up libc6:i386 (2.15-0ubuntu20) ... Setting up libgcc1:i386 (1:4.7.2-2ubuntu1) ... Setting up libxau6:i386 (1:1.0.7-1) ... Setting up libxdmcp6:i386 (1:1.1.1-1) ... Setting up libxcb1:i386 (1.8.1-1ubuntu1) ... Setting up libx11-6:i386 (2:1.5.0-1) ... Setting up libxext6:i386 (2:1.3.1-2) ... Setting up libxinerama1:i386 (2:1.1.2-1) ... Setting up libxrender1:i386 (1:0.9.7-1) ... Setting up libxrandr2:i386 (2:1.4.0-1) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place $ sudo chmod a+rx /opt/openoffice.org3/share/uno_packages/cache/uno_packages chmod: cannot access `/opt/openoffice.org3/share/uno_packages/cache/uno_packages': No such file or directory
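    A hedged note on the failing chmod: the path /opt/openoffice.org3/... evidently does not exist on this system, so the PPA package likely installed to a different location. A sketch for locating the actual uno_packages directory, assuming the package is named "openoffice" as the download log above suggests:

      dpkg -L openoffice | grep -i uno_packages               # list the files this package installed
      sudo find /opt /usr -type d -name uno_packages 2>/dev/null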

    Read the article

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2). We use that table to help us build aggregations, and we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database, and even regular data and aggregation processing, can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:

      CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
      INSERT INTO dbo.[OlapQueryLog_History]
          (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
      SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
      FROM inserted

    Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging. It doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - not an hour, a day, a week, or even a month later - so long as the service doesn't restart.

    That has burned us a couple of times, when we have made changes to the service account that is used for SSAS and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

    Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered simply restarting SSAS to get it logging again. However, this was a production server, it was in the middle of business hours, and there were active users connecting to that SSAS instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away.

    If I need to get this running again in the future, I can just make a small change to the Application Name property again, save it, and even change it back if I want to. The other nice side effect of setting the Application Name property is that now I can see (and filter for, or filter out) the SQL activity related to the query logging process in Profiler.

    To sum up:
    - The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows into the QueryLog table.
    - Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart, and you won't have to restart the service.
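    A hedged addition: the trigger above targets a dbo.OlapQueryLog_History table whose definition the post does not show. A minimal sketch of a compatible table, with column types guessed to match a typical OlapQueryLog schema (verify against your own QueryLog table before using):

      CREATE TABLE dbo.OlapQueryLog_History (
          MSOLAP_Database   nvarchar(255)  NULL,  -- types are assumptions, not from the post
          MSOLAP_ObjectPath nvarchar(4000) NULL,
          MSOLAP_User       nvarchar(255)  NULL,
          Dataset           nvarchar(4000) NULL,
          StartTime         datetime       NULL,
          Duration          bigint         NULL
      );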

    Read the article

  • C# Proposal: Compile Time Static Checking Of Dynamic Objects

    - by Paulo Morgado
    C# 4.0 introduces a new type: dynamic. dynamic is a static type that bypasses static type checking. This new type comes in very handy for working with: the new languages on the dynamic language runtime, the HTML Document Object Model (DOM), COM objects, duck typing, and more. Because static type checking is bypassed, this:

      dynamic dynamicValue = GetValue();
      dynamicValue.Method();

    is equivalent to this:

      object objectValue = GetValue();
      objectValue.GetType().InvokeMember("Method", BindingFlags.InvokeMethod, null, objectValue, null);

    Apart from caching the call site behind the scenes and some dynamic resolution, dynamic only looks better. Any typing error will only be caught at run time. In fact, if I'm writing the code, I know the contract of what I'm calling. Wouldn't it be nice to have the compiler do some static type checking on the interactions with these dynamic objects? Imagine that the dynamic object I'm retrieving from the GetValue method, besides the parameterless method Method, also has a read-only string property named Property. This means that, from the point of view of the code I'm writing, the contract implemented by the dynamic object returned by GetValue is:

      string Property { get; }
      void Method();

    Since it's a well-defined contract, I could write an interface to represent it:

      interface IValue { string Property { get; } void Method(); }

    If dynamic allowed us to specify the contract in the form of dynamic(contract), I could write this:

      dynamic(IValue) dynamicValue = GetValue();
      dynamicValue.Method();

    This doesn't mean that the value returned by GetValue has to implement the IValue interface. It just enables the compiler to verify that dynamicValue.Method() is a valid use of dynamicValue and dynamicValue.OtherMethod() isn't. If the IValue interface already existed for some other reason, this would be fine. But having a type added to an assembly just for compile-time usage doesn't seem right. So, dynamic could be another type construct. Something like this:

      dynamic DValue { string Property { get; } void Method(); }

    The code could now be written like this:

      DValue dynamicValue = GetValue();
      dynamicValue.Method();

    The compiler would never generate any IL or metadata for this new type construct. It would only be used for compile-time static checking of dynamic objects. As a consequence, it makes no sense for it to have public accessibility, so that would not be allowed. Once again, if the IValue interface (or any other type definition) already exists, it can be used in the dynamic type definition:

      dynamic DValue : IValue, IEnumerable, SomeClass { string Property { get; } void Method(); }

    Another added benefit would be IntelliSense. I've been getting mixed reactions to this proposal. What do you think? Would this be useful?
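    A hedged illustration of what can be done today, under the current compiler: wrapping the dynamic value in a hand-written adapter that implements the IValue contract gives similar compile-time checking, at the cost of writing the adapter yourself. The adapter name below is mine, not part of the proposal:

      // Sketch: confine the dynamic calls to one class; the rest of the
      // code programs against IValue and gets full static checking.
      interface IValue { string Property { get; } void Method(); }

      class DynamicValueAdapter : IValue
      {
          private readonly dynamic inner;
          public DynamicValueAdapter(dynamic inner) { this.inner = inner; }
          public string Property { get { return inner.Property; } }
          public void Method() { inner.Method(); }
      }

      // Usage:
      // IValue value = new DynamicValueAdapter(GetValue());
      // value.Method();         // checked at compile time
      // value.OtherMethod();    // compile error, as desired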

    Read the article

  • ESB Toolkit 2.0 EndPointConfig (HTTPS with WCF-BasicHttp and the ESB Toolkit 2.0)

    - by Andy Morrison
    Earlier this week I had an ESB endpoint (Off-Ramp in ESB parlance) that I was sending to over http using WCF-BasicHttp. I needed to switch the protocol to https, which I did by changing my UDDI binding over to https. No problem from a management perspective; however, when I tried to run the process I saw this exception:

      Event Type: Error
      Event Source: BizTalk Server 2009
      Event Category: BizTalk Server 2009
      Event ID: 5754
      Date: 3/10/2010
      Time: 2:58:23 PM
      User: N/A
      Computer: XXXXXXXXX
      Description: A message sent to adapter "WCF-BasicHttp" on send port "SPDynamic.XXX.SR" with URI "https://XXXXXXXXX.com/XXXXXXX/whatever.asmx" is suspended.
      Error details: System.ArgumentException: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via
      at System.ServiceModel.Channels.TransportChannelFactory`1.ValidateScheme(Uri via)
      at System.ServiceModel.Channels.HttpChannelFactory.ValidateCreateChannelParameters(EndpointAddress remoteAddress, Uri via)
      at System.ServiceModel.Channels.HttpChannelFactory.OnCreateChannel(EndpointAddress remoteAddress, Uri via)
      at System.ServiceModel.Channels.ChannelFactoryBase`1.InternalCreateChannel(EndpointAddress address, Uri via)
      at System.ServiceModel.Channels.ChannelFactoryBase`1.CreateChannel(EndpointAddress address, Uri via)
      at System.ServiceModel.Channels.ServiceChannelFactory.ServiceChannelFactoryOverRequest.CreateInnerChannelBinder(EndpointAddress to, Uri via)
      at System.ServiceModel.Channels.ServiceChannelFactory.CreateServiceChannel(EndpointAddress address, Uri via)
      at System.ServiceModel.Channels.ServiceChannelFactory.CreateChannel(Type channelType, EndpointAddress address, Uri via)
      at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)
      at System.ServiceModel.ChannelFactory`1.CreateChannel()
      at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.GetChannel[TChannel](IBaseMessage bizTalkMessage, ChannelFactory`1& cachedFactory)
      at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.SendMessage(IBaseMessage bizTalkMessage)
      MessageId: {1170F4ED-550F-4F7E-B0E0-1EE92A25AB10}
      InstanceID: {1640C6C6-CA9C-4746-AEB0-584FDF7BB61E}

    I knew from a previous experience that I likely needed to set the SecurityMode setting for my send port. But how do you do this for a dynamic port (which I was using, since this is an ESB solution)? Within the UDDI portal you have to add an additional Instance Info to your binding, named EndPointConfig, and set its value to SecurityMode=Transport, like this:

    The EndPointConfig is how the ESB Toolkit 2.0 provides extensibility for the various transports. To see what the key-value pair options are for a given transport, open up an itinerary and change one of your resolvers to a "static" resolver by setting the "Resolver Implementation" to Static. Then select a "Transport Name", for instance WCF-BasicHttp. At this point you can click on the "EndPoint Configuration" property to see an adapter/ramp-specific properties dialog (key-value pairs). Here's the dialog that popped up for WCF-BasicHttp:

    I simply set the SecurityMode to Transport. Please note that you will get different properties within the window depending on the Transport Name you select for the resolver.
    When you are done with your settings, export the itinerary to disk and find that XML; then find that resolver's XML within the file. It will look like endpointConfig=SecurityMode=Transport in this case. Note that if you set additional properties, you will have additional key-value pairs after endpointConfig=. Copy that string and paste it into the UDDI portal as your binding's EndPointConfig Instance Info value.

    Read the article

  • Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

    - by Argenis
    Today Jeffrey Langdon (@jlangdon) posed the following questions on #SQLHelp: So I set out to answer his questions, and I said to myself: "Hey, I haven't blogged in a while, how about I blog about this particular topic?" Thus, this post was born. (If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal.)

    1) Do ghost records get copied over in a backup?

    If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk - it doesn't crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents. Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately, but only when another DML operation comes along and reuses them. As a matter of fact, all of the allocated space for a database will be included in a full backup.

    So, this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you've purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space in your data file(s) might still contain sensitive data! A SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles: you might _STILL_ have (not so) empty space in your files! One approach you can follow is to export all of the data in your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though, so you have to weigh your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind.

    2) Does compression get rid of ghost records (2008)?

    The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this by running a simple script:

      CREATE DATABASE GhostRecordsTest
      GO
      USE GhostRecordsTest
      GO
      CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                            myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')
      ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
      GO
      INSERT INTO myTable DEFAULT VALUES
      GO 10
      DELETE myTable WHERE myPrimaryKey % 2 = 0
      DBCC TRACEON(2514)
      DBCC CHECKTABLE(myTable)

    TraceFlag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information in its output. For the above script: "Ghost Record count = 5"

    Until next time,

    -Argenis
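    A hedged follow-up to the script: ghost counts can also be watched directly. sys.dm_db_index_physical_stats exposes ghost_record_count (and version_ghost_record_count) when sampled in DETAILED mode - a quick sketch, run inside GhostRecordsTest so OBJECT_ID resolves:

      SELECT index_id, page_count, ghost_record_count, version_ghost_record_count
      FROM sys.dm_db_index_physical_stats(
               DB_ID('GhostRecordsTest'), OBJECT_ID('dbo.myTable'),
               NULL, NULL, 'DETAILED');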

    Read the article

  • Don't Forget! In-Memory Databases are Hot

    - by andrewbrust
    If you're left scratching your head over SAP's intention to acquire Sybase for almost $6 billion, you're not alone. Despite Sybase's 1990s reign as the supreme database standard in certain sectors (including Wall Street), the company's flagship product has certainly fallen from grace. Why would SAP pay a greater than 50% premium over Sybase's closing price on the day of the announcement just to acquire a relational database which is firmly stuck in maintenance mode?

    Well, there's more to Sybase than the relational database product. Take, for example, its mobile application platform. It hit Gartner's "Leaders' Quadrant" in January of last year, and SAP needs a good mobile play. Beyond the platform itself, Sybase has a slew of mobile services; click this link to look them over.

    There's a second major asset Sybase has, though, and I wonder if it figured prominently into SAP's bid: Sybase IQ. Sybase IQ is a columnar database. Columnar databases place values from a given database column contiguously, unlike conventional relational databases, which store all of a row's data in close proximity. Storing column values together works well in aggregation reporting scenarios, because the figures to be aggregated can be scanned in one efficient step. It also makes for high rates of compression, because values from a single column tend to be close to each other in magnitude and may contain long sequences of repeating values. Highly compressible databases use much less disk storage and can be largely or wholly loaded into memory, resulting in lightning-fast query performance. For an ERP company like SAP, with its own legacy BI platform (SAP BW) and the entire range of Business Objects and Crystal Reports BI products (which it acquired in 2007), query performance is extremely important.

    And it's a competitive necessity too. QlikTech has built an entire company on a columnar, in-memory BI product (QlikView). So too has startup company Vertica. IBM's TM1 product has been doing in-memory OLAP for years. And guess who else has the in-memory religion? Microsoft does, in the form of its new PowerPivot product. I expect the technology in PowerPivot to become strategic to the full-blown SQL Server Analysis Services product and the entire Microsoft BI stack. I sure don't blame SAP for jumping on the in-memory bandwagon, if indeed the Sybase acquisition is, at least in part, motivated by that.

    It will be interesting to watch and see what SAP does with Sybase's product line-up (assuming the acquisition closes), including the core database, the mobile platform, IQ, and even tools like PowerBuilder. It is also fascinating to watch columnar's encroachment on relational. Perhaps this acquisition will be columnar's tipping point, and people will no longer see it as a fad. Are you listening, Larry Ellison?

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are there in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much of a table's data is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and they are used in a query), is the data of that single table stored multiple times in the memory cache, or only once? Here is a query you can run to figure out what kind of data is stored in the cache.

      USE AdventureWorks
      GO
      SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc
      FROM sys.dm_os_buffer_descriptors AS bd
      INNER JOIN (
          SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID,
                 i.name IndexName, i.type_desc IndexTypeDesc
          FROM (
              SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
              FROM sys.allocation_units AS au
              INNER JOIN sys.partitions AS p
                  ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
              UNION ALL
              SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
              FROM sys.allocation_units AS au
              INNER JOIN sys.partitions AS p
                  ON au.container_id = p.partition_id AND au.type = 2
          ) AS s_obj
          LEFT JOIN sys.indexes i
              ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
      ) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id
      WHERE database_id = DB_ID()
      GROUP BY name, index_id, IndexName, IndexTypeDesc
      ORDER BY cached_pages_count DESC;
      GO

    Now let us run the query above and observe the output. We can see that the output has four columns. Cached_Pages_Count lists the pages cached in memory. BaseTableName lists the original base table from which the data pages are cached. IndexName lists the name of the index from which the pages are cached. IndexTypeDesc lists the type of the index.

    Now, let us do one more experiment here. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database.

      DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers so we will be able to start again from there. Now run the following script and check its execution plan.

      USE AdventureWorks
      GO
      SELECT UnitPrice, ModifiedDate
      FROM Sales.SalesOrderDetail
      WHERE SalesOrderDetailID BETWEEN 1 AND 100
      GO

    The execution plan shows the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It gives us the following output. It is clear from the resultset that when more than one index is used, the data pages related to both (or all) of the indexes are stored in the memory cache separately.

    Let me know what you think of this article. I had great pleasure while writing it, because I was able to write on the subject I like most. In the next article, we will see exactly which data are cached and which are not, using a few undocumented commands.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
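    A hedged footnote on DBCC DROPCLEANBUFFERS: it only evicts clean pages, so pages dirtied by recent modifications stay in cache. On a test system, issuing a CHECKPOINT first flushes dirty pages to disk so the subsequent drop empties the cache completely:

      -- Test systems only
      CHECKPOINT;
      DBCC DROPCLEANBUFFERS;

      -- A related sketch: total buffer pool usage per database (8 KB pages)
      SELECT DB_NAME(database_id) AS database_name,
             COUNT(*) * 8 / 1024 AS cached_mb
      FROM sys.dm_os_buffer_descriptors
      GROUP BY database_id
      ORDER BY cached_mb DESC;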

    Read the article

  • An increase to 3 Gig of RAM slows down Ubuntu 10.04 LTS

    - by williepabon
    I have Ubuntu 10.04 running from an external hard drive (installed in an enclosure) connected via a USB port. About a month ago, I increased the RAM on my PC from 2 GB to 3 GB. This resulted in extremely long boot times and slow application loads. While trying to understand the nature of my problem, I posted various threads on this forum (questions #188417 and #188801), where I was advised to gather speed tests and other info on my machine. It was also suggested that I might have problems with the installed RAM. Initially, I did not consider that possibility because:

    1) I did a memory test with a diagnostic program from Dell (my PC is from Dell)
    2) My PC works fine with Windows XP (the default OS); no problems with memory
    3) My PC works fine when booting Ubuntu 10.10 from a memory stick; no speed problems
    4) My PC works fine when booting Ubuntu 11.10 from a memory stick; no speed problems

    Anyway, I performed the memory tests suggested. But before doing so, and to rule out any hardware issues with the hard drive, I did the following: (1) purchased a new hard drive enclosure and moved my hard drive to it, and (2) purchased a new USB cable and used it to connect my hard drive/enclosure setup to a different USB port on my PC. Then I performed speed tests with 1 GB, 2 GB and 3 GB of RAM under Ubuntu 10.04. Ubuntu 10.04 worked well when booted with 1 GB or 2 GB of RAM. When I increased to 3 GB, it slowed down to a crawl. I can't understand the relationship between an increase of 1 GB and the effect it has on Ubuntu 10.04. This doesn't happen with Ubuntu 10.10 or 11.10. Unfortunately for me, Ubuntu 10.04 is my principal work operating system, so I need a solution for this.

    Hardware and system information:
    DELL Precision 670
    2 internal SATA hard drives
    Audigy 2 ZS audio system
    Factory OS: Windows XP Professional SP3
    NVidia 8400 GTS video card

    More info:

      williepabon@WP-WrkStation:~$ uname -a
      Linux WP-WrkStation 2.6.32-38-generic #83-Ubuntu SMP Wed Jan 4 11:13:04 UTC 2012 i686 GNU/Linux
      williepabon@WP-WrkStation:~$ lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description: Ubuntu 10.04.4 LTS
      Release: 10.04
      Codename: lucid

    Speed test with the 3 GB of RAM installed:

      williepabon@WP-WrkStation:~$ sudo hdparm -tT /dev/sdc
      [sudo] password for williepabon:
      /dev/sdc:
      Timing cached reads: 84 MB in 2.00 seconds = 41.96 MB/sec
      Timing buffered disk reads: 4 MB in 3.81 seconds = 1.05 MB/sec

    This is a very slow transfer rate from a hard drive. I will really appreciate a solution or a workaround for this problem. I know that there are users that have Ubuntu 10.04 with 3 GB or more of RAM and they don't have this problem. Same question asked on Launchpad for reference.
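    A hedged diagnostic, not from the original question: two quick checks that could narrow this down. First, confirm the USB link did not renegotiate down to full speed (1.05 MB/sec buffered reads is in USB 1.1 territory); second, boot once with the kernel's visible memory capped at 2 GB to confirm the RAM-size correlation without opening the case:

      dmesg | grep -i usb      # look for "full speed" vs "high speed" on the enclosure
      free -m                  # how much RAM the kernel actually sees

      # In GRUB, press 'e' on the boot entry and append to the kernel line:
      #   mem=2048M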

    Read the article

  • Web.Config is Cached

    - by SGWellens
    There was a question from a student over on the ASP.NET forums about improving site performance. The concern was that every time an app setting was read from the Web.Config file, the disk would be accessed. With many app settings and many users, it was believed performance would suffer. Their intent was to create a class to hold all the settings, instantiate it, and fill it from the Web.Config file on startup. Then, all the settings would be in RAM.

    I knew this was not correct and didn't want to just say so without any corroboration, so I did some searching. Surprisingly, this is a common misconception. I found other code postings that cached the app settings from Web.Config. Many people even thanked the posters for the code. In a later post, the student said their textbook recommended caching the Web.Config file.

    OK, here's the deal. The Web.Config file is already cached. You do not need to re-cache it. From this article http://msdn.microsoft.com/en-us/library/aa478432.aspx :

    "It is important to realize that the entire <appSettings> section is read, parsed, and cached the first time we retrieve a setting value. From that point forward, all requests for setting values come from an in-memory cache, so access is quite fast and doesn't incur any subsequent overhead for accessing the file or parsing the XML."

    The reason the misconception is prevalent may be because it's hard to search for Web.Config and cache without getting a lot of hits on how to set up caching in the Web.Config file. So here's a string for search engines to index on: "Is the Web.Config file Cached?"

    A follow-up question was: are the connection strings cached? Yes. From http://msdn.microsoft.com/en-us/library/ms178683.aspx :

    "At run time, ASP.NET uses the Web.Config files to hierarchically compute a unique collection of configuration settings for each incoming URL request. These settings are calculated only once and then cached on the server."

    And, as everyone should know, if you modify the Web.Config file, the web application will restart. I hope this helps people to NOT write code!

    Steve Wellens
    CodeProject
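    A minimal sketch of the point, for completeness: reading settings through ConfigurationManager goes against ASP.NET's in-memory configuration cache, so no per-read disk access happens and no hand-rolled cache class is needed. The key names here are hypothetical, for illustration only:

      using System.Configuration;
      // ...
      // Both reads come from the in-memory configuration cache, not from disk.
      string smtpServer = ConfigurationManager.AppSettings["SmtpServer"];
      string mainConnStr = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;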

    Read the article

  • Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

    - by Argenis
      Today Jeffrey Langdon (@jlangdon) posed the following questions on #SQLHelp, so I set out to answer them, and I said to myself: "Hey, I haven't blogged in a while, how about I blog about this particular topic?" Thus, this post was born. (If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal.) 1) Do ghost records get copied over in a backup? If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk – it doesn't crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents. Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately, but only when another DML operation comes along and reuses them. As a matter of fact, all of the allocated space for a database will be included in a full backup. So this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you've purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space on your data file(s) might still contain sensitive data! A SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles: you might STILL have (not so) empty space in your files! One approach you can follow is to export all of the data in your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though, so you have to weigh your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind. 2) Does compression get rid of ghost records (2008)? The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this by running a simple script:

      CREATE DATABASE GhostRecordsTest
      GO
      USE GhostRecordsTest
      GO
      CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                            myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')
      ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
      GO
      INSERT INTO myTable DEFAULT VALUES
      GO 10
      DELETE myTable WHERE myPrimaryKey % 2 = 0
      DBCC TRACEON(2514)
      DBCC CHECKTABLE(myTable)

      TraceFlag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information in its output; for the above script: "Ghost Record count = 5". Until next time, -Argenis
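      As a rough illustration of the index-rebuild step mentioned above, here is a minimal T-SQL sketch (the table name is just the one from the demo script); it rebuilds every index on the table completely full, so previously deleted slots get overwritten:

      -- Rebuild all indexes on the table, packing pages to 100%
      ALTER INDEX ALL ON dbo.myTable
      REBUILD WITH (FILLFACTOR = 100);

      Remember the caveat above, though: this repacks the pages, but the space those pages vacate in the data file is not scrubbed.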

    Read the article

  • Software development stack 2012

    A couple of months ago, I posted on Google+ about my evaluation period for a new software development stack in general: "Analysing the existing 'jungle' of multiple applications and tools in various languages for clarification and future design decisions. Great fun and lots of headaches... #DevelopersLife" Surprisingly, there was response... ;-) - and this series of articles was initiated by that post. Thanks Olaf.

    The past few years... Well, after all, my first choice for software development in the past was Microsoft Visual FoxPro 6.0 - 9.0 in combination with Microsoft SQL Server 2000 - 2008 and Crystal Reports 9.x - XI. Honestly, it is still my main working environment due to existing maintenance and support plans with my customers, but also for new project requests. And... hands on, it is still my first choice for data manipulation and migration options. But the earth is spinning, and as a software craftsman one has to be flexible with the choice of tools. In parallel to my knowledge and expertise in the above-mentioned tools, I started very early to get my hands dirty with the Microsoft .NET Framework. If I remember correctly, I started back in 2002/2003 with the first version ever, but this was more out of curiosity. Over the years this kind of development got more serious and demanding, and I focused on interop and integration libraries and applications - mainly to expose existing features of the .NET Framework to Visual FoxPro. I even had a session about that at the German Developer's Conference in Frankfurt.

    Observation of recent developments

    With the recent hype on Javascript and HTML5, especially for Windows 8 and Windows Phone 8 development, I had several 'déjà vu' events... Back in early 2006 (roughly) I had a conversation on the future of web and desktop development with my former colleagues Golo Roden and Thomas Wilting about the underestimation of Javascript and its roots as a prototype-based, dynamic, full-featured programming language. During this talk I took the Mozilla applications, namely Firefox and Thunderbird, as a reference: they are mainly based on XML, CSS, Javascript and images - besides the core rendering engine - and it is very simple to write your own extensions for the Gecko rendering engine. A look at the Windows Vista Sidebar widgets just underlines this kind of usage. So, yes, the 'Modern UI' of Windows 8 based on HTML5, CSS3 and Javascript didn't come as any surprise to me. Just allow me to ask why it took Microsoft so long to come up with this step?

    A new set of tools

    Ok, coming from web development in HTML 4, CSS and Javascript prior to Visual FoxPro, I am partly going back to that combination of technologies. What is the other part of the software development stack here at IOS Indian Ocean Software Ltd? Frankly, it is easy and straightforward to describe:
    - Microsoft Visual FoxPro 9.0 SP2 - still going strong!
    - Visual Studio 2012 (C# on the latest .NET Framework)
    - MonoDevelop
    - Telerik DevCraft Suite (WPF, ASP.NET MVC, Windows 8, Kendo UI, OpenAccess ORM, Reporting, JustCode)
    - CODE Framework by EPS Software
    - MonoTouch and Mono for Android
    - Subversion
    - additional tools for the daily routine: Notepad++, JustCode, SQL Compare, DiffMerge, VMware, etc.
    - following the principles of Clean Code Developer and the Agile Manifesto

    Actually, there is nothing special about this combination, but rather a solid foundation to work with and create line-of-business applications for customers. Honestly, I am really interested in your choice of 'weapons' for software development, and hopefully there might be some nice conversations in the comment section. Over the coming days/weeks I'm going to describe in a little more detail the reasons for my decision. Articles will be added bit by bit here as reference, too. Please bear with me... Regards, JoKi

    Read the article

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross-references the type names from the user interface with the SDK class, and also the finder used to get a handle on the object or objects. The volume of content in the SDK might seem a little ominous - there is a lot there - but there is a general pattern to the SDK that I will describe here. I will also illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in groovy; you can simply run them from the groovy console in ODI 11.1.1.6.

    Entry to the Platform (Object / Finder / SDK):
    odiInstance / odiInstance (groovy variable for console) / OdiInstance

    Topology Objects (Object / Finder / SDK):
    Technology / IOdiTechnologyFinder / OdiTechnology
    Context / IOdiContextFinder / OdiContext
    Logical Schema / IOdiLogicalSchemaFinder / OdiLogicalSchema
    Data Server / IOdiDataServerFinder / OdiDataServer
    Physical Schema / IOdiPhysicalSchemaFinder / OdiPhysicalSchema
    Logical Schema to Physical Mapping / IOdiContextualSchemaMappingFinder / OdiContextualSchemaMapping
    Logical Agent / IOdiLogicalAgentFinder / OdiLogicalAgent
    Physical Agent / IOdiPhysicalAgentFinder / OdiPhysicalAgent
    Logical Agent to Physical Mapping / IOdiContextualAgentMappingFinder / OdiContextualAgentMapping
    Master Repository / IOdiMasterRepositoryInfoFinder / OdiMasterRepositoryInfo
    Work Repository / IOdiWorkRepositoryInfoFinder / OdiWorkRepositoryInfo

    Project Objects (Object / Finder / SDK):
    Project / IOdiProjectFinder / OdiProject
    Folder / IOdiFolderFinder / OdiFolder
    Interface / IOdiInterfaceFinder / OdiInterface
    Package / IOdiPackageFinder / OdiPackage
    Procedure / IOdiUserProcedureFinder / OdiUserProcedure
    User Function / IOdiUserFunctionFinder / OdiUserFunction
    Variable / IOdiVariableFinder / OdiVariable
    Sequence / IOdiSequenceFinder / OdiSequence
    KM / IOdiKMFinder / OdiKM

    Load Plans and Scenarios (Object / Finder / SDK):
    Load Plan / IOdiLoadPlanFinder / OdiLoadPlan
    Load Plan and Scenario Folder / IOdiScenarioFolderFinder / OdiScenarioFolder

    Model Objects (Object / Finder / SDK):
    Model / IOdiModelFinder / OdiModel
    Sub Model / IOdiSubModel / OdiSubModel
    DataStore / IOdiDataStoreFinder / OdiDataStore
    Column / IOdiColumnFinder / OdiColumn
    Key / IOdiKeyFinder / OdiKey
    Condition / IOdiConditionFinder / OdiCondition

    Operator Objects (Object / Finder / SDK):
    Session Folder / IOdiSessionFolderFinder / OdiSessionFolder
    Session / IOdiSessionFinder / OdiSession
    Schedule / - / OdiSchedule

    How to Create an Object? Here is a simple example to create a project; it uses IOdiEntityManager.persist to persist the object.

    import oracle.odi.domain.project.OdiProject;
    import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

    txnDef = new DefaultTransactionDefinition();
    tm = odiInstance.getTransactionManager()
    txnStatus = tm.getTransaction(txnDef)
    project = new OdiProject("Project For Demo", "PROJECT_DEMO")
    odiInstance.getTransactionalEntityManager().persist(project)
    tm.commit(txnStatus)

    How to Update an Object? This update example uses the methods on the OdiProject object to change the name of the project created above; it is then persisted.

    import oracle.odi.domain.project.OdiProject;
    import oracle.odi.domain.project.finder.IOdiProjectFinder;
    import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

    txnDef = new DefaultTransactionDefinition();
    tm = odiInstance.getTransactionManager()
    txnStatus = tm.getTransaction(txnDef)
    prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
    project = prjFinder.findByCode("PROJECT_DEMO");
    project.setName("A Demo Project");
    odiInstance.getTransactionalEntityManager().persist(project)
    tm.commit(txnStatus)

    How to Delete an Object? Here is a simple example to delete all of the sessions; it uses IOdiEntityManager.remove to delete each object.

    import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder;
    import oracle.odi.domain.runtime.session.OdiSession;
    import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

    txnDef = new DefaultTransactionDefinition();
    tm = odiInstance.getTransactionManager()
    txnStatus = tm.getTransaction(txnDef)
    sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class);
    sessc = sessFinder.findAll();
    sessItr = sessc.iterator()
    while (sessItr.hasNext()) {
      sess = (OdiSession) sessItr.next()
      odiInstance.getTransactionalEntityManager().remove(sess)
    }
    tm.commit(txnStatus)

    This isn't an all-encompassing summary of the SDK, but it covers a lot of the content to give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, it's best to look at postings such as the interface builder posting here. Have fun, happy coding!
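    Following the same pattern, here is a minimal read-only sketch that cross-checks the Project row of the table above; it assumes only what the examples above already use (the console's odiInstance, the finder lookup, and the findAll shown in the delete example), plus standard getCode/getName bean getters matching the setName used earlier:

    import oracle.odi.domain.project.OdiProject;
    import oracle.odi.domain.project.finder.IOdiProjectFinder;

    // Look up the finder, enumerate all projects, and print code/name pairs
    prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
    projects = prjFinder.findAll();
    projItr = projects.iterator()
    while (projItr.hasNext()) {
      p = (OdiProject) projItr.next()
      println(p.getCode() + " : " + p.getName())
    }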

    Read the article

  • Cost justification for buying a 32GB superfast Alienware M18x with a price tag of around £5K ($10K)

    - by tonyrogerson
    When considering buying a laptop that's going to cost me around £5,000 I really need to justify the purchase from a business perspective. My Lenovo W700 has served me very well for the last 2 years - it's an extremely good machine, as solid as a rock (and as heavy) - alas, it is limited to 8GB. As SQL Server 2012 approaches, and with my interest in working in the Business Intelligence space over the next year or two, it is clear I need a powerful machine on which I can run a full infrastructure, virtualised.

    My requirements. For High Availability / Disaster Recovery research and demonstration:
    - a machine for a domain controller
    - four machines in a shared disk cluster (SQL Server Clustering, active-active etc.)
    - five machines in a file share cluster (SQL Server Availability Groups)
    For Business Intelligence research and demonstration:
    - not entirely sure how many machines I want to run here, but it would be to cover the entire BI stack in an enterprise setting: SharePoint, SQL Server etc.
    For Big Data research:
    - I have a fondness for the NoSQL approach to scalability and dealing with large volumes, so I need a number of machines to research VoltDB, Hadoop etc.

    As you can see, the requirements for a SQL Server consultant to service their clients well are considerable; will 8GB suffice? Alas no, it will no longer do. I'm a very strong believer that in order to do your job well you must expense it; short cuts only cost you time. Waiting 5 minutes instead of an hour for something to run not only saves me time but my clients' time - I can do things quicker and, more importantly, I can demonstrate concepts. My W700 with 8GB of RAM and SSDs cost me around £3.5K two years ago; to be honest I've not got the full use I wanted out of it, but the machine has had the power when I've needed it - it's served me and my clients well. Alienware now do a model (the M18x) with 32GB of RAM; yes, 32GB in a laptop! Dual drives so I can whack a couple of really good SSDs in there, and a decent-speed quad-core i7 with hyper-threading. I can reduce the cost of the memory by getting it from Crucial - instead of £1.5K for 32GB it will be around £900 - and I can cost-save on the SSDs as well. The beauty of the M18x is that it has USB 3.0, SATA 3 and, really importantly, eSATA; running VMs will never be easier. I can have a removable SSD with my VMs on it and plug it into my home machine or laptop - an ideal world! The initial outlay of £5K is peanuts compared to the benefits I'll give my clients: I will be able to present real enterprise concepts, and I'll also be able to give training on those concepts with real, albeit virtualised, machines.

    Read the article

  • Fusion CRM ISV program is gaining weight: Examples of certified add-ons

    - by Richard Lefebvre
    The Fusion CRM ISV program is gaining traction. Please find below a few examples of partners having certified their add-ons to work seamlessly on top of Oracle Fusion CRM. For more information, please contact [email protected]
    - Opportunity-to-Quote. Big Machines now integrates seamlessly with Oracle Fusion CRM, enabling customers with complex products and services and multiple sales channels to streamline the entire opportunity-to-quote process, including product selection, configuration, pricing, quoting, and approval workflows. Create a custom hyperlink in the Opportunity to invoke the Big Machines CPQ application to create a quote and sync up with the Fusion CRM custom quote object using the CRUD operations. The quote can be updated using the custom button in the custom tab in the opportunity details. See: http://www.bigmachines.com/oracle.php
    - SaaS Billing and Subscription Management. Is your prospect/customer asking whether top billing partners support Fusion CRM? Positioning an integrated CRM solution for billing usage and subscription-based services? Need to implement a billable solution on the Oracle Java Cloud Service? Aria Systems and Zuora have recently engaged with Oracle to deepen their integrations with Fusion CRM and team with Oracle for joint opportunities.
    - Google Apps, SharePoint, and Email-CRM integrations:
      - Do your prospects use Google Apps in their business operations? A "Best of AppExchange" award winner recently completed their integration for Fusion CRM. CirrusInsight plugs Fusion CRM web services directly into Gmail, allowing you to search an existing opportunity or contact, provide account information, and create an interaction such as a phone call, appointment, or email against a customer or contact in Fusion CRM, directly from Gmail.
      - An EMEA / France-based partner, Aryvart provides bi-directional synchronization of appointments and tasks between Google calendar and Oracle Fusion CRM. For customers, it means adopting Oracle Fusion CRM while continuing to use Google calendar for appointments.
      - Looking to lower the barrier and expand in SharePoint accounts? InFact Group (EMEA / France & Germany) provides a Microsoft SharePoint Connector for Oracle Fusion CRM. With this solution, you can store documents attached to an opportunity in a Microsoft SharePoint repository. For customers, it means adopting Oracle Fusion CRM while continuing to collaborate across existing content management infrastructure.
      - Need to connect to MacMail, GroupWise, or Outlook/Exchange? Omni Technology is a partner whose Riva CRM Integration recently engaged to support Fusion CRM as a key platform.
    - Migration tools from competitive and legacy CRMs to Oracle Fusion CRM. A partner with the tools and techniques to speed adoption, Conemis provides data integration tools to export data from a legacy CRM and import it into Oracle Fusion CRM via web services APIs. For customers, it means reducing the cost of data migration from a legacy CRM system into Oracle Fusion CRM.

    Read the article

  • MacBook Pro Late 2009 SATA Resets, Slowness

    - by A Student at a University
    My MacBook Pro runs slower the longer it's on. I am getting kernel warnings. The resets correlate with AC power connects and disconnects. I don't know if the warnings do. (How do I tell?) Are these bus CRC errors? Or something else? Can this damage the drive or corrupt data? What is it seeing that motivates these? 02:37:16 :[ 0.791992] ahci 0000:00:0b.0: PCI INT A -> Link[LSI0] -> GSI 20 (level, low) -> IRQ 20 02:37:16 :[ 0.792053] ahci 0000:00:0b.0: controller can't do PMP, turning off CAP_PMP 02:37:16 :[ 0.792104] ahci 0000:00:0b.0: AHCI 0001.0200 32 slots 6 ports 1.5 Gbps 0x3 impl IDE mode 02:37:16 :[ 0.792107] ahci 0000:00:0b.0: flags: 64bit ncq sntf pm led pio slum part boh 02:37:16 :[ 0.813473] scsi0 : ahci 02:37:16 :[ 0.823340] scsi1 : ahci 02:37:16 :[ 0.848164] ata1: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484100 irq 43 02:37:16 :[ 0.848166] ata2: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484180 irq 43 02:37:16 :[ 1.190132] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.190153] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.213568] ata1.00: ATA-8: OCZ-VERTEX2, 1.23, max UDMA/133 02:37:16 :[ 1.213572] ata1.00: 195371568 sectors, multi 1: LBA48 NCQ (depth 31/32) 02:37:16 :[ 1.227293] ata2.00: ATA-8: ST9500420ASG, 0002SDM1, max UDMA/133 02:37:16 :[ 1.227297] ata2.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32) 02:37:16 :[ 1.229570] ata2.00: configured for UDMA/133 02:37:16 :[ 1.240133] ata2: hard resetting link 02:37:16 :[ 1.260738] ata1.00: configured for UDMA/133 02:37:16 :[ 1.280122] ata1: hard resetting link 02:37:16 :[ 1.470125] usb 2-5: new high speed USB device using ehci_hcd and address 3 02:37:16 :[ 1.550165] firewire_core: created device fw0: GUID 58b035fffea99f5c, S800 02:37:16 :[ 1.631306] Initializing USB Mass Storage driver... 02:37:16 :[ 1.631392] scsi6 : usb-storage 2-5:1.0 02:37:16 :[ 1.631454] usbcore: registered new interface driver usb-storage 02:37:16 :[ 1.631455] USB Mass Storage support registered. 
02:37:16 :[ 1.960128] usb 4-1: new full speed USB device using ohci_hcd and address 2 02:37:16 :[ 1.990101] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.994215] ata2.00: configured for UDMA/133 02:37:16 :[ 1.994220] ata2: EH complete 02:37:16 :[ 2.030097] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 2.090773] ata1.00: configured for UDMA/133 02:37:16 :[ 2.090778] ata1: EH complete 02:37:16 :[ 2.090931] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX2 1.23 PQ: 0 ANSI: 5 02:37:16 :[ 2.091045] sd 0:0:0:0: Attached scsi generic sg0 type 0 02:37:16 :[ 2.091121] sd 0:0:0:0: [sda] 195371568 512-byte logical blocks: (100 GB/93.1 GiB) 02:37:16 :[ 2.091159] scsi 1:0:0:0: Direct-Access ATA ST9500420ASG 0002 PQ: 0 ANSI: 5 02:37:16 :[ 2.091163] sd 0:0:0:0: [sda] Write Protect is off 02:37:16 :[ 2.091183] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16 :[ 2.091252] sd 1:0:0:0: Attached scsi generic sg1 type 0 02:37:16 :[ 2.091337] sda: 02:37:16 :[ 2.091446] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB) 02:37:16 :[ 2.091580] sd 1:0:0:0: [sdb] Write Protect is off 02:37:16 :[ 2.091637] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16 :[ 2.091756] sdb: sda1 sda2 02:37:16 :[ 2.093140] sd 0:0:0:0: [sda] Attached SCSI disk 02:37:16 :[ 2.093505] sdb1 sdb2 sdb3 02:37:16 :[ 2.093773] sd 1:0:0:0: [sdb] Attached SCSI disk 02:37:16 :[ 2.693899] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) 02:37:16 :[ 5.483492] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro 02:37:16 :[ 7.905040] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null) 02:37:25 :[ 19.553095] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:25 :[ 19.555266] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:25 :[ 19.641533] ata1: hard resetting link 02:37:25 :[ 19.642084] ata2: hard resetting link 02:37:26 :[ 20.392606] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26 :[ 20.392610] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26 :[ 20.396697] ata2.00: configured for UDMA/133 02:37:26 :[ 20.396703] ata2: EH complete 02:37:26 :[ 20.451491] ata1.00: configured for UDMA/133 02:37:26 :[ 20.451498] ata1: EH complete 02:37:30 :[ 24.563725] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:30 :[ 24.565939] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:30 :[ 24.627246] ata1: hard resetting link 02:37:30 :[ 24.632250] ata2: hard resetting link 02:37:31 :[ 25.372582] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31 :[ 25.382615] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31 :[ 25.386782] ata2.00: configured for UDMA/133 02:37:31 :[ 25.386788] ata2: EH complete 02:37:31 :[ 25.431668] ata1.00: configured for UDMA/133 02:37:31 :[ 25.431674] ata1: EH complete 02:45:54 :[ 529.141844] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:55 :[ 529.544529] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 02:45:55 :[ 529.622561] ata1: limiting SATA link speed to 1.5 Gbps 02:45:55 :[ 529.622583] ata1: hard resetting link 02:45:55 :[ 529.622609] ata2: limiting SATA link speed to 1.5 Gbps 02:45:55 :[ 529.622624] ata2: hard resetting link 02:45:56 :[ 530.380135] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56 :[ 530.380157] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56 :[ 530.384305] ata2.00: configured for UDMA/133 02:45:56 :[ 530.384314] ata2: EH complete 02:45:56 :[ 530.399225] ata1.00: configured for UDMA/133 02:45:56 :[ 530.399233] ata1: EH complete 02:45:58 :[ 532.395990] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:45:58 :[ 532.518270] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:45:58 :[ 532.590983] ata1: hard resetting link 02:45:58 :[ 532.591045] ata2: hard resetting link 02:45:59 :[ 533.340147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59 :[ 533.340168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59 :[ 533.344416] ata2.00: configured for UDMA/133 02:45:59 :[ 533.344424] ata2: EH complete 02:45:59 :[ 533.360839] ata1.00: configured for UDMA/133 02:45:59 :[ 533.360847] ata1: EH complete 02:45:59 :[ 533.584449] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:59 :[ 533.586999] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:45:59 :[ 533.660132] ata2: hard resetting link 02:45:59 :[ 533.660151] ata1: hard resetting link 02:46:00 :[ 534.412536] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00 :[ 534.412562] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00 :[ 534.416768] ata2.00: configured for UDMA/133 02:46:00 :[ 534.416777] ata2: EH complete 02:46:00 :[ 534.431396] ata1.00: configured for UDMA/133 02:46:00 :[ 534.431401] ata1: EH complete 02:46:03 :[ 537.384649] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:03 :[ 537.504214] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:03 :[ 537.586002] ata1: hard resetting link 02:46:03 :[ 537.586036] ata2: hard resetting link 02:46:04 :[ 538.330147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04 :[ 538.330168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04 :[ 538.334389] ata2.00: configured for UDMA/133 02:46:04 :[ 538.334398] ata2: EH complete 02:46:04 :[ 538.343511] ata1.00: configured for UDMA/133 02:46:04 :[ 538.343519] ata1: EH complete 02:46:04 :[ 538.456413] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:04 :[ 538.459404] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 02:46:04 :[ 538.540138] ata1.00: limiting speed to UDMA/100:PIO4 02:46:04 :[ 538.540159] ata1: hard resetting link 02:46:04 :[ 538.540202] ata2.00: limiting speed to UDMA/100:PIO4 02:46:04 :[ 538.540220] ata2: hard resetting link 02:46:05 :[ 539.290054] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05 :[ 539.290041] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05 :[ 539.294100] ata2.00: configured for UDMA/100 02:46:05 :[ 539.294106] ata2: EH complete 02:46:05 :[ 539.314125] ata1.00: configured for UDMA/100 02:46:05 :[ 539.314132] ------------[ cut here ]------------ 02:46:05 :[ 539.314140] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:05 :[ 539.314144] Hardware name: MacBookPro5,3 02:46:05 :[ 539.314146] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:05 :[ 539.314221] Pid: 202, comm: scsi_eh_0 Tainted: P 2.6.35-25-generic #44-Ubuntu 02:46:05 :[ 539.314224] Call Trace: 02:46:05 :[ 539.314233] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:05 :[ 539.314237] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:05 :[ 539.314242] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:05 :[ 539.314246] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:05 :[ 539.314256] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:05 :[ 539.314261] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:05 :[ 539.314266] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:05 :[ 539.314270] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:05 :[ 539.314275] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:05 :[ 539.314280] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:05 :[ 539.314284] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:05 :[ 539.314288] [<ffffffff8100aee0>] ? 
kernel_thread_helper+0x0/0x10 02:46:05 :[ 539.314291] ---[ end trace 76dbffc2d5d49d9b ]--- 02:46:05 :[ 539.314296] ata1: EH complete 02:46:12 :[ 547.040117] ata1: hard resetting link 02:46:13 :[ 547.390144] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:13 :[ 547.408430] ata1.00: configured for UDMA/100 02:46:13 :[ 547.408438] ------------[ cut here ]------------ 02:46:13 :[ 547.408447] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:13 :[ 547.408451] Hardware name: MacBookPro5,3 02:46:13 :[ 547.408453] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:13 :[ 547.408528] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 02:46:13 :[ 547.408531] Call Trace: 02:46:13 :[ 547.408540] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:13 :[ 547.408544] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:13 :[ 547.408549] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:13 :[ 547.408553] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:13 :[ 547.408563] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:13 :[ 547.408567] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:13 :[ 547.408572] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:13 :[ 547.408577] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:13 :[ 547.408582] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:13 :[ 547.408587] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:13 :[ 547.408591] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:13 :[ 547.408595] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 02:46:13 :[ 547.408598] ---[ end trace 76dbffc2d5d49d9c ]--- 02:46:13 :[ 547.408620] ata1: EH complete 02:46:13 :[ 547.562470] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:13 :[ 547.671380] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:13 :[ 547.738198] ata1.00: limiting speed to UDMA/33:PIO4 02:46:13 :[ 547.738218] ata1: hard resetting link 02:46:13 :[ 547.738274] ata2: hard resetting link 02:46:14 :[ 548.482561] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14 :[ 548.484083] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14 :[ 548.486809] ata2.00: configured for UDMA/100 02:46:14 :[ 548.486818] ata2: EH complete 02:46:14 :[ 548.498998] ata1.00: configured for UDMA/33 02:46:14 :[ 548.499004] ata1: EH complete 02:46:18 :[ 552.410499] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:18 :[ 552.522521] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 02:46:18 :[ 552.529684] ata1: hard resetting link 02:46:18 :[ 552.529723] ata2: hard resetting link 02:46:19 :[ 553.280059] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19 :[ 553.280068] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19 :[ 553.284141] ata2.00: configured for UDMA/100 02:46:19 :[ 553.284150] ata2: EH complete 02:46:19 :[ 553.301629] ata1.00: configured for UDMA/33 02:46:19 :[ 553.301637] ata1: EH complete 02:46:21 :[ 556.078830] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:21 :[ 556.180361] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:46:22 :[ 556.262612] ata1: hard resetting link 02:46:22 :[ 556.262617] ata2: hard resetting link 02:46:22 :[ 557.010050] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:22 :[ 557.010070] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:22 :[ 557.014069] ata2.00: configured for UDMA/100 02:46:22 :[ 557.014075] ata2: EH complete 02:46:22 :[ 557.023646] ata1.00: configured for UDMA/33 02:46:22 :[ 557.023654] ata1: EH complete 02:46:30 :[ 565.047438] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:30 :[ 565.051554] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:30 :[ 565.108332] ata1: hard resetting link 02:46:30 :[ 565.108389] ata2.00: limiting speed to UDMA/33:PIO4 02:46:30 :[ 565.108406] ata2: hard resetting link 02:46:31 :[ 565.850048] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:31 :[ 565.850068] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:31 :[ 565.854304] ata2.00: configured for UDMA/33 02:46:31 :[ 565.854313] ata2: EH complete 02:46:31 :[ 565.868477] ata1.00: configured for UDMA/33 02:46:31 :[ 565.868485] ata1: EH complete 02:46:35 :[ 569.265469] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:35 :[ 569.268139] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:46:35 :[ 569.340079] ata1: hard resetting link 02:46:35 :[ 569.340113] ata2: hard resetting link 02:46:35 :[ 570.092568] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:35 :[ 570.092589] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:35 :[ 570.096828] ata2.00: configured for UDMA/33 02:46:35 :[ 570.096837] ata2: EH complete 02:46:35 :[ 570.110727] ata1.00: configured for UDMA/33 02:46:35 :[ 570.110735] ata1: EH complete 02:47:04 :[ 598.528232] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:47:04 :[ 598.653973] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:47:04 :[ 598.730854] ata1: hard resetting link 02:47:04 :[ 598.730910] ata2: hard resetting link 02:47:05 :[ 599.480136] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:47:05 :[ 599.480159] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:47:05 :[ 599.484206] ata2.00: configured for UDMA/33 02:47:05 :[ 599.484213] ata2: EH complete 02:47:05 :[ 599.496699] ata1.00: configured for UDMA/33 02:47:05 :[ 599.496707] ata1: EH complete 04:45:59 :[ 7733.756548] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 04:45:59 :[ 7733.882748] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 04:45:59 :[ 7733.960142] ata1: hard resetting link 04:45:59 :[ 7733.960189] ata2: hard resetting link 04:46:00 :[ 7734.701926] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 04:46:00 :[ 7734.719939] ata1.00: configured for UDMA/33 04:46:00 :[ 7734.719946] ata1: EH complete 04:46:00 :[ 7734.722547] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 04:46:00 :[ 7734.726652] ata2.00: configured for UDMA/33 04:46:00 :[ 7734.726659] ata2: EH complete 04:46:02 :[ 7736.656465] ACPI: EC: GPE storm detected, transactions will use polling mode 13:38:49 :[39704.188621] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 13:38:49 :[39704.280588] EXT4-fs (dm-2): re-mounted. Opts: commit=600 13:38:49 :[39704.360819] ata1: hard resetting link 13:38:49 :[39704.360882] ata2: hard resetting link 13:38:50 :[39705.112956] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 13:38:50 :[39705.114435] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 13:38:50 :[39705.118673] ata2.00: configured for UDMA/33 13:38:50 :[39705.118682] ata2: EH complete 13:38:50 :[39705.127076] ata1.00: configured for UDMA/33 13:38:50 :[39705.127084] ata1: EH complete 13:39:49 :[39764.142463] applesmc: F1Mn: write arg fail 13:48:11 :[40267.025145] applesmc: FS! : read arg fail 13:52:53 :[40548.596735] applesmc: FS! : read arg fail 13:53:58 :[40613.972856] applesmc: FS! : read arg fail 13:54:08 :[40624.057339] applesmc: FS! : read arg fail 13:58:20 :[40875.397749] applesmc: TC0D: read data fail 14:16:56 :[41991.722054] applesmc: Th2H: read data fail 14:22:32 :[42327.991522] applesmc: light sensor data length set to 10 14:26:19 :[42554.788886] applesmc: F1Mn: write arg fail 14:32:36 :[42931.860443] applesmc: TC0F: read data fail 14:34:32 :[43048.041469] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 14:34:33 :[43048.185850] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 14:34:33 :[43048.270184] ata1: hard resetting link 14:34:33 :[43048.270224] ata2: hard resetting link 14:34:33 :[43049.030049] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:33 :[43049.030065] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:33 :[43049.034106] ata2.00: configured for UDMA/33 14:34:33 :[43049.034112] ata2: EH complete 14:34:33 :[43049.056952] ata1.00: configured for UDMA/33 14:34:33 :[43049.056959] ------------[ cut here ]------------ 14:34:33 :[43049.056968] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 14:34:33 :[43049.056971] Hardware name: MacBookPro5,3 14:34:33 :[43049.056973] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 14:34:33 :[43049.057048] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 14:34:33 :[43049.057052] Call Trace: 14:34:33 :[43049.057060] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 14:34:33 :[43049.057064] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 14:34:33 :[43049.057069] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 14:34:33 :[43049.057074] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 14:34:33 :[43049.057083] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 14:34:33 :[43049.057088] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 14:34:33 :[43049.057093] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 14:34:33 :[43049.057097] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 14:34:33 :[43049.057102] [<ffffffff8107f266>] kthread+0x96/0xa0 14:34:33 :[43049.057107] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 14:34:33 :[43049.057111] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 14:34:33 :[43049.057115] [<ffffffff8100aee0>] ? 
kernel_thread_helper+0x0/0x10 14:34:33 :[43049.057118] ---[ end trace 76dbffc2d5d49d9d ]--- 14:34:33 :[43049.057123] ata1: EH complete 14:34:41 :[43057.012698] ata1: hard resetting link 14:34:42 :[43057.362780] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:42 :[43057.381432] ata1.00: configured for UDMA/33 14:34:42 :[43057.381441] ------------[ cut here ]------------ 14:34:42 :[43057.381450] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 14:34:42 :[43057.381453] Hardware name: MacBookPro5,3 14:34:42 :[43057.381455] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 14:34:42 :[43057.381530] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 14:34:42 :[43057.381533] Call Trace: 14:34:42 :[43057.381542] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 14:34:42 :[43057.381546] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 14:34:42 :[43057.381551] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 14:34:42 :[43057.381556] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 14:34:42 :[43057.381565] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 14:34:42 :[43057.381569] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 14:34:42 :[43057.381575] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 14:34:42 :[43057.381579] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 14:34:42 :[43057.381584] [<ffffffff8107f266>] kthread+0x96/0xa0 14:34:42 :[43057.381589] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 14:34:42 :[43057.381594] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 14:34:42 :[43057.381598] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 14:34:42 :[43057.381601] ---[ end trace 76dbffc2d5d49d9e ]--- 14:34:42 :[43057.381624] ata1: EH complete 14:34:42 :[43057.557887] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 14:34:42 :[43057.560517] EXT4-fs (dm-2): re-mounted. Opts: commit=600 14:34:42 :[43057.621194] ata1: hard resetting link 14:34:42 :[43057.621252] ata2: hard resetting link 14:34:43 :[43058.370141] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:43 :[43058.370162] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:43 :[43058.374407] ata2.00: configured for UDMA/33 14:34:43 :[43058.374415] ata2: EH complete 14:34:43 :[43058.381989] ata1.00: configured for UDMA/33 14:34:43 :[43058.381996] ata1: EH complete 14:34:43 :[43058.616228] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 14:34:43 :[43058.618931] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 14:34:43 :[43058.626687] ata1: hard resetting link 14:34:43 :[43058.626731] ata2: hard resetting link 14:34:44 :[43059.372908] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:44 :[43059.372932] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:44 :[43059.376997] ata2.00: configured for UDMA/33 14:34:44 :[43059.377003] ata2: EH complete 14:34:44 :[43059.392576] ata1.00: configured for UDMA/33 14:34:44 :[43059.392585] ata1: EH complete 15:48:19 :[47474.710860] ata1: hard resetting link 15:48:19 :[47474.710882] ata2: hard resetting link 15:48:20 :[47475.460144] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 15:48:20 :[47475.460169] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 15:48:20 :[47475.473709] ata1.00: configured for UDMA/33 15:48:20 :[47475.473717] ata1: EH complete 15:48:20 :[47475.727960] ata2.00: configured for UDMA/33 15:48:20 :[47475.727969] ata2: EH complete 16:29:39 :[49954.295017] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 16:29:39 :[49954.622307] EXT4-fs (dm-2): re-mounted. Opts: commit=0 16:29:39 :[49954.710139] ata1: hard resetting link 16:29:39 :[49954.710174] ata2: hard resetting link 16:29:40 :[49955.460046] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 16:29:40 :[49955.460062] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 16:29:40 :[49955.464138] ata2.00: configured for UDMA/33 16:29:40 :[49955.464144] ata2: EH complete 16:29:40 :[49955.473251] ata1.00: configured for UDMA/33 16:29:40 :[49955.473258] ata1: EH complete

    Read the article

  • Sixeyed.Caching available now on NuGet and GitHub!

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/22/sixeyed.caching-available-now-on-nuget-and-github.aspx

    The good guys at Pluralsight have okayed me to publish my caching framework (as seen in Caching in the .NET Stack: Inside-Out) as an open-source library, and it's out now. You can get it here: Sixeyed.Caching source code on GitHub, and here: Sixeyed.Caching package v1.0.0 on NuGet. If you haven't seen the course, there's a preview here on YouTube: In-Process and Out-of-Process Caches, which gives a good flavour. The library is a wrapper around various cache providers, including the .NET MemoryCache, AppFabric cache, and memcached*. All the wrappers inherit from a base class which gives you a set of common functionality against all the cache implementations:
    - inherits OutputCacheProvider, so you can use your chosen cache provider as an ASP.NET output cache;
    - serialization and encryption, so you can configure whether you want your cache items serialized (XML, JSON or binary) and encrypted;
    - instrumentation, so you can optionally use performance counters to monitor cache attempts and hits, at a low level.

    The framework wraps up different caches into an ICache interface, and it lets you use a provider directly like this:

    Cache.Memory.Get<RefData>(refDataKey);

    - or with configuration to use the default cache provider:

    Cache.Default.Get<RefData>(refDataKey);

    The library uses Unity's interception framework to implement AOP caching, which you can use by flagging methods with the [Cache] attribute:

    [Cache]
    public RefData GetItem(string refDataKey)

    - and you can be more specific on the required cache behaviour:

    [Cache(CacheType=CacheType.Memory, Days=1)]
    public RefData GetItem(string refDataKey)

    - or really specific:

    [Cache(CacheType=CacheType.Disk, SerializationFormat=SerializationFormat.Json, Hours=2, Minutes=59)]
    public RefData GetItem(string refDataKey)

    Provided you get instances of classes with cacheable methods from the container, the attributed method results will be cached, and repeated calls will be fetched from the cache. You can also set a bunch of cache defaults in application config, like whether to use encryption and instrumentation, and whether the cache system is enabled at all:

    <sixeyed.caching enabled="true">
      <performanceCounters instrumentCacheTotalCounts="true" instrumentCacheTargetCounts="true" categoryNamePrefix="Sixeyed.Caching.Tests"/>
      <encryption enabled="true" key="1234567890abcdef1234567890abcdef" iv="1234567890abcdef"/>
      <!-- key must be 32 characters, IV must be 16 characters-->
    </sixeyed.caching>

    For AOP and methods flagged with the cache attribute, you can override the compile-time cache settings at runtime with more config (keyed by the class and method name):

    <sixeyed.caching enabled="true">
      <targets>
        <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheConfiguredInternal" enabled="false"/>
        <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheExpiresConfiguredInternal" seconds="1"/>
      </targets>
    </sixeyed.caching>

    It's released under the MIT license, so you can use it freely in your own apps and modify as required. I'll be adding more content to the GitHub wiki, which will be the main source of documentation, but for now there's an FAQ to get you started.

    * - in the course the framework library also wraps NCache Express, but there's no public redistributable library that I can find, so it's not in Sixeyed.Caching.
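    To tie the pieces together, a minimal sketch of a cacheable class used via the attribute; the Sixeyed.Caching namespace, the virtual modifier (typical for Unity interception), and the null-on-miss check are my assumptions, and the container registration is elided since it's specific to the library's setup:

    using System;
    using Sixeyed.Caching; // assumed root namespace, matching the package name

    public class RefData
    {
        public string Key { get; set; }
        public string Value { get; set; }
    }

    public class RefDataRepository
    {
        // Resolved from the container with interception enabled, repeat calls
        // with the same key should come back from the cache instead of
        // re-running the method body.
        [Cache(CacheType = CacheType.Memory, Minutes = 30)]
        public virtual RefData GetItem(string refDataKey)
        {
            return new RefData { Key = refDataKey, Value = DateTime.UtcNow.ToString("o") };
        }
    }

    class Demo
    {
        static void Main()
        {
            // Direct provider use, exactly as in the post; the key is illustrative
            var item = Cache.Memory.Get<RefData>("refdata.sample");
            Console.WriteLine(item == null ? "cache miss" : item.Value); // assumes null on miss
        }
    }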

    Read the article

  • SQL SERVER – What is SSRS and Why SSRS is asked for in many Job Openings?

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. This will be a five-day blog series on getting started with SSRS; today's post shows the importance of SSRS to the business. Why is SSRS asked for in so many job openings? If you talk to an SSRS expert, it's very clear to them exactly why companies really need this invention and how it saves time and adds business value. You don't have to be an SSRS expert to know its value or to start using it; for example, you don't have to be an airline pilot to know the usefulness of modern transportation. Even people who don't know how to run SSRS, but need the reports, can tell you why it is needed. This blog post will go into why SSRS is an important invention by showing how it improves the usage of information in your company. Before SSRS, there was always a need for a company to benefit from the use of its own information. Excel spreadsheets have been a popular way to do this for a long time, and with SSRS you can still use this solution and gain many other options too. A friend of mine told me a story about doing database work in the 90s for a major company and how he wished SSRS had been available back then. The Vice President of the marketing channel would often come to him just before an important meeting with the board of directors. He often needed to show how certain product sales were performing over time. All this information was in the database, so it was my friend's job to get the information out and organized into a medium the VP could use. This medium was usually Excel. The VP often had meetings all over the world where he showcased this Excel report. The solution for getting the report to the VP, wherever he was in the world, was an Excel file attached to an e-mail. This worked pretty well, but with some drawbacks. One time my friend sent the wrong file in the e-mail. A few minutes later he realized his mistake and sent another frantic e-mail to the VP, this one saying to ignore the last e-mail and use the newer one. Would the VP see the correct e-mail in time? If SSRS had been available, my friend could have created a solution that let the VP run the report any time he wished. The report could have been published to the company intranet, where the VP could run it from any of the offices he happened to be traveling to that month. There is a fair amount of work up front to develop and publish the report, but once that work is completed, the report can be reused as many times as needed. My friend could even be on vacation on the first day of the month and the VP could still get his real-time report. Not only could the report show the most recent data, the VP could choose to view reports of previous months with just a few clicks. A deployed SSRS report is user-friendly, and can also be configured to protect reports from being run by the wrong people. Tomorrow's blog post will show how to know if you already have SSRS installed. If you want to learn SSRS in easy, simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • Ask the Readers: Do You Prefer Computers, Game Consoles, or Other Devices for Your Gaming Needs?

    - by Asian Angel
    Nearly everyone who has access to a computer will play games on it at some point, but many people also use a separate game platform as well. What we would like to know this week is if you prefer using a computer, game consoles, or other devices for your gaming needs. Photo of Faith and Kate Connors from Mirror's Edge by Tamahikari Tammas. Video games are a perfect way to relax and have fun at home (or at work if you can sneak in some game time!). The increasing variety of devices available with each passing year is making it easier to have access to a gaming platform to suit your needs or "darkest gaming desires". For many people their computers are the perfect platform: they can play Flash-based games in their browsers, use the default set of games that come with their system, and install any extras that catch their eyes. The added benefit is that when game time is over they can drop right into their browsing, e-mail, personal projects, or work without having to switch hardware. The convenience of the "all-in-one" platform is certainly appealing! Perhaps you prefer to use your computer for other activities outside of gaming and own one or more separate game consoles. You might have chosen an Xbox, Playstation, or Nintendo for example. Maybe a hand-held is preferable for its size and portability. Then there are mobile phones and the iPad… With so many options it may feel hard to choose the right platform(s) without a good bit of research regarding display, availability of games for a particular platform, how long before the platform starts to become "obsolete", etc. What we would like to know this week is which gaming platform you prefer. Is there only one that you choose to use, or do you use multiple platforms for gaming? Is there a particular reason, such as convenience, for your choices? You may even be keeping an older platform around just for a certain game (or games) made for it. Are there any recommendations or advice that you would like to share with your fellow readers? Let us know in the comments!

    Read the article

  • How to recover dpkg from corrupted downloads?

    - by rocker9455
    I just upgraded to the 10.10 RC earlier and had a few problems with graphics drivers (X didn't start), but I have remedied that now. When I run 'sudo apt-get install -f' I get this:

    will@UbuntuBox:/mnt/slax$ sudo apt-get install -f
    [sudo] password for will:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed: libmono-wcf3.0-cil openoffice.org-calc openoffice.org-core
    The following NEW packages will be installed: libmono-wcf3.0-cil
    The following packages will be upgraded: openoffice.org-calc openoffice.org-core
    2 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    16 not fully installed or removed.
    Need to get 0B/32.5MB of archives.
    After this operation, 1,929kB disk space will be freed.
    Do you want to continue [Y/n]? Y
    (Reading database ... 201565 files and directories currently installed.)
    Preparing to replace openoffice.org-calc 1:3.2.1-6ubuntu2~10.04.1 (using .../openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb) ...
    Unpacking replacement openoffice.org-calc ...
    xz: (stdin): Compressed data is corrupt
    dpkg-deb: subprocess <decompress> returned error exit status 1
    dpkg: error processing /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
    short read on buffer copy for backend dpkg-deb during `./usr/lib/openoffice/basis3.2/program/libscfiltli.so'
    dpkg: regarding .../openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb containing openoffice.org-core:
    openoffice.org-core conflicts with openoffice.org-calc (<< 1:3.2.1-7ubuntu1)
    openoffice.org-calc (version 1:3.2.1-6ubuntu2~10.04.1) is present and installed.
    dpkg: error processing /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
    conflicting packages - not installing openoffice.org-core
    Unpacking libmono-wcf3.0-cil (from .../libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb) ...
    dpkg-deb (subprocess): data: internal gzip read error: '<fd:0>: data error'
    dpkg-deb: subprocess <decompress> returned error exit status 2
    dpkg: error processing /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb (--unpack):
    subprocess dpkg-deb --fsys-tarfile returned error exit status 2
    Errors were encountered while processing:
    /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb
    /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb
    /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any idea how I can get the broken packages fixed? Cheers, Will
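    For what it's worth, the xz/gzip "data is corrupt" lines point at corrupted files in the apt cache rather than at dpkg itself. A minimal sketch of the usual recovery path, using only stock apt-get/dpkg commands:

    sudo apt-get clean            # throw away the corrupted .deb files in /var/cache/apt/archives
    sudo dpkg --configure -a      # finish configuring anything left half-installed
    sudo apt-get update           # refresh the package lists
    sudo apt-get install -f       # re-download the packages and complete the upgrade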

    Read the article

  • From the Tips Box: Pre-installation Prep Work Makes Service Pack Upgrades Smoother

    - by Jason Fitzpatrick
    Last month Microsoft rolled out Windows 7 Service Pack 1 and, like many SP releases, quite a few people are hanging back to see what happens. If you want to update but still err on the side of caution, reader Ron Troy offers a step-by-step guide. Ron's cautious approach does an excellent job minimizing the number of issues that could crop up in a Service Pack upgrade by doing a thorough job updating your driver sets and clearing out old junk before you roll out the update. Read on to see how he does it:

    Just wanted to pass on a suggestion for people worried about installing Service Packs. I came up with a 'method' a couple years back that seems to work well.
    1. Run Windows / Microsoft Update to get all updates EXCEPT the Service Pack.
    2. Use Secunia PSI to find any other updates you need.
    3. Use CCleaner or the Windows disk cleanup tools to get rid of all the old garbage out there. Make sure that you include old system updates.
    4. Obviously, back up anything you really care about. An image backup can be real nice to have if things go wrong.
    5. Download the correct SP version from Microsoft.com; do not use Windows / Microsoft Update to get it. Make sure you have the 64-bit version if that's what you have installed on your PC.
    6. Make sure that EVERYTHING that affects the OS is up to date. That includes all sorts of drivers, starting with video and audio. And if you have an Intel chipset, use the Intel Driver Utility to update those drivers; it's very quick and easy. For the video and audio drivers, some can be updated by Intel, some by utilities on the vendor web sites, and some you just have to figure out yourself. But don't be lazy here; old drivers and Windows Service Packs are a poor mix.
    7. If you have 3rd party software, check to see if there are any updates for it. They might not say that they are for the Service Pack, but you cut your risk of things not working if you do this.
    8. Shut off the antivirus software (especially if 3rd party).
    9. Reboot, hitting F8 to get the SafeMode menu. Choose SafeMode with Networking.
    10. Log into the Administrator account to ensure that you have the right to install the SP.
    11. Run the SP. It won't be very fancy this way. Maybe 45 minutes later it will reboot and then finish configuring itself, finally letting you log in.

    Total installation time on most of my PCs was about 1 hour, but that followed hours of preparation on each.

    On a separate note, I recently got on the Nvidia web site and their utility told me I had a new driver available for my GeForce 8600M GS. This laptop had come with Vista, and now has Win 7 SP1. I had a big surprise from this driver update: the Windows Experience Score on the graphics side went way up. Kudos to Nvidia for doing a driver update that actually helps day-to-day usage. And unlike ATI's updates (which I need for my AGP-based system), this update was fairly quick and very easy. Also, Nvidia drivers have never, as I can recall, given me BSODs, many of which I've gotten from ATI (TDR errors).

    Read the article

  • Inside Red Gate - Exercises in Leanness

    - by simonc
    There's a new movement rumbling around Red Gate Towers - the Lean Startup. At its core is the idea that you don't have to be in a company with single-digit employees to be an entrepreneur; you simply have to (being blunt) not know what you should be doing. Specifically, you accept that you don't know everything you need to know in order to create a useful, successful & profitable product. This is something that Red Gate has had problems with in the past; we've created products that weren't aimed at the correct market, or didn't solve the problem the user had (although they solved the problem we thought the users had, or the problem the users thought they had). As a result, these products weren't as successful as they could have been.

    The ideas at the core of the Lean Startup help to combat this tendency to build large, well-engineered products that solve the wrong problem. You need to actually test your hypotheses about what the users and the market need, rather than just running a project based on those untested assumptions. Furthermore, these tests need to be done as fast as possible (on the order of a week) so that, if necessary, you can change the direction of the project without wasting effort going down a dead end. Over time, as more tests are done and more hypotheses are confirmed or refuted, the project moves towards something that solves users' actual problems.

    However, re-aligning the development teams that operate within Red Gate along these lines does itself have some issues; we've got very good at doing large, monolithic releases, with a feature set decided well in advance. Currently it takes about 2 weeks to do install & release testing before a release; this is clearly not practicable for a team doing weekly, or even daily, releases. There are also many infrastructure issues to be solved - in our source control, build system, release mechanism, support pages & documentation, licensing system, update system, and download pages. All of these need modifications to allow the fast releases necessary for each experiment.

    Not only do we have to change our infrastructure, we have to change our mindset. Doing daily releases means each release won't get nearly as much testing as 'standard' releases. As a team, we have to be prepared that there will be releases that have bugs and issues with them; not only do we have to be prepared to change direction with every experiment we do, but we have to be ready to fix any bugs that are reported very quickly as well.

    The SmartAssembly team is spearheading this move towards leanness within the company, using Feature Usage Reporting (FUR). We think this is a cracking feature that will really help developers learn how people use their products, but we need to confirm this hypothesis. So, over the next few weeks, we'll be running a variety of experiments on SmartAssembly to either confirm or refute our hypotheses concerning how people use SmartAssembly and apply FUR to their own products. In the rest of this series, I'll be documenting how the experiments we perform get on, and our experiences with applying the Lean Startup model to a mature product like SmartAssembly. Cross posted from Simple Talk.

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more, get familiar with how it works, and cover the important areas of NuoDB that you should learn. As NuoDB is already installed, we will quickly start with two of its important areas: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works; in the next blog post we will learn how the Explorer section works.

    Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/

    On the opening screen you can see a big Start QuickStart button. Click on the button and it will bring you to a screen with very important information about Domain and Database Settings. It is our habit not to read what is written on the screen and to keep clicking Continue without reading. While we are familiar with most wizards, that makes it easy to miss a very important message. Please note the following Domain Settings and Database Settings before clicking on Create Database:

    Domain Settings
    User: quickstart
    Password: quickstart

    Database Settings
    User: dba
    Password: goalie
    Database: test
    Schema: HOCKEY

    Once you click on the Create Database button it will immediately start creating the sample database. First, it will start a Storage Manager, and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. On successfully creating the sample database it will show a confirmation screen.

    Now is the time when we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show a login screen. Enter "domain" for the username and "bird" for the password. Alternatively, you can enter "quickstart" twice, for both username and password; that works as well.

    Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks:

    - Create a view of the entire domain
    - Add and remove databases
    - Start and stop NuoDB Transaction Engines and Storage Managers
    - Monitor transactions across all the NuoDB databases

    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view alerts, find out the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain.

    If you click on the "1 host" link you will be able to see the running processes, CPU usage and other information. In the Processes section you can see that there are two different types of processes: the first (with the floppy drive icon) represents a running Storage Manager, and the second a running Transaction Engine. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
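    If you would rather confirm from a terminal that the console is actually listening before opening the browser, here is a minimal sketch. The only assumptions are that curl is installed and that the console uses the default port 8080 from the walkthrough above:

        # Fetch only the HTTP response headers from the NuoDB Console URL;
        # an HTTP 200 status means the console is up and serving pages
        curl -I http://localhost:8080/

    This is a generic HTTP check rather than a NuoDB-specific command; any response at all at least confirms that something is listening on that port.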
    I hope today's tutorial gives you enough confidence to try out NuoDB and check out its various administrative activities. I am personally impressed with their dashboard and its various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

< Previous Page | 461 462 463 464 465 466 467 468 469 470 471 472  | Next Page >